Compare commits

..

1082 Commits

Author SHA1 Message Date
2c979d1b68 wip 2023-11-17 18:06:26 +11:00
7b93b5e928 Merge branch 'main' into feat/arbitrary-field-types 2023-11-17 15:00:24 +11:00
136ff011b2 3.4.0post1 (#5115)
3.4.0post1


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
2023-11-17 14:51:10 +11:00
3bc15a96d5 Update version to 3.4.0post1 2023-11-17 13:39:00 +11:00
43d5bb2038 Updated JS files 2023-11-17 13:36:50 +11:00
8d39eab3a9 fix(ui): metadata error on img2img 2023-11-17 12:31:34 +11:00
dc44debbab fix(ui): fix ts error with custom fields 2023-11-17 12:09:15 +11:00
5ce2dc3a58 feat(ui): fix tooltips for custom types
We need to hold onto the original type of each field so they don't all just show up as "Unknown".
2023-11-17 12:01:39 +11:00
27fd9071ba feat(ui): add support for custom field types
Node authors may now create their own arbitrary/custom field types. Any pydantic model is supported.

Two notes:
1. Your field type's class name must be unique.

We suggest prefixing field type class names with something related to the node pack, as a kind of namespace.

2. Custom field types function as connection-only fields.

For example, if your custom field has string attributes, you will not get a text input for that attribute when you give a node a field with your custom type.

This is the same behaviour as other complex fields that don't have custom UIs in the workflow editor - like, say, a string collection.
2023-11-17 11:32:35 +11:00
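
To make the custom field type support concrete, here is a minimal sketch of what a node pack might ship. The class and field names are invented for illustration, and the import path is an assumption based on the 3.4-era node API, not code from this commit.

```
from pydantic import BaseModel

# Assumed import path for the 3.4-era node API; it may differ between versions.
from invokeai.app.invocations.baseinvocation import (
    BaseInvocation,
    InputField,
    InvocationContext,
    invocation,
)


class MyPackHistogramField(BaseModel):
    """A custom field type. The class name is the type, so prefix it with
    something node-pack-specific ("MyPack" here) as a namespace."""

    bins: list[int]
    label: str


@invocation("mypack_histogram_consumer", title="Histogram Consumer", version="1.0.0")
class HistogramConsumerInvocation(BaseInvocation):
    # Connection-only: even though `label` is a str, the workflow editor will
    # not render a text input for it - the field must be fed by a connection.
    histogram: MyPackHistogramField = InputField(description="Histogram to consume")

    def invoke(self, context: InvocationContext):
        ...
```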
62da69b3e8 Release/3.4 (#5112)
3.4 Release Updates

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


2023-11-17 08:34:20 +11:00
d2852c767b Bump version to 3.4.0 2023-11-17 08:22:41 +11:00
47f33f1ed1 Update JS files for 3.4 release 2023-11-17 08:21:47 +11:00
1896c6fb44 Merge remote-tracking branch 'origin/main' into release/3.4 2023-11-17 08:09:13 +11:00
47f3515745 fix(nodes,ui): fix missed/canvas temp images in gallery (#5111)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

Resolves two bugs introduced in #5106:

1. Linear UI images sometimes didn't make it to the gallery.

This was a race condition. The VAE decode nodes were handled by the
socketInvocationComplete listener. At that moment, the image was marked
as intermediate. Immediately after this node was handled, a
LinearUIOutputInvocation, introduced in #5106, was handled by
socketInvocationComplete. This node internally changed the image to not
intermediate.

During the handling of that socketInvocationComplete, RTK Query would
sometimes use its cache instead of retrieving the image DTO again. The
result is that the UI never got the message that the image was not
intermediate, so it wasn't added to the gallery.

This is resolved by refactoring the socketInvocationComplete listener.
We now skip the gallery processing for linear UI events, except for the
LinearUIOutputInvocation. Images now always make it to the gallery, and
network requests to get image DTOs are substantially reduced.

2. Canvas temp images always went into the gallery

The LinearUIOutputInvocation was always setting its image's
is_intermediate to false. This included all canvas images and resulted
in all canvas temp images going to the gallery.

This is resolved by making LinearUIOutputInvocation set is_intermediate
based on `self.is_intermediate`. The behaviour now more or less
mirrors the behaviour of is_intermediate on other image-outputting
nodes, except it doesn't save the image again - only changes it.

One extra minor change - LinearUIOutputInvocation only changes
is_intermediate if it differs from the image's current setting. Very
minor optimisation.

## Related Tickets & Documents


- Related Issue
https://discord.com/channels/1020123559063990373/1149513625321603162/1174721072826945638

## QA Instructions, Screenshots, Recordings

Try to reproduce the issues described in the Discord thread:
- Images should always go to the gallery from txt2img and img2img
- Canvas temp images should not go to the gallery unless auto-save is
enabled
2023-11-17 08:05:43 +11:00
950021a61e Merge branch 'main' into fix/missed-images-canvas-temp 2023-11-17 08:00:16 +11:00
5ee55cf46f Added unsharp mask node to communityNodes.md (#5110)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [X] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Added/updated tests?

- [ ] Yes
- [X] No
2023-11-17 07:51:09 +11:00
91ef24e15c fix(nodes,ui): fix missed/canvas temp images in gallery
Resolves two bugs introduced in #5106:

1. Linear UI images sometimes didn't make it to the gallery.

This was a race condition. The VAE decode nodes were handled by the socketInvocationComplete listener. At that moment, the image was marked as intermediate. Immediately after this node was handled, a LinearUIOutputInvocation, introduced in #5106, was handled by socketInvocationComplete. This node internally changed the image to not intermediate.

During the handling of that socketInvocationComplete, RTK Query would sometimes use its cache instead of retrieving the image DTO again. The result is that the UI never got the message that the image was not intermediate, so it wasn't added to the gallery.

This is resolved by refactoring the socketInvocationComplete listener. We now skip the gallery processing for linear UI events, except for the LinearUIOutputInvocation. Images now always make it to the gallery, and network requests to get image DTOs are substantially reduced.

2. Canvas temp images always went into the gallery

The LinearUIOutputInvocation was always setting its image's is_intermediate to false. This included all canvas images and resulted in all canvas temp images going to the gallery.

This is resolved by making LinearUIOutputInvocation set is_intermediate based on `self.is_intermediate`. The behaviour now more or less mirrors the behaviour of is_intermediate on other image-outputting nodes, except it doesn't save the image again - only changes it.

One extra minor change - LinearUIOutputInvocation only changes is_intermediate if it differs from the image's current setting. Very minor optimisation.
2023-11-17 07:32:04 +11:00
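
In rough pseudocode, the fixed behaviour amounts to the following. The class shape and services API here are simplified assumptions for illustration, not the actual implementation.

```
class LinearUIOutputInvocation:
    is_intermediate: bool = False

    def invoke(self, context, image_dto):
        # Mirror this node's own is_intermediate instead of hard-coding False,
        # so canvas temp images stay intermediate.
        if image_dto.is_intermediate != self.is_intermediate:
            # Only touch the record when the value actually differs, and change
            # the existing image in place - never save it a second time.
            context.services.images.update(
                image_dto.image_name,
                changes={"is_intermediate": self.is_intermediate},
            )
```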
230dfdb9ad Added unsharp mask node to communityNodes.md 2023-11-16 14:25:06 -06:00
6f719b2c7a feat: add private node for linear UI image outputting (#5106)
## What type of PR is this? (check all applicable)

- [x] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

## Description

[feat: add private node for linear UI image
outputting](4599517c6c)

Add a LinearUIOutputInvocation node to be the new terminal node for
Linear UI graphs. This node is private and hidden from the Workflow
Editor, as it is an implementation detail.

The Linear UI was using the Save Image node for this purpose. It allowed
every linear graph to end in a single node type, which handled saving
metadata and board. This substantially reduced the complexity of the
linear graphs.

This caused two related issues:
- Images were saved to disk twice
- Noticeable delay between when an image was decoded and showed up in
the UI

To resolve this, the new LinearUIOutputInvocation node will handle
adding an image to a board if one is provided.

Metadata is no longer provided in this unified node. Instead, the
metadata graph helpers now need to know the node to add metadata to and
provide it to the last node that actually outputs an image. This is a
`l2i` node for txt2img & img2img graphs, and a different
image-outputting node for canvas graphs.

HRF poses another complication, in that it changes the terminal node. To
handle this, a new metadata util is added called
`setMetadataReceivingNode()`. HRF calls this to change the node that
should receive the graph's metadata.

This resolves the duplicate images issue and improves perf without
otherwise changing the user experience.

---

Also fixed an issue with HRF metadata.

## Related Tickets & Documents


- Closes #4688
- Closes #4645

## QA Instructions, Screenshots, Recordings

Generate some images with and without a board selected. Images should
end up in the right board per usual, but a bit quicker. Metadata should
still work.

2023-11-16 20:08:55 +05:30
02ce3bd303 Merge branch 'main' into feat/linear-ui-output-node 2023-11-16 19:05:13 +11:00
4599517c6c feat: add private node for linear UI image outputting
Add a LinearUIOutputInvocation node to be the new terminal node for Linear UI graphs. This node is private and hidden from the Workflow Editor, as it is an implementation detail.

The Linear UI was using the Save Image node for this purpose. It allowed every linear graph to end in a single node type, which handled saving metadata and board. This substantially reduced the complexity of the linear graphs.

This caused two related issues:
- Images were saved to disk twice
- Noticeable delay between when an image was decoded and showed up in the UI

To resolve this, the new LinearUIOutputInvocation node will handle adding an image to a board if one is provided.

Metadata is no longer provided in this unified node. Instead, the metadata graph helpers now need to know the node to add metadata to and provide it to the last node that actually outputs an image. This is a `l2i` node for txt2img & img2img graphs, and a different image-outputting node for canvas graphs.

HRF poses another complication, in that it changes the terminal node. To handle this, a new metadata util is added called `setMetadataReceivingNode()`. HRF calls this to change the node that should receive the graph's metadata.

This resolves the duplicate images issue and improves perf without otherwise changing the user experience.
2023-11-16 18:56:59 +11:00
cc747c066c fix(nodes): fix hrf_enabled metadata item
It was a float but should be a bool
2023-11-16 18:47:31 +11:00
3ba547a41a translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 100.0% (1229 of 1229 strings)

Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-11-16 18:23:41 +11:00
1a37827bdf (fix) docs formatting 2023-11-16 18:22:21 +11:00
16e990b6e6 Docs/3.4 updates (#5104)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission


2023-11-16 17:52:06 +11:00
be4f3fa5c6 Added LCM-LoRA 2023-11-16 16:32:55 +11:00
d0375ec234 Added FAQ 2023-11-16 16:10:43 +11:00
1bf8625b10 Updates to invocations 2023-11-16 15:35:24 +11:00
5d6040b636 Updated invocations docs 2023-11-16 15:02:06 +11:00
ead1b14ee7 feat: updateable workflow nodes (#5102)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description

[fix(nodes): bump version of nodes post-pydantic
v2](5cb3fdb64c)

This was not done, despite new metadata fields being added to many
nodes.

[feat(ui): add update node
functionality](3f6e8e9d6b)

A workflow's nodes may update themselves, if their major version matches
the template's major version.

If the major versions do not match, the user will need to delete and
re-add the node (current behaviour).

The update functionality is not automatic (for now). The logic to update
the node is pretty simple, but I want to ensure it works well first
before doing it automatically when a workflow is loaded.

- New `Details` tab on Workflow Inspector, displays node title, type,
version, and notes
- Button to update the node is displayed on the `Details` tab
- Add hook to determine if a node needs an update, may be updated (i.e.
major versions match), and the callback to update the node in state
- Remove the notes modal from the little info icon
- Modularize the node building logic

## Related Tickets & Documents


Related issues probably exist, but I'm not sure where.

## QA Instructions, Screenshots, Recordings

Load an old workflow with nodes that need to be updated. Click on each
node that needs updating and click the update button. Workflow should
work.

2023-11-16 12:57:01 +11:00
92a9355ddb chore(ui): lint 2023-11-16 12:46:56 +11:00
7fcf475aec feat(ui): add Update All Nodes button 2023-11-16 12:42:25 +11:00
3f6e8e9d6b feat(ui): add update node functionality
A workflow's nodes may update themselves, if their major version matches the template's major version.

If the major versions do not match, the user will need to delete and re-add the node (current behaviour).

The update functionality is not automatic (for now). The logic to update the node is pretty simple, but I want to ensure it works well first before doing it automatically when a workflow is loaded.

- New `Details` tab on Workflow Inspector, displays node title, type, version, and notes
- Button to update the node is displayed on the `Details` tab
- Add hook to determine if a node needs an update, may be updated (i.e. major versions match), and the callback to update the node in state
- Remove the notes modal from the little info icon
- Modularize the node building logic
2023-11-16 11:36:20 +11:00
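
The compatibility rule is simple to state in code. A standalone sketch of the check (written in Python for illustration; the real logic lives in the TypeScript frontend):

```
def may_update_node(node_version: str, template_version: str) -> bool:
    """A node may be updated in place only when major versions match."""
    node_major = int(node_version.split(".")[0])
    template_major = int(template_version.split(".")[0])
    return node_major == template_major


assert may_update_node("1.0.0", "1.2.0")      # update in place
assert not may_update_node("1.2.0", "2.0.0")  # delete and re-add the node
```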
c9655236cc chore(ui): regen types 2023-11-16 11:21:39 +11:00
5cb3fdb64c fix(nodes): bump version of nodes post-pydantic v2 2023-11-16 11:14:26 +11:00
ae749ada6e pin torch==2.1.0, torchvision=0.16.0 (#5101)
## Description

pin torch==2.1.0, torchvision=0.16.0

Prevents accidental upgrade to unreleased torch 2.1.1, which breaks
stuff

## Related Tickets & Documents


- Related Issue #5065
2023-11-16 09:38:04 +11:00
36b8549f3a pin torch==2.1.0, torchvision=0.16.0 2023-11-16 09:28:29 +11:00
b6f356f067 Change stylecheck name from "black" to "ruff" (#5090)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because: it is trivial

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No


## Description

After the switch to the "ruff" linter, I noticed that the stylecheck
workflow is still described as "black" in the action logs. This small PR
should fix the issue.
2023-11-15 08:29:41 +11:00
a4f1db7c02 change stylecheck name from "black" to "ruff" 2023-11-14 11:06:10 -05:00
21206bafcf chore: bump pydantic and fastapi
No breaking changes for us.

Pydantic is working on its own faster JSON parser, `jiter`, and 2.5.0 starts bringing this in. See https://github.com/pydantic/jiter

There are a number of other bugfixes and minor changes in this version of pydantic.

The FastAPI update is mostly internal but let's stay up to date.
2023-11-14 14:34:14 +11:00
a047bad391 Revert torch to use cu121 (#5091)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [X] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


2023-11-14 13:47:51 +11:00
909afc266e Update 010_INSTALL_AUTOMATED.md 2023-11-13 20:28:00 -05:00
4039dd148d Update 030_INSTALL_CUDA_AND_ROCM.md 2023-11-13 20:28:00 -05:00
ea0f8b8791 Update 020_INSTALL_MANUAL.md 2023-11-13 20:28:00 -05:00
f412582d60 Update README.md to cu121 2023-11-13 20:28:00 -05:00
c5672adb6b Update 070_INSTALL_XFORMERS.md 2023-11-13 20:28:00 -05:00
0e5c3a641a Revert torch to use cu121 2023-11-13 20:28:00 -05:00
9015e72e1e Update README.md to include M3 (#5092)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Added/updated tests?

- [ ] Yes
- [x] No
2023-11-14 12:24:57 +11:00
6b05d27c7a Update 040_INSTALL_DOCKER.md 2023-11-14 12:22:46 +11:00
19d0673085 Update 010_INSTALL_AUTOMATED.md 2023-11-14 12:22:08 +11:00
048b4fe7e8 Update README.md to include M3 2023-11-13 19:11:31 -06:00
e8b83fecff fix(backend): apply clip skip after lora
This handles LoRAs that attempt to modify layers skipped by CLIP Skip.
2023-11-14 11:30:15 +11:00
8883ecb2bf Model Manager Refactor Phase 1 - SQL-based config storage (#5039)
## What type of PR is this? (check all applicable)

- [X] Refactor


## Have you discussed this change with the InvokeAI team?
- [X] Extensively
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

As discussed with @psychedelicious and @RyanJDick, this is the first
phase of the model manager refactor. In this phase, I've added support
for storing model configuration information in the `invokeai.db` SQLite 3
database. All the code is separate from the original model manager; for
the time being the frontend is still using the original YAML-based
configuration, so the web app still works.

To keep things clean, I've added a new FastAPI route called
`model_records` which can add, update, retrieve and delete model
records.

The architecture is described in the first section of
`docs/contributing/MODEL_MANAGER.md`.

## QA Instructions, Screenshots, Recordings

There is a pytest for the model sql storage backend in
`tests/backend/model_manager_2/test_model_storage_sql.py`.

To populate `invokeai.db` with models from your current `models.yaml`,
do the following:

1. Stop the running server
2. Back up `invokeai.db`
3. Run `pip install -e .` to install the command used in the next step.
4. Run `invokeai-migrate-models-to-db`

This will iterate through `models.yaml` and create equivalent database
entries in the `model_config` table of `invokeai.db`. Only the models
named in the yaml file will be migrated, so anything that is autoloaded
will be ignored.

Note that in order to get the `model_records` router to be recognized by
the swagger API, I had to rebuild the frontend. Not sure why this was
necessary and would appreciate a pointer on a less radical way to do
this.

## Added/updated tests?

- [X] Yes
- [ ] No
2023-11-13 18:59:25 -05:00
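
To peek at what the migration produced, the records can be inspected directly with Python's built-in sqlite3 module. The `model_config` table name comes from the PR text above; the rest of the schema is deliberately left unspecified here.

```
import sqlite3

# Back up invokeai.db first, per the QA instructions above.
conn = sqlite3.connect("invokeai.db")

# "model_config" is the table named in the PR; SELECT * avoids assuming columns.
for row in conn.execute("SELECT * FROM model_config"):
    print(row)

conn.close()
```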
2f97f1d6d5 Merge branch 'main' into refactor/model-manager-2 2023-11-13 18:21:16 -05:00
73d6cc824b Update Pytorch to ~2.1.0 in the installer script (#5089)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [X] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because it's required

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No, not necessary


## Description

We use Pytorch ~2.1.0 as a dependency for InvokeAI, but the installer
still installs 2.0.1 first, until InvokeAI's dependencies kick in and
cause it to be deleted and replaced with 2.1.0 anyway. This is
unnecessary and probably not wanted.

Fixed the dependencies for the installation script to install Pytorch
~2.1.0 to begin with.

P.S. Is there any reason why "torchmetrics==0.11.4" is pinned? Does that
change with Pytorch 2.1? It seems to work, since we use it already. It
would be nice to know the reason.

Greetings

2023-11-13 18:20:36 -05:00
acc0a29dca fixed ruff formatting issues 2023-11-13 18:15:17 -05:00
38c1436f02 resolve conflicts; blackify 2023-11-13 18:12:45 -05:00
efbdb75568 implement psychedelicious recommendations as of 13 November 2023-11-13 17:05:01 -05:00
8929495aeb fix(test): remove unused assignment to value 2023-11-14 08:08:23 +11:00
428f0b265f feat(api): add log stmt to update_model_record route 2023-11-14 08:06:35 +11:00
7daee41ad2 fix(api): remove unused ModelsListValidator 2023-11-14 08:01:44 +11:00
7cdd7b6ad7 feat(api): simplify list_model_records handler 2023-11-14 08:00:21 +11:00
bc64cde6f9 chore: ruff lint 2023-11-14 07:57:07 +11:00
4465f97cdf Merge branch 'main' into refactor/model-manager-2 2023-11-14 07:51:57 +11:00
fface2cda7 Update torch to ~2.1.0 in the installer 2023-11-13 17:30:51 +01:00
7fcb8959fb chore(ui): cleanup (#5084)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

Bit of a cleanup. 

[chore(ui): delete unused
files](5eaea9dd64)

[feat(ui): add eslint rule
react/jsx-no-bind](3a0ec635c9)

This rule enforces no arrow functions in component props. In practice,
it means all functions passed as component props must be wrapped in
`useCallback()`.

This is a performance optimization to prevent unnecessary rerenders.

The rule is added and all violations have been fixed, whew!

[chore(ui): move useCopyImageToClipboard to
common/hooks/](f2d26a3a3c)

[chore(ui): move MM components & store to
features/](bb52861896)

Somehow they had ended up in `features/ui/tabs` which isn't right

## QA Instructions, Screenshots, Recordings

UI should still work.

It builds successfully, and I tested things out - looks good to me.
2023-11-13 13:22:41 +05:30
dcf0dc4274 Merge branch 'main' into chore/ui/cleanup 2023-11-13 16:33:08 +11:00
bb52861896 chore(ui): move MM components & store to features/
Somehow they had ended up in `features/ui/tabs` which isn't right
2023-11-13 16:32:03 +11:00
f2d26a3a3c chore(ui): move useCopyImageToClipboard to common/hooks/ 2023-11-13 16:23:46 +11:00
04d8f2dfea fix(backend): fix controlnet zip len
Do not use `strict=True` when scaling controlnet conditioning.

When using `guess_mode` (e.g. `more_control` or `more_prompt`), `down_block_res_samples` and `scales` are zipped.

These two objects are of different lengths, so using zip's strict mode raises an error.

In testing, `len(scales) == len(down_block_res_samples) + 1`.

It appears this behaviour is intentional, as the final "extra" item in `scales` is used immediately afterwards.
2023-11-13 15:45:03 +11:00
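
A standalone illustration of the zip behaviour the fix relies on (the values are made up; only the length relationship matters):

```
down_block_res_samples = ["res_0", "res_1", "res_2"]
scales = [0.25, 0.5, 0.75, 1.0]  # len(scales) == len(down_block_res_samples) + 1

# strict=True raises because the lengths differ (Python 3.10+).
try:
    list(zip(down_block_res_samples, scales, strict=True))
except ValueError as err:
    print(err)  # zip() argument 2 is longer than argument 1

# Plain zip stops at the shorter input, leaving the final "extra" scale
# available for use immediately afterwards, as described above.
scaled = list(zip(down_block_res_samples, scales))
final_scale = scales[-1]
```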
355d4cf4e2 Update Accelerate to 0.24.X (#5075)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [X] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because: This is just housekeeping

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No, not needed


## Description

Update Accelerate to the most recent version. No breaking changes.
Tested for one week in production use now.

2023-11-13 14:20:05 +11:00
a3a828779a Merge branch 'main' into update-accelerate 2023-11-13 14:10:53 +11:00
8c71ff37ae Update config.py
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-11-12 19:03:39 -05:00
ddb65e6034 Merge branch 'main' into chore/ui/cleanup 2023-11-13 10:53:04 +11:00
8366cd2a00 feat: use ruff for lint & format (#5070)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [x] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

This PR introduces [`ruff`](https://github.com/astral-sh/ruff) as the
only linter and formatter needed for the project. It is really fast.
Like, alarmingly fast.

It is a drop-in replacement for flake8, isort, black, and much more.
I've configured it similarly to our existing config.

Note: we had enabled a number of flake8 plugins but didn't have the
packages themselves installed, so they did nothing. Ruff used the
existing config, and found a good number of changes needed to adhere to
those flake8 plugins. I've resolved all violations.

### Code changes

- many
[flake8-comprehensions](https://docs.astral.sh/ruff/rules/#flake8-comprehensions-c4)
violations, almost all auto-fixed
- a good handful of
[flake8-bugbear](https://docs.astral.sh/ruff/rules/#flake8-bugbear-b)
violations
- handful of
[pycodestyle](https://docs.astral.sh/ruff/rules/#pycodestyle-e-w)
violations
- some formatting

### Developer Experience

[Ruff integrates with most
editors](https://docs.astral.sh/ruff/integrations/):
- Official VSCode extension
- `ruff-lsp` python package allows it to integrate with any LSP-capable
editor (vim, emacs, etc)
- Can be configured as an external tool in PyCharm

### Github Actions

I've updated the `style-checks` action to use ruff, and deleted the
`pyflakes` action.

## Related Tickets & Documents


- Closes #5066 

## QA Instructions, Screenshots, Recordings

Have a poke around, and run the app. There were some logic changes but
it was all pretty straightforward.

~~Not sure how to best test the changed github action.~~ Looks like it
just used the action from this PR, that's kinda unexpected but OK.


2023-11-13 10:41:43 +11:00
ab1ec3720a Merge branch 'main' into feat/ruff 2023-11-13 10:32:23 +11:00
3a0ec635c9 feat(ui): add eslint rule react/jsx-no-bind
This rule enforces no arrow functions in component props. In practice, it means all functions passed as component props must be wrapped in `useCallback()`.

This is a performance optimization to prevent unnecessary rerenders.

The rule is added and all violations have been fixed, whew!
2023-11-13 10:01:14 +11:00
8afe517204 add note about discriminated union and Body() issue; blackified 2023-11-12 16:50:05 -05:00
5eaea9dd64 chore(ui): delete unused files 2023-11-13 08:43:27 +11:00
71e298b722 Feat (ui): Add VAE Model to Recall Parameters (#5073)
* adding VAE recall when using all parameters

* adding VAE to the RecallParameters tab in ImageMetadataActions

* checking for nil vae and casting to null if undefined

* adding default VAE to recall actions list if VAE is nullish

* fix(ui): use `lodash-es` for tree-shakeable imports

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-11-12 21:19:12 +00:00
89a039460d feat(ui): add number inputs for canvas brush color picker (#5067)
* drop-down for the color picker

* fixed the bug in alpha value

* designing done

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-11-12 21:07:26 +00:00
a342e64772 Merge branch 'main' into feat/ruff 2023-11-13 07:54:06 +11:00
ef8dcf5fae blackify 2023-11-12 14:20:32 -05:00
90a038c685 translationBot(ui): update translation (Italian)
Currently translated at 97.7% (1200 of 1228 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-11-12 20:24:04 +11:00
024a156114 isort 2023-11-11 13:58:36 -05:00
7ea2a135f1 remove dangling import 2023-11-11 12:24:58 -05:00
af2264b6eb implement workaround for FastAPI and discriminated unions in Body parameter 2023-11-11 12:22:38 -05:00
41bf9ec4a3 Update Accelerate to 0.24.X 2023-11-11 09:46:23 +01:00
520ccdb0a9 Merge branch 'main' into feat/ruff 2023-11-11 15:07:35 +11:00
2b36565e9e awkward workaround for double-Annotated in model_record route 2023-11-10 21:32:44 -05:00
f2c3b7c317 Merge branch 'refactor/model-manager-2' of github.com:invoke-ai/InvokeAI into refactor/model-manager-2 2023-11-10 19:47:01 -05:00
67751a01ab remove unused import 2023-11-10 19:25:05 -05:00
cb8cdefd59 Merge branch 'main' into refactor/model-manager-2 2023-11-10 19:24:19 -05:00
f1c846ba5c blackify 2023-11-10 19:14:29 -05:00
3a6ba236f5 replace _class_map in ModelConfigFactory with a nested discriminated union 2023-11-10 19:14:15 -05:00
1c7ea57492 feat (ui, generation): High Resolution Fix- added automatic resolution toggle and replaced latent upscale with two improved methods (#4905)
* working

* added selector for method

* refactoring graph

* added ESRGAN method

* fixing yarn build

* add tooltips

* a conjunction

* rephrase

* removed manual sliders, set HRF to calculate dimensions automatically to match 512^2 pixels

* working

* working

* working

* fixed tooltip

* add hrf to use all parameters

* adding hrf method to parameters

* working on parameter recall

* working on parameter recall

* cleaning

* fix(ui): fix unnecessary casts in addHrfToGraph

* chore(ui): use camelCase in addHrfToGraph

* fix(ui): do not add HRF metadata unless HRF is added to graph

* fix(ui): remove unused imports in addHrfToGraph

* feat(ui): do not hide HRF params when disabled, only disable them

* fix(ui): remove unused vars in addHrfToGraph

* feat(ui): default HRF str to 0.35, method ESRGAN

* fix(ui): use isValidBoolean to check hrfEnabled param

* fix(nodes): update CoreMetadataInvocation fields for HRF

* feat(ui): set hrf strength default to 0.45

* fix(ui): set default hrf strength in configSlice

* feat(ui): use translations for HRF features

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-11-11 00:11:46 +00:00
6494e8e551 chore: ruff format 2023-11-11 10:55:40 +11:00
513fceac82 chore: ruff check - fix pycodestyle 2023-11-11 10:55:33 +11:00
99a8ebe3a0 chore: ruff check - fix flake8-bugbear 2023-11-11 10:55:28 +11:00
3a136420d5 chore: ruff check - fix flake8-comprehensions 2023-11-11 10:55:23 +11:00
bd56e9bc81 remove cruft code from router 2023-11-10 18:49:25 -05:00
43f2398e14 feat: use ruff's github output format for action 2023-11-11 10:42:27 +11:00
d0cf98d7f6 feat: add ruff-lsp to support most editors 2023-11-11 10:42:27 +11:00
8111dd6cc5 feat: remove pyflakes gh action
ruff supersedes it
2023-11-11 10:42:27 +11:00
99e4b87fae feat: use ruff in GH style-checks action 2023-11-11 10:42:27 +11:00
884ec0b5df feat: replace isort, flake8 & black with ruff 2023-11-11 10:42:27 +11:00
9ccfa34e04 Update installer.py to use cu118 instead of 121 (#5069)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


2023-11-11 10:40:47 +11:00
d5aa74623d Merge branch 'main' into Millu-patch-1 2023-11-11 10:39:06 +11:00
d63a614b8b Update Transformers to 4.35 and fix pad_to_multiple_of (#4817)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [X] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes, with @blessedcoolant 
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description

This PR updates Transformers to the most recent version and fixes the
value of `pad_to_multiple_of` for `text_encoder.resize_token_embeddings`,
which was introduced with
https://github.com/huggingface/transformers/pull/25088 in Transformers
4.32.0.

According to the [Nvidia
Documentation](https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc),
`Performance is better when equivalent matrix dimensions M, N, and K are
aligned to multiples of 8 bytes (or 64 bytes on A100) for FP16`
This fixes the following error that was popping up before every
invocation starting with Transformers 4.32.0
`You are resizing the embedding layer without providing a
pad_to_multiple_of parameter. This means that the new embedding
dimension will be None. This might induce some performance reduction as
Tensor Cores will not be available. For more details about this, or help
on choosing the correct value for resizing, refer to this guide:
https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc`

This is my first "real" fix PR, so I hope this is fine. Please inform me
if there is anything wrong with this. I am glad to help.

Have a nice day and thank you!


## Related Tickets & Documents


- Related Issue:
https://github.com/huggingface/transformers/issues/26303
- Related Discord discussion:
https://discord.com/channels/1020123559063990373/1154152783579197571
2023-11-11 10:38:33 +11:00
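
A minimal sketch of the described fix; the model name and added token are illustrative, not InvokeAI's actual code:

```
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

# e.g. a textual inversion token that grows the vocabulary
tokenizer.add_tokens(["<my-concept>"])

# Passing pad_to_multiple_of=8 keeps the embedding matrix dimensions aligned
# for Tensor Cores and silences the Transformers >= 4.32 warning quoted above.
text_encoder.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=8)
```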
cbc905a4d6 Update installer.py to use cu118 instead of 121 2023-11-11 10:36:07 +11:00
b55fc2935e resolve conflicts with commits done on github 2023-11-10 18:26:48 -05:00
0544917161 multiple small fixes suggested in reviews from psychedelicious and ryan 2023-11-10 18:25:37 -05:00
1161dfe055 Update invokeai/app/api/routers/model_records.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2023-11-10 18:24:55 -05:00
433f347d7e Update invokeai/app/api/routers/model_records.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2023-11-10 18:22:54 -05:00
33a412a24f Update invokeai/backend/model_manager/config.py
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-11-10 18:21:38 -05:00
9316534d97 Update invokeai/app/services/model_records/model_records_sql.py
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-11-10 17:58:15 -05:00
fdaa661245 revert frontend dist files to main 2023-11-10 17:57:18 -05:00
f1c195afb7 Merge branch 'main' into refactor/model-manager-2 2023-11-10 17:54:28 -05:00
6001d3d71d Change pad_to_multiple_of to be 8 for all cases. Add comment about its temporary status 2023-11-10 17:51:59 -05:00
b9f607be56 Update to 4.35.X 2023-11-10 17:51:59 -05:00
8831d1ee41 Update Documentation 2023-11-10 17:51:59 -05:00
a0be83e370 Update Transformers to 4.34 and fix pad_to_multiple_of 2023-11-10 17:51:59 -05:00
8702a63197 add support for downloading and installing LCM lora diffusers models 2023-11-10 17:51:30 -05:00
d7f0a7919f chore(ui): manually update vite to fix security issue in hoisted dep
`postcss` is a hoisted dependency of `vite`.
2023-11-10 06:58:22 -08:00
356b5a41a9 wip: Add LCMScheduler 2023-11-10 06:54:36 -08:00
e56a6d85a9 Update diffusers to ~=0.23 (#5063)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


2023-11-10 12:44:28 +11:00
e22a091d76 Update diffusers to ~=0.23 2023-11-10 11:50:50 +11:00
141d02939a Upstream diffusers PR was merged, this no longer seems necessary (#5060)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


2023-11-10 11:47:21 +11:00
5cb372e9d0 Merge branch 'main' into remove-deprecated-sdxl-t2i-hack 2023-11-10 11:33:32 +11:00
f95fe68753 chore(ui): manually bump deps with security issues 2023-11-10 09:50:00 +11:00
6d33893844 chore(ui): update all deps 2023-11-10 09:50:00 +11:00
fc53112d8e chore(ui): remove unused deps 2023-11-10 09:50:00 +11:00
41f7aa6ab4 Remove unused import: 2023-11-09 15:06:01 -05:00
9bec755198 Upstream diffusers PR was merged, this no longer seems necessary 2023-11-09 15:02:24 -05:00
2570497d83 fix(installer): fix import of ValidationError
It was being imported from a deprecated module
2023-11-10 06:11:15 +11:00
5d735a714d translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 100.0% (1219 of 1219 strings)

Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-11-09 10:54:56 -08:00
6aa87f973e fix(nodes): create app/shared/ module to prevent circular imports
We have a number of shared classes, objects, and functions that are used in multiple places. This causes circular import issues.

This commit creates a new `app/shared/` module to hold these shared classes, objects, and functions.

Initially, only `FreeUConfig` and `FieldDescriptions` are moved here. This resolves a circular import issue with custom nodes.

Other shared classes, objects, and functions will be moved here in future commits.
2023-11-09 16:41:55 +11:00
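
The pattern, sketched with an illustrative attribute (only the module concept and the `FieldDescriptions` name are from this commit):

```
# invokeai/app/shared/fields.py - a leaf module: it imports nothing from the
# rest of the app, so it can never participate in an import cycle.
class FieldDescriptions:
    seed = "The seed to use"  # attribute illustrative
```

Both former participants in a cycle (for example an invocations module and a services module) then import from this leaf module instead of from each other.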
f793fdf3d4 fix(socketio): leave room on unsubscribe
https://discord.com/channels/1020123559063990373/1049495067846524939/1171976251704086559
2023-11-09 12:12:32 +11:00
3b363d0258 fix flake8 lint check failures 2023-11-08 16:52:46 -05:00
36e0faea6b blackify 2023-11-08 16:47:03 -05:00
927f8a66e6 Merge branch 'main' into refactor/model-manager-2 2023-11-08 16:46:08 -05:00
eebc0e7315 Merge branch 'refactor/model-manager-2' of github.com:invoke-ai/InvokeAI into refactor/model-manager-2 2023-11-08 16:45:29 -05:00
6b173cc66f multiple small stylistic changes requested by reviewers 2023-11-08 16:45:26 -05:00
b4732a7308 Update invokeai/app/services/model_records/model_records_base.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2023-11-08 13:50:40 -05:00
344a56327a Update invokeai/app/services/model_records/model_records_base.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2023-11-08 13:50:01 -05:00
2e404b7cca Fix updater option list numbering
Fix updater option list numbering in invokeai_update.py so that the numbers don't go 1, 2, 2, 3. The options themselves work fine.
2023-11-07 19:11:25 -08:00
a760bdae9f (fix) update freeU config to be compatible with custom nodes (#5050)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes: @psychedelicious told me to do this :) 
- [ ] No, because:

      
2023-11-06 21:43:18 -08:00
4cfd55936c run black formatting 2023-11-07 16:06:18 +11:00
5c3a27aac6 fixed sorts 2023-11-07 16:03:06 +11:00
d573a23090 Moved FreeU Config Import 2023-11-07 15:48:53 +11:00
351abd2ca2 Merge branch 'invoke-ai:main' into main 2023-11-07 15:43:04 +11:00
9733cd4199 Update xformers to 0.0.22post7 2023-11-06 17:17:03 -08:00
9976bc6908 Update installer.py to cu121 2023-11-06 17:17:03 -08:00
c68db6e40f Update xformers to ~0.0.22 2023-11-06 17:17:03 -08:00
3a50798a52 Update xformers to 0.0.22post7 2023-11-07 12:00:39 +11:00
a98426d2c6 Update installer.py to cu121 2023-11-07 11:57:02 +11:00
504f426f0a Update xformers to ~0.0.22 2023-11-07 11:53:39 +11:00
840cbc1d39 xformers==0.0.20 (#4881)
I'm not sure if this is the correct way of handling things, but correcting
this string to '==0.0.20' fixes the xformers install for me - and maybe it
will for others too. Apologies if this PR is completely incorrect.

Please see [this
thread](https://github.com/facebookresearch/xformers/issues/740), this
is the issue I had (trying to install InvokeAI with
Automatic/Manual/StableMatrix way).

With ~=0.0.19 (which resolves to 0.0.22):
```
(InvokeAI) pip install torch torchvision xformers~=0.0.19
Collecting torch
  Obtaining dependency information for torch from edce54779f/torch-2.1.0-cp311-cp311-win_amd64.whl.metadata
  Using cached torch-2.1.0-cp311-cp311-win_amd64.whl.metadata (25 kB)
Collecting torchvision
  Obtaining dependency information for torchvision from ab6f42af83/torchvision-0.16.0-cp311-cp311-win_amd64.whl.metadata
  Using cached torchvision-0.16.0-cp311-cp311-win_amd64.whl.metadata (6.6 kB)
Collecting xformers
  Using cached xformers-0.0.22.post3.tar.gz (3.9 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  error: subprocess-exited-with-error

  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [20 lines of output]
      Traceback (most recent call last):
        File "C:\Users\Drun\invokeai\.venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
          main()
        File "C:\Users\Drun\invokeai\.venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "C:\Users\Drun\invokeai\.venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
          return hook(config_settings)
                 ^^^^^^^^^^^^^^^^^^^^^
        File "C:\Users\Drun\AppData\Local\Temp\pip-build-env-rmhvraqj\overlay\Lib\site-packages\setuptools\build_meta.py", line 355, in get_requires_for_build_wheel
          return self._get_build_requires(config_settings, requirements=['wheel'])
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "C:\Users\Drun\AppData\Local\Temp\pip-build-env-rmhvraqj\overlay\Lib\site-packages\setuptools\build_meta.py", line 325, in _get_build_requires
          self.run_setup()
        File "C:\Users\Drun\AppData\Local\Temp\pip-build-env-rmhvraqj\overlay\Lib\site-packages\setuptools\build_meta.py", line 507, in run_setup
          super(_BuildMetaLegacyBackend, self).run_setup(setup_script=setup_script)
        File "C:\Users\Drun\AppData\Local\Temp\pip-build-env-rmhvraqj\overlay\Lib\site-packages\setuptools\build_meta.py", line 341, in run_setup
          exec(code, locals())
        File "<string>", line 23, in <module>
      ModuleNotFoundError: No module named 'torch'
```

With 0.0.20:

```
(InvokeAI) pip install torch torchvision xformers==0.0.20
Collecting torch
  Obtaining dependency information for torch from edce54779f/torch-2.1.0-cp311-cp311-win_amd64.whl.metadata
  Using cached torch-2.1.0-cp311-cp311-win_amd64.whl.metadata (25 kB)
Collecting torchvision
  Obtaining dependency information for torchvision from ab6f42af83/torchvision-0.16.0-cp311-cp311-win_amd64.whl.metadata
  Using cached torchvision-0.16.0-cp311-cp311-win_amd64.whl.metadata (6.6 kB)
Collecting xformers==0.0.20
  Obtaining dependency information for xformers==0.0.20 from d4a42f582a/xformers-0.0.20-cp311-cp311-win_amd64.whl.metadata
  Using cached xformers-0.0.20-cp311-cp311-win_amd64.whl.metadata (1.1 kB)
Collecting numpy (from xformers==0.0.20)
  Obtaining dependency information for numpy from 3f826c6d15/numpy-1.26.0-cp311-cp311-win_amd64.whl.metadata
  Using cached numpy-1.26.0-cp311-cp311-win_amd64.whl.metadata (61 kB)
Collecting pyre-extensions==0.0.29 (from xformers==0.0.20)
  Using cached pyre_extensions-0.0.29-py3-none-any.whl (12 kB)
Collecting torch
  Using cached torch-2.0.1-cp311-cp311-win_amd64.whl (172.3 MB)
Collecting filelock (from torch)
  Obtaining dependency information for filelock from 97afbafd9d/filelock-3.12.4-py3-none-any.whl.metadata
  Using cached filelock-3.12.4-py3-none-any.whl.metadata (2.8 kB)
Requirement already satisfied: typing-extensions in c:\users\drun\invokeai\.venv\lib\site-packages (from torch) (4.8.0)
Requirement already satisfied: sympy in c:\users\drun\invokeai\.venv\lib\site-packages (from torch) (1.12)
Collecting networkx (from torch)
  Using cached networkx-3.1-py3-none-any.whl (2.1 MB)
Collecting jinja2 (from torch)
  Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB)
Collecting typing-inspect (from pyre-extensions==0.0.29->xformers==0.0.20)
  Obtaining dependency information for typing-inspect from 107a22063b/typing_inspect-0.9.0-py3-none-any.whl.metadata
  Using cached typing_inspect-0.9.0-py3-none-any.whl.metadata (1.5 kB)
Collecting requests (from torchvision)
  Obtaining dependency information for requests from 0e2d847013/requests-2.31.0-py3-none-any.whl.metadata
  Using cached requests-2.31.0-py3-none-any.whl.metadata (4.6 kB)
INFO: pip is looking at multiple versions of torchvision to determine which version is compatible with other requirements. This could take a while.
Collecting torchvision
  Using cached torchvision-0.15.2-cp311-cp311-win_amd64.whl (1.2 MB)
Collecting pillow!=8.3.*,>=5.3.0 (from torchvision)
  Obtaining dependency information for pillow!=8.3.*,>=5.3.0 from debe992677/Pillow-10.0.1-cp311-cp311-win_amd64.whl.metadata
  Using cached Pillow-10.0.1-cp311-cp311-win_amd64.whl.metadata (9.6 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch)
  Obtaining dependency information for MarkupSafe>=2.0 from 08b85bc194/MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl.metadata
  Using cached MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl.metadata (3.1 kB)
Collecting charset-normalizer<4,>=2 (from requests->torchvision)
  Obtaining dependency information for charset-normalizer<4,>=2 from 50028bbb26/charset_normalizer-3.3.0-cp311-cp311-win_amd64.whl.metadata
  Using cached charset_normalizer-3.3.0-cp311-cp311-win_amd64.whl.metadata (33 kB)
Collecting idna<4,>=2.5 (from requests->torchvision)
  Using cached idna-3.4-py3-none-any.whl (61 kB)
Collecting urllib3<3,>=1.21.1 (from requests->torchvision)
  Obtaining dependency information for urllib3<3,>=1.21.1 from 9957270221/urllib3-2.0.6-py3-none-any.whl.metadata
  Using cached urllib3-2.0.6-py3-none-any.whl.metadata (6.6 kB)
Collecting certifi>=2017.4.17 (from requests->torchvision)
  Obtaining dependency information for certifi>=2017.4.17 from 2234eab223/certifi-2023.7.22-py3-none-any.whl.metadata
  Using cached certifi-2023.7.22-py3-none-any.whl.metadata (2.2 kB)
Requirement already satisfied: mpmath>=0.19 in c:\users\drun\invokeai\.venv\lib\site-packages (from sympy->torch) (1.3.0)
Collecting mypy-extensions>=0.3.0 (from typing-inspect->pyre-extensions==0.0.29->xformers==0.0.20)
  Using cached mypy_extensions-1.0.0-py3-none-any.whl (4.7 kB)
Using cached xformers-0.0.20-cp311-cp311-win_amd64.whl (97.6 MB)
Using cached Pillow-10.0.1-cp311-cp311-win_amd64.whl (2.5 MB)
Using cached filelock-3.12.4-py3-none-any.whl (11 kB)
Using cached numpy-1.26.0-cp311-cp311-win_amd64.whl (15.8 MB)
Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Using cached certifi-2023.7.22-py3-none-any.whl (158 kB)
Using cached charset_normalizer-3.3.0-cp311-cp311-win_amd64.whl (97 kB)
Using cached MarkupSafe-2.1.3-cp311-cp311-win_amd64.whl (17 kB)
Using cached urllib3-2.0.6-py3-none-any.whl (123 kB)
Using cached typing_inspect-0.9.0-py3-none-any.whl (8.8 kB)
Installing collected packages: urllib3, pillow, numpy, networkx, mypy-extensions, MarkupSafe, idna, filelock, charset-normalizer, certifi, typing-inspect, requests, jinja2, torch, pyre-extensions, xformers, torchvision
Successfully installed MarkupSafe-2.1.3 certifi-2023.7.22 charset-normalizer-3.3.0 filelock-3.12.4 idna-3.4 jinja2-3.1.2 mypy-extensions-1.0.0 networkx-3.1 numpy-1.26.0 pillow-10.0.1 pyre-extensions-0.0.29 requests-2.31.0 torch-2.0.1 torchvision-0.15.2 typing-inspect-0.9.0 urllib3-2.0.6 xformers-0.0.20
```

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: it was a no-brainer. It fixed the issue for me, so I opened a PR.
Who knows?

## Technical details:
Windows 11; standalone, clean, freshly installed Python 3.11
2023-11-06 16:07:03 -08:00
014d6187ab Update pyproject.toml 2023-11-07 10:22:20 +11:00
9fb15fae87 Update pyproject.toml 2023-11-07 10:20:16 +11:00
a07336a020 Merge branch 'main' into patch-1 2023-11-07 10:17:46 +11:00
0718cc2392 Update xformers to 0.0.21 2023-11-07 10:16:44 +11:00
ce22c0fbaa sync pydantic and sql field names; merge routes 2023-11-06 18:08:57 -05:00
935e4632c2 feat(nodes): add freeu support (#4846)
### What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description

**Note: FreeU is not in the current release of diffusers. Looks like it
will be in release 0.22. This PR needs to wait until that is released.**

[feat(nodes): add freeu
support](15b33ad501)

Add support for FreeU. See:
- https://huggingface.co/docs/diffusers/main/en/using-diffusers/freeu
- https://github.com/ChenyangSi/FreeU

Implementation:
- `ModelPatcher.apply_freeu()` handles enabling FreeU (which is very
simple with diffusers; see the sketch below).
- `FreeUConfig` model added to hold the hyperparameters.
- `freeu_config` added as optional sub-field on `UNetField`.
- `FreeUInvocation` added, works like LoRA - chain it to add the FreeU
config to the UNet
- No support for model-dependent presets, this will be a future workflow
editor enhancement
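
A minimal sketch of that diffusers call (assuming diffusers >= 0.22; the
hyperparameter values here are the FreeU authors' suggested SD1.5 settings,
not necessarily InvokeAI's defaults):

```py
# Hedged sketch, not InvokeAI's ModelPatcher code: enabling FreeU directly
# on a diffusers pipeline. b1/b2 scale backbone features, s1/s2 skip features.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)  # FreeU authors' SD1.5 values
image = pipe("an astronaut riding a horse").images[0]
pipe.disable_freeu()  # restores the vanilla UNet behaviour
```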

Closes https://github.com/invoke-ai/InvokeAI/issues/4845

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Closes #4845 

## QA Instructions, Screenshots, Recordings

You'll need to install diffusers from their github repo before testing
this:
`pip install git+https://github.com/huggingface/diffusers`

1. Create a graph like this:

![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/af17719b-b001-4534-8c4e-883484fd7465)
2. Get a free lunch!

No FreeU:

![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/279d1a69-1577-4c31-ab82-ebf67f65920d)
With FreeU:

![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/c332c778-0b87-4215-8a36-d4822e06f4de)

No FreeU:

![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/ebec097b-ad54-4295-b734-33656738a2cf)
With FreeU:

![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/3423140d-c9ce-4697-9993-d2bb0d0f5634)

No FreeU:

![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/7cb0e39d-aa87-4a48-a3af-b9f47a866814)
With FreeU:

![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/9113d2fe-5bd3-474f-8f33-82cdeb7abf82)
2023-11-06 13:58:32 -08:00
a83d8810c4 Merge branch 'main' into feat/nodes/freeu 2023-11-06 13:47:56 -08:00
76b3f8956b Fix ROCm support in Docker container 2023-11-06 13:47:08 -08:00
ff8a8a1963 Merge branch 'main' into feat/nodes/freeu 2023-11-06 09:04:54 -08:00
cb6d0c8851 Re-add feat/mix cnet t2iadapter (#4929)
Reverts invoke-ai/InvokeAI#4923, which was itself a revert of the premature
merge.

slide to the left. revert, revert.
2023-11-06 22:29:13 +05:30
67f2616d5a Merge branch 'main' into revert-4923-revert-4914-feat/mix-cnet-t2iadapter 2023-11-06 07:34:51 -08:00
f8f1740668 Set Defaults to 1 2023-11-06 07:11:16 -08:00
e66d0f7372 Merge branch 'main' into feat/nodes/freeu 2023-11-06 05:39:58 -08:00
546aaedbe4 Update pyproject.toml 2023-11-06 05:29:17 -08:00
55f8865524 Merge branch 'main' into refactor/model-manager-2 2023-11-05 21:45:26 -05:00
2d051559d1 fix flake8 complaints 2023-11-05 21:45:08 -05:00
7f650d00de translationBot(ui): update translation (Italian)
Currently translated at 97.7% (1191 of 1219 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-11-05 11:12:33 -08:00
db9cef0092 re-run isort 2023-11-04 23:50:07 -04:00
72c34aea75 added add_model_record and get_model_record to router api 2023-11-04 23:42:44 -04:00
edeea5237b add sql-based model config store and api 2023-11-04 23:03:26 -04:00
4e6b579526 translationBot(ui): update translation (Italian)
Currently translated at 97.6% (1190 of 1219 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-11-05 12:09:20 +11:00
6334c4adf5 translationBot(ui): update translation (German)
Currently translated at 53.8% (657 of 1219 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-11-05 12:09:20 +11:00
66b2366efc Remove LowRA from Initial Models (#5016)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No


## Description
Removing LowRA from the initial models as it's been deleted from
CivitAI.

## Related Tickets & Documents

https://discord.com/channels/1020123559063990373/1168415065205112872


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-11-04 17:14:24 -04:00
e147379aa7 Merge branch 'main' into main 2023-11-04 17:05:01 -04:00
5a821384d3 fix model-not-found error 2023-11-04 08:24:01 -07:00
2bbba323c6 Fix model cache gc.collect() condition. (#5036)
See
https://github.com/invoke-ai/InvokeAI/pull/5034#discussion_r1382381733
2023-11-04 09:09:46 -04:00
aa02ebf8f5 Fix model cache gc.collect() condition. 2023-11-04 08:52:10 -04:00
fb3d0c4b12 Fix bug in model cache reference count checking. 2023-11-03 13:50:40 -07:00
8488ab0134 Reduce frequency that we call gc.collect() in the model cache. 2023-11-03 13:50:40 -07:00
875231ed3d Add reminder to clean up our model cache clearing logic. 2023-11-03 13:50:40 -07:00
43b300498f Remove explicit gc.collect() after transferring models from device to CPU. I'm not sure why this was there in the first place, but it was taking a significant amount of time (up to ~1sec in my tests). 2023-11-03 13:50:40 -07:00
5b420653f9 feat(ui): show placeholder in refiner collapse instead of hiding it, if no refiner models installed 2023-11-03 14:15:24 +11:00
3d32ce2b58 fix(ui): hide refiner collapse if refiner not installed 2023-11-03 14:15:24 +11:00
e391f3c9a8 Skip torch.nn.Embedding.reset_parameters(...) when loading a text encoder model. 2023-11-02 19:41:33 -07:00
6e7a3f0546 (minor) Fix static checks and typo. 2023-11-02 19:20:37 -07:00
4a683cc669 Add a app config parameter to control the ModelCache logging behavior. 2023-11-02 19:20:37 -07:00
3781e56e57 Add log_memory_usage param to ModelCache. 2023-11-02 19:20:37 -07:00
267e709ba2 (minor) Fix int literal typing error. 2023-11-02 19:20:37 -07:00
8ff49109a8 Update get_pretty_snapshot_diff(...) to handle None-snapshots. 2023-11-02 19:20:37 -07:00
bac2a757e8 Replace deepcopy with a pickle roundtrip in apply_ti(...) to improve speed. 2023-11-02 19:05:24 -07:00
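
For illustration, a toy comparison of the two copy strategies from the commit above (a hedged sketch, not the actual `apply_ti(...)` code):

```py
# A pickle round trip is usually much faster than copy.deepcopy for large,
# picklable state, since deepcopy walks the object graph in pure Python.
import copy
import pickle
import time

state = {f"layer_{i}": list(range(1000)) for i in range(1000)}

t0 = time.perf_counter()
slow = copy.deepcopy(state)
t1 = time.perf_counter()
fast = pickle.loads(pickle.dumps(state, protocol=pickle.HIGHEST_PROTOCOL))
t2 = time.perf_counter()

print(f"deepcopy: {t1 - t0:.3f}s, pickle roundtrip: {t2 - t1:.3f}s")
assert slow == fast
```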
a4a7b601a1 Improve LoRA patching speed (#5017)
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

## Have you updated all relevant documentation?
- [x] Yes
- [ ] No

## Description
Improve LoRA patching speed with the following changes:
- Calculate LoRA layer weights on the same device as the target model.
Prior to this change, weights were always calculated on the CPU. If the
target model is on the GPU, this significantly improves performance. (See
the sketch after this list.)
- Move models to their target devices _before_ applying LoRA patches.
- Improve the ordering of Tensor copy / cast operations.
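
A hedged sketch of the first change (illustrative names, not the actual
`ModelPatcher` internals): the LoRA delta is computed on whatever device the
target layer already lives on, so only the small LoRA tensors move.

```py
import torch

def apply_lora_layer(module: torch.nn.Linear,
                     lora_up: torch.Tensor,
                     lora_down: torch.Tensor,
                     scale: float) -> torch.Tensor:
    """Patch `module` in place; return the original weight for unpatching."""
    device, dtype = module.weight.device, module.weight.dtype
    original = module.weight.detach().clone()
    # Compute the delta where the model lives (GPU if the model is on GPU),
    # casting the small LoRA tensors instead of moving the model weights.
    delta = (lora_up.to(device, torch.float32)
             @ lora_down.to(device, torch.float32)) * scale
    module.weight.data += delta.to(dtype)
    return original

layer = torch.nn.Linear(8, 8)
saved = apply_lora_layer(layer, torch.randn(8, 4), torch.randn(4, 8), 0.75)
layer.weight.data.copy_(saved)  # restore after generation
```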

## QA Instructions, Screenshots, Recordings

Tests:

- [x] Tested with a CUDA GPU, saw savings of ~10secs with 1 LoRA applied
to an SDXL model.
- [x] No regression in CPU-only environment
- [ ] No regression (and possible improvement?) on Mac with MPS.
- [x] Weights get restored correctly after using a LoRA
- [x] Stacking multiple LoRAs

Please hammer away with a variety of LoRAs in case there is some edge
case that I've missed.

## Added/updated tests?

- [x] Yes (Added some minimal unit tests. Definitely would benefit from
more, but it's a step in the right direction.)
- [ ] No
2023-11-02 13:34:10 -04:00
fa7f6a6a10 Further tidying of LoRA patching. Revert some changes that didn't end up being important under the constraint that calculations are done on the same device as the model. 2023-11-02 10:03:17 -07:00
e92b84955c Add minimal unit tests for ModelPatcher.apply_lora(...) 2023-11-02 10:03:17 -07:00
61b17c475a Add TODO note about improving _resolve_lora_key(...). 2023-11-02 10:03:17 -07:00
379d68f595 Patch LoRA on device when model is already on device. 2023-11-02 10:03:17 -07:00
545c811bf1 Remove device and dtype members from LoRAModelRaw, they can too easily get out-of-sync with the underlying layer states. 2023-11-02 10:03:17 -07:00
2ba5b44ec4 Remove unused _lora_forward_hook(...). 2023-11-02 10:03:17 -07:00
7f4ce518b7 auto-format lora.py 2023-11-02 10:03:17 -07:00
6c66adcd90 fix(ui): show collapse labels only if not default value 2023-11-01 14:41:13 +11:00
584b513038 Remove LowRA from Initial Models 2023-11-01 08:55:06 +11:00
94055ae54a translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 99.8% (1215 of 1217 strings)

Co-authored-by: nemuruibai <nemuruibai@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-11-01 05:31:16 +11:00
a79c86b901 translationBot(ui): update translation (German)
Currently translated at 51.7% (630 of 1217 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-11-01 05:31:16 +11:00
ed81d6d533 Update contributingNodes.md 2023-10-31 07:17:14 -07:00
63548c5ea7 Update community node installation instructions 2023-10-31 07:17:14 -07:00
8481db96ed Updated workflows 2023-10-31 07:17:14 -07:00
bb68175fd0 Add negative IP Adapter support 2023-10-31 14:30:24 +11:00
316131f69b Add option to invokeai update script to install latest prerelease (#5008)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

This PR gives the user the option of upgrading to the latest PRE-RELEASE
in addition to the default of updating to the latest release. This will
allow users to conveniently try out the latest pre-release for a while
and then back out to the official release if it doesn't work for them.
2023-10-31 08:45:25 +11:00
9721e1382d add option to install latest prerelease 2023-10-30 15:49:27 -04:00
03a64275c6 fix(db): fix deprecated pydantic .json() method 2023-10-31 04:34:51 +11:00
55bfadfd0b fix(nodes): fix DenoiseMaskField.masked_latents_name
This optional field needs to have a default of `None`.
2023-10-31 04:18:09 +11:00
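
A hedged sketch of the pattern behind the fix above (the surrounding field set is illustrative): with pydantic v2, typing a field `Optional[...]` no longer implies a default, so it needs an explicit `default=None`.

```py
from typing import Optional

from pydantic import BaseModel, Field

class DenoiseMaskField(BaseModel):
    mask_name: str = Field(description="The name of the mask image")
    # Without default=None this field would still be *required* at parse
    # time, despite being typed Optional - that was the bug.
    masked_latents_name: Optional[str] = Field(
        default=None, description="The name of the masked image latents"
    )
```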
224b09f8fd Enforce Unix line endings in container (#4990)
* (fix) enforce Unix (LF) line endings in docker/ directory

* (fix) update docker docs wrt line endings on Windows

* (fix) static check fixes
2023-10-30 12:34:30 -04:00
8dca194e2c Update communityNodes.md (#4999)
Added Average Images node

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [X] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Added a new community node that averages input images.
2023-10-30 11:15:11 +11:00
3a33bd7881 Merge branch 'main' into JPPhoto-average-images 2023-10-30 11:12:40 +11:00
d6d0fd313b Prevent prereleases from showing up in updater (#4997)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

This PR prevents the invokeai update script from offering prereleases.
2023-10-30 10:21:53 +11:00
95b90d22b5 Merge branch 'main' into bugfix/dont-release-prereleases 2023-10-30 06:04:24 +11:00
249618f6b4 translationBot(ui): update translation (German)
Currently translated at 40.3% (491 of 1217 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-10-30 05:26:20 +11:00
8109bc5316 translationBot(ui): update translation (German)
Currently translated at 40.3% (491 of 1217 strings)

Co-authored-by: Fabian Bahl <fabian98@bahl-netz.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-10-30 05:26:20 +11:00
015cec197b translationBot(ui): update translation (German)
Currently translated at 37.7% (460 of 1217 strings)

translationBot(ui): update translation (German)

Currently translated at 36.4% (444 of 1217 strings)

translationBot(ui): update translation (German)

Currently translated at 36.0% (439 of 1217 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-10-30 05:15:49 +11:00
54b0c4f3c9 translationBot(ui): update translation (German)
Currently translated at 37.7% (460 of 1217 strings)

translationBot(ui): update translation (German)

Currently translated at 36.4% (444 of 1217 strings)

translationBot(ui): update translation (German)

Currently translated at 36.4% (443 of 1217 strings)

translationBot(ui): update translation (German)

Currently translated at 36.0% (439 of 1217 strings)

translationBot(ui): update translation (German)

Currently translated at 35.5% (433 of 1217 strings)

Co-authored-by: Fabian Bahl <fabian98@bahl-netz.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-10-30 05:15:49 +11:00
60a105103b translationBot(ui): update translation (German)
Currently translated at 36.0% (439 of 1217 strings)

translationBot(ui): update translation (German)

Currently translated at 35.5% (433 of 1217 strings)

Co-authored-by: Jaulustus <jaulustus@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-10-30 05:15:49 +11:00
67fb2c8129 translationBot(ui): update translation (German)
Currently translated at 35.5% (433 of 1217 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-10-30 05:15:49 +11:00
09bb61f630 translationBot(ui): update translation (English)
Currently translated at 100.0% (1217 of 1217 strings)

Co-authored-by: Fabian Bahl <fabian98@bahl-netz.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/en/
Translation: InvokeAI/Web UI
2023-10-30 05:15:49 +11:00
69ba3a7278 translationBot(ui): update translation (Japanese)
Currently translated at 56.1% (683 of 1217 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 40.3% (491 of 1217 strings)

Co-authored-by: Gohsuke Shimada <ghoskay@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translation: InvokeAI/Web UI
2023-10-30 05:15:49 +11:00
6e05292813 translationBot(ui): update translation (Italian)
Currently translated at 97.6% (1188 of 1217 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-10-30 05:15:49 +11:00
859e3d5a61 chore: flake8 2023-10-30 01:49:10 +11:00
fe5d2c023b Update communityNodes.md
Added Average Images node
2023-10-28 08:13:51 -05:00
b6c259ab92 Update communityNodes.md (#4981)
Update to Load Video Frame node to reflect changes made in link
locations... a.k.a. fixing broken links.

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [x] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: it's just a doc change to fix links for
resources that the page depends on from my GitHub.

      
## Have you updated all relevant documentation?
- [?] Yes
- [ ] No


## Description
Load Video Frame community node layout and link change.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-10-28 20:37:48 +11:00
0fde82a24b Merge branch 'main' into main 2023-10-28 20:35:41 +11:00
4f74549f17 prevent prereleases from showing up in updater 2023-10-27 19:12:48 -04:00
c95c6c5374 Make the merge script work again (#4979)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because n/a

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

The introduction of `BaseModelType.Any` broke the code in the merge
script which relied on sd-1 coming first in the BaseModelType enum. This
assumption has been removed and the code should be less brittle now.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-10-24 17:28:39 -04:00
d946cb78e6 Merge branch 'main' into bugfix/merge-script-display-correct-model-bases 2023-10-24 17:20:36 -04:00
48fc07e049 Make textual inversion script work again (#4978)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

Fix textual inversion training script crash caused by reorg of services.

## Related Tickets & Documents

- closes #4975

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-10-24 14:24:53 -04:00
5c9046580f Merge branch 'main' into bugfix/textual-inversion 2023-10-24 14:17:43 -04:00
d397e80e0d Merge branch 'main' into bugfix/merge-script-display-correct-model-bases 2023-10-24 14:17:19 -04:00
c04099a869 Support conversion of controlnets from safetensors to diffusers format (#4980)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

This PR allows users to install checkpoint (safetensors) versions of
controlnet models. The models will be converted into diffusers format
and cached on the fly.

This only works for sd-1 and sd-2 controlnets, as I was unable to find
controlnet sdxl checkpoint models or their corresponding .yaml config
files.

After updating, please run `invokeai-configure --yes --default-only` to
install the missing config files. Users should be instructed to select
option [7] from the launcher "Re-run the configure script to fix a
broken install or to complete a major upgrade".

## Related Tickets & Documents

User request at
https://discord.com/channels/1020123559063990373/1160318627631870092/1160318627631870092

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #4743
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

See above for instructions on updating the config files after checking
out the PR.
2023-10-24 14:16:52 -04:00
3b7e17c0cc Update communityNodes.md
Update to Load Video Frame node to reflect changes made in link locations... a.k.a. fixing broken links.
2023-10-23 21:46:51 -06:00
6cbc69f3b7 support conversion of controlnets from safetensors to diffusers 2023-10-23 22:06:10 -04:00
c14aa30956 fix the merge script to correctly display models sorted by base 2023-10-23 20:37:33 -04:00
3546c41f4a close #4975 2023-10-23 18:48:14 -04:00
8e948d3f17 fix(assets): re-add missing caution image 2023-10-20 16:50:16 +11:00
02928298d9 fix(nodes): fix missing generation modes (#4960)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

## Description

[fix(nodes): fix missing generation
modes](8615d53e65)

Lax typing on the metadata util functions allowed a typing issue to slip
through. Fixed the lax typing, updated core metadata node.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #4959 (thanks @coder543)
2023-10-20 11:04:34 +05:30
df4dab53a8 Merge remote-tracking branch 'origin/main' into fix/nodes/fix-generation-mode 2023-10-20 16:23:13 +11:00
8615d53e65 fix(nodes): fix missing generation modes
Lax typing on the metadata util functions allowed a typing issue to slip through. Fixed the lax typing, updated core metadata node.
2023-10-20 16:22:56 +11:00
c8481d29eb fix(nodes): explicitly include custom nodes files (#4958)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

fix(nodes): explicitly include custom nodes files

setuptools ignores markdown files - explicitly include all files in
`"invokeai.app.invocations"` to ensure all custom node files are
included
2023-10-20 10:16:55 +05:30
b7a05734bb Merge branch 'main' into fix/noodes/include-custom-nodes-files 2023-10-20 15:19:39 +11:00
eeeb5dc451 translationBot(ui): update translation (Dutch)
Currently translated at 99.9% (1216 of 1217 strings)

Co-authored-by: Dennis <dennis@vanzoerlandt.nl>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/nl/
Translation: InvokeAI/Web UI
2023-10-20 15:18:51 +11:00
3d33b3e1f5 fix(nodes): explicitly include custom nodes files
setuptools ignores markdown files - explicitly include all files in `"invokeai.app.invocations"` to ensure all custom node files are included
2023-10-20 15:18:29 +11:00
7b066681f0 Docker image update: ubuntu23.04+python3.11 (#4953)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [x] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

- updates the Docker image with ubuntu23.04 base, python3.11
- use the newer pytorch wheel with cuda12.1 support
- corrects `docker compose` CLI in shell script wrappers and docs
- update / overhaul Docker docs
- clean up obsolete lines in `.gitignore`

## QA Instructions, Screenshots, Recordings

Follow the documentation changes, or simply:

```bash
cd docker
cp .env.sample .env
# Set your INVOKEAI_ROOT in .env
docker compose up
```

## Added/updated tests?

- [ ] Yes
- [x] No : N/A
2023-10-20 14:30:53 +11:00
1177234931 Merge branch 'main' into ebr/docker-py311 2023-10-20 14:28:40 +11:00
824702de99 feat(nodes): change expected structure for custom nodes 2023-10-20 14:28:16 +11:00
8604943e89 feat(nodes): simple custom nodes
Custom nodes may be placed in `$INVOKEAI_ROOT/nodes/` (configurable with the `custom_nodes_dir` option).

On app startup, an `__init__.py` is copied into the custom nodes dir, which recursively loads all python files in the directory as modules (files starting with `_` are ignored). The custom nodes dir is now a python module itself.

When we `from invocations import *` to load all invocations at init, we load the custom nodes dir, registering all custom nodes.
2023-10-20 14:28:16 +11:00
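
A hedged sketch of the kind of loader such a generated `__init__.py` could contain (not the file InvokeAI actually ships):

```py
# Import every .py file under the nodes dir as a module; names starting
# with "_" (including this __init__.py itself) are skipped.
import importlib.util
import sys
from pathlib import Path

NODES_DIR = Path(__file__).parent  # this file is the copied __init__.py

for path in sorted(NODES_DIR.rglob("*.py")):
    if path.name.startswith("_"):
        continue
    spec = importlib.util.spec_from_file_location(path.stem, path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[spec.name] = module
    spec.loader.exec_module(module)  # importing registers any invocations
```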
b7f63a4065 fix(ui): fix canvas color picker when value is zero
good ol' zero is false-y
2023-10-19 23:13:35 -04:00
dcd11327c1 fix(db): remove unused, commented out methods 2023-10-20 12:05:13 +11:00
c071262c20 fix(ui): remove getMetadataFromFile query & util
This will all be handled by python going forward
2023-10-20 12:05:13 +11:00
2f4f83280b fix(db): remove extraneous conflict handling in workflow image records 2023-10-20 12:05:13 +11:00
301a8fef92 fix(ui): fix batch metadata logic when graph has no metadata
On canvas, images have no metadata yet, so this needs to be handled
2023-10-20 12:05:13 +11:00
52fbd1b222 fix(ui): remove errant comment 2023-10-20 12:05:13 +11:00
16dacb5f43 fix(nodes): remove constraints on ip adapter metadata fields 2023-10-20 12:05:13 +11:00
b5940039f3 chore: lint 2023-10-20 12:05:13 +11:00
9104979943 chore(ui): regen types 2023-10-20 12:05:13 +11:00
f04462973b feat(ui): create debounced metadata/workflow query hooks
Also added config options for metadata and workflow debounce times (`metadataFetchDebounce` & `workflowFetchDebounce`).

Falls back to 0 if not provided.

In OSS, because we have no major latency concerns, the debounce is 0. But in other environments, it may be desirable to set this to something like 300ms.
2023-10-20 12:05:13 +11:00
2faed653d7 fix(api): deduplicate metadata/workflow extraction logic 2023-10-20 12:05:13 +11:00
23fa2e560a fix: fix tests 2023-10-20 12:05:13 +11:00
0cda7943fa feat(api): add workflow_images junction table
similar to boards, images and workflows may be associated via junction table
2023-10-20 12:05:13 +11:00
6d776bad7e fix(nodes): remove errant print 2023-10-20 12:05:13 +11:00
86c3acf184 fix(nodes): revert optional graph 2023-10-20 12:05:13 +11:00
d32caf7cb1 fix(ui): remove references to metadata accumulator 2023-10-20 12:05:13 +11:00
e3e8d8af02 fix(ui): fix log message 2023-10-20 12:05:13 +11:00
7b6e2bc37f feat(nodes): add field name validation
Protect against using reserved field names
2023-10-20 12:05:13 +11:00
bbae4045c9 fix(nodes): GraphInvocation should use InputField 2023-10-20 12:05:13 +11:00
8910e912c7 chore(ui): regen types 2023-10-20 12:05:13 +11:00
4012388f0a feat: use ModelValidator naming convention for pydantic type adapters
This is the naming convention in the docs and is also clear.
2023-10-20 12:05:13 +11:00
3c4f43314c feat: move workflow/metadata models to baseinvocation.py
needed to prevent circular imports
2023-10-20 12:05:13 +11:00
5a163f02a6 fix(nodes): fix metadata/workflow serialization 2023-10-20 12:05:13 +11:00
f0db4d36e4 feat: metadata refactor
- Refactor how metadata is handled to support a user-defined metadata in graphs
- Update workflow embed handling
- Update UI to work with these changes
- Update tests to support metadata/workflow changes
2023-10-20 12:05:13 +11:00
c2da74c587 feat: add workflows table & service 2023-10-20 12:05:13 +11:00
575c7bbfd8 feat(docker): update docker documentation 2023-10-19 11:26:36 -04:00
f102e38076 feat(docker): update docker image, etc. to python3.11+ubuntu23.04 2023-10-19 11:26:16 -04:00
9195c8c957 feat: dedicated route to get intermediates count
This fixes a weird issue where the list images method needed to handle `None` for its `limit` and `offset` arguments, in order to get a count of all intermediates.
2023-10-19 16:58:51 +11:00
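
A hedged sketch of the shape of such a route (names are illustrative; the stub stands in for the real DB query):

```py
# A dedicated count endpoint means the list-images route no longer needs
# to accept limit=None/offset=None just to count intermediates.
from fastapi import APIRouter

router = APIRouter(prefix="/api/v1/images", tags=["images"])

def count_intermediates() -> int:
    return 0  # stand-in for SELECT COUNT(*) ... WHERE is_intermediate = 1

@router.get("/intermediates")
async def get_intermediates_count() -> int:
    return count_intermediates()
```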
677918df61 Docs Update (python version & T2I) (#4867)
* Updated Control Adapter Docs

* fixed typo

* Update docs for 3.10

* Update diffusers language

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>

* Diffusers format

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>

* Current T2I Adapter usage

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>

* Update test-invoke-pip.yml

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-10-18 23:38:31 -04:00
96e80c71fb chore: lint 2023-10-19 08:52:02 +11:00
da403ba04c fix(api): flesh out types for api_app.py 2023-10-19 08:52:02 +11:00
e4c45012f4 feat(api): add gzip middleware
On our local installs this will be a very minor change. For those running on remote servers, load times should be slightly improved.

It's a small change but I think correct.
2023-10-19 08:52:02 +11:00
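
A minimal sketch of the wiring (the standard FastAPI/Starlette middleware; the exact InvokeAI setup may differ):

```py
from fastapi import FastAPI
from fastapi.middleware.gzip import GZipMiddleware

app = FastAPI()
# Only compress responses larger than ~1 KB; tiny payloads aren't worth it.
app.add_middleware(GZipMiddleware, minimum_size=1000)
```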
ef14ba1713 fix(api): fix uvicorn config loop arg
We were providing the loop itself, not the kind of loop. This didn't appear to cause any issues whatsoever, but now it's correct.
2023-10-19 08:52:02 +11:00
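
A hedged illustration of the distinction (host/port are arbitrary): uvicorn's `loop` option expects the *kind* of loop as a string, not a loop object.

```py
import uvicorn
from fastapi import FastAPI

app = FastAPI()

# wrong: uvicorn.Config(app, loop=asyncio.new_event_loop())
config = uvicorn.Config(app, host="127.0.0.1", port=9090, loop="asyncio")
server = uvicorn.Server(config)  # server.run() starts the app
```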
9e06371178 feat(api): serve app via route & add cache-control: no-store
This should prevent `index.html` from *ever* being cached, so UIs will never be out of date.

Minor organisation to accommodate this.

Deleting old unused files from the early days
2023-10-19 08:52:02 +11:00
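
A hedged sketch of serving the shell with that header (route and path are illustrative):

```py
from fastapi import FastAPI
from fastapi.responses import FileResponse

app = FastAPI()

@app.get("/", include_in_schema=False)
def serve_app() -> FileResponse:
    return FileResponse(
        "static/index.html",
        headers={"Cache-Control": "no-store"},  # never cache the app shell
    )
```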
a459786d73 fix(nodes): enable number to string coercion 2023-10-19 08:43:08 +11:00
fdf02c33d0 Catch generic model errors
Prevent the app from dying on invalid models.
2023-10-19 07:28:33 +11:00
0a01d86ab1 fix(ui): fix multiple control adapters on canvas
We were making an edge for each adapter where we should instead have one from the adapter's collect node into the denoising node
2023-10-19 07:15:27 +11:00
5e6df975fd fix(nodes): fix math node validation
Update field_validator api for pydantic v2
2023-10-19 06:50:00 +11:00
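
For reference, a hedged example of the pydantic v2 validator API this migrates to (the model and rule are illustrative):

```py
# v1's @validator becomes v2's @field_validator, stacked with @classmethod.
from pydantic import BaseModel, field_validator

class DivideInput(BaseModel):
    a: int
    b: int

    @field_validator("b")
    @classmethod
    def no_division_by_zero(cls, v: int) -> int:
        if v == 0:
            raise ValueError("cannot divide by zero")
        return v
```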
967a2dad54 Multi-Image IP-Adapter (#4882)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

This PR adds the ability to pass multiple images to a single IP-Adapter
(note the difference from using _multiple IP-Adapters at once_, which is
already supported). The image embeddings are combined in the IP-Adapter
attention layers. This is the same strategy for combining multiple
images as used in Insta-LoRA workflows
(https://civitai.com/articles/2345).

This PR only adds multi-image support in the backend and the node
editor. The Linear UI still needs to be updated.

## QA Instructions, Screenshots, Recordings

I have manually tested the following via the workflow editor:
- Multiple images with a single IP-Adapter
- Multiple images per IP-Adapter, and multiple IP-Adapters
- Both standard and sequential conditioning
- IP-Adapters still work in the Linear UI.

Please hammer at this feature some more with manual testing.

## Added/updated tests?

- [x] Yes
- [ ] No

I updated the existing IP-Adapter smoke test, but it provides pretty
limited coverage of this feature. This feature would probably be best
tested by an end-to-end workflow test, which is not currently supported.
(I'm hoping to put some effort into workflow-level testing soon.)
2023-10-18 10:17:34 -04:00
a078efc0f2 Merge branch 'main' into ryan/multi-image-ip 2023-10-18 08:59:12 -04:00
024aa5eb90 fix(ui): fix field sorting
closes #4934
2023-10-18 15:35:26 +11:00
67a343b3e4 Update pyproject.toml 2023-10-18 11:28:26 +11:00
d27392cc2d remove all references to CLI 2023-10-18 11:28:26 +11:00
9fa8e38163 fix(ui): use pidi processor for sketch (#4931)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

## Description

fix(ui): use pidi processor for sketch control adapters
2023-10-18 04:04:42 +05:30
4b197cb6d4 Merge branch 'main' into fix/ui/sketch-pidi-processor 2023-10-18 04:02:30 +05:30
252c9a5f5a fix(backend): fix nsfw/watermarker util types 2023-10-18 09:08:13 +11:00
975ba6b74f fix(ui): use pidi processor for sketch 2023-10-18 08:43:56 +11:00
284a257c25 feat: remove enqueue_graph routes/methods (#4922)
This is totally extraneous - it's almost identical to `enqueue_batch`.
2023-10-17 18:00:40 +00:00
55ad4feb5c Revert "Revert "feat(ui): remove special handling for t2i vs controlnet""
This reverts commit bdf4c4944c.
2023-10-17 11:59:19 -04:00
b7555ddae8 Revert "Revert "chore: lint""
This reverts commit 38e7eb8878.
2023-10-17 11:59:19 -04:00
8afc47018b Revert "Revert "Cleaning up (removing diagnostic prints)""
This reverts commit 6e697b7b6f.
2023-10-17 11:59:19 -04:00
a97ec88e06 Revert "Revert "Changes to _apply_standard_conditioning_sequentially() and _apply_cross_attention_controlled_conditioning() to reflect changes to T2I-Adapter implementation to allow usage of T2I-Adapter and ControlNet at the same time.""
This reverts commit c04fb451ee.
2023-10-17 11:59:19 -04:00
282d36b640 Revert "Revert "Fixing some var and arg names.""
This reverts commit 58a0709c1e.
2023-10-17 11:59:19 -04:00
58a0709c1e Revert "Fixing some var and arg names."
This reverts commit f11ba81a8d.
2023-10-17 11:59:11 -04:00
c04fb451ee Revert "Changes to _apply_standard_conditioning_sequentially() and _apply_cross_attention_controlled_conditioning() to reflect changes to T2I-Adapter implementation to allow usage of T2I-Adapter and ControlNet at the same time."
This reverts commit 378689a519.
2023-10-17 11:59:11 -04:00
6e697b7b6f Revert "Cleaning up (removing diagnostic prints)"
This reverts commit 06f8a3276d.
2023-10-17 11:59:11 -04:00
38e7eb8878 Revert "chore: lint"
This reverts commit fff29d663d.
2023-10-17 11:59:11 -04:00
bdf4c4944c Revert "feat(ui): remove special handling for t2i vs controlnet"
This reverts commit b146993553.
2023-10-17 11:59:11 -04:00
b146993553 feat(ui): remove special handling for t2i vs controlnet 2023-10-17 19:42:06 +11:00
fff29d663d chore: lint 2023-10-17 19:42:06 +11:00
06f8a3276d Cleaning up (removing diagnostic prints) 2023-10-17 19:42:06 +11:00
378689a519 Changes to _apply_standard_conditioning_sequentially() and _apply_cross_attention_controlled_conditioning() to reflect changes to T2I-Adapter implementation to allow usage of T2I-Adapter and ControlNet at the same time.
Also, the PREVIOUS commit (@8d3885d, which was already pushed to the GitHub repo) was wrongly commented, but it's too late to fix without a force push or other mucking that I'm reluctant to do. That commit is actually the one that has all the changes to diffusers_pipeline.py to use the additional arg down_intrablock_additional_residuals (introduced in diffusers PR https://github.com/huggingface/diffusers/pull/5362) to disentangle T2I-Adapter from ControlNet inputs to the main UNet.
2023-10-17 19:42:06 +11:00
f11ba81a8d Fixing some var and arg names. 2023-10-17 19:42:06 +11:00
9542883bb5 update requirements to python 3.10-11 2023-10-17 19:30:31 +11:00
14e25bf277 Merge branch 'main' into feat/nodes/freeu 2023-10-17 16:42:59 +11:00
c69715636d translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 100.0% (1217 of 1217 strings)

Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-10-17 16:42:37 +11:00
001bba1719 Merge branch 'main' into feat/nodes/freeu 2023-10-17 15:58:00 +11:00
a094f4ca2b fix: pin python-socketio~=5.10.0 2023-10-17 14:59:25 +11:00
9d9592230a chore: lint 2023-10-17 14:59:25 +11:00
685cda89ff feat(api): restore get_session route 2023-10-17 14:59:25 +11:00
2c39557dc9 fix(nodes): fix metadata validation error 2023-10-17 14:59:25 +11:00
c238a7f18b feat(api): chore: pydantic & fastapi upgrade
Upgrade pydantic and fastapi to latest.

- pydantic~=2.4.2
- fastapi~=0.103.2
- fastapi-events~=0.9.1

**Big Changes**

There are a number of logic changes needed to support pydantic v2. Most changes are very simple, like using the new methods to serialize and deserialize models, but there are a few more complex changes.

**Invocations**

The biggest change relates to invocation creation, instantiation and validation.

Because pydantic v2 moves all validation logic into the rust pydantic-core, we may no longer directly stick our fingers into the validation pie.

Previously, we (ab)used models and fields to allow invocation fields to be optional at instantiation, but required when `invoke()` is called. We directly manipulated the fields and invocation models when calling `invoke()`.

With pydantic v2, this is much more involved. Changes to the python wrapper do not propagate down to the rust validation logic - you have to rebuild the model. This causes problems with concurrent access to the invocation classes and is not a free operation.

This logic has been totally refactored and we do not need to change the model any more. The details are in `baseinvocation.py`, in the `InputField` function and `BaseInvocation.invoke_internal()` method.

In the end, this implementation is cleaner.

**Invocation Fields**

In pydantic v2, you can no longer directly add or remove fields from a model.

Previously, we did this to add the `type` field to invocations.

**Invocation Decorators**

With pydantic v2, we instead use the imperative `create_model()` API to create a new model with the additional field. This is done in `baseinvocation.py` in the `invocation()` wrapper.

A similar technique is used for `invocation_output()`.

**Minor Changes**

There are a number of minor changes around the pydantic v2 models API.

**Protected `model_` Namespace**

All models' pydantic-provided methods and attributes are prefixed with `model_` and this is considered a protected namespace. This causes some conflict, because "model" means something to us, and we have a ton of pydantic models with attributes starting with "model_".

Fortunately, there are no direct conflicts. However, in any pydantic model where we define an attribute or method that starts with "model_", we must set the protected namespaces to an empty tuple.

```py
class IPAdapterModelField(BaseModel):
    model_name: str = Field(description="Name of the IP-Adapter model")
    base_model: BaseModelType = Field(description="Base model")

    model_config = ConfigDict(protected_namespaces=())
```

**Model Serialization**

Pydantic models no longer have `Model.dict()` or `Model.json()`.

Instead, we use `Model.model_dump()` or `Model.model_dump_json()`.

**Model Deserialization**

Pydantic models no longer have `Model.parse_obj()` or `Model.parse_raw()`, and there are no `parse_raw_as()` or `parse_obj_as()` functions.

Instead, you need to create a `TypeAdapter` object to parse python objects or JSON into a model.

```py
adapter_graph = TypeAdapter(Graph)
deserialized_graph_from_json = adapter_graph.validate_json(graph_json)
deserialized_graph_from_dict = adapter_graph.validate_python(graph_dict)
```

**Field Customisation**

Pydantic `Field`s no longer accept arbitrary args.

Now, you must put all additional arbitrary args in a `json_schema_extra` arg on the field.

**Schema Customisation**

FastAPI and pydantic schema generation now follows the OpenAPI version 3.1 spec.

This necessitates two changes:
- Our schema customization logic has been revised
- Schema parsing to build node templates has been revised

The specifics aren't important, but this does present additional surface area for bugs.

**Performance Improvements**

Pydantic v2 is a full rewrite with a rust backend. This offers a substantial performance improvement (pydantic claims 5x to 50x depending on the task). We'll notice this the most during serialization and deserialization of sessions/graphs, which happens very, very often - a couple of times per node.

I haven't done any benchmarks, but anecdotally, graph execution is much faster. Also, very large graphs - like with massive iterators - are much, much faster.
2023-10-17 14:59:25 +11:00
19c5435332 fix(ui): copy image via img onload to blob
There's a bug in Chrome that screws with headers on fetch requests and 307 responses. This causes images to fail to copy in the commercial environment.

This change attempts to get around this by copying images in a different way (similar to how the canvas works). When the user requests a copy we:
- create an `<img />` element
- set `crossOrigin` if needed
- add an onload handler:
  - create a canvas element
  - draw image onto it
  - export canvas to blob

This is wrapped in a promise which resolves to the blob, which can then be copied to clipboard.

---

A customized version of Konva's `useImage` hook is also included, which returns the image blob in addition to the `<img />` element. Unfortunately, this hook is not suitable for use across the app, because it does all the image fetching up front, regardless of whether we actually want to copy the image.

In other words, we'd have to fetch the whole image file even if the user is just skipping through image metadata, in order to have the blob to copy. The callback approach means we only fetch the image when the user clicks copy. The hook is thus currently unused.
2023-10-17 06:43:19 +11:00
3079c75a60 (minor) Make it more clear that shape annotations are just comments and not commented lines of code. 2023-10-16 08:35:32 -04:00
53b6f0dc73 Merge branch 'main' into ryan/multi-image-ip 2023-10-16 17:16:10 +11:00
70a1202deb fix(api): fix socketio breaking change (#4901)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

## Description

Fix for breaking change in `python-socketio` 5.10.0 in which
`enter_room` and `leave_room` were made coroutines.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->


- Closes #4899
2023-10-16 07:29:31 +05:30
9a1aea9caf fix(api): fix socketio breaking change
Fix for breaking change in `python-socketio` 5.10.0 in which `enter_room` and `leave_room` were made coroutines.
2023-10-16 12:18:46 +11:00
388d36b839 fix(db): use RLock instead of Lock
Fixes issues where a db-accessing service wants to call db-accessing methods with locks.
2023-10-16 11:45:24 +11:00
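
A hedged illustration of why `RLock` matters here: a method that holds the DB lock can call another method that takes the same lock without deadlocking itself.

```py
import threading

lock = threading.RLock()  # a plain threading.Lock would deadlock below

def get_row() -> str:
    with lock:
        return "row"

def get_rows() -> list[str]:
    with lock:               # outer acquire
        return [get_row()]   # re-entrant acquire on the same thread is fine

print(get_rows())
```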
bedb35af8c translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 100.0% (1217 of 1217 strings)

Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-10-16 07:57:41 +11:00
dc232438fb translationBot(ui): update translation (Italian)
Currently translated at 97.5% (1187 of 1217 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-10-16 07:57:41 +11:00
d7edf5aaad fix(ui): fix control adapter translation string (#4888)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

fix(ui): fix control adapter translation string

Missed this during a previous change

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

Reported by @Harvester62 :

https://discord.com/channels/1020123559063990373/1054129386447716433/1162018775437148160
2023-10-15 18:19:41 +05:30
3ad1226d1e Merge branch 'main' into fix/ui/control-adapter-translation-string 2023-10-15 18:16:48 +05:30
86ca9f122d Strip whitespace from model URLs (#4863)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

This PR strips leading and trailing whitespace from URLs that are
entered into either the Web Model Manager import field or the TUI.

## Related Tickets & Documents

Closes #4536


## QA Instructions, Screenshots, Recordings

Try to import a URL with leading or trailing whitespace. Should not work
in current main. This PR should fix it.
2023-10-15 17:53:20 +05:30
2c6772f92f Merge branch 'main' into bugfix/trim-whitespace-from-urls 2023-10-15 17:41:41 +05:30
e6c1e03b8b Bugfix/ignore dot directories on model scan (#4865)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

Mac users have a recurring issue in which a `.DS_Store` directory is
created in their `models` hierarchy, causing the new model scanner to
freak out. This PR skips over any paths that begin with a dot. I haven't
tested it on a Macintosh, so I'm not 100% certain it will do the trick.
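
A hedged sketch of the skip logic (not the actual scanner code):

```py
from pathlib import Path

def scan_models(root: Path) -> list[Path]:
    found = []
    for path in root.rglob("*"):
        # Skip .DS_Store and anything inside dot-directories like .git
        if any(part.startswith(".") for part in path.relative_to(root).parts):
            continue
        if path.is_file():
            found.append(path)
    return found
```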

## Related Tickets & Documents

- Related Issue #4815 

## QA Instructions, Screenshots, Recordings

Someone with a Mac please try to reproduce the `.DS_Store` crash and
then see if applying this PR addresses the issue.
2023-10-15 17:33:11 +05:30
c9d95e5758 Merge branch 'main' into bugfix/ignore-dot-directories-on-model-scan 2023-10-15 17:23:02 +05:30
10755718b8 fix(ui): reset canvas batchIds on clear/batch cancel (#4890)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

This was in the original fix in #4829 but I must have removed it
accidentally.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #4889

## QA Instructions, Screenshots, Recordings

- Start from a fresh canvas session (may need to let a generation finish
or reset web UI if yours is locked)
- Invoke/add to queue
- Immediately cancel current, clear queue, or clear batch (can do this
from the queue tab)
- Canvas should return to normal state

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->
2023-10-15 17:10:38 +05:30
459c7b3b74 Merge branch 'main' into fix/ui/reset-canvas-batch-on-clear 2023-10-15 17:05:21 +05:30
353719f81d chore(ui): update deps (#4892)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Description

Update all dependencies

Resolves https://github.com/invoke-ai/InvokeAI/security/dependabot/26
2023-10-15 17:05:04 +05:30
bd4b260c23 Merge branch 'main' into fix/ui/reset-canvas-batch-on-clear 2023-10-15 17:03:08 +05:30
3e389d3f60 chore(ui): update deps 2023-10-15 19:30:39 +11:00
ffb01f1345 Update facetools.py
Facetools nodes were cutting off faces that extended beyond chunk boundaries in some cases. All faces found are considered and are coalesced rather than pruned, meaning that you should not see half a face any more.
2023-10-15 19:12:10 +11:00
faa0a8236c Merge branch 'main' into fix/ui/reset-canvas-batch-on-clear 2023-10-15 18:46:46 +11:00
e4d73d3659 Merge branch 'main' into fix/ui/control-adapter-translation-string 2023-10-15 18:46:40 +11:00
6994783c17 translationBot(ui): update translation (Italian)
Currently translated at 91.9% (1119 of 1217 strings)

Co-authored-by: psychedelicious <mabianfu@icloud.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-10-15 18:42:58 +11:00
3f9708f166 translationBot(ui): update translation (Italian)
Currently translated at 91.9% (1119 of 1217 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-10-15 18:42:58 +11:00
bcf0d8a590 fix(ui): use _other for control adapter collapse 2023-10-15 18:34:25 +11:00
2060ee22f2 fix(ui): reset canvas batchIds on clear/batch cancel
Closes #4889
2023-10-15 18:28:05 +11:00
3fd79b837f fix(ui): fix control adapter translation string 2023-10-15 18:16:10 +11:00
1c099e0abb feat(ui): add tooltip to clear intermediates button when disabled 2023-10-15 17:29:49 +11:00
95cca9493c feat(ui): disable clear intermediates button when queue has items 2023-10-15 17:29:49 +11:00
779c902402 chore(ui): lint 2023-10-15 17:29:49 +11:00
99e6bb48ba fixed problems 2023-10-15 17:29:49 +11:00
c3d6ff5b11 fixed bug #4857 2023-10-15 17:29:49 +11:00
bba962b82f fix(nodes,ui): optional metadata (#4884)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

## Description

[fix(nodes,ui): optional
metadata](78b8cfede3)

- Make all metadata items optional. This will reduce errors related to
metadata not being provided when we update the backend but old queue
items still exist
- Fix a bug in t2i adapter metadata handling where it checked for ip
adapter metadata instead of t2i adapter metadata
- Fix some metadata fields that were not using `InputField`
2023-10-15 05:42:42 +05:30
78b8cfede3 fix(nodes,ui): optional metadata
- Make all metadata items optional. This will reduce errors related to metadata not being provided when we update the backend but old queue items still exist
- Fix a bug in t2i adapter metadata handling where it checked for ip adapter metadata instead of t2i adapter metadata
- Fix some metadata fields that were not using `InputField`
2023-10-15 10:44:16 +11:00
e9879b9e1f Clean up communityNodes.md (#4870)
* Clean up communityNodes.md

* Update communityNodes.md
2023-10-14 22:01:20 +00:00
e21f3af5ab translationBot(ui): update translation files
Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-15 08:12:17 +11:00
2ab7c5f783 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 100.0% (1216 of 1216 strings)

Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-10-15 08:12:17 +11:00
8bbd938be9 translationBot(ui): update translation (Dutch)
Currently translated at 100.0% (1216 of 1216 strings)

Co-authored-by: Dennis <dennis@vanzoerlandt.nl>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/nl/
Translation: InvokeAI/Web UI
2023-10-15 08:12:17 +11:00
b4cee46936 translationBot(ui): update translation (Italian)
Currently translated at 91.4% (1112 of 1216 strings)

translationBot(ui): update translation (Italian)

Currently translated at 90.4% (1100 of 1216 strings)

translationBot(ui): update translation (Italian)

Currently translated at 90.4% (1100 of 1216 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-10-15 08:12:17 +11:00
48626c40fd fix(backend): handle systems with glibc < 2.33
`mallinfo2` is not available on `glibc` < 2.33.

On these systems, we successfully load the library but get an `AttributeError` on attempting to access `mallinfo2`.

I'm not sure if the old `mallinfo` will work, and not sure how to install it safely to test, so for now we just handle the `AttributeError`.

This means the enhanced memory snapshot logic will be skipped for these systems, which isn't a big deal.
2023-10-15 07:56:55 +11:00
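The guard described above can be sketched with `ctypes`, which raises `AttributeError` when a symbol is missing from a loaded library. A minimal, hypothetical version of the check, not InvokeAI's exact code:

```python
import ctypes


def libc_has_mallinfo2() -> bool:
    """Return True if glibc exposes mallinfo2 (added in glibc 2.33)."""
    try:
        libc = ctypes.CDLL("libc.so.6")
        libc.mallinfo2  # symbol lookup; raises AttributeError if absent
    except OSError:
        return False  # no glibc at all (e.g. musl-based systems)
    except AttributeError:
        return False  # glibc < 2.33: skip the enhanced memory snapshot
    return True
```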
35ebc9e18d Bump invocation versions for the multi-image IP feature. 2023-10-14 13:28:50 -04:00
49279bbe74 Update IP-Adapter unit test for multi-image. 2023-10-14 13:00:52 -04:00
8464450a53 Add support for multi-image IP-Adapter. 2023-10-14 12:50:33 -04:00
a1001b6d10 Merge branch 'main' into bugfix/ignore-dot-directories-on-model-scan 2023-10-14 10:37:55 -04:00
50df641e1b Upload to pypi whenever a branch starting with "release/" is released (#4875)
## What type of PR is this? (check all applicable)


- [X] Optimization

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

This PR changes the pypi-release workflow so that it will upload to PyPI
whenever a release is initiated from the `main` branch or another branch
beginning with `release/`. Previous support for v2.3 branches has been
removed.
2023-10-14 10:24:01 -04:00
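Expressed as a predicate, the new trigger policy looks like this (a hypothetical Python helper for illustration; the real gate lives in the GitHub Actions workflow, not in Python):

```python
def should_publish_to_pypi(ref: str) -> bool:
    """Publish only for releases cut from main or a release/* branch."""
    branch = ref.removeprefix("refs/heads/")
    return branch == "main" or branch.startswith("release/")


assert should_publish_to_pypi("refs/heads/release/3.3.1")
assert not should_publish_to_pypi("refs/heads/v2.3")  # v2.3 support removed
```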
22dd64dfa4 Merge branch 'main' into chore/update-pypi-from-release-branches 2023-10-14 10:21:33 -04:00
0a929ca3de Fix/UI/sync translations (#4880)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

Weblate has some merge conflicts, attempting to resolve them...
2023-10-14 18:38:17 +05:30
8c61cda4b8 Merge branch 'main' into fix/ui/sync-translations 2023-10-14 18:35:48 +05:30
9db152bf75 xformers==0.0.20
I'm not sure if it's the correct way of handling things, but correcting this string to '==0.0.20' fixes the xformers install for me - and maybe for others too.

Please see this thread, this is the issue I had (trying to install InvokeAI):
https://github.com/facebookresearch/xformers/issues/740
2023-10-14 14:59:55 +04:00
75663ec81e feat (ui, generation): High Resolution Fix MVP in Text2Image Linear Flow (#4819)
* added HrfScale type with initial value

* working

* working

* working

* working

* working

* added addHrfToGraph

* continuing to implement this

* working on this

* comments

* working

* made hrf into its own collapse

* working on adding strength slider

* working

* working

* refactoring

* working

* change of this working: 0

* removed onnx support since apparently it's not used

* working

* made scale integer

* trying out psycicpebbles' idea

* working

* working on this

* working

* added toggle

* comments

* self review

* fixing things

* remove 'any' type

* fixing typing

* changed initial strength value to 3 (large values cause issues)

* set denoising start to be 1 - strength to resemble image to image (sketched in code below)

* set initial value

* added image to image

* pr1

* pr2

* updating to resolution finding

* working

* working

* working

* working

* working

* working

* working

* working

* working

* use memo

* connect rescale hw to noise

* working

* fixed min bug

* nit

* hides elements conditionally

* style

* feat(ui): add config for HRF, disable if feature disabled or ONNX model in use

* fix(ui): use `useCallback` for HRF toggle

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-10-14 10:34:41 +00:00
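One relationship from the commit list above is worth pinning down: the HRF second pass mirrors image-to-image by starting denoising at `1 - strength`. A sketch with a hypothetical helper name:

```python
def hrf_denoising_start(strength: float) -> float:
    """Second-pass start point: strength 0.6 resumes at 0.4 of the
    schedule, matching image-to-image behaviour as described above."""
    return min(max(1.0 - strength, 0.0), 1.0)  # clamp defensively to [0, 1]
```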
40a568c060 Hide Metadata in Info View (#4787)
* #4665 hide the value of the corresponding metadata item on arrow click

* #4787 bring the recall button back :)

* #4787 optional hide of metadata item, truncation and scrolling

* remove unused import

* #4787 recall parameters as separate tab in panel

* #4787 remove debug code

* fix(ui): undo changes to dist/locales/en.json

This file is autogenerated by our translation system and shouldn't be modified directly

* feat(ui): use scrollbar-enabled component for parameter recall tab

* fix(ui): revert unnecessary changes to DataViewer component

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-10-14 21:25:07 +11:00
8e7aa74a16 Merge remote-tracking branch 'weblate/main' 2023-10-14 20:35:21 +11:00
fcba4382b2 upload to pypi whenever a branch starting with "release/" is released 2023-10-13 12:49:24 -04:00
bf9f7271dd add ref to pypi-release workflow to fix release with unintentional changes
v3.3.0 was accidentally released with more changes than intended. This workflow change will allow us to release to PyPI from a separate branch rather than main.
2023-10-13 18:58:36 +11:00
d3821594df Release/v3.3.0 (#4868)
## What type of PR is this? (check all applicable)

v3.3.0 release

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No
2023-10-13 17:45:34 +11:00
631ad1596f Updated JS files 2023-10-13 17:27:41 +11:00
dfe32e467d Update version to 3.3.0 2023-10-13 17:27:41 +11:00
3575cf3b3b Enable the ram cache slider in invokeai-configure (#4866)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description

The `invokeai-configure` TUI's slider for the RAM cache was not picking
up the current settings in `invokeai.yaml`, leading users to think their
change hadn't taken effect. This is fixed in this PR.


## Related Tickets & Documents

First described here:


https://discord.com/channels/1020123559063990373/1161919551441735711/1162058518417907743
2023-10-13 16:08:03 +11:00
15cabc4968 Possibly closes #4815 2023-10-12 23:37:05 -04:00
29c3f49182 enable the ram cache slider in invokeai-configure 2023-10-12 23:04:16 -04:00
21d5969942 strip leading and trailing quotes as well as whitespace 2023-10-12 22:35:02 -04:00
334dcf71c4 Merge branch 'main' into bugfix/trim-whitespace-from-urls 2023-10-12 22:30:44 -04:00
d2149a8380 Fix gratuitous, parasitic, endlessly repeated, pointless menu in version 3.2.0 (#4864)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

A regression in 3.2.0 causes a seemingly nonsensical multiple choice
menu to appear when importing an SD-1 checkpoint model from the
autoimport directory. The menu asks the user to identify which type of
SD-2 model they are trying to import, which makes no sense.

In fact, the menu is popping up because there are now both "epsilon" and
"vprediction" SchedulerPredictionTypes for SD-1 as well as SD-2 models,
and the prober can't determine which prediction type to use. This PR
does two things:

1) rewords the menu as shown below
2) defaults to the most likely choice -- epsilon for v1 models and
vprediction for v2s

Here is the revised multiple-choice menu:
```
Please select the scheduler prediction type of the checkpoint named v1-5-pruned-emaonly.safetensors:
[1] "epsilon" - most v1.5 models and v2 models trained on 512 pixel images
[2] "vprediction" - v2 models trained on 768 pixel images and a few v1.5 models
[3] Accept the best guess;  you can fix it in the Web UI later

select [3]> 
```

Note that one can also put the appropriate config file into the same
directory as the checkpoint you wish to import. Give it the same name as
the model file, but with the extension `.yaml`. For example
`v1-5-pruned-emaonly.yaml`. The system will notice the yaml file and use
that, suppressing the quiz entirely.

## Related Tickets & Documents
- Closes #4768
- Closes #4827
2023-10-12 22:27:28 -04:00
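The best-guess default from point 2 can be sketched as follows (hypothetical identifiers, not InvokeAI's actual enum or prober):

```python
def guess_prediction_type(base_model: str) -> str:
    """Default SchedulerPredictionType when the prober can't decide:
    epsilon for SD-1 checkpoints, vprediction for SD-2 (768px) models."""
    return "vprediction" if base_model == "sd-2" else "epsilon"
```

As noted above, a matching `.yaml` config placed beside the checkpoint bypasses the guess entirely.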
6532d9ffa1 closes #4768 2023-10-12 22:04:54 -04:00
52274087f3 close #4536 2023-10-12 21:24:07 -04:00
89db8c83c2 Add a comment to warn about a necessary action before bumping the diffusers version. 2023-10-12 14:48:10 -04:00
fc09ab7e13 chore: typegen 2023-10-12 12:15:06 -04:00
9646157ad5 fix: fix test imports 2023-10-12 12:15:06 -04:00
b89ec2b9c3 chore(ui): regen types 2023-10-12 12:15:06 -04:00
d2fb29cf0d fix(app): remove errant logger line 2023-10-12 12:15:06 -04:00
d1fce4b70b chore: rebase conflicts 2023-10-12 12:15:06 -04:00
f50f95a81d fix: merge conflicts 2023-10-12 12:15:06 -04:00
3611029057 fix(backend): remove logic to create workflows column
Snuck in there while I was organising
2023-10-12 12:15:06 -04:00
402cf9b0ee feat: refactor services folder/module structure
Refactor services folder/module structure.

**Motivation**

While working on our services I've repeatedly encountered circular imports and a general lack of clarity regarding where to put things. The structure introduced here goes a long way towards resolving those issues and sets us up cleanly going forward.

**Services**

Services are now in their own folder with a few files:

- `services/{service_name}/__init__.py`: init as needed, mostly empty now
- `services/{service_name}/{service_name}_base.py`: the base class for the service
- `services/{service_name}/{service_name}_{impl_type}.py`: the default concrete implementation of the service - typically one of `sqlite`, `default`, or `memory`
- `services/{service_name}/{service_name}_common.py`: any common items - models, exceptions, utilities, etc

Though it's a bit verbose to have the service name both as the folder name and the prefix for files, I found it is _extremely_ confusing to have all of the base classes just be named `base.py`. So, at the cost of some verbosity when importing things, I've included the service name in the filename.

There are some minor logic changes. For example, in `InvocationProcessor`, instead of assigning the model manager service to a variable to be used later in the file, the service is used directly via the `Invoker`.

**Shared**

Things that are used across disparate services are in `services/shared/`:

- `default_graphs.py`: previously in `services/`
- `graphs.py`: previously in `services/`
- `paginatation`: generic pagination models used in a few services
- `sqlite`: the `SqliteDatabase` class, other sqlite-specific things
2023-10-12 12:15:06 -04:00
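Following the naming convention above, a new service might be laid out like this: a minimal sketch with an invented service name, not a real InvokeAI service:

```python
# services/image_records/image_records_base.py
from abc import ABC, abstractmethod


class ImageRecordsBase(ABC):
    """Abstract base; concrete implementations get their own files."""

    @abstractmethod
    def get(self, image_name: str) -> dict: ...


# services/image_records/image_records_memory.py
class ImageRecordsMemory(ImageRecordsBase):
    """The 'memory' concrete implementation, per the {impl_type} naming."""

    def __init__(self) -> None:
        self._records: dict[str, dict] = {}

    def get(self, image_name: str) -> dict:
        return self._records[image_name]
```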
88bee96ca3 feat(backend): rename db.py to sqlite.py 2023-10-12 12:15:06 -04:00
5048fc7c9e feat(backend): move pagination models to own file 2023-10-12 12:15:06 -04:00
2a35d93a4d feat(backend): organise service dependencies
**Service Dependencies**

Services that depend on other services now access those services via the `Invoker` object. This object is provided to the service as a kwarg to its `start()` method.

Until now, most services did not utilize this feature, and several services required their dependencies to be initialized and passed in on init.

Additionally, _all_ services are now registered as invocation services - including the low-level services. This obviates issues with inter-dependent services we would otherwise experience as we add workflow storage.

**Database Access**

Previously, we were passing in a separate sqlite connection and corresponding lock as args to services in their init. A good amount of setup boilerplate was repeated in each service that uses the db.

These objects, along with the sqlite startup and cleanup logic, are now abstracted into a simple `SqliteDatabase` class. This creates the shared connection and lock objects, enables foreign keys, and provides a `clean()` method to do startup db maintenance.

This is not a service as it's only used by sqlite services.
2023-10-12 12:15:06 -04:00
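A minimal sketch of the `SqliteDatabase` shape described above (illustrative only; anything beyond the shared connection, lock, foreign keys, and `clean()` is an assumption):

```python
import sqlite3
import threading


class SqliteDatabase:
    """Shared sqlite connection and lock for the sqlite-backed services."""

    def __init__(self, db_path: str) -> None:
        self.lock = threading.RLock()
        self.conn = sqlite3.connect(db_path, check_same_thread=False)
        self.conn.execute("PRAGMA foreign_keys = ON;")  # per-connection setting

    def clean(self) -> None:
        """Startup db maintenance, e.g. reclaim unused pages."""
        with self.lock:
            self.conn.execute("VACUUM;")
```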
10fac5c085 feat(ui): set w/h to multiple of 64 on add t2i 2023-10-12 23:51:01 +11:00
58850ded22 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 98.0% (1186 of 1210 strings)

translationBot(ui): update translation (Chinese (Simplified))

Currently translated at 98.0% (1179 of 1203 strings)

translationBot(ui): update translation (Chinese (Simplified))

Currently translated at 97.9% (1175 of 1199 strings)

Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
f21ebdfaca translationBot(ui): update translation files
Updated by "Remove blank strings" hook in Weblate.

translationBot(ui): update translation files

Updated by "Remove blank strings" hook in Weblate.

translationBot(ui): update translation files

Updated by "Remove blank strings" hook in Weblate.

translationBot(ui): update translation files

Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
c4f1e94cc4 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 92.0% (1104 of 1199 strings)

translationBot(ui): update translation (Chinese (Simplified))

Currently translated at 92.1% (1105 of 1199 strings)

translationBot(ui): update translation (Chinese (Simplified))

Currently translated at 83.2% (998 of 1199 strings)

translationBot(ui): update translation (Chinese (Simplified))

Currently translated at 83.0% (996 of 1199 strings)

translationBot(ui): update translation (Chinese (Simplified))

Currently translated at 67.5% (810 of 1199 strings)

Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
dbbcce9f70 translationBot(ui): update translation files
Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
cc52896bd9 translationBot(ui): update translation (Italian)
Currently translated at 85.5% (1026 of 1199 strings)

translationBot(ui): update translation (Italian)

Currently translated at 84.7% (1016 of 1199 strings)

translationBot(ui): update translation (Italian)

Currently translated at 84.7% (1016 of 1199 strings)

translationBot(ui): update translation (Italian)

Currently translated at 84.4% (1012 of 1199 strings)

translationBot(ui): update translation (Italian)

Currently translated at 84.3% (1011 of 1199 strings)

translationBot(ui): update translation (Italian)

Currently translated at 83.5% (1002 of 1199 strings)

translationBot(ui): update translation (Italian)

Currently translated at 81.5% (978 of 1199 strings)

translationBot(ui): update translation (Italian)

Currently translated at 80.8% (969 of 1199 strings)

translationBot(ui): update translation (Italian)

Currently translated at 80.7% (968 of 1199 strings)

translationBot(ui): update translation (Italian)

Currently translated at 81.3% (959 of 1179 strings)

translationBot(ui): update translation (Italian)

Currently translated at 81.3% (959 of 1179 strings)

translationBot(ui): update translation (Italian)

Currently translated at 81.3% (959 of 1179 strings)

translationBot(ui): update translation (Italian)

Currently translated at 81.3% (959 of 1179 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
d12314fb8b translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
07b88e3017 translationBot(ui): update translation (Dutch)
Currently translated at 100.0% (605 of 605 strings)

Co-authored-by: Dennis <dennis@vanzoerlandt.nl>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/nl/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
0b85f2487c translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (607 of 607 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (605 of 605 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
5530d3fcd2 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 95.7% (579 of 605 strings)

Co-authored-by: nemuruibai <nemuruibai@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
7b1b24900f translationBot(ui): update translation (Russian)
Currently translated at 65.5% (643 of 981 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (605 of 605 strings)

Co-authored-by: System X - Files <vasyasos@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
f52fb45276 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
fb9f0339a2 translationBot(ui): update translation (Italian)
Currently translated at 81.2% (958 of 1179 strings)

translationBot(ui): update translation (Italian)

Currently translated at 81.2% (958 of 1179 strings)

translationBot(ui): update translation (Italian)

Currently translated at 76.6% (904 of 1179 strings)

translationBot(ui): update translation (Italian)

Currently translated at 76.5% (903 of 1179 strings)

translationBot(ui): update translation (Italian)

Currently translated at 71.9% (848 of 1179 strings)

translationBot(ui): update translation (Italian)

Currently translated at 71.7% (845 of 1177 strings)

translationBot(ui): update translation (Italian)

Currently translated at 71.7% (845 of 1177 strings)

translationBot(ui): update translation (Italian)

Currently translated at 67.8% (799 of 1177 strings)

translationBot(ui): update translation (Italian)

Currently translated at 58.5% (689 of 1177 strings)

translationBot(ui): update translation (Italian)

Currently translated at 59.8% (640 of 1069 strings)

translationBot(ui): update translation (Italian)

Currently translated at 57.2% (612 of 1069 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (607 of 607 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (605 of 605 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (605 of 605 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (602 of 602 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
ac501ee742 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 96.1% (579 of 602 strings)

Co-authored-by: nemuruibai <nemuruibai@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
2182ccf8d1 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
fc674ff1b8 translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (605 of 605 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
708ac6a511 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
d0e0b64fc8 translationBot(ui): update translation (Dutch)
Currently translated at 99.6% (591 of 593 strings)

Co-authored-by: Arnold Cordewiner <weblate@a14r.be>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/nl/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
a23580664d translationBot(ui): update translation (Italian)
Currently translated at 97.8% (589 of 602 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (603 of 603 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (599 of 599 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (596 of 596 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (595 of 595 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (595 of 595 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (593 of 593 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (592 of 592 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
0edf01d927 translationBot(ui): update translation (Spanish)
Currently translated at 99.6% (601 of 603 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 99.5% (600 of 603 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (599 of 599 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (596 of 596 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 99.8% (594 of 595 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (593 of 593 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (592 of 592 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
4af5b9cbf7 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
1bf973d46e translationBot(ui): update translation (Polish)
Currently translated at 58.4% (338 of 578 strings)

Co-authored-by: Simona Liliac <simonaliliac@yandex.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/pl/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
72252e3ff7 translationBot(ui): update translation (Dutch)
Currently translated at 100.0% (563 of 563 strings)

translationBot(ui): update translation (Dutch)

Currently translated at 100.0% (563 of 563 strings)

Co-authored-by: Dennis <dennis@vanzoerlandt.nl>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/nl/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
8d2596c288 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (591 of 591 strings)

translationBot(ui): update translation (Italian)

Currently translated at 99.3% (587 of 591 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (586 of 586 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (578 of 578 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (563 of 563 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (559 of 559 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (559 of 559 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (551 of 551 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
0ffb7ecaa8 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
10f30fc599 translationBot(ui): update translation (Russian)
Currently translated at 99.5% (602 of 605 strings)

translationBot(ui): update translation (Russian)

Currently translated at 99.8% (605 of 606 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (596 of 596 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (595 of 595 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (593 of 593 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (592 of 592 strings)

translationBot(ui): update translation (Russian)

Currently translated at 90.2% (534 of 592 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (543 of 543 strings)

Co-authored-by: System X - Files <vasyasos@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
136570aa1d translationBot(ui): update translation (Italian)
Currently translated at 100.0% (550 of 550 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (548 of 548 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (546 of 546 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (541 of 541 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (544 of 544 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (543 of 543 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
5a30b507e0 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
d47fbf283c translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 100.0% (542 of 542 strings)

translationBot(ui): update translation (Chinese (Simplified))

Currently translated at 88.0% (477 of 542 strings)

Co-authored-by: Song, Pengcheng <17528592@qq.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
7c24312d3f translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
905cd8c639 translationBot(ui): update translation (Dutch)
Currently translated at 100.0% (538 of 538 strings)

Co-authored-by: Dennis <dennis@vanzoerlandt.nl>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/nl/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
b13ba55c26 translationBot(ui): update translation (Chinese (Traditional))
Currently translated at 8.9% (48 of 536 strings)

Co-authored-by: nekowaiz <nekowaiz@hotmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hant/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
82747e2260 translationBot(ui): update translation (Russian)
Currently translated at 100.0% (542 of 542 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (542 of 542 strings)

translationBot(ui): update translation (Russian)

Currently translated at 98.8% (536 of 542 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (536 of 536 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (533 of 533 strings)

Co-authored-by: System X - Files <vasyasos@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
910553f49a translationBot(ui): update translation (Italian)
Currently translated at 100.0% (542 of 542 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (542 of 542 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (540 of 540 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (538 of 538 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (536 of 536 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (536 of 536 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (536 of 536 strings)

translationBot(ui): update translation (Italian)

Currently translated at 99.8% (535 of 536 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (533 of 533 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (533 of 533 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
faabd83717 translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (591 of 591 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (586 of 586 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (578 of 578 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (563 of 563 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (550 of 550 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (550 of 550 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (548 of 548 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (546 of 546 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (544 of 544 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (543 of 543 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (542 of 542 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (542 of 542 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (540 of 540 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (536 of 536 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (536 of 536 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (533 of 533 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 99.8% (532 of 533 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
5ad77ece4b translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
6b3c413a5b translationBot(ui): update translation (Russian)
Currently translated at 100.0% (526 of 526 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (519 of 519 strings)

Co-authored-by: System X - Files <vasyasos@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
2a923d1f69 translationBot(ui): update translation (French)
Currently translated at 80.7% (419 of 519 strings)

Co-authored-by: pand4z31 <pand4zdev31@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fr/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
c54a5ce10e translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
14fbe41834 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (526 of 526 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (523 of 523 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (519 of 519 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (515 of 515 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
64ebe042b5 translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (526 of 526 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (523 of 523 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (519 of 519 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (515 of 515 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-10-12 23:45:46 +11:00
5b2ed4ffb4 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 12:45:13 +00:00
a49b8febed translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 98.0% (1186 of 1210 strings)

translationBot(ui): update translation (Chinese (Simplified))

Currently translated at 98.0% (1179 of 1203 strings)

translationBot(ui): update translation (Chinese (Simplified))

Currently translated at 97.9% (1175 of 1199 strings)

Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-10-12 12:45:12 +00:00
e543db5a5d translationBot(ui): update translation files
Updated by "Remove blank strings" hook in Weblate.

translationBot(ui): update translation files

Updated by "Remove blank strings" hook in Weblate.

translationBot(ui): update translation files

Updated by "Remove blank strings" hook in Weblate.

translationBot(ui): update translation files

Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 12:45:10 +00:00
670f3aa165 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 92.0% (1104 of 1199 strings)

translationBot(ui): update translation (Chinese (Simplified))

Currently translated at 92.1% (1105 of 1199 strings)

translationBot(ui): update translation (Chinese (Simplified))

Currently translated at 83.2% (998 of 1199 strings)

translationBot(ui): update translation (Chinese (Simplified))

Currently translated at 83.0% (996 of 1199 strings)

translationBot(ui): update translation (Chinese (Simplified))

Currently translated at 67.5% (810 of 1199 strings)

Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-10-12 12:45:09 +00:00
c0534d6519 translationBot(ui): update translation files
Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 12:45:07 +00:00
7bc6c23dfa translationBot(ui): update translation (Italian)
Currently translated at 87.1% (1054 of 1210 strings)

translationBot(ui): update translation (Italian)

Currently translated at 85.5% (1026 of 1199 strings)

translationBot(ui): update translation (Italian)

Currently translated at 84.7% (1016 of 1199 strings)

translationBot(ui): update translation (Italian)

Currently translated at 84.7% (1016 of 1199 strings)

translationBot(ui): update translation (Italian)

Currently translated at 84.4% (1012 of 1199 strings)

translationBot(ui): update translation (Italian)

Currently translated at 84.3% (1011 of 1199 strings)

translationBot(ui): update translation (Italian)

Currently translated at 83.5% (1002 of 1199 strings)

translationBot(ui): update translation (Italian)

Currently translated at 81.5% (978 of 1199 strings)

translationBot(ui): update translation (Italian)

Currently translated at 80.8% (969 of 1199 strings)

translationBot(ui): update translation (Italian)

Currently translated at 80.7% (968 of 1199 strings)

translationBot(ui): update translation (Italian)

Currently translated at 81.3% (959 of 1179 strings)

translationBot(ui): update translation (Italian)

Currently translated at 81.3% (959 of 1179 strings)

translationBot(ui): update translation (Italian)

Currently translated at 81.3% (959 of 1179 strings)

translationBot(ui): update translation (Italian)

Currently translated at 81.3% (959 of 1179 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-10-12 12:45:05 +00:00
851ce36250 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 12:45:04 +00:00
d631088566 translationBot(ui): update translation (Dutch)
Currently translated at 100.0% (605 of 605 strings)

Co-authored-by: Dennis <dennis@vanzoerlandt.nl>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/nl/
Translation: InvokeAI/Web UI
2023-10-12 12:45:01 +00:00
f0bf733309 translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (607 of 607 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (605 of 605 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-10-12 12:45:00 +00:00
65af7dd8f8 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 95.7% (579 of 605 strings)

Co-authored-by: nemuruibai <nemuruibai@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-10-12 12:44:59 +00:00
74c666aaa2 translationBot(ui): update translation (Russian)
Currently translated at 65.5% (643 of 981 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (605 of 605 strings)

Co-authored-by: System X - Files <vasyasos@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2023-10-12 12:44:58 +00:00
45f9aca7e5 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 12:44:56 +00:00
9fb624f390 translationBot(ui): update translation (Italian)
Currently translated at 81.2% (958 of 1179 strings)

translationBot(ui): update translation (Italian)

Currently translated at 81.2% (958 of 1179 strings)

translationBot(ui): update translation (Italian)

Currently translated at 76.6% (904 of 1179 strings)

translationBot(ui): update translation (Italian)

Currently translated at 76.5% (903 of 1179 strings)

translationBot(ui): update translation (Italian)

Currently translated at 71.9% (848 of 1179 strings)

translationBot(ui): update translation (Italian)

Currently translated at 71.7% (845 of 1177 strings)

translationBot(ui): update translation (Italian)

Currently translated at 71.7% (845 of 1177 strings)

translationBot(ui): update translation (Italian)

Currently translated at 67.8% (799 of 1177 strings)

translationBot(ui): update translation (Italian)

Currently translated at 58.5% (689 of 1177 strings)

translationBot(ui): update translation (Italian)

Currently translated at 59.8% (640 of 1069 strings)

translationBot(ui): update translation (Italian)

Currently translated at 57.2% (612 of 1069 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (607 of 607 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (605 of 605 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (605 of 605 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (602 of 602 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-10-12 12:44:53 +00:00
962e51320b translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 96.1% (579 of 602 strings)

Co-authored-by: nemuruibai <nemuruibai@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-10-12 12:44:52 +00:00
44932923eb translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 12:44:50 +00:00
ffcf6dfde6 translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (605 of 605 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-10-12 12:44:46 +00:00
be52eb153c translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 12:44:44 +00:00
bd97c6b708 translationBot(ui): update translation (Dutch)
Currently translated at 99.6% (591 of 593 strings)

Co-authored-by: Arnold Cordewiner <weblate@a14r.be>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/nl/
Translation: InvokeAI/Web UI
2023-10-12 12:44:41 +00:00
9940cbfa87 translationBot(ui): update translation (Italian)
Currently translated at 97.8% (589 of 602 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (603 of 603 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (599 of 599 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (596 of 596 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (595 of 595 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (595 of 595 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (593 of 593 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (592 of 592 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-10-12 12:44:40 +00:00
77aeb9a421 translationBot(ui): update translation (Spanish)
Currently translated at 99.6% (601 of 603 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 99.5% (600 of 603 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (599 of 599 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (596 of 596 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 99.8% (594 of 595 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (593 of 593 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (592 of 592 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-10-12 12:44:38 +00:00
2bad8b9f29 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 12:44:36 +00:00
8e943b2ce1 translationBot(ui): update translation (Polish)
Currently translated at 58.4% (338 of 578 strings)

Co-authored-by: Simona Liliac <simonaliliac@yandex.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/pl/
Translation: InvokeAI/Web UI
2023-10-12 12:44:33 +00:00
5d3ab4f333 translationBot(ui): update translation (Dutch)
Currently translated at 100.0% (563 of 563 strings)

translationBot(ui): update translation (Dutch)

Currently translated at 100.0% (563 of 563 strings)

Co-authored-by: Dennis <dennis@vanzoerlandt.nl>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/nl/
Translation: InvokeAI/Web UI
2023-10-12 12:44:32 +00:00
1047d08835 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (591 of 591 strings)

translationBot(ui): update translation (Italian)

Currently translated at 99.3% (587 of 591 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (586 of 586 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (578 of 578 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (563 of 563 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (559 of 559 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (559 of 559 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (551 of 551 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-10-12 12:44:30 +00:00
516cc258f9 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 12:44:28 +00:00
7c2aa1dc20 translationBot(ui): update translation (Russian)
Currently translated at 99.5% (602 of 605 strings)

translationBot(ui): update translation (Russian)

Currently translated at 99.8% (605 of 606 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (596 of 596 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (595 of 595 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (593 of 593 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (592 of 592 strings)

translationBot(ui): update translation (Russian)

Currently translated at 90.2% (534 of 592 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (543 of 543 strings)

Co-authored-by: System X - Files <vasyasos@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2023-10-12 12:44:25 +00:00
035f1e12e1 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (550 of 550 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (548 of 548 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (546 of 546 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (541 of 541 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (544 of 544 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (543 of 543 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-10-12 12:44:23 +00:00
4c93202ee4 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 12:44:20 +00:00
227046bdb0 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 100.0% (542 of 542 strings)

translationBot(ui): update translation (Chinese (Simplified))

Currently translated at 88.0% (477 of 542 strings)

Co-authored-by: Song, Pengcheng <17528592@qq.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-10-12 12:44:17 +00:00
83b123f1f6 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 12:44:15 +00:00
320ef15ee9 translationBot(ui): update translation (Dutch)
Currently translated at 100.0% (538 of 538 strings)

Co-authored-by: Dennis <dennis@vanzoerlandt.nl>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/nl/
Translation: InvokeAI/Web UI
2023-10-12 12:44:11 +00:00
6905c61912 translationBot(ui): update translation (Chinese (Traditional))
Currently translated at 8.9% (48 of 536 strings)

Co-authored-by: nekowaiz <nekowaiz@hotmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hant/
Translation: InvokeAI/Web UI
2023-10-12 12:44:09 +00:00
494bde785e translationBot(ui): update translation (Russian)
Currently translated at 100.0% (542 of 542 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (542 of 542 strings)

translationBot(ui): update translation (Russian)

Currently translated at 98.8% (536 of 542 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (536 of 536 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (533 of 533 strings)

Co-authored-by: System X - Files <vasyasos@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2023-10-12 12:44:08 +00:00
732ab38ca6 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (542 of 542 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (542 of 542 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (540 of 540 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (538 of 538 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (536 of 536 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (536 of 536 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (536 of 536 strings)

translationBot(ui): update translation (Italian)

Currently translated at 99.8% (535 of 536 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (533 of 533 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (533 of 533 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-10-12 12:44:07 +00:00
ba38aa56a5 translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (591 of 591 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (586 of 586 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (578 of 578 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (563 of 563 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (550 of 550 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (550 of 550 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (548 of 548 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (546 of 546 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (544 of 544 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (543 of 543 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (542 of 542 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (542 of 542 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (540 of 540 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (536 of 536 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (536 of 536 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (533 of 533 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 99.8% (532 of 533 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-10-12 12:44:04 +00:00
0a48c5a712 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 12:44:01 +00:00
133ab91c8d translationBot(ui): update translation (Russian)
Currently translated at 100.0% (526 of 526 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (519 of 519 strings)

Co-authored-by: System X - Files <vasyasos@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2023-10-12 12:43:56 +00:00
7a672bd2b2 translationBot(ui): update translation (French)
Currently translated at 80.7% (419 of 519 strings)

Co-authored-by: pand4z31 <pand4zdev31@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fr/
Translation: InvokeAI/Web UI
2023-10-12 12:43:51 +00:00
7dee6f51a3 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-10-12 12:43:50 +00:00
3c029eee29 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (526 of 526 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (523 of 523 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (519 of 519 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (515 of 515 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-10-12 12:43:47 +00:00
1a8f9d1ecb translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (526 of 526 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (523 of 523 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (519 of 519 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (515 of 515 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-10-12 12:43:45 +00:00
80d329c900 fix(ui): fix plurals (#4860) 2023-10-12 18:07:22 +05:30
89db749d89 fix(ui): add missing translation strings 2023-10-12 22:46:47 +11:00
18164fc72a fix(ui): prettier ignores translation files 2023-10-12 21:37:45 +11:00
75de20af6a fix(ui): fix plurals in translation 2023-10-12 21:34:24 +11:00
cb1509bf52 feat(ui): add translation strings for clear intermediates (#4856)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Description

feat(ui): add translation strings for clear intermediates

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #4851

## [optional] Are there any post deployment tasks we need to perform?

@Millu this can go into 3.3.0
2023-10-12 13:16:54 +05:30
10cd814cf7 feat(ui): add translation strings for clear intermediates 2023-10-12 18:35:33 +11:00
8ef38ecc7c fix(ui): only count enabled control adapters in collapse heading 2023-10-12 16:48:01 +11:00
69937d68d2 Maryhipp/dummy bulk download (#4852)
* UI for bulk downloading boards or groups of images

* placeholder route for bulk downloads that does nothing

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-10-11 23:27:22 +00:00
40f9e49b5e Demote model cache logs from warning to debug based on the conversation here: https://discord.com/channels/1020123559063990373/1049495067846524939/1161647290189090816 2023-10-11 12:02:46 -04:00
98fa234529 Bump safetensors to ~=0.4.0 (#4844)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

@Millu pointed out this safetensors PR a few weeks ago, which claimed to
offer a performance benefit:
https://github.com/huggingface/safetensors/pull/362 . It was superseded
by https://github.com/huggingface/safetensors/pull/363 and included in
the latest [safetensors 0.4.0
release](https://github.com/huggingface/safetensors/releases/tag/v0.4.0).

Here are the results from my local performance comparison:
```
Before(0.3.1) / After(0.4.0)

sdxl:main:tokenizer from disk to cpu in                              0.46s / 0.46s
sdxl:main:text_encoder from disk to cpu in                           2.12s / 2.32s
embroidered_style_v1_sdxl.safetensors:sdxl:lora' from disk to cpu in 0.67s / 0.36s
VoxelXL_v1.safetensors:sdxl:lora' from disk to cpu in                1.64s / 0.60s
ryan_db_sdxl_epoch640.safetensors:sdxl:lora' from disk to cpu in     2.46s / 1.40s
sdxl:main:tokenizer_2 from disk to cpu in                            0.37s / 0.39s
sdxl:main:text_encoder_2 from disk to cpu in                         3.78s / 4.70s
sdxl:main:unet from disk to cpu in                                   4.66s / 3.08s
sdxl:main:scheduler from disk to cpu in                              0.34s / 0.33s
sdxl:main:vae from disk to cpu in                                    0.66s / 0.51s

TOTAL GRAPH EXECUTION TIME:                                        56.489s / 53.416s
```

The benefit was marginal on my system (maybe even within measurement
error), but I figured we might as well pull it.
2023-10-11 09:40:47 -04:00
15b33ad501 feat(nodes): add freeu support
Add support for FreeU. See:
- https://huggingface.co/docs/diffusers/main/en/using-diffusers/freeu
- https://github.com/ChenyangSi/FreeU

Implementation:
- `ModelPatcher.apply_freeu()` handles enabling FreeU (which is very simple with diffusers).
- `FreeUConfig` model added to hold the hyperparameters.
- `freeu_config` added as optional sub-field on `UNetField`.
- `FreeUInvocation` added, works like LoRA - chain it to add the FreeU config to the UNet
- No support for model-dependent presets, this will be a future workflow editor enhancement

Closes #4845
2023-10-11 13:49:28 +11:00
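For context, enabling FreeU through diffusers really is a one-liner; a minimal sketch (the parameter values follow the SD 1.5 suggestions in the diffusers docs and are not InvokeAI defaults):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# FreeU rescales the UNet's backbone features (b1, b2) and skip
# connections (s1, s2) at inference time; no retraining is needed.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.5, b2=1.6)
# pipe.disable_freeu() restores stock behaviour.
```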
fe889235cc Bump safetensors to ~=0.4.0 2023-10-10 18:00:15 -04:00
462c1d4c9b Improve model load times from disk: skip unnecessary weight init (#4840)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

This PR optimizes the time to load models from disk.
In my local testing, SDXL text_encoder_2 models saw the greatest
improvement:
- Before change, load time (disk to cpu): 14 secs
- After change, load time (disk to cpu): 4 secs

See the in-code documentation for an explanation of how this speedup is
achieved.

## Related Tickets & Documents

This change was previously proposed on the HF transformers repo, but did
not get any traction:
https://github.com/huggingface/transformers/issues/18505#issue-1330728188

## QA Instructions, Screenshots, Recordings

I don't expect any adverse effects, but the new context manager is
applied while loading **all** models, so it would make sense to exercise
everything.

## Added/updated tests?

- [x] Yes
- [ ] No
2023-10-10 13:40:20 -04:00
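The core trick is easy to sketch: while the module tree is being constructed, temporarily no-op the parameter initializers, since the checkpoint load is about to overwrite every weight anyway. A hedged approximation (not the exact InvokeAI code; note the Conv*d restore fix a few commits down):

```python
from contextlib import contextmanager

import torch

@contextmanager
def skip_torch_weight_init():
    # Only safe when a checkpoint load will overwrite all parameters afterwards.
    targets = [torch.nn.Linear, torch.nn.Conv1d, torch.nn.Conv2d, torch.nn.Conv3d]
    saved = [t.reset_parameters for t in targets]
    try:
        for t in targets:
            t.reset_parameters = lambda self: None  # skip random weight init
        yield
    finally:
        for t, fn in zip(targets, saved):
            t.reset_parameters = fn  # restore the original initializers
```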
0ed36158c8 Merge branch 'main' into ryan/optimize-model-load 2023-10-10 13:31:08 -04:00
f3c138a208 (minor) Fix Flake8. 2023-10-10 10:06:53 -04:00
61242bf86a Fix bug in skip_torch_weight_init() where the original behavior of torch.nn.Conv*d modules wasn't being restored correctly. 2023-10-10 10:05:50 -04:00
d118d02df4 feat(ui): add mapping for sketch and scribble control adapter processors 2023-10-09 23:24:56 -04:00
58b56e9b1e Add a skip_torch_weight_init() context manager to improve model load times (from disk). 2023-10-09 14:12:56 -04:00
1f751f8c21 fix(ui): remove extraneous cache update 2023-10-09 20:11:21 +11:00
ca95a3bd0d fix(ui): fix canvas soft-lock if canceled before first generation
The canvas needs to be set to staging mode as soon as a canvas-destined batch is enqueued. If the batch is fully canceled before an image is generated, we need to remove that batch from the canvas `batchIds` watchlist, else the canvas gets stuck in staging mode with no way to exit.

The changes here allow the batch status to be tracked, and if a batch has all its items completed, we can remove it from the `batchIds` watchlist. The `batchIds` watchlist now accurately represents *incomplete* canvas batches, fixing this cause of soft lock.
2023-10-09 20:11:21 +11:00
55b40a9425 feat(events): add batch status and queue status to queue item status changed events
The UI will always re-fetch queue and batch status on receiving this event, so we may as well just include that data in the event and save the extra network roundtrips.
2023-10-09 20:11:21 +11:00
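Shape-wise, the enriched event simply carries the two status summaries alongside the item; a hypothetical payload sketch (field names are assumptions, not the actual event schema):

```python
event = {
    "event": "queue_item_status_changed",
    "data": {
        "queue_item_id": 42,
        "status": "completed",
        # Included so the client can skip two follow-up status requests:
        "batch_status": {"pending": 0, "in_progress": 1, "completed": 7, "failed": 0},
        "queue_status": {"pending": 3, "in_progress": 1},
    },
}
```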
90083cc88d fix(ui): fix use all hotkey 2023-10-09 20:03:14 +11:00
ead754432a add a list of t2i adapters to startup set (#4828)
## What type of PR is this? (check all applicable)

- [X] Feature

## Have you discussed this change with the InvokeAI team?
- [X] No, because: Non-controversial

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] N/A


## Description

This adds a list of T2I adapters to the “starter models” offered by the
TUI installer. None of the models is selected by default; this can be
done easily if requested. The models offered to the user are:

```
TencentARC/t2iadapter_canny_sd15v2
TencentARC/t2iadapter_sketch_sd15v2
TencentARC/t2iadapter_depth_sd15v2
TencentARC/t2iadapter_zoedepth_sd15v1
TencentARC/t2i-adapter-canny-sdxl-1.0
TencentARC/t2i-adapter-depth-zoe-sdxl-1.0
TencentARC/t2i-adapter-lineart-sdxl-1.0
TencentARC/t2i-adapter-sketch-sdxl-1.0
```

## Related Tickets & Documents

PR #4612 

## QA Instructions, Screenshots, Recordings

The revised installer has a new IP-ADAPTERS tab that looks like this:


![IMG_0255](https://github.com/invoke-ai/InvokeAI/assets/111189/0e01b1f6-7191-49a1-ac63-2c913826d299)

## Added/updated tests?

- [ ] Yes
- [X] No : It would be good to have a suite of model download tests, but
not set up yet.
2023-10-08 19:49:43 -04:00
fa9ea93477 add a list of t2i adapters to startup set 2023-10-08 18:53:21 -04:00
fe0cf2c160 remove hardcoded subfolder name from model downloader 2023-10-08 17:45:39 -04:00
a681fa4b03 fix(ui): invalidate query cache for all models on sync models
Also realised the tags were set up incorrectly, fixed that to get type safety with tags.
2023-10-07 22:30:15 +11:00
1cc686734b feat(ui): on base model change, disable control adapters
Previously it deleted them entirely.
2023-10-07 22:30:15 +11:00
82e8b92ba0 feat(ui): display toast when enabling t2i/controlnet and disabling the other 2023-10-07 22:30:15 +11:00
e86658f864 feat(ui): disable invoke button if enabled control adapter model does not match base model 2023-10-07 22:30:15 +11:00
ad136c2680 fix(ui): do not add control adapters with incompatible models to graph 2023-10-07 22:30:15 +11:00
35374ec531 feat(ui): update graphs for multi ip adapter 2023-10-07 22:30:15 +11:00
ed82bf6bb8 feat(ui): disable control adapter buttons if no models available 2023-10-07 22:30:15 +11:00
078c9b6964 feat(nodes,ui): add t2i to linear UI
- Update backend metadata for t2i adapter
- Fix typo in `T2IAdapterInvocation`: `ip_adapter_model` -> `t2i_adapter_model`
- Update linear graphs to use t2i adapter
- Add client metadata recall for t2i adapter
- Fix bug with controlnet metadata recall - processor should be set to 'none' when recalling a control adapter
2023-10-07 22:30:15 +11:00
1a9d2f1701 feat(ui): spruce up control adapter ui 2023-10-07 22:30:15 +11:00
3e93159bce fix(ui): enable duplicated control adapter 2023-10-07 22:30:15 +11:00
b57ebe52e4 chore(ui): "controlnet" -> "controladapters" 2023-10-07 22:30:15 +11:00
ba4616ff89 feat(ui): add limits to enabled control adapters
- only 1 ip adapter at a time
- controlnet and t2i cannot both be active at once
2023-10-07 22:30:15 +11:00
dcfbd49e1b fix(ui): fix control adapters recall 2023-10-07 22:30:15 +11:00
913fc83cbf fix(ui): fix control adapter autoprocess 2023-10-07 22:30:15 +11:00
6b8ce34eb3 fix(ui): fix excessive re-renders 2023-10-07 22:30:15 +11:00
9508e0c9db feat(ui): refactor control adapters
Control adapters logic/state/ui is now generalized to hold controlnet, ip_adapter and t2i_adapter. In the future, other control adapter types can be added.

TODO:
- Limit IP adapter to 1
- Add T2I adapter to linear graphs
- Fix autoprocess
- T2I metadata saving & recall
- Improve on control adapters UI
2023-10-07 22:30:15 +11:00
9c720da021 Bump DenoiseLatentsInvocation version. 2023-10-06 20:43:43 -04:00
e1b576c72d yarn build 2023-10-06 20:43:43 -04:00
971ccfb081 Refactor multi-IP-Adapter to clean up the interface around changing scales. 2023-10-06 20:43:43 -04:00
43a3c3c7ea Fix typo in setting IP-Adapter scales. 2023-10-06 20:43:43 -04:00
4df1cdb34d Tidy _prepare_attention_processors(...) logic. 2023-10-06 20:43:43 -04:00
3f860c3523 Fixup IP-Adapter locale strings. 2023-10-06 20:43:43 -04:00
d8d0c9af09 Fix handling of scales with multiple IP-Adapters. 2023-10-06 20:43:43 -04:00
9403672ac0 Bugfix for multi-ip-adapter in DenoiseLatentsInvocation. 2023-10-06 20:43:43 -04:00
94591840a7 Frontend changes to enable multiple IP-Adapters in the workflow editor. 2023-10-06 20:43:43 -04:00
26b91a538a Fixes to get IP-Adapter tests working with new multi-IP-Adapter support. 2023-10-06 20:43:43 -04:00
7ca456d674 Update IP-Adapter model to enable running multiple IP-Adapters at once. (Not tested yet.) 2023-10-06 20:43:43 -04:00
78828b6b9c WIP - Accept a list of IPAdapterFields in DenoiseLatents. 2023-10-06 20:43:43 -04:00
166ff9d301 Proposal: Support slow tests that depend on models (#4813)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

This PR adds support for slow unit tests that depend on models. It
includes:
- Documentation explaining the handling of fast vs. slow unit tests.
- Utilities to assist with writing tests that depend on models.
- A sample test that loads and runs an IP-Adapter model. This is far
from complete test coverage of IP-Adapter - it's just intended as a
first example of how to write tests with models.

**Suggestion for reviewers**: Start with docs/contributing/TESTS.md

## QA Instructions, Screenshots, Recordings

I've tested it all, but it would make sense for others to try running
both the fast tests and the slow tests.

## Added/updated tests?

- [x] Yes
- [ ] No
2023-10-06 19:55:38 -04:00
4f97bd4418 Merge branch 'main' into ryan/model-tests 2023-10-06 19:47:28 -04:00
e0e001758a Remove @slow decorator in favor of @pytest.mark.slow. 2023-10-06 18:26:06 -04:00
c1887135b3 Improve model cache debug logging (#4784)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

This PR adds detailed debug logging to the model cache in order to give
more visibility into the model cache's memory utilization. **This PR
does not make any functional changes to the model cache.**

Every time a model is moved from disk to CPU, or between CPU/CUDA, a log
like this is emitted:
```bash
[2023-10-03 15:17:20,599]::[InvokeAI]::DEBUG --> Moved model '/home/ryan/invokeai/models/.cache/63742ed45b499e55620c402d6df26a20:sdxl:main:unet' from cpu to cuda in 1.23s.
Estimated model size: 4.782 GB.
Process RAM                    (-4.722): 6.987GB -> 2.265GB
libc mmap allocated            (-4.722): 6.030GB -> 1.308GB
libc arena used                (-0.061): 0.402GB -> 0.341GB
libc arena free                (+0.061): 0.006GB -> 0.067GB
libc total allocated           (-4.722): 6.439GB -> 1.717GB
libc total used                (-4.783): 6.433GB -> 1.649GB
VRAM                           (+4.881): 1.538GB -> 6.418GB
```

## Related Tickets & Documents

https://github.com/invoke-ai/InvokeAI/pull/4694 contains related fixes
to some known memory issues.

## QA Instructions, Screenshots, Recordings

Make sure debug logs are enabled and you should see the new logs.

We should test each of the following environments:
- [x] Linux
- [x] Mac OS + MPS
- [x] Windows

## Added/updated tests?

- [x] Yes
- [ ] No

Added unit tests for the new utilities. Test coverage is still low for
the ModelCache, but not worse than before.
2023-10-06 10:21:42 -04:00
096d195d6e Merge branch 'main' into ryan/model-cache-logging-only 2023-10-06 09:52:45 -04:00
7870b90717 Add TESTS.md documentation. 2023-10-05 15:38:25 -04:00
9854b244fd Fix Flake8 errors by using a pytest conftest.py file. 2023-10-05 15:36:15 -04:00
7d800e1ce3 Fix broken link in documentation to 'Frontend Documentation'. 2023-10-05 15:36:15 -04:00
1c8b1fbc53 POC of a test that depends on models. 2023-10-05 15:35:58 -04:00
594a3aef93 Set MALLOC_MMAP_THRESHOLD_=1048576 by default in invoke.sh. And add it to the manual installation docs. 2023-10-05 14:26:45 -04:00
78377469db Add support for T2I-Adapter in node workflows (#4612)
* Bump diffusers to 0.21.2.

* Add T2IAdapterInvocation boilerplate.

* Add T2I-Adapter model to model-management.

* (minor) Tidy prepare_control_image(...).

* Add logic to run the T2I-Adapter models at the start of the DenoiseLatentsInvocation.

* Add logic for applying T2I-Adapter weights and accumulating.

* Add T2IAdapter to MODEL_CLASSES map.

* yarn typegen

* Add model probes for T2I-Adapter models.

* Add all of the frontend boilerplate required to use T2I-Adapter in the nodes editor.

* Add T2IAdapterModel.convert_if_required(...).

* Fix errors in T2I-Adapter input image sizing logic.

* Fix bug with handling of multiple T2I-Adapters.

* black / flake8

* Fix typo

* yarn build

* Add num_channels param to prepare_control_image(...).

* Link to upstream diffusers bugfix PR that currently requires a workaround.

* feat: Add Color Map Preprocessor

Needed for the color T2I Adapter

* feat: Add Color Map Preprocessor to Linear UI

* Revert "feat: Add Color Map Preprocessor"

This reverts commit a1119a00bf.

* Revert "feat: Add Color Map Preprocessor to Linear UI"

This reverts commit bd8a9b82d8.

* Fix T2I-Adapter field rendering in workflow editor.

* yarn build, yarn typegen

---------

Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-10-05 16:29:16 +11:00
fbe6452c45 Add support for IPAdapterPlusXL based on 6219530507. 2023-10-04 22:35:17 -04:00
3f4ea073d1 fix(ui): throw on fetch err when copying image 2023-10-05 10:43:59 +11:00
8b7f8eaea2 chore: flake8 2023-10-05 09:32:29 +11:00
88e16ce051 fix(nodes): mark session queue items failed on processor error
When the processor has an error and it has a queue item, mark that item failed.

This addresses processor errors resulting in `in_progress` queue items, which create a soft lock of the processor, requiring the user to cancel the `in_progress` item before anything else processes.
2023-10-05 09:32:29 +11:00
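The essential pattern is small; a self-contained sketch (names are illustrative, not the real services):

```python
class QueueItem:
    def __init__(self, item_id: int):
        self.item_id = item_id
        self.status = "in_progress"
        self.error = None

def run_queue_item(item: QueueItem) -> None:
    # Any processor error must mark the in-flight item failed; otherwise it
    # stays "in_progress" forever and soft-locks the queue.
    try:
        raise RuntimeError("simulated invocation error")
    except Exception as exc:
        item.status = "failed"
        item.error = str(exc)

item = QueueItem(item_id=1)
run_queue_item(item)
assert item.status == "failed"
```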
421440cae0 feat(nodes): exhaustive graph validation
Makes graph validation logic more rigorous, validating graphs when they are created as part of a session or batch.

`validate_self()` method added to `Graph` model. It does all the validation that `is_valid()` did, plus a few extras:
- unique `node.id` values across graph
- node ids match their key in `Graph.nodes`
- recursively validate subgraphs
- validate all edges
- validate graph is acyclical

The new method is required because `is_valid()` just returned a boolean. That behaviour is retained, but `validate_self()` now raises appropriate exceptions for validation errors. These are then surfaced to the client.

The function is named `validate_self()` because pydantic reserves `validate()`.

There are two main places where graphs are created - in batches and in sessions.

Field validators are added to each of these for their `graph` fields, which call the new validation logic.

**Closes #4744**

In this issue, a batch is enqueued with an invalid graph. The output field is typed as optional while the input field is required. The field types themselves are not relevant - this change addresses the case where an invalid graph was created.

The mismatched types problem is not noticed until we attempt to invoke the graph, because the graph was never *fully* validated. An error is raised during the call to `graph_execution_state.next()` in `invoker.py`. This function prepares the edges and validates them, raising an exception due to the mismatched types.

This exception is caught by the session processor, but it doesn't handle this situation well - the graph is not marked as having an error and the queue item status is never changed. The queue item is therefore forever `in_progress`, so no new queue items are popped - the app won't do anything until the queue item is canceled manually.

This commit addresses this by preventing invalid graphs from being created in the first place, addressing a substantial number of fail cases.
2023-10-05 09:32:29 +11:00
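A condensed, self-contained sketch of the checks listed above (hedged; the real `Graph` model differs in structure):

```python
def validate_graph(nodes: dict, edges: list) -> None:
    """Raise on the first violation instead of returning a boolean.

    `nodes` maps node id -> node dict; `edges` holds (source_id, dest_id) pairs.
    """
    for key, node in nodes.items():
        if key != node["id"]:
            raise ValueError(f"node key {key!r} does not match its id {node['id']!r}")
    for src, dst in edges:
        if src not in nodes or dst not in nodes:
            raise ValueError(f"edge ({src}, {dst}) references a missing node")

    # Acyclicity check via depth-first search with three colors.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {nid: WHITE for nid in nodes}
    adj = {nid: [] for nid in nodes}
    for src, dst in edges:
        adj[src].append(dst)

    def visit(nid):
        color[nid] = GRAY
        for nxt in adj[nid]:
            if color[nxt] == GRAY:
                raise ValueError("graph contains a cycle")
            if color[nxt] == WHITE:
                visit(nxt)
        color[nid] = BLACK

    for nid in nodes:
        if color[nid] == WHITE:
            visit(nid)
```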
421021cede Add 'make 3d' plugin / community node (#4794)
* Add 'make 3d' plugin.

* Update communityNodes.md

Updated to Repo Link

---------

Co-authored-by: Jordan <srcrr-gitlab@ipriva.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
Co-authored-by: Millun Atluri <Millu@users.noreply.github.com>
2023-10-04 21:41:21 +00:00
020d4302d1 Change version bump from patch to minor
Because this adds a new field, it's a minor version bump
2023-10-05 08:24:52 +11:00
8c59d2e5af chore: isort 2023-10-05 08:24:52 +11:00
17d451eaa7 feat(images): add png_compress_level config
The compress_level setting of PIL.Image.save(), used for PNG encoding. All settings are lossless. 0 = fastest, largest filesize, 9 = slowest, smallest filesize

Closes #4786
2023-10-05 08:24:52 +11:00
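The setting maps straight through to Pillow; a minimal sketch:

```python
from PIL import Image

image = Image.new("RGB", (64, 64))
# compress_level trades encode speed for file size; every level is lossless.
image.save("fast.png", compress_level=1)   # quicker save, larger file
image.save("small.png", compress_level=9)  # slower save, smaller file
```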
23a06fd06d feat(nodes): clear torch cache after upscaling
This can use many GB of VRAM, so we need to clean up after ourselves.
2023-10-05 08:24:52 +11:00
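The cleanup itself is short; a hedged sketch of the idea:

```python
import gc

import torch

def clear_torch_cache() -> None:
    # Return cached CUDA blocks to the driver after a VRAM-heavy operation
    # such as upscaling; a no-op on CPU-only machines.
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```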
010c8e8038 Roll back change to buildAdHocUpscaleGraph.ts
Undo the change made here which was causing automated tests to fail.
2023-10-05 08:24:52 +11:00
dfc635223c Update upscale.py with minor style correction 2023-10-05 08:24:52 +11:00
37121a3a24 Add tile_size parameter to ESRGAN node in buildAdHocUpscaleGraph.ts
Adds tile_size parameter to support the changed ESRGAN node in invokeai/app/invocations/upscale.py
2023-10-05 08:24:52 +11:00
51b5de799a Update upscale.py to support tile kwarg of RealESRGANer
Adds tile_size field to the ESRGAN Upscaler node, which sends the tile kwarg to RealESRGANer's constructor, enabling tiled upscaling (default=512)
2023-10-05 08:24:52 +11:00
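Wiring the field through looks roughly like this (`RealESRGANer` does take a `tile` kwarg; the surrounding values are illustrative):

```python
from realesrgan import RealESRGANer

upscaler = RealESRGANer(
    scale=4,
    model_path="RealESRGAN_x4plus.pth",  # illustrative weights path
    tile=512,     # process in 512px tiles to bound VRAM usage (0 = no tiling)
    tile_pad=10,  # overlap between tiles to hide seams
)
# In practice a model architecture instance is also passed via `model=`.
```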
eadbe6abf7 handle 0 images/assets 2023-10-05 08:11:52 +11:00
16f48a816f fix(ui): add dnd validation logic for multi-select board move 2023-10-05 08:11:52 +11:00
95838e5559 fix(ui): fix remove from board dnd validation
This is fired when the dnd image is moved over the 'none' board. We weren't defaulting to 'none' for the image's board_id, resulting in it being possible to drag a 'none' image onto 'none'.
2023-10-05 08:11:52 +11:00
3e8d62b1d1 fix(ui): fix duplicate image selection
Selections were not being `uniqBy()`'d, or were `uniqBy()`'d without a proper iteratee. This results in duplicate images in selections in certain situations.

Add correct `uniqBy()` to the reducer to prevent this in the future.
2023-10-05 08:11:52 +11:00
2acc93eb8e feat(ui): remove all calls to getBoardImagesTotals/getBoardAssetsTotals
This caused a crapload of network requests any time an image was generated.

The counts are necessary to handle the logic for inserting images into existing image list caches; we have to keep track of the counts.

Replace tag invalidation with manual cache updates in all cases, except the initial request (which is necessary to get the initial image counts).

One subtle change is to make the counts an object instead of a number. This is required for `immer` to handle draft states. This should be raised as a bug with RTK Query, as no error is thrown when attempting to update a primitive immer draft.
2023-10-05 08:11:52 +11:00
fbb61f2334 Revert "Updated js files"
This reverts commit a0e936f3a7.
2023-10-04 22:32:00 +11:00
be85c7972b Updated js files 2023-10-04 22:32:00 +11:00
3a586fc9c4 Prevent caching to ensure updated UI is shown 2023-10-04 22:32:00 +11:00
dedead672f chore(facetools): bump node patch versions
The helper function `generate_face_box_mask()` had a bug that prevented larger faces from being detected in some situations. This is resolved, and its dependent nodes (all the FaceTools nodes) have a patch version bump.
2023-10-04 09:33:14 +11:00
67366921c0 add checkbounds bool
- don't check bounds on first detection before chunking, allows larger faces to be detected
2023-10-04 09:33:14 +11:00
5a1019d858 sort by starred and then created_at to get board cover image 2023-10-04 08:54:47 +11:00
f4ba7be918 refetch board list when image is starred or unstarred 2023-10-04 08:54:47 +11:00
069d8b5812 feat(ui): move initial IP adapter model selection to listener 2023-10-04 08:41:37 +11:00
24d73d484a IP adapter UI 2023-10-04 08:41:37 +11:00
2479a59e5e Re-enable garbage collection in model cache MemorySnapshots. 2023-10-03 15:18:47 -04:00
7d0ac2c36d (minor) clean up typos. 2023-10-03 15:00:03 -04:00
519b892f0c Add unit test for Struct_mallinfo2.__str__() 2023-10-03 14:25:34 -04:00
763dcacfd3 Add unit test for get_pretty_snapshot_diff(...). 2023-10-03 14:25:34 -04:00
3599d546e6 Add unit test for LibcUtil().mallinfo2(). 2023-10-03 14:25:34 -04:00
22a84930f6 Disable garbage collection in ModelCache calls to MemorySnapshot in order minimize snapshot overhead. 2023-10-03 14:25:34 -04:00
d64e17e043 Add README with info about glib memory fragmentation caused by the model cache. 2023-10-03 14:25:34 -04:00
ba54277011 Catch a more specific exception in environments that do not have a libc shared library. 2023-10-03 14:25:34 -04:00
5915a4a51c Minor fixes. 2023-10-03 14:25:34 -04:00
4580ba0d87 Remove logic to update model cache size estimates dynamically. 2023-10-03 14:25:34 -04:00
b9fd2e9e76 Improve get_pretty_snapshot_diff(...) message formatting. 2023-10-03 14:25:34 -04:00
75b65597af Add malloc info to MemorySnapshot. 2023-10-03 14:25:34 -04:00
2a3c0ab5d2 Move MemorySnapshot to its own file. 2023-10-03 14:25:34 -04:00
7d61373b82 Add LibcUtil class. 2023-10-03 14:25:34 -04:00
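Querying glibc's allocator from Python is a short ctypes exercise; a Linux-only sketch (field layout per `mallinfo2(3)`, not necessarily the project's exact class):

```python
import ctypes

class Struct_mallinfo2(ctypes.Structure):
    _fields_ = [(name, ctypes.c_size_t) for name in (
        "arena", "ordblks", "smblks", "hblks", "hblkhd",
        "usmblks", "fsmblks", "uordblks", "fordblks", "keepcost",
    )]

libc = ctypes.CDLL("libc.so.6")  # glibc only
libc.mallinfo2.restype = Struct_mallinfo2

info = libc.mallinfo2()
print(f"arena (non-mmap heap): {info.arena / 1e9:.3f} GB")
print(f"mmap allocated:        {info.hblkhd / 1e9:.3f} GB")
```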
7d65555a5a Fix type error in torch device comparison. 2023-10-03 14:25:34 -04:00
123f2b2dbc Update cache model size estimates based on changes in VRAM when moving models to/from CUDA. 2023-10-03 14:25:34 -04:00
1e4e42556e Update model cache device comparison to treat 'cuda' and 'cuda:0' as the same device type. 2023-10-03 14:25:34 -04:00
1f6699ac43 Consolidate all model.to(...) calls in the model cache to use a utility function with better logging. 2023-10-03 14:25:34 -04:00
ace8665411 Add warning log if moving a model from cuda to cpu causes unexpected change in VRAM usage. 2023-10-03 14:25:34 -04:00
7fa5bae8fd Add warning log if moving model from RAM to VRAM causes an unexpected change in VRAM usage. 2023-10-03 14:25:34 -04:00
f9faca7c91 Add warning log if model mis-reports its required cache memory before load from disk. 2023-10-03 14:25:34 -04:00
594fd3ba6d Add debug logging of changes in RAM and VRAM for all model cache operations. 2023-10-03 14:25:34 -04:00
44d68f5ed5 Auto-format model_cache.py. 2023-10-03 14:25:34 -04:00
4bda7d7df5 Add font Inter-Regular.ttf to installed assets (#4775)
## What type of PR is this? (check all applicable)

- [X] Bug Fix


## Have you discussed this change with the InvokeAI team?
- [X] Yes

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No


## Description

This PR causes the font "Inter-Regular.ttf", which is needed by the
facetools Face Identifier node, to be installed along with other assets
in the virtual environment. It also fixes the font path resolution logic
in the invocation to work with both package and editable installs.

## Related Tickets & Documents

Closes #4771
2023-10-03 09:05:51 -04:00
920c5dd686 remove unneeded os import 2023-10-03 08:53:47 -04:00
4ce00a32f4 add font Inter-Regular.ttf to installed assets 2023-10-03 08:48:50 -04:00
dcbb25dfea feat(ui): staging styling tweak 2023-10-03 13:46:01 +11:00
6c8270dae2 fix(ui): canvas staging area works after undo 2023-10-03 13:46:01 +11:00
b19572199f Release/v3.2.0 (#4766)
## What type of PR is this? (check all applicable)

Release v3.2.0

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No

Need to update prompting docs 

## Description
3.2.0 release version

## [optional] Are there any post deployment tasks we need to perform?
2023-10-03 11:59:19 +11:00
a673c0aa14 Update JS files 2023-10-03 10:31:35 +11:00
955ef3bc54 Update version to 3.2.0 2023-10-03 10:29:27 +11:00
f002ae8da5 feat(ui): max upscale pixels config (#4765)
* feat(ui): max upscale pixels config

Add `maxUpscalePixels: number` to the app config. The number should be the *total* number of pixels eg `maxUpscalePixels: 4096 * 4096`.

If not provided, any size image may be upscaled.

If the config is provided, users will be advised if their image is too large for either model, or told to switch to an x2 model if it's only too large for x4.

The message is via tooltip in the popover and via toast if the user uses the hotkey to upscale.

* feat(ui): "mayUpscale" -> "isAllowedToUpscale"
2023-10-02 23:25:05 +00:00
208bf68ba2 fix missing toast message 2023-10-03 07:45:26 +11:00
1aba369c83 invalidate board cache when an image is added to a board 2023-10-02 19:40:11 +11:00
9ac11e793c Added GridtoGif to communityNodes.md (#4755)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [x] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description
Grid to Gif is two custom nodes: one divides a grid image into an image collection, the other converts an image collection into an animated GIF.
2023-10-02 10:44:55 +11:00
9b39888e2f Added GridtoGif to communityNodes.md 2023-10-01 17:42:36 -05:00
c1715144f0 add Character Art Nodes to communityNodes.md 2023-10-01 11:10:36 -04:00
929557bc6f Fix typo of Psychedelicious name (#4746)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
2023-09-30 22:48:30 +05:30
811dd93912 Fix typo of Psychedelicious name 2023-09-30 12:35:49 -04:00
9a60dbd5cb add version to cv2 infill (#4741)
cv2 infill node was missing a version in its decorator, resulting in a
red exclamation mark on the node

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: is tiny

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No
2023-09-29 20:36:51 +05:30
637c5b0747 add version to cv2 infill
- cv2 infill was missing a version in its decorator, resulting in a red exclamation mark on the node
2023-09-29 16:58:19 +02:00
27164de8b8 Fix absolute path for font file
Make the font file path relative to this source file. Not ideal, but it will work no matter where InvokeAI is launched.
2023-09-29 22:05:04 +10:00
08e40d6d16 fix(ui): fit ip adapter image to panel (#4737)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

Very tall IP adapter images didn't get fit to the panel. Now they do
2023-09-29 14:29:39 +05:30
d905c54795 fix(ui): fit ip adapter image to panel 2023-09-29 18:54:34 +10:00
dc1e804887 Workflow editor improvements - add node from empty connection and auto-connect to empty handle. (#4684)
* Initial commit of edge drag feature.

* Fixed build warnings

* code cleanup and drag to existing node

* improved isValidConnection check

* fixed build issues, removed cyclic dependency

* edge created nodes now spawn at cursor

* Add Node popover will no longer show when using drag to delete an edge.

* Fixed collection handling, added priority for handles matching name of source handle, removed current image/notes nodes from filtered list

* Fixed not properly clearing startParams when closing the Add Node popover

* fix(ui): do not allow Collect -> Iterate connection

This can be removed when #3956 is resolved

* feat(ui): use existing node validation logic in add-node-on-drop

This logic handles a number of special cases

---------

Co-authored-by: Millun Atluri <Millu@users.noreply.github.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-09-29 18:12:57 +10:00
95fd2ee6ff Nodes-FaceTools (FaceIdentifier, FaceOff, FaceMask) (#4576)
* node-FaceTools

* Added more documentation for facetools

* invert FaceMask masking

- FaceMask previously protected the face and changed the surroundings by default (face white, else black)
- Changed to match how FaceOff and the others work: the opposite, where the surroundings are protected and the face changes by default (face black, else white)

* reflect changed facemask behaviour in docs

* add FaceOff+FaceMask workflows

- Add FaceOff and FaceMask example workflows to docs/workflows

* add FaceMask+FaceOff workflows to exampleworkflows.md

- used invokeai URL paths mimicking other workflow URLs, hopefully they translate when/if merged

* inheriting, typehints, black/isort/flake8

- modified FaceMask and FaceOff output classes to inherit base image, height, width from ImageOutput
- Added type annotations to helper functions, required some reworking of code's stored data

* remove credit header

- Was in my personal/repo copy, don't think it's necessary if merged.

* Optionals & image declaration duplication

- Added Optional[] to optional outputs and types
- removed duplication of the image = context.services.images.get_pil_image(self.image.image_name) declaration
- Still need to find a way to deal with mask_pil None typing errors

* face(facetools): fix typing issues, add validation, clean up structure

* feat(facetools): update field descriptions

* Update FaceOff_FaceScale2x.json

- update FaceOff workflow after Bounded Image field removed in place of inheriting Image out field from ImageOutput

* feat(facetools): pass through original image on facemask if invalid face ids requested

* feat(facetools): tidy variable names & fn calls

* feat(facetools): bundle inter font, draw ids with it

Inter is licensed under the SIL Open Font License. The license is included and is fully permissive. Inter is the same font the UI and commercial application already use.

Only the "regular" version is bundled.

* chore(facetools): isort & fix mypy issues

* docs(facetools): update and format docs

---------

Co-authored-by: Millun Atluri <millun.atluri@gmail.com>
Co-authored-by: Millun Atluri <Millu@users.noreply.github.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-09-29 17:54:13 +10:00
5f4eb0c3b3 update communitynodes.md to add Rotate/Flip Image to composition pack (#4735)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [X] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Adds another node description (Rotate/Flip Image) to Image and Mask
Composition Pack

## Related Tickets & Documents
n/a

## QA Instructions, Screenshots, Recordings
n/a
## Added/updated tests?

- [ ] Yes
- [X] No : n/a
2023-09-29 15:19:48 +10:00
d464ce509b update communitynodes.md to add Rotate/Flip Image to composition pack 2023-09-29 00:37:40 -04:00
3909e68527 fix(ui): data-testId -> data-testid
Must be strict kebab-case for React to pass the attribute through to the DOM
2023-09-29 12:44:00 +10:00
848e51f72b Update communityNodes.md (#4729)
Added thresholding and halftone nodes.
2023-09-28 23:48:07 +00:00
52f8c9e16f add data-testids to UI components that may be hard to target with automation 2023-09-29 08:58:31 +10:00
5174f382b9 Update LOCAL_DEVELOPMENT.md
add LSP and type checking notes
2023-09-29 00:34:39 +10:00
c7f80cd163 Use metadata ip adapter (#4715)
* add control net to useRecallParams

* got recall controlnets working

* fix metadata viewer controlnet

* fix type errors

* fix controlnet metadata viewer

* add ip adapter to metadata

* added ip adapter to recall parameters

* got ip adapter recall working, still need to fix type errors

* fix type issues

* clean up logs

* python formatting

* cleanup

* fix(ui): only store `image_name` as ip adapter image

* fix(ui): use nullish coalescing operator for numbers

Need to use the nullish coalescing operator `??` instead of the false-y coalescing operator `||` when the value being checked is a number. This prevents unintended coalescing when the value is zero and therefore false-y.

* feat(ui): fall back on default values for ip adapter metadata

* fix(ui): remove unused schema

* feat(ui): re-use existing schemas in metadata schema

* fix(ui): do not disable invocationCache

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-09-28 09:05:32 +00:00
309e2414ce enable downloading from subfolders for repo_ids (#4725)
## What type of PR is this? (check all applicable)

- [X] Feature

## Have you discussed this change with the InvokeAI team?
- [X] Yes
      
## Have you updated all relevant documentation?
- [X] Yes

## Description

Very rarely a model lives in the subfolder of a non-pipeline HuggingFace
repo_id. The example I've been working with is
https://huggingface.co/monster-labs/control_v1p_sd15_qrcode_monster/tree/main,
where the improved monster QR code controlnet model lives in the `v2`
subdirectory.

In order to accommodate installing such files, I have made two changes
to the model installer.

1. At installation/configuration time, if a stanza in
`INITIAL_MODELS.yaml` contains the field `subfolder`, then the model
will be installed from the indicated subfolder. The syntax in this case
is:
```
sd-1/controlnet/qrcode_monster:
   repo_id: monster-labs/control_v1p_sd15_qrcode_monster
   subfolder: v2
```
2. From within the Web GUI or the installer TUI, if you wish to indicate
that the model resides in a subfolder, you can tack ":_subfoldername_"
to the end of the repo_id. The resulting repo_id will look like:
```
monster-labs/control_v1p_sd15_qrcode_monster:v2
```

The code for introducing these changes is obscure and somewhat hacky.
However, the whole installer code base has been rewritten for the model
manager refactor (#4252 ) and I will reimplement this feature in a more
elegant way in that PR.
2023-09-28 15:26:18 +10:00
6704f77d87 Merge branch 'main' into feat/install-repoid-folders 2023-09-28 13:49:57 +10:00
045d3f6139 chore: flake8 2023-09-28 13:49:31 +10:00
a0bd8c638e chore(ui): lint 2023-09-28 12:39:00 +10:00
de04a5f441 cleanup 2023-09-28 12:39:00 +10:00
40ed218c26 surface usage errors for cnet and upscale, handle clearing cnet if error occurs 2023-09-28 12:39:00 +10:00
807c6b41c5 surface usage errors for enqueuing batch 2023-09-28 12:39:00 +10:00
f6bbcd0589 remove dangling debug statement 2023-09-27 22:26:26 -04:00
ada22a799e remove dangling debug statement 2023-09-27 22:26:06 -04:00
a42ef9c855 add documentation on syntax to use for subfolder repo_ids 2023-09-27 22:17:29 -04:00
034af2d9f8 enable downloading from subfolders for repo_ids 2023-09-27 22:11:56 -04:00
676ccd8ebb Add IP-Adapter to docs (#4703)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-09-28 11:11:24 +10:00
a263a4f4cc Update CONTROLNET.md 2023-09-27 20:51:02 -04:00
ef0754cdec Merge branch 'invoke-ai:main' into main 2023-09-28 09:41:29 +10:00
8158124679 fix(ui): usePreselectedImage causing re-renders
This hook was rerendering any time anything changed. Moved it to a logical component, put its useEffects inside the component. This reduces the effect of the rerenders to just that tiny always-null component.
2023-09-28 09:02:45 +10:00
5d31df0cb7 Fix IP-Adapter calculation of memory footprint (#4692)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

The IP-Adapter memory footprint was not being calculated correctly.

I think we could put checks in place to catch this type of error in the
future, but for now I'm just fixing the bug.

## QA Instructions, Screenshots, Recordings

I tested manually in a debugger. There are 3 pathways for calculating
the model size. All were tested:
- From file
- From state_dict
- From model weights

## Added/updated tests?

- [ ] Yes
- [x] No : This would require the ability to run tests that depend on
models. I'm working on this in another branch, but not ready quite yet.
2023-09-27 12:03:04 -04:00
bd63454e51 Merge branch 'main' into bug/ip-adapter-calc-size 2023-09-27 11:55:55 -04:00
062df07de2 fix(ui): fix loading queue item translation 2023-09-27 11:18:43 -04:00
0fc14afcf0 Merge branch 'main' into bug/ip-adapter-calc-size 2023-09-27 09:42:51 -04:00
4a0a1c30db use controlnet from metadata if available (#4658)
* add control net to useRecallParams

* got recall controlnets working

* fix metadata viewer controlnet

* fix type errors

* fix controlnet metadata viewer

* set control image and use correct processor type and node

* clean up logs

* recall processor using substring

* feat(ui): enable controlNet when recalling one

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-09-27 19:30:50 +10:00
3432fd72f8 fix auto-switch alongside starred images (#4708)
* add skeleton loading state for queue list

* add optional selectedImage when switching a board

* unstage

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-09-27 07:51:37 +00:00
05a43c41f9 feat: Improve Staging Toolbar Styling 2023-09-27 17:45:39 +10:00
bb48617101 fix(ui): memoize canvas context menu callback 2023-09-27 17:45:39 +10:00
aa2f68f608 fix(ui): use theme colors for canvas error fallback 2023-09-27 17:45:39 +10:00
fbccce7573 feat(ui): staging area toolbar enhancements
- Current image number & total are displayed
- Left/right wrap around instead of stopping on first/last image
- Disable the left/right/number buttons when showing base layer
- improved translations
2023-09-27 17:45:39 +10:00
a35087ee6e feat(ui): hide mask when staging
Now you can compare inpainted area with new image data
2023-09-27 17:45:39 +10:00
03e463dc89 fix(ui): reset canvas batchIds on staging area init/discard/commit
This prevents the bbox from being used inadvertently during canvas generation
2023-09-27 17:45:39 +10:00
d467e138a4 fix(ui): canvas is staging if is listening for batch ids 2023-09-27 17:45:39 +10:00
ba4aaea45b fix(ui): memoize event handlers on bounding box 2023-09-27 17:45:39 +10:00
53eb23b8b6 fix(ui): fix canvas staging images offset from bounding box
The staging area used the stage bbox, not the staging area bbox.
2023-09-27 17:45:39 +10:00
8b969053e7 fix: SDXL Refiner using the incorrect node during inpainting 2023-09-27 17:42:42 +10:00
98a076260b fix(ui): only disable cancel item button if value is null/undefined
0 is falsy and the `item_id` is an integer
2023-09-27 14:28:26 +10:00
164877b610 Merge branch 'main' into main 2023-09-27 12:28:24 +10:00
b3f4f28d76 fix: Canvas pull getting cropped for Control Images 2023-09-27 12:25:45 +10:00
acee4bd282 fix: Always use bbox bounds for Controlnet Image (canvas) 2023-09-27 12:25:45 +10:00
fc9a7320eb Update to be more accurate 2023-09-27 12:21:20 +10:00
7c0a083b13 Merge branch 'invoke-ai:main' into main 2023-09-27 11:26:26 +10:00
50d254fdb7 fix(ui): fix types for cache setting 2023-09-27 10:29:19 +10:00
0cfc1c5f86 fix(ui): save cache setting to workflow
Do not strip out unknown values. Quick fix, probably not the best way to handle this.
2023-09-27 10:29:19 +10:00
f35dfa06bb Merge branch 'invoke-ai:main' into main 2023-09-27 10:10:52 +10:00
407bca5063 fix merges 2023-09-27 10:10:09 +10:00
1419977e89 feat(ui): update cache status on queue event
It was polling every 5s before. No need - just invalidate the tag when we have a queue item status change event.
2023-09-27 08:56:14 +10:00
a953944894 feat(ui): updatable edges in workflow editor (#4701)
- Drag the end of an edge away from its handle to disconnect it
- Drop in empty space to delete the edge
- Drop on valid handle to reconnect it
- Update connection logic slightly to allow edge updates
2023-09-26 15:54:35 +00:00
a4cdaa245e feat(ui): improve error handling (#4699)
* feat(ui): add error handling for enqueueBatch route, remove sessions

This re-implements the handling for the session create/invoke errors, but for batches.

Also remove all references to the old sessions routes in the UI.

* feat(ui): improve canvas image error UI

* make canvas error state gray instead of red

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-09-26 15:24:53 +00:00
105a4234b0 fix(ui): fix color picker on canvas (#4706)
Resolves  #4667

Co-authored-by: Mary Hipp Rogers <maryhipp@gmail.com>
2023-09-26 14:11:12 +00:00
34c563060f feat(ui): store active tab as name, not index (#4697)
This fixes an issue with tab changing when some tabs are disabled.
2023-09-26 14:06:39 +00:00
d45c47db81 fix(backend): remove extra cache arg (#4698) 2023-09-26 10:03:48 -04:00
c771a4027f Give user option to disable the configure TUI during installation (#4676)
## What type of PR is this? (check all applicable)

- [X] Feature


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] No - this should go into release notes.

## Description

During installation, the installer will now ask the user whether they
wish to perform a manual or automatic configuration of invokeai. If they
choose automatic (the default), then the install is performed without
running the TUI of the `invokeai-configure` script. Otherwise the
console-based interface is activated as usual.

This script also bumps up the default model RAM cache size to 7.5 GB, which
improves performance on SDXL models.
2023-09-26 08:15:48 -04:00
3fd27b1aa9 run correct version of black 2023-09-26 08:03:34 -04:00
d59e534cad use heuristic to select RAM cache size during headless install; blackified 2023-09-26 08:03:34 -04:00
0c97a1e7e7 give user option to disable the configure TUI during installation 2023-09-26 08:03:34 -04:00
c8b306d9f8 Update CONTROLNET.md 2023-09-26 19:20:03 +10:00
edd2c54b9e add cache 2023-09-26 18:28:52 +10:00
727cc0dafe add pics 2023-09-26 17:51:08 +10:00
4530bd46dc Added IP-Adapter 2023-09-26 17:30:34 +10:00
c8b109f52e Add 'Random Float' node <3 (#4581)
* Add 'Random Float' node <3

does what it says on the tin :)

* Add random float + random seeded float nodes

altered my random float node as requested by Millu, kept the seeded version as an alternate variant for those that would like to control the randomization seed :)

* Update math.py

* Update math.py

* feat(nodes): standardize fields to match other nodes

---------

Co-authored-by: Millun Atluri <Millu@users.noreply.github.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-09-26 05:57:44 +00:00
a2613948d8 Feature/lru caching 2 (#4657)
* fix(nodes): do not disable invocation cache delete methods

When the runtime disabled flag is on, do not skip the delete methods. Skipping them could otherwise lead to a cache hit on a missing resource.

Do skip them when the cache size is 0, because the user cannot change this (must restart app to change it).

* fix(nodes): do not use double-underscores in cache service

* Thread lock for cache

* Making cache LRU

* Bug fixes

* bugfix

* Switching to one Lock and OrderedDict cache

* Removing unused imports

* Move lock cache instance

* Addressing PR comments

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: Martin Kristiansen <martin@modyfi.io>
2023-09-26 03:42:09 +00:00
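The end state described above, one lock guarding an `OrderedDict`, is a classic pattern; a minimal sketch:

```python
import threading
from collections import OrderedDict
from typing import Any, Optional

class ThreadSafeLRUCache:
    def __init__(self, max_size: int):
        self._max_size = max_size
        self._lock = threading.Lock()
        self._cache: "OrderedDict[str, Any]" = OrderedDict()

    def get(self, key: str) -> Optional[Any]:
        with self._lock:
            if key not in self._cache:
                return None
            self._cache.move_to_end(key)  # mark as most recently used
            return self._cache[key]

    def put(self, key: str, value: Any) -> None:
        with self._lock:
            self._cache[key] = value
            self._cache.move_to_end(key)
            while len(self._cache) > self._max_size:
                self._cache.popitem(last=False)  # evict least recently used
```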
f8392b2f78 Maryhipp/hide use cache checkbox if disabled (#4691)
* add skeleton loading state for queue list

* hide use cache checkbox if cache is disabled

* undo accidental add

* feat(ui): hide node footer entirely if nothing to show there

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-09-26 03:26:15 +00:00
358116bc22 feat(ui): use spinner for queue loading state
Skeletons are for when we know the number of specific content items that are loading. When the queue is loading, we don't know how many items there are, or how many will load, so the whole list should be replaced with loading state.

The previous behaviour rendered a static number of skeletons. That number would rarely be the right number - the app shouldn't say "I'm loading 7 queue items", then load none, or load 50.

A future enhancement could use the queue item skeleton component and go by the total number of queue items, as reported by the queue status. I tried this but had some layout jankiness, not worth the effort right now.

The queue item skeleton component's styling was updated to support this future enhancement, making it exactly the same size as a queue item (it was a bit smaller before).
2023-09-26 13:19:49 +10:00
1e3590111d Remove dangling debug statement (#4695)
## What type of PR is this? (check all applicable)

- [X] Bug Fix

## Description

I left a dangling debug statement in a recent merged PR (#4674 ). This
removes it.
2023-09-26 11:08:10 +10:00
063b800280 Merge branch 'main' into bugfix/remove-debug-statement 2023-09-26 10:39:29 +10:00
3935bf92c8 Add image enhance node to composition pack in communityNodes, 9 more n… (#4693)
Updates my Image & Mask Composition Pack from 4 to 14 nodes, and moves
the Enhance Image node into it.

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [X] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because:
This is an update of my existing community nodes entries.
      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Adds 9 more nodes to my Image & Mask Composition pack including Clipseg,
Image Layer Blend, Masked Latent/Noise Blend, Image Dilate/Erode,
Shadows/Highlights/Midtones masks from image, and more.

## Related Tickets & Documents

n/a

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [X] No : out of scope, tested the nodes, will integrate tests with my
own repo in time as is helpful
2023-09-26 09:41:28 +10:00
066e09b517 remove dangling debug statement 2023-09-25 19:30:41 -04:00
869b4a8d49 Add image enhance node to composition pack in communityNodes, 9 more nodes
Adds 9 more of my nodes to the Image & Mask Composition Pack in the community nodes page, and integrates the Enhance Image node into that pack as well (formerly it was its own entry).
2023-09-25 18:49:04 -04:00
399ebe443e Fix IP-Adapter calculation of memory footprint. 2023-09-25 18:28:10 -04:00
13919ff300 remove unused vars 2023-09-25 17:45:29 -04:00
634e5652ef add skeleton loading state for queue list 2023-09-25 17:45:29 -04:00
9bdc718df5 Update 020_INSTALL_MANUAL.md (#4685)
Add some instructions about installing the frontend toolchain when doing
a git-based install.

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission

## Description

[Update
020_INSTALL_MANUAL.md](73ca8ccdb3)

Add some instructions about installing the frontend toolchain when doing
a git-based install.
2023-09-25 21:43:08 +10:00
73ca8ccdb3 Update 020_INSTALL_MANUAL.md
Add some instructions about installing the frontend toolchain when doing a git-based install.
2023-09-25 21:17:11 +10:00
f37ffda966 replace case statements with if/else to support python 3.9 2023-09-25 18:33:39 +10:00
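`match` statements require Python 3.10, so keeping 3.9 support means spelling them out; an illustrative before/after (the branch values here are made up):

```python
# Python >= 3.10:
#     match base_model:
#         case "sd-1":  size = 512
#         case "sdxl":  size = 1024
#         case _:       size = 512
#
# 3.9-compatible equivalent:
base_model = "sdxl"
if base_model == "sd-1":
    size = 512
elif base_model == "sdxl":
    size = 1024
else:
    size = 512
```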
5a9777d443 fix: Auto switch Control Adapter processor to Color on relevant models (#4683)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-09-25 12:48:24 +05:30
8072c05ee0 Merge branch 'main' into color-map-auto 2023-09-25 12:48:12 +05:30
75ff4f4ca3 fix: Auto switch Control Adapter processor to Color on relevant models 2023-09-25 12:47:43 +05:30
30df123221 fix(ui): fix circular dependency (#4679)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

This is actually a platform-specific issue. `madge` is complaining about
a circular dependency on a single file -
`invokeai/frontend/web/src/features/queue/store/nanoStores.ts`. In that
file, we import from the `nanostores` package. Very similar name to the
file itself.

The error only appears on Windows and macOS; I imagine those systems both
resolve `nanostores` to the like-named file before resolving to the
package.

The solution is simple - rename `nanoStores.ts`. It's now
`queueNanoStore.ts`.


## Related Tickets & Documents

https://discord.com/channels/1020123559063990373/1155434451979993140

2023-09-25 12:47:05 +05:30
06193ddbe8 Merge branch 'main' into fix/ui/fix-circular-dep 2023-09-25 12:45:01 +05:30
ce5122f87c Add installer support for ip-adapters (#4677)
## What type of PR is this? (check all applicable)

- [X] Feature


## Have you discussed this change with the InvokeAI team?
- [X] Yes

      
## Have you updated all relevant documentation?
- [X] Yes

## Description

This PR adds support for selecting and installing IP-Adapters at
configure time. The user is offered the four existing InvokeAI IP
Adapters in the UI as shown below. The matching image encoders are
selected and installed behind the scenes. That is, if the user selects
one of the three sd15 adapters, then the SD encoder will be installed.
If they select the sdxl adapter, then the SDXL encoder will be
installed.
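
A rough sketch of that pairing (illustrative names only; the repo IDs match the InvokeAI-hosted models listed elsewhere in this log):

```python
# Hypothetical sketch of the adapter-to-encoder pairing described above.
ENCODER_FOR_ADAPTER = {
    "InvokeAI/ip_adapter_sd15": "InvokeAI/ip_adapter_sd_image_encoder",
    "InvokeAI/ip_adapter_plus_sd15": "InvokeAI/ip_adapter_sd_image_encoder",
    "InvokeAI/ip_adapter_plus_face_sd15": "InvokeAI/ip_adapter_sd_image_encoder",
    "InvokeAI/ip_adapter_sdxl": "InvokeAI/ip_adapter_sdxl_image_encoder",
}

def encoders_to_install(selected_adapters):
    # A set de-duplicates the encoder shared by the three sd15 adapters.
    return {ENCODER_FOR_ADAPTER[name] for name in selected_adapters}
```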


![image](https://github.com/invoke-ai/InvokeAI/assets/111189/19f46401-99fb-4f7b-9a5e-8f2efd0a5b77)

Note that the automatic selection of the encoder does not work when the
installer is run in headless mode. I may be able to fix that soon, but
I'm out of time today.
2023-09-24 23:29:57 -04:00
43ebd68313 Merge branch 'main' into install/install-ip-adapters 2023-09-24 23:19:25 -04:00
ec19fcafb1 fix(ui): fix circular dependency
This is actually a platform-specific issue. `madge` is complaining about a circular dependency on a single file - `invokeai/frontend/web/src/features/queue/store/nanoStores.ts`. In that file, we import from the `nanostores` package. Very similar name to the file itself.

The error only appears on Windows and macOS; I imagine those systems both resolve `nanostores` to the like-named file before resolving to the package.

The solution is simple - rename `nanoStores.ts`. It's now `queueNanoStore.ts`.
2023-09-25 10:45:38 +10:00
6fcc7d4c4b Re-enable button for seeds set to zero
Change the statement to explicitly look for null and undefined so it doesn't fail to re-enable the button on images with seeds set to zero.
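
A minimal sketch of the pitfall in Python terms (the real fix is in the UI TypeScript; the function name is hypothetical):

```python
def can_recall_seed(seed):
    # Buggy: a truthiness check like `if seed:` treats a legitimate seed
    # of 0 as missing, leaving the button disabled.
    # Fixed: test explicitly for an absent value.
    return seed is not None
```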
2023-09-25 10:33:35 +10:00
912087e4dc blackify 2023-09-24 19:00:38 -04:00
593fb95213 ip_adapter_sd15 & its encoder will now be installed by default during headless install 2023-09-24 19:00:21 -04:00
6d821b32d3 fix(ui): fix hidden dropdowns
Notably in the change board modal.
2023-09-25 08:13:16 +10:00
297f96c16b add installer support for ip-adapters 2023-09-24 17:31:08 -04:00
0e53b27655 Removing logging import from api_api.py 2023-09-25 07:25:32 +10:00
35ae9f6e71 fix probing for ip_adapter folders (#4669)
## What type of PR is this? (check all applicable)

- [X] Bug Fix
- [ ] Optimization

## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No


## Description

ip_adapter models live in a folder containing the file
`image_encoder.txt` and a safetensors file. The load-time probe for new
models was detecting the files contained within the folder rather than
the folder itself, and so models.yaml was not getting correctly updated.
This fixes the issue.
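
A minimal sketch of the folder-level check described above (the function name is hypothetical; the marker-file layout is from the description):

```python
from pathlib import Path

def is_ip_adapter_folder(path: Path) -> bool:
    # Probe the folder itself, not the files inside it: an IP-Adapter model
    # is a directory holding `image_encoder.txt` plus a safetensors file.
    return (
        path.is_dir()
        and (path / "image_encoder.txt").exists()
        and any(child.suffix == ".safetensors" for child in path.iterdir())
    )
```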

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-09-24 15:45:46 -04:00
a1d9e6b871 Merge branch 'main' into bugfix/probe_ip_adapter 2023-09-24 15:39:43 -04:00
f05379f965 Enable v_prediction for sd-1 models (#4674)
## What type of PR is this? (check all applicable)

- [X] Feature

## Have you discussed this change with the InvokeAI team?
- [X] Yes

      
## Have you updated all relevant documentation?
- [X] Yes

## Description

It turns out that there are a few SD-1 models that use the
`v_prediction` SchedulerPredictionType. Examples here:
https://huggingface.co/zatochu/EasyFluff/tree/main . Previously we only
allowed the user to set the prediction type for sd-2 models. This PR
does three things:

1. Add a new checkpoint configuration file `v1-inference-v.yaml`. This
will install automatically on new installs, but for existing installs
users will need to update and then run `invokeai-configure` to get it.
2. Change the prompt on the web model install page to indicate that some
SD-1 models use the "v_prediction" method
3. Provide backend support for sd-1 models that use the v_prediction
method (see the sketch after this list).
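
A hedged sketch of the selection logic implied by the QA instructions below; the function name and signature are hypothetical, while the .yaml filenames are from this PR:

```python
from typing import Optional

def pick_legacy_config(base_model: str, prediction_type: Optional[str]) -> str:
    # A like-named .yaml next to a local checkpoint overrides all of this.
    if base_model == "sd-1" and prediction_type == "v_prediction":
        return "v1-inference-v.yaml"
    if base_model == "sd-1":
        return "v1-inference.yaml"
    if base_model == "sd-2":
        return "v2-inference-v.yaml"
    raise ValueError(f"unexpected base model: {base_model}")
```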

## Related Tickets & Documents


- Related Issue #
- Closes #4277 

## QA Instructions, Screenshots, Recordings

Update, run `invokeai-configure --yes --skip-sd --skip-support`, and
then use the web interface to install
https://huggingface.co/zatochu/EasyFluff/resolve/main/EasyFluffV11.2.safetensors
with the prediction type set to "v_prediction." Check that the installed
model uses configuration `v1-inference-v.yaml`.

If "None" is selected from the install menu, check that SD-1 models
default to `v1-inference.yaml` and SD-2 default to
`v2-inference-v.yaml`.

Also try installing a checkpoint at a local path if a like-named config
.yaml file is located next to it in the same directory. This should
override everything else and use the local path .yaml.

## Added/updated tests?

- [ ] Yes
- [X] No
2023-09-24 15:24:36 -04:00
e34e6d6e80 enable v_prediction for sd-1 models 2023-09-24 12:22:29 -04:00
86cb53342a fix probing for ip_adapter folders 2023-09-23 22:32:03 -04:00
e3de996525 Rename getLogger() to get_logger() (#4275)
## What type of PR is this? (check all applicable)

- [X] Refactor
## Have you discussed this change with the InvokeAI team?

- [ ] Yes
- [X] No, because: trivial fix

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No

## Description

It annoyed me that the class method to get the invokeai logger was
`InvokeAILogger.getLogger()`. We do not use camelCase anywhere else, so
this PR renames the method to `get_logger()`.
2023-09-23 14:56:23 -07:00
25a71a1791 Merge branch 'main' into refactor/rename-get-logger 2023-09-23 14:49:07 -07:00
d16583ad1c Unpin Safetensors dependencies, safeguard against breaking changes 2023-09-23 10:23:05 -04:00
46db1dd18f feat(ui): allow numbers to connect to strings (#4653)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Description

Pydantic handles the casting so this is always safe.

Also de-duplicate some validation logic code that was needlessly
duplicated.
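
A minimal illustration of the casting claim, assuming pydantic v1 as used at the time (the model name is hypothetical):

```python
from pydantic import BaseModel

class StringInput(BaseModel):
    value: str

# Pydantic coerces the incoming number to a string, so a number-to-string
# connection can never produce an invalid value.
assert StringInput(value=42).value == "42"
```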
2023-09-23 10:09:59 +05:30
4c9344b0ee Merge branch 'main' into feat/ui/allow-number-to-string 2023-09-22 21:02:28 -05:00
cba31efd78 fix(ui): do not process gallery logic for image primitive node 2023-09-23 10:02:55 +10:00
4d01b5c0f2 fix(ui): hide workflow and gallery checkboxes on image primitive
This node doesn't actually *save* the image, so these checkboxes do nothing on it.
2023-09-23 10:02:55 +10:00
e02af8f518 fix(ui): fix node glow styling 2023-09-23 10:02:55 +10:00
c485cf568b feat: Add Color PreProcessor to Linear UI 2023-09-22 17:30:12 -04:00
51451cbf21 fix: Handle cases where tile size > image size 2023-09-22 17:30:12 -04:00
0363a06963 feat: Add Color Map Preprocessor 2023-09-22 17:30:12 -04:00
cc280cbef1 feat(ui): refactor informational popover
- Change translations to use arrays of paragraphs instead of a single paragraph.
- Change component to accept a `feature` prop to identify the feature which the popover describes.
- Add optional `wrapperProps`: passed to the wrapper element, allowing more flexibility when using the popover
- Add optional `popoverProps`: passed to the `<Popover />` component, allowing for overriding individual instances of the popover's props
- Move definitions of features and popover settings to `invokeai/frontend/web/src/common/components/IAIInformationalPopover/constants.ts`
  - Add some type safety to the `feature` prop
  - Edit `POPOVER_DATA` to provide `image`, `href`, `buttonLabel`, and any popover props. The popover props are applied to all instances of the popover for the given feature. Note that the component prop `popoverProps` will override settings here.
- Remove the popover's arrow. Because the popover is wrapping groups of components, sometimes the arrow ends up pointing to nothing, which looks kinda janky. I've just removed the arrow entirely, but feel free to add it back if you think it looks better.
- Use a `link` variant button with external link icon to better communicate that clicking the button will open a new tab.
- Default the link button label to "Learn More" (if a label is provided, that will be used instead)
- Make default position `top`, but set manually set some to `right` - namely, anything with a dropdown. This prevents the popovers from obscuring or being obscured by the dropdowns.
- Do a bit more restructuring of the Popover component itself, and how it is integrated with other components
- More ref forwarding
- Make the open delay 1s
- Set the popovers to use lazy mounting (eg do not mount until the user opens the thing)
- Update the verbiage for many popover items and add missing dynamic prompts stuff
2023-09-22 13:23:26 -04:00
7544eadd48 fix(nodes): do not use double-underscores in cache service 2023-09-22 13:15:03 -04:00
7d683b4db6 fix(nodes): do not disable invocation cache delete methods
When the runtime disabled flag is on, do not skip the delete methods; skipping them could lead to a hit on a missing resource.

Do skip them when the cache size is 0, because the user cannot change this (must restart app to change it).
2023-09-22 13:15:03 -04:00
60b3c6a201 feat(nodes): provide board_id in image creation 2023-09-22 10:11:20 -04:00
88c8cb61f0 feat(ui): update linear UI to use new board field on save_image
- No longer need to make network request to add image to board after it's finished - removed
- Update linear graphs & upscale graph to save image to the board
- Update autoSwitch logic so when image is generated we still switch to the right board
2023-09-22 10:11:20 -04:00
43fbac26df feat: move board logic to save_image node
- Remove the add-to-board node
- Create `BoardField` field type & add it to `save_image` node
- Add UI for `BoardField`
- Tighten up some loose types
- Make `save_image` node, in workflow editor, default to not intermediate
- Patch bump `save_image`
2023-09-22 10:11:20 -04:00
627444e17c Add images to a board through nodes 2023-09-22 10:11:20 -04:00
5601858f4f feat(ui): allow numbers to connect to strings
Pydantic handles the casting so this is always safe.

Also de-duplicate some validation logic code that was needlessly duplicated.
2023-09-22 21:51:08 +10:00
b152fbf72f Respect INVOKEAI_ prefix on environment variables (#4641)
## What type of PR is this? (check all applicable)
- [X] Bug Fix

## Have you discussed this change with the InvokeAI team?
- [X] Yes

## Have you updated all relevant documentation?
- [ ] Yes
- [X] N/A


## Description

Pydantic was misconfigured and was not picking up the INVOKEAI_ prefix
on environment variables. Therefore, if the system had an unrelated
environment variable such as `version`, this caused pydantic validation
errors.
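
A minimal sketch of the fix, assuming pydantic v1's `BaseSettings` (the class and field names are illustrative):

```python
from pydantic import BaseSettings

class AppSettings(BaseSettings):
    version: bool = False

    class Config:
        # With the prefix respected, only INVOKEAI_VERSION is read here;
        # an unrelated `version` environment variable is ignored.
        env_prefix = "INVOKEAI_"
```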

## Related Tickets & Documents


- Related Issue #
- Closes #4098 

## QA Instructions, Screenshots, Recordings


## Added/updated tests?

- [X] Yes — regression tests run; new regression test added.
2023-09-22 02:31:19 +05:30
f95111772a Merge branch 'main' into bugfix/config-env-variables 2023-09-22 02:22:12 +05:30
14ce7cf09c fix circular dep with recallAllParameters (#4640)
* break out separate functions for preselected images, remove recallAllParameters dep as it causes circular logic with model being set

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-09-21 15:08:32 -04:00
28a1a6939f add regression test 2023-09-21 12:43:34 -04:00
6d2b4013f8 Respect INVOKEAI_ prefix on environment variables 2023-09-21 12:37:27 -04:00
ca7a7b57bb clear out loras before using metadata loras 2023-09-21 11:36:30 -04:00
c5d0e65a24 When an exception happens within the session processor loop, record a… (#4638)
…nd move on

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-09-21 11:32:57 -04:00
6cc7b55ec5 Add wait on exception 2023-09-21 11:18:57 -04:00
883e9973ec When an exception happens within the session processor loop, record and move on 2023-09-21 11:10:25 -04:00
9e7d829906 fix(ui): do not reset node outputs on queue item completed (#4635)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

fix(ui): do not reset node outputs on queue item completed
2023-09-21 23:57:56 +10:00
456a0a59e0 fix(ui): do not reset node outputs on queue item completed 2023-09-21 09:51:11 -04:00
4f2bf7e7e8 fix(ui): workflow editor side panel remembers positioning
closes #4402
2023-09-21 09:50:39 -04:00
77e93888cf fix(ui): do not poll for cache status unless connected, processor is running and the queue is not empty 2023-09-21 09:45:52 -04:00
fa54974bff feat(nodes): invocation cache reports disabled if max size is 0 2023-09-21 09:45:52 -04:00
7ac99d6bc3 feat(nodes): add enable, disable, status to invocation cache
- New routes to clear, enable, disable and get the status of the cache
- Status includes hits, misses, size, max size, enabled
- Add client cache queries and mutations, abstracted into hooks
- Add invocation cache status area (next to queue status) w/ buttons
2023-09-21 09:45:52 -04:00
aa82f9360c fix(ui): passing Promise into ClipboardItem to make it work in Safari
throwing Error in getBaseLayerBlob instead of returning nil
using copyBlobToClipboard for both Canvas and Text2Image clipboard functionality
2023-09-21 23:36:05 +10:00
5aefa49d7d fix(ui): popover ref & wrapping of children (wip) 2023-09-21 09:33:32 -04:00
b6e9cd4fe2 feat(ui): show cursor on drag previews 2023-09-21 09:29:57 -04:00
6d1057c560 fix(ui): skip firing collision detection on dnd when droppable scrolled out
Requires some additional logic in the collision detection algorithm.

Closes #4621
2023-09-21 09:29:57 -04:00
b4790002c7 Add python-socketio depencency (mandatory) 2023-09-21 08:57:41 -04:00
e02700a782 Fix/nodes/clipskip metadata optional (#4628)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

Fixes failure on SDXL metadata node, introduced by me in #4625
2023-09-21 10:34:00 +05:30
83ce8ef1ec fix(nodes): clipskip metadata entry is optional 2023-09-21 14:55:21 +10:00
19e487b5ee feat(ui): enable control adapters on image drop (#4627)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

## Description

[feat(ui): enable control adapters on image
drop](aa4b56baf2)

- Dropping/uploading an image on control adapter enables it (controlnet
& ip adapter)
- The image components are always enabled to allow this
2023-09-21 10:25:04 +05:30
aa4b56baf2 feat(ui): enable control adapters on image drop
- Dropping/uploading an image on control adapter enables it (controlnet & ip adapter)
- The image components are always enabled to allow this
2023-09-21 14:50:55 +10:00
d3a2be69f1 feat(ui): hide clipskip on sdxl; do not add to metadata (#4625)
Hide it until #4624 is ready

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission



## Description

feat(ui): hide clipskip on sdxl; do not add to metadata
Hide it until #4624 is ready

## Related Tickets & Documents


- Closes #4618
2023-09-21 09:44:13 +05:30
02c087ee37 feat(ui): hide clipskip on sdxl; do not add to metadata
Hide it until #4624 is ready
2023-09-21 14:10:44 +10:00
cab8d9bb20 fix(ui): add control adapters to canvas coherence pass (#4623)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Description

fix(ui): add control adapters to canvas coherence pass

## Related Tickets & Documents


- Closes #4619
- Closes #4589 

## QA Instructions, Screenshots, Recordings

I cannot figure out how to get the CLIP Vision model installed, but I can
confirm that the graph is correct, because I get a Model Not Found error
that references this model when invoking with IP adapter enabled.
2023-09-21 09:34:02 +05:30
28e6a7139b fix(ui): add control adapters to canvas coherence pass 2023-09-21 13:07:15 +10:00
1625854eaf fix(nodes): fix ip-adapter field positioning on workflow editor 2023-09-20 21:52:29 -04:00
f87b042162 feat(nodes): Center pasted nodes at mouse location (#4595)
* Initial commit.  Feature works, but code might need some cleanup

* Cleaned up diff

* Made mousePosition an XYPosition again so it's nicely typed

* Fixed yarn issues

* Paste now properly takes node width/height into account when pasting

* feat(ui): use react's types in the `onMouseMove` `reactflow` handler

* feat(ui): use refs to access `reactflow`'s DOM elements

* feat(ui): use a ref to store cursor position in nodes

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-09-21 11:16:15 +10:00
183e2c3ee0 fix(queue): fix duplicate queue item status events 2023-09-20 20:28:31 -04:00
098d506b95 Update accelerate to .23 2023-09-20 20:20:06 -04:00
7aa33c352b Update Diffusers to .21 2023-09-20 20:20:06 -04:00
bf62553150 (minor) Update documentation to reflect that a bug was fixed in InvokeAI/ip_adapter_sdxl_vit_h by e178288fb6 2023-09-20 20:18:33 -04:00
2b08d9e53b feat(ui): disable queue-related buttons when disconnected 2023-09-20 20:07:50 -04:00
8954953eca fix(ui): no duplicate network requests on app startup 2023-09-20 20:07:50 -04:00
eb2fcbe28a chore: flake8 2023-09-21 10:00:17 +10:00
e78b36a9f7 feat(ui): render input components for polymorphic fields
Polymorphic fields now render the appropriate input component for their base type.

For example, float polymorphics will render the number input box.

You no longer need to specify ui_type to force it to display.

TODO: The UI *may* break if a list is provided as the default value for a polymorphic field.
2023-09-21 10:00:17 +10:00
144ede031e feat(nodes): remove ui_type overrides for polymorphic fields 2023-09-21 10:00:17 +10:00
8ca37bba33 Update CONFIGURATION.md (#4610)
Fixed typo missing backtick

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission
2023-09-21 09:48:06 +10:00
a608340c89 Merge branch 'main' into patch-2 2023-09-21 09:45:59 +10:00
7fecebf7db feat(ui): add greyscale invoke logo to invoke button when shown as icon 2023-09-20 19:30:17 -04:00
b915d74127 Remove fastapi-socketio dependency, doesn't really do much for us and… (#4552)
* Remove fastapi-socketio dependency, doesn't really do much for us and isn't well maintained

* Run python black

* Remove fastapi_socketio import

* Add __app as class variable in case we ever need it later

* Run isort

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-09-20 22:30:01 +00:00
6ec347bd41 set default for informational popups to be disabled (#4611)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-09-20 18:12:50 -04:00
e54843acc9 Merge branch 'main' into remove-tooltip-default 2023-09-20 18:04:08 -04:00
0960518088 add techjedi's database maintenance script 2023-09-20 17:46:49 -04:00
21de74fac4 set default for informational popups to be disabled 2023-09-20 17:43:22 -04:00
8ce9b6c51e Update CONFIGURATION.md
Fixed typo missing backtick
2023-09-20 17:33:04 -04:00
b64ade586d feature: support TAESD - Tiny Autoencoder for Stable Diffusion (#4316)
[TAESD - Tiny Autoencoder for Stable
Diffusion](https://github.com/madebyollin/taesd) - is a tiny VAE that
provides significantly better results than my single-multiplication hack
but is still very fast.

The entire TAESD model weights are under 10 MB!

This PR requires diffusers 0.20:
- [x] #4311 
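
For context, a hedged sketch of TAESD as a drop-in VAE using plain diffusers (assumes `AutoencoderTiny` from diffusers 0.20 and a standard SD 1.5 pipeline ID for illustration; this is not InvokeAI's own integration):

```python
from diffusers import AutoencoderTiny, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Swap the full-size VAE for the tiny (under 10 MB) TAESD autoencoder.
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd")
```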

## To Do

Test with
- [x] SD 1.x
- [ ] SD 2.x: #4415 
- [x] SDXL

## Have you discussed this change with the InvokeAI team?
- See [TAESD Invocation
API](https://discord.com/channels/1020123559063990373/1137857402453119166)
      
## Have you updated all relevant documentation?
- [ ] No


## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

Should be able to import these models:
- [madebyollin/taesd](https://huggingface.co/madebyollin/taesd)
- [madebyollin/taesdxl](https://huggingface.co/madebyollin/taesdxl)

and use them as VAE.


## Added/updated tests?

- [x] Some. There are new tests for VaeFolderProbe based on VAE
configurations, but no tests that require the full model weights.
2023-09-20 17:23:20 -04:00
3c44a74ba5 Merge branch 'main' into feat/taesd 2023-09-20 17:13:11 -04:00
24d0901d8e wrap control net button with div to add width (#4608)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-09-20 16:02:48 -04:00
b1b5f70ea6 Merge branch 'main' into feat/taesd 2023-09-20 12:54:17 -07:00
6392098961 lint 2023-09-20 12:53:25 -07:00
2c39aec22d test(model management): test VaeFolderProbe 2023-09-20 12:48:59 -07:00
d066bc6d19 wrap control net button with div to add width 2023-09-20 15:44:15 -04:00
e487bcd0f7 feat(model management): guess whether a VAE is for SDXL based on its name 2023-09-20 12:07:12 -07:00
e0f8274f49 feat(model management): guess whether a VAE is for SDXL based on its name 2023-09-20 12:06:55 -07:00
69e3513e90 add missing UTILITIES.md (#4607)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission

This is a doc file that was missing from PR #4587. Since that PR was
already merged, I’m pushing it in now.
2023-09-20 11:21:43 -07:00
7e706f02cb add missing UTILITIES.md 2023-09-20 14:19:27 -04:00
41dad2013a [Feature] Command-line script for viewing PNG metadata (#4587)
## What type of PR is this? (check all applicable)

- [X] Feature

## Have you discussed this change with the InvokeAI team?
- [X] No, because it is trivial

      
## Have you updated all relevant documentation?
- [X] Yes -- added a new page listing all the command-line scripts and
their most useful options.

## Description

InvokeAI version 2.3 had a script called `invokeai-metadata` that
accepted a list of png images and printed out JSON-formatted embedded
metadata. I used to use the script for sorting and tagging images
outside of the InvokeAI Web UI framework, and I think people might still
find it useful.

This script stopped working in 3.0 and I didn't notice that until just
now. This PR restores it to a functional state.

## Related Tickets & Documents

None
2023-09-20 14:17:00 -04:00
3f554d6824 Merge branch 'main' into feat/prettyprint-metadata 2023-09-20 14:06:47 -04:00
202c5a48c6 Merge branch 'main' into feat/prettyprint-metadata 2023-09-20 14:06:23 -04:00
2d71f6f4b8 add documentation 2023-09-20 13:49:29 -04:00
0420874f56 reimplement the old invokeai-metadata command 2023-09-20 13:49:29 -04:00
f222b871e9 Merge remote-tracking branch 'origin/main' into feat/taesd
# Conflicts:
#	invokeai/backend/model_management/model_probe.py
2023-09-20 10:46:55 -07:00
8b8d589033 (wip) add informational popover base component and sample (#4522)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description
Adds a new common component `IAIInformationPopover` that composes JSX to
be rendered within a popover as a tooltip. We were not able to use the
`Tooltip` component provided by chakra because you cannot interact with
elements within those (at least not that I could get working).

This is just a sample over the positive prompt. We need content from
@hipsterusername and @Millu before we can roll this out.

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-09-20 13:37:12 -04:00
f4c895257a Merge branch 'main' into maryhipp/informational-popover 2023-09-20 13:32:06 -04:00
10af5a26f2 update component to not use selectFromResult 2023-09-20 13:31:50 -04:00
1088adeb0a Merge branch 'main' into maryhipp/informational-popover 2023-09-20 13:28:22 -04:00
ad49380cd1 restore text of Invoke button (#4606)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-09-20 13:13:28 -04:00
b2fe24c401 restore text of Invoke button 2023-09-20 13:07:42 -04:00
b128db1d58 Merge branch 'main' into maryhipp/informational-popover 2023-09-20 12:38:36 -04:00
f7f0630d97 feat(backend): selective invalidation for invocation cache (#4597)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

## Description

This change enhances the invocation cache logic to delete cache entries
when the resources to which they refer are deleted.

For example, a cached output may refer to "some_image.png". If that
image is deleted, and this particular cache entry is later retrieved by
a node, that node's successors will receive references to the now
non-existent "some_image.png". When they attempt to use that image, they
will fail.

To resolve this, we need to invalidate the cache when the resources to
which it refers are deleted. Two options:
- Invalidate the whole cache on every image/latents/etc delete
- Selectively invalidate cache entries when their resources are deleted

Node outputs can be any shape, with any number of resource references in
arbitrarily nested pydantic models. Traversing that structure to
identify resources is not trivial.

But invalidating the whole cache is a bit heavy-handed. It would be nice
to be more selective.

Simple solution:
- Invocation outputs' resource references are always string identifiers
- like the image's or latents' name
- Invocation outputs can be stringified, which includes said identifiers
- When the invocation is cached, we store the stringified output
alongside the "live" output classes
- When a resource is deleted, pass its identifier to the cache service,
which can then invalidate any cache entries that refer to it

The images and latents storage services have been outfitted with
`on_deleted()` callbacks, and the cache service registers itself to
handle those events. This logic was copied from `ItemStorageABC`.

`on_changed()` callbacks are also added to the images and latents
services, though these are not currently used. Just following the
existing pattern.
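
A hedged sketch of that callback wiring; the class and method shapes are illustrative, not the actual service interfaces:

```python
class ImageStorage:
    def __init__(self):
        self._on_deleted_callbacks = []

    def on_deleted(self, callback):
        self._on_deleted_callbacks.append(callback)

    def delete(self, image_name: str):
        # ... delete the file, then notify listeners ...
        for callback in self._on_deleted_callbacks:
            callback(image_name)

class InvocationCache:
    def __init__(self, images: ImageStorage):
        self._entries = {}  # key -> (output, stringified_output)
        images.on_deleted(self._delete_by_match)

    def _delete_by_match(self, resource_name: str):
        # Invalidate entries whose stringified output mentions the resource.
        stale = [k for k, (_, text) in self._entries.items() if resource_name in text]
        for key in stale:
            del self._entries[key]
```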

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

Reproduce the issue on main:
- Create a graph in workflow editor with two connected resize nodes
- Add an image to the first
- Enable cache on both
- Run the graph
- Clear Intermediates (in settings)
- Disable cache on the *second* node
- Run the graph, it should fail

Switch to the PR branch and start over, doing the exact same steps. You
shouldn't get any errors.

Example graph to start with:

![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/c2f0f170-fff4-44f8-8d56-2d8b07ef6440)


## Added/updated tests?

- [~] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_
2023-09-20 11:09:39 -04:00
5075e9c899 fix more merge conflicts 2023-09-20 10:56:12 -04:00
3c1549cf5c Merge branch 'main' into fix/nodes/selective-cache-invalidation 2023-09-20 10:41:23 -04:00
9faa53ceb1 feat(ui): consolidate advanced params (#4599) 2023-09-21 00:19:31 +10:00
32672cfeda ui: misc small fixes (#4600)
* feat(ui): tweak queue UI components

* fix(ui): manually dispatch queue status query on queue item status change

RTK Query occasionally aborts the query that occurs when the tag is invalidated, especially if multiples of them fire in rapid succession.

This resulted in the queue status and progress bar sometimes not resetting when the queue finishes its last item.

Manually dispatch the query now to get around this. Eventually we should probably move this to a socket so we don't need to keep responding to socket events with HTTP requests. Just send it directly via socket.

* chore(ui): remove errant console.logs

* fix(ui): do not accumulate node outputs in outputs area

* fix(ui): fix merge issue

---------

Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
2023-09-21 00:15:39 +10:00
b5266f89ad fix(ui): fallback to null for invalid metadata values (#4575) 2023-09-20 14:02:58 +00:00
7a3b467ce0 fixed merge conflicts 2023-09-20 10:00:11 -04:00
bdfdf854fc fix: canvas not working on queue
Add `batch_id` to outbound events. This necessitates adding it to both `InvocationContext` and `InvocationQueueItem`. This allows the canvas to receive images.

When the user enqueues a batch on the canvas, it is expected that all images from that batch are directed to the canvas.

The simplest, most flexible solution is to add the `batch_id` to the invocation context-y stuff. Then everything knows what batch it came from, and we can have the canvas pick up images associated with its list of canvas `batch_id`s.
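
A minimal sketch of the routing this enables (the event shape and callback are hypothetical):

```python
def on_invocation_complete(event: dict, canvas_batch_ids: set, add_to_canvas) -> None:
    # Events now carry batch_id, so the canvas claims only its own images.
    if event.get("batch_id") in canvas_batch_ids:
        add_to_canvas(event["image_name"])
```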
2023-09-20 09:57:10 -04:00
1c38cce16d feat(ui): add confirmation dialog box to clear queue button 2023-09-20 09:26:55 -04:00
4cdca45228 feat(api): add route to clear invocation cache 2023-09-20 22:53:25 +10:00
bfed08673a fix(test): fix tests 2023-09-20 18:40:40 +10:00
c1aa2b82eb feat(nodes): default node_cache_size in MemoryInvocationCache to 0 (fully disabled) 2023-09-20 18:40:24 +10:00
0a09f84b07 feat(backend): selective invalidation for invocation cache
This change enhances the invocation cache logic to delete cache entries when the resources to which they refer are deleted.

For example, a cached output may refer to "some_image.png". If that image is deleted, and this particular cache entry is later retrieved by a node, that node's successors will receive references to the now non-existent "some_image.png". When they attempt to use that image, they will fail.

To resolve this, we need to invalidate the cache when the resources to which it refers are deleted. Two options:
- Invalidate the whole cache on every image/latents/etc delete
- Selectively invalidate cache entries when their resources are deleted

Node outputs can be any shape, with any number of resource references in arbitrarily nested pydantic models. Traversing that structure to identify resources is not trivial.

But invalidating the whole cache is a bit heavy-handed. It would be nice to be more selective.

Simple solution:
- Invocation outputs' resource references are always string identifiers - like the image's or latents' name
- Invocation outputs can be stringified, which includes said identifiers
- When the invocation is cached, we store the stringified output alongside the "live" output classes
- When a resource is deleted, pass its identifier to the cache service, which can then invalidate any cache entries that refer to it

The images and latents storage services have been outfitted with `on_deleted()` callbacks, and the cache service registers itself to handle those events. This logic was copied from `ItemStorageABC`.

`on_changed()` callbacks are also added to the images and latents services, though these are not currently used. Just following the existing pattern.
2023-09-20 18:26:47 +10:00
b7938d9ca9 feat: queued generation (#4502)
* fix(config): fix typing issues in `config/`

`config/invokeai_config.py`:
- use `Optional` for things that are optional
- fix typing of `ram_cache_size()` and `vram_cache_size()`
- remove unused and incorrectly typed method `autoconvert_path`
- fix types and logic for `parse_args()`, in which `InvokeAIAppConfig.initconf` *must* be a `DictConfig`, but function would allow it to be set as a `ListConfig`, which presumably would cause issues elsewhere

`config/base.py`:
- use `cls` for first arg of class methods
- use `Optional` for things that are optional
- fix minor type issue related to setting of `env_prefix`
- remove unused `add_subparser()` method, which calls `add_parser()` on an `ArgumentParser` (a method only available on the `_SubParsersAction` object, which is returned from `ArgumentParser.add_subparsers()`)

* feat: queued generation and batches

Due to a very messy branch with broad addition of `isort` on `main` alongside it, some git surgery was needed to get an agreeable git history. This commit represents all of the work on queued generation. See PR for notes.

* chore: flake8, isort, black

* fix(nodes): fix incorrect service stop() method

* fix(nodes): improve names of a few variables

* fix(tests): fix up tests after changes to batches/queue

* feat(tests): add unit tests for session queue helper functions

* feat(ui): dynamic prompts is always enabled

* feat(queue): add queue_status_changed event

* feat(ui): wip queue graphs

* feat(nodes): move cleanup til after invoker startup

* feat(nodes): add cancel_by_batch_ids

* feat(ui): wip batch graphs & UI

* fix(nodes): remove `Batch.batch_id` from required

* fix(ui): cleanup and use fixedCacheKey for all mutations

* fix(ui): remove orphaned nodes from canvas graphs

* fix(nodes): fix cancel_by_batch_ids result count

* fix(ui): only show cancel batch tooltip when batches were canceled

* chore: isort

* fix(api): return `[""]` when dynamic prompts generates no prompts

Just a simple fallback so we always have a prompt.

* feat(ui): dynamicPrompts.combinatorial is always on

There seems to be little purpose in using the combinatorial generation for dynamic prompts. I've disabled it by hiding it from the UI and defaulting combinatorial to true. If we want to enable it again in the future it's straightforward to do so.

* feat: add queue_id & support logic

* feat(ui): fix upscale button

It prepends the upscale operation to the queue

* feat(nodes): return queue item when enqueuing a single graph

This facilitates one-off graph async workflows in the client.

* feat(ui): move controlnet autoprocess to queue

* fix(ui): fix non-serializable DOMRect in redux state

* feat(ui): QueueTable performance tweaks

* feat(ui): update queue list

Queue items expand to show the full queue item. Just as JSON for now.

* wip threaded session_processor

* feat(nodes,ui): fully migrate queue to session_processor

* feat(nodes,ui): add processor events

* feat(ui): ui tweaks

* feat(nodes,ui): consolidate events, reduce network requests

* feat(ui): cleanup & abstract queue hooks

* feat(nodes): optimize batch permutation

Use a generator to do only as much work as is needed.

Previously, though we only ended up creating exactly as many queue items as was needed, there was still some intermediary work that calculated *all* permutations. When that number was very high, the system had a very hard time and used a lot of memory.

The logic has been refactored to use a generator. Additionally, the batch validators are optimized to return early and use less memory.

* feat(ui): add seed behaviour parameter

This dynamic prompts parameter allows the seed to be randomized per prompt or per iteration:
- Per iteration: Use the same seed for all prompts in a single dynamic prompt expansion
- Per prompt: Use a different seed for every single prompt

"Per iteration" is appropriate for exploring a the latents space with a stable starting noise, while "Per prompt" provides more variation.

* fix(ui): remove extraneous random seed nodes from linear graphs

* fix(ui): fix controlnet autoprocess not working when queue is running

* feat(queue): add timestamps to queue status updates

Also show execution time in queue list

* feat(queue): change all execution-related events to use the `queue_id` as the room, also include `queue_item_id` in InvocationQueueItem

This allows for much simpler handling of queue items.

* feat(api): deprecate sessions router

* chore(backend): tidy logging in `dependencies.py`

* fix(backend): respect `use_memory_db`

* feat(backend): add `config.log_sql` (enables sql trace logging)

* feat: add invocation cache

Supersedes #4574

The invocation cache provides simple node memoization functionality. Nodes that use the cache are memoized and not re-executed if their inputs haven't changed. Instead, the stored output is returned.

## Results

This feature provides anywhere from a significant to a massive performance improvement.

The improvement is most marked on large batches of generations where you only change a couple of things (e.g. a different seed or prompt for each iteration) and on low-VRAM systems, where skipping an extraneous model load is a big deal.

## Overview

A new `invocation_cache` service is added to handle the caching. There's not much to it.

All nodes now inherit a boolean `use_cache` field from `BaseInvocation`. This is a node field and not a class attribute, because specific instances of nodes may want to opt in or out of caching.

The recently-added `invoke_internal()` method on `BaseInvocation` is used as an entrypoint for the cache logic.

To create a cache key, the invocation is first serialized using pydantic's provided `json()` method, skipping the unique `id` field. Then python's very fast builtin `hash()` is used to create an integer key. All implementations of `InvocationCacheBase` must provide a class method `create_key()` which accepts an invocation and outputs a string or integer key.
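
A hedged sketch of that key recipe, assuming pydantic v1's `.json()` (the exact field-exclusion details may differ from the real implementation):

```python
def create_key(invocation) -> int:
    serialized = invocation.json(exclude={"id"})  # skip the unique id field
    return hash(serialized)  # python's fast builtin hash over the JSON string
```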

## In-Memory Implementation

An in-memory implementation is provided. In this implementation, the node outputs are stored in memory as python classes. The in-memory cache does not persist across application restarts.

Max node cache size is added as `node_cache_size` under the `Generation` config category.

It defaults to 512 - this number is up for discussion, but given that these are relatively lightweight pydantic models, I think it's safe to up this even higher.

Note that the cache isn't storing the big stuff - tensors and images are stored on disk, and outputs include only references to them.

## Node Definition

The default for all nodes is to use the cache. The `@invocation` decorator now accepts an optional `use_cache: bool` argument to override the default of `True`.

Non-deterministic nodes, however, should set this to `False`. Currently, all random-stuff nodes, including `dynamic_prompt`, are set to `False`.

The field name `use_cache` is now effectively a reserved field name and possibly a breaking change if any community nodes use this as a field name. In hindsight, all our reserved field names should have been prefixed with underscores or something.

## One Gotcha

Leaf nodes probably want to opt out of the cache, because on a cache hit their outputs are not saved again.

If you run the same graph multiple times, you only end up with a single image output, because the image storage side-effects are in the `invoke()` method, which is bypassed if we have a cache hit.

## Linear UI

The linear graphs _almost_ just work, but due to the gotcha, we need to be careful about the final image-outputting node. To resolve this, a `SaveImageInvocation` node is added and used in the linear graphs.

This node is similar to `ImagePrimitive`, except it saves a copy of its input image, and has `use_cache` set to `False` by default.

This is now the leaf node in all linear graphs, and is the only node in those graphs with `use_cache == False` _and_ the only node with `is_intermediate == False`.

## Workflow Editor

All nodes now have a footer with a new `Use Cache [ ]` checkbox. It defaults to the value set by the invocation in its python definition, but can be changed by the user.

The workflow/node validation logic has been updated to migrate old workflows to use the new default values for `use_cache`. Users may still want to review the settings that have been chosen. In the event of catastrophic failure when running this migration, the default value of `True` is applied, as this is correct for most nodes.

Users should consider saving their workflows after loading them in and having them updated.

## Future Enhancements - Callback

A future enhancement would be to provide a callback to the `use_cache` flag that would be run as the node is executed to determine, based on its own internal state, if the cache should be used or not.

This would be useful for `DynamicPromptInvocation`, where the deterministic behaviour is determined by the `combinatorial: bool` field.

## Future Enhancements - Persisted Cache

Similar to how the latents storage is backed by disk, the invocation cache could be persisted to the database or disk. We'd need to be very careful about deserializing outputs, but it's perhaps worth exploring in the future.

* fix(ui): fix queue list item width

* feat(nodes): do not send the whole node on every generator progress

* feat(ui): strip out old logic related to sessions

Things like `isProcessing` are no longer relevant with queue. Removed them all & updated everything be appropriate for queue. May be a few little quirks I've missed...

* feat(ui): fix up param collapse labels

* feat(ui): click queue count to go to queue tab

* tidy(queue): update comment, query format

* feat(ui): fix progress bar when canceling

* fix(ui): fix circular dependency

* feat(nodes): bail on node caching logic if `node_cache_size == 0`

* feat(nodes): handle KeyError on node cache pop

* feat(nodes): bypass cache codepath if cache is disabled

more better no do thing

* fix(ui): reset api cache on connect/disconnect

* feat(ui): prevent enqueue when no prompts generated

* feat(ui): add queue controls to workflow editor

* feat(ui): update floating buttons & other incidental UI tweaks

* fix(ui): fix missing/incorrect translation keys

* fix(tests): add config service to mock invocation services

invoking requires access to `node_cache_size` in order to run

* optionally remove pause/resume buttons from queue UI

* option to disable prepending

* chore(ui): remove unused file

* feat(queue): remove `order_id` entirely, `item_id` is now an autoinc pk

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-09-20 15:09:24 +10:00
977e348a35 Update communityNodes.md with Prompt Tools & XY grid nodes(#4446)
* Update communityNodes.md

Adding Prompt Tools and XY grid nodes

* Update communityNodes.md

Added the new PromptStrength and PromptStrengthCombine Nodes

---------

Co-authored-by: Millun Atluri <Millu@users.noreply.github.com>
2023-09-20 14:24:55 +10:00
864f2270c3 feat: Add IP Adapter to InvokeAI (Node & Linear) (#4429)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description (edit by @blessedcoolant , @RyanJDick )

This PR adds support for IP-Adapters (a technique for image-based
prompts) in Invoke AI. Currently only available in the Node UI.

IP-Adapter Paper: [IP-Adapter: Text Compatible Image Prompt Adapter for
Text-to-Image Diffusion Models](https://arxiv.org/abs/2308.06721)
IP-Adapter reference code: https://github.com/tencent-ailab/IP-Adapter

In order to test, install the following models via the InvokeAI UI:

Image Encoders:

[InvokeAI/ip_adapter_sd_image_encoder](https://huggingface.co/InvokeAI/ip_adapter_sd_image_encoder)

[InvokeAI/ip_adapter_sdxl_image_encoder](https://huggingface.co/InvokeAI/ip_adapter_sdxl_image_encoder)

IP-Adapters:

[InvokeAI/ip_adapter_sd15](https://huggingface.co/InvokeAI/ip_adapter_sd15)

[InvokeAI/ip_adapter_plus_sd15](https://huggingface.co/InvokeAI/ip_adapter_plus_sd15)

[InvokeAI/ip_adapter_plus_face_sd15](https://huggingface.co/InvokeAI/ip_adapter_plus_face_sd15)

[InvokeAI/ip_adapter_sdxl](https://huggingface.co/InvokeAI/ip_adapter_sdxl)

Old instructions (for reference only):

> In order to test, you need to download and place the following models
in your InvokeAI models directory.
> 
> - SD 1.5 - https://huggingface.co/h94/IP-Adapter/tree/main/models -->
Download the models and the `image_encoder` folder to
`models/core/ip_adapters/sd-1`
> - SDXL - https://huggingface.co/h94/IP-Adapter/tree/main/sdxl_models
--> Download the models and the `image_encoder` folder to
`models/core/ip_adapters/sdxl`
> 
> This is only temporary. This needs to be handled differently. I
outlined them here.
https://github.com/invoke-ai/InvokeAI/pull/4429#issuecomment-1705776570

## Examples using this PR

### Image variations, no text prompt
Leftmost image in each row is original image used for input to
IP-Adapter. The other rows are example outputs with different seeds,
other parameters identical.

![ipadapter_invokai_example1](https://github.com/invoke-ai/InvokeAI/assets/303100/cae18b97-14a9-4499-8d87-f07faa8ad13a)







## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-09-19 14:31:08 -04:00
8b44d83859 yarn build 2023-09-19 14:03:22 -04:00
0b6315de71 Merge branch 'main' into feat/ip-adapter 2023-09-19 13:49:20 -04:00
578e682562 Merge branch 'main' into feat/taesd 2023-09-19 13:48:12 +10:00
92b49e45bb Address flake8 error. 2023-09-18 16:33:16 -04:00
b05b8ef677 Switch to using torch 2.0 attention for IP-Adapter (more memory-efficient). 2023-09-18 16:30:53 -04:00
382e2139bd Clear incompatible IP-Adapter when base model changes in the Linear UI. 2023-09-18 12:57:23 -04:00
d7ebe3f048 Merge branch 'maryhipp/informational-popover' of https://github.com/invoke-ai/InvokeAI into maryhipp/informational-popover 2023-09-18 11:03:06 -04:00
5c2bdf626b fix coherence copy 2023-09-18 11:03:02 -04:00
390a1c9fbb add in compositing settings header info popups 2023-09-18 11:01:43 -04:00
c46d9b8768 fix ts error in build 2023-09-18 10:31:50 -04:00
ef8d9843dd Merge branch 'main' into maryhipp/informational-popover 2023-09-18 10:16:16 -04:00
dc2e1a42bc add param negative conditioning tooltip 2023-09-18 09:12:03 -04:00
1869874433 chore(ui): lint 2023-09-18 16:01:20 +10:00
94f16b1c69 feat(ui): provide feedback when recalling invalid lora 2023-09-18 16:01:20 +10:00
cc0482ae8b feat(ui): simplify lora recall check 2023-09-18 16:01:20 +10:00
fdf9833c39 add toast 2023-09-18 16:01:20 +10:00
5a961bb58e first pass to recall LoRAs 2023-09-18 16:01:20 +10:00
627750eded Adding excludes to flake8 config 2023-09-18 15:10:04 +10:00
2a3909da94 isort: fix issues 2023-09-17 12:14:58 +12:00
e0dddbd38e chore: fix isort issues 2023-09-17 12:13:03 +12:00
231b7a5000 fix: Upload not working correctly on the ip Adapter image upload 2023-09-17 12:08:35 +12:00
b7773c9962 chore: black & lint fixes 2023-09-17 12:00:21 +12:00
11c501fc80 fix: Upload issue with the ip adapter image uploader 2023-09-17 11:58:15 +12:00
7be5743011 feat: Add IP Adapter Begin & End Percent to Linear UI 2023-09-17 11:53:05 +12:00
c48e648cbb Added per-step setting of IP-Adapter weights (for param easing, etc.) 2023-09-16 12:36:16 -07:00
29b4ddcc7f Merge branch 'feat/ip-adapter' of github.com:invoke-ai/InvokeAI into feat/ip-adapter 2023-09-16 09:32:41 -07:00
7ee13879e3 Added check in IP-Adapter to avoid begin/end step percent handling if use of IP-Adapter is already turned off due to potential clash with other cross attention control. 2023-09-16 09:29:50 -07:00
ced297ed21 Initial implementation of IP-Adapter "begin_step_percent" and "end_step_percent" for controlling on which steps IP-Adapter is applied in the denoising loop. 2023-09-16 08:24:12 -07:00
3e813ead1f chore: extract the adapter info initial state 2023-09-16 10:59:19 -04:00
820ec08e9a feat: Update Control Adapter Collapse active status to reflect IP Adapter 2023-09-16 10:59:19 -04:00
4dd289b337 feat: Handle IP Adapter Image being reset on being deleted. 2023-09-16 10:59:19 -04:00
b60b1e359e fix: Decrease the size of the IP Adapter Image Reset Button 2023-09-16 10:59:19 -04:00
208286e97a wip: Improve the IP Adapter UI 2023-09-16 10:59:19 -04:00
f7b64304ae wip: Add IP Adapter To Linear UI 2023-09-16 10:59:19 -04:00
834751e877 Merge branch 'main' into feat/ip-adapter 2023-09-16 07:06:46 +12:00
d94d4ef83f Missed Translations (#4529)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description
A few missed translations from the translation update.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example, having the text "closes #1234" would connect the current pull request to issue 1234. When we merge the pull request, GitHub will automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-09-16 06:54:29 +12:00
e7a10d310f Merge branch 'main' into maryhipp/informational-popover 2023-09-15 14:52:57 -04:00
682d6998bc Merge branch 'main' into moretranslation 2023-09-16 06:52:24 +12:00
2ce07a4730 popovers updates 2023-09-15 14:48:36 -04:00
dc9074f65d Unmasked default (#4553)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
Mask Edge was set as the default and was producing poor results. I've updated the default back to Unmasked.
2023-09-16 06:48:00 +12:00
45d5ab20ec remove individual popover components 2023-09-15 14:36:36 -04:00
b75c56768d Unmasked default 2023-09-15 13:52:11 -04:00
343df03a92 isort 2023-09-15 13:18:00 -04:00
b57acb7353 Merge branch 'main' into feat/ip-adapter 2023-09-15 13:15:25 -04:00
7bf7c16a5d Merge branch 'main' into maryhipp/informational-popover 2023-09-15 13:12:25 -04:00
56340c24c8 IP-Adapter Model Management (#4540)
Note: The target branch is `feat/ip-adapter`, not `main`. After a
cursory review here, I'll merge for an in-depth review as part of
https://github.com/invoke-ai/InvokeAI/pull/4429.

## Description

This branch adds model management support for IP-Adapter models. There
are a few notable/unusual aspects to how it is implemented:
- We have defined a model format that works better with our model
manager than the 'official' IP-Adapter repo, and will be hosting the
IP-Adapter models ourselves (See `invokeai/backend/ip_adapter/README.md`
for a description of the expected model formats.)
- The CLIP Vision models and IP-Adapter models are handled independently
in the model manager. The IP-Adapter model info has a reference to the
CLIP model that it is intended to be run with.
- The `BaseModelType.Any` field was added for CLIP Vision models, as
they don't have a clear 1-to-1 association with a particular base model.

## QA Instructions, Screenshots, Recordings

Install the following models via the InvokeAI UI:

Image Encoders:
- [InvokeAI/ip_adapter_sd_image_encoder](https://huggingface.co/InvokeAI/ip_adapter_sd_image_encoder)
- [InvokeAI/ip_adapter_sdxl_image_encoder](https://huggingface.co/InvokeAI/ip_adapter_sdxl_image_encoder)

IP-Adapters:
- [InvokeAI/ip_adapter_sd15](https://huggingface.co/InvokeAI/ip_adapter_sd15)
- [InvokeAI/ip_adapter_plus_sd15](https://huggingface.co/InvokeAI/ip_adapter_plus_sd15)
- [InvokeAI/ip_adapter_plus_face_sd15](https://huggingface.co/InvokeAI/ip_adapter_plus_face_sd15)
- [InvokeAI/ip_adapter_sdxl](https://huggingface.co/InvokeAI/ip_adapter_sdxl)
2023-09-15 12:42:02 -04:00
afe9756667 Merge branch 'main' into feat/taesd 2023-09-15 12:19:19 -04:00
ff3150a818 Update lora hotfix to new diffusers version(scale argument added) 2023-09-15 12:19:01 -04:00
fcea65770f added optional popovers for users to learn more about each setting 2023-09-15 10:37:05 -04:00
273271f091 Merge branch 'moretranslation' of https://github.com/mickr777/InvokeAI into moretranslation 2023-09-15 14:14:04 +10:00
54dc912c83 Revert some test Changes 2023-09-15 14:13:54 +10:00
571f50adf7 Merge branch 'main' into moretranslation 2023-09-15 14:06:26 +10:00
368bd6f778 Prettier Fixes 2023-09-15 14:04:28 +10:00
7481251127 More Translations and Fixes 2023-09-15 13:58:48 +10:00
16664da5b6 black 2023-09-14 23:49:02 -04:00
c104807201 Update list of supported IP-Adapters. 2023-09-14 23:43:19 -04:00
990ce9a1da Lookup IP-Adapter linked image encoder from disk instead of storing in model config metadata. 2023-09-14 23:06:57 -04:00
604fc006b1 fix(ui): construct openapi url from window.location.origin 2023-09-14 23:06:39 -04:00
5a42774fbe Update FEATURE_REQUEST.yml
Added some verbiage about making feature requests singular and focused.

Updated the placeholder to something more Invoke-y.
2023-09-14 22:19:03 -04:00
704e016f05 feat(ui): disable immutable redux check
The immutable and serializable checks for redux can cause substantial performance issues. The immutable check in particular is pretty heavy. It's only run in dev mode, but it can really slow down the already-slower performance of dev mode.

The most important one for us is serializable, which has far less of a performance impact.

The immutable check is largely redundant because we use immer-backed RTK for everything and immer gives us confidence there.

Disable the immutable check, leaving serializable in.
2023-09-14 22:02:29 -04:00
a1ef079d1f Merge branch 'main' into moretranslation 2023-09-15 11:34:48 +10:00
34a09cb4ca fix(ui): fix send to canvas crash
A few weeks back, we changed how the canvas scales in response to changes in window/panel size.

This introduced a bug: if the user hadn't already clicked the canvas tab once to initialize the stage elements, the stage's dimensions were zero, so the calculated stage scale ended up zero, something was divided by that zero, and Konva died.

This is only a problem on Chromium browsers - somehow Firefox handles it gracefully.

Now, when calculating the stage scale, never return a 0 - if it's a zero, return 1 instead. This is enough to fix the crash, but the image ends up centered on the top-left corner of the stage (the origin of the canvas).

Because the canvas elements are not initialized at this point (we haven't switched tabs yet), the stage dimensions fall back to (0,0). This means the center of the stage is also (0,0) - so the image is centered on (0,0), the top-left corner of the stage.

To fix this, we need to ensure we:
- Change to the canvas tab before actually setting the image, so the stage elements are able to initialize
- Use `flushSync` to flush DOM updates for this tab change so we actually have DOM elements to work with
- Update the stage dimensions once on first load of it (so in the effect that sets up the resize observer, we update the stage dimensions)

The result now is the expected behaviour - images sent to canvas do not crash and end up in the center of the canvas.
2023-09-15 11:05:53 +10:00
18095ecc44 yarn build 2023-09-14 16:56:51 -04:00
fe19f11abf Bump DenoiseLatentsInvocation minor version. 2023-09-14 16:54:07 -04:00
c2f074dc2f Fix python static checks. 2023-09-14 16:48:47 -04:00
e02a557454 Fix frontend typescript errors. 2023-09-14 16:43:43 -04:00
fca60862e2 Add README.md describing IP-Adapter model formats. 2023-09-14 16:02:07 -04:00
94c186bb4c Fix bug in IPAdapter.to(...). 2023-09-14 15:45:25 -04:00
a22c8cb3a1 Improve robustness of check for IPAdapter vs IPAdapterPlus. 2023-09-14 15:25:41 -04:00
781e8521d5 Eliminate the need for IPAdapter.initialize(). 2023-09-14 15:02:59 -04:00
d114d0ba95 Remove need for the image_encoder param in IPAdapter.initialize(). 2023-09-14 14:14:35 -04:00
cc8b7a74da (minor) Delete minor TODO. 2023-09-14 13:04:34 -04:00
388554448a Add CLIP Vision model to IP-Adapter info and use this to infer which model to use. 2023-09-14 11:57:53 -04:00
cadc0839a6 typegen 2023-09-14 11:19:52 -04:00
d5160648d0 Add support for downloading IP-Adapter models from HF. 2023-09-14 11:18:43 -04:00
6d0ea42a94 Get CLIPVision model download from HF working. 2023-09-14 09:54:10 -04:00
0f93991087 Remove multiple of 8 requirement for ImageResizeInvocation (#4538)
Testing required the width and height to be multiples of 8. This is no longer needed.
2023-09-14 08:56:17 -04:00
2c1100509f Add BaseModelType.Any to be used by CLIPVisionModel. 2023-09-14 08:19:55 -04:00
ad5f61e3b5 Merge branch 'main' into moretranslation 2023-09-14 13:36:37 +10:00
f6738d647e fix(ui): store customStarUI outside redux
JSX is not serializable, so it cannot be in redux. Non-serializable global state may be put into `nanostores`.

- Use `nanostores` for `customStarUI`
- Use `nanostores` for `headerComponent`
- Re-enable the serializable & immutable check redux middlewares
2023-09-14 12:13:03 +10:00
c34b359c36 (minor) Remove duplicate TODO. 2023-09-13 21:25:20 -04:00
77d135967f Update IPAdapterModel to respect requested torch_dtype. 2023-09-13 21:06:42 -04:00
ebf26687cb (minor) Remove unnecessary TODO. 2023-09-13 21:03:42 -04:00
2f5e923008 Removed duplicate import in model_cache.py 2023-09-13 19:33:43 -04:00
b7296000e4 made MPS calls conditional on MPS actually being the chosen device with backend available 2023-09-13 19:33:43 -04:00
fab055995e Add empty_cache() for MPS hardware. 2023-09-13 19:33:43 -04:00
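A minimal sketch of the guarded cache clearing these commits describe (the function name is illustrative):

```python
import torch

def clear_device_cache(device: torch.device) -> None:
    # Only touch the MPS allocator when MPS is actually the chosen device
    # and the backend reports itself available; CUDA keeps its usual call.
    if device.type == "mps" and torch.backends.mps.is_available():
        torch.mps.empty_cache()
    elif device.type == "cuda":
        torch.cuda.empty_cache()
```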
1c8991a3df Use CLIPVisionModel under model management for IP-Adapter. 2023-09-13 19:10:02 -04:00
3d52656176 Add CLIPVisionModel to model management. 2023-09-13 17:14:20 -04:00
d989c7fa34 add option for custom star ui (#4530)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-09-13 20:48:10 +00:00
a2777decd4 Add a IPAdapterModelField for passing passing IP-Adapter models between nodes. 2023-09-13 13:40:59 -04:00
d219167849 fix(latent): remove temporary workaround for lack of TAESD tiling support.
Now available in diffusers 0.21: https://github.com/huggingface/diffusers/pull/4627
2023-09-13 09:40:06 -07:00
090db1ab3a Merge remote-tracking branch 'origin/main' into feat/taesd 2023-09-13 09:17:53 -07:00
468253aa14 typegen 2023-09-13 08:27:24 -04:00
3ee9a21647 Initial (barely) working version of IP-Adapter model management. 2023-09-13 08:27:24 -04:00
0d823901ef Add IPAdapter to model_management __init__.py 2023-09-13 08:27:24 -04:00
7ee55489bb Improve model search warning messages. 2023-09-13 08:27:24 -04:00
163ece9aee Initial skeleton for IPAdapter model management. 2023-09-13 08:27:24 -04:00
3920d5c90d Missed Translations 2023-09-13 21:15:36 +10:00
0f0366f1f3 Update collections.py (#4513)
* Update collections.py

RangeOfSizeInvocation was not taking step into account when generating the end point of the range

* - updated the node description to reflect this mod
- added a gt=0 constraint to ensure the size of the range is positive
- moved the +1 onto the size, to ensure the range is the requested size in cases where the step is negative
- formatted with Black

* Removed +1 from the range calculation

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-09-13 18:26:41 +10:00
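A minimal sketch of the corrected range logic (a plain function standing in for the node's invoke method):

```python
def range_of_size(start: int, size: int, step: int) -> list[int]:
    # size is constrained positive (the gt=0 constraint); the end point is
    # start + size * step, so the range has exactly `size` elements even
    # when the step is negative.
    assert size > 0
    return list(range(start, start + size * step, step))

print(range_of_size(0, 5, 3))   # [0, 3, 6, 9, 12]
print(range_of_size(0, 5, -2))  # [0, -2, -4, -6, -8]
```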
4e05dcfe2e Prompts from file support nodes (#3964)
* New classes to support the PromptsFromFileInvocation Class
- PromptPosNegOutput
- PromptSplitNegInvocation
- PromptJoinInvocation
- PromptReplaceInvocation

* - Added PromptsToFileInvocation
- PromptSplitNegInvocation
  - now counts bracket depth, ensuring the numbers of open and close brackets match
  - checks for escaped [ and ], ignoring them if escaped, e.g. \[
- PromptReplaceInvocation - now has a user regex option, and non-regex matching is made case-insensitive

* Update prompt.py

created class PromptsToFileInvocationOutput and use it in PromptsToFileInvocation instead of BaseInvocationOutput

* Update prompt.py

* Added schema_extra title and tags  for PromptReplaceInvocation, PromptJoinInvocation,  PromptSplitNegInvocation and PromptsToFileInvocation

* Added PTFields Collect and Expand

* update to nodes v1

* added ui_type to file_path for PromptToFile

* update params for the primitive types used, remove the ui_type filepath, promptsToFile now only accepts collections until a fix is available

* updated the parameters for the StringOutput primitive

* moved the prompt tools nodes out of the prompt.py into prompt_tools.py

* more rework for v1

* added github link

* updated to use "@invocation"

* updated tags

* Adde new nodes PromptStrength and PromptStrengthsCombine

* chore: black

* feat(nodes): add version to prompt nodes

* renamed nodes from prompt-related to string-related and moved them into a strings.py file. Also moved and renamed the PromptsFromFileInvocation from prompt.py to strings.py. The PTFields still remain in prompt_tools.py for now.

* added , version="1.0.0" to the invocations

* removed the PTField-related nodes and the prompt_tools.py file; all new nodes now live in the strings.py file

* formatted prompt.py and strings.py with Black and fixed silly mistake in the new StringSplitInvocation

* - Revert Prompt.py back to original
- Update strings.py to contain only StringJoin, StringJoinThree, StringReplace, StringSplitNeg, StringSplit

* applied isort to imports

* fix(nodes): typos in `strings.py`

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: Millun Atluri <Millu@users.noreply.github.com>
2023-09-13 08:06:38 +00:00
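For illustration, a self-contained sketch of the bracket-depth splitting described above (names and exact semantics are approximations, not the node's actual code):

```python
def split_negative(prompt: str) -> tuple[str, str]:
    """Split `[...]`-delimited negative terms out of a prompt, tracking
    bracket depth so nesting stays balanced, and treating escaped
    brackets (\\[ and \\]) as literal text."""
    positive: list[str] = []
    negative: list[str] = []
    depth = 0
    i = 0
    while i < len(prompt):
        ch = prompt[i]
        if ch == "\\" and i + 1 < len(prompt) and prompt[i + 1] in "[]":
            # Escaped bracket: keep the two characters as literal text.
            (negative if depth > 0 else positive).append(prompt[i : i + 2])
            i += 2
            continue
        if ch == "[":
            depth += 1
            if depth > 1:  # nested brackets stay inside the negative text
                negative.append(ch)
        elif ch == "]" and depth > 0:
            depth -= 1
            if depth > 0:
                negative.append(ch)
        else:
            (negative if depth > 0 else positive).append(ch)
        i += 1
    return "".join(positive).strip(), "".join(negative).strip()

print(split_negative(r"a cat [ugly, blurry] on a mat"))
# ('a cat  on a mat', 'ugly, blurry')
```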
8c63173b0c Translation update (#4503)
* Update Translations

* Fix Prettier Issue

* Fix Error in invokebutton.tsx

* More Translations

* few Fixes

* More Translations

* More Translations and lint Fixes

* Update constants.ts

Revert "Update constants.ts"

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-09-13 17:31:34 +10:00
30792cb259 chore: flake8 2023-09-13 16:50:25 +10:00
a88f16b81c chore: isort 2023-09-13 16:50:25 +10:00
fb188ce63e feat(nodes): update float_math and integer_math to use new ui_choice_labels 2023-09-13 16:50:25 +10:00
57ebf735e6 feat(nodes): add InputField.ui_choice_labels: dict[str, str]
This maps values to labels for multiple-choice fields.

This allows "enum" fields (i.e. `Literal["val1", "val2", ...]` fields) to use code-friendly string values for choices, but present this to the UI as human-friendly labels.
2023-09-13 16:50:25 +10:00
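A rough pydantic sketch of the mechanism (the node and labels here are made up; InvokeAI's `InputField` handles this wiring internally):

```python
from typing import Literal

from pydantic import BaseModel, Field

class FloatMathNode(BaseModel):  # hypothetical node
    # The stored value stays code-friendly; the UI renders the label it
    # finds in the field's JSON schema instead.
    operation: Literal["add", "sub"] = Field(
        default="add",
        json_schema_extra={
            "ui_choice_labels": {"add": "Add A+B", "sub": "Subtract A-B"}
        },
    )

print(FloatMathNode.model_json_schema()["properties"]["operation"])
```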
ec0f6e7248 chore: black 2023-09-13 16:50:25 +10:00
93c55ebcf2 fixed validator when operation is first input 2023-09-13 16:50:25 +10:00
41f2eaa4de updated name references for Float To Integer 2023-09-13 16:50:25 +10:00
244201b45d Cleanup documentation 2023-09-13 16:50:25 +10:00
486b8506aa Combined nodes to Float and Int general maths 2023-09-13 16:50:25 +10:00
79ca181276 documentation update 2023-09-13 16:50:25 +10:00
dbde08f3d4 Updated default value on round to multiple 2023-09-13 16:50:25 +10:00
e542608534 changed float_to_int to generalized round_multiple node 2023-09-13 16:50:25 +10:00
99ee47b79b Added square root function 2023-09-13 16:50:25 +10:00
005087a652 Added float math 2023-09-13 16:50:25 +10:00
e9f5814c6d Update invokeai version to 3.1.1 2023-09-12 23:07:20 -04:00
c68b55f8e6 Update latest tag format 2023-09-12 23:07:20 -04:00
a21f5f259c Added crop option to ImagePasteInvocation (#4507)
* Added crop option to ImagePasteInvocation

ImagePasteInvocation extended the image with transparency when pasting outside of the base image's bounds. This introduces a new option to crop the resulting image back to the original base image.

* Updated version for ImagePasteInvocation as 3.1.1 was released.
2023-09-12 21:31:35 +00:00
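A hedged PIL sketch of the behaviour described above (not the node's actual implementation):

```python
from PIL import Image

def paste_with_crop(base: Image.Image, overlay: Image.Image,
                    x: int, y: int, crop: bool = False) -> Image.Image:
    # Grow the canvas with transparency so pastes outside the base image's
    # bounds are preserved...
    min_x, min_y = min(0, x), min(0, y)
    max_x = max(base.width, x + overlay.width)
    max_y = max(base.height, y + overlay.height)
    out = Image.new("RGBA", (max_x - min_x, max_y - min_y), (0, 0, 0, 0))
    out.paste(base, (-min_x, -min_y))
    out.paste(overlay, (x - min_x, y - min_y))
    # ...then optionally cut the result back to the original base bounds.
    if crop:
        out = out.crop((-min_x, -min_y, -min_x + base.width, -min_y + base.height))
    return out
```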
7b2e6deaf1 add toggle for shouldDisableInformationalPopovers 2023-09-12 16:33:46 -04:00
63f94579c5 add informational popover base component and sample 2023-09-12 16:10:43 -04:00
e467ca7f1b Apply black, isort, flake8 2023-09-12 13:01:58 -04:00
0450c28f14 Adding pre-commit to test dependencies 2023-09-12 13:01:58 -04:00
e88d7c242f isort wip 3 2023-09-12 13:01:58 -04:00
caea6d11c6 isort wip 2 2023-09-12 13:01:58 -04:00
5615c31799 isort wip 2023-09-12 13:01:58 -04:00
4390a051ca isort wip 2023-09-12 13:01:58 -04:00
fafa21569a Adding isort GHA and pre-commit hooks 2023-09-12 13:01:58 -04:00
77a4fabc66 Update contributingNodes.md with correct community nodes link 2023-09-12 12:01:44 -04:00
5cbdcdaa1f adding nodes 2023-09-12 12:01:44 -04:00
044b6ac07a update model merging 2023-09-12 12:01:44 -04:00
774ade679d updated ti training 2023-09-12 12:01:44 -04:00
bf6c5cbe77 update development guide 2023-09-12 12:01:44 -04:00
7dd20090c2 update na & development docs 2023-09-12 12:01:44 -04:00
7c3fb3c54a updated nodes docs 2023-09-12 12:01:44 -04:00
2c8521b25d updated naming 2023-09-12 12:01:44 -04:00
179a3aaa71 support & triaging 2023-09-12 12:01:44 -04:00
49423a791d updated workflow links 2023-09-12 12:01:44 -04:00
666b5d7a60 added example workflows 2023-09-12 12:01:44 -04:00
2a0dbe3b5b update quick links in Readme 2023-09-12 12:01:44 -04:00
eb48718459 Update README 2023-09-12 12:01:44 -04:00
d4143136d0 Update new developer docs 2023-09-12 12:01:44 -04:00
f6ced9f54b new contributor docs 2023-09-12 12:01:44 -04:00
c82ea5a812 SDXL prompting 2023-09-12 12:01:44 -04:00
17891ae703 Update communityNodes.md info 2023-09-12 12:01:44 -04:00
e94dc47d56 Update contributingNodes.md with correct information 2023-09-12 12:01:44 -04:00
3dfff278aa Merge branch 'main' into feat/taesd 2023-09-12 17:47:53 +10:00
aa7d945b23 IP-Adapter Re-Factor (#4496)
## What type of PR is this? (check all applicable)

- [x] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

## Description

**NOTE!!!** This PR is against `feat/ip-adapter`, not `main`. I created
a PR because I made some pretty significant changes that I thought might
spark discussion.

I don't think it makes sense to do a full in-depth review here. If
possible, let's try to agree on the high-level approach and then merge
this and do an in-depth review on the original PR.

High-level changes:
- Split `IPAdapterField` from the `ControlField` and make them separate
inputs on the `DenoiseLatentsInvocation`
- Create context manager that handles patching/un-patching the UNet with
IP-Adapter attention blocks (`IPAdapter.apply_ip_adapter_attention()`)
- Pass IP-Adapter conditioning via `cross_attention_kwargs` rather than
concatenating it to the text embedding. This helps avoid breaking other
features (like long prompts).
- Remove unused blocks of the IP-Adapter implementation and do some
general tidying.

Out of scope:
- I haven't looked at model management yet. I'd like to get this merged
into `feat/ip-adapter` and then look at model management separately.
2023-09-11 18:51:10 -04:00
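As a sketch of the context-manager idea in the second bullet, built on diffusers' attention-processor API (the real `IPAdapter.apply_ip_adapter_attention()` carries more state than this):

```python
from contextlib import contextmanager

@contextmanager
def apply_ip_adapter_attention(unet, ip_attn_procs: dict):
    # Swap in the IP-Adapter attention processors for the duration of the
    # denoising loop, then restore the originals even if denoising fails.
    orig_procs = unet.attn_processors
    try:
        unet.set_attn_processor(ip_attn_procs)
        yield unet
    finally:
        unet.set_attn_processor(orig_procs)
```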
e060fef540 dont try to load image at all if shouldFetchMetadataFromApi (#4511)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-09-11 11:11:32 -04:00
88db094cf2 Merge branch 'main' into feat/taesd 2023-09-11 22:11:25 +10:00
183f66c70c fixed quick links responsiveness (#4488)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description
There was an issue with the responsiveness of the quick links buttons in
the documentation.

## Related Tickets & Documents

- Related Issue #4455
- Closes #4455

## QA Instructions, Screenshots, Recordings

• On the documentation website, go to the Home page, scroll down to the
quick-links section.

[Home - InvokeAI Stable Diffusion Toolkit
Docs.webm](https://github.com/invoke-ai/InvokeAI/assets/92071471/0a7095c1-9d78-47f2-8da7-9c1e796bea3d)

## Added/updated tests?

- [ ] Yes
- [x] No : _It is a minor change in the documentation website._

## [optional] Are there any post deployment tasks we need to perform? No
2023-09-09 12:34:09 +10:00
abc50ce88b Merge branch 'main' into main 2023-09-09 12:31:26 +10:00
50a0691514 flake8 2023-09-08 18:05:31 -04:00
a255624984 black 2023-09-08 17:55:23 -04:00
2630fe3608 Remove unused ip_adapter/utils.py file. 2023-09-08 16:25:34 -04:00
dee6f86d5e Set 'title' for IP-Adapter fields with non-default names. 2023-09-08 16:14:17 -04:00
6ca6cf713c Tidy IPAdapter. Add types, improve field/method naming. 2023-09-08 16:00:58 -04:00
3f7d5b4e0f Remove redundant IPAdapterXL class. 2023-09-08 15:46:10 -04:00
91596d9527 Re-factor IPAdapter to patch UNet in a context manager. 2023-09-08 15:39:22 -04:00
d0a7832326 fix(tests): clarify test_deny_nodes xfail.reason 2023-09-08 13:24:37 -04:00
75bc43b2a5 fix(tests): make test_deny_nodes as xfail :( 2023-09-08 13:24:37 -04:00
4395ee3c03 feat: parse config before importing anything else
We need to parse the config before doing anything related to invocations to ensure that the invocations union picks up on denied nodes.

- Move that to the top of api_app and cli_app
- Wrap subsequent imports in `if True:`, as a hack to satisfy flake8 and not have to noqa every line or the whole file
- Add tests to ensure graph validation fails when using a denied node, and that the invocations union does not have denied nodes (this indirectly provides confidence that the generated OpenAPI schema will not include denied nodes)
2023-09-08 13:24:37 -04:00
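A hedged sketch of that ordering (paths reflect the 3.x-era layout; the trailing import is illustrative):

```python
# Parse the app config (and thus allow_nodes/deny_nodes) before any
# invocation modules are imported, so the invocations union is built
# with the denied nodes already filtered out.
from invokeai.app.services.config import InvokeAIAppConfig

app_config = InvokeAIAppConfig.get_config()
app_config.parse_args()

if True:  # keeps flake8 happy about imports that must come after the parse
    import invokeai.app.invocations  # illustrative late import
```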
1d2636aa90 feat: ignore unknown args
Do not throw when parsing unknown args; instead, parse only the known args and print the unknown ones (supersedes #4216)
2023-09-08 13:24:37 -04:00
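The standard-library mechanism behind this (a sketch, not InvokeAI's exact parser):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--root", type=str)

# parse_known_args() returns (namespace, leftovers) instead of exiting on
# unrecognized flags, so the leftovers can simply be reported.
opts, unknown = parser.parse_known_args(["--root", "/invokeai", "--bogus"])
if unknown:
    print(f"Ignoring unknown args: {unknown}")
```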
24d9357fdc feat(ui): truncate error messages in toasts to 128 characters 2023-09-08 13:24:37 -04:00
74cc409c72 feat(ui): add nodesAllowlist to config 2023-09-08 13:24:37 -04:00
cc92ce3da5 feat(backend): allow/deny nodes - do not parse args again 2023-09-08 13:24:37 -04:00
7254a6a517 feat(ui): add UI-level nodes denylist
This simply hides nodes from the workflow editor. The nodes will still work if an API request is made with them. For example, you could hide `iterate` nodes from the workflow editor, but if the Linear UI makes use of those nodes, they will still function.

- Update `AppConfig` with optional property `nodesDenylist: string[]`
- If provided, nodes are filtered out by `type` in the workflow editor
2023-09-08 13:24:37 -04:00
dc771d9645 feat(backend): allow/deny nodes
Allow denying and explicitly allowing nodes. When a not-allowed node is used, a pydantic `ValidationError` will be raised.

- When collecting all invocations, check against the allowlist and denylist first. When pydantic constructs any unions related to nodes, the denied nodes will be omitted
- Add `allow_nodes` and `deny_nodes` to `InvokeAIAppConfig`. These are `Union[list[str], None]`, and may be populated with the `type` of invocations.
- When `allow_nodes` is `None`, allow all nodes, else if it is `list[str]`, only allow nodes in the list
- When `deny_nodes` is `None`, deny no nodes, else if it is `list[str]`, deny nodes in the list
- `deny_nodes` overrides `allow_nodes`
2023-09-08 13:24:37 -04:00
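The precedence rules above reduce to a small predicate; a sketch:

```python
def node_is_allowed(node_type: str,
                    allow_nodes: list[str] | None,
                    deny_nodes: list[str] | None) -> bool:
    # A None allowlist allows everything; a None denylist denies nothing;
    # deny_nodes overrides allow_nodes.
    if allow_nodes is not None and node_type not in allow_nodes:
        return False
    if deny_nodes is not None and node_type in deny_nodes:
        return False
    return True

assert node_is_allowed("iterate", None, None)
assert not node_is_allowed("iterate", None, ["iterate"])
assert not node_is_allowed("iterate", ["range"], ["iterate"])
```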
d669f0855d Comment unused IPAdapter generate(...) methods. 2023-09-08 13:12:42 -04:00
b2d5b53b5f Pass IP-Adapter conditioning via cross_attention_kwargs instead of concatenating to the text embedding. This avoids interference with other features that manipulate the text embedding (e.g. long prompts). 2023-09-08 11:47:36 -04:00
ddc148b70b Move ConditioningData and its field classes to their own file. This will allow new conditioning types to be added more cleanly without introducing circular dependencies. 2023-09-08 11:00:11 -04:00
47ea71d9bd fixed quick links responsiveness 2023-09-08 08:38:06 -04:00
dccf291f64 3.1.1rc1 Release (#4493)
## What type of PR is this? (check all applicable)

3.1.1 Release build & updates


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example, having the text "closes #1234" would connect the current pull request to issue 1234. When we merge the pull request, GitHub will automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-09-08 16:05:23 +10:00
d3a94e5853 Update release version to 3.1.1rc1 2023-09-08 15:27:22 +10:00
0166d7ba2b new frontend build 2023-09-08 15:22:22 +10:00
b700809e14 Maryhipp/option fetch metadata from api (#4491)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Description

Adds a configuration option to fetch metadata and workflows from the API instead of the image file. Needed for commercial use.
2023-09-08 15:29:13 +12:00
501cb4c1e2 Merge branch 'main' into maryhipp/option-fetch-metadata-from-api 2023-09-08 11:56:02 +10:00
56399a650a fix(ui): use zod to parse metadata when fetching from api 2023-09-08 11:55:25 +10:00
e4035a51af fix(ui): add missing config property 2023-09-08 11:55:10 +10:00
cf83ddea15 fix(docs): Correct spelling and grammar in feature request template (#4490)
Minor corrections to spelling and grammar in the feature request template.

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because:

This PR should be self-explanatory.
      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

Minor corrections to spelling and grammar in the feature request template.

No code or behavioural changes.


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example, having the text "closes #1234" would connect the current pull request to issue 1234. When we merge the pull request, GitHub will automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

N/A

## Added/updated tests?

- [ ] Yes
- [x] No : _please replace this line with details on why tests
      have not been included_

There are no tests for the issue template.

## [optional] Are there any post deployment tasks we need to perform?
2023-09-08 11:37:02 +10:00
c2d43f007b Specify the image_embedding_len in the IPAttnProcessor rather than the text embedding length. This enables the IPAttnProcessor to handle text embeddings of varying lengths. 2023-09-07 18:20:21 -04:00
Sam
a79d5901c7 Correct spelling and grammar in feature request template
Minor corrections to spelling and grammar in the feature request template
2023-09-08 07:47:55 +10:00
7703bf2ca1 Delete IP-Adapter copies of AttnProcessor and AttnProcessor2_0, which were unmodified from diffusers. 2023-09-07 15:00:13 -04:00
b5e1ba34b3 Merge branch 'main' into refactor/rename-get-logger 2023-09-07 23:19:59 +10:00
a98c37b7a3 Added extra steps to update the cuDNN DLL found in the Torch packages (#4459)
I added extra steps to update the cuDNN DLL found in the Torch package because it wasn't optimised or didn't use the latest version. Manually updating it can speed up iteration, but the result may differ from card to card. For example, I went from 3 it/s to a steady 20 it/s.

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example, having the text "closes #1234" would connect the current pull request to issue 1234. When we merge the pull request, GitHub will automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [x] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-09-07 13:38:46 +10:00
252adb9e70 Fixed typos 2023-09-07 13:16:25 +10:00
40a0b2c366 Update 030_INSTALL_CUDA_AND_ROCM.md 2023-09-07 03:25:26 +02:00
cfc4caf231 Update 030_INSTALL_CUDA_AND_ROCM.md
Added an extra step and clarification on how to choose between the 11.x and 12.x update for the cuDNN DLL
2023-09-07 03:24:13 +02:00
23fdf0156f Clean up IP-Adapter in diffusers_pipeline.py - WIP 2023-09-06 20:42:20 -04:00
cdbf40c9b2 Revert ControlNetInvocation changes. 2023-09-06 19:30:30 -04:00
46c9dcb113 Run yarn build. 2023-09-06 17:16:01 -04:00
6df79045fa Run typegen. 2023-09-06 17:03:37 -04:00
d776e0a0a9 Split ControlField and IpAdapterField. 2023-09-06 17:03:37 -04:00
e16598c48a Merge branch 'main' into patch-2 2023-09-06 13:59:59 +10:00
6506ce3e68 Updated "\" to be escaped in markdown 2023-09-06 13:58:53 +10:00
3afa73cd33 Update 030_INSTALL_CUDA_AND_ROCM.md 2023-09-06 13:55:33 +10:00
81ea742aea cleanup 2023-09-05 16:55:44 -04:00
15d28bfdbf add option to fetch metadata from api instead of reading off of png 2023-09-05 16:54:29 -04:00
53f2369d18 Update 030_INSTALL_CUDA_AND_ROCM.md 2023-09-05 08:06:39 -04:00
04c0a83bff Added extra steps to update the cuDNN DLL found in the Torch packages
I added extra steps to update the cuDNN DLL found in the Torch package because it wasn't optimised or didn't use the latest version. Manually updating it can speed up iteration, but the result may differ from card to card. For example, I went from 3 it/s to a steady 20 it/s.
2023-09-05 06:54:06 +02:00
94ec3da7b5 chore: regen scheme merge 2023-09-05 15:23:16 +12:00
f44496a579 Merge branch 'main' into feat/ip-adapter 2023-09-05 15:22:15 +12:00
99fe95ab03 fix: Add validation for image_encoder model too 2023-09-05 14:49:41 +12:00
95ecb1a0c1 fix(ip_adapter): add None to types 2023-09-05 12:30:00 +10:00
bd15874cf6 feat(nodes): add control_type validation & fix types 2023-09-05 12:24:54 +10:00
30ab81b6bb fix: Update paths so they are serializable in the nodes 2023-09-05 13:50:21 +12:00
78195491bc fix: Make the adapter models use new local paths 2023-09-05 13:39:54 +12:00
58aa159a50 fix(backend): fix remaining instances of getLogger() 2023-09-05 10:43:30 +10:00
d8f7c19030 Merge branch 'main' into refactor/rename-get-logger 2023-09-05 10:37:53 +10:00
c63390f6e1 fix: Temporarily update the ControlField zod model
While we decide how to go ahead with this.
2023-09-05 12:29:05 +12:00
cbd451c610 chore: Regen Schema 2023-09-05 12:13:08 +12:00
b0f91f2e75 fix: Remove types on adapter nodes. Superseded by the decorator 2023-09-05 12:12:19 +12:00
3ac68cde66 chore: flake8 cleanup 2023-09-05 12:07:12 +12:00
a69b1cd598 chore: Add Versioning data to new adapters + update model paths 2023-09-05 11:54:50 +12:00
65a76a086b cleanup: Some basic cleanup 2023-09-05 11:54:28 +12:00
07381e5a26 cleanup: merge conflicts 2023-09-05 11:37:12 +12:00
6bb378a101 Merge branch 'main' into feat/ip-adapter 2023-09-05 11:35:19 +12:00
7df67d077a Merge branch 'main' into feat/taesd 2023-09-01 22:18:40 -07:00
b761807219 Merge branch 'main' into feat/ip-adapter 2023-09-02 11:31:08 +10:00
fb1b03960e Added IP-Adapter SDXL support. Added IP-Adapter "Plus" (more detail) model support. 2023-09-01 04:40:30 -07:00
74bfb5e1f9 First commit of separate node for IP-Adapter.
And its own dataclasses for passing info.
2023-08-31 23:07:15 -07:00
bc1bce18b0 Merge branch 'main' into feat/taesd 2023-08-31 20:26:10 -07:00
942ecbbde4 Merge branch 'feat/ip-adapter' of github.com:invoke-ai/InvokeAI into feat/ip-adapter 2023-08-30 18:35:53 -07:00
79db0e9e93 More cleanup after rebasing to main. 2023-08-30 18:29:06 -07:00
0c17f8604f Resolving rebase conflict, redirecting control imports to invocations/control_adapter 2023-08-30 17:35:31 -07:00
054edc4077 Oops, forgot to add control_adapter.py for control nodes in last refactor commit 2023-08-30 17:31:46 -07:00
5a9993772d Added ip_adapter_strength parameter to adjust weighting of IP-Adapter's added cross-attention layers 2023-08-30 17:28:30 -07:00
f2cd9e9ae2 Working POC for IP-Adapters. Not fully nodified yet, lots of caveats, hardwired model paths, etc. 2023-08-30 17:28:30 -07:00
9f86cfa471 Working POC of IP-Adapters. Not fully nodified yet. 2023-08-30 17:28:30 -07:00
8c1390166f Modifying code from https://github.com/tencent-ailab/IP-Adapter. Also adding license notice at top. 2023-08-30 17:28:30 -07:00
1ad98ce999 Core ip_adapter files from https://github.com/tencent-ailab/IP-Adapter
Copied into InvokeAI since IP-Adapter repo is not a package. Is there a better way to do this for non-packaged Python code while still keeping InvokeAI install easy?
2023-08-30 17:28:30 -07:00
5f4a62810e Added ip_adapter_strength parameter to adjust weighting of IP-Adapter's added cross-attention layers 2023-08-29 10:47:37 -07:00
35b7ae90ae Working POC for IP-Adapters. Not fully nodified yet, lots of caveats, hardwired model paths, etc. 2023-08-29 10:47:37 -07:00
9ed4d487d2 Working POC of IP-Adapters. Not fully nodified yet. 2023-08-29 10:47:37 -07:00
69d37217b8 Modifying code from https://github.com/tencent-ailab/IP-Adapter. Also adding license notice at top. 2023-08-29 10:47:37 -07:00
7afdefb0e5 Core ip_adapter files from https://github.com/tencent-ailab/IP-Adapter
Copied into InvokeAI since IP-Adapter repo is not a package. Is there a better way to do this for non-packaged Python code while still keeping InvokeAI install easy?
2023-08-29 10:47:37 -07:00
24132a7950 Merge branch 'main' into refactor/rename-get-logger 2023-08-28 11:38:37 +10:00
dff466244d Merge remote-tracking branch 'origin/main' into feat/taesd
# Conflicts:
#	invokeai/app/invocations/latent.py
2023-08-25 15:21:47 -07:00
45d172d5a8 Merge branch 'main' into refactor/rename-get-logger 2023-08-20 16:08:32 -04:00
f5d95ffed5 Merge branch 'main' into feat/taesd 2023-08-18 18:23:34 -07:00
6f9c1c6d4e Merge remote-tracking branch 'origin/dep/diffusers020' into feat/taesd
# Conflicts:
#	invokeai/app/invocations/latent.py
2023-08-18 14:19:27 -07:00
811c82a677 lint: formatting 2023-08-18 14:06:14 -07:00
4f0e43ec1b fix(TAESD): correct usage of singledispatchmethod so normal VAE still works 2023-08-18 14:05:12 -07:00
26a7b7b66d feat(model_probe): provide more clues when we fail to load a model. 2023-08-17 20:08:53 -07:00
8611ffe32d feat(TAESD): support TAESD — Tiny Autoencoder for Stable Diffusion 2023-08-17 20:08:53 -07:00
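For context, a hedged diffusers sketch of TAESD decoding (the model id is the upstream community release, not necessarily what InvokeAI's probe installs; this downloads weights on first run):

```python
import torch
from diffusers import AutoencoderTiny

taesd = AutoencoderTiny.from_pretrained("madebyollin/taesd")
latents = torch.randn(1, 4, 64, 64)  # SD-1.x latent shape for a 512x512 image
with torch.no_grad():
    # Much faster than the full VAE decode, at some cost in fidelity.
    image = taesd.decode(latents).sample  # shape (1, 3, 512, 512)
```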
3cb6d333f6 Merge branch 'main' into refactor/rename-get-logger 2023-08-17 20:31:30 -04:00
4570702dd0 hotfix for crashing api 2023-08-17 20:17:10 -04:00
1d107f30e5 remove getLogger() completely 2023-08-17 19:17:38 -04:00
79084e9e20 Merge branch 'main' into refactor/rename-get-logger 2023-08-17 19:01:17 -04:00
fc9b4539a3 Merge branch 'main' into refactor/rename-get-logger 2023-08-16 09:19:52 -04:00
09ef57718e fix docs 2023-08-14 20:20:35 -04:00
cab8239ba8 add get_logger() as alias for getLogger() 2023-08-14 20:18:09 -04:00
1032 changed files with 64118 additions and 27060 deletions

.gitattributes vendored
View File

@ -2,3 +2,4 @@
# Only affects text files and ignores other file types.
# For more info see: https://www.aleksandrhovhannisyan.com/blog/crlf-vs-lf-normalizing-line-endings-in-git/
* text=auto
docker/** text eol=lf

View File

@ -1,5 +1,5 @@
name: Feature Request
description: Commit a idea or Request a new feature
description: Contribute a idea or request a new feature
title: '[enhancement]: '
labels: ['enhancement']
# assignees:
@ -9,14 +9,14 @@ body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this Feature request!
Thanks for taking the time to fill out this feature request!
- type: checkboxes
attributes:
label: Is there an existing issue for this?
description: |
Please make use of the [search function](https://github.com/invoke-ai/InvokeAI/labels/enhancement)
to see if a simmilar issue already exists for the feature you want to request
to see if a similar issue already exists for the feature you want to request
options:
- label: I have searched the existing issues
required: true
@ -34,12 +34,9 @@ body:
id: whatisexpected
attributes:
label: What should this feature add?
description: Please try to explain the functionality this feature should add
description: Explain the functionality this feature should add. Feature requests should be for single features. Please create multiple requests if you want to request multiple features.
placeholder: |
Instead of one huge textfield, it would be nice to have forms for bug-reports, feature-requests, ...
Great benefits with automatic labeling, assigning and other functionalitys not available in that form
via old-fashioned markdown-templates. I would also love to see the use of a moderator bot 🤖 like
https://github.com/marketplace/actions/issue-moderator-with-commands to auto close old issues and other things
I'd like a button that creates an image of banana sushi every time I press it. Each image should be different. There should be a toggle next to the button that enables strawberry mode, in which the images are of strawberry sushi instead.
validations:
required: true
@ -51,6 +48,6 @@ body:
- type: textarea
attributes:
label: Aditional Content
label: Additional Content
description: Add any other context or screenshots about the feature request here.
placeholder: This is a Mockup of the design how I imagine it <screenshot>
placeholder: This is a mockup of the design how I imagine it <screenshot>

View File

@ -1,20 +0,0 @@
on:
pull_request:
push:
branches:
- main
- development
- 'release-candidate-*'
jobs:
pyflakes:
name: runner / pyflakes
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: pyflakes
uses: reviewdog/action-pyflakes@v1
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
reporter: github-pr-review

View File

@ -28,7 +28,7 @@ jobs:
run: twine check dist/*
- name: check PyPI versions
if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/v2.3'
if: github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/heads/release/')
run: |
pip install --upgrade requests
python -c "\

View File

@ -1,6 +1,4 @@
name: style checks
# just formatting and flake8 for now
# TODO: add isort later
on:
pull_request:
@ -8,7 +6,7 @@ on:
branches: main
jobs:
black:
ruff:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
@ -20,8 +18,7 @@ jobs:
- name: Install dependencies with pip
run: |
pip install black flake8 Flake8-pyproject
pip install ruff
# - run: isort --check-only .
- run: black --check .
- run: flake8
- run: ruff check --output-format=github .
- run: ruff format --check .

.gitignore vendored
View File

@ -1,8 +1,5 @@
.idea/
# ignore the Anaconda/Miniconda installer used while building Docker image
anaconda.sh
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
@ -136,12 +133,10 @@ celerybeat.pid
# Environments
.env
.venv
.venv*
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
@ -186,11 +181,6 @@ cython_debug/
.scratch/
.vscode/
# ignore environment.yml and requirements.txt
# these are links to the real files in environments-and-requirements
environment.yml
requirements.txt
# source installer files
installer/*zip
installer/install.bat

View File

@ -15,3 +15,10 @@ repos:
language: system
entry: flake8
types: [python]
- id: isort
name: isort
stages: [commit]
language: system
entry: isort
types: [python]

View File

@ -46,13 +46,13 @@ the foundation for multiple commercial products.
Install](https://invoke-ai.github.io/InvokeAI/installation/INSTALLATION/)] [<a
href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a
href="https://invoke-ai.github.io/InvokeAI/">Documentation and
Tutorials</a>] [<a
href="https://github.com/invoke-ai/InvokeAI/">Code and
Downloads</a>] [<a
href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>]
Tutorials</a>]
[<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>]
[<a
href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion,
Ideas & Q&A</a>]
Ideas & Q&A</a>]
[<a
href="https://invoke-ai.github.io/InvokeAI/contributing/CONTRIBUTING/">Contributing</a>]
<div align="center">
@ -123,7 +123,7 @@ and go to http://localhost:9090.
### Command-Line Installation (for developers and users familiar with Terminals)
You must have Python 3.9 through 3.11 installed on your machine. Earlier or
You must have Python 3.10 through 3.11 installed on your machine. Earlier or
later versions are not supported.
Node.js also needs to be installed along with yarn (can be installed with
the command `npm install -g yarn` if needed)
@ -161,7 +161,7 @@ the command `npm install -g yarn` if needed)
_For Windows/Linux with an NVIDIA GPU:_
```terminal
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu118
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
```
_For Linux with an AMD GPU:_
@ -175,7 +175,7 @@ the command `npm install -g yarn` if needed)
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu
```
_For Macintoshes, either Intel or M1/M2:_
_For Macintoshes, either Intel or M1/M2/M3:_
```sh
pip install InvokeAI --use-pep517
@ -368,9 +368,9 @@ InvokeAI offers a locally hosted Web Server & React Frontend, with an industry l
The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.
### *Node Architecture & Editor (Beta)*
### *Workflows & Nodes*
Invoke AI's backend is built on a graph-based execution architecture. This allows for customizable generation pipelines to be developed by professional users looking to create specific workflows to support their production use-cases, and will be extended in the future with additional capabilities.
InvokeAI offers a fully featured workflow management solution, enabling users to combine the power of nodes based workflows with the easy of a UI. This allows for customizable generation pipelines to be developed and shared by users looking to create specific workflows to support their production use-cases.
### *Board & Gallery Management*
@ -383,8 +383,9 @@ Invoke AI provides an organized gallery system for easily storing, accessing, an
- *Upscaling Tools*
- *Embedding Manager & Support*
- *Model Manager & Support*
- *Workflow creation & management*
- *Node-Based Architecture*
- *Node-Based Plug-&-Play UI (Beta)*
### Latest Changes
@ -395,20 +396,18 @@ Notes](https://github.com/invoke-ai/InvokeAI/releases) and the
### Troubleshooting
Please check out our **[Q&A](https://invoke-ai.github.io/InvokeAI/help/TROUBLESHOOT/#faq)** to get solutions for common installation
problems and other issues.
problems and other issues. For more help, please join our [Discord][discord link]
## Contributing
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so.
To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the GitHub discussion board.
If you'd like to help with translation, please see our [translation guide](docs/other/TRANSLATION.md).
Get started with contributing by reading our [Contribution documentation](https://invoke-ai.github.io/InvokeAI/contributing/CONTRIBUTING/), joining the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) or the GitHub discussion board.
If you are unfamiliar with how
to contribute to GitHub projects, here is a
[Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github). A full set of contribution guidelines, along with templates, are in progress. You can **make your pull request against the "main" branch**.
to contribute to GitHub projects, we have a new contributor checklist you can follow to get started contributing:
[New Contributor Checklist](https://invoke-ai.github.io/InvokeAI/contributing/contribution_guides/newContributorChecklist/).
We hope you enjoy using our software as much as we enjoy creating it,
and we hope that some of those of you who are reading this will elect
@ -424,7 +423,7 @@ their time, hard work and effort.
### Support
For support, please use this repository's GitHub Issues tracking service, or join the Discord.
For support, please use this repository's GitHub Issues tracking service, or join the [Discord][discord link].
Original portions of the software are Copyright (c) 2023 by respective contributors.

View File

@ -1,13 +1,15 @@
## Make a copy of this file named `.env` and fill in the values below.
## Any environment variables supported by InvokeAI can be specified here.
## Any environment variables supported by InvokeAI can be specified here,
## in addition to the examples below.
# INVOKEAI_ROOT is the path to a path on the local filesystem where InvokeAI will store data.
# Outputs will also be stored here by default.
# This **must** be an absolute path.
INVOKEAI_ROOT=
HUGGINGFACE_TOKEN=
# Get this value from your HuggingFace account settings page.
# HUGGING_FACE_HUB_TOKEN=
## optional variables specific to the docker setup
# GPU_DRIVER=cuda
# CONTAINER_UID=1000
## optional variables specific to the docker setup.
# GPU_DRIVER=cuda # or rocm
# CONTAINER_UID=1000

View File

@ -2,7 +2,7 @@
## Builder stage
FROM library/ubuntu:22.04 AS builder
FROM library/ubuntu:23.04 AS builder
ARG DEBIAN_FRONTEND=noninteractive
RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
@ -10,7 +10,7 @@ RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt update && apt-get install -y \
git \
python3.10-venv \
python3-venv \
python3-pip \
build-essential
@ -18,8 +18,8 @@ ENV INVOKEAI_SRC=/opt/invokeai
ENV VIRTUAL_ENV=/opt/venv/invokeai
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
ARG TORCH_VERSION=2.0.1
ARG TORCHVISION_VERSION=0.15.2
ARG TORCH_VERSION=2.1.0
ARG TORCHVISION_VERSION=0.16
ARG GPU_DRIVER=cuda
ARG TARGETPLATFORM="linux/amd64"
# unused but available
@ -35,9 +35,9 @@ RUN --mount=type=cache,target=/root/.cache/pip \
if [ "$TARGETPLATFORM" = "linux/arm64" ] || [ "$GPU_DRIVER" = "cpu" ]; then \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cpu"; \
elif [ "$GPU_DRIVER" = "rocm" ]; then \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/rocm5.4.2"; \
extra_index_url_arg="--index-url https://download.pytorch.org/whl/rocm5.6"; \
else \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cu118"; \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cu121"; \
fi &&\
pip install $extra_index_url_arg \
torch==$TORCH_VERSION \
@ -70,7 +70,7 @@ RUN --mount=type=cache,target=/usr/lib/node_modules \
#### Runtime stage ---------------------------------------
FROM library/ubuntu:22.04 AS runtime
FROM library/ubuntu:23.04 AS runtime
ARG DEBIAN_FRONTEND=noninteractive
ENV PYTHONUNBUFFERED=1
@ -85,6 +85,7 @@ RUN apt update && apt install -y --no-install-recommends \
iotop \
bzip2 \
gosu \
magic-wormhole \
libglib2.0-0 \
libgl1-mesa-glx \
python3-venv \
@ -94,10 +95,6 @@ RUN apt update && apt install -y --no-install-recommends \
libstdc++-10-dev &&\
apt-get clean && apt-get autoclean
# globally add magic-wormhole
# for ease of transferring data to and from the container
# when running in sandboxed cloud environments; e.g. Runpod etc.
RUN pip install magic-wormhole
ENV INVOKEAI_SRC=/opt/invokeai
ENV VIRTUAL_ENV=/opt/venv/invokeai
@ -120,9 +117,7 @@ WORKDIR ${INVOKEAI_SRC}
RUN cd /usr/lib/$(uname -p)-linux-gnu/pkgconfig/ && ln -sf opencv4.pc opencv.pc
RUN python3 -c "from patchmatch import patch_match"
# Create unprivileged user and make the local dir
RUN useradd --create-home --shell /bin/bash -u 1000 --comment "container local user" invoke
RUN mkdir -p ${INVOKEAI_ROOT} && chown -R invoke:invoke ${INVOKEAI_ROOT}
RUN mkdir -p ${INVOKEAI_ROOT} && chown -R 1000:1000 ${INVOKEAI_ROOT}
COPY docker/docker-entrypoint.sh ./
ENTRYPOINT ["/opt/invokeai/docker-entrypoint.sh"]

View File

@ -5,7 +5,7 @@ All commands are to be run from the `docker` directory: `cd docker`
#### Linux
1. Ensure builkit is enabled in the Docker daemon settings (`/etc/docker/daemon.json`)
2. Install the `docker compose` plugin using your package manager, or follow a [tutorial](https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-compose-on-ubuntu-22-04).
2. Install the `docker compose` plugin using your package manager, or follow a [tutorial](https://docs.docker.com/compose/install/linux/#install-using-the-repository).
- The deprecated `docker-compose` (hyphenated) CLI continues to work for now.
3. Ensure docker daemon is able to access the GPU.
- You may need to install [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
@ -20,7 +20,6 @@ This is done via Docker Desktop preferences
## Quickstart
1. Make a copy of `env.sample` and name it `.env` (`cp env.sample .env` (Mac/Linux) or `copy example.env .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
a. the desired location of the InvokeAI runtime directory, or
b. an existing, v3.0.0 compatible runtime directory.
@ -42,20 +41,22 @@ The Docker daemon on the system must be already set up to use the GPU. In case o
Check the `.env.sample` file. It contains some environment variables for running in Docker. Copy it, name it `.env`, and fill it in with your own values. Next time you run `docker compose up`, your custom values will be used.
You can also set these values in `docker compose.yml` directly, but `.env` will help avoid conflicts when code is updated.
You can also set these values in `docker-compose.yml` directly, but `.env` will help avoid conflicts when code is updated.
Example (most values are optional):
Example (values are optional, but setting `INVOKEAI_ROOT` is highly recommended):
```
```bash
INVOKEAI_ROOT=/Volumes/WorkDrive/invokeai
HUGGINGFACE_TOKEN=the_actual_token
CONTAINER_UID=1000
GPU_DRIVER=cuda
```
Any environment variables supported by InvokeAI can be set here - please see the [Configuration docs](https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/) for further detail.
## Even Moar Customizing!
See the `docker compose.yaml` file. The `command` instruction can be uncommented and used to run arbitrary startup commands. Some examples below.
See the `docker-compose.yml` file. The `command` instruction can be uncommented and used to run arbitrary startup commands. Some examples below.
### Reconfigure the runtime directory
@ -63,7 +64,7 @@ Can be used to download additional models from the supported model list
In conjunction with `INVOKEAI_ROOT` can be also used to initialize a runtime directory
```
```yaml
command:
- invokeai-configure
- --yes
@ -71,7 +72,7 @@ command:
Or install models:
```
```yaml
command:
- invokeai-model-install
```
```

View File

@ -5,7 +5,7 @@ build_args=""
[[ -f ".env" ]] && build_args=$(awk '$1 ~ /\=[^$]/ {print "--build-arg " $0 " "}' .env)
echo "docker-compose build args:"
echo "docker compose build args:"
echo $build_args
docker-compose build $build_args
docker compose build $build_args

View File

@ -15,6 +15,10 @@ services:
- driver: nvidia
count: 1
capabilities: [gpu]
# For AMD support, comment out the deploy section above and uncomment the devices section below:
#devices:
# - /dev/kfd:/dev/kfd
# - /dev/dri:/dev/dri
build:
context: ..
dockerfile: docker/Dockerfile

View File

@ -19,7 +19,7 @@ set -e -o pipefail
# Default UID: 1000 chosen due to popularity on Linux systems. Possibly 501 on MacOS.
USER_ID=${CONTAINER_UID:-1000}
USER=invoke
USER=ubuntu
usermod -u ${USER_ID} ${USER} 1>/dev/null
configure() {

View File

@ -1,8 +1,11 @@
#!/usr/bin/env bash
set -e
# This script is provided for backwards compatibility with the old docker setup.
# it doesn't do much aside from wrapping the usual docker compose CLI.
SCRIPTDIR=$(dirname "${BASH_SOURCE[0]}")
cd "$SCRIPTDIR" || exit 1
docker-compose up --build -d
docker-compose logs -f
docker compose up -d
docker compose logs -f

View File

@ -488,7 +488,7 @@ sections describe what's new for InvokeAI.
- A choice of installer scripts that automate installation and configuration.
See
[Installation](installation/index.md).
[Installation](installation/INSTALLATION.md).
- A streamlined manual installation process that works for both Conda and
PIP-only installs. See
[Manual Installation](installation/020_INSTALL_MANUAL.md).
@ -657,7 +657,7 @@ sections describe what's new for InvokeAI.
## v1.13 <small>(3 September 2022)</small>
- Support image variations (see [VARIATIONS](features/VARIATIONS.md)
- Support image variations (see [VARIATIONS](deprecated/VARIATIONS.md)
([Kevin Gibbons](https://github.com/bakkot) and many contributors and
reviewers)
- Supports a Google Colab notebook for a standalone server running on Google

[12 binary image files changed (not shown): nine replaced with smaller versions (e.g. 490 KiB → 228 KiB, 948 KiB → 187 KiB) and three newly added (59 KiB, 64 KiB, 42 KiB).]

View File

@ -1,39 +1,41 @@
# How to Contribute
# Contributing
## Welcome to Invoke AI
Invoke AI originated as a project built by the community, and that vision carries forward today as we aim to build the best pro-grade tools available. We work together to incorporate the latest in AI/ML research, making these tools available in over 20 languages to artists and creatives around the world as part of our fully permissive OSS project designed for individual users to self-host and use.
## Contributing to Invoke AI
# Methods of Contributing to Invoke AI
Anyone who wishes to contribute to InvokeAI, whether features, bug fixes, code cleanup, testing, code reviews, documentation or translation is very much encouraged to do so.
To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the GitHub discussion board.
## Development
If you'd like to help with development, please see our [development guide](contribution_guides/development.md).
### Areas of contribution:
**New Contributors:** If you're unfamiliar with contributing to open source projects, take a look at our [new contributor guide](contribution_guides/newContributorChecklist.md).
#### Development
If you'd like to help with development, please see our [development guide](contribution_guides/development.md). If you're unfamiliar with contributing to open source projects, there is a tutorial contained within the development guide.
## Nodes
If you'd like to add a Node, please see our [nodes contribution guide](../nodes/contributingNodes.md).
#### Nodes
If you'd like to add a Node, please see our [nodes contribution guide](/nodes/contributingNodes). If you're unfamiliar with contributing to open source projects, there is a tutorial contained within the development guide.
## Support and Triaging
Helping to support other users in [Discord](https://discord.gg/ZmtBAhwWhy) and on GitHub is a valuable form of contribution that we greatly appreciate.
#### Documentation
We receive many issues and requests for help from users. We're limited in bandwidth relative to our user base, so providing answers to questions or helping identify the causes of issues is very helpful. By doing this, you enable us to spend time on the highest priority work.
## Documentation
If youd like to help with documentation, please see our [documentation guide](contribution_guides/documentation.md).
#### Translation
## Translation
If you'd like to help with translation, please see our [translation guide](contribution_guides/translation.md).
#### Tutorials
## Tutorials
Please reach out to @imic or @hipsterusername on [Discord](https://discord.gg/ZmtBAhwWhy) to help create tutorials for InvokeAI.
We hope you enjoy using our software as much as we enjoy creating it, and we hope that some of those of you who are reading this will elect to become part of our contributor community.
### Contributors
# Contributors
This project is a combined effort of dedicated people from across the world. [Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for their time, hard work and effort.
### Code of Conduct
# Code of Conduct
The InvokeAI community is a welcoming place, and we want your help in maintaining that. Please review our [Code of Conduct](https://github.com/invoke-ai/InvokeAI/blob/main/CODE_OF_CONDUCT.md) to learn more - it's essential to maintaining a respectful and inclusive environment.
@ -47,8 +49,7 @@ By making a contribution to this project, you certify that:
This disclaimer is not a license and does not grant any rights or permissions. You must obtain necessary permissions and licenses, including from third parties, before contributing to this project.
This disclaimer is provided "as is" without warranty of any kind, whether expressed or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, or non-infringement. In no event shall the authors or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the contribution or the use or other dealings in the contribution.
### Support
# Support
For support, please use this repository's [GitHub Issues](https://github.com/invoke-ai/InvokeAI/issues), or join the [Discord](https://discord.gg/ZmtBAhwWhy).

View File

@ -1,6 +1,6 @@
# Invocations
# Nodes
Features in InvokeAI are added in the form of modular node-like systems called
Features in InvokeAI are added in the form of modular node systems called
**Invocations**.
An Invocation is simply a single operation that takes in some inputs and gives
@ -9,13 +9,34 @@ complex functionality.
## Invocations Directory
InvokeAI Invocations can be found in the `invokeai/app/invocations` directory.
InvokeAI Nodes can be found in the `invokeai/app/invocations` directory. These can be used as examples to create your own nodes.
You can add your new functionality to one of the existing Invocations in this
directory or create a new file in this directory as per your needs.
New nodes should be added to a subfolder of the `nodes` directory found at the root level of the InvokeAI installation location. Nodes added to this folder will be available for use upon application startup.
Example `nodes` subfolder structure:
```py
├── __init__.py # Invoke-managed custom node loader
├── cool_node
│   ├── __init__.py # see example below
│   └── cool_node.py
└── my_node_pack
    ├── __init__.py # see example below
    ├── tasty_node.py
    ├── bodacious_node.py
    ├── utils.py
    └── extra_nodes
        └── fancy_node.py
```
Each node folder must have an `__init__.py` file that imports its nodes. Only nodes imported in the `__init__.py` file are loaded.
See the README in the nodes folder for more examples:
```py
from .cool_node import CoolInvocation
```
**Note:** _All Invocations must be inside this directory for InvokeAI to
recognize them as valid Invocations._
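As a rough illustration, here is a minimal sketch of what `cool_node.py` might contain, assuming the 3.x invocation API (the exact import paths and decorator arguments may differ between releases; `CoolInvocation` matches the import example above):
```py
from invokeai.app.invocations.baseinvocation import (
    BaseInvocation,
    InputField,
    InvocationContext,
    invocation,
)
from invokeai.app.invocations.primitives import StringOutput


# The decorator registers the node under a unique type string.
@invocation("cool_node", title="Cool Node", tags=["example"], version="1.0.0")
class CoolInvocation(BaseInvocation):
    """Upper-cases the input text."""

    text: str = InputField(description="The text to upper-case")

    def invoke(self, context: InvocationContext) -> StringOutput:
        # Return a primitive output type so the result can be connected to other nodes.
        return StringOutput(value=self.text.upper())
```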
## Creating A New Invocation

View File

@ -47,34 +47,9 @@ pip install ".[dev,test]"
These are optional groups of packages which are defined within the `pyproject.toml`
and will be required for testing the changes you make to the code.
### Running Tests
We use [pytest](https://docs.pytest.org/en/7.2.x/) for our test suite. Tests can
be found under the `./tests` folder and can be run with a single `pytest`
command. Optionally, to review test coverage you can append `--cov`.
```zsh
pytest --cov
```
Test outcomes and coverage will be reported in the terminal. In addition a more
detailed report is created in both XML and HTML format in the `./coverage`
folder. The HTML one in particular can help identify missing statements
requiring tests to ensure coverage. This can be run by opening
`./coverage/html/index.html`.
For example:
```zsh
pytest --cov; open ./coverage/html/index.html
```
??? info "HTML coverage report output"
![html-overview](../assets/contributing/html-overview.png)
![html-detail](../assets/contributing/html-detail.png)
### Tests
See the [tests documentation](./TESTS.md) for information about running and writing tests.
### Reloading Changes
Experimenting with changes to the Python source code is a drag if you have to re-start the server —
@ -167,6 +142,23 @@ and so you'll have access to the same python environment as the InvokeAI app.
This is _super_ handy.
#### Enabling Type-Checking with Pylance
We use python's typing system in InvokeAI. PR reviews will include checking that types are present and correct. We don't enforce types with `mypy` at this time, but that is on the horizon.
Using a code analysis tool to automatically type check your code (and types) is very important when writing with types. These tools provide immediate feedback in your editor when types are incorrect, and following their suggestions leads to fewer runtime bugs.
Pylance, installed at the beginning of this guide, is the de facto Python LSP (language server protocol). It provides type checking in the editor (among many other features). Once installed, you do need to enable type checking manually:
- Open a python file
- Look along the status bar in VSCode for `{ } Python`
- Click the `{ }`
- Turn type checking on - basic is fine
You'll now see red squiggly lines where type issues are detected. Hover your cursor over the indicated symbols to see what's wrong.
In 99% of cases when the type checker says there is a problem, there really is a problem, and you should take some time to understand and resolve what it is pointing out.
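For instance, Pylance would flag the call below (a contrived snippet, for illustration only):
```py
def scale_dimensions(width: int, height: int) -> tuple[int, int]:
    return width * 2, height * 2


# Pylance reports: argument of type "str" cannot be assigned to parameter "width" of type "int"
scale_dimensions("512", 512)
```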
#### Debugging configs with `launch.json`
Debugging configs are managed in a `launch.json` file. Like most VSCode configs,

File diff suppressed because it is too large.

View File

@ -0,0 +1,89 @@
# InvokeAI Backend Tests
We use `pytest` to run the backend python tests. (See [pyproject.toml](/pyproject.toml) for the default `pytest` options.)
## Fast vs. Slow
All tests are categorized as either 'fast' (no test annotation) or 'slow' (annotated with the `@pytest.mark.slow` decorator).
'Fast' tests are run to validate every PR, and are fast enough that they can be run routinely during development.
'Slow' tests are currently only run manually on an ad-hoc basis. In the future, they may be automated to run nightly. Most developers are only expected to run the 'slow' tests that directly relate to the feature(s) that they are working on.
As a rule of thumb, tests should be marked as 'slow' if there is a chance that they take >1s (e.g. on a CPU-only machine with slow internet connection). Common examples of slow tests are tests that depend on downloading a model, or running model inference.
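For example, a test that must download a model would be annotated like this (a sketch; the test name is hypothetical):
```python
import pytest


@pytest.mark.slow  # excluded from the default `-m "not slow"` selection
def test_install_and_run_large_model():
    ...
```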
## Running Tests
Below are some common test commands:
```bash
# Run the fast tests. (This implicitly uses the configured default option: `-m "not slow"`.)
pytest tests/
# Equivalent command to run the fast tests.
pytest tests/ -m "not slow"
# Run the slow tests.
pytest tests/ -m "slow"
# Run the slow tests from a specific file.
pytest tests/path/to/slow_test.py -m "slow"
# Run all tests (fast and slow).
pytest tests -m ""
```
## Test Organization
All backend tests are in the [`tests/`](/tests/) directory. This directory mirrors the organization of the `invokeai/` directory. For example, tests for `invokeai/model_management/model_manager.py` would be found in `tests/model_management/test_model_manager.py`.
TODO: The above statement is aspirational. A re-organization of legacy tests is required to make it true.
## Tests that depend on models
There are a few things to keep in mind when adding tests that depend on models.
1. If a required model is not already present, it should automatically be downloaded as part of the test setup.
2. If a model is already downloaded, it should not be re-downloaded unnecessarily.
3. Take reasonable care to keep the total number of models required for the tests low. Whenever possible, re-use models that are already required for other tests. If you are adding a new model, consider including a comment to explain why it is required/unique.
There are several utilities to help with model setup for tests. Here is a sample test that depends on a model:
```python
import pytest
import torch

from invokeai.backend.model_management.models.base import BaseModelType, ModelType
from invokeai.backend.util.test_utils import install_and_load_model


@pytest.mark.slow
def test_model(model_installer, torch_device):
    model_info = install_and_load_model(
        model_installer=model_installer,
        model_path_id_or_url="HF/dummy_model_id",
        model_name="dummy_model",
        base_model=BaseModelType.StableDiffusion1,
        model_type=ModelType.Dummy,
    )

    # build_dummy_input is a helper defined alongside the test (not shown here).
    dummy_input = build_dummy_input(torch_device)

    with torch.no_grad(), model_info as model:
        model.to(torch_device, dtype=torch.float32)
        output = model(dummy_input)

    # Validate output...
```
## Test Coverage
To review test coverage, append `--cov` to your pytest command:
```bash
pytest tests/ --cov
```
Test outcomes and coverage will be reported in the terminal. In addition, a more detailed report is created in both XML and HTML format in the `./coverage` folder. The HTML output is particularly helpful in identifying untested statements where coverage should be improved. The HTML report can be viewed by opening `./coverage/html/index.html`.
??? info "HTML coverage report output"
![html-overview](../assets/contributing/html-overview.png)
![html-detail](../assets/contributing/html-detail.png)

View File

@ -4,14 +4,21 @@
If you are looking to help with a code contribution, InvokeAI uses several different technologies under the hood: Python (Pydantic, FastAPI, diffusers) and TypeScript (React, Redux Toolkit, ChakraUI, Mantine, Konva). Familiarity with Stable Diffusion and image generation concepts is helpful, but not essential.
For more information, please review our area specific documentation:
## **Get Started**
To get started, take a look at our [new contributors checklist](newContributorChecklist.md).
Once you're set up, you can review the documentation specific to your area of interest:
* #### [InvokeAI Architecture](../ARCHITECTURE.md)
* #### [Frontend Documentation](development_guides/contributingToFrontend.md)
* #### [Frontend Documentation](./contributingToFrontend.md)
* #### [Node Documentation](../INVOCATIONS.md)
* #### [Local Development](../LOCAL_DEVELOPMENT.md)
If you don't feel ready to make a code contribution yet, no problem! You can also help out in other ways, such as [documentation](documentation.md) or [translation](translation.md).
If you don't feel ready to make a code contribution yet, no problem! You can also help out in other ways, such as [documentation](documentation.md), [translation](translation.md), or helping to support other users and triage issues as they're reported in GitHub.
There are two paths to making a development contribution:
@ -23,69 +30,20 @@ There are two paths to making a development contribution:
## Best Practices:
* Keep your pull requests small. Smaller pull requests are more likely to be accepted and merged
* Comments! Commenting your code helps reviwers easily understand your contribution
* Comments! Commenting your code helps reviewers easily understand your contribution
* Use Python's and TypeScript's typing systems, and consider using an editor with [LSP](https://microsoft.github.io/language-server-protocol/) support to streamline development
* Make all communications public. This ensures knowledge is shared with the whole community
## **How do I make a contribution?**
Never made an open source contribution before? Wondering how contributions work in our project? Here's a quick rundown!
Before starting these steps, ensure you have your local environment [configured for development](../LOCAL_DEVELOPMENT.md).
1. Find a [good first issue](https://github.com/invoke-ai/InvokeAI/contribute) that you are interested in addressing or a feature that you would like to add. Then, reach out to our team in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord to ensure you are set up for success.
2. Fork the [InvokeAI](https://github.com/invoke-ai/InvokeAI) repository to your GitHub profile. This means that you will have a copy of the repository under **your-GitHub-username/InvokeAI**.
3. Clone the repository to your local machine using:
```bash
git clone https://github.com/your-GitHub-username/InvokeAI.git
```
If you're unfamiliar with using Git through the command line, [GitHub Desktop](https://desktop.github.com) is an easy-to-use alternative with a UI. You can do all the same steps listed here, but through the interface.
4. Create a new branch for your fix using:
```bash
git checkout -b branch-name-here
```
5. Make the appropriate changes for the issue you are trying to address or the feature that you want to add.
6. Add the file contents of the changed files to the "snapshot" git uses to manage the state of the project, also known as the index:
```bash
git add insert-paths-of-changed-files-here
```
7. Store the contents of the index with a descriptive message.
```bash
git commit -m "Insert a short message of the changes made here"
```
8. Push the changes to the remote repository using
```bash
git push origin branch-name-here
```
9. Submit a pull request to the **main** branch of the InvokeAI repository.
10. Title the pull request with a short description of the changes made and the issue or bug number associated with your change. For example, you can title a pull request like so: "Added more log outputting to resolve #1234".
11. In the description of the pull request, explain the changes that you made, any issues you think exist with the pull request you made, and any questions you have for the maintainer. It's OK if your pull request is not perfect (no pull request is), the reviewer will be able to help you fix any problems and improve it!
12. Wait for the pull request to be reviewed by other collaborators.
13. Make changes to the pull request if the reviewer(s) recommend them.
14. Celebrate your success after your pull request is merged!
If you'd like to learn more about contributing to Open Source projects, here is a [Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github).
## **Where can I go for help?**
If you need help, you can ask questions in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord.
For frontend related work, **@pyschedelicious** is the best person to reach out to.
For frontend related work, **@psychedelicious** is the best person to reach out to.
For backend related work, please reach out to **@blessedcoolant**, **@lstein**, **@StAlKeR7779** or **@psychedelicious**.
For backend related work, please reach out to **@blessedcoolant**, **@lstein**, **@StAlKeR7779** or **@pyschedelicious**.
## **What does the Code of Conduct mean for me?**
Our [Code of Conduct](CODE_OF_CONDUCT.md) means that you are responsible for treating everyone on the project with respect and courtesy regardless of their identity. If you are the victim of any inappropriate behavior or comments as described in our Code of Conduct, we are here for you and will do the best to ensure that the abuser is reprimanded appropriately, per our code.
Our [Code of Conduct](../../CODE_OF_CONDUCT.md) means that you are responsible for treating everyone on the project with respect and courtesy regardless of their identity. If you are the victim of any inappropriate behavior or comments as described in our Code of Conduct, we are here for you and will do the best to ensure that the abuser is reprimanded appropriately, per our code.

View File

@ -10,4 +10,4 @@ When updating or creating documentation, please keep in mind InvokeAI is a tool
## Help & Questions
Please ping @imic1 or @hipsterusername in the [Discord](https://discord.com/channels/1020123559063990373/1049495067846524939) if you have any questions.
Please ping @imic or @hipsterusername in the [Discord](https://discord.com/channels/1020123559063990373/1049495067846524939) if you have any questions.

View File

@ -0,0 +1,68 @@
# New Contributor Guide
If you're a new contributor to InvokeAI or Open Source Projects, this is the guide for you.
## New Contributor Checklist
- [x] Set up your local development environment & fork of InvokAI by following [the steps outlined here](../../installation/020_INSTALL_MANUAL.md#developer-install)
- [x] Set up your local tooling with [this guide](InvokeAI/contributing/LOCAL_DEVELOPMENT/#developing-invokeai-in-vscode). Feel free to skip this step if you already have tooling you're comfortable with.
- [x] Familiarize yourself with [Git](https://www.atlassian.com/git) & our project structure by reading through the [development documentation](development.md)
- [x] Join the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord
- [x] Choose an issue to work on! This can be achieved by asking in the #dev-chat channel, tackling a [good first issue](https://github.com/invoke-ai/InvokeAI/contribute) or finding an item on the [roadmap](https://github.com/orgs/invoke-ai/projects/7). If nothing in any of those places catches your eye, feel free to work on something of interest to you!
- [x] Make your first Pull Request with the guide below
- [x] Happy development! Don't be afraid to ask for help - we're happy to help you contribute!
## How do I make a contribution?
Never made an open source contribution before? Wondering how contributions work in our project? Here's a quick rundown!
Before starting these steps, ensure you have your local environment [configured for development](../LOCAL_DEVELOPMENT.md).
1. Find a [good first issue](https://github.com/invoke-ai/InvokeAI/contribute) that you are interested in addressing or a feature that you would like to add. Then, reach out to our team in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord to ensure you are set up for success.
2. Fork the [InvokeAI](https://github.com/invoke-ai/InvokeAI) repository to your GitHub profile. This means that you will have a copy of the repository under **your-GitHub-username/InvokeAI**.
3. Clone the repository to your local machine using:
```bash
git clone https://github.com/your-GitHub-username/InvokeAI.git
```
If you're unfamiliar with using Git through the command line, [GitHub Desktop](https://desktop.github.com) is an easy-to-use alternative with a UI. You can do all the same steps listed here, but through the interface.
4. Create a new branch for your fix using:
```bash
git checkout -b branch-name-here
```
5. Make the appropriate changes for the issue you are trying to address or the feature that you want to add.
6. Add the file contents of the changed files to the "snapshot" git uses to manage the state of the project, also known as the index:
```bash
git add -A
```
7. Store the contents of the index with a descriptive message.
```bash
git commit -m "Insert a short message of the changes made here"
```
8. Push the changes to the remote repository using
```bash
git push origin branch-name-here
```
9. Submit a pull request to the **main** branch of the InvokeAI repository. If you're not sure how to, [follow this guide](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request)
10. Title the pull request with a short description of the changes made and the issue or bug number associated with your change. For example, you can title a pull request like so: "Added more log outputting to resolve #1234".
11. In the description of the pull request, explain the changes that you made, any issues you think exist with the pull request you made, and any questions you have for the maintainer. It's OK if your pull request is not perfect (no pull request is), the reviewer will be able to help you fix any problems and improve it!
12. Wait for the pull request to be reviewed by other collaborators.
13. Make changes to the pull request if the reviewer(s) recommend them.
14. Celebrate your success after your pull request is merged!
If you'd like to learn more about contributing to Open Source projects, here is a [Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github).
## Best Practices:
* Keep your pull requests small. Smaller pull requests are more likely to be accepted and merged
* Comments! Commenting your code helps reviewers easily understand your contribution
* Use Python's and TypeScript's typing systems, and consider using an editor with [LSP](https://microsoft.github.io/language-server-protocol/) support to streamline development
* Make all communications public. This ensures knowledge is shared with the whole community
## **Where can I go for help?**
If you need help, you can ask questions in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord.
For frontend related work, **@psychedelicious** is the best person to reach out to.
For backend related work, please reach out to **@blessedcoolant**, **@lstein**, **@StAlKeR7779** or **@psychedelicious**.

View File

@ -211,8 +211,8 @@ Here are the invoke> command that apply to txt2img:
| `--facetool <name>` | `-ft <name>` | `-ft gfpgan` | Select face restoration algorithm to use: gfpgan, codeformer |
| `--codeformer_fidelity` | `-cf <float>` | `0.75` | Used along with CodeFormer. Takes values between 0 and 1. 0 produces high quality but low accuracy. 1 produces high accuracy but low quality |
| `--save_original` | `-save_orig` | `False` | When upscaling or fixing faces, this will cause the original image to be saved rather than replaced. |
| `--variation <float>` | `-v<float>` | `0.0` | Add a bit of noise (0.0=none, 1.0=high) to the image in order to generate a series of variations. Usually used in combination with `-S<seed>` and `-n<int>` to generate a series a riffs on a starting image. See [Variations](../features/VARIATIONS.md). |
| `--with_variations <pattern>` | | `None` | Combine two or more variations. See [Variations](../features/VARIATIONS.md) for how to use this. |
| `--variation <float>` | `-v<float>` | `0.0` | Add a bit of noise (0.0=none, 1.0=high) to the image in order to generate a series of variations. Usually used in combination with `-S<seed>` and `-n<int>` to generate a series a riffs on a starting image. See [Variations](VARIATIONS.md). |
| `--with_variations <pattern>` | | `None` | Combine two or more variations. See [Variations](VARIATIONS.md) for how to use this. |
| `--save_intermediates <n>` | | `None` | Save the image from every nth step into an "intermediates" folder inside the output directory |
| `--h_symmetry_time_pct <float>` | | `None` | Create symmetry along the X axis at the desired percent complete of the generation process. (Must be between 0.0 and 1.0; set to a very small number like 0.0001 for just after the first step of generation.) |
| `--v_symmetry_time_pct <float>` | | `None` | Create symmetry along the Y axis at the desired percent complete of the generation process. (Must be between 0.0 and 1.0; set to a very small number like 0.0001 for just after the first step of generation.) |

View File

@ -126,6 +126,6 @@ amounts of image-to-image variation even when the seed is fixed and the
`-v` argument is very low. Others are more deterministic. Feel free to
experiment until you find the combination that you like.
Also be aware of the [Perlin Noise](OTHER.md#thresholding-and-perlin-noise-initialization-options)
Also be aware of the [Perlin Noise](../features/OTHER.md#thresholding-and-perlin-noise-initialization-options)
feature, which provides another way of introducing variability into your
image generation requests.

View File

@ -82,7 +82,7 @@ format of YAML files can be found
[here](https://circleci.com/blog/what-is-yaml-a-beginner-s-guide/).
You can fix a broken `invokeai.yaml` by deleting it and running the
configuration script again -- option [7] in the launcher, "Re-run the
configuration script again -- option [6] in the launcher, "Re-run the
configure script".
#### Reading Environment Variables
@ -159,7 +159,7 @@ groups in `invokeai.yaml`:
| `host` | `localhost` | Name or IP address of the network interface that the web server will listen on |
| `port` | `9090` | Network port number that the web server will listen on |
| `allow_origins` | `[]` | A list of host names or IP addresses that are allowed to connect to the InvokeAI API in the format `['host1','host2',...]` |
| `allow_credentials | `true` | Require credentials for a foreign host to access the InvokeAI API (don't change this) |
| `allow_credentials` | `true` | Require credentials for a foreign host to access the InvokeAI API (don't change this) |
| `allow_methods` | `*` | List of HTTP methods ("GET", "POST") that the web server is allowed to use when accessing the API |
| `allow_headers` | `*` | List of HTTP headers that the web server will accept when accessing the API |

View File

@ -1,13 +1,11 @@
---
title: ControlNet
title: Control Adapters
---
# :material-loupe: ControlNet
# :material-loupe: Control Adapters
## ControlNet
ControlNet is a powerful set of features developed by the open-source
community (notably, Stanford researcher
[**@ilyasviel**](https://github.com/lllyasviel)) that allows you to
@ -19,9 +17,6 @@ image generation, providing you with a way to direct the network
towards generating images that better fit your desired style or
outcome.
### How it works
ControlNet works by analyzing an input image, pre-processing that
image to identify relevant information that can be interpreted by each
specific ControlNet model, and then inserting that control information
@ -29,35 +24,21 @@ into the generation process. This can be used to adjust the style,
composition, or other aspects of the image to better achieve a
specific result.
### Models
#### Installation
InvokeAI provides access to a series of ControlNet models that provide
different effects or styles in your generated images. Currently
InvokeAI only supports "diffuser" style ControlNet models. These are
folders that contain the files `config.json` and/or
`diffusion_pytorch_model.safetensors` and
`diffusion_pytorch_model.fp16.safetensors`. The name of the folder is
the name of the model.
different effects or styles in your generated images.
***InvokeAI does not currently support checkpoint-format
ControlNets. These come in the form of a single file with the
extension `.safetensors`.***
To install ControlNet Models:
Diffuser-style ControlNet models are available at HuggingFace
(http://huggingface.co) and accessed via their repo IDs (identifiers
in the format "author/modelname"). The easiest way to install them is
1. The easiest way to install them is
to use the InvokeAI model installer application. Use the
`invoke.sh`/`invoke.bat` launcher to select item [5] and then navigate
`invoke.sh`/`invoke.bat` launcher to select item [4] and then navigate
to the CONTROLNETS section. Select the models you wish to install and
press "APPLY CHANGES". You may also enter additional HuggingFace
repo_ids in the "Additional models" textbox:
repo_ids in the "Additional models" textbox.
2. Using the "Add Model" function of the model manager, enter the HuggingFace Repo ID of the ControlNet. The ID is in the format "author/repoName"
![Model Installer -
Controlnetl](../assets/installing-models/model-installer-controlnet.png){:width="640px"}
Command-line users can launch the model installer using the command
`invokeai-model-install`.
_Be aware that some ControlNet models require additional code
functionality in order to work properly, so just installing a
@ -65,6 +46,17 @@ third-party ControlNet model may not have the desired effect._ Please
read and follow the documentation for installing a third party model
not currently included among InvokeAI's default list.
Currently InvokeAI **only** supports 🤗 Diffusers-format ControlNet models. These are
folders that contain the files `config.json` and/or
`diffusion_pytorch_model.safetensors` and
`diffusion_pytorch_model.fp16.safetensors`. The name of the folder is
the name of the model.
🤗 Diffusers-format ControlNet models are available at HuggingFace
(http://huggingface.co) and accessed via their repo IDs (identifiers
in the format "author/modelname").
#### ControlNet Models
The models currently supported include:
**Canny**:
@ -96,6 +88,8 @@ A model that generates normal maps from input images, allowing for more realisti
**Image Segmentation**:
A model that divides input images into segments or regions, each of which corresponds to a different object or part of the image. (More details coming soon)
**QR Code Monster**:
A model that helps generate creative QR codes that still scan. Can also be used to create images with text, logos or shapes within them.
**Openpose**:
The OpenPose control model allows for the identification of the general pose of a character by pre-processing an existing image with a clear human structure. With advanced options, Openpose can also detect the face or hands in the image.
@ -104,7 +98,7 @@ The OpenPose control model allows for the identification of the general pose of
The MediaPipe Face identification processor is able to clearly identify facial features in order to capture vivid expressions of human faces.
**Tile (experimental)**:
**Tile**:
The Tile model fills out details in the image to match the image, rather than the prompt. The Tile Model is a versatile tool that offers a range of functionalities. Its primary capabilities can be boiled down to two main behaviors:
@ -117,12 +111,10 @@ The Tile Model can be a powerful tool in your arsenal for enhancing image qualit
With Pix2Pix, you can input an image into the controlnet, and then "instruct" the model to change it using your prompt. For example, you can say "Make it winter" to add more wintry elements to a scene.
**Inpaint**: Coming Soon - Currently this model is available but not functional on the Canvas. An upcoming release will provide additional capabilities for using this model when inpainting.
Each of these models can be adjusted and combined with other ControlNet models to achieve different results, giving you even more control over your image generation process.
## Using ControlNet
### Using ControlNet
To use ControlNet, you can simply select the desired model and adjust both the ControlNet and Pre-processor settings to achieve the desired result. You can also use multiple ControlNet models at the same time, allowing you to achieve even more complex effects or styles in your generated images.
@ -134,3 +126,54 @@ Weight - Strength of the Controlnet model applied to the generation for the sect
Start/End - 0 represents the start of the generation, 1 represents the end. The Start/end setting controls what steps during the generation process have the ControlNet applied.
Additionally, each ControlNet section can be expanded in order to manipulate settings for the image pre-processor that adjusts your uploaded image before using it when you Invoke.
## T2I-Adapter
[T2I-Adapter](https://github.com/TencentARC/T2I-Adapter) is a tool similar to ControlNet that provides control information to guide the generation process. T2I-Adapter models tend to be smaller and more efficient than ControlNets.
##### Installation
To install T2I-Adapter Models:
1. The easiest way to install models is
to use the InvokeAI model installer application. Use the
`invoke.sh`/`invoke.bat` launcher to select item [5] and then navigate
to the T2I-Adapters section. Select the models you wish to install and
press "APPLY CHANGES". You may also enter additional HuggingFace
repo_ids in the "Additional models" textbox.
2. Using the "Add Model" function of the model manager, enter the HuggingFace Repo ID of the T2I-Adapter. The ID is in the format "author/repoName"
#### Usage
Each T2I Adapter has two settings that are applied.
Weight - Strength of the model applied to the generation for the section, defined by start/end.
Start/End - 0 represents the start of the generation, 1 represents the end. The Start/end setting controls what steps during the generation process have the ControlNet applied.
Additionally, each section can be expanded with the "Show Advanced" button in order to manipulate settings for the image pre-processor that adjusts your uploaded image before using it during the generation process.
## IP-Adapter
[IP-Adapter](https://ip-adapter.github.io) is a tool that enables image-prompt capabilities for text-to-image diffusion models. IP-Adapter works by analyzing the given image prompt to extract features, then passing those features to the UNet along with any other conditioning provided.
![IP-Adapter + T2I](https://github.com/tencent-ailab/IP-Adapter/raw/main/assets/demo/ip_adpter_plus_multi.jpg)
![IP-Adapter + IMG2IMG](https://raw.githubusercontent.com/tencent-ailab/IP-Adapter/main/assets/demo/image-to-image.jpg)
#### Installation
There are several ways to install IP-Adapter models with an existing InvokeAI installation:
1. Through the command line interface launched from the invoke.sh / invoke.bat scripts, option [4] to download models.
2. Through the Model Manager UI with models from the *Tools* section of [www.models.invoke.ai](https://www.models.invoke.ai). To do this, copy the repo ID from the desired model page, and paste it in the Add Model field of the model manager. **Note** Both the IP-Adapter and the Image Encoder must be installed for IP-Adapter to work. For example, the [SD 1.5 IP-Adapter](https://models.invoke.ai/InvokeAI/ip_adapter_plus_sd15) and [SD1.5 Image Encoder](https://models.invoke.ai/InvokeAI/ip_adapter_sd_image_encoder) must be installed to use IP-Adapter with SD1.5 based models.
3. **Advanced -- Not recommended:** Manually downloading the IP-Adapter and Image Encoder files - Image Encoder folders should be placed in the `models\any\clip_vision` folder. IP-Adapter model folders should be placed in the `ip-adapter` folder under the relevant base model folder of the Invoke root directory. For example, for the SDXL IP-Adapter, files should be added to the `model/sdxl/ip_adapter/` folder.
#### Using IP-Adapter
IP-Adapter can be used by navigating to the *Control Adapters* options and enabling IP-Adapter.
IP-Adapter requires an image to be used as the Image Prompt. It can also be used in conjunction with text prompts, Image-to-Image, Inpainting, Outpainting, ControlNets and LoRAs.
Each IP-Adapter has two settings that are applied to the IP-Adapter:
* Weight - Strength of the IP-Adapter model applied to the generation for the section, defined by start/end
* Start/End - 0 represents the start of the generation, 1 represents the end. The Start/end setting controls what steps during the generation process have the IP-Adapter applied.

docs/features/LORAS.md (new file, 53 lines)
View File

@ -0,0 +1,53 @@
---
title: LoRAs & LCM-LoRAs
---
# :material-library-shelves: LoRAs & LCM-LoRAs
With the advances in research, many new capabilities are available to customize the knowledge and understanding of novel concepts not originally contained in the base model.
## LoRAs
Low-Rank Adaptation (LoRA) files are models that customize the output of Stable Diffusion
image generation. Larger than embeddings, but much smaller than full
models, they augment SD with improved understanding of subjects and
artistic styles.
Unlike TI files, LoRAs do not introduce novel vocabulary into the
model's known tokens. Instead, LoRAs augment the model's weights that
are applied to generate imagery. LoRAs may be supplied with a
"trigger" word that they have been explicitly trained on, or may
simply apply their effect without being triggered.
LoRAs are typically stored in .safetensors files, which are the most
secure way to store and transmit these types of weights. You may
install any number of `.safetensors` LoRA files simply by copying them
into the `autoimport/lora` directory of the corresponding InvokeAI models
directory (usually `invokeai` in your home directory).
To use these when generating, open the LoRA menu item in the options
panel, select the LoRAs you want to apply and ensure that they have
the appropriate weight recommended by the model provider. Typically,
most LoRAs perform best at a weight of .75-1.
## LCM-LoRAs
Latent Consistency Models (LCMs) allow a reduced number of steps to be used to generate images with Stable Diffusion. They are created by distilling base models, producing models that only require a small number of steps to generate images. However, LCMs require that any fine-tune of a base model be distilled before it can be used as an LCM.
LCM-LoRAs are models that provide the benefit of LCMs but are able to be used as LoRAs and applied to any fine tune of a base model. LCM-LoRAs are created by training a small number of adapters, rather than distilling the entire fine-tuned base model. The resulting LoRA can be used the same way as a standard LoRA, but with a greatly reduced step count. This enables SDXL images to be generated up to 10x faster than without the use of LCM-LoRAs.
**Using LCM-LoRAs**
LCM-LoRAs are natively supported in InvokeAI throughout the application. To get started, install any diffusers format LCM-LoRAs using the model manager and select it in the LoRA field.
There are a number of parameter differences between LCM-LoRA and standard generation:
- When using LCM-LoRAs, the LoRA strength should be lower than if using a standard LoRA, with 0.35 recommended as a starting point.
- The LCM scheduler should be used for generation
- CFG-Scale should be reduced to ~1
- Steps should be reduced in the range of 4-8
Standard LoRAs can also be used alongside LCM-LoRAs, but will also require a lower strength, with 0.45 being recommended as a starting point.
More information can be found here: https://huggingface.co/blog/lcm_lora#fast-inference-with-sdxl-lcm-loras

View File

@ -2,17 +2,51 @@
title: Model Merging
---
# :material-image-off: Model Merging
## How to Merge Models
As of version 2.3, InvokeAI comes with a script that allows you to
merge two or three diffusers-type models into a new merged model. The
InvokeAI provides the ability to merge two or three diffusers-type models into a new merged model. The
resulting model will combine characteristics of the original, and can
be used to teach an old model new tricks.
## How to Merge Models
Model Merging can be done by navigating to the Model Manager and clicking the "Merge Models" tab. From there, you can select the models and settings you want to use to merge the models.
## Settings
* Model Selection: there are three multiple choice fields that
display all the diffusers-style models that InvokeAI knows about.
If you do not see the model you are looking for, then it is probably
a legacy checkpoint model and needs to be converted using the
"Convert" option in the Web-based Model Manager tab.
You must select at least two models to merge. The third can be left
at "None" if you desire.
* Alpha: This is the ratio to use when combining models. It ranges
from 0 to 1. The higher the value, the more weight is given to the
second and (optionally) third models. So if you have two models named "A"
and "B", an alpha value of 0.25 will give you a merged model that is
75% A and 25% B.
* Interpolation Method: This is the method used to combine
weights. The options are "weighted_sum" (the default), "sigmoid",
"inv_sigmoid" and "add_difference". Each produces slightly different
results. When three models are in use, only "add_difference" is
available. A sketch of what these methods compute appears after this list.
* Save Location: The location you want the merged model to be saved in. Default is in the InvokeAI root folder
* Name for merged model: This is the name for the new model. Please
use InvokeAI conventions - only alphanumeric letters and the
characters ".+-".
* Ignore Mismatches / Force: Not all models are compatible with each other. The merge
script will check for compatibility and refuse to merge ones that
are incompatible. Set this checkbox to try merging anyway.
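As a rough sketch of what the first and last of these interpolation methods compute, assuming simple per-tensor arithmetic (the actual merge code may differ in detail):
```py
import torch


def weighted_sum(a: torch.Tensor, b: torch.Tensor, alpha: float) -> torch.Tensor:
    # alpha is the weight given to the second model
    return (1 - alpha) * a + alpha * b


def add_difference(a: torch.Tensor, b: torch.Tensor, c: torch.Tensor, alpha: float) -> torch.Tensor:
    # Adds the scaled difference between the second and third models to the first.
    return a + alpha * (b - c)
```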
You may run the merge script by starting the invoke launcher
(`invoke.sh` or `invoke.bat`) and choosing the option for _merge
(`invoke.sh` or `invoke.bat`) and choosing the option (4) for _merge
models_. This will launch a text-based interactive user interface that
prompts you to select the models to merge, how to merge them, and the
merged model name.
@ -40,34 +74,4 @@ this to get back.
If the merge runs successfully, it will create a new diffusers model
under the selected name and register it with InvokeAI.
## The Settings
* Model Selection -- there are three multiple choice fields that
display all the diffusers-style models that InvokeAI knows about.
If you do not see the model you are looking for, then it is probably
a legacy checkpoint model and needs to be converted using the
`invoke` command-line client and its `!optimize` command. You
must select at least two models to merge. The third can be left at
"None" if you desire.
* Alpha -- This is the ratio to use when combining models. It ranges
from 0 to 1. The higher the value, the more weight is given to the
2d and (optionally) 3d models. So if you have two models named "A"
and "B", an alpha value of 0.25 will give you a merged model that is
25% A and 75% B.
* Interpolation Method -- This is the method used to combine
weights. The options are "weighted_sum" (the default), "sigmoid",
"inv_sigmoid" and "add_difference". Each produces slightly different
results. When three models are in use, only "add_difference" is
available. (TODO: cite a reference that describes what these
interpolation methods actually do and how to decide among them).
* Force -- Not all models are compatible with each other. The merge
script will check for compatibility and refuse to merge ones that
are incompatible. Set this checkbox to try merging anyway.
* Name for merged model - This is the name for the new model. Please
use InvokeAI conventions - only alphanumeric letters and the
characters ".+-".

View File

@ -142,7 +142,7 @@ Prompt2prompt `.swap()` is not compatible with xformers, which will be temporari
The `prompt2prompt` code is based off
[bloc97's colab](https://github.com/bloc97/CrossAttentionControl).
### Escaping parentheses () and speech marks ""
### Escaping parentheses and speech marks
If the model you are using has parentheses () or speech marks "" as part of its
syntax, you will need to "escape" these using a backslash, so that `(my_keyword)`
@ -246,7 +246,7 @@ To create a Dynamic Prompt, follow these steps:
Within the braces, separate each option using a vertical bar |.
If you want to include multiple options from a single group, prefix with the desired number and $$.
For instance: A {house|apartment|lodge|cottage} in {summer|winter|autumn|spring} designed in {2$$style1|style2|style3}.
For instance: A {house|apartment|lodge|cottage} in {summer|winter|autumn|spring} designed in {style1|style2|style3}.
### How Dynamic Prompts Work
Once a Dynamic Prompt is configured, the system generates an array of combinations using the options provided. Each group of options in curly braces is treated independently, with the system selecting one option from each group. For a prefixed set (e.g., 2$$), the system will select two distinct options.
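To make the expansion concrete, here is a small sketch of the basic behaviour (an illustration only, not InvokeAI's actual implementation; the `N$$` prefix handling is omitted):
```py
import itertools
import re


def expand_dynamic_prompt(prompt: str) -> list[str]:
    """Expand every {a|b|c} group into all combinations, one option per group."""
    groups = [group.split("|") for group in re.findall(r"\{([^{}]*)\}", prompt)]
    template = re.sub(r"\{[^{}]*\}", "{}", prompt)
    return [template.format(*combo) for combo in itertools.product(*groups)]


print(expand_dynamic_prompt("A {house|cottage} in {summer|winter}"))
# -> ['A house in summer', 'A house in winter',
#     'A cottage in summer', 'A cottage in winter']
```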
@ -273,3 +273,36 @@ Below are some useful strategies for creating Dynamic Prompts:
Experiment with different quantities for the prefix. For example, 3$$ will select three distinct options.
Be aware of coherence in your prompts. Although the system can generate all possible combinations, not all may semantically make sense. Therefore, carefully choose the options for each group.
Always review and fine-tune the generated prompts as needed. While Dynamic Prompts can help you generate a multitude of combinations, the final polishing and refining remain in your hands.
## SDXL Prompting
Prompting with SDXL is slightly different than prompting with SD1.5 or SD2.1 models - SDXL expects a prompt _and_ a style.
### Prompting
<figure markdown>
![SDXL prompt boxes in InvokeAI](../assets/prompt_syntax/sdxl-prompt.png)
</figure>
In the prompt box, enter a positive or negative prompt as you normally would.
For the style box you can enter a style that you want the image to be generated in. You can use styles from this example list, or any other style you wish: anime, photographic, digital art, comic book, fantasy art, analog film, neon punk, isometric, low poly, origami, line art, cinematic, 3d model, pixel art, etc.
### Concatenated Prompts
InvokeAI also has the option to concatenate the prompt and style inputs, by pressing the "link" button in the Positive Prompt box.
This concatenates the prompt & style inputs, and passes the joined prompt and style to the SDXL model.
![SDXL concatenated prompt boxes in InvokeAI](../assets/prompt_syntax/sdxl-prompt-concatenated.png)

View File

@ -1,12 +1,3 @@
---
title: Textual Inversion Embeddings and LoRAs
---
# :material-library-shelves: Textual Inversions and LoRAs
With the advances in research, many new capabilities are available to customize the knowledge and understanding of novel concepts not originally contained in the base model.
## Using Textual Inversion Files
Textual inversion (TI) files are small models that customize the output of
@ -21,15 +12,16 @@ TI files that you'll encounter are `.pt` and `.bin` files, which are produced by
different TI training packages. InvokeAI supports both formats, but its
[built-in TI training system](TRAINING.md) produces `.pt`.
The [Hugging Face company](https://huggingface.co/sd-concepts-library) has
amassed a large ligrary of &gt;800 community-contributed TI files covering a
[Hugging Face](https://huggingface.co/sd-concepts-library) has
amassed a large library of &gt;800 community-contributed TI files covering a
broad range of subjects and styles. You can also install your own or others' TI files
by placing them in the designated directory for the compatible model type
### An Example
Here are a few examples to illustrate how it works. All these images were
generated using the command-line client and the Stable Diffusion 1.5 model:
Here are a few examples to illustrate how it works. All these images
were generated using the legacy command-line client and the Stable
Diffusion 1.5 model:
| Japanese gardener | Japanese gardener &lt;ghibli-face&gt; | Japanese gardener &lt;hoi4-leaders&gt; | Japanese gardener &lt;cartoona-animals&gt; |
| :--------------------------------: | :-----------------------------------: | :------------------------------------: | :----------------------------------------: |
@ -60,29 +52,4 @@ files it finds there for compatible models. At startup you will see a message si
>> Current embedding manager terms: <HOI4-Leader>, <princess-knight>
```
To use these when generating, simply type the `<` key in your prompt to open the Textual Inversion WebUI and
select the embedding you'd like to use. This UI has type-ahead support, so you can easily find supported embeddings.
## Using LoRAs
LoRA files are models that customize the output of Stable Diffusion
image generation. Larger than embeddings, but much smaller than full
models, they augment SD with improved understanding of subjects and
artistic styles.
Unlike TI files, LoRAs do not introduce novel vocabulary into the
model's known tokens. Instead, LoRAs augment the model's weights that
are applied to generate imagery. LoRAs may be supplied with a
"trigger" word that they have been explicitly trained on, or may
simply apply their effect without being triggered.
LoRAs are typically stored in .safetensors files, which are the most
secure way to store and transmit these types of weights. You may
install any number of `.safetensors` LoRA files simply by copying them
into the `autoimport/lora` directory of the corresponding InvokeAI models
directory (usually `invokeai` in your home directory).
To use these when generating, open the LoRA menu item in the options
panel, select the LoRAs you want to apply and ensure that they have
the appropriate weight recommended by the model provider. Typically,
most LoRAs perform best at a weight of .75-1.
select the embedding you'd like to use. This UI has type-ahead support, so you can easily find supported embeddings.

View File

@ -43,27 +43,22 @@ into the directory
InvokeAI 2.3 and higher comes with a text console-based training front
end. From within the `invoke.sh`/`invoke.bat` Invoke launcher script,
start the front end by selecting choice (3):
start the training tool by selecting choice (3):
```sh
Do you want to generate images using the
1: Browser-based UI
2: Command-line interface
3: Run textual inversion training
4: Merge models (diffusers type only)
5: Download and install models
6: Change InvokeAI startup options
7: Re-run the configure script to fix a broken install
8: Open the developer console
9: Update InvokeAI
10: Command-line help
Q: Quit
Please enter 1-10, Q: [1]
1 "Generate images with a browser-based interface"
2 "Explore InvokeAI nodes using a command-line interface"
3 "Textual inversion training"
4 "Merge models (diffusers type only)"
5 "Download and install models"
6 "Change InvokeAI startup options"
7 "Re-run the configure script to fix a broken install or to complete a major upgrade"
8 "Open the developer console"
9 "Update InvokeAI"
```
From the command line, with the InvokeAI virtual environment active,
you can launch the front end with the command `invokeai-ti --gui`.
Alternatively, you can select option (8) to open the developer console; from there, with the InvokeAI virtual environment active, you can launch the front end with the command `invokeai-ti --gui`.
This will launch a text-based front end that will look like this:

docs/features/UTILITIES.md (new file, 336 lines)
View File

@ -0,0 +1,336 @@
---
title: Command-line Utilities
---
# :material-file-document: Utilities
# Command-line Utilities
InvokeAI comes with several scripts that are accessible via the
command line. To access these commands, start the "developer's
console" from the launcher (`invoke.bat` menu item [7]). Users who are
familiar with Python can alternatively activate InvokeAI's virtual
environment (typically, but not necessarily `invokeai/.venv`).
In the developer's console, type the script's name to run it. To get a
synopsis of what a utility does and the command-line arguments it
accepts, pass it the `-h` argument, e.g.
```bash
invokeai-merge -h
```
## **invokeai-web**
This script launches the web server and is effectively identical to
selecting option [1] in the launcher. An advantage of launching the
server from the command line is that you can override any
configuration option in `invokeai.yaml` using like-named command-line
arguments. For example, to temporarily change the size of the RAM
cache to 7 GB, you can launch as follows:
```bash
invokeai-web --ram 7
```
## **invokeai-merge**
This is the model merge script, the same as launcher option [3]. Call
it with the `--gui` command-line argument to start the interactive
console-based GUI. Alternatively, you can run it non-interactively
using command-line arguments as illustrated in the example below which
merges models named `stable-diffusion-1.5` and `inkdiffusion` into a new model named
`my_new_model`:
```bash
invokeai-merge --force --base-model sd-1 --models stable-diffusion-1.5 inkdiffusion --merged_model_name my_new_model
```
## **invokeai-ti**
This is the textual inversion training script that is run by launcher
option [2]. Call it with `--gui` to run the interactive console-based
front end. It can also be run non-interactively. It has about a
zillion arguments, but a typical training session can be launched
with:
```bash
invokeai-ti --model stable-diffusion-1.5 \
--placeholder_token 'jello' \
--learnable_property object \
--num_train_epochs 50 \
--train_data_dir /path/to/training/images \
--output_dir /path/to/trained/model
```
(Note that \\ is the Linux/Mac long-line continuation character. Use ^
in Windows).
## **invokeai-install**
This is the console-based model install script that is run by launcher
option [4]. If called without arguments, it will launch the
interactive console-based interface. It can also be used
non-interactively to list, add and remove models as shown by these
examples:
* This will download and install three models from CivitAI, HuggingFace,
and local disk:
```bash
invokeai-install --add https://civitai.com/api/download/models/161302 ^
gsdf/Counterfeit-V3.0 ^
D:\Models\merge_model_two.safetensors
```
(Note that ^ is the Windows long-line continuation character. Use \\ on
Linux/Mac).
* This will list installed models of type `main`:
```bash
invokeai-model-install --list-models main
```
* This will delete the models named `voxel-ish` and `realisticVision`:
```bash
invokeai-model-install --delete voxel-ish realisticVision
```
## **invokeai-configure**
This is the console-based configure script that ran when InvokeAI was
first installed. You can run it again at any time to change the
configuration or repair a broken install.
Called without any arguments, `invokeai-configure` enters interactive
mode with two screens. The first screen is a form that provides access
to most of InvokeAI's configuration options. The second screen lets
you download, add, and delete models interactively. When you exit the
second screen, the script will add any missing "support models"
needed for core functionality, and any selected "sd weights" which are
the model checkpoint/diffusers files.
This behavior can be changed via a series of command-line
arguments. Here are some of the useful ones:
* `invokeai-configure --skip-sd-weights --skip-support-models`
This will run just the configuration part of the utility, skipping
downloading of support models and stable diffusion weights.
* `invokeai-configure --yes`
This will run the configure script non-interactively. It will set the
configuration options to their default values, install/repair support
models, and download the "recommended" set of SD models.
* `invokeai-configure --yes --default_only`
This will run the configure script non-interactively. In contrast to
the previous command, it will only download the default SD model,
Stable Diffusion v1.5.
* `invokeai-configure --yes --default_only --skip-sd-weights`
This is similar to the previous command, but will not download any
SD models at all. It is usually used to repair a broken install.
By default, `invokeai-configure` runs on the currently active InvokeAI
root folder. To run it against a different root, pass it the `--root
</path/to/root>` argument.
Lastly, you can use `invokeai-configure` to create a working root
directory entirely from scratch. Assuming you wish to make a root directory
named `InvokeAI-New`, run this command:
```bash
invokeai-configure --root InvokeAI-New --yes --default_only
```
This will create a minimally functional root directory. You can now
launch the web server against it with `invokeai-web --root InvokeAI-New`.
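Putting the two steps together (the directory name is just an example):
```bash
# create a minimal root directory non-interactively...
invokeai-configure --root InvokeAI-New --yes --default_only
# ...then launch the web server against it
invokeai-web --root InvokeAI-New
```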
## **invokeai-update**
This is the interactive console-based script that is run by launcher
menu item [8] to update to a new version of InvokeAI. It takes no
command-line arguments.
## **invokeai-metadata**
This is a script which takes a list of InvokeAI-generated images and
outputs their metadata in the same JSON format that you get from the
`</>` button in the Web GUI. For example:
```bash
$ invokeai-metadata ffe2a115-b492-493c-afff-7679aa034b50.png
ffe2a115-b492-493c-afff-7679aa034b50.png:
{
"app_version": "3.1.0",
"cfg_scale": 8.0,
"clip_skip": 0,
"controlnets": [],
"generation_mode": "sdxl_txt2img",
"height": 1024,
"loras": [],
"model": {
"base_model": "sdxl",
"model_name": "stable-diffusion-xl-base-1.0",
"model_type": "main"
},
"negative_prompt": "",
"negative_style_prompt": "",
"positive_prompt": "military grade sushi dinner for shock troopers",
"positive_style_prompt": "",
"rand_device": "cpu",
"refiner_cfg_scale": 7.5,
"refiner_model": {
"base_model": "sdxl-refiner",
"model_name": "sd_xl_refiner_1.0",
"model_type": "main"
},
"refiner_negative_aesthetic_score": 2.5,
"refiner_positive_aesthetic_score": 6.0,
"refiner_scheduler": "euler",
"refiner_start": 0.8,
"refiner_steps": 20,
"scheduler": "euler",
"seed": 387129902,
"steps": 25,
"width": 1024
}
```
You may list multiple files on the command line.
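For instance, to dump the metadata of two images at once (filenames hypothetical):
```bash
invokeai-metadata first-image.png second-image.png
```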
## **invokeai-import-images**
InvokeAI uses a database to store information about images it
generated, and just copying the image files from one InvokeAI root
directory to another does not automatically import those images into
the destination's gallery. This script allows you to bulk import
images generated by one instance of InvokeAI into a gallery maintained
by another. It also works on images generated by older versions of
InvokeAI, going way back to version 1.
This script has an interactive mode only. The following example shows
it in action:
```bash
$ invokeai-import-images
===============================================================================
This script will import images generated by earlier versions of
InvokeAI into the currently installed root directory:
/home/XXXX/invokeai-main
If this is not what you want to do, type ctrl-C now to cancel.
===============================================================================
= Configuration & Settings
Found invokeai.yaml file at /home/XXXX/invokeai-main/invokeai.yaml:
Database : /home/XXXX/invokeai-main/databases/invokeai.db
Outputs : /home/XXXX/invokeai-main/outputs/images
Use these paths for import (yes) or choose different ones (no) [Yn]:
Inputs: Specify absolute path containing InvokeAI .png images to import: /home/XXXX/invokeai-2.3/outputs/images/
Include files from subfolders recursively [yN]?
Options for board selection for imported images:
1) Select an existing board name. (found 4)
2) Specify a board name to create/add to.
3) Create/add to board named 'IMPORT'.
4) Create/add to board named 'IMPORT' with the current datetime string appended (e.g. IMPORT_20230919T203519Z).
5) Create/add to board named 'IMPORT' with the original file's app_version appended (e.g. IMPORT_2.2.5).
Specify desired board option: 3
===============================================================================
= Import Settings Confirmation
Database File Path : /home/XXXX/invokeai-main/databases/invokeai.db
Outputs/Images Directory : /home/XXXX/invokeai-main/outputs/images
Import Image Source Directory : /home/XXXX/invokeai-2.3/outputs/images/
Recurse Source SubDirectories : No
Count of .png file(s) found : 5785
Board name option specified : IMPORT
Database backup will be taken at : /home/XXXX/invokeai-main/databases/backup
Notes about the import process:
- Source image files will not be modified, only copied to the outputs directory.
- If the same file name already exists in the destination, the file will be skipped.
- If the same file name already has a record in the database, the file will be skipped.
- Invoke AI metadata tags will be updated/written into the imported copy only.
- On the imported copy, only Invoke AI known tags (latest and legacy) will be retained (dream, sd-metadata, invokeai, invokeai_metadata)
- A property 'imported_app_version' will be added to metadata that can be viewed in the UI's metadata viewer.
- The new 3.x InvokeAI outputs folder structure is flat, so recursively found source images will all be placed into the single outputs/images folder.
Do you wish to continue with the import [Yn] ?
Making DB Backup at /home/XXXX/invokeai-main/databases/backup/backup-20230919T203519Z-invokeai.db...Done!
===============================================================================
Importing /home/XXXX/invokeai-2.3/outputs/images/17d09907-297d-4db3-a18a-60b337feac66.png
... (5785 more lines) ...
===============================================================================
= Import Complete - Elapsed Time: 0.28 second(s)
Source File(s) : 5785
Total Imported : 5783
Skipped b/c file already exists on disk : 1
Skipped b/c file already exists in db : 0
Errors during import : 1
```
## **invokeai-db-maintenance**
This script helps maintain the integrity of your InvokeAI database by
finding and fixing three problems that can arise over time:
1. An image was manually deleted from the outputs directory, leaving a
dangling image record in the InvokeAI database. This will cause a
black image to appear in the gallery. This is an "orphaned database
image record." The script can fix this by running a "clean"
operation on the database, removing the orphaned entries.
2. An image is present in the outputs directory but there is no
corresponding entry in the database. This can happen when the image
is added manually to the outputs directory, or if a crash occurred
after the image was generated but before the database was
completely updated. The symptom is that the image is present in the
outputs folder but doesn't appear in the InvokeAI gallery. This is
called an "orphaned image file." The script can fix this problem by
running an "archive" operation in which orphaned files are moved
into a directory named `outputs/images-archive`. If you wish, you
can then run `invokeai-import-images` to reimport these images back
into the database.
3. The thumbnail for an image is missing, again causing a black
gallery thumbnail. This is fixed by running the "thumbnails"
operation, which simply regenerates and re-registers the missing
thumbnail.
You can find and fix all three of these problems in a single go by
executing this command:
```bash
invokeai-db-maintenance --operation all
```
Or you can run just the clean and thumbnail operations like this:
```bash
invokeai-db-maintenance --operation clean thumbnail
```
If called without any arguments, the script will ask you which
operations you wish to perform.
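For instance, to archive orphaned image files on their own (assuming the operation name matches the quoted term above), you would run:
```bash
invokeai-db-maintenance --operation archive
```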
## **invokeai-migrate3**
This script will migrate settings and models (but not images!) from an
InvokeAI v2.3 root folder to an InvokeAI 3.X folder. Call it with the
source and destination root folders like this:
```bash
invokeai-migrate3 --from ~/invokeai-2.3 --to invokeai-3.1.1
```
Both directories must previously have been properly created and
initialized by `invokeai-configure`. If you wish to migrate the images
contained in the older root as well, you can use the
`invokeai-import-images` script described earlier.
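As a sketch of the full sequence when the destination root does not yet exist (directory names hypothetical), you might first create and initialize it, then migrate:
```bash
# create and initialize the destination root
invokeai-configure --root invokeai-3.1.1 --yes --default_only
# copy settings and models across from the 2.3 root
invokeai-migrate3 --from ~/invokeai-2.3 --to invokeai-3.1.1
```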
---
Copyright (c) 2023, Lincoln Stein and the InvokeAI Development Team

View File

@ -20,7 +20,7 @@ a single convenient digital artist-optimized user interface.
### * [Prompt Engineering](PROMPTS.md)
Get the images you want with the InvokeAI prompt engineering language.
### * The [LoRA, LyCORIS and Textual Inversion Models](CONCEPTS.md)
### * The [LoRA, LyCORIS, LCM-LoRA Models](CONCEPTS.md)
Add custom subjects and styles using a variety of fine-tuned models.
### * [ControlNet](CONTROLNET.md)
@ -28,7 +28,7 @@ Learn how to install and use ControlNet models for fine control over
image output.
### * [Image-to-Image Guide](IMG2IMG.md)
Use a seed image to build new creations in the CLI.
Use a seed image to build new creations.
## Model Management
@ -40,7 +40,7 @@ guide also covers optimizing models to load quickly.
Teach an old model new tricks. Merge 2-3 models together to create a
new model that combines characteristics of the originals.
### * [Textual Inversion](TRAINING.md)
### * [Textual Inversion](TEXTUAL_INVERSIONS.md)
Personalize models by adding your own style or subjects.
## Other Features
@ -51,6 +51,9 @@ Prevent InvokeAI from displaying unwanted racy images.
### * [Controlling Logging](LOGGING.md)
Control how InvokeAI logs status messages.
### * [Command-line Utilities](UTILITIES.md)
A list of the command-line utilities available with InvokeAI.
<!-- OUT OF DATE
### * [Miscellaneous](OTHER.md)
Run InvokeAI on Google Colab, generate images with repeating patterns,

docs/help/FAQ.md Normal file
View File

@ -0,0 +1,43 @@
# FAQs
**Where do I get started? How can I install Invoke?**
- You can download the latest installers [here](https://github.com/invoke-ai/InvokeAI/releases) - Note that any releases marked as *pre-release* are in a beta state. You may experience some issues, but we appreciate your help testing those! For stable/reliable installations, please install the **[Latest Release](https://github.com/invoke-ai/InvokeAI/releases/latest)**
**How can I download models? Can I use models I already have downloaded?**
- Models can be downloaded through the model manager, or through option [4] in the invoke.bat/invoke.sh launcher script. To download a model through the Model Manager, use the HuggingFace Repo ID by pressing the “Copy” button next to the repository name. Alternatively, to download a model from CivitAI, use the download link in the Model Manager.
- Models that are already downloaded can be used by creating a symlink to the model location in the `autoimport` folder (see the sketch below) or by using the Model Manager's “Scan for Models” function.
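As a minimal sketch of the symlink approach on Linux/macOS (the model path and the `autoimport/main` subfolder are assumptions; check your own root's layout):
```bash
ln -s /path/to/my-model.safetensors ~/invokeai/autoimport/main/
```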
**My images are taking a long time to generate. How can I speed up generation?**
- A common solution is to reduce the size of your RAM & VRAM cache to 0.25. This ensures your system has enough memory to generate images.
- Additionally, check the [hardware requirements](https://invoke-ai.github.io/InvokeAI/#hardware-requirements) to ensure that your system is capable of generating images.
- Lastly, double check your generations are happening on your GPU (if you have one). InvokeAI will log what is being used for generation upon startup.
**I've installed Python on Windows but the installer says it can't find it?**
- Ensure that you checked **'Add python.exe to PATH'** when installing Python. This option can be found at the bottom of the Python installer window. If you already have Python installed, this can be done with the modify/repair feature of the installer.
**I've installed everything successfully but I still get an error about Triton when starting Invoke?**
- This can be safely ignored. InvokeAI doesn't use Triton, but if you are on Linux and wish to dismiss the error, you can install Triton.
**I updated to 3.4.0 and now xFormers can't load C++/CUDA?**
- An issue occurred with your PyTorch update. Follow these steps to fix it:
1. Launch your invoke.bat / invoke.sh and select the option to open the developer console
2. Run: `pip install ".[xformers]" --upgrade --force-reinstall --extra-index-url https://download.pytorch.org/whl/cu121`
- If you run into an error with `typing_extensions`, re-open the developer console and run: `pip install -U typing-extensions`
**It says my pip is out of date - is that why my install isn't working?**
- An out-of-date pip won't cause an installation to fail. The actual cause of the error can likely be found above the message saying that pip is out of date.
- If you saw that warning but the install went well, don't worry about it (but you can update pip afterwards if you'd like).
**How can I generate the exact same image that I found on the internet?**
- Most example images with prompts that you'll find on the internet have been generated using different software, so you can't expect to get identical results. In order to reproduce an image, you need to replicate the exact settings and processing steps, including (but not limited to) the model, the positive and negative prompts, the seed, the sampler, the exact image size, any upscaling steps, etc.
**Where can I get more help?**
- Create an issue on [GitHub](https://github.com/invoke-ai/InvokeAI/issues) or post in the [#help channel](https://discord.com/channels/1020123559063990373/1149510134058471514) of the InvokeAI Discord

View File

@ -57,7 +57,9 @@ Prompts provide the models directions on what to generate. As a general rule of
Models are the magic that power InvokeAI. These files represent the output of training a machine on understanding massive amounts of images - providing them with the capability to generate new images using just a text description of what you'd like to see. (Like Stable Diffusion!)
Invoke offers a simple way to download several different models upon installation, but many more can be discovered online, including at ****. Each model can produce a unique style of output, based on the images it was trained on - Try out different models to see which best fits your creative vision!
Invoke offers a simple way to download several different models upon installation, but many more can be discovered online, including at https://models.invoke.ai
Each model can produce a unique style of output, based on the images it was trained on - Try out different models to see which best fits your creative vision!
- *Models that contain “inpainting” in the name are designed for use with the inpainting feature of the Unified Canvas*

View File

@ -15,7 +15,8 @@ title: Home
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@fortawesome/fontawesome-free@6.2.1/css/fontawesome.min.css">
<style>
.button {
width: 300px;
width: 100%;
max-width: 100%;
height: 50px;
background-color: #448AFF;
color: #fff;
@ -27,8 +28,9 @@ title: Home
.button-container {
display: grid;
grid-template-columns: repeat(3, 300px);
grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
gap: 20px;
justify-content: center;
}
.button:hover {
@ -99,16 +101,13 @@ Mac and Linux machines, and runs on GPU cards with as little as 4 GB of RAM.
<div align="center"><img src="assets/invoke-web-server-1.png" width=640></div>
!!! Note
This project is rapidly evolving. Please use the [Issues tab](https://github.com/invoke-ai/InvokeAI/issues) to report bugs and make feature requests. Be sure to use the provided templates, as this will help improve response time.
## :octicons-link-24: Quick Links
<div class="button-container">
<a href="installation/INSTALLATION"> <button class="button">Installation</button> </a>
<a href="features/"> <button class="button">Features</button> </a>
<a href="help/gettingStartedWithAI/"> <button class="button">Getting Started</button> </a>
<a href="help/FAQ/"> <button class="button">FAQ</button> </a>
<a href="contributing/CONTRIBUTING/"> <button class="button">Contributing</button> </a>
<a href="https://github.com/invoke-ai/InvokeAI/"> <button class="button">Code and Downloads</button> </a>
<a href="https://github.com/invoke-ai/InvokeAI/issues"> <button class="button">Bug Reports </button> </a>
@ -141,10 +140,10 @@ Mac and Linux machines, and runs on GPU cards with as little as 4 GB of RAM.
<!-- seperator -->
### Prompt Engineering
- [Prompt Syntax](features/PROMPTS.md)
- [Generating Variations](features/VARIATIONS.md)
### InvokeAI Configuration
- [Guide to InvokeAI Runtime Settings](features/CONFIGURATION.md)
- [Database Maintenance and other Command Line Utilities](features/UTILITIES.md)
## :octicons-log-16: Important Changes Since Version 2.3
@ -163,10 +162,8 @@ still a work in progress, but coming soon.
### Command-Line Interface Retired
The original "invokeai" command-line interface has been retired. The
`invokeai` command will now launch a new command-line client that can
be used by developers to create and test nodes. It is not intended to
be used for routine image generation or manipulation.
All "invokeai" command-line interfaces have been retired as of version
3.4.
To launch the Web GUI from the command-line, use the command
`invokeai-web` rather than the traditional `invokeai --web`.
@ -198,6 +195,7 @@ The list of schedulers has been completely revamped and brought up to date:
| **dpmpp_2m** | DPMSolverMultistepScheduler | original noise schedule |
| **dpmpp_2m_k** | DPMSolverMultistepScheduler | using karras noise schedule |
| **unipc** | UniPCMultistepScheduler | CPU only |
| **lcm** | LCMScheduler | |
Please see [3.0.0 Release Notes](https://github.com/invoke-ai/InvokeAI/releases/tag/v3.0.0) for further details.

View File

@ -40,7 +40,7 @@ experimental versions later.
this, open up a command-line window ("Terminal" on Linux and
Macintosh, "Command" or "Powershell" on Windows) and type `python
--version`. If Python is installed, it will print out the version
number. If it is version `3.9.*`, `3.10.*` or `3.11.*` you meet
number. If it is version `3.10.*` or `3.11.*` you meet
requirements.
!!! warning "What to do if you have an unsupported version"
@ -48,7 +48,7 @@ experimental versions later.
Go to [Python Downloads](https://www.python.org/downloads/)
and download the appropriate installer package for your
platform. We recommend [Version
3.10.9](https://www.python.org/downloads/release/python-3109/),
3.10.12](https://www.python.org/downloads/release/python-31012/),
which has been extensively tested with InvokeAI.
_Please select your platform in the section below for platform-specific
@ -179,7 +179,7 @@ experimental versions later.
you will have the choice of CUDA (NVidia cards), ROCm (AMD cards),
or CPU (no graphics acceleration). On Windows, you'll have the
choice of CUDA vs CPU, and on Macs you'll be offered CPU only. When
you select CPU on M1 or M2 Macintoshes, you will get MPS-based
you select CPU on M1/M2/M3 Macintoshes, you will get MPS-based
graphics acceleration without installing additional drivers. If you
are unsure what GPU you are using, you can ask the installer to
guess.
@ -471,7 +471,7 @@ Then type the following commands:
=== "NVIDIA System"
```bash
pip install torch torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/cu118
pip install torch torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/cu121
pip install xformers
```

View File

@ -32,7 +32,7 @@ gaming):
* **Python**
version 3.9 through 3.11
version 3.10 through 3.11
* **CUDA Tools**
@ -65,7 +65,7 @@ gaming):
To install InvokeAI with virtual environments and the PIP package
manager, please follow these steps:
1. Please make sure you are using Python 3.9 through 3.11. The rest of the install
1. Please make sure you are using Python 3.10 through 3.11. The rest of the install
procedure depends on this and will not work with other versions:
```bash
@ -148,7 +148,7 @@ manager, please follow these steps:
=== "CUDA (NVidia)"
```bash
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu118
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
```
=== "ROCm (AMD)"
@ -256,6 +256,10 @@ manager, please follow these steps:
**highly recommended** if your virtual environment is located outside of
your runtime directory.
!!! tip
On Linux, it is recommended to run invokeai with the following env var: `MALLOC_MMAP_THRESHOLD_=1048576`. For example: `MALLOC_MMAP_THRESHOLD_=1048576 invokeai --web`. This helps to prevent memory fragmentation that can lead to memory accumulation over time. This env var is set automatically when running via `invoke.sh`.
10. Render away!
Browse the [features](../features/index.md) section to learn about all the
@ -287,7 +291,7 @@ manager, please follow these steps:
Leave off the `--gui` option to run the script using command-line arguments. Pass the `--help` argument
to get usage instructions.
### Developer Install
## Developer Install
If you have an interest in how InvokeAI works, or you would like to
add features or bugfixes, you are encouraged to install the source
@ -296,23 +300,34 @@ code for InvokeAI. For this to work, you will need to install the
on your system, please see the [Git Installation
Guide](https://github.com/git-guides/install-git)
1. From the command line, run this command:
You will also need to install the [frontend development toolchain](https://github.com/invoke-ai/InvokeAI/blob/main/docs/contributing/contribution_guides/contributingToFrontend.md).
If you have a "normal" installation, you should create a totally separate virtual environment for the git-based installation, else the two may interfere.
> **Why do I need the frontend toolchain?**
>
> The InvokeAI project uses trunk-based development. That means our `main` branch is the development branch, and releases are tags on that branch. Because development is very active, we don't keep an updated build of the UI in `main` - we only build it for production releases.
>
> That means that between releases, to have a functioning application when running directly from the repo, you will need to run the UI in dev mode or build it regularly (any time the UI code changes).
1. Create a fork of the InvokeAI repository through the GitHub UI or [this link](https://github.com/invoke-ai/InvokeAI/fork)
2. From the command line, run this command:
```bash
git clone https://github.com/invoke-ai/InvokeAI.git
git clone https://github.com/<your_github_username>/InvokeAI.git
```
This will create a directory named `InvokeAI` and populate it with the
full source code from the InvokeAI repository.
full source code from your fork of the InvokeAI repository.
2. Activate the InvokeAI virtual environment as per step (4) of the manual
3. Activate the InvokeAI virtual environment as per step (4) of the manual
installation protocol (important!)
3. Enter the InvokeAI repository directory and run one of these
4. Enter the InvokeAI repository directory and run one of these
commands, based on your GPU:
=== "CUDA (NVidia)"
```bash
pip install -e .[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu118
pip install -e .[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
```
=== "ROCm (AMD)"
@ -333,11 +348,15 @@ installation protocol (important!)
Be sure to pass `-e` (for an editable install) and don't forget the
dot ("."). It is part of the command.
You can now run `invokeai` and its related commands. The code will be
5. Install the [frontend toolchain](https://github.com/invoke-ai/InvokeAI/blob/main/docs/contributing/contribution_guides/contributingToFrontend.md) and do a production build of the UI as described.
6. You can now run `invokeai` and its related commands. The code will be
read from the repository, so that you can edit the .py source files
and watch the code's behavior change.
4. If you wish to contribute to the InvokeAI project, you are
When you pull in new changes to the repo, be sure to re-build the UI.
7. If you wish to contribute to the InvokeAI project, you are
encouraged to establish a GitHub account and "fork"
https://github.com/invoke-ai/InvokeAI into your own copy of the
repository. You can then use GitHub functions to create and submit
@ -356,7 +375,7 @@ you can do so using this unsupported recipe:
mkdir ~/invokeai
conda create -n invokeai python=3.10
conda activate invokeai
pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu118
pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
invokeai-configure --root ~/invokeai
invokeai --root ~/invokeai --web
```

View File

@ -57,11 +57,35 @@ familiar with containerization technologies such as Docker.
For downloads and instructions, visit the [NVIDIA CUDA Container
Runtime Site](https://developer.nvidia.com/nvidia-container-runtime)
### cuDNN Installation for 40/30 Series Optimization (Optional)
1. Find the InvokeAI folder
2. Click on .venv folder - e.g., YourInvokeFolderHere\\.venv
3. Click on Lib folder - e.g., YourInvokeFolderHere\\.venv\Lib
4. Click on site-packages folder - e.g., YourInvokeFolderHere\\.venv\Lib\site-packages
5. Click on Torch directory - e.g., YourInvokeFolderHere\\.venv\Lib\site-packages\torch
6. Click on the lib folder - e.g., YourInvokeFolderHere\\.venv\Lib\site-packages\torch\lib
7. Copy everything inside the folder and save it elsewhere as a backup.
8. Go to __https://developer.nvidia.com/cudnn__
9. Login or create an Account.
10. Choose the newer version of cuDNN. **Note:**
There are two versions, 11.x and 12.x, for the different architectures (Turing, Maxwell, etc.) of GPUs.
You can find which version you should download from [this link](https://docs.nvidia.com/deeplearning/cudnn/support-matrix/index.html).
11. Download the latest version and extract it from the download location
12. Find the bin folder, e.g. E:\cudnn-windows-x86_64-__Whatever Version__\bin
13. Copy and paste the .dll files into YourInvokeFolderHere\\.venv\Lib\site-packages\torch\lib. **Make sure to copy the files, not move them**
14. If prompted, replace any existing files
**Notes:**
* If no change is seen or you encounter issues, follow the same steps as above and restore the torch/lib backup folder you made earlier, replacing the current one. If you didn't make a backup, you can also uninstall and reinstall torch through the command line to repair this folder.
* This optimization is intended for newer graphics cards (30/40 series), but improvements have been reported on older cards as well.
### Torch Installation
When installing torch and torchvision manually with `pip`, remember to provide
the argument `--extra-index-url
https://download.pytorch.org/whl/cu118` as described in the [Manual
https://download.pytorch.org/whl/cu121` as described in the [Manual
Installation Guide](020_INSTALL_MANUAL.md).
## :simple-amd: ROCm

View File

@ -4,38 +4,49 @@ title: Installing with Docker
# :fontawesome-brands-docker: Docker
!!! warning "For most users"
!!! warning "macOS and AMD GPU Users"
We highly recommend to Install InvokeAI locally using [these instructions](INSTALLATION.md)
We highly recommend installing InvokeAI locally using [these instructions](INSTALLATION.md),
because Docker containers cannot access the GPU on macOS.
!!! tip "For developers"
!!! warning "AMD GPU Users"
For container-related development tasks or for enabling easy
deployment to other environments (on-premises or cloud), follow these
instructions.
Container support for AMD GPUs has been reported to work by the community, but has not received
extensive testing. Please make sure to set the `GPU_DRIVER=rocm` environment variable (see below), and
use the `build.sh` script to build the image for this to take effect at build time.
For general use, install locally to leverage your machine's GPU.
!!! tip "Linux and Windows Users"
For optimal performance, configure your Docker daemon to access your machine's GPU.
Docker Desktop on Windows [includes GPU support](https://www.docker.com/blog/wsl-2-gpu-support-for-docker-desktop-on-nvidia-gpus/).
Linux users should install and configure the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
## Why containers?
They provide a flexible, reliable way to build and deploy InvokeAI. You'll also
use a Docker volume to store the largest model files and image outputs as a
first step in decoupling storage and compute. Future enhancements can do this
for other assets. See [Processes](https://12factor.net/processes) under the
Twelve-Factor App methodology for details on why running applications in such a
stateless fashion is important.
They provide a flexible, reliable way to build and deploy InvokeAI.
See [Processes](https://12factor.net/processes) under the Twelve-Factor App
methodology for details on why running applications in such a stateless fashion is important.
You can specify the target platform when building the image and running the
container. You'll also need to specify the InvokeAI requirements file that
matches the container's OS and the architecture it will run on.
The container is configured for CUDA by default, but can be built to support AMD GPUs
by setting the `GPU_DRIVER=rocm` environment variable at Docker image build time.
Developers on Apple silicon (M1/M2): You
Developers on Apple silicon (M1/M2/M3): You
[can't access your GPU cores from Docker containers](https://github.com/pytorch/pytorch/issues/81224)
and performance is reduced compared with running it directly on macOS but for
development purposes it's fine. Once you're done with development tasks on your
laptop you can build for the target platform and architecture and deploy to
another environment with NVIDIA GPUs on-premises or in the cloud.
## TL;DR
This assumes properly configured Docker on Linux or Windows/WSL2. Read on for detailed customization options.
```bash
# docker compose commands should be run from the `docker` directory
cd docker
docker compose up
```
## Installation in a Linux container (desktop)
### Prerequisites
@ -58,222 +69,44 @@ a token and copy it, since you will need it for the next step.
### Setup
Set the fork you want to use and other variables.
Set up your environment variables. In the `docker` directory, make a copy of `env.sample` and name it `.env`. Make changes as necessary.
!!! tip
Any environment variables supported by InvokeAI can be set here - please see the [CONFIGURATION](../features/CONFIGURATION.md) guide for further detail.
I prefer to save my env vars
in the repository root in a `.env` (or `.envrc`) file to automatically re-apply
them when I come back.
The build- and run- scripts contain default values for almost everything,
besides the [Hugging Face Token](https://huggingface.co/settings/tokens) you
created in the last step.
Some Suggestions of variables you may want to change besides the Token:
At a minimum, you might want to set the `INVOKEAI_ROOT` environment variable
to point to the location where you wish to store your InvokeAI models, configuration, and outputs.
<figure markdown>
| Environment-Variable <img width="220" align="right"/> | Default value <img width="360" align="right"/> | Description |
| ----------------------------------------------------- | ---------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `HUGGING_FACE_HUB_TOKEN` | No default, but **required**! | This is the only **required** variable, without it you can't download the huggingface models |
| `REPOSITORY_NAME` | The Basename of the Repo folder | This name will be used as the container repository/image name |
| `VOLUMENAME` | `${REPOSITORY_NAME,,}_data` | Name of the Docker Volume where model files will be stored |
| `ARCH` | arch of the build machine | Can be changed if you want to build the image for another arch |
| `CONTAINER_REGISTRY` | ghcr.io | Name of the Container Registry to use for the full tag |
| `CONTAINER_REPOSITORY` | `$(whoami)/${REPOSITORY_NAME}` | Name of the Container Repository |
| `CONTAINER_FLAVOR` | `cuda` | The flavor of the image to build; available options are `cuda`, `rocm` and `cpu`. If you choose `rocm` or `cpu`, the extra-index-url will be selected automatically, unless you set one yourself. |
| `CONTAINER_TAG` | `${INVOKEAI_BRANCH##*/}-${CONTAINER_FLAVOR}` | The Container Repository / Tag which will be used |
| `INVOKE_DOCKERFILE` | `Dockerfile` | The Dockerfile which should be built, handy for development |
| `PIP_EXTRA_INDEX_URL` | | If you want to use a custom pip-extra-index-url |
| `INVOKEAI_ROOT` | `~/invokeai` | **Required** - the location of your InvokeAI root directory. It will be created if it does not exist. |
| `HUGGING_FACE_HUB_TOKEN` | | InvokeAI will work without it, but some of the integrations with HuggingFace (like downloading models from private repositories) may not work |
| `GPU_DRIVER` | `cuda` | Optionally change this to `rocm` to build the image for AMD GPUs. NOTE: Use the `build.sh` script to build the image for this to take effect. |
</figure>
#### Build the Image
I provided a build script, which is located next to the Dockerfile in
`docker/build.sh`. It can be executed from repository root like this:
Use the standard `docker compose build` command from within the `docker` directory.
```bash
./docker/build.sh
```
The build script not only builds the container, but also creates the
Docker volume if it does not already exist.
If using an AMD GPU:
a: set the `GPU_DRIVER=rocm` environment variable in `docker-compose.yml` and continue using `docker compose build` as usual, or
b: set `GPU_DRIVER=rocm` in the `.env` file and use the `build.sh` script, provided for convenience
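For the default CUDA image, the standard Compose invocation from the `docker` directory is all that is needed:
```bash
cd docker
docker compose build
```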
#### Run the Container
After the build process is done, you can run the container via the provided
`docker/run.sh` script
Use the standard `docker compose up` command, and generally the `docker compose` [CLI](https://docs.docker.com/compose/reference/) as usual.
```bash
./docker/run.sh
```
Once the container starts up (and configures the InvokeAI root directory if this is a new installation), you can access InvokeAI at [http://localhost:9090](http://localhost:9090)
When used without arguments, the container will start the webserver and provide
you the link to open it. But if you want to use some other parameters you can
also do so.
## Troubleshooting / FAQ
!!! example "run script example"
```bash
./docker/run.sh "banana sushi" -Ak_lms -S42 -s10
```
This would generate the legendary "banana sushi" with Seed 42, k_lms Sampler and 10 steps.
Find out more about available CLI-Parameters at [features/CLI.md](../../features/CLI/#arguments)
---
## Running the container on your GPU
If you have an Nvidia GPU, you can enable InvokeAI to run on the GPU by running
the container with an extra environment variable to enable GPU usage and have
the process run much faster:
```bash
GPU_FLAGS=all ./docker/run.sh
```
This passes the `--gpus all` to docker and uses the GPU.
If you don't have a GPU (or your host is not yet setup to use it) you will see a
message like this:
`docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].`
You can use the full set of GPU combinations documented here:
https://docs.docker.com/config/containers/resource_constraints/#gpu
For example, use `GPU_FLAGS=device=GPU-3a23c669-1f69-c64e-cf85-44e9b07e7a2a` to
choose a specific device identified by a UUID.
---
!!! warning "Deprecated"
From here on you will find the previous Docker docs, which still
provide some useful information.
## Usage (time to have fun)
### Startup
If you're on a **Linux container** the `invoke` script is **automatically
started** and the output dir set to the Docker volume you created earlier.
If you're **directly on macOS follow these startup instructions**. With the
Conda environment activated (`conda activate ldm`), run the interactive
interface that combines the functionality of the original scripts `txt2img` and
`img2img`: Use the more accurate but VRAM-intensive full precision math because
half-precision requires autocast and won't work. By default the images are saved
in `outputs/img-samples/`.
```Shell
python3 scripts/invoke.py --full_precision
```
You'll get the script's prompt. You can see available options or quit.
```Shell
invoke> -h
invoke> q
```
### Text to Image
For quick (but bad) image results test with 5 steps (default 50) and 1 sample
image. This will let you know that everything is set up correctly. Then increase
steps to 100 or more for good (but slower) results. The prompt can be in quotes
or not.
```Shell
invoke> The hulk fighting with sheldon cooper -s5 -n1
invoke> "woman closeup highly detailed" -s 150
# Reuse previous seed and apply face restoration
invoke> "woman closeup highly detailed" --steps 150 --seed -1 -G 0.75
```
You'll need to experiment to see if face restoration is making it better or
worse for your specific prompt.
If you're on a container the output is set to the Docker volume. You can copy it
wherever you want. You can download it from the Docker Desktop app, Volumes,
my-vol, data. Or you can copy it from your Mac terminal. Keep in mind
`docker cp` can't expand `*.png` so you'll need to specify the image file name.
On your host Mac (you can use the name of any container that mounted the
volume):
```Shell
docker cp dummy:/data/000001.928403745.png /Users/<your-user>/Pictures
```
### Image to Image
You can also do text-guided image-to-image translation. For example, turning a
sketch into a detailed drawing.
`strength` is a value between 0.0 and 1.0 that controls the amount of noise that
is added to the input image. Values that approach 1.0 allow for lots of
variations but will also produce images that are not semantically consistent
with the input. 0.0 preserves image exactly, 1.0 replaces it completely.
Make sure your input image size dimensions are multiples of 64 e.g. 512x512.
Otherwise you'll get `Error: product of dimension sizes > 2**31`. If you still
get the error
[try a different size](https://support.apple.com/guide/preview/resize-rotate-or-flip-an-image-prvw2015/mac#:~:text=image's%20file%20size-,In%20the%20Preview%20app%20on%20your%20Mac%2C%20open%20the%20file,is%20shown%20at%20the%20bottom.)
like 512x256.
If you're on a Docker container, copy your input image into the Docker volume
```Shell
docker cp /Users/<your-user>/Pictures/sketch-mountains-input.jpg dummy:/data/
```
Try it out generating an image (or more). The `invoke` script needs absolute
paths to find the image so don't use `~`.
If you're on your Mac
```Shell
invoke> "A fantasy landscape, trending on artstation" -I /Users/<your-user>/Pictures/sketch-mountains-input.jpg --strength 0.75 --steps 100 -n4
```
If you're on a Linux container on your Mac
```Shell
invoke> "A fantasy landscape, trending on artstation" -I /data/sketch-mountains-input.jpg --strength 0.75 --steps 50 -n1
```
### Web Interface
You can use the `invoke` script with a graphical web interface. Start the web
server with:
```Shell
python3 scripts/invoke.py --full_precision --web
```
If it's running on your Mac point your Mac web browser to
<http://127.0.0.1:9090>
Press Control-C at the command line to stop the web server.
### Notes
Some text you can add at the end of the prompt to make it very pretty:
```Shell
cinematic photo, highly detailed, cinematic lighting, ultra-detailed, ultrarealistic, photorealism, Octane Rendering, cyberpunk lights, Hyper Detail, 8K, HD, Unreal Engine, V-Ray, full hd, cyberpunk, abstract, 3d octane render + 4k UHD + immense detail + dramatic lighting + well lit + black, purple, blue, pink, cerulean, teal, metallic colours, + fine details, ultra photoreal, photographic, concept art, cinematic composition, rule of thirds, mysterious, eerie, photorealism, breathtaking detailed, painting art deco pattern, by hsiao, ron cheng, john james audubon, bizarre compositions, exquisite detail, extremely moody lighting, painted by greg rutkowski makoto shinkai takashi takeuchi studio ghibli, akihiko yoshida
```
The original scripts should work as well.
```Shell
python3 scripts/orig_scripts/txt2img.py --help
python3 scripts/orig_scripts/txt2img.py --ddim_steps 100 --n_iter 1 --n_samples 1 --plms --prompt "new born baby kitten. Hyper Detail, Octane Rendering, Unreal Engine, V-Ray"
python3 scripts/orig_scripts/txt2img.py --ddim_steps 5 --n_iter 1 --n_samples 1 --plms --prompt "ocean" # or --klms
```
- Q: I am running on Windows under WSL2, and am seeing a "no such file or directory" error.
- A: Your `docker-entrypoint.sh` file likely has Windows (CRLF) as opposed to Unix (LF) line endings,
and you may have cloned this repository before the issue was fixed. To solve this, please change
the line endings in the `docker-entrypoint.sh` file to `LF`. You can do this in VSCode
(`Ctrl+P` and search for "line endings"), or by using the `dos2unix` utility in WSL.
Finally, you may delete `docker-entrypoint.sh` followed by `git pull; git checkout docker/docker-entrypoint.sh`
to reset the file to its most recent version.
For more information on this issue, please see the [Docker Desktop documentation](https://docs.docker.com/desktop/troubleshoot/topics/#avoid-unexpected-syntax-errors-use-unix-style-line-endings-for-files-in-containers)
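As a concrete example of the `dos2unix` route, from a WSL shell in your clone (assuming the file lives in the `docker` directory as referenced above):
```bash
cd docker
dos2unix docker-entrypoint.sh
```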

View File

@ -84,7 +84,7 @@ InvokeAI root directory's `autoimport` folder.
### Installation via `invokeai-model-install`
From the `invoke` launcher, choose option [5] "Download and install
From the `invoke` launcher, choose option [4] "Download and install
models." This will launch the same script that prompted you to select
models at install time. You can use this to add models that you
skipped the first time around. It is all right to specify a model that
@ -171,3 +171,16 @@ subfolders and organize them as you wish.
The locations of the autoimport directories are controlled by settings
in `invokeai.yaml`. See [Configuration](../features/CONFIGURATION.md).
### Installing models that live in HuggingFace subfolders
On rare occasions you may need to install a diffusers-style model that
lives in a subfolder of a HuggingFace repo id. In this event, simply
add ":_subfolder-name_" to the end of the repo id. For example, if the
repo id is "monster-labs/control_v1p_sd15_qrcode_monster" and the model
you wish to fetch lives in a subfolder named "v2", then the repo id to
pass to the various model installers should be
```
monster-labs/control_v1p_sd15_qrcode_monster:v2
```
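For instance, combined with the `--add` syntax shown earlier (assuming the installer accepts the same `repo_id:subfolder` form on the command line):
```bash
invokeai-model-install --add monster-labs/control_v1p_sd15_qrcode_monster:v2
```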

View File

@ -59,8 +59,7 @@ Prior to installing PyPatchMatch, you need to take the following steps:
`from patchmatch import patch_match`: It should look like the following:
```py
Python 3.9.5 (default, Nov 23 2021, 15:27:38)
[GCC 9.3.0] on linux
Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from patchmatch import patch_match
Compiling and loading c extensions from "/home/lstein/Projects/InvokeAI/.invokeai-env/src/pypatchmatch/patchmatch".

View File

@ -28,7 +28,7 @@ command line, then just be sure to activate its virtual environment.
Then run the following three commands:
```sh
pip install xformers~=0.0.19
pip install xformers~=0.0.22
pip install triton # WON'T WORK ON WINDOWS
python -m xformers.info
```
@ -42,7 +42,7 @@ If all goes well, you'll see a report like the
following:
```sh
xFormers 0.0.20
xFormers 0.0.22
memory_efficient_attention.cutlassF: available
memory_efficient_attention.cutlassB: available
memory_efficient_attention.flshattF: available
@ -59,14 +59,14 @@ swiglu.gemm_fused_operand_sum: available
swiglu.fused.p.cpp: available
is_triton_available: True
is_functorch_available: False
pytorch.version: 2.0.1+cu118
pytorch.version: 2.1.0+cu121
pytorch.cuda: available
gpu.compute_capability: 8.9
gpu.name: NVIDIA GeForce RTX 4070
build.info: available
build.cuda_version: 1108
build.python_version: 3.10.11
build.torch_version: 2.0.1+cu118
build.torch_version: 2.1.0+cu121
build.env.TORCH_CUDA_ARCH_LIST: 5.0+PTX 6.0 6.1 7.0 7.5 8.0 8.6
build.env.XFORMERS_BUILD_TYPE: Release
build.env.XFORMERS_ENABLE_DEBUG_ASSERTIONS: None
@ -92,33 +92,22 @@ installed from source. These instructions were written for a system
running Ubuntu 22.04, but other Linux distributions should be able to
adapt this recipe.
#### 1. Install CUDA Toolkit 11.8
#### 1. Install CUDA Toolkit 12.1
You will need the CUDA developer's toolkit in order to compile and
install xFormers. **Do not try to install Ubuntu's nvidia-cuda-toolkit
package.** It is out of date and will cause conflicts among the NVIDIA
driver and binaries. Instead install the CUDA Toolkit package provided
by NVIDIA itself. Go to [CUDA Toolkit 11.8
Downloads](https://developer.nvidia.com/cuda-11-8-0-download-archive)
by NVIDIA itself. Go to [CUDA Toolkit 12.1
Downloads](https://developer.nvidia.com/cuda-12-1-0-download-archive)
and use the target selection wizard to choose your platform and Linux
distribution. Select an installer type of "runfile (local)" at the
last step.
This will provide you with a recipe for downloading and running an
install shell script that will install the toolkit and drivers. For
example, the install script recipe for Ubuntu 22.04 running on a
x86_64 system is:
install shell script that will install the toolkit and drivers.
```
wget https://developer.download.nvidia.com/compute/cuda/11.8.0/local_installers/cuda_11.8.0_520.61.05_linux.run
sudo sh cuda_11.8.0_520.61.05_linux.run
```
Rather than cut and paste this example, we recommend that you walk
through the toolkit wizard in order to get the most up-to-date
installer for your system.
#### 2. Confirm/Install pyTorch 2.01 with CUDA 11.8 support
#### 2. Confirm/Install pyTorch 2.1.0 with CUDA 12.1 support
If you are using InvokeAI 3.0.2 or higher, these will already be
installed. If not, you can check whether you have the needed libraries
@ -133,7 +122,7 @@ Then run the command:
python -c 'exec("import torch\nprint(torch.__version__)")'
```
If it prints __1.13.1+cu118__ you're good. If not, you can install the
If it prints __2.1.0+cu121__ you're good. If not, you can install the
most up to date libraries with this command:
```sh

View File

@ -17,14 +17,32 @@ This fork is supported across Linux, Windows and Macintosh. Linux users can use
either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm
driver).
### [Installation Getting Started Guide](installation)
#### **[Automated Installer](010_INSTALL_AUTOMATED.md)**
## **[Automated Installer](010_INSTALL_AUTOMATED.md)**
✅ This is the recommended installation method for first-time users.
#### [Manual Installation](020_INSTALL_MANUAL.md)
This method is recommended for experienced users and developers
#### [Docker Installation](040_INSTALL_DOCKER.md)
This method is recommended for those familiar with running Docker containers
### Other Installation Guides
This is a script that will install all of InvokeAI's essential
third party libraries and InvokeAI itself. It includes access to a
"developer console" which will help us debug problems with you and
give you access to experimental features.
## **[Manual Installation](020_INSTALL_MANUAL.md)**
This method is recommended for experienced users and developers.
In this method you will manually run the commands needed to install
InvokeAI and its dependencies. We offer two recipes: one suited to
those who prefer the `conda` tool, and one suited to those who prefer
`pip` and Python virtual environments. In our hands the pip install
is faster and more reliable, but your mileage may vary.
Note that the conda installation method is currently deprecated and
will not be supported at some point in the future.
## **[Docker Installation](040_INSTALL_DOCKER.md)**
This method is recommended for those familiar with running Docker containers.
We offer a method for creating Docker containers containing InvokeAI and its dependencies. This method is recommended for individuals who have experience with Docker containers and understand the pluses and minuses of a container-based install.
## Other Installation Guides
- [PyPatchMatch](060_INSTALL_PATCHMATCH.md)
- [XFormers](070_INSTALL_XFORMERS.md)
- [CUDA and ROCm Drivers](030_INSTALL_CUDA_AND_ROCM.md)
@ -63,43 +81,3 @@ images in full-precision mode:
- GTX 1650 series cards
- GTX 1660 series cards
## Installation options
1. [Automated Installer](010_INSTALL_AUTOMATED.md)
This is a script that will install all of InvokeAI's essential
third party libraries and InvokeAI itself. It includes access to a
"developer console" which will help us debug problems with you and
give you access to experimental features.
✅ This is the recommended option for first time users.
2. [Manual Installation](020_INSTALL_MANUAL.md)
In this method you will manually run the commands needed to install
InvokeAI and its dependencies. We offer two recipes: one suited to
those who prefer the `conda` tool, and one suited to those who prefer
`pip` and Python virtual environments. In our hands the pip install
is faster and more reliable, but your mileage may vary.
Note that the conda installation method is currently deprecated and
will not be supported at some point in the future.
This method is recommended for users who have previously used `conda`
or `pip` in the past, developers, and anyone who wishes to remain on
the cutting edge of future InvokeAI development and is willing to put
up with occasional glitches and breakage.
3. [Docker Installation](040_INSTALL_DOCKER.md)
We also offer a method for creating Docker containers containing
InvokeAI and its dependencies. This method is recommended for
individuals who have experience with Docker containers and understand
the pluses and minuses of a container-based install.
## Quick Guides
* [Installing CUDA and ROCm Drivers](./030_INSTALL_CUDA_AND_ROCM.md)
* [Installing XFormers](./070_INSTALL_XFORMERS.md)
* [Installing PyPatchMatch](./060_INSTALL_PATCHMATCH.md)
* [Installing New Models](./050_INSTALLING_MODELS.md)

View File

@ -79,7 +79,7 @@ title: Manual Installation, Linux
and obtaining an access token for downloading. It will then download and
install the weights files for you.
Please look [here](../INSTALL_MANUAL.md) for a manual process for doing
Please look [here](../020_INSTALL_MANUAL.md) for a manual process for doing
the same thing.
7. Start generating images!
@ -112,7 +112,7 @@ title: Manual Installation, Linux
To use an alternative model you may invoke the `!switch` command in
the CLI, or pass `--model <model_name>` during `invoke.py` launch for
either the CLI or the Web UI. See [Command Line
Client](../../features/CLI.md#model-selection-and-importation). The
Client](../../deprecated/CLI.md#model-selection-and-importation). The
model names are defined in `configs/models.yaml`.
8. Subsequently, to relaunch the script, be sure to run "conda activate

View File

@ -150,7 +150,7 @@ will do our best to help.
To use an alternative model you may invoke the `!switch` command in
the CLI, or pass `--model <model_name>` during `invoke.py` launch for
either the CLI or the Web UI. See [Command Line
Client](../../features/CLI.md#model-selection-and-importation). The
Client](../../deprecated/CLI.md#model-selection-and-importation). The
model names are defined in `configs/models.yaml`.
---

View File

@ -128,7 +128,7 @@ python scripts/invoke.py --web --max_load_models=3 \
```
These options are described in detail in the
[Command-Line Interface](../../features/CLI.md) documentation.
[Command-Line Interface](../../deprecated/CLI.md) documentation.
## Troubleshooting

View File

@ -75,7 +75,7 @@ Note that you will need NVIDIA drivers, Python 3.10, and Git installed beforehan
obtaining an access token for downloading. It will then download and install the
weights files for you.
Please look [here](../INSTALL_MANUAL.md) for a manual process for doing the
Please look [here](../020_INSTALL_MANUAL.md) for a manual process for doing the
same thing.
8. Start generating images!
@ -108,7 +108,7 @@ Note that you will need NVIDIA drivers, Python 3.10, and Git installed beforehan
To use an alternative model you may invoke the `!switch` command in
the CLI, or pass `--model <model_name>` during `invoke.py` launch for
either the CLI or the Web UI. See [Command Line
Client](../../features/CLI.md#model-selection-and-importation). The
Client](../../deprecated/CLI.md#model-selection-and-importation). The
model names are defined in `configs/models.yaml`.
9. Subsequently, to relaunch the script, first activate the Anaconda

View File

@ -1,13 +1,36 @@
# Using the Node Editor
# Using the Workflow Editor
The nodes editor is a blank canvas allowing for the use of individual functions and image transformations to control the image generation workflow. Nodes take in inputs on the left side of the node, and return an output on the right side of the node. A node graph is composed of multiple nodes that are connected together to create a workflow. Nodes' inputs and outputs are connected by dragging connectors from node to node. Inputs and outputs are color coded for ease of use.
The workflow editor is a blank canvas allowing for the use of individual functions and image transformations to control the image generation workflow. Nodes take in inputs on the left side of the node, and return an output on the right side of the node. A node graph is composed of multiple nodes that are connected together to create a workflow. Nodes' inputs and outputs are connected by dragging connectors from node to node. Inputs and outputs are color coded for ease of use.
To better understand how nodes are used, think of how an electric power bar works. It takes in one input (electricity from a wall outlet) and passes it to multiple devices through multiple outputs. Similarly, a node could have multiple inputs and outputs functioning at the same (or different) time, but all node outputs pass information onward like a power bar passes electricity. Not all outputs are compatible with all inputs, however - Each node has different constraints on how it is expecting to input/output information. In general, node outputs are colour-coded to match compatible inputs of other nodes.
If you're not familiar with Diffusion, take a look at our [Diffusion Overview.](../help/diffusion.md) Understanding how diffusion works will enable you to more easily use the Workflow Editor and build workflows to suit your needs.
## Features
### Linear View
The Workflow Editor allows you to create a UI for your workflow, to make it easier to iterate on your generations.
To add an input to the Linear UI, right click on the input label and select "Add to Linear View".
The Linear UI View will also be part of the saved workflow, allowing you to share workflows and enable others to use them, regardless of complexity.
![linearview](../assets/nodes/linearview.png)
### Renaming Fields and Nodes
Any node or input field can be renamed in the workflow editor. If the input field you have renamed has been added to the Linear View, the changed name will be reflected in the Linear View and the node.
### Managing Nodes
* Ctrl+C to copy a node
* Ctrl+V to paste a node
* Backspace/Delete to delete a node
* Shift+Click to drag and select multiple nodes
### Node Caching
Nodes have a "Use Cache" option in their footer. This allows for performance improvements by using the previously cached values during the workflow processing.
If you're not familiar with Diffusion, take a look at our [Diffusion Overview.](../help/diffusion.md) Understanding how diffusion works will enable you to more easily use the Nodes Editor and build workflows to suit your needs.
## Important Concepts
There are several node grouping concepts that can be examined with a narrow focus. These (and other) groupings can be pieced together to make up functional graph setups, and are important to understanding how groups of nodes work together as part of a whole. Note that the screenshots below aren't examples of complete functioning node graphs (see Examples).
@ -37,7 +60,7 @@ It is common to want to use both the same seed (for continuity) and random seeds
### ControlNet
The ControlNet node outputs a Control, which can be provided as input to non-image *ToLatents nodes. Depending on the type of ControlNet desired, ControlNet nodes usually require an image processor node, such as a Canny Processor or Depth Processor, which prepares an input image for use with ControlNet.
The ControlNet node outputs a Control, which can be provided as input to a Denoise Latents node. Depending on the type of ControlNet desired, ControlNet nodes usually require an image processor node, such as a Canny Processor or Depth Processor, which prepares an input image for use with ControlNet.
![groupscontrol](../assets/nodes/groupscontrol.png)
@ -59,10 +82,9 @@ Iteration is a common concept in any processing, and means to repeat a process w
![groupsiterate](../assets/nodes/groupsiterate.png)
### Multiple Image Generation + Random Seeds
### Batch / Multiple Image Generation + Random Seeds
Multiple image generation in the node editor is done using the RandomRange node. In this case, the 'Size' field represents the number of images to generate. As RandomRange produces a collection of integers, we need to add the Iterate node to iterate through the collection.
To control seeds across generations takes some care. The first row in the screenshot will generate multiple images with different seeds, but using the same RandomRange parameters across invocations will result in the same group of random seeds being used across the images, producing repeatable results. In the second row, adding the RandomInt node as input to RandomRange's 'Seed' edge point will ensure that seeds are varied across all images across invocations, producing varied results.
Batch or multiple image generation in the workflow editor is done using the RandomRange node. In this case, the 'Size' field represents the number of images to generate, meaning this example will generate 4 images. As RandomRange produces a collection of integers, we need to add the Iterate node to iterate through the collection. This noise can then be fed to the Denoise Latents node for it to iterate through the denoising process with the different seeds provided.
![groupsmultigenseeding](../assets/nodes/groupsmultigenseeding.png)
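In plain code terms, the RandomRange → Iterate wiring above behaves roughly like the following sketch; `generate_image` is a hypothetical stand-in for the noise/denoise subgraph, not an InvokeAI API:

```python
import random


def generate_image(seed: int) -> None:
    # Hypothetical stand-in for the Noise -> Denoise Latents -> decode subgraph.
    print(f"generating image with seed {seed}")


# RandomRange with Size=4: a collection of 4 random integer seeds.
seeds = [random.randint(0, 2**32 - 1) for _ in range(4)]

# Iterate: run the downstream nodes once per item in the collection.
for seed in seeds:
    generate_image(seed)
```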


# Community Nodes

These are nodes that have been developed by the community, for the community.
If you'd like to submit a node for the community, please refer to the [node creation overview](contributingNodes.md).
To use a node, add the node to the `nodes` folder found in your InvokeAI install location.
The suggested method is to use `git clone` to clone the repository the node is found in. This allows for easy updates of the node in the future.
If you'd prefer, you can also just download the `.py` file from the linked repository and add it to the `nodes` folder.
To use a community workflow, download the `.json` node graph file and load it into Invoke AI via the **Load Workflow** button in the Workflow Editor.
- Community Nodes
+ [Average Images](#average-images)
+ [Depth Map from Wavefront OBJ](#depth-map-from-wavefront-obj)
+ [Film Grain](#film-grain)
+ [Generative Grammar-Based Prompt Nodes](#generative-grammar-based-prompt-nodes)
+ [GPT2RandomPromptMaker](#gpt2randompromptmaker)
+ [Grid to Gif](#grid-to-gif)
+ [Halftone](#halftone)
+ [Ideal Size](#ideal-size)
+ [Image and Mask Composition Pack](#image-and-mask-composition-pack)
+ [Image to Character Art Image Nodes](#image-to-character-art-image-nodes)
+ [Image Picker](#image-picker)
+ [Load Video Frame](#load-video-frame)
+ [Make 3D](#make-3d)
+ [Oobabooga](#oobabooga)
+ [Prompt Tools](#prompt-tools)
+ [Retroize](#retroize)
+ [Size Stepper Nodes](#size-stepper-nodes)
+ [Text font to Image](#text-font-to-image)
+ [Thresholding](#thresholding)
+ [Unsharp Mask](#unsharp-mask)
+ [XY Image to Grid and Images to Grids nodes](#xy-image-to-grid-and-images-to-grids-nodes)
- [Example Node Template](#example-node-template)
- [Disclaimer](#disclaimer)
- [Help](#help)
--------------------------------
### Average Images
**Description:** This node takes in a collection of images of the same size and averages them as output. It converts everything to RGB mode first.
**Node Link:** https://github.com/JPPhoto/average-images-node
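As an illustration of what such an averaging node does, here is a sketch with Pillow and NumPy (not the node's actual source):

```python
import numpy as np
from PIL import Image


def average_images(paths: list[str]) -> Image.Image:
    # Convert every image to RGB, stack them, and average pixel-wise.
    # All images must share the same dimensions.
    stack = np.stack(
        [np.asarray(Image.open(p).convert("RGB"), dtype=np.float64) for p in paths]
    )
    return Image.fromarray(stack.mean(axis=0).astype(np.uint8))
```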
--------------------------------
### Depth Map from Wavefront OBJ
**Description:** Render depth maps from Wavefront .obj files (triangulated) using this simple 3D renderer utilizing numpy and matplotlib to compute and color the scene. There are simple parameters to change the FOV, camera position, and model orientation.
To be imported, an .obj must use triangulated meshes, so make sure to enable that option if exporting from a 3D modeling program. This renderer makes each triangle a solid color based on its average depth, so it will cause anomalies if your .obj has large triangles. In Blender, the Remesh modifier can be helpful to subdivide a mesh into small pieces that work well given these limitations.
**Node Link:** https://github.com/dwringer/depth-from-obj-node
**Example Usage:**
</br><img src="https://raw.githubusercontent.com/dwringer/depth-from-obj-node/main/depth_from_obj_usage.jpg" width="500" />
--------------------------------
### Film Grain
**Node Link:** https://github.com/JPPhoto/film-grain-node
--------------------------------
### Generative Grammar-Based Prompt Nodes
**Description:** This set of 3 nodes generates prompts from simple user-defined grammar rules (loaded from custom files - examples provided below). The prompts are made by recursively expanding a special template string, replacing nonterminal "parts-of-speech" until no nonterminal terms remain in the string.
This includes 3 Nodes:
- *Lookup Table from File* - loads a YAML file "prompt" section (or of a whole folder of YAML's) into a JSON-ified dictionary (Lookups output)
- *Lookups Entry from Prompt* - places a single entry in a new Lookups output under the specified heading
- *Prompt from Lookup Table* - uses a Collection of Lookups as grammar rules from which to randomly generate prompts.
**Node Link:** https://github.com/dwringer/generative-grammar-prompt-nodes
**Example Usage:**
</br><img src="https://raw.githubusercontent.com/dwringer/generative-grammar-prompt-nodes/main/lookuptables_usage.jpg" width="500" />
--------------------------------
### GPT2RandomPromptMaker
Generated Prompt: An enchanted weapon will be usable by any character regardless of their alignment.
<img src="https://github.com/mickr777/InvokeAI/assets/115216705/8496ba09-bcdd-4ff7-8076-ff213b6a1e4c" width="200" />
--------------------------------
### Grid to Gif
**Description:** One node that turns a grid image into an image collection, and one node that turns an image collection into a GIF.
**Node Link:** https://github.com/mildmisery/invokeai-GridToGifNode/blob/main/GridToGif.py
**Example Node Graph:** https://github.com/mildmisery/invokeai-GridToGifNode/blob/main/Grid%20to%20Gif%20Example%20Workflow.json
**Output Examples**
<img src="https://raw.githubusercontent.com/mildmisery/invokeai-GridToGifNode/main/input.png" width="300" />
<img src="https://raw.githubusercontent.com/mildmisery/invokeai-GridToGifNode/main/output.gif" width="300" />
--------------------------------
### Halftone
**Description**: Halftone converts the source image to grayscale and then performs halftoning. CMYK Halftone converts the image to CMYK and applies a per-channel halftoning to make the source image look like a magazine or newspaper. For both nodes, you can specify angles and halftone dot spacing.
**Node Link:** https://github.com/JPPhoto/halftone-node
**Example**
Input:
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/fd5efb9f-4355-4409-a1c2-c1ca99e0cab4" width="300" />
Halftone Output:
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/7e606f29-e68f-4d46-b3d5-97f799a4ec2f" width="300" />
CMYK Halftone Output:
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/c59c578f-db8e-4d66-8c66-2851752d75ea" width="300" />
--------------------------------
### Ideal Size
**Description:** This node calculates an ideal image size for a first pass of a multi-pass upscaling. The aim is to avoid duplication that results from choosing a size larger than the model is capable of.
**Node Link:** https://github.com/JPPhoto/ideal-size-node
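One plausible way such a calculation could work (a sketch, not the node's actual algorithm): scale the requested size down so its area matches the model's native training area, preserving the aspect ratio:

```python
def ideal_first_pass_size(width: int, height: int, native: int = 512) -> tuple[int, int]:
    # Shrink so (w * h) is roughly native^2, keeping the aspect ratio and
    # rounding down to multiples of 8 as latent sizes require.
    scale = (native * native / (width * height)) ** 0.5
    if scale >= 1.0:
        return width, height  # small enough for a single pass already
    return max(8, int(width * scale) // 8 * 8), max(8, int(height * scale) // 8 * 8)
```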
--------------------------------
### Image and Mask Composition Pack
**Description:** This is a pack of nodes for composing masks and images, including a simple text mask creator and both image and latent offset nodes. The offsets wrap around, so these can be used in conjunction with the Seamless node to progressively generate centered on different parts of the seamless tiling.
This includes 15 Nodes:
- *Adjust Image Hue Plus* - Rotate the hue of an image in one of several different color spaces.
- *Blend Latents/Noise (Masked)* - Use a mask to blend part of one latents tensor [including Noise outputs] into another. Can be used to "renoise" sections during a multi-stage [masked] denoising process.
- *Enhance Image* - Boost or reduce color saturation, contrast, brightness, sharpness, or invert colors of any image at any stage with this simple wrapper for pillow [PIL]'s ImageEnhance module.
- *Equivalent Achromatic Lightness* - Calculates image lightness accounting for Helmholtz-Kohlrausch effect based on a method described by High, Green, and Nussbaum (2023).
- *Text to Mask (Clipseg)* - Input a prompt and an image to generate a mask representing areas of the image matched by the prompt.
- *Text to Mask Advanced (Clipseg)* - Output up to four prompt masks combined with logical "and", logical "or", or as separate channels of an RGBA image.
- *Image Layer Blend* - Perform a layered blend of two images using alpha compositing. Opacity of top layer is selectable, with optional mask and several different blend modes/color spaces.
- *Image Compositor* - Take a subject from an image with a flat backdrop and layer it on another image using a chroma key or flood select background removal.
- *Image Dilate or Erode* - Dilate or expand a mask (or any image!). This is equivalent to an expand/contract operation.
- *Image Value Thresholds* - Clip an image to pure black/white beyond specified thresholds.
- *Offset Latents* - Offset a latents tensor in the vertical and/or horizontal dimensions, wrapping it around.
- *Offset Image* - Offset an image in the vertical and/or horizontal dimensions, wrapping it around.
- *Rotate/Flip Image* - Rotate an image in degrees clockwise/counterclockwise about its center, optionally resizing the image boundaries to fit, or flipping it about the vertical and/or horizontal axes.
- *Shadows/Highlights/Midtones* - Extract three masks (with adjustable hard or soft thresholds) representing shadows, midtones, and highlights regions of an image.
- *Text Mask (simple 2D)* - create and position a white on black (or black on white) line of text using any font locally available to Invoke.
**Node Link:** https://github.com/dwringer/composition-nodes
</br><img src="https://raw.githubusercontent.com/dwringer/composition-nodes/main/composition_pack_overview.jpg" width="500" />
--------------------------------
### Image to Character Art Image Nodes
**Description:** A group of nodes to convert an input image into an ASCII/Unicode art image.
**Node Link:** https://github.com/mickr777/imagetoasciiimage
**Output Examples**
<img src="https://user-images.githubusercontent.com/115216705/271817646-8e061fcc-9a2c-4fa9-bcc7-c0f7b01e9056.png" width="300" /><img src="https://github.com/mickr777/imagetoasciiimage/assets/115216705/3c4990eb-2f42-46b9-90f9-0088b939dc6a" width="300" /></br>
<img src="https://github.com/mickr777/imagetoasciiimage/assets/115216705/fee7f800-a4a8-41e2-a66b-c66e4343307e" width="300" />
<img src="https://github.com/mickr777/imagetoasciiimage/assets/115216705/1d9c1003-a45f-45c2-aac7-46470bb89330" width="300" />
--------------------------------
### Image Picker
**Description:** This InvokeAI node takes in a collection of images and randomly chooses one. This can be useful when you have a number of poses to choose from for a ControlNet node, or a number of input images for another purpose.
**Node Link:** https://github.com/JPPhoto/image-picker-node
--------------------------------
### Load Video Frame
**Description:** This is a video frame image provider + indexer/video creation nodes for hooking up to iterators and ranges and ControlNets and such for invokeAI node experimentation. Think animation + ControlNet outputs.
**Node Link:** https://github.com/helix4u/load_video_frame
**Example Node Graph:** https://github.com/helix4u/load_video_frame/blob/main/Example_Workflow.json
**Output Example:**
<img src="https://raw.githubusercontent.com/helix4u/load_video_frame/main/_git_assets/testmp4_embed_converted.gif" width="500" />
--------------------------------
### Make 3D
**Description:** Create compelling 3D stereo images from 2D originals.
**Node Link:** [https://gitlab.com/srcrr/shift3d/-/raw/main/make3d.py](https://gitlab.com/srcrr/shift3d)
**Example Node Graph:** https://gitlab.com/srcrr/shift3d/-/raw/main/example-workflow.json?ref_type=heads&inline=false
**Output Examples**
<img src="https://gitlab.com/srcrr/shift3d/-/raw/main/example-1.png" width="300" />
<img src="https://gitlab.com/srcrr/shift3d/-/raw/main/example-2.png" width="300" />
--------------------------------
### Oobabooga
**Description:** Asks a local LLM running in Oobabooga's Text-Generation-Webui to write a prompt based on the user input.
**Link:** https://github.com/sammyf/oobabooga-node
**Example:**
"describe a new mystical creature in its natural environment"
"The mystical creature I am describing to you is called the "Glimmerwing". It is a majestic, iridescent being that inhabits the depths of the most enchanted forests and glimmering lakes. Its body is covered in shimmering scales that reflect every color of the rainbow, and it has delicate, translucent wings that sparkle like diamonds in the sunlight. The Glimmerwing's home is a crystal-clear lake, surrounded by towering trees with leaves that shimmer like jewels. In this serene environment, the Glimmerwing spends its days swimming gracefully through the water, chasing schools of glittering fish and playing with the gentle ripples of the lake's surface.
As the sun sets, the Glimmerwing perches on a branch of one of the trees, spreading its wings to catch the last rays of light. The creature's scales glow softly, casting a rainbow of colors across the forest floor. The Glimmerwing sings a haunting melody, its voice echoing through the stillness of the night air. Its song is said to have the power to heal the sick and bring peace to troubled souls. Those who are lucky enough to hear the Glimmerwing's song are forever changed by its beauty and grace."
<img src="https://github.com/sammyf/oobabooga-node/assets/42468608/cecdd820-93dd-4c35-abbf-607e001fb2ed" width="300" />
**Requirement**
a Text-Generation-Webui instance (might work remotely too, but I never tried it)
**Note**
This node works best with SDXL models, especially as the style can be described independently of the LLM's output.
--------------------------------
### Prompt Tools
**Description:** A set of InvokeAI nodes that add general prompt manipulation tools. These were written to accompany the PromptsFromFile node and other prompt generation nodes.
1. PromptJoin - Joins two prompts into one.
2. PromptReplace - Performs a search and replace on a prompt, with the option of using regex.
3. PromptSplitNeg - Splits a prompt into positive and negative using the old V2 method of [] for negative.
4. PromptToFile - Saves a prompt or collection of prompts to a file, one per line. There is an append/overwrite option.
5. PTFieldsCollect - Converts image generation fields into a JSON-format string that can be passed to Prompt to File.
6. PTFieldsExpand - Takes a JSON string and converts it to individual generation parameters. This can be fed from the Prompt to File node.
7. PromptJoinThree - Joins three prompts together.
8. PromptStrength - Takes a string and a float and outputs another string in the format of (string)strength, like the weighted format of compel (see the sketch after this list).
9. PromptStrengthCombine - Takes a collection of prompt strength strings and outputs a string in the .and() or .blend() format that can be fed into a proper prompt node.
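For instance, the (string)strength format produced by PromptStrength can be sketched like this (a hypothetical helper, not the node's actual code):

```python
def prompt_strength(prompt: str, strength: float) -> str:
    # e.g. prompt_strength("a photo of a cat", 1.2) -> "(a photo of a cat)1.2"
    return f"({prompt}){strength}"
```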
See full docs here: https://github.com/skunkworxdark/Prompt-tools-nodes/edit/main/README.md
**Node Link:** https://github.com/skunkworxdark/Prompt-tools-nodes
--------------------------------
### Retroize
**Description:** Retroize is a collection of nodes for InvokeAI to "Retroize" images. Any image can be given a fresh coat of retro paint with these nodes, either from your gallery or from within the graph itself. It includes nodes to pixelize, quantize, palettize, and ditherize images; as well as to retrieve palettes from existing images.
**Node Link:** https://github.com/Ar7ific1al/invokeai-retroizeinode/
**Retroize Output Examples**
<img src="https://github.com/Ar7ific1al/InvokeAI_nodes_retroize/assets/2306586/de8b4fa6-324c-4c2d-b36c-297600c73974" width="500" />
--------------------------------
### Size Stepper Nodes
**Node Link:** https://github.com/dwringer/size-stepper-nodes
**Example Usage:**
</br><img src="https://raw.githubusercontent.com/dwringer/size-stepper-nodes/main/size_nodes_usage.jpg" width="500" />
--------------------------------
### Text font to Image
**Description:** A text-font-to-image node for InvokeAI. Download a font to use (or, if it's in the font cache, it will be used from there). The text is always resized to the image size, but you can control that with padding; an optional second line is supported.
**Output Examples**
<img src="https://github.com/mickr777/InvokeAI/assets/115216705/c21b0af3-d9c6-4c16-9152-846a23effd36" width="300" />
Results after using the depth controlnet
<img src="https://github.com/mickr777/InvokeAI/assets/115216705/915f1a53-968e-43eb-aa61-07cd8f1a733a" width="300" />
<img src="https://github.com/mickr777/InvokeAI/assets/115216705/821ef89e-8a60-44f5-b94e-471a9d8690cc" width="300" />
<img src="https://github.com/mickr777/InvokeAI/assets/115216705/2befcb6d-49f4-4bfd-b5fc-1fee19274f89" width="300" />
--------------------------------
### Thresholding
**Description:** This node generates masks for highlights, midtones, and shadows given an input image. You can optionally specify a blur for the lookup table used in making those masks from the source image.
**Node Link:** https://github.com/JPPhoto/thresholding-node
**Examples**
Input:
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/c88ada13-fb3d-484c-a4fe-947b44712632" width="300" />
Highlights/Midtones/Shadows:
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/727021c1-36ff-4ec8-90c8-105e00de986d" width="300" />
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/0b721bfc-f051-404e-b905-2f16b824ddfe" width="300" />
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/04c1297f-1c88-42b6-a7df-dd090b976286" width="300" />
Highlights/Midtones/Shadows (with LUT blur enabled):
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/19aa718a-70c1-4668-8169-d68f4bd13771" width="300" />
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/0a440e43-697f-4d17-82ee-f287467df0a5" width="300" />
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/0701fd0f-2ca7-4fe2-8613-2b52547bafce" width="300" />
--------------------------------
### Unsharp Mask
**Description:** Applies an unsharp mask filter to an image, preserving its alpha channel in the process.
**Node Link:** https://github.com/JPPhoto/unsharp-mask-node
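The alpha-preserving behaviour can be sketched with Pillow like this (an illustration, not the node's actual source):

```python
from PIL import Image, ImageFilter


def unsharp_keep_alpha(img: Image.Image) -> Image.Image:
    # Sharpen only the RGB channels, then reattach the original alpha channel.
    r, g, b, a = img.convert("RGBA").split()
    sharpened = Image.merge("RGB", (r, g, b)).filter(
        ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3)
    )
    out = sharpened.convert("RGBA")
    out.putalpha(a)
    return out
```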
--------------------------------
### XY Image to Grid and Images to Grids nodes
**Description:** Image to grid nodes and supporting tools.
1. "Images To Grids" node - Takes a collection of images and creates a grid(s) of images. If there are more images than the size of a single grid then multiple grids will be created until it runs out of images.
2. "XYImage To Grid" node - Converts a collection of XYImages into a labeled Grid of images. The XYImages collection has to be built using the supporting nodes. See example node setups for more details.
See full docs here: https://github.com/skunkworxdark/XYGrid_nodes/edit/main/README.md
**Node Link:** https://github.com/skunkworxdark/XYGrid_nodes
--------------------------------
### Example Node Template
**Description:** This node allows you to do super cool things with InvokeAI.
**Node Link:** https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/app/invocations/prompt.py
**Example Workflow:** https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/Prompt_from_File.json
**Output Examples**
</br><img src="https://invoke-ai.github.io/InvokeAI/assets/invoke_ai_banner.png" width="500" />
## Disclaimer


# Contributing Nodes

To learn about the specifics of creating a new node, please visit our Node creation documentation.
Once you've created a node and confirmed that it behaves as expected locally, follow these steps:
- Make sure the node is contained in a new Python (.py) file (a minimal sketch of such a file is shown after this list). Preferably, the node is in a repo with a README detailing the node's usage & examples to help others more easily use your node. Including the tag "invokeai-node" in your repository's README can also help other users find it more easily.
- Submit a pull request with a link to your node(s) repo in GitHub against the `main` branch to add the node to the [Community Nodes](communityNodes.md) list
- Make sure you are following the template below and have provided all relevant details about the node and what it does. Example output images and workflows are very helpful for other users looking to use your node.
- A maintainer will review the pull request and node. If the node is aligned with the direction of the project, you may be asked for permission to include it in the core project.
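As a rough illustration of the first step, a community node is a single Python file defining an invocation. A minimal sketch, assuming InvokeAI 3.x import paths (the Node creation documentation is the authoritative reference):

```python
from invokeai.app.invocations.baseinvocation import (
    BaseInvocation,
    InputField,
    InvocationContext,
    invocation,
)
from invokeai.app.invocations.primitives import StringOutput


@invocation("shout", title="Shout", tags=["example", "string"], category="string", version="1.0.0")
class ShoutInvocation(BaseInvocation):
    """Uppercases a string - a stand-in for your node's real work."""

    text: str = InputField(default="", description="The string to shout")

    def invoke(self, context: InvocationContext) -> StringOutput:
        return StringOutput(value=self.text.upper())
```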
### Community Node Template

View File

# List of Default Nodes
The table below contains a list of the default nodes shipped with InvokeAI and their descriptions.
| Node <img width=160 align="right"> | Function |
| :---------------------------------- | :-------------------------------------------------------------------------------------- |
|Conditioning Primitive | A conditioning tensor primitive value|
|Content Shuffle Processor | Applies content shuffle processing to image|
|ControlNet | Collects ControlNet info to pass to other nodes|
|Denoise Latents | Denoises noisy latents to decodable images|
|Divide Integers | Divides two numbers|
|Dynamic Prompt | Parses a prompt using adieyal/dynamicprompts' random or combinatorial generator|
|[FaceMask](./detailedNodes/faceTools.md#facemask) | Generates masks for faces in an image to use with Inpainting|
|[FaceIdentifier](./detailedNodes/faceTools.md#faceidentifier) | Identifies and labels faces in an image|
|[FaceOff](./detailedNodes/faceTools.md#faceoff) | Creates a new image that is a scaled bounding box with a mask on the face for Inpainting|
|Float Math | Perform basic math operations on two floats|
|Float Primitive Collection | A collection of float primitive values|
|Float Primitive | A float primitive value|
|Float Range | Creates a range|
|Blur Image | Blurs an image|
|Extract Image Channel | Gets a channel from an image.|
|Image Primitive Collection | A collection of image primitive values|
|Integer Math | Perform basic math operations on two integers|
|Convert Image Mode | Converts an image to a different mode.|
|Crop Image | Crops an image to a specified box. The box can be outside of the image.|
|Image Hue Adjustment | Adjusts the Hue of an image.|
|Paste Image | Pastes an image into another image.|
|ImageProcessor | Base class for invocations that preprocess images for ControlNet|
|Resize Image | Resizes an image to specific dimensions|
|Round Float | Rounds a float to a specified number of decimal places|
|Float to Integer | Converts a float to an integer. Optionally rounds to an even multiple of an input number.|
|Scale Image | Scales an image by a factor|
|Image to Latents | Encodes an image into latents.|
|Add Invisible Watermark | Add an invisible watermark to an image|
|ONNX Prompt (Raw) | A node to process inputs and produce outputs. May use dependency injection in `__init__` to receive providers.|
|ONNX Text to Latents | Generates latents from conditionings.|
|ONNX Model Loader | Loads a main model, outputting its submodels.|
|OpenCV Inpaint | Simple inpaint using opencv.|
|Openpose Processor | Applies Openpose processing to image|
|PIDI Processor | Applies PIDI processing to image|
|Prompts from File | Loads prompts from a text file|
|String Primitive | A string primitive value|
|Subtract Integers | Subtracts two numbers|
|Tile Resample Processor | Tile resampler processor|
|Upscale (RealESRGAN) | Upscales an image using RealESRGAN.|
|VAE Loader | Loads a VAE model, outputting a VaeLoaderOutput|
|Zoe (Depth) Processor | Applies Zoe depth processing to image|


# Face Nodes
## FaceOff
FaceOff mimics a user finding a face in an image and resizing the bounding box
around the head in Canvas.
Enter a face ID (found with FaceIdentifier) to choose which face to mask.
Just as you would add more context inside the bounding box by making it larger
in Canvas, the node gives you a padding input (in pixels) which will
simultaneously add more context, and increase the resolution of the bounding box
so the face remains the same size inside it.
The "Minimum Confidence" input defaults to 0.5 (50%), and represents a pass/fail
threshold a detected face must reach for it to be processed. Lowering this value
may help if detection is failing. If the detected masks are imperfect and stray
too far outside/inside of faces, the node gives you X & Y offsets to shrink/grow
the masks by a multiplier.
FaceOff will output the face in a bounded image, taking the face off of the
original image for input into any node that accepts image inputs. The node also
outputs a face mask with the dimensions of the bounded image. The X & Y outputs
are for connecting to the X & Y inputs of the Paste Image node, which will place
the bounded image back on the original image using these coordinates.
###### Inputs/Outputs
| Input | Description |
| ------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Image | Image for face detection |
| Face ID | The face ID to process, numbered from 0. Multiple faces not supported. Find a face's ID with FaceIdentifier node. |
| Minimum Confidence | Minimum confidence for face detection (lower if detection is failing) |
| X Offset | X-axis offset of the mask |
| Y Offset | Y-axis offset of the mask |
| Padding | All-axis padding around the mask in pixels |
| Chunk | Chunk (or divide) the image into sections to greatly improve face detection success. Defaults to off, but will activate if no faces are detected normally. Activate to chunk by default. |
| Output | Description |
| ------------- | ------------------------------------------------ |
| Bounded Image | Original image bound, cropped, and resized |
| Width | The width of the bounded image in pixels |
| Height | The height of the bounded image in pixels |
| Mask | The output mask |
| X | The x coordinate of the bounding box's left side |
| Y | The y coordinate of the bounding box's top side |
## FaceMask
FaceMask mimics a user drawing masks on faces in an image in Canvas.
The "Face IDs" input allows the user to select specific faces to be masked.
Leave empty to detect and mask all faces, or a comma-separated list for a
specific combination of faces (ex: `1,2,4`). A single integer will detect and
mask that specific face. Find face IDs with the FaceIdentifier node.
The "Minimum Confidence" input defaults to 0.5 (50%), and represents a pass/fail
threshold a detected face must reach for it to be processed. Lowering this value
may help if detection is failing.
If the detected masks are imperfect and stray too far outside/inside of faces,
the node gives you X & Y offsets to shrink/grow the masks by a multiplier. All
masks shrink/grow together by the X & Y offset values.
By default, masks are created to change faces. When masks are inverted, they
change surrounding areas, protecting faces.
###### Inputs/Outputs
| Input | Description |
| ------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Image | Image for face detection |
| Face IDs | Comma-separated list of face ids to mask eg '0,2,7'. Numbered from 0. Leave empty to mask all. Find face IDs with FaceIdentifier node. |
| Minimum Confidence | Minimum confidence for face detection (lower if detection is failing) |
| X Offset | X-axis offset of the mask |
| Y Offset | Y-axis offset of the mask |
| Chunk | Chunk (or divide) the image into sections to greatly improve face detection success. Defaults to off, but will activate if no faces are detected normally. Activate to chunk by default. |
| Invert Mask | Toggle to invert the face mask |
| Output | Description |
| ------ | --------------------------------- |
| Image | The original image |
| Width | The width of the image in pixels |
| Height | The height of the image in pixels |
| Mask | The output face mask |
## FaceIdentifier
FaceIdentifier outputs an image with detected face IDs printed in white numbers
onto each face.
Face IDs can then be used in FaceMask and FaceOff to selectively mask all, a
specific combination, or single faces.
The FaceIdentifier output image is generated for user reference, and isn't meant
to be passed on to other image-processing nodes.
The "Minimum Confidence" input defaults to 0.5 (50%), and represents a pass/fail
threshold a detected face must reach for it to be processed. Lowering this value
may help if detection is failing. If an image is changed in the slightest, run
it through FaceIdentifier again to get updated FaceIDs.
###### Inputs/Outputs
| Input | Description |
| ------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Image | Image for face detection |
| Minimum Confidence | Minimum confidence for face detection (lower if detection is failing) |
| Chunk | Chunk (or divide) the image into sections to greatly improve face detection success. Defaults to off, but will activate if no faces are detected normally. Activate to chunk by default. |
| Output | Description |
| ------ | ------------------------------------------------------------------------------------------------ |
| Image | The original image with small face ID numbers printed in white onto each face for user reference |
| Width | The width of the original image in pixels |
| Height | The height of the original image in pixels |
## Tips
- If not all target faces are being detected, activate Chunk to bypass full
image face detection and greatly improve detection success.
- Final results will vary between full-image detection and chunking for faces
that are detectable by both due to the nature of the process. Try either to
your taste.
- Be sure Minimum Confidence is set the same when using FaceIdentifier with
FaceOff/FaceMask.
- For FaceOff, use the color correction node before pasting the face back (e.g. with
  the Paste Image node) to correct edges being noticeable in the final image (see
  example screenshot).
- Non-inpainting models may struggle to paint/generate correctly around faces.
- If your face won't change the way you want it to no matter what you change,
consider that the change you're trying to make is too much at that resolution.
For example, if an image is only 512x768 total, the face might only be 128x128
or 256x256, much smaller than the 512x512 your SD1.5 model was probably
trained on. Try increasing the resolution of the image by upscaling or
resizing, add padding to increase the bounding box's resolution, or use an
image where the face takes up more pixels.
- If the resulting face seems out of place pasted back on the original image
(i.e. too large, not proportional), add more padding on the FaceOff node to
give inpainting more context. Context and good prompting are important to
keeping things proportional.
- If you find the mask is too big/small and going too far outside/inside the
area you want to affect, adjust the X & Y offsets to shrink/grow the mask area.
- Use a higher denoise start value to resemble aspects of the original face or
surroundings. Denoise start = 0 & denoise end = 1 will make something new,
while denoise start = 0.50 & denoise end = 1 will be 50% old and 50% new.
- MediaPipe isn't good at detecting faces with lots of face paint, hair covering
the face, etc. Anything that obstructs the face will likely result in no faces
being detected.
- If you find your face isn't being detected, try lowering the minimum
confidence value from 0.5. This could result in false positives, however
(random areas being detected as faces and masked).
- After altering an image and wanting to process a different face in the newly
altered image, run the altered image through FaceIdentifier again to see the
new Face IDs. MediaPipe will most likely detect faces in a different order
after an image has been changed in the slightest.


# Example Workflows
We've curated some example workflows for you to get started with Workflows in InvokeAI.
To use them, right click on your desired workflow, follow the link to GitHub and click the "⬇" button to download the raw file. You can then use the "Load Workflow" functionality in InvokeAI to load the workflow and start generating images!
If you're interested in finding more workflows, check out the [#share-your-workflows](https://discord.com/channels/1020123559063990373/1130291608097661000) channel in the InvokeAI Discord.
* [SD1.5 / SD2 Text to Image](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/Text_to_Image.json)
* [SDXL Text to Image](https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/SDXL_Text_to_Image.json)
* [SDXL Text to Image with Refiner](https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/SDXL_w_Refiner_Text_to_Image.json)
* [Multi ControlNet (Canny & Depth)](https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/Multi_ControlNet_Canny_and_Depth.json)
* [Tiled Upscaling with ControlNet](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/ESRGAN_img2img_upscale_w_Canny_ControlNet.json)
* [Prompt From File](https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/Prompt_from_File.json)
* [Face Detailer with IP-Adapter & ControlNet](https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/Face_Detailer_with_IP-Adapter_and_Canny.json.json)
* [FaceMask](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/FaceMask.json)
* [FaceOff with 2x Face Scaling](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/FaceOff_FaceScale2x.json)
* [QR Code Monster](https://github.com/invoke-ai/InvokeAI/blob/docs/main/docs/workflows/QR_Code_Monster.json)


{
"name": "ESRGAN img2img upscale w_ Canny ControlNet",
"author": "InvokeAI",
"description": "Sample workflow for using Upscaling with ControlNet with SD1.5",
"version": "1.0.1",
"contact": "invoke@invoke.ai",
"tags": "tiled, upscale controlnet, default",
"notes": "",
"exposedFields": [
{
"nodeId": "d8ace142-c05f-4f1d-8982-88dc7473958d",
"fieldName": "model"
},
{
"nodeId": "771bdf6a-0813-4099-a5d8-921a138754d4",
"fieldName": "image"
},
{
"nodeId": "63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16b",
"fieldName": "prompt"
}
],
"meta": {
"version": "1.0.0"
},
"nodes": [
{
"id": "e8bf67fe-67de-4227-87eb-79e86afdfc74",
"type": "invocation",
"data": {
"id": "e8bf67fe-67de-4227-87eb-79e86afdfc74",
"type": "compel",
"inputs": {
"prompt": {
"id": "5f762fae-d791-42d9-8ab5-2b830c33ff20",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "",
"value": ""
},
"clip": {
"id": "8ac95f40-317d-4513-bbba-b99effd3b438",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "46c65b2b-c0b5-40c2-b183-74e9451c6d56",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 261,
"position": {
"x": 1261.0015571435993,
"y": 1513.9276360694537
}
},
{
"id": "d8ace142-c05f-4f1d-8982-88dc7473958d",
"type": "invocation",
"data": {
"id": "d8ace142-c05f-4f1d-8982-88dc7473958d",
"type": "main_model_loader",
"inputs": {
"model": {
"id": "b35ae88a-f2d2-43f6-958c-8c624391250f",
"name": "model",
"type": "MainModelField",
"fieldKind": "input",
"label": "",
"value": {
"model_name": "stable-diffusion-v1-5",
"base_model": "sd-1",
"model_type": "main"
}
}
},
"outputs": {
"unet": {
"id": "02f243cb-c6e2-42c5-8be9-ef0519d54383",
"name": "unet",
"type": "UNetField",
"fieldKind": "output"
},
"clip": {
"id": "7762ed13-5b28-40f4-85f1-710942ceb92a",
"name": "clip",
"type": "ClipField",
"fieldKind": "output"
},
"vae": {
"id": "69566153-1918-417d-a3bb-32e9e857ef6b",
"name": "vae",
"type": "VaeField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 226,
"position": {
"x": 433.44132965778,
"y": 1419.9552496403696
}
},
{
"id": "771bdf6a-0813-4099-a5d8-921a138754d4",
"type": "invocation",
"data": {
"id": "771bdf6a-0813-4099-a5d8-921a138754d4",
"type": "image",
"inputs": {
"image": {
"id": "0f6d68a2-38bd-4f65-a112-0a256c7a2678",
"name": "image",
"type": "ImageField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"image": {
"id": "76f6f9b6-755b-4373-93fa-6a779998d2c8",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "6858e46b-707c-444f-beda-9b5f4aecfdf8",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "421bdc6e-ecd1-4935-9665-d38ab8314f79",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 225,
"position": {
"x": 11.612243766002848,
"y": 1989.909405085168
}
},
{
"id": "f7564dd2-9539-47f2-ac13-190804461f4e",
"type": "invocation",
"data": {
"id": "f7564dd2-9539-47f2-ac13-190804461f4e",
"type": "esrgan",
"inputs": {
"metadata": {
"id": "8fa0c7eb-5bd3-4575-98e7-72285c532504",
"name": "metadata",
"type": "MetadataField",
"fieldKind": "input",
"label": ""
},
"image": {
"id": "3c949799-a504-41c9-b342-cff4b8146c48",
"name": "image",
"type": "ImageField",
"fieldKind": "input",
"label": ""
},
"model_name": {
"id": "77cb4750-53d6-4c2c-bb5c-145981acbf17",
"name": "model_name",
"type": "enum",
"fieldKind": "input",
"label": "",
"value": "RealESRGAN_x4plus.pth"
},
"tile_size": {
"id": "7787b3ad-46ee-4248-995f-bc740e1f988b",
"name": "tile_size",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 400
}
},
"outputs": {
"image": {
"id": "37e6308e-e926-4e07-b0db-4e8601f495d0",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "c194d84a-fac7-4856-b646-d08477a5ad2b",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "b2a6206c-a9c8-4271-a055-0b93a7f7d505",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.1.0"
},
"width": 320,
"height": 339,
"position": {
"x": 436.07457889056195,
"y": 1967.3109314112623
}
},
{
"id": "1d887701-df21-4966-ae6e-a7d82307d7bd",
"type": "invocation",
"data": {
"id": "1d887701-df21-4966-ae6e-a7d82307d7bd",
"type": "canny_image_processor",
"inputs": {
"metadata": {
"id": "52c877c8-25d9-4949-8518-f536fcdd152d",
"name": "metadata",
"type": "MetadataField",
"fieldKind": "input",
"label": ""
},
"image": {
"id": "e0af11fe-4f95-4193-a599-cf40b6a963f5",
"name": "image",
"type": "ImageField",
"fieldKind": "input",
"label": ""
},
"low_threshold": {
"id": "ab775f7b-f556-4298-a9d6-2274f3a6c77c",
"name": "low_threshold",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 100
},
"high_threshold": {
"id": "9e58b615-06e4-417f-b0d8-63f1574cd174",
"name": "high_threshold",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 200
}
},
"outputs": {
"image": {
"id": "61feb8bf-95c9-4634-87e2-887fc43edbdf",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "9e203e41-73f7-4cfa-bdca-5040e5e60c55",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "ec7d99dc-0d82-4495-a759-6423808bff1c",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 339,
"position": {
"x": 1221.7155516160597,
"y": 1971.0099052871012
}
},
{
"id": "ca1d020c-89a8-4958-880a-016d28775cfa",
"type": "invocation",
"data": {
"id": "ca1d020c-89a8-4958-880a-016d28775cfa",
"type": "controlnet",
"inputs": {
"image": {
"id": "2973c126-e301-4595-a7dc-d6e1729ccdbf",
"name": "image",
"type": "ImageField",
"fieldKind": "input",
"label": ""
},
"control_model": {
"id": "4bb4d987-8491-4839-b41b-6e2f546fe2d0",
"name": "control_model",
"type": "ControlNetModelField",
"fieldKind": "input",
"label": "",
"value": {
"model_name": "sd-controlnet-canny",
"base_model": "sd-1"
}
},
"control_weight": {
"id": "a3cf387a-b58f-4058-858f-6a918efac609",
"name": "control_weight",
"type": "FloatPolymorphic",
"fieldKind": "input",
"label": "",
"value": 1
},
"begin_step_percent": {
"id": "e0614f69-8a58-408b-9238-d3a44a4db4e0",
"name": "begin_step_percent",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 0
},
"end_step_percent": {
"id": "ac683539-b6ed-4166-9294-2040e3ede206",
"name": "end_step_percent",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 1
},
"control_mode": {
"id": "f00b21de-cbd7-4901-8efc-e7134a2dc4c8",
"name": "control_mode",
"type": "enum",
"fieldKind": "input",
"label": "",
"value": "balanced"
},
"resize_mode": {
"id": "cafb60ee-3959-4d57-a06c-13b83be6ea4f",
"name": "resize_mode",
"type": "enum",
"fieldKind": "input",
"label": "",
"value": "just_resize"
}
},
"outputs": {
"control": {
"id": "dfb88dd1-12bf-4034-9268-e726f894c131",
"name": "control",
"type": "ControlField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 508,
"position": {
"x": 1681.7783532660528,
"y": 1845.0516454465633
}
},
{
"id": "f50624ce-82bf-41d0-bdf7-8aab11a80d48",
"type": "invocation",
"data": {
"id": "f50624ce-82bf-41d0-bdf7-8aab11a80d48",
"type": "noise",
"inputs": {
"seed": {
"id": "f76b0e01-b601-423f-9b5f-ab7a1f10fe82",
"name": "seed",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"width": {
"id": "eec326d6-710c-45de-a25c-95704c80d7e2",
"name": "width",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 512
},
"height": {
"id": "2794a27d-5337-43ca-95d9-41b673642c94",
"name": "height",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 512
},
"use_cpu": {
"id": "ae7654e3-979e-44a1-8968-7e3199e91e66",
"name": "use_cpu",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": true
}
},
"outputs": {
"noise": {
"id": "8b6dc166-4ead-4124-8ac9-529814b0cbb9",
"name": "noise",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "e3fe3940-a277-4838-a448-5f81f2a7d99d",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "48ecd6ef-c216-40d5-9d1b-d37bd00c82e7",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": false,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 32,
"position": {
"x": 1660.5387878479382,
"y": 1664.7391082353483
}
},
{
"id": "c3737554-8d87-48ff-a6f8-e71d2867f434",
"type": "invocation",
"data": {
"id": "c3737554-8d87-48ff-a6f8-e71d2867f434",
"type": "denoise_latents",
"inputs": {
"positive_conditioning": {
"id": "e127084b-72f5-4fe4-892b-84f34f88bce9",
"name": "positive_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"negative_conditioning": {
"id": "72cde4ee-55de-4d3e-9057-74e741c04e20",
"name": "negative_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"noise": {
"id": "747f7023-1c19-465b-bec8-1d9695dd3505",
"name": "noise",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"steps": {
"id": "80860292-633c-46f2-83d0-60d0029b65d2",
"name": "steps",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 10
},
"cfg_scale": {
"id": "ebc71e6f-9148-4f12-b455-5e1f179d1c3a",
"name": "cfg_scale",
"type": "FloatPolymorphic",
"fieldKind": "input",
"label": "",
"value": 7.5
},
"denoising_start": {
"id": "ced44b8f-3bad-4c34-8113-13bc0faed28a",
"name": "denoising_start",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 0
},
"denoising_end": {
"id": "79bf4b77-3502-4f72-ba8b-269c4c3c5c72",
"name": "denoising_end",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 1
},
"scheduler": {
"id": "ed56e2b8-f477-41a2-b9f5-f15f4933ae65",
"name": "scheduler",
"type": "Scheduler",
"fieldKind": "input",
"label": "",
"value": "euler"
},
"unet": {
"id": "146b790c-b08e-437c-a2e1-e393c2c1c41a",
"name": "unet",
"type": "UNetField",
"fieldKind": "input",
"label": ""
},
"control": {
"id": "75ed3df1-d261-4b8e-a89b-341c4d7161fb",
"name": "control",
"type": "ControlPolymorphic",
"fieldKind": "input",
"label": ""
},
"ip_adapter": {
"id": "eab9a61d-9b64-44d3-8d90-4686f5887cb0",
"name": "ip_adapter",
"type": "IPAdapterPolymorphic",
"fieldKind": "input",
"label": ""
},
"t2i_adapter": {
"id": "2dc8d637-58fd-4069-ad33-85c32d958b7b",
"name": "t2i_adapter",
"type": "T2IAdapterPolymorphic",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "6767e40a-97c6-4487-b3c9-cad1c150bf9f",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"denoise_mask": {
"id": "6251efda-d97d-4ff1-94b5-8cc6b458c184",
"name": "denoise_mask",
"type": "DenoiseMaskField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"latents": {
"id": "4e7986a4-dff2-4448-b16b-1af477b81f8b",
"name": "latents",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "dad525dd-d2f8-4f07-8c8d-51f2a3c5456e",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "af03a089-4739-40c6-8b48-25d458d63c2f",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.4.0"
},
"width": 320,
"height": 646,
"position": {
"x": 2128.740065979906,
"y": 1232.6219060454753
}
},
{
"id": "3ed9b2ef-f4ec-40a7-94db-92e63b583ec0",
"type": "invocation",
"data": {
"id": "3ed9b2ef-f4ec-40a7-94db-92e63b583ec0",
"type": "l2i",
"inputs": {
"metadata": {
"id": "9f7a1a9f-7861-4f09-874b-831af89b7474",
"name": "metadata",
"type": "MetadataField",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "a5b42432-8ee7-48cd-b61c-b97be6e490a2",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"vae": {
"id": "890de106-e6c3-4c2c-8d67-b368def64894",
"name": "vae",
"type": "VaeField",
"fieldKind": "input",
"label": ""
},
"tiled": {
"id": "b8e5a2ca-5fbc-49bd-ad4c-ea0e109d46e3",
"name": "tiled",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
},
"fp32": {
"id": "fdaf6264-4593-4bd2-ac71-8a0acff261af",
"name": "fp32",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
}
},
"outputs": {
"image": {
"id": "94c5877d-6c78-4662-a836-8a84fc75d0a0",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "2a854e42-1616-42f5-b9ef-7b73c40afc1d",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "dd649053-1433-4f31-90b3-8bb103efc5b1",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": false,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 267,
"position": {
"x": 2559.4751127537957,
"y": 1246.6000376741406
}
},
{
"id": "5ca498a4-c8c8-4580-a396-0c984317205d",
"type": "invocation",
"data": {
"id": "5ca498a4-c8c8-4580-a396-0c984317205d",
"type": "i2l",
"inputs": {
"image": {
"id": "9e6c4010-0f79-4587-9062-29d9a8f96b3b",
"name": "image",
"type": "ImageField",
"fieldKind": "input",
"label": ""
},
"vae": {
"id": "b9ed2ec4-e8e3-4d69-8a42-27f2d983bcd6",
"name": "vae",
"type": "VaeField",
"fieldKind": "input",
"label": ""
},
"tiled": {
"id": "bb48d10b-2440-4c46-b835-646ae5ebc013",
"name": "tiled",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
},
"fp32": {
"id": "1048612c-c0f4-4abf-a684-0045e7d158f8",
"name": "fp32",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
}
},
"outputs": {
"latents": {
"id": "55301367-0578-4dee-8060-031ae13c7bf8",
"name": "latents",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "2eb65690-1f20-4070-afbd-1e771b9f8ca9",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "d5bf64c7-c30f-43b8-9bc2-95e7718c1bdc",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 325,
"position": {
"x": 848.091172736516,
"y": 1618.7467772496016
}
},
{
"id": "63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16b",
"type": "invocation",
"data": {
"id": "63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16b",
"type": "compel",
"inputs": {
"prompt": {
"id": "5f762fae-d791-42d9-8ab5-2b830c33ff20",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "",
"value": ""
},
"clip": {
"id": "8ac95f40-317d-4513-bbba-b99effd3b438",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "46c65b2b-c0b5-40c2-b183-74e9451c6d56",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 261,
"position": {
"x": 1280.0309709777139,
"y": 1213.3027983934699
}
}
],
"edges": [
{
"source": "771bdf6a-0813-4099-a5d8-921a138754d4",
"sourceHandle": "image",
"target": "f7564dd2-9539-47f2-ac13-190804461f4e",
"targetHandle": "image",
"id": "reactflow__edge-771bdf6a-0813-4099-a5d8-921a138754d4image-f7564dd2-9539-47f2-ac13-190804461f4eimage",
"type": "default"
},
{
"source": "f7564dd2-9539-47f2-ac13-190804461f4e",
"sourceHandle": "image",
"target": "1d887701-df21-4966-ae6e-a7d82307d7bd",
"targetHandle": "image",
"id": "reactflow__edge-f7564dd2-9539-47f2-ac13-190804461f4eimage-1d887701-df21-4966-ae6e-a7d82307d7bdimage",
"type": "default"
},
{
"source": "5ca498a4-c8c8-4580-a396-0c984317205d",
"sourceHandle": "width",
"target": "f50624ce-82bf-41d0-bdf7-8aab11a80d48",
"targetHandle": "width",
"id": "reactflow__edge-5ca498a4-c8c8-4580-a396-0c984317205dwidth-f50624ce-82bf-41d0-bdf7-8aab11a80d48width",
"type": "default"
},
{
"source": "5ca498a4-c8c8-4580-a396-0c984317205d",
"sourceHandle": "height",
"target": "f50624ce-82bf-41d0-bdf7-8aab11a80d48",
"targetHandle": "height",
"id": "reactflow__edge-5ca498a4-c8c8-4580-a396-0c984317205dheight-f50624ce-82bf-41d0-bdf7-8aab11a80d48height",
"type": "default"
},
{
"source": "f50624ce-82bf-41d0-bdf7-8aab11a80d48",
"sourceHandle": "noise",
"target": "c3737554-8d87-48ff-a6f8-e71d2867f434",
"targetHandle": "noise",
"id": "reactflow__edge-f50624ce-82bf-41d0-bdf7-8aab11a80d48noise-c3737554-8d87-48ff-a6f8-e71d2867f434noise",
"type": "default"
},
{
"source": "5ca498a4-c8c8-4580-a396-0c984317205d",
"sourceHandle": "latents",
"target": "c3737554-8d87-48ff-a6f8-e71d2867f434",
"targetHandle": "latents",
"id": "reactflow__edge-5ca498a4-c8c8-4580-a396-0c984317205dlatents-c3737554-8d87-48ff-a6f8-e71d2867f434latents",
"type": "default"
},
{
"source": "e8bf67fe-67de-4227-87eb-79e86afdfc74",
"sourceHandle": "conditioning",
"target": "c3737554-8d87-48ff-a6f8-e71d2867f434",
"targetHandle": "negative_conditioning",
"id": "reactflow__edge-e8bf67fe-67de-4227-87eb-79e86afdfc74conditioning-c3737554-8d87-48ff-a6f8-e71d2867f434negative_conditioning",
"type": "default"
},
{
"source": "63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16b",
"sourceHandle": "conditioning",
"target": "c3737554-8d87-48ff-a6f8-e71d2867f434",
"targetHandle": "positive_conditioning",
"id": "reactflow__edge-63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16bconditioning-c3737554-8d87-48ff-a6f8-e71d2867f434positive_conditioning",
"type": "default"
},
{
"source": "d8ace142-c05f-4f1d-8982-88dc7473958d",
"sourceHandle": "clip",
"target": "63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16b",
"targetHandle": "clip",
"id": "reactflow__edge-d8ace142-c05f-4f1d-8982-88dc7473958dclip-63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16bclip",
"type": "default"
},
{
"source": "d8ace142-c05f-4f1d-8982-88dc7473958d",
"sourceHandle": "clip",
"target": "e8bf67fe-67de-4227-87eb-79e86afdfc74",
"targetHandle": "clip",
"id": "reactflow__edge-d8ace142-c05f-4f1d-8982-88dc7473958dclip-e8bf67fe-67de-4227-87eb-79e86afdfc74clip",
"type": "default"
},
{
"source": "1d887701-df21-4966-ae6e-a7d82307d7bd",
"sourceHandle": "image",
"target": "ca1d020c-89a8-4958-880a-016d28775cfa",
"targetHandle": "image",
"id": "reactflow__edge-1d887701-df21-4966-ae6e-a7d82307d7bdimage-ca1d020c-89a8-4958-880a-016d28775cfaimage",
"type": "default"
},
{
"source": "ca1d020c-89a8-4958-880a-016d28775cfa",
"sourceHandle": "control",
"target": "c3737554-8d87-48ff-a6f8-e71d2867f434",
"targetHandle": "control",
"id": "reactflow__edge-ca1d020c-89a8-4958-880a-016d28775cfacontrol-c3737554-8d87-48ff-a6f8-e71d2867f434control",
"type": "default"
},
{
"source": "c3737554-8d87-48ff-a6f8-e71d2867f434",
"sourceHandle": "latents",
"target": "3ed9b2ef-f4ec-40a7-94db-92e63b583ec0",
"targetHandle": "latents",
"id": "reactflow__edge-c3737554-8d87-48ff-a6f8-e71d2867f434latents-3ed9b2ef-f4ec-40a7-94db-92e63b583ec0latents",
"type": "default"
},
{
"source": "d8ace142-c05f-4f1d-8982-88dc7473958d",
"sourceHandle": "vae",
"target": "3ed9b2ef-f4ec-40a7-94db-92e63b583ec0",
"targetHandle": "vae",
"id": "reactflow__edge-d8ace142-c05f-4f1d-8982-88dc7473958dvae-3ed9b2ef-f4ec-40a7-94db-92e63b583ec0vae",
"type": "default"
},
{
"source": "f7564dd2-9539-47f2-ac13-190804461f4e",
"sourceHandle": "image",
"target": "5ca498a4-c8c8-4580-a396-0c984317205d",
"targetHandle": "image",
"id": "reactflow__edge-f7564dd2-9539-47f2-ac13-190804461f4eimage-5ca498a4-c8c8-4580-a396-0c984317205dimage",
"type": "default"
},
{
"source": "d8ace142-c05f-4f1d-8982-88dc7473958d",
"sourceHandle": "unet",
"target": "c3737554-8d87-48ff-a6f8-e71d2867f434",
"targetHandle": "unet",
"id": "reactflow__edge-d8ace142-c05f-4f1d-8982-88dc7473958dunet-c3737554-8d87-48ff-a6f8-e71d2867f434unet",
"type": "default"
}
]
}
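All edge ids in these workflow files appear to follow a deterministic convention: `reactflow__edge-` followed by the source node id, the source handle, a hyphen, the target node id, and the target handle. A minimal sketch of that convention in Python (the helper name `make_edge` is hypothetical, chosen for illustration; collapsed edges in some workflows use a different `-collapsed` id form instead):

```python
def make_edge(source: str, source_handle: str, target: str, target_handle: str) -> dict:
    """Build an edge entry matching the id convention seen in these workflow files."""
    return {
        "source": source,
        "sourceHandle": source_handle,
        "target": target,
        "targetHandle": target_handle,
        # id pattern: reactflow__edge-<source><sourceHandle>-<target><targetHandle>
        "id": f"reactflow__edge-{source}{source_handle}-{target}{target_handle}",
        "type": "default",
    }

# Example using ids from the workflow above:
edge = make_edge(
    "c3737554-8d87-48ff-a6f8-e71d2867f434", "latents",
    "3ed9b2ef-f4ec-40a7-94db-92e63b583ec0", "latents",
)
print(edge["id"])
```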

docs/workflows/FaceMask.json (new file)

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

@@ -0,0 +1,985 @@
{
"name": "Multi ControlNet (Canny & Depth)",
"author": "Millu",
"description": "A sample workflow using canny & depth ControlNets to guide the generation process. ",
"version": "0.1.0",
"contact": "millun@invoke.ai",
"tags": "ControlNet, canny, depth",
"notes": "",
"exposedFields": [
{
"nodeId": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"fieldName": "model"
},
{
"nodeId": "7ce68934-3419-42d4-ac70-82cfc9397306",
"fieldName": "prompt"
},
{
"nodeId": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
"fieldName": "prompt"
},
{
"nodeId": "c4b23e64-7986-40c4-9cad-46327b12e204",
"fieldName": "image"
},
{
"nodeId": "8e860e51-5045-456e-bf04-9a62a2a5c49e",
"fieldName": "image"
}
],
"meta": {
"version": "1.0.0"
},
"nodes": [
{
"id": "8e860e51-5045-456e-bf04-9a62a2a5c49e",
"type": "invocation",
"data": {
"id": "8e860e51-5045-456e-bf04-9a62a2a5c49e",
"type": "image",
"inputs": {
"image": {
"id": "189c8adf-68cc-4774-a729-49da89f6fdf1",
"name": "image",
"type": "ImageField",
"fieldKind": "input",
"label": "Depth Input Image"
}
},
"outputs": {
"image": {
"id": "1a31cacd-9d19-4f32-b558-c5e4aa39ce73",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "12f298fd-1d11-4cca-9426-01240f7ec7cf",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "c47dabcb-44e8-40c9-992d-81dca59f598e",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 225,
"position": {
"x": 3617.163483500202,
"y": 40.5529847930888
}
},
{
"id": "a33199c2-8340-401e-b8a2-42ffa875fc1c",
"type": "invocation",
"data": {
"id": "a33199c2-8340-401e-b8a2-42ffa875fc1c",
"type": "controlnet",
"inputs": {
"image": {
"id": "4e0a3172-d3c2-4005-a84c-fa12a404f8a0",
"name": "image",
"type": "ImageField",
"fieldKind": "input",
"label": ""
},
"control_model": {
"id": "8cb2d998-4086-430a-8b13-94cbc81e3ca3",
"name": "control_model",
"type": "ControlNetModelField",
"fieldKind": "input",
"label": "",
"value": {
"model_name": "sd-controlnet-depth",
"base_model": "sd-1"
}
},
"control_weight": {
"id": "5e32bd8a-9dc8-42d8-9bcc-c2b0460c0b0f",
"name": "control_weight",
"type": "FloatPolymorphic",
"fieldKind": "input",
"label": "",
"value": 1
},
"begin_step_percent": {
"id": "c258a276-352a-416c-8358-152f11005c0c",
"name": "begin_step_percent",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 0
},
"end_step_percent": {
"id": "43001125-0d70-4f87-8e79-da6603ad6c33",
"name": "end_step_percent",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 1
},
"control_mode": {
"id": "d2f14561-9443-4374-9270-e2f05007944e",
"name": "control_mode",
"type": "enum",
"fieldKind": "input",
"label": "",
"value": "balanced"
},
"resize_mode": {
"id": "727ee7d3-8bf6-4c7d-8b8a-43546b3b59cd",
"name": "resize_mode",
"type": "enum",
"fieldKind": "input",
"label": "",
"value": "just_resize"
}
},
"outputs": {
"control": {
"id": "b034aa0f-4d0d-46e4-b5e3-e25a9588d087",
"name": "control",
"type": "ControlField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 508,
"position": {
"x": 4477.604342844504,
"y": -49.39005411272677
}
},
{
"id": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
"type": "invocation",
"data": {
"id": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
"type": "compel",
"inputs": {
"prompt": {
"id": "7c2c4771-2161-4d77-aced-ff8c4b3f1c15",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "Negative Prompt",
"value": ""
},
"clip": {
"id": "06d59e91-9cca-411d-bf05-86b099b3e8f7",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "858bc33c-134c-4bf6-8855-f943e1d26f14",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 261,
"position": {
"x": 4444.706437017514,
"y": -924.0715320874991
}
},
{
"id": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"type": "invocation",
"data": {
"id": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"type": "main_model_loader",
"inputs": {
"model": {
"id": "f4a915a5-593e-4b6d-9198-c78eb5cefaed",
"name": "model",
"type": "MainModelField",
"fieldKind": "input",
"label": "",
"value": {
"model_name": "stable-diffusion-v1-5",
"base_model": "sd-1",
"model_type": "main"
}
}
},
"outputs": {
"unet": {
"id": "ee24fb16-da38-4c66-9fbc-e8f296ed40d2",
"name": "unet",
"type": "UNetField",
"fieldKind": "output"
},
"clip": {
"id": "f3fb0524-8803-41c1-86db-a61a13ee6a33",
"name": "clip",
"type": "ClipField",
"fieldKind": "output"
},
"vae": {
"id": "5c4878a8-b40f-44ab-b146-1c1f42c860b3",
"name": "vae",
"type": "VaeField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 226,
"position": {
"x": 3837.096149678291,
"y": -1050.015351148365
}
},
{
"id": "7ce68934-3419-42d4-ac70-82cfc9397306",
"type": "invocation",
"data": {
"id": "7ce68934-3419-42d4-ac70-82cfc9397306",
"type": "compel",
"inputs": {
"prompt": {
"id": "7c2c4771-2161-4d77-aced-ff8c4b3f1c15",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "Positive Prompt",
"value": ""
},
"clip": {
"id": "06d59e91-9cca-411d-bf05-86b099b3e8f7",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "858bc33c-134c-4bf6-8855-f943e1d26f14",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 261,
"position": {
"x": 4449.356038911986,
"y": -1201.659695420063
}
},
{
"id": "d204d184-f209-4fae-a0a1-d152800844e1",
"type": "invocation",
"data": {
"id": "d204d184-f209-4fae-a0a1-d152800844e1",
"type": "controlnet",
"inputs": {
"image": {
"id": "4e0a3172-d3c2-4005-a84c-fa12a404f8a0",
"name": "image",
"type": "ImageField",
"fieldKind": "input",
"label": ""
},
"control_model": {
"id": "8cb2d998-4086-430a-8b13-94cbc81e3ca3",
"name": "control_model",
"type": "ControlNetModelField",
"fieldKind": "input",
"label": "",
"value": {
"model_name": "sd-controlnet-canny",
"base_model": "sd-1"
}
},
"control_weight": {
"id": "5e32bd8a-9dc8-42d8-9bcc-c2b0460c0b0f",
"name": "control_weight",
"type": "FloatPolymorphic",
"fieldKind": "input",
"label": "",
"value": 1
},
"begin_step_percent": {
"id": "c258a276-352a-416c-8358-152f11005c0c",
"name": "begin_step_percent",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 0
},
"end_step_percent": {
"id": "43001125-0d70-4f87-8e79-da6603ad6c33",
"name": "end_step_percent",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 1
},
"control_mode": {
"id": "d2f14561-9443-4374-9270-e2f05007944e",
"name": "control_mode",
"type": "enum",
"fieldKind": "input",
"label": "",
"value": "balanced"
},
"resize_mode": {
"id": "727ee7d3-8bf6-4c7d-8b8a-43546b3b59cd",
"name": "resize_mode",
"type": "enum",
"fieldKind": "input",
"label": "",
"value": "just_resize"
}
},
"outputs": {
"control": {
"id": "b034aa0f-4d0d-46e4-b5e3-e25a9588d087",
"name": "control",
"type": "ControlField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 508,
"position": {
"x": 4479.68542130465,
"y": -618.4221638099414
}
},
{
"id": "c4b23e64-7986-40c4-9cad-46327b12e204",
"type": "invocation",
"data": {
"id": "c4b23e64-7986-40c4-9cad-46327b12e204",
"type": "image",
"inputs": {
"image": {
"id": "189c8adf-68cc-4774-a729-49da89f6fdf1",
"name": "image",
"type": "ImageField",
"fieldKind": "input",
"label": "Canny Input Image"
}
},
"outputs": {
"image": {
"id": "1a31cacd-9d19-4f32-b558-c5e4aa39ce73",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "12f298fd-1d11-4cca-9426-01240f7ec7cf",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "c47dabcb-44e8-40c9-992d-81dca59f598e",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 225,
"position": {
"x": 3593.7474460420153,
"y": -538.1200472386865
}
},
{
"id": "ca4d5059-8bfb-447f-b415-da0faba5a143",
"type": "invocation",
"data": {
"id": "ca4d5059-8bfb-447f-b415-da0faba5a143",
"type": "collect",
"inputs": {
"item": {
"id": "b16ae602-8708-4b1b-8d4f-9e0808d429ab",
"name": "item",
"type": "CollectionItem",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"collection": {
"id": "d8987dd8-dec8-4d94-816a-3e356af29884",
"name": "collection",
"type": "Collection",
"fieldKind": "output"
}
},
"label": "ControlNet Collection",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 104,
"position": {
"x": 4866.191497139488,
"y": -299.0538619537037
}
},
{
"id": "018b1214-c2af-43a7-9910-fb687c6726d7",
"type": "invocation",
"data": {
"id": "018b1214-c2af-43a7-9910-fb687c6726d7",
"type": "midas_depth_image_processor",
"inputs": {
"metadata": {
"id": "77f91980-c696-4a18-a9ea-6e2fc329a747",
"name": "metadata",
"type": "MetadataField",
"fieldKind": "input",
"label": ""
},
"image": {
"id": "50710a20-2af5-424d-9d17-aa08167829c6",
"name": "image",
"type": "ImageField",
"fieldKind": "input",
"label": ""
},
"a_mult": {
"id": "f3b26f9d-2498-415e-9c01-197a8d06c0a5",
"name": "a_mult",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 2
},
"bg_th": {
"id": "4b1eb3ae-9d4a-47d6-b0ed-da62501e007f",
"name": "bg_th",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 0.1
}
},
"outputs": {
"image": {
"id": "b4ed637c-c4a0-4fdd-a24e-36d6412e4ccf",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "6bf9b609-d72c-4239-99bd-390a73cc3a9c",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "3e8aef09-cf44-4e3e-a490-d3c9e7b23119",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 339,
"position": {
"x": 4054.229311491893,
"y": -31.611411056365725
}
},
{
"id": "c826ba5e-9676-4475-b260-07b85e88753c",
"type": "invocation",
"data": {
"id": "c826ba5e-9676-4475-b260-07b85e88753c",
"type": "canny_image_processor",
"inputs": {
"metadata": {
"id": "08331ea6-99df-4e61-a919-204d9bfa8fb2",
"name": "metadata",
"type": "MetadataField",
"fieldKind": "input",
"label": ""
},
"image": {
"id": "33a37284-06ac-459c-ba93-1655e4f69b2d",
"name": "image",
"type": "ImageField",
"fieldKind": "input",
"label": ""
},
"low_threshold": {
"id": "21ec18a3-50c5-4ba1-9642-f921744d594f",
"name": "low_threshold",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 100
},
"high_threshold": {
"id": "ebeab271-a5ff-4c88-acfd-1d0271ab6ed4",
"name": "high_threshold",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 200
}
},
"outputs": {
"image": {
"id": "c0caadbf-883f-4cb4-a62d-626b9c81fc4e",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "df225843-8098-49c0-99d1-3b0b6600559f",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "e4abe0de-aa16-41f3-9cd7-968b49db5da3",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 339,
"position": {
"x": 4095.757337055795,
"y": -455.63440891935863
}
},
{
"id": "9db25398-c869-4a63-8815-c6559341ef12",
"type": "invocation",
"data": {
"id": "9db25398-c869-4a63-8815-c6559341ef12",
"type": "l2i",
"inputs": {
"metadata": {
"id": "2f269793-72e5-4ff3-b76c-fab4f93e983f",
"name": "metadata",
"type": "MetadataField",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "4aaedd3b-cc77-420c-806e-c7fa74ec4cdf",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"vae": {
"id": "432b066a-2462-4d18-83d9-64620b72df45",
"name": "vae",
"type": "VaeField",
"fieldKind": "input",
"label": ""
},
"tiled": {
"id": "61f86e0f-7c46-40f8-b3f5-fe2f693595ca",
"name": "tiled",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
},
"fp32": {
"id": "39b6c89a-37ef-4a7e-9509-daeca49d5092",
"name": "fp32",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
}
},
"outputs": {
"image": {
"id": "6204e9b0-61dd-4250-b685-2092ba0e28e6",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "b4140649-8d5d-4d2d-bfa6-09e389ede5f9",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "f3a0c0c8-fc24-4646-8be1-ed8cdd140828",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": false,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 267,
"position": {
"x": 5678.726701377887,
"y": -351.6792416734579
}
},
{
"id": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
"type": "invocation",
"data": {
"id": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
"type": "denoise_latents",
"inputs": {
"positive_conditioning": {
"id": "869cd309-c238-444b-a1a0-5021f99785ba",
"name": "positive_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"negative_conditioning": {
"id": "343447b4-1e37-4e9e-8ac7-4d04864066af",
"name": "negative_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"noise": {
"id": "b556571e-0cf9-4e03-8cfc-5caad937d957",
"name": "noise",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"steps": {
"id": "a3b3d2de-9308-423e-b00d-c209c3e6e808",
"name": "steps",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 10
},
"cfg_scale": {
"id": "b13c50a4-ec7e-4579-b0ef-2fe5df2605ea",
"name": "cfg_scale",
"type": "FloatPolymorphic",
"fieldKind": "input",
"label": "",
"value": 7.5
},
"denoising_start": {
"id": "57d5d755-f58f-4347-b991-f0bca4a0ab29",
"name": "denoising_start",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 0
},
"denoising_end": {
"id": "323e78a6-880a-4d73-a62c-70faff965aa6",
"name": "denoising_end",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 1
},
"scheduler": {
"id": "c25fdc17-a089-43ac-953e-067c45d5c76b",
"name": "scheduler",
"type": "Scheduler",
"fieldKind": "input",
"label": "",
"value": "euler"
},
"unet": {
"id": "6cde662b-e633-4569-b6b4-ec87c52c9c11",
"name": "unet",
"type": "UNetField",
"fieldKind": "input",
"label": ""
},
"control": {
"id": "276a4df9-bb26-4505-a4d3-a94e18c7b541",
"name": "control",
"type": "ControlPolymorphic",
"fieldKind": "input",
"label": ""
},
"ip_adapter": {
"id": "48d40c51-b5e2-4457-a428-eef0696695e8",
"name": "ip_adapter",
"type": "IPAdapterPolymorphic",
"fieldKind": "input",
"label": ""
},
"t2i_adapter": {
"id": "75dd8af2-e7d7-48b4-a574-edd9f6e686ad",
"name": "t2i_adapter",
"type": "T2IAdapterPolymorphic",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "9223d67b-1dd7-4b34-a45f-ed0a725d9702",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"denoise_mask": {
"id": "4ee99177-6923-4b7f-8fe0-d721dd7cb05b",
"name": "denoise_mask",
"type": "DenoiseMaskField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"latents": {
"id": "7fb4e326-a974-43e8-9ee7-2e3ab235819d",
"name": "latents",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "6bb8acd0-8973-4195-a095-e376385dc705",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "795dea52-1c7d-4e64-99f7-2f60ec6e3ab9",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.4.0"
},
"width": 320,
"height": 646,
"position": {
"x": 5274.672987098195,
"y": -823.0752416664332
}
}
],
"edges": [
{
"source": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"sourceHandle": "clip",
"target": "7ce68934-3419-42d4-ac70-82cfc9397306",
"targetHandle": "clip",
"id": "reactflow__edge-54486974-835b-4d81-8f82-05f9f32ce9e9clip-7ce68934-3419-42d4-ac70-82cfc9397306clip",
"type": "default"
},
{
"source": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"sourceHandle": "clip",
"target": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
"targetHandle": "clip",
"id": "reactflow__edge-54486974-835b-4d81-8f82-05f9f32ce9e9clip-273e3f96-49ea-4dc5-9d5b-9660390f14e1clip",
"type": "default"
},
{
"source": "a33199c2-8340-401e-b8a2-42ffa875fc1c",
"sourceHandle": "control",
"target": "ca4d5059-8bfb-447f-b415-da0faba5a143",
"targetHandle": "item",
"id": "reactflow__edge-a33199c2-8340-401e-b8a2-42ffa875fc1ccontrol-ca4d5059-8bfb-447f-b415-da0faba5a143item",
"type": "default"
},
{
"source": "d204d184-f209-4fae-a0a1-d152800844e1",
"sourceHandle": "control",
"target": "ca4d5059-8bfb-447f-b415-da0faba5a143",
"targetHandle": "item",
"id": "reactflow__edge-d204d184-f209-4fae-a0a1-d152800844e1control-ca4d5059-8bfb-447f-b415-da0faba5a143item",
"type": "default"
},
{
"source": "8e860e51-5045-456e-bf04-9a62a2a5c49e",
"sourceHandle": "image",
"target": "018b1214-c2af-43a7-9910-fb687c6726d7",
"targetHandle": "image",
"id": "reactflow__edge-8e860e51-5045-456e-bf04-9a62a2a5c49eimage-018b1214-c2af-43a7-9910-fb687c6726d7image",
"type": "default"
},
{
"source": "018b1214-c2af-43a7-9910-fb687c6726d7",
"sourceHandle": "image",
"target": "a33199c2-8340-401e-b8a2-42ffa875fc1c",
"targetHandle": "image",
"id": "reactflow__edge-018b1214-c2af-43a7-9910-fb687c6726d7image-a33199c2-8340-401e-b8a2-42ffa875fc1cimage",
"type": "default"
},
{
"source": "c4b23e64-7986-40c4-9cad-46327b12e204",
"sourceHandle": "image",
"target": "c826ba5e-9676-4475-b260-07b85e88753c",
"targetHandle": "image",
"id": "reactflow__edge-c4b23e64-7986-40c4-9cad-46327b12e204image-c826ba5e-9676-4475-b260-07b85e88753cimage",
"type": "default"
},
{
"source": "c826ba5e-9676-4475-b260-07b85e88753c",
"sourceHandle": "image",
"target": "d204d184-f209-4fae-a0a1-d152800844e1",
"targetHandle": "image",
"id": "reactflow__edge-c826ba5e-9676-4475-b260-07b85e88753cimage-d204d184-f209-4fae-a0a1-d152800844e1image",
"type": "default"
},
{
"source": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"sourceHandle": "vae",
"target": "9db25398-c869-4a63-8815-c6559341ef12",
"targetHandle": "vae",
"id": "reactflow__edge-54486974-835b-4d81-8f82-05f9f32ce9e9vae-9db25398-c869-4a63-8815-c6559341ef12vae",
"type": "default"
},
{
"source": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
"sourceHandle": "latents",
"target": "9db25398-c869-4a63-8815-c6559341ef12",
"targetHandle": "latents",
"id": "reactflow__edge-ac481b7f-08bf-4a9d-9e0c-3a82ea5243celatents-9db25398-c869-4a63-8815-c6559341ef12latents",
"type": "default"
},
{
"source": "ca4d5059-8bfb-447f-b415-da0faba5a143",
"sourceHandle": "collection",
"target": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
"targetHandle": "control",
"id": "reactflow__edge-ca4d5059-8bfb-447f-b415-da0faba5a143collection-ac481b7f-08bf-4a9d-9e0c-3a82ea5243cecontrol",
"type": "default"
},
{
"source": "54486974-835b-4d81-8f82-05f9f32ce9e9",
"sourceHandle": "unet",
"target": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
"targetHandle": "unet",
"id": "reactflow__edge-54486974-835b-4d81-8f82-05f9f32ce9e9unet-ac481b7f-08bf-4a9d-9e0c-3a82ea5243ceunet",
"type": "default"
},
{
"source": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
"sourceHandle": "conditioning",
"target": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
"targetHandle": "negative_conditioning",
"id": "reactflow__edge-273e3f96-49ea-4dc5-9d5b-9660390f14e1conditioning-ac481b7f-08bf-4a9d-9e0c-3a82ea5243cenegative_conditioning",
"type": "default"
},
{
"source": "7ce68934-3419-42d4-ac70-82cfc9397306",
"sourceHandle": "conditioning",
"target": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
"targetHandle": "positive_conditioning",
"id": "reactflow__edge-7ce68934-3419-42d4-ac70-82cfc9397306conditioning-ac481b7f-08bf-4a9d-9e0c-3a82ea5243cepositive_conditioning",
"type": "default"
}
]
}
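This workflow merges two ControlNet outputs through a `collect` node whose `Collection` output feeds the denoiser's polymorphic `control` input. Below is a quick structural sanity check for files like this one — a sketch assuming only the top-level `nodes`/`edges` layout shown here, with a hypothetical file path:

```python
import json

def check_edges(path: str) -> None:
    """Verify every edge references node ids that exist in the workflow."""
    with open(path) as f:
        wf = json.load(f)
    node_ids = {n["id"] for n in wf["nodes"]}
    for e in wf["edges"]:
        assert e["source"] in node_ids, f"unknown source: {e['source']}"
        assert e["target"] in node_ids, f"unknown target: {e['target']}"

check_edges("docs/workflows/Multi_ControlNet.json")  # hypothetical path
```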


@@ -0,0 +1,719 @@
{
"name": "Prompt from File",
"author": "InvokeAI",
"description": "Sample workflow using prompt from file capabilities of InvokeAI ",
"version": "0.1.0",
"contact": "millun@invoke.ai",
"tags": "text2image, prompt from file, default",
"notes": "",
"exposedFields": [
{
"nodeId": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"fieldName": "model"
},
{
"nodeId": "1b7e0df8-8589-4915-a4ea-c0088f15d642",
"fieldName": "file_path"
}
],
"meta": {
"version": "1.0.0"
},
"nodes": [
{
"id": "c2eaf1ba-5708-4679-9e15-945b8b432692",
"type": "invocation",
"data": {
"id": "c2eaf1ba-5708-4679-9e15-945b8b432692",
"type": "compel",
"inputs": {
"prompt": {
"id": "dcdf3f6d-9b96-4bcd-9b8d-f992fefe4f62",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "",
"value": ""
},
"clip": {
"id": "3f1981c9-d8a9-42eb-a739-4f120eb80745",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "46205e6c-c5e2-44cb-9c82-1cd20b95674a",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 261,
"position": {
"x": 1177.3417789657444,
"y": -102.0924766641035
}
},
{
"id": "1b7e0df8-8589-4915-a4ea-c0088f15d642",
"type": "invocation",
"data": {
"id": "1b7e0df8-8589-4915-a4ea-c0088f15d642",
"type": "prompt_from_file",
"inputs": {
"file_path": {
"id": "37e37684-4f30-4ec8-beae-b333e550f904",
"name": "file_path",
"type": "string",
"fieldKind": "input",
"label": "Prompts File Path",
"value": ""
},
"pre_prompt": {
"id": "7de02feb-819a-4992-bad3-72a30920ddea",
"name": "pre_prompt",
"type": "string",
"fieldKind": "input",
"label": "",
"value": ""
},
"post_prompt": {
"id": "95f191d8-a282-428e-bd65-de8cb9b7513a",
"name": "post_prompt",
"type": "string",
"fieldKind": "input",
"label": "",
"value": ""
},
"start_line": {
"id": "efee9a48-05ab-4829-8429-becfa64a0782",
"name": "start_line",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1
},
"max_prompts": {
"id": "abebb428-3d3d-49fd-a482-4e96a16fff08",
"name": "max_prompts",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1
}
},
"outputs": {
"collection": {
"id": "77d5d7f1-9877-4ab1-9a8c-33e9ffa9abf3",
"name": "collection",
"type": "StringCollection",
"fieldKind": "output"
}
},
"label": "Prompts from File",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 589,
"position": {
"x": 394.181884547075,
"y": -423.5345157864633
}
},
{
"id": "1b89067c-3f6b-42c8-991f-e3055789b251",
"type": "invocation",
"data": {
"id": "1b89067c-3f6b-42c8-991f-e3055789b251",
"type": "iterate",
"inputs": {
"collection": {
"id": "4c564bf8-5ed6-441e-ad2c-dda265d5785f",
"name": "collection",
"type": "Collection",
"fieldKind": "input",
"label": "",
"value": []
}
},
"outputs": {
"item": {
"id": "36340f9a-e7a5-4afa-b4b5-313f4e292380",
"name": "item",
"type": "CollectionItem",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 104,
"position": {
"x": 792.8735298060233,
"y": -432.6964953027252
}
},
{
"id": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"type": "invocation",
"data": {
"id": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"type": "main_model_loader",
"inputs": {
"model": {
"id": "3f264259-3418-47d5-b90d-b6600e36ae46",
"name": "model",
"type": "MainModelField",
"fieldKind": "input",
"label": "",
"value": {
"model_name": "stable-diffusion-v1-5",
"base_model": "sd-1",
"model_type": "main"
}
}
},
"outputs": {
"unet": {
"id": "8e182ea2-9d0a-4c02-9407-27819288d4b5",
"name": "unet",
"type": "UNetField",
"fieldKind": "output"
},
"clip": {
"id": "d67d9d30-058c-46d5-bded-3d09d6d1aa39",
"name": "clip",
"type": "ClipField",
"fieldKind": "output"
},
"vae": {
"id": "89641601-0429-4448-98d5-190822d920d8",
"name": "vae",
"type": "VaeField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 226,
"position": {
"x": -47.66201354137797,
"y": -299.218193067033
}
},
{
"id": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
"type": "invocation",
"data": {
"id": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
"type": "compel",
"inputs": {
"prompt": {
"id": "dcdf3f6d-9b96-4bcd-9b8d-f992fefe4f62",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "",
"value": ""
},
"clip": {
"id": "3f1981c9-d8a9-42eb-a739-4f120eb80745",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "46205e6c-c5e2-44cb-9c82-1cd20b95674a",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 261,
"position": {
"x": 1175.0187896425462,
"y": -420.64289413577114
}
},
{
"id": "0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77",
"type": "invocation",
"data": {
"id": "0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77",
"type": "noise",
"inputs": {
"seed": {
"id": "b722d84a-eeee-484f-bef2-0250c027cb67",
"name": "seed",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"width": {
"id": "d5f8ce11-0502-4bfc-9a30-5757dddf1f94",
"name": "width",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 512
},
"height": {
"id": "f187d5ff-38a5-4c3f-b780-fc5801ef34af",
"name": "height",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 512
},
"use_cpu": {
"id": "12f112b8-8b76-4816-b79e-662edc9f9aa5",
"name": "use_cpu",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": true
}
},
"outputs": {
"noise": {
"id": "08576ad1-96d9-42d2-96ef-6f5c1961933f",
"name": "noise",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "f3e1f94a-258d-41ff-9789-bd999bd9f40d",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "6cefc357-4339-415e-a951-49b9c2be32f4",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 389,
"position": {
"x": 809.1964864135837,
"y": 183.2735123359796
}
},
{
"id": "dfc20e07-7aef-4fc0-a3a1-7bf68ec6a4e5",
"type": "invocation",
"data": {
"id": "dfc20e07-7aef-4fc0-a3a1-7bf68ec6a4e5",
"type": "rand_int",
"inputs": {
"low": {
"id": "b9fc6cf1-469c-4037-9bf0-04836965826f",
"name": "low",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"high": {
"id": "06eac725-0f60-4ba2-b8cd-7ad9f757488c",
"name": "high",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 2147483647
}
},
"outputs": {
"value": {
"id": "df08c84e-7346-4e92-9042-9e5cb773aaff",
"name": "value",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": false,
"version": "1.0.0"
},
"width": 320,
"height": 218,
"position": {
"x": 354.19913145404166,
"y": 301.86324846905165
}
},
{
"id": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
"type": "invocation",
"data": {
"id": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
"type": "l2i",
"inputs": {
"metadata": {
"id": "022e4b33-562b-438d-b7df-41c3fd931f40",
"name": "metadata",
"type": "MetadataField",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "67cb6c77-a394-4a66-a6a9-a0a7dcca69ec",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"vae": {
"id": "7b3fd9ad-a4ef-4e04-89fa-3832a9902dbd",
"name": "vae",
"type": "VaeField",
"fieldKind": "input",
"label": ""
},
"tiled": {
"id": "5ac5680d-3add-4115-8ec0-9ef5bb87493b",
"name": "tiled",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
},
"fp32": {
"id": "db8297f5-55f8-452f-98cf-6572c2582152",
"name": "fp32",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
}
},
"outputs": {
"image": {
"id": "d8778d0c-592a-4960-9280-4e77e00a7f33",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "c8b0a75a-f5de-4ff2-9227-f25bb2b97bec",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "83c05fbf-76b9-49ab-93c4-fa4b10e793e4",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 267,
"position": {
"x": 2037.861329274915,
"y": -329.8393457509562
}
},
{
"id": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
"type": "invocation",
"data": {
"id": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
"type": "denoise_latents",
"inputs": {
"positive_conditioning": {
"id": "751fb35b-3f23-45ce-af1c-053e74251337",
"name": "positive_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"negative_conditioning": {
"id": "b9dc06b6-7481-4db1-a8c2-39d22a5eacff",
"name": "negative_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"noise": {
"id": "6e15e439-3390-48a4-8031-01e0e19f0e1d",
"name": "noise",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"steps": {
"id": "bfdfb3df-760b-4d51-b17b-0abb38b976c2",
"name": "steps",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 10
},
"cfg_scale": {
"id": "47770858-322e-41af-8494-d8b63ed735f3",
"name": "cfg_scale",
"type": "FloatPolymorphic",
"fieldKind": "input",
"label": "",
"value": 7.5
},
"denoising_start": {
"id": "2ba78720-ee02-4130-a348-7bc3531f790b",
"name": "denoising_start",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 0
},
"denoising_end": {
"id": "a874dffb-d433-4d1a-9f59-af4367bb05e4",
"name": "denoising_end",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 1
},
"scheduler": {
"id": "36e021ad-b762-4fe4-ad4d-17f0291c40b2",
"name": "scheduler",
"type": "Scheduler",
"fieldKind": "input",
"label": "",
"value": "euler"
},
"unet": {
"id": "98d3282d-f9f6-4b5e-b9e8-58658f1cac78",
"name": "unet",
"type": "UNetField",
"fieldKind": "input",
"label": ""
},
"control": {
"id": "f2ea3216-43d5-42b4-887f-36e8f7166d53",
"name": "control",
"type": "ControlPolymorphic",
"fieldKind": "input",
"label": ""
},
"ip_adapter": {
"id": "d0780610-a298-47c8-a54e-70e769e0dfe2",
"name": "ip_adapter",
"type": "IPAdapterPolymorphic",
"fieldKind": "input",
"label": ""
},
"t2i_adapter": {
"id": "fdb40970-185e-4ea8-8bb5-88f06f91f46a",
"name": "t2i_adapter",
"type": "T2IAdapterPolymorphic",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "e05b538a-1b5a-4aa5-84b1-fd2361289a81",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"denoise_mask": {
"id": "463a419e-df30-4382-8ffb-b25b25abe425",
"name": "denoise_mask",
"type": "DenoiseMaskField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"latents": {
"id": "559ee688-66cf-4139-8b82-3d3aa69995ce",
"name": "latents",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "0b4285c2-e8b9-48e5-98f6-0a49d3f98fd2",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "8b0881b9-45e5-47d5-b526-24b6661de0ee",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.4.0"
},
"width": 320,
"height": 646,
"position": {
"x": 1570.9941088179146,
"y": -407.6505491604564
}
}
],
"edges": [
{
"source": "1b7e0df8-8589-4915-a4ea-c0088f15d642",
"sourceHandle": "collection",
"target": "1b89067c-3f6b-42c8-991f-e3055789b251",
"targetHandle": "collection",
"id": "reactflow__edge-1b7e0df8-8589-4915-a4ea-c0088f15d642collection-1b89067c-3f6b-42c8-991f-e3055789b251collection",
"type": "default"
},
{
"source": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"sourceHandle": "clip",
"target": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
"targetHandle": "clip",
"id": "reactflow__edge-d6353b7f-b447-4e17-8f2e-80a88c91d426clip-fc9d0e35-a6de-4a19-84e1-c72497c823f6clip",
"type": "default"
},
{
"source": "1b89067c-3f6b-42c8-991f-e3055789b251",
"sourceHandle": "item",
"target": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
"targetHandle": "prompt",
"id": "reactflow__edge-1b89067c-3f6b-42c8-991f-e3055789b251item-fc9d0e35-a6de-4a19-84e1-c72497c823f6prompt",
"type": "default"
},
{
"source": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"sourceHandle": "clip",
"target": "c2eaf1ba-5708-4679-9e15-945b8b432692",
"targetHandle": "clip",
"id": "reactflow__edge-d6353b7f-b447-4e17-8f2e-80a88c91d426clip-c2eaf1ba-5708-4679-9e15-945b8b432692clip",
"type": "default"
},
{
"source": "dfc20e07-7aef-4fc0-a3a1-7bf68ec6a4e5",
"sourceHandle": "value",
"target": "0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77",
"targetHandle": "seed",
"id": "reactflow__edge-dfc20e07-7aef-4fc0-a3a1-7bf68ec6a4e5value-0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77seed",
"type": "default"
},
{
"source": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
"sourceHandle": "conditioning",
"target": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
"targetHandle": "positive_conditioning",
"id": "reactflow__edge-fc9d0e35-a6de-4a19-84e1-c72497c823f6conditioning-2fb1577f-0a56-4f12-8711-8afcaaaf1d5epositive_conditioning",
"type": "default"
},
{
"source": "c2eaf1ba-5708-4679-9e15-945b8b432692",
"sourceHandle": "conditioning",
"target": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
"targetHandle": "negative_conditioning",
"id": "reactflow__edge-c2eaf1ba-5708-4679-9e15-945b8b432692conditioning-2fb1577f-0a56-4f12-8711-8afcaaaf1d5enegative_conditioning",
"type": "default"
},
{
"source": "0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77",
"sourceHandle": "noise",
"target": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
"targetHandle": "noise",
"id": "reactflow__edge-0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77noise-2fb1577f-0a56-4f12-8711-8afcaaaf1d5enoise",
"type": "default"
},
{
"source": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"sourceHandle": "unet",
"target": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
"targetHandle": "unet",
"id": "reactflow__edge-d6353b7f-b447-4e17-8f2e-80a88c91d426unet-2fb1577f-0a56-4f12-8711-8afcaaaf1d5eunet",
"type": "default"
},
{
"source": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
"sourceHandle": "latents",
"target": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
"targetHandle": "latents",
"id": "reactflow__edge-2fb1577f-0a56-4f12-8711-8afcaaaf1d5elatents-491ec988-3c77-4c37-af8a-39a0c4e7a2a1latents",
"type": "default"
},
{
"source": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
"sourceHandle": "vae",
"target": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
"targetHandle": "vae",
"id": "reactflow__edge-d6353b7f-b447-4e17-8f2e-80a88c91d426vae-491ec988-3c77-4c37-af8a-39a0c4e7a2a1vae",
"type": "default"
}
]
}
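Here the `prompt_from_file` node emits a `StringCollection`, which an `iterate` node fans out one item at a time into the positive `compel` prompt. Based on the field names alone (`pre_prompt`, `post_prompt`, `start_line`, `max_prompts`), a plausible pure-Python analogue — a sketch of the apparent semantics, not InvokeAI's actual node implementation:

```python
def prompts_from_file(file_path: str, pre_prompt: str = "", post_prompt: str = "",
                      start_line: int = 1, max_prompts: int = 1) -> list[str]:
    """Read prompts from a text file, one per line, wrapping each with pre/post text."""
    with open(file_path) as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    # start_line is 1-based, matching the node's default value of 1 above
    selected = lines[start_line - 1 : start_line - 1 + max_prompts]
    return [f"{pre_prompt}{p}{post_prompt}" for p in selected]
```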


@@ -0,0 +1,758 @@
{
"name": "QR Code Monster",
"author": "InvokeAI",
"description": "Sample workflow for create images with QR code Monster ControlNet",
"version": "1.0.1",
"contact": "invoke@invoke.ai",
"tags": "qrcode, controlnet, default",
"notes": "",
"exposedFields": [
{
"nodeId": "a6cc0986-f928-4a7e-8d44-ba2d4b36f54a",
"fieldName": "image"
},
{
"nodeId": "aca3b054-bfba-4392-bd20-6476f59504df",
"fieldName": "prompt"
},
{
"nodeId": "3db7cee0-31e2-4a3d-94a1-268cb16177dd",
"fieldName": "prompt"
}
],
"meta": {
"version": "1.0.0"
},
"nodes": [
{
"id": "3db7cee0-31e2-4a3d-94a1-268cb16177dd",
"type": "invocation",
"data": {
"id": "3db7cee0-31e2-4a3d-94a1-268cb16177dd",
"type": "compel",
"inputs": {
"prompt": {
"id": "6a1fe244-5656-4f8c-91d1-1fb474e28807",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "Negative Prompt",
"value": ""
},
"clip": {
"id": "f24688f3-29b8-4a2d-8603-046e5a5c7250",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "700528eb-3f8b-4745-b540-34f919b5b228",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "Prompt",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 261,
"position": {
"x": 773.0502679628016,
"y": 1622.4836086770556
}
},
{
"id": "610384f1-6f0c-4847-a9a2-37ce7f456ed1",
"type": "invocation",
"data": {
"id": "610384f1-6f0c-4847-a9a2-37ce7f456ed1",
"type": "main_model_loader",
"inputs": {
"model": {
"id": "cb36b6d3-6c1f-4911-a200-646745b0ff74",
"name": "model",
"type": "MainModelField",
"fieldKind": "input",
"label": "",
"value": {
"model_name": "stable-diffusion-v1-5",
"base_model": "sd-1",
"model_type": "main"
}
}
},
"outputs": {
"unet": {
"id": "7246895b-b252-49bc-b952-8d801b4672f7",
"name": "unet",
"type": "UNetField",
"fieldKind": "output"
},
"clip": {
"id": "3c2aedb8-30d5-4d4b-99df-d06a0d7bedc6",
"name": "clip",
"type": "ClipField",
"fieldKind": "output"
},
"vae": {
"id": "b9743815-5501-4bbb-8bde-8bd6ba298a4e",
"name": "vae",
"type": "VaeField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 226,
"position": {
"x": 211.58866462619744,
"y": 1376.0542388105248
}
},
{
"id": "aca3b054-bfba-4392-bd20-6476f59504df",
"type": "invocation",
"data": {
"id": "aca3b054-bfba-4392-bd20-6476f59504df",
"type": "compel",
"inputs": {
"prompt": {
"id": "6a1fe244-5656-4f8c-91d1-1fb474e28807",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "Positive Prompt",
"value": ""
},
"clip": {
"id": "f24688f3-29b8-4a2d-8603-046e5a5c7250",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "700528eb-3f8b-4745-b540-34f919b5b228",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 261,
"position": {
"x": 770.6491131680111,
"y": 1316.379247112241
}
},
{
"id": "a6cc0986-f928-4a7e-8d44-ba2d4b36f54a",
"type": "invocation",
"data": {
"id": "a6cc0986-f928-4a7e-8d44-ba2d4b36f54a",
"type": "image",
"inputs": {
"image": {
"id": "89ba5d58-28c9-4e04-a5df-79fb7a6f3531",
"name": "image",
"type": "ImageField",
"fieldKind": "input",
"label": "QR Code / Hidden Image"
}
},
"outputs": {
"image": {
"id": "54335653-0e17-42da-b9e8-83c5fb5af670",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "a3c65953-39ea-4d97-8858-d65154ff9d11",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "2c7db511-ebc9-4286-a46b-bc11e0fd779f",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 225,
"position": {
"x": 700.5034176864369,
"y": 1981.749600549388
}
},
{
"id": "280fd8a7-3b0c-49fe-8be4-6246e08b6c9a",
"type": "invocation",
"data": {
"id": "280fd8a7-3b0c-49fe-8be4-6246e08b6c9a",
"type": "noise",
"inputs": {
"seed": {
"id": "7c6c76dd-127b-4829-b1ec-430790cb7ed7",
"name": "seed",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"width": {
"id": "8ec6a525-a421-40d8-a17e-39e7b6836438",
"name": "width",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 512
},
"height": {
"id": "6af1e58a-e2ee-4ec4-9f06-d8d0412922ca",
"name": "height",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 512
},
"use_cpu": {
"id": "26662e99-5720-43a6-a5d8-06c9dab0e261",
"name": "use_cpu",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": true
}
},
"outputs": {
"noise": {
"id": "cb4c4dfc-a744-49eb-af4f-677448e28407",
"name": "noise",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "97e87be6-e81f-40a3-a522-28ebe4aad0ac",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "80784420-f1e1-47b0-bd1d-1d381a15e22d",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": false,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 32,
"position": {
"x": 1182.460291960481,
"y": 1759.592972960265
}
},
{
"id": "2ac03cf6-0326-454a-bed0-d8baef2bf30d",
"type": "invocation",
"data": {
"id": "2ac03cf6-0326-454a-bed0-d8baef2bf30d",
"type": "controlnet",
"inputs": {
"image": {
"id": "1f683889-9f14-40c8-af29-4b991b211a3a",
"name": "image",
"type": "ImageField",
"fieldKind": "input",
"label": ""
},
"control_model": {
"id": "a933b21d-22c1-4e06-818f-15416b971282",
"name": "control_model",
"type": "ControlNetModelField",
"fieldKind": "input",
"label": "",
"value": {
"model_name": "qrcode_monster",
"base_model": "sd-1"
}
},
"control_weight": {
"id": "198a0825-e55e-4496-bc54-c3d7b02f3d75",
"name": "control_weight",
"type": "FloatPolymorphic",
"fieldKind": "input",
"label": "",
"value": 1.4
},
"begin_step_percent": {
"id": "c85ce42f-22af-42a0-8993-676002fb275e",
"name": "begin_step_percent",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 0
},
"end_step_percent": {
"id": "a61a65c4-9e6f-4fe2-96a5-1294d17ec6e4",
"name": "end_step_percent",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 1
},
"control_mode": {
"id": "1aa45cfa-0249-46b7-bf24-3e38e92f5fa0",
"name": "control_mode",
"type": "enum",
"fieldKind": "input",
"label": "",
"value": "balanced"
},
"resize_mode": {
"id": "a89d3cb9-a141-4cea-bb49-977bf267377b",
"name": "resize_mode",
"type": "enum",
"fieldKind": "input",
"label": "",
"value": "just_resize"
}
},
"outputs": {
"control": {
"id": "c9a1fc7e-cb25-45a9-adff-1a97c9ff04d6",
"name": "control",
"type": "ControlField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 508,
"position": {
"x": 1165.434407461108,
"y": 1862.916856351665
}
},
{
"id": "28542b66-5a00-4780-a318-0a036d2df914",
"type": "invocation",
"data": {
"id": "28542b66-5a00-4780-a318-0a036d2df914",
"type": "l2i",
"inputs": {
"metadata": {
"id": "a38e8f55-7f2c-4fcc-a71f-d51e2eb0374a",
"name": "metadata",
"type": "MetadataField",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "80e97bc8-e716-4175-9115-5b58495aa30c",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"vae": {
"id": "5641bce6-ac2b-47eb-bb32-2f290026b7e1",
"name": "vae",
"type": "VaeField",
"fieldKind": "input",
"label": ""
},
"tiled": {
"id": "9e75eb16-ae48-47ed-b180-e0409d377436",
"name": "tiled",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
},
"fp32": {
"id": "0518b0ce-ee37-437b-8437-cc2976a3279f",
"name": "fp32",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
}
},
"outputs": {
"image": {
"id": "ec2ff985-a7eb-401f-92c4-1217cddad6a2",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "ba1d1720-6d67-4eca-9e9d-b97d08636774",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "10bcf8f4-6394-422f-b0c0-51680f3bfb25",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 267,
"position": {
"x": 2110.8415693683014,
"y": 1487.253341116115
}
},
{
"id": "9755ae4c-ef30-4db3-80f6-a31f98979a11",
"type": "invocation",
"data": {
"id": "9755ae4c-ef30-4db3-80f6-a31f98979a11",
"type": "denoise_latents",
"inputs": {
"positive_conditioning": {
"id": "8e6aceaa-a986-4ab2-9c04-5b1027b3daf6",
"name": "positive_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"negative_conditioning": {
"id": "fbbaa712-ca1a-420b-9016-763f2a29d68c",
"name": "negative_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"noise": {
"id": "a3b3d5d2-c0f9-4b89-a9b3-8de9418f7bb5",
"name": "noise",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"steps": {
"id": "e491e664-2f8c-4f49-b3e4-57b051fbb9c5",
"name": "steps",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 10
},
"cfg_scale": {
"id": "f0318abd-ed65-4cad-86a7-48d1c19a6d14",
"name": "cfg_scale",
"type": "FloatPolymorphic",
"fieldKind": "input",
"label": "",
"value": 7.5
},
"denoising_start": {
"id": "f7c24c51-496f-44c4-836a-c734e529fec0",
"name": "denoising_start",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 0
},
"denoising_end": {
"id": "54f7656a-fb0d-4d9e-a459-f700f7dccd2e",
"name": "denoising_end",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 1
},
"scheduler": {
"id": "363ee440-040d-499b-bf84-bf5391b08681",
"name": "scheduler",
"type": "Scheduler",
"fieldKind": "input",
"label": "",
"value": "euler"
},
"unet": {
"id": "5c93d4e5-1064-4700-ab1d-d12e1e9b5ba7",
"name": "unet",
"type": "UNetField",
"fieldKind": "input",
"label": ""
},
"control": {
"id": "e1948eb3-7407-43b0-93e3-139470f186b7",
"name": "control",
"type": "ControlPolymorphic",
"fieldKind": "input",
"label": ""
},
"ip_adapter": {
"id": "5675b2c3-adfb-49ee-b33c-26bdbfab1fed",
"name": "ip_adapter",
"type": "IPAdapterPolymorphic",
"fieldKind": "input",
"label": ""
},
"t2i_adapter": {
"id": "89cd4ab3-3bfc-4063-9de5-91d42305c651",
"name": "t2i_adapter",
"type": "T2IAdapterPolymorphic",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "ec01df90-5042-418d-b6d6-86b251c13770",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"denoise_mask": {
"id": "561cde00-cb20-42ae-9bd3-4f477f73fbe1",
"name": "denoise_mask",
"type": "DenoiseMaskField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"latents": {
"id": "f9addefe-efcc-4e01-8945-6ebbc934b002",
"name": "latents",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "6d48f78b-d681-422a-8677-0111bd0625f1",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "f25997b8-6316-44ce-b696-b82e4ed51ae5",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.4.0"
},
"width": 320,
"height": 646,
"position": {
"x": 1597.9598293300219,
"y": 1420.4637727891632
}
},
{
"id": "59349822-af20-4e0e-a53f-3ba135d00c3f",
"type": "invocation",
"data": {
"id": "59349822-af20-4e0e-a53f-3ba135d00c3f",
"type": "rand_int",
"inputs": {
"low": {
"id": "051f22f9-2d4f-414f-bc51-84af2d626efa",
"name": "low",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"high": {
"id": "77206186-f264-4224-9589-f925cf903dc9",
"name": "high",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 2147483647
}
},
"outputs": {
"value": {
"id": "a7ed9387-3a24-4d34-b7c5-f713bd544ab1",
"name": "value",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": false,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": false,
"version": "1.0.0"
},
"width": 320,
"height": 32,
"position": {
"x": 1178.16746986153,
"y": 1663.9433412808876
}
}
],
"edges": [
{
"source": "59349822-af20-4e0e-a53f-3ba135d00c3f",
"target": "280fd8a7-3b0c-49fe-8be4-6246e08b6c9a",
"id": "59349822-af20-4e0e-a53f-3ba135d00c3f-280fd8a7-3b0c-49fe-8be4-6246e08b6c9a-collapsed",
"type": "collapsed"
},
{
"source": "610384f1-6f0c-4847-a9a2-37ce7f456ed1",
"sourceHandle": "clip",
"target": "aca3b054-bfba-4392-bd20-6476f59504df",
"targetHandle": "clip",
"id": "reactflow__edge-610384f1-6f0c-4847-a9a2-37ce7f456ed1clip-aca3b054-bfba-4392-bd20-6476f59504dfclip",
"type": "default"
},
{
"source": "610384f1-6f0c-4847-a9a2-37ce7f456ed1",
"sourceHandle": "clip",
"target": "3db7cee0-31e2-4a3d-94a1-268cb16177dd",
"targetHandle": "clip",
"id": "reactflow__edge-610384f1-6f0c-4847-a9a2-37ce7f456ed1clip-3db7cee0-31e2-4a3d-94a1-268cb16177ddclip",
"type": "default"
},
{
"source": "a6cc0986-f928-4a7e-8d44-ba2d4b36f54a",
"sourceHandle": "image",
"target": "2ac03cf6-0326-454a-bed0-d8baef2bf30d",
"targetHandle": "image",
"id": "reactflow__edge-a6cc0986-f928-4a7e-8d44-ba2d4b36f54aimage-2ac03cf6-0326-454a-bed0-d8baef2bf30dimage",
"type": "default"
},
{
"source": "610384f1-6f0c-4847-a9a2-37ce7f456ed1",
"sourceHandle": "vae",
"target": "28542b66-5a00-4780-a318-0a036d2df914",
"targetHandle": "vae",
"id": "reactflow__edge-610384f1-6f0c-4847-a9a2-37ce7f456ed1vae-28542b66-5a00-4780-a318-0a036d2df914vae",
"type": "default"
},
{
"source": "280fd8a7-3b0c-49fe-8be4-6246e08b6c9a",
"sourceHandle": "noise",
"target": "9755ae4c-ef30-4db3-80f6-a31f98979a11",
"targetHandle": "noise",
"id": "reactflow__edge-280fd8a7-3b0c-49fe-8be4-6246e08b6c9anoise-9755ae4c-ef30-4db3-80f6-a31f98979a11noise",
"type": "default"
},
{
"source": "3db7cee0-31e2-4a3d-94a1-268cb16177dd",
"sourceHandle": "conditioning",
"target": "9755ae4c-ef30-4db3-80f6-a31f98979a11",
"targetHandle": "negative_conditioning",
"id": "reactflow__edge-3db7cee0-31e2-4a3d-94a1-268cb16177ddconditioning-9755ae4c-ef30-4db3-80f6-a31f98979a11negative_conditioning",
"type": "default"
},
{
"source": "aca3b054-bfba-4392-bd20-6476f59504df",
"sourceHandle": "conditioning",
"target": "9755ae4c-ef30-4db3-80f6-a31f98979a11",
"targetHandle": "positive_conditioning",
"id": "reactflow__edge-aca3b054-bfba-4392-bd20-6476f59504dfconditioning-9755ae4c-ef30-4db3-80f6-a31f98979a11positive_conditioning",
"type": "default"
},
{
"source": "610384f1-6f0c-4847-a9a2-37ce7f456ed1",
"sourceHandle": "unet",
"target": "9755ae4c-ef30-4db3-80f6-a31f98979a11",
"targetHandle": "unet",
"id": "reactflow__edge-610384f1-6f0c-4847-a9a2-37ce7f456ed1unet-9755ae4c-ef30-4db3-80f6-a31f98979a11unet",
"type": "default"
},
{
"source": "2ac03cf6-0326-454a-bed0-d8baef2bf30d",
"sourceHandle": "control",
"target": "9755ae4c-ef30-4db3-80f6-a31f98979a11",
"targetHandle": "control",
"id": "reactflow__edge-2ac03cf6-0326-454a-bed0-d8baef2bf30dcontrol-9755ae4c-ef30-4db3-80f6-a31f98979a11control",
"type": "default"
},
{
"source": "9755ae4c-ef30-4db3-80f6-a31f98979a11",
"sourceHandle": "latents",
"target": "28542b66-5a00-4780-a318-0a036d2df914",
"targetHandle": "latents",
"id": "reactflow__edge-9755ae4c-ef30-4db3-80f6-a31f98979a11latents-28542b66-5a00-4780-a318-0a036d2df914latents",
"type": "default"
},
{
"source": "59349822-af20-4e0e-a53f-3ba135d00c3f",
"sourceHandle": "value",
"target": "280fd8a7-3b0c-49fe-8be4-6246e08b6c9a",
"targetHandle": "seed",
"id": "reactflow__edge-59349822-af20-4e0e-a53f-3ba135d00c3fvalue-280fd8a7-3b0c-49fe-8be4-6246e08b6c9aseed",
"type": "default"
}
]
}

View File

@@ -0,0 +1,751 @@
{
"name": "SDXL Text to Image",
"author": "InvokeAI",
"description": "Sample text to image workflow for SDXL",
"version": "1.0.1",
"contact": "invoke@invoke.ai",
"tags": "text2image, SDXL, default",
"notes": "",
"exposedFields": [
{
"nodeId": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"fieldName": "model"
},
{
"nodeId": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"fieldName": "prompt"
},
{
"nodeId": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"fieldName": "style"
},
{
"nodeId": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"fieldName": "prompt"
},
{
"nodeId": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"fieldName": "style"
}
],
"meta": {
"version": "1.0.0"
},
"nodes": [
{
"id": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"type": "invocation",
"data": {
"id": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"type": "sdxl_compel_prompt",
"inputs": {
"prompt": {
"id": "5a6889e6-95cb-462f-8f4a-6b93ae7afaec",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "Negative Prompt",
"value": ""
},
"style": {
"id": "f240d0e6-3a1c-4320-af23-20ebb707c276",
"name": "style",
"type": "string",
"fieldKind": "input",
"label": "Negative Style",
"value": ""
},
"original_width": {
"id": "05af07b0-99a0-4a68-8ad2-697bbdb7fc7e",
"name": "original_width",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"original_height": {
"id": "2c771996-a998-43b7-9dd3-3792664d4e5b",
"name": "original_height",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"crop_top": {
"id": "66519dca-a151-4e3e-ae1f-88f1f9877bde",
"name": "crop_top",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"crop_left": {
"id": "349cf2e9-f3d0-4e16-9ae2-7097d25b6a51",
"name": "crop_left",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"target_width": {
"id": "44499347-7bd6-4a73-99d6-5a982786db05",
"name": "target_width",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"target_height": {
"id": "fda359b0-ab80-4f3c-805b-c9f61319d7d2",
"name": "target_height",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"clip": {
"id": "b447adaf-a649-4a76-a827-046a9fc8d89b",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
},
"clip2": {
"id": "86ee4e32-08f9-4baa-9163-31d93f5c0187",
"name": "clip2",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "7c10118e-7b4e-4911-b98e-d3ba6347dfd0",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "SDXL Negative Compel Prompt",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 793,
"position": {
"x": 1275,
"y": -350
}
},
{
"id": "55705012-79b9-4aac-9f26-c0b10309785b",
"type": "invocation",
"data": {
"id": "55705012-79b9-4aac-9f26-c0b10309785b",
"type": "noise",
"inputs": {
"seed": {
"id": "6431737c-918a-425d-a3b4-5d57e2f35d4d",
"name": "seed",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"width": {
"id": "38fc5b66-fe6e-47c8-bba9-daf58e454ed7",
"name": "width",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"height": {
"id": "16298330-e2bf-4872-a514-d6923df53cbb",
"name": "height",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"use_cpu": {
"id": "c7c436d3-7a7a-4e76-91e4-c6deb271623c",
"name": "use_cpu",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": true
}
},
"outputs": {
"noise": {
"id": "50f650dc-0184-4e23-a927-0497a96fe954",
"name": "noise",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "bb8a452b-133d-42d1-ae4a-3843d7e4109a",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "35cfaa12-3b8b-4b7a-a884-327ff3abddd9",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": false,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 32,
"position": {
"x": 1650,
"y": -300
}
},
{
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"type": "invocation",
"data": {
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"type": "rand_int",
"inputs": {
"low": {
"id": "3ec65a37-60ba-4b6c-a0b2-553dd7a84b84",
"name": "low",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"high": {
"id": "085f853a-1a5f-494d-8bec-e4ba29a3f2d1",
"name": "high",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 2147483647
}
},
"outputs": {
"value": {
"id": "812ade4d-7699-4261-b9fc-a6c9d2ab55ee",
"name": "value",
"type": "integer",
"fieldKind": "output"
}
},
"label": "Random Seed",
"isOpen": false,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": false,
"version": "1.0.0"
},
"width": 320,
"height": 32,
"position": {
"x": 1650,
"y": -350
}
},
{
"id": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"type": "invocation",
"data": {
"id": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"type": "sdxl_model_loader",
"inputs": {
"model": {
"id": "39f9e799-bc95-4318-a200-30eed9e60c42",
"name": "model",
"type": "SDXLMainModelField",
"fieldKind": "input",
"label": "",
"value": {
"model_name": "stable-diffusion-xl-base-1-0",
"base_model": "sdxl",
"model_type": "main"
}
}
},
"outputs": {
"unet": {
"id": "2626a45e-59aa-4609-b131-2d45c5eaed69",
"name": "unet",
"type": "UNetField",
"fieldKind": "output"
},
"clip": {
"id": "7c9c42fa-93d5-4639-ab8b-c4d9b0559baf",
"name": "clip",
"type": "ClipField",
"fieldKind": "output"
},
"clip2": {
"id": "0dafddcf-a472-49c1-a47c-7b8fab4c8bc9",
"name": "clip2",
"type": "ClipField",
"fieldKind": "output"
},
"vae": {
"id": "ee6a6997-1b3c-4ff3-99ce-1e7bfba2750c",
"name": "vae",
"type": "VaeField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 258,
"position": {
"x": 475,
"y": 25
}
},
{
"id": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"type": "invocation",
"data": {
"id": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"type": "sdxl_compel_prompt",
"inputs": {
"prompt": {
"id": "5a6889e6-95cb-462f-8f4a-6b93ae7afaec",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "Positive Prompt",
"value": ""
},
"style": {
"id": "f240d0e6-3a1c-4320-af23-20ebb707c276",
"name": "style",
"type": "string",
"fieldKind": "input",
"label": "Positive Style",
"value": ""
},
"original_width": {
"id": "05af07b0-99a0-4a68-8ad2-697bbdb7fc7e",
"name": "original_width",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"original_height": {
"id": "2c771996-a998-43b7-9dd3-3792664d4e5b",
"name": "original_height",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"crop_top": {
"id": "66519dca-a151-4e3e-ae1f-88f1f9877bde",
"name": "crop_top",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"crop_left": {
"id": "349cf2e9-f3d0-4e16-9ae2-7097d25b6a51",
"name": "crop_left",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"target_width": {
"id": "44499347-7bd6-4a73-99d6-5a982786db05",
"name": "target_width",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"target_height": {
"id": "fda359b0-ab80-4f3c-805b-c9f61319d7d2",
"name": "target_height",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"clip": {
"id": "b447adaf-a649-4a76-a827-046a9fc8d89b",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
},
"clip2": {
"id": "86ee4e32-08f9-4baa-9163-31d93f5c0187",
"name": "clip2",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "7c10118e-7b4e-4911-b98e-d3ba6347dfd0",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "SDXL Positive Compel Prompt",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 793,
"position": {
"x": 900,
"y": -350
}
},
{
"id": "63e91020-83b2-4f35-b174-ad9692aabb48",
"type": "invocation",
"data": {
"id": "63e91020-83b2-4f35-b174-ad9692aabb48",
"type": "l2i",
"inputs": {
"metadata": {
"id": "88971324-3fdb-442d-b8b7-7612478a8622",
"name": "metadata",
"type": "MetadataField",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "da0e40cb-c49f-4fa5-9856-338b91a65f6b",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"vae": {
"id": "ae5164ce-1710-4ec5-a83a-6113a0d1b5c0",
"name": "vae",
"type": "VaeField",
"fieldKind": "input",
"label": ""
},
"tiled": {
"id": "2ccfd535-1a7b-4ecf-84db-9430a64fb3d7",
"name": "tiled",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
},
"fp32": {
"id": "64f07d5a-54a2-429c-8c5b-0c2a3a8e5cd5",
"name": "fp32",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
}
},
"outputs": {
"image": {
"id": "9b281eaa-6504-407d-a5ca-1e5e8020a4bf",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "98e545f3-b53b-490d-b94d-bed9418ccc75",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "4a74bd43-d7f7-4c7f-bb3b-d09bb2992c46",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": false,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 267,
"position": {
"x": 2112.5626808057173,
"y": -174.24042139280238
}
},
{
"id": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
"type": "invocation",
"data": {
"id": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
"type": "denoise_latents",
"inputs": {
"positive_conditioning": {
"id": "29b73dfa-a06e-4b4a-a844-515b9eb93a81",
"name": "positive_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"negative_conditioning": {
"id": "a81e6f5b-f4de-4919-b483-b6e2f067465a",
"name": "negative_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"noise": {
"id": "4ba06bb7-eb45-4fb9-9984-31001b545587",
"name": "noise",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"steps": {
"id": "36ee8a45-ca69-44bc-9bc3-aa881e6045c0",
"name": "steps",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 10
},
"cfg_scale": {
"id": "2a2024e0-a736-46ec-933c-c1c1ebe96943",
"name": "cfg_scale",
"type": "FloatPolymorphic",
"fieldKind": "input",
"label": "",
"value": 7.5
},
"denoising_start": {
"id": "be219d5e-41b7-430a-8fb5-bc21a31ad219",
"name": "denoising_start",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 0
},
"denoising_end": {
"id": "3adfb7ae-c9f7-4a40-b6e0-4c2050bd1a99",
"name": "denoising_end",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 1
},
"scheduler": {
"id": "14423e0d-7215-4ee0-b065-f9e95eaa8d7d",
"name": "scheduler",
"type": "Scheduler",
"fieldKind": "input",
"label": "",
"value": "euler"
},
"unet": {
"id": "e73bbf98-6489-492b-b83c-faed215febac",
"name": "unet",
"type": "UNetField",
"fieldKind": "input",
"label": ""
},
"control": {
"id": "dab351b3-0c86-4ea5-9782-4e8edbfb0607",
"name": "control",
"type": "ControlPolymorphic",
"fieldKind": "input",
"label": ""
},
"ip_adapter": {
"id": "192daea0-a90a-43cc-a2ee-0114a8e90318",
"name": "ip_adapter",
"type": "IPAdapterPolymorphic",
"fieldKind": "input",
"label": ""
},
"t2i_adapter": {
"id": "ee386a55-d4c7-48c1-ac57-7bc4e3aada7a",
"name": "t2i_adapter",
"type": "T2IAdapterPolymorphic",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "3a922c6a-3d8c-4c9e-b3ec-2f4d81cda077",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"denoise_mask": {
"id": "cd7ce032-835f-495f-8b45-d57272f33132",
"name": "denoise_mask",
"type": "DenoiseMaskField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"latents": {
"id": "6260b84f-8361-470a-98d8-5b22a45c2d8c",
"name": "latents",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "aede0ecf-25b6-46be-aa30-b77f79715deb",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "519abf62-d475-48ef-ab8f-66136bc0e499",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.4.0"
},
"width": 320,
"height": 646,
"position": {
"x": 1642.955772577545,
"y": -230.2485847594651
}
}
],
"edges": [
{
"source": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"target": "55705012-79b9-4aac-9f26-c0b10309785b",
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2-55705012-79b9-4aac-9f26-c0b10309785b-collapsed",
"type": "collapsed"
},
{
"source": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"sourceHandle": "value",
"target": "55705012-79b9-4aac-9f26-c0b10309785b",
"targetHandle": "seed",
"id": "reactflow__edge-ea94bc37-d995-4a83-aa99-4af42479f2f2value-55705012-79b9-4aac-9f26-c0b10309785bseed",
"type": "default"
},
{
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"sourceHandle": "clip",
"target": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"targetHandle": "clip",
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22clip-faf965a4-7530-427b-b1f3-4ba6505c2a08clip",
"type": "default"
},
{
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"sourceHandle": "clip2",
"target": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"targetHandle": "clip2",
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22clip2-faf965a4-7530-427b-b1f3-4ba6505c2a08clip2",
"type": "default"
},
{
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"sourceHandle": "clip",
"target": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"targetHandle": "clip",
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22clip-3193ad09-a7c2-4bf4-a3a9-1c61cc33a204clip",
"type": "default"
},
{
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"sourceHandle": "clip2",
"target": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"targetHandle": "clip2",
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22clip2-3193ad09-a7c2-4bf4-a3a9-1c61cc33a204clip2",
"type": "default"
},
{
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"sourceHandle": "vae",
"target": "63e91020-83b2-4f35-b174-ad9692aabb48",
"targetHandle": "vae",
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22vae-63e91020-83b2-4f35-b174-ad9692aabb48vae",
"type": "default"
},
{
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"sourceHandle": "unet",
"target": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
"targetHandle": "unet",
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22unet-50a36525-3c0a-4cc5-977c-e4bfc3fd6dfbunet",
"type": "default"
},
{
"source": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"sourceHandle": "conditioning",
"target": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
"targetHandle": "positive_conditioning",
"id": "reactflow__edge-faf965a4-7530-427b-b1f3-4ba6505c2a08conditioning-50a36525-3c0a-4cc5-977c-e4bfc3fd6dfbpositive_conditioning",
"type": "default"
},
{
"source": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"sourceHandle": "conditioning",
"target": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
"targetHandle": "negative_conditioning",
"id": "reactflow__edge-3193ad09-a7c2-4bf4-a3a9-1c61cc33a204conditioning-50a36525-3c0a-4cc5-977c-e4bfc3fd6dfbnegative_conditioning",
"type": "default"
},
{
"source": "55705012-79b9-4aac-9f26-c0b10309785b",
"sourceHandle": "noise",
"target": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
"targetHandle": "noise",
"id": "reactflow__edge-55705012-79b9-4aac-9f26-c0b10309785bnoise-50a36525-3c0a-4cc5-977c-e4bfc3fd6dfbnoise",
"type": "default"
}
]
}
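
The "exposedFields" list at the top of each workflow names the node inputs that the workflow editor surfaces as directly editable fields. A minimal sketch (not part of this changeset; the file name "sdxl_text_to_image.json" is a placeholder) of resolving those entries against the "nodes" array:

import json

# Load the workflow JSON shown above ("sdxl_text_to_image.json" is hypothetical).
with open("sdxl_text_to_image.json") as f:
    workflow = json.load(f)

# Index nodes by id so each exposed field can be resolved directly.
nodes = {node["id"]: node for node in workflow["nodes"]}

# Each exposed field names a node and one of that node's inputs.
for exposed in workflow["exposedFields"]:
    node = nodes[exposed["nodeId"]]
    field = node["data"]["inputs"][exposed["fieldName"]]
    print(f"{node['data']['type']}.{field['name']}  label={field['label']!r}")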

File diff suppressed because it is too large

View File

@@ -0,0 +1,597 @@
{
"name": "Text to Image",
"author": "InvokeAI",
"description": "Sample text to image workflow for Stable Diffusion 1.5/2",
"version": "1.0.1",
"contact": "invoke@invoke.ai",
"tags": "text2image, SD1.5, SD2, default",
"notes": "",
"exposedFields": [
{
"nodeId": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"fieldName": "model"
},
{
"nodeId": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"fieldName": "prompt"
},
{
"nodeId": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"fieldName": "prompt"
}
],
"meta": {
"version": "1.0.0"
},
"nodes": [
{
"id": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"type": "invocation",
"data": {
"id": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"type": "compel",
"inputs": {
"prompt": {
"id": "7739aff6-26cb-4016-8897-5a1fb2305e4e",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "Negative Prompt",
"value": ""
},
"clip": {
"id": "48d23dce-a6ae-472a-9f8c-22a714ea5ce0",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "37cf3a9d-f6b7-4b64-8ff6-2558c5ecc447",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "Negative Compel Prompt",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 261,
"position": {
"x": 995.7263915923627,
"y": 239.67783573351227
}
},
{
"id": "55705012-79b9-4aac-9f26-c0b10309785b",
"type": "invocation",
"data": {
"id": "55705012-79b9-4aac-9f26-c0b10309785b",
"type": "noise",
"inputs": {
"seed": {
"id": "6431737c-918a-425d-a3b4-5d57e2f35d4d",
"name": "seed",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"width": {
"id": "38fc5b66-fe6e-47c8-bba9-daf58e454ed7",
"name": "width",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 512
},
"height": {
"id": "16298330-e2bf-4872-a514-d6923df53cbb",
"name": "height",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 512
},
"use_cpu": {
"id": "c7c436d3-7a7a-4e76-91e4-c6deb271623c",
"name": "use_cpu",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": true
}
},
"outputs": {
"noise": {
"id": "50f650dc-0184-4e23-a927-0497a96fe954",
"name": "noise",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "bb8a452b-133d-42d1-ae4a-3843d7e4109a",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "35cfaa12-3b8b-4b7a-a884-327ff3abddd9",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 389,
"position": {
"x": 993.4442117555518,
"y": 605.6757415334787
}
},
{
"id": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"type": "invocation",
"data": {
"id": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"type": "main_model_loader",
"inputs": {
"model": {
"id": "993eabd2-40fd-44fe-bce7-5d0c7075ddab",
"name": "model",
"type": "MainModelField",
"fieldKind": "input",
"label": "",
"value": {
"model_name": "stable-diffusion-v1-5",
"base_model": "sd-1",
"model_type": "main"
}
}
},
"outputs": {
"unet": {
"id": "5c18c9db-328d-46d0-8cb9-143391c410be",
"name": "unet",
"type": "UNetField",
"fieldKind": "output"
},
"clip": {
"id": "6effcac0-ec2f-4bf5-a49e-a2c29cf921f4",
"name": "clip",
"type": "ClipField",
"fieldKind": "output"
},
"vae": {
"id": "57683ba3-f5f5-4f58-b9a2-4b83dacad4a1",
"name": "vae",
"type": "VaeField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 226,
"position": {
"x": 163.04436745878343,
"y": 254.63156870373479
}
},
{
"id": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"type": "invocation",
"data": {
"id": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"type": "compel",
"inputs": {
"prompt": {
"id": "7739aff6-26cb-4016-8897-5a1fb2305e4e",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "Positive Prompt",
"value": ""
},
"clip": {
"id": "48d23dce-a6ae-472a-9f8c-22a714ea5ce0",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "37cf3a9d-f6b7-4b64-8ff6-2558c5ecc447",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "Positive Compel Prompt",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 261,
"position": {
"x": 595.7263915923627,
"y": 239.67783573351227
}
},
{
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"type": "invocation",
"data": {
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"type": "rand_int",
"inputs": {
"low": {
"id": "3ec65a37-60ba-4b6c-a0b2-553dd7a84b84",
"name": "low",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"high": {
"id": "085f853a-1a5f-494d-8bec-e4ba29a3f2d1",
"name": "high",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 2147483647
}
},
"outputs": {
"value": {
"id": "812ade4d-7699-4261-b9fc-a6c9d2ab55ee",
"name": "value",
"type": "integer",
"fieldKind": "output"
}
},
"label": "Random Seed",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": false,
"version": "1.0.0"
},
"width": 320,
"height": 218,
"position": {
"x": 541.094822888628,
"y": 694.5704476446829
}
},
{
"id": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"type": "invocation",
"data": {
"id": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"type": "denoise_latents",
"inputs": {
"positive_conditioning": {
"id": "90b7f4f8-ada7-4028-8100-d2e54f192052",
"name": "positive_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"negative_conditioning": {
"id": "9393779e-796c-4f64-b740-902a1177bf53",
"name": "negative_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"noise": {
"id": "8e17f1e5-4f98-40b1-b7f4-86aeeb4554c1",
"name": "noise",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"steps": {
"id": "9b63302d-6bd2-42c9-ac13-9b1afb51af88",
"name": "steps",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 10
},
"cfg_scale": {
"id": "87dd04d3-870e-49e1-98bf-af003a810109",
"name": "cfg_scale",
"type": "FloatPolymorphic",
"fieldKind": "input",
"label": "",
"value": 7.5
},
"denoising_start": {
"id": "f369d80f-4931-4740-9bcd-9f0620719fab",
"name": "denoising_start",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 0
},
"denoising_end": {
"id": "747d10e5-6f02-445c-994c-0604d814de8c",
"name": "denoising_end",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 1
},
"scheduler": {
"id": "1de84a4e-3a24-4ec8-862b-16ce49633b9b",
"name": "scheduler",
"type": "Scheduler",
"fieldKind": "input",
"label": "",
"value": "euler"
},
"unet": {
"id": "ffa6fef4-3ce2-4bdb-9296-9a834849489b",
"name": "unet",
"type": "UNetField",
"fieldKind": "input",
"label": ""
},
"control": {
"id": "077b64cb-34be-4fcc-83f2-e399807a02bd",
"name": "control",
"type": "ControlPolymorphic",
"fieldKind": "input",
"label": ""
},
"ip_adapter": {
"id": "1d6948f7-3a65-4a65-a20c-768b287251aa",
"name": "ip_adapter",
"type": "IPAdapterPolymorphic",
"fieldKind": "input",
"label": ""
},
"t2i_adapter": {
"id": "75e67b09-952f-4083-aaf4-6b804d690412",
"name": "t2i_adapter",
"type": "T2IAdapterPolymorphic",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "334d4ba3-5a99-4195-82c5-86fb3f4f7d43",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"denoise_mask": {
"id": "0d3dbdbf-b014-4e95-8b18-ff2ff9cb0bfa",
"name": "denoise_mask",
"type": "DenoiseMaskField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"latents": {
"id": "70fa5bbc-0c38-41bb-861a-74d6d78d2f38",
"name": "latents",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "98ee0e6c-82aa-4e8f-8be5-dc5f00ee47f0",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "e8cb184a-5e1a-47c8-9695-4b8979564f5d",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.4.0"
},
"width": 320,
"height": 646,
"position": {
"x": 1476.5794704734735,
"y": 256.80174342731783
}
},
{
"id": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"type": "invocation",
"data": {
"id": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"type": "l2i",
"inputs": {
"metadata": {
"id": "ab375f12-0042-4410-9182-29e30db82c85",
"name": "metadata",
"type": "MetadataField",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "3a7e7efd-bff5-47d7-9d48-615127afee78",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"vae": {
"id": "a1f5f7a1-0795-4d58-b036-7820c0b0ef2b",
"name": "vae",
"type": "VaeField",
"fieldKind": "input",
"label": ""
},
"tiled": {
"id": "da52059a-0cee-4668-942f-519aa794d739",
"name": "tiled",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
},
"fp32": {
"id": "c4841df3-b24e-4140-be3b-ccd454c2522c",
"name": "fp32",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
}
},
"outputs": {
"image": {
"id": "72d667d0-cf85-459d-abf2-28bd8b823fe7",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "c8c907d8-1066-49d1-b9a6-83bdcd53addc",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "230f359c-b4ea-436c-b372-332d7dcdca85",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": false,
"useCache": true,
"version": "1.0.0"
},
"width": 320,
"height": 267,
"position": {
"x": 2037.9648469717395,
"y": 426.10844427600136
}
}
],
"edges": [
{
"source": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"sourceHandle": "value",
"target": "55705012-79b9-4aac-9f26-c0b10309785b",
"targetHandle": "seed",
"id": "reactflow__edge-ea94bc37-d995-4a83-aa99-4af42479f2f2value-55705012-79b9-4aac-9f26-c0b10309785bseed",
"type": "default"
},
{
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"sourceHandle": "clip",
"target": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"targetHandle": "clip",
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8clip-7d8bf987-284f-413a-b2fd-d825445a5d6cclip",
"type": "default"
},
{
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"sourceHandle": "clip",
"target": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"targetHandle": "clip",
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8clip-93dc02a4-d05b-48ed-b99c-c9b616af3402clip",
"type": "default"
},
{
"source": "55705012-79b9-4aac-9f26-c0b10309785b",
"sourceHandle": "noise",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"targetHandle": "noise",
"id": "reactflow__edge-55705012-79b9-4aac-9f26-c0b10309785bnoise-eea2702a-19fb-45b5-9d75-56b4211ec03cnoise",
"type": "default"
},
{
"source": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"sourceHandle": "conditioning",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"targetHandle": "positive_conditioning",
"id": "reactflow__edge-7d8bf987-284f-413a-b2fd-d825445a5d6cconditioning-eea2702a-19fb-45b5-9d75-56b4211ec03cpositive_conditioning",
"type": "default"
},
{
"source": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"sourceHandle": "conditioning",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"targetHandle": "negative_conditioning",
"id": "reactflow__edge-93dc02a4-d05b-48ed-b99c-c9b616af3402conditioning-eea2702a-19fb-45b5-9d75-56b4211ec03cnegative_conditioning",
"type": "default"
},
{
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"sourceHandle": "unet",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"targetHandle": "unet",
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8unet-eea2702a-19fb-45b5-9d75-56b4211ec03cunet",
"type": "default"
},
{
"source": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"sourceHandle": "latents",
"target": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"targetHandle": "latents",
"id": "reactflow__edge-eea2702a-19fb-45b5-9d75-56b4211ec03clatents-58c957f5-0d01-41fc-a803-b2bbf0413d4flatents",
"type": "default"
},
{
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"sourceHandle": "vae",
"target": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"targetHandle": "vae",
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8vae-58c957f5-0d01-41fc-a803-b2bbf0413d4fvae",
"type": "default"
}
]
}
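
Workflows like the ones above are plain JSON graphs: "nodes" carry the invocation data and "edges" wire outputs to inputs by node id and handle name. A minimal validation sketch (not code from this changeset; the file name "text_to_image.json" is a placeholder):

import json

# "text_to_image.json" stands in for any of the workflow files above.
with open("text_to_image.json") as f:
    workflow = json.load(f)

node_ids = {node["id"] for node in workflow["nodes"]}

# Every edge must reference existing nodes; "collapsed" edges carry no handles.
for edge in workflow["edges"]:
    assert edge["source"] in node_ids, f"unknown source node {edge['source']}"
    assert edge["target"] in node_ids, f"unknown target node {edge['target']}"

print(f"{workflow['name']}: {len(node_ids)} nodes, {len(workflow['edges'])} edges OK")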

View File

@@ -14,7 +14,7 @@ fi
VERSION=$(cd ..; python -c "from invokeai.version import __version__ as version; print(version)")
PATCH=""
VERSION="v${VERSION}${PATCH}"
LATEST_TAG="v3.0-latest"
LATEST_TAG="v3-latest"
echo Building installer for version $VERSION
echo "Be certain that you're in the 'installer' directory before continuing."

View File

@@ -1,7 +1,7 @@
@echo off
setlocal EnableExtensions EnableDelayedExpansion
@rem This script requires the user to install Python 3.9 or higher. All other
@rem This script requires the user to install Python 3.10 or higher. All other
@rem requirements are downloaded as needed.
@rem change to the script's directory
@@ -19,7 +19,7 @@ set INVOKEAI_VERSION=latest
set INSTRUCTIONS=https://invoke-ai.github.io/InvokeAI/installation/INSTALL_AUTOMATED/
set TROUBLESHOOTING=https://invoke-ai.github.io/InvokeAI/installation/INSTALL_AUTOMATED/#troubleshooting
set PYTHON_URL=https://www.python.org/downloads/windows/
set MINIMUM_PYTHON_VERSION=3.9.0
set MINIMUM_PYTHON_VERSION=3.10.0
set PYTHON_URL=https://www.python.org/downloads/release/python-3109/
set err_msg=An error has occurred and the script could not continue.
@@ -28,8 +28,7 @@ set err_msg=An error has occurred and the script could not continue.
echo This script will install InvokeAI and its dependencies.
echo.
echo BEFORE YOU START PLEASE MAKE SURE TO DO THE FOLLOWING
echo 1. Install python 3.9 or 3.10. Python version 3.11 and above are
echo not supported at the moment.
echo 1. Install python 3.10 or 3.11. Python version 3.9 is no longer supported.
echo 2. Double-click on the file WinLongPathsEnabled.reg in order to
echo enable long path support on your system.
echo 3. Install the Visual C++ core libraries.
@@ -46,19 +45,19 @@ echo ***** Checking and Updating Python *****
call python --version >.tmp1 2>.tmp2
if %errorlevel% == 1 (
set err_msg=Please install Python 3.10. See %INSTRUCTIONS% for details.
set err_msg=Please install Python 3.10-11. See %INSTRUCTIONS% for details.
goto err_exit
)
for /f "tokens=2" %%i in (.tmp1) do set python_version=%%i
if "%python_version%" == "" (
set err_msg=No python was detected on your system. Please install Python version %MINIMUM_PYTHON_VERSION% or higher. We recommend Python 3.10.9 from %PYTHON_URL%
set err_msg=No python was detected on your system. Please install Python version %MINIMUM_PYTHON_VERSION% or higher. We recommend Python 3.10.12 from %PYTHON_URL%
goto err_exit
)
call :compareVersions %MINIMUM_PYTHON_VERSION% %python_version%
if %errorlevel% == 1 (
set err_msg=Your version of Python is too low. You need at least %MINIMUM_PYTHON_VERSION% but you have %python_version%. We recommend Python 3.10.9 from %PYTHON_URL%
set err_msg=Your version of Python is too low. You need at least %MINIMUM_PYTHON_VERSION% but you have %python_version%. We recommend Python 3.10.12 from %PYTHON_URL%
goto err_exit
)

View File

@@ -8,10 +8,10 @@ cd $scriptdir
function version { echo "$@" | awk -F. '{ printf("%d%03d%03d%03d\n", $1,$2,$3,$4); }'; }
MINIMUM_PYTHON_VERSION=3.9.0
MINIMUM_PYTHON_VERSION=3.10.0
MAXIMUM_PYTHON_VERSION=3.11.100
PYTHON=""
for candidate in python3.11 python3.10 python3.9 python3 python ; do
for candidate in python3.11 python3.10 python3 python ; do
if ppath=`which $candidate`; then
# when using `pyenv`, the executable for an inactive Python version will exist but will not be operational
# we check that this found executable can actually run

View File

@@ -13,7 +13,7 @@ from pathlib import Path
from tempfile import TemporaryDirectory
from typing import Union
SUPPORTED_PYTHON = ">=3.9.0,<=3.11.100"
SUPPORTED_PYTHON = ">=3.10.0,<=3.11.100"
INSTALLER_REQS = ["rich", "semver", "requests", "plumbum", "prompt-toolkit"]
BOOTSTRAP_VENV_PREFIX = "invokeai-installer-tmp"
@@ -67,7 +67,6 @@ class Installer:
# Cleaning up temporary directories on Windows results in a race condition
# and a stack trace.
# `ignore_cleanup_errors` was only added in Python 3.10
# users of Python 3.9 will see a gnarly stack trace on installer exit
if OS == "Windows" and int(platform.python_version_tuple()[1]) >= 10:
venv_dir = TemporaryDirectory(prefix=BOOTSTRAP_VENV_PREFIX, ignore_cleanup_errors=True)
else:
@@ -139,13 +138,6 @@ class Installer:
except shutil.SameFileError:
venv.create(venv_dir, with_pip=True, symlinks=True)
# upgrade pip in Python 3.9 environments
if int(platform.python_version_tuple()[1]) == 9:
from plumbum import FG, local
pip = local[get_pip_from_venv(venv_dir)]
pip["install", "--upgrade", "pip"] & FG
return venv_dir
def install(
@@ -252,7 +244,7 @@ class InvokeAiInstance:
"numpy~=1.24.0", # choose versions that won't be uninstalled during phase 2
"urllib3~=1.26.0",
"requests~=2.28.0",
"torch~=2.0.0",
"torch==2.1.0",
"torchmetrics==0.11.4",
"torchvision>=0.14.1",
"--force-reinstall",
@@ -332,6 +324,7 @@ class InvokeAiInstance:
Configure the InvokeAI runtime directory
"""
auto_install = False
# set sys.argv to a consistent state
new_argv = [sys.argv[0]]
for i in range(1, len(sys.argv)):
@@ -340,13 +333,17 @@ class InvokeAiInstance:
new_argv.append(el)
new_argv.append(sys.argv[i + 1])
elif el in ["-y", "--yes", "--yes-to-all"]:
new_argv.append(el)
auto_install = True
sys.argv = new_argv
import messages
import requests # to catch download exceptions
from messages import introduction
introduction()
auto_install = auto_install or messages.user_wants_auto_configuration()
if auto_install:
sys.argv.append("--yes")
else:
messages.introduction()
from invokeai.frontend.install.invokeai_configure import invokeai_configure
@@ -463,10 +460,10 @@ def get_torch_source() -> (Union[str, None], str):
url = "https://download.pytorch.org/whl/cpu"
if device == "cuda":
url = "https://download.pytorch.org/whl/cu118"
url = "https://download.pytorch.org/whl/cu121"
optional_modules = "[xformers,onnx-cuda]"
if device == "cuda_and_dml":
url = "https://download.pytorch.org/whl/cu118"
url = "https://download.pytorch.org/whl/cu121"
optional_modules = "[xformers,onnx-directml]"
# in all other cases, Torch wheels should be coming from PyPi as of Torch 1.13
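
The installer hunks above raise the supported Python range to >=3.10.0,<=3.11.100, pin torch==2.1.0, and move CUDA wheels from cu118 to cu121. As a minimal illustration (not code from the diff), the same specifier string can be checked against the running interpreter with the packaging library:

import platform

# The `packaging` PyPI library understands PEP 440 version specifiers.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

SUPPORTED_PYTHON = ">=3.10.0,<=3.11.100"  # value taken from the diff above

def python_is_supported() -> bool:
    """True if the running interpreter satisfies the supported range."""
    return Version(platform.python_version()) in SpecifierSet(SUPPORTED_PYTHON)

if __name__ == "__main__":
    print(python_is_supported())  # e.g. True on Python 3.10.12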

View File

@@ -5,6 +5,7 @@ InvokeAI Installer
import argparse
import os
from pathlib import Path
from installer import Installer
if __name__ == "__main__":

View File

@@ -7,7 +7,7 @@ import os
import platform
from pathlib import Path
from prompt_toolkit import prompt
from prompt_toolkit import HTML, prompt
from prompt_toolkit.completion import PathCompleter
from prompt_toolkit.validation import Validator
from rich import box, print
@@ -65,17 +65,50 @@ def confirm_install(dest: Path) -> bool:
if dest.exists():
print(f":exclamation: Directory {dest} already exists :exclamation:")
dest_confirmed = Confirm.ask(
":stop_sign: Are you sure you want to (re)install in this location?",
":stop_sign: (re)install in this location?",
default=False,
)
else:
print(f"InvokeAI will be installed in {dest}")
dest_confirmed = not Confirm.ask("Would you like to pick a different location?", default=False)
dest_confirmed = Confirm.ask("Use this location?", default=True)
console.line()
return dest_confirmed
def user_wants_auto_configuration() -> bool:
"""Prompt the user to choose between manual and auto configuration."""
console.rule("InvokeAI Configuration Section")
console.print(
Panel(
Group(
"\n".join(
[
"Libraries are installed and InvokeAI will now set up its root directory and configuration. Choose between:",
"",
" * AUTOMATIC configuration: install reasonable defaults and a minimal set of starter models.",
" * MANUAL configuration: manually inspect and adjust configuration options and pick from a larger set of starter models.",
"",
"Later you can fine tune your configuration by selecting option [6] 'Change InvokeAI startup options' from the invoke.bat/invoke.sh launcher script.",
]
),
),
box=box.MINIMAL,
padding=(1, 1),
)
)
choice = (
prompt(
HTML("Choose <b>&lt;a&gt;</b>utomatic or <b>&lt;m&gt;</b>anual configuration [a/m] (a): "),
validator=Validator.from_callable(
lambda n: n == "" or n.startswith(("a", "A", "m", "M")), error_message="Please select 'a' or 'm'"
),
)
or "a"
)
return choice.lower().startswith("a")
def dest_path(dest=None) -> Path:
"""
Prompt the user for the destination path and create the path
@@ -104,7 +137,7 @@ def dest_path(dest=None) -> Path:
path_completer = PathCompleter(
only_directories=True,
expanduser=True,
get_paths=lambda: [browse_start],
get_paths=lambda: [browse_start], # noqa: B023
# get_paths=lambda: [".."].extend(list(browse_start.iterdir()))
)
@@ -116,7 +149,7 @@
completer=path_completer,
default=str(browse_start) + os.sep,
vi_mode=True,
complete_while_typing=True
complete_while_typing=True,
# Test that this is not needed on Windows
# complete_style=CompleteStyle.READLINE_LIKE,
)

View File

@@ -4,7 +4,7 @@ Project homepage: https://github.com/invoke-ai/InvokeAI
Preparations:
You will need to install Python 3.9 or higher for this installer
You will need to install Python 3.10 or higher for this installer
to work. Instructions are given here:
https://invoke-ai.github.io/InvokeAI/installation/INSTALL_AUTOMATED/
@@ -14,15 +14,15 @@ Preparations:
python --version
If all is well, it will print "Python 3.X.X", where the version number
is at least 3.9.*, and not higher than 3.11.*.
is at least 3.10.*, and not higher than 3.11.*.
If this works, check the version of the Python package manager, pip:
pip --version
You should get a message that indicates that the pip package
installer was derived from Python 3.9 or 3.10. For example:
"pip 22.3.1 from /usr/bin/pip (python 3.9)"
installer was derived from Python 3.10 or 3.11. For example:
"pip 22.0.1 from /usr/bin/pip (python 3.10)"
Long Paths on Windows:

View File

@@ -9,14 +9,14 @@ set INVOKEAI_ROOT=.
:start
echo Desired action:
echo 1. Generate images with the browser-based interface
echo 2. Explore InvokeAI nodes using a command-line interface
echo 3. Run textual inversion training
echo 4. Merge models (diffusers type only)
echo 5. Download and install models
echo 6. Change InvokeAI startup options
echo 7. Re-run the configure script to fix a broken install or to complete a major upgrade
echo 8. Open the developer console
echo 9. Update InvokeAI
echo 2. Run textual inversion training
echo 3. Merge models (diffusers type only)
echo 4. Download and install models
echo 5. Change InvokeAI startup options
echo 6. Re-run the configure script to fix a broken install or to complete a major upgrade
echo 7. Open the developer console
echo 8. Update InvokeAI
echo 9. Run the InvokeAI image database maintenance script
echo 10. Command-line help
echo Q - Quit
set /P choice="Please enter 1-10, Q: [1] "
@@ -25,24 +25,21 @@ IF /I "%choice%" == "1" (
echo Starting the InvokeAI browser-based UI..
python .venv\Scripts\invokeai-web.exe %*
) ELSE IF /I "%choice%" == "2" (
echo Starting the InvokeAI command-line..
python .venv\Scripts\invokeai.exe %*
) ELSE IF /I "%choice%" == "3" (
echo Starting textual inversion training..
python .venv\Scripts\invokeai-ti.exe --gui
) ELSE IF /I "%choice%" == "4" (
) ELSE IF /I "%choice%" == "3" (
echo Starting model merging script..
python .venv\Scripts\invokeai-merge.exe --gui
) ELSE IF /I "%choice%" == "5" (
) ELSE IF /I "%choice%" == "4" (
echo Running invokeai-model-install...
python .venv\Scripts\invokeai-model-install.exe
) ELSE IF /I "%choice%" == "6" (
) ELSE IF /I "%choice%" == "5" (
echo Running invokeai-configure...
python .venv\Scripts\invokeai-configure.exe --skip-sd-weight --skip-support-models
) ELSE IF /I "%choice%" == "7" (
) ELSE IF /I "%choice%" == "6" (
echo Running invokeai-configure...
python .venv\Scripts\invokeai-configure.exe --yes --skip-sd-weight
) ELSE IF /I "%choice%" == "8" (
) ELSE IF /I "%choice%" == "7" (
echo Developer Console
echo Python command is:
where python
@@ -54,12 +51,15 @@ IF /I "%choice%" == "1" (
echo *************************
echo *** Type `exit` to quit this shell and deactivate the Python virtual environment ***
call cmd /k
) ELSE IF /I "%choice%" == "9" (
) ELSE IF /I "%choice%" == "8" (
echo Running invokeai-update...
python -m invokeai.frontend.install.invokeai_update
) ELSE IF /I "%choice%" == "9" (
echo Running the db maintenance script...
python .venv\Scripts\invokeai-db-maintenance.exe
) ELSE IF /I "%choice%" == "10" (
echo Displaying command line help...
python .venv\Scripts\invokeai.exe --help %*
python .venv\Scripts\invokeai-web.exe --help %*
pause
exit /b
) ELSE IF /I "%choice%" == "q" (

View File

@@ -46,6 +46,9 @@ if [ "$(uname -s)" == "Darwin" ]; then
export PYTORCH_ENABLE_MPS_FALLBACK=1
fi
# Avoid glibc memory fragmentation. See invokeai/backend/model_management/README.md for details.
export MALLOC_MMAP_THRESHOLD_=1048576
# Primary function for the case statement to determine user input
do_choice() {
case $1 in
@@ -55,55 +58,50 @@ do_choice() {
invokeai-web $PARAMS
;;
2)
clear
printf "Explore InvokeAI nodes using a command-line interface\n"
invokeai $PARAMS
;;
3)
clear
printf "Textual inversion training\n"
invokeai-ti --gui $PARAMS
;;
4)
3)
clear
printf "Merge models (diffusers type only)\n"
invokeai-merge --gui $PARAMS
;;
5)
4)
clear
printf "Download and install models\n"
invokeai-model-install --root ${INVOKEAI_ROOT}
;;
6)
5)
clear
printf "Change InvokeAI startup options\n"
invokeai-configure --root ${INVOKEAI_ROOT} --skip-sd-weights --skip-support-models
;;
7)
6)
clear
printf "Re-run the configure script to fix a broken install or to complete a major upgrade\n"
invokeai-configure --root ${INVOKEAI_ROOT} --yes --default_only --skip-sd-weights
;;
8)
7)
clear
printf "Open the developer console\n"
file_name=$(basename "${BASH_SOURCE[0]}")
bash --init-file "$file_name"
;;
9)
8)
clear
printf "Update InvokeAI\n"
python -m invokeai.frontend.install.invokeai_update
;;
9)
clear
printf "Running the db maintenance script\n"
invokeai-db-maintenance --root ${INVOKEAI_ROOT}
;;
10)
clear
printf "Command-line help\n"
invokeai --help
;;
"HELP 1")
clear
printf "Command-line help\n"
invokeai --help
invokeai-web --help
;;
*)
clear
@@ -118,14 +116,16 @@ do_choice() {
do_dialog() {
options=(
1 "Generate images with a browser-based interface"
2 "Explore InvokeAI nodes using a command-line interface"
3 "Textual inversion training"
4 "Merge models (diffusers type only)"
5 "Download and install models"
6 "Change InvokeAI startup options"
7 "Re-run the configure script to fix a broken install or to complete a major upgrade"
8 "Open the developer console"
9 "Update InvokeAI")
2 "Textual inversion training"
3 "Merge models (diffusers type only)"
4 "Download and install models"
5 "Change InvokeAI startup options"
6 "Re-run the configure script to fix a broken install or to complete a major upgrade"
7 "Open the developer console"
8 "Update InvokeAI"
9 "Run the InvokeAI image database maintenance script"
10 "Command-line help"
)
choice=$(dialog --clear \
--backtitle "\Zb\Zu\Z3InvokeAI" \
@@ -149,14 +149,14 @@ do_line_input() {
printf " ** For a more attractive experience, please install the 'dialog' utility using your package manager. **\n\n"
printf "What would you like to do?\n"
printf "1: Generate images using the browser-based interface\n"
printf "2: Explore InvokeAI nodes using the command-line interface\n"
printf "3: Run textual inversion training\n"
printf "4: Merge models (diffusers type only)\n"
printf "5: Download and install models\n"
printf "6: Change InvokeAI startup options\n"
printf "7: Re-run the configure script to fix a broken install\n"
printf "8: Open the developer console\n"
printf "9: Update InvokeAI\n"
printf "2: Run textual inversion training\n"
printf "3: Merge models (diffusers type only)\n"
printf "4: Download and install models\n"
printf "5: Change InvokeAI startup options\n"
printf "6: Re-run the configure script to fix a broken install\n"
printf "7: Open the developer console\n"
printf "8: Update InvokeAI\n"
printf "9: Run the InvokeAI image database maintenance script\n"
printf "10: Command-line help\n"
printf "Q: Quit\n\n"
read -p "Please enter 1-10, Q: [1] " yn

View File

@@ -1,37 +1,38 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from logging import Logger
import sqlite3
from invokeai.app.services.board_image_record_storage import (
SqliteBoardImageRecordStorage,
)
from invokeai.app.services.board_images import (
BoardImagesService,
BoardImagesServiceDependencies,
)
from invokeai.app.services.board_record_storage import SqliteBoardRecordStorage
from invokeai.app.services.boards import BoardService, BoardServiceDependencies
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.image_record_storage import SqliteImageRecordStorage
from invokeai.app.services.images import ImageService, ImageServiceDependencies
from invokeai.app.services.resource_name import SimpleNameService
from invokeai.app.services.urls import LocalUrlService
from invokeai.app.services.workflow_image_records.workflow_image_records_sqlite import SqliteWorkflowImageRecordsStorage
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.version.invokeai_version import __version__
from ..services.default_graphs import create_system_graphs
from ..services.latent_storage import DiskLatentsStorage, ForwardCacheLatentsStorage
from ..services.graph import GraphExecutionState, LibraryGraph
from ..services.image_file_storage import DiskImageFileStorage
from ..services.invocation_queue import MemoryInvocationQueue
from ..services.board_image_records.board_image_records_sqlite import SqliteBoardImageRecordStorage
from ..services.board_images.board_images_default import BoardImagesService
from ..services.board_records.board_records_sqlite import SqliteBoardRecordStorage
from ..services.boards.boards_default import BoardService
from ..services.config import InvokeAIAppConfig
from ..services.image_files.image_files_disk import DiskImageFileStorage
from ..services.image_records.image_records_sqlite import SqliteImageRecordStorage
from ..services.images.images_default import ImageService
from ..services.invocation_cache.invocation_cache_memory import MemoryInvocationCache
from ..services.invocation_processor.invocation_processor_default import DefaultInvocationProcessor
from ..services.invocation_queue.invocation_queue_memory import MemoryInvocationQueue
from ..services.invocation_services import InvocationServices
from ..services.invocation_stats.invocation_stats_default import InvocationStatsService
from ..services.invoker import Invoker
from ..services.processor import DefaultInvocationProcessor
from ..services.sqlite import SqliteItemStorage
from ..services.model_manager_service import ModelManagerService
from ..services.batch_manager import BatchManager
from ..services.batch_manager_storage import SqliteBatchProcessStorage
from ..services.invocation_stats import InvocationStatsService
from ..services.item_storage.item_storage_sqlite import SqliteItemStorage
from ..services.latents_storage.latents_storage_disk import DiskLatentsStorage
from ..services.latents_storage.latents_storage_forward_cache import ForwardCacheLatentsStorage
from ..services.model_manager.model_manager_default import ModelManagerService
from ..services.model_records import ModelRecordServiceSQL
from ..services.names.names_default import SimpleNameService
from ..services.session_processor.session_processor_default import DefaultSessionProcessor
from ..services.session_queue.session_queue_sqlite import SqliteSessionQueue
from ..services.shared.default_graphs import create_system_graphs
from ..services.shared.graph import GraphExecutionState, LibraryGraph
from ..services.shared.sqlite import SqliteDatabase
from ..services.urls.urls_default import LocalUrlService
from ..services.workflow_records.workflow_records_sqlite import SqliteWorkflowRecordsStorage
from .events import FastAPIEventService
@@ -51,7 +52,7 @@ def check_internet() -> bool:
return False
logger = InvokeAILogger.getLogger()
logger = InvokeAILogger.get_logger()
class ApiDependencies:
@@ -65,84 +66,71 @@ class ApiDependencies:
logger.info(f"Root directory = {str(config.root_path)}")
logger.debug(f"Internet connectivity is {config.internet_available}")
events = FastAPIEventService(event_handler_id)
output_folder = config.output_path
# TODO: build a file/path manager?
db_path = config.db_path
db_path.parent.mkdir(parents=True, exist_ok=True)
db_location = str(db_path)
db = SqliteDatabase(config, logger)
db_conn = sqlite3.connect(db_location, check_same_thread=False) # TODO: figure out a better threading solution
configuration = config
logger = logger
graph_execution_manager = SqliteItemStorage[GraphExecutionState](conn=db_conn, table_name="graph_executions")
urls = LocalUrlService()
image_record_storage = SqliteImageRecordStorage(conn=db_conn)
image_file_storage = DiskImageFileStorage(f"{output_folder}/images")
names = SimpleNameService()
board_image_records = SqliteBoardImageRecordStorage(db=db)
board_images = BoardImagesService()
board_records = SqliteBoardRecordStorage(db=db)
boards = BoardService()
events = FastAPIEventService(event_handler_id)
graph_execution_manager = SqliteItemStorage[GraphExecutionState](db=db, table_name="graph_executions")
graph_library = SqliteItemStorage[LibraryGraph](db=db, table_name="graphs")
image_files = DiskImageFileStorage(f"{output_folder}/images")
image_records = SqliteImageRecordStorage(db=db)
images = ImageService()
invocation_cache = MemoryInvocationCache(max_cache_size=config.node_cache_size)
latents = ForwardCacheLatentsStorage(DiskLatentsStorage(f"{output_folder}/latents"))
board_record_storage = SqliteBoardRecordStorage(conn=db_conn)
board_image_record_storage = SqliteBoardImageRecordStorage(conn=db_conn)
boards = BoardService(
services=BoardServiceDependencies(
board_image_record_storage=board_image_record_storage,
board_record_storage=board_record_storage,
image_record_storage=image_record_storage,
url=urls,
logger=logger,
)
)
board_images = BoardImagesService(
services=BoardImagesServiceDependencies(
board_image_record_storage=board_image_record_storage,
board_record_storage=board_record_storage,
image_record_storage=image_record_storage,
url=urls,
logger=logger,
)
)
images = ImageService(
services=ImageServiceDependencies(
board_image_record_storage=board_image_record_storage,
image_record_storage=image_record_storage,
image_file_storage=image_file_storage,
url=urls,
logger=logger,
names=names,
graph_execution_manager=graph_execution_manager,
)
)
batch_manager_storage = SqliteBatchProcessStorage(conn=db_conn)
batch_manager = BatchManager(batch_manager_storage)
model_manager = ModelManagerService(config, logger)
model_record_service = ModelRecordServiceSQL(db=db)
names = SimpleNameService()
performance_statistics = InvocationStatsService()
processor = DefaultInvocationProcessor()
queue = MemoryInvocationQueue()
session_processor = DefaultSessionProcessor()
session_queue = SqliteSessionQueue(db=db)
urls = LocalUrlService()
workflow_image_records = SqliteWorkflowImageRecordsStorage(db=db)
workflow_records = SqliteWorkflowRecordsStorage(db=db)
services = InvocationServices(
model_manager=ModelManagerService(config, logger),
events=events,
latents=latents,
images=images,
batch_manager=batch_manager,
boards=boards,
board_image_records=board_image_records,
board_images=board_images,
queue=MemoryInvocationQueue(),
graph_library=SqliteItemStorage[LibraryGraph](conn=db_conn, table_name="graphs"),
board_records=board_records,
boards=boards,
configuration=configuration,
events=events,
graph_execution_manager=graph_execution_manager,
processor=DefaultInvocationProcessor(),
configuration=config,
performance_statistics=InvocationStatsService(graph_execution_manager),
graph_library=graph_library,
image_files=image_files,
image_records=image_records,
images=images,
invocation_cache=invocation_cache,
latents=latents,
logger=logger,
model_manager=model_manager,
model_records=model_record_service,
names=names,
performance_statistics=performance_statistics,
processor=processor,
queue=queue,
session_processor=session_processor,
session_queue=session_queue,
urls=urls,
workflow_image_records=workflow_image_records,
workflow_records=workflow_records,
)
create_system_graphs(services.graph_library)
ApiDependencies.invoker = Invoker(services)
db.clean()
@staticmethod
def shutdown():
if ApiDependencies.invoker:

View File

@@ -7,7 +7,7 @@ from typing import Any
from fastapi_events.dispatcher import dispatch
from ..services.events import EventServiceBase
from ..services.events.events_base import EventServiceBase
class FastAPIEventService(EventServiceBase):
@@ -28,7 +28,7 @@ class FastAPIEventService(EventServiceBase):
self.__queue.put(None)
def dispatch(self, event_name: str, payload: Any) -> None:
self.__queue.put(dict(event_name=event_name, payload=payload))
self.__queue.put({"event_name": event_name, "payload": payload})
async def __dispatch_from_queue(self, stop_event: threading.Event):
"""Get events on from the queue and dispatch them, from the correct thread"""

View File

@@ -7,6 +7,7 @@ from fastapi.routing import APIRouter
from pydantic import BaseModel, Field
from invokeai.app.invocations.upscale import ESRGAN_MODELS
from invokeai.app.services.invocation_cache.invocation_cache_common import InvocationCacheStatus
from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
from invokeai.backend.image_util.patchmatch import PatchMatch
from invokeai.backend.image_util.safety_checker import SafetyChecker
@@ -103,3 +104,43 @@ async def set_log_level(
"""Sets the log verbosity level"""
ApiDependencies.invoker.services.logger.setLevel(level)
return LogLevel(ApiDependencies.invoker.services.logger.level)
@app_router.delete(
"/invocation_cache",
operation_id="clear_invocation_cache",
responses={200: {"description": "The operation was successful"}},
)
async def clear_invocation_cache() -> None:
"""Clears the invocation cache"""
ApiDependencies.invoker.services.invocation_cache.clear()
@app_router.put(
"/invocation_cache/enable",
operation_id="enable_invocation_cache",
responses={200: {"description": "The operation was successful"}},
)
async def enable_invocation_cache() -> None:
"""Clears the invocation cache"""
ApiDependencies.invoker.services.invocation_cache.enable()
@app_router.put(
"/invocation_cache/disable",
operation_id="disable_invocation_cache",
responses={200: {"description": "The operation was successful"}},
)
async def disable_invocation_cache() -> None:
"""Clears the invocation cache"""
ApiDependencies.invoker.services.invocation_cache.disable()
@app_router.get(
"/invocation_cache/status",
operation_id="get_invocation_cache_status",
responses={200: {"model": InvocationCacheStatus}},
)
async def get_invocation_cache_status() -> InvocationCacheStatus:
"""Clears the invocation cache"""
return ApiDependencies.invoker.services.invocation_cache.get_status()
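
The four new cache routes above are ordinary REST endpoints, so they can be exercised with any HTTP client. A hedged sketch follows: the host, port, and /api/v1/app prefix are assumptions about how the router is mounted, not facts shown in this diff:

import requests

# BASE is an assumption; adjust to wherever the InvokeAI server is mounted.
BASE = "http://127.0.0.1:9090/api/v1/app"

requests.put(f"{BASE}/invocation_cache/disable").raise_for_status()  # stop caching
requests.put(f"{BASE}/invocation_cache/enable").raise_for_status()   # resume caching
requests.delete(f"{BASE}/invocation_cache").raise_for_status()       # drop cached results

# Returns an InvocationCacheStatus payload (see the import near the top of this file).
print(requests.get(f"{BASE}/invocation_cache/status").json())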

View File

@@ -1,106 +0,0 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from fastapi import Body, HTTPException, Path, Response
from fastapi.routing import APIRouter
from invokeai.app.services.batch_manager_storage import BatchSession, BatchSessionNotFoundException
# Importing * is bad karma but needed here for node detection
from ...invocations import * # noqa: F401 F403
from ...services.batch_manager import Batch, BatchProcessResponse
from ...services.graph import Graph
from ..dependencies import ApiDependencies
batches_router = APIRouter(prefix="/v1/batches", tags=["sessions"])
@batches_router.post(
"/",
operation_id="create_batch",
responses={
200: {"model": BatchProcessResponse},
400: {"description": "Invalid json"},
},
)
async def create_batch(
graph: Graph = Body(description="The graph to initialize the session with"),
batch: Batch = Body(description="Batch config to apply to the given graph"),
) -> BatchProcessResponse:
"""Creates a batch process"""
return ApiDependencies.invoker.services.batch_manager.create_batch_process(batch, graph)
@batches_router.put(
"/b/{batch_process_id}/invoke",
operation_id="start_batch",
responses={
202: {"description": "Batch process started"},
404: {"description": "Batch session not found"},
},
)
async def start_batch(
batch_process_id: str = Path(description="ID of Batch to start"),
) -> Response:
"""Executes a batch process"""
try:
ApiDependencies.invoker.services.batch_manager.run_batch_process(batch_process_id)
return Response(status_code=202)
except BatchSessionNotFoundException:
raise HTTPException(status_code=404, detail="Batch session not found")
@batches_router.delete(
"/b/{batch_process_id}",
operation_id="cancel_batch",
responses={202: {"description": "The batch is canceled"}},
)
async def cancel_batch(
batch_process_id: str = Path(description="The id of the batch process to cancel"),
) -> Response:
"""Cancels a batch process"""
ApiDependencies.invoker.services.batch_manager.cancel_batch_process(batch_process_id)
return Response(status_code=202)
@batches_router.get(
"/incomplete",
operation_id="list_incomplete_batches",
responses={200: {"model": list[BatchProcessResponse]}},
)
async def list_incomplete_batches() -> list[BatchProcessResponse]:
"""Lists incomplete batch processes"""
return ApiDependencies.invoker.services.batch_manager.get_incomplete_batch_processes()
@batches_router.get(
"/",
operation_id="list_batches",
responses={200: {"model": list[BatchProcessResponse]}},
)
async def list_batches() -> list[BatchProcessResponse]:
"""Lists all batch processes"""
return ApiDependencies.invoker.services.batch_manager.get_batch_processes()
@batches_router.get(
"/b/{batch_process_id}",
operation_id="get_batch",
responses={200: {"model": BatchProcessResponse}},
)
async def get_batch(
batch_process_id: str = Path(description="The id of the batch process to get"),
) -> BatchProcessResponse:
"""Gets a Batch Process"""
return ApiDependencies.invoker.services.batch_manager.get_batch(batch_process_id)
@batches_router.get(
"/b/{batch_process_id}/sessions",
operation_id="get_batch_sessions",
responses={200: {"model": list[BatchSession]}},
)
async def get_batch_sessions(
batch_process_id: str = Path(description="The id of the batch process to get"),
) -> list[BatchSession]:
"""Gets a list of batch sessions for a given batch process"""
return ApiDependencies.invoker.services.batch_manager.get_sessions(batch_process_id)

View File

@@ -4,9 +4,9 @@ from fastapi import Body, HTTPException, Path, Query
from fastapi.routing import APIRouter
from pydantic import BaseModel, Field
from invokeai.app.services.board_record_storage import BoardChanges
from invokeai.app.services.image_record_storage import OffsetPaginatedResults
from invokeai.app.services.models.board_record import BoardDTO
from invokeai.app.services.board_records.board_records_common import BoardChanges
from invokeai.app.services.boards.boards_common import BoardDTO
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from ..dependencies import ApiDependencies

View File

@@ -1,20 +1,18 @@
import io
import traceback
from typing import Optional
from PIL import Image
from fastapi import Body, HTTPException, Path, Query, Request, Response, UploadFile
from fastapi.responses import FileResponse
from fastapi.routing import APIRouter
from pydantic import BaseModel, Field
from PIL import Image
from pydantic import BaseModel, Field, ValidationError
from invokeai.app.invocations.baseinvocation import MetadataField, MetadataFieldValidator, WorkflowFieldValidator
from invokeai.app.services.image_records.image_records_common import ImageCategory, ImageRecordChanges, ResourceOrigin
from invokeai.app.services.images.images_common import ImageDTO, ImageUrlsDTO
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.invocations.metadata import ImageMetadata
from invokeai.app.models.image import ImageCategory, ResourceOrigin
from invokeai.app.services.image_record_storage import OffsetPaginatedResults
from invokeai.app.services.models.image_record import (
ImageDTO,
ImageRecordChanges,
ImageUrlsDTO,
)
from ..dependencies import ApiDependencies
images_router = APIRouter(prefix="/v1/images", tags=["images"])
@@ -45,20 +43,41 @@ async def upload_image(
crop_visible: Optional[bool] = Query(default=False, description="Whether to crop the image"),
) -> ImageDTO:
"""Uploads an image"""
if not file.content_type.startswith("image"):
if not file.content_type or not file.content_type.startswith("image"):
raise HTTPException(status_code=415, detail="Not an image")
contents = await file.read()
metadata = None
workflow = None
contents = await file.read()
try:
pil_image = Image.open(io.BytesIO(contents))
if crop_visible:
bbox = pil_image.getbbox()
pil_image = pil_image.crop(bbox)
except Exception:
# Error opening the image
ApiDependencies.invoker.services.logger.error(traceback.format_exc())
raise HTTPException(status_code=415, detail="Failed to read image")
# TODO: retain non-invokeai metadata on upload?
# attempt to parse metadata from image
metadata_raw = pil_image.info.get("invokeai_metadata", None)
if metadata_raw:
try:
metadata = MetadataFieldValidator.validate_json(metadata_raw)
except ValidationError:
ApiDependencies.invoker.services.logger.warn("Failed to parse metadata for uploaded image")
pass
# attempt to parse workflow from image
workflow_raw = pil_image.info.get("invokeai_workflow", None)
if workflow_raw is not None:
try:
workflow = WorkflowFieldValidator.validate_json(workflow_raw)
except ValidationError:
ApiDependencies.invoker.services.logger.warn("Failed to parse workflow for uploaded image")
pass
try:
image_dto = ApiDependencies.invoker.services.images.create(
image=pil_image,
@@ -66,6 +85,8 @@ async def upload_image(
image_category=image_category,
session_id=session_id,
board_id=board_id,
metadata=metadata,
workflow=workflow,
is_intermediate=is_intermediate,
)
@@ -74,6 +95,7 @@ async def upload_image(
return image_dto
except Exception:
ApiDependencies.invoker.services.logger.error(traceback.format_exc())
raise HTTPException(status_code=500, detail="Failed to create image")
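
For context on the invokeai_metadata / invokeai_workflow parsing above: both ride along as PNG text chunks, which Pillow exposes via Image.info. A self-contained sketch (the JSON payloads here are placeholders, not the real MetadataField/WorkflowField schemas):

import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Placeholder payloads, for illustration only.
metadata = {"positive_prompt": "a cat"}
workflow = {"nodes": [], "edges": []}

img = Image.new("RGB", (64, 64))
info = PngInfo()
info.add_text("invokeai_metadata", json.dumps(metadata))
info.add_text("invokeai_workflow", json.dumps(workflow))
img.save("example.png", pnginfo=info)

# Reading mirrors what the upload handler does above:
reopened = Image.open("example.png")
print(reopened.info.get("invokeai_metadata"))
print(reopened.info.get("invokeai_workflow"))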
@@ -90,7 +112,7 @@ async def delete_image(
pass
@images_router.post("/clear-intermediates", operation_id="clear_intermediates")
@images_router.delete("/intermediates", operation_id="clear_intermediates")
async def clear_intermediates() -> int:
"""Clears all intermediates"""
@@ -102,6 +124,17 @@ async def clear_intermediates() -> int:
pass
@images_router.get("/intermediates", operation_id="get_intermediates_count")
async def get_intermediates_count() -> int:
"""Gets the count of intermediate images"""
try:
return ApiDependencies.invoker.services.images.get_intermediates_count()
except Exception:
raise HTTPException(status_code=500, detail="Failed to get intermediates")
pass
@images_router.patch(
"/i/{image_name}",
operation_id="update_image",
@@ -138,11 +171,11 @@ async def get_image_dto(
@images_router.get(
"/i/{image_name}/metadata",
operation_id="get_image_metadata",
response_model=ImageMetadata,
response_model=Optional[MetadataField],
)
async def get_image_metadata(
image_name: str = Path(description="The name of image to get"),
) -> ImageMetadata:
) -> Optional[MetadataField]:
"""Gets an image's metadata"""
try:
@@ -325,3 +358,20 @@ async def unstar_images_in_list(
return ImagesUpdatedFromListResult(updated_image_names=updated_image_names)
except Exception:
raise HTTPException(status_code=500, detail="Failed to unstar images")
class ImagesDownloaded(BaseModel):
response: Optional[str] = Field(
description="If defined, the message to display to the user when images begin downloading"
)
@images_router.post("/download", operation_id="download_images_from_list", response_model=ImagesDownloaded)
async def download_images_from_list(
image_names: list[str] = Body(description="The list of names of images to download", embed=True),
board_id: Optional[str] = Body(
default=None, description="The board from which images should be downloaded", embed=True
),
) -> ImagesDownloaded:
# return ImagesDownloaded(response="Your images are downloading")
raise HTTPException(status_code=501, detail="Endpoint is not yet implemented")

View File

@@ -0,0 +1,164 @@
# Copyright (c) 2023 Lincoln D. Stein
"""FastAPI route for model configuration records."""
from hashlib import sha1
from random import randbytes
from typing import List, Optional
from fastapi import Body, Path, Query, Response
from fastapi.routing import APIRouter
from pydantic import BaseModel, ConfigDict
from starlette.exceptions import HTTPException
from typing_extensions import Annotated
from invokeai.app.services.model_records import (
DuplicateModelException,
InvalidModelException,
UnknownModelException,
)
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
ModelType,
)
from ..dependencies import ApiDependencies
model_records_router = APIRouter(prefix="/v1/model/record", tags=["models"])
class ModelsList(BaseModel):
"""Return list of configs."""
models: list[AnyModelConfig]
model_config = ConfigDict(use_enum_values=True)
@model_records_router.get(
"/",
operation_id="list_model_records",
)
async def list_model_records(
base_models: Optional[List[BaseModelType]] = Query(default=None, description="Base models to include"),
model_type: Optional[ModelType] = Query(default=None, description="The type of model to get"),
) -> ModelsList:
"""Get a list of models."""
record_store = ApiDependencies.invoker.services.model_records
found_models: list[AnyModelConfig] = []
if base_models:
for base_model in base_models:
found_models.extend(record_store.search_by_attr(base_model=base_model, model_type=model_type))
else:
found_models.extend(record_store.search_by_attr(model_type=model_type))
return ModelsList(models=found_models)
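
A sketch of querying the new records endpoint from a client; the mount prefix, server address, and the "sd-1"/"main" enum values are assumptions, not shown in this diff:

import httpx

resp = httpx.get(
    "http://127.0.0.1:9090/api/v1/model/record/",  # assumed mount point
    params={"base_models": ["sd-1"], "model_type": "main"},
)
resp.raise_for_status()
for model in resp.json()["models"]:
    print(model["key"], model.get("name"))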
@model_records_router.get(
"/i/{key}",
operation_id="get_model_record",
responses={
200: {"description": "Success"},
400: {"description": "Bad request"},
404: {"description": "The model could not be found"},
},
)
async def get_model_record(
key: str = Path(description="Key of the model record to fetch."),
) -> AnyModelConfig:
"""Get a model record"""
record_store = ApiDependencies.invoker.services.model_records
try:
return record_store.get_model(key)
except UnknownModelException as e:
raise HTTPException(status_code=404, detail=str(e))
@model_records_router.patch(
"/i/{key}",
operation_id="update_model_record",
responses={
200: {"description": "The model was updated successfully"},
400: {"description": "Bad request"},
404: {"description": "The model could not be found"},
409: {"description": "There is already a model corresponding to the new name"},
},
status_code=200,
response_model=AnyModelConfig,
)
async def update_model_record(
key: Annotated[str, Path(description="Unique key of model")],
info: Annotated[AnyModelConfig, Body(description="Model config", discriminator="type")],
) -> AnyModelConfig:
"""Update model contents with a new config. If the model name or base fields are changed, then the model is renamed."""
logger = ApiDependencies.invoker.services.logger
record_store = ApiDependencies.invoker.services.model_records
try:
model_response = record_store.update_model(key, config=info)
logger.info(f"Updated model: {key}")
except UnknownModelException as e:
raise HTTPException(status_code=404, detail=str(e))
except ValueError as e:
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
return model_response
@model_records_router.delete(
"/i/{key}",
operation_id="del_model_record",
responses={
204: {"description": "Model deleted successfully"},
404: {"description": "Model not found"},
},
status_code=204,
)
async def del_model_record(
key: str = Path(description="Unique key of model to remove from model registry."),
) -> Response:
"""Delete Model"""
logger = ApiDependencies.invoker.services.logger
try:
record_store = ApiDependencies.invoker.services.model_records
record_store.del_model(key)
logger.info(f"Deleted model: {key}")
return Response(status_code=204)
except UnknownModelException as e:
logger.error(str(e))
raise HTTPException(status_code=404, detail=str(e))
@model_records_router.post(
"/i/",
operation_id="add_model_record",
responses={
201: {"description": "The model added successfully"},
409: {"description": "There is already a model corresponding to this path or repo_id"},
415: {"description": "Unrecognized file/folder format"},
},
status_code=201,
)
async def add_model_record(
config: Annotated[AnyModelConfig, Body(description="Model config", discriminator="type")]
) -> AnyModelConfig:
"""
Add a model using the configuration information appropriate for its type.
"""
logger = ApiDependencies.invoker.services.logger
record_store = ApiDependencies.invoker.services.model_records
if config.key == "<NOKEY>":
config.key = sha1(randbytes(100)).hexdigest()
logger.info(f"Created model {config.key} for {config.name}")
try:
record_store.add_model(config.key, config)
except DuplicateModelException as e:
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
except InvalidModelException as e:
logger.error(str(e))
raise HTTPException(status_code=415)
# now fetch it out
return record_store.get_model(config.key)

View File

@@ -1,30 +1,35 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654), 2023 Kent Keirsey (https://github.com/hipsterusername), 2023 Lincoln D. Stein
import pathlib
from typing import Literal, List, Optional, Union
from typing import Annotated, List, Literal, Optional, Union
from fastapi import Body, Path, Query, Response
from fastapi.routing import APIRouter
from pydantic import BaseModel, parse_obj_as
from pydantic import BaseModel, ConfigDict, Field, TypeAdapter
from starlette.exceptions import HTTPException
from invokeai.backend import BaseModelType, ModelType
from invokeai.backend.model_management import MergeInterpolationMethod
from invokeai.backend.model_management.models import (
OPENAPI_MODEL_CONFIGS,
SchedulerPredictionType,
ModelNotFoundException,
InvalidModelException,
ModelNotFoundException,
SchedulerPredictionType,
)
from invokeai.backend.model_management import MergeInterpolationMethod
from ..dependencies import ApiDependencies
models_router = APIRouter(prefix="/v1/models", tags=["models"])
UpdateModelResponse = Union[tuple(OPENAPI_MODEL_CONFIGS)]
UpdateModelResponseValidator = TypeAdapter(UpdateModelResponse)
ImportModelResponse = Union[tuple(OPENAPI_MODEL_CONFIGS)]
ImportModelResponseValidator = TypeAdapter(ImportModelResponse)
ConvertModelResponse = Union[tuple(OPENAPI_MODEL_CONFIGS)]
ConvertModelResponseValidator = TypeAdapter(ConvertModelResponse)
MergeModelResponse = Union[tuple(OPENAPI_MODEL_CONFIGS)]
ImportModelAttributes = Union[tuple(OPENAPI_MODEL_CONFIGS)]
@@ -32,6 +37,11 @@ ImportModelAttributes = Union[tuple(OPENAPI_MODEL_CONFIGS)]
class ModelsList(BaseModel):
models: list[Union[tuple(OPENAPI_MODEL_CONFIGS)]]
model_config = ConfigDict(use_enum_values=True)
ModelsListValidator = TypeAdapter(ModelsList)
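
The recurring change in this file is the pydantic v2 migration: parse_obj_as(SomeType, data) becomes TypeAdapter(SomeType).validate_python(data), with the adapter built once at module scope so its validation schema is not rebuilt per request. A self-contained illustration with a hypothetical model:

from pydantic import BaseModel, TypeAdapter

class Point(BaseModel):  # hypothetical model, for illustration only
    x: int
    y: int

PointValidator = TypeAdapter(Point)

p = PointValidator.validate_python({"x": 1, "y": 2})  # was: parse_obj_as(Point, {...})
q = PointValidator.validate_json('{"x": 3, "y": 4}')  # JSON variant, used elsewhere in this PR
print(p, q)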
@models_router.get(
"/",
@@ -44,12 +54,12 @@ async def list_models(
) -> ModelsList:
"""Gets a list of models"""
if base_models and len(base_models) > 0:
models_raw = list()
models_raw = []
for base_model in base_models:
models_raw.extend(ApiDependencies.invoker.services.model_manager.list_models(base_model, model_type))
else:
models_raw = ApiDependencies.invoker.services.model_manager.list_models(None, model_type)
models = parse_obj_as(ModelsList, {"models": models_raw})
models = ModelsListValidator.validate_python({"models": models_raw})
return models
@@ -105,11 +115,14 @@ async def update_model(
info.path = new_info.get("path")
# replace empty string values with None/null to avoid phenomenon of vae: ''
info_dict = info.dict()
info_dict = info.model_dump()
info_dict = {x: info_dict[x] if info_dict[x] else None for x in info_dict.keys()}
ApiDependencies.invoker.services.model_manager.update_model(
model_name=model_name, base_model=base_model, model_type=model_type, model_attributes=info_dict
model_name=model_name,
base_model=base_model,
model_type=model_type,
model_attributes=info_dict,
)
model_raw = ApiDependencies.invoker.services.model_manager.list_model(
@@ -117,7 +130,7 @@ async def update_model(
base_model=base_model,
model_type=model_type,
)
model_response = parse_obj_as(UpdateModelResponse, model_raw)
model_response = UpdateModelResponseValidator.validate_python(model_raw)
except ModelNotFoundException as e:
raise HTTPException(status_code=404, detail=str(e))
except ValueError as e:
@@ -146,18 +159,21 @@ async def update_model(
async def import_model(
location: str = Body(description="A model path, repo_id or URL to import"),
prediction_type: Optional[Literal["v_prediction", "epsilon", "sample"]] = Body(
description="Prediction type for SDv2 checkpoint files", default="v_prediction"
description="Prediction type for SDv2 checkpoints and rare SDv1 checkpoints",
default=None,
),
) -> ImportModelResponse:
"""Add a model using its local path, repo_id, or remote URL. Model characteristics will be probed and configured automatically"""
location = location.strip("\"' ")
items_to_import = {location}
prediction_types = {x.value: x for x in SchedulerPredictionType}
logger = ApiDependencies.invoker.services.logger
try:
installed_models = ApiDependencies.invoker.services.model_manager.heuristic_import(
items_to_import=items_to_import, prediction_type_helper=lambda x: prediction_types.get(prediction_type)
items_to_import=items_to_import,
prediction_type_helper=lambda x: prediction_types.get(prediction_type),
)
info = installed_models.get(location)
@@ -169,7 +185,7 @@ async def import_model(
model_raw = ApiDependencies.invoker.services.model_manager.list_model(
model_name=info.name, base_model=info.base_model, model_type=info.model_type
)
return parse_obj_as(ImportModelResponse, model_raw)
return ImportModelResponseValidator.validate_python(model_raw)
except ModelNotFoundException as e:
logger.error(str(e))
@@ -203,13 +219,18 @@ async def add_model(
try:
ApiDependencies.invoker.services.model_manager.add_model(
info.model_name, info.base_model, info.model_type, model_attributes=info.dict()
info.model_name,
info.base_model,
info.model_type,
model_attributes=info.model_dump(),
)
logger.info(f"Successfully added {info.model_name}")
model_raw = ApiDependencies.invoker.services.model_manager.list_model(
model_name=info.model_name, base_model=info.base_model, model_type=info.model_type
model_name=info.model_name,
base_model=info.base_model,
model_type=info.model_type,
)
return parse_obj_as(ImportModelResponse, model_raw)
return ImportModelResponseValidator.validate_python(model_raw)
except ModelNotFoundException as e:
logger.error(str(e))
raise HTTPException(status_code=404, detail=str(e))
@@ -221,7 +242,10 @@ async def add_model(
@models_router.delete(
"/{base_model}/{model_type}/{model_name}",
operation_id="del_model",
responses={204: {"description": "Model deleted successfully"}, 404: {"description": "Model not found"}},
responses={
204: {"description": "Model deleted successfully"},
404: {"description": "Model not found"},
},
status_code=204,
response_model=None,
)
@@ -277,7 +301,7 @@ async def convert_model(
model_raw = ApiDependencies.invoker.services.model_manager.list_model(
model_name, base_model=base_model, model_type=model_type
)
response = parse_obj_as(ConvertModelResponse, model_raw)
response = ConvertModelResponseValidator.validate_python(model_raw)
except ModelNotFoundException as e:
raise HTTPException(status_code=404, detail=f"Model '{model_name}' not found: {str(e)}")
except ValueError as e:
@@ -300,7 +324,8 @@ async def search_for_models(
) -> List[pathlib.Path]:
if not search_path.is_dir():
raise HTTPException(
status_code=404, detail=f"The search path '{search_path}' does not exist or is not a directory"
status_code=404,
detail=f"The search path '{search_path}' does not exist or is not a directory",
)
return ApiDependencies.invoker.services.model_manager.search_for_models(search_path)
@@ -335,6 +360,26 @@ async def sync_to_config() -> bool:
return True
# There's some weird pydantic-fastapi behaviour that requires this to be a separate class
# TODO: After a few updates, see if it works inside the route operation handler?
class MergeModelsBody(BaseModel):
model_names: List[str] = Field(description="model name", min_length=2, max_length=3)
merged_model_name: Optional[str] = Field(description="Name of destination model")
alpha: Optional[float] = Field(description="Alpha weighting strength to apply to 2d and 3d models", default=0.5)
interp: Optional[MergeInterpolationMethod] = Field(description="Interpolation method")
force: Optional[bool] = Field(
description="Force merging of models created with different versions of diffusers",
default=False,
)
merge_dest_directory: Optional[str] = Field(
description="Save the merged model to the designated directory (with 'merged_model_name' appended)",
default=None,
)
model_config = ConfigDict(protected_namespaces=())
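
Because merge_models below declares this body with embed=True, clients must nest the JSON under the parameter name. A sketch of the resulting request (the server address and the "sd-1" path value are illustrative):

import httpx

payload = {
    "body": {  # embed=True wraps MergeModelsBody under the parameter name
        "model_names": ["modelA", "modelB"],
        "merged_model_name": "modelA+modelB",
        "alpha": 0.5,
        "force": False,
    }
}
resp = httpx.put("http://127.0.0.1:9090/api/v1/models/merge/sd-1", json=payload)
print(resp.status_code)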
@models_router.put(
"/merge/{base_model}",
operation_id="merge_models",
@@ -347,31 +392,23 @@ async def sync_to_config() -> bool:
response_model=MergeModelResponse,
)
async def merge_models(
body: Annotated[MergeModelsBody, Body(description="Model configuration", embed=True)],
base_model: BaseModelType = Path(description="Base model"),
model_names: List[str] = Body(description="model name", min_items=2, max_items=3),
merged_model_name: Optional[str] = Body(description="Name of destination model"),
alpha: Optional[float] = Body(description="Alpha weighting strength to apply to 2d and 3d models", default=0.5),
interp: Optional[MergeInterpolationMethod] = Body(description="Interpolation method"),
force: Optional[bool] = Body(
description="Force merging of models created with different versions of diffusers", default=False
),
merge_dest_directory: Optional[str] = Body(
description="Save the merged model to the designated directory (with 'merged_model_name' appended)",
default=None,
),
) -> MergeModelResponse:
"""Convert a checkpoint model into a diffusers model"""
logger = ApiDependencies.invoker.services.logger
try:
logger.info(f"Merging models: {model_names} into {merge_dest_directory or '<MODELS>'}/{merged_model_name}")
dest = pathlib.Path(merge_dest_directory) if merge_dest_directory else None
logger.info(
f"Merging models: {body.model_names} into {body.merge_dest_directory or '<MODELS>'}/{body.merged_model_name}"
)
dest = pathlib.Path(body.merge_dest_directory) if body.merge_dest_directory else None
result = ApiDependencies.invoker.services.model_manager.merge_models(
model_names,
base_model,
merged_model_name=merged_model_name or "+".join(model_names),
alpha=alpha,
interp=interp,
force=force,
model_names=body.model_names,
base_model=base_model,
merged_model_name=body.merged_model_name or "+".join(body.model_names),
alpha=body.alpha,
interp=body.interp,
force=body.force,
merge_dest_directory=dest,
)
model_raw = ApiDependencies.invoker.services.model_manager.list_model(
@@ -379,9 +416,12 @@ async def merge_models(
base_model=base_model,
model_type=ModelType.Main,
)
response = parse_obj_as(ConvertModelResponse, model_raw)
response = ConvertModelResponseValidator.validate_python(model_raw)
except ModelNotFoundException:
raise HTTPException(status_code=404, detail=f"One or more of the models '{model_names}' not found")
raise HTTPException(
status_code=404,
detail=f"One or more of the models '{body.model_names}' not found",
)
except ValueError as e:
raise HTTPException(status_code=400, detail=str(e))
return response

View File

@@ -0,0 +1,228 @@
from typing import Optional
from fastapi import Body, Path, Query
from fastapi.routing import APIRouter
from pydantic import BaseModel
from invokeai.app.services.session_processor.session_processor_common import SessionProcessorStatus
from invokeai.app.services.session_queue.session_queue_common import (
QUEUE_ITEM_STATUS,
Batch,
BatchStatus,
CancelByBatchIDsResult,
ClearResult,
EnqueueBatchResult,
PruneResult,
SessionQueueItem,
SessionQueueItemDTO,
SessionQueueStatus,
)
from invokeai.app.services.shared.pagination import CursorPaginatedResults
from ..dependencies import ApiDependencies
session_queue_router = APIRouter(prefix="/v1/queue", tags=["queue"])
class SessionQueueAndProcessorStatus(BaseModel):
"""The overall status of session queue and processor"""
queue: SessionQueueStatus
processor: SessionProcessorStatus
@session_queue_router.post(
"/{queue_id}/enqueue_batch",
operation_id="enqueue_batch",
responses={
201: {"model": EnqueueBatchResult},
},
)
async def enqueue_batch(
queue_id: str = Path(description="The queue id to perform this operation on"),
batch: Batch = Body(description="Batch to process"),
prepend: bool = Body(default=False, description="Whether or not to prepend this batch in the queue"),
) -> EnqueueBatchResult:
"""Processes a batch and enqueues the output graphs for execution."""
return ApiDependencies.invoker.services.session_queue.enqueue_batch(queue_id=queue_id, batch=batch, prepend=prepend)
@session_queue_router.get(
"/{queue_id}/list",
operation_id="list_queue_items",
responses={
200: {"model": CursorPaginatedResults[SessionQueueItemDTO]},
},
)
async def list_queue_items(
queue_id: str = Path(description="The queue id to perform this operation on"),
limit: int = Query(default=50, description="The number of items to fetch"),
status: Optional[QUEUE_ITEM_STATUS] = Query(default=None, description="The status of items to fetch"),
cursor: Optional[int] = Query(default=None, description="The pagination cursor"),
priority: int = Query(default=0, description="The pagination cursor priority"),
) -> CursorPaginatedResults[SessionQueueItemDTO]:
"""Gets all queue items (without graphs)"""
return ApiDependencies.invoker.services.session_queue.list_queue_items(
queue_id=queue_id, limit=limit, status=status, cursor=cursor, priority=priority
)
@session_queue_router.put(
"/{queue_id}/processor/resume",
operation_id="resume",
responses={200: {"model": SessionProcessorStatus}},
)
async def resume(
queue_id: str = Path(description="The queue id to perform this operation on"),
) -> SessionProcessorStatus:
"""Resumes session processor"""
return ApiDependencies.invoker.services.session_processor.resume()
@session_queue_router.put(
"/{queue_id}/processor/pause",
operation_id="pause",
responses={200: {"model": SessionProcessorStatus}},
)
async def pause(
queue_id: str = Path(description="The queue id to perform this operation on"),
) -> SessionProcessorStatus:
"""Pauses session processor"""
return ApiDependencies.invoker.services.session_processor.pause()
@session_queue_router.put(
"/{queue_id}/cancel_by_batch_ids",
operation_id="cancel_by_batch_ids",
responses={200: {"model": CancelByBatchIDsResult}},
)
async def cancel_by_batch_ids(
queue_id: str = Path(description="The queue id to perform this operation on"),
batch_ids: list[str] = Body(description="The list of batch_ids to cancel all queue items for", embed=True),
) -> CancelByBatchIDsResult:
"""Immediately cancels all queue items from the given batch ids"""
return ApiDependencies.invoker.services.session_queue.cancel_by_batch_ids(queue_id=queue_id, batch_ids=batch_ids)
@session_queue_router.put(
"/{queue_id}/clear",
operation_id="clear",
responses={
200: {"model": ClearResult},
},
)
async def clear(
queue_id: str = Path(description="The queue id to perform this operation on"),
) -> ClearResult:
"""Clears the queue entirely, immediately canceling the currently-executing session"""
queue_item = ApiDependencies.invoker.services.session_queue.get_current(queue_id)
if queue_item is not None:
ApiDependencies.invoker.services.session_queue.cancel_queue_item(queue_item.item_id)
clear_result = ApiDependencies.invoker.services.session_queue.clear(queue_id)
return clear_result
@session_queue_router.put(
"/{queue_id}/prune",
operation_id="prune",
responses={
200: {"model": PruneResult},
},
)
async def prune(
queue_id: str = Path(description="The queue id to perform this operation on"),
) -> PruneResult:
"""Prunes all completed or errored queue items"""
return ApiDependencies.invoker.services.session_queue.prune(queue_id)
@session_queue_router.get(
"/{queue_id}/current",
operation_id="get_current_queue_item",
responses={
200: {"model": Optional[SessionQueueItem]},
},
)
async def get_current_queue_item(
queue_id: str = Path(description="The queue id to perform this operation on"),
) -> Optional[SessionQueueItem]:
"""Gets the currently execution queue item"""
return ApiDependencies.invoker.services.session_queue.get_current(queue_id)
@session_queue_router.get(
"/{queue_id}/next",
operation_id="get_next_queue_item",
responses={
200: {"model": Optional[SessionQueueItem]},
},
)
async def get_next_queue_item(
queue_id: str = Path(description="The queue id to perform this operation on"),
) -> Optional[SessionQueueItem]:
"""Gets the next queue item, without executing it"""
return ApiDependencies.invoker.services.session_queue.get_next(queue_id)
@session_queue_router.get(
"/{queue_id}/status",
operation_id="get_queue_status",
responses={
200: {"model": SessionQueueAndProcessorStatus},
},
)
async def get_queue_status(
queue_id: str = Path(description="The queue id to perform this operation on"),
) -> SessionQueueAndProcessorStatus:
"""Gets the status of the session queue"""
queue = ApiDependencies.invoker.services.session_queue.get_queue_status(queue_id)
processor = ApiDependencies.invoker.services.session_processor.get_status()
return SessionQueueAndProcessorStatus(queue=queue, processor=processor)
@session_queue_router.get(
"/{queue_id}/b/{batch_id}/status",
operation_id="get_batch_status",
responses={
200: {"model": BatchStatus},
},
)
async def get_batch_status(
queue_id: str = Path(description="The queue id to perform this operation on"),
batch_id: str = Path(description="The batch to get the status of"),
) -> BatchStatus:
"""Gets the status of the session queue"""
return ApiDependencies.invoker.services.session_queue.get_batch_status(queue_id=queue_id, batch_id=batch_id)
@session_queue_router.get(
"/{queue_id}/i/{item_id}",
operation_id="get_queue_item",
responses={
200: {"model": SessionQueueItem},
},
)
async def get_queue_item(
queue_id: str = Path(description="The queue id to perform this operation on"),
item_id: int = Path(description="The queue item to get"),
) -> SessionQueueItem:
"""Gets a queue item"""
return ApiDependencies.invoker.services.session_queue.get_queue_item(item_id)
@session_queue_router.put(
"/{queue_id}/i/{item_id}/cancel",
operation_id="cancel_queue_item",
responses={
200: {"model": SessionQueueItem},
},
)
async def cancel_queue_item(
queue_id: str = Path(description="The queue id to perform this operation on"),
item_id: int = Path(description="The queue item to cancel"),
) -> SessionQueueItem:
"""Deletes a queue item"""
return ApiDependencies.invoker.services.session_queue.cancel_queue_item(item_id)

View File

@@ -1,53 +1,50 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from typing import Annotated, Optional, Union
from fastapi import Body, HTTPException, Path, Query, Response
from fastapi import HTTPException, Path
from fastapi.routing import APIRouter
from pydantic.fields import Field
# Importing * is bad karma but needed here for node detection
from ...invocations import * # noqa: F401 F403
from ...invocations.baseinvocation import BaseInvocation
from ...services.graph import Edge, EdgeConnection, Graph, GraphExecutionState, NodeAlreadyExecutedError
from ...services.item_storage import PaginatedResults
from ...services.shared.graph import GraphExecutionState
from ..dependencies import ApiDependencies
session_router = APIRouter(prefix="/v1/sessions", tags=["sessions"])
@session_router.post(
"/",
operation_id="create_session",
responses={
200: {"model": GraphExecutionState},
400: {"description": "Invalid json"},
},
)
async def create_session(
graph: Optional[Graph] = Body(default=None, description="The graph to initialize the session with")
) -> GraphExecutionState:
"""Creates a new session, optionally initializing it with an invocation graph"""
session = ApiDependencies.invoker.create_execution_state(graph)
return session
# @session_router.post(
# "/",
# operation_id="create_session",
# responses={
# 200: {"model": GraphExecutionState},
# 400: {"description": "Invalid json"},
# },
# deprecated=True,
# )
# async def create_session(
# queue_id: str = Query(default="", description="The id of the queue to associate the session with"),
# graph: Optional[Graph] = Body(default=None, description="The graph to initialize the session with"),
# ) -> GraphExecutionState:
# """Creates a new session, optionally initializing it with an invocation graph"""
# session = ApiDependencies.invoker.create_execution_state(queue_id=queue_id, graph=graph)
# return session
@session_router.get(
"/",
operation_id="list_sessions",
responses={200: {"model": PaginatedResults[GraphExecutionState]}},
)
async def list_sessions(
page: int = Query(default=0, description="The page of results to get"),
per_page: int = Query(default=10, description="The number of results per page"),
query: str = Query(default="", description="The query string to search for"),
) -> PaginatedResults[GraphExecutionState]:
"""Gets a list of sessions, optionally searching"""
if query == "":
result = ApiDependencies.invoker.services.graph_execution_manager.list(page, per_page)
else:
result = ApiDependencies.invoker.services.graph_execution_manager.search(query, page, per_page)
return result
# @session_router.get(
# "/",
# operation_id="list_sessions",
# responses={200: {"model": PaginatedResults[GraphExecutionState]}},
# deprecated=True,
# )
# async def list_sessions(
# page: int = Query(default=0, description="The page of results to get"),
# per_page: int = Query(default=10, description="The number of results per page"),
# query: str = Query(default="", description="The query string to search for"),
# ) -> PaginatedResults[GraphExecutionState]:
# """Gets a list of sessions, optionally searching"""
# if query == "":
# result = ApiDependencies.invoker.services.graph_execution_manager.list(page, per_page)
# else:
# result = ApiDependencies.invoker.services.graph_execution_manager.search(query, page, per_page)
# return result
@session_router.get(
@@ -69,203 +66,211 @@ async def get_session(
return session
@session_router.post(
"/{session_id}/nodes",
operation_id="add_node",
responses={
200: {"model": str},
400: {"description": "Invalid node or link"},
404: {"description": "Session not found"},
},
)
async def add_node(
session_id: str = Path(description="The id of the session"),
node: Annotated[Union[BaseInvocation.get_invocations()], Field(discriminator="type")] = Body( # type: ignore
description="The node to add"
),
) -> str:
"""Adds a node to the graph"""
session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
if session is None:
raise HTTPException(status_code=404)
# @session_router.post(
# "/{session_id}/nodes",
# operation_id="add_node",
# responses={
# 200: {"model": str},
# 400: {"description": "Invalid node or link"},
# 404: {"description": "Session not found"},
# },
# deprecated=True,
# )
# async def add_node(
# session_id: str = Path(description="The id of the session"),
# node: Annotated[Union[BaseInvocation.get_invocations()], Field(discriminator="type")] = Body( # type: ignore
# description="The node to add"
# ),
# ) -> str:
# """Adds a node to the graph"""
# session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
# if session is None:
# raise HTTPException(status_code=404)
try:
session.add_node(node)
ApiDependencies.invoker.services.graph_execution_manager.set(
session
) # TODO: can this be done automatically, or add node through an API?
return session.id
except NodeAlreadyExecutedError:
raise HTTPException(status_code=400)
except IndexError:
raise HTTPException(status_code=400)
# try:
# session.add_node(node)
# ApiDependencies.invoker.services.graph_execution_manager.set(
# session
# ) # TODO: can this be done automatically, or add node through an API?
# return session.id
# except NodeAlreadyExecutedError:
# raise HTTPException(status_code=400)
# except IndexError:
# raise HTTPException(status_code=400)
@session_router.put(
"/{session_id}/nodes/{node_path}",
operation_id="update_node",
responses={
200: {"model": GraphExecutionState},
400: {"description": "Invalid node or link"},
404: {"description": "Session not found"},
},
)
async def update_node(
session_id: str = Path(description="The id of the session"),
node_path: str = Path(description="The path to the node in the graph"),
node: Annotated[Union[BaseInvocation.get_invocations()], Field(discriminator="type")] = Body( # type: ignore
description="The new node"
),
) -> GraphExecutionState:
"""Updates a node in the graph and removes all linked edges"""
session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
if session is None:
raise HTTPException(status_code=404)
# @session_router.put(
# "/{session_id}/nodes/{node_path}",
# operation_id="update_node",
# responses={
# 200: {"model": GraphExecutionState},
# 400: {"description": "Invalid node or link"},
# 404: {"description": "Session not found"},
# },
# deprecated=True,
# )
# async def update_node(
# session_id: str = Path(description="The id of the session"),
# node_path: str = Path(description="The path to the node in the graph"),
# node: Annotated[Union[BaseInvocation.get_invocations()], Field(discriminator="type")] = Body( # type: ignore
# description="The new node"
# ),
# ) -> GraphExecutionState:
# """Updates a node in the graph and removes all linked edges"""
# session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
# if session is None:
# raise HTTPException(status_code=404)
try:
session.update_node(node_path, node)
ApiDependencies.invoker.services.graph_execution_manager.set(
session
) # TODO: can this be done automatically, or add node through an API?
return session
except NodeAlreadyExecutedError:
raise HTTPException(status_code=400)
except IndexError:
raise HTTPException(status_code=400)
# try:
# session.update_node(node_path, node)
# ApiDependencies.invoker.services.graph_execution_manager.set(
# session
# ) # TODO: can this be done automatically, or add node through an API?
# return session
# except NodeAlreadyExecutedError:
# raise HTTPException(status_code=400)
# except IndexError:
# raise HTTPException(status_code=400)
@session_router.delete(
"/{session_id}/nodes/{node_path}",
operation_id="delete_node",
responses={
200: {"model": GraphExecutionState},
400: {"description": "Invalid node or link"},
404: {"description": "Session not found"},
},
)
async def delete_node(
session_id: str = Path(description="The id of the session"),
node_path: str = Path(description="The path to the node to delete"),
) -> GraphExecutionState:
"""Deletes a node in the graph and removes all linked edges"""
session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
if session is None:
raise HTTPException(status_code=404)
# @session_router.delete(
# "/{session_id}/nodes/{node_path}",
# operation_id="delete_node",
# responses={
# 200: {"model": GraphExecutionState},
# 400: {"description": "Invalid node or link"},
# 404: {"description": "Session not found"},
# },
# deprecated=True,
# )
# async def delete_node(
# session_id: str = Path(description="The id of the session"),
# node_path: str = Path(description="The path to the node to delete"),
# ) -> GraphExecutionState:
# """Deletes a node in the graph and removes all linked edges"""
# session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
# if session is None:
# raise HTTPException(status_code=404)
try:
session.delete_node(node_path)
ApiDependencies.invoker.services.graph_execution_manager.set(
session
) # TODO: can this be done automatically, or add node through an API?
return session
except NodeAlreadyExecutedError:
raise HTTPException(status_code=400)
except IndexError:
raise HTTPException(status_code=400)
# try:
# session.delete_node(node_path)
# ApiDependencies.invoker.services.graph_execution_manager.set(
# session
# ) # TODO: can this be done automatically, or add node through an API?
# return session
# except NodeAlreadyExecutedError:
# raise HTTPException(status_code=400)
# except IndexError:
# raise HTTPException(status_code=400)
@session_router.post(
"/{session_id}/edges",
operation_id="add_edge",
responses={
200: {"model": GraphExecutionState},
400: {"description": "Invalid node or link"},
404: {"description": "Session not found"},
},
)
async def add_edge(
session_id: str = Path(description="The id of the session"),
edge: Edge = Body(description="The edge to add"),
) -> GraphExecutionState:
"""Adds an edge to the graph"""
session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
if session is None:
raise HTTPException(status_code=404)
# @session_router.post(
# "/{session_id}/edges",
# operation_id="add_edge",
# responses={
# 200: {"model": GraphExecutionState},
# 400: {"description": "Invalid node or link"},
# 404: {"description": "Session not found"},
# },
# deprecated=True,
# )
# async def add_edge(
# session_id: str = Path(description="The id of the session"),
# edge: Edge = Body(description="The edge to add"),
# ) -> GraphExecutionState:
# """Adds an edge to the graph"""
# session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
# if session is None:
# raise HTTPException(status_code=404)
try:
session.add_edge(edge)
ApiDependencies.invoker.services.graph_execution_manager.set(
session
) # TODO: can this be done automatically, or add node through an API?
return session
except NodeAlreadyExecutedError:
raise HTTPException(status_code=400)
except IndexError:
raise HTTPException(status_code=400)
# try:
# session.add_edge(edge)
# ApiDependencies.invoker.services.graph_execution_manager.set(
# session
# ) # TODO: can this be done automatically, or add node through an API?
# return session
# except NodeAlreadyExecutedError:
# raise HTTPException(status_code=400)
# except IndexError:
# raise HTTPException(status_code=400)
# TODO: the edge being in the path here is really ugly, find a better solution
@session_router.delete(
"/{session_id}/edges/{from_node_id}/{from_field}/{to_node_id}/{to_field}",
operation_id="delete_edge",
responses={
200: {"model": GraphExecutionState},
400: {"description": "Invalid node or link"},
404: {"description": "Session not found"},
},
)
async def delete_edge(
session_id: str = Path(description="The id of the session"),
from_node_id: str = Path(description="The id of the node the edge is coming from"),
from_field: str = Path(description="The field of the node the edge is coming from"),
to_node_id: str = Path(description="The id of the node the edge is going to"),
to_field: str = Path(description="The field of the node the edge is going to"),
) -> GraphExecutionState:
"""Deletes an edge from the graph"""
session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
if session is None:
raise HTTPException(status_code=404)
# # TODO: the edge being in the path here is really ugly, find a better solution
# @session_router.delete(
# "/{session_id}/edges/{from_node_id}/{from_field}/{to_node_id}/{to_field}",
# operation_id="delete_edge",
# responses={
# 200: {"model": GraphExecutionState},
# 400: {"description": "Invalid node or link"},
# 404: {"description": "Session not found"},
# },
# deprecated=True,
# )
# async def delete_edge(
# session_id: str = Path(description="The id of the session"),
# from_node_id: str = Path(description="The id of the node the edge is coming from"),
# from_field: str = Path(description="The field of the node the edge is coming from"),
# to_node_id: str = Path(description="The id of the node the edge is going to"),
# to_field: str = Path(description="The field of the node the edge is going to"),
# ) -> GraphExecutionState:
# """Deletes an edge from the graph"""
# session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
# if session is None:
# raise HTTPException(status_code=404)
try:
edge = Edge(
source=EdgeConnection(node_id=from_node_id, field=from_field),
destination=EdgeConnection(node_id=to_node_id, field=to_field),
)
session.delete_edge(edge)
ApiDependencies.invoker.services.graph_execution_manager.set(
session
) # TODO: can this be done automatically, or add node through an API?
return session
except NodeAlreadyExecutedError:
raise HTTPException(status_code=400)
except IndexError:
raise HTTPException(status_code=400)
# try:
# edge = Edge(
# source=EdgeConnection(node_id=from_node_id, field=from_field),
# destination=EdgeConnection(node_id=to_node_id, field=to_field),
# )
# session.delete_edge(edge)
# ApiDependencies.invoker.services.graph_execution_manager.set(
# session
# ) # TODO: can this be done automatically, or add node through an API?
# return session
# except NodeAlreadyExecutedError:
# raise HTTPException(status_code=400)
# except IndexError:
# raise HTTPException(status_code=400)
@session_router.put(
"/{session_id}/invoke",
operation_id="invoke_session",
responses={
200: {"model": None},
202: {"description": "The invocation is queued"},
400: {"description": "The session has no invocations ready to invoke"},
404: {"description": "Session not found"},
},
)
async def invoke_session(
session_id: str = Path(description="The id of the session to invoke"),
all: bool = Query(default=False, description="Whether or not to invoke all remaining invocations"),
) -> Response:
"""Invokes a session"""
session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
if session is None:
raise HTTPException(status_code=404)
# @session_router.put(
# "/{session_id}/invoke",
# operation_id="invoke_session",
# responses={
# 200: {"model": None},
# 202: {"description": "The invocation is queued"},
# 400: {"description": "The session has no invocations ready to invoke"},
# 404: {"description": "Session not found"},
# },
# deprecated=True,
# )
# async def invoke_session(
# queue_id: str = Query(description="The id of the queue to associate the session with"),
# session_id: str = Path(description="The id of the session to invoke"),
# all: bool = Query(default=False, description="Whether or not to invoke all remaining invocations"),
# ) -> Response:
# """Invokes a session"""
# session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
# if session is None:
# raise HTTPException(status_code=404)
if session.is_complete():
raise HTTPException(status_code=400)
# if session.is_complete():
# raise HTTPException(status_code=400)
ApiDependencies.invoker.invoke(session, invoke_all=all)
return Response(status_code=202)
# ApiDependencies.invoker.invoke(queue_id, session, invoke_all=all)
# return Response(status_code=202)
@session_router.delete(
"/{session_id}/invoke",
operation_id="cancel_session_invoke",
responses={202: {"description": "The invocation is canceled"}},
)
async def cancel_session_invoke(
session_id: str = Path(description="The id of the session to cancel"),
) -> Response:
"""Invokes a session"""
ApiDependencies.invoker.cancel(session_id)
return Response(status_code=202)
# @session_router.delete(
# "/{session_id}/invoke",
# operation_id="cancel_session_invoke",
# responses={202: {"description": "The invocation is canceled"}},
# deprecated=True,
# )
# async def cancel_session_invoke(
# session_id: str = Path(description="The id of the session to cancel"),
# ) -> Response:
# """Invokes a session"""
# ApiDependencies.invoker.cancel(session_id)
# return Response(status_code=202)

Some files were not shown because too many files have changed in this diff.