Compare commits

..

87 Commits

Author SHA1 Message Date
732780c376 Playing with eventservice tests 2023-09-08 12:33:12 -04:00
ed7deee8f1 Merge branch 'main' into feat/batch-graphs 2023-09-06 11:03:54 -04:00
d22c4734ee feat(api): move batches to own router 2023-09-06 16:34:49 +10:00
3e7dadd7b3 fix(nodes): add version to iterate and collect 2023-09-05 23:46:20 +10:00
b777dba430 feat: batch events
When a batch creates a session, we need to alert the client of this. Because the sessions are created by the batch manager (not directly in response to a client action), we need to emit an event with the session id.

To accommodate this, a secondary set of sio sub/unsub/event handlers is created. These are specifically for batch events. The room is the `batch_id`.

When creating a batch, the client subscribes to this batch room.

When the batch manager creates a batch session, a `batch_session_created` event is emitted in the appropriate room.  It includes the session id. The client then may subscribe to the session room, and all socket stuff proceeds as it did before.
2023-09-05 21:17:33 +10:00
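As a rough illustration of the socket.io pattern described in the commit above — not the actual InvokeAI handlers; the handler name, payload shape, and helper here are assumptions based on the commit message — the flow might look like this:

```python
import socketio

sio = socketio.Server()

@sio.on("subscribe_batch")
def subscribe_batch(sid, data):
    # After creating a batch, the client joins a room keyed by the batch_id.
    sio.enter_room(sid, data["batch_id"])

def notify_batch_session_created(batch_id, session_id):
    # Emitted by the batch manager when it creates a session for the batch;
    # clients in the batch room can then subscribe to the session room as usual.
    sio.emit("batch_session_created", {"session_id": session_id}, room=batch_id)
```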
531c3bb1e2 fix(tests): fix batch tests [WIP] 2023-09-05 18:07:47 +10:00
331743ca0c Merge branch 'main' into feat/batch-graphs 2023-09-05 17:32:42 +10:00
13429e66b3 chore(ui): typegen 2023-09-05 17:32:05 +10:00
2185c85287 feat(tests): add test for batch with subgraph [WIP]
The tests still don't work due to the test events service not emitting events the batch manager can listen for.
2023-09-05 17:31:55 +10:00
e8a4a654ac feat(batch): use node_path instead of node_id to create batched sessions 2023-09-05 17:30:27 +10:00
26f9ac9f21 Revert "Revert "feat(batches): defer ges *and* batch session creation until execution time""
This reverts commit be971617e3.
2023-09-05 16:34:40 +10:00
8d78af5db7 fix(tests): fix name of test 2023-09-05 16:14:52 +10:00
babd26feab feat(batch): extract repeated logic to function 2023-09-05 16:14:18 +10:00
e9b26e5e7d fix(api): fix duplicate operation id 2023-09-05 16:06:35 +10:00
6b946f53c4 Merge branch 'main' into feat/batch-graphs 2023-09-05 16:02:56 +10:00
70479b9827 Merge branch 'main' into feat/batch-graphs 2023-08-29 10:39:45 -04:00
35099dcdd8 Fix operation id on an endpoint 2023-08-29 10:32:22 -04:00
670600a863 Merge branch 'main' into feat/batch-graphs 2023-08-29 10:22:14 -04:00
6d5403e19d Merge branch 'main' into feat/batch-graphs 2023-08-29 09:07:18 -04:00
0f7695a081 Merge branch 'main' into feat/batch-graphs 2023-08-27 11:41:45 -04:00
d567d9f804 Merge branch 'main' into feat/batch-graphs 2023-08-22 10:12:30 -04:00
68f6140685 fix(tests): fix batches test 2023-08-22 01:48:53 +10:00
be971617e3 Revert "feat(batches): defer ges *and* batch session creation until execution time"
This reverts commit 1652143671.
2023-08-22 01:23:39 +10:00
1652143671 feat(batches): defer ges *and* batch session creation until execution time 2023-08-22 00:54:17 +10:00
88ae19a768 feat(batches): defer ges creation until execution
This improves the overall responsiveness of the system substantially, but does make each iteration *slightly* slower, distributing the up-front cost across the batch.

Two main changes:

1. Create BatchSessions immediately, but do not create a whole graph execution state until the batch is executed.
BatchSessions are created with a `session_id` that does not exist in the sessions database.
The default state is changed to `"uninitialized"` to better represent this.

Results: Time to create 5000 batches reduced from over 30s to 2.5s

2. Use `executemany()` to retrieve lists of created sessions.
Results: time to create 5000 batches reduced from 2.5s to under 0.5s

Other changes:

- set BatchSession state to `"in_progress"` just before `invoke()` is called
- rename a few methods to accommodate the new behaviour
- remove unused `BatchProcessStorage.get_created_sessions()` method
2023-08-21 22:22:19 +10:00
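A minimal sketch of the `executemany()` optimization described in the commit above; the table name, columns, and helper function are assumptions for illustration, not the real InvokeAI schema:

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE batch_session (batch_id TEXT, session_id TEXT, state TEXT)")

def create_batch_sessions(batch_id, count):
    # A single executemany() call replaces thousands of individual INSERTs,
    # keeping the up-front cost of creating a large batch low.
    rows = [(batch_id, uuid.uuid4().hex, "uninitialized") for _ in range(count)]
    conn.executemany(
        "INSERT INTO batch_session (batch_id, session_id, state) VALUES (?, ?, ?)",
        rows,
    )
    conn.commit()
    return [session_id for _, session_id, _ in rows]

session_ids = create_batch_sessions("some-batch-id", 5000)
```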
50816432dc Merge branch 'main' into feat/batch-graphs 2023-08-21 19:51:41 +10:00
b98c9b516a feat: add batch docstrings 2023-08-21 19:51:16 +10:00
a15a5bc3b8 fix(api): correct get_batch response model 2023-08-21 19:51:02 +10:00
018ff56314 Merge branch 'main' into feat/batch-graphs 2023-08-18 23:32:08 -04:00
137fbacb92 Fix flake8 2023-08-18 15:47:27 -04:00
4b6d9a73ed Merge branch 'main' into feat/batch-graphs 2023-08-18 15:40:34 -04:00
3e26214b83 Add a few more endpoints for managing batches 2023-08-18 15:38:16 -04:00
0282f46c71 Add runs field for running the same batch multiple times 2023-08-18 13:41:07 -04:00
99e03fe92e Run unmodified graph if no batch data is provided 2023-08-18 13:33:09 -04:00
cb65526880 More session not found handling 2023-08-17 14:23:12 -04:00
59bc9ed399 fix(backend): handle BatchSessionNotFoundException in BatchManager._process()
The internal `BatchProcessStorage.get_session()` method throws when it finds nothing, but we were not catching any exceptions.

This caused an exception when the batch manager handles a `graph_execution_state_complete` event that did not originate from a batch.

Fixed by handling the exception.
2023-08-17 13:58:11 +10:00
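The shape of the fix might look roughly like the sketch below; only the exception name and `get_session()` come from the commit message, the rest is assumed for illustration:

```python
class BatchSessionNotFoundException(Exception):
    """Raised by the storage layer when no batch session matches the query."""

def handle_graph_execution_complete(storage, graph_execution_state_id):
    # Not every completed session belongs to a batch, so the lookup may fail.
    try:
        session = storage.get_session(graph_execution_state_id)
    except BatchSessionNotFoundException:
        # The event did not originate from a batch; ignore it instead of crashing.
        return None
    return session
```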
e62d5478fd fix(backend): fix sqlite cannot commit - no transaction is active
The `commit()` was called even if we hadn't executed anything
2023-08-17 13:55:38 +10:00
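One way to picture the fix above — only commit when something was actually executed. The schema and names here are assumptions:

```python
import sqlite3

def update_sessions(conn: sqlite3.Connection, changes: list[tuple[str, str]]) -> None:
    # `changes` holds (state, session_id) pairs; if it is empty, nothing was
    # executed, so there is nothing to commit.
    if not changes:
        return
    conn.executemany("UPDATE batch_session SET state = ? WHERE session_id = ?", changes)
    conn.commit()
```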
2cf0d61b3e Merge branch 'main' into feat/batch-graphs 2023-08-17 13:33:17 +10:00
cc3c2756bd feat(backend): rename batch changes variable
`updateSession` -> `changes`
2023-08-17 13:32:32 +10:00
67cf594bb3 feat(backend): add missing types to batch_manager_storage.py 2023-08-17 13:29:19 +10:00
c5b963f1a6 fix(backend): typo
`relavent` -> `relevant`
2023-08-17 12:47:58 +10:00
4d2dd6bb10 feat(backend): rename BatchManager.process to _process
Just to make it clear that this is not a method on the ABC.
2023-08-17 12:47:05 +10:00
7e4beab4ff feat(backend): surface BatchSessionNodeFoundException
Catch this exception in the router and return an appropriate `HTTPException`.
2023-08-17 12:45:32 +10:00
e16b5f7cdc feat(backend): deserialize batch session directly
If the values from the `session_dict` are invalid, the model instantiation will fail, or if we end up with an invalid `batch_id`, the app will not run. So I think just parsing the dict directly is equivalent.

Also the LSP analyser is pleased now - no red squigglies.
2023-08-17 12:37:03 +10:00
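Assuming a pydantic model along these lines (the fields shown are illustrative, not the real ones), parsing the stored dict directly looks like:

```python
from pydantic import BaseModel

class BatchSession(BaseModel):
    batch_id: str
    session_id: str
    state: str

def deserialize_batch_session(session_dict: dict) -> BatchSession:
    # Invalid values raise a ValidationError at instantiation time, so no
    # separate validation pass is needed and the type checker sees a concrete model.
    return BatchSession.parse_obj(session_dict)
```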
1f355d5810 feat(backend): update batch_manager_storage.py docstrings 2023-08-17 12:31:51 +10:00
df7370f9d9 chore(backend): remove unused code 2023-08-17 12:16:34 +10:00
5bec64d65b fix(backend): fix typings in batch_manager.py
- `batch_indicies` is `tuple[int]` not `list[int]`
- explicit `None` return values
2023-08-17 12:07:20 +10:00
8cf9bd47b2 chore(backend): remove unnecessary batch validation function
The `Batch` model is fully validated by pydantic on instantiation; we do not need any validation logic for it.
2023-08-17 11:59:47 +10:00
c91621b46c fix(backend): BatchProcess.batch_id is required
Providing a `default_factory` is enough for pydantic to know to create the attribute on instantiation if it's not already provided. We can then make the typing just `str`.
2023-08-17 11:58:29 +10:00
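A tiny sketch of the `default_factory` point above (the field and factory shown are assumptions):

```python
import uuid
from pydantic import BaseModel, Field

class BatchProcess(BaseModel):
    # pydantic fills this in at instantiation time, so the annotation can be
    # a plain `str` rather than `Optional[str]`.
    batch_id: str = Field(default_factory=lambda: uuid.uuid4().hex)

bp = BatchProcess()
assert isinstance(bp.batch_id, str)
```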
f246b236dd fix(api): fix start_batch route responses 2023-08-17 11:51:14 +10:00
f7277a8b21 Run python black 2023-08-16 15:44:52 -04:00
796ee1246b Add a batch validation test 2023-08-16 15:42:45 -04:00
29fceb960d Fix batch_manager test 2023-08-16 15:33:15 -04:00
796ff34c8a Testing out Spencer's batch data structure 2023-08-16 15:21:11 -04:00
d6a5c2dbe3 Fix tests 2023-08-16 14:35:49 -04:00
ef8dc2e8c5 Merge branch 'main' into feat/batch-graphs 2023-08-16 14:03:34 -04:00
314891a125 Merge branch 'main' into feat/batch-graphs 2023-08-15 22:42:49 -04:00
2d3094f988 Run python black 2023-08-15 21:51:45 -04:00
abf09fc8fa Switch sqlite clients to only use one connection 2023-08-15 21:46:24 -04:00
15e7ca1baa Break apart create/start logic 2023-08-15 16:28:47 -04:00
6cb90e01de Graph is required in batch create 2023-08-15 16:13:51 -04:00
faa4574970 Turn off WAL mode 2023-08-15 15:59:42 -04:00
cc5755d5b1 Update invokeai/app/services/batch_manager_storage.py
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-08-15 15:54:57 -04:00
85105fc070 Update invokeai/app/services/batch_manager_storage.py
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-08-15 15:54:17 -04:00
ed40aee4c5 Merge branch 'main' into feat/batch-graphs 2023-08-15 15:48:40 -04:00
f8d8b16267 Run python black 2023-08-14 11:01:31 -04:00
846e52f2ea Add test for batch manager 2023-08-14 10:57:18 -04:00
69f541075c Merge branch 'main' into feat/batch-graphs 2023-08-14 10:32:35 -04:00
1debc31e3d Allow cancel of running batch 2023-08-11 15:52:49 -04:00
1d798d4119 Return session ids on batch creation 2023-08-11 11:45:27 -04:00
c1dde83abb Clean up erroneously added lines 2023-08-10 14:28:50 -04:00
280ac15da2 Go back to 1 lock per table 2023-08-10 14:26:22 -04:00
e751f7d815 More testing 2023-08-10 14:09:00 -04:00
e26e4740b3 Testing sqlite issues with batch_manager 2023-08-10 11:38:28 -04:00
835d76af45 Merge branch 'main' into feat/batch-graphs 2023-08-01 16:44:30 -04:00
a3e099bbc0 Instantiate batch managers 2023-08-01 16:44:17 -04:00
a61685696f Run black formatting 2023-08-01 16:41:40 -04:00
02aa93c67c Cancel batch endpoint 2023-07-31 16:05:27 -04:00
55b921818d Create batch manager 2023-07-31 15:45:35 -04:00
bb681a8a11 Merge branch 'main' into feat/batch-graphs 2023-07-31 13:22:11 -04:00
74e0fbce42 Merge branch 'main' into feat/batch-graphs 2023-07-25 22:25:55 -04:00
f080c56771 Testing out generating a new session for each batch_index 2023-07-25 16:50:07 -04:00
d2f968b902 Trying different places of applying batches 2023-07-25 10:23:17 -04:00
e81601acf3 add todo 2023-07-24 18:12:05 -04:00
7073dc0d5d Fix next call in graphexecutionstate.next 2023-07-24 17:45:05 -04:00
d090be60e8 Make batch_indices in graph class more clear 2023-07-24 17:43:49 -04:00
4bad96d9d6 WIP running graphs as batches 2023-07-24 17:41:54 -04:00
845 changed files with 18323 additions and 48106 deletions

View File

@ -1,5 +1,5 @@
name: Feature Request name: Feature Request
description: Contribute a idea or request a new feature description: Commit a idea or Request a new feature
title: '[enhancement]: ' title: '[enhancement]: '
labels: ['enhancement'] labels: ['enhancement']
# assignees: # assignees:
@ -9,14 +9,14 @@ body:
- type: markdown - type: markdown
attributes: attributes:
value: | value: |
Thanks for taking the time to fill out this feature request! Thanks for taking the time to fill out this Feature request!
- type: checkboxes - type: checkboxes
attributes: attributes:
label: Is there an existing issue for this? label: Is there an existing issue for this?
description: | description: |
Please make use of the [search function](https://github.com/invoke-ai/InvokeAI/labels/enhancement) Please make use of the [search function](https://github.com/invoke-ai/InvokeAI/labels/enhancement)
to see if a similar issue already exists for the feature you want to request to see if a simmilar issue already exists for the feature you want to request
options: options:
- label: I have searched the existing issues - label: I have searched the existing issues
required: true required: true
@ -34,9 +34,12 @@ body:
id: whatisexpected id: whatisexpected
attributes: attributes:
label: What should this feature add? label: What should this feature add?
description: Explain the functionality this feature should add. Feature requests should be for single features. Please create multiple requests if you want to request multiple features. description: Please try to explain the functionality this feature should add
placeholder: | placeholder: |
I'd like a button that creates an image of banana sushi every time I press it. Each image should be different. There should be a toggle next to the button that enables strawberry mode, in which the images are of strawberry sushi instead. Instead of one huge textfield, it would be nice to have forms for bug-reports, feature-requests, ...
Great benefits with automatic labeling, assigning and other functionalitys not available in that form
via old-fashioned markdown-templates. I would also love to see the use of a moderator bot 🤖 like
https://github.com/marketplace/actions/issue-moderator-with-commands to auto close old issues and other things
validations: validations:
required: true required: true
@ -48,6 +51,6 @@ body:
- type: textarea - type: textarea
attributes: attributes:
label: Additional Content label: Aditional Content
description: Add any other context or screenshots about the feature request here. description: Add any other context or screenshots about the feature request here.
placeholder: This is a mockup of the design how I imagine it <screenshot> placeholder: This is a Mockup of the design how I imagine it <screenshot>

View File

@ -28,7 +28,7 @@ jobs:
run: twine check dist/* run: twine check dist/*
- name: check PyPI versions - name: check PyPI versions
if: github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/heads/release/') if: github.ref == 'refs/heads/main' || github.ref == 'refs/heads/v2.3'
run: | run: |
pip install --upgrade requests pip install --upgrade requests
python -c "\ python -c "\

View File

@ -1,4 +1,6 @@
name: style checks name: style checks
# just formatting and flake8 for now
# TODO: add isort later
on: on:
pull_request: pull_request:
@ -18,8 +20,8 @@ jobs:
- name: Install dependencies with pip - name: Install dependencies with pip
run: | run: |
pip install black flake8 Flake8-pyproject isort pip install black flake8 Flake8-pyproject
- run: isort --check-only . # - run: isort --check-only .
- run: black --check . - run: black --check .
- run: flake8 - run: flake8

View File

@ -15,10 +15,3 @@ repos:
language: system language: system
entry: flake8 entry: flake8
types: [python] types: [python]
- id: isort
name: isort
stages: [commit]
language: system
entry: isort
types: [python]

View File

@ -46,13 +46,13 @@ the foundation for multiple commercial products.
Install](https://invoke-ai.github.io/InvokeAI/installation/INSTALLATION/)] [<a Install](https://invoke-ai.github.io/InvokeAI/installation/INSTALLATION/)] [<a
href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a
href="https://invoke-ai.github.io/InvokeAI/">Documentation and href="https://invoke-ai.github.io/InvokeAI/">Documentation and
Tutorials</a>] Tutorials</a>] [<a
[<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] href="https://github.com/invoke-ai/InvokeAI/">Code and
Downloads</a>] [<a
href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>]
[<a [<a
href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion,
Ideas & Q&A</a>] Ideas & Q&A</a>]
[<a
href="https://invoke-ai.github.io/InvokeAI/contributing/CONTRIBUTING/">Contributing</a>]
<div align="center"> <div align="center">
@ -368,9 +368,9 @@ InvokeAI offers a locally hosted Web Server & React Frontend, with an industry l
The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more. The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.
### *Workflows & Nodes* ### *Node Architecture & Editor (Beta)*
InvokeAI offers a fully featured workflow management solution, enabling users to combine the power of node-based workflows with the ease of a UI. This allows for customizable generation pipelines to be developed and shared by users looking to create specific workflows to support their production use-cases. Invoke AI's backend is built on a graph-based execution architecture. This allows for customizable generation pipelines to be developed by professional users looking to create specific workflows to support their production use-cases, and will be extended in the future with additional capabilities.
### *Board & Gallery Management* ### *Board & Gallery Management*
@ -383,9 +383,8 @@ Invoke AI provides an organized gallery system for easily storing, accessing, an
- *Upscaling Tools* - *Upscaling Tools*
- *Embedding Manager & Support* - *Embedding Manager & Support*
- *Model Manager & Support* - *Model Manager & Support*
- *Workflow creation & management*
- *Node-Based Architecture* - *Node-Based Architecture*
- *Node-Based Plug-&-Play UI (Beta)*
### Latest Changes ### Latest Changes
@ -396,18 +395,20 @@ Notes](https://github.com/invoke-ai/InvokeAI/releases) and the
### Troubleshooting ### Troubleshooting
Please check out our **[Q&A](https://invoke-ai.github.io/InvokeAI/help/TROUBLESHOOT/#faq)** to get solutions for common installation Please check out our **[Q&A](https://invoke-ai.github.io/InvokeAI/help/TROUBLESHOOT/#faq)** to get solutions for common installation
problems and other issues. For more help, please join our [Discord][discord link] problems and other issues.
## Contributing ## Contributing
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so. cleanup, testing, or code reviews, is very much encouraged to do so.
Get started with contributing by reading our [Contribution documentation](https://invoke-ai.github.io/InvokeAI/contributing/CONTRIBUTING/), joining the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) or the GitHub discussion board. To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the GitHub discussion board.
If you'd like to help with translation, please see our [translation guide](docs/other/TRANSLATION.md).
If you are unfamiliar with how If you are unfamiliar with how
to contribute to GitHub projects, we have a new contributor checklist you can follow to get started contributing: to contribute to GitHub projects, here is a
[New Contributor Checklist](https://invoke-ai.github.io/InvokeAI/contributing/contribution_guides/newContributorChecklist/). [Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github). A full set of contribution guidelines, along with templates, are in progress. You can **make your pull request against the "main" branch**.
We hope you enjoy using our software as much as we enjoy creating it, We hope you enjoy using our software as much as we enjoy creating it,
and we hope that some of those of you who are reading this will elect and we hope that some of those of you who are reading this will elect
@ -423,7 +424,7 @@ their time, hard work and effort.
### Support ### Support
For support, please use this repository's GitHub Issues tracking service, or join the [Discord][discord link]. For support, please use this repository's GitHub Issues tracking service, or join the Discord.
Original portions of the software are Copyright (c) 2023 by respective contributors. Original portions of the software are Copyright (c) 2023 by respective contributors.

Binary file not shown.

Before

Width:  |  Height:  |  Size: 228 KiB

After

Width:  |  Height:  |  Size: 490 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 194 KiB

After

Width:  |  Height:  |  Size: 319 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 209 KiB

After

Width:  |  Height:  |  Size: 217 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 114 KiB

After

Width:  |  Height:  |  Size: 244 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 187 KiB

After

Width:  |  Height:  |  Size: 948 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 112 KiB

After

Width:  |  Height:  |  Size: 292 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 132 KiB

After

Width:  |  Height:  |  Size: 420 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 167 KiB

After

Width:  |  Height:  |  Size: 197 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 70 KiB

After

Width:  |  Height:  |  Size: 216 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 59 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 64 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 42 KiB

View File

@ -1,41 +1,39 @@
# Contributing # How to Contribute
## Welcome to Invoke AI
Invoke AI originated as a project built by the community, and that vision carries forward today as we aim to build the best pro-grade tools available. We work together to incorporate the latest in AI/ML research, making these tools available in over 20 languages to artists and creatives around the world as part of our fully permissive OSS project designed for individual users to self-host and use. Invoke AI originated as a project built by the community, and that vision carries forward today as we aim to build the best pro-grade tools available. We work together to incorporate the latest in AI/ML research, making these tools available in over 20 languages to artists and creatives around the world as part of our fully permissive OSS project designed for individual users to self-host and use.
# Methods of Contributing to Invoke AI ## Contributing to Invoke AI
Anyone who wishes to contribute to InvokeAI, whether features, bug fixes, code cleanup, testing, code reviews, documentation or translation is very much encouraged to do so. Anyone who wishes to contribute to InvokeAI, whether features, bug fixes, code cleanup, testing, code reviews, documentation or translation is very much encouraged to do so.
## Development To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the GitHub discussion board.
If you'd like to help with development, please see our [development guide](contribution_guides/development.md).
**New Contributors:** If you're unfamiliar with contributing to open source projects, take a look at our [new contributor guide](contribution_guides/newContributorChecklist.md). ### Areas of contribution:
## Nodes #### Development
If you'd like to add a Node, please see our [nodes contribution guide](../nodes/contributingNodes.md). If you'd like to help with development, please see our [development guide](contribution_guides/development.md). If you're unfamiliar with contributing to open source projects, there is a tutorial contained within the development guide.
## Support and Triaging #### Nodes
Helping support other users in [Discord](https://discord.gg/ZmtBAhwWhy) and on Github are valuable forms of contribution that we greatly appreciate. If you'd like to help with development, please see our [nodes contribution guide](/nodes/contributingNodes). If you're unfamiliar with contributing to open source projects, there is a tutorial contained within the development guide.
We receive many issues and requests for help from users. We're limited in bandwidth relative to our user base, so providing answers to questions or helping identify causes of issues is very helpful. By doing this, you enable us to spend time on the highest priority work. #### Documentation
## Documentation
If you'd like to help with documentation, please see our [documentation guide](contribution_guides/documentation.md). If you'd like to help with documentation, please see our [documentation guide](contribution_guides/documentation.md).
## Translation #### Translation
If you'd like to help with translation, please see our [translation guide](contribution_guides/translation.md). If you'd like to help with translation, please see our [translation guide](contribution_guides/translation.md).
## Tutorials #### Tutorials
Please reach out to @imic or @hipsterusername on [Discord](https://discord.gg/ZmtBAhwWhy) to help create tutorials for InvokeAI. Please reach out to @imic or @hipsterusername on [Discord](https://discord.gg/ZmtBAhwWhy) to help create tutorials for InvokeAI.
We hope you enjoy using our software as much as we enjoy creating it, and we hope that some of those of you who are reading this will elect to become part of our contributor community. We hope you enjoy using our software as much as we enjoy creating it, and we hope that some of those of you who are reading this will elect to become part of our contributor community.
# Contributors ### Contributors
This project is a combined effort of dedicated people from across the world. [Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for their time, hard work and effort. This project is a combined effort of dedicated people from across the world. [Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for their time, hard work and effort.
# Code of Conduct ### Code of Conduct
The InvokeAI community is a welcoming place, and we want your help in maintaining that. Please review our [Code of Conduct](https://github.com/invoke-ai/InvokeAI/blob/main/CODE_OF_CONDUCT.md) to learn more - it's essential to maintaining a respectful and inclusive environment. The InvokeAI community is a welcoming place, and we want your help in maintaining that. Please review our [Code of Conduct](https://github.com/invoke-ai/InvokeAI/blob/main/CODE_OF_CONDUCT.md) to learn more - it's essential to maintaining a respectful and inclusive environment.
@ -49,7 +47,8 @@ By making a contribution to this project, you certify that:
This disclaimer is not a license and does not grant any rights or permissions. You must obtain necessary permissions and licenses, including from third parties, before contributing to this project. This disclaimer is not a license and does not grant any rights or permissions. You must obtain necessary permissions and licenses, including from third parties, before contributing to this project.
This disclaimer is provided "as is" without warranty of any kind, whether expressed or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, or non-infringement. In no event shall the authors or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the contribution or the use or other dealings in the contribution. This disclaimer is provided "as is" without warranty of any kind, whether expressed or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, or non-infringement. In no event shall the authors or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the contribution or the use or other dealings in the contribution.
# Support
### Support
For support, please use this repository's [GitHub Issues](https://github.com/invoke-ai/InvokeAI/issues), or join the [Discord](https://discord.gg/ZmtBAhwWhy). For support, please use this repository's [GitHub Issues](https://github.com/invoke-ai/InvokeAI/issues), or join the [Discord](https://discord.gg/ZmtBAhwWhy).

View File

@ -47,9 +47,34 @@ pip install ".[dev,test]"
These are optional groups of packages which are defined within the `pyproject.toml` These are optional groups of packages which are defined within the `pyproject.toml`
and will be required for testing the changes you make to the code. and will be required for testing the changes you make to the code.
### Tests ### Running Tests
We use [pytest](https://docs.pytest.org/en/7.2.x/) for our test suite. Tests can
be found under the `./tests` folder and can be run with a single `pytest`
command. Optionally, to review test coverage you can append `--cov`.
```zsh
pytest --cov
```
Test outcomes and coverage will be reported in the terminal. In addition a more
detailed report is created in both XML and HTML format in the `./coverage`
folder. The HTML one in particular can help identify missing statements
requiring tests to ensure coverage. This can be run by opening
`./coverage/html/index.html`.
For example.
```zsh
pytest --cov; open ./coverage/html/index.html
```
??? info "HTML coverage report output"
![html-overview](../assets/contributing/html-overview.png)
![html-detail](../assets/contributing/html-detail.png)
See the [tests documentation](./TESTS.md) for information about running and writing tests.
### Reloading Changes ### Reloading Changes
Experimenting with changes to the Python source code is a drag if you have to re-start the server — Experimenting with changes to the Python source code is a drag if you have to re-start the server —
@ -142,23 +167,6 @@ and so you'll have access to the same python environment as the InvokeAI app.
This is _super_ handy. This is _super_ handy.
#### Enabling Type-Checking with Pylance
We use python's typing system in InvokeAI. PR reviews will include checking that types are present and correct. We don't enforce types with `mypy` at this time, but that is on the horizon.
Using a code analysis tool to automatically type check your code (and types) is very important when writing with types. These tools provide immediate feedback in your editor when types are incorrect, and following their suggestions lead to fewer runtime bugs.
Pylance, installed at the beginning of this guide, is the de-facto python LSP (language server protocol). It provides type checking in the editor (among many other features). Once installed, you do need to enable type checking manually:
- Open a python file
- Look along the status bar in VSCode for `{ } Python`
- Click the `{ }`
- Turn type checking on - basic is fine
You'll now see red squiggly lines where type issues are detected. Hover your cursor over the indicated symbols to see what's wrong.
In 99% of cases when the type checker says there is a problem, there really is a problem, and you should take some time to understand and resolve what it is pointing out.
#### Debugging configs with `launch.json` #### Debugging configs with `launch.json`
Debugging configs are managed in a `launch.json` file. Like most VSCode configs, Debugging configs are managed in a `launch.json` file. Like most VSCode configs,

View File

@ -1,89 +0,0 @@
# InvokeAI Backend Tests
We use `pytest` to run the backend python tests. (See [pyproject.toml](/pyproject.toml) for the default `pytest` options.)
## Fast vs. Slow
All tests are categorized as either 'fast' (no test annotation) or 'slow' (annotated with the `@pytest.mark.slow` decorator).
'Fast' tests are run to validate every PR, and are fast enough that they can be run routinely during development.
'Slow' tests are currently only run manually on an ad-hoc basis. In the future, they may be automated to run nightly. Most developers are only expected to run the 'slow' tests that directly relate to the feature(s) that they are working on.
As a rule of thumb, tests should be marked as 'slow' if there is a chance that they take >1s (e.g. on a CPU-only machine with slow internet connection). Common examples of slow tests are tests that depend on downloading a model, or running model inference.
## Running Tests
Below are some common test commands:
```bash
# Run the fast tests. (This implicitly uses the configured default option: `-m "not slow"`.)
pytest tests/
# Equivalent command to run the fast tests.
pytest tests/ -m "not slow"
# Run the slow tests.
pytest tests/ -m "slow"
# Run the slow tests from a specific file.
pytest tests/path/to/slow_test.py -m "slow"
# Run all tests (fast and slow).
pytest tests -m ""
```
## Test Organization
All backend tests are in the [`tests/`](/tests/) directory. This directory mirrors the organization of the `invokeai/` directory. For example, tests for `invokeai/model_management/model_manager.py` would be found in `tests/model_management/test_model_manager.py`.
TODO: The above statement is aspirational. A re-organization of legacy tests is required to make it true.
## Tests that depend on models
There are a few things to keep in mind when adding tests that depend on models.
1. If a required model is not already present, it should automatically be downloaded as part of the test setup.
2. If a model is already downloaded, it should not be re-downloaded unnecessarily.
3. Take reasonable care to keep the total number of models required for the tests low. Whenever possible, re-use models that are already required for other tests. If you are adding a new model, consider including a comment to explain why it is required/unique.
There are several utilities to help with model setup for tests. Here is a sample test that depends on a model:
```python
import pytest
import torch
from invokeai.backend.model_management.models.base import BaseModelType, ModelType
from invokeai.backend.util.test_utils import install_and_load_model
@pytest.mark.slow
def test_model(model_installer, torch_device):
model_info = install_and_load_model(
model_installer=model_installer,
model_path_id_or_url="HF/dummy_model_id",
model_name="dummy_model",
base_model=BaseModelType.StableDiffusion1,
model_type=ModelType.Dummy,
)
dummy_input = build_dummy_input(torch_device)
with torch.no_grad(), model_info as model:
model.to(torch_device, dtype=torch.float32)
output = model(dummy_input)
# Validate output...
```
## Test Coverage
To review test coverage, append `--cov` to your pytest command:
```bash
pytest tests/ --cov
```
Test outcomes and coverage will be reported in the terminal. In addition, a more detailed report is created in both XML and HTML format in the `./coverage` folder. The HTML output is particularly helpful in identifying untested statements where coverage should be improved. The HTML report can be viewed by opening `./coverage/html/index.html`.
??? info "HTML coverage report output"
![html-overview](../assets/contributing/html-overview.png)
![html-detail](../assets/contributing/html-detail.png)

View File

@ -4,21 +4,14 @@
If you are looking to help with a code contribution, InvokeAI uses several different technologies under the hood: Python (Pydantic, FastAPI, diffusers) and Typescript (React, Redux Toolkit, ChakraUI, Mantine, Konva). Familiarity with StableDiffusion and image generation concepts is helpful, but not essential. If you are looking to help with a code contribution, InvokeAI uses several different technologies under the hood: Python (Pydantic, FastAPI, diffusers) and Typescript (React, Redux Toolkit, ChakraUI, Mantine, Konva). Familiarity with StableDiffusion and image generation concepts is helpful, but not essential.
For more information, please review our area specific documentation:
## **Get Started**
To get started, take a look at our [new contributors checklist](newContributorChecklist.md)
Once you're setup, for more information, you can review the documentation specific to your area of interest:
* #### [InvokeAI Architecture](../ARCHITECTURE.md) * #### [InvokeAI Architecture](../ARCHITECTURE.md)
* #### [Frontend Documentation](./contributingToFrontend.md) * #### [Frontend Documentation](development_guides/contributingToFrontend.md)
* #### [Node Documentation](../INVOCATIONS.md) * #### [Node Documentation](../INVOCATIONS.md)
* #### [Local Development](../LOCAL_DEVELOPMENT.md) * #### [Local Development](../LOCAL_DEVELOPMENT.md)
If you don't feel ready to make a code contribution yet, no problem! You can also help out in other ways, such as [documentation](documentation.md) or [translation](translation.md).
If you don't feel ready to make a code contribution yet, no problem! You can also help out in other ways, such as [documentation](documentation.md), [translation](translation.md) or helping support other users and triage issues as they're reported in GitHub.
There are two paths to making a development contribution: There are two paths to making a development contribution:
@ -30,18 +23,67 @@ There are two paths to making a development contribution:
## Best Practices: ## Best Practices:
* Keep your pull requests small. Smaller pull requests are more likely to be accepted and merged * Keep your pull requests small. Smaller pull requests are more likely to be accepted and merged
* Comments! Commenting your code helps reviewers easily understand your contribution * Comments! Commenting your code helps reviwers easily understand your contribution
* Use Python and Typescript's typing systems, and consider using an editor with [LSP](https://microsoft.github.io/language-server-protocol/) support to streamline development * Use Python and Typescript's typing systems, and consider using an editor with [LSP](https://microsoft.github.io/language-server-protocol/) support to streamline development
* Make all communications public. This ensures knowledge is shared with the whole community * Make all communications public. This ensures knowledge is shared with the whole community
## **How do I make a contribution?**
Never made an open source contribution before? Wondering how contributions work in our project? Here's a quick rundown!
Before starting these steps, ensure you have your local environment [configured for development](../LOCAL_DEVELOPMENT.md).
1. Find a [good first issue](https://github.com/invoke-ai/InvokeAI/contribute) that you are interested in addressing or a feature that you would like to add. Then, reach out to our team in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord to ensure you are setup for success.
2. Fork the [InvokeAI](https://github.com/invoke-ai/InvokeAI) repository to your GitHub profile. This means that you will have a copy of the repository under **your-GitHub-username/InvokeAI**.
3. Clone the repository to your local machine using:
```bash
git clone https://github.com/your-GitHub-username/InvokeAI.git
```
If you're unfamiliar with using Git through the commandline, [GitHub Desktop](https://desktop.github.com) is an easy-to-use alternative with a UI. You can do all the same steps listed here, but through the interface.
4. Create a new branch for your fix using:
```bash
git checkout -b branch-name-here
```
5. Make the appropriate changes for the issue you are trying to address or the feature that you want to add.
6. Add the file contents of the changed files to the "snapshot" git uses to manage the state of the project, also known as the index:
```bash
git add insert-paths-of-changed-files-here
```
7. Store the contents of the index with a descriptive message.
```bash
git commit -m "Insert a short message of the changes made here"
```
8. Push the changes to the remote repository using
```markdown
git push origin branch-name-here
```
9. Submit a pull request to the **main** branch of the InvokeAI repository.
10. Title the pull request with a short description of the changes made and the issue or bug number associated with your change. For example, you can title an issue like so "Added more log outputting to resolve #1234".
11. In the description of the pull request, explain the changes that you made, any issues you think exist with the pull request you made, and any questions you have for the maintainer. It's OK if your pull request is not perfect (no pull request is), the reviewer will be able to help you fix any problems and improve it!
12. Wait for the pull request to be reviewed by other collaborators.
13. Make changes to the pull request if the reviewer(s) recommend them.
14. Celebrate your success after your pull request is merged!
If you'd like to learn more about contributing to Open Source projects, here is a [Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github).
## **Where can I go for help?** ## **Where can I go for help?**
If you need help, you can ask questions in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord. If you need help, you can ask questions in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord.
For frontend related work, **@psychedelicious** is the best person to reach out to. For frontend related work, **@pyschedelicious** is the best person to reach out to.
For backend related work, please reach out to **@blessedcoolant**, **@lstein**, **@StAlKeR7779** or **@psychedelicious**.
For backend related work, please reach out to **@blessedcoolant**, **@lstein**, **@StAlKeR7779** or **@pyschedelicious**.
## **What does the Code of Conduct mean for me?** ## **What does the Code of Conduct mean for me?**

View File

@ -10,4 +10,4 @@ When updating or creating documentation, please keep in mind InvokeAI is a tool
## Help & Questions ## Help & Questions
Please ping @imic or @hipsterusername in the [Discord](https://discord.com/channels/1020123559063990373/1049495067846524939) if you have any questions. Please ping @imic1 or @hipsterusername in the [Discord](https://discord.com/channels/1020123559063990373/1049495067846524939) if you have any questions.

View File

@ -1,68 +0,0 @@
# New Contributor Guide
If you're a new contributor to InvokeAI or Open Source Projects, this is the guide for you.
## New Contributor Checklist
- [x] Set up your local development environment & fork of InvokAI by following [the steps outlined here](../../installation/020_INSTALL_MANUAL.md#developer-install)
- [x] Set up your local tooling with [this guide](InvokeAI/contributing/LOCAL_DEVELOPMENT/#developing-invokeai-in-vscode). Feel free to skip this step if you already have tooling you're comfortable with.
- [x] Familiarize yourself with [Git](https://www.atlassian.com/git) & our project structure by reading through the [development documentation](development.md)
- [x] Join the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord
- [x] Choose an issue to work on! This can be achieved by asking in the #dev-chat channel, tackling a [good first issue](https://github.com/invoke-ai/InvokeAI/contribute) or finding an item on the [roadmap](https://github.com/orgs/invoke-ai/projects/7). If nothing in any of those places catches your eye, feel free to work on something of interest to you!
- [x] Make your first Pull Request with the guide below
- [x] Happy development! Don't be afraid to ask for help - we're happy to help you contribute!
## How do I make a contribution?
Never made an open source contribution before? Wondering how contributions work in our project? Here's a quick rundown!
Before starting these steps, ensure you have your local environment [configured for development](../LOCAL_DEVELOPMENT.md).
1. Find a [good first issue](https://github.com/invoke-ai/InvokeAI/contribute) that you are interested in addressing or a feature that you would like to add. Then, reach out to our team in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord to ensure you are setup for success.
2. Fork the [InvokeAI](https://github.com/invoke-ai/InvokeAI) repository to your GitHub profile. This means that you will have a copy of the repository under **your-GitHub-username/InvokeAI**.
3. Clone the repository to your local machine using:
```bash
git clone https://github.com/your-GitHub-username/InvokeAI.git
```
If you're unfamiliar with using Git through the commandline, [GitHub Desktop](https://desktop.github.com) is an easy-to-use alternative with a UI. You can do all the same steps listed here, but through the interface.
4. Create a new branch for your fix using:
```bash
git checkout -b branch-name-here
```
5. Make the appropriate changes for the issue you are trying to address or the feature that you want to add.
6. Add the file contents of the changed files to the "snapshot" git uses to manage the state of the project, also known as the index:
```bash
git add -A
```
7. Store the contents of the index with a descriptive message.
```bash
git commit -m "Insert a short message of the changes made here"
```
8. Push the changes to the remote repository using
```bash
git push origin branch-name-here
```
9. Submit a pull request to the **main** branch of the InvokeAI repository. If you're not sure how to, [follow this guide](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request)
10. Title the pull request with a short description of the changes made and the issue or bug number associated with your change. For example, you can title an issue like so "Added more log outputting to resolve #1234".
11. In the description of the pull request, explain the changes that you made, any issues you think exist with the pull request you made, and any questions you have for the maintainer. It's OK if your pull request is not perfect (no pull request is), the reviewer will be able to help you fix any problems and improve it!
12. Wait for the pull request to be reviewed by other collaborators.
13. Make changes to the pull request if the reviewer(s) recommend them.
14. Celebrate your success after your pull request is merged!
If you'd like to learn more about contributing to Open Source projects, here is a [Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github).
## Best Practices:
* Keep your pull requests small. Smaller pull requests are more likely to be accepted and merged
* Comments! Commenting your code helps reviewers easily understand your contribution
* Use Python and Typescript's typing systems, and consider using an editor with [LSP](https://microsoft.github.io/language-server-protocol/) support to streamline development
* Make all communications public. This ensures knowledge is shared with the whole community
## **Where can I go for help?**
If you need help, you can ask questions in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord.
For frontend related work, **@pyschedelicious** is the best person to reach out to.
For backend related work, please reach out to **@blessedcoolant**, **@lstein**, **@StAlKeR7779** or **@pyschedelicious**.

View File

@ -21,8 +21,8 @@ TI files that you'll encounter are `.pt` and `.bin` files, which are produced by
different TI training packages. InvokeAI supports both formats, but its different TI training packages. InvokeAI supports both formats, but its
[built-in TI training system](TRAINING.md) produces `.pt`. [built-in TI training system](TRAINING.md) produces `.pt`.
[Hugging Face](https://huggingface.co/sd-concepts-library) has The [Hugging Face company](https://huggingface.co/sd-concepts-library) has
amassed a large library of &gt;800 community-contributed TI files covering a amassed a large ligrary of &gt;800 community-contributed TI files covering a
broad range of subjects and styles. You can also install your own or others' TI files broad range of subjects and styles. You can also install your own or others' TI files
by placing them in the designated directory for the compatible model type by placing them in the designated directory for the compatible model type

View File

@ -159,7 +159,7 @@ groups in `invokeia.yaml`:
| `host` | `localhost` | Name or IP address of the network interface that the web server will listen on | | `host` | `localhost` | Name or IP address of the network interface that the web server will listen on |
| `port` | `9090` | Network port number that the web server will listen on | | `port` | `9090` | Network port number that the web server will listen on |
| `allow_origins` | `[]` | A list of host names or IP addresses that are allowed to connect to the InvokeAI API in the format `['host1','host2',...]` | | `allow_origins` | `[]` | A list of host names or IP addresses that are allowed to connect to the InvokeAI API in the format `['host1','host2',...]` |
| `allow_credentials` | `true` | Require credentials for a foreign host to access the InvokeAI API (don't change this) | | `allow_credentials | `true` | Require credentials for a foreign host to access the InvokeAI API (don't change this) |
| `allow_methods` | `*` | List of HTTP methods ("GET", "POST") that the web server is allowed to use when accessing the API | | `allow_methods` | `*` | List of HTTP methods ("GET", "POST") that the web server is allowed to use when accessing the API |
| `allow_headers` | `*` | List of HTTP headers that the web server will accept when accessing the API | | `allow_headers` | `*` | List of HTTP headers that the web server will accept when accessing the API |

View File

@ -1,11 +1,13 @@
--- ---
title: Control Adapters title: ControlNet
--- ---
# :material-loupe: Control Adapters # :material-loupe: ControlNet
## ControlNet ## ControlNet
ControlNet
ControlNet is a powerful set of features developed by the open-source ControlNet is a powerful set of features developed by the open-source
community (notably, Stanford researcher community (notably, Stanford researcher
[**@ilyasviel**](https://github.com/lllyasviel)) that allows you to [**@ilyasviel**](https://github.com/lllyasviel)) that allows you to
@ -18,7 +20,7 @@ towards generating images that better fit your desired style or
outcome. outcome.
#### How it works ### How it works
ControlNet works by analyzing an input image, pre-processing that ControlNet works by analyzing an input image, pre-processing that
image to identify relevant information that can be interpreted by each image to identify relevant information that can be interpreted by each
@ -28,7 +30,7 @@ composition, or other aspects of the image to better achieve a
specific result. specific result.
#### Models ### Models
InvokeAI provides access to a series of ControlNet models that provide InvokeAI provides access to a series of ControlNet models that provide
different effects or styles in your generated images. Currently different effects or styles in your generated images. Currently
@ -94,8 +96,6 @@ A model that generates normal maps from input images, allowing for more realisti
**Image Segmentation**: **Image Segmentation**:
A model that divides input images into segments or regions, each of which corresponds to a different object or part of the image. (More details coming soon) A model that divides input images into segments or regions, each of which corresponds to a different object or part of the image. (More details coming soon)
**QR Code Monster**:
A model that helps generate creative QR codes that still scan. Can also be used to create images with text, logos or shapes within them.
**Openpose**: **Openpose**:
The OpenPose control model allows for the identification of the general pose of a character by pre-processing an existing image with a clear human structure. With advanced options, Openpose can also detect the face or hands in the image. The OpenPose control model allows for the identification of the general pose of a character by pre-processing an existing image with a clear human structure. With advanced options, Openpose can also detect the face or hands in the image.
@ -104,7 +104,7 @@ The OpenPose control model allows for the identification of the general pose of
The MediaPipe Face identification processor is able to clearly identify facial features in order to capture vivid expressions of human faces. The MediaPipe Face identification processor is able to clearly identify facial features in order to capture vivid expressions of human faces.
**Tile**: **Tile (experimental)**:
The Tile model fills out details in the image to match the image, rather than the prompt. The Tile Model is a versatile tool that offers a range of functionalities. Its primary capabilities can be boiled down to two main behaviors: The Tile model fills out details in the image to match the image, rather than the prompt. The Tile Model is a versatile tool that offers a range of functionalities. Its primary capabilities can be boiled down to two main behaviors:
@ -117,10 +117,12 @@ The Tile Model can be a powerful tool in your arsenal for enhancing image qualit
With Pix2Pix, you can input an image into the controlnet, and then "instruct" the model to change it using your prompt. For example, you can say "Make it winter" to add more wintry elements to a scene. With Pix2Pix, you can input an image into the controlnet, and then "instruct" the model to change it using your prompt. For example, you can say "Make it winter" to add more wintry elements to a scene.
**Inpaint**: Coming Soon - Currently this model is available but not functional on the Canvas. An upcoming release will provide additional capabilities for using this model when inpainting.
Each of these models can be adjusted and combined with other ControlNet models to achieve different results, giving you even more control over your image generation process. Each of these models can be adjusted and combined with other ControlNet models to achieve different results, giving you even more control over your image generation process.
### Using ControlNet ## Using ControlNet
To use ControlNet, you can simply select the desired model and adjust both the ControlNet and Pre-processor settings to achieve the desired result. You can also use multiple ControlNet models at the same time, allowing you to achieve even more complex effects or styles in your generated images. To use ControlNet, you can simply select the desired model and adjust both the ControlNet and Pre-processor settings to achieve the desired result. You can also use multiple ControlNet models at the same time, allowing you to achieve even more complex effects or styles in your generated images.
@ -132,31 +134,3 @@ Weight - Strength of the Controlnet model applied to the generation for the sect
Start/End - 0 represents the start of the generation, 1 represents the end. The Start/end setting controls what steps during the generation process have the ControlNet applied. Start/End - 0 represents the start of the generation, 1 represents the end. The Start/end setting controls what steps during the generation process have the ControlNet applied.
Additionally, each ControlNet section can be expanded in order to manipulate settings for the image pre-processor that adjusts your uploaded image before using it in when you Invoke. Additionally, each ControlNet section can be expanded in order to manipulate settings for the image pre-processor that adjusts your uploaded image before using it in when you Invoke.
## IP-Adapter
[IP-Adapter](https://ip-adapter.github.io) is a tooling that allows for image prompt capabilities with text-to-image diffusion models. IP-Adapter works by analyzing the given image prompt to extract features, then passing those features to the UNet along with any other conditioning provided.
![IP-Adapter + T2I](https://github.com/tencent-ailab/IP-Adapter/raw/main/assets/demo/ip_adpter_plus_multi.jpg)
![IP-Adapter + IMG2IMG](https://github.com/tencent-ailab/IP-Adapter/blob/main/assets/demo/image-to-image.jpg)
#### Installation
There are several ways to install IP-Adapter models with an existing InvokeAI installation:
1. Through the command line interface launched from the invoke.sh / invoke.bat scripts, option [5] to download models.
2. Through the Model Manager UI with models from the *Tools* section of [www.models.invoke.ai](www.models.invoke.ai). To do this, copy the repo ID from the desired model page, and paste it in the Add Model field of the model manager. **Note** Both the IP-Adapter and the Image Encoder must be installed for IP-Adapter to work. For example, the [SD 1.5 IP-Adapter](https://models.invoke.ai/InvokeAI/ip_adapter_plus_sd15) and [SD1.5 Image Encoder](https://models.invoke.ai/InvokeAI/ip_adapter_sd_image_encoder) must be installed to use IP-Adapter with SD1.5 based models.
3. **Advanced -- Not recommended ** Manually downloading the IP-Adapter and Image Encoder files - Image Encoder folders should be placed in the `models\any\clip_vision` folders. IP Adapter Model folders should be placed in the relevant `ip-adapter` folder of the relevant base model folder of the Invoke root directory. For example, for the SDXL IP-Adapter, files should be added to the `model/sdxl/ip_adapter/` folder.
#### Using IP-Adapter
IP-Adapter can be used by navigating to the *Control Adapters* options and enabling IP-Adapter.
IP-Adapter requires an image to be used as the Image Prompt. It can also be used in conjunction with text prompts, Image-to-Image, Inpainting, Outpainting, ControlNets and LoRAs.
Each IP-Adapter has two settings that control how it is applied:
* Weight - Strength of the IP-Adapter model applied to the generation for the section, defined by start/end
* Start/End - 0 represents the start of the generation, 1 represents the end. The Start/end setting controls what steps during the generation process have the IP-Adapter applied.


@ -2,50 +2,17 @@
title: Model Merging
---
InvokeAI provides the ability to merge two or three diffusers-type models into a new merged model. The # :material-image-off: Model Merging
resulting model will combine characteristics of the original, and can
be used to teach an old model new tricks.
## How to Merge Models
Model Merging can be done by navigating to the Model Manager and clicking the "Merge Models" tab. From there, you can select the models and settings you want to use to merge the models. As of version 2.3, InvokeAI comes with a script that allows you to
merge two or three diffusers-type models into a new merged model. The
## Settings resulting model will combine characteristics of the original, and can
be used to teach an old model new tricks.
* Model Selection: there are three multiple choice fields that
display all the diffusers-style models that InvokeAI knows about.
If you do not see the model you are looking for, then it is probably
a legacy checkpoint model and needs to be converted using the
`invoke` command-line client and its `!optimize` command. You
must select at least two models to merge. The third can be left at
"None" if you desire.
* Alpha: This is the ratio to use when combining models. It ranges
from 0 to 1. The higher the value, the more weight is given to the
2nd and (optionally) 3rd models. So if you have two models named "A"
and "B", an alpha value of 0.25 will give you a merged model that is
75% A and 25% B.
* Interpolation Method: This is the method used to combine
weights. The options are "weighted_sum" (the default), "sigmoid",
"inv_sigmoid" and "add_difference". Each produces slightly different
results. When three models are in use, only "add_difference" is
available.
* Save Location: The location you want the merged model to be saved in. Default is in the InvokeAI root folder
* Name for merged model: This is the name for the new model. Please
use InvokeAI conventions - only alphanumeric letters and the
characters ".+-".
* Ignore Mismatches / Force: Not all models are compatible with each other. The merge
script will check for compatibility and refuse to merge ones that
are incompatible. Set this checkbox to try merging anyway.
You may run the merge script by starting the invoke launcher
(`invoke.sh` or `invoke.bat`) and choosing the option (4) for _merge (`invoke.sh` or `invoke.bat`) and choosing the option for _merge
models_. This will launch a text-based interactive user interface that
prompts you to select the models to merge, how to merge them, and the
merged model name.
@ -73,4 +40,34 @@ this to get back.
If the merge runs successfully, it will create a new diffusers model
under the selected name and register it with InvokeAI.
## The Settings
* Model Selection -- there are three multiple choice fields that
display all the diffusers-style models that InvokeAI knows about.
If you do not see the model you are looking for, then it is probably
a legacy checkpoint model and needs to be converted using the
`invoke` command-line client and its `!optimize` command. You
must select at least two models to merge. The third can be left at
"None" if you desire.
* Alpha -- This is the ratio to use when combining models. It ranges
from 0 to 1. The higher the value, the more weight is given to the
2nd and (optionally) 3rd models. So if you have two models named "A"
and "B", an alpha value of 0.25 will give you a merged model that is
75% A and 25% B. (A short illustrative sketch of how alpha and the
interpolation methods combine weights follows this list.)
* Interpolation Method -- This is the method used to combine
weights. The options are "weighted_sum" (the default), "sigmoid",
"inv_sigmoid" and "add_difference". Each produces slightly different
results. When three models are in use, only "add_difference" is
available. (TODO: cite a reference that describes what these
interpolation methods actually do and how to decide among them).
* Force -- Not all models are compatible with each other. The merge
script will check for compatibility and refuse to merge ones that
are incompatible. Set this checkbox to try merging anyway.
* Name for merged model -- This is the name for the new model. Please
use InvokeAI conventions -- only alphanumeric characters and the
characters ".+-".


@ -142,7 +142,7 @@ Prompt2prompt `.swap()` is not compatible with xformers, which will be temporari
The `prompt2prompt` code is based off
[bloc97's colab](https://github.com/bloc97/CrossAttentionControl).
### Escaping parentheses and speech marks ### Escaping parentheses () and speech marks ""
If the model you are using has parentheses () or speech marks "" as part of its
syntax, you will need to "escape" these using a backslash, so that `(my_keyword)`
@ -246,7 +246,7 @@ To create a Dynamic Prompt, follow these steps:
Within the braces, separate each option using a vertical bar |.
If you want to include multiple options from a single group, prefix with the desired number and $$.
For instance: A {house|apartment|lodge|cottage} in {summer|winter|autumn|spring} designed in {style1|style2|style3}. For instance: A {house|apartment|lodge|cottage} in {summer|winter|autumn|spring} designed in {2$$style1|style2|style3}.
### How Dynamic Prompts Work
Once a Dynamic Prompt is configured, the system generates an array of combinations using the options provided. Each group of options in curly braces is treated independently, with the system selecting one option from each group. For a prefixed set (e.g., 2$$), the system will select two distinct options.
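As a rough sketch of the expansion just described (the real Dynamic Prompts implementation is more featureful, and the separator used when N$$ picks several options is an assumption here):

```python
# Toy expansion of the {a|b|c} and N$$ syntax described above.
# The real Dynamic Prompts library is more capable; the ", " separator
# used when N$$ picks several options is an assumption for illustration.
import itertools
import re

def expand(prompt: str) -> list[str]:
    groups = []
    for body in re.findall(r"\{([^{}]+)\}", prompt):
        count = 1
        if "$$" in body:                      # e.g. "2$$style1|style2|style3"
            prefix, body = body.split("$$", 1)
            count = int(prefix)
        options = body.split("|")
        # N$$ selects N distinct options from the group.
        groups.append([", ".join(combo) for combo in itertools.combinations(options, count)])
    template = re.sub(r"\{[^{}]+\}", "{}", prompt)
    return [template.format(*choice) for choice in itertools.product(*groups)]

for p in expand("A {house|apartment} in {summer|winter} designed in {2$$style1|style2|style3}."):
    print(p)   # 2 x 2 x 3 = 12 prompts
```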
@ -273,36 +273,3 @@ Below are some useful strategies for creating Dynamic Prompts:
Experiment with different quantities for the prefix. For example, 3$$ will select three distinct options.
Be aware of coherence in your prompts. Although the system can generate all possible combinations, not all may semantically make sense. Therefore, carefully choose the options for each group.
Always review and fine-tune the generated prompts as needed. While Dynamic Prompts can help you generate a multitude of combinations, the final polishing and refining remain in your hands.
## SDXL Prompting
Prompting with SDXL is slightly different than prompting with SD1.5 or SD2.1 models - SDXL expects a prompt _and_ a style.
### Prompting
<figure markdown>
![SDXL prompt boxes in InvokeAI](../assets/prompt_syntax/sdxl-prompt.png)
</figure>
In the prompt box, enter a positive or negative prompt as you normally would.
For the style box you can enter a style that you want the image to be generated in. You can use styles from this example list, or any other style you wish: anime, photographic, digital art, comic book, fantasy art, analog film, neon punk, isometric, low poly, origami, line art, cinematic, 3d model, pixel art, etc.
### Concatenated Prompts
InvokeAI also has the option to concatenate the prompt and style inputs, by pressing the "link" button in the Positive Prompt box.
This concatenates the prompt & style inputs, and passes the joined prompt and style to the SDXL model.
![SDXL concatenated prompt boxes in InvokeAI](../assets/prompt_syntax/sdxl-prompt-concatenated.png)
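For a rough mental model of what the link button does (the exact joining rule InvokeAI applies is an implementation detail and may differ), consider this sketch:

```python
# Rough illustration only: with the prompt and style boxes "linked",
# the style text is folded into the prompt before being passed to the
# SDXL model. The exact joining rule InvokeAI uses may differ.
prompt = "a lighthouse on a rocky coast at dusk"
style = "low poly"

combined = f"{prompt} {style}"
print(combined)  # "a lighthouse on a rocky coast at dusk low poly"
```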


@ -43,22 +43,27 @@ into the directory
InvokeAI 2.3 and higher comes with a text console-based training front
end. From within the `invoke.sh`/`invoke.bat` Invoke launcher script,
start the training tool by selecting choice (3): start the front end by selecting choice (3):
```sh
1 "Generate images with a browser-based interface" Do you want to generate images using the
2 "Explore InvokeAI nodes using a command-line interface" 1: Browser-based UI
3 "Textual inversion training" 2: Command-line interface
4 "Merge models (diffusers type only)" 3: Run textual inversion training
5 "Download and install models" 4: Merge models (diffusers type only)
6 "Change InvokeAI startup options" 5: Download and install models
7 "Re-run the configure script to fix a broken install or to complete a major upgrade" 6: Change InvokeAI startup options
8 "Open the developer console" 7: Re-run the configure script to fix a broken install
9 "Update InvokeAI" 8: Open the developer console
9: Update InvokeAI
10: Command-line help
Q: Quit
Please enter 1-10, Q: [1]
```
Alternatively, you can select option (8) or from the command line, with the InvokeAI virtual environment active, From the command line, with the InvokeAI virtual environment active,
you can then launch the front end with the command `invokeai-ti --gui`. you can launch the front end with the command `invokeai-ti --gui`.
This will launch a text-based front end that will look like this:


@ -1,336 +0,0 @@
---
title: Command-line Utilities
---
# :material-file-document: Utilities
# Command-line Utilities
InvokeAI comes with several scripts that are accessible via the
command line. To access these commands, start the "developer's
console" from the launcher (`invoke.bat` menu item [8]). Users who are
familiar with Python can alternatively activate InvokeAI's virtual
environment (typically, but not necessarily `invokeai/.venv`).
In the developer's console, type the script's name to run it. To get a
synopsis of what a utility does and the command-line arguments it
accepts, pass it the `-h` argument, e.g.
```bash
invokeai-merge -h
```
## **invokeai-web**
This script launches the web server and is effectively identical to
selecting option [1] in the launcher. An advantage of launching the
server from the command line is that you can override any
configuration setting in `invokeai.yaml` using like-named command-line
arguments. For example, to temporarily change the size of the RAM
cache to 7 GB, you can launch as follows:
```bash
invokeai-web --ram 7
```
## **invokeai-merge**
This is the model merge script, the same as launcher option [4]. Call
it with the `--gui` command-line argument to start the interactive
console-based GUI. Alternatively, you can run it non-interactively
using command-line arguments as illustrated in the example below which
merges models named `stable-diffusion-1.5` and `inkdiffusion` into a new model named
`my_new_model`:
```bash
invokeai-merge --force --base-model sd-1 --models stable-diffusion-1.5 inkdiffusion --merged_model_name my_new_model
```
## **invokeai-ti**
This is the textual inversion training script that is run by launcher
option [3]. Call it with `--gui` to run the interactive console-based
front end. It can also be run non-interactively. It has about a
zillion arguments, but a typical training session can be launched
with:
```bash
invokeai-ti --model stable-diffusion-1.5 \
--placeholder_token 'jello' \
--learnable_property object \
--num_train_epochs 50 \
--train_data_dir /path/to/training/images \
--output_dir /path/to/trained/model
```
(Note that \\ is the Linux/Mac long-line continuation character. Use ^
in Windows).
## **invokeai-install**
This is the console-based model install script that is run by launcher
option [5]. If called without arguments, it will launch the
interactive console-based interface. It can also be used
non-interactively to list, add and remove models as shown by these
examples:
* This will download and install three models from CivitAI, HuggingFace,
and local disk:
```bash
invokeai-install --add https://civitai.com/api/download/models/161302 ^
gsdf/Counterfeit-V3.0 ^
D:\Models\merge_model_two.safetensors
```
(Note that ^ is the Windows long-line continuation character. Use \\ on
Linux/Mac).
* This will list installed models of type `main`:
```bash
invokeai-model-install --list-models main
```
* This will delete the models named `voxel-ish` and `realisticVision`:
```bash
invokeai-model-install --delete voxel-ish realisticVision
```
## **invokeai-configure**
This is the console-based configure script that ran when InvokeAI was
first installed. You can run it again at any time to change the
configuration or repair a broken install.
Called without any arguments, `invokeai-configure` enters interactive
mode with two screens. The first screen is a form that provides access
to most of InvokeAI's configuration options. The second screen lets
you download, add, and delete models interactively. When you exit the
second screen, the script will add any missing "support models"
needed for core functionality, and any selected "sd weights" which are
the model checkpoint/diffusers files.
This behavior can be changed via a series of command-line
arguments. Here are some of the useful ones:
* `invokeai-configure --skip-sd-weights --skip-support-models`
This will run just the configuration part of the utility, skipping
downloading of support models and stable diffusion weights.
* `invokeai-configure --yes`
This will run the configure script non-interactively. It will set the
configuration options to their default values, install/repair support
models, and download the "recommended" set of SD models.
* `invokeai-configure --yes --default_only`
This will run the configure script non-interactively. In contrast to
the previous command, it will only download the default SD model,
Stable Diffusion v1.5.
* `invokeai-configure --yes --default_only --skip-sd-weights`
This is similar to the previous command, but will not download any
SD models at all. It is usually used to repair a broken install.
By default, `invokeai-configure` runs on the currently active InvokeAI
root folder. To run it against a different root, pass it the `--root
</path/to/root>` argument.
Lastly, you can use `invokeai-configure` to create a working root
directory entirely from scratch. Assuming you wish to make a root directory
named `InvokeAI-New`, run this command:
```bash
invokeai-configure --root InvokeAI-New --yes --default_only
```
This will create a minimally functional root directory. You can now
launch the web server against it with `invokeai-web --root InvokeAI-New`.
## **invokeai-update**
This is the interactive console-based script that is run by launcher
menu item [9] to update to a new version of InvokeAI. It takes no
command-line arguments.
## **invokeai-metadata**
This is a script which takes a list of InvokeAI-generated images and
outputs their metadata in the same JSON format that you get from the
`</>` button in the Web GUI. For example:
```bash
$ invokeai-metadata ffe2a115-b492-493c-afff-7679aa034b50.png
ffe2a115-b492-493c-afff-7679aa034b50.png:
{
"app_version": "3.1.0",
"cfg_scale": 8.0,
"clip_skip": 0,
"controlnets": [],
"generation_mode": "sdxl_txt2img",
"height": 1024,
"loras": [],
"model": {
"base_model": "sdxl",
"model_name": "stable-diffusion-xl-base-1.0",
"model_type": "main"
},
"negative_prompt": "",
"negative_style_prompt": "",
"positive_prompt": "military grade sushi dinner for shock troopers",
"positive_style_prompt": "",
"rand_device": "cpu",
"refiner_cfg_scale": 7.5,
"refiner_model": {
"base_model": "sdxl-refiner",
"model_name": "sd_xl_refiner_1.0",
"model_type": "main"
},
"refiner_negative_aesthetic_score": 2.5,
"refiner_positive_aesthetic_score": 6.0,
"refiner_scheduler": "euler",
"refiner_start": 0.8,
"refiner_steps": 20,
"scheduler": "euler",
"seed": 387129902,
"steps": 25,
"width": 1024
}
```
You may list multiple files on the command line.
## **invokeai-import-images**
InvokeAI uses a database to store information about images it
generated, and just copying the image files from one InvokeAI root
directory to another does not automatically import those images into
the destination's gallery. This script allows you to bulk import
images generated by one instance of InvokeAI into a gallery maintained
by another. It also works on images generated by older versions of
InvokeAI, going way back to version 1.
This script has an interactive mode only. The following example shows
it in action:
```bash
$ invokeai-import-images
===============================================================================
This script will import images generated by earlier versions of
InvokeAI into the currently installed root directory:
/home/XXXX/invokeai-main
If this is not what you want to do, type ctrl-C now to cancel.
===============================================================================
= Configuration & Settings
Found invokeai.yaml file at /home/XXXX/invokeai-main/invokeai.yaml:
Database : /home/XXXX/invokeai-main/databases/invokeai.db
Outputs : /home/XXXX/invokeai-main/outputs/images
Use these paths for import (yes) or choose different ones (no) [Yn]:
Inputs: Specify absolute path containing InvokeAI .png images to import: /home/XXXX/invokeai-2.3/outputs/images/
Include files from subfolders recursively [yN]?
Options for board selection for imported images:
1) Select an existing board name. (found 4)
2) Specify a board name to create/add to.
3) Create/add to board named 'IMPORT'.
4) Create/add to board named 'IMPORT' with the current datetime string appended (.e.g IMPORT_20230919T203519Z).
5) Create/add to board named 'IMPORT' with a the original file app_version appended (.e.g IMPORT_2.2.5).
Specify desired board option: 3
===============================================================================
= Import Settings Confirmation
Database File Path : /home/XXXX/invokeai-main/databases/invokeai.db
Outputs/Images Directory : /home/XXXX/invokeai-main/outputs/images
Import Image Source Directory : /home/XXXX/invokeai-2.3/outputs/images/
Recurse Source SubDirectories : No
Count of .png file(s) found : 5785
Board name option specified : IMPORT
Database backup will be taken at : /home/XXXX/invokeai-main/databases/backup
Notes about the import process:
- Source image files will not be modified, only copied to the outputs directory.
- If the same file name already exists in the destination, the file will be skipped.
- If the same file name already has a record in the database, the file will be skipped.
- Invoke AI metadata tags will be updated/written into the imported copy only.
- On the imported copy, only Invoke AI known tags (latest and legacy) will be retained (dream, sd-metadata, invokeai, invokeai_metadata)
- A property 'imported_app_version' will be added to metadata that can be viewed in the UI's metadata viewer.
- The new 3.x InvokeAI outputs folder structure is flat so recursively found source imges will all be placed into the single outputs/images folder.
Do you wish to continue with the import [Yn] ?
Making DB Backup at /home/lstein/invokeai-main/databases/backup/backup-20230919T203519Z-invokeai.db...Done!
===============================================================================
Importing /home/XXXX/invokeai-2.3/outputs/images/17d09907-297d-4db3-a18a-60b337feac66.png
... (5785 more lines) ...
===============================================================================
= Import Complete - Elpased Time: 0.28 second(s)
Source File(s) : 5785
Total Imported : 5783
Skipped b/c file already exists on disk : 1
Skipped b/c file already exists in db : 0
Errors during import : 1
```
## **invokeai-db-maintenance**
This script helps maintain the integrity of your InvokeAI database by
finding and fixing three problems that can arise over time:
1. An image was manually deleted from the outputs directory, leaving a
dangling image record in the InvokeAI database. This will cause a
black image to appear in the gallery. This is an "orphaned database
image record." The script can fix this by running a "clean"
operation on the database, removing the orphaned entries.
2. An image is present in the outputs directory but there is no
corresponding entry in the database. This can happen when the image
is added manually to the outputs directory, or if a crash occurred
after the image was generated but before the database was
completely updated. The symptom is that the image is present in the
outputs folder but doesn't appear in the InvokeAI gallery. This is
called an "orphaned image file." The script can fix this problem by
running an "archive" operation in which orphaned files are moved
into a directory named `outputs/images-archive`. If you wish, you
can then run `invokeai-import-images` to reimport these images back
into the database.
3. The thumbnail for an image is missing, again causing a black
gallery thumbnail. This is fixed by running the "thumbnails"
operation, which simply regenerates and re-registers the missing
thumbnail.
You can find and fix all three of these problems in a single go by
executing this command:
```bash
invokeai-db-maintenance --operation all
```
Or you can run just the clean and thumbnail operations like this:
```bash
invokeai-db-maintenance --operation clean, thumbnail
```
If called without any arguments, the script will ask you which
operations you wish to perform.
## **invokeai-migrate3**
This script will migrate settings and models (but not images!) from an
InvokeAI v2.3 root folder to an InvokeAI 3.X folder. Call it with the
source and destination root folders like this:
```bash
invokeai-migrate3 --from ~/invokeai-2.3 --to invokeai-3.1.1
```
Both directories must previously have been properly created and
initialized by `invokeai-configure`. If you wish to migrate the images
contained in the older root as well, you can use the
`invokeai-import-images` script described earlier.
---
Copyright (c) 2023, Lincoln Stein and the InvokeAI Development Team


@ -51,9 +51,6 @@ Prevent InvokeAI from displaying unwanted racy images.
### * [Controlling Logging](LOGGING.md)
Control how InvokeAI logs status messages.
### * [Command-line Utilities](UTILITIES.md)
A list of the command-line utilities available with InvokeAI.
<!-- OUT OF DATE
### * [Miscellaneous](OTHER.md)
Run InvokeAI on Google Colab, generate images with repeating patterns,


@ -15,8 +15,7 @@ title: Home
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@fortawesome/fontawesome-free@6.2.1/css/fontawesome.min.css">
<style>
.button {
width: 100%; width: 300px;
max-width: 100%;
height: 50px;
background-color: #448AFF;
color: #fff;
@ -28,9 +27,8 @@ title: Home
.button-container {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(200px, 1fr)); grid-template-columns: repeat(3, 300px);
gap: 20px;
justify-content: center;
}
.button:hover {
@ -147,7 +145,6 @@ Mac and Linux machines, and runs on GPU cards with as little as 4 GB of RAM.
### InvokeAI Configuration
- [Guide to InvokeAI Runtime Settings](features/CONFIGURATION.md)
- [Database Maintenance and other Command Line Utilities](features/UTILITIES.md)
## :octicons-log-16: Important Changes Since Version 2.3


@ -256,10 +256,6 @@ manager, please follow these steps:
*highly recommended** if your virtual environment is located outside of
your runtime directory.
!!! tip
On linux, it is recommended to run invokeai with the following env var: `MALLOC_MMAP_THRESHOLD_=1048576`. For example: `MALLOC_MMAP_THRESHOLD_=1048576 invokeai --web`. This helps to prevent memory fragmentation that can lead to memory accumulation over time. This env var is set automatically when running via `invoke.sh`.
10. Render away!
Browse the [features](../features/index.md) section to learn about all the
@ -291,7 +287,7 @@ manager, please follow these steps:
Leave off the `--gui` option to run the script using command-line arguments. Pass the `--help` argument
to get usage instructions.
## Developer Install ### Developer Install
If you have an interest in how InvokeAI works, or you would like to
add features or bugfixes, you are encouraged to install the source
@ -300,29 +296,18 @@ code for InvokeAI. For this to work, you will need to install the
on your system, please see the [Git Installation
Guide](https://github.com/git-guides/install-git)
You will also need to install the [frontend development toolchain](https://github.com/invoke-ai/InvokeAI/blob/main/docs/contributing/contribution_guides/contributingToFrontend.md). 1. From the command line, run this command:
If you have a "normal" installation, you should create a totally separate virtual environment for the git-based installation, else the two may interfere.
> **Why do I need the frontend toolchain**?
>
> The InvokeAI project uses trunk-based development. That means our `main` branch is the development branch, and releases are tags on that branch. Because development is very active, we don't keep an updated build of the UI in `main` - we only build it for production releases.
>
> That means that between releases, to have a functioning application when running directly from the repo, you will need to run the UI in dev mode or build it regularly (any time the UI code changes).
1. Create a fork of the InvokeAI repository through the GitHub UI or [this link](https://github.com/invoke-ai/InvokeAI/fork)
2. From the command line, run this command:
```bash
git clone https://github.com/<your_github_username>/InvokeAI.git git clone https://github.com/invoke-ai/InvokeAI.git
```
This will create a directory named `InvokeAI` and populate it with the
full source code from your fork of the InvokeAI repository. full source code from the InvokeAI repository.
3. Activate the InvokeAI virtual environment as per step (4) of the manual 2. Activate the InvokeAI virtual environment as per step (4) of the manual
installation protocol (important!)
4. Enter the InvokeAI repository directory and run one of these 3. Enter the InvokeAI repository directory and run one of these
commands, based on your GPU:
=== "CUDA (NVidia)"
@ -348,15 +333,11 @@ installation protocol (important!)
Be sure to pass `-e` (for an editable install) and don't forget the
dot ("."). It is part of the command.
5. Install the [frontend toolchain](https://github.com/invoke-ai/InvokeAI/blob/main/docs/contributing/contribution_guides/contributingToFrontend.md) and do a production build of the UI as described. You can now run `invokeai` and its related commands. The code will be
6. You can now run `invokeai` and its related commands. The code will be
read from the repository, so that you can edit the .py source files
and watch the code's behavior change.
When you pull in new changes to the repo, be sure to re-build the UI. 4. If you wish to contribute to the InvokeAI project, you are
7. If you wish to contribute to the InvokeAI project, you are
encouraged to establish a GitHub account and "fork"
https://github.com/invoke-ai/InvokeAI into your own copy of the
repository. You can then use GitHub functions to create and submit


@ -57,30 +57,6 @@ familiar with containerization technologies such as Docker.
For downloads and instructions, visit the [NVIDIA CUDA Container
Runtime Site](https://developer.nvidia.com/nvidia-container-runtime)
### cuDNN Installation for 40/30 Series Optimization* (Optional)
1. Find the InvokeAI folder
2. Click on .venv folder - e.g., YourInvokeFolderHere\\.venv
3. Click on Lib folder - e.g., YourInvokeFolderHere\\.venv\Lib
4. Click on site-packages folder - e.g., YourInvokeFolderHere\\.venv\Lib\site-packages
5. Click on Torch directory - e.g., YourInvokeFolderHere\InvokeAI\\.venv\Lib\site-packages\torch
6. Click on the lib folder - e.g., YourInvokeFolderHere\\.venv\Lib\site-packages\torch\lib
7. Copy everything inside the folder and save it elsewhere as a backup.
8. Go to __https://developer.nvidia.com/cudnn__
9. Login or create an Account.
10. Choose the newer version of cuDNN. **Note:**
There are two versions, 11.x and 12.x, for the different GPU architectures (Turing, Maxwell, etc.).
You can find which version you should download from [this link](https://docs.nvidia.com/deeplearning/cudnn/support-matrix/index.html).
11. Download the latest version and extract it from the download location
12. Find the bin folder E\cudnn-windows-x86_64-__Whatever Version__\bin
13. Copy and paste the .dll files into YourInvokeFolderHere\\.venv\Lib\site-packages\torch\lib **Make sure to copy, and not move the files**
14. If prompted, replace any existing files
**Notes:**
* If no change is seen or any issues are encountered, follow the same steps as above and paste the torch/lib backup folder you made earlier and replace it. If you didn't make a backup, you can also uninstall and reinstall torch through the command line to repair this folder.
* This optimization is intended for newer graphics cards (40/30 series), but improvements have also been observed on older cards.
### Torch Installation
When installing torch and torchvision manually with `pip`, remember to provide


@ -171,16 +171,3 @@ subfolders and organize them as you wish.
The location of the autoimport directories is controlled by settings
in `invokeai.yaml`. See [Configuration](../features/CONFIGURATION.md).
### Installing models that live in HuggingFace subfolders
On rare occasions you may need to install a diffusers-style model that
lives in a subfolder of a HuggingFace repo id. In this event, simply
add ":_subfolder-name_" to the end of the repo id. For example, if the
repo id is "monster-labs/control_v1p_sd15_qrcode_monster" and the model
you wish to fetch lives in a subfolder named "v2", then the repo id to
pass to the various model installers should be
```
monster-labs/control_v1p_sd15_qrcode_monster:v2
```


@ -17,32 +17,14 @@ This fork is supported across Linux, Windows and Macintosh. Linux users can use
either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm
driver).
### [Installation Getting Started Guide](installation)
## **[Automated Installer](010_INSTALL_AUTOMATED.md)** #### **[Automated Installer](010_INSTALL_AUTOMATED.md)**
✅ This is the recommended installation method for first-time users.
#### [Manual Installation](020_INSTALL_MANUAL.md)
This is a script that will install all of InvokeAI's essential This method is recommended for experienced users and developers
third party libraries and InvokeAI itself. It includes access to a #### [Docker Installation](040_INSTALL_DOCKER.md)
"developer console" which will help us debug problems with you and This method is recommended for those familiar with running Docker containers
give you access to experimental features. ### Other Installation Guides
## **[Manual Installation](020_INSTALL_MANUAL.md)**
This method is recommended for experienced users and developers.
In this method you will manually run the commands needed to install
InvokeAI and its dependencies. We offer two recipes: one suited to
those who prefer the `conda` tool, and one suited to those who prefer
`pip` and Python virtual environments. In our hands the pip install
is faster and more reliable, but your mileage may vary.
Note that the conda installation method is currently deprecated and
will no longer be supported at some point in the future.
## **[Docker Installation](040_INSTALL_DOCKER.md)**
This method is recommended for those familiar with running Docker containers.
We offer a method for creating Docker containers containing InvokeAI and its dependencies. This method is recommended for individuals who have experience with Docker containers and understand the pluses and minuses of a container-based install.
## Other Installation Guides
- [PyPatchMatch](060_INSTALL_PATCHMATCH.md)
- [XFormers](070_INSTALL_XFORMERS.md)
- [CUDA and ROCm Drivers](030_INSTALL_CUDA_AND_ROCM.md)
@ -81,3 +63,43 @@ images in full-precision mode:
- GTX 1650 series cards
- GTX 1660 series cards
## Installation options
1. [Automated Installer](010_INSTALL_AUTOMATED.md)
This is a script that will install all of InvokeAI's essential
third party libraries and InvokeAI itself. It includes access to a
"developer console" which will help us debug problems with you and
give you to access experimental features.
✅ This is the recommended option for first time users.
2. [Manual Installation](020_INSTALL_MANUAL.md)
In this method you will manually run the commands needed to install
InvokeAI and its dependencies. We offer two recipes: one suited to
those who prefer the `conda` tool, and one suited to those who prefer
`pip` and Python virtual environments. In our hands the pip install
is faster and more reliable, but your mileage may vary.
Note that the conda installation method is currently deprecated and
will no longer be supported at some point in the future.
This method is recommended for users who have previously used `conda`
or `pip` in the past, developers, and anyone who wishes to remain on
the cutting edge of future InvokeAI development and is willing to put
up with occasional glitches and breakage.
3. [Docker Installation](040_INSTALL_DOCKER.md)
We also offer a method for creating Docker containers containing
InvokeAI and its dependencies. This method is recommended for
individuals who have experience with Docker containers and understand
the pluses and minuses of a container-based install.
## Quick Guides
* [Installing CUDA and ROCm Drivers](./030_INSTALL_CUDA_AND_ROCM.md)
* [Installing XFormers](./070_INSTALL_XFORMERS.md)
* [Installing PyPatchMatch](./060_INSTALL_PATCHMATCH.md)
* [Installing New Models](./050_INSTALLING_MODELS.md)


@ -1,36 +1,13 @@
# Using the Workflow Editor # Using the Node Editor
The workflow editor is a blank canvas allowing for the use of individual functions and image transformations to control the image generation workflow. Nodes take in inputs on the left side of the node, and return an output on the right side of the node. A node graph is composed of multiple nodes that are connected together to create a workflow. Nodes' inputs and outputs are connected by dragging connectors from node to node. Inputs and outputs are color coded for ease of use. The nodes editor is a blank canvas allowing for the use of individual functions and image transformations to control the image generation workflow. Nodes take in inputs on the left side of the node, and return an output on the right side of the node. A node graph is composed of multiple nodes that are connected together to create a workflow. Nodes' inputs and outputs are connected by dragging connectors from node to node. Inputs and outputs are color coded for ease of use.
If you're not familiar with Diffusion, take a look at our [Diffusion Overview.](../help/diffusion.md) Understanding how diffusion works will enable you to more easily use the Workflow Editor and build workflows to suit your needs. To better understand how nodes are used, think of how an electric power bar works. It takes in one input (electricity from a wall outlet) and passes it to multiple devices through multiple outputs. Similarly, a node could have multiple inputs and outputs functioning at the same (or different) time, but all node outputs pass information onward like a power bar passes electricity. Not all outputs are compatible with all inputs, however - Each node has different constraints on how it is expecting to input/output information. In general, node outputs are colour-coded to match compatible inputs of other nodes.
## Features
### Linear View
The Workflow Editor allows you to create a UI for your workflow, to make it easier to iterate on your generations.
To add an input to the Linear UI, right click on the input label and select "Add to Linear View".
The Linear UI View will also be part of the saved workflow, allowing you to share workflows and enable others to use them, regardless of complexity.
![linearview](../assets/nodes/linearview.png)
### Renaming Fields and Nodes
Any node or input field can be renamed in the workflow editor. If the input field you have renamed has been added to the Linear View, the changed name will be reflected in the Linear View and the node.
### Managing Nodes
* Ctrl+C to copy a node
* Ctrl+V to paste a node
* Backspace/Delete to delete a node
* Shift+Click to drag and select multiple nodes
### Node Caching
Nodes have a "Use Cache" option in their footer. This allows for performance improvements by using the previously cached values during the workflow processing.
## Important Concepts If you're not familiar with Diffusion, take a look at our [Diffusion Overview.](../help/diffusion.md) Understanding how diffusion works will enable you to more easily use the Nodes Editor and build workflows to suit your needs.
## Important Concepts
There are several node grouping concepts that can be examined with a narrow focus. These (and other) groupings can be pieced together to make up functional graph setups, and are important to understanding how groups of nodes work together as part of a whole. Note that the screenshots below aren't examples of complete functioning node graphs (see Examples).
@ -60,7 +37,7 @@ It is common to want to use both the same seed (for continuity) and random seeds
### ControlNet
The ControlNet node outputs a Control, which can be provided as input to a Denoise Latents node. Depending on the type of ControlNet desired, ControlNet nodes usually require an image processor node, such as a Canny Processor or Depth Processor, which prepares an input image for use with ControlNet. The ControlNet node outputs a Control, which can be provided as input to non-image *ToLatents nodes. Depending on the type of ControlNet desired, ControlNet nodes usually require an image processor node, such as a Canny Processor or Depth Processor, which prepares an input image for use with ControlNet.
![groupscontrol](../assets/nodes/groupscontrol.png)
@ -82,9 +59,10 @@ Iteration is a common concept in any processing, and means to repeat a process w
![groupsiterate](../assets/nodes/groupsiterate.png)
### Batch / Multiple Image Generation + Random Seeds ### Multiple Image Generation + Random Seeds
Batch or multiple image generation in the workflow editor is done using the RandomRange node. In this case, the 'Size' field represents the number of images to generate, meaning this example will generate 4 images. As RandomRange produces a collection of integers, we need to add the Iterate node to iterate through the collection. This noise can then be fed to the Denoise Latents node for it to iterate through the denoising process with the different seeds provided. Multiple image generation in the node editor is done using the RandomRange node. In this case, the 'Size' field represents the number of images to generate. As RandomRange produces a collection of integers, we need to add the Iterate node to iterate through the collection.
Controlling seeds across generations takes some care. The first row in the screenshot will generate multiple images with different seeds, but using the same RandomRange parameters across invocations will result in the same group of random seeds being used across the images, producing repeatable results. In the second row, adding the RandomInt node as input to RandomRange's 'Seed' edge point ensures that the seeds vary across invocations, producing different results each time.
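A small Python sketch of the two seeding strategies just described, using the standard `random` module as a stand-in for the RandomRange and RandomInt nodes (the node internals are assumptions here):

```python
# Stand-in for the RandomRange node: a reproducible list of seeds.
import random

def random_range(size: int, seed: int) -> list[int]:
    rng = random.Random(seed)
    return [rng.randint(0, 2**32 - 1) for _ in range(size)]

# Fixed RandomRange seed: every invocation yields the same group of seeds,
# so re-running the workflow reproduces the same images.
print(random_range(4, seed=123))
print(random_range(4, seed=123))   # identical to the line above

# Feeding a RandomInt-style value into the seed instead varies the group
# of seeds on every invocation, giving different images each run.
print(random_range(4, seed=random.randint(0, 2**32 - 1)))
```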
![groupsmultigenseeding](../assets/nodes/groupsmultigenseeding.png)


@ -4,46 +4,30 @@ These are nodes that have been developed by the community, for the community. If
If you'd like to submit a node for the community, please refer to the [node creation overview](contributingNodes.md).
To download a node, simply download the `.py` node file from the link and add it to the `invokeai/app/invocations` folder in your Invoke AI install location. If you used the automated installation, this can be found inside the `.venv` folder. Along with the node, an example node graph should be provided to help you get started with the node. To download a node, simply download the `.py` node file from the link and add it to the `invokeai/app/invocations` folder in your Invoke AI install location. Along with the node, an example node graph should be provided to help you get started with the node.
To use a community workflow, download the `.json` node graph file and load it into Invoke AI via the **Load Workflow** button in the Workflow Editor. To use a community node graph, download the `.json` node graph file and load it into Invoke AI via the **Load Nodes** button on the Node Editor.
- Community Nodes ## Community Nodes
+ [Depth Map from Wavefront OBJ](#depth-map-from-wavefront-obj)
+ [Film Grain](#film-grain)
+ [Generative Grammar-Based Prompt Nodes](#generative-grammar-based-prompt-nodes)
+ [GPT2RandomPromptMaker](#gpt2randompromptmaker)
+ [Grid to Gif](#grid-to-gif)
+ [Halftone](#halftone)
+ [Ideal Size](#ideal-size)
+ [Image and Mask Composition Pack](#image-and-mask-composition-pack)
+ [Image to Character Art Image Nodes](#image-to-character-art-image-nodes)
+ [Image Picker](#image-picker)
+ [Load Video Frame](#load-video-frame)
+ [Make 3D](#make-3d)
+ [Oobabooga](#oobabooga)
+ [Prompt Tools](#prompt-tools)
+ [Retroize](#retroize)
+ [Size Stepper Nodes](#size-stepper-nodes)
+ [Text font to Image](#text-font-to-image)
+ [Thresholding](#thresholding)
+ [XY Image to Grid and Images to Grids nodes](#xy-image-to-grid-and-images-to-grids-nodes)
- [Example Node Template](#example-node-template)
- [Disclaimer](#disclaimer)
- [Help](#help)
### FaceTools
**Description:** FaceTools is a collection of nodes created to manipulate faces as you would in Unified Canvas. It includes FaceMask, FaceOff, and FacePlace. FaceMask autodetects a face in the image using MediaPipe and creates a mask from it. FaceOff similarly detects a face, then takes the face off of the image by adding a square bounding box around it and cropping/scaling it. FacePlace puts the bounded face image from FaceOff back onto the original image. Using these nodes with other inpainting node(s), you can put new faces on existing things, put new things around existing faces, and work closer with a face as a bounded image. Additionally, you can supply X and Y offset values to scale/change the shape of the mask for finer control on FaceMask and FaceOff. See GitHub repository below for usage examples.
**Node Link:** https://github.com/ymgenesis/FaceTools/
**FaceMask Output Examples**
![5cc8abce-53b0-487a-b891-3bf94dcc8960](https://github.com/invoke-ai/InvokeAI/assets/25252829/43f36d24-1429-4ab1-bd06-a4bedfe0955e)
![b920b710-1882-49a0-8d02-82dff2cca907](https://github.com/invoke-ai/InvokeAI/assets/25252829/7660c1ed-bf7d-4d0a-947f-1fc1679557ba)
![71a91805-fda5-481c-b380-264665703133](https://github.com/invoke-ai/InvokeAI/assets/25252829/f8f6a2ee-2b68-4482-87da-b90221d5c3e2)
--------------------------------
### Depth Map from Wavefront OBJ ### Ideal Size
**Description:** Render depth maps from Wavefront .obj files (triangulated) using this simple 3D renderer utilizing numpy and matplotlib to compute and color the scene. There are simple parameters to change the FOV, camera position, and model orientation. **Description:** This node calculates an ideal image size for a first pass of a multi-pass upscaling. The aim is to avoid duplication that results from choosing a size larger than the model is capable of.
To be imported, an .obj must use triangulated meshes, so make sure to enable that option if exporting from a 3D modeling program. This renderer makes each triangle a solid color based on its average depth, so it will cause anomalies if your .obj has large triangles. In Blender, the Remesh modifier can be helpful to subdivide a mesh into small pieces that work well given these limitations. **Node Link:** https://github.com/JPPhoto/ideal-size-node
**Node Link:** https://github.com/dwringer/depth-from-obj-node
**Example Usage:**
</br><img src="https://raw.githubusercontent.com/dwringer/depth-from-obj-node/main/depth_from_obj_usage.jpg" width="500" />
--------------------------------
### Film Grain
@ -53,19 +37,22 @@ To be imported, an .obj must use triangulated meshes, so make sure to enable tha
**Node Link:** https://github.com/JPPhoto/film-grain-node
--------------------------------
### Generative Grammar-Based Prompt Nodes ### Image Picker
**Description:** This set of 3 nodes generates prompts from simple user-defined grammar rules (loaded from custom files - examples provided below). The prompts are made by recursively expanding a special template string, replacing nonterminal "parts-of-speech" until no nonterminal terms remain in the string. **Description:** This InvokeAI node takes in a collection of images and randomly chooses one. This can be useful when you have a number of poses to choose from for a ControlNet node, or a number of input images for another purpose.
This includes 3 Nodes: **Node Link:** https://github.com/JPPhoto/image-picker-node
- *Lookup Table from File* - loads a YAML file "prompt" section (or of a whole folder of YAML's) into a JSON-ified dictionary (Lookups output)
- *Lookups Entry from Prompt* - places a single entry in a new Lookups output under the specified heading
- *Prompt from Lookup Table* - uses a Collection of Lookups as grammar rules from which to randomly generate prompts.
**Node Link:** https://github.com/dwringer/generative-grammar-prompt-nodes --------------------------------
### Retroize
**Example Usage:** **Description:** Retroize is a collection of nodes for InvokeAI to "Retroize" images. Any image can be given a fresh coat of retro paint with these nodes, either from your gallery or from within the graph itself. It includes nodes to pixelize, quantize, palettize, and ditherize images; as well as to retrieve palettes from existing images.
</br><img src="https://raw.githubusercontent.com/dwringer/generative-grammar-prompt-nodes/main/lookuptables_usage.jpg" width="500" />
**Node Link:** https://github.com/Ar7ific1al/invokeai-retroizeinode/
**Retroize Output Examples**
![image](https://github.com/Ar7ific1al/InvokeAI_nodes_retroize/assets/2306586/de8b4fa6-324c-4c2d-b36c-297600c73974)
--------------------------------
### GPT2RandomPromptMaker
@ -78,133 +65,31 @@ This includes 3 Nodes:
Generated Prompt: An enchanted weapon will be usable by any character regardless of their alignment.
<img src="https://github.com/mickr777/InvokeAI/assets/115216705/8496ba09-bcdd-4ff7-8076-ff213b6a1e4c" width="200" /> ![9acf5aef-7254-40dd-95b3-8eac431dfab0 (1)](https://github.com/mickr777/InvokeAI/assets/115216705/8496ba09-bcdd-4ff7-8076-ff213b6a1e4c)
--------------------------------
### Grid to Gif
**Description:** One node that turns a grid image into an image collection, one node that turns an image collection into a gif.
**Node Link:** https://github.com/mildmisery/invokeai-GridToGifNode/blob/main/GridToGif.py
**Example Node Graph:** https://github.com/mildmisery/invokeai-GridToGifNode/blob/main/Grid%20to%20Gif%20Example%20Workflow.json
**Output Examples**
<img src="https://raw.githubusercontent.com/mildmisery/invokeai-GridToGifNode/main/input.png" width="300" />
<img src="https://raw.githubusercontent.com/mildmisery/invokeai-GridToGifNode/main/output.gif" width="300" />
--------------------------------
### Halftone
**Description**: Halftone converts the source image to grayscale and then performs halftoning. CMYK Halftone converts the image to CMYK and applies a per-channel halftoning to make the source image look like a magazine or newspaper. For both nodes, you can specify angles and halftone dot spacing.
**Node Link:** https://github.com/JPPhoto/halftone-node
**Example**
Input:
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/fd5efb9f-4355-4409-a1c2-c1ca99e0cab4" width="300" />
Halftone Output:
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/7e606f29-e68f-4d46-b3d5-97f799a4ec2f" width="300" />
CMYK Halftone Output:
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/c59c578f-db8e-4d66-8c66-2851752d75ea" width="300" />
--------------------------------
### Ideal Size
**Description:** This node calculates an ideal image size for a first pass of a multi-pass upscaling. The aim is to avoid duplication that results from choosing a size larger than the model is capable of.
**Node Link:** https://github.com/JPPhoto/ideal-size-node
--------------------------------
### Image and Mask Composition Pack
**Description:** This is a pack of nodes for composing masks and images, including a simple text mask creator and both image and latent offset nodes. The offsets wrap around, so these can be used in conjunction with the Seamless node to progressively generate centered on different parts of the seamless tiling.
This includes 15 Nodes:
- *Adjust Image Hue Plus* - Rotate the hue of an image in one of several different color spaces.
- *Blend Latents/Noise (Masked)* - Use a mask to blend part of one latents tensor [including Noise outputs] into another. Can be used to "renoise" sections during a multi-stage [masked] denoising process.
- *Enhance Image* - Boost or reduce color saturation, contrast, brightness, sharpness, or invert colors of any image at any stage with this simple wrapper for pillow [PIL]'s ImageEnhance module.
- *Equivalent Achromatic Lightness* - Calculates image lightness accounting for Helmholtz-Kohlrausch effect based on a method described by High, Green, and Nussbaum (2023).
- *Text to Mask (Clipseg)* - Input a prompt and an image to generate a mask representing areas of the image matched by the prompt.
- *Text to Mask Advanced (Clipseg)* - Output up to four prompt masks combined with logical "and", logical "or", or as separate channels of an RGBA image.
- *Image Layer Blend* - Perform a layered blend of two images using alpha compositing. Opacity of top layer is selectable, with optional mask and several different blend modes/color spaces.
- *Image Compositor* - Take a subject from an image with a flat backdrop and layer it on another image using a chroma key or flood select background removal.
- *Image Dilate or Erode* - Dilate or expand a mask (or any image!). This is equivalent to an expand/contract operation.
- *Image Value Thresholds* - Clip an image to pure black/white beyond specified thresholds.
- *Offset Latents* - Offset a latents tensor in the vertical and/or horizontal dimensions, wrapping it around.
- *Offset Image* - Offset an image in the vertical and/or horizontal dimensions, wrapping it around.
- *Rotate/Flip Image* - Rotate an image in degrees clockwise/counterclockwise about its center, optionally resizing the image boundaries to fit, or flipping it about the vertical and/or horizontal axes.
- *Shadows/Highlights/Midtones* - Extract three masks (with adjustable hard or soft thresholds) representing shadows, midtones, and highlights regions of an image.
- *Text Mask (simple 2D)* - create and position a white on black (or black on white) line of text using any font locally available to Invoke.
**Node Link:** https://github.com/dwringer/composition-nodes
</br><img src="https://raw.githubusercontent.com/dwringer/composition-nodes/main/composition_pack_overview.jpg" width="500" />
--------------------------------
### Image to Character Art Image Nodes
**Description:** Group of nodes to convert an input image into ascii/unicode art Image
**Node Link:** https://github.com/mickr777/imagetoasciiimage
**Output Examples**
<img src="https://user-images.githubusercontent.com/115216705/271817646-8e061fcc-9a2c-4fa9-bcc7-c0f7b01e9056.png" width="300" /><img src="https://github.com/mickr777/imagetoasciiimage/assets/115216705/3c4990eb-2f42-46b9-90f9-0088b939dc6a" width="300" /></br>
<img src="https://github.com/mickr777/imagetoasciiimage/assets/115216705/fee7f800-a4a8-41e2-a66b-c66e4343307e" width="300" />
<img src="https://github.com/mickr777/imagetoasciiimage/assets/115216705/1d9c1003-a45f-45c2-aac7-46470bb89330" width="300" />
--------------------------------
### Image Picker
**Description:** This InvokeAI node takes in a collection of images and randomly chooses one. This can be useful when you have a number of poses to choose from for a ControlNet node, or a number of input images for another purpose.
**Node Link:** https://github.com/JPPhoto/image-picker-node
--------------------------------
### Load Video Frame
**Description:** This is a video frame image provider + indexer/video creation nodes for hooking up to iterators and ranges and ControlNets and such for invokeAI node experimentation. Think animation + ControlNet outputs.
**Node Link:** https://github.com/helix4u/load_video_frame
**Example Node Graph:** https://github.com/helix4u/load_video_frame/blob/main/Example_Workflow.json
**Output Example:**
<img src="https://github.com/helix4u/load_video_frame/blob/main/testmp4_embed_converted.gif" width="500" /> ![Example animation](https://github.com/helix4u/load_video_frame/blob/main/testmp4_embed_converted.gif)
[Full mp4 of Example Output test.mp4](https://github.com/helix4u/load_video_frame/blob/main/test.mp4) [Full mp4 of Example Output test.mp4](https://github.com/helix4u/load_video_frame/blob/main/test.mp4)
-------------------------------- --------------------------------
### Make 3D
**Description:** Create compelling 3D stereo images from 2D originals.
**Node Link:** [https://gitlab.com/srcrr/shift3d/-/raw/main/make3d.py](https://gitlab.com/srcrr/shift3d)
**Example Node Graph:** https://gitlab.com/srcrr/shift3d/-/raw/main/example-workflow.json?ref_type=heads&inline=false
**Output Examples**
<img src="https://gitlab.com/srcrr/shift3d/-/raw/main/example-1.png" width="300" />
<img src="https://gitlab.com/srcrr/shift3d/-/raw/main/example-2.png" width="300" />
--------------------------------
### Oobabooga
**Description:** Asks a local LLM running in Oobabooga's Text-Generation-Webui to write a prompt based on the user input.
**Link:** https://github.com/sammyf/oobabooga-node
**Example:**
"describe a new mystical creature in its natural environment"
"The mystical creature I am describing to you is called the "Glimmerwing". It is a majestic, iridescent being that inhabits the depths of the most enchanted forests and glimmering lakes. Its body is covered in shimmering scales that reflect every color of the rainbow, and it has delicate, translucent wings that sparkle like diamonds in the sunlight. The Glimmerwing's home is a crystal-clear lake, surrounded by towering trees with leaves that shimmer like jewels. In this serene environment, the Glimmerwing spends its days swimming gracefully through the water, chasing schools of glittering fish and playing with the gentle ripples of the lake's surface. "The mystical creature I am describing to you is called the "Glimmerwing". It is a majestic, iridescent being that inhabits the depths of the most enchanted forests and glimmering lakes. Its body is covered in shimmering scales that reflect every color of the rainbow, and it has delicate, translucent wings that sparkle like diamonds in the sunlight. The Glimmerwing's home is a crystal-clear lake, surrounded by towering trees with leaves that shimmer like jewels. In this serene environment, the Glimmerwing spends its days swimming gracefully through the water, chasing schools of glittering fish and playing with the gentle ripples of the lake's surface.
As the sun sets, the Glimmerwing perches on a branch of one of the trees, spreading its wings to catch the last rays of light. The creature's scales glow softly, casting a rainbow of colors across the forest floor. The Glimmerwing sings a haunting melody, its voice echoing through the stillness of the night air. Its song is said to have the power to heal the sick and bring peace to troubled souls. Those who are lucky enough to hear the Glimmerwing's song are forever changed by its beauty and grace." As the sun sets, the Glimmerwing perches on a branch of one of the trees, spreading its wings to catch the last rays of light. The creature's scales glow softly, casting a rainbow of colors across the forest floor. The Glimmerwing sings a haunting melody, its voice echoing through the stillness of the night air. Its song is said to have the power to heal the sick and bring peace to troubled souls. Those who are lucky enough to hear the Glimmerwing's song are forever changed by its beauty and grace."
<img src="https://github.com/sammyf/oobabooga-node/assets/42468608/cecdd820-93dd-4c35-abbf-607e001fb2ed" width="300" /> ![glimmerwing_small](https://github.com/sammyf/oobabooga-node/assets/42468608/cecdd820-93dd-4c35-abbf-607e001fb2ed)
**Requirement** **Requirement**
@ -222,37 +107,62 @@ a Text-Generation-Webui instance (might work remotely too, but I never tried it)
**Note** **Note**
This node works best with SDXL models, especially as the style can be described independently of the LLM's output. This node works best with SDXL models, especially as the style can be described independantly of the LLM's output.
-------------------------------- --------------------------------
### Depth Map from Wavefront OBJ
**Description:** Render depth maps from Wavefront .obj files (triangulated) using this simple 3D renderer utilizing numpy and matplotlib to compute and color the scene. There are simple parameters to change the FOV, camera position, and model orientation.
To be imported, an .obj must use triangulated meshes, so make sure to enable that option if exporting from a 3D modeling program. This renderer makes each triangle a solid color based on its average depth, so it will cause anomalies if your .obj has large triangles. In Blender, the Remesh modifier can be helpful to subdivide a mesh into small pieces that work well given these limitations.
**Node Link:** https://github.com/dwringer/depth-from-obj-node
**Example Usage:**
![depth from obj usage graph](https://raw.githubusercontent.com/dwringer/depth-from-obj-node/main/depth_from_obj_usage.jpg)
--------------------------------
### Prompt Tools
**Description:** A set of InvokeAI nodes that add general prompt manipulation tools. These were written to accompany the PromptsFromFile node and other prompt generation nodes.
1. PromptJoin - Joins two prompts into one.
2. PromptReplace - Performs a search and replace on a prompt, with the option of using regex.
3. PromptSplitNeg - Splits a prompt into positive and negative using the old V2 method of [] for negative.
4. PromptToFile - Saves a prompt or collection of prompts to a file, one per line. There is an append/overwrite option.
5. PTFieldsCollect - Converts image generation fields into a JSON-format string that can be passed to Prompt to file.
6. PTFieldsExpand - Takes a JSON string and converts it to individual generation parameters. This can be fed from the Prompts to file node.
7. PromptJoinThree - Joins 3 prompts together.
8. PromptStrength - Takes a string and a float and outputs another string in the format (string)strength, like compel's weighted format (see the sketch below).
9. PromptStrengthCombine - Takes a collection of prompt strength strings and outputs a string in the .and() or .blend() format that can be fed into a proper prompt node.
See full docs here: https://github.com/skunkworxdark/Prompt-tools-nodes/edit/main/README.md
**Node Link:** https://github.com/skunkworxdark/Prompt-tools-nodes
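As a rough illustration of the strings PromptStrength and PromptStrengthCombine emit (see the linked README for the nodes' exact behavior; the helper functions below are illustrative and the syntax follows compel's weighting conventions):

```python
# Illustrative helpers only; not the Prompt-tools nodes themselves.
def prompt_strength(prompt: str, strength: float) -> str:
    # PromptStrength-style output, e.g. "(a red barn)1.2"
    return f"({prompt}){strength}"

def combine_blend(prompts: list[str], weights: list[float]) -> str:
    # A .blend()-style combination, e.g. ("a red barn", "an old church").blend(1.2, 0.8)
    quoted = ", ".join(f'"{p}"' for p in prompts)
    return f"({quoted}).blend({', '.join(str(w) for w in weights)})"

print(prompt_strength("a red barn", 1.2))                        # (a red barn)1.2
print(combine_blend(["a red barn", "an old church"], [1.2, 0.8]))
```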
--------------------------------
### Retroize
**Description:** Retroize is a collection of nodes for InvokeAI to "Retroize" images. Any image can be given a fresh coat of retro paint with these nodes, either from your gallery or from within the graph itself. It includes nodes to pixelize, quantize, palettize, and ditherize images; as well as to retrieve palettes from existing images.
**Node Link:** https://github.com/Ar7ific1al/invokeai-retroizeinode/
**Retroize Output Examples**
<img src="https://github.com/Ar7ific1al/InvokeAI_nodes_retroize/assets/2306586/de8b4fa6-324c-4c2d-b36c-297600c73974" width="500" />
--------------------------------
### Enhance Image (simple adjustments)
**Description:** Boost or reduce color saturation, contrast, brightness, sharpness, or invert colors of any image at any stage with this simple wrapper for pillow [PIL]'s ImageEnhance module.
Color inversion is toggled with a simple switch, while each of the four enhancer modes is activated by entering a value other than 1 in the corresponding input field. Values less than 1 will reduce the corresponding property, while values greater than 1 will enhance it.
**Node Link:** https://github.com/dwringer/image-enhance-node
**Example Usage:**
![enhance image usage graph](https://raw.githubusercontent.com/dwringer/image-enhance-node/main/image_enhance_usage.jpg)
--------------------------------
### Generative Grammar-Based Prompt Nodes
**Description:** This set of 3 nodes generates prompts from simple user-defined grammar rules (loaded from custom files - examples provided below). The prompts are made by recursively expanding a special template string, replacing nonterminal "parts-of-speech" until no more nonterminal terms remain in the string.
This includes 3 Nodes:
- *Lookup Table from File* - loads the "prompt" section of a YAML file (or of a whole folder of YAMLs) into a JSON-ified dictionary (Lookups output)
- *Lookups Entry from Prompt* - places a single entry in a new Lookups output under the specified heading
- *Prompt from Lookup Table* - uses a Collection of Lookups as grammar rules from which to randomly generate prompts.
**Node Link:** https://github.com/dwringer/generative-grammar-prompt-nodes
**Example Usage:**
![lookups usage example graph](https://raw.githubusercontent.com/dwringer/generative-grammar-prompt-nodes/main/lookuptables_usage.jpg)
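For intuition, here is a minimal sketch of the recursive expansion idea (illustrative only; the real nodes load their lookups from YAML files, and the `{name}` nonterminal syntax used here is an assumption, not necessarily the nodes' template format):

```python
# Minimal, illustrative grammar expansion; not the generative-grammar nodes' code.
import random
import re

NONTERMINAL = re.compile(r"\{(\w+)\}")  # assumed syntax: nonterminals written as {name}

def expand(template: str, lookups: dict[str, list[str]]) -> str:
    """Repeatedly replace nonterminals with random choices until none remain."""
    match = NONTERMINAL.search(template)
    while match is not None:
        replacement = random.choice(lookups[match.group(1)])
        template = template[: match.start()] + replacement + template[match.end():]
        match = NONTERMINAL.search(template)
    return template

rules = {
    "prompt": ["a {animal} in a {place}"],
    "animal": ["fox", "heron", "tortoise"],
    "place": ["misty forest", "neon-lit city"],
}
print(expand("{prompt}", rules))  # e.g. "a heron in a misty forest"
```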
--------------------------------
### Size Stepper Nodes
**Node Link:** https://github.com/dwringer/size-stepper-nodes
**Example Usage:**
<img src="https://raw.githubusercontent.com/dwringer/size-stepper-nodes/main/size_nodes_usage.jpg" width="500" />
--------------------------------
### Text font to Image
**Description:** Text font to text image node for InvokeAI. Download a font to use (or, if it is already in the font cache, it is used from there). The text is always resized to the image size, but this can be controlled with padding; an optional second line of text is supported.
**Output Examples**
<img src="https://github.com/mickr777/InvokeAI/assets/115216705/c21b0af3-d9c6-4c16-9152-846a23effd36" width="300" />
Results after using the depth controlnet
<img src="https://github.com/mickr777/InvokeAI/assets/115216705/915f1a53-968e-43eb-aa61-07cd8f1a733a" width="300" />
<img src="https://github.com/mickr777/InvokeAI/assets/115216705/821ef89e-8a60-44f5-b94e-471a9d8690cc" width="300" />
<img src="https://github.com/mickr777/InvokeAI/assets/115216705/2befcb6d-49f4-4bfd-b5fc-1fee19274f89" width="300" />
--------------------------------
### Thresholding
**Description:** This node generates masks for highlights, midtones, and shadows given an input image. You can optionally specify a blur for the lookup table used in making those masks from the source image.
**Node Link:** https://github.com/JPPhoto/thresholding-node
**Examples**
Input:
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/c88ada13-fb3d-484c-a4fe-947b44712632" width="300" />
Highlights/Midtones/Shadows:
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/727021c1-36ff-4ec8-90c8-105e00de986d" width="300" />
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/0b721bfc-f051-404e-b905-2f16b824ddfe" width="300" />
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/04c1297f-1c88-42b6-a7df-dd090b976286" width="300" />
Highlights/Midtones/Shadows (with LUT blur enabled):
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/19aa718a-70c1-4668-8169-d68f4bd13771" width="300" />
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/0a440e43-697f-4d17-82ee-f287467df0a5" width="300" />
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/0701fd0f-2ca7-4fe2-8613-2b52547bafce" width="300" />
--------------------------------
### XY Image to Grid and Images to Grids nodes
**Description:** Image to grid nodes and supporting tools.
1. "Images To Grids" node - Takes a collection of images and creates a grid(s) of images. If there are more images than the size of a single grid then multiple grids will be created until it runs out of images.
2. "XYImage To Grid" node - Converts a collection of XYImages into a labeled Grid of images. The XYImages collection has to be built using the supporting nodes. See example node setups for more details.
See full docs here: https://github.com/skunkworxdark/XYGrid_nodes/edit/main/README.md
**Node Link:** https://github.com/skunkworxdark/XYGrid_nodes
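A rough sketch of the grid-assembly idea behind "Images To Grids" (illustrative only; the actual nodes also handle labels, XY metadata, and sizing):

```python
# Illustrative only: tile a collection of images into one or more fixed-size grids.
from PIL import Image

def make_grids(images: list[Image.Image], cols: int, rows: int) -> list[Image.Image]:
    cell_w = max(im.width for im in images)
    cell_h = max(im.height for im in images)
    per_grid = cols * rows
    grids = []
    for start in range(0, len(images), per_grid):
        grid = Image.new("RGB", (cols * cell_w, rows * cell_h), "white")
        for i, im in enumerate(images[start:start + per_grid]):
            grid.paste(im, ((i % cols) * cell_w, (i // cols) * cell_h))
        grids.append(grid)
    return grids
```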
--------------------------------
### Example Node Template
**Description:** This node allows you to do super cool things with InvokeAI.
**Output Examples**
</br><img src="https://invoke-ai.github.io/InvokeAI/assets/invoke_ai_banner.png" width="500" />
## Disclaimer
Once you've created a node and confirmed that it behaves as expected locally, follow these steps:
- Make sure the node is contained in a new Python (.py) file. Preferably, the node is in a repo with a README detailing the node's usage & examples to help others more easily use your node.
- Submit a pull request with a link to your node(s) repo in GitHub against the `main` branch to add the node to the [Community Nodes](communityNodes.md) list
- Make sure you are following the template below and have provided all relevant details about the node and what it does. Example output images and workflows are very helpful for other users looking to use your node.
- A maintainer will review the pull request and node. If the node is aligned with the direction of the project, you may be asked for permission to include it in the core project.
### Community Node Template
# List of Default Nodes
The table below contains a list of the default nodes shipped with InvokeAI and their descriptions.
| Node <img width=160 align="right"> | Function |
| :---------------------------------- | :--------------------------------------------------------------------------------------|
|Conditioning Primitive | A conditioning tensor primitive value|
|Content Shuffle Processor | Applies content shuffle processing to image|
|ControlNet | Collects ControlNet info to pass to other nodes|
|Denoise Latents | Denoises noisy latents to decodable images|
|Divide Integers | Divides two numbers|
|Dynamic Prompt | Parses a prompt using adieyal/dynamicprompts' random or combinatorial generator|
|[FaceMask](./detailedNodes/faceTools.md#facemask) | Generates masks for faces in an image to use with Inpainting|
|[FaceIdentifier](./detailedNodes/faceTools.md#faceidentifier) | Identifies and labels faces in an image|
|[FaceOff](./detailedNodes/faceTools.md#faceoff) | Creates a new image that is a scaled bounding box with a mask on the face for Inpainting|
|Float Math | Perform basic math operations on two floats|
|Float Primitive Collection | A collection of float primitive values|
|Float Primitive | A float primitive value|
|Float Range | Creates a range|
|Blur Image | Blurs an image|
|Extract Image Channel | Gets a channel from an image.|
|Image Primitive Collection | A collection of image primitive values|
|Integer Math | Perform basic math operations on two integers|
|Convert Image Mode | Converts an image to a different mode.|
|Crop Image | Crops an image to a specified box. The box can be outside of the image.|
|Image Hue Adjustment | Adjusts the Hue of an image.|
|Paste Image | Pastes an image into another image.|
|ImageProcessor | Base class for invocations that preprocess images for ControlNet|
|Resize Image | Resizes an image to specific dimensions|
|Round Float | Rounds a float to a specified number of decimal places|
|Float to Integer | Converts a float to an integer. Optionally rounds to an even multiple of a input number.|
|Scale Image | Scales an image by a factor|
|Image to Latents | Encodes an image into latents.|
|Add Invisible Watermark | Add an invisible watermark to an image|
|ONNX Prompt (Raw) | A node to process inputs and produce outputs. May use dependency injection in __init__ to receive providers.|
|ONNX Text to Latents | Generates latents from conditionings.|
|ONNX Model Loader | Loads a main model, outputting its submodels.|
|OpenCV Inpaint | Simple inpaint using opencv.|
|Openpose Processor | Applies Openpose processing to image|
|PIDI Processor | Applies PIDI processing to image|
|Prompts from File | Loads prompts from a text file|
|String Primitive | A string primitive value|
|Subtract Integers | Subtracts two numbers|
|Tile Resample Processor | Tile resampler processor|
|Upscale (RealESRGAN) | Upscales an image using RealESRGAN.|
|VAE Loader | Loads a VAE model, outputting a VaeLoaderOutput|
|Zoe (Depth) Processor | Applies Zoe depth processing to image|
# Face Nodes
## FaceOff
FaceOff mimics a user finding a face in an image and resizing the bounding box
around the head in Canvas.
Enter a face ID (found with FaceIdentifier) to choose which face to mask.
Just as you would add more context inside the bounding box by making it larger
in Canvas, the node gives you a padding input (in pixels) which will
simultaneously add more context, and increase the resolution of the bounding box
so the face remains the same size inside it.
The "Minimum Confidence" input defaults to 0.5 (50%), and represents a pass/fail
threshold a detected face must reach for it to be processed. Lowering this value
may help if detection is failing. If the detected masks are imperfect and stray
too far outside/inside of faces, the node gives you X & Y offsets to shrink/grow
the masks by a multiplier.
FaceOff will output the face in a bounded image, taking the face off of the
original image for input into any node that accepts image inputs. The node also
outputs a face mask with the dimensions of the bounded image. The X & Y outputs
are for connecting to the X & Y inputs of the Paste Image node, which will place
the bounded image back on the original image using these coordinates.
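Conceptually, the Paste Image step uses those X & Y values as in the following sketch (illustrative Pillow code, not InvokeAI's node implementation; it assumes the bounded image and mask have already been returned to the bounding box's original size):

```python
# Illustrative only: pasting a processed face crop back onto the original image.
from PIL import Image

def paste_back(original: Image.Image, bounded: Image.Image,
               mask: Image.Image, x: int, y: int) -> Image.Image:
    result = original.copy()
    # White areas of the mask are pasted; black areas keep the original pixels.
    result.paste(bounded, (x, y), mask.convert("L"))
    return result
```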
###### Inputs/Outputs
| Input | Description |
| ------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Image | Image for face detection |
| Face ID | The face ID to process, numbered from 0. Multiple faces not supported. Find a face's ID with FaceIdentifier node. |
| Minimum Confidence | Minimum confidence for face detection (lower if detection is failing) |
| X Offset | X-axis offset of the mask |
| Y Offset | Y-axis offset of the mask |
| Padding | All-axis padding around the mask in pixels |
| Chunk | Chunk (or divide) the image into sections to greatly improve face detection success. Defaults to off, but will activate if no faces are detected normally. Activate to chunk by default. |
| Output | Description |
| ------------- | ------------------------------------------------ |
| Bounded Image | Original image bound, cropped, and resized |
| Width | The width of the bounded image in pixels |
| Height | The height of the bounded image in pixels |
| Mask | The output mask |
| X | The x coordinate of the bounding box's left side |
| Y | The y coordinate of the bounding box's top side |
## FaceMask
FaceMask mimics a user drawing masks on faces in an image in Canvas.
The "Face IDs" input allows the user to select specific faces to be masked.
Leave empty to detect and mask all faces, or a comma-separated list for a
specific combination of faces (ex: `1,2,4`). A single integer will detect and
mask that specific face. Find face IDs with the FaceIdentifier node.
The "Minimum Confidence" input defaults to 0.5 (50%), and represents a pass/fail
threshold a detected face must reach for it to be processed. Lowering this value
may help if detection is failing.
If the detected masks are imperfect and stray too far outside/inside of faces,
the node gives you X & Y offsets to shrink/grow the masks by a multiplier. All
masks shrink/grow together by the X & Y offset values.
By default, masks are created to change faces. When masks are inverted, they
change surrounding areas, protecting faces.
###### Inputs/Outputs
| Input | Description |
| ------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Image | Image for face detection |
| Face IDs | Comma-separated list of face ids to mask eg '0,2,7'. Numbered from 0. Leave empty to mask all. Find face IDs with FaceIdentifier node. |
| Minimum Confidence | Minimum confidence for face detection (lower if detection is failing) |
| X Offset | X-axis offset of the mask |
| Y Offset | Y-axis offset of the mask |
| Chunk | Chunk (or divide) the image into sections to greatly improve face detection success. Defaults to off, but will activate if no faces are detected normally. Activate to chunk by default. |
| Invert Mask | Toggle to invert the face mask |
| Output | Description |
| ------ | --------------------------------- |
| Image | The original image |
| Width | The width of the image in pixels |
| Height | The height of the image in pixels |
| Mask | The output face mask |
## FaceIdentifier
FaceIdentifier outputs an image with detected face IDs printed in white numbers
onto each face.
Face IDs can then be used in FaceMask and FaceOff to selectively mask all, a
specific combination, or single faces.
The FaceIdentifier output image is generated for user reference, and isn't meant
to be passed on to other image-processing nodes.
The "Minimum Confidence" input defaults to 0.5 (50%), and represents a pass/fail
threshold a detected face must reach for it to be processed. Lowering this value
may help if detection is failing. If an image is changed in the slightest, run
it through FaceIdentifier again to get updated FaceIDs.
###### Inputs/Outputs
| Input | Description |
| ------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Image | Image for face detection |
| Minimum Confidence | Minimum confidence for face detection (lower if detection is failing) |
| Chunk | Chunk (or divide) the image into sections to greatly improve face detection success. Defaults to off, but will activate if no faces are detected normally. Activate to chunk by default. |
| Output | Description |
| ------ | ------------------------------------------------------------------------------------------------ |
| Image | The original image with small face ID numbers printed in white onto each face for user reference |
| Width | The width of the original image in pixels |
| Height | The height of the original image in pixels |
## Tips
- If not all target faces are being detected, activate Chunk to bypass full
image face detection and greatly improve detection success.
- Final results will vary between full-image detection and chunking for faces
that are detectable by both due to the nature of the process. Try either to
your taste.
- Be sure Minimum Confidence is set the same when using FaceIdentifier with
FaceOff/FaceMask.
- For FaceOff, use the color correction node before pasting the face back, to correct edges
  being noticeable in the final image (see example screenshot).
- Non-inpainting models may struggle to paint/generate correctly around faces.
- If your face won't change the way you want it to no matter what you change,
consider that the change you're trying to make is too much at that resolution.
For example, if an image is only 512x768 total, the face might only be 128x128
or 256x256, much smaller than the 512x512 your SD1.5 model was probably
trained on. Try increasing the resolution of the image by upscaling or
resizing, add padding to increase the bounding box's resolution, or use an
image where the face takes up more pixels.
- If the resulting face seems out of place pasted back on the original image
(ie. too large, not proportional), add more padding on the FaceOff node to
give inpainting more context. Context and good prompting are important to
keeping things proportional.
- If you find the mask is too big/small and going too far outside/inside the
area you want to affect, adjust the x & y offsets to shrink/grow the mask area
- Use a higher denoise start value to resemble aspects of the original face or
surroundings. Denoise start = 0 & denoise end = 1 will make something new,
while denoise start = 0.50 & denoise end = 1 will be 50% old and 50% new.
- mediapipe isn't good at detecting faces with lots of face paint, hair covering
the face, etc. Anything that obstructs the face will likely result in no faces
being detected.
- If you find your face isn't being detected, try lowering the minimum
confidence value from 0.5. This could result in false positives, however
(random areas being detected as faces and masked).
- After altering an image and wanting to process a different face in the newly
altered image, run the altered image through FaceIdentifier again to see the
new Face IDs. MediaPipe will most likely detect faces in a different order
after an image has been changed in the slightest.
# Example Workflows
We've curated some example workflows for you to get started with Workflows in InvokeAI.
To use them, right click on your desired workflow, press "Download Linked File". You can then use the "Load Workflow" functionality in InvokeAI to load the workflow and start generating images!
If you're interested in finding more workflows, check out the [#share-your-workflows](https://discord.com/channels/1020123559063990373/1130291608097661000) channel in the InvokeAI Discord.
* [SD1.5 / SD2 Text to Image](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/Text_to_Image.json)
* [SDXL Text to Image](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/SDXL_Text_to_Image.json)
* [SDXL (with Refiner) Text to Image](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/SDXL_Text_to_Image.json)
* [Tiled Upscaling with ControlNet](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/ESRGAN_img2img_upscale w_Canny_ControlNet.json)
* [FaceMask](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/FaceMask.json)
* [FaceOff with 2x Face Scaling](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/FaceOff_FaceScale2x.json)
{
"name": "SDXL Text to Image",
"author": "InvokeAI",
"description": "Sample text to image workflow for SDXL",
"version": "1.0.1",
"contact": "invoke@invoke.ai",
"tags": "text2image, SDXL, default",
"notes": "",
"exposedFields": [
{
"nodeId": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"fieldName": "model"
},
{
"nodeId": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"fieldName": "prompt"
},
{
"nodeId": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"fieldName": "style"
},
{
"nodeId": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"fieldName": "prompt"
},
{
"nodeId": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"fieldName": "style"
},
{
"nodeId": "87ee6243-fb0d-4f77-ad5f-56591659339e",
"fieldName": "steps"
}
],
"meta": {
"version": "1.0.0"
},
"nodes": [
{
"id": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"type": "sdxl_compel_prompt",
"inputs": {
"prompt": {
"id": "5a6889e6-95cb-462f-8f4a-6b93ae7afaec",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "Negative Prompt",
"value": ""
},
"style": {
"id": "f240d0e6-3a1c-4320-af23-20ebb707c276",
"name": "style",
"type": "string",
"fieldKind": "input",
"label": "Negative Style",
"value": ""
},
"original_width": {
"id": "05af07b0-99a0-4a68-8ad2-697bbdb7fc7e",
"name": "original_width",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"original_height": {
"id": "2c771996-a998-43b7-9dd3-3792664d4e5b",
"name": "original_height",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"crop_top": {
"id": "66519dca-a151-4e3e-ae1f-88f1f9877bde",
"name": "crop_top",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"crop_left": {
"id": "349cf2e9-f3d0-4e16-9ae2-7097d25b6a51",
"name": "crop_left",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"target_width": {
"id": "44499347-7bd6-4a73-99d6-5a982786db05",
"name": "target_width",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"target_height": {
"id": "fda359b0-ab80-4f3c-805b-c9f61319d7d2",
"name": "target_height",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"clip": {
"id": "b447adaf-a649-4a76-a827-046a9fc8d89b",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
},
"clip2": {
"id": "86ee4e32-08f9-4baa-9163-31d93f5c0187",
"name": "clip2",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "7c10118e-7b4e-4911-b98e-d3ba6347dfd0",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "SDXL Negative Compel Prompt",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true
},
"width": 320,
"height": 764,
"position": {
"x": 1275,
"y": -350
}
},
{
"id": "55705012-79b9-4aac-9f26-c0b10309785b",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "55705012-79b9-4aac-9f26-c0b10309785b",
"type": "noise",
"inputs": {
"seed": {
"id": "6431737c-918a-425d-a3b4-5d57e2f35d4d",
"name": "seed",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"width": {
"id": "38fc5b66-fe6e-47c8-bba9-daf58e454ed7",
"name": "width",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"height": {
"id": "16298330-e2bf-4872-a514-d6923df53cbb",
"name": "height",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"use_cpu": {
"id": "c7c436d3-7a7a-4e76-91e4-c6deb271623c",
"name": "use_cpu",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": true
}
},
"outputs": {
"noise": {
"id": "50f650dc-0184-4e23-a927-0497a96fe954",
"name": "noise",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "bb8a452b-133d-42d1-ae4a-3843d7e4109a",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "35cfaa12-3b8b-4b7a-a884-327ff3abddd9",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": false,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true
},
"width": 320,
"height": 32,
"position": {
"x": 1650,
"y": -300
}
},
{
"id": "dbcd2f98-d809-48c8-bf64-2635f88a2fe9",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "dbcd2f98-d809-48c8-bf64-2635f88a2fe9",
"type": "l2i",
"inputs": {
"tiled": {
"id": "24f5bc7b-f6a1-425d-8ab1-f50b4db5d0df",
"name": "tiled",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
},
"fp32": {
"id": "b146d873-ffb9-4767-986a-5360504841a2",
"name": "fp32",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": true
},
"latents": {
"id": "65441abd-7713-4b00-9d8d-3771404002e8",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"vae": {
"id": "a478b833-6e13-4611-9a10-842c89603c74",
"name": "vae",
"type": "VaeField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"image": {
"id": "c87ae925-f858-417a-8940-8708ba9b4b53",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "4bcb8512-b5a1-45f1-9e52-6e92849f9d6c",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "23e41c00-a354-48e8-8f59-5875679c27ab",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": true,
"isIntermediate": false
},
"width": 320,
"height": 224,
"position": {
"x": 2025,
"y": -250
}
},
{
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"type": "rand_int",
"inputs": {
"low": {
"id": "3ec65a37-60ba-4b6c-a0b2-553dd7a84b84",
"name": "low",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"high": {
"id": "085f853a-1a5f-494d-8bec-e4ba29a3f2d1",
"name": "high",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 2147483647
}
},
"outputs": {
"value": {
"id": "812ade4d-7699-4261-b9fc-a6c9d2ab55ee",
"name": "value",
"type": "integer",
"fieldKind": "output"
}
},
"label": "Random Seed",
"isOpen": false,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true
},
"width": 320,
"height": 32,
"position": {
"x": 1650,
"y": -350
}
},
{
"id": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"type": "sdxl_model_loader",
"inputs": {
"model": {
"id": "39f9e799-bc95-4318-a200-30eed9e60c42",
"name": "model",
"type": "SDXLMainModelField",
"fieldKind": "input",
"label": "",
"value": {
"model_name": "stable-diffusion-xl-base-1.0",
"base_model": "sdxl",
"model_type": "main"
}
}
},
"outputs": {
"unet": {
"id": "2626a45e-59aa-4609-b131-2d45c5eaed69",
"name": "unet",
"type": "UNetField",
"fieldKind": "output"
},
"clip": {
"id": "7c9c42fa-93d5-4639-ab8b-c4d9b0559baf",
"name": "clip",
"type": "ClipField",
"fieldKind": "output"
},
"clip2": {
"id": "0dafddcf-a472-49c1-a47c-7b8fab4c8bc9",
"name": "clip2",
"type": "ClipField",
"fieldKind": "output"
},
"vae": {
"id": "ee6a6997-1b3c-4ff3-99ce-1e7bfba2750c",
"name": "vae",
"type": "VaeField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true
},
"width": 320,
"height": 234,
"position": {
"x": 475,
"y": 25
}
},
{
"id": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"type": "sdxl_compel_prompt",
"inputs": {
"prompt": {
"id": "5a6889e6-95cb-462f-8f4a-6b93ae7afaec",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "Positive Prompt",
"value": ""
},
"style": {
"id": "f240d0e6-3a1c-4320-af23-20ebb707c276",
"name": "style",
"type": "string",
"fieldKind": "input",
"label": "Positive Style",
"value": ""
},
"original_width": {
"id": "05af07b0-99a0-4a68-8ad2-697bbdb7fc7e",
"name": "original_width",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"original_height": {
"id": "2c771996-a998-43b7-9dd3-3792664d4e5b",
"name": "original_height",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"crop_top": {
"id": "66519dca-a151-4e3e-ae1f-88f1f9877bde",
"name": "crop_top",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"crop_left": {
"id": "349cf2e9-f3d0-4e16-9ae2-7097d25b6a51",
"name": "crop_left",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"target_width": {
"id": "44499347-7bd6-4a73-99d6-5a982786db05",
"name": "target_width",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"target_height": {
"id": "fda359b0-ab80-4f3c-805b-c9f61319d7d2",
"name": "target_height",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"clip": {
"id": "b447adaf-a649-4a76-a827-046a9fc8d89b",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
},
"clip2": {
"id": "86ee4e32-08f9-4baa-9163-31d93f5c0187",
"name": "clip2",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "7c10118e-7b4e-4911-b98e-d3ba6347dfd0",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "SDXL Positive Compel Prompt",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true
},
"width": 320,
"height": 764,
"position": {
"x": 900,
"y": -350
}
},
{
"id": "87ee6243-fb0d-4f77-ad5f-56591659339e",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "87ee6243-fb0d-4f77-ad5f-56591659339e",
"type": "denoise_latents",
"inputs": {
"noise": {
"id": "4884a4b7-cc19-4fea-83c7-1f940e6edd24",
"name": "noise",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"steps": {
"id": "4c61675c-b6b9-41ac-b187-b5c13b587039",
"name": "steps",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 36
},
"cfg_scale": {
"id": "f8213f35-4637-4a1a-83f4-1f8cfb9ccd2c",
"name": "cfg_scale",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 7.5
},
"denoising_start": {
"id": "01e2f30d-0acd-4e21-98b9-a9b8e24c6db2",
"name": "denoising_start",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 0
},
"denoising_end": {
"id": "3db95479-a73b-4c75-9b44-08daec16b224",
"name": "denoising_end",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 1
},
"scheduler": {
"id": "db8430a9-64c3-4c54-ae38-9f597cf7b6d5",
"name": "scheduler",
"type": "Scheduler",
"fieldKind": "input",
"label": "",
"value": "euler"
},
"control": {
"id": "599b49e8-6435-4576-be41-a5155f3a17e3",
"name": "control",
"type": "ControlField",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "226f9e91-454e-4159-9fa6-019c0cf29277",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"denoise_mask": {
"id": "de019cb6-7fb5-45bf-a266-22e20889893f",
"name": "denoise_mask",
"type": "DenoiseMaskField",
"fieldKind": "input",
"label": ""
},
"positive_conditioning": {
"id": "02fc400a-110d-470e-8411-f404f966a949",
"name": "positive_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"negative_conditioning": {
"id": "4bd3bdfa-fcf4-42be-8e47-1e314255798f",
"name": "negative_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"unet": {
"id": "7c2d58a8-b5f1-4e63-8ffd-8ada52c35832",
"name": "unet",
"type": "UNetField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"latents": {
"id": "6a6fa492-de26-4e95-b1d9-a322fe37eb13",
"name": "latents",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "a9790729-7d6c-4418-903d-4da961fccf56",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "fa74efe5-7330-4a3c-b256-c82a544585b4",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true
},
"width": 320,
"height": 558,
"position": {
"x": 1650,
"y": -250
}
}
],
"edges": [
{
"source": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"target": "55705012-79b9-4aac-9f26-c0b10309785b",
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2-55705012-79b9-4aac-9f26-c0b10309785b-collapsed",
"type": "collapsed"
},
{
"source": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"sourceHandle": "value",
"target": "55705012-79b9-4aac-9f26-c0b10309785b",
"targetHandle": "seed",
"id": "reactflow__edge-ea94bc37-d995-4a83-aa99-4af42479f2f2value-55705012-79b9-4aac-9f26-c0b10309785bseed",
"type": "default"
},
{
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"sourceHandle": "clip",
"target": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"targetHandle": "clip",
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22clip-faf965a4-7530-427b-b1f3-4ba6505c2a08clip",
"type": "default"
},
{
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"sourceHandle": "clip2",
"target": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"targetHandle": "clip2",
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22clip2-faf965a4-7530-427b-b1f3-4ba6505c2a08clip2",
"type": "default"
},
{
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"sourceHandle": "clip",
"target": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"targetHandle": "clip",
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22clip-3193ad09-a7c2-4bf4-a3a9-1c61cc33a204clip",
"type": "default"
},
{
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"sourceHandle": "clip2",
"target": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"targetHandle": "clip2",
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22clip2-3193ad09-a7c2-4bf4-a3a9-1c61cc33a204clip2",
"type": "default"
},
{
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"sourceHandle": "vae",
"target": "dbcd2f98-d809-48c8-bf64-2635f88a2fe9",
"targetHandle": "vae",
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22vae-dbcd2f98-d809-48c8-bf64-2635f88a2fe9vae",
"type": "default"
},
{
"source": "87ee6243-fb0d-4f77-ad5f-56591659339e",
"sourceHandle": "latents",
"target": "dbcd2f98-d809-48c8-bf64-2635f88a2fe9",
"targetHandle": "latents",
"id": "reactflow__edge-87ee6243-fb0d-4f77-ad5f-56591659339elatents-dbcd2f98-d809-48c8-bf64-2635f88a2fe9latents",
"type": "default"
},
{
"source": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"sourceHandle": "conditioning",
"target": "87ee6243-fb0d-4f77-ad5f-56591659339e",
"targetHandle": "positive_conditioning",
"id": "reactflow__edge-faf965a4-7530-427b-b1f3-4ba6505c2a08conditioning-87ee6243-fb0d-4f77-ad5f-56591659339epositive_conditioning",
"type": "default"
},
{
"source": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"sourceHandle": "conditioning",
"target": "87ee6243-fb0d-4f77-ad5f-56591659339e",
"targetHandle": "negative_conditioning",
"id": "reactflow__edge-3193ad09-a7c2-4bf4-a3a9-1c61cc33a204conditioning-87ee6243-fb0d-4f77-ad5f-56591659339enegative_conditioning",
"type": "default"
},
{
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"sourceHandle": "unet",
"target": "87ee6243-fb0d-4f77-ad5f-56591659339e",
"targetHandle": "unet",
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22unet-87ee6243-fb0d-4f77-ad5f-56591659339eunet",
"type": "default"
},
{
"source": "55705012-79b9-4aac-9f26-c0b10309785b",
"sourceHandle": "noise",
"target": "87ee6243-fb0d-4f77-ad5f-56591659339e",
"targetHandle": "noise",
"id": "reactflow__edge-55705012-79b9-4aac-9f26-c0b10309785bnoise-87ee6243-fb0d-4f77-ad5f-56591659339enoise",
"type": "default"
}
]
}
{
"name": "Text to Image",
"author": "InvokeAI",
"description": "Sample text to image workflow for Stable Diffusion 1.5/2",
"version": "1.0.1",
"contact": "invoke@invoke.ai",
"tags": "text2image, SD1.5, SD2, default",
"notes": "",
"exposedFields": [
{
"nodeId": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"fieldName": "model"
},
{
"nodeId": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"fieldName": "prompt"
},
{
"nodeId": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"fieldName": "prompt"
},
{
"nodeId": "75899702-fa44-46d2-b2d5-3e17f234c3e7",
"fieldName": "steps"
}
],
"meta": {
"version": "1.0.0"
},
"nodes": [
{
"id": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"type": "compel",
"inputs": {
"prompt": {
"id": "7739aff6-26cb-4016-8897-5a1fb2305e4e",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "Negative Prompt",
"value": ""
},
"clip": {
"id": "48d23dce-a6ae-472a-9f8c-22a714ea5ce0",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "37cf3a9d-f6b7-4b64-8ff6-2558c5ecc447",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "Negative Compel Prompt",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true
},
"width": 320,
"height": 235,
"position": {
"x": 1400,
"y": -75
}
},
{
"id": "55705012-79b9-4aac-9f26-c0b10309785b",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "55705012-79b9-4aac-9f26-c0b10309785b",
"type": "noise",
"inputs": {
"seed": {
"id": "6431737c-918a-425d-a3b4-5d57e2f35d4d",
"name": "seed",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"width": {
"id": "38fc5b66-fe6e-47c8-bba9-daf58e454ed7",
"name": "width",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 512
},
"height": {
"id": "16298330-e2bf-4872-a514-d6923df53cbb",
"name": "height",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 512
},
"use_cpu": {
"id": "c7c436d3-7a7a-4e76-91e4-c6deb271623c",
"name": "use_cpu",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": true
}
},
"outputs": {
"noise": {
"id": "50f650dc-0184-4e23-a927-0497a96fe954",
"name": "noise",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "bb8a452b-133d-42d1-ae4a-3843d7e4109a",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "35cfaa12-3b8b-4b7a-a884-327ff3abddd9",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true
},
"width": 320,
"height": 364,
"position": {
"x": 1000,
"y": 350
}
},
{
"id": "dbcd2f98-d809-48c8-bf64-2635f88a2fe9",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "dbcd2f98-d809-48c8-bf64-2635f88a2fe9",
"type": "l2i",
"inputs": {
"tiled": {
"id": "24f5bc7b-f6a1-425d-8ab1-f50b4db5d0df",
"name": "tiled",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
},
"fp32": {
"id": "b146d873-ffb9-4767-986a-5360504841a2",
"name": "fp32",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
},
"latents": {
"id": "65441abd-7713-4b00-9d8d-3771404002e8",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"vae": {
"id": "a478b833-6e13-4611-9a10-842c89603c74",
"name": "vae",
"type": "VaeField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"image": {
"id": "c87ae925-f858-417a-8940-8708ba9b4b53",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "4bcb8512-b5a1-45f1-9e52-6e92849f9d6c",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "23e41c00-a354-48e8-8f59-5875679c27ab",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": true,
"isIntermediate": false
},
"width": 320,
"height": 266,
"position": {
"x": 1800,
"y": 200
}
},
{
"id": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"type": "main_model_loader",
"inputs": {
"model": {
"id": "993eabd2-40fd-44fe-bce7-5d0c7075ddab",
"name": "model",
"type": "MainModelField",
"fieldKind": "input",
"label": "",
"value": {
"model_name": "stable-diffusion-v1-5",
"base_model": "sd-1",
"model_type": "main"
}
}
},
"outputs": {
"unet": {
"id": "5c18c9db-328d-46d0-8cb9-143391c410be",
"name": "unet",
"type": "UNetField",
"fieldKind": "output"
},
"clip": {
"id": "6effcac0-ec2f-4bf5-a49e-a2c29cf921f4",
"name": "clip",
"type": "ClipField",
"fieldKind": "output"
},
"vae": {
"id": "57683ba3-f5f5-4f58-b9a2-4b83dacad4a1",
"name": "vae",
"type": "VaeField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": false,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true
},
"width": 320,
"height": 32,
"position": {
"x": 1000,
"y": 200
}
},
{
"id": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"type": "compel",
"inputs": {
"prompt": {
"id": "7739aff6-26cb-4016-8897-5a1fb2305e4e",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "Positive Prompt",
"value": ""
},
"clip": {
"id": "48d23dce-a6ae-472a-9f8c-22a714ea5ce0",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "37cf3a9d-f6b7-4b64-8ff6-2558c5ecc447",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "Positive Compel Prompt",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true
},
"width": 320,
"height": 235,
"position": {
"x": 1000,
"y": -75
}
},
{
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"type": "rand_int",
"inputs": {
"low": {
"id": "3ec65a37-60ba-4b6c-a0b2-553dd7a84b84",
"name": "low",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"high": {
"id": "085f853a-1a5f-494d-8bec-e4ba29a3f2d1",
"name": "high",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 2147483647
}
},
"outputs": {
"value": {
"id": "812ade4d-7699-4261-b9fc-a6c9d2ab55ee",
"name": "value",
"type": "integer",
"fieldKind": "output"
}
},
"label": "Random Seed",
"isOpen": false,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true
},
"width": 320,
"height": 32,
"position": {
"x": 1000,
"y": 275
}
},
{
"id": "75899702-fa44-46d2-b2d5-3e17f234c3e7",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "75899702-fa44-46d2-b2d5-3e17f234c3e7",
"type": "denoise_latents",
"inputs": {
"noise": {
"id": "8b18f3eb-40d2-45c1-9a9d-28d6af0dce2b",
"name": "noise",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"steps": {
"id": "0be4373c-46f3-441c-80a7-a4bb6ceb498c",
"name": "steps",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 36
},
"cfg_scale": {
"id": "107267ce-4666-4cd7-94b3-7476b7973ae9",
"name": "cfg_scale",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 7.5
},
"denoising_start": {
"id": "d2ce9f0f-5fc2-48b2-b917-53442941e9a1",
"name": "denoising_start",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 0
},
"denoising_end": {
"id": "8ad51505-b8d0-422a-beb8-96fc6fc6b65f",
"name": "denoising_end",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 1
},
"scheduler": {
"id": "53092874-a43b-4623-91a2-76e62fdb1f2e",
"name": "scheduler",
"type": "Scheduler",
"fieldKind": "input",
"label": "",
"value": "euler"
},
"control": {
"id": "7abe57cc-469d-437e-ad72-a18efa28215f",
"name": "control",
"type": "ControlField",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "add8bbe5-14d0-42d4-a867-9c65ab8dd129",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"denoise_mask": {
"id": "f373a190-0fc8-45b7-ae62-c4aa8e9687e1",
"name": "denoise_mask",
"type": "DenoiseMaskField",
"fieldKind": "input",
"label": ""
},
"positive_conditioning": {
"id": "c7160303-8a23-4f15-9197-855d48802a7f",
"name": "positive_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"negative_conditioning": {
"id": "fd750efa-1dfc-4d0b-accb-828e905ba320",
"name": "negative_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"unet": {
"id": "af1f41ba-ce2a-4314-8d7f-494bb5800381",
"name": "unet",
"type": "UNetField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"latents": {
"id": "8508d04d-f999-4a44-94d0-388ab1401d27",
"name": "latents",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "93dc8287-0a2a-4320-83a4-5e994b7ba23e",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "d9862f5c-0ab5-46fa-8c29-5059bb581d96",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true
},
"width": 320,
"height": 558,
"position": {
"x": 1400,
"y": 200
}
}
],
"edges": [
{
"source": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"sourceHandle": "value",
"target": "55705012-79b9-4aac-9f26-c0b10309785b",
"targetHandle": "seed",
"id": "reactflow__edge-ea94bc37-d995-4a83-aa99-4af42479f2f2value-55705012-79b9-4aac-9f26-c0b10309785bseed",
"type": "default"
},
{
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"sourceHandle": "clip",
"target": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"targetHandle": "clip",
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8clip-7d8bf987-284f-413a-b2fd-d825445a5d6cclip",
"type": "default"
},
{
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"sourceHandle": "clip",
"target": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"targetHandle": "clip",
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8clip-93dc02a4-d05b-48ed-b99c-c9b616af3402clip",
"type": "default"
},
{
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"sourceHandle": "vae",
"target": "dbcd2f98-d809-48c8-bf64-2635f88a2fe9",
"targetHandle": "vae",
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8vae-dbcd2f98-d809-48c8-bf64-2635f88a2fe9vae",
"type": "default"
},
{
"source": "75899702-fa44-46d2-b2d5-3e17f234c3e7",
"sourceHandle": "latents",
"target": "dbcd2f98-d809-48c8-bf64-2635f88a2fe9",
"targetHandle": "latents",
"id": "reactflow__edge-75899702-fa44-46d2-b2d5-3e17f234c3e7latents-dbcd2f98-d809-48c8-bf64-2635f88a2fe9latents",
"type": "default"
},
{
"source": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"sourceHandle": "conditioning",
"target": "75899702-fa44-46d2-b2d5-3e17f234c3e7",
"targetHandle": "positive_conditioning",
"id": "reactflow__edge-7d8bf987-284f-413a-b2fd-d825445a5d6cconditioning-75899702-fa44-46d2-b2d5-3e17f234c3e7positive_conditioning",
"type": "default"
},
{
"source": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"sourceHandle": "conditioning",
"target": "75899702-fa44-46d2-b2d5-3e17f234c3e7",
"targetHandle": "negative_conditioning",
"id": "reactflow__edge-93dc02a4-d05b-48ed-b99c-c9b616af3402conditioning-75899702-fa44-46d2-b2d5-3e17f234c3e7negative_conditioning",
"type": "default"
},
{
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"sourceHandle": "unet",
"target": "75899702-fa44-46d2-b2d5-3e17f234c3e7",
"targetHandle": "unet",
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8unet-75899702-fa44-46d2-b2d5-3e17f234c3e7unet",
"type": "default"
},
{
"source": "55705012-79b9-4aac-9f26-c0b10309785b",
"sourceHandle": "noise",
"target": "75899702-fa44-46d2-b2d5-3e17f234c3e7",
"targetHandle": "noise",
"id": "reactflow__edge-55705012-79b9-4aac-9f26-c0b10309785bnoise-75899702-fa44-46d2-b2d5-3e17f234c3e7noise",
"type": "default"
}
]
}
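
The JSON above is the tail of a text-to-image workflow graph: invocation nodes (a model loader, compel prompt nodes, a rand_int seed and a denoise_latents node) plus the edges that wire output fields to input fields by node id. A minimal inspection sketch, assuming the JSON has been saved locally as workflow.json (the filename is an assumption, not part of the diff):

import json

# Load the workflow shown above (assumes it was saved as "workflow.json").
with open("workflow.json") as f:
    workflow = json.load(f)

# Every node carries a stable id, an invocation type (e.g. "compel", "rand_int",
# "denoise_latents") and typed input/output fields.
for node in workflow["nodes"]:
    data = node.get("data", {})
    print(f'{data.get("type", node["type"]):<20} {node["id"]}')

# Every edge connects sourceHandle on one node to targetHandle on another, by node id.
for edge in workflow["edges"]:
    print(f'{edge["source"]}.{edge["sourceHandle"]} -> {edge["target"]}.{edge["targetHandle"]}')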

View File

@@ -14,7 +14,7 @@ fi
VERSION=$(cd ..; python -c "from invokeai.version import __version__ as version; print(version)")
PATCH=""
VERSION="v${VERSION}${PATCH}"
-LATEST_TAG="v3-latest"
+LATEST_TAG="v3.0-latest"
echo Building installer for version $VERSION
echo "Be certain that you're in the 'installer' directory before continuing."

View File

@@ -332,7 +332,6 @@ class InvokeAiInstance:
Configure the InvokeAI runtime directory
"""
-auto_install = False
# set sys.argv to a consistent state
new_argv = [sys.argv[0]]
for i in range(1, len(sys.argv)):
@@ -341,17 +340,13 @@ class InvokeAiInstance:
new_argv.append(el)
new_argv.append(sys.argv[i + 1])
elif el in ["-y", "--yes", "--yes-to-all"]:
-auto_install = True
+new_argv.append(el)
sys.argv = new_argv
-import messages
import requests  # to catch download exceptions
+from messages import introduction
-auto_install = auto_install or messages.user_wants_auto_configuration()
+introduction()
-if auto_install:
-sys.argv.append("--yes")
-else:
-messages.introduction()
from invokeai.frontend.install.invokeai_configure import invokeai_configure
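
In the hunk above, the base branch routes -y/--yes/--yes-to-all into an auto_install flag that can skip the interactive configuration prompt, while this branch keeps the flag on argv and always calls introduction(). A simplified standalone sketch of that argv-normalization pattern (the helper name and the final print are illustrative; the real method also forwards values for flags such as --root):

import sys

def normalize_argv(argv: list[str]) -> tuple[list[str], bool]:
    """Keep argv consistent for the configure step and detect a fully automatic install."""
    new_argv = [argv[0]]
    auto_install = False
    for el in argv[1:]:
        if el in ("-y", "--yes", "--yes-to-all"):
            auto_install = True  # consume the flag instead of passing it through
        else:
            new_argv.append(el)
    return new_argv, auto_install

if __name__ == "__main__":
    sys.argv, auto_install = normalize_argv(sys.argv)
    print("automatic configuration requested:", auto_install)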

View File

@@ -5,7 +5,6 @@ InvokeAI Installer
import argparse
import os
from pathlib import Path
from installer import Installer
if __name__ == "__main__":

View File

@@ -7,7 +7,7 @@ import os
import platform
from pathlib import Path
-from prompt_toolkit import HTML, prompt
+from prompt_toolkit import prompt
from prompt_toolkit.completion import PathCompleter
from prompt_toolkit.validation import Validator
from rich import box, print
@@ -65,50 +65,17 @@ def confirm_install(dest: Path) -> bool:
if dest.exists():
print(f":exclamation: Directory {dest} already exists :exclamation:")
dest_confirmed = Confirm.ask(
-":stop_sign: (re)install in this location?",
+":stop_sign: Are you sure you want to (re)install in this location?",
default=False,
)
else:
print(f"InvokeAI will be installed in {dest}")
-dest_confirmed = Confirm.ask("Use this location?", default=True)
+dest_confirmed = not Confirm.ask("Would you like to pick a different location?", default=False)
console.line()
return dest_confirmed
-def user_wants_auto_configuration() -> bool:
-"""Prompt the user to choose between manual and auto configuration."""
-console.rule("InvokeAI Configuration Section")
-console.print(
-Panel(
-Group(
-"\n".join(
-[
-"Libraries are installed and InvokeAI will now set up its root directory and configuration. Choose between:",
-"",
-" * AUTOMATIC configuration: install reasonable defaults and a minimal set of starter models.",
-" * MANUAL configuration: manually inspect and adjust configuration options and pick from a larger set of starter models.",
-"",
-"Later you can fine tune your configuration by selecting option [6] 'Change InvokeAI startup options' from the invoke.bat/invoke.sh launcher script.",
-]
-),
-),
-box=box.MINIMAL,
-padding=(1, 1),
-)
-)
-choice = (
-prompt(
-HTML("Choose <b>&lt;a&gt;</b>utomatic or <b>&lt;m&gt;</b>anual configuration [a/m] (a): "),
-validator=Validator.from_callable(
-lambda n: n == "" or n.startswith(("a", "A", "m", "M")), error_message="Please select 'a' or 'm'"
-),
-)
-or "a"
-)
-return choice.lower().startswith("a")
def dest_path(dest=None) -> Path:
"""
Prompt the user for the destination path and create the path
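
The user_wants_auto_configuration helper on the base branch builds its a/m choice on prompt_toolkit's Validator.from_callable. A small self-contained sketch of that prompt pattern (question text and defaults are illustrative, not copied from the diff):

from prompt_toolkit import prompt
from prompt_toolkit.validation import Validator

# Accept an empty answer (falls through to the default) or anything starting with a/m.
validator = Validator.from_callable(
    lambda text: text == "" or text.lower().startswith(("a", "m")),
    error_message="Please select 'a' or 'm'",
)

choice = prompt("Choose (a)utomatic or (m)anual configuration [a/m] (a): ", validator=validator) or "a"
print("automatic" if choice.lower().startswith("a") else "manual", "configuration selected")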

View File

@@ -17,10 +17,9 @@ echo 6. Change InvokeAI startup options
echo 7. Re-run the configure script to fix a broken install or to complete a major upgrade
echo 8. Open the developer console
echo 9. Update InvokeAI
-echo 10. Run the InvokeAI image database maintenance script
+echo 10. Command-line help
-echo 11. Command-line help
echo Q - Quit
-set /P choice="Please enter 1-11, Q: [1] "
+set /P choice="Please enter 1-10, Q: [1] "
if not defined choice set choice=1
IF /I "%choice%" == "1" (
echo Starting the InvokeAI browser-based UI..
@@ -59,11 +58,8 @@ IF /I "%choice%" == "1" (
echo Running invokeai-update...
python -m invokeai.frontend.install.invokeai_update
) ELSE IF /I "%choice%" == "10" (
-echo Running the db maintenance script...
-python .venv\Scripts\invokeai-db-maintenance.exe
-) ELSE IF /I "%choice%" == "11" (
echo Displaying command line help...
-python .venv\Scripts\invokeai-web.exe --help %*
+python .venv\Scripts\invokeai.exe --help %*
pause
exit /b
) ELSE IF /I "%choice%" == "q" (

View File

@@ -46,9 +46,6 @@ if [ "$(uname -s)" == "Darwin" ]; then
export PYTORCH_ENABLE_MPS_FALLBACK=1
fi
-# Avoid glibc memory fragmentation. See invokeai/backend/model_management/README.md for details.
-export MALLOC_MMAP_THRESHOLD_=1048576
# Primary function for the case statement to determine user input
do_choice() {
case $1 in
@@ -100,13 +97,13 @@ do_choice() {
;;
10)
clear
-printf "Running the db maintenance script\n"
+printf "Command-line help\n"
-invokeai-db-maintenance --root ${INVOKEAI_ROOT}
+invokeai --help
;;
-11)
+"HELP 1")
clear
printf "Command-line help\n"
-invokeai-web --help
+invokeai --help
;;
*)
clear
@@ -128,10 +125,7 @@ do_dialog() {
6 "Change InvokeAI startup options"
7 "Re-run the configure script to fix a broken install or to complete a major upgrade"
8 "Open the developer console"
-9 "Update InvokeAI"
+9 "Update InvokeAI")
-10 "Run the InvokeAI image database maintenance script"
-11 "Command-line help"
-)
choice=$(dialog --clear \
--backtitle "\Zb\Zu\Z3InvokeAI" \
@@ -163,10 +157,9 @@ do_line_input() {
printf "7: Re-run the configure script to fix a broken install\n"
printf "8: Open the developer console\n"
printf "9: Update InvokeAI\n"
-printf "10: Run the InvokeAI image database maintenance script\n"
+printf "10: Command-line help\n"
-printf "11: Command-line help\n"
printf "Q: Quit\n\n"
-read -p "Please enter 1-11, Q: [1] " yn
+read -p "Please enter 1-10, Q: [1] " yn
choice=${yn:='1'}
do_choice $choice
clear

View File

@ -1,35 +1,37 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654) # Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from logging import Logger from logging import Logger
import sqlite3
from invokeai.app.services.board_image_record_storage import (
SqliteBoardImageRecordStorage,
)
from invokeai.app.services.board_images import (
BoardImagesService,
BoardImagesServiceDependencies,
)
from invokeai.app.services.board_record_storage import SqliteBoardRecordStorage
from invokeai.app.services.boards import BoardService, BoardServiceDependencies
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.image_record_storage import SqliteImageRecordStorage
from invokeai.app.services.images import ImageService, ImageServiceDependencies
from invokeai.app.services.resource_name import SimpleNameService
from invokeai.app.services.urls import LocalUrlService
from invokeai.backend.util.logging import InvokeAILogger from invokeai.backend.util.logging import InvokeAILogger
from invokeai.version.invokeai_version import __version__ from invokeai.version.invokeai_version import __version__
from ..services.board_image_records.board_image_records_sqlite import SqliteBoardImageRecordStorage from ..services.default_graphs import create_system_graphs
from ..services.board_images.board_images_default import BoardImagesService from ..services.latent_storage import DiskLatentsStorage, ForwardCacheLatentsStorage
from ..services.board_records.board_records_sqlite import SqliteBoardRecordStorage from ..services.graph import GraphExecutionState, LibraryGraph
from ..services.boards.boards_default import BoardService from ..services.image_file_storage import DiskImageFileStorage
from ..services.config import InvokeAIAppConfig from ..services.invocation_queue import MemoryInvocationQueue
from ..services.image_files.image_files_disk import DiskImageFileStorage
from ..services.image_records.image_records_sqlite import SqliteImageRecordStorage
from ..services.images.images_default import ImageService
from ..services.invocation_cache.invocation_cache_memory import MemoryInvocationCache
from ..services.invocation_processor.invocation_processor_default import DefaultInvocationProcessor
from ..services.invocation_queue.invocation_queue_memory import MemoryInvocationQueue
from ..services.invocation_services import InvocationServices from ..services.invocation_services import InvocationServices
from ..services.invocation_stats.invocation_stats_default import InvocationStatsService
from ..services.invoker import Invoker from ..services.invoker import Invoker
from ..services.item_storage.item_storage_sqlite import SqliteItemStorage from ..services.processor import DefaultInvocationProcessor
from ..services.latents_storage.latents_storage_disk import DiskLatentsStorage from ..services.sqlite import SqliteItemStorage
from ..services.latents_storage.latents_storage_forward_cache import ForwardCacheLatentsStorage from ..services.model_manager_service import ModelManagerService
from ..services.model_manager.model_manager_default import ModelManagerService from ..services.batch_manager import BatchManager
from ..services.names.names_default import SimpleNameService from ..services.batch_manager_storage import SqliteBatchProcessStorage
from ..services.session_processor.session_processor_default import DefaultSessionProcessor from ..services.invocation_stats import InvocationStatsService
from ..services.session_queue.session_queue_sqlite import SqliteSessionQueue
from ..services.shared.default_graphs import create_system_graphs
from ..services.shared.graph import GraphExecutionState, LibraryGraph
from ..services.shared.sqlite import SqliteDatabase
from ..services.urls.urls_default import LocalUrlService
from .events import FastAPIEventService from .events import FastAPIEventService
@ -49,7 +51,7 @@ def check_internet() -> bool:
return False return False
logger = InvokeAILogger.get_logger() logger = InvokeAILogger.getLogger()
class ApiDependencies: class ApiDependencies:
@ -63,65 +65,84 @@ class ApiDependencies:
logger.info(f"Root directory = {str(config.root_path)}") logger.info(f"Root directory = {str(config.root_path)}")
logger.debug(f"Internet connectivity is {config.internet_available}") logger.debug(f"Internet connectivity is {config.internet_available}")
events = FastAPIEventService(event_handler_id)
output_folder = config.output_path output_folder = config.output_path
db = SqliteDatabase(config, logger) # TODO: build a file/path manager?
db_path = config.db_path
db_path.parent.mkdir(parents=True, exist_ok=True)
db_location = str(db_path)
configuration = config db_conn = sqlite3.connect(db_location, check_same_thread=False) # TODO: figure out a better threading solution
logger = logger
graph_execution_manager = SqliteItemStorage[GraphExecutionState](conn=db_conn, table_name="graph_executions")
board_image_records = SqliteBoardImageRecordStorage(db=db)
board_images = BoardImagesService()
board_records = SqliteBoardRecordStorage(db=db)
boards = BoardService()
events = FastAPIEventService(event_handler_id)
graph_execution_manager = SqliteItemStorage[GraphExecutionState](db=db, table_name="graph_executions")
graph_library = SqliteItemStorage[LibraryGraph](db=db, table_name="graphs")
image_files = DiskImageFileStorage(f"{output_folder}/images")
image_records = SqliteImageRecordStorage(db=db)
images = ImageService()
invocation_cache = MemoryInvocationCache(max_cache_size=config.node_cache_size)
latents = ForwardCacheLatentsStorage(DiskLatentsStorage(f"{output_folder}/latents"))
model_manager = ModelManagerService(config, logger)
names = SimpleNameService()
performance_statistics = InvocationStatsService()
processor = DefaultInvocationProcessor()
queue = MemoryInvocationQueue()
session_processor = DefaultSessionProcessor()
session_queue = SqliteSessionQueue(db=db)
urls = LocalUrlService() urls = LocalUrlService()
image_record_storage = SqliteImageRecordStorage(conn=db_conn)
image_file_storage = DiskImageFileStorage(f"{output_folder}/images")
names = SimpleNameService()
latents = ForwardCacheLatentsStorage(DiskLatentsStorage(f"{output_folder}/latents"))
board_record_storage = SqliteBoardRecordStorage(conn=db_conn)
board_image_record_storage = SqliteBoardImageRecordStorage(conn=db_conn)
boards = BoardService(
services=BoardServiceDependencies(
board_image_record_storage=board_image_record_storage,
board_record_storage=board_record_storage,
image_record_storage=image_record_storage,
url=urls,
logger=logger,
)
)
board_images = BoardImagesService(
services=BoardImagesServiceDependencies(
board_image_record_storage=board_image_record_storage,
board_record_storage=board_record_storage,
image_record_storage=image_record_storage,
url=urls,
logger=logger,
)
)
images = ImageService(
services=ImageServiceDependencies(
board_image_record_storage=board_image_record_storage,
image_record_storage=image_record_storage,
image_file_storage=image_file_storage,
url=urls,
logger=logger,
names=names,
graph_execution_manager=graph_execution_manager,
)
)
batch_manager_storage = SqliteBatchProcessStorage(conn=db_conn)
batch_manager = BatchManager(batch_manager_storage)
services = InvocationServices( services = InvocationServices(
board_image_records=board_image_records, model_manager=ModelManagerService(config, logger),
board_images=board_images,
board_records=board_records,
boards=boards,
configuration=configuration,
events=events, events=events,
graph_execution_manager=graph_execution_manager,
graph_library=graph_library,
image_files=image_files,
image_records=image_records,
images=images,
invocation_cache=invocation_cache,
latents=latents, latents=latents,
images=images,
batch_manager=batch_manager,
boards=boards,
board_images=board_images,
queue=MemoryInvocationQueue(),
graph_library=SqliteItemStorage[LibraryGraph](conn=db_conn, table_name="graphs"),
graph_execution_manager=graph_execution_manager,
processor=DefaultInvocationProcessor(),
configuration=config,
performance_statistics=InvocationStatsService(graph_execution_manager),
logger=logger, logger=logger,
model_manager=model_manager,
names=names,
performance_statistics=performance_statistics,
processor=processor,
queue=queue,
session_processor=session_processor,
session_queue=session_queue,
urls=urls,
) )
create_system_graphs(services.graph_library) create_system_graphs(services.graph_library)
ApiDependencies.invoker = Invoker(services) ApiDependencies.invoker = Invoker(services)
db.clean()
@staticmethod @staticmethod
def shutdown(): def shutdown():
if ApiDependencies.invoker: if ApiDependencies.invoker:
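
The two sides wire the same services differently: the base branch wraps SQLite access in a SqliteDatabase object and passes db=db to each storage class, while this branch opens one shared sqlite3 connection (check_same_thread=False) and passes conn=db_conn, including to the new SqliteBatchProcessStorage/BatchManager pair. A minimal sketch of the shared-connection pattern with a stand-in storage class (KeyValueStore is hypothetical and only illustrates the wiring):

import sqlite3

class KeyValueStore:
    """Stand-in for services like SqliteItemStorage that accept a shared connection."""

    def __init__(self, conn: sqlite3.Connection, table_name: str) -> None:
        self._conn = conn
        self._table = table_name
        self._conn.execute(f"CREATE TABLE IF NOT EXISTS {table_name} (key TEXT PRIMARY KEY, value TEXT)")

    def put(self, key: str, value: str) -> None:
        self._conn.execute(f"INSERT OR REPLACE INTO {self._table} (key, value) VALUES (?, ?)", (key, value))
        self._conn.commit()

# One connection shared by every storage service; check_same_thread=False mirrors the diff
# and leaves thread-safety to the callers (the "better threading solution" TODO).
db_conn = sqlite3.connect("invokeai.db", check_same_thread=False)
graph_executions = KeyValueStore(db_conn, "graph_executions")
graph_library = KeyValueStore(db_conn, "graphs")
graph_executions.put("some-session-id", "{}")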

View File

@@ -7,7 +7,7 @@ from typing import Any
from fastapi_events.dispatcher import dispatch
-from ..services.events.events_base import EventServiceBase
+from ..services.events import EventServiceBase
class FastAPIEventService(EventServiceBase):

View File

@ -7,7 +7,6 @@ from fastapi.routing import APIRouter
from pydantic import BaseModel, Field from pydantic import BaseModel, Field
from invokeai.app.invocations.upscale import ESRGAN_MODELS from invokeai.app.invocations.upscale import ESRGAN_MODELS
from invokeai.app.services.invocation_cache.invocation_cache_common import InvocationCacheStatus
from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
from invokeai.backend.image_util.patchmatch import PatchMatch from invokeai.backend.image_util.patchmatch import PatchMatch
from invokeai.backend.image_util.safety_checker import SafetyChecker from invokeai.backend.image_util.safety_checker import SafetyChecker
@ -104,43 +103,3 @@ async def set_log_level(
"""Sets the log verbosity level""" """Sets the log verbosity level"""
ApiDependencies.invoker.services.logger.setLevel(level) ApiDependencies.invoker.services.logger.setLevel(level)
return LogLevel(ApiDependencies.invoker.services.logger.level) return LogLevel(ApiDependencies.invoker.services.logger.level)
@app_router.delete(
"/invocation_cache",
operation_id="clear_invocation_cache",
responses={200: {"description": "The operation was successful"}},
)
async def clear_invocation_cache() -> None:
"""Clears the invocation cache"""
ApiDependencies.invoker.services.invocation_cache.clear()
@app_router.put(
"/invocation_cache/enable",
operation_id="enable_invocation_cache",
responses={200: {"description": "The operation was successful"}},
)
async def enable_invocation_cache() -> None:
"""Clears the invocation cache"""
ApiDependencies.invoker.services.invocation_cache.enable()
@app_router.put(
"/invocation_cache/disable",
operation_id="disable_invocation_cache",
responses={200: {"description": "The operation was successful"}},
)
async def disable_invocation_cache() -> None:
"""Clears the invocation cache"""
ApiDependencies.invoker.services.invocation_cache.disable()
@app_router.get(
"/invocation_cache/status",
operation_id="get_invocation_cache_status",
responses={200: {"model": InvocationCacheStatus}},
)
async def get_invocation_cache_status() -> InvocationCacheStatus:
"""Clears the invocation cache"""
return ApiDependencies.invoker.services.invocation_cache.get_status()
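
These four invocation-cache endpoints exist only on the base branch, so the compare shows them as removals. A hedged client sketch of how they would be called over HTTP (the host, port and /api/v1/app prefix are assumptions, not shown in the hunk):

import requests

BASE = "http://127.0.0.1:9090/api/v1/app"  # assumed host, port and router prefix

print(requests.get(f"{BASE}/invocation_cache/status").json())  # InvocationCacheStatus
requests.put(f"{BASE}/invocation_cache/disable")               # stop caching node outputs
requests.put(f"{BASE}/invocation_cache/enable")                # resume caching
requests.delete(f"{BASE}/invocation_cache")                    # clear the cache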

View File

@ -0,0 +1,106 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from fastapi import Body, HTTPException, Path, Response
from fastapi.routing import APIRouter
from invokeai.app.services.batch_manager_storage import BatchSession, BatchSessionNotFoundException
# Importing * is bad karma but needed here for node detection
from ...invocations import * # noqa: F401 F403
from ...services.batch_manager import Batch, BatchProcessResponse
from ...services.graph import Graph
from ..dependencies import ApiDependencies
batches_router = APIRouter(prefix="/v1/batches", tags=["sessions"])
@batches_router.post(
"/",
operation_id="create_batch",
responses={
200: {"model": BatchProcessResponse},
400: {"description": "Invalid json"},
},
)
async def create_batch(
graph: Graph = Body(description="The graph to initialize the session with"),
batch: Batch = Body(description="Batch config to apply to the given graph"),
) -> BatchProcessResponse:
"""Creates a batch process"""
return ApiDependencies.invoker.services.batch_manager.create_batch_process(batch, graph)
@batches_router.put(
"/b/{batch_process_id}/invoke",
operation_id="start_batch",
responses={
202: {"description": "Batch process started"},
404: {"description": "Batch session not found"},
},
)
async def start_batch(
batch_process_id: str = Path(description="ID of Batch to start"),
) -> Response:
"""Executes a batch process"""
try:
ApiDependencies.invoker.services.batch_manager.run_batch_process(batch_process_id)
return Response(status_code=202)
except BatchSessionNotFoundException:
raise HTTPException(status_code=404, detail="Batch session not found")
@batches_router.delete(
"/b/{batch_process_id}",
operation_id="cancel_batch",
responses={202: {"description": "The batch is canceled"}},
)
async def cancel_batch(
batch_process_id: str = Path(description="The id of the batch process to cancel"),
) -> Response:
"""Cancels a batch process"""
ApiDependencies.invoker.services.batch_manager.cancel_batch_process(batch_process_id)
return Response(status_code=202)
@batches_router.get(
"/incomplete",
operation_id="list_incomplete_batches",
responses={200: {"model": list[BatchProcessResponse]}},
)
async def list_incomplete_batches() -> list[BatchProcessResponse]:
"""Lists incomplete batch processes"""
return ApiDependencies.invoker.services.batch_manager.get_incomplete_batch_processes()
@batches_router.get(
"/",
operation_id="list_batches",
responses={200: {"model": list[BatchProcessResponse]}},
)
async def list_batches() -> list[BatchProcessResponse]:
"""Lists all batch processes"""
return ApiDependencies.invoker.services.batch_manager.get_batch_processes()
@batches_router.get(
"/b/{batch_process_id}",
operation_id="get_batch",
responses={200: {"model": BatchProcessResponse}},
)
async def get_batch(
batch_process_id: str = Path(description="The id of the batch process to get"),
) -> BatchProcessResponse:
"""Gets a Batch Process"""
return ApiDependencies.invoker.services.batch_manager.get_batch(batch_process_id)
@batches_router.get(
"/b/{batch_process_id}/sessions",
operation_id="get_batch_sessions",
responses={200: {"model": list[BatchSession]}},
)
async def get_batch_sessions(
batch_process_id: str = Path(description="The id of the batch process to get"),
) -> list[BatchSession]:
"""Gets a list of batch sessions for a given batch process"""
return ApiDependencies.invoker.services.batch_manager.get_sessions(batch_process_id)
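
This new router is the HTTP surface for the batch manager. A hedged usage sketch with requests (host and port are assumed; the graph and batch payloads are stand-ins and must match the Graph and Batch schemas, and the response field name is an assumption about BatchProcessResponse):

import requests

BASE = "http://127.0.0.1:9090/api/v1/batches"  # assumed host and port

graph = {"id": "example-graph", "nodes": {}, "edges": []}  # stand-in Graph payload
batch = {"data": []}                                       # stand-in Batch config

created = requests.post(f"{BASE}/", json={"graph": graph, "batch": batch}).json()
batch_id = created["batch_id"]  # field name assumed; not shown in this hunk

requests.put(f"{BASE}/b/{batch_id}/invoke")                  # start execution, 202 on success
print(requests.get(f"{BASE}/b/{batch_id}/sessions").json())  # BatchSession list it spawned
requests.delete(f"{BASE}/b/{batch_id}")                      # cancel the batch process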

View File

@@ -4,9 +4,9 @@ from fastapi import Body, HTTPException, Path, Query
from fastapi.routing import APIRouter
from pydantic import BaseModel, Field
-from invokeai.app.services.board_records.board_records_common import BoardChanges
-from invokeai.app.services.boards.boards_common import BoardDTO
-from invokeai.app.services.shared.pagination import OffsetPaginatedResults
+from invokeai.app.services.board_record_storage import BoardChanges
+from invokeai.app.services.image_record_storage import OffsetPaginatedResults
+from invokeai.app.services.models.board_record import BoardDTO
from ..dependencies import ApiDependencies

View File

@ -1,17 +1,20 @@
import io import io
from typing import Optional from typing import Optional
from PIL import Image
from fastapi import Body, HTTPException, Path, Query, Request, Response, UploadFile from fastapi import Body, HTTPException, Path, Query, Request, Response, UploadFile
from fastapi.responses import FileResponse from fastapi.responses import FileResponse
from fastapi.routing import APIRouter from fastapi.routing import APIRouter
from PIL import Image
from pydantic import BaseModel, Field from pydantic import BaseModel, Field
from invokeai.app.invocations.metadata import ImageMetadata from invokeai.app.invocations.metadata import ImageMetadata
from invokeai.app.services.image_records.image_records_common import ImageCategory, ImageRecordChanges, ResourceOrigin from invokeai.app.models.image import ImageCategory, ResourceOrigin
from invokeai.app.services.images.images_common import ImageDTO, ImageUrlsDTO from invokeai.app.services.image_record_storage import OffsetPaginatedResults
from invokeai.app.services.shared.pagination import OffsetPaginatedResults from invokeai.app.services.models.image_record import (
ImageDTO,
ImageRecordChanges,
ImageUrlsDTO,
)
from ..dependencies import ApiDependencies from ..dependencies import ApiDependencies
images_router = APIRouter(prefix="/v1/images", tags=["images"]) images_router = APIRouter(prefix="/v1/images", tags=["images"])
@ -42,7 +45,7 @@ async def upload_image(
crop_visible: Optional[bool] = Query(default=False, description="Whether to crop the image"), crop_visible: Optional[bool] = Query(default=False, description="Whether to crop the image"),
) -> ImageDTO: ) -> ImageDTO:
"""Uploads an image""" """Uploads an image"""
if not file.content_type or not file.content_type.startswith("image"): if not file.content_type.startswith("image"):
raise HTTPException(status_code=415, detail="Not an image") raise HTTPException(status_code=415, detail="Not an image")
contents = await file.read() contents = await file.read()
@ -322,20 +325,3 @@ async def unstar_images_in_list(
return ImagesUpdatedFromListResult(updated_image_names=updated_image_names) return ImagesUpdatedFromListResult(updated_image_names=updated_image_names)
except Exception: except Exception:
raise HTTPException(status_code=500, detail="Failed to unstar images") raise HTTPException(status_code=500, detail="Failed to unstar images")
class ImagesDownloaded(BaseModel):
response: Optional[str] = Field(
description="If defined, the message to display to the user when images begin downloading"
)
@images_router.post("/download", operation_id="download_images_from_list", response_model=ImagesDownloaded)
async def download_images_from_list(
image_names: list[str] = Body(description="The list of names of images to download", embed=True),
board_id: Optional[str] = Body(
default=None, description="The board from which image should be downloaded from", embed=True
),
) -> ImagesDownloaded:
# return ImagesDownloaded(response="Your images are downloading")
raise HTTPException(status_code=501, detail="Endpoint is not yet implemented")

View File

@ -2,35 +2,29 @@
import pathlib import pathlib
from typing import Annotated, List, Literal, Optional, Union from typing import Literal, List, Optional, Union
from fastapi import Body, Path, Query, Response from fastapi import Body, Path, Query, Response
from fastapi.routing import APIRouter from fastapi.routing import APIRouter
from pydantic import BaseModel, ConfigDict, Field, TypeAdapter from pydantic import BaseModel, parse_obj_as
from starlette.exceptions import HTTPException from starlette.exceptions import HTTPException
from invokeai.backend import BaseModelType, ModelType from invokeai.backend import BaseModelType, ModelType
from invokeai.backend.model_management import MergeInterpolationMethod
from invokeai.backend.model_management.models import ( from invokeai.backend.model_management.models import (
OPENAPI_MODEL_CONFIGS, OPENAPI_MODEL_CONFIGS,
InvalidModelException,
ModelNotFoundException,
SchedulerPredictionType, SchedulerPredictionType,
ModelNotFoundException,
InvalidModelException,
) )
from invokeai.backend.model_management import MergeInterpolationMethod
from ..dependencies import ApiDependencies from ..dependencies import ApiDependencies
models_router = APIRouter(prefix="/v1/models", tags=["models"]) models_router = APIRouter(prefix="/v1/models", tags=["models"])
UpdateModelResponse = Union[tuple(OPENAPI_MODEL_CONFIGS)] UpdateModelResponse = Union[tuple(OPENAPI_MODEL_CONFIGS)]
update_models_response_adapter = TypeAdapter(UpdateModelResponse)
ImportModelResponse = Union[tuple(OPENAPI_MODEL_CONFIGS)] ImportModelResponse = Union[tuple(OPENAPI_MODEL_CONFIGS)]
import_models_response_adapter = TypeAdapter(ImportModelResponse)
ConvertModelResponse = Union[tuple(OPENAPI_MODEL_CONFIGS)] ConvertModelResponse = Union[tuple(OPENAPI_MODEL_CONFIGS)]
convert_models_response_adapter = TypeAdapter(ConvertModelResponse)
MergeModelResponse = Union[tuple(OPENAPI_MODEL_CONFIGS)] MergeModelResponse = Union[tuple(OPENAPI_MODEL_CONFIGS)]
ImportModelAttributes = Union[tuple(OPENAPI_MODEL_CONFIGS)] ImportModelAttributes = Union[tuple(OPENAPI_MODEL_CONFIGS)]
@ -38,11 +32,6 @@ ImportModelAttributes = Union[tuple(OPENAPI_MODEL_CONFIGS)]
class ModelsList(BaseModel): class ModelsList(BaseModel):
models: list[Union[tuple(OPENAPI_MODEL_CONFIGS)]] models: list[Union[tuple(OPENAPI_MODEL_CONFIGS)]]
model_config = ConfigDict(use_enum_values=True)
models_list_adapter = TypeAdapter(ModelsList)
@models_router.get( @models_router.get(
"/", "/",
@ -60,7 +49,7 @@ async def list_models(
models_raw.extend(ApiDependencies.invoker.services.model_manager.list_models(base_model, model_type)) models_raw.extend(ApiDependencies.invoker.services.model_manager.list_models(base_model, model_type))
else: else:
models_raw = ApiDependencies.invoker.services.model_manager.list_models(None, model_type) models_raw = ApiDependencies.invoker.services.model_manager.list_models(None, model_type)
models = models_list_adapter.validate_python({"models": models_raw}) models = parse_obj_as(ModelsList, {"models": models_raw})
return models return models
@ -116,14 +105,11 @@ async def update_model(
info.path = new_info.get("path") info.path = new_info.get("path")
# replace empty string values with None/null to avoid phenomenon of vae: '' # replace empty string values with None/null to avoid phenomenon of vae: ''
info_dict = info.model_dump() info_dict = info.dict()
info_dict = {x: info_dict[x] if info_dict[x] else None for x in info_dict.keys()} info_dict = {x: info_dict[x] if info_dict[x] else None for x in info_dict.keys()}
ApiDependencies.invoker.services.model_manager.update_model( ApiDependencies.invoker.services.model_manager.update_model(
model_name=model_name, model_name=model_name, base_model=base_model, model_type=model_type, model_attributes=info_dict
base_model=base_model,
model_type=model_type,
model_attributes=info_dict,
) )
model_raw = ApiDependencies.invoker.services.model_manager.list_model( model_raw = ApiDependencies.invoker.services.model_manager.list_model(
@ -131,7 +117,7 @@ async def update_model(
base_model=base_model, base_model=base_model,
model_type=model_type, model_type=model_type,
) )
model_response = update_models_response_adapter.validate_python(model_raw) model_response = parse_obj_as(UpdateModelResponse, model_raw)
except ModelNotFoundException as e: except ModelNotFoundException as e:
raise HTTPException(status_code=404, detail=str(e)) raise HTTPException(status_code=404, detail=str(e))
except ValueError as e: except ValueError as e:
@ -160,21 +146,18 @@ async def update_model(
async def import_model( async def import_model(
location: str = Body(description="A model path, repo_id or URL to import"), location: str = Body(description="A model path, repo_id or URL to import"),
prediction_type: Optional[Literal["v_prediction", "epsilon", "sample"]] = Body( prediction_type: Optional[Literal["v_prediction", "epsilon", "sample"]] = Body(
description="Prediction type for SDv2 checkpoints and rare SDv1 checkpoints", description="Prediction type for SDv2 checkpoint files", default="v_prediction"
default=None,
), ),
) -> ImportModelResponse: ) -> ImportModelResponse:
"""Add a model using its local path, repo_id, or remote URL. Model characteristics will be probed and configured automatically""" """Add a model using its local path, repo_id, or remote URL. Model characteristics will be probed and configured automatically"""
location = location.strip("\"' ")
items_to_import = {location} items_to_import = {location}
prediction_types = {x.value: x for x in SchedulerPredictionType} prediction_types = {x.value: x for x in SchedulerPredictionType}
logger = ApiDependencies.invoker.services.logger logger = ApiDependencies.invoker.services.logger
try: try:
installed_models = ApiDependencies.invoker.services.model_manager.heuristic_import( installed_models = ApiDependencies.invoker.services.model_manager.heuristic_import(
items_to_import=items_to_import, items_to_import=items_to_import, prediction_type_helper=lambda x: prediction_types.get(prediction_type)
prediction_type_helper=lambda x: prediction_types.get(prediction_type),
) )
info = installed_models.get(location) info = installed_models.get(location)
@ -186,7 +169,7 @@ async def import_model(
model_raw = ApiDependencies.invoker.services.model_manager.list_model( model_raw = ApiDependencies.invoker.services.model_manager.list_model(
model_name=info.name, base_model=info.base_model, model_type=info.model_type model_name=info.name, base_model=info.base_model, model_type=info.model_type
) )
return import_models_response_adapter.validate_python(model_raw) return parse_obj_as(ImportModelResponse, model_raw)
except ModelNotFoundException as e: except ModelNotFoundException as e:
logger.error(str(e)) logger.error(str(e))
@ -220,18 +203,13 @@ async def add_model(
try: try:
ApiDependencies.invoker.services.model_manager.add_model( ApiDependencies.invoker.services.model_manager.add_model(
info.model_name, info.model_name, info.base_model, info.model_type, model_attributes=info.dict()
info.base_model,
info.model_type,
model_attributes=info.model_dump(),
) )
logger.info(f"Successfully added {info.model_name}") logger.info(f"Successfully added {info.model_name}")
model_raw = ApiDependencies.invoker.services.model_manager.list_model( model_raw = ApiDependencies.invoker.services.model_manager.list_model(
model_name=info.model_name, model_name=info.model_name, base_model=info.base_model, model_type=info.model_type
base_model=info.base_model,
model_type=info.model_type,
) )
return import_models_response_adapter.validate_python(model_raw) return parse_obj_as(ImportModelResponse, model_raw)
except ModelNotFoundException as e: except ModelNotFoundException as e:
logger.error(str(e)) logger.error(str(e))
raise HTTPException(status_code=404, detail=str(e)) raise HTTPException(status_code=404, detail=str(e))
@ -243,10 +221,7 @@ async def add_model(
@models_router.delete( @models_router.delete(
"/{base_model}/{model_type}/{model_name}", "/{base_model}/{model_type}/{model_name}",
operation_id="del_model", operation_id="del_model",
responses={ responses={204: {"description": "Model deleted successfully"}, 404: {"description": "Model not found"}},
204: {"description": "Model deleted successfully"},
404: {"description": "Model not found"},
},
status_code=204, status_code=204,
response_model=None, response_model=None,
) )
@ -302,7 +277,7 @@ async def convert_model(
model_raw = ApiDependencies.invoker.services.model_manager.list_model( model_raw = ApiDependencies.invoker.services.model_manager.list_model(
model_name, base_model=base_model, model_type=model_type model_name, base_model=base_model, model_type=model_type
) )
response = convert_models_response_adapter.validate_python(model_raw) response = parse_obj_as(ConvertModelResponse, model_raw)
except ModelNotFoundException as e: except ModelNotFoundException as e:
raise HTTPException(status_code=404, detail=f"Model '{model_name}' not found: {str(e)}") raise HTTPException(status_code=404, detail=f"Model '{model_name}' not found: {str(e)}")
except ValueError as e: except ValueError as e:
@ -325,8 +300,7 @@ async def search_for_models(
) -> List[pathlib.Path]: ) -> List[pathlib.Path]:
if not search_path.is_dir(): if not search_path.is_dir():
raise HTTPException( raise HTTPException(
status_code=404, status_code=404, detail=f"The search path '{search_path}' does not exist or is not directory"
detail=f"The search path '{search_path}' does not exist or is not directory",
) )
return ApiDependencies.invoker.services.model_manager.search_for_models(search_path) return ApiDependencies.invoker.services.model_manager.search_for_models(search_path)
@ -361,26 +335,6 @@ async def sync_to_config() -> bool:
return True return True
# There's some weird pydantic-fastapi behaviour that requires this to be a separate class
# TODO: After a few updates, see if it works inside the route operation handler?
class MergeModelsBody(BaseModel):
model_names: List[str] = Field(description="model name", min_length=2, max_length=3)
merged_model_name: Optional[str] = Field(description="Name of destination model")
alpha: Optional[float] = Field(description="Alpha weighting strength to apply to 2d and 3d models", default=0.5)
interp: Optional[MergeInterpolationMethod] = Field(description="Interpolation method")
force: Optional[bool] = Field(
description="Force merging of models created with different versions of diffusers",
default=False,
)
merge_dest_directory: Optional[str] = Field(
description="Save the merged model to the designated directory (with 'merged_model_name' appended)",
default=None,
)
model_config = ConfigDict(protected_namespaces=())
@models_router.put( @models_router.put(
"/merge/{base_model}", "/merge/{base_model}",
operation_id="merge_models", operation_id="merge_models",
@ -393,23 +347,31 @@ class MergeModelsBody(BaseModel):
response_model=MergeModelResponse, response_model=MergeModelResponse,
) )
async def merge_models( async def merge_models(
body: Annotated[MergeModelsBody, Body(description="Model configuration", embed=True)],
base_model: BaseModelType = Path(description="Base model"), base_model: BaseModelType = Path(description="Base model"),
model_names: List[str] = Body(description="model name", min_items=2, max_items=3),
merged_model_name: Optional[str] = Body(description="Name of destination model"),
alpha: Optional[float] = Body(description="Alpha weighting strength to apply to 2d and 3d models", default=0.5),
interp: Optional[MergeInterpolationMethod] = Body(description="Interpolation method"),
force: Optional[bool] = Body(
description="Force merging of models created with different versions of diffusers", default=False
),
merge_dest_directory: Optional[str] = Body(
description="Save the merged model to the designated directory (with 'merged_model_name' appended)",
default=None,
),
) -> MergeModelResponse: ) -> MergeModelResponse:
"""Convert a checkpoint model into a diffusers model""" """Convert a checkpoint model into a diffusers model"""
logger = ApiDependencies.invoker.services.logger logger = ApiDependencies.invoker.services.logger
try: try:
logger.info( logger.info(f"Merging models: {model_names} into {merge_dest_directory or '<MODELS>'}/{merged_model_name}")
f"Merging models: {body.model_names} into {body.merge_dest_directory or '<MODELS>'}/{body.merged_model_name}" dest = pathlib.Path(merge_dest_directory) if merge_dest_directory else None
)
dest = pathlib.Path(body.merge_dest_directory) if body.merge_dest_directory else None
result = ApiDependencies.invoker.services.model_manager.merge_models( result = ApiDependencies.invoker.services.model_manager.merge_models(
model_names=body.model_names, model_names,
base_model=base_model, base_model,
merged_model_name=body.merged_model_name or "+".join(body.model_names), merged_model_name=merged_model_name or "+".join(model_names),
alpha=body.alpha, alpha=alpha,
interp=body.interp, interp=interp,
force=body.force, force=force,
merge_dest_directory=dest, merge_dest_directory=dest,
) )
model_raw = ApiDependencies.invoker.services.model_manager.list_model( model_raw = ApiDependencies.invoker.services.model_manager.list_model(
@ -417,12 +379,9 @@ async def merge_models(
base_model=base_model, base_model=base_model,
model_type=ModelType.Main, model_type=ModelType.Main,
) )
response = convert_models_response_adapter.validate_python(model_raw) response = parse_obj_as(ConvertModelResponse, model_raw)
except ModelNotFoundException: except ModelNotFoundException:
raise HTTPException( raise HTTPException(status_code=404, detail=f"One or more of the models '{model_names}' not found")
status_code=404,
detail=f"One or more of the models '{body.model_names}' not found",
)
except ValueError as e: except ValueError as e:
raise HTTPException(status_code=400, detail=str(e)) raise HTTPException(status_code=400, detail=str(e))
return response return response
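
Most of this hunk is the pydantic v2 vs v1 split between the two sides: the base branch validates the dynamic response unions with TypeAdapter(...).validate_python(...) and serialises with model_dump(), while this branch uses parse_obj_as(...) and .dict(). A minimal sketch of the v2 spelling (the model classes and payload are illustrative, not the real OPENAPI_MODEL_CONFIGS):

from typing import Literal, Union

from pydantic import BaseModel, TypeAdapter

class CheckpointConfig(BaseModel):
    format: Literal["checkpoint"]
    path: str

class DiffusersConfig(BaseModel):
    format: Literal["diffusers"]
    repo_id: str

ModelConfigResponse = Union[CheckpointConfig, DiffusersConfig]
adapter = TypeAdapter(ModelConfigResponse)

raw = {"format": "diffusers", "repo_id": "runwayml/stable-diffusion-v1-5"}
model = adapter.validate_python(raw)  # pydantic v1 equivalent: parse_obj_as(ModelConfigResponse, raw)
print(model.model_dump())             # pydantic v1 equivalent: model.dict()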

View File

@ -1,247 +0,0 @@
from typing import Optional
from fastapi import Body, Path, Query
from fastapi.routing import APIRouter
from pydantic import BaseModel
from invokeai.app.services.session_processor.session_processor_common import SessionProcessorStatus
from invokeai.app.services.session_queue.session_queue_common import (
QUEUE_ITEM_STATUS,
Batch,
BatchStatus,
CancelByBatchIDsResult,
ClearResult,
EnqueueBatchResult,
EnqueueGraphResult,
PruneResult,
SessionQueueItem,
SessionQueueItemDTO,
SessionQueueStatus,
)
from invokeai.app.services.shared.graph import Graph
from invokeai.app.services.shared.pagination import CursorPaginatedResults
from ..dependencies import ApiDependencies
session_queue_router = APIRouter(prefix="/v1/queue", tags=["queue"])
class SessionQueueAndProcessorStatus(BaseModel):
"""The overall status of session queue and processor"""
queue: SessionQueueStatus
processor: SessionProcessorStatus
@session_queue_router.post(
"/{queue_id}/enqueue_graph",
operation_id="enqueue_graph",
responses={
201: {"model": EnqueueGraphResult},
},
)
async def enqueue_graph(
queue_id: str = Path(description="The queue id to perform this operation on"),
graph: Graph = Body(description="The graph to enqueue"),
prepend: bool = Body(default=False, description="Whether or not to prepend this batch in the queue"),
) -> EnqueueGraphResult:
"""Enqueues a graph for single execution."""
return ApiDependencies.invoker.services.session_queue.enqueue_graph(queue_id=queue_id, graph=graph, prepend=prepend)
@session_queue_router.post(
"/{queue_id}/enqueue_batch",
operation_id="enqueue_batch",
responses={
201: {"model": EnqueueBatchResult},
},
)
async def enqueue_batch(
queue_id: str = Path(description="The queue id to perform this operation on"),
batch: Batch = Body(description="Batch to process"),
prepend: bool = Body(default=False, description="Whether or not to prepend this batch in the queue"),
) -> EnqueueBatchResult:
"""Processes a batch and enqueues the output graphs for execution."""
return ApiDependencies.invoker.services.session_queue.enqueue_batch(queue_id=queue_id, batch=batch, prepend=prepend)
@session_queue_router.get(
"/{queue_id}/list",
operation_id="list_queue_items",
responses={
200: {"model": CursorPaginatedResults[SessionQueueItemDTO]},
},
)
async def list_queue_items(
queue_id: str = Path(description="The queue id to perform this operation on"),
limit: int = Query(default=50, description="The number of items to fetch"),
status: Optional[QUEUE_ITEM_STATUS] = Query(default=None, description="The status of items to fetch"),
cursor: Optional[int] = Query(default=None, description="The pagination cursor"),
priority: int = Query(default=0, description="The pagination cursor priority"),
) -> CursorPaginatedResults[SessionQueueItemDTO]:
"""Gets all queue items (without graphs)"""
return ApiDependencies.invoker.services.session_queue.list_queue_items(
queue_id=queue_id, limit=limit, status=status, cursor=cursor, priority=priority
)
@session_queue_router.put(
"/{queue_id}/processor/resume",
operation_id="resume",
responses={200: {"model": SessionProcessorStatus}},
)
async def resume(
queue_id: str = Path(description="The queue id to perform this operation on"),
) -> SessionProcessorStatus:
"""Resumes session processor"""
return ApiDependencies.invoker.services.session_processor.resume()
@session_queue_router.put(
"/{queue_id}/processor/pause",
operation_id="pause",
responses={200: {"model": SessionProcessorStatus}},
)
async def Pause(
queue_id: str = Path(description="The queue id to perform this operation on"),
) -> SessionProcessorStatus:
"""Pauses session processor"""
return ApiDependencies.invoker.services.session_processor.pause()
@session_queue_router.put(
"/{queue_id}/cancel_by_batch_ids",
operation_id="cancel_by_batch_ids",
responses={200: {"model": CancelByBatchIDsResult}},
)
async def cancel_by_batch_ids(
queue_id: str = Path(description="The queue id to perform this operation on"),
batch_ids: list[str] = Body(description="The list of batch_ids to cancel all queue items for", embed=True),
) -> CancelByBatchIDsResult:
"""Immediately cancels all queue items from the given batch ids"""
return ApiDependencies.invoker.services.session_queue.cancel_by_batch_ids(queue_id=queue_id, batch_ids=batch_ids)
@session_queue_router.put(
"/{queue_id}/clear",
operation_id="clear",
responses={
200: {"model": ClearResult},
},
)
async def clear(
queue_id: str = Path(description="The queue id to perform this operation on"),
) -> ClearResult:
"""Clears the queue entirely, immediately canceling the currently-executing session"""
queue_item = ApiDependencies.invoker.services.session_queue.get_current(queue_id)
if queue_item is not None:
ApiDependencies.invoker.services.session_queue.cancel_queue_item(queue_item.item_id)
clear_result = ApiDependencies.invoker.services.session_queue.clear(queue_id)
return clear_result
@session_queue_router.put(
"/{queue_id}/prune",
operation_id="prune",
responses={
200: {"model": PruneResult},
},
)
async def prune(
queue_id: str = Path(description="The queue id to perform this operation on"),
) -> PruneResult:
"""Prunes all completed or errored queue items"""
return ApiDependencies.invoker.services.session_queue.prune(queue_id)
@session_queue_router.get(
"/{queue_id}/current",
operation_id="get_current_queue_item",
responses={
200: {"model": Optional[SessionQueueItem]},
},
)
async def get_current_queue_item(
queue_id: str = Path(description="The queue id to perform this operation on"),
) -> Optional[SessionQueueItem]:
"""Gets the currently execution queue item"""
return ApiDependencies.invoker.services.session_queue.get_current(queue_id)
@session_queue_router.get(
"/{queue_id}/next",
operation_id="get_next_queue_item",
responses={
200: {"model": Optional[SessionQueueItem]},
},
)
async def get_next_queue_item(
queue_id: str = Path(description="The queue id to perform this operation on"),
) -> Optional[SessionQueueItem]:
"""Gets the next queue item, without executing it"""
return ApiDependencies.invoker.services.session_queue.get_next(queue_id)
@session_queue_router.get(
"/{queue_id}/status",
operation_id="get_queue_status",
responses={
200: {"model": SessionQueueAndProcessorStatus},
},
)
async def get_queue_status(
queue_id: str = Path(description="The queue id to perform this operation on"),
) -> SessionQueueAndProcessorStatus:
"""Gets the status of the session queue"""
queue = ApiDependencies.invoker.services.session_queue.get_queue_status(queue_id)
processor = ApiDependencies.invoker.services.session_processor.get_status()
return SessionQueueAndProcessorStatus(queue=queue, processor=processor)
@session_queue_router.get(
"/{queue_id}/b/{batch_id}/status",
operation_id="get_batch_status",
responses={
200: {"model": BatchStatus},
},
)
async def get_batch_status(
queue_id: str = Path(description="The queue id to perform this operation on"),
batch_id: str = Path(description="The batch to get the status of"),
) -> BatchStatus:
"""Gets the status of the session queue"""
return ApiDependencies.invoker.services.session_queue.get_batch_status(queue_id=queue_id, batch_id=batch_id)
@session_queue_router.get(
"/{queue_id}/i/{item_id}",
operation_id="get_queue_item",
responses={
200: {"model": SessionQueueItem},
},
)
async def get_queue_item(
queue_id: str = Path(description="The queue id to perform this operation on"),
item_id: int = Path(description="The queue item to get"),
) -> SessionQueueItem:
"""Gets a queue item"""
return ApiDependencies.invoker.services.session_queue.get_queue_item(item_id)
@session_queue_router.put(
"/{queue_id}/i/{item_id}/cancel",
operation_id="cancel_queue_item",
responses={
200: {"model": SessionQueueItem},
},
)
async def cancel_queue_item(
queue_id: str = Path(description="The queue id to perform this operation on"),
item_id: int = Path(description="The queue item to cancel"),
) -> SessionQueueItem:
"""Deletes a queue item"""
return ApiDependencies.invoker.services.session_queue.cancel_queue_item(item_id)
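
This queue router exists only on the base branch; the feature branch predates it and exposes the batches router above instead. A hedged client sketch against the queue API (host, port and the "default" queue id are assumptions; the batch payload is a stand-in for the Batch schema from session_queue_common):

import requests

BASE = "http://127.0.0.1:9090/api/v1/queue/default"  # assumed host, port and queue id

batch = {"graph": {"id": "example", "nodes": {}, "edges": []}, "runs": 1}  # stand-in Batch
enqueued = requests.post(f"{BASE}/enqueue_batch", json={"batch": batch, "prepend": False}).json()

print(requests.get(f"{BASE}/status").json())  # SessionQueueAndProcessorStatus
requests.put(f"{BASE}/processor/pause")       # pause the session processor
requests.put(f"{BASE}/processor/resume")      # resume it
requests.put(f"{BASE}/prune")                 # drop completed or errored queue items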

View File

@ -6,12 +6,11 @@ from fastapi import Body, HTTPException, Path, Query, Response
from fastapi.routing import APIRouter from fastapi.routing import APIRouter
from pydantic.fields import Field from pydantic.fields import Field
from invokeai.app.services.shared.pagination import PaginatedResults
# Importing * is bad karma but needed here for node detection # Importing * is bad karma but needed here for node detection
from ...invocations import * # noqa: F401 F403 from ...invocations import * # noqa: F401 F403
from ...invocations.baseinvocation import BaseInvocation from ...invocations.baseinvocation import BaseInvocation
from ...services.shared.graph import Edge, EdgeConnection, Graph, GraphExecutionState, NodeAlreadyExecutedError from ...services.graph import Edge, EdgeConnection, Graph, GraphExecutionState, NodeAlreadyExecutedError
from ...services.item_storage import PaginatedResults
from ..dependencies import ApiDependencies from ..dependencies import ApiDependencies
session_router = APIRouter(prefix="/v1/sessions", tags=["sessions"]) session_router = APIRouter(prefix="/v1/sessions", tags=["sessions"])
@ -24,14 +23,12 @@ session_router = APIRouter(prefix="/v1/sessions", tags=["sessions"])
200: {"model": GraphExecutionState}, 200: {"model": GraphExecutionState},
400: {"description": "Invalid json"}, 400: {"description": "Invalid json"},
}, },
deprecated=True,
) )
async def create_session( async def create_session(
queue_id: str = Query(default="", description="The id of the queue to associate the session with"), graph: Optional[Graph] = Body(default=None, description="The graph to initialize the session with")
graph: Optional[Graph] = Body(default=None, description="The graph to initialize the session with"),
) -> GraphExecutionState: ) -> GraphExecutionState:
"""Creates a new session, optionally initializing it with an invocation graph""" """Creates a new session, optionally initializing it with an invocation graph"""
session = ApiDependencies.invoker.create_execution_state(queue_id=queue_id, graph=graph) session = ApiDependencies.invoker.create_execution_state(graph)
return session return session
@ -39,7 +36,6 @@ async def create_session(
"/", "/",
operation_id="list_sessions", operation_id="list_sessions",
responses={200: {"model": PaginatedResults[GraphExecutionState]}}, responses={200: {"model": PaginatedResults[GraphExecutionState]}},
deprecated=True,
) )
async def list_sessions( async def list_sessions(
page: int = Query(default=0, description="The page of results to get"), page: int = Query(default=0, description="The page of results to get"),
@ -61,7 +57,6 @@ async def list_sessions(
200: {"model": GraphExecutionState}, 200: {"model": GraphExecutionState},
404: {"description": "Session not found"}, 404: {"description": "Session not found"},
}, },
deprecated=True,
) )
async def get_session( async def get_session(
session_id: str = Path(description="The id of the session to get"), session_id: str = Path(description="The id of the session to get"),
@ -82,7 +77,6 @@ async def get_session(
400: {"description": "Invalid node or link"}, 400: {"description": "Invalid node or link"},
404: {"description": "Session not found"}, 404: {"description": "Session not found"},
}, },
deprecated=True,
) )
async def add_node( async def add_node(
session_id: str = Path(description="The id of the session"), session_id: str = Path(description="The id of the session"),
@ -115,7 +109,6 @@ async def add_node(
400: {"description": "Invalid node or link"}, 400: {"description": "Invalid node or link"},
404: {"description": "Session not found"}, 404: {"description": "Session not found"},
}, },
deprecated=True,
) )
async def update_node( async def update_node(
session_id: str = Path(description="The id of the session"), session_id: str = Path(description="The id of the session"),
@ -149,7 +142,6 @@ async def update_node(
400: {"description": "Invalid node or link"}, 400: {"description": "Invalid node or link"},
404: {"description": "Session not found"}, 404: {"description": "Session not found"},
}, },
deprecated=True,
) )
async def delete_node( async def delete_node(
session_id: str = Path(description="The id of the session"), session_id: str = Path(description="The id of the session"),
@ -180,7 +172,6 @@ async def delete_node(
400: {"description": "Invalid node or link"}, 400: {"description": "Invalid node or link"},
404: {"description": "Session not found"}, 404: {"description": "Session not found"},
}, },
deprecated=True,
) )
async def add_edge( async def add_edge(
session_id: str = Path(description="The id of the session"), session_id: str = Path(description="The id of the session"),
@@ -212,7 +203,6 @@ async def add_edge(
400: {"description": "Invalid node or link"}, 400: {"description": "Invalid node or link"},
404: {"description": "Session not found"}, 404: {"description": "Session not found"},
}, },
deprecated=True,
) )
async def delete_edge( async def delete_edge(
session_id: str = Path(description="The id of the session"), session_id: str = Path(description="The id of the session"),
@@ -251,10 +241,8 @@ async def delete_edge(
400: {"description": "The session has no invocations ready to invoke"}, 400: {"description": "The session has no invocations ready to invoke"},
404: {"description": "Session not found"}, 404: {"description": "Session not found"},
}, },
deprecated=True,
) )
async def invoke_session( async def invoke_session(
queue_id: str = Query(description="The id of the queue to associate the session with"),
session_id: str = Path(description="The id of the session to invoke"), session_id: str = Path(description="The id of the session to invoke"),
all: bool = Query(default=False, description="Whether or not to invoke all remaining invocations"), all: bool = Query(default=False, description="Whether or not to invoke all remaining invocations"),
) -> Response: ) -> Response:
@@ -266,7 +254,7 @@ async def invoke_session(
if session.is_complete(): if session.is_complete():
raise HTTPException(status_code=400) raise HTTPException(status_code=400)
ApiDependencies.invoker.invoke(queue_id, session, invoke_all=all) ApiDependencies.invoker.invoke(session, invoke_all=all)
return Response(status_code=202) return Response(status_code=202)
@@ -274,7 +262,6 @@ async def invoke_session(
"/{session_id}/invoke", "/{session_id}/invoke",
operation_id="cancel_session_invoke", operation_id="cancel_session_invoke",
responses={202: {"description": "The invocation is canceled"}}, responses={202: {"description": "The invocation is canceled"}},
deprecated=True,
) )
async def cancel_session_invoke( async def cancel_session_invoke(
session_id: str = Path(description="The id of the session to cancel"), session_id: str = Path(description="The id of the session to cancel"),
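For reference: FastAPI's deprecated=True flag on these routes only marks them as deprecated in the generated OpenAPI schema; the endpoints remain callable. A minimal sketch of that behavior (the route and names below are illustrative, not from this changeset):

from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/v1/example", deprecated=True)  # flagged in OpenAPI, but still served
async def example() -> dict:
    return {"ok": True}

client = TestClient(app)
assert client.get("/v1/example").status_code == 200
assert app.openapi()["paths"]["/v1/example"]["get"]["deprecated"] is True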


@@ -1,42 +0,0 @@
from typing import Optional, Union
from dynamicprompts.generators import CombinatorialPromptGenerator, RandomPromptGenerator
from fastapi import Body
from fastapi.routing import APIRouter
from pydantic import BaseModel
from pyparsing import ParseException
utilities_router = APIRouter(prefix="/v1/utilities", tags=["utilities"])
class DynamicPromptsResponse(BaseModel):
prompts: list[str]
error: Optional[str] = None
@utilities_router.post(
"/dynamicprompts",
operation_id="parse_dynamicprompts",
responses={
200: {"model": DynamicPromptsResponse},
},
)
async def parse_dynamicprompts(
prompt: str = Body(description="The prompt to parse with dynamicprompts"),
max_prompts: int = Body(default=1000, description="The max number of prompts to generate"),
combinatorial: bool = Body(default=True, description="Whether to use the combinatorial generator"),
) -> DynamicPromptsResponse:
"""Creates a batch process"""
generator: Union[RandomPromptGenerator, CombinatorialPromptGenerator]
try:
error: Optional[str] = None
if combinatorial:
generator = CombinatorialPromptGenerator()
prompts = generator.generate(prompt, max_prompts=max_prompts)
else:
generator = RandomPromptGenerator()
prompts = generator.generate(prompt, num_images=max_prompts)
except ParseException as e:
prompts = [prompt]
error = str(e)
return DynamicPromptsResponse(prompts=prompts if prompts else [""], error=error)
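For reference, a minimal sketch of the two dynamicprompts generators this removed endpoint wraps, based only on the calls shown above (the example prompt is illustrative):

from dynamicprompts.generators import CombinatorialPromptGenerator, RandomPromptGenerator

prompt = "a {red|green|blue} ball"

# Expands every variant combination, capped at max_prompts.
combinatorial = CombinatorialPromptGenerator().generate(prompt, max_prompts=1000)

# Samples num_images prompts at random (repeats are possible).
sampled = RandomPromptGenerator().generate(prompt, num_images=4)

print(list(combinatorial))  # e.g. ['a red ball', 'a green ball', 'a blue ball']
print(list(sampled))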


@@ -3,35 +3,51 @@
from fastapi import FastAPI from fastapi import FastAPI
from fastapi_events.handlers.local import local_handler from fastapi_events.handlers.local import local_handler
from fastapi_events.typing import Event from fastapi_events.typing import Event
from socketio import ASGIApp, AsyncServer from fastapi_socketio import SocketManager
from ..services.events.events_base import EventServiceBase from ..services.events import EventServiceBase
class SocketIO: class SocketIO:
__sio: AsyncServer __sio: SocketManager
__app: ASGIApp
def __init__(self, app: FastAPI): def __init__(self, app: FastAPI):
self.__sio = AsyncServer(async_mode="asgi", cors_allowed_origins="*") self.__sio = SocketManager(app=app)
self.__app = ASGIApp(socketio_server=self.__sio, socketio_path="socket.io")
app.mount("/ws", self.__app)
self.__sio.on("subscribe_queue", handler=self._handle_sub_queue) self.__sio.on("subscribe_session", handler=self._handle_sub_session)
self.__sio.on("unsubscribe_queue", handler=self._handle_unsub_queue) self.__sio.on("unsubscribe_session", handler=self._handle_unsub_session)
local_handler.register(event_name=EventServiceBase.queue_event, _func=self._handle_queue_event) local_handler.register(event_name=EventServiceBase.session_event, _func=self._handle_session_event)
async def _handle_queue_event(self, event: Event): self.__sio.on("subscribe_batch", handler=self._handle_sub_batch)
self.__sio.on("unsubscribe_batch", handler=self._handle_unsub_batch)
local_handler.register(event_name=EventServiceBase.batch_event, _func=self._handle_batch_event)
async def _handle_session_event(self, event: Event):
await self.__sio.emit( await self.__sio.emit(
event=event[1]["event"], event=event[1]["event"],
data=event[1]["data"], data=event[1]["data"],
room=event[1]["data"]["queue_id"], room=event[1]["data"]["graph_execution_state_id"],
) )
async def _handle_sub_queue(self, sid, data, *args, **kwargs): async def _handle_sub_session(self, sid, data, *args, **kwargs):
if "queue_id" in data: if "session" in data:
await self.__sio.enter_room(sid, data["queue_id"]) self.__sio.enter_room(sid, data["session"])
async def _handle_unsub_queue(self, sid, data, *args, **kwargs): async def _handle_unsub_session(self, sid, data, *args, **kwargs):
if "queue_id" in data: if "session" in data:
await self.__sio.leave_room(sid, data["queue_id"]) self.__sio.leave_room(sid, data["session"])
async def _handle_batch_event(self, event: Event):
await self.__sio.emit(
event=event[1]["event"],
data=event[1]["data"],
room=event[1]["data"]["batch_id"],
)
async def _handle_sub_batch(self, sid, data, *args, **kwargs):
if "batch_id" in data:
self.__sio.enter_room(sid, data["batch_id"])
async def _handle_unsub_batch(self, sid, data, *args, **kwargs):
if "batch_id" in data:
self.__sio.enter_room(sid, data["batch_id"])


@@ -1,48 +1,46 @@
# Copyright (c) 2022-2023 Kyle Schouviller (https://github.com/kyle0654) and the InvokeAI Team
import asyncio
import logging
import socket
from inspect import signature
from pathlib import Path
import uvicorn
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.openapi.docs import get_redoc_html, get_swagger_ui_html
from fastapi.openapi.utils import get_openapi
from fastapi.staticfiles import StaticFiles
from fastapi_events.handlers.local import local_handler
from fastapi_events.middleware import EventHandlerASGIMiddleware
from pydantic.schema import schema
from .services.config import InvokeAIAppConfig from .services.config import InvokeAIAppConfig
from ..backend.util.logging import InvokeAILogger
# parse_args() must be called before any other imports. if it is not called first, consumers of the config from invokeai.version.invokeai_version import __version__
# which are imported/used before parse_args() is called will get the default config values instead of the
# values from the command line or config file.
app_config = InvokeAIAppConfig.get_config()
app_config.parse_args()
if True: # hack to make flake8 happy with imports coming after setting up the config import invokeai.frontend.web as web_dir
import asyncio import mimetypes
import mimetypes
import socket
from inspect import signature
from pathlib import Path
import torch from .api.dependencies import ApiDependencies
import uvicorn from .api.routers import sessions, batches, models, images, boards, board_images, app_info
from fastapi import FastAPI from .api.sockets import SocketIO
from fastapi.middleware.cors import CORSMiddleware from .invocations.baseinvocation import BaseInvocation, _InputField, _OutputField, UIConfigBase
from fastapi.openapi.docs import get_redoc_html, get_swagger_ui_html
from fastapi.openapi.utils import get_openapi
from fastapi.staticfiles import StaticFiles
from fastapi_events.handlers.local import local_handler
from fastapi_events.middleware import EventHandlerASGIMiddleware
from pydantic.json_schema import models_json_schema
import torch
# noinspection PyUnresolvedReferences
import invokeai.backend.util.hotfixes # noqa: F401 (monkeypatching on import)
if torch.backends.mps.is_available():
# noinspection PyUnresolvedReferences # noinspection PyUnresolvedReferences
import invokeai.backend.util.hotfixes # noqa: F401 (monkeypatching on import) import invokeai.backend.util.mps_fixes # noqa: F401 (monkeypatching on import)
import invokeai.frontend.web as web_dir
from invokeai.version.invokeai_version import __version__
from ..backend.util.logging import InvokeAILogger
from .api.dependencies import ApiDependencies
from .api.routers import app_info, board_images, boards, images, models, session_queue, utilities
from .api.sockets import SocketIO
from .invocations.baseinvocation import BaseInvocation, UIConfigBase, _InputField, _OutputField
if torch.backends.mps.is_available():
# noinspection PyUnresolvedReferences
import invokeai.backend.util.mps_fixes # noqa: F401 (monkeypatching on import)
app_config = InvokeAIAppConfig.get_config() app_config = InvokeAIAppConfig.get_config()
app_config.parse_args() app_config.parse_args()
logger = InvokeAILogger.get_logger(config=app_config) logger = InvokeAILogger.getLogger(config=app_config)
# fix for windows mimetypes registry entries being borked # fix for windows mimetypes registry entries being borked
# see https://github.com/invoke-ai/InvokeAI/discussions/3684#discussioncomment-6391352 # see https://github.com/invoke-ai/InvokeAI/discussions/3684#discussioncomment-6391352
@@ -51,7 +49,7 @@ mimetypes.add_type("text/css", ".css")
# Create the app # Create the app
# TODO: create this all in a method so configuration/etc. can be passed in? # TODO: create this all in a method so configuration/etc. can be passed in?
app = FastAPI(title="Invoke AI", docs_url=None, redoc_url=None, separate_input_output_schemas=False) app = FastAPI(title="Invoke AI", docs_url=None, redoc_url=None)
# Add event handler # Add event handler
event_handler_id: int = id(app) event_handler_id: int = id(app)
@@ -63,18 +61,18 @@ app.add_middleware(
socket_io = SocketIO(app) socket_io = SocketIO(app)
app.add_middleware(
CORSMiddleware,
allow_origins=app_config.allow_origins,
allow_credentials=app_config.allow_credentials,
allow_methods=app_config.allow_methods,
allow_headers=app_config.allow_headers,
)
# Add startup event to load dependencies # Add startup event to load dependencies
@app.on_event("startup") @app.on_event("startup")
async def startup_event(): async def startup_event():
app.add_middleware(
CORSMiddleware,
allow_origins=app_config.allow_origins,
allow_credentials=app_config.allow_credentials,
allow_methods=app_config.allow_methods,
allow_headers=app_config.allow_headers,
)
ApiDependencies.initialize(config=app_config, event_handler_id=event_handler_id, logger=logger) ApiDependencies.initialize(config=app_config, event_handler_id=event_handler_id, logger=logger)
@@ -85,9 +83,14 @@ async def shutdown_event():
# Include all routers # Include all routers
# app.include_router(sessions.session_router, prefix="/api") # TODO: REMOVE
# app.include_router(
# invocation.invocation_router,
# prefix = '/api')
app.include_router(utilities.utilities_router, prefix="/api") app.include_router(sessions.session_router, prefix="/api")
app.include_router(batches.batches_router, prefix="/api")
app.include_router(models.models_router, prefix="/api") app.include_router(models.models_router, prefix="/api")
@@ -99,8 +102,6 @@ app.include_router(board_images.board_images_router, prefix="/api")
app.include_router(app_info.app_router, prefix="/api") app.include_router(app_info.app_router, prefix="/api")
app.include_router(session_queue.session_queue_router, prefix="/api")
# Build a custom OpenAPI to include all outputs # Build a custom OpenAPI to include all outputs
# TODO: can outputs be included on metadata of invocation schemas somehow? # TODO: can outputs be included on metadata of invocation schemas somehow?
@@ -112,7 +113,6 @@ def custom_openapi():
description="An API for invoking AI image operations", description="An API for invoking AI image operations",
version="1.0.0", version="1.0.0",
routes=app.routes, routes=app.routes,
separate_input_output_schemas=False, # https://fastapi.tiangolo.com/how-to/separate-openapi-schemas/
) )
# Add all outputs # Add all outputs
@@ -123,32 +123,29 @@ def custom_openapi():
output_type = signature(invoker.invoke).return_annotation output_type = signature(invoker.invoke).return_annotation
output_types.add(output_type) output_types.add(output_type)
output_schemas = models_json_schema( output_schemas = schema(output_types, ref_prefix="#/components/schemas/")
models=[(o, "serialization") for o in output_types], ref_template="#/components/schemas/{model}" for schema_key, output_schema in output_schemas["definitions"].items():
) output_schema["class"] = "output"
for schema_key, output_schema in output_schemas[1]["$defs"].items(): openapi_schema["components"]["schemas"][schema_key] = output_schema
# TODO: note that we assume the schema_key here is the TYPE.__name__ # TODO: note that we assume the schema_key here is the TYPE.__name__
# This could break in some cases, figure out a better way to do it # This could break in some cases, figure out a better way to do it
output_type_titles[schema_key] = output_schema["title"] output_type_titles[schema_key] = output_schema["title"]
# Add Node Editor UI helper schemas # Add Node Editor UI helper schemas
ui_config_schemas = models_json_schema( ui_config_schemas = schema([UIConfigBase, _InputField, _OutputField], ref_prefix="#/components/schemas/")
[(UIConfigBase, "serialization"), (_InputField, "serialization"), (_OutputField, "serialization")], for schema_key, ui_config_schema in ui_config_schemas["definitions"].items():
ref_template="#/components/schemas/{model}",
)
for schema_key, ui_config_schema in ui_config_schemas[1]["$defs"].items():
openapi_schema["components"]["schemas"][schema_key] = ui_config_schema openapi_schema["components"]["schemas"][schema_key] = ui_config_schema
# Add a reference to the output type to additionalProperties of the invoker schema # Add a reference to the output type to additionalProperties of the invoker schema
for invoker in all_invocations: for invoker in all_invocations:
invoker_name = invoker.__name__ invoker_name = invoker.__name__
output_type = signature(obj=invoker.invoke).return_annotation output_type = signature(invoker.invoke).return_annotation
output_type_title = output_type_titles[output_type.__name__] output_type_title = output_type_titles[output_type.__name__]
invoker_schema = openapi_schema["components"]["schemas"][f"{invoker_name}"] invoker_schema = openapi_schema["components"]["schemas"][invoker_name]
outputs_ref = {"$ref": f"#/components/schemas/{output_type_title}"} outputs_ref = {"$ref": f"#/components/schemas/{output_type_title}"}
invoker_schema["output"] = outputs_ref invoker_schema["output"] = outputs_ref
invoker_schema["class"] = "invocation" invoker_schema["class"] = "invocation"
openapi_schema["components"]["schemas"][f"{output_type_title}"]["class"] = "output"
from invokeai.backend.model_management.models import get_model_config_enums from invokeai.backend.model_management.models import get_model_config_enums
@@ -171,7 +168,7 @@ def custom_openapi():
return app.openapi_schema return app.openapi_schema
app.openapi = custom_openapi # type: ignore [method-assign] # this is a valid assignment app.openapi = custom_openapi
# Override API doc favicons # Override API doc favicons
app.mount("/static", StaticFiles(directory=Path(web_dir.__path__[0], "static/dream_web")), name="static") app.mount("/static", StaticFiles(directory=Path(web_dir.__path__[0], "static/dream_web")), name="static")
@@ -223,7 +220,7 @@ def invoke_api():
exc_info=e, exc_info=e,
) )
else: else:
jurigged.watch(logger=InvokeAILogger.get_logger(name="jurigged").info) jurigged.watch(logger=InvokeAILogger.getLogger(name="jurigged").info)
port = find_port(app_config.port) port = find_port(app_config.port)
if port != app_config.port: if port != app_config.port:
@@ -242,7 +239,7 @@ def invoke_api():
# replace uvicorn's loggers with InvokeAI's for consistent appearance # replace uvicorn's loggers with InvokeAI's for consistent appearance
for logname in ["uvicorn.access", "uvicorn"]: for logname in ["uvicorn.access", "uvicorn"]:
log = InvokeAILogger.get_logger(logname) log = logging.getLogger(logname)
log.handlers.clear() log.handlers.clear()
for ch in logger.handlers: for ch in logger.handlers:
log.addHandler(ch) log.addHandler(ch)
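For reference, the custom_openapi changes above follow FastAPI's standard schema-override pattern; a minimal sketch of just that pattern, without the InvokeAI-specific schema additions:

from fastapi import FastAPI
from fastapi.openapi.utils import get_openapi

app = FastAPI()

def custom_openapi():
    if app.openapi_schema:  # reuse the cached schema on subsequent calls
        return app.openapi_schema
    openapi_schema = get_openapi(
        title="Invoke AI",
        description="An API for invoking AI image operations",
        version="1.0.0",
        routes=app.routes,
    )
    # ...augment openapi_schema["components"]["schemas"] here...
    app.openapi_schema = openapi_schema
    return app.openapi_schema

app.openapi = custom_openapi  # FastAPI calls this when serving /openapi.json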


@@ -1,18 +1,16 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654) # Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654)
import argparse
from abc import ABC, abstractmethod from abc import ABC, abstractmethod
import argparse
from typing import Any, Callable, Iterable, Literal, Union, get_args, get_origin, get_type_hints from typing import Any, Callable, Iterable, Literal, Union, get_args, get_origin, get_type_hints
import matplotlib.pyplot as plt
import networkx as nx
from pydantic import BaseModel, Field from pydantic import BaseModel, Field
import networkx as nx
import matplotlib.pyplot as plt
import invokeai.backend.util.logging as logger import invokeai.backend.util.logging as logger
from ..invocations.baseinvocation import BaseInvocation from ..invocations.baseinvocation import BaseInvocation
from ..invocations.image import ImageField from ..invocations.image import ImageField
from ..services.graph import Edge, GraphExecutionState, LibraryGraph from ..services.graph import GraphExecutionState, LibraryGraph, Edge
from ..services.invoker import Invoker from ..services.invoker import Invoker
@@ -24,8 +22,8 @@ def add_field_argument(command_parser, name: str, field, default_override=None):
if field.default_factory is None if field.default_factory is None
else field.default_factory() else field.default_factory()
) )
if get_origin(field.annotation) == Literal: if get_origin(field.type_) == Literal:
allowed_values = get_args(field.annotation) allowed_values = get_args(field.type_)
allowed_types = set() allowed_types = set()
for val in allowed_values: for val in allowed_values:
allowed_types.add(type(val)) allowed_types.add(type(val))
@@ -38,15 +36,15 @@ def add_field_argument(command_parser, name: str, field, default_override=None):
type=field_type, type=field_type,
default=default, default=default,
choices=allowed_values, choices=allowed_values,
help=field.description, help=field.field_info.description,
) )
else: else:
command_parser.add_argument( command_parser.add_argument(
f"--{name}", f"--{name}",
dest=name, dest=name,
type=field.annotation, type=field.type_,
default=default, default=default,
help=field.description, help=field.field_info.description,
) )
@@ -142,6 +140,7 @@ class BaseCommand(ABC, BaseModel):
"""A CLI command""" """A CLI command"""
# All commands must include a type name like this: # All commands must include a type name like this:
# type: Literal['your_command_name'] = 'your_command_name'
@classmethod @classmethod
def get_all_subclasses(cls): def get_all_subclasses(cls):
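For reference, the two columns above differ only in pydantic's field-introspection API (v2 on the left, v1 on the right). A small standalone sketch of the v2 form on a toy model (ToyCommand is illustrative, not from this changeset):

from typing import Literal, get_args, get_origin
from pydantic import BaseModel, Field

class ToyCommand(BaseModel):
    mode: Literal["fast", "slow"] = Field(default="fast", description="Execution mode")

for name, field in ToyCommand.model_fields.items():
    # pydantic v2 exposes the annotation and help text directly on FieldInfo;
    # the v1 equivalents were field.type_ and field.field_info.description.
    if get_origin(field.annotation) is Literal:
        print(name, get_args(field.annotation), field.description)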


@@ -6,15 +6,15 @@ completer object.
import atexit import atexit
import readline import readline
import shlex import shlex
from pathlib import Path from pathlib import Path
from typing import Dict, List, Literal, get_args, get_origin, get_type_hints from typing import List, Dict, Literal, get_args, get_type_hints, get_origin
import invokeai.backend.util.logging as logger import invokeai.backend.util.logging as logger
from ...backend import ModelManager from ...backend import ModelManager
from ..invocations.baseinvocation import BaseInvocation from ..invocations.baseinvocation import BaseInvocation
from ..services.invocation_services import InvocationServices
from .commands import BaseCommand from .commands import BaseCommand
from ..services.invocation_services import InvocationServices
# singleton object, class variable # singleton object, class variable
completer = None completer = None


@@ -1,67 +1,71 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654) and the InvokeAI Team # Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from invokeai.app.services.invocation_cache.invocation_cache_memory import MemoryInvocationCache import argparse
import re
import shlex
import sys
import time
import sqlite3
from typing import Union, get_type_hints, Optional
from pydantic import BaseModel, ValidationError
from pydantic.fields import Field
# This should come early so that the logger can pick up its configuration options
from .services.config import InvokeAIAppConfig from .services.config import InvokeAIAppConfig
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.version.invokeai_version import __version__
# parse_args() must be called before any other imports. if it is not called first, consumers of the config
# which are imported/used before parse_args() is called will get the default config values instead of the
# values from the command line or config file.
if True: # hack to make flake8 happy with imports coming after setting up the config from invokeai.app.services.board_image_record_storage import (
import argparse SqliteBoardImageRecordStorage,
import re )
import shlex from invokeai.app.services.board_images import (
import sqlite3 BoardImagesService,
import sys BoardImagesServiceDependencies,
import time )
from typing import Optional, Union, get_type_hints from invokeai.app.services.board_record_storage import SqliteBoardRecordStorage
from invokeai.app.services.boards import BoardService, BoardServiceDependencies
from invokeai.app.services.image_record_storage import SqliteImageRecordStorage
from invokeai.app.services.images import ImageService, ImageServiceDependencies
from invokeai.app.services.resource_name import SimpleNameService
from invokeai.app.services.urls import LocalUrlService
from invokeai.app.services.batch_manager import BatchManager
from invokeai.app.services.batch_manager_storage import SqliteBatchProcessStorage
from invokeai.app.services.invocation_stats import InvocationStatsService
from .services.default_graphs import default_text_to_image_graph_id, create_system_graphs
from .services.latent_storage import DiskLatentsStorage, ForwardCacheLatentsStorage
import torch from .cli.commands import BaseCommand, CliContext, ExitCli, SortedHelpFormatter, add_graph_parsers, add_parsers
from pydantic import BaseModel, ValidationError from .cli.completer import set_autocompleter
from pydantic.fields import Field from .invocations.baseinvocation import BaseInvocation
from .services.events import EventServiceBase
from .services.graph import (
Edge,
EdgeConnection,
GraphExecutionState,
GraphInvocation,
LibraryGraph,
are_connection_types_compatible,
)
from .services.image_file_storage import DiskImageFileStorage
from .services.invocation_queue import MemoryInvocationQueue
from .services.invocation_services import InvocationServices
from .services.invoker import Invoker
from .services.model_manager_service import ModelManagerService
from .services.processor import DefaultInvocationProcessor
from .services.sqlite import SqliteItemStorage
import invokeai.backend.util.hotfixes # noqa: F401 (monkeypatching on import) import torch
from invokeai.app.services.board_image_record_storage import SqliteBoardImageRecordStorage import invokeai.backend.util.hotfixes # noqa: F401 (monkeypatching on import)
from invokeai.app.services.board_images import BoardImagesService, BoardImagesServiceDependencies
from invokeai.app.services.board_record_storage import SqliteBoardRecordStorage
from invokeai.app.services.boards import BoardService, BoardServiceDependencies
from invokeai.app.services.image_record_storage import SqliteImageRecordStorage
from invokeai.app.services.images import ImageService, ImageServiceDependencies
from invokeai.app.services.invocation_stats import InvocationStatsService
from invokeai.app.services.resource_name import SimpleNameService
from invokeai.app.services.urls import LocalUrlService
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.version.invokeai_version import __version__
from .cli.commands import BaseCommand, CliContext, ExitCli, SortedHelpFormatter, add_graph_parsers, add_parsers if torch.backends.mps.is_available():
from .cli.completer import set_autocompleter import invokeai.backend.util.mps_fixes # noqa: F401 (monkeypatching on import)
from .invocations.baseinvocation import BaseInvocation
from .services.default_graphs import create_system_graphs, default_text_to_image_graph_id
from .services.events import EventServiceBase
from .services.graph import (
Edge,
EdgeConnection,
GraphExecutionState,
GraphInvocation,
LibraryGraph,
are_connection_types_compatible,
)
from .services.image_file_storage import DiskImageFileStorage
from .services.invocation_queue import MemoryInvocationQueue
from .services.invocation_services import InvocationServices
from .services.invoker import Invoker
from .services.latent_storage import DiskLatentsStorage, ForwardCacheLatentsStorage
from .services.model_manager_service import ModelManagerService
from .services.processor import DefaultInvocationProcessor
from .services.sqlite import SqliteItemStorage
if torch.backends.mps.is_available():
import invokeai.backend.util.mps_fixes # noqa: F401 (monkeypatching on import)
config = InvokeAIAppConfig.get_config() config = InvokeAIAppConfig.get_config()
config.parse_args() config.parse_args()
logger = InvokeAILogger().get_logger(config=config) logger = InvokeAILogger().getLogger(config=config)
class CliCommand(BaseModel): class CliCommand(BaseModel):
@@ -296,12 +300,16 @@ def invoke_cli():
) )
) )
batch_manager_storage = SqliteBatchProcessStorage(conn=db_conn)
batch_manager = BatchManager(batch_manager_storage)
services = InvocationServices( services = InvocationServices(
model_manager=model_manager, model_manager=model_manager,
events=events, events=events,
latents=ForwardCacheLatentsStorage(DiskLatentsStorage(f"{output_folder}/latents")), latents=ForwardCacheLatentsStorage(DiskLatentsStorage(f"{output_folder}/latents")),
images=images, images=images,
boards=boards, boards=boards,
batch_manager=batch_manager,
board_images=board_images, board_images=board_images,
queue=MemoryInvocationQueue(), queue=MemoryInvocationQueue(),
graph_library=SqliteItemStorage[LibraryGraph](conn=db_conn, table_name="graphs"), graph_library=SqliteItemStorage[LibraryGraph](conn=db_conn, table_name="graphs"),
@@ -310,7 +318,6 @@ def invoke_cli():
performance_statistics=InvocationStatsService(graph_execution_manager), performance_statistics=InvocationStatsService(graph_execution_manager),
logger=logger, logger=logger,
configuration=config, configuration=config,
invocation_cache=MemoryInvocationCache(max_cache_size=config.node_cache_size),
) )
system_graphs = create_system_graphs(services.graph_library) system_graphs = create_system_graphs(services.graph_library)

(File diff suppressed because it is too large.)


@@ -2,7 +2,7 @@
import numpy as np import numpy as np
from pydantic import ValidationInfo, field_validator from pydantic import validator
from invokeai.app.invocations.primitives import IntegerCollectionOutput from invokeai.app.invocations.primitives import IntegerCollectionOutput
from invokeai.app.util.misc import SEED_MAX, get_random_seed from invokeai.app.util.misc import SEED_MAX, get_random_seed
@@ -20,9 +20,9 @@ class RangeInvocation(BaseInvocation):
stop: int = InputField(default=10, description="The stop of the range") stop: int = InputField(default=10, description="The stop of the range")
step: int = InputField(default=1, description="The step of the range") step: int = InputField(default=1, description="The step of the range")
@field_validator("stop") @validator("stop")
def stop_gt_start(cls, v: int, info: ValidationInfo): def stop_gt_start(cls, v, values):
if "start" in info.data and v <= info.data["start"]: if "start" in values and v <= values["start"]:
raise ValueError("stop must be greater than start") raise ValueError("stop must be greater than start")
return v return v
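For reference, the left-hand validator above is pydantic v2's field_validator/ValidationInfo form (the right-hand column is the pydantic v1 validator form). A minimal standalone sketch of the v2 form (RangeSketch is illustrative):

from pydantic import BaseModel, ValidationInfo, field_validator

class RangeSketch(BaseModel):
    start: int = 0
    stop: int = 10

    @field_validator("stop")
    def stop_gt_start(cls, v: int, info: ValidationInfo) -> int:
        # info.data holds the fields validated so far (here: start)
        if "start" in info.data and v <= info.data["start"]:
            raise ValueError("stop must be greater than start")
        return v

RangeSketch(start=0, stop=10)   # ok
# RangeSketch(start=5, stop=3)  # raises a ValidationError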
@@ -38,16 +38,14 @@ class RangeInvocation(BaseInvocation):
version="1.0.0", version="1.0.0",
) )
class RangeOfSizeInvocation(BaseInvocation): class RangeOfSizeInvocation(BaseInvocation):
"""Creates a range from start to start + (size * step) incremented by step""" """Creates a range from start to start + size with step"""
start: int = InputField(default=0, description="The start of the range") start: int = InputField(default=0, description="The start of the range")
size: int = InputField(default=1, gt=0, description="The number of values") size: int = InputField(default=1, description="The number of values")
step: int = InputField(default=1, description="The step of the range") step: int = InputField(default=1, description="The step of the range")
def invoke(self, context: InvocationContext) -> IntegerCollectionOutput: def invoke(self, context: InvocationContext) -> IntegerCollectionOutput:
return IntegerCollectionOutput( return IntegerCollectionOutput(collection=list(range(self.start, self.start + self.size, self.step)))
collection=list(range(self.start, self.start + (self.step * self.size), self.step))
)
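A worked example of the two range computations above (values are illustrative): with start=0, size=5 and step=2 the left-hand form yields exactly size values, while the right-hand form does not.

start, size, step = 0, 5, 2

left_form = list(range(start, start + (step * size), step))   # [0, 2, 4, 6, 8] -> 5 values
right_form = list(range(start, start + size, step))           # [0, 2, 4]       -> 3 values

assert len(left_form) == size
assert len(right_form) != size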
@invocation( @invocation(
@@ -56,7 +54,6 @@ class RangeOfSizeInvocation(BaseInvocation):
tags=["range", "integer", "random", "collection"], tags=["range", "integer", "random", "collection"],
category="collections", category="collections",
version="1.0.0", version="1.0.0",
use_cache=False,
) )
class RandomRangeInvocation(BaseInvocation): class RandomRangeInvocation(BaseInvocation):
"""Creates a collection of random numbers""" """Creates a collection of random numbers"""


@@ -1,20 +1,21 @@
import re import re
from dataclasses import dataclass from dataclasses import dataclass
from typing import List, Optional, Union from typing import List, Union
import torch import torch
from compel import Compel, ReturnedEmbeddingsType from compel import Compel, ReturnedEmbeddingsType
from compel.prompt_parser import Blend, Conjunction, CrossAttentionControlSubstitute, FlattenedPrompt, Fragment from compel.prompt_parser import Blend, Conjunction, CrossAttentionControlSubstitute, FlattenedPrompt, Fragment
from invokeai.app.invocations.primitives import ConditioningField, ConditioningOutput from invokeai.app.invocations.primitives import ConditioningField, ConditioningOutput
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
from invokeai.backend.stable_diffusion.diffusion.shared_invokeai_diffusion import (
BasicConditioningInfo, BasicConditioningInfo,
ExtraConditioningInfo,
SDXLConditioningInfo, SDXLConditioningInfo,
) )
from ...backend.model_management.models import ModelType
from ...backend.model_management.lora import ModelPatcher from ...backend.model_management.lora import ModelPatcher
from ...backend.model_management.models import ModelNotFoundException, ModelType from ...backend.model_management.models import ModelNotFoundException
from ...backend.stable_diffusion.diffusion import InvokeAIDiffuserComponent
from ...backend.util.devices import torch_dtype from ...backend.util.devices import torch_dtype
from .baseinvocation import ( from .baseinvocation import (
BaseInvocation, BaseInvocation,
@@ -43,13 +44,7 @@ class ConditioningFieldData:
# PerpNeg = "perp_neg" # PerpNeg = "perp_neg"
@invocation( @invocation("compel", title="Prompt", tags=["prompt", "compel"], category="conditioning", version="1.0.0")
"compel",
title="Prompt",
tags=["prompt", "compel"],
category="conditioning",
version="1.0.0",
)
class CompelInvocation(BaseInvocation): class CompelInvocation(BaseInvocation):
"""Parse prompt using compel package to conditioning.""" """Parse prompt using compel package to conditioning."""
@@ -66,21 +61,23 @@ class CompelInvocation(BaseInvocation):
@torch.no_grad() @torch.no_grad()
def invoke(self, context: InvocationContext) -> ConditioningOutput: def invoke(self, context: InvocationContext) -> ConditioningOutput:
tokenizer_info = context.get_model( tokenizer_info = context.services.model_manager.get_model(
**self.clip.tokenizer.model_dump(), **self.clip.tokenizer.dict(),
context=context,
) )
text_encoder_info = context.get_model( text_encoder_info = context.services.model_manager.get_model(
**self.clip.text_encoder.model_dump(), **self.clip.text_encoder.dict(),
context=context,
) )
def _lora_loader(): def _lora_loader():
for lora in self.clip.loras: for lora in self.clip.loras:
lora_info = context.get_model(**lora.model_dump(exclude={"weight"})) lora_info = context.services.model_manager.get_model(**lora.dict(exclude={"weight"}), context=context)
yield (lora_info.context.model, lora.weight) yield (lora_info.context.model, lora.weight)
del lora_info del lora_info
return return
# loras = [(context.get_model(**lora.dict(exclude={"weight"})).context.model, lora.weight) for lora in self.clip.loras] # loras = [(context.services.model_manager.get_model(**lora.dict(exclude={"weight"})).context.model, lora.weight) for lora in self.clip.loras]
ti_list = [] ti_list = []
for trigger in re.findall(r"<[a-zA-Z0-9., _-]+>", self.prompt): for trigger in re.findall(r"<[a-zA-Z0-9., _-]+>", self.prompt):
@@ -89,10 +86,11 @@ class CompelInvocation(BaseInvocation):
ti_list.append( ti_list.append(
( (
name, name,
context.get_model( context.services.model_manager.get_model(
model_name=name, model_name=name,
base_model=self.clip.text_encoder.base_model, base_model=self.clip.text_encoder.base_model,
model_type=ModelType.TextualInversion, model_type=ModelType.TextualInversion,
context=context,
).context.model, ).context.model,
) )
) )
@@ -102,15 +100,14 @@ class CompelInvocation(BaseInvocation):
# print(traceback.format_exc()) # print(traceback.format_exc())
print(f'Warn: trigger: "{trigger}" not found') print(f'Warn: trigger: "{trigger}" not found')
with ( with ModelPatcher.apply_lora_text_encoder(
ModelPatcher.apply_lora_text_encoder(text_encoder_info.context.model, _lora_loader()), text_encoder_info.context.model, _lora_loader()
ModelPatcher.apply_ti(tokenizer_info.context.model, text_encoder_info.context.model, ti_list) as ( ), ModelPatcher.apply_ti(tokenizer_info.context.model, text_encoder_info.context.model, ti_list) as (
tokenizer, tokenizer,
ti_manager, ti_manager,
), ), ModelPatcher.apply_clip_skip(
ModelPatcher.apply_clip_skip(text_encoder_info.context.model, self.clip.skipped_layers), text_encoder_info.context.model, self.clip.skipped_layers
text_encoder_info as text_encoder, ), text_encoder_info as text_encoder:
):
compel = Compel( compel = Compel(
tokenizer=tokenizer, tokenizer=tokenizer,
text_encoder=text_encoder, text_encoder=text_encoder,
@ -121,12 +118,12 @@ class CompelInvocation(BaseInvocation):
conjunction = Compel.parse_prompt_string(self.prompt) conjunction = Compel.parse_prompt_string(self.prompt)
if context.config.log_tokenization: if context.services.configuration.log_tokenization:
log_tokenization_for_conjunction(conjunction, tokenizer) log_tokenization_for_conjunction(conjunction, tokenizer)
c, options = compel.build_conditioning_tensor_for_conjunction(conjunction) c, options = compel.build_conditioning_tensor_for_conjunction(conjunction)
ec = ExtraConditioningInfo( ec = InvokeAIDiffuserComponent.ExtraConditioningInfo(
tokens_count_including_eos_bos=get_max_token_count(tokenizer, conjunction), tokens_count_including_eos_bos=get_max_token_count(tokenizer, conjunction),
cross_attention_control_args=options.get("cross_attention_control", None), cross_attention_control_args=options.get("cross_attention_control", None),
) )
@@ -142,7 +139,8 @@ class CompelInvocation(BaseInvocation):
] ]
) )
conditioning_name = context.save_conditioning(conditioning_data) conditioning_name = f"{context.graph_execution_state_id}_{self.id}_conditioning"
context.services.latents.save(conditioning_name, conditioning_data)
return ConditioningOutput( return ConditioningOutput(
conditioning=ConditioningField( conditioning=ConditioningField(
@@ -162,11 +160,11 @@ class SDXLPromptInvocationBase:
zero_on_empty: bool, zero_on_empty: bool,
): ):
tokenizer_info = context.services.model_manager.get_model( tokenizer_info = context.services.model_manager.get_model(
**clip_field.tokenizer.model_dump(), **clip_field.tokenizer.dict(),
context=context, context=context,
) )
text_encoder_info = context.services.model_manager.get_model( text_encoder_info = context.services.model_manager.get_model(
**clip_field.text_encoder.model_dump(), **clip_field.text_encoder.dict(),
context=context, context=context,
) )
@@ -174,11 +172,7 @@ class SDXLPromptInvocationBase:
if prompt == "" and zero_on_empty: if prompt == "" and zero_on_empty:
cpu_text_encoder = text_encoder_info.context.model cpu_text_encoder = text_encoder_info.context.model
c = torch.zeros( c = torch.zeros(
( (1, cpu_text_encoder.config.max_position_embeddings, cpu_text_encoder.config.hidden_size),
1,
cpu_text_encoder.config.max_position_embeddings,
cpu_text_encoder.config.hidden_size,
),
dtype=text_encoder_info.context.cache.precision, dtype=text_encoder_info.context.cache.precision,
) )
if get_pooled: if get_pooled:
@@ -192,9 +186,7 @@ class SDXLPromptInvocationBase:
def _lora_loader(): def _lora_loader():
for lora in clip_field.loras: for lora in clip_field.loras:
lora_info = context.services.model_manager.get_model( lora_info = context.services.model_manager.get_model(**lora.dict(exclude={"weight"}), context=context)
**lora.model_dump(exclude={"weight"}), context=context
)
yield (lora_info.context.model, lora.weight) yield (lora_info.context.model, lora.weight)
del lora_info del lora_info
return return
@@ -222,15 +214,14 @@ class SDXLPromptInvocationBase:
# print(traceback.format_exc()) # print(traceback.format_exc())
print(f'Warn: trigger: "{trigger}" not found') print(f'Warn: trigger: "{trigger}" not found')
with ( with ModelPatcher.apply_lora(
ModelPatcher.apply_lora(text_encoder_info.context.model, _lora_loader(), lora_prefix), text_encoder_info.context.model, _lora_loader(), lora_prefix
ModelPatcher.apply_ti(tokenizer_info.context.model, text_encoder_info.context.model, ti_list) as ( ), ModelPatcher.apply_ti(tokenizer_info.context.model, text_encoder_info.context.model, ti_list) as (
tokenizer, tokenizer,
ti_manager, ti_manager,
), ), ModelPatcher.apply_clip_skip(
ModelPatcher.apply_clip_skip(text_encoder_info.context.model, clip_field.skipped_layers), text_encoder_info.context.model, clip_field.skipped_layers
text_encoder_info as text_encoder, ), text_encoder_info as text_encoder:
):
compel = Compel( compel = Compel(
tokenizer=tokenizer, tokenizer=tokenizer,
text_encoder=text_encoder, text_encoder=text_encoder,
@@ -254,7 +245,7 @@ class SDXLPromptInvocationBase:
else: else:
c_pooled = None c_pooled = None
ec = ExtraConditioningInfo( ec = InvokeAIDiffuserComponent.ExtraConditioningInfo(
tokens_count_including_eos_bos=get_max_token_count(tokenizer, conjunction), tokens_count_including_eos_bos=get_max_token_count(tokenizer, conjunction),
cross_attention_control_args=options.get("cross_attention_control", None), cross_attention_control_args=options.get("cross_attention_control", None),
) )
@@ -281,16 +272,8 @@ class SDXLPromptInvocationBase:
class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase): class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
"""Parse prompt using compel package to conditioning.""" """Parse prompt using compel package to conditioning."""
prompt: str = InputField( prompt: str = InputField(default="", description=FieldDescriptions.compel_prompt, ui_component=UIComponent.Textarea)
default="", style: str = InputField(default="", description=FieldDescriptions.compel_prompt, ui_component=UIComponent.Textarea)
description=FieldDescriptions.compel_prompt,
ui_component=UIComponent.Textarea,
)
style: str = InputField(
default="",
description=FieldDescriptions.compel_prompt,
ui_component=UIComponent.Textarea,
)
original_width: int = InputField(default=1024, description="") original_width: int = InputField(default=1024, description="")
original_height: int = InputField(default=1024, description="") original_height: int = InputField(default=1024, description="")
crop_top: int = InputField(default=0, description="") crop_top: int = InputField(default=0, description="")
@@ -326,9 +309,7 @@ class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
[ [
c1, c1,
torch.zeros( torch.zeros(
(c1.shape[0], c2.shape[1] - c1.shape[1], c1.shape[2]), (c1.shape[0], c2.shape[1] - c1.shape[1], c1.shape[2]), device=c1.device, dtype=c1.dtype
device=c1.device,
dtype=c1.dtype,
), ),
], ],
dim=1, dim=1,
@@ -339,9 +320,7 @@ class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
[ [
c2, c2,
torch.zeros( torch.zeros(
(c2.shape[0], c1.shape[1] - c2.shape[1], c2.shape[2]), (c2.shape[0], c1.shape[1] - c2.shape[1], c2.shape[2]), device=c2.device, dtype=c2.dtype
device=c2.device,
dtype=c2.dtype,
), ),
], ],
dim=1, dim=1,
@@ -379,9 +358,7 @@ class SDXLRefinerCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase
"""Parse prompt using compel package to conditioning.""" """Parse prompt using compel package to conditioning."""
style: str = InputField( style: str = InputField(
default="", default="", description=FieldDescriptions.compel_prompt, ui_component=UIComponent.Textarea
description=FieldDescriptions.compel_prompt,
ui_component=UIComponent.Textarea,
) # TODO: ? ) # TODO: ?
original_width: int = InputField(default=1024, description="") original_width: int = InputField(default=1024, description="")
original_height: int = InputField(default=1024, description="") original_height: int = InputField(default=1024, description="")
@@ -425,16 +402,10 @@ class SDXLRefinerCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase
class ClipSkipInvocationOutput(BaseInvocationOutput): class ClipSkipInvocationOutput(BaseInvocationOutput):
"""Clip skip node output""" """Clip skip node output"""
clip: Optional[ClipField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP") clip: ClipField = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP")
@invocation( @invocation("clip_skip", title="CLIP Skip", tags=["clipskip", "clip", "skip"], category="conditioning", version="1.0.0")
"clip_skip",
title="CLIP Skip",
tags=["clipskip", "clip", "skip"],
category="conditioning",
version="1.0.0",
)
class ClipSkipInvocation(BaseInvocation): class ClipSkipInvocation(BaseInvocation):
"""Skip layers in clip text_encoder model.""" """Skip layers in clip text_encoder model."""
@@ -449,9 +420,7 @@ class ClipSkipInvocation(BaseInvocation):
def get_max_token_count( def get_max_token_count(
tokenizer, tokenizer, prompt: Union[FlattenedPrompt, Blend, Conjunction], truncate_if_too_long=False
prompt: Union[FlattenedPrompt, Blend, Conjunction],
truncate_if_too_long=False,
) -> int: ) -> int:
if type(prompt) is Blend: if type(prompt) is Blend:
blend: Blend = prompt blend: Blend = prompt
@@ -468,11 +437,9 @@ def get_tokens_for_prompt_object(tokenizer, parsed_prompt: FlattenedPrompt, trun
raise ValueError("Blend is not supported here - you need to get tokens for each of its .children") raise ValueError("Blend is not supported here - you need to get tokens for each of its .children")
text_fragments = [ text_fragments = [
( x.text
x.text if type(x) is Fragment
if type(x) is Fragment else (" ".join([f.text for f in x.original]) if type(x) is CrossAttentionControlSubstitute else str(x))
else (" ".join([f.text for f in x.original]) if type(x) is CrossAttentionControlSubstitute else str(x))
)
for x in parsed_prompt.children for x in parsed_prompt.children
] ]
text = " ".join(text_fragments) text = " ".join(text_fragments)


@@ -2,7 +2,7 @@
# initial implementation by Gregg Helt, 2023 # initial implementation by Gregg Helt, 2023
# heavily leverages controlnet_aux package: https://github.com/patrickvonplaten/controlnet_aux # heavily leverages controlnet_aux package: https://github.com/patrickvonplaten/controlnet_aux
from builtins import bool, float from builtins import bool, float
from typing import Dict, List, Literal, Union from typing import Dict, List, Literal, Optional, Union
import cv2 import cv2
import numpy as np import numpy as np
@@ -24,24 +24,27 @@ from controlnet_aux import (
) )
from controlnet_aux.util import HWC3, ade_palette from controlnet_aux.util import HWC3, ade_palette
from PIL import Image from PIL import Image
from pydantic import BaseModel, ConfigDict, Field, field_validator from pydantic import BaseModel, Field, validator
from invokeai.app.invocations.primitives import ImageField, ImageOutput from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from ...backend.model_management import BaseModelType from ...backend.model_management import BaseModelType
from ..models.image import ImageCategory, ResourceOrigin
from .baseinvocation import ( from .baseinvocation import (
BaseInvocation, BaseInvocation,
BaseInvocationOutput, BaseInvocationOutput,
FieldDescriptions, FieldDescriptions,
Input,
InputField, InputField,
Input,
InvocationContext, InvocationContext,
OutputField, OutputField,
UIType,
invocation, invocation,
invocation_output, invocation_output,
) )
CONTROLNET_MODE_VALUES = Literal["balanced", "more_prompt", "more_control", "unbalanced"] CONTROLNET_MODE_VALUES = Literal["balanced", "more_prompt", "more_control", "unbalanced"]
CONTROLNET_RESIZE_VALUES = Literal[ CONTROLNET_RESIZE_VALUES = Literal[
"just_resize", "just_resize",
@@ -57,8 +60,6 @@ class ControlNetModelField(BaseModel):
model_name: str = Field(description="Name of the ControlNet model") model_name: str = Field(description="Name of the ControlNet model")
base_model: BaseModelType = Field(description="Base model") base_model: BaseModelType = Field(description="Base model")
model_config = ConfigDict(protected_namespaces=())
class ControlField(BaseModel): class ControlField(BaseModel):
image: ImageField = Field(description="The control image") image: ImageField = Field(description="The control image")
@@ -73,7 +74,7 @@ class ControlField(BaseModel):
control_mode: CONTROLNET_MODE_VALUES = Field(default="balanced", description="The control mode to use") control_mode: CONTROLNET_MODE_VALUES = Field(default="balanced", description="The control mode to use")
resize_mode: CONTROLNET_RESIZE_VALUES = Field(default="just_resize", description="The resize mode to use") resize_mode: CONTROLNET_RESIZE_VALUES = Field(default="just_resize", description="The resize mode to use")
@field_validator("control_weight") @validator("control_weight")
def validate_control_weight(cls, v): def validate_control_weight(cls, v):
"""Validate that all control weights in the valid range""" """Validate that all control weights in the valid range"""
if isinstance(v, list): if isinstance(v, list):
@@ -101,7 +102,7 @@ class ControlNetInvocation(BaseInvocation):
image: ImageField = InputField(description="The control image") image: ImageField = InputField(description="The control image")
control_model: ControlNetModelField = InputField(description=FieldDescriptions.controlnet_model, input=Input.Direct) control_model: ControlNetModelField = InputField(description=FieldDescriptions.controlnet_model, input=Input.Direct)
control_weight: Union[float, List[float]] = InputField( control_weight: Union[float, List[float]] = InputField(
default=1.0, description="The weight given to the ControlNet" default=1.0, description="The weight given to the ControlNet", ui_type=UIType.Float
) )
begin_step_percent: float = InputField( begin_step_percent: float = InputField(
default=0, ge=-1, le=2, description="When the ControlNet is first applied (% of total steps)" default=0, ge=-1, le=2, description="When the ControlNet is first applied (% of total steps)"
@@ -126,7 +127,9 @@ class ControlNetInvocation(BaseInvocation):
) )
# This invocation exists for other invocations to subclass it - do not register with @invocation! @invocation(
"image_processor", title="Base Image Processor", tags=["controlnet"], category="controlnet", version="1.0.0"
)
class ImageProcessorInvocation(BaseInvocation): class ImageProcessorInvocation(BaseInvocation):
"""Base class for invocations that preprocess images for ControlNet""" """Base class for invocations that preprocess images for ControlNet"""
@@ -393,9 +396,9 @@ class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res) detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res) image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
h: int = InputField(default=512, ge=0, description="Content shuffle `h` parameter") h: Optional[int] = InputField(default=512, ge=0, description="Content shuffle `h` parameter")
w: int = InputField(default=512, ge=0, description="Content shuffle `w` parameter") w: Optional[int] = InputField(default=512, ge=0, description="Content shuffle `w` parameter")
f: int = InputField(default=256, ge=0, description="Content shuffle `f` parameter") f: Optional[int] = InputField(default=256, ge=0, description="Content shuffle `f` parameter")
def run_processor(self, image): def run_processor(self, image):
content_shuffle_processor = ContentShuffleDetector() content_shuffle_processor = ContentShuffleDetector()
@@ -559,33 +562,3 @@ class SamDetectorReproducibleColors(SamDetector):
img[:, :] = ann_color img[:, :] = ann_color
final_img.paste(Image.fromarray(img, mode="RGB"), (0, 0), Image.fromarray(np.uint8(m * 255))) final_img.paste(Image.fromarray(img, mode="RGB"), (0, 0), Image.fromarray(np.uint8(m * 255)))
return np.array(final_img, dtype=np.uint8) return np.array(final_img, dtype=np.uint8)
@invocation(
"color_map_image_processor",
title="Color Map Processor",
tags=["controlnet"],
category="controlnet",
version="1.0.0",
)
class ColorMapImageProcessorInvocation(ImageProcessorInvocation):
"""Generates a color map from the provided image"""
color_map_tile_size: int = InputField(default=64, ge=0, description=FieldDescriptions.tile_size)
def run_processor(self, image: Image.Image):
image = image.convert("RGB")
np_image = np.array(image, dtype=np.uint8)
height, width = np_image.shape[:2]
width_tile_size = min(self.color_map_tile_size, width)
height_tile_size = min(self.color_map_tile_size, height)
color_map = cv2.resize(
np_image,
(width // width_tile_size, height // height_tile_size),
interpolation=cv2.INTER_CUBIC,
)
color_map = cv2.resize(color_map, (width, height), interpolation=cv2.INTER_NEAREST)
color_map = Image.fromarray(color_map)
return color_map
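For reference, the model_config = ConfigDict(protected_namespaces=()) line on the left exists because pydantic v2 reserves the model_ prefix for its own methods and warns about fields such as model_name; a minimal sketch of the opt-out (the class name is illustrative):

from pydantic import BaseModel, ConfigDict, Field

class ControlNetModelFieldSketch(BaseModel):
    # Without this config, pydantic v2 emits a UserWarning that "model_name"
    # conflicts with the protected namespace "model_".
    model_config = ConfigDict(protected_namespaces=())

    model_name: str = Field(description="Name of the ControlNet model")

print(ControlNetModelFieldSketch(model_name="canny").model_name)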


@@ -4,10 +4,9 @@
import cv2 as cv import cv2 as cv
import numpy import numpy
from PIL import Image, ImageOps from PIL import Image, ImageOps
from invokeai.app.invocations.primitives import ImageField, ImageOutput from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.app.models.image import ImageCategory, ResourceOrigin
from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation


@@ -1,724 +0,0 @@
import math
import re
from pathlib import Path
from typing import Optional, TypedDict
import cv2
import numpy as np
from mediapipe.python.solutions.face_mesh import FaceMesh # type: ignore[import]
from PIL import Image, ImageDraw, ImageFilter, ImageFont, ImageOps
from PIL.Image import Image as ImageType
from pydantic import field_validator
import invokeai.assets.fonts as font_assets
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
InputField,
InvocationContext,
OutputField,
invocation,
invocation_output,
)
from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
@invocation_output("face_mask_output")
class FaceMaskOutput(ImageOutput):
"""Base class for FaceMask output"""
mask: ImageField = OutputField(description="The output mask")
@invocation_output("face_off_output")
class FaceOffOutput(ImageOutput):
"""Base class for FaceOff Output"""
mask: ImageField = OutputField(description="The output mask")
x: int = OutputField(description="The x coordinate of the bounding box's left side")
y: int = OutputField(description="The y coordinate of the bounding box's top side")
class FaceResultData(TypedDict):
image: ImageType
mask: ImageType
x_center: float
y_center: float
mesh_width: int
mesh_height: int
chunk_x_offset: int
chunk_y_offset: int
class FaceResultDataWithId(FaceResultData):
face_id: int
class ExtractFaceData(TypedDict):
bounded_image: ImageType
bounded_mask: ImageType
x_min: int
y_min: int
x_max: int
y_max: int
class FaceMaskResult(TypedDict):
image: ImageType
mask: ImageType
def create_white_image(w: int, h: int) -> ImageType:
return Image.new("L", (w, h), color=255)
def create_black_image(w: int, h: int) -> ImageType:
return Image.new("L", (w, h), color=0)
FONT_SIZE = 32
FONT_STROKE_WIDTH = 4
def coalesce_faces(face1: FaceResultData, face2: FaceResultData) -> FaceResultData:
face1_x_offset = face1["chunk_x_offset"] - min(face1["chunk_x_offset"], face2["chunk_x_offset"])
face2_x_offset = face2["chunk_x_offset"] - min(face1["chunk_x_offset"], face2["chunk_x_offset"])
face1_y_offset = face1["chunk_y_offset"] - min(face1["chunk_y_offset"], face2["chunk_y_offset"])
face2_y_offset = face2["chunk_y_offset"] - min(face1["chunk_y_offset"], face2["chunk_y_offset"])
new_im_width = (
max(face1["image"].width, face2["image"].width)
+ max(face1["chunk_x_offset"], face2["chunk_x_offset"])
- min(face1["chunk_x_offset"], face2["chunk_x_offset"])
)
new_im_height = (
max(face1["image"].height, face2["image"].height)
+ max(face1["chunk_y_offset"], face2["chunk_y_offset"])
- min(face1["chunk_y_offset"], face2["chunk_y_offset"])
)
pil_image = Image.new(mode=face1["image"].mode, size=(new_im_width, new_im_height))
pil_image.paste(face1["image"], (face1_x_offset, face1_y_offset))
pil_image.paste(face2["image"], (face2_x_offset, face2_y_offset))
# Mask images are always from the origin
new_mask_im_width = max(face1["mask"].width, face2["mask"].width)
new_mask_im_height = max(face1["mask"].height, face2["mask"].height)
mask_pil = create_white_image(new_mask_im_width, new_mask_im_height)
black_image = create_black_image(face1["mask"].width, face1["mask"].height)
mask_pil.paste(black_image, (0, 0), ImageOps.invert(face1["mask"]))
black_image = create_black_image(face2["mask"].width, face2["mask"].height)
mask_pil.paste(black_image, (0, 0), ImageOps.invert(face2["mask"]))
new_face = FaceResultData(
image=pil_image,
mask=mask_pil,
x_center=max(face1["x_center"], face2["x_center"]),
y_center=max(face1["y_center"], face2["y_center"]),
mesh_width=max(face1["mesh_width"], face2["mesh_width"]),
mesh_height=max(face1["mesh_height"], face2["mesh_height"]),
chunk_x_offset=max(face1["chunk_x_offset"], face2["chunk_x_offset"]),
chunk_y_offset=max(face2["chunk_y_offset"], face2["chunk_y_offset"]),
)
return new_face
def prepare_faces_list(
face_result_list: list[FaceResultData],
) -> list[FaceResultDataWithId]:
"""Deduplicates a list of faces, adding IDs to them."""
deduped_faces: list[FaceResultData] = []
if len(face_result_list) == 0:
return list()
for candidate in face_result_list:
should_add = True
candidate_x_center = candidate["x_center"]
candidate_y_center = candidate["y_center"]
for idx, face in enumerate(deduped_faces):
face_center_x = face["x_center"]
face_center_y = face["y_center"]
face_radius_w = face["mesh_width"] / 2
face_radius_h = face["mesh_height"] / 2
# Determine if the center of the candidate_face is inside the ellipse of the added face
# p < 1 -> Inside
# p = 1 -> Exactly on the ellipse
# p > 1 -> Outside
p = (math.pow((candidate_x_center - face_center_x), 2) / math.pow(face_radius_w, 2)) + (
math.pow((candidate_y_center - face_center_y), 2) / math.pow(face_radius_h, 2)
)
if p < 1: # Inside of the already-added face's radius
deduped_faces[idx] = coalesce_faces(face, candidate)
should_add = False
break
if should_add:
deduped_faces.append(candidate)
sorted_faces = sorted(deduped_faces, key=lambda x: x["y_center"])
sorted_faces = sorted(sorted_faces, key=lambda x: x["x_center"])
# add face_id for reference
sorted_faces_with_ids: list[FaceResultDataWithId] = []
face_id_counter = 0
for face in sorted_faces:
sorted_faces_with_ids.append(
FaceResultDataWithId(
**face,
face_id=face_id_counter,
)
)
face_id_counter += 1
return sorted_faces_with_ids
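# A small sketch of the ellipse test used above, with made-up numbers: a previously added
# face with mesh 100x80 centered at (200, 150) has semi-axes 50 and 40, so a candidate
# centered at (210, 155) gives
# >>> (210 - 200) ** 2 / 50**2 + (155 - 150) ** 2 / 40**2
# 0.055625
# which is < 1, meaning the candidate lies inside the existing face's ellipse and the two
# detections are coalesced instead of being appended as separate faces.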
def generate_face_box_mask(
context: InvocationContext,
minimum_confidence: float,
x_offset: float,
y_offset: float,
pil_image: ImageType,
chunk_x_offset: int = 0,
chunk_y_offset: int = 0,
draw_mesh: bool = True,
) -> list[FaceResultData]:
result = []
mask_pil = None
# Convert the PIL image to a NumPy array.
np_image = np.array(pil_image, dtype=np.uint8)
# Check if the input image has four channels (RGBA).
if np_image.shape[2] == 4:
# Convert RGBA to RGB by removing the alpha channel.
np_image = np_image[:, :, :3]
# Create a FaceMesh object for face landmark detection and mesh generation.
face_mesh = FaceMesh(
max_num_faces=999,
min_detection_confidence=minimum_confidence,
min_tracking_confidence=minimum_confidence,
)
# Detect the face landmarks and mesh in the input image.
results = face_mesh.process(np_image)
# Check if any face is detected.
if results.multi_face_landmarks: # type: ignore # these are via protobuf and not typed
# Search for the face_id in the detected faces.
for face_id, face_landmarks in enumerate(results.multi_face_landmarks): # type: ignore # these are via protobuf and not typed
# Get the bounding box of the face mesh.
x_coordinates = [landmark.x for landmark in face_landmarks.landmark]
y_coordinates = [landmark.y for landmark in face_landmarks.landmark]
x_min, x_max = min(x_coordinates), max(x_coordinates)
y_min, y_max = min(y_coordinates), max(y_coordinates)
# Calculate the width and height of the face mesh.
mesh_width = int((x_max - x_min) * np_image.shape[1])
mesh_height = int((y_max - y_min) * np_image.shape[0])
# Get the center of the face.
x_center = np.mean([landmark.x * np_image.shape[1] for landmark in face_landmarks.landmark])
y_center = np.mean([landmark.y * np_image.shape[0] for landmark in face_landmarks.landmark])
face_landmark_points = np.array(
[
[landmark.x * np_image.shape[1], landmark.y * np_image.shape[0]]
for landmark in face_landmarks.landmark
]
)
# Apply the scaling offsets to the face landmark points with a multiplier.
scale_multiplier = 0.2
x_center = np.mean(face_landmark_points[:, 0])
y_center = np.mean(face_landmark_points[:, 1])
if draw_mesh:
x_scaled = face_landmark_points[:, 0] + scale_multiplier * x_offset * (
face_landmark_points[:, 0] - x_center
)
y_scaled = face_landmark_points[:, 1] + scale_multiplier * y_offset * (
face_landmark_points[:, 1] - y_center
)
convex_hull = cv2.convexHull(np.column_stack((x_scaled, y_scaled)).astype(np.int32))
# Generate a binary face mask using the face mesh.
mask_image = np.ones(np_image.shape[:2], dtype=np.uint8) * 255
cv2.fillConvexPoly(mask_image, convex_hull, 0)
# Convert the binary mask image to a PIL Image.
init_mask_pil = Image.fromarray(mask_image, mode="L")
w, h = init_mask_pil.size
mask_pil = create_white_image(w + chunk_x_offset, h + chunk_y_offset)
mask_pil.paste(init_mask_pil, (chunk_x_offset, chunk_y_offset))
x_center = float(x_center)
y_center = float(y_center)
face = FaceResultData(
image=pil_image,
mask=mask_pil or create_white_image(*pil_image.size),
x_center=x_center + chunk_x_offset,
y_center=y_center + chunk_y_offset,
mesh_width=mesh_width,
mesh_height=mesh_height,
chunk_x_offset=chunk_x_offset,
chunk_y_offset=chunk_y_offset,
)
result.append(face)
return result
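# Note: MediaPipe FaceMesh landmarks are normalized to [0, 1], which is why the code above
# multiplies landmark.x by np_image.shape[1] (pixel width) and landmark.y by
# np_image.shape[0] (pixel height). A minimal call sketch, assuming a local "face.png"
# exists and an InvocationContext is available as `context`:
# >>> faces = generate_face_box_mask(
# ...     context=context, minimum_confidence=0.5, x_offset=0.0, y_offset=0.0,
# ...     pil_image=Image.open("face.png").convert("RGB"), draw_mesh=True,
# ... )
# >>> [f["mesh_width"] for f in faces]  # one entry per detected face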
def extract_face(
context: InvocationContext,
image: ImageType,
face: FaceResultData,
padding: int,
) -> ExtractFaceData:
mask = face["mask"]
center_x = face["x_center"]
center_y = face["y_center"]
mesh_width = face["mesh_width"]
mesh_height = face["mesh_height"]
# Determine the minimum size of the square crop
min_size = min(mask.width, mask.height)
# Calculate the crop boundaries for the output image and mask.
mesh_width += 128 + padding # add pixels to account for mask variance
mesh_height += 128 + padding # add pixels to account for mask variance
crop_size = min(
max(mesh_width, mesh_height, 128), min_size
) # Choose the smaller of the two (given value or face mask size)
if crop_size > 128:
crop_size = (crop_size + 7) // 8 * 8 # Ensure crop side is multiple of 8
# Calculate the actual crop boundaries within the bounds of the original image.
x_min = int(center_x - crop_size / 2)
y_min = int(center_y - crop_size / 2)
x_max = int(center_x + crop_size / 2)
y_max = int(center_y + crop_size / 2)
# Adjust the crop boundaries to stay within the original image's dimensions
if x_min < 0:
context.services.logger.warning("FaceTools --> -X-axis padding reached image edge.")
x_max -= x_min
x_min = 0
elif x_max > mask.width:
context.services.logger.warning("FaceTools --> +X-axis padding reached image edge.")
x_min -= x_max - mask.width
x_max = mask.width
if y_min < 0:
context.services.logger.warning("FaceTools --> +Y-axis padding reached image edge.")
y_max -= y_min
y_min = 0
elif y_max > mask.height:
context.services.logger.warning("FaceTools --> -Y-axis padding reached image edge.")
y_min -= y_max - mask.height
y_max = mask.height
# Ensure the crop is square and adjust the boundaries if needed
if x_max - x_min != crop_size:
context.services.logger.warning("FaceTools --> Limiting x-axis padding to constrain bounding box to a square.")
diff = crop_size - (x_max - x_min)
x_min -= diff // 2
x_max += diff - diff // 2
if y_max - y_min != crop_size:
context.services.logger.warning("FaceTools --> Limiting y-axis padding to constrain bounding box to a square.")
diff = crop_size - (y_max - y_min)
y_min -= diff // 2
y_max += diff - diff // 2
context.services.logger.info(f"FaceTools --> Calculated bounding box (8 multiple): {crop_size}")
# Crop the output image to the specified size with the center of the face mesh as the center.
mask = mask.crop((x_min, y_min, x_max, y_max))
bounded_image = image.crop((x_min, y_min, x_max, y_max))
# blur mask edge by small radius
mask = mask.filter(ImageFilter.GaussianBlur(radius=2))
return ExtractFaceData(
bounded_image=bounded_image,
bounded_mask=mask,
x_min=x_min,
y_min=y_min,
x_max=x_max,
y_max=y_max,
)
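# A worked example of the crop sizing above (hypothetical numbers): for a 300x260 face
# mesh with padding=0 in a 1024x1024 image, mesh_width/mesh_height become 428/388, so
# crop_size = min(max(428, 388, 128), 1024) = 428, which is then rounded up to a multiple
# of 8:
# >>> (428 + 7) // 8 * 8
# 432
# The returned x_min/y_min are the crop origin, which FaceOff exposes as x/y so the face
# can later be pasted back onto the source image at the same position.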
def get_faces_list(
context: InvocationContext,
image: ImageType,
should_chunk: bool,
minimum_confidence: float,
x_offset: float,
y_offset: float,
draw_mesh: bool = True,
) -> list[FaceResultDataWithId]:
result = []
# Generate the face box mask and get the center of the face.
if not should_chunk:
context.services.logger.info("FaceTools --> Attempting full image face detection.")
result = generate_face_box_mask(
context=context,
minimum_confidence=minimum_confidence,
x_offset=x_offset,
y_offset=y_offset,
pil_image=image,
chunk_x_offset=0,
chunk_y_offset=0,
draw_mesh=draw_mesh,
)
if should_chunk or len(result) == 0:
context.services.logger.info("FaceTools --> Chunking image (chunk toggled on, or no face found in full image).")
width, height = image.size
image_chunks = []
x_offsets = []
y_offsets = []
result = []
# If width == height, there's nothing more we can do... otherwise...
if width > height:
# Landscape - slice the image horizontally
fx = 0.0
steps = int(width * 2 / height) + 1
increment = (width - height) / (steps - 1)
while fx <= (width - height):
x = int(fx)
image_chunks.append(image.crop((x, 0, x + height, height)))
x_offsets.append(x)
y_offsets.append(0)
fx += increment
context.services.logger.info(f"FaceTools --> Chunk starting at x = {x}")
elif height > width:
# Portrait - slice the image vertically
fy = 0.0
steps = int(height * 2 / width) + 1
increment = (height - width) / (steps - 1)
while fy <= (height - width):
y = int(fy)
image_chunks.append(image.crop((0, y, width, y + width)))
x_offsets.append(0)
y_offsets.append(y)
fy += increment
context.services.logger.info(f"FaceTools --> Chunk starting at y = {y}")
for idx in range(len(image_chunks)):
context.services.logger.info(f"FaceTools --> Evaluating faces in chunk {idx}")
result = result + generate_face_box_mask(
context=context,
minimum_confidence=minimum_confidence,
x_offset=x_offset,
y_offset=y_offset,
pil_image=image_chunks[idx],
chunk_x_offset=x_offsets[idx],
chunk_y_offset=y_offsets[idx],
draw_mesh=draw_mesh,
)
if len(result) == 0:
# Give up
context.services.logger.warning(
"FaceTools --> No face detected in chunked input image. Passing through original image."
)
all_faces = prepare_faces_list(result)
return all_faces
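# A sketch of the chunking maths above for a hypothetical 1920x1080 landscape image:
# steps = int(1920 * 2 / 1080) + 1 = 4 and increment = (1920 - 1080) / 3 = 280.0, so
# square 1080x1080 chunks are cropped starting at x = 0, 280, 560 and 840. Faces detected
# in more than one overlapping chunk carry their chunk_x_offset/chunk_y_offset, which lets
# prepare_faces_list coalesce the duplicates back into a single face.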
@invocation("face_off", title="FaceOff", tags=["image", "faceoff", "face", "mask"], category="image", version="1.0.2")
class FaceOffInvocation(BaseInvocation):
"""Bound, extract, and mask a face from an image using MediaPipe detection"""
image: ImageField = InputField(description="Image for face detection")
face_id: int = InputField(
default=0,
ge=0,
description="The face ID to process, numbered from 0. Multiple faces not supported. Find a face's ID with FaceIdentifier node.",
)
minimum_confidence: float = InputField(
default=0.5, description="Minimum confidence for face detection (lower if detection is failing)"
)
x_offset: float = InputField(default=0.0, description="X-axis offset of the mask")
y_offset: float = InputField(default=0.0, description="Y-axis offset of the mask")
padding: int = InputField(default=0, description="All-axis padding around the mask in pixels")
chunk: bool = InputField(
default=False,
description="Whether to bypass full image face detection and default to image chunking. Chunking will occur if no faces are found in the full image.",
)
def faceoff(self, context: InvocationContext, image: ImageType) -> Optional[ExtractFaceData]:
all_faces = get_faces_list(
context=context,
image=image,
should_chunk=self.chunk,
minimum_confidence=self.minimum_confidence,
x_offset=self.x_offset,
y_offset=self.y_offset,
draw_mesh=True,
)
if len(all_faces) == 0:
context.services.logger.warning("FaceOff --> No faces detected. Passing through original image.")
return None
if self.face_id > len(all_faces) - 1:
context.services.logger.warning(
f"FaceOff --> Face ID {self.face_id} is outside of the number of faces detected ({len(all_faces)}). Passing through original image."
)
return None
face_data = extract_face(context=context, image=image, face=all_faces[self.face_id], padding=self.padding)
# Convert the input image to RGBA mode to ensure it has an alpha channel.
face_data["bounded_image"] = face_data["bounded_image"].convert("RGBA")
return face_data
def invoke(self, context: InvocationContext) -> FaceOffOutput:
image = context.services.images.get_pil_image(self.image.image_name)
result = self.faceoff(context=context, image=image)
if result is None:
result_image = image
result_mask = create_white_image(*image.size)
x = 0
y = 0
else:
result_image = result["bounded_image"]
result_mask = result["bounded_mask"]
x = result["x_min"]
y = result["y_min"]
image_dto = context.services.images.create(
image=result_image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=self.workflow,
)
mask_dto = context.services.images.create(
image=result_mask,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.MASK,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
output = FaceOffOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
mask=ImageField(image_name=mask_dto.image_name),
x=x,
y=y,
)
return output
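# Note on the output above: x/y are the crop origin (x_min, y_min) of the extracted face,
# so a downstream paste/compositing node can place an inpainted face back onto the source
# image at the same coordinates. When no face (or an out-of-range face_id) is found, the
# node passes the original image through together with an all-white mask.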
@invocation("face_mask_detection", title="FaceMask", tags=["image", "face", "mask"], category="image", version="1.0.2")
class FaceMaskInvocation(BaseInvocation):
"""Face mask creation using mediapipe face detection"""
image: ImageField = InputField(description="Image to face detect")
face_ids: str = InputField(
default="",
description="Comma-separated list of face ids to mask eg '0,2,7'. Numbered from 0. Leave empty to mask all. Find face IDs with FaceIdentifier node.",
)
minimum_confidence: float = InputField(
default=0.5, description="Minimum confidence for face detection (lower if detection is failing)"
)
x_offset: float = InputField(default=0.0, description="Offset for the X-axis of the face mask")
y_offset: float = InputField(default=0.0, description="Offset for the Y-axis of the face mask")
chunk: bool = InputField(
default=False,
description="Whether to bypass full image face detection and default to image chunking. Chunking will occur if no faces are found in the full image.",
)
invert_mask: bool = InputField(default=False, description="Toggle to invert the mask")
@field_validator("face_ids")
def validate_comma_separated_ints(cls, v) -> str:
comma_separated_ints_regex = re.compile(r"^(\d+(,\d+)*)?$")  # require a digit before any comma so inputs like ",1" are rejected
if comma_separated_ints_regex.match(v) is None:
raise ValueError('Face IDs must be a comma-separated list of integers (e.g. "1,2,3")')
return v
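# Examples of what the validator above accepts (format check only; IDs that are out of
# range are filtered later in facemask()):
# >>> pattern = r"^(\d+(,\d+)*)?$"
# >>> bool(re.match(pattern, "0,2,7")), bool(re.match(pattern, "")), bool(re.match(pattern, "1, 2"))
# (True, True, False)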
def facemask(self, context: InvocationContext, image: ImageType) -> FaceMaskResult:
all_faces = get_faces_list(
context=context,
image=image,
should_chunk=self.chunk,
minimum_confidence=self.minimum_confidence,
x_offset=self.x_offset,
y_offset=self.y_offset,
draw_mesh=True,
)
mask_pil = create_white_image(*image.size)
id_range = list(range(0, len(all_faces)))
ids_to_extract = id_range
if self.face_ids != "":
parsed_face_ids = [int(id) for id in self.face_ids.split(",")]
# get requested face_ids that are in range
intersected_face_ids = set(parsed_face_ids) & set(id_range)
if len(intersected_face_ids) == 0:
id_range_str = ",".join([str(id) for id in id_range])
context.services.logger.warning(
f"Face IDs must be in range of detected faces - requested {self.face_ids}, detected {id_range_str}. Passing through original image."
)
return FaceMaskResult(
image=image, # original image
mask=mask_pil, # white mask
)
ids_to_extract = list(intersected_face_ids)
for face_id in ids_to_extract:
face_data = extract_face(context=context, image=image, face=all_faces[face_id], padding=0)
face_mask_pil = face_data["bounded_mask"]
x_min = face_data["x_min"]
y_min = face_data["y_min"]
x_max = face_data["x_max"]
y_max = face_data["y_max"]
mask_pil.paste(
create_black_image(x_max - x_min, y_max - y_min),
box=(x_min, y_min),
mask=ImageOps.invert(face_mask_pil),
)
if self.invert_mask:
mask_pil = ImageOps.invert(mask_pil)
# Create an RGBA image with transparency
image = image.convert("RGBA")
return FaceMaskResult(
image=image,
mask=mask_pil,
)
def invoke(self, context: InvocationContext) -> FaceMaskOutput:
image = context.services.images.get_pil_image(self.image.image_name)
result = self.facemask(context=context, image=image)
image_dto = context.services.images.create(
image=result["image"],
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=self.workflow,
)
mask_dto = context.services.images.create(
image=result["mask"],
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.MASK,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
output = FaceMaskOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
mask=ImageField(image_name=mask_dto.image_name),
)
return output
@invocation(
"face_identifier", title="FaceIdentifier", tags=["image", "face", "identifier"], category="image", version="1.0.2"
)
class FaceIdentifierInvocation(BaseInvocation):
"""Outputs an image with detected face IDs printed on each face. For use with other FaceTools."""
image: ImageField = InputField(description="Image to face detect")
minimum_confidence: float = InputField(
default=0.5, description="Minimum confidence for face detection (lower if detection is failing)"
)
chunk: bool = InputField(
default=False,
description="Whether to bypass full image face detection and default to image chunking. Chunking will occur if no faces are found in the full image.",
)
def faceidentifier(self, context: InvocationContext, image: ImageType) -> ImageType:
image = image.copy()
all_faces = get_faces_list(
context=context,
image=image,
should_chunk=self.chunk,
minimum_confidence=self.minimum_confidence,
x_offset=0,
y_offset=0,
draw_mesh=False,
)
# Note - font may be found either in the repo if running an editable install, or in the venv if running a package install
font_path = [x for x in [Path(y, "inter/Inter-Regular.ttf") for y in font_assets.__path__] if x.exists()]
font = ImageFont.truetype(font_path[0].as_posix(), FONT_SIZE)
# Paste face IDs on the output image
draw = ImageDraw.Draw(image)
for face in all_faces:
x_coord = face["x_center"]
y_coord = face["y_center"]
text = str(face["face_id"])
# get bbox of the text so we can center the id on the face
_, _, bbox_w, bbox_h = draw.textbbox(xy=(0, 0), text=text, font=font, stroke_width=FONT_STROKE_WIDTH)
x = x_coord - bbox_w / 2
y = y_coord - bbox_h / 2
draw.text(
xy=(x, y),
text=str(text),
fill=(255, 255, 255, 255),
font=font,
stroke_width=FONT_STROKE_WIDTH,
stroke_fill=(0, 0, 0, 255),
)
# Create an RGBA image with transparency
image = image.convert("RGBA")
return image
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
result_image = self.faceidentifier(context=context, image=image)
image_dto = context.services.images.create(
image=result_image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=self.workflow,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)

View File

@ -8,12 +8,12 @@ import numpy
from PIL import Image, ImageChops, ImageFilter, ImageOps from PIL import Image, ImageChops, ImageFilter, ImageOps
from invokeai.app.invocations.metadata import CoreMetadata from invokeai.app.invocations.metadata import CoreMetadata
from invokeai.app.invocations.primitives import BoardField, ColorField, ImageField, ImageOutput from invokeai.app.invocations.primitives import ColorField, ImageField, ImageOutput
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
from invokeai.backend.image_util.safety_checker import SafetyChecker from invokeai.backend.image_util.safety_checker import SafetyChecker
from .baseinvocation import BaseInvocation, FieldDescriptions, Input, InputField, InvocationContext, invocation from ..models.image import ImageCategory, ResourceOrigin
from .baseinvocation import BaseInvocation, FieldDescriptions, InputField, InvocationContext, invocation
@invocation("show_image", title="Show Image", tags=["image"], category="image", version="1.0.0") @invocation("show_image", title="Show Image", tags=["image"], category="image", version="1.0.0")
@ -36,13 +36,7 @@ class ShowImageInvocation(BaseInvocation):
) )
@invocation( @invocation("blank_image", title="Blank Image", tags=["image"], category="image", version="1.0.0")
"blank_image",
title="Blank Image",
tags=["image"],
category="image",
version="1.0.0",
)
class BlankImageInvocation(BaseInvocation): class BlankImageInvocation(BaseInvocation):
"""Creates a blank image and forwards it to the pipeline""" """Creates a blank image and forwards it to the pipeline"""
@ -71,13 +65,7 @@ class BlankImageInvocation(BaseInvocation):
) )
@invocation( @invocation("img_crop", title="Crop Image", tags=["image", "crop"], category="image", version="1.0.0")
"img_crop",
title="Crop Image",
tags=["image", "crop"],
category="image",
version="1.0.0",
)
class ImageCropInvocation(BaseInvocation): class ImageCropInvocation(BaseInvocation):
"""Crops an image to a specified box. The box can be outside of the image.""" """Crops an image to a specified box. The box can be outside of the image."""
@ -110,13 +98,7 @@ class ImageCropInvocation(BaseInvocation):
) )
@invocation( @invocation("img_paste", title="Paste Image", tags=["image", "paste"], category="image", version="1.0.0")
"img_paste",
title="Paste Image",
tags=["image", "paste"],
category="image",
version="1.0.1",
)
class ImagePasteInvocation(BaseInvocation): class ImagePasteInvocation(BaseInvocation):
"""Pastes an image into another image.""" """Pastes an image into another image."""
@ -128,7 +110,6 @@ class ImagePasteInvocation(BaseInvocation):
) )
x: int = InputField(default=0, description="The left x coordinate at which to paste the image") x: int = InputField(default=0, description="The left x coordinate at which to paste the image")
y: int = InputField(default=0, description="The top y coordinate at which to paste the image") y: int = InputField(default=0, description="The top y coordinate at which to paste the image")
crop: bool = InputField(default=False, description="Crop to base image dimensions")
def invoke(self, context: InvocationContext) -> ImageOutput: def invoke(self, context: InvocationContext) -> ImageOutput:
base_image = context.services.images.get_pil_image(self.base_image.image_name) base_image = context.services.images.get_pil_image(self.base_image.image_name)
@ -148,10 +129,6 @@ class ImagePasteInvocation(BaseInvocation):
new_image.paste(base_image, (abs(min_x), abs(min_y))) new_image.paste(base_image, (abs(min_x), abs(min_y)))
new_image.paste(image, (max(0, self.x), max(0, self.y)), mask=mask) new_image.paste(image, (max(0, self.x), max(0, self.y)), mask=mask)
if self.crop:
base_w, base_h = base_image.size
new_image = new_image.crop((abs(min_x), abs(min_y), abs(min_x) + base_w, abs(min_y) + base_h))
image_dto = context.services.images.create( image_dto = context.services.images.create(
image=new_image, image=new_image,
image_origin=ResourceOrigin.INTERNAL, image_origin=ResourceOrigin.INTERNAL,
@ -169,13 +146,7 @@ class ImagePasteInvocation(BaseInvocation):
) )
@invocation( @invocation("tomask", title="Mask from Alpha", tags=["image", "mask"], category="image", version="1.0.0")
"tomask",
title="Mask from Alpha",
tags=["image", "mask"],
category="image",
version="1.0.0",
)
class MaskFromAlphaInvocation(BaseInvocation): class MaskFromAlphaInvocation(BaseInvocation):
"""Extracts the alpha channel of an image as a mask.""" """Extracts the alpha channel of an image as a mask."""
@ -206,13 +177,7 @@ class MaskFromAlphaInvocation(BaseInvocation):
) )
@invocation( @invocation("img_mul", title="Multiply Images", tags=["image", "multiply"], category="image", version="1.0.0")
"img_mul",
title="Multiply Images",
tags=["image", "multiply"],
category="image",
version="1.0.0",
)
class ImageMultiplyInvocation(BaseInvocation): class ImageMultiplyInvocation(BaseInvocation):
"""Multiplies two images together using `PIL.ImageChops.multiply()`.""" """Multiplies two images together using `PIL.ImageChops.multiply()`."""
@ -245,13 +210,7 @@ class ImageMultiplyInvocation(BaseInvocation):
IMAGE_CHANNELS = Literal["A", "R", "G", "B"] IMAGE_CHANNELS = Literal["A", "R", "G", "B"]
@invocation( @invocation("img_chan", title="Extract Image Channel", tags=["image", "channel"], category="image", version="1.0.0")
"img_chan",
title="Extract Image Channel",
tags=["image", "channel"],
category="image",
version="1.0.0",
)
class ImageChannelInvocation(BaseInvocation): class ImageChannelInvocation(BaseInvocation):
"""Gets a channel from an image.""" """Gets a channel from an image."""
@ -283,13 +242,7 @@ class ImageChannelInvocation(BaseInvocation):
IMAGE_MODES = Literal["L", "RGB", "RGBA", "CMYK", "YCbCr", "LAB", "HSV", "I", "F"] IMAGE_MODES = Literal["L", "RGB", "RGBA", "CMYK", "YCbCr", "LAB", "HSV", "I", "F"]
@invocation( @invocation("img_conv", title="Convert Image Mode", tags=["image", "convert"], category="image", version="1.0.0")
"img_conv",
title="Convert Image Mode",
tags=["image", "convert"],
category="image",
version="1.0.0",
)
class ImageConvertInvocation(BaseInvocation): class ImageConvertInvocation(BaseInvocation):
"""Converts an image to a different mode.""" """Converts an image to a different mode."""
@ -318,13 +271,7 @@ class ImageConvertInvocation(BaseInvocation):
) )
@invocation( @invocation("img_blur", title="Blur Image", tags=["image", "blur"], category="image", version="1.0.0")
"img_blur",
title="Blur Image",
tags=["image", "blur"],
category="image",
version="1.0.0",
)
class ImageBlurInvocation(BaseInvocation): class ImageBlurInvocation(BaseInvocation):
"""Blurs an image""" """Blurs an image"""
@ -378,26 +325,20 @@ PIL_RESAMPLING_MAP = {
} }
@invocation( @invocation("img_resize", title="Resize Image", tags=["image", "resize"], category="image", version="1.0.0")
"img_resize",
title="Resize Image",
tags=["image", "resize"],
category="image",
version="1.0.0",
)
class ImageResizeInvocation(BaseInvocation): class ImageResizeInvocation(BaseInvocation):
"""Resizes an image to specific dimensions""" """Resizes an image to specific dimensions"""
image: ImageField = InputField(description="The image to resize") image: ImageField = InputField(description="The image to resize")
width: int = InputField(default=512, gt=0, description="The width to resize to (px)") width: int = InputField(default=512, ge=64, multiple_of=8, description="The width to resize to (px)")
height: int = InputField(default=512, gt=0, description="The height to resize to (px)") height: int = InputField(default=512, ge=64, multiple_of=8, description="The height to resize to (px)")
resample_mode: PIL_RESAMPLING_MODES = InputField(default="bicubic", description="The resampling mode") resample_mode: PIL_RESAMPLING_MODES = InputField(default="bicubic", description="The resampling mode")
metadata: Optional[CoreMetadata] = InputField( metadata: Optional[CoreMetadata] = InputField(
default=None, description=FieldDescriptions.core_metadata, ui_hidden=True default=None, description=FieldDescriptions.core_metadata, ui_hidden=True
) )
def invoke(self, context: InvocationContext) -> ImageOutput: def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.get_image(self.image.image_name) image = context.services.images.get_pil_image(self.image.image_name)
resample_mode = PIL_RESAMPLING_MAP[self.resample_mode] resample_mode = PIL_RESAMPLING_MAP[self.resample_mode]
@ -406,22 +347,25 @@ class ImageResizeInvocation(BaseInvocation):
resample=resample_mode, resample=resample_mode,
) )
image_name = context.save_image(image=resize_image) image_dto = context.services.images.create(
image=resize_image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.dict() if self.metadata else None,
workflow=self.workflow,
)
return ImageOutput( return ImageOutput(
image=ImageField(image_name=image_name), image=ImageField(image_name=image_dto.image_name),
width=resize_image.width, width=image_dto.width,
height=resize_image.height, height=image_dto.height,
) )
@invocation( @invocation("img_scale", title="Scale Image", tags=["image", "scale"], category="image", version="1.0.0")
"img_scale",
title="Scale Image",
tags=["image", "scale"],
category="image",
version="1.0.0",
)
class ImageScaleInvocation(BaseInvocation): class ImageScaleInvocation(BaseInvocation):
"""Scales an image by a factor""" """Scales an image by a factor"""
@ -462,13 +406,7 @@ class ImageScaleInvocation(BaseInvocation):
) )
@invocation( @invocation("img_lerp", title="Lerp Image", tags=["image", "lerp"], category="image", version="1.0.0")
"img_lerp",
title="Lerp Image",
tags=["image", "lerp"],
category="image",
version="1.0.0",
)
class ImageLerpInvocation(BaseInvocation): class ImageLerpInvocation(BaseInvocation):
"""Linear interpolation of all pixels of an image""" """Linear interpolation of all pixels of an image"""
@ -501,13 +439,7 @@ class ImageLerpInvocation(BaseInvocation):
) )
@invocation( @invocation("img_ilerp", title="Inverse Lerp Image", tags=["image", "ilerp"], category="image", version="1.0.0")
"img_ilerp",
title="Inverse Lerp Image",
tags=["image", "ilerp"],
category="image",
version="1.0.0",
)
class ImageInverseLerpInvocation(BaseInvocation): class ImageInverseLerpInvocation(BaseInvocation):
"""Inverse linear interpolation of all pixels of an image""" """Inverse linear interpolation of all pixels of an image"""
@ -519,7 +451,7 @@ class ImageInverseLerpInvocation(BaseInvocation):
image = context.services.images.get_pil_image(self.image.image_name) image = context.services.images.get_pil_image(self.image.image_name)
image_arr = numpy.asarray(image, dtype=numpy.float32) image_arr = numpy.asarray(image, dtype=numpy.float32)
image_arr = numpy.minimum(numpy.maximum(image_arr - self.min, 0) / float(self.max - self.min), 1) * 255 # type: ignore [assignment] image_arr = numpy.minimum(numpy.maximum(image_arr - self.min, 0) / float(self.max - self.min), 1) * 255
ilerp_image = Image.fromarray(numpy.uint8(image_arr)) ilerp_image = Image.fromarray(numpy.uint8(image_arr))
@ -540,13 +472,7 @@ class ImageInverseLerpInvocation(BaseInvocation):
) )
@invocation( @invocation("img_nsfw", title="Blur NSFW Image", tags=["image", "nsfw"], category="image", version="1.0.0")
"img_nsfw",
title="Blur NSFW Image",
tags=["image", "nsfw"],
category="image",
version="1.0.0",
)
class ImageNSFWBlurInvocation(BaseInvocation): class ImageNSFWBlurInvocation(BaseInvocation):
"""Add blur to NSFW-flagged images""" """Add blur to NSFW-flagged images"""
@ -574,7 +500,7 @@ class ImageNSFWBlurInvocation(BaseInvocation):
node_id=self.id, node_id=self.id,
session_id=context.graph_execution_state_id, session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate, is_intermediate=self.is_intermediate,
metadata=self.metadata.model_dump() if self.metadata else None, metadata=self.metadata.dict() if self.metadata else None,
workflow=self.workflow, workflow=self.workflow,
) )
@ -584,7 +510,7 @@ class ImageNSFWBlurInvocation(BaseInvocation):
height=image_dto.height, height=image_dto.height,
) )
def _get_caution_img(self) -> Image.Image: def _get_caution_img(self) -> Image:
import invokeai.app.assets.images as image_assets import invokeai.app.assets.images as image_assets
caution = Image.open(Path(image_assets.__path__[0]) / "caution.png") caution = Image.open(Path(image_assets.__path__[0]) / "caution.png")
@ -592,11 +518,7 @@ class ImageNSFWBlurInvocation(BaseInvocation):
@invocation( @invocation(
"img_watermark", "img_watermark", title="Add Invisible Watermark", tags=["image", "watermark"], category="image", version="1.0.0"
title="Add Invisible Watermark",
tags=["image", "watermark"],
category="image",
version="1.0.0",
) )
class ImageWatermarkInvocation(BaseInvocation): class ImageWatermarkInvocation(BaseInvocation):
"""Add an invisible watermark to an image""" """Add an invisible watermark to an image"""
@ -617,7 +539,7 @@ class ImageWatermarkInvocation(BaseInvocation):
node_id=self.id, node_id=self.id,
session_id=context.graph_execution_state_id, session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate, is_intermediate=self.is_intermediate,
metadata=self.metadata.model_dump() if self.metadata else None, metadata=self.metadata.dict() if self.metadata else None,
workflow=self.workflow, workflow=self.workflow,
) )
@ -628,13 +550,7 @@ class ImageWatermarkInvocation(BaseInvocation):
) )
@invocation( @invocation("mask_edge", title="Mask Edge", tags=["image", "mask", "inpaint"], category="image", version="1.0.0")
"mask_edge",
title="Mask Edge",
tags=["image", "mask", "inpaint"],
category="image",
version="1.0.0",
)
class MaskEdgeInvocation(BaseInvocation): class MaskEdgeInvocation(BaseInvocation):
"""Applies an edge mask to an image""" """Applies an edge mask to an image"""
@ -680,11 +596,7 @@ class MaskEdgeInvocation(BaseInvocation):
@invocation( @invocation(
"mask_combine", "mask_combine", title="Combine Masks", tags=["image", "mask", "multiply"], category="image", version="1.0.0"
title="Combine Masks",
tags=["image", "mask", "multiply"],
category="image",
version="1.0.0",
) )
class MaskCombineInvocation(BaseInvocation): class MaskCombineInvocation(BaseInvocation):
"""Combine two masks together by multiplying them using `PIL.ImageChops.multiply()`.""" """Combine two masks together by multiplying them using `PIL.ImageChops.multiply()`."""
@ -715,13 +627,7 @@ class MaskCombineInvocation(BaseInvocation):
) )
@invocation( @invocation("color_correct", title="Color Correct", tags=["image", "color"], category="image", version="1.0.0")
"color_correct",
title="Color Correct",
tags=["image", "color"],
category="image",
version="1.0.0",
)
class ColorCorrectInvocation(BaseInvocation): class ColorCorrectInvocation(BaseInvocation):
""" """
Shifts the colors of a target image to match the reference image, optionally Shifts the colors of a target image to match the reference image, optionally
@ -831,13 +737,7 @@ class ColorCorrectInvocation(BaseInvocation):
) )
@invocation( @invocation("img_hue_adjust", title="Adjust Image Hue", tags=["image", "hue"], category="image", version="1.0.0")
"img_hue_adjust",
title="Adjust Image Hue",
tags=["image", "hue"],
category="image",
version="1.0.0",
)
class ImageHueAdjustmentInvocation(BaseInvocation): class ImageHueAdjustmentInvocation(BaseInvocation):
"""Adjusts the Hue of an image.""" """Adjusts the Hue of an image."""
@ -1060,44 +960,3 @@ class ImageChannelMultiplyInvocation(BaseInvocation):
width=image_dto.width, width=image_dto.width,
height=image_dto.height, height=image_dto.height,
) )
@invocation(
"save_image",
title="Save Image",
tags=["primitives", "image"],
category="primitives",
version="1.0.1",
use_cache=False,
)
class SaveImageInvocation(BaseInvocation):
"""Saves an image. Unlike an image primitive, this invocation stores a copy of the image."""
image: ImageField = InputField(description=FieldDescriptions.image)
board: Optional[BoardField] = InputField(default=None, description=FieldDescriptions.board, input=Input.Direct)
metadata: Optional[CoreMetadata] = InputField(
default=None,
description=FieldDescriptions.core_metadata,
ui_hidden=True,
)
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
image_dto = context.services.images.create(
image=image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
board_id=self.board.board_id if self.board else None,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata.model_dump() if self.metadata else None,
workflow=self.workflow,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)

View File

@ -7,12 +7,12 @@ import numpy as np
 from PIL import Image, ImageOps
 from invokeai.app.invocations.primitives import ColorField, ImageField, ImageOutput
-from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
 from invokeai.app.util.misc import SEED_MAX, get_random_seed
 from invokeai.backend.image_util.cv2_inpaint import cv2_inpaint
 from invokeai.backend.image_util.lama import LaMA
 from invokeai.backend.image_util.patchmatch import PatchMatch
+from ..models.image import ImageCategory, ResourceOrigin
 from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation
 from .image import PIL_RESAMPLING_MAP, PIL_RESAMPLING_MODES
@ -269,7 +269,7 @@ class LaMaInfillInvocation(BaseInvocation):
 )
-@invocation("infill_cv2", title="CV2 Infill", tags=["image", "inpaint"], category="inpaint", version="1.0.0")
+@invocation("infill_cv2", title="CV2 Infill", tags=["image", "inpaint"], category="inpaint")
 class CV2InfillInvocation(BaseInvocation):
 """Infills transparent areas of an image using OpenCV Inpainting"""

View File

@ -1,107 +0,0 @@
import os
from builtins import float
from typing import List, Union
from pydantic import BaseModel, ConfigDict, Field
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
FieldDescriptions,
Input,
InputField,
InvocationContext,
OutputField,
UIType,
invocation,
invocation_output,
)
from invokeai.app.invocations.primitives import ImageField
from invokeai.backend.model_management.models.base import BaseModelType, ModelType
from invokeai.backend.model_management.models.ip_adapter import get_ip_adapter_image_encoder_model_id
class IPAdapterModelField(BaseModel):
model_name: str = Field(description="Name of the IP-Adapter model")
base_model: BaseModelType = Field(description="Base model")
model_config = ConfigDict(protected_namespaces=())
class CLIPVisionModelField(BaseModel):
model_name: str = Field(description="Name of the CLIP Vision image encoder model")
base_model: BaseModelType = Field(description="Base model (usually 'Any')")
model_config = ConfigDict(protected_namespaces=())
class IPAdapterField(BaseModel):
image: ImageField = Field(description="The IP-Adapter image prompt.")
ip_adapter_model: IPAdapterModelField = Field(description="The IP-Adapter model to use.")
image_encoder_model: CLIPVisionModelField = Field(description="The name of the CLIP image encoder model.")
weight: Union[float, List[float]] = Field(default=1, description="The weight given to the ControlNet")
# weight: float = Field(default=1.0, ge=0, description="The weight of the IP-Adapter.")
begin_step_percent: float = Field(
default=0, ge=0, le=1, description="When the IP-Adapter is first applied (% of total steps)"
)
end_step_percent: float = Field(
default=1, ge=0, le=1, description="When the IP-Adapter is last applied (% of total steps)"
)
@invocation_output("ip_adapter_output")
class IPAdapterOutput(BaseInvocationOutput):
# Outputs
ip_adapter: IPAdapterField = OutputField(description=FieldDescriptions.ip_adapter, title="IP-Adapter")
@invocation("ip_adapter", title="IP-Adapter", tags=["ip_adapter", "control"], category="ip_adapter", version="1.0.0")
class IPAdapterInvocation(BaseInvocation):
"""Collects IP-Adapter info to pass to other nodes."""
# Inputs
image: ImageField = InputField(description="The IP-Adapter image prompt.")
ip_adapter_model: IPAdapterModelField = InputField(
description="The IP-Adapter model.", title="IP-Adapter Model", input=Input.Direct, ui_order=-1
)
# weight: float = InputField(default=1.0, description="The weight of the IP-Adapter.", ui_type=UIType.Float)
weight: Union[float, List[float]] = InputField(
default=1, ge=0, description="The weight given to the IP-Adapter", ui_type=UIType.Float, title="Weight"
)
begin_step_percent: float = InputField(
default=0, ge=-1, le=2, description="When the IP-Adapter is first applied (% of total steps)"
)
end_step_percent: float = InputField(
default=1, ge=0, le=1, description="When the IP-Adapter is last applied (% of total steps)"
)
def invoke(self, context: InvocationContext) -> IPAdapterOutput:
# Lookup the CLIP Vision encoder that is intended to be used with the IP-Adapter model.
ip_adapter_info = context.services.model_manager.model_info(
self.ip_adapter_model.model_name, self.ip_adapter_model.base_model, ModelType.IPAdapter
)
# HACK(ryand): This is bad for a couple of reasons: 1) we are bypassing the model manager to read the model
# directly, and 2) we are reading from disk every time this invocation is called without caching the result.
# A better solution would be to store the image encoder model reference in the IP-Adapter model info, but this
# is currently messy due to differences between how the model info is generated when installing a model from
# disk vs. downloading the model.
image_encoder_model_id = get_ip_adapter_image_encoder_model_id(
os.path.join(context.services.configuration.get_config().models_path, ip_adapter_info["path"])
)
image_encoder_model_name = image_encoder_model_id.split("/")[-1].strip()
image_encoder_model = CLIPVisionModelField(
model_name=image_encoder_model_name,
base_model=BaseModelType.Any,
)
return IPAdapterOutput(
ip_adapter=IPAdapterField(
image=self.image,
ip_adapter_model=self.ip_adapter_model,
image_encoder_model=image_encoder_model,
weight=self.weight,
begin_step_percent=self.begin_step_percent,
end_step_percent=self.end_step_percent,
),
)

View File

@ -1,16 +1,13 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654) # Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654)
from contextlib import ExitStack from contextlib import ExitStack
from functools import singledispatchmethod
from typing import List, Literal, Optional, Union from typing import List, Literal, Optional, Union
import einops import einops
import numpy as np import numpy as np
import torch import torch
import torchvision.transforms as T import torchvision.transforms as T
from diffusers import AutoencoderKL, AutoencoderTiny
from diffusers.image_processor import VaeImageProcessor from diffusers.image_processor import VaeImageProcessor
from diffusers.models.adapter import FullAdapterXL, T2IAdapter
from diffusers.models.attention_processor import ( from diffusers.models.attention_processor import (
AttnProcessor2_0, AttnProcessor2_0,
LoRAAttnProcessor2_0, LoRAAttnProcessor2_0,
@ -19,10 +16,9 @@ from diffusers.models.attention_processor import (
) )
from diffusers.schedulers import DPMSolverSDEScheduler from diffusers.schedulers import DPMSolverSDEScheduler
from diffusers.schedulers import SchedulerMixin as Scheduler from diffusers.schedulers import SchedulerMixin as Scheduler
from pydantic import field_validator from pydantic import validator
from torchvision.transforms.functional import resize as tv_resize from torchvision.transforms.functional import resize as tv_resize
from invokeai.app.invocations.ip_adapter import IPAdapterField
from invokeai.app.invocations.metadata import CoreMetadata from invokeai.app.invocations.metadata import CoreMetadata
from invokeai.app.invocations.primitives import ( from invokeai.app.invocations.primitives import (
DenoiseMaskField, DenoiseMaskField,
@ -33,28 +29,24 @@ from invokeai.app.invocations.primitives import (
LatentsOutput, LatentsOutput,
build_latents_output, build_latents_output,
) )
from invokeai.app.invocations.t2i_adapter import T2IAdapterField
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.app.util.controlnet_utils import prepare_control_image from invokeai.app.util.controlnet_utils import prepare_control_image
from invokeai.app.util.step_callback import stable_diffusion_step_callback from invokeai.app.util.step_callback import stable_diffusion_step_callback
from invokeai.backend.ip_adapter.ip_adapter import IPAdapter, IPAdapterPlus
from invokeai.backend.model_management.models import ModelType, SilenceWarnings from invokeai.backend.model_management.models import ModelType, SilenceWarnings
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningData, IPAdapterConditioningInfo
from ...backend.model_management.lora import ModelPatcher from ...backend.model_management.lora import ModelPatcher
from ...backend.model_management.models import BaseModelType
from ...backend.model_management.seamless import set_seamless from ...backend.model_management.seamless import set_seamless
from ...backend.model_management.models import BaseModelType
from ...backend.stable_diffusion import PipelineIntermediateState from ...backend.stable_diffusion import PipelineIntermediateState
from ...backend.stable_diffusion.diffusers_pipeline import ( from ...backend.stable_diffusion.diffusers_pipeline import (
ConditioningData,
ControlNetData, ControlNetData,
IPAdapterData,
StableDiffusionGeneratorPipeline, StableDiffusionGeneratorPipeline,
T2IAdapterData,
image_resized_to_grid_as_tensor, image_resized_to_grid_as_tensor,
) )
from ...backend.stable_diffusion.diffusion.shared_invokeai_diffusion import PostprocessingSettings from ...backend.stable_diffusion.diffusion.shared_invokeai_diffusion import PostprocessingSettings
from ...backend.stable_diffusion.schedulers import SCHEDULER_MAP from ...backend.stable_diffusion.schedulers import SCHEDULER_MAP
from ...backend.util.devices import choose_precision, choose_torch_device from ...backend.util.devices import choose_precision, choose_torch_device
from ..models.image import ImageCategory, ResourceOrigin
from .baseinvocation import ( from .baseinvocation import (
BaseInvocation, BaseInvocation,
BaseInvocationOutput, BaseInvocationOutput,
@ -71,11 +63,9 @@ from .compel import ConditioningField
from .controlnet_image_processors import ControlField from .controlnet_image_processors import ControlField
from .model import ModelInfo, UNetField, VaeField from .model import ModelInfo, UNetField, VaeField
if choose_torch_device() == torch.device("mps"):
from torch import mps
DEFAULT_PRECISION = choose_precision(choose_torch_device()) DEFAULT_PRECISION = choose_precision(choose_torch_device())
SAMPLER_NAME_VALUES = Literal[tuple(list(SCHEDULER_MAP.keys()))] SAMPLER_NAME_VALUES = Literal[tuple(list(SCHEDULER_MAP.keys()))]
@ -84,20 +74,12 @@ class SchedulerOutput(BaseInvocationOutput):
scheduler: SAMPLER_NAME_VALUES = OutputField(description=FieldDescriptions.scheduler, ui_type=UIType.Scheduler) scheduler: SAMPLER_NAME_VALUES = OutputField(description=FieldDescriptions.scheduler, ui_type=UIType.Scheduler)
@invocation( @invocation("scheduler", title="Scheduler", tags=["scheduler"], category="latents", version="1.0.0")
"scheduler",
title="Scheduler",
tags=["scheduler"],
category="latents",
version="1.0.0",
)
class SchedulerInvocation(BaseInvocation): class SchedulerInvocation(BaseInvocation):
"""Selects a scheduler.""" """Selects a scheduler."""
scheduler: SAMPLER_NAME_VALUES = InputField( scheduler: SAMPLER_NAME_VALUES = InputField(
default="euler", default="euler", description=FieldDescriptions.scheduler, ui_type=UIType.Scheduler
description=FieldDescriptions.scheduler,
ui_type=UIType.Scheduler,
) )
def invoke(self, context: InvocationContext) -> SchedulerOutput: def invoke(self, context: InvocationContext) -> SchedulerOutput:
@ -105,11 +87,7 @@ class SchedulerInvocation(BaseInvocation):
@invocation( @invocation(
"create_denoise_mask", "create_denoise_mask", title="Create Denoise Mask", tags=["mask", "denoise"], category="latents", version="1.0.0"
title="Create Denoise Mask",
tags=["mask", "denoise"],
category="latents",
version="1.0.0",
) )
class CreateDenoiseMaskInvocation(BaseInvocation): class CreateDenoiseMaskInvocation(BaseInvocation):
"""Creates mask for denoising model run.""" """Creates mask for denoising model run."""
@ -118,11 +96,7 @@ class CreateDenoiseMaskInvocation(BaseInvocation):
image: Optional[ImageField] = InputField(default=None, description="Image which will be masked", ui_order=1) image: Optional[ImageField] = InputField(default=None, description="Image which will be masked", ui_order=1)
mask: ImageField = InputField(description="The mask to use when pasting", ui_order=2) mask: ImageField = InputField(description="The mask to use when pasting", ui_order=2)
tiled: bool = InputField(default=False, description=FieldDescriptions.tiled, ui_order=3) tiled: bool = InputField(default=False, description=FieldDescriptions.tiled, ui_order=3)
fp32: bool = InputField( fp32: bool = InputField(default=DEFAULT_PRECISION == "float32", description=FieldDescriptions.fp32, ui_order=4)
default=DEFAULT_PRECISION == "float32",
description=FieldDescriptions.fp32,
ui_order=4,
)
def prep_mask_tensor(self, mask_image): def prep_mask_tensor(self, mask_image):
if mask_image.mode != "L": if mask_image.mode != "L":
@ -150,7 +124,7 @@ class CreateDenoiseMaskInvocation(BaseInvocation):
if image is not None: if image is not None:
vae_info = context.services.model_manager.get_model( vae_info = context.services.model_manager.get_model(
**self.vae.vae.model_dump(), **self.vae.vae.dict(),
context=context, context=context,
) )
@ -182,8 +156,9 @@ def get_scheduler(
seed: int, seed: int,
) -> Scheduler: ) -> Scheduler:
scheduler_class, scheduler_extra_config = SCHEDULER_MAP.get(scheduler_name, SCHEDULER_MAP["ddim"]) scheduler_class, scheduler_extra_config = SCHEDULER_MAP.get(scheduler_name, SCHEDULER_MAP["ddim"])
orig_scheduler_info = context.get_model( orig_scheduler_info = context.services.model_manager.get_model(
**scheduler_info.model_dump(), **scheduler_info.dict(),
context=context,
) )
with orig_scheduler_info as orig_scheduler: with orig_scheduler_info as orig_scheduler:
scheduler_config = orig_scheduler.config scheduler_config = orig_scheduler.config
@ -213,7 +188,7 @@ def get_scheduler(
title="Denoise Latents", title="Denoise Latents",
tags=["latents", "denoise", "txt2img", "t2i", "t2l", "img2img", "i2i", "l2l"], tags=["latents", "denoise", "txt2img", "t2i", "t2l", "img2img", "i2i", "l2l"],
category="latents", category="latents",
version="1.3.0", version="1.0.0",
) )
class DenoiseLatentsInvocation(BaseInvocation): class DenoiseLatentsInvocation(BaseInvocation):
"""Denoises noisy latents to decodable images""" """Denoises noisy latents to decodable images"""
@ -224,64 +199,29 @@ class DenoiseLatentsInvocation(BaseInvocation):
negative_conditioning: ConditioningField = InputField( negative_conditioning: ConditioningField = InputField(
description=FieldDescriptions.negative_cond, input=Input.Connection, ui_order=1 description=FieldDescriptions.negative_cond, input=Input.Connection, ui_order=1
) )
noise: Optional[LatentsField] = InputField( noise: Optional[LatentsField] = InputField(description=FieldDescriptions.noise, input=Input.Connection, ui_order=3)
default=None,
description=FieldDescriptions.noise,
input=Input.Connection,
ui_order=3,
)
steps: int = InputField(default=10, gt=0, description=FieldDescriptions.steps) steps: int = InputField(default=10, gt=0, description=FieldDescriptions.steps)
cfg_scale: Union[float, List[float]] = InputField( cfg_scale: Union[float, List[float]] = InputField(
default=7.5, ge=1, description=FieldDescriptions.cfg_scale, title="CFG Scale" default=7.5, ge=1, description=FieldDescriptions.cfg_scale, ui_type=UIType.Float, title="CFG Scale"
)
denoising_start: float = InputField(
default=0.0,
ge=0,
le=1,
description=FieldDescriptions.denoising_start,
) )
denoising_start: float = InputField(default=0.0, ge=0, le=1, description=FieldDescriptions.denoising_start)
denoising_end: float = InputField(default=1.0, ge=0, le=1, description=FieldDescriptions.denoising_end) denoising_end: float = InputField(default=1.0, ge=0, le=1, description=FieldDescriptions.denoising_end)
scheduler: SAMPLER_NAME_VALUES = InputField( scheduler: SAMPLER_NAME_VALUES = InputField(
default="euler", default="euler", description=FieldDescriptions.scheduler, ui_type=UIType.Scheduler
description=FieldDescriptions.scheduler,
ui_type=UIType.Scheduler,
) )
unet: UNetField = InputField( unet: UNetField = InputField(description=FieldDescriptions.unet, input=Input.Connection, title="UNet", ui_order=2)
description=FieldDescriptions.unet, control: Union[ControlField, list[ControlField]] = InputField(
input=Input.Connection,
title="UNet",
ui_order=2,
)
control: Optional[Union[ControlField, list[ControlField]]] = InputField(
default=None, default=None,
description=FieldDescriptions.control,
input=Input.Connection, input=Input.Connection,
ui_order=5, ui_order=5,
) )
ip_adapter: Optional[Union[IPAdapterField, list[IPAdapterField]]] = InputField( latents: Optional[LatentsField] = InputField(description=FieldDescriptions.latents, input=Input.Connection)
description=FieldDescriptions.ip_adapter,
title="IP-Adapter",
default=None,
input=Input.Connection,
ui_order=6,
)
t2i_adapter: Optional[Union[T2IAdapterField, list[T2IAdapterField]]] = InputField(
description=FieldDescriptions.t2i_adapter,
title="T2I-Adapter",
default=None,
input=Input.Connection,
ui_order=7,
)
latents: Optional[LatentsField] = InputField(
default=None, description=FieldDescriptions.latents, input=Input.Connection
)
denoise_mask: Optional[DenoiseMaskField] = InputField( denoise_mask: Optional[DenoiseMaskField] = InputField(
default=None, default=None, description=FieldDescriptions.mask, input=Input.Connection, ui_order=6
description=FieldDescriptions.mask,
input=Input.Connection,
ui_order=8,
) )
@field_validator("cfg_scale") @validator("cfg_scale")
def ge_one(cls, v): def ge_one(cls, v):
"""validate that all cfg_scale values are >= 1""" """validate that all cfg_scale values are >= 1"""
if isinstance(v, list): if isinstance(v, list):
@ -297,12 +237,15 @@ class DenoiseLatentsInvocation(BaseInvocation):
def dispatch_progress( def dispatch_progress(
self, self,
context: InvocationContext, context: InvocationContext,
source_node_id: str,
intermediate_state: PipelineIntermediateState, intermediate_state: PipelineIntermediateState,
base_model: BaseModelType, base_model: BaseModelType,
) -> None: ) -> None:
stable_diffusion_step_callback( stable_diffusion_step_callback(
context=context, context=context,
intermediate_state=intermediate_state, intermediate_state=intermediate_state,
node=self.dict(),
source_node_id=source_node_id,
base_model=base_model, base_model=base_model,
) )
@ -313,11 +256,11 @@ class DenoiseLatentsInvocation(BaseInvocation):
unet, unet,
seed, seed,
) -> ConditioningData: ) -> ConditioningData:
positive_cond_data = context.get_conditioning(self.positive_conditioning.conditioning_name) positive_cond_data = context.services.latents.get(self.positive_conditioning.conditioning_name)
c = positive_cond_data.conditionings[0].to(device=unet.device, dtype=unet.dtype) c = positive_cond_data.conditionings[0].to(device=unet.device, dtype=unet.dtype)
extra_conditioning_info = c.extra_conditioning extra_conditioning_info = c.extra_conditioning
negative_cond_data = context.get_conditioning(self.negative_conditioning.conditioning_name) negative_cond_data = context.services.latents.get(self.negative_conditioning.conditioning_name)
uc = negative_cond_data.conditionings[0].to(device=unet.device, dtype=unet.dtype) uc = negative_cond_data.conditionings[0].to(device=unet.device, dtype=unet.dtype)
conditioning_data = ConditioningData( conditioning_data = ConditioningData(
@ -377,6 +320,8 @@ class DenoiseLatentsInvocation(BaseInvocation):
def prep_control_data( def prep_control_data(
self, self,
context: InvocationContext, context: InvocationContext,
# really only need model for dtype and device
model: StableDiffusionGeneratorPipeline,
control_input: Union[ControlField, List[ControlField]], control_input: Union[ControlField, List[ControlField]],
latents_shape: List[int], latents_shape: List[int],
exit_stack: ExitStack, exit_stack: ExitStack,
@@ -396,202 +341,57 @@ class DenoiseLatentsInvocation(BaseInvocation):
        else:
            control_list = None
        if control_list is None:
-            return None
-        # After above handling, any control that is not None should now be of type list[ControlField].
-
-        # FIXME: add checks to skip entry if model or image is None
-        # and if weight is None, populate with default 1.0?
-        controlnet_data = []
-        for control_info in control_list:
-            control_model = exit_stack.enter_context(
-                context.get_model(
-                    model_name=control_info.control_model.model_name,
-                    model_type=ModelType.ControlNet,
-                    base_model=control_info.control_model.base_model,
-                )
-            )
-
-            # control_models.append(control_model)
-            control_image_field = control_info.image
-            input_image = context.get_image(control_image_field.image_name)
-            # self.image.image_type, self.image.image_name
-            # FIXME: still need to test with different widths, heights, devices, dtypes
-            # and add in batch_size, num_images_per_prompt?
-            # and do real check for classifier_free_guidance?
-            # prepare_control_image should return torch.Tensor of shape(batch_size, 3, height, width)
-            control_image = prepare_control_image(
-                image=input_image,
-                do_classifier_free_guidance=do_classifier_free_guidance,
-                width=control_width_resize,
-                height=control_height_resize,
-                # batch_size=batch_size * num_images_per_prompt,
-                # num_images_per_prompt=num_images_per_prompt,
-                device=control_model.device,
-                dtype=control_model.dtype,
-                control_mode=control_info.control_mode,
-                resize_mode=control_info.resize_mode,
-            )
-            control_item = ControlNetData(
-                model=control_model,  # model object
-                image_tensor=control_image,
-                weight=control_info.control_weight,
-                begin_step_percent=control_info.begin_step_percent,
-                end_step_percent=control_info.end_step_percent,
-                control_mode=control_info.control_mode,
-                # any resizing needed should currently be happening in prepare_control_image(),
-                # but adding resize_mode to ControlNetData in case needed in the future
-                resize_mode=control_info.resize_mode,
-            )
-            controlnet_data.append(control_item)
-        # MultiControlNetModel has been refactored out, just need list[ControlNetData]
-        return controlnet_data
-
-    def prep_ip_adapter_data(
-        self,
-        context: InvocationContext,
-        ip_adapter: Optional[Union[IPAdapterField, list[IPAdapterField]]],
-        conditioning_data: ConditioningData,
-        exit_stack: ExitStack,
-    ) -> Optional[list[IPAdapterData]]:
-        """If IP-Adapter is enabled, then this function loads the requisite models, and adds the image prompt embeddings
-        to the `conditioning_data` (in-place).
-        """
-        if ip_adapter is None:
-            return None
-
-        # ip_adapter could be a list or a single IPAdapterField. Normalize to a list here.
-        if not isinstance(ip_adapter, list):
-            ip_adapter = [ip_adapter]
-
-        if len(ip_adapter) == 0:
-            return None
-
-        ip_adapter_data_list = []
-        conditioning_data.ip_adapter_conditioning = []
-        for single_ip_adapter in ip_adapter:
-            ip_adapter_model: Union[IPAdapter, IPAdapterPlus] = exit_stack.enter_context(
-                context.get_model(
-                    model_name=single_ip_adapter.ip_adapter_model.model_name,
-                    model_type=ModelType.IPAdapter,
-                    base_model=single_ip_adapter.ip_adapter_model.base_model,
-                )
-            )
-
-            image_encoder_model_info = context.get_model(
-                model_name=single_ip_adapter.image_encoder_model.model_name,
-                model_type=ModelType.CLIPVision,
-                base_model=single_ip_adapter.image_encoder_model.base_model,
-            )
-
-            input_image = context.get_image(single_ip_adapter.image.image_name)
-
-            # TODO(ryand): With some effort, the step of running the CLIP Vision encoder could be done before any other
-            # models are needed in memory. This would help to reduce peak memory utilization in low-memory environments.
-            with image_encoder_model_info as image_encoder_model:
-                # Get image embeddings from CLIP and ImageProjModel.
-                (
-                    image_prompt_embeds,
-                    uncond_image_prompt_embeds,
-                ) = ip_adapter_model.get_image_embeds(input_image, image_encoder_model)
-                conditioning_data.ip_adapter_conditioning.append(
-                    IPAdapterConditioningInfo(image_prompt_embeds, uncond_image_prompt_embeds)
-                )
-
-            ip_adapter_data_list.append(
-                IPAdapterData(
-                    ip_adapter_model=ip_adapter_model,
-                    weight=single_ip_adapter.weight,
-                    begin_step_percent=single_ip_adapter.begin_step_percent,
-                    end_step_percent=single_ip_adapter.end_step_percent,
-                )
-            )
-
-        return ip_adapter_data_list
-
-    def run_t2i_adapters(
-        self,
-        context: InvocationContext,
-        t2i_adapter: Optional[Union[T2IAdapterField, list[T2IAdapterField]]],
-        latents_shape: list[int],
-        do_classifier_free_guidance: bool,
-    ) -> Optional[list[T2IAdapterData]]:
-        if t2i_adapter is None:
-            return None
-
-        # Handle the possibility that t2i_adapter could be a list or a single T2IAdapterField.
-        if isinstance(t2i_adapter, T2IAdapterField):
-            t2i_adapter = [t2i_adapter]
-
-        if len(t2i_adapter) == 0:
-            return None
-
-        t2i_adapter_data = []
-        for t2i_adapter_field in t2i_adapter:
-            t2i_adapter_model_info = context.get_model(
-                model_name=t2i_adapter_field.t2i_adapter_model.model_name,
-                model_type=ModelType.T2IAdapter,
-                base_model=t2i_adapter_field.t2i_adapter_model.base_model,
-            )
-            image = context.get_image(t2i_adapter_field.image.image_name)
-
-            # The max_unet_downscale is the maximum amount that the UNet model downscales the latent image internally.
-            if t2i_adapter_field.t2i_adapter_model.base_model == BaseModelType.StableDiffusion1:
-                max_unet_downscale = 8
-            elif t2i_adapter_field.t2i_adapter_model.base_model == BaseModelType.StableDiffusionXL:
-                max_unet_downscale = 4
-            else:
-                raise ValueError(
-                    f"Unexpected T2I-Adapter base model type: '{t2i_adapter_field.t2i_adapter_model.base_model}'."
-                )
-
-            t2i_adapter_model: T2IAdapter
-            with t2i_adapter_model_info as t2i_adapter_model:
-                total_downscale_factor = t2i_adapter_model.total_downscale_factor
-                if isinstance(t2i_adapter_model.adapter, FullAdapterXL):
-                    # HACK(ryand): Work around a bug in FullAdapterXL. This is being addressed upstream in diffusers by
-                    # this PR: https://github.com/huggingface/diffusers/pull/5134.
-                    total_downscale_factor = total_downscale_factor // 2
-
-                # Resize the T2I-Adapter input image.
-                # We select the resize dimensions so that after the T2I-Adapter's total_downscale_factor is applied, the
-                # result will match the latent image's dimensions after max_unet_downscale is applied.
-                t2i_input_height = latents_shape[2] // max_unet_downscale * total_downscale_factor
-                t2i_input_width = latents_shape[3] // max_unet_downscale * total_downscale_factor
-
-                # Note: We have hard-coded `do_classifier_free_guidance=False`. This is because we only want to prepare
-                # a single image. If CFG is enabled, we will duplicate the resultant tensor after applying the
-                # T2I-Adapter model.
-                #
-                # Note: We re-use the `prepare_control_image(...)` from ControlNet for T2I-Adapter, because it has many
-                # of the same requirements (e.g. preserving binary masks during resize).
-                t2i_image = prepare_control_image(
-                    image=image,
-                    do_classifier_free_guidance=False,
-                    width=t2i_input_width,
-                    height=t2i_input_height,
-                    num_channels=t2i_adapter_model.config.in_channels,
-                    device=t2i_adapter_model.device,
-                    dtype=t2i_adapter_model.dtype,
-                    resize_mode=t2i_adapter_field.resize_mode,
-                )
-
-                adapter_state = t2i_adapter_model(t2i_image)
-
-            if do_classifier_free_guidance:
-                for idx, value in enumerate(adapter_state):
-                    adapter_state[idx] = torch.cat([value] * 2, dim=0)
-
-            t2i_adapter_data.append(
-                T2IAdapterData(
-                    adapter_state=adapter_state,
-                    weight=t2i_adapter_field.weight,
-                    begin_step_percent=t2i_adapter_field.begin_step_percent,
-                    end_step_percent=t2i_adapter_field.end_step_percent,
-                )
-            )
-
-        return t2i_adapter_data
+            control_data = None
+            # from above handling, any control that is not None should now be of type list[ControlField]
+        else:
+            # FIXME: add checks to skip entry if model or image is None
+            # and if weight is None, populate with default 1.0?
+            control_data = []
+            control_models = []
+            for control_info in control_list:
+                control_model = exit_stack.enter_context(
+                    context.services.model_manager.get_model(
+                        model_name=control_info.control_model.model_name,
+                        model_type=ModelType.ControlNet,
+                        base_model=control_info.control_model.base_model,
+                        context=context,
+                    )
+                )
+
+                control_models.append(control_model)
+                control_image_field = control_info.image
+                input_image = context.services.images.get_pil_image(control_image_field.image_name)
+                # self.image.image_type, self.image.image_name
+                # FIXME: still need to test with different widths, heights, devices, dtypes
+                # and add in batch_size, num_images_per_prompt?
+                # and do real check for classifier_free_guidance?
+                # prepare_control_image should return torch.Tensor of shape(batch_size, 3, height, width)
+                control_image = prepare_control_image(
+                    image=input_image,
+                    do_classifier_free_guidance=do_classifier_free_guidance,
+                    width=control_width_resize,
+                    height=control_height_resize,
+                    # batch_size=batch_size * num_images_per_prompt,
+                    # num_images_per_prompt=num_images_per_prompt,
+                    device=control_model.device,
+                    dtype=control_model.dtype,
+                    control_mode=control_info.control_mode,
+                    resize_mode=control_info.resize_mode,
+                )
+                control_item = ControlNetData(
+                    model=control_model,
+                    image_tensor=control_image,
+                    weight=control_info.control_weight,
+                    begin_step_percent=control_info.begin_step_percent,
+                    end_step_percent=control_info.end_step_percent,
+                    control_mode=control_info.control_mode,
+                    # any resizing needed should currently be happening in prepare_control_image(),
+                    # but adding resize_mode to ControlNetData in case needed in the future
+                    resize_mode=control_info.resize_mode,
+                )
+                control_data.append(control_item)
+            # MultiControlNetModel has been refactored out, just need list[ControlNetData]
+        return control_data

    # original idea by https://github.com/AmericanPresidentJimmyCarter
    # TODO: research more for second order schedulers timesteps
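Both versions of `prep_control_data` above rely on `exit_stack.enter_context(...)` to keep each loaded model alive past its own `with` scope, for the duration of the denoise loop. A self-contained sketch of that pattern, with a toy context manager standing in for `get_model`:

```python
from contextlib import ExitStack, contextmanager

@contextmanager
def loaded_model(name: str):
    # stand-in for the model manager's get_model(...) context
    print(f"load {name}")
    try:
        yield name
    finally:
        print(f"unload {name}")

with ExitStack() as exit_stack:
    models = [exit_stack.enter_context(loaded_model(n)) for n in ("controlnet-a", "controlnet-b")]
    print("denoising with", models)
# all models entered via enter_context() are released here, after the whole loop
```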
@@ -643,11 +443,11 @@ class DenoiseLatentsInvocation(BaseInvocation):
        seed = None
        noise = None
        if self.noise is not None:
-            noise = context.get_latents(self.noise.latents_name)
+            noise = context.services.latents.get(self.noise.latents_name)
            seed = self.noise.seed

        if self.latents is not None:
-            latents = context.get_latents(self.latents.latents_name)
+            latents = context.services.latents.get(self.latents.latents_name)
            if seed is None:
                seed = self.latents.seed
@@ -664,36 +464,30 @@ class DenoiseLatentsInvocation(BaseInvocation):
        mask, masked_latents = self.prep_inpaint_mask(context, latents)

-        # TODO(ryand): I have hard-coded `do_classifier_free_guidance=True` to mirror the behaviour of ControlNets,
-        # below. Investigate whether this is appropriate.
-        t2i_adapter_data = self.run_t2i_adapters(
-            context,
-            self.t2i_adapter,
-            latents.shape,
-            do_classifier_free_guidance=True,
-        )
+        # Get the source node id (we are invoking the prepared node)
+        graph_execution_state = context.services.graph_execution_manager.get(context.graph_execution_state_id)
+        source_node_id = graph_execution_state.prepared_source_mapping[self.id]

        def step_callback(state: PipelineIntermediateState):
-            self.dispatch_progress(context, state, self.unet.unet.base_model)
+            self.dispatch_progress(context, source_node_id, state, self.unet.unet.base_model)

        def _lora_loader():
            for lora in self.unet.loras:
-                lora_info = context.get_model(
-                    **lora.model_dump(exclude={"weight"}),
+                lora_info = context.services.model_manager.get_model(
+                    **lora.dict(exclude={"weight"}),
+                    context=context,
                )
                yield (lora_info.context.model, lora.weight)
                del lora_info
            return

-        unet_info = context.get_model(
-            **self.unet.unet.model_dump(),
+        unet_info = context.services.model_manager.get_model(
+            **self.unet.unet.dict(),
+            context=context,
        )

-        with (
-            ExitStack() as exit_stack,
-            ModelPatcher.apply_lora_unet(unet_info.context.model, _lora_loader()),
-            set_seamless(unet_info.context.model, self.unet.seamless_axes),
-            unet_info as unet,
-        ):
+        with ExitStack() as exit_stack, ModelPatcher.apply_lora_unet(
+            unet_info.context.model, _lora_loader()
+        ), set_seamless(unet_info.context.model, self.unet.seamless_axes), unet_info as unet:
            latents = latents.to(device=unet.device, dtype=unet.dtype)
            if noise is not None:
                noise = noise.to(device=unet.device, dtype=unet.dtype)
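The `with` reflow in the hunk above is purely syntactic; both spellings bind the same four context managers. One practical difference worth noting: the parenthesized multi-manager form on the removed side is only officially supported from Python 3.10, while the branch keeps the older single-statement form. A runnable comparison using stdlib stand-ins:

```python
from contextlib import ExitStack, nullcontext

# single-statement form (branch side), works on Python 3.9:
with ExitStack() as stack, nullcontext("unet") as unet:
    print(unet)

# parenthesized form (removed side), requires Python 3.10+:
with (
    ExitStack() as stack,
    nullcontext("unet") as unet,
):
    print(unet)
```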
@@ -712,7 +506,8 @@ class DenoiseLatentsInvocation(BaseInvocation):
            pipeline = self.create_pipeline(unet, scheduler)
            conditioning_data = self.get_conditioning_data(context, scheduler, unet, seed)

-            controlnet_data = self.prep_control_data(
+            control_data = self.prep_control_data(
+                model=pipeline,
                context=context,
                control_input=self.control,
                latents_shape=latents.shape,
@@ -721,13 +516,6 @@ class DenoiseLatentsInvocation(BaseInvocation):
                exit_stack=exit_stack,
            )

-            ip_adapter_data = self.prep_ip_adapter_data(
-                context=context,
-                ip_adapter=self.ip_adapter,
-                conditioning_data=conditioning_data,
-                exit_stack=exit_stack,
-            )
-
            num_inference_steps, timesteps, init_timestep = self.init_scheduler(
                scheduler,
                device=unet.device,
@@ -736,10 +524,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
                denoising_end=self.denoising_end,
            )

-            (
-                result_latents,
-                result_attention_map_saver,
-            ) = pipeline.latents_from_embeddings(
+            result_latents, result_attention_map_saver = pipeline.latents_from_embeddings(
                latents=latents,
                timesteps=timesteps,
                init_timestep=init_timestep,
@@ -749,28 +534,21 @@ class DenoiseLatentsInvocation(BaseInvocation):
                masked_latents=masked_latents,
                num_inference_steps=num_inference_steps,
                conditioning_data=conditioning_data,
-                control_data=controlnet_data,
-                ip_adapter_data=ip_adapter_data,
-                t2i_adapter_data=t2i_adapter_data,
+                control_data=control_data,  # list[ControlNetData]
                callback=step_callback,
            )

        # https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
        result_latents = result_latents.to("cpu")
        torch.cuda.empty_cache()
-        if choose_torch_device() == torch.device("mps"):
-            mps.empty_cache()

-        latents_name = context.save_latents(result_latents)
-        return build_latents_output(latents_name=latents_name, latents=result_latents, seed=seed)
+        name = f"{context.graph_execution_state_id}__{self.id}"
+        context.services.latents.save(name, result_latents)
+        return build_latents_output(latents_name=name, latents=result_latents, seed=seed)


-@invocation(
-    "l2i",
-    title="Latents to Image",
-    tags=["latents", "image", "vae", "l2i"],
-    category="latents",
-    version="1.0.0",
-)
+@invocation(
+    "l2i", title="Latents to Image", tags=["latents", "image", "vae", "l2i"], category="latents", version="1.0.0"
+)
class LatentsToImageInvocation(BaseInvocation):
    """Generates an image from latents."""
@@ -785,7 +563,7 @@ class LatentsToImageInvocation(BaseInvocation):
    )
    tiled: bool = InputField(default=False, description=FieldDescriptions.tiled)
    fp32: bool = InputField(default=DEFAULT_PRECISION == "float32", description=FieldDescriptions.fp32)
-    metadata: Optional[CoreMetadata] = InputField(
+    metadata: CoreMetadata = InputField(
        default=None,
        description=FieldDescriptions.core_metadata,
        ui_hidden=True,
@@ -793,10 +571,11 @@ class LatentsToImageInvocation(BaseInvocation):

    @torch.no_grad()
    def invoke(self, context: InvocationContext) -> ImageOutput:
-        latents = context.get_latents(self.latents.latents_name)
+        latents = context.services.latents.get(self.latents.latents_name)

-        vae_info = context.get_model(
-            **self.vae.vae.model_dump(),
+        vae_info = context.services.model_manager.get_model(
+            **self.vae.vae.dict(),
+            context=context,
        )

        with set_seamless(vae_info.context.model, self.vae.seamless_axes), vae_info as vae:
@@ -826,15 +605,13 @@ class LatentsToImageInvocation(BaseInvocation):
                vae.to(dtype=torch.float16)
                latents = latents.half()

-            if self.tiled or context.config.tiled_decode:
+            if self.tiled or context.services.configuration.tiled_decode:
                vae.enable_tiling()
            else:
                vae.disable_tiling()

            # clear memory as vae decode can request a lot
            torch.cuda.empty_cache()
-            if choose_torch_device() == torch.device("mps"):
-                mps.empty_cache()

            with torch.inference_mode():
                # copied from diffusers pipeline
@@ -847,28 +624,29 @@ class LatentsToImageInvocation(BaseInvocation):
            image = VaeImageProcessor.numpy_to_pil(np_image)[0]

        torch.cuda.empty_cache()
-        if choose_torch_device() == torch.device("mps"):
-            mps.empty_cache()

-        image_name = context.save_image(image, category=context.categories.GENERAL)
+        image_dto = context.services.images.create(
+            image=image,
+            image_origin=ResourceOrigin.INTERNAL,
+            image_category=ImageCategory.GENERAL,
+            node_id=self.id,
+            session_id=context.graph_execution_state_id,
+            is_intermediate=self.is_intermediate,
+            metadata=self.metadata.dict() if self.metadata else None,
+            workflow=self.workflow,
+        )

        return ImageOutput(
-            image=ImageField(image_name=image_name),
-            width=image.width,
-            height=image.height,
+            image=ImageField(image_name=image_dto.image_name),
+            width=image_dto.width,
+            height=image_dto.height,
        )
LATENTS_INTERPOLATION_MODE = Literal["nearest", "linear", "bilinear", "bicubic", "trilinear", "area", "nearest-exact"]
-@invocation(
-    "lresize",
-    title="Resize Latents",
-    tags=["latents", "resize"],
-    category="latents",
-    version="1.0.0",
-)
+@invocation("lresize", title="Resize Latents", tags=["latents", "resize"], category="latents", version="1.0.0")
class ResizeLatentsInvocation(BaseInvocation):
    """Resizes latents to explicit width/height (in pixels). Provided dimensions are floor-divided by 8."""
@@ -905,8 +683,6 @@ class ResizeLatentsInvocation(BaseInvocation):
        # https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
        resized_latents = resized_latents.to("cpu")
        torch.cuda.empty_cache()
-        if device == torch.device("mps"):
-            mps.empty_cache()

        name = f"{context.graph_execution_state_id}__{self.id}"
        # context.services.latents.set(name, resized_latents)
@@ -914,13 +690,7 @@ class ResizeLatentsInvocation(BaseInvocation):
        return build_latents_output(latents_name=name, latents=resized_latents, seed=self.latents.seed)


-@invocation(
-    "lscale",
-    title="Scale Latents",
-    tags=["latents", "resize"],
-    category="latents",
-    version="1.0.0",
-)
+@invocation("lscale", title="Scale Latents", tags=["latents", "resize"], category="latents", version="1.0.0")
class ScaleLatentsInvocation(BaseInvocation):
    """Scales latents by a given factor."""
@@ -949,8 +719,6 @@ class ScaleLatentsInvocation(BaseInvocation):
        # https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
        resized_latents = resized_latents.to("cpu")
        torch.cuda.empty_cache()
-        if device == torch.device("mps"):
-            mps.empty_cache()

        name = f"{context.graph_execution_state_id}__{self.id}"
        # context.services.latents.set(name, resized_latents)
@@ -959,11 +727,7 @@ class ScaleLatentsInvocation(BaseInvocation):

-@invocation(
-    "i2l",
-    title="Image to Latents",
-    tags=["latents", "image", "vae", "i2l"],
-    category="latents",
-    version="1.0.0",
-)
+@invocation(
+    "i2l", title="Image to Latents", tags=["latents", "image", "vae", "i2l"], category="latents", version="1.0.0"
+)
class ImageToLatentsInvocation(BaseInvocation):
    """Encodes an image into latents."""
@@ -1015,7 +779,8 @@ class ImageToLatentsInvocation(BaseInvocation):
        # non_noised_latents_from_image
        image_tensor = image_tensor.to(device=vae.device, dtype=vae.dtype)
        with torch.inference_mode():
-            latents = ImageToLatentsInvocation._encode_to_tensor(vae, image_tensor)
+            image_tensor_dist = vae.encode(image_tensor).latent_dist
+            latents = image_tensor_dist.sample().to(dtype=vae.dtype)  # FIXME: uses torch.randn. make reproducible!

        latents = vae.config.scaling_factor * latents
        latents = latents.to(dtype=orig_dtype)
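The `_encode_to_tensor` call replaced above is, on the removed side, a `singledispatchmethod` pair (see the later hunk in this file): the dispatch picks an encode path by VAE type, since `AutoencoderTiny.encode(...)` returns `.latents` directly while `AutoencoderKL` goes through a sampled `latent_dist`. A runnable miniature of the dispatch mechanism, with empty classes standing in for the diffusers types:

```python
from functools import singledispatch

class AutoencoderKL:  # stand-in for diffusers.AutoencoderKL
    pass

class AutoencoderTiny:  # stand-in for diffusers.AutoencoderTiny
    pass

@singledispatch
def encode_to_tensor(vae, image_tensor):
    return "KL path: sample latent_dist"

@encode_to_tensor.register
def _(vae: AutoencoderTiny, image_tensor):
    return "Tiny path: .latents, no sampling"

print(encode_to_tensor(AutoencoderKL(), None))    # KL path
print(encode_to_tensor(AutoencoderTiny(), None))  # Tiny path
```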
@@ -1027,7 +792,7 @@ class ImageToLatentsInvocation(BaseInvocation):
        image = context.services.images.get_pil_image(self.image.image_name)

        vae_info = context.services.model_manager.get_model(
-            **self.vae.vae.model_dump(),
+            **self.vae.vae.dict(),
            context=context,
        )
@@ -1042,26 +807,8 @@ class ImageToLatentsInvocation(BaseInvocation):
        context.services.latents.save(name, latents)
        return build_latents_output(latents_name=name, latents=latents, seed=None)

-    @singledispatchmethod
-    @staticmethod
-    def _encode_to_tensor(vae: AutoencoderKL, image_tensor: torch.FloatTensor) -> torch.FloatTensor:
-        image_tensor_dist = vae.encode(image_tensor).latent_dist
-        latents = image_tensor_dist.sample().to(dtype=vae.dtype)  # FIXME: uses torch.randn. make reproducible!
-        return latents
-
-    @_encode_to_tensor.register
-    @staticmethod
-    def _(vae: AutoencoderTiny, image_tensor: torch.FloatTensor) -> torch.FloatTensor:
-        return vae.encode(image_tensor).latents
-
-
-@invocation(
-    "lblend",
-    title="Blend Latents",
-    tags=["latents", "blend"],
-    category="latents",
-    version="1.0.0",
-)
+
+@invocation("lblend", title="Blend Latents", tags=["latents", "blend"], category="latents", version="1.0.0")
class BlendLatentsInvocation(BaseInvocation):
    """Blend two latents using a given alpha. Latents must have same size."""
@@ -1128,8 +875,6 @@ class BlendLatentsInvocation(BaseInvocation):
        # https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
        blended_latents = blended_latents.to("cpu")
        torch.cuda.empty_cache()
-        if device == torch.device("mps"):
-            mps.empty_cache()

        name = f"{context.graph_execution_state_id}__{self.id}"
        # context.services.latents.set(name, resized_latents)
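Every `mps.empty_cache()` addition in this file follows the same three-line shape after results are moved to the CPU. If one wanted to factor that out, a hedged sketch of a helper (hypothetical; not part of either side of this diff) could look like:

```python
import torch

def empty_device_cache(device: torch.device) -> None:
    """Clear allocator caches after tensors have been moved off-device (sketch)."""
    torch.cuda.empty_cache()  # no-op unless CUDA has been initialized
    if device.type == "mps":
        from torch import mps  # torch >= 2.0 exposes torch.mps.empty_cache()
        mps.empty_cache()
```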


@@ -1,11 +1,8 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654)

-from typing import Literal
-
import numpy as np
-from pydantic import field_validator

-from invokeai.app.invocations.primitives import FloatOutput, IntegerOutput
+from invokeai.app.invocations.primitives import IntegerOutput

from .baseinvocation import BaseInvocation, FieldDescriptions, InputField, InvocationContext, invocation
@@ -54,238 +51,12 @@ class DivideInvocation(BaseInvocation):
        return IntegerOutput(value=int(self.a / self.b))


-@invocation(
-    "rand_int",
-    title="Random Integer",
-    tags=["math", "random"],
-    category="math",
-    version="1.0.0",
-    use_cache=False,
-)
+@invocation("rand_int", title="Random Integer", tags=["math", "random"], category="math", version="1.0.0")
class RandomIntInvocation(BaseInvocation):
    """Outputs a single random integer."""

-    low: int = InputField(default=0, description=FieldDescriptions.inclusive_low)
-    high: int = InputField(default=np.iinfo(np.int32).max, description=FieldDescriptions.exclusive_high)
+    low: int = InputField(default=0, description="The inclusive low value")
+    high: int = InputField(default=np.iinfo(np.int32).max, description="The exclusive high value")

    def invoke(self, context: InvocationContext) -> IntegerOutput:
        return IntegerOutput(value=np.random.randint(self.low, self.high))
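Note the `use_cache=False` on the removed side: a cached random node would replay a single value on every graph execution instead of re-rolling. A sketch of the difference, with `functools.lru_cache` standing in for the node-output cache:

```python
import random
from functools import lru_cache

@lru_cache(maxsize=None)  # rough analogue of use_cache=True
def cached_rand() -> int:
    return random.randint(0, 10)

print({cached_rand() for _ in range(5)})          # one value, replayed from cache
print({random.randint(0, 10) for _ in range(5)})  # fresh draws, usually several values
```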
-@invocation(
-    "rand_float",
-    title="Random Float",
-    tags=["math", "float", "random"],
-    category="math",
-    version="1.0.1",
-    use_cache=False,
-)
-class RandomFloatInvocation(BaseInvocation):
-    """Outputs a single random float"""
-
-    low: float = InputField(default=0.0, description=FieldDescriptions.inclusive_low)
-    high: float = InputField(default=1.0, description=FieldDescriptions.exclusive_high)
-    decimals: int = InputField(default=2, description=FieldDescriptions.decimal_places)
-
-    def invoke(self, context: InvocationContext) -> FloatOutput:
-        random_float = np.random.uniform(self.low, self.high)
-        rounded_float = round(random_float, self.decimals)
-        return FloatOutput(value=rounded_float)
-
-
-@invocation(
-    "float_to_int",
-    title="Float To Integer",
-    tags=["math", "round", "integer", "float", "convert"],
-    category="math",
-    version="1.0.0",
-)
-class FloatToIntegerInvocation(BaseInvocation):
-    """Rounds a float number to (a multiple of) an integer."""
-
-    value: float = InputField(default=0, description="The value to round")
-    multiple: int = InputField(default=1, ge=1, title="Multiple of", description="The multiple to round to")
-    method: Literal["Nearest", "Floor", "Ceiling", "Truncate"] = InputField(
-        default="Nearest", description="The method to use for rounding"
-    )
-
-    def invoke(self, context: InvocationContext) -> IntegerOutput:
-        if self.method == "Nearest":
-            return IntegerOutput(value=round(self.value / self.multiple) * self.multiple)
-        elif self.method == "Floor":
-            return IntegerOutput(value=np.floor(self.value / self.multiple) * self.multiple)
-        elif self.method == "Ceiling":
-            return IntegerOutput(value=np.ceil(self.value / self.multiple) * self.multiple)
-        else:  # self.method == "Truncate"
-            return IntegerOutput(value=int(self.value / self.multiple) * self.multiple)
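The four rounding modes in `FloatToIntegerInvocation` above reduce to one table-driven formula. A runnable check of the arithmetic (using `math.floor`/`math.ceil`, which return `int`; the `np.floor`/`np.ceil` calls above return NumPy floats, which `IntegerOutput` would receive unconverted):

```python
import math

def round_to_multiple(value: float, multiple: int, method: str) -> int:
    ops = {"Nearest": round, "Floor": math.floor, "Ceiling": math.ceil, "Truncate": int}
    return ops[method](value / multiple) * multiple

assert round_to_multiple(7.4, 5, "Nearest") == 5
assert round_to_multiple(7.4, 5, "Ceiling") == 10
assert round_to_multiple(-7.4, 5, "Truncate") == -5
```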
-@invocation("round_float", title="Round Float", tags=["math", "round"], category="math", version="1.0.0")
-class RoundInvocation(BaseInvocation):
-    """Rounds a float to a specified number of decimal places."""
-
-    value: float = InputField(default=0, description="The float value")
-    decimals: int = InputField(default=0, description="The number of decimal places")
-
-    def invoke(self, context: InvocationContext) -> FloatOutput:
-        return FloatOutput(value=round(self.value, self.decimals))
-
-
-INTEGER_OPERATIONS = Literal[
-    "ADD",
-    "SUB",
-    "MUL",
-    "DIV",
-    "EXP",
-    "MOD",
-    "ABS",
-    "MIN",
-    "MAX",
-]
-
-
-INTEGER_OPERATIONS_LABELS = dict(
-    ADD="Add A+B",
-    SUB="Subtract A-B",
-    MUL="Multiply A*B",
-    DIV="Divide A/B",
-    EXP="Exponentiate A^B",
-    MOD="Modulus A%B",
-    ABS="Absolute Value of A",
-    MIN="Minimum(A,B)",
-    MAX="Maximum(A,B)",
-)
-
-
-@invocation(
-    "integer_math",
-    title="Integer Math",
-    tags=[
-        "math",
-        "integer",
-        "add",
-        "subtract",
-        "multiply",
-        "divide",
-        "modulus",
-        "power",
-        "absolute value",
-        "min",
-        "max",
-    ],
-    category="math",
-    version="1.0.0",
-)
-class IntegerMathInvocation(BaseInvocation):
-    """Performs integer math."""
-
-    operation: INTEGER_OPERATIONS = InputField(
-        default="ADD", description="The operation to perform", ui_choice_labels=INTEGER_OPERATIONS_LABELS
-    )
-    a: int = InputField(default=0, description=FieldDescriptions.num_1)
-    b: int = InputField(default=0, description=FieldDescriptions.num_2)
-
-    @field_validator("b")
-    def no_unrepresentable_results(cls, v, values):
-        if values["operation"] == "DIV" and v == 0:
-            raise ValueError("Cannot divide by zero")
-        elif values["operation"] == "MOD" and v == 0:
-            raise ValueError("Cannot divide by zero")
-        elif values["operation"] == "EXP" and v < 0:
-            raise ValueError("Result of exponentiation is not an integer")
-        return v
-
-    def invoke(self, context: InvocationContext) -> IntegerOutput:
-        # Python doesn't support switch statements until 3.10, but InvokeAI supports back to 3.9
-        if self.operation == "ADD":
-            return IntegerOutput(value=self.a + self.b)
-        elif self.operation == "SUB":
-            return IntegerOutput(value=self.a - self.b)
-        elif self.operation == "MUL":
-            return IntegerOutput(value=self.a * self.b)
-        elif self.operation == "DIV":
-            return IntegerOutput(value=int(self.a / self.b))
-        elif self.operation == "EXP":
-            return IntegerOutput(value=self.a**self.b)
-        elif self.operation == "MOD":
-            return IntegerOutput(value=self.a % self.b)
-        elif self.operation == "ABS":
-            return IntegerOutput(value=abs(self.a))
-        elif self.operation == "MIN":
-            return IntegerOutput(value=min(self.a, self.b))
-        else:  # self.operation == "MAX"
-            return IntegerOutput(value=max(self.a, self.b))
-
-
-FLOAT_OPERATIONS = Literal[
-    "ADD",
-    "SUB",
-    "MUL",
-    "DIV",
-    "EXP",
-    "ABS",
-    "SQRT",
-    "MIN",
-    "MAX",
-]
-
-
-FLOAT_OPERATIONS_LABELS = dict(
-    ADD="Add A+B",
-    SUB="Subtract A-B",
-    MUL="Multiply A*B",
-    DIV="Divide A/B",
-    EXP="Exponentiate A^B",
-    ABS="Absolute Value of A",
-    SQRT="Square Root of A",
-    MIN="Minimum(A,B)",
-    MAX="Maximum(A,B)",
-)
-
-
-@invocation(
-    "float_math",
-    title="Float Math",
-    tags=["math", "float", "add", "subtract", "multiply", "divide", "power", "root", "absolute value", "min", "max"],
-    category="math",
-    version="1.0.0",
-)
-class FloatMathInvocation(BaseInvocation):
-    """Performs floating point math."""
-
-    operation: FLOAT_OPERATIONS = InputField(
-        default="ADD", description="The operation to perform", ui_choice_labels=FLOAT_OPERATIONS_LABELS
-    )
-    a: float = InputField(default=0, description=FieldDescriptions.num_1)
-    b: float = InputField(default=0, description=FieldDescriptions.num_2)
-
-    @field_validator("b")
-    def no_unrepresentable_results(cls, v, values):
-        if values["operation"] == "DIV" and v == 0:
-            raise ValueError("Cannot divide by zero")
-        elif values["operation"] == "EXP" and values["a"] == 0 and v < 0:
-            raise ValueError("Cannot raise zero to a negative power")
-        elif values["operation"] == "EXP" and type(values["a"] ** v) is complex:
-            raise ValueError("Root operation resulted in a complex number")
-        return v
-
-    def invoke(self, context: InvocationContext) -> FloatOutput:
-        # Python doesn't support switch statements until 3.10, but InvokeAI supports back to 3.9
-        if self.operation == "ADD":
-            return FloatOutput(value=self.a + self.b)
-        elif self.operation == "SUB":
-            return FloatOutput(value=self.a - self.b)
-        elif self.operation == "MUL":
-            return FloatOutput(value=self.a * self.b)
-        elif self.operation == "DIV":
-            return FloatOutput(value=self.a / self.b)
-        elif self.operation == "EXP":
-            return FloatOutput(value=self.a**self.b)
-        elif self.operation == "SQRT":
-            return FloatOutput(value=np.sqrt(self.a))
-        elif self.operation == "ABS":
-            return FloatOutput(value=abs(self.a))
-        elif self.operation == "MIN":
-            return FloatOutput(value=min(self.a, self.b))
-        else:  # self.operation == "MAX"
-            return FloatOutput(value=max(self.a, self.b))
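The complex-number guard in `no_unrepresentable_results` above leans on Python's power operator: a negative base raised to a non-integer exponent yields a `complex` result rather than raising. A quick demonstration:

```python
print(type((-8) ** 0.5))  # <class 'complex'>
print((-8) ** (1 / 3))    # also complex, not -2.0
print(type(4 ** 0.5))     # <class 'float'>
```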


@@ -12,10 +12,7 @@ from invokeai.app.invocations.baseinvocation import (
    invocation_output,
)
from invokeai.app.invocations.controlnet_image_processors import ControlField
-from invokeai.app.invocations.ip_adapter import IPAdapterModelField
from invokeai.app.invocations.model import LoRAModelField, MainModelField, VAEModelField
-from invokeai.app.invocations.primitives import ImageField
-from invokeai.app.invocations.t2i_adapter import T2IAdapterField
from invokeai.app.util.model_exclude_null import BaseModelExcludeNull

from ...version import __version__
@@ -28,47 +25,29 @@ class LoRAMetadataField(BaseModelExcludeNull):
    weight: float = Field(description="The weight of the LoRA model")


-class IPAdapterMetadataField(BaseModelExcludeNull):
-    image: ImageField = Field(description="The IP-Adapter image prompt.")
-    ip_adapter_model: IPAdapterModelField = Field(description="The IP-Adapter model to use.")
-    weight: float = Field(description="The weight of the IP-Adapter model")
-    begin_step_percent: float = Field(
-        default=0, ge=0, le=1, description="When the IP-Adapter is first applied (% of total steps)"
-    )
-    end_step_percent: float = Field(
-        default=1, ge=0, le=1, description="When the IP-Adapter is last applied (% of total steps)"
-    )
-
-
class CoreMetadata(BaseModelExcludeNull):
    """Core generation metadata for an image generated in InvokeAI."""

    app_version: str = Field(default=__version__, description="The version of InvokeAI used to generate this image")
-    generation_mode: Optional[str] = Field(
-        default=None,
-        description="The generation mode that output this image",
-    )
+    generation_mode: str = Field(
+        description="The generation mode that output this image",
+    )
    created_by: Optional[str] = Field(description="The name of the creator of the image")
-    positive_prompt: Optional[str] = Field(default=None, description="The positive prompt parameter")
-    negative_prompt: Optional[str] = Field(default=None, description="The negative prompt parameter")
-    width: Optional[int] = Field(default=None, description="The width parameter")
-    height: Optional[int] = Field(default=None, description="The height parameter")
-    seed: Optional[int] = Field(default=None, description="The seed used for noise generation")
-    rand_device: Optional[str] = Field(default=None, description="The device used for random number generation")
-    cfg_scale: Optional[float] = Field(default=None, description="The classifier-free guidance scale parameter")
-    steps: Optional[int] = Field(default=None, description="The number of steps used for inference")
-    scheduler: Optional[str] = Field(default=None, description="The scheduler used for inference")
-    clip_skip: Optional[int] = Field(
-        default=None,
-        description="The number of skipped CLIP layers",
-    )
-    model: Optional[MainModelField] = Field(default=None, description="The main model used for inference")
-    controlnets: Optional[list[ControlField]] = Field(default=None, description="The ControlNets used for inference")
-    ipAdapters: Optional[list[IPAdapterMetadataField]] = Field(
-        default=None, description="The IP Adapters used for inference"
-    )
-    t2iAdapters: Optional[list[T2IAdapterField]] = Field(default=None, description="The IP Adapters used for inference")
-    loras: Optional[list[LoRAMetadataField]] = Field(default=None, description="The LoRAs used for inference")
+    positive_prompt: str = Field(description="The positive prompt parameter")
+    negative_prompt: str = Field(description="The negative prompt parameter")
+    width: int = Field(description="The width parameter")
+    height: int = Field(description="The height parameter")
+    seed: int = Field(description="The seed used for noise generation")
+    rand_device: str = Field(description="The device used for random number generation")
+    cfg_scale: float = Field(description="The classifier-free guidance scale parameter")
+    steps: int = Field(description="The number of steps used for inference")
+    scheduler: str = Field(description="The scheduler used for inference")
+    clip_skip: int = Field(
+        description="The number of skipped CLIP layers",
+    )
+    model: MainModelField = Field(description="The main model used for inference")
+    controlnets: list[ControlField] = Field(description="The ControlNets used for inference")
+    loras: list[LoRAMetadataField] = Field(description="The LoRAs used for inference")
    vae: Optional[VAEModelField] = Field(
        default=None,
        description="The VAE used for decoding, if the main model's default was not used",
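The direction of the change above: the removed (base) side relaxes nearly every core field to `Optional[...] = None`, so partially populated metadata still validates, while the branch keeps them required. A two-line pydantic sketch of the difference:

```python
from typing import Optional
from pydantic import BaseModel, Field

class CoreMeta(BaseModel):
    seed: Optional[int] = Field(default=None, description="The seed used for noise generation")

print(CoreMeta().seed)  # None validates on the Optional schema
# with the required form `seed: int = Field(...)`, CoreMeta() would raise a ValidationError
```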
@@ -125,34 +104,24 @@ class MetadataAccumulatorOutput(BaseInvocationOutput):
class MetadataAccumulatorInvocation(BaseInvocation):
    """Outputs a Core Metadata Object"""

-    generation_mode: Optional[str] = InputField(
-        default=None,
-        description="The generation mode that output this image",
-    )
-    positive_prompt: Optional[str] = InputField(default=None, description="The positive prompt parameter")
-    negative_prompt: Optional[str] = InputField(default=None, description="The negative prompt parameter")
-    width: Optional[int] = InputField(default=None, description="The width parameter")
-    height: Optional[int] = InputField(default=None, description="The height parameter")
-    seed: Optional[int] = InputField(default=None, description="The seed used for noise generation")
-    rand_device: Optional[str] = InputField(default=None, description="The device used for random number generation")
-    cfg_scale: Optional[float] = InputField(default=None, description="The classifier-free guidance scale parameter")
-    steps: Optional[int] = InputField(default=None, description="The number of steps used for inference")
-    scheduler: Optional[str] = InputField(default=None, description="The scheduler used for inference")
-    clip_skip: Optional[int] = InputField(
-        default=None,
-        description="The number of skipped CLIP layers",
-    )
-    model: Optional[MainModelField] = InputField(default=None, description="The main model used for inference")
-    controlnets: Optional[list[ControlField]] = InputField(
-        default=None, description="The ControlNets used for inference"
-    )
-    ipAdapters: Optional[list[IPAdapterMetadataField]] = InputField(
-        default=None, description="The IP Adapters used for inference"
-    )
-    t2iAdapters: Optional[list[T2IAdapterField]] = InputField(
-        default=None, description="The IP Adapters used for inference"
-    )
-    loras: Optional[list[LoRAMetadataField]] = InputField(default=None, description="The LoRAs used for inference")
+    generation_mode: str = InputField(
+        description="The generation mode that output this image",
+    )
+    positive_prompt: str = InputField(description="The positive prompt parameter")
+    negative_prompt: str = InputField(description="The negative prompt parameter")
+    width: int = InputField(description="The width parameter")
+    height: int = InputField(description="The height parameter")
+    seed: int = InputField(description="The seed used for noise generation")
+    rand_device: str = InputField(description="The device used for random number generation")
+    cfg_scale: float = InputField(description="The classifier-free guidance scale parameter")
+    steps: int = InputField(description="The number of steps used for inference")
+    scheduler: str = InputField(description="The scheduler used for inference")
+    clip_skip: int = InputField(
+        description="The number of skipped CLIP layers",
+    )
+    model: MainModelField = InputField(description="The main model used for inference")
+    controlnets: list[ControlField] = InputField(description="The ControlNets used for inference")
+    loras: list[LoRAMetadataField] = InputField(description="The LoRAs used for inference")
    strength: Optional[float] = InputField(
        default=None,
        description="The strength used for latents-to-latents",
@@ -166,20 +135,6 @@ class MetadataAccumulatorInvocation(BaseInvocation):
        description="The VAE used for decoding, if the main model's default was not used",
    )

-    # High resolution fix metadata.
-    hrf_width: Optional[int] = InputField(
-        default=None,
-        description="The high resolution fix height and width multiplier.",
-    )
-    hrf_height: Optional[int] = InputField(
-        default=None,
-        description="The high resolution fix height and width multiplier.",
-    )
-    hrf_strength: Optional[float] = InputField(
-        default=None,
-        description="The high resolution fix img2img strength used in the upscale pass.",
-    )
-
    # SDXL
    positive_style_prompt: Optional[str] = InputField(
        default=None,
@@ -223,4 +178,4 @@ class MetadataAccumulatorInvocation(BaseInvocation):
    def invoke(self, context: InvocationContext) -> MetadataAccumulatorOutput:
        """Collects and outputs a CoreMetadata object"""

-        return MetadataAccumulatorOutput(metadata=CoreMetadata(**self.model_dump()))
+        return MetadataAccumulatorOutput(metadata=CoreMetadata(**self.dict()))


@@ -1,7 +1,7 @@
import copy
from typing import List, Optional

-from pydantic import BaseModel, ConfigDict, Field
+from pydantic import BaseModel, Field

from ...backend.model_management import BaseModelType, ModelType, SubModelType
from .baseinvocation import (
@@ -24,8 +24,6 @@ class ModelInfo(BaseModel):
    model_type: ModelType = Field(description="Info to load submodel")
    submodel: Optional[SubModelType] = Field(default=None, description="Info to load submodel")

-    model_config = ConfigDict(protected_namespaces=())
-

class LoraInfo(ModelInfo):
    weight: float = Field(description="Lora's weight which to use when apply to model")
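Context for the removed `model_config` lines in this file: pydantic v2 treats `model_` as a protected namespace, so field names like `model_name` and `model_type` trigger warnings unless the namespace is cleared, which is all `ConfigDict(protected_namespaces=())` does. A minimal runnable sketch:

```python
from pydantic import BaseModel, ConfigDict

class ModelInfo(BaseModel):
    model_config = ConfigDict(protected_namespaces=())  # allow model_* field names without warnings
    model_name: str

print(ModelInfo(model_name="stable-diffusion-1.5").model_name)
```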
@@ -67,8 +65,6 @@ class MainModelField(BaseModel):
    base_model: BaseModelType = Field(description="Base model")
    model_type: ModelType = Field(description="Model Type")

-    model_config = ConfigDict(protected_namespaces=())
-

class LoRAModelField(BaseModel):
    """LoRA model field"""
@@ -76,16 +72,8 @@ class LoRAModelField(BaseModel):
    model_name: str = Field(description="Name of the LoRA model")
    base_model: BaseModelType = Field(description="Base model")

-    model_config = ConfigDict(protected_namespaces=())
-
-
-@invocation(
-    "main_model_loader",
-    title="Main Model",
-    tags=["model"],
-    category="model",
-    version="1.0.0",
-)
+
+@invocation("main_model_loader", title="Main Model", tags=["model"], category="model", version="1.0.0")
class MainModelLoaderInvocation(BaseInvocation):
    """Loads a main model, outputting its submodels."""
@@ -98,7 +86,7 @@ class MainModelLoaderInvocation(BaseInvocation):
        model_type = ModelType.Main

        # TODO: not found exceptions
-        if not context.model_exists(
+        if not context.services.model_manager.model_exists(
            model_name=model_name,
            base_model=base_model,
            model_type=model_type,
@@ -192,16 +180,10 @@ class LoraLoaderInvocation(BaseInvocation):
    lora: LoRAModelField = InputField(description=FieldDescriptions.lora_model, input=Input.Direct, title="LoRA")
    weight: float = InputField(default=0.75, description=FieldDescriptions.lora_weight)
    unet: Optional[UNetField] = InputField(
-        default=None,
-        description=FieldDescriptions.unet,
-        input=Input.Connection,
-        title="UNet",
+        default=None, description=FieldDescriptions.unet, input=Input.Connection, title="UNet"
    )
    clip: Optional[ClipField] = InputField(
-        default=None,
-        description=FieldDescriptions.clip,
-        input=Input.Connection,
-        title="CLIP",
+        default=None, description=FieldDescriptions.clip, input=Input.Connection, title="CLIP"
    )

    def invoke(self, context: InvocationContext) -> LoraLoaderOutput:
@@ -262,35 +244,20 @@ class SDXLLoraLoaderOutput(BaseInvocationOutput):
    clip2: Optional[ClipField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP 2")


-@invocation(
-    "sdxl_lora_loader",
-    title="SDXL LoRA",
-    tags=["lora", "model"],
-    category="model",
-    version="1.0.0",
-)
+@invocation("sdxl_lora_loader", title="SDXL LoRA", tags=["lora", "model"], category="model", version="1.0.0")
class SDXLLoraLoaderInvocation(BaseInvocation):
    """Apply selected lora to unet and text_encoder."""

    lora: LoRAModelField = InputField(description=FieldDescriptions.lora_model, input=Input.Direct, title="LoRA")
    weight: float = InputField(default=0.75, description=FieldDescriptions.lora_weight)
    unet: Optional[UNetField] = InputField(
-        default=None,
-        description=FieldDescriptions.unet,
-        input=Input.Connection,
-        title="UNet",
+        default=None, description=FieldDescriptions.unet, input=Input.Connection, title="UNet"
    )
    clip: Optional[ClipField] = InputField(
-        default=None,
-        description=FieldDescriptions.clip,
-        input=Input.Connection,
-        title="CLIP 1",
+        default=None, description=FieldDescriptions.clip, input=Input.Connection, title="CLIP 1"
    )
    clip2: Optional[ClipField] = InputField(
-        default=None,
-        description=FieldDescriptions.clip,
-        input=Input.Connection,
-        title="CLIP 2",
+        default=None, description=FieldDescriptions.clip, input=Input.Connection, title="CLIP 2"
    )

    def invoke(self, context: InvocationContext) -> SDXLLoraLoaderOutput:
@@ -363,8 +330,6 @@ class VAEModelField(BaseModel):
    model_name: str = Field(description="Name of the model")
    base_model: BaseModelType = Field(description="Base model")

-    model_config = ConfigDict(protected_namespaces=())
-

@invocation_output("vae_loader_output")
class VaeLoaderOutput(BaseInvocationOutput):
@@ -378,10 +343,7 @@ class VaeLoaderInvocation(BaseInvocation):
    """Loads a VAE model, outputting a VaeLoaderOutput"""

    vae_model: VAEModelField = InputField(
-        description=FieldDescriptions.vae_model,
-        input=Input.Direct,
-        ui_type=UIType.VaeModel,
-        title="VAE",
+        description=FieldDescriptions.vae_model, input=Input.Direct, ui_type=UIType.VaeModel, title="VAE"
    )

    def invoke(self, context: InvocationContext) -> VaeLoaderOutput:
@@ -410,31 +372,19 @@ class VaeLoaderInvocation(BaseInvocation):
class SeamlessModeOutput(BaseInvocationOutput):
    """Modified Seamless Model output"""

-    unet: Optional[UNetField] = OutputField(default=None, description=FieldDescriptions.unet, title="UNet")
-    vae: Optional[VaeField] = OutputField(default=None, description=FieldDescriptions.vae, title="VAE")
+    unet: Optional[UNetField] = OutputField(description=FieldDescriptions.unet, title="UNet")
+    vae: Optional[VaeField] = OutputField(description=FieldDescriptions.vae, title="VAE")


-@invocation(
-    "seamless",
-    title="Seamless",
-    tags=["seamless", "model"],
-    category="model",
-    version="1.0.0",
-)
+@invocation("seamless", title="Seamless", tags=["seamless", "model"], category="model", version="1.0.0")
class SeamlessModeInvocation(BaseInvocation):
    """Applies the seamless transformation to the Model UNet and VAE."""

    unet: Optional[UNetField] = InputField(
-        default=None,
-        description=FieldDescriptions.unet,
-        input=Input.Connection,
-        title="UNet",
+        default=None, description=FieldDescriptions.unet, input=Input.Connection, title="UNet"
    )
    vae: Optional[VaeField] = InputField(
-        default=None,
-        description=FieldDescriptions.vae_model,
-        input=Input.Connection,
-        title="VAE",
+        default=None, description=FieldDescriptions.vae_model, input=Input.Connection, title="VAE"
    )
    seamless_y: bool = InputField(default=True, input=Input.Any, description="Specify whether Y axis is seamless")
    seamless_x: bool = InputField(default=True, input=Input.Any, description="Specify whether X axis is seamless")


@@ -2,7 +2,7 @@

import torch
-from pydantic import field_validator
+from pydantic import validator

from invokeai.app.invocations.latent import LatentsField
from invokeai.app.util.misc import SEED_MAX, get_random_seed
@@ -65,7 +65,7 @@ Nodes
class NoiseOutput(BaseInvocationOutput):
    """Invocation noise output"""

-    noise: LatentsField = OutputField(description=FieldDescriptions.noise)
+    noise: LatentsField = OutputField(default=None, description=FieldDescriptions.noise)
    width: int = OutputField(description=FieldDescriptions.width)
    height: int = OutputField(description=FieldDescriptions.height)
@@ -78,13 +78,7 @@ def build_noise_output(latents_name: str, latents: torch.Tensor, seed: int):
    )


-@invocation(
-    "noise",
-    title="Noise",
-    tags=["latents", "noise"],
-    category="latents",
-    version="1.0.0",
-)
+@invocation("noise", title="Noise", tags=["latents", "noise"], category="latents", version="1.0.0")
class NoiseInvocation(BaseInvocation):
    """Generates latent noise."""
@@ -111,7 +105,7 @@ class NoiseInvocation(BaseInvocation):
        description="Use CPU for noise generation (for reproducible results across platforms)",
    )

-    @field_validator("seed", mode="before")
+    @validator("seed", pre=True)
    def modulo_seed(cls, v):
        """Returns the seed modulo (SEED_MAX + 1) to ensure it is within the valid range."""
        return v % (SEED_MAX + 1)
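The `field_validator`/`validator` swap above is the pydantic v2-to-v1 rename (`mode="before"` corresponds to `pre=True`). A runnable v2 version of this exact validator; the `SEED_MAX` value here is an assumption standing in for `invokeai.app.util.misc.SEED_MAX`:

```python
from pydantic import BaseModel, field_validator

SEED_MAX = 4294967295  # assumed uint32 max, as in invokeai.app.util.misc

class Noise(BaseModel):
    seed: int = 0

    # pydantic v1 spelling: @validator("seed", pre=True)
    @field_validator("seed", mode="before")
    @classmethod
    def modulo_seed(cls, v):
        return v % (SEED_MAX + 1)

print(Noise(seed=SEED_MAX + 5).seed)  # 4
```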
@@ -124,5 +118,6 @@ class NoiseInvocation(BaseInvocation):
            seed=self.seed,
            use_cpu=self.use_cpu,
        )
-        latents_name = context.save_latents(noise)
-        return build_noise_output(latents_name=latents_name, latents=noise, seed=self.seed)
+        name = f"{context.graph_execution_state_id}__{self.id}"
+        context.services.latents.save(name, noise)
+        return build_noise_output(latents_name=name, latents=noise, seed=self.seed)


@@ -9,24 +9,24 @@ from typing import List, Literal, Optional, Union
import numpy as np
import torch
from diffusers.image_processor import VaeImageProcessor
-from pydantic import BaseModel, ConfigDict, Field, field_validator
+from pydantic import BaseModel, Field, validator
from tqdm import tqdm

from invokeai.app.invocations.metadata import CoreMetadata
from invokeai.app.invocations.primitives import ConditioningField, ConditioningOutput, ImageField, ImageOutput
-from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.app.util.step_callback import stable_diffusion_step_callback
from invokeai.backend import BaseModelType, ModelType, SubModelType

from ...backend.model_management import ONNXModelPatcher
from ...backend.stable_diffusion import PipelineIntermediateState
from ...backend.util import choose_torch_device
+from ..models.image import ImageCategory, ResourceOrigin
from .baseinvocation import (
    BaseInvocation,
    BaseInvocationOutput,
    FieldDescriptions,
-    Input,
    InputField,
+    Input,
    InvocationContext,
    OutputField,
    UIComponent,
@@ -63,17 +63,14 @@ class ONNXPromptInvocation(BaseInvocation):
    def invoke(self, context: InvocationContext) -> ConditioningOutput:
        tokenizer_info = context.services.model_manager.get_model(
-            **self.clip.tokenizer.model_dump(),
+            **self.clip.tokenizer.dict(),
        )
        text_encoder_info = context.services.model_manager.get_model(
-            **self.clip.text_encoder.model_dump(),
+            **self.clip.text_encoder.dict(),
        )
        with tokenizer_info as orig_tokenizer, text_encoder_info as text_encoder:  # , ExitStack() as stack:
            loras = [
-                (
-                    context.services.model_manager.get_model(**lora.model_dump(exclude={"weight"})).context.model,
-                    lora.weight,
-                )
+                (context.services.model_manager.get_model(**lora.dict(exclude={"weight"})).context.model, lora.weight)
                for lora in self.clip.loras
            ]
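The `.model_dump()`-to-`.dict()` churn throughout this file is the same pydantic v2-to-v1 rename as elsewhere in this compare; both return a plain dict suitable for `**`-splatting into `get_model`. A sketch:

```python
from pydantic import BaseModel

class SubModelRef(BaseModel):  # hypothetical stand-in for the model fields above
    name: str = "tokenizer"
    weight: float = 0.75

ref = SubModelRef()
print(ref.model_dump(exclude={"weight"}))  # pydantic v2; v1 spelling: ref.dict(exclude={"weight"})
```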
@@ -98,10 +95,9 @@ class ONNXPromptInvocation(BaseInvocation):
                    print(f'Warn: trigger: "{trigger}" not found')

        if loras or ti_list:
            text_encoder.release_session()
-            with (
-                ONNXModelPatcher.apply_lora_text_encoder(text_encoder, loras),
-                ONNXModelPatcher.apply_ti(orig_tokenizer, text_encoder, ti_list) as (tokenizer, ti_manager),
-            ):
+            with ONNXModelPatcher.apply_lora_text_encoder(text_encoder, loras), ONNXModelPatcher.apply_ti(
+                orig_tokenizer, text_encoder, ti_list
+            ) as (tokenizer, ti_manager):
                text_encoder.create_session()

                # copy from
@@ -169,6 +165,7 @@ class ONNXTextToLatentsInvocation(BaseInvocation):
        default=7.5,
        ge=1,
        description=FieldDescriptions.cfg_scale,
+        ui_type=UIType.Float,
    )
    scheduler: SAMPLER_NAME_VALUES = InputField(
        default="euler", description=FieldDescriptions.scheduler, input=Input.Direct, ui_type=UIType.Scheduler
@@ -178,14 +175,15 @@ class ONNXTextToLatentsInvocation(BaseInvocation):
        description=FieldDescriptions.unet,
        input=Input.Connection,
    )
-    control: Union[ControlField, list[ControlField]] = InputField(
+    control: Optional[Union[ControlField, list[ControlField]]] = InputField(
        default=None,
        description=FieldDescriptions.control,
+        ui_type=UIType.Control,
    )
    # seamless: bool = InputField(default=False, description="Whether or not to generate an image that can tile without seams", )
    # seamless_axes: str = InputField(default="", description="The axes to tile the image on, 'x' and/or 'y'")

-    @field_validator("cfg_scale")
+    @validator("cfg_scale")
    def ge_one(cls, v):
        """validate that all cfg_scale values are >= 1"""
        if isinstance(v, list):
@ -244,7 +242,7 @@ class ONNXTextToLatentsInvocation(BaseInvocation):
stable_diffusion_step_callback( stable_diffusion_step_callback(
context=context, context=context,
intermediate_state=intermediate_state, intermediate_state=intermediate_state,
node=self.model_dump(), node=self.dict(),
source_node_id=source_node_id, source_node_id=source_node_id,
) )
@ -257,15 +255,12 @@ class ONNXTextToLatentsInvocation(BaseInvocation):
eta=0.0, eta=0.0,
) )
unet_info = context.services.model_manager.get_model(**self.unet.unet.model_dump()) unet_info = context.services.model_manager.get_model(**self.unet.unet.dict())
with unet_info as unet: # , ExitStack() as stack: with unet_info as unet: # , ExitStack() as stack:
# loras = [(stack.enter_context(context.services.model_manager.get_model(**lora.dict(exclude={"weight"}))), lora.weight) for lora in self.unet.loras] # loras = [(stack.enter_context(context.services.model_manager.get_model(**lora.dict(exclude={"weight"}))), lora.weight) for lora in self.unet.loras]
loras = [ loras = [
( (context.services.model_manager.get_model(**lora.dict(exclude={"weight"})).context.model, lora.weight)
context.services.model_manager.get_model(**lora.model_dump(exclude={"weight"})).context.model,
lora.weight,
)
for lora in self.unet.loras for lora in self.unet.loras
] ]
@ -352,7 +347,7 @@ class ONNXLatentsToImageInvocation(BaseInvocation):
raise Exception(f"Expected vae_decoder, found: {self.vae.vae.model_type}") raise Exception(f"Expected vae_decoder, found: {self.vae.vae.model_type}")
vae_info = context.services.model_manager.get_model( vae_info = context.services.model_manager.get_model(
**self.vae.vae.model_dump(), **self.vae.vae.dict(),
) )
# clear memory as vae decode can request a lot # clear memory as vae decode can request a lot
@ -381,7 +376,7 @@ class ONNXLatentsToImageInvocation(BaseInvocation):
node_id=self.id, node_id=self.id,
session_id=context.graph_execution_state_id, session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate, is_intermediate=self.is_intermediate,
metadata=self.metadata.model_dump() if self.metadata else None, metadata=self.metadata.dict() if self.metadata else None,
workflow=self.workflow, workflow=self.workflow,
) )
@ -409,8 +404,6 @@ class OnnxModelField(BaseModel):
base_model: BaseModelType = Field(description="Base model") base_model: BaseModelType = Field(description="Base model")
model_type: ModelType = Field(description="Model Type") model_type: ModelType = Field(description="Model Type")
model_config = ConfigDict(protected_namespaces=())
@invocation("onnx_model_loader", title="ONNX Main Model", tags=["onnx", "model"], category="model", version="1.0.0") @invocation("onnx_model_loader", title="ONNX Main Model", tags=["onnx", "model"], category="model", version="1.0.0")
class OnnxModelLoaderInvocation(BaseInvocation): class OnnxModelLoaderInvocation(BaseInvocation):


@@ -3,6 +3,7 @@ from typing import Literal, Optional
import matplotlib.pyplot as plt
import numpy as np
import PIL.Image
from easing_functions import (
BackEaseIn,
@@ -44,22 +45,13 @@ from invokeai.app.invocations.primitives import FloatCollectionOutput
from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation
- @invocation(
- "float_range",
- title="Float Range",
- tags=["math", "range"],
- category="math",
- version="1.0.0",
- )
+ @invocation("float_range", title="Float Range", tags=["math", "range"], category="math", version="1.0.0")
class FloatLinearRangeInvocation(BaseInvocation):
"""Creates a range"""
start: float = InputField(default=5, description="The first value of the range")
stop: float = InputField(default=10, description="The last value of the range")
- steps: int = InputField(
- default=30,
- description="number of values to interpolate over (including start and stop)",
- )
+ steps: int = InputField(default=30, description="number of values to interpolate over (including start and stop)")
def invoke(self, context: InvocationContext) -> FloatCollectionOutput:
param_list = list(np.linspace(self.start, self.stop, self.steps))
@@ -104,13 +96,7 @@ EASING_FUNCTION_KEYS = Literal[tuple(list(EASING_FUNCTIONS_MAP.keys()))]
# actually I think for now could just use CollectionOutput (which is list[Any]
- @invocation(
- "step_param_easing",
- title="Step Param Easing",
- tags=["step", "easing"],
- category="step",
- version="1.0.0",
- )
+ @invocation("step_param_easing", title="Step Param Easing", tags=["step", "easing"], category="step", version="1.0.0")
class StepParamEasingInvocation(BaseInvocation):
"""Experimental per-step parameter easing for denoising steps"""
@@ -174,9 +160,7 @@ class StepParamEasingInvocation(BaseInvocation):
context.services.logger.debug("base easing duration: " + str(base_easing_duration))
even_num_steps = num_easing_steps % 2 == 0 # even number of steps
easing_function = easing_class(
- start=self.start_value,
- end=self.end_value,
- duration=base_easing_duration - 1,
+ start=self.start_value, end=self.end_value, duration=base_easing_duration - 1
)
base_easing_vals = list()
for step_index in range(base_easing_duration):
@@ -216,11 +200,7 @@ class StepParamEasingInvocation(BaseInvocation):
#
else: # no mirroring (default)
- easing_function = easing_class(
- start=self.start_value,
- end=self.end_value,
- duration=num_easing_steps - 1,
- )
+ easing_function = easing_class(start=self.start_value, end=self.end_value, duration=num_easing_steps - 1)
for step_index in range(num_easing_steps):
step_val = easing_function.ease(step_index)
easing_list.append(step_val)


@@ -226,12 +226,6 @@ class ImageField(BaseModel):
image_name: str = Field(description="The name of the image")
class BoardField(BaseModel):
"""A board primitive field"""
board_id: str = Field(description="The id of the board")
@invocation_output("image_output")
class ImageOutput(BaseInvocationOutput):
"""Base class for nodes that output a single image"""


@@ -3,28 +3,18 @@ from typing import Optional, Union
import numpy as np
from dynamicprompts.generators import CombinatorialPromptGenerator, RandomPromptGenerator
- from pydantic import field_validator
+ from pydantic import validator
from invokeai.app.invocations.primitives import StringCollectionOutput
from .baseinvocation import BaseInvocation, InputField, InvocationContext, UIComponent, invocation
- @invocation(
- "dynamic_prompt",
- title="Dynamic Prompt",
- tags=["prompt", "collection"],
- category="prompt",
- version="1.0.0",
- use_cache=False,
- )
+ @invocation("dynamic_prompt", title="Dynamic Prompt", tags=["prompt", "collection"], category="prompt", version="1.0.0")
class DynamicPromptInvocation(BaseInvocation):
"""Parses a prompt using adieyal/dynamicprompts' random or combinatorial generator"""
- prompt: str = InputField(
- description="The prompt to parse with dynamicprompts",
- ui_component=UIComponent.Textarea,
- )
+ prompt: str = InputField(description="The prompt to parse with dynamicprompts", ui_component=UIComponent.Textarea)
max_prompts: int = InputField(default=1, description="The number of prompts to generate")
combinatorial: bool = InputField(default=False, description="Whether to use the combinatorial generator")
@@ -39,31 +29,21 @@ class DynamicPromptInvocation(BaseInvocation):
return StringCollectionOutput(collection=prompts)
- @invocation(
- "prompt_from_file",
- title="Prompts from File",
- tags=["prompt", "file"],
- category="prompt",
- version="1.0.0",
- )
+ @invocation("prompt_from_file", title="Prompts from File", tags=["prompt", "file"], category="prompt", version="1.0.0")
class PromptsFromFileInvocation(BaseInvocation):
"""Loads prompts from a text file"""
file_path: str = InputField(description="Path to prompt text file")
pre_prompt: Optional[str] = InputField(
- default=None,
- description="String to prepend to each prompt",
- ui_component=UIComponent.Textarea,
+ default=None, description="String to prepend to each prompt", ui_component=UIComponent.Textarea
)
post_prompt: Optional[str] = InputField(
- default=None,
- description="String to append to each prompt",
- ui_component=UIComponent.Textarea,
+ default=None, description="String to append to each prompt", ui_component=UIComponent.Textarea
)
start_line: int = InputField(default=1, ge=1, description="Line in the file to start start from")
max_prompts: int = InputField(default=1, ge=0, description="Max lines to read from file (0=all)")
- @field_validator("file_path")
+ @validator("file_path")
def file_path_exists(cls, v):
if not exists(v):
raise ValueError(FileNotFoundError)
@@ -92,10 +72,6 @@ class PromptsFromFileInvocation(BaseInvocation):
def invoke(self, context: InvocationContext) -> StringCollectionOutput:
prompts = self.promptsFromFile(
- self.file_path,
- self.pre_prompt,
- self.post_prompt,
- self.start_line,
- self.max_prompts,
+ self.file_path, self.pre_prompt, self.post_prompt, self.start_line, self.max_prompts
)
return StringCollectionOutput(collection=prompts)


@@ -1,139 +0,0 @@
# 2023 skunkworxdark (https://github.com/skunkworxdark)
import re
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
InputField,
InvocationContext,
OutputField,
UIComponent,
invocation,
invocation_output,
)
from .primitives import StringOutput
@invocation_output("string_pos_neg_output")
class StringPosNegOutput(BaseInvocationOutput):
"""Base class for invocations that output a positive and negative string"""
positive_string: str = OutputField(description="Positive string")
negative_string: str = OutputField(description="Negative string")
@invocation(
"string_split_neg",
title="String Split Negative",
tags=["string", "split", "negative"],
category="string",
version="1.0.0",
)
class StringSplitNegInvocation(BaseInvocation):
"""Splits string into two strings, inside [] goes into negative string everthing else goes into positive string. Each [ and ] character is replaced with a space"""
string: str = InputField(default="", description="String to split", ui_component=UIComponent.Textarea)
def invoke(self, context: InvocationContext) -> StringPosNegOutput:
p_string = ""
n_string = ""
brackets_depth = 0
escaped = False
for char in self.string or "":
if char == "[" and not escaped:
n_string += " "
brackets_depth += 1
elif char == "]" and not escaped:
brackets_depth -= 1
char = " "
elif brackets_depth > 0:
n_string += char
else:
p_string += char
# keep track of the escape char but only if it isn't escaped already
if char == "\\" and not escaped:
escaped = True
else:
escaped = False
return StringPosNegOutput(positive_string=p_string, negative_string=n_string)
@invocation_output("string_2_output")
class String2Output(BaseInvocationOutput):
"""Base class for invocations that output two strings"""
string_1: str = OutputField(description="string 1")
string_2: str = OutputField(description="string 2")
@invocation("string_split", title="String Split", tags=["string", "split"], category="string", version="1.0.0")
class StringSplitInvocation(BaseInvocation):
"""Splits string into two strings, based on the first occurance of the delimiter. The delimiter will be removed from the string"""
string: str = InputField(default="", description="String to split", ui_component=UIComponent.Textarea)
delimiter: str = InputField(
default="", description="Delimiter to spilt with. blank will split on the first whitespace"
)
def invoke(self, context: InvocationContext) -> String2Output:
result = self.string.split(self.delimiter, 1)
if len(result) == 2:
part1, part2 = result
else:
part1 = result[0]
part2 = ""
return String2Output(string_1=part1, string_2=part2)
@invocation("string_join", title="String Join", tags=["string", "join"], category="string", version="1.0.0")
class StringJoinInvocation(BaseInvocation):
"""Joins string left to string right"""
string_left: str = InputField(default="", description="String Left", ui_component=UIComponent.Textarea)
string_right: str = InputField(default="", description="String Right", ui_component=UIComponent.Textarea)
def invoke(self, context: InvocationContext) -> StringOutput:
return StringOutput(value=((self.string_left or "") + (self.string_right or "")))
@invocation("string_join_three", title="String Join Three", tags=["string", "join"], category="string", version="1.0.0")
class StringJoinThreeInvocation(BaseInvocation):
"""Joins string left to string middle to string right"""
string_left: str = InputField(default="", description="String Left", ui_component=UIComponent.Textarea)
string_middle: str = InputField(default="", description="String Middle", ui_component=UIComponent.Textarea)
string_right: str = InputField(default="", description="String Right", ui_component=UIComponent.Textarea)
def invoke(self, context: InvocationContext) -> StringOutput:
return StringOutput(value=((self.string_left or "") + (self.string_middle or "") + (self.string_right or "")))
@invocation(
"string_replace", title="String Replace", tags=["string", "replace", "regex"], category="string", version="1.0.0"
)
class StringReplaceInvocation(BaseInvocation):
"""Replaces the search string with the replace string"""
string: str = InputField(default="", description="String to work on", ui_component=UIComponent.Textarea)
search_string: str = InputField(default="", description="String to search for", ui_component=UIComponent.Textarea)
replace_string: str = InputField(
default="", description="String to replace the search", ui_component=UIComponent.Textarea
)
use_regex: bool = InputField(
default=False, description="Use search string as a regex expression (non regex is case insensitive)"
)
def invoke(self, context: InvocationContext) -> StringOutput:
pattern = self.search_string or ""
new_string = self.string or ""
if len(pattern) > 0:
if not self.use_regex:
# Non-regex search, so make it case insensitive
pattern = "(?i)" + re.escape(pattern)
new_string = re.sub(pattern, (self.replace_string or ""), new_string)
return StringOutput(value=new_string)


@@ -1,85 +0,0 @@
from typing import Union
from pydantic import BaseModel, ConfigDict, Field
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
FieldDescriptions,
Input,
InputField,
InvocationContext,
OutputField,
UIType,
invocation,
invocation_output,
)
from invokeai.app.invocations.controlnet_image_processors import CONTROLNET_RESIZE_VALUES
from invokeai.app.invocations.primitives import ImageField
from invokeai.backend.model_management.models.base import BaseModelType
class T2IAdapterModelField(BaseModel):
model_name: str = Field(description="Name of the T2I-Adapter model")
base_model: BaseModelType = Field(description="Base model")
model_config = ConfigDict(protected_namespaces=())
class T2IAdapterField(BaseModel):
image: ImageField = Field(description="The T2I-Adapter image prompt.")
t2i_adapter_model: T2IAdapterModelField = Field(description="The T2I-Adapter model to use.")
weight: Union[float, list[float]] = Field(default=1, description="The weight given to the T2I-Adapter")
begin_step_percent: float = Field(
default=0, ge=0, le=1, description="When the T2I-Adapter is first applied (% of total steps)"
)
end_step_percent: float = Field(
default=1, ge=0, le=1, description="When the T2I-Adapter is last applied (% of total steps)"
)
resize_mode: CONTROLNET_RESIZE_VALUES = Field(default="just_resize", description="The resize mode to use")
@invocation_output("t2i_adapter_output")
class T2IAdapterOutput(BaseInvocationOutput):
t2i_adapter: T2IAdapterField = OutputField(description=FieldDescriptions.t2i_adapter, title="T2I Adapter")
@invocation(
"t2i_adapter", title="T2I-Adapter", tags=["t2i_adapter", "control"], category="t2i_adapter", version="1.0.0"
)
class T2IAdapterInvocation(BaseInvocation):
"""Collects T2I-Adapter info to pass to other nodes."""
# Inputs
image: ImageField = InputField(description="The IP-Adapter image prompt.")
t2i_adapter_model: T2IAdapterModelField = InputField(
description="The T2I-Adapter model.",
title="T2I-Adapter Model",
input=Input.Direct,
ui_order=-1,
)
weight: Union[float, list[float]] = InputField(
default=1, ge=0, description="The weight given to the T2I-Adapter", ui_type=UIType.Float, title="Weight"
)
begin_step_percent: float = InputField(
default=0, ge=-1, le=2, description="When the T2I-Adapter is first applied (% of total steps)"
)
end_step_percent: float = InputField(
default=1, ge=0, le=1, description="When the T2I-Adapter is last applied (% of total steps)"
)
resize_mode: CONTROLNET_RESIZE_VALUES = InputField(
default="just_resize",
description="The resize mode applied to the T2I-Adapter input image so that it matches the target output size.",
)
def invoke(self, context: InvocationContext) -> T2IAdapterOutput:
return T2IAdapterOutput(
t2i_adapter=T2IAdapterField(
image=self.image,
t2i_adapter_model=self.t2i_adapter_model,
weight=self.weight,
begin_step_percent=self.begin_step_percent,
end_step_percent=self.end_step_percent,
resize_mode=self.resize_mode,
)
)


@@ -4,15 +4,12 @@ from typing import Literal
import cv2 as cv
import numpy as np
import torch
from basicsr.archs.rrdbnet_arch import RRDBNet
from PIL import Image
from pydantic import ConfigDict
from realesrgan import RealESRGANer
from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
- from invokeai.backend.util.devices import choose_torch_device
+ from invokeai.app.models.image import ImageCategory, ResourceOrigin
from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation
@@ -25,21 +22,13 @@ ESRGAN_MODELS = Literal[
"RealESRGAN_x2plus.pth",
]
if choose_torch_device() == torch.device("mps"):
from torch import mps
@invocation("esrgan", title="Upscale (RealESRGAN)", tags=["esrgan", "upscale"], category="esrgan", version="1.0.0")
@invocation("esrgan", title="Upscale (RealESRGAN)", tags=["esrgan", "upscale"], category="esrgan", version="1.1.0")
class ESRGANInvocation(BaseInvocation):
"""Upscales an image using RealESRGAN."""
image: ImageField = InputField(description="The input image")
model_name: ESRGAN_MODELS = InputField(default="RealESRGAN_x4plus.pth", description="The Real-ESRGAN model to use")
tile_size: int = InputField(
default=400, ge=0, description="Tile size for tiled ESRGAN upscaling (0=tiling disabled)"
)
model_config = ConfigDict(protected_namespaces=())
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
@@ -97,11 +86,9 @@ class ESRGANInvocation(BaseInvocation):
model_path=str(models_path / esrgan_model_path),
model=rrdbnet_model,
half=False,
tile=self.tile_size,
)
# prepare image - Real-ESRGAN uses cv2 internally, and cv2 uses BGR vs RGB for PIL
# TODO: This strips the alpha... is that okay?
cv_image = cv.cvtColor(np.array(image.convert("RGB")), cv.COLOR_RGB2BGR)
# We can pass an `outscale` value here, but it just resizes the image by that factor after
@@ -112,10 +99,6 @@ class ESRGANInvocation(BaseInvocation):
# back to PIL
pil_image = Image.fromarray(cv.cvtColor(upscaled_image, cv.COLOR_BGR2RGB)).convert("RGBA")
torch.cuda.empty_cache()
if choose_torch_device() == torch.device("mps"):
mps.empty_cache()
image_dto = context.services.images.create(
image=pil_image,
image_origin=ResourceOrigin.INTERNAL,


@@ -0,0 +1,4 @@
class CanceledException(Exception):
"""Execution canceled by user."""
pass


@@ -0,0 +1,71 @@
from enum import Enum
from pydantic import BaseModel, Field
from invokeai.app.util.metaenum import MetaEnum
class ProgressImage(BaseModel):
"""The progress image sent intermittently during processing"""
width: int = Field(description="The effective width of the image in pixels")
height: int = Field(description="The effective height of the image in pixels")
dataURL: str = Field(description="The image data as a b64 data URL")
class ResourceOrigin(str, Enum, metaclass=MetaEnum):
"""The origin of a resource (eg image).
- INTERNAL: The resource was created by the application.
- EXTERNAL: The resource was not created by the application.
This may be a user-initiated upload, or an internal application upload (eg Canvas init image).
"""
INTERNAL = "internal"
"""The resource was created by the application."""
EXTERNAL = "external"
"""The resource was not created by the application.
This may be a user-initiated upload, or an internal application upload (eg Canvas init image).
"""
class InvalidOriginException(ValueError):
"""Raised when a provided value is not a valid ResourceOrigin.
Subclasses `ValueError`.
"""
def __init__(self, message="Invalid resource origin."):
super().__init__(message)
class ImageCategory(str, Enum, metaclass=MetaEnum):
"""The category of an image.
- GENERAL: The image is an output, init image, or otherwise an image without a specialized purpose.
- MASK: The image is a mask image.
- CONTROL: The image is a ControlNet control image.
- USER: The image is a user-provided image.
- OTHER: The image is some other type of image with a specialized purpose. To be used by external nodes.
"""
GENERAL = "general"
"""GENERAL: The image is an output, init image, or otherwise an image without a specialized purpose."""
MASK = "mask"
"""MASK: The image is a mask image."""
CONTROL = "control"
"""CONTROL: The image is a ControlNet control image."""
USER = "user"
"""USER: The image is a user-provide image."""
OTHER = "other"
"""OTHER: The image is some other type of image with a specialized purpose. To be used by external nodes."""
class InvalidImageCategoryException(ValueError):
"""Raised when a provided value is not a valid ImageCategory.
Subclasses `ValueError`.
"""
def __init__(self, message="Invalid image category."):
super().__init__(message)


@@ -0,0 +1,215 @@
from abc import ABC, abstractmethod
from itertools import product
from typing import Optional
from uuid import uuid4
from fastapi_events.handlers.local import local_handler
from fastapi_events.typing import Event
from pydantic import BaseModel, Field
from invokeai.app.services.batch_manager_storage import (
Batch,
BatchProcess,
BatchProcessStorageBase,
BatchSession,
BatchSessionChanges,
BatchSessionNotFoundException,
)
from invokeai.app.services.events import EventServiceBase
from invokeai.app.services.graph import Graph, GraphExecutionState
from invokeai.app.services.invoker import Invoker
class BatchProcessResponse(BaseModel):
batch_id: str = Field(description="ID for the batch")
session_ids: list[str] = Field(description="List of session IDs created for this batch")
class BatchManagerBase(ABC):
@abstractmethod
def start(self, invoker: Invoker) -> None:
"""Starts the BatchManager service"""
pass
@abstractmethod
def create_batch_process(self, batch: Batch, graph: Graph) -> BatchProcessResponse:
"""Creates a batch process"""
pass
@abstractmethod
def run_batch_process(self, batch_id: str) -> None:
"""Runs a batch process"""
pass
@abstractmethod
def cancel_batch_process(self, batch_process_id: str) -> None:
"""Cancels a batch process"""
pass
@abstractmethod
def get_batch(self, batch_id: str) -> BatchProcessResponse:
"""Gets a batch process"""
pass
@abstractmethod
def get_batch_processes(self) -> list[BatchProcessResponse]:
"""Gets all batch processes"""
pass
@abstractmethod
def get_incomplete_batch_processes(self) -> list[BatchProcessResponse]:
"""Gets all incomplete batch processes"""
pass
@abstractmethod
def get_sessions(self, batch_id: str) -> list[BatchSession]:
"""Gets the sessions associated with a batch"""
pass
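For orientation, the intended call pattern for this interface is roughly the following minimal sketch (the batch manager, batch and graph are assumed to be constructed elsewhere; the function and variable names are placeholders, not part of the diff):
# --- illustrative sketch, not part of the diff ---
def run_my_batch(batch_manager: BatchManagerBase, batch: Batch, graph: Graph) -> None:
    response = batch_manager.create_batch_process(batch=batch, graph=graph)  # persists the BatchProcess
    batch_manager.run_batch_process(response.batch_id)  # creates and invokes the first session
    # later: inspect progress, or cancel the remaining sessions
    sessions = batch_manager.get_sessions(response.batch_id)
    if any(session.state == "error" for session in sessions):
        batch_manager.cancel_batch_process(response.batch_id)
# --- end sketch ---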
class BatchManager(BatchManagerBase):
"""Responsible for managing currently running and scheduled batch jobs"""
__invoker: Invoker
__batch_process_storage: BatchProcessStorageBase
def __init__(self, batch_process_storage: BatchProcessStorageBase) -> None:
super().__init__()
self.__batch_process_storage = batch_process_storage
def start(self, invoker: Invoker) -> None:
self.__invoker = invoker
local_handler.register(event_name=EventServiceBase.session_event, _func=self.on_event)
async def on_event(self, event: Event):
event_name = event[1]["event"]
match event_name:
case "graph_execution_state_complete":
await self._process(event, False)
case "invocation_error":
await self._process(event, True)
return event
async def _process(self, event: Event, err: bool) -> None:
data = event[1]["data"]
try:
batch_session = self.__batch_process_storage.get_session_by_session_id(data["graph_execution_state_id"])
except BatchSessionNotFoundException:
return None
changes = BatchSessionChanges(state="error" if err else "completed")
batch_session = self.__batch_process_storage.update_session_state(
batch_session.batch_id,
batch_session.session_id,
changes,
)
sessions = self.get_sessions(batch_session.batch_id)
batch_process = self.__batch_process_storage.get(batch_session.batch_id)
if not batch_process.canceled:
self.run_batch_process(batch_process.batch_id)
def _create_graph_execution_state(
self, batch_process: BatchProcess, batch_indices: tuple[int, ...]
) -> GraphExecutionState:
graph = batch_process.graph.copy(deep=True)
batch = batch_process.batch
for index, bdl in enumerate(batch.data):
for bd in bdl:
node = graph.get_node(bd.node_path)
if node is None:
continue
batch_index = batch_indices[index]
datum = bd.items[batch_index]
key = bd.field_name
node.__dict__[key] = datum
graph.update_node(bd.node_path, node)
return GraphExecutionState(graph=graph)
def run_batch_process(self, batch_id: str) -> None:
self.__batch_process_storage.start(batch_id)
batch_process = self.__batch_process_storage.get(batch_id)
next_batch_index = self._get_batch_index_tuple(batch_process)
if next_batch_index is None:
# finished with current run
if batch_process.current_run >= (batch_process.batch.runs - 1):
# finished with all runs
return
batch_process.current_batch_index = 0
batch_process.current_run += 1
next_batch_index = self._get_batch_index_tuple(batch_process)
if next_batch_index is None:
# shouldn't happen; satisfy types
return
# remember to increment the batch index
batch_process.current_batch_index += 1
self.__batch_process_storage.save(batch_process)
ges = self._create_graph_execution_state(batch_process=batch_process, batch_indices=next_batch_index)
next_session = self.__batch_process_storage.create_session(
BatchSession(
batch_id=batch_id,
session_id=str(uuid4()),
state="uninitialized",
batch_index=batch_process.current_batch_index,
)
)
ges.id = next_session.session_id
self.__invoker.services.graph_execution_manager.set(ges)
self.__batch_process_storage.update_session_state(
batch_id=next_session.batch_id,
session_id=next_session.session_id,
changes=BatchSessionChanges(state="in_progress"),
)
self.__invoker.services.events.emit_batch_session_created(next_session.batch_id, next_session.session_id)
self.__invoker.invoke(ges, invoke_all=True)
def create_batch_process(self, batch: Batch, graph: Graph) -> BatchProcessResponse:
batch_process = BatchProcess(
batch=batch,
graph=graph,
)
batch_process = self.__batch_process_storage.save(batch_process)
return BatchProcessResponse(
batch_id=batch_process.batch_id,
session_ids=[],
)
def get_sessions(self, batch_id: str) -> list[BatchSession]:
return self.__batch_process_storage.get_sessions_by_batch_id(batch_id)
def get_batch(self, batch_id: str) -> BatchProcess:
return self.__batch_process_storage.get(batch_id)
def get_batch_processes(self) -> list[BatchProcessResponse]:
bps = self.__batch_process_storage.get_all()
return self._get_batch_process_responses(bps)
def get_incomplete_batch_processes(self) -> list[BatchProcessResponse]:
bps = self.__batch_process_storage.get_incomplete()
return self._get_batch_process_responses(bps)
def cancel_batch_process(self, batch_process_id: str) -> None:
self.__batch_process_storage.cancel(batch_process_id)
def _get_batch_process_responses(self, batch_processes: list[BatchProcess]) -> list[BatchProcessResponse]:
sessions = list()
res: list[BatchProcessResponse] = list()
for bp in batch_processes:
sessions = self.__batch_process_storage.get_sessions_by_batch_id(bp.batch_id)
res.append(
BatchProcessResponse(
batch_id=bp.batch_id,
session_ids=[session.session_id for session in sessions],
)
)
return res
def _get_batch_index_tuple(self, batch_process: BatchProcess) -> Optional[tuple[int, ...]]:
batch_indices = list()
for batchdata in batch_process.batch.data:
batch_indices.append(list(range(len(batchdata[0].items))))
try:
return list(product(*batch_indices))[batch_process.current_batch_index]
except IndexError:
return None
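To make the index arithmetic above concrete, a small sketch (the collection sizes are invented for illustration):
# --- illustrative sketch, not part of the diff ---
from itertools import product

batch_indices = [list(range(3)), list(range(2))]  # two batch data collections with 3 and 2 items
index_tuples = list(product(*batch_indices))
# index_tuples == [(0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1)]
# A current_batch_index of 4 therefore selects (2, 0): item 2 of the first collection
# and item 0 of the second, which is what _get_batch_index_tuple returns for that position.
# --- end sketch ---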


@@ -0,0 +1,707 @@
import sqlite3
import threading
import uuid
from abc import ABC, abstractmethod
from typing import List, Literal, Optional, Union, cast
from pydantic import BaseModel, Extra, Field, StrictFloat, StrictInt, StrictStr, parse_raw_as, validator
from invokeai.app.invocations.primitives import ImageField
from invokeai.app.services.graph import Graph
BatchDataType = Union[StrictStr, StrictInt, StrictFloat, ImageField]
class BatchData(BaseModel):
"""
A batch data collection.
"""
node_path: str = Field(description="The node into which this batch data collection will be substituted.")
field_name: str = Field(description="The field into which this batch data collection will be substituted.")
items: list[BatchDataType] = Field(
default_factory=list, description="The list of items to substitute into the node/field."
)
class Batch(BaseModel):
"""
A batch, consisting of a list of a list of batch data collections.
First, each inner list[BatchData] is zipped into a single batch data collection.
Then, the final batch collection is created by taking the Cartesian product of all batch data collections.
"""
data: list[list[BatchData]] = Field(default_factory=list, description="The list of batch data collections.")
runs: int = Field(default=1, description="Number of times to iterate through all possible batch indices")
@validator("runs")
def validate_positive_runs(cls, r: int):
if r < 1:
raise ValueError("runs must be a positive integer")
return r
@validator("data")
def validate_len(cls, v: list[list[BatchData]]):
for batch_data in v:
if any(len(batch_data[0].items) != len(i.items) for i in batch_data):
raise ValueError("Zipped batch items must have all have same length")
return v
@validator("data")
def validate_types(cls, v: list[list[BatchData]]):
for batch_data in v:
for datum in batch_data:
for item in datum.items:
if not all(isinstance(item, type(i)) for i in datum.items):
raise TypeError("All items in a batch must have have same type")
return v
@validator("data")
def validate_unique_field_mappings(cls, v: list[list[BatchData]]):
paths: set[tuple[str, str]] = set()
count: int = 0
for batch_data in v:
for datum in batch_data:
paths.add((datum.node_path, datum.field_name))
count += 1
if len(paths) != count:
raise ValueError("Each batch data must have unique node_id and field_name")
return v
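To make the zip-then-Cartesian-product semantics described in the Batch docstring concrete, a minimal sketch (node paths, field names and values are invented for the example):
# --- illustrative sketch, not part of the diff ---
example = Batch(
    data=[
        [  # zipped collection: these two item lists must have the same length
            BatchData(node_path="prompt_node", field_name="prompt", items=["a cat", "a dog"]),
            BatchData(node_path="prompt_node", field_name="style", items=["photo", "sketch"]),
        ],
        [  # combined with the zipped collection via the Cartesian product
            BatchData(node_path="noise_node", field_name="seed", items=[1, 2, 3]),
        ],
    ],
    runs=1,
)
# 2 zipped items x 3 seeds = 6 batch index tuples, i.e. 6 sessions per run.
# --- end sketch ---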
def uuid_string():
res = uuid.uuid4()
return str(res)
BATCH_SESSION_STATE = Literal["uninitialized", "in_progress", "completed", "error"]
class BatchSession(BaseModel):
batch_id: str = Field(description="The Batch to which this BatchSession is attached.")
session_id: str = Field(
default_factory=uuid_string, description="The Session to which this BatchSession is attached."
)
batch_index: int = Field(description="The index of this batch session in its parent batch process")
state: BATCH_SESSION_STATE = Field(default="uninitialized", description="The state of this BatchSession")
class BatchProcess(BaseModel):
batch_id: str = Field(default_factory=uuid_string, description="Identifier for this batch.")
batch: Batch = Field(description="The Batch to apply to this session.")
current_batch_index: int = Field(default=0, description="The last executed batch index")
current_run: int = Field(default=0, description="The current run of the batch")
canceled: bool = Field(description="Whether or not to run sessions from this batch.", default=False)
graph: Graph = Field(description="The graph into which batch data will be inserted before being executed.")
class BatchSessionChanges(BaseModel, extra=Extra.forbid):
state: BATCH_SESSION_STATE = Field(description="The state of this BatchSession")
class BatchProcessNotFoundException(Exception):
"""Raised when an Batch Process record is not found."""
def __init__(self, message="BatchProcess record not found"):
super().__init__(message)
class BatchProcessSaveException(Exception):
"""Raised when an Batch Process record cannot be saved."""
def __init__(self, message="BatchProcess record not saved"):
super().__init__(message)
class BatchProcessDeleteException(Exception):
"""Raised when an Batch Process record cannot be deleted."""
def __init__(self, message="BatchProcess record not deleted"):
super().__init__(message)
class BatchSessionNotFoundException(Exception):
"""Raised when an Batch Session record is not found."""
def __init__(self, message="BatchSession record not found"):
super().__init__(message)
class BatchSessionSaveException(Exception):
"""Raised when an Batch Session record cannot be saved."""
def __init__(self, message="BatchSession record not saved"):
super().__init__(message)
class BatchSessionDeleteException(Exception):
"""Raised when an Batch Session record cannot be deleted."""
def __init__(self, message="BatchSession record not deleted"):
super().__init__(message)
class BatchProcessStorageBase(ABC):
"""Low-level service responsible for interfacing with the Batch Process record store."""
@abstractmethod
def delete(self, batch_id: str) -> None:
"""Deletes a BatchProcess record."""
pass
@abstractmethod
def save(
self,
batch_process: BatchProcess,
) -> BatchProcess:
"""Saves a BatchProcess record."""
pass
@abstractmethod
def get(
self,
batch_id: str,
) -> BatchProcess:
"""Gets a BatchProcess record."""
pass
@abstractmethod
def get_all(
self,
) -> list[BatchProcess]:
"""Gets a BatchProcess record."""
pass
@abstractmethod
def get_incomplete(
self,
) -> list[BatchProcess]:
"""Gets a BatchProcess record."""
pass
@abstractmethod
def start(
self,
batch_id: str,
) -> None:
"""'Starts' a BatchProcess record by marking its `canceled` attribute to False."""
pass
@abstractmethod
def cancel(
self,
batch_id: str,
) -> None:
"""'Cancels' a BatchProcess record by setting its `canceled` attribute to True."""
pass
@abstractmethod
def create_session(
self,
session: BatchSession,
) -> BatchSession:
"""Creates a BatchSession attached to a BatchProcess."""
pass
@abstractmethod
def create_sessions(
self,
sessions: list[BatchSession],
) -> list[BatchSession]:
"""Creates many BatchSessions attached to a BatchProcess."""
pass
@abstractmethod
def get_session_by_session_id(self, session_id: str) -> BatchSession:
"""Gets a BatchSession by session_id"""
pass
@abstractmethod
def get_sessions_by_batch_id(self, batch_id: str) -> List[BatchSession]:
"""Gets all BatchSession's for a given BatchProcess id."""
pass
@abstractmethod
def get_sessions_by_session_ids(self, session_ids: list[str]) -> List[BatchSession]:
"""Gets all BatchSession's for a given list of session ids."""
pass
@abstractmethod
def get_next_session(self, batch_id: str) -> BatchSession:
"""Gets the next BatchSession with state `uninitialized`, for a given BatchProcess id."""
pass
@abstractmethod
def update_session_state(
self,
batch_id: str,
session_id: str,
changes: BatchSessionChanges,
) -> BatchSession:
"""Updates the state of a BatchSession record."""
pass
class SqliteBatchProcessStorage(BatchProcessStorageBase):
_conn: sqlite3.Connection
_cursor: sqlite3.Cursor
_lock: threading.Lock
def __init__(self, conn: sqlite3.Connection) -> None:
super().__init__()
self._conn = conn
# Enable row factory to get rows as dictionaries (must be done before making the cursor!)
self._conn.row_factory = sqlite3.Row
self._cursor = self._conn.cursor()
self._lock = threading.Lock()
try:
self._lock.acquire()
# Enable foreign keys
self._conn.execute("PRAGMA foreign_keys = ON;")
self._create_tables()
self._conn.commit()
finally:
self._lock.release()
def _create_tables(self) -> None:
"""Creates the `batch_process` table and `batch_session` junction table."""
# Create the `batch_process` table.
self._cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS batch_process (
batch_id TEXT NOT NULL PRIMARY KEY,
batch TEXT NOT NULL,
graph TEXT NOT NULL,
current_batch_index NUMBER NOT NULL,
current_run NUMBER NOT NULL,
canceled BOOLEAN NOT NULL DEFAULT(0),
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Soft delete, currently unused
deleted_at DATETIME
);
"""
)
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_batch_process_created_at ON batch_process (created_at);
"""
)
# Add trigger for `updated_at`.
self._cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_batch_process_updated_at
AFTER UPDATE
ON batch_process FOR EACH ROW
BEGIN
UPDATE batch_process SET updated_at = current_timestamp
WHERE batch_id = old.batch_id;
END;
"""
)
# Create the `batch_session` junction table.
self._cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS batch_session (
batch_id TEXT NOT NULL,
session_id TEXT NOT NULL,
state TEXT NOT NULL,
batch_index NUMBER NOT NULL,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Soft delete, currently unused
deleted_at DATETIME,
-- enforce one-to-many relationship between batch_process and batch_session using PK
-- (we can extend this to many-to-many later)
PRIMARY KEY (batch_id,session_id),
FOREIGN KEY (batch_id) REFERENCES batch_process (batch_id) ON DELETE CASCADE
);
"""
)
# Add index for batch id
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_batch_session_batch_id ON batch_session (batch_id);
"""
)
# Add index for batch id, sorted by created_at
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_batch_session_batch_id_created_at ON batch_session (batch_id,created_at);
"""
)
# Add trigger for `updated_at`.
self._cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_batch_session_updated_at
AFTER UPDATE
ON batch_session FOR EACH ROW
BEGIN
UPDATE batch_session SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE batch_id = old.batch_id AND session_id = old.session_id;
END;
"""
)
def delete(self, batch_id: str) -> None:
try:
self._lock.acquire()
self._cursor.execute(
"""--sql
DELETE FROM batch_process
WHERE batch_id = ?;
""",
(batch_id,),
)
self._conn.commit()
except sqlite3.Error as e:
self._conn.rollback()
raise BatchProcessDeleteException from e
except Exception as e:
self._conn.rollback()
raise BatchProcessDeleteException from e
finally:
self._lock.release()
def save(
self,
batch_process: BatchProcess,
) -> BatchProcess:
try:
self._lock.acquire()
self._cursor.execute(
"""--sql
INSERT OR REPLACE INTO batch_process (batch_id, batch, graph, current_batch_index, current_run)
VALUES (?, ?, ?, ?, ?);
""",
(
batch_process.batch_id,
batch_process.batch.json(),
batch_process.graph.json(),
batch_process.current_batch_index,
batch_process.current_run,
),
)
self._conn.commit()
except sqlite3.Error as e:
self._conn.rollback()
raise BatchProcessSaveException from e
finally:
self._lock.release()
return self.get(batch_process.batch_id)
def _deserialize_batch_process(self, session_dict: dict) -> BatchProcess:
"""Deserializes a batch session."""
# Retrieve all the values, setting "reasonable" defaults if they are not present.
batch_id = session_dict.get("batch_id", "unknown")
batch_raw = session_dict.get("batch", "unknown")
graph_raw = session_dict.get("graph", "unknown")
current_batch_index = session_dict.get("current_batch_index", 0)
current_run = session_dict.get("current_run", 0)
canceled = session_dict.get("canceled", 0)
return BatchProcess(
batch_id=batch_id,
batch=parse_raw_as(Batch, batch_raw),
graph=parse_raw_as(Graph, graph_raw),
current_batch_index=current_batch_index,
current_run=current_run,
canceled=canceled == 1,
)
def get(
self,
batch_id: str,
) -> BatchProcess:
try:
self._lock.acquire()
self._cursor.execute(
"""--sql
SELECT *
FROM batch_process
WHERE batch_id = ?;
""",
(batch_id,),
)
result = cast(Union[sqlite3.Row, None], self._cursor.fetchone())
except sqlite3.Error as e:
self._conn.rollback()
raise BatchProcessNotFoundException from e
finally:
self._lock.release()
if result is None:
raise BatchProcessNotFoundException
return self._deserialize_batch_process(dict(result))
def get_all(
self,
) -> list[BatchProcess]:
try:
self._lock.acquire()
self._cursor.execute(
"""--sql
SELECT *
FROM batch_process
"""
)
result = cast(list[sqlite3.Row], self._cursor.fetchall())
except sqlite3.Error as e:
self._conn.rollback()
raise BatchProcessNotFoundException from e
finally:
self._lock.release()
if result is None:
return list()
return list(map(lambda r: self._deserialize_batch_process(dict(r)), result))
def get_incomplete(
self,
) -> list[BatchProcess]:
try:
self._lock.acquire()
self._cursor.execute(
"""--sql
SELECT bp.*
FROM batch_process bp
WHERE bp.batch_id IN
(
SELECT batch_id
FROM batch_session bs
WHERE state IN ('uninitialized', 'in_progress')
);
"""
)
result = cast(list[sqlite3.Row], self._cursor.fetchall())
except sqlite3.Error as e:
self._conn.rollback()
raise BatchProcessNotFoundException from e
finally:
self._lock.release()
if result is None:
return list()
return list(map(lambda r: self._deserialize_batch_process(dict(r)), result))
def start(
self,
batch_id: str,
) -> None:
try:
self._lock.acquire()
self._cursor.execute(
"""--sql
UPDATE batch_process
SET canceled = 0
WHERE batch_id = ?;
""",
(batch_id,),
)
self._conn.commit()
except sqlite3.Error as e:
self._conn.rollback()
raise BatchSessionSaveException from e
finally:
self._lock.release()
def cancel(
self,
batch_id: str,
) -> None:
try:
self._lock.acquire()
self._cursor.execute(
"""--sql
UPDATE batch_process
SET canceled = 1
WHERE batch_id = ?;
""",
(batch_id,),
)
self._conn.commit()
except sqlite3.Error as e:
self._conn.rollback()
raise BatchSessionSaveException from e
finally:
self._lock.release()
def create_session(
self,
session: BatchSession,
) -> BatchSession:
try:
self._lock.acquire()
self._cursor.execute(
"""--sql
INSERT OR IGNORE INTO batch_session (batch_id, session_id, state, batch_index)
VALUES (?, ?, ?, ?);
""",
(session.batch_id, session.session_id, session.state, session.batch_index),
)
self._conn.commit()
except sqlite3.Error as e:
self._conn.rollback()
raise BatchSessionSaveException from e
finally:
self._lock.release()
return self.get_session_by_session_id(session.session_id)
def create_sessions(
self,
sessions: list[BatchSession],
) -> list[BatchSession]:
try:
self._lock.acquire()
# one tuple per row; must supply all four columns used by the INSERT below
session_data = [(session.batch_id, session.session_id, session.state, session.batch_index) for session in sessions]
self._cursor.executemany(
"""--sql
INSERT OR IGNORE INTO batch_session (batch_id, session_id, state, batch_index)
VALUES (?, ?, ?, ?);
""",
session_data,
)
self._conn.commit()
except sqlite3.Error as e:
self._conn.rollback()
raise BatchSessionSaveException from e
finally:
self._lock.release()
return self.get_sessions_by_session_ids([session.session_id for session in sessions])
def get_session_by_session_id(self, session_id: str) -> BatchSession:
try:
self._lock.acquire()
self._cursor.execute(
"""--sql
SELECT *
FROM batch_session
WHERE session_id= ?;
""",
(session_id,),
)
result = cast(Union[sqlite3.Row, None], self._cursor.fetchone())
except sqlite3.Error as e:
self._conn.rollback()
raise BatchSessionNotFoundException from e
finally:
self._lock.release()
if result is None:
raise BatchSessionNotFoundException
return self._deserialize_batch_session(dict(result))
def _deserialize_batch_session(self, session_dict: dict) -> BatchSession:
"""Deserializes a batch session."""
return BatchSession.parse_obj(session_dict)
def get_next_session(self, batch_id: str) -> BatchSession:
try:
self._lock.acquire()
self._cursor.execute(
"""--sql
SELECT *
FROM batch_session
WHERE batch_id = ? AND state = 'uninitialized';
""",
(batch_id,),
)
result = cast(Optional[sqlite3.Row], self._cursor.fetchone())
except sqlite3.Error as e:
self._conn.rollback()
raise BatchSessionNotFoundException from e
finally:
self._lock.release()
if result is None:
raise BatchSessionNotFoundException
session = self._deserialize_batch_session(dict(result))
return session
def get_sessions_by_batch_id(self, batch_id: str) -> List[BatchSession]:
try:
self._lock.acquire()
self._cursor.execute(
"""--sql
SELECT *
FROM batch_session
WHERE batch_id = ?;
""",
(batch_id,),
)
result = cast(list[sqlite3.Row], self._cursor.fetchall())
except sqlite3.Error as e:
self._conn.rollback()
raise BatchSessionNotFoundException from e
finally:
self._lock.release()
if result is None:
raise BatchSessionNotFoundException
sessions = list(map(lambda r: self._deserialize_batch_session(dict(r)), result))
return sessions
def get_sessions_by_session_ids(self, session_ids: list[str]) -> List[BatchSession]:
try:
self._lock.acquire()
placeholders = ",".join("?" * len(session_ids))
self._cursor.execute(
f"""--sql
SELECT * FROM batch_session
WHERE session_id
IN ({placeholders})
""",
tuple(session_ids),
)
result = cast(list[sqlite3.Row], self._cursor.fetchall())
except sqlite3.Error as e:
self._conn.rollback()
raise BatchSessionNotFoundException from e
finally:
self._lock.release()
if result is None:
raise BatchSessionNotFoundException
sessions = list(map(lambda r: self._deserialize_batch_session(dict(r)), result))
return sessions
def update_session_state(
self,
batch_id: str,
session_id: str,
changes: BatchSessionChanges,
) -> BatchSession:
try:
self._lock.acquire()
# Change the state of a batch session
if changes.state is not None:
self._cursor.execute(
"""--sql
UPDATE batch_session
SET state = ?
WHERE batch_id = ? AND session_id = ?;
""",
(changes.state, batch_id, session_id),
)
self._conn.commit()
except sqlite3.Error as e:
self._conn.rollback()
raise BatchSessionSaveException from e
finally:
self._lock.release()
return self.get_session_by_session_id(session_id)
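A short usage sketch of the storage class above (in-memory database; it assumes Graph() constructs an empty graph and uses placeholder ids). It also shows that updated_at is maintained by the triggers created in _create_tables, so callers never write that column themselves:
# --- illustrative sketch, not part of the diff ---
import sqlite3

conn = sqlite3.connect(":memory:")
storage = SqliteBatchProcessStorage(conn)  # creates the tables and triggers

bp = storage.save(BatchProcess(batch=Batch(data=[]), graph=Graph()))  # Graph() assumed to be an empty graph
session = storage.create_session(
    BatchSession(batch_id=bp.batch_id, session_id="session-1", state="uninitialized", batch_index=0)
)
storage.update_session_state(bp.batch_id, session.session_id, BatchSessionChanges(state="in_progress"))
row = conn.execute(
    "SELECT state, updated_at FROM batch_session WHERE session_id = ?", (session.session_id,)
).fetchone()
# row["state"] == "in_progress"; updated_at was refreshed by the AFTER UPDATE trigger.
# --- end sketch ---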


@@ -1,27 +1,77 @@
from abc import ABC, abstractmethod
import sqlite3
import threading
from typing import Optional, cast
- from invokeai.app.services.image_records.image_records_common import ImageRecord, deserialize_image_record
- from invokeai.app.services.shared.pagination import OffsetPaginatedResults
- from invokeai.app.services.shared.sqlite import SqliteDatabase
+ from invokeai.app.services.image_record_storage import OffsetPaginatedResults
+ from invokeai.app.services.models.image_record import (
+ ImageRecord,
+ deserialize_image_record,
+ )
from .board_image_records_base import BoardImageRecordStorageBase
class BoardImageRecordStorageBase(ABC):
"""Abstract base class for the one-to-many board-image relationship record storage."""
@abstractmethod
def add_image_to_board(
self,
board_id: str,
image_name: str,
) -> None:
"""Adds an image to a board."""
pass
@abstractmethod
def remove_image_from_board(
self,
image_name: str,
) -> None:
"""Removes an image from a board."""
pass
@abstractmethod
def get_all_board_image_names_for_board(
self,
board_id: str,
) -> list[str]:
"""Gets all board images for a board, as a list of the image names."""
pass
@abstractmethod
def get_board_for_image(
self,
image_name: str,
) -> Optional[str]:
"""Gets an image's board id, if it has one."""
pass
@abstractmethod
def get_image_count_for_board(
self,
board_id: str,
) -> int:
"""Gets the number of images for a board."""
pass
class SqliteBoardImageRecordStorage(BoardImageRecordStorageBase):
_conn: sqlite3.Connection
_cursor: sqlite3.Cursor
- _lock: threading.RLock
+ _lock: threading.Lock
- def __init__(self, db: SqliteDatabase) -> None:
+ def __init__(self, conn: sqlite3.Connection) -> None:
super().__init__()
- self._lock = db.lock
- self._conn = db.conn
+ self._conn = conn
+ # Enable row factory to get rows as dictionaries (must be done before making the cursor!)
self._conn.row_factory = sqlite3.Row
self._cursor = self._conn.cursor()
self._lock = threading.Lock()
try:
self._lock.acquire()
# Enable foreign keys
self._conn.execute("PRAGMA foreign_keys = ON;")
self._create_tables()
self._conn.commit()
finally:


@@ -1,47 +0,0 @@
from abc import ABC, abstractmethod
from typing import Optional
class BoardImageRecordStorageBase(ABC):
"""Abstract base class for the one-to-many board-image relationship record storage."""
@abstractmethod
def add_image_to_board(
self,
board_id: str,
image_name: str,
) -> None:
"""Adds an image to a board."""
pass
@abstractmethod
def remove_image_from_board(
self,
image_name: str,
) -> None:
"""Removes an image from a board."""
pass
@abstractmethod
def get_all_board_image_names_for_board(
self,
board_id: str,
) -> list[str]:
"""Gets all board images for a board, as a list of the image names."""
pass
@abstractmethod
def get_board_for_image(
self,
image_name: str,
) -> Optional[str]:
"""Gets an image's board id, if it has one."""
pass
@abstractmethod
def get_image_count_for_board(
self,
board_id: str,
) -> int:
"""Gets the number of images for a board."""
pass


@@ -0,0 +1,115 @@
from abc import ABC, abstractmethod
from logging import Logger
from typing import Optional
from invokeai.app.services.board_image_record_storage import BoardImageRecordStorageBase
from invokeai.app.services.board_record_storage import (
BoardRecord,
BoardRecordStorageBase,
)
from invokeai.app.services.image_record_storage import ImageRecordStorageBase
from invokeai.app.services.models.board_record import BoardDTO
from invokeai.app.services.urls import UrlServiceBase
class BoardImagesServiceABC(ABC):
"""High-level service for board-image relationship management."""
@abstractmethod
def add_image_to_board(
self,
board_id: str,
image_name: str,
) -> None:
"""Adds an image to a board."""
pass
@abstractmethod
def remove_image_from_board(
self,
image_name: str,
) -> None:
"""Removes an image from a board."""
pass
@abstractmethod
def get_all_board_image_names_for_board(
self,
board_id: str,
) -> list[str]:
"""Gets all board images for a board, as a list of the image names."""
pass
@abstractmethod
def get_board_for_image(
self,
image_name: str,
) -> Optional[str]:
"""Gets an image's board id, if it has one."""
pass
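A quick sketch of how a caller uses this interface (the board and image names are invented):
# --- illustrative sketch, not part of the diff ---
def file_image(board_images: BoardImagesServiceABC, board_id: str, image_name: str) -> None:
    board_images.add_image_to_board(board_id=board_id, image_name=image_name)
    assert board_images.get_board_for_image(image_name) == board_id
    # listing and removal use the same image name
    if image_name in board_images.get_all_board_image_names_for_board(board_id):
        board_images.remove_image_from_board(image_name)
# --- end sketch ---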
class BoardImagesServiceDependencies:
"""Service dependencies for the BoardImagesService."""
board_image_records: BoardImageRecordStorageBase
board_records: BoardRecordStorageBase
image_records: ImageRecordStorageBase
urls: UrlServiceBase
logger: Logger
def __init__(
self,
board_image_record_storage: BoardImageRecordStorageBase,
image_record_storage: ImageRecordStorageBase,
board_record_storage: BoardRecordStorageBase,
url: UrlServiceBase,
logger: Logger,
):
self.board_image_records = board_image_record_storage
self.image_records = image_record_storage
self.board_records = board_record_storage
self.urls = url
self.logger = logger
class BoardImagesService(BoardImagesServiceABC):
_services: BoardImagesServiceDependencies
def __init__(self, services: BoardImagesServiceDependencies):
self._services = services
def add_image_to_board(
self,
board_id: str,
image_name: str,
) -> None:
self._services.board_image_records.add_image_to_board(board_id, image_name)
def remove_image_from_board(
self,
image_name: str,
) -> None:
self._services.board_image_records.remove_image_from_board(image_name)
def get_all_board_image_names_for_board(
self,
board_id: str,
) -> list[str]:
return self._services.board_image_records.get_all_board_image_names_for_board(board_id)
def get_board_for_image(
self,
image_name: str,
) -> Optional[str]:
board_id = self._services.board_image_records.get_board_for_image(image_name)
return board_id
def board_record_to_dto(board_record: BoardRecord, cover_image_name: Optional[str], image_count: int) -> BoardDTO:
"""Converts a board record to a board DTO."""
return BoardDTO(
**board_record.dict(exclude={"cover_image_name"}),
cover_image_name=cover_image_name,
image_count=image_count,
)

Some files were not shown because too many files have changed in this diff.