Merge remote-tracking branch 'origin/main' into feat/taesd

This commit is contained in:
Kevin Turner 2023-09-13 09:08:43 -07:00
commit 090db1ab3a
238 changed files with 6523 additions and 1731 deletions


@ -1,6 +1,4 @@
name: style checks
# just formatting and flake8 for now
# TODO: add isort later
on:
pull_request:
@ -20,8 +18,8 @@ jobs:
- name: Install dependencies with pip
run: |
pip install black flake8 Flake8-pyproject
pip install black flake8 Flake8-pyproject isort
# - run: isort --check-only .
- run: isort --check-only .
- run: black --check .
- run: flake8
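For reference, the checks this workflow now runs (including the newly enabled isort) can be reproduced locally before pushing. This is a minimal sketch, assuming it is run from the repository root with the same tools the job installs:

```bash
# Install the same tools the CI job installs
pip install black flake8 Flake8-pyproject isort

# Run the checks exactly as the workflow does
isort --check-only .
black --check .
flake8
```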


@ -15,3 +15,10 @@ repos:
language: system
entry: flake8
types: [python]
- id: isort
name: isort
stages: [commit]
language: system
entry: isort
types: [python]
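Because these hooks are declared with `language: system`, the underlying tools (here flake8 and isort) must already be installed in your environment. A minimal sketch of enabling and exercising the hooks locally, assuming the standard pre-commit CLI, looks like this:

```bash
# Install pre-commit and register the hooks from .pre-commit-config.yaml
pip install pre-commit
pre-commit install

# Optionally run every hook, including the new isort hook, against the whole tree
pre-commit run --all-files
```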


@ -46,13 +46,13 @@ the foundation for multiple commercial products.
Install](https://invoke-ai.github.io/InvokeAI/installation/INSTALLATION/)] [<a
href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a
href="https://invoke-ai.github.io/InvokeAI/">Documentation and
Tutorials</a>] [<a
href="https://github.com/invoke-ai/InvokeAI/">Code and
Downloads</a>] [<a
href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>]
Tutorials</a>]
[<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>]
[<a
href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion,
Ideas & Q&A</a>]
Ideas & Q&A</a>]
[<a
href="https://invoke-ai.github.io/InvokeAI/contributing/CONTRIBUTING/">Contributing</a>]
<div align="center">
@ -368,9 +368,9 @@ InvokeAI offers a locally hosted Web Server & React Frontend, with an industry l
The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.
### *Node Architecture & Editor (Beta)*
### *Workflows & Nodes*
Invoke AI's backend is built on a graph-based execution architecture. This allows for customizable generation pipelines to be developed by professional users looking to create specific workflows to support their production use-cases, and will be extended in the future with additional capabilities.
InvokeAI offers a fully featured workflow management solution, enabling users to combine the power of node-based workflows with the ease of a UI. This allows for customizable generation pipelines to be developed and shared by users looking to create specific workflows to support their production use-cases.
### *Board & Gallery Management*
@ -383,8 +383,9 @@ Invoke AI provides an organized gallery system for easily storing, accessing, an
- *Upscaling Tools*
- *Embedding Manager & Support*
- *Model Manager & Support*
- *Workflow creation & management*
- *Node-Based Architecture*
- *Node-Based Plug-&-Play UI (Beta)*
### Latest Changes
@ -395,20 +396,18 @@ Notes](https://github.com/invoke-ai/InvokeAI/releases) and the
### Troubleshooting
Please check out our **[Q&A](https://invoke-ai.github.io/InvokeAI/help/TROUBLESHOOT/#faq)** to get solutions for common installation
problems and other issues.
problems and other issues. For more help, please join our [Discord][discord link].
## Contributing
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so.
To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the GitHub discussion board.
If you'd like to help with translation, please see our [translation guide](docs/other/TRANSLATION.md).
Get started with contributing by reading our [Contribution documentation](https://invoke-ai.github.io/InvokeAI/contributing/CONTRIBUTING/), joining the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) or the GitHub discussion board.
If you are unfamiliar with how
to contribute to GitHub projects, here is a
[Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github). A full set of contribution guidelines, along with templates, are in progress. You can **make your pull request against the "main" branch**.
to contribute to GitHub projects, we have a new contributor checklist you can follow to get started contributing:
[New Contributor Checklist](https://invoke-ai.github.io/InvokeAI/contributing/contribution_guides/newContributorChecklist/).
We hope you enjoy using our software as much as we enjoy creating it,
and we hope that some of those of you who are reading this will elect
@ -424,7 +423,7 @@ their time, hard work and effort.
### Support
For support, please use this repository's GitHub Issues tracking service, or join the Discord.
For support, please use this repository's GitHub Issues tracking service, or join the [Discord][discord link].
Original portions of the software are Copyright (c) 2023 by respective contributors.

Binary image files changed (not shown). Nine existing images were replaced with smaller versions (490→228, 319→194, 217→209, 244→114, 948→187, 292→112, 420→132, 197→167 and 216→70 KiB); three new images were added (59, 64 and 42 KiB).


@ -1,39 +1,41 @@
# How to Contribute
# Contributing
## Welcome to Invoke AI
Invoke AI originated as a project built by the community, and that vision carries forward today as we aim to build the best pro-grade tools available. We work together to incorporate the latest in AI/ML research, making these tools available in over 20 languages to artists and creatives around the world as part of our fully permissive OSS project designed for individual users to self-host and use.
## Contributing to Invoke AI
# Methods of Contributing to Invoke AI
Anyone who wishes to contribute to InvokeAI, whether features, bug fixes, code cleanup, testing, code reviews, documentation or translation is very much encouraged to do so.
To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the GitHub discussion board.
## Development
If you'd like to help with development, please see our [development guide](contribution_guides/development.md).
### Areas of contribution:
**New Contributors:** If you're unfamiliar with contributing to open source projects, take a look at our [new contributor guide](contribution_guides/newContributorChecklist.md).
#### Development
If you'd like to help with development, please see our [development guide](contribution_guides/development.md). If you're unfamiliar with contributing to open source projects, there is a tutorial contained within the development guide.
## Nodes
If you'd like to add a Node, please see our [nodes contribution guide](../nodes/contributingNodes.md).
#### Nodes
If you'd like to help with development, please see our [nodes contribution guide](/nodes/contributingNodes). If you're unfamiliar with contributing to open source projects, there is a tutorial contained within the development guide.
## Support and Triaging
Helping support other users in [Discord](https://discord.gg/ZmtBAhwWhy) and on GitHub is a valuable form of contribution that we greatly appreciate.
#### Documentation
We receive many issues and requests for help from users. We're limited in bandwidth relative to the user base, so providing answers to questions or helping identify causes of issues is very helpful. By doing this, you enable us to spend time on the highest priority work.
## Documentation
If you'd like to help with documentation, please see our [documentation guide](contribution_guides/documentation.md).
#### Translation
## Translation
If you'd like to help with translation, please see our [translation guide](contribution_guides/translation.md).
#### Tutorials
## Tutorials
Please reach out to @imic or @hipsterusername on [Discord](https://discord.gg/ZmtBAhwWhy) to help create tutorials for InvokeAI.
We hope you enjoy using our software as much as we enjoy creating it, and we hope that some of those of you who are reading this will elect to become part of our contributor community.
### Contributors
# Contributors
This project is a combined effort of dedicated people from across the world. [Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for their time, hard work and effort.
### Code of Conduct
# Code of Conduct
The InvokeAI community is a welcoming place, and we want your help in maintaining that. Please review our [Code of Conduct](https://github.com/invoke-ai/InvokeAI/blob/main/CODE_OF_CONDUCT.md) to learn more - it's essential to maintaining a respectful and inclusive environment.
@ -47,8 +49,7 @@ By making a contribution to this project, you certify that:
This disclaimer is not a license and does not grant any rights or permissions. You must obtain necessary permissions and licenses, including from third parties, before contributing to this project.
This disclaimer is provided "as is" without warranty of any kind, whether expressed or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, or non-infringement. In no event shall the authors or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the contribution or the use or other dealings in the contribution.
### Support
# Support
For support, please use this repository's [GitHub Issues](https://github.com/invoke-ai/InvokeAI/issues), or join the [Discord](https://discord.gg/ZmtBAhwWhy).


@ -4,14 +4,21 @@
If you are looking to help with a code contribution, InvokeAI uses several different technologies under the hood: Python (Pydantic, FastAPI, diffusers) and Typescript (React, Redux Toolkit, ChakraUI, Mantine, Konva). Familiarity with StableDiffusion and image generation concepts is helpful, but not essential.
For more information, please review our area specific documentation:
## **Get Started**
To get started, take a look at our [new contributors checklist](newContributorChecklist.md)
Once you're set up, you can review the documentation specific to your area of interest:
* #### [InvokeAI Architecture](../ARCHITECTURE.md)
* #### [Frontend Documentation](development_guides/contributingToFrontend.md)
* #### [Node Documentation](../INVOCATIONS.md)
* #### [Local Development](../LOCAL_DEVELOPMENT.md)
If you don't feel ready to make a code contribution yet, no problem! You can also help out in other ways, such as [documentation](documentation.md) or [translation](translation.md).
If you don't feel ready to make a code contribution yet, no problem! You can also help out in other ways, such as [documentation](documentation.md), [translation](translation.md) or helping support other users and triage issues as they're reported in GitHub.
There are two paths to making a development contribution:
@ -23,60 +30,10 @@ There are two paths to making a development contribution:
## Best Practices:
* Keep your pull requests small. Smaller pull requests are more likely to be accepted and merged
* Comments! Commenting your code helps reviwers easily understand your contribution
* Comments! Commenting your code helps reviewers easily understand your contribution
* Use Python and Typescript's typing systems, and consider using an editor with [LSP](https://microsoft.github.io/language-server-protocol/) support to streamline development
* Make all communications public. This ensures knowledge is shared with the whole community
## **How do I make a contribution?**
Never made an open source contribution before? Wondering how contributions work in our project? Here's a quick rundown!
Before starting these steps, ensure you have your local environment [configured for development](../LOCAL_DEVELOPMENT.md).
1. Find a [good first issue](https://github.com/invoke-ai/InvokeAI/contribute) that you are interested in addressing or a feature that you would like to add. Then, reach out to our team in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord to ensure you are setup for success.
2. Fork the [InvokeAI](https://github.com/invoke-ai/InvokeAI) repository to your GitHub profile. This means that you will have a copy of the repository under **your-GitHub-username/InvokeAI**.
3. Clone the repository to your local machine using:
```bash
git clone https://github.com/your-GitHub-username/InvokeAI.git
```
If you're unfamiliar with using Git through the commandline, [GitHub Desktop](https://desktop.github.com) is a easy-to-use alternative with a UI. You can do all the same steps listed here, but through the interface.
4. Create a new branch for your fix using:
```bash
git checkout -b branch-name-here
```
5. Make the appropriate changes for the issue you are trying to address or the feature that you want to add.
6. Add the file contents of the changed files to the "snapshot" git uses to manage the state of the project, also known as the index:
```bash
git add insert-paths-of-changed-files-here
```
7. Store the contents of the index with a descriptive message.
```bash
git commit -m "Insert a short message of the changes made here"
```
8. Push the changes to the remote repository using
```markdown
git push origin branch-name-here
```
9. Submit a pull request to the **main** branch of the InvokeAI repository.
10. Title the pull request with a short description of the changes made and the issue or bug number associated with your change. For example, you can title an issue like so "Added more log outputting to resolve #1234".
11. In the description of the pull request, explain the changes that you made, any issues you think exist with the pull request you made, and any questions you have for the maintainer. It's OK if your pull request is not perfect (no pull request is), the reviewer will be able to help you fix any problems and improve it!
12. Wait for the pull request to be reviewed by other collaborators.
13. Make changes to the pull request if the reviewer(s) recommend them.
14. Celebrate your success after your pull request is merged!
If youd like to learn more about contributing to Open Source projects, here is a [Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github).
## **Where can I go for help?**
If you need help, you can ask questions in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord.
@ -85,6 +42,7 @@ For frontend related work, **@pyschedelicious** is the best person to reach out
For backend related work, please reach out to **@blessedcoolant**, **@lstein**, **@StAlKeR7779** or **@pyschedelicious**.
## **What does the Code of Conduct mean for me?**
Our [Code of Conduct](CODE_OF_CONDUCT.md) means that you are responsible for treating everyone on the project with respect and courtesy regardless of their identity. If you are the victim of any inappropriate behavior or comments as described in our Code of Conduct, we are here for you and will do our best to ensure that the abuser is reprimanded appropriately, per our code.


@ -0,0 +1,68 @@
# New Contributor Guide
If you're a new contributor to InvokeAI or Open Source Projects, this is the guide for you.
## New Contributor Checklist
- [x] Set up your local development environment & fork of InvokeAI by following [the steps outlined here](../../installation/020_INSTALL_MANUAL.md#developer-install)
- [x] Set up your local tooling with [this guide](InvokeAI/contributing/LOCAL_DEVELOPMENT/#developing-invokeai-in-vscode). Feel free to skip this step if you already have tooling you're comfortable with.
- [x] Familiarize yourself with [Git](https://www.atlassian.com/git) & our project structure by reading through the [development documentation](development.md)
- [x] Join the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord
- [x] Choose an issue to work on! This can be achieved by asking in the #dev-chat channel, tackling a [good first issue](https://github.com/invoke-ai/InvokeAI/contribute) or finding an item on the [roadmap](https://github.com/orgs/invoke-ai/projects/7). If nothing in any of those places catches your eye, feel free to work on something of interest to you!
- [x] Make your first Pull Request with the guide below
- [x] Happy development! Don't be afraid to ask for help - we're happy to help you contribute!
## How do I make a contribution?
Never made an open source contribution before? Wondering how contributions work in our project? Here's a quick rundown!
Before starting these steps, ensure you have your local environment [configured for development](../LOCAL_DEVELOPMENT.md).
1. Find a [good first issue](https://github.com/invoke-ai/InvokeAI/contribute) that you are interested in addressing or a feature that you would like to add. Then, reach out to our team in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord to ensure you are set up for success.
2. Fork the [InvokeAI](https://github.com/invoke-ai/InvokeAI) repository to your GitHub profile. This means that you will have a copy of the repository under **your-GitHub-username/InvokeAI**.
3. Clone the repository to your local machine using:
```bash
git clone https://github.com/your-GitHub-username/InvokeAI.git
```
If you're unfamiliar with using Git through the command line, [GitHub Desktop](https://desktop.github.com) is an easy-to-use alternative with a UI. You can do all the same steps listed here, but through the interface.
4. Create a new branch for your fix using:
```bash
git checkout -b branch-name-here
```
5. Make the appropriate changes for the issue you are trying to address or the feature that you want to add.
6. Add the file contents of the changed files to the "snapshot" git uses to manage the state of the project, also known as the index:
```bash
git add -A
```
7. Store the contents of the index with a descriptive message.
```bash
git commit -m "Insert a short message of the changes made here"
```
8. Push the changes to the remote repository using
```bash
git push origin branch-name-here
```
9. Submit a pull request to the **main** branch of the InvokeAI repository. If you're not sure how to, [follow this guide](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request)
10. Title the pull request with a short description of the changes made and the issue or bug number associated with your change. For example, you can title a pull request like so: "Added more log outputting to resolve #1234".
11. In the description of the pull request, explain the changes that you made, any issues you think exist with the pull request you made, and any questions you have for the maintainer. It's OK if your pull request is not perfect (no pull request is), the reviewer will be able to help you fix any problems and improve it!
12. Wait for the pull request to be reviewed by other collaborators.
13. Make changes to the pull request if the reviewer(s) recommend them.
14. Celebrate your success after your pull request is merged!
If you'd like to learn more about contributing to Open Source projects, here is a [Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github).
## Best Practices:
* Keep your pull requests small. Smaller pull requests are more likely to be accepted and merged
* Comments! Commenting your code helps reviewers easily understand your contribution
* Use Python and Typescript's typing systems, and consider using an editor with [LSP](https://microsoft.github.io/language-server-protocol/) support to streamline development
* Make all communications public. This ensures knowledge is shared with the whole community
## **Where can I go for help?**
If you need help, you can ask questions in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord.
For frontend related work, **@pyschedelicious** is the best person to reach out to.
For backend related work, please reach out to **@blessedcoolant**, **@lstein**, **@StAlKeR7779** or **@pyschedelicious**.


@ -21,8 +21,8 @@ TI files that you'll encounter are `.pt` and `.bin` files, which are produced by
different TI training packages. InvokeAI supports both formats, but its
[built-in TI training system](TRAINING.md) produces `.pt`.
The [Hugging Face company](https://huggingface.co/sd-concepts-library) has
amassed a large ligrary of >800 community-contributed TI files covering a
[Hugging Face](https://huggingface.co/sd-concepts-library) has
amassed a large library of >800 community-contributed TI files covering a
broad range of subjects and styles. You can also install your own or others' TI files
by placing them in the designated directory for the compatible model type


@ -104,7 +104,7 @@ The OpenPose control model allows for the identification of the general pose of
The MediaPipe Face identification processor is able to clearly identify facial features in order to capture vivid expressions of human faces.
**Tile (experimental)**:
**Tile**:
The Tile model fills out details in the image to match the image, rather than the prompt. The Tile Model is a versatile tool that offers a range of functionalities. Its primary capabilities can be boiled down to two main behaviors:
@ -117,8 +117,6 @@ The Tile Model can be a powerful tool in your arsenal for enhancing image qualit
With Pix2Pix, you can input an image into the controlnet, and then "instruct" the model to change it using your prompt. For example, you can say "Make it winter" to add more wintry elements to a scene.
**Inpaint**: Coming Soon - Currently this model is available but not functional on the Canvas. An upcoming release will provide additional capabilities for using this model when inpainting.
Each of these models can be adjusted and combined with other ControlNet models to achieve different results, giving you even more control over your image generation process.


@ -2,17 +2,50 @@
title: Model Merging
---
# :material-image-off: Model Merging
## How to Merge Models
As of version 2.3, InvokeAI comes with a script that allows you to
merge two or three diffusers-type models into a new merged model. The
InvokeAI provides the ability to merge two or three diffusers-type models into a new merged model. The
resulting model will combine characteristics of the original, and can
be used to teach an old model new tricks.
## How to Merge Models
Model Merging can be done by navigating to the Model Manager and clicking the "Merge Models" tab. From there, you can select the models and settings you want to use to merge the models.
## Settings
* Model Selection: there are three multiple choice fields that
display all the diffusers-style models that InvokeAI knows about.
If you do not see the model you are looking for, then it is probably
a legacy checkpoint model and needs to be converted using the
`invoke` command-line client and its `!optimize` command. You
must select at least two models to merge. The third can be left at
"None" if you desire.
* Alpha: This is the ratio to use when combining models. It ranges
from 0 to 1. The higher the value, the more weight is given to the
2nd and (optionally) 3rd models. So if you have two models named "A"
and "B", an alpha value of 0.25 will give you a merged model that is
75% A and 25% B (see the formula sketch after this list).
* Interpolation Method: This is the method used to combine
weights. The options are "weighted_sum" (the default), "sigmoid",
"inv_sigmoid" and "add_difference". Each produces slightly different
results. When three models are in use, only "add_difference" is
available.
* Save Location: The location you want the merged model to be saved in. Default is in the InvokeAI root folder
* Name for merged model: This is the name for the new model. Please
use InvokeAI conventions - only alphanumeric letters and the
characters ".+-".
* Ignore Mismatches / Force: Not all models are compatible with each other. The merge
script will check for compatibility and refuse to merge ones that
are incompatible. Set this checkbox to try merging anyway.
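To make the Alpha setting concrete, here is a rough sketch of the default "weighted_sum" interpolation for two models, written as a linear blend of their weights; the exact formula used by the merge backend may differ, so treat this as illustrative only:

```latex
\theta_{\text{merged}} = (1 - \alpha)\,\theta_{A} + \alpha\,\theta_{B}
```

Under this reading, an Alpha of 0.25 keeps 75% of model A's weights and blends in 25% of model B's, which is why higher values shift the result toward the 2nd (and 3rd) models.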
You may run the merge script by starting the invoke launcher
(`invoke.sh` or `invoke.bat`) and choosing the option for _merge
(`invoke.sh` or `invoke.bat`) and choosing the option (4) for _merge
models_. This will launch a text-based interactive user interface that
prompts you to select the models to merge, how to merge them, and the
merged model name.
@ -40,34 +73,4 @@ this to get back.
If the merge runs successfully, it will create a new diffusers model
under the selected name and register it with InvokeAI.
## The Settings
* Model Selection -- there are three multiple choice fields that
display all the diffusers-style models that InvokeAI knows about.
If you do not see the model you are looking for, then it is probably
a legacy checkpoint model and needs to be converted using the
`invoke` command-line client and its `!optimize` command. You
must select at least two models to merge. The third can be left at
"None" if you desire.
* Alpha -- This is the ratio to use when combining models. It ranges
from 0 to 1. The higher the value, the more weight is given to the
2d and (optionally) 3d models. So if you have two models named "A"
and "B", an alpha value of 0.25 will give you a merged model that is
25% A and 75% B.
* Interpolation Method -- This is the method used to combine
weights. The options are "weighted_sum" (the default), "sigmoid",
"inv_sigmoid" and "add_difference". Each produces slightly different
results. When three models are in use, only "add_difference" is
available. (TODO: cite a reference that describes what these
interpolation methods actually do and how to decide among them).
* Force -- Not all models are compatible with each other. The merge
script will check for compatibility and refuse to merge ones that
are incompatible. Set this checkbox to try merging anyway.
* Name for merged model - This is the name for the new model. Please
use InvokeAI conventions - only alphanumeric letters and the
characters ".+-".


@ -142,7 +142,7 @@ Prompt2prompt `.swap()` is not compatible with xformers, which will be temporari
The `prompt2prompt` code is based off
[bloc97's colab](https://github.com/bloc97/CrossAttentionControl).
### Escaping parentheses () and speech marks ""
### Escaping parentheses and speech marks
If the model you are using has parentheses () or speech marks "" as part of its
syntax, you will need to "escape" these using a backslash, so that `(my_keyword)`
@ -246,7 +246,7 @@ To create a Dynamic Prompt, follow these steps:
Within the braces, separate each option using a vertical bar |.
If you want to include multiple options from a single group, prefix with the desired number and $$.
For instance: A {house|apartment|lodge|cottage} in {summer|winter|autumn|spring} designed in {2$$style1|style2|style3}.
For instance: A {house|apartment|lodge|cottage} in {summer|winter|autumn|spring} designed in {style1|style2|style3}.
### How Dynamic Prompts Work
Once a Dynamic Prompt is configured, the system generates an array of combinations using the options provided. Each group of options in curly braces is treated independently, with the system selecting one option from each group. For a prefixed set (e.g., 2$$), the system will select two distinct options.
@ -273,3 +273,36 @@ Below are some useful strategies for creating Dynamic Prompts:
Experiment with different quantities for the prefix. For example, 3$$ will select three distinct options.
Be aware of coherence in your prompts. Although the system can generate all possible combinations, not all may semantically make sense. Therefore, carefully choose the options for each group.
Always review and fine-tune the generated prompts as needed. While Dynamic Prompts can help you generate a multitude of combinations, the final polishing and refining remain in your hands.
## SDXL Prompting
Prompting with SDXL is slightly different than prompting with SD1.5 or SD2.1 models - SDXL expects a prompt _and_ a style.
### Prompting
<figure markdown>
![SDXL prompt boxes in InvokeAI](../assets/prompt_syntax/sdxl-prompt.png)
</figure>
In the prompt box, enter a positive or negative prompt as you normally would.
For the style box you can enter a style that you want the image to be generated in. You can use styles from this example list, or any other style you wish: anime, photographic, digital art, comic book, fantasy art, analog film, neon punk, isometric, low poly, origami, line art, cinematic, 3d model, pixel art, etc.
### Concatenated Prompts
InvokeAI also has the option to concatenate the prompt and style inputs, by pressing the "link" button in the Positive Prompt box.
This concatenates the prompt & style inputs, and passes the joined prompt and style to the SDXL model.
![SDXL concatenated prompt boxes in InvokeAI](../assets/prompt_syntax/sdxl-prompt-concatenated.png)


@ -43,27 +43,22 @@ into the directory
InvokeAI 2.3 and higher comes with a text console-based training front
end. From within the `invoke.sh`/`invoke.bat` Invoke launcher script,
start the front end by selecting choice (3):
start the training tool by selecting choice (3):
```sh
Do you want to generate images using the
1: Browser-based UI
2: Command-line interface
3: Run textual inversion training
4: Merge models (diffusers type only)
5: Download and install models
6: Change InvokeAI startup options
7: Re-run the configure script to fix a broken install
8: Open the developer console
9: Update InvokeAI
10: Command-line help
Q: Quit
Please enter 1-10, Q: [1]
1 "Generate images with a browser-based interface"
2 "Explore InvokeAI nodes using a command-line interface"
3 "Textual inversion training"
4 "Merge models (diffusers type only)"
5 "Download and install models"
6 "Change InvokeAI startup options"
7 "Re-run the configure script to fix a broken install or to complete a major upgrade"
8 "Open the developer console"
9 "Update InvokeAI"
```
From the command line, with the InvokeAI virtual environment active,
you can launch the front end with the command `invokeai-ti --gui`.
Alternatively, you can select option (8) to open the developer console. From the command line, with the InvokeAI virtual environment active,
you can then launch the front end with the command `invokeai-ti --gui`.
This will launch a text-based front end that will look like this:


@ -287,7 +287,7 @@ manager, please follow these steps:
Leave off the `--gui` option to run the script using command-line arguments. Pass the `--help` argument
to get usage instructions.
### Developer Install
## Developer Install
If you have an interest in how InvokeAI works, or you would like to
add features or bugfixes, you are encouraged to install the source
@ -296,13 +296,14 @@ code for InvokeAI. For this to work, you will need to install the
on your system, please see the [Git Installation
Guide](https://github.com/git-guides/install-git)
1. Create a fork of the InvokeAI repository through the GitHub UI or [this link](https://github.com/invoke-ai/InvokeAI/fork)
1. From the command line, run this command:
```bash
git clone https://github.com/invoke-ai/InvokeAI.git
git clone https://github.com/<your_github_username>/InvokeAI.git
```
This will create a directory named `InvokeAI` and populate it with the
full source code from the InvokeAI repository.
full source code from your fork of the InvokeAI repository.
2. Activate the InvokeAI virtual environment as per step (4) of the manual
installation protocol (important!)
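As a minimal sketch of what these steps might look like on Linux/macOS (the `.venv` location and the editable `pip install -e .` are assumptions here; the manual installation guide remains authoritative):

```bash
cd InvokeAI

# Activate the virtual environment created during the manual install
# (on Windows: .venv\Scripts\activate)
source .venv/bin/activate

# Install your working copy in editable mode so local changes take effect immediately
pip install -e .
```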


@ -17,14 +17,32 @@ This fork is supported across Linux, Windows and Macintosh. Linux users can use
either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm
driver).
### [Installation Getting Started Guide](installation)
#### **[Automated Installer](010_INSTALL_AUTOMATED.md)**
## **[Automated Installer](010_INSTALL_AUTOMATED.md)**
✅ This is the recommended installation method for first-time users.
#### [Manual Installation](020_INSTALL_MANUAL.md)
This method is recommended for experienced users and developers
#### [Docker Installation](040_INSTALL_DOCKER.md)
This method is recommended for those familiar with running Docker containers
### Other Installation Guides
This is a script that will install all of InvokeAI's essential
third party libraries and InvokeAI itself. It includes access to a
"developer console" which will help us debug problems with you and
give you access to experimental features.
## **[Manual Installation](020_INSTALL_MANUAL.md)**
This method is recommended for experienced users and developers.
In this method you will manually run the commands needed to install
InvokeAI and its dependencies. We offer two recipes: one suited to
those who prefer the `conda` tool, and one suited to those who prefer
`pip` and Python virtual environments. In our hands the pip install
is faster and more reliable, but your mileage may vary.
Note that the conda installation method is currently deprecated and
will not be supported at some point in the future.
## **[Docker Installation](040_INSTALL_DOCKER.md)**
This method is recommended for those familiar with running Docker containers.
We offer a method for creating Docker containers containing InvokeAI and its dependencies. This method is recommended for individuals who have experience with Docker containers and understand the pluses and minuses of a container-based install.
## Other Installation Guides
- [PyPatchMatch](060_INSTALL_PATCHMATCH.md)
- [XFormers](070_INSTALL_XFORMERS.md)
- [CUDA and ROCm Drivers](030_INSTALL_CUDA_AND_ROCM.md)
@ -63,43 +81,3 @@ images in full-precision mode:
- GTX 1650 series cards
- GTX 1660 series cards
## Installation options
1. [Automated Installer](010_INSTALL_AUTOMATED.md)
This is a script that will install all of InvokeAI's essential
third party libraries and InvokeAI itself. It includes access to a
"developer console" which will help us debug problems with you and
give you to access experimental features.
✅ This is the recommended option for first time users.
2. [Manual Installation](020_INSTALL_MANUAL.md)
In this method you will manually run the commands needed to install
InvokeAI and its dependencies. We offer two recipes: one suited to
those who prefer the `conda` tool, and one suited to those who prefer
`pip` and Python virtual environments. In our hands the pip install
is faster and more reliable, but your mileage may vary.
Note that the conda installation method is currently deprecated and
will not be supported at some point in the future.
This method is recommended for users who have previously used `conda`
or `pip` in the past, developers, and anyone who wishes to remain on
the cutting edge of future InvokeAI development and is willing to put
up with occasional glitches and breakage.
3. [Docker Installation](040_INSTALL_DOCKER.md)
We also offer a method for creating Docker containers containing
InvokeAI and its dependencies. This method is recommended for
individuals with experience with Docker containers and understand
the pluses and minuses of a container-based install.
## Quick Guides
* [Installing CUDA and ROCm Drivers](./030_INSTALL_CUDA_AND_ROCM.md)
* [Installing XFormers](./070_INSTALL_XFORMERS.md)
* [Installing PyPatchMatch](./060_INSTALL_PATCHMATCH.md)
* [Installing New Models](./050_INSTALLING_MODELS.md)


@ -1,13 +1,32 @@
# Using the Node Editor
# Using the Workflow Editor
The nodes editor is a blank canvas allowing for the use of individual functions and image transformations to control the image generation workflow. Nodes take in inputs on the left side of the node, and return an output on the right side of the node. A node graph is composed of multiple nodes that are connected together to create a workflow. Nodes' inputs and outputs are connected by dragging connectors from node to node. Inputs and outputs are color coded for ease of use.
The workflow editor is a blank canvas allowing for the use of individual functions and image transformations to control the image generation workflow. Nodes take in inputs on the left side of the node, and return an output on the right side of the node. A node graph is composed of multiple nodes that are connected together to create a workflow. Nodes' inputs and outputs are connected by dragging connectors from node to node. Inputs and outputs are color coded for ease of use.
To better understand how nodes are used, think of how an electric power bar works. It takes in one input (electricity from a wall outlet) and passes it to multiple devices through multiple outputs. Similarly, a node could have multiple inputs and outputs functioning at the same (or different) time, but all node outputs pass information onward like a power bar passes electricity. Not all outputs are compatible with all inputs, however - Each node has different constraints on how it is expecting to input/output information. In general, node outputs are colour-coded to match compatible inputs of other nodes.
If you're not familiar with Diffusion, take a look at our [Diffusion Overview.](../help/diffusion.md) Understanding how diffusion works will enable you to more easily use the Workflow Editor and build workflows to suit your needs.
## UI Features
### Linear View
The Workflow Editor allows you to create a UI for your workflow, to make it easier to iterate on your generations.
To add an input to the Linear UI, right click on the input and select "Add to Linear View".
The Linear UI View will also be part of the saved workflow, allowing you to share workflows and enable others to use them, regardless of complexity.
![linearview](../assets/nodes/linearview.png)
### Renaming Fields and Nodes
Any node or input field can be renamed in the workflow editor. If the input field you have renamed has been added to the Linear View, the changed name will be reflected in the Linear View and the node.
### Managing Nodes
* Ctrl+C to copy a node
* Ctrl+V to paste a node
* Backspace/Delete to delete a node
* Shift+Click to drag and select multiple nodes
If you're not familiar with Diffusion, take a look at our [Diffusion Overview.](../help/diffusion.md) Understanding how diffusion works will enable you to more easily use the Nodes Editor and build workflows to suit your needs.
## Important Concepts
## Important Concepts
There are several node grouping concepts that can be examined with a narrow focus. These (and other) groupings can be pieced together to make up functional graph setups, and are important to understanding how groups of nodes work together as part of a whole. Note that the screenshots below aren't examples of complete functioning node graphs (see Examples).
@ -37,7 +56,7 @@ It is common to want to use both the same seed (for continuity) and random seeds
### ControlNet
The ControlNet node outputs a Control, which can be provided as input to non-image *ToLatents nodes. Depending on the type of ControlNet desired, ControlNet nodes usually require an image processor node, such as a Canny Processor or Depth Processor, which prepares an input image for use with ControlNet.
The ControlNet node outputs a Control, which can be provided as input to a Denoise Latents node. Depending on the type of ControlNet desired, ControlNet nodes usually require an image processor node, such as a Canny Processor or Depth Processor, which prepares an input image for use with ControlNet.
![groupscontrol](../assets/nodes/groupscontrol.png)
@ -59,10 +78,9 @@ Iteration is a common concept in any processing, and means to repeat a process w
![groupsiterate](../assets/nodes/groupsiterate.png)
### Multiple Image Generation + Random Seeds
### Batch / Multiple Image Generation + Random Seeds
Multiple image generation in the node editor is done using the RandomRange node. In this case, the 'Size' field represents the number of images to generate. As RandomRange produces a collection of integers, we need to add the Iterate node to iterate through the collection.
To control seeds across generations takes some care. The first row in the screenshot will generate multiple images with different seeds, but using the same RandomRange parameters across invocations will result in the same group of random seeds being used across the images, producing repeatable results. In the second row, adding the RandomInt node as input to RandomRange's 'Seed' edge point will ensure that seeds are varied across all images across invocations, producing varied results.
Batch or multiple image generation in the workflow editor is done using the RandomRange node. In this case, the 'Size' field represents the number of images to generate, meaning this example will generate 4 images. As RandomRange produces a collection of integers, we need to add the Iterate node to iterate through the collection. This noise can then be fed to the Denoise Latents node for it to iterate through the denoising process with the different seeds provided.
![groupsmultigenseeding](../assets/nodes/groupsmultigenseeding.png)


@ -4,9 +4,9 @@ These are nodes that have been developed by the community, for the community. If
If you'd like to submit a node for the community, please refer to the [node creation overview](contributingNodes.md).
To download a node, simply download the `.py` node file from the link and add it to the `invokeai/app/invocations` folder in your Invoke AI install location. Along with the node, an example node graph should be provided to help you get started with the node.
To download a node, simply download the `.py` node file from the link and add it to the `invokeai/app/invocations` folder in your Invoke AI install location. If you used the automated installation, this can be found inside the `.venv` folder. Along with the node, an example node graph should be provided to help you get started with the node.
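As a purely illustrative example (the node file name is hypothetical, and the exact path under `.venv` depends on your Python version and install layout), installing a downloaded node might look like:

```bash
# Copy a downloaded community node into the invocations folder
cp my_cool_node.py .venv/lib/python3.10/site-packages/invokeai/app/invocations/
```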
To use a community node graph, download the the `.json` node graph file and load it into Invoke AI via the **Load Nodes** button on the Node Editor.
To use a community workflow, download the `.json` node graph file and load it into Invoke AI via the **Load Workflow** button in the Workflow Editor.
## Community Nodes


@ -4,10 +4,10 @@ To learn about the specifics of creating a new node, please visit our [Node crea
Once you've created a node and confirmed that it behaves as expected locally, follow these steps:
- Make sure the node is contained in a new Python (.py) file
- Submit a pull request with a link to your node in GitHub against the `nodes` branch to add the node to the [Community Nodes](Community Nodes) list
- Make sure you are following the template below and have provided all relevant details about the node and what it does.
- A maintainer will review the pull request and node. If the node is aligned with the direction of the project, you might be asked for permission to include it in the core project.
- Make sure the node is contained in a new Python (.py) file. Preferably, the node is in a repo with a README detailing the node's usage & examples to help others more easily use your node.
- Submit a pull request with a link to your node(s) repo in GitHub against the `main` branch to add the node to the [Community Nodes](communityNodes.md) list
- Make sure you are following the template below and have provided all relevant details about the node and what it does. Example output images and workflows are very helpful for other users looking to use your node.
- A maintainer will review the pull request and node. If the node is aligned with the direction of the project, you may be asked for permission to include it in the core project.
### Community Node Template


@ -22,6 +22,7 @@ The table below contains a list of the default nodes shipped with InvokeAI and t
|Divide Integers | Divides two numbers|
|Dynamic Prompt | Parses a prompt using adieyal/dynamicprompts' random or combinatorial generator|
|Upscale (RealESRGAN) | Upscales an image using RealESRGAN.|
|Float Math | Perform basic math operations on two floats|
|Float Primitive Collection | A collection of float primitive values|
|Float Primitive | A float primitive value|
|Float Range | Creates a range|
@ -29,6 +30,7 @@ The table below contains a list of the default nodes shipped with InvokeAI and t
|Blur Image | Blurs an image|
|Extract Image Channel | Gets a channel from an image.|
|Image Primitive Collection | A collection of image primitive values|
|Integer Math | Perform basic math operations on two integers|
|Convert Image Mode | Converts an image to a different mode.|
|Crop Image | Crops an image to a specified box. The box can be outside of the image.|
|Image Hue Adjustment | Adjusts the Hue of an image.|
@ -42,6 +44,8 @@ The table below contains a list of the default nodes shipped with InvokeAI and t
|Paste Image | Pastes an image into another image.|
|ImageProcessor | Base class for invocations that preprocess images for ControlNet|
|Resize Image | Resizes an image to specific dimensions|
|Round Float | Rounds a float to a specified number of decimal places|
|Float to Integer | Converts a float to an integer. Optionally rounds to an even multiple of an input number.|
|Scale Image | Scales an image by a factor|
|Image to Latents | Encodes an image into latents.|
|Add Invisible Watermark | Add an invisible watermark to an image|

View File

@ -1,15 +1,13 @@
# Example Workflows
TODO: Will update once uploading workflows is available.
We've curated some example workflows for you to get started with Workflows in InvokeAI
## Text2Image
To use them, right click on your desired workflow, press "Download Linked File". You can then use the "Load Workflow" functionality in InvokeAI to load the workflow and start generating images!
## Image2Image
If you're interested in finding more workflows, check out the [#share-your-workflows](https://discord.com/channels/1020123559063990373/1130291608097661000) channel in the InvokeAI Discord.
## ControlNet
* [SD1.5 / SD2 Text to Image](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/Text_to_Image.json)
* [SDXL Text to Image](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/SDXL_Text_to_Image.json)
* [SDXL (with Refiner) Text to Image](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/SDXL_Text_to_Image.json)
* [Tiled Upscaling with ControlNet](https://github.com/invoke-ai/InvokeAI/blob/main/docs/workflows/ESRGAN_img2img_upscale w_Canny_ControlNet.json)
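If you prefer the command line, the linked workflows can also be fetched directly. This sketch assumes the raw.githubusercontent.com mirror of the repository path linked above; the downloaded `.json` file is then imported with the **Load Workflow** button:

```bash
# Download an example workflow for import via Load Workflow
curl -LO https://raw.githubusercontent.com/invoke-ai/InvokeAI/main/docs/workflows/SDXL_Text_to_Image.json
```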
## Upscaling
## Inpainting / Outpainting
## LoRAs

File diff suppressed because it is too large


@ -0,0 +1,735 @@
{
"name": "SDXL Text to Image",
"author": "InvokeAI",
"description": "Sample text to image workflow for SDXL",
"version": "1.0.1",
"contact": "invoke@invoke.ai",
"tags": "text2image, SDXL, default",
"notes": "",
"exposedFields": [
{
"nodeId": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"fieldName": "model"
},
{
"nodeId": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"fieldName": "prompt"
},
{
"nodeId": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"fieldName": "style"
},
{
"nodeId": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"fieldName": "prompt"
},
{
"nodeId": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"fieldName": "style"
},
{
"nodeId": "87ee6243-fb0d-4f77-ad5f-56591659339e",
"fieldName": "steps"
}
],
"meta": {
"version": "1.0.0"
},
"nodes": [
{
"id": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"type": "sdxl_compel_prompt",
"inputs": {
"prompt": {
"id": "5a6889e6-95cb-462f-8f4a-6b93ae7afaec",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "Negative Prompt",
"value": ""
},
"style": {
"id": "f240d0e6-3a1c-4320-af23-20ebb707c276",
"name": "style",
"type": "string",
"fieldKind": "input",
"label": "Negative Style",
"value": ""
},
"original_width": {
"id": "05af07b0-99a0-4a68-8ad2-697bbdb7fc7e",
"name": "original_width",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"original_height": {
"id": "2c771996-a998-43b7-9dd3-3792664d4e5b",
"name": "original_height",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"crop_top": {
"id": "66519dca-a151-4e3e-ae1f-88f1f9877bde",
"name": "crop_top",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"crop_left": {
"id": "349cf2e9-f3d0-4e16-9ae2-7097d25b6a51",
"name": "crop_left",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"target_width": {
"id": "44499347-7bd6-4a73-99d6-5a982786db05",
"name": "target_width",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"target_height": {
"id": "fda359b0-ab80-4f3c-805b-c9f61319d7d2",
"name": "target_height",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"clip": {
"id": "b447adaf-a649-4a76-a827-046a9fc8d89b",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
},
"clip2": {
"id": "86ee4e32-08f9-4baa-9163-31d93f5c0187",
"name": "clip2",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "7c10118e-7b4e-4911-b98e-d3ba6347dfd0",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "SDXL Negative Compel Prompt",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true
},
"width": 320,
"height": 764,
"position": {
"x": 1275,
"y": -350
}
},
{
"id": "55705012-79b9-4aac-9f26-c0b10309785b",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "55705012-79b9-4aac-9f26-c0b10309785b",
"type": "noise",
"inputs": {
"seed": {
"id": "6431737c-918a-425d-a3b4-5d57e2f35d4d",
"name": "seed",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"width": {
"id": "38fc5b66-fe6e-47c8-bba9-daf58e454ed7",
"name": "width",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"height": {
"id": "16298330-e2bf-4872-a514-d6923df53cbb",
"name": "height",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"use_cpu": {
"id": "c7c436d3-7a7a-4e76-91e4-c6deb271623c",
"name": "use_cpu",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": true
}
},
"outputs": {
"noise": {
"id": "50f650dc-0184-4e23-a927-0497a96fe954",
"name": "noise",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "bb8a452b-133d-42d1-ae4a-3843d7e4109a",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "35cfaa12-3b8b-4b7a-a884-327ff3abddd9",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": false,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true
},
"width": 320,
"height": 32,
"position": {
"x": 1650,
"y": -300
}
},
{
"id": "dbcd2f98-d809-48c8-bf64-2635f88a2fe9",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "dbcd2f98-d809-48c8-bf64-2635f88a2fe9",
"type": "l2i",
"inputs": {
"tiled": {
"id": "24f5bc7b-f6a1-425d-8ab1-f50b4db5d0df",
"name": "tiled",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
},
"fp32": {
"id": "b146d873-ffb9-4767-986a-5360504841a2",
"name": "fp32",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": true
},
"latents": {
"id": "65441abd-7713-4b00-9d8d-3771404002e8",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"vae": {
"id": "a478b833-6e13-4611-9a10-842c89603c74",
"name": "vae",
"type": "VaeField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"image": {
"id": "c87ae925-f858-417a-8940-8708ba9b4b53",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "4bcb8512-b5a1-45f1-9e52-6e92849f9d6c",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "23e41c00-a354-48e8-8f59-5875679c27ab",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": true,
"isIntermediate": false
},
"width": 320,
"height": 224,
"position": {
"x": 2025,
"y": -250
}
},
{
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"type": "rand_int",
"inputs": {
"low": {
"id": "3ec65a37-60ba-4b6c-a0b2-553dd7a84b84",
"name": "low",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"high": {
"id": "085f853a-1a5f-494d-8bec-e4ba29a3f2d1",
"name": "high",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 2147483647
}
},
"outputs": {
"value": {
"id": "812ade4d-7699-4261-b9fc-a6c9d2ab55ee",
"name": "value",
"type": "integer",
"fieldKind": "output"
}
},
"label": "Random Seed",
"isOpen": false,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true
},
"width": 320,
"height": 32,
"position": {
"x": 1650,
"y": -350
}
},
{
"id": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"type": "sdxl_model_loader",
"inputs": {
"model": {
"id": "39f9e799-bc95-4318-a200-30eed9e60c42",
"name": "model",
"type": "SDXLMainModelField",
"fieldKind": "input",
"label": "",
"value": {
"model_name": "stable-diffusion-xl-base-1.0",
"base_model": "sdxl",
"model_type": "main"
}
}
},
"outputs": {
"unet": {
"id": "2626a45e-59aa-4609-b131-2d45c5eaed69",
"name": "unet",
"type": "UNetField",
"fieldKind": "output"
},
"clip": {
"id": "7c9c42fa-93d5-4639-ab8b-c4d9b0559baf",
"name": "clip",
"type": "ClipField",
"fieldKind": "output"
},
"clip2": {
"id": "0dafddcf-a472-49c1-a47c-7b8fab4c8bc9",
"name": "clip2",
"type": "ClipField",
"fieldKind": "output"
},
"vae": {
"id": "ee6a6997-1b3c-4ff3-99ce-1e7bfba2750c",
"name": "vae",
"type": "VaeField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true
},
"width": 320,
"height": 234,
"position": {
"x": 475,
"y": 25
}
},
{
"id": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"type": "sdxl_compel_prompt",
"inputs": {
"prompt": {
"id": "5a6889e6-95cb-462f-8f4a-6b93ae7afaec",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "Positive Prompt",
"value": ""
},
"style": {
"id": "f240d0e6-3a1c-4320-af23-20ebb707c276",
"name": "style",
"type": "string",
"fieldKind": "input",
"label": "Positive Style",
"value": ""
},
"original_width": {
"id": "05af07b0-99a0-4a68-8ad2-697bbdb7fc7e",
"name": "original_width",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"original_height": {
"id": "2c771996-a998-43b7-9dd3-3792664d4e5b",
"name": "original_height",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"crop_top": {
"id": "66519dca-a151-4e3e-ae1f-88f1f9877bde",
"name": "crop_top",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"crop_left": {
"id": "349cf2e9-f3d0-4e16-9ae2-7097d25b6a51",
"name": "crop_left",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"target_width": {
"id": "44499347-7bd6-4a73-99d6-5a982786db05",
"name": "target_width",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"target_height": {
"id": "fda359b0-ab80-4f3c-805b-c9f61319d7d2",
"name": "target_height",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 1024
},
"clip": {
"id": "b447adaf-a649-4a76-a827-046a9fc8d89b",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
},
"clip2": {
"id": "86ee4e32-08f9-4baa-9163-31d93f5c0187",
"name": "clip2",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "7c10118e-7b4e-4911-b98e-d3ba6347dfd0",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "SDXL Positive Compel Prompt",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true
},
"width": 320,
"height": 764,
"position": {
"x": 900,
"y": -350
}
},
{
"id": "87ee6243-fb0d-4f77-ad5f-56591659339e",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "87ee6243-fb0d-4f77-ad5f-56591659339e",
"type": "denoise_latents",
"inputs": {
"noise": {
"id": "4884a4b7-cc19-4fea-83c7-1f940e6edd24",
"name": "noise",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"steps": {
"id": "4c61675c-b6b9-41ac-b187-b5c13b587039",
"name": "steps",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 36
},
"cfg_scale": {
"id": "f8213f35-4637-4a1a-83f4-1f8cfb9ccd2c",
"name": "cfg_scale",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 7.5
},
"denoising_start": {
"id": "01e2f30d-0acd-4e21-98b9-a9b8e24c6db2",
"name": "denoising_start",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 0
},
"denoising_end": {
"id": "3db95479-a73b-4c75-9b44-08daec16b224",
"name": "denoising_end",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 1
},
"scheduler": {
"id": "db8430a9-64c3-4c54-ae38-9f597cf7b6d5",
"name": "scheduler",
"type": "Scheduler",
"fieldKind": "input",
"label": "",
"value": "euler"
},
"control": {
"id": "599b49e8-6435-4576-be41-a5155f3a17e3",
"name": "control",
"type": "ControlField",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "226f9e91-454e-4159-9fa6-019c0cf29277",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"denoise_mask": {
"id": "de019cb6-7fb5-45bf-a266-22e20889893f",
"name": "denoise_mask",
"type": "DenoiseMaskField",
"fieldKind": "input",
"label": ""
},
"positive_conditioning": {
"id": "02fc400a-110d-470e-8411-f404f966a949",
"name": "positive_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"negative_conditioning": {
"id": "4bd3bdfa-fcf4-42be-8e47-1e314255798f",
"name": "negative_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"unet": {
"id": "7c2d58a8-b5f1-4e63-8ffd-8ada52c35832",
"name": "unet",
"type": "UNetField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"latents": {
"id": "6a6fa492-de26-4e95-b1d9-a322fe37eb13",
"name": "latents",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "a9790729-7d6c-4418-903d-4da961fccf56",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "fa74efe5-7330-4a3c-b256-c82a544585b4",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true
},
"width": 320,
"height": 558,
"position": {
"x": 1650,
"y": -250
}
}
],
"edges": [
{
"source": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"target": "55705012-79b9-4aac-9f26-c0b10309785b",
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2-55705012-79b9-4aac-9f26-c0b10309785b-collapsed",
"type": "collapsed"
},
{
"source": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"sourceHandle": "value",
"target": "55705012-79b9-4aac-9f26-c0b10309785b",
"targetHandle": "seed",
"id": "reactflow__edge-ea94bc37-d995-4a83-aa99-4af42479f2f2value-55705012-79b9-4aac-9f26-c0b10309785bseed",
"type": "default"
},
{
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"sourceHandle": "clip",
"target": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"targetHandle": "clip",
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22clip-faf965a4-7530-427b-b1f3-4ba6505c2a08clip",
"type": "default"
},
{
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"sourceHandle": "clip2",
"target": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"targetHandle": "clip2",
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22clip2-faf965a4-7530-427b-b1f3-4ba6505c2a08clip2",
"type": "default"
},
{
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"sourceHandle": "clip",
"target": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"targetHandle": "clip",
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22clip-3193ad09-a7c2-4bf4-a3a9-1c61cc33a204clip",
"type": "default"
},
{
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"sourceHandle": "clip2",
"target": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"targetHandle": "clip2",
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22clip2-3193ad09-a7c2-4bf4-a3a9-1c61cc33a204clip2",
"type": "default"
},
{
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"sourceHandle": "vae",
"target": "dbcd2f98-d809-48c8-bf64-2635f88a2fe9",
"targetHandle": "vae",
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22vae-dbcd2f98-d809-48c8-bf64-2635f88a2fe9vae",
"type": "default"
},
{
"source": "87ee6243-fb0d-4f77-ad5f-56591659339e",
"sourceHandle": "latents",
"target": "dbcd2f98-d809-48c8-bf64-2635f88a2fe9",
"targetHandle": "latents",
"id": "reactflow__edge-87ee6243-fb0d-4f77-ad5f-56591659339elatents-dbcd2f98-d809-48c8-bf64-2635f88a2fe9latents",
"type": "default"
},
{
"source": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
"sourceHandle": "conditioning",
"target": "87ee6243-fb0d-4f77-ad5f-56591659339e",
"targetHandle": "positive_conditioning",
"id": "reactflow__edge-faf965a4-7530-427b-b1f3-4ba6505c2a08conditioning-87ee6243-fb0d-4f77-ad5f-56591659339epositive_conditioning",
"type": "default"
},
{
"source": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
"sourceHandle": "conditioning",
"target": "87ee6243-fb0d-4f77-ad5f-56591659339e",
"targetHandle": "negative_conditioning",
"id": "reactflow__edge-3193ad09-a7c2-4bf4-a3a9-1c61cc33a204conditioning-87ee6243-fb0d-4f77-ad5f-56591659339enegative_conditioning",
"type": "default"
},
{
"source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
"sourceHandle": "unet",
"target": "87ee6243-fb0d-4f77-ad5f-56591659339e",
"targetHandle": "unet",
"id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22unet-87ee6243-fb0d-4f77-ad5f-56591659339eunet",
"type": "default"
},
{
"source": "55705012-79b9-4aac-9f26-c0b10309785b",
"sourceHandle": "noise",
"target": "87ee6243-fb0d-4f77-ad5f-56591659339e",
"targetHandle": "noise",
"id": "reactflow__edge-55705012-79b9-4aac-9f26-c0b10309785bnoise-87ee6243-fb0d-4f77-ad5f-56591659339enoise",
"type": "default"
}
]
}

File diff suppressed because it is too large

View File

@ -0,0 +1,573 @@
{
"name": "Text to Image",
"author": "InvokeAI",
"description": "Sample text to image workflow for Stable Diffusion 1.5/2",
"version": "1.0.1",
"contact": "invoke@invoke.ai",
"tags": "text2image, SD1.5, SD2, default",
"notes": "",
"exposedFields": [
{
"nodeId": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"fieldName": "model"
},
{
"nodeId": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"fieldName": "prompt"
},
{
"nodeId": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"fieldName": "prompt"
},
{
"nodeId": "75899702-fa44-46d2-b2d5-3e17f234c3e7",
"fieldName": "steps"
}
],
"meta": {
"version": "1.0.0"
},
"nodes": [
{
"id": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"type": "compel",
"inputs": {
"prompt": {
"id": "7739aff6-26cb-4016-8897-5a1fb2305e4e",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "Negative Prompt",
"value": ""
},
"clip": {
"id": "48d23dce-a6ae-472a-9f8c-22a714ea5ce0",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "37cf3a9d-f6b7-4b64-8ff6-2558c5ecc447",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "Negative Compel Prompt",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true
},
"width": 320,
"height": 235,
"position": {
"x": 1400,
"y": -75
}
},
{
"id": "55705012-79b9-4aac-9f26-c0b10309785b",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "55705012-79b9-4aac-9f26-c0b10309785b",
"type": "noise",
"inputs": {
"seed": {
"id": "6431737c-918a-425d-a3b4-5d57e2f35d4d",
"name": "seed",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"width": {
"id": "38fc5b66-fe6e-47c8-bba9-daf58e454ed7",
"name": "width",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 512
},
"height": {
"id": "16298330-e2bf-4872-a514-d6923df53cbb",
"name": "height",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 512
},
"use_cpu": {
"id": "c7c436d3-7a7a-4e76-91e4-c6deb271623c",
"name": "use_cpu",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": true
}
},
"outputs": {
"noise": {
"id": "50f650dc-0184-4e23-a927-0497a96fe954",
"name": "noise",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "bb8a452b-133d-42d1-ae4a-3843d7e4109a",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "35cfaa12-3b8b-4b7a-a884-327ff3abddd9",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true
},
"width": 320,
"height": 364,
"position": {
"x": 1000,
"y": 350
}
},
{
"id": "dbcd2f98-d809-48c8-bf64-2635f88a2fe9",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "dbcd2f98-d809-48c8-bf64-2635f88a2fe9",
"type": "l2i",
"inputs": {
"tiled": {
"id": "24f5bc7b-f6a1-425d-8ab1-f50b4db5d0df",
"name": "tiled",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
},
"fp32": {
"id": "b146d873-ffb9-4767-986a-5360504841a2",
"name": "fp32",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
},
"latents": {
"id": "65441abd-7713-4b00-9d8d-3771404002e8",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"vae": {
"id": "a478b833-6e13-4611-9a10-842c89603c74",
"name": "vae",
"type": "VaeField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"image": {
"id": "c87ae925-f858-417a-8940-8708ba9b4b53",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
},
"width": {
"id": "4bcb8512-b5a1-45f1-9e52-6e92849f9d6c",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "23e41c00-a354-48e8-8f59-5875679c27ab",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": true,
"isIntermediate": false
},
"width": 320,
"height": 266,
"position": {
"x": 1800,
"y": 200
}
},
{
"id": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"type": "main_model_loader",
"inputs": {
"model": {
"id": "993eabd2-40fd-44fe-bce7-5d0c7075ddab",
"name": "model",
"type": "MainModelField",
"fieldKind": "input",
"label": "",
"value": {
"model_name": "stable-diffusion-v1-5",
"base_model": "sd-1",
"model_type": "main"
}
}
},
"outputs": {
"unet": {
"id": "5c18c9db-328d-46d0-8cb9-143391c410be",
"name": "unet",
"type": "UNetField",
"fieldKind": "output"
},
"clip": {
"id": "6effcac0-ec2f-4bf5-a49e-a2c29cf921f4",
"name": "clip",
"type": "ClipField",
"fieldKind": "output"
},
"vae": {
"id": "57683ba3-f5f5-4f58-b9a2-4b83dacad4a1",
"name": "vae",
"type": "VaeField",
"fieldKind": "output"
}
},
"label": "",
"isOpen": false,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true
},
"width": 320,
"height": 32,
"position": {
"x": 1000,
"y": 200
}
},
{
"id": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"type": "compel",
"inputs": {
"prompt": {
"id": "7739aff6-26cb-4016-8897-5a1fb2305e4e",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "Positive Prompt",
"value": ""
},
"clip": {
"id": "48d23dce-a6ae-472a-9f8c-22a714ea5ce0",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"conditioning": {
"id": "37cf3a9d-f6b7-4b64-8ff6-2558c5ecc447",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
}
},
"label": "Positive Compel Prompt",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true
},
"width": 320,
"height": 235,
"position": {
"x": 1000,
"y": -75
}
},
{
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"type": "rand_int",
"inputs": {
"low": {
"id": "3ec65a37-60ba-4b6c-a0b2-553dd7a84b84",
"name": "low",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 0
},
"high": {
"id": "085f853a-1a5f-494d-8bec-e4ba29a3f2d1",
"name": "high",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 2147483647
}
},
"outputs": {
"value": {
"id": "812ade4d-7699-4261-b9fc-a6c9d2ab55ee",
"name": "value",
"type": "integer",
"fieldKind": "output"
}
},
"label": "Random Seed",
"isOpen": false,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true
},
"width": 320,
"height": 32,
"position": {
"x": 1000,
"y": 275
}
},
{
"id": "75899702-fa44-46d2-b2d5-3e17f234c3e7",
"type": "invocation",
"data": {
"version": "1.0.0",
"id": "75899702-fa44-46d2-b2d5-3e17f234c3e7",
"type": "denoise_latents",
"inputs": {
"noise": {
"id": "8b18f3eb-40d2-45c1-9a9d-28d6af0dce2b",
"name": "noise",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"steps": {
"id": "0be4373c-46f3-441c-80a7-a4bb6ceb498c",
"name": "steps",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 36
},
"cfg_scale": {
"id": "107267ce-4666-4cd7-94b3-7476b7973ae9",
"name": "cfg_scale",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 7.5
},
"denoising_start": {
"id": "d2ce9f0f-5fc2-48b2-b917-53442941e9a1",
"name": "denoising_start",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 0
},
"denoising_end": {
"id": "8ad51505-b8d0-422a-beb8-96fc6fc6b65f",
"name": "denoising_end",
"type": "float",
"fieldKind": "input",
"label": "",
"value": 1
},
"scheduler": {
"id": "53092874-a43b-4623-91a2-76e62fdb1f2e",
"name": "scheduler",
"type": "Scheduler",
"fieldKind": "input",
"label": "",
"value": "euler"
},
"control": {
"id": "7abe57cc-469d-437e-ad72-a18efa28215f",
"name": "control",
"type": "ControlField",
"fieldKind": "input",
"label": ""
},
"latents": {
"id": "add8bbe5-14d0-42d4-a867-9c65ab8dd129",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
},
"denoise_mask": {
"id": "f373a190-0fc8-45b7-ae62-c4aa8e9687e1",
"name": "denoise_mask",
"type": "DenoiseMaskField",
"fieldKind": "input",
"label": ""
},
"positive_conditioning": {
"id": "c7160303-8a23-4f15-9197-855d48802a7f",
"name": "positive_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"negative_conditioning": {
"id": "fd750efa-1dfc-4d0b-accb-828e905ba320",
"name": "negative_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
},
"unet": {
"id": "af1f41ba-ce2a-4314-8d7f-494bb5800381",
"name": "unet",
"type": "UNetField",
"fieldKind": "input",
"label": ""
}
},
"outputs": {
"latents": {
"id": "8508d04d-f999-4a44-94d0-388ab1401d27",
"name": "latents",
"type": "LatentsField",
"fieldKind": "output"
},
"width": {
"id": "93dc8287-0a2a-4320-83a4-5e994b7ba23e",
"name": "width",
"type": "integer",
"fieldKind": "output"
},
"height": {
"id": "d9862f5c-0ab5-46fa-8c29-5059bb581d96",
"name": "height",
"type": "integer",
"fieldKind": "output"
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true
},
"width": 320,
"height": 558,
"position": {
"x": 1400,
"y": 200
}
}
],
"edges": [
{
"source": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"sourceHandle": "value",
"target": "55705012-79b9-4aac-9f26-c0b10309785b",
"targetHandle": "seed",
"id": "reactflow__edge-ea94bc37-d995-4a83-aa99-4af42479f2f2value-55705012-79b9-4aac-9f26-c0b10309785bseed",
"type": "default"
},
{
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"sourceHandle": "clip",
"target": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"targetHandle": "clip",
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8clip-7d8bf987-284f-413a-b2fd-d825445a5d6cclip",
"type": "default"
},
{
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"sourceHandle": "clip",
"target": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"targetHandle": "clip",
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8clip-93dc02a4-d05b-48ed-b99c-c9b616af3402clip",
"type": "default"
},
{
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"sourceHandle": "vae",
"target": "dbcd2f98-d809-48c8-bf64-2635f88a2fe9",
"targetHandle": "vae",
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8vae-dbcd2f98-d809-48c8-bf64-2635f88a2fe9vae",
"type": "default"
},
{
"source": "75899702-fa44-46d2-b2d5-3e17f234c3e7",
"sourceHandle": "latents",
"target": "dbcd2f98-d809-48c8-bf64-2635f88a2fe9",
"targetHandle": "latents",
"id": "reactflow__edge-75899702-fa44-46d2-b2d5-3e17f234c3e7latents-dbcd2f98-d809-48c8-bf64-2635f88a2fe9latents",
"type": "default"
},
{
"source": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"sourceHandle": "conditioning",
"target": "75899702-fa44-46d2-b2d5-3e17f234c3e7",
"targetHandle": "positive_conditioning",
"id": "reactflow__edge-7d8bf987-284f-413a-b2fd-d825445a5d6cconditioning-75899702-fa44-46d2-b2d5-3e17f234c3e7positive_conditioning",
"type": "default"
},
{
"source": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"sourceHandle": "conditioning",
"target": "75899702-fa44-46d2-b2d5-3e17f234c3e7",
"targetHandle": "negative_conditioning",
"id": "reactflow__edge-93dc02a4-d05b-48ed-b99c-c9b616af3402conditioning-75899702-fa44-46d2-b2d5-3e17f234c3e7negative_conditioning",
"type": "default"
},
{
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"sourceHandle": "unet",
"target": "75899702-fa44-46d2-b2d5-3e17f234c3e7",
"targetHandle": "unet",
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8unet-75899702-fa44-46d2-b2d5-3e17f234c3e7unet",
"type": "default"
},
{
"source": "55705012-79b9-4aac-9f26-c0b10309785b",
"sourceHandle": "noise",
"target": "75899702-fa44-46d2-b2d5-3e17f234c3e7",
"targetHandle": "noise",
"id": "reactflow__edge-55705012-79b9-4aac-9f26-c0b10309785bnoise-75899702-fa44-46d2-b2d5-3e17f234c3e7noise",
"type": "default"
}
]
}

View File

@ -14,7 +14,7 @@ fi
VERSION=$(cd ..; python -c "from invokeai.version import __version__ as version; print(version)")
PATCH=""
VERSION="v${VERSION}${PATCH}"
LATEST_TAG="v3.0-latest"
LATEST_TAG="v3-latest"
echo Building installer for version $VERSION
echo "Be certain that you're in the 'installer' directory before continuing."

View File

@ -5,6 +5,7 @@ InvokeAI Installer
import argparse
import os
from pathlib import Path
from installer import Installer
if __name__ == "__main__":

View File

@ -1,13 +1,9 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from logging import Logger
from invokeai.app.services.board_image_record_storage import (
SqliteBoardImageRecordStorage,
)
from invokeai.app.services.board_images import (
BoardImagesService,
BoardImagesServiceDependencies,
)
from invokeai.app.services.board_image_record_storage import SqliteBoardImageRecordStorage
from invokeai.app.services.board_images import BoardImagesService, BoardImagesServiceDependencies
from invokeai.app.services.board_record_storage import SqliteBoardRecordStorage
from invokeai.app.services.boards import BoardService, BoardServiceDependencies
from invokeai.app.services.config import InvokeAIAppConfig
@ -19,16 +15,16 @@ from invokeai.backend.util.logging import InvokeAILogger
from invokeai.version.invokeai_version import __version__
from ..services.default_graphs import create_system_graphs
from ..services.latent_storage import DiskLatentsStorage, ForwardCacheLatentsStorage
from ..services.graph import GraphExecutionState, LibraryGraph
from ..services.image_file_storage import DiskImageFileStorage
from ..services.invocation_queue import MemoryInvocationQueue
from ..services.invocation_services import InvocationServices
from ..services.invocation_stats import InvocationStatsService
from ..services.invoker import Invoker
from ..services.latent_storage import DiskLatentsStorage, ForwardCacheLatentsStorage
from ..services.model_manager_service import ModelManagerService
from ..services.processor import DefaultInvocationProcessor
from ..services.sqlite import SqliteItemStorage
from ..services.model_manager_service import ModelManagerService
from ..services.invocation_stats import InvocationStatsService
from .events import FastAPIEventService

View File

@ -1,20 +1,17 @@
import io
from typing import Optional
from PIL import Image
from fastapi import Body, HTTPException, Path, Query, Request, Response, UploadFile
from fastapi.responses import FileResponse
from fastapi.routing import APIRouter
from PIL import Image
from pydantic import BaseModel, Field
from invokeai.app.invocations.metadata import ImageMetadata
from invokeai.app.models.image import ImageCategory, ResourceOrigin
from invokeai.app.services.image_record_storage import OffsetPaginatedResults
from invokeai.app.services.models.image_record import (
ImageDTO,
ImageRecordChanges,
ImageUrlsDTO,
)
from invokeai.app.services.models.image_record import ImageDTO, ImageRecordChanges, ImageUrlsDTO
from ..dependencies import ApiDependencies
images_router = APIRouter(prefix="/v1/images", tags=["images"])

View File

@ -2,7 +2,7 @@
import pathlib
from typing import Literal, List, Optional, Union
from typing import List, Literal, Optional, Union
from fastapi import Body, Path, Query, Response
from fastapi.routing import APIRouter
@ -10,13 +10,13 @@ from pydantic import BaseModel, parse_obj_as
from starlette.exceptions import HTTPException
from invokeai.backend import BaseModelType, ModelType
from invokeai.backend.model_management import MergeInterpolationMethod
from invokeai.backend.model_management.models import (
OPENAPI_MODEL_CONFIGS,
SchedulerPredictionType,
ModelNotFoundException,
InvalidModelException,
ModelNotFoundException,
SchedulerPredictionType,
)
from invokeai.backend.model_management import MergeInterpolationMethod
from ..dependencies import ApiDependencies

View File

@ -9,13 +9,7 @@ from pydantic.fields import Field
# Importing * is bad karma but needed here for node detection
from ...invocations import * # noqa: F401 F403
from ...invocations.baseinvocation import BaseInvocation
from ...services.graph import (
Edge,
EdgeConnection,
Graph,
GraphExecutionState,
NodeAlreadyExecutedError,
)
from ...services.graph import Edge, EdgeConnection, Graph, GraphExecutionState, NodeAlreadyExecutedError
from ...services.item_storage import PaginatedResults
from ..dependencies import ApiDependencies

View File

@ -1,16 +1,18 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654)
from abc import ABC, abstractmethod
import argparse
from abc import ABC, abstractmethod
from typing import Any, Callable, Iterable, Literal, Union, get_args, get_origin, get_type_hints
from pydantic import BaseModel, Field
import networkx as nx
import matplotlib.pyplot as plt
import networkx as nx
from pydantic import BaseModel, Field
import invokeai.backend.util.logging as logger
from ..invocations.baseinvocation import BaseInvocation
from ..invocations.image import ImageField
from ..services.graph import GraphExecutionState, LibraryGraph, Edge
from ..services.graph import Edge, GraphExecutionState, LibraryGraph
from ..services.invoker import Invoker

View File

@ -6,15 +6,15 @@ completer object.
import atexit
import readline
import shlex
from pathlib import Path
from typing import List, Dict, Literal, get_args, get_type_hints, get_origin
from typing import Dict, List, Literal, get_args, get_origin, get_type_hints
import invokeai.backend.util.logging as logger
from ...backend import ModelManager
from ..invocations.baseinvocation import BaseInvocation
from .commands import BaseCommand
from ..services.invocation_services import InvocationServices
from .commands import BaseCommand
# singleton object, class variable
completer = None

View File

@ -3,10 +3,10 @@
from __future__ import annotations
import json
import re
from abc import ABC, abstractmethod
from enum import Enum
from inspect import signature
import re
from typing import (
TYPE_CHECKING,
AbstractSet,
@ -23,10 +23,10 @@ from typing import (
get_type_hints,
)
from pydantic import BaseModel, Field, validator
from pydantic.fields import Undefined, ModelField
from pydantic.typing import NoArgAnyCallable
import semver
from pydantic import BaseModel, Field, validator
from pydantic.fields import ModelField, Undefined
from pydantic.typing import NoArgAnyCallable
from invokeai.app.services.config.invokeai_config import InvokeAIAppConfig
@ -198,6 +198,7 @@ class _InputField(BaseModel):
ui_type: Optional[UIType]
ui_component: Optional[UIComponent]
ui_order: Optional[int]
ui_choice_labels: Optional[dict[str, str]]
item_default: Optional[Any]
@ -246,6 +247,7 @@ def InputField(
ui_component: Optional[UIComponent] = None,
ui_hidden: bool = False,
ui_order: Optional[int] = None,
ui_choice_labels: Optional[dict[str, str]] = None,
item_default: Optional[Any] = None,
**kwargs: Any,
) -> Any:
@ -312,6 +314,7 @@ def InputField(
ui_hidden=ui_hidden,
ui_order=ui_order,
item_default=item_default,
ui_choice_labels=ui_choice_labels,
**kwargs,
)

View File

@ -38,14 +38,16 @@ class RangeInvocation(BaseInvocation):
version="1.0.0",
)
class RangeOfSizeInvocation(BaseInvocation):
"""Creates a range from start to start + size with step"""
"""Creates a range from start to start + (size * step) incremented by step"""
start: int = InputField(default=0, description="The start of the range")
size: int = InputField(default=1, description="The number of values")
size: int = InputField(default=1, gt=0, description="The number of values")
step: int = InputField(default=1, description="The step of the range")
def invoke(self, context: InvocationContext) -> IntegerCollectionOutput:
return IntegerCollectionOutput(collection=list(range(self.start, self.start + self.size, self.step)))
return IntegerCollectionOutput(
collection=list(range(self.start, self.start + (self.step * self.size), self.step))
)
@invocation(
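
The change above swaps `range(start, start + size, step)` for `range(start, start + (step * size), step)`, so the node now yields exactly `size` values for a positive step instead of stopping after `size` units. A minimal standalone sketch of the corrected arithmetic (the example numbers are assumptions, not taken from the diff):

```python
start, size, step = 10, 4, 5

old_values = list(range(start, start + size, step))           # [10] -- only one value
new_values = list(range(start, start + (step * size), step))  # [10, 15, 20, 25]

assert len(new_values) == size  # exactly `size` values for a positive step
```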

View File

@ -5,16 +5,15 @@ from typing import List, Union
import torch
from compel import Compel, ReturnedEmbeddingsType
from compel.prompt_parser import Blend, Conjunction, CrossAttentionControlSubstitute, FlattenedPrompt, Fragment
from invokeai.app.invocations.primitives import ConditioningField, ConditioningOutput
from invokeai.app.invocations.primitives import ConditioningField, ConditioningOutput
from invokeai.backend.stable_diffusion.diffusion.shared_invokeai_diffusion import (
BasicConditioningInfo,
SDXLConditioningInfo,
)
from ...backend.model_management.models import ModelType
from ...backend.model_management.lora import ModelPatcher
from ...backend.model_management.models import ModelNotFoundException
from ...backend.model_management.models import ModelNotFoundException, ModelType
from ...backend.stable_diffusion.diffusion import InvokeAIDiffuserComponent
from ...backend.util.devices import torch_dtype
from .baseinvocation import (

View File

@ -28,15 +28,14 @@ from pydantic import BaseModel, Field, validator
from invokeai.app.invocations.primitives import ImageField, ImageOutput
from ...backend.model_management import BaseModelType
from ..models.image import ImageCategory, ResourceOrigin
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
FieldDescriptions,
InputField,
Input,
InputField,
InvocationContext,
OutputField,
UIType,
@ -44,7 +43,6 @@ from .baseinvocation import (
invocation_output,
)
CONTROLNET_MODE_VALUES = Literal["balanced", "more_prompt", "more_control", "unbalanced"]
CONTROLNET_RESIZE_VALUES = Literal[
"just_resize",

View File

@ -4,9 +4,10 @@
import cv2 as cv
import numpy
from PIL import Image, ImageOps
from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.models.image import ImageCategory, ResourceOrigin
from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation

View File

@ -98,7 +98,7 @@ class ImageCropInvocation(BaseInvocation):
)
@invocation("img_paste", title="Paste Image", tags=["image", "paste"], category="image", version="1.0.0")
@invocation("img_paste", title="Paste Image", tags=["image", "paste"], category="image", version="1.0.1")
class ImagePasteInvocation(BaseInvocation):
"""Pastes an image into another image."""
@ -110,6 +110,7 @@ class ImagePasteInvocation(BaseInvocation):
)
x: int = InputField(default=0, description="The left x coordinate at which to paste the image")
y: int = InputField(default=0, description="The top y coordinate at which to paste the image")
crop: bool = InputField(default=False, description="Crop to base image dimensions")
def invoke(self, context: InvocationContext) -> ImageOutput:
base_image = context.services.images.get_pil_image(self.base_image.image_name)
@ -129,6 +130,10 @@ class ImagePasteInvocation(BaseInvocation):
new_image.paste(base_image, (abs(min_x), abs(min_y)))
new_image.paste(image, (max(0, self.x), max(0, self.y)), mask=mask)
if self.crop:
base_w, base_h = base_image.size
new_image = new_image.crop((abs(min_x), abs(min_y), abs(min_x) + base_w, abs(min_y) + base_h))
image_dto = context.services.images.create(
image=new_image,
image_origin=ResourceOrigin.INTERNAL,

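The new `crop` flag trims the pasted result back to the base image's dimensions when the paste extends the canvas. A PIL-only sketch of that arithmetic (the sizes and offsets below are assumed for illustration, not taken from the diff):

```python
from PIL import Image

base = Image.new("RGB", (512, 512), "white")
overlay = Image.new("RGB", (128, 128), "red")
x, y = -32, -32                                  # paste partially off-canvas

min_x, min_y = min(0, x), min(0, y)              # -32, -32
max_x = max(base.width, overlay.width + x)       # 512
max_y = max(base.height, overlay.height + y)     # 512

# the paste target is expanded so it can hold both images (544x544 here)
expanded = Image.new("RGB", (max_x - min_x, max_y - min_y), "white")
expanded.paste(base, (abs(min_x), abs(min_y)))
expanded.paste(overlay, (max(0, x), max(0, y)))

# crop=True restores the original base dimensions (512x512)
cropped = expanded.crop((abs(min_x), abs(min_y), abs(min_x) + base.width, abs(min_y) + base.height))
assert cropped.size == base.size
```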
View File

@ -34,6 +34,22 @@ from invokeai.app.invocations.primitives import (
from invokeai.app.util.controlnet_utils import prepare_control_image
from invokeai.app.util.step_callback import stable_diffusion_step_callback
from invokeai.backend.model_management.models import ModelType, SilenceWarnings
from ...backend.model_management.lora import ModelPatcher
from ...backend.model_management.models import BaseModelType
from ...backend.model_management.seamless import set_seamless
from ...backend.stable_diffusion import PipelineIntermediateState
from ...backend.stable_diffusion.diffusers_pipeline import (
ConditioningData,
ControlNetData,
StableDiffusionGeneratorPipeline,
image_resized_to_grid_as_tensor,
)
from ...backend.stable_diffusion.diffusion.shared_invokeai_diffusion import PostprocessingSettings
from ...backend.stable_diffusion.schedulers import SCHEDULER_MAP
from ...backend.util.devices import choose_precision, choose_torch_device
from ...backend.util.logging import InvokeAILogger
from ..models.image import ImageCategory, ResourceOrigin
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
@ -49,21 +65,6 @@ from .baseinvocation import (
from .compel import ConditioningField
from .controlnet_image_processors import ControlField
from .model import ModelInfo, UNetField, VaeField
from ..models.image import ImageCategory, ResourceOrigin
from ...backend.model_management import BaseModelType
from ...backend.model_management.lora import ModelPatcher
from ...backend.model_management.seamless import set_seamless
from ...backend.stable_diffusion import PipelineIntermediateState
from ...backend.stable_diffusion.diffusers_pipeline import (
ConditioningData,
ControlNetData,
StableDiffusionGeneratorPipeline,
image_resized_to_grid_as_tensor,
)
from ...backend.stable_diffusion.diffusion.shared_invokeai_diffusion import PostprocessingSettings
from ...backend.stable_diffusion.schedulers import SCHEDULER_MAP
from ...backend.util.devices import choose_precision, choose_torch_device
from ...backend.util.logging import InvokeAILogger
DEFAULT_PRECISION = choose_precision(choose_torch_device())

View File

@ -1,8 +1,11 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654)
import numpy as np
from typing import Literal
from invokeai.app.invocations.primitives import IntegerOutput
import numpy as np
from pydantic import validator
from invokeai.app.invocations.primitives import FloatOutput, IntegerOutput
from .baseinvocation import BaseInvocation, FieldDescriptions, InputField, InvocationContext, invocation
@ -60,3 +63,201 @@ class RandomIntInvocation(BaseInvocation):
def invoke(self, context: InvocationContext) -> IntegerOutput:
return IntegerOutput(value=np.random.randint(self.low, self.high))
@invocation(
"float_to_int",
title="Float To Integer",
tags=["math", "round", "integer", "float", "convert"],
category="math",
version="1.0.0",
)
class FloatToIntegerInvocation(BaseInvocation):
"""Rounds a float number to (a multiple of) an integer."""
value: float = InputField(default=0, description="The value to round")
multiple: int = InputField(default=1, ge=1, title="Multiple of", description="The multiple to round to")
method: Literal["Nearest", "Floor", "Ceiling", "Truncate"] = InputField(
default="Nearest", description="The method to use for rounding"
)
def invoke(self, context: InvocationContext) -> IntegerOutput:
if self.method == "Nearest":
return IntegerOutput(value=round(self.value / self.multiple) * self.multiple)
elif self.method == "Floor":
return IntegerOutput(value=np.floor(self.value / self.multiple) * self.multiple)
elif self.method == "Ceiling":
return IntegerOutput(value=np.ceil(self.value / self.multiple) * self.multiple)
else: # self.method == "Truncate"
return IntegerOutput(value=int(self.value / self.multiple) * self.multiple)
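
A quick worked example of the four rounding modes, using a multiple of 64 (a value chosen here only for illustration, e.g. when snapping a dimension to a grid):

```python
import numpy as np

value, multiple = 1000.0, 64

nearest  = round(value / multiple) * multiple          # 16 * 64 = 1024
floor    = int(np.floor(value / multiple)) * multiple  # 15 * 64 = 960
ceiling  = int(np.ceil(value / multiple)) * multiple   # 16 * 64 = 1024
truncate = int(value / multiple) * multiple            # 15 * 64 = 960
```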
@invocation("round_float", title="Round Float", tags=["math", "round"], category="math", version="1.0.0")
class RoundInvocation(BaseInvocation):
"""Rounds a float to a specified number of decimal places."""
value: float = InputField(default=0, description="The float value")
decimals: int = InputField(default=0, description="The number of decimal places")
def invoke(self, context: InvocationContext) -> FloatOutput:
return FloatOutput(value=round(self.value, self.decimals))
INTEGER_OPERATIONS = Literal[
"ADD",
"SUB",
"MUL",
"DIV",
"EXP",
"MOD",
"ABS",
"MIN",
"MAX",
]
INTEGER_OPERATIONS_LABELS = dict(
ADD="Add A+B",
SUB="Subtract A-B",
MUL="Multiply A*B",
DIV="Divide A/B",
EXP="Exponentiate A^B",
MOD="Modulus A%B",
ABS="Absolute Value of A",
MIN="Minimum(A,B)",
MAX="Maximum(A,B)",
)
@invocation(
"integer_math",
title="Integer Math",
tags=[
"math",
"integer",
"add",
"subtract",
"multiply",
"divide",
"modulus",
"power",
"absolute value",
"min",
"max",
],
category="math",
version="1.0.0",
)
class IntegerMathInvocation(BaseInvocation):
"""Performs integer math."""
operation: INTEGER_OPERATIONS = InputField(
default="ADD", description="The operation to perform", ui_choice_labels=INTEGER_OPERATIONS_LABELS
)
a: int = InputField(default=0, description=FieldDescriptions.num_1)
b: int = InputField(default=0, description=FieldDescriptions.num_2)
@validator("b")
def no_unrepresentable_results(cls, v, values):
if values["operation"] == "DIV" and v == 0:
raise ValueError("Cannot divide by zero")
elif values["operation"] == "MOD" and v == 0:
raise ValueError("Cannot divide by zero")
elif values["operation"] == "EXP" and v < 0:
raise ValueError("Result of exponentiation is not an integer")
return v
def invoke(self, context: InvocationContext) -> IntegerOutput:
# Python doesn't support switch statements until 3.10, but InvokeAI supports back to 3.9
if self.operation == "ADD":
return IntegerOutput(value=self.a + self.b)
elif self.operation == "SUB":
return IntegerOutput(value=self.a - self.b)
elif self.operation == "MUL":
return IntegerOutput(value=self.a * self.b)
elif self.operation == "DIV":
return IntegerOutput(value=int(self.a / self.b))
elif self.operation == "EXP":
return IntegerOutput(value=self.a**self.b)
elif self.operation == "MOD":
return IntegerOutput(value=self.a % self.b)
elif self.operation == "ABS":
return IntegerOutput(value=abs(self.a))
elif self.operation == "MIN":
return IntegerOutput(value=min(self.a, self.b))
else: # self.operation == "MAX":
return IntegerOutput(value=max(self.a, self.b))
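
A pure-Python sketch of what the `no_unrepresentable_results` validator guards against (a simplified standalone function, not the pydantic validator itself):

```python
def check_b(operation: str, b: int) -> None:
    # mirrors the guard clauses above: reject inputs whose result
    # cannot be represented as an integer
    if operation in ("DIV", "MOD") and b == 0:
        raise ValueError("Cannot divide by zero")
    if operation == "EXP" and b < 0:
        raise ValueError("Result of exponentiation is not an integer")

check_b("DIV", 2)       # ok: 7 / 2 truncates to 3
check_b("EXP", 3)       # ok: 2 ** 3 == 8
try:
    check_b("EXP", -1)  # 2 ** -1 == 0.5, not representable as an integer
except ValueError as err:
    print(err)
```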
FLOAT_OPERATIONS = Literal[
"ADD",
"SUB",
"MUL",
"DIV",
"EXP",
"ABS",
"SQRT",
"MIN",
"MAX",
]
FLOAT_OPERATIONS_LABELS = dict(
ADD="Add A+B",
SUB="Subtract A-B",
MUL="Multiply A*B",
DIV="Divide A/B",
EXP="Exponentiate A^B",
ABS="Absolute Value of A",
SQRT="Square Root of A",
MIN="Minimum(A,B)",
MAX="Maximum(A,B)",
)
@invocation(
"float_math",
title="Float Math",
tags=["math", "float", "add", "subtract", "multiply", "divide", "power", "root", "absolute value", "min", "max"],
category="math",
version="1.0.0",
)
class FloatMathInvocation(BaseInvocation):
"""Performs floating point math."""
operation: FLOAT_OPERATIONS = InputField(
default="ADD", description="The operation to perform", ui_choice_labels=FLOAT_OPERATIONS_LABELS
)
a: float = InputField(default=0, description=FieldDescriptions.num_1)
b: float = InputField(default=0, description=FieldDescriptions.num_2)
@validator("b")
def no_unrepresentable_results(cls, v, values):
if values["operation"] == "DIV" and v == 0:
raise ValueError("Cannot divide by zero")
elif values["operation"] == "EXP" and values["a"] == 0 and v < 0:
raise ValueError("Cannot raise zero to a negative power")
elif values["operation"] == "EXP" and type(values["a"] ** v) is complex:
raise ValueError("Root operation resulted in a complex number")
return v
def invoke(self, context: InvocationContext) -> FloatOutput:
# Python doesn't support switch statements until 3.10, but InvokeAI supports back to 3.9
if self.operation == "ADD":
return FloatOutput(value=self.a + self.b)
elif self.operation == "SUB":
return FloatOutput(value=self.a - self.b)
elif self.operation == "MUL":
return FloatOutput(value=self.a * self.b)
elif self.operation == "DIV":
return FloatOutput(value=self.a / self.b)
elif self.operation == "EXP":
return FloatOutput(value=self.a**self.b)
elif self.operation == "SQRT":
return FloatOutput(value=np.sqrt(self.a))
elif self.operation == "ABS":
return FloatOutput(value=abs(self.a))
elif self.operation == "MIN":
return FloatOutput(value=min(self.a, self.b))
else: # self.operation == "MAX":
return FloatOutput(value=max(self.a, self.b))
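
The `EXP` guards deserve a short illustration: in Python, raising a negative float to a fractional power produces a complex number, which is what the second check rejects (the example values are assumptions):

```python
a, b = -8.0, 1 / 3
result = a ** b
print(type(result))  # <class 'complex'> -- the validator raises
                     # "Root operation resulted in a complex number"

print(0.0 ** 2)      # fine, but 0.0 ** -1 would raise ZeroDivisionError,
                     # hence the "Cannot raise zero to a negative power" check
```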

View File

@ -25,8 +25,8 @@ from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
FieldDescriptions,
InputField,
Input,
InputField,
InvocationContext,
OutputField,
UIComponent,

View File

@ -3,7 +3,6 @@ from typing import Literal, Optional
import matplotlib.pyplot as plt
import numpy as np
import PIL.Image
from easing_functions import (
BackEaseIn,

View File

@ -0,0 +1,139 @@
# 2023 skunkworxdark (https://github.com/skunkworxdark)
import re
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
InputField,
InvocationContext,
OutputField,
UIComponent,
invocation,
invocation_output,
)
from .primitives import StringOutput
@invocation_output("string_pos_neg_output")
class StringPosNegOutput(BaseInvocationOutput):
"""Base class for invocations that output a positive and negative string"""
positive_string: str = OutputField(description="Positive string")
negative_string: str = OutputField(description="Negative string")
@invocation(
"string_split_neg",
title="String Split Negative",
tags=["string", "split", "negative"],
category="string",
version="1.0.0",
)
class StringSplitNegInvocation(BaseInvocation):
"""Splits string into two strings, inside [] goes into negative string everthing else goes into positive string. Each [ and ] character is replaced with a space"""
string: str = InputField(default="", description="String to split", ui_component=UIComponent.Textarea)
def invoke(self, context: InvocationContext) -> StringPosNegOutput:
p_string = ""
n_string = ""
brackets_depth = 0
escaped = False
for char in self.string or "":
if char == "[" and not escaped:
n_string += " "
brackets_depth += 1
elif char == "]" and not escaped:
brackets_depth -= 1
char = " "
elif brackets_depth > 0:
n_string += char
else:
p_string += char
# keep track of the escape char but only if it isn't escaped already
if char == "\\" and not escaped:
escaped = True
else:
escaped = False
return StringPosNegOutput(positive_string=p_string, negative_string=n_string)
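
A worked example of the split (the prompt text is an assumption for illustration): bracketed text lands in the negative string and everything else stays in the positive string, with extra whitespace left where the brackets were.

```python
prompt = "a photo of a cat [blurry, low quality] on a sofa"

# Run through StringSplitNegInvocation, this splits roughly as:
#   positive_string -> "a photo of a cat  on a sofa"   (extra whitespace where the bracket was)
#   negative_string -> " blurry, low quality"
```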
@invocation_output("string_2_output")
class String2Output(BaseInvocationOutput):
"""Base class for invocations that output two strings"""
string_1: str = OutputField(description="string 1")
string_2: str = OutputField(description="string 2")
@invocation("string_split", title="String Split", tags=["string", "split"], category="string", version="1.0.0")
class StringSplitInvocation(BaseInvocation):
"""Splits string into two strings, based on the first occurance of the delimiter. The delimiter will be removed from the string"""
string: str = InputField(default="", description="String to split", ui_component=UIComponent.Textarea)
delimiter: str = InputField(
default="", description="Delimiter to spilt with. blank will split on the first whitespace"
)
def invoke(self, context: InvocationContext) -> String2Output:
result = self.string.split(self.delimiter or None, 1)  # an empty delimiter falls back to whitespace splitting
if len(result) == 2:
part1, part2 = result
else:
part1 = result[0]
part2 = ""
return String2Output(string_1=part1, string_2=part2)
@invocation("string_join", title="String Join", tags=["string", "join"], category="string", version="1.0.0")
class StringJoinInvocation(BaseInvocation):
"""Joins string left to string right"""
string_left: str = InputField(default="", description="String Left", ui_component=UIComponent.Textarea)
string_right: str = InputField(default="", description="String Right", ui_component=UIComponent.Textarea)
def invoke(self, context: InvocationContext) -> StringOutput:
return StringOutput(value=((self.string_left or "") + (self.string_right or "")))
@invocation("string_join_three", title="String Join Three", tags=["string", "join"], category="string", version="1.0.0")
class StringJoinThreeInvocation(BaseInvocation):
"""Joins string left to string middle to string right"""
string_left: str = InputField(default="", description="String Left", ui_component=UIComponent.Textarea)
string_middle: str = InputField(default="", description="String Middle", ui_component=UIComponent.Textarea)
string_right: str = InputField(default="", description="String Right", ui_component=UIComponent.Textarea)
def invoke(self, context: InvocationContext) -> StringOutput:
return StringOutput(value=((self.string_left or "") + (self.string_middle or "") + (self.string_right or "")))
@invocation(
"string_replace", title="String Replace", tags=["string", "replace", "regex"], category="string", version="1.0.0"
)
class StringReplaceInvocation(BaseInvocation):
"""Replaces the search string with the replace string"""
string: str = InputField(default="", description="String to work on", ui_component=UIComponent.Textarea)
search_string: str = InputField(default="", description="String to search for", ui_component=UIComponent.Textarea)
replace_string: str = InputField(
default="", description="String to replace the search", ui_component=UIComponent.Textarea
)
use_regex: bool = InputField(
default=False, description="Use search string as a regex expression (non regex is case insensitive)"
)
def invoke(self, context: InvocationContext) -> StringOutput:
pattern = self.search_string or ""
new_string = self.string or ""
if len(pattern) > 0:
if not self.use_regex:
# Non-regex, so make the match case-insensitive
pattern = "(?i)" + re.escape(pattern)
new_string = re.sub(pattern, (self.replace_string or ""), new_string)
return StringOutput(value=new_string)
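
A short sketch of the two replace modes using the same `re` calls as the diff (the example strings are assumptions):

```python
import re

string, search, replace = "A Cat and a cat", "cat", "dog"

# use_regex=False: the search string is escaped and matched case-insensitively
print(re.sub("(?i)" + re.escape(search), replace, string))  # A dog and a dog

# use_regex=True: the search string is treated as a raw, case-sensitive pattern
print(re.sub(r"c\w+t", replace, string))                    # A Cat and a dog
```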

View File

@ -7,8 +7,8 @@ import numpy as np
from basicsr.archs.rrdbnet_arch import RRDBNet
from PIL import Image
from realesrgan import RealESRGANer
from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.models.image import ImageCategory, ResourceOrigin
from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation

View File

@ -1,13 +1,10 @@
from abc import ABC, abstractmethod
import sqlite3
import threading
from abc import ABC, abstractmethod
from typing import Optional, cast
from invokeai.app.services.image_record_storage import OffsetPaginatedResults
from invokeai.app.services.models.image_record import (
ImageRecord,
deserialize_image_record,
)
from invokeai.app.services.models.image_record import ImageRecord, deserialize_image_record
class BoardImageRecordStorageBase(ABC):

View File

@ -1,12 +1,9 @@
from abc import ABC, abstractmethod
from logging import Logger
from typing import Optional
from invokeai.app.services.board_image_record_storage import BoardImageRecordStorageBase
from invokeai.app.services.board_record_storage import (
BoardRecord,
BoardRecordStorageBase,
)
from invokeai.app.services.board_image_record_storage import BoardImageRecordStorageBase
from invokeai.app.services.board_record_storage import BoardRecord, BoardRecordStorageBase
from invokeai.app.services.image_record_storage import ImageRecordStorageBase
from invokeai.app.services.models.board_record import BoardDTO
from invokeai.app.services.urls import UrlServiceBase

View File

@ -1,15 +1,13 @@
import sqlite3
import threading
import uuid
from abc import ABC, abstractmethod
from typing import Optional, Union, cast
import sqlite3
from pydantic import BaseModel, Extra, Field
from invokeai.app.services.image_record_storage import OffsetPaginatedResults
from invokeai.app.services.models.board_record import (
BoardRecord,
deserialize_board_record,
)
from pydantic import BaseModel, Field, Extra
from invokeai.app.services.models.board_record import BoardRecord, deserialize_board_record
class BoardChanges(BaseModel, extra=Extra.forbid):

View File

@ -1,17 +1,10 @@
from abc import ABC, abstractmethod
from logging import Logger
from invokeai.app.services.board_image_record_storage import BoardImageRecordStorageBase
from invokeai.app.services.board_images import board_record_to_dto
from invokeai.app.services.board_record_storage import (
BoardChanges,
BoardRecordStorageBase,
)
from invokeai.app.services.image_record_storage import (
ImageRecordStorageBase,
OffsetPaginatedResults,
)
from invokeai.app.services.board_record_storage import BoardChanges, BoardRecordStorageBase
from invokeai.app.services.image_record_storage import ImageRecordStorageBase, OffsetPaginatedResults
from invokeai.app.services.models.board_record import BoardDTO
from invokeai.app.services.urls import UrlServiceBase

View File

@ -2,8 +2,5 @@
Init file for InvokeAI configure package
"""
from .invokeai_config import ( # noqa F401
InvokeAIAppConfig,
get_invokeai_config,
)
from .base import PagingArgumentParser # noqa F401
from .invokeai_config import InvokeAIAppConfig, get_invokeai_config # noqa F401

View File

@ -9,15 +9,17 @@ the command line.
"""
from __future__ import annotations
import argparse
import os
import pydoc
import sys
from argparse import ArgumentParser
from omegaconf import OmegaConf, DictConfig, ListConfig
from pathlib import Path
from typing import ClassVar, Dict, List, Literal, Union, get_args, get_origin, get_type_hints
from omegaconf import DictConfig, ListConfig, OmegaConf
from pydantic import BaseSettings
from typing import ClassVar, Dict, List, Literal, Union, get_origin, get_type_hints, get_args
class PagingArgumentParser(argparse.ArgumentParser):

View File

@ -172,9 +172,9 @@ from __future__ import annotations
import os
from pathlib import Path
from typing import ClassVar, Dict, List, Literal, Union, get_type_hints, Optional
from typing import ClassVar, Dict, List, Literal, Optional, Union, get_type_hints
from omegaconf import OmegaConf, DictConfig
from omegaconf import DictConfig, OmegaConf
from pydantic import Field, parse_obj_as
from .base import InvokeAISettings

View File

@ -1,12 +1,11 @@
from ..invocations.latent import LatentsToImageInvocation, DenoiseLatentsInvocation
from ..invocations.image import ImageNSFWBlurInvocation
from ..invocations.noise import NoiseInvocation
from ..invocations.compel import CompelInvocation
from ..invocations.image import ImageNSFWBlurInvocation
from ..invocations.latent import DenoiseLatentsInvocation, LatentsToImageInvocation
from ..invocations.noise import NoiseInvocation
from ..invocations.primitives import IntegerInvocation
from .graph import Edge, EdgeConnection, ExposedNodeInput, ExposedNodeOutput, Graph, LibraryGraph
from .item_storage import ItemStorageABC
default_text_to_image_graph_id = "539b2af5-2b4d-4d8c-8071-e54a3255fc74"

View File

@ -1,14 +1,10 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from typing import Any, Optional
from invokeai.app.models.image import ProgressImage
from invokeai.app.services.model_manager_service import BaseModelType, ModelInfo, ModelType, SubModelType
from invokeai.app.util.misc import get_timestamp
from invokeai.app.services.model_manager_service import (
BaseModelType,
ModelType,
SubModelType,
ModelInfo,
)
class EventServiceBase:

View File

@ -14,12 +14,12 @@ from ..invocations import * # noqa: F401 F403
from ..invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
invocation,
Input,
InputField,
InvocationContext,
OutputField,
UIType,
invocation,
invocation_output,
)

View File

@ -9,11 +9,7 @@ from pydantic import BaseModel, Field
from pydantic.generics import GenericModel
from invokeai.app.models.image import ImageCategory, ResourceOrigin
from invokeai.app.services.models.image_record import (
ImageRecord,
ImageRecordChanges,
deserialize_image_record,
)
from invokeai.app.services.models.image_record import ImageRecord, ImageRecordChanges, deserialize_image_record
T = TypeVar("T", bound=BaseModel)

View File

@ -26,12 +26,7 @@ from invokeai.app.services.image_record_storage import (
OffsetPaginatedResults,
)
from invokeai.app.services.item_storage import ItemStorageABC
from invokeai.app.services.models.image_record import (
ImageDTO,
ImageRecord,
ImageRecordChanges,
image_record_to_dto,
)
from invokeai.app.services.models.image_record import ImageDTO, ImageRecord, ImageRecordChanges, image_record_to_dto
from invokeai.app.services.resource_name import NameServiceBase
from invokeai.app.services.urls import UrlServiceBase
from invokeai.app.util.metadata import get_metadata_graph_from_raw_session

View File

@ -3,9 +3,9 @@
import time
from abc import ABC, abstractmethod
from queue import Queue
from typing import Optional
from pydantic import BaseModel, Field
from typing import Optional
class InvocationQueueItem(BaseModel):

View File

@ -1,21 +1,23 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654) and the InvokeAI Team
from __future__ import annotations
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from logging import Logger
from invokeai.app.services.board_images import BoardImagesServiceABC
from invokeai.app.services.boards import BoardServiceABC
from invokeai.app.services.images import ImageServiceABC
from invokeai.app.services.invocation_stats import InvocationStatsServiceBase
from invokeai.app.services.model_manager_service import ModelManagerServiceBase
from invokeai.app.services.events import EventServiceBase
from invokeai.app.services.latent_storage import LatentsStorageBase
from invokeai.app.services.invocation_queue import InvocationQueueABC
from invokeai.app.services.item_storage import ItemStorageABC
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.events import EventServiceBase
from invokeai.app.services.graph import GraphExecutionState, LibraryGraph
from invokeai.app.services.images import ImageServiceABC
from invokeai.app.services.invocation_queue import InvocationQueueABC
from invokeai.app.services.invocation_stats import InvocationStatsServiceBase
from invokeai.app.services.invoker import InvocationProcessorABC
from invokeai.app.services.item_storage import ItemStorageABC
from invokeai.app.services.latent_storage import LatentsStorageBase
from invokeai.app.services.model_manager_service import ModelManagerServiceBase
class InvocationServices:

View File

@ -28,22 +28,22 @@ The abstract base class for this class is InvocationStatsServiceBase. An impleme
writes to the system log is stored in InvocationServices.performance_statistics.
"""
import psutil
import time
from abc import ABC, abstractmethod
from contextlib import AbstractContextManager
from dataclasses import dataclass, field
from typing import Dict
import psutil
import torch
import invokeai.backend.util.logging as logger
from invokeai.backend.model_management.model_cache import CacheStats
from ..invocations.baseinvocation import BaseInvocation
from .graph import GraphExecutionState
from .item_storage import ItemStorageABC
from .model_manager_service import ModelManagerService
from invokeai.backend.model_management.model_cache import CacheStats
# size of GIG in bytes
GIG = 1073741824

View File

@ -3,7 +3,7 @@
from abc import ABC, abstractmethod
from pathlib import Path
from queue import Queue
from typing import Dict, Union, Optional
from typing import Dict, Optional, Union
import torch

View File

@ -5,27 +5,28 @@ from __future__ import annotations
from abc import ABC, abstractmethod
from logging import Logger
from pathlib import Path
from pydantic import Field
from typing import Literal, Optional, Union, Callable, List, Tuple, TYPE_CHECKING
from types import ModuleType
from invokeai.backend.model_management import (
ModelManager,
BaseModelType,
ModelType,
SubModelType,
ModelInfo,
AddModelResult,
SchedulerPredictionType,
ModelMerger,
MergeInterpolationMethod,
ModelNotFoundException,
)
from invokeai.backend.model_management.model_search import FindModels
from invokeai.backend.model_management.model_cache import CacheStats
from typing import TYPE_CHECKING, Callable, List, Literal, Optional, Tuple, Union
import torch
from pydantic import Field
from invokeai.app.models.exceptions import CanceledException
from invokeai.backend.model_management import (
AddModelResult,
BaseModelType,
MergeInterpolationMethod,
ModelInfo,
ModelManager,
ModelMerger,
ModelNotFoundException,
ModelType,
SchedulerPredictionType,
SubModelType,
)
from invokeai.backend.model_management.model_cache import CacheStats
from invokeai.backend.model_management.model_search import FindModels
from ...backend.util import choose_precision, choose_torch_device
from .config import InvokeAIAppConfig

View File

@ -1,6 +1,8 @@
from typing import Optional, Union
from datetime import datetime
from typing import Optional, Union
from pydantic import Field
from invokeai.app.util.misc import get_iso_timestamp
from invokeai.app.util.model_exclude_null import BaseModelExcludeNull

View File

@ -1,6 +1,6 @@
import uuid
from abc import ABC, abstractmethod
from enum import Enum, EnumMeta
import uuid
class ResourceType(str, Enum, metaclass=EnumMeta):

View File

@ -1,12 +1,12 @@
from typing import Union
import torch
import numpy as np
import cv2
from PIL import Image
from diffusers.utils import PIL_INTERPOLATION
from einops import rearrange
import cv2
import numpy as np
import torch
from controlnet_aux.util import HWC3
from diffusers.utils import PIL_INTERPOLATION
from einops import rearrange
from PIL import Image
###################################################################
# Copy of scripts/lvminthin.py from Mikubill/sd-webui-controlnet

View File

@ -1,4 +1,5 @@
import datetime
import numpy as np

View File

@ -1,6 +1,6 @@
from typing import Any
from pydantic import BaseModel
from pydantic import BaseModel
"""
We want to exclude null values from objects that make their way to the client.

View File

@ -1,11 +1,13 @@
import torch
from PIL import Image
from invokeai.app.models.exceptions import CanceledException
from invokeai.app.models.image import ProgressImage
from ..invocations.baseinvocation import InvocationContext
from ...backend.util.util import image_to_dataURL
from ...backend.stable_diffusion import PipelineIntermediateState
from ...backend.model_management.models import BaseModelType
from ...backend.stable_diffusion import PipelineIntermediateState
from ...backend.util.util import image_to_dataURL
from ..invocations.baseinvocation import InvocationContext
def sample_to_lowres_estimated_image(samples, latent_rgb_factors, smooth_matrix=None):

View File

@ -1,4 +1,5 @@
import os
from PIL import Image

View File

@ -1,5 +1,5 @@
"""
Initialization file for invokeai.backend
"""
from .model_management import ModelManager, ModelCache, BaseModelType, ModelType, SubModelType, ModelInfo # noqa: F401
from .model_management import BaseModelType, ModelCache, ModelInfo, ModelManager, ModelType, SubModelType # noqa: F401
from .model_management.models import SilenceWarnings # noqa: F401

View File

@ -3,12 +3,13 @@ This module defines a singleton object, "invisible_watermark" that
wraps the invisible watermark model. It respects the global "invisible_watermark"
configuration variable, that allows the watermarking to be supressed.
"""
import numpy as np
import cv2
from PIL import Image
import numpy as np
from imwatermark import WatermarkEncoder
from invokeai.app.services.config import InvokeAIAppConfig
from PIL import Image
import invokeai.backend.util.logging as logger
from invokeai.app.services.config import InvokeAIAppConfig
config = InvokeAIAppConfig.get_config()

View File

@ -5,6 +5,7 @@ wraps the actual patchmatch object. It respects the global
be suppressed or deferred
"""
import numpy as np
import invokeai.backend.util.logging as logger
from invokeai.app.services.config import InvokeAIAppConfig

View File

@ -5,10 +5,11 @@ configuration variable, that allows the checker to be supressed.
"""
import numpy as np
from PIL import Image
from invokeai.backend import SilenceWarnings
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.backend.util.devices import choose_torch_device
import invokeai.backend.util.logging as logger
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.backend import SilenceWarnings
from invokeai.backend.util.devices import choose_torch_device
config = InvokeAIAppConfig.get_config()

@ -2,9 +2,8 @@
Check that the invokeai_root is correctly configured and exit if not.
"""
import sys
from invokeai.app.services.config import (
InvokeAIAppConfig,
)
from invokeai.app.services.config import InvokeAIAppConfig
def check_invokeai_root(config: InvokeAIAppConfig):
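
Per the docstring, the function only needs to confirm that the runtime directory looks like a complete install before the configuration TUI starts. A minimal sketch, assuming the root should contain an invokeai.yaml and a models directory (the actual artefacts checked may differ):

    import sys
    from pathlib import Path


    def check_root(root: Path) -> None:
        missing = [name for name in ("invokeai.yaml", "models") if not (root / name).exists()]
        if missing:
            print(f"InvokeAI root at {root} is missing: {', '.join(missing)}", file=sys.stderr)
            sys.exit(1)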

@ -6,68 +6,56 @@
#
# Coauthor: Kevin Turner http://github.com/keturn
#
import sys
import argparse
import io
import os
import psutil
import shutil
import sys
import textwrap
import torch
import traceback
import yaml
import warnings
from argparse import Namespace
from enum import Enum
from pathlib import Path
from shutil import get_terminal_size
from typing import get_type_hints, get_args, Any
from typing import Any, get_args, get_type_hints
from urllib import request
import npyscreen
import transformers
import omegaconf
import psutil
import torch
import transformers
import yaml
from diffusers import AutoencoderKL
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from huggingface_hub import HfFolder
from huggingface_hub import login as hf_hub_login
from omegaconf import OmegaConf
from pydantic.error_wrappers import ValidationError
from tqdm import tqdm
from transformers import (
CLIPTextModel,
CLIPTextConfig,
CLIPTokenizer,
AutoFeatureExtractor,
BertTokenizerFast,
)
import invokeai.configs as configs
from transformers import AutoFeatureExtractor, BertTokenizerFast, CLIPTextConfig, CLIPTextModel, CLIPTokenizer
from invokeai.app.services.config import (
InvokeAIAppConfig,
)
import invokeai.configs as configs
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.backend.install.legacy_arg_parsing import legacy_parser
from invokeai.backend.install.model_install_backend import InstallSelections, ModelInstall, hf_download_from_pretrained
from invokeai.backend.model_management.model_probe import BaseModelType, ModelType
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.frontend.install.model_install import addModelsForm, process_and_execute
# TO DO - Move all the frontend code into invokeai.frontend.install
from invokeai.frontend.install.widgets import (
SingleSelectColumnsSimple,
MultiSelectColumns,
CenteredButtonPress,
FileBox,
set_min_terminal_size,
CyclingForm,
MIN_COLS,
MIN_LINES,
CenteredButtonPress,
CyclingForm,
FileBox,
MultiSelectColumns,
SingleSelectColumnsSimple,
WindowTooSmallException,
set_min_terminal_size,
)
from invokeai.backend.install.legacy_arg_parsing import legacy_parser
from invokeai.backend.install.model_install_backend import (
hf_download_from_pretrained,
InstallSelections,
ModelInstall,
)
from invokeai.backend.model_management.model_probe import ModelType, BaseModelType
from pydantic.error_wrappers import ValidationError
warnings.filterwarnings("ignore")
transformers.logging.set_verbosity_error()

@ -3,33 +3,26 @@ Migrate the models directory and models.yaml file from an existing
InvokeAI 2.3 installation to 3.0.0.
"""
import os
import argparse
import os
import shutil
import yaml
import transformers
import diffusers
import warnings
from dataclasses import dataclass
from pathlib import Path
from omegaconf import OmegaConf, DictConfig
from typing import Union
from diffusers import StableDiffusionPipeline, AutoencoderKL
import diffusers
import transformers
import yaml
from diffusers import AutoencoderKL, StableDiffusionPipeline
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from transformers import (
CLIPTextModel,
CLIPTokenizer,
AutoFeatureExtractor,
BertTokenizerFast,
)
from omegaconf import DictConfig, OmegaConf
from transformers import AutoFeatureExtractor, BertTokenizerFast, CLIPTextModel, CLIPTokenizer
import invokeai.backend.util.logging as logger
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.backend.model_management import ModelManager
from invokeai.backend.model_management.model_probe import ModelProbe, ModelType, BaseModelType, ModelProbeInfo
from invokeai.backend.model_management.model_probe import BaseModelType, ModelProbe, ModelProbeInfo, ModelType
warnings.filterwarnings("ignore")
transformers.logging.set_verbosity_error()

@ -7,23 +7,23 @@ import warnings
from dataclasses import dataclass, field
from pathlib import Path
from tempfile import TemporaryDirectory
from typing import Optional, List, Dict, Callable, Union, Set
from typing import Callable, Dict, List, Optional, Set, Union
import requests
import torch
from diffusers import DiffusionPipeline
from diffusers import logging as dlogging
import torch
from huggingface_hub import hf_hub_url, HfFolder, HfApi
from huggingface_hub import HfApi, HfFolder, hf_hub_url
from omegaconf import OmegaConf
from tqdm import tqdm
import invokeai.configs as configs
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.backend.model_management import ModelManager, ModelType, BaseModelType, ModelVariantType, AddModelResult
from invokeai.backend.model_management.model_probe import ModelProbe, SchedulerPredictionType, ModelProbeInfo
from invokeai.backend.model_management import AddModelResult, BaseModelType, ModelManager, ModelType, ModelVariantType
from invokeai.backend.model_management.model_probe import ModelProbe, ModelProbeInfo, SchedulerPredictionType
from invokeai.backend.util import download_with_resume
from invokeai.backend.util.devices import torch_dtype, choose_torch_device
from invokeai.backend.util.devices import choose_torch_device, torch_dtype
from ..util.logging import InvokeAILogger
warnings.filterwarnings("ignore")

@ -1,15 +1,19 @@
"""
Initialization file for invokeai.backend.model_management
"""
from .model_manager import ModelManager, ModelInfo, AddModelResult, SchedulerPredictionType # noqa: F401
from .model_cache import ModelCache # noqa: F401
# This import must be first
from .model_manager import ModelManager, ModelInfo, AddModelResult, SchedulerPredictionType # noqa: F401 isort: split
from .lora import ModelPatcher, ONNXModelPatcher # noqa: F401
from .model_cache import ModelCache # noqa: F401
from .models import ( # noqa: F401
BaseModelType,
ModelType,
SubModelType,
ModelVariantType,
ModelNotFoundException,
DuplicateModelException,
ModelNotFoundException,
ModelType,
ModelVariantType,
SubModelType,
)
from .model_merge import ModelMerger, MergeInterpolationMethod # noqa: F401
# This import must be last
from .model_merge import ModelMerger, MergeInterpolationMethod # noqa: F401 isort: split
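
The "must be first" / "must be last" comments work together with isort's split action comment: a trailing "isort: split" closes a sorting section, so isort alphabetises each section on its own and never moves an import across the boundary. A short sketch of the pattern with a hypothetical package, mirroring the hunk above:

    # __init__.py of a hypothetical package with an order-sensitive import
    from .model_manager import ModelManager  # noqa: F401 isort: split

    # Everything below the split is sorted independently, so .lora and
    # .model_cache can never be hoisted above .model_manager by a formatting pass.
    from .lora import ModelPatcher  # noqa: F401
    from .model_cache import ModelCache  # noqa: F401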

@ -25,12 +25,7 @@ from typing import Optional, Union
import requests
import torch
from diffusers.models import (
AutoencoderKL,
ControlNetModel,
PriorTransformer,
UNet2DConditionModel,
)
from diffusers.models import AutoencoderKL, ControlNetModel, PriorTransformer, UNet2DConditionModel
from diffusers.pipelines.latent_diffusion.pipeline_latent_diffusion import LDMBertConfig, LDMBertModel
from diffusers.pipelines.paint_by_example import PaintByExampleImageEncoder
from diffusers.pipelines.pipeline_utils import DiffusionPipeline
@ -64,6 +59,7 @@ from transformers import (
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.backend.util.logging import InvokeAILogger
from .models import BaseModelType, ModelVariantType
try:
@ -1203,8 +1199,8 @@ def download_from_original_stable_diffusion_ckpt(
StableDiffusionControlNetPipeline,
StableDiffusionInpaintPipeline,
StableDiffusionPipeline,
StableDiffusionXLPipeline,
StableDiffusionXLImg2ImgPipeline,
StableDiffusionXLPipeline,
StableUnCLIPImg2ImgPipeline,
StableUnCLIPPipeline,
)

@ -2,8 +2,8 @@ from __future__ import annotations
import copy
from contextlib import contextmanager
from typing import Optional, Dict, Tuple, Any, Union, List
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union
import numpy as np
import torch
@ -14,7 +14,6 @@ from transformers import CLIPTextModel, CLIPTokenizer
from .models.lora import LoRAModel
"""
loras = [
(lora_model1, 0.7),
@ -307,9 +306,10 @@ class TextualInversionManager(BaseTextualInversionManager):
class ONNXModelPatcher:
from .models.base import IAIOnnxRuntimeModel
from diffusers import OnnxRuntimeModel
from .models.base import IAIOnnxRuntimeModel
@classmethod
@contextmanager
def apply_lora_unet(

@ -17,18 +17,19 @@ context. Use like this:
"""
import gc
import hashlib
import os
import sys
import hashlib
from contextlib import suppress
from dataclasses import dataclass, field
from pathlib import Path
from typing import Dict, Union, types, Optional, Type, Any
from typing import Any, Dict, Optional, Type, Union, types
import torch
import invokeai.backend.util.logging as logger
from .models import BaseModelType, ModelType, SubModelType, ModelBase
from .models import BaseModelType, ModelBase, ModelType, SubModelType
# Maximum size of the cache, in gigs
# Default is roughly enough to hold three fp16 diffusers models in RAM simultaneously

@ -234,8 +234,8 @@ import textwrap
import types
from dataclasses import dataclass
from pathlib import Path
from shutil import rmtree, move
from typing import Optional, List, Literal, Tuple, Union, Dict, Set, Callable
from shutil import move, rmtree
from typing import Callable, Dict, List, Literal, Optional, Set, Tuple, Union
import torch
import yaml
@ -246,20 +246,21 @@ from pydantic import BaseModel, Field
import invokeai.backend.util.logging as logger
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.backend.util import CUDA_DEVICE, Chdir
from .model_cache import ModelCache, ModelLocker
from .model_search import ModelSearch
from .models import (
BaseModelType,
ModelType,
SubModelType,
ModelError,
SchedulerPredictionType,
MODEL_CLASSES,
ModelConfigBase,
ModelNotFoundException,
InvalidModelException,
BaseModelType,
DuplicateModelException,
InvalidModelException,
ModelBase,
ModelConfigBase,
ModelError,
ModelNotFoundException,
ModelType,
SchedulerPredictionType,
SubModelType,
)
# We are only starting to number the config file with release 3.

@ -9,13 +9,14 @@ Copyright (c) 2023 Lincoln Stein and the InvokeAI Development Team
import warnings
from enum import Enum
from pathlib import Path
from typing import List, Optional, Union
from diffusers import DiffusionPipeline
from diffusers import logging as dlogging
from typing import List, Union, Optional
import invokeai.backend.util.logging as logger
from ...backend.model_management import ModelManager, ModelType, BaseModelType, ModelVariantType, AddModelResult
from ...backend.model_management import AddModelResult, BaseModelType, ModelManager, ModelType, ModelVariantType
class MergeInterpolationMethod(str, Enum):

@ -1,20 +1,20 @@
import json
from dataclasses import dataclass
from pathlib import Path
from typing import Callable, Literal, Union, Dict, Optional
from typing import Callable, Dict, Literal, Optional, Union
import safetensors.torch
import torch
from diffusers import ModelMixin, ConfigMixin
from diffusers import ConfigMixin, ModelMixin
from picklescan.scanner import scan_file_path
from .models import (
BaseModelType,
InvalidModelException,
ModelType,
ModelVariantType,
SchedulerPredictionType,
SilenceWarnings,
InvalidModelException,
)
from .models.base import read_checkpoint_meta
from .util import lora_token_vector_length

@ -5,8 +5,8 @@ Abstract base class for recursive directory search for models.
import os
from abc import ABC, abstractmethod
from typing import List, Set, types
from pathlib import Path
from typing import List, Set, types
import invokeai.backend.util.logging as logger

@ -1,29 +1,30 @@
import inspect
from enum import Enum
from pydantic import BaseModel
from typing import Literal, get_origin
from pydantic import BaseModel
from .base import ( # noqa: F401
BaseModelType,
ModelType,
SubModelType,
DuplicateModelException,
InvalidModelException,
ModelBase,
ModelConfigBase,
ModelError,
ModelNotFoundException,
ModelType,
ModelVariantType,
SchedulerPredictionType,
ModelError,
SilenceWarnings,
ModelNotFoundException,
InvalidModelException,
DuplicateModelException,
SubModelType,
)
from .stable_diffusion import StableDiffusion1Model, StableDiffusion2Model
from .sdxl import StableDiffusionXLModel
from .vae import VaeModel
from .lora import LoRAModel
from .controlnet import ControlNetModel # TODO:
from .textual_inversion import TextualInversionModel
from .lora import LoRAModel
from .sdxl import StableDiffusionXLModel
from .stable_diffusion import StableDiffusion1Model, StableDiffusion2Model
from .stable_diffusion_onnx import ONNXStableDiffusion1Model, ONNXStableDiffusion2Model
from .textual_inversion import TextualInversionModel
from .vae import VaeModel
MODEL_CLASSES = {
BaseModelType.StableDiffusion1: {

@ -1,29 +1,25 @@
import inspect
import json
import os
import sys
import typing
import inspect
import warnings
from abc import ABCMeta, abstractmethod
from contextlib import suppress
from enum import Enum
from pathlib import Path
from picklescan.scanner import scan_file_path
from typing import Any, Callable, Dict, Generic, List, Literal, Optional, Type, TypeVar, Union
import torch
import numpy as np
import onnx
import safetensors.torch
from diffusers import DiffusionPipeline, ConfigMixin
from onnx import numpy_helper
from onnxruntime import (
InferenceSession,
SessionOptions,
get_available_providers,
)
from pydantic import BaseModel, Field
from typing import List, Dict, Optional, Type, Literal, TypeVar, Generic, Callable, Any, Union
import torch
from diffusers import ConfigMixin, DiffusionPipeline
from diffusers import logging as diffusers_logging
from onnx import numpy_helper
from onnxruntime import InferenceSession, SessionOptions, get_available_providers
from picklescan.scanner import scan_file_path
from pydantic import BaseModel, Field
from transformers import logging as transformers_logging

@ -1,23 +1,26 @@
import os
import torch
from enum import Enum
from pathlib import Path
from typing import Optional, Literal
from typing import Literal, Optional
import torch
import invokeai.backend.util.logging as logger
from invokeai.app.services.config import InvokeAIAppConfig
from .base import (
BaseModelType,
EmptyConfigLoader,
InvalidModelException,
ModelBase,
ModelConfigBase,
BaseModelType,
ModelNotFoundException,
ModelType,
SubModelType,
EmptyConfigLoader,
calc_model_size_by_fs,
calc_model_size_by_data,
calc_model_size_by_fs,
classproperty,
InvalidModelException,
ModelNotFoundException,
)
from invokeai.app.services.config import InvokeAIAppConfig
import invokeai.backend.util.logging as logger
class ControlNetModelFormat(str, Enum):

@ -1,19 +1,21 @@
import os
import json
import os
from enum import Enum
from pydantic import Field
from typing import Literal, Optional
from omegaconf import OmegaConf
from pydantic import Field
from .base import (
ModelConfigBase,
BaseModelType,
DiffusersModel,
InvalidModelException,
ModelConfigBase,
ModelType,
ModelVariantType,
DiffusersModel,
read_checkpoint_meta,
classproperty,
InvalidModelException,
read_checkpoint_meta,
)
from omegaconf import OmegaConf
class StableDiffusionXLModelFormat(str, Enum):

@ -1,26 +1,29 @@
import os
import json
import os
from enum import Enum
from pydantic import Field
from pathlib import Path
from typing import Literal, Optional, Union
from diffusers import StableDiffusionInpaintPipeline, StableDiffusionPipeline
from .base import (
ModelConfigBase,
BaseModelType,
ModelType,
ModelVariantType,
DiffusersModel,
SilenceWarnings,
read_checkpoint_meta,
classproperty,
InvalidModelException,
ModelNotFoundException,
)
from .sdxl import StableDiffusionXLModel
from omegaconf import OmegaConf
from pydantic import Field
import invokeai.backend.util.logging as logger
from invokeai.app.services.config import InvokeAIAppConfig
from omegaconf import OmegaConf
from .base import (
BaseModelType,
DiffusersModel,
InvalidModelException,
ModelConfigBase,
ModelNotFoundException,
ModelType,
ModelVariantType,
SilenceWarnings,
classproperty,
read_checkpoint_meta,
)
from .sdxl import StableDiffusionXLModel
class StableDiffusion1ModelFormat(str, Enum):
@ -272,8 +275,8 @@ def _convert_ckpt_and_cache(
return output_path
# to avoid circular import errors
from ..convert_ckpt_to_diffusers import convert_ckpt_to_diffusers
from ...util.devices import choose_torch_device, torch_dtype
from ..convert_ckpt_to_diffusers import convert_ckpt_to_diffusers
model_base_to_model_type = {
BaseModelType.StableDiffusion1: "FrozenCLIPEmbedder",

Some files were not shown because too many files have changed in this diff.