From 446d87516a96ced930ff222f0f4ece1bcf0f332c Mon Sep 17 00:00:00 2001
From: Millun Atluri
Date: Wed, 19 Jul 2023 14:34:03 +1000
Subject: [PATCH 01/51] * Updated contribution guide
 * Updated nav to be in new order prioritizing more commonly used tabs
 * Added set nav in mkdocs.yaml

---
 docs/contributing/CONTRIBUTING.md             | 53 +++++-------
 .../contribution_guides/development.md        | 80 +++++++++++++++++++
 .../contributingToFrontend.md                 | 65 +++++++++++++++
 .../development_guides/contributingToNodes.md |  1 +
 .../contribution_guides/documentation.md      | 13 +++
 .../contribution_guides/translation.md        | 19 +++++
 .../contribution_guides/tutorials.md          | 11 +++
 mkdocs.yml                                    | 60 +++++++++++++-
 8 files changed, 270 insertions(+), 32 deletions(-)
 create mode 100644 docs/contributing/contribution_guides/development.md
 create mode 100644 docs/contributing/contribution_guides/development_guides/contributingToFrontend.md
 create mode 100644 docs/contributing/contribution_guides/development_guides/contributingToNodes.md
 create mode 100644 docs/contributing/contribution_guides/documentation.md
 create mode 100644 docs/contributing/contribution_guides/translation.md
 create mode 100644 docs/contributing/contribution_guides/tutorials.md

diff --git a/docs/contributing/CONTRIBUTING.md b/docs/contributing/CONTRIBUTING.md
index 3360faed70..8f0b2d6134 100644
--- a/docs/contributing/CONTRIBUTING.md
+++ b/docs/contributing/CONTRIBUTING.md
@@ -1,53 +1,44 @@
+# How to Contribute
+
 ## Welcome to Invoke AI
-
-We're thrilled to have you here and we're excited for you to contribute.
-
 Invoke AI originated as a project built by the community, and that vision carries forward today as we aim to build the best pro-grade tools available. We work together to incorporate the latest in AI/ML research, making these tools available in over 20 languages to artists and creatives around the world as part of our fully permissive OSS project designed for individual users to self-host and use.
-Here are some guidelines to help you get started: -### Technical Prerequisites +## Contributing to Invoke AI +Anyone who wishes to contribute to InvokeAI, whether features, bug fixes, code cleanup, testing, code reviews, documentation or translation is very much encouraged to do so. -Front-end: You'll need a working knowledge of React and TypeScript. +To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the GitHub discussion board. -Back-end: Depending on the scope of your contribution, you may need to know SQLite, FastAPI, Python, and Socketio. Also, a good majority of the backend logic involved in processing images is built in a modular way using a concept called "Nodes", which are isolated functions that carry out individual, discrete operations. This design allows for easy contributions of novel pipelines and capabilities. +### Areas of contribution: -### How to Submit Contributions +#### Development +If you’d like to help with development, please see our [development guide](docs/contributing/.contribution_guides/development.md). If you’re unfamiliar with contributing to open source projects, there is a tutorial contained within the development guide. -To start contributing, please follow these steps: +#### Documentation +If you’d like to help with documentation, please see our [documentation guide](docs/contributing/.contribution_guides/documenation.md). -1. Familiarize yourself with our roadmap and open projects to see where your skills and interests align. These documents can serve as a source of inspiration. -2. Open a Pull Request (PR) with a clear description of the feature you're adding or the problem you're solving. Make sure your contribution aligns with the project's vision. -3. Adhere to general best practices. This includes assuming interoperability with other nodes, keeping the scope of your functions as small as possible, and organizing your code according to our architecture documents. 
+#### Translation +If you'd like to help with translation, please see our [translation guide](docs/contributing/.contribution_guides/translation.md). -### Types of Contributions We're Looking For +#### Tutorials +Please reach out to @imic or @hipsterusername on [Discord](https://discord.gg/ZmtBAhwWhy) to help create tutorials for InvokeAI. -We welcome all contributions that improve the project. Right now, we're especially looking for: +We hope you enjoy using our software as much as we enjoy creating it, and we hope that some of those of you who are reading this will elect to become part of our contributor community. -1. Quality of life (QOL) enhancements on the front-end. -2. New backend capabilities added through nodes. -3. Incorporating additional optimizations from the broader open-source software community. -### Communication and Decision-making Process +### Contributors -Project maintainers and code owners review PRs to ensure they align with the project's goals. They may provide design or architectural guidance, suggestions on user experience, or provide more significant feedback on the contribution itself. Expect to receive feedback on your submissions, and don't hesitate to ask questions or propose changes. +This project is a combined effort of dedicated people from across the world. [Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for their time, hard work and effort. -For more robust discussions, or if you're planning to add capabilities not currently listed on our roadmap, please reach out to us on our Discord server. That way, we can ensure your proposed contribution aligns with the project's direction before you start writing code. +### Code of Conduct -### Code of Conduct and Contribution Expectations +The InvokeAI community is a welcoming place, and we want your help in maintaining that. Please review our Code of Conduct **** to learn more. 
-We want everyone in our community to have a positive experience. To facilitate this, we've established a code of conduct and a statement of values that we expect all contributors to adhere to. Please take a moment to review these documents—they're essential to maintaining a respectful and inclusive environment. +### Support -By making a contribution to this project, you certify that: +For support, please use this repository's [GitHub Issues](https://github.com/invoke-ai/InvokeAI/issues), or join the [Discord](https://discord.gg/ZmtBAhwWhy). -1. The contribution was created in whole or in part by you and you have the right to submit it under the open-source license indicated in this project’s GitHub repository; or -2. The contribution is based upon previous work that, to the best of your knowledge, is covered under an appropriate open-source license and you have the right under that license to submit that work with modifications, whether created in whole or in part by you, under the same open-source license (unless you are permitted to submit under a different license); or -3. The contribution was provided directly to you by some other person who certified (1) or (2) and you have not modified it; or -4. You understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information you submit with it, including your sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open-source license(s) involved. - -This disclaimer is not a license and does not grant any rights or permissions. You must obtain necessary permissions and licenses, including from third parties, before contributing to this project. - -This disclaimer is provided "as is" without warranty of any kind, whether expressed or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, or non-infringement. 
In no event shall the authors or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the contribution or the use or other dealings in the contribution.
+Original portions of the software are Copyright (c) 2023 by respective contributors.

---

diff --git a/docs/contributing/contribution_guides/development.md b/docs/contributing/contribution_guides/development.md
new file mode 100644
index 0000000000..33e565606f
--- /dev/null
+++ b/docs/contributing/contribution_guides/development.md
@@ -0,0 +1,80 @@
+# Development
+
+## **What do I need to know to help?**
+
+If you are looking to help with a code contribution, InvokeAI uses several different technologies under the hood: Python (Pydantic, FastAPI, diffusers) and Typescript (React, Redux Toolkit, ChakraUI, Mantine, Konva). Familiarity with StableDiffusion and image generation concepts is helpful, but not essential.
+
+For more information, please review our area specific documentation:
+
+* #### [InvokeAI Architecture](../ARCHITECTURE.md)
+* #### [Frontend Documentation](development_guides/contributingToFrontend.md)
+* #### [Node Documentation](../INVOCATIONS.md)
+
+If you don't feel ready to make a code contribution yet, no problem! You can also help out in other ways, such as [documentation](documentation.md) or [translation](translation.md).
+
+There are two paths to making a development contribution:
+
+1. Choosing an open issue to address. Open issues can be found in the [Issues](https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen) section of the InvokeAI repository. These are tagged by the issue type (bug, enhancement, etc.) along with the “good first issues” tag denoting if they are suitable for first time contributors.
+    1. Additional items can be found on our roadmap <******************************link to roadmap>******************************.
The roadmap is organized in terms of priority, and contains features of varying size and complexity. If there is an in-flight item you’d like to help with, reach out to the contributor assigned to the item to see how you can help.
+2. Opening a new issue or feature to add. **Please make sure you have searched through existing issues before creating new ones.**
+
+*Regardless of what you choose, please post in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord before you start development in order to confirm that the issue or feature is aligned with the current direction of the project. We value our contributors’ time and effort and want to ensure that no one’s time is being misspent.*
+
+## **How do I make a contribution?**
+
+Never made an open source contribution before? Wondering how contributions work in our project? Here's a quick rundown!
+
+1. Find a [good first issue](https://github.com/invoke-ai/InvokeAI/contribute) that you are interested in addressing or a feature that you would like to add. Then, reach out to our team in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord to ensure you are set up for success.
+2. Fork the [InvokeAI](https://github.com/invoke-ai/InvokeAI) repository to your GitHub profile. This means that you will have a copy of the repository under **your-GitHub-username/InvokeAI**.
+3. Clone the repository to your local machine using:
+
+```bash
+**git clone** https://github.com/your-GitHub-username/InvokeAI.git
+```
+
+1. Create a new branch for your fix using:
+
+```bash
+**git checkout -b branch-name-here**
+```
+
+1. Make the appropriate changes for the issue you are trying to address or the feature that you want to add.
+2. Add the file contents of the changed files to the "snapshot" git uses to manage the state of the project, also known as the index:
+
+```bash
+**git add insert-paths-of-changed-files-here**
+```
+
+1. 
Store the contents of the index with a descriptive message.
+
+```bash
+**git commit -m "Insert a short message of the changes made here"**
+```
+
+1. Push the changes to the remote repository using
+
+```bash
+**git push origin branch-name-here**
+```
+
+1. Submit a pull request to the **main** branch of the InvokeAI repository.
+2. Title the pull request with a short description of the changes made and the issue or bug number associated with your change. For example, you can title a pull request like so "Added more log outputting to resolve #1234".
+3. In the description of the pull request, explain the changes that you made, any issues you think exist with the pull request you made, and any questions you have for the maintainer. It's OK if your pull request is not perfect (no pull request is), the reviewer will be able to help you fix any problems and improve it!
+4. Wait for the pull request to be reviewed by other collaborators.
+5. Make changes to the pull request if the reviewer(s) recommend them.
+6. Celebrate your success after your pull request is merged!
+
+If you’d like to learn more about contributing to Open Source projects, here is a [Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github).
+
+## **Where can I go for help?**
+
+If you need help, you can ask questions in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord.
+
+For frontend related work, **@psychedelicious** is the best person to reach out to.
+
+For backend related work, please reach out to **@psychedelicious, @blessedcoolant** or **@lstein**.
+
+## **What does the Code of Conduct mean for me?**
+
+Our [Code of Conduct](../../CODE_OF_CONDUCT.md) means that you are responsible for treating everyone on the project with respect and courtesy regardless of their identity. 
If you are the victim of any inappropriate behavior or comments as described in our Code of Conduct, we are here for you and will do our best to ensure that the abuser is reprimanded appropriately, per our code.
+
diff --git a/docs/contributing/contribution_guides/development_guides/contributingToFrontend.md b/docs/contributing/contribution_guides/development_guides/contributingToFrontend.md
new file mode 100644
index 0000000000..ee0a4ef1cb
--- /dev/null
+++ b/docs/contributing/contribution_guides/development_guides/contributingToFrontend.md
@@ -0,0 +1,65 @@
+# Contributing to the Frontend
+
+# InvokeAI Web UI
+
+- [InvokeAI Web UI](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#invokeai-web-ui)
+  - [Stack](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#stack)
+  - [Contributing](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#contributing)
+    - [Dev Environment](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#dev-environment)
+    - [Production builds](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#production-builds)
+
+The UI is a fairly straightforward Typescript React app, with the Unified Canvas being more complex.
+
+Code is located in `invokeai/frontend/web/` for review.
+
+## Stack
+
+State management is Redux via [Redux Toolkit](https://github.com/reduxjs/redux-toolkit). We lean heavily on RTK:
+
+- `createAsyncThunk` for HTTP requests
+- `createEntityAdapter` for fetching images and models
+- `createListenerMiddleware` for workflows
+
+The API client and associated types are generated from the OpenAPI schema. See API_CLIENT.md.
+
+Communication with the server is a mix of HTTP and [socket.io](https://github.com/socketio/socket.io-client) (with a simple socket.io redux middleware to help).
+
+[Chakra-UI](https://github.com/chakra-ui/chakra-ui) & Mantine for components and styling.
+ +[Konva](https://github.com/konvajs/react-konva) for the canvas, but we are pushing the limits of what is feasible with it (and HTML canvas in general). We plan to rebuild it with [PixiJS](https://github.com/pixijs/pixijs) to take advantage of WebGL's improved raster handling. + +[Vite](https://vitejs.dev/) for bundling. + +Localisation is via [i18next](https://github.com/i18next/react-i18next), but translation happens on our [Weblate](https://hosted.weblate.org/engage/invokeai/) project. Only the English source strings should be changed on this repo. + +## Contributing + +Thanks for your interest in contributing to the InvokeAI Web UI! + +We encourage you to ping @psychedelicious and @blessedcoolant on [Discord](https://discord.gg/ZmtBAhwWhy) if you want to contribute, just to touch base and ensure your work doesn't conflict with anything else going on. The project is very active. + +### Dev Environment + +Install [node](https://nodejs.org/en/download/) and [yarn classic](https://classic.yarnpkg.com/lang/en/). + +From `invokeai/frontend/web/` run `yarn install` to get everything set up. + +Start everything in dev mode: + +1. Start the dev server: `yarn dev` +2. Start the InvokeAI Nodes backend: `python scripts/invokeai-web.py # run from the repo root` +3. Point your browser to the dev server address e.g. [http://localhost:5173/](http://localhost:5173/) + +### VSCode Remote Dev + +We've noticed an intermittent issue with the VSCode Remote Dev port forwarding. If you use this feature of VSCode, you may intermittently click the Invoke button and then get nothing until the request times out. Suggest disabling the IDE's port forwarding feature and doing it manually via SSH: + +`ssh -L 9090:localhost:9090 -L 5173:localhost:5173 user@host` + +### Production builds + +For a number of technical and logistical reasons, we need to commit UI build artefacts to the repo. 
+ +If you submit a PR, there is a good chance we will ask you to include a separate commit with a build of the app. + +To build for production, run `yarn build`. \ No newline at end of file diff --git a/docs/contributing/contribution_guides/development_guides/contributingToNodes.md b/docs/contributing/contribution_guides/development_guides/contributingToNodes.md new file mode 100644 index 0000000000..f5f0962bd0 --- /dev/null +++ b/docs/contributing/contribution_guides/development_guides/contributingToNodes.md @@ -0,0 +1 @@ +# Contributing to Nodes \ No newline at end of file diff --git a/docs/contributing/contribution_guides/documentation.md b/docs/contributing/contribution_guides/documentation.md new file mode 100644 index 0000000000..1a0ec03cf7 --- /dev/null +++ b/docs/contributing/contribution_guides/documentation.md @@ -0,0 +1,13 @@ +# Documentation + +Documentation is an important part of any open source project. It provides a clear and concise way to communicate how the software works, how to use it, and how to troubleshoot issues. Without proper documentation, it can be difficult for users to understand the purpose and functionality of the project. + +## Contributing + +All documentation is maintained in the InvokeAI GitHub repository. If you come across documentation that is out of date or incorrect, please submit a pull request with the necessary changes. + +When updating or creating documentation, please keep in mind InvokeAI is a tool for everyone, not just those who have familiarity with generative art. + +## Help & Questions + +Please ping @imic1 or @hipsterusername in the [Discord](https://discord.com/channels/1020123559063990373/1049495067846524939) if you have any questions. 
\ No newline at end of file
diff --git a/docs/contributing/contribution_guides/translation.md b/docs/contributing/contribution_guides/translation.md
new file mode 100644
index 0000000000..669e403346
--- /dev/null
+++ b/docs/contributing/contribution_guides/translation.md
@@ -0,0 +1,19 @@
+# Translation
+
+InvokeAI uses [Weblate](https://weblate.org/) for translation. Weblate is a FOSS project providing a scalable translation service. Weblate automates the tedious parts of managing translation of a growing project, and the service is generously provided at no cost to FOSS projects like InvokeAI.
+
+## Contributing
+
+If you'd like to contribute by adding or updating a translation, please visit our [Weblate project](https://hosted.weblate.org/engage/invokeai/). You'll need to sign in with your GitHub account (a number of other accounts are supported, including Google).
+
+Once signed in, select a language and then the Web UI component. From here you can Browse and Translate strings from English to your chosen language. Zen mode offers a simpler translation experience.
+
+Your changes will be attributed to you in the automated PR process; you don't need to do anything else.
+
+## Help & Questions
+
+Please check Weblate's [documentation](https://docs.weblate.org/en/latest/index.html) or ping @Harvestor on [Discord](https://discord.com/channels/1020123559063990373/1049495067846524939) if you have any questions.
+
+## Thanks
+
+Thanks to the InvokeAI community for their efforts to translate the project!
\ No newline at end of file
diff --git a/docs/contributing/contribution_guides/tutorials.md b/docs/contributing/contribution_guides/tutorials.md
new file mode 100644
index 0000000000..0d550e7023
--- /dev/null
+++ b/docs/contributing/contribution_guides/tutorials.md
@@ -0,0 +1,11 @@
+# Tutorials
+
+Tutorials help new & existing users expand their ability to use InvokeAI to the full extent of our features and services.
+
+Currently, we have a set of tutorials available on our [YouTube channel](https://www.youtube.com/@invokeai), but as InvokeAI continues to evolve with new updates, we want to ensure that we are giving our users the resources they need to succeed.
+
+Tutorials can be in the form of videos or article walkthroughs on a subject of your choice. We recommend focusing tutorials on the key image generation methods, or on a specific component within one of the image generation methods.
+
+## Contributing
+
+Please reach out to @imic or @hipsterusername on [Discord](https://discord.gg/ZmtBAhwWhy) to help create tutorials for InvokeAI.
\ No newline at end of file
diff --git a/mkdocs.yml b/mkdocs.yml
index ebd9ec0acf..cf1daaa779 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -12,7 +12,7 @@ repo_url: 'https://github.com/invoke-ai/InvokeAI'
 edit_uri: edit/main/docs/
 
 # Copyright
-copyright: Copyright © 2022 InvokeAI Team
+copyright: Copyright © 2023 InvokeAI Team
 
 # Configuration
 theme:
@@ -35,8 +35,11 @@ theme:
   features:
     - navigation.instant
     - navigation.tabs
+    - navigation.tabs.sticky
    - navigation.top
     - navigation.tracking
+    - navigation.indexes
+    - navigation.path
     - search.highlight
     - search.suggest
     - toc.integrate
@@ -95,3 +98,58 @@ plugins:
       'installation/INSTALL_DOCKER.md': 'installation/040_INSTALL_DOCKER.md'
       'installation/INSTALLING_MODELS.md': 'installation/050_INSTALLING_MODELS.md'
       'installation/INSTALL_PATCHMATCH.md': 'installation/060_INSTALL_PATCHMATCH.md'
+
+nav:
+  - Home: 'index.md'
+  - Installation:
+      - Overview: 'installation/index.md'
+      - Installing with the Automated Installer: 'installation/010_INSTALL_AUTOMATED.md'
+      - Installing manually: 'installation/020_INSTALL_MANUAL.md'
+      - NVIDIA CUDA / AMD ROCm: 'installation/030_INSTALL_CUDA_AND_ROCM.md'
+      - Installing with Docker: 'installation/040_INSTALL_DOCKER.md'
+      - Installing Models: 'installation/050_INSTALLING_MODELS.md'
+      - Installing PyPatchMatch: 'installation/060_INSTALL_PATCHMATCH.md'
+      - Installing 
xFormers: 'installation/070_INSTALL_XFORMERS.md'
+      - Developers Documentation: 'installation/Developers_documentation/BUILDING_BINARY_INSTALLERS.md'
+      - Deprecated Documentation:
+          - Binary Installer: 'installation/deprecated_documentation/INSTALL_BINARY.md'
+          - Running InvokeAI on Google Colab: 'installation/deprecated_documentation/INSTALL_JUPYTER.md'
+          - Manual Installation on Linux: 'installation/deprecated_documentation/INSTALL_LINUX.md'
+          - Manual Installation on macOS: 'installation/deprecated_documentation/INSTALL_macOS.md'
+          - Manual Installation on Windows: 'installation/deprecated_documentation/INSTALL_WINDOWS.md'
+          - Installing Invoke with pip: 'installation/deprecated_documentation/INSTALL_PCP.md'
+          - Source Installer: 'installation/deprecated_documentation/INSTALL_SOURCE.md'
+  - Features:
+      - Overview: 'features/index.md'
+      - Concepts: 'features/CONCEPTS.md'
+      - Configuration: 'features/CONFIGURATION.md'
+      - ControlNet: 'features/CONTROLNET.md'
+      - Image-to-Image: 'features/IMG2IMG.md'
+      - Controlling Logging: 'features/LOGGING.md'
+      - Model Merging: 'features/MODEL_MERGING.md'
+      - Nodes Editor (Experimental): 'features/NODES.md'
+      - NSFW Checker: 'features/NSFW.md'
+      - Postprocessing: 'features/POSTPROCESS.md'
+      - Prompting Features: 'features/PROMPTS.md'
+      - Training: 'features/TRAINING.md'
+      - Unified Canvas: 'features/UNIFIED_CANVAS.md'
+      - Variations: 'features/VARIATIONS.md'
+      - InvokeAI Web Server: 'features/WEB.md'
+      - WebUI Hotkeys: 'features/WEBUIHOTKEYS.md'
+      - Other: 'features/OTHER.md'
+  - Contributing:
+      - How to Contribute: 'contributing/CONTRIBUTING.md'
+      - Development: 'contributing/contribution_guides/development.md'
+      - Documentation: 'contributing/contribution_guides/documentation.md'
+      - Translation: 'contributing/contribution_guides/translation.md'
+      - Tutorials: 'contributing/contribution_guides/tutorials.md'
+  - Changelog: 'CHANGELOG.md'
+  - Deprecated:
+      - Command Line Interface: 'deprecated/CLI.md'
+      - Embiggen: 
'deprecated/EMBIGGEN.md'
+      - Inpainting: 'deprecated/INPAINTING.md'
+      - Outpainting: 'deprecated/OUTPAINTING.md'
+  - Help:
+      - Sampler Convergence: 'help/SAMPLER_CONVERGENCE.md'

From ff74370eda53f920ed6f25af9a04f0296048a93d Mon Sep 17 00:00:00 2001
From: Millun Atluri
Date: Wed, 19 Jul 2023 15:39:29 +1000
Subject: [PATCH 02/51] =?UTF-8?q?=E2=80=A2=20Updated=20best=20practices=20?=
 =?UTF-8?q?=E2=80=A2=20Updated=20index=20with=20new=20contribution=20guide?=
 =?UTF-8?q?=20link?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 docs/contributing/CONTRIBUTING.md             |  6 ++---
 .../contribution_guides/development.md        | 25 +++++++++++++------
 docs/index.md                                 | 10 +++-----
 mkdocs.yml                                    |  1 +
 4 files changed, 24 insertions(+), 18 deletions(-)

diff --git a/docs/contributing/CONTRIBUTING.md b/docs/contributing/CONTRIBUTING.md
index 8f0b2d6134..fe18345ec9 100644
--- a/docs/contributing/CONTRIBUTING.md
+++ b/docs/contributing/CONTRIBUTING.md
@@ -12,10 +12,10 @@ To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the
 ### Areas of contribution:
 
 #### Development
-If you’d like to help with development, please see our [development guide](docs/contributing/.contribution_guides/development.md). If you’re unfamiliar with contributing to open source projects, there is a tutorial contained within the development guide.
+If you’d like to help with development, please see our [development guide](contribution_guides/development.md). If you’re unfamiliar with contributing to open source projects, there is a tutorial contained within the development guide.
 
 #### Documentation
-If you’d like to help with documentation, please see our [documentation guide](docs/contributing/.contribution_guides/documenation.md).
+If you’d like to help with documentation, please see our [documentation guide](contribution_guides/documentation.md). 
#### Translation
 If you'd like to help with translation, please see our [translation guide](docs/contributing/.contribution_guides/translation.md).
@@ -32,7 +32,7 @@ This project is a combined effort of dedicated people from across the world. [C
 ### Code of Conduct
 
-The InvokeAI community is a welcoming place, and we want your help in maintaining that. Please review our Code of Conduct **** to learn more.
+The InvokeAI community is a welcoming place, and we want your help in maintaining that. Please review our [Code of Conduct](../../CODE_OF_CONDUCT.md) to learn more.
 
 ### Support

diff --git a/docs/contributing/contribution_guides/development.md b/docs/contributing/contribution_guides/development.md
index 33e565606f..6c42efd051 100644
--- a/docs/contributing/contribution_guides/development.md
+++ b/docs/contributing/contribution_guides/development.md
@@ -9,6 +9,7 @@ For more information, please review our area specific documentation:
 * #### [InvokeAI Architecture](../ARCHITECTURE.md)
 * #### [Frontend Documentation](development_guides/contributingToFrontend.md)
 * #### [Node Documentation](../INVOCATIONS.md)
+* #### [Local Development](../LOCAL_DEVELOPMENT.md)
 
 If you don't feel ready to make a code contribution yet, no problem! You can also help out in other ways, such as [documentation](documentation.md) or [translation](translation.md).
 
@@ -20,44 +21,52 @@ There are two paths to making a development contribution:
 
 *Regardless of what you choose, please post in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord before you start development in order to confirm that the issue or feature is aligned with the current direction of the project. We value our contributors’ time and effort and want to ensure that no one’s time is being misspent.*
 
+## Best Practices:
+* Keep your pull requests small. Smaller pull requests are more likely to be accepted and merged
+* Comments! 
Commenting your code helps reviewers easily understand your contribution
+* Use Python and TypeScript’s typing systems, and consider using an editor with [LSP](https://microsoft.github.io/language-server-protocol/) support to streamline development
+* Make all communications public. This ensures knowledge is shared with the whole community
+
 ## **How do I make a contribution?**
 
 Never made an open source contribution before? Wondering how contributions work in our project? Here's a quick rundown!
 
+Before starting these steps, ensure you have your local environment [configured for development](../LOCAL_DEVELOPMENT.md).
+
+1. Find a [good first issue](https://github.com/invoke-ai/InvokeAI/contribute) that you are interested in addressing or a feature that you would like to add. Then, reach out to our team in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord to ensure you are set up for success.
 2. Fork the [InvokeAI](https://github.com/invoke-ai/InvokeAI) repository to your GitHub profile. This means that you will have a copy of the repository under **your-GitHub-username/InvokeAI**.
 3. Clone the repository to your local machine using:
 
 ```bash
-**git clone** https://github.com/your-GitHub-username/InvokeAI.git
+git clone https://github.com/your-GitHub-username/InvokeAI.git
 ```
 
 1. Create a new branch for your fix using:
 
 ```bash
-**git checkout -b branch-name-here**
+git checkout -b branch-name-here
 ```
 
 1. Make the appropriate changes for the issue you are trying to address or the feature that you want to add.
 2. 
Add the file contents of the changed files to the "snapshot" git uses to manage the state of the project, also known as the index:
 
 ```bash
-**git add insert-paths-of-changed-files-here**
+git add insert-paths-of-changed-files-here
 ```
 
 1. Store the contents of the index with a descriptive message.
 
 ```bash
-**git commit -m "Insert a short message of the changes made here"**
+git commit -m "Insert a short message of the changes made here"
 ```
 
 1. Push the changes to the remote repository using
 
 ```bash
-**git push origin branch-name-here**
+git push origin branch-name-here
 ```
 
-1. Submit a pull request to the **main** branch of the InvokeAI repository.
+1. Submit a pull request to the **development** branch of the InvokeAI repository.
 2. Title the pull request with a short description of the changes made and the issue or bug number associated with your change. For example, you can title a pull request like so "Added more log outputting to resolve #1234".
 3. In the description of the pull request, explain the changes that you made, any issues you think exist with the pull request you made, and any questions you have for the maintainer. It's OK if your pull request is not perfect (no pull request is), the reviewer will be able to help you fix any problems and improve it!
 4. Wait for the pull request to be reviewed by other collaborators.
@@ -76,5 +85,5 @@ For backend related work, please reach out to **@psychedelicious, @blessedcoolan
 ## **What does the Code of Conduct mean for me?**
 
-Our [Code of Conduct](../../CODE_OF_CONDUCT.md) means that you are responsible for treating everyone on the project with respect and courtesy regardless of their identity. If you are the victim of any inappropriate behavior or comments as described in our Code of Conduct, we are here for you and will do our best to ensure that the abuser is reprimanded appropriately, per our code. 
+Our [Code of Conduct](CODE_OF_CONDUCT.md) means that you are responsible for treating everyone on the project with respect and courtesy regardless of their identity. If you are the victim of any inappropriate behavior or comments as described in our Code of Conduct, we are here for you and will do our best to ensure that the abuser is reprimanded appropriately, per our code.
diff --git a/docs/index.md b/docs/index.md
index 229cb6cf05..c2085c9f78 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -221,14 +221,10 @@ get solutions for common installation problems and other issues.
 Anyone who wishes to contribute to this project, whether documentation,
 features, bug fixes, code cleanup, testing, or code reviews, is very much
-encouraged to do so. If you are unfamiliar with how to contribute to GitHub
-projects, here is a
-[Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github).
+encouraged to do so.
 
-A full set of contribution guidelines, along with templates, are in progress,
-but for now the most important thing is to **make your pull request against the
-"development" branch**, and not against "main". This will help keep public
-breakage to a minimum and will allow you to propose more radical changes.
+[Please take a look at our Contribution documentation to learn more about contributing to InvokeAI.
+](contributing/CONTRIBUTING.md)
 
 ## :octicons-person-24: Contributors
diff --git a/mkdocs.yml b/mkdocs.yml
index cf1daaa779..42672fc0ab 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -143,6 +143,7 @@ nav:
       - Documentation: 'contributing/contribution_guides/documentation.md'
       - Translation: 'contributing/contribution_guides/translation.md'
       - Tutorials: 'contributing/contribution_guides/tutorials.md'
+      - Local Development: contributing/LOCAL_DEVELOPMENT.md
   - Changelog: 'CHANGELOG.md'
   - Deprecated:
       - Command Line Interface: 'deprecated/CLI.md'

From 6ba48af0a924ef5729dd1e85f8cb3682ef0a0b15 Mon Sep 17 00:00:00 2001
From: Millun Atluri
Date: Wed, 19 Jul 2023 22:04:17 +1000
Subject: [PATCH 03/51] Added community node documentation

---
 docs/nodes/communityNodes.md | 28 ++++++++++++++++++++++++
 docs/nodes/overview.md       | 41 ++++++++++++++++++++++++++++++++++++
 mkdocs.yml                   | 23 ++++++++++++++------
 3 files changed, 85 insertions(+), 7 deletions(-)
 create mode 100644 docs/nodes/communityNodes.md
 create mode 100644 docs/nodes/overview.md

diff --git a/docs/nodes/communityNodes.md b/docs/nodes/communityNodes.md
new file mode 100644
index 0000000000..2c6941d3c8
--- /dev/null
+++ b/docs/nodes/communityNodes.md
@@ -0,0 +1,28 @@
+# Community Nodes
+
+These are nodes that have been developed by the community for the community. If you're not sure what a node is, you can learn more about nodes [here](overview.md).
+
+If you'd like to submit a node for the community, please refer to the [node creation overview](overview.md).
+
+To download a node, simply download the `.py` node file from the link and add it to the `/app/invocations/` folder in your Invoke AI install location. Along with the node, an example node graph should be provided to help you get started with the node.
+
+To use a community node graph, download the `.json` node graph file and load it into Invoke AI via the **Load Nodes** button on the Node Editor.
+
+## List of Nodes
+
+--------------------------------
+### Super Cool Node Template
+
+**Description:** This node allows you to do super cool things with InvokeAI.
+
+**Node Link:** https://github.com/invoke-ai/InvokeAI/fake_node.py
+
+**Example Node Graph:** https://github.com/invoke-ai/InvokeAI/fake_node_graph.json
+
+**Output Examples**
+
+![Invoke AI](https://invoke-ai.github.io/InvokeAI/assets/invoke_ai_banner.png)
+
+
+## Help
+If you run into any issues with a node, please post in the [InvokeAI Discord](https://discord.gg/ZmtBAhwWhy).
\ No newline at end of file
diff --git a/docs/nodes/overview.md b/docs/nodes/overview.md
new file mode 100644
index 0000000000..f7dc2384dd
--- /dev/null
+++ b/docs/nodes/overview.md
@@ -0,0 +1,41 @@
+# Nodes
+## What are Nodes?
+A node is simply a single operation that takes in some inputs and gives
+out some outputs. We can then chain multiple nodes together to create more
+complex functionality. All InvokeAI features are added through nodes.
+
+This means nodes can be used to easily extend the image generation capabilities of InvokeAI, and allow you to build workflows to suit your needs.
+
+You can read more about nodes and the node editor [here](../features/NODES.md).
+
+
+## Downloading Nodes
+To download a new node, visit our list of [Community Nodes](communityNodes.md). These are nodes that have been created by the community, for the community.
+
+
+## Contributing Nodes
+
+To learn about creating a new node, please visit our [Node creation documentation](../contributing/INVOCATIONS.md).
+
+Once you’ve created a node and confirmed that it behaves as expected locally, follow these steps:
+- Make sure the node is contained in a new Python (.py) file
+- Submit a pull request with a link to your node in GitHub against the `nodes` branch to add the node to the [Community Nodes](communityNodes.md) list
+  - Make sure you are following the template below and have provided all relevant details about the node and what it does.
+- A maintainer will review the pull request and node. If the node is aligned with the direction of the project, you might be asked for permission to include it in the core project.
+
+### Community Node Template
+
+```markdown
+--------------------------------
+### Super Cool Node Template
+
+**Description:** This node allows you to do super cool things with InvokeAI.
+
+**Node Link:** https://github.com/invoke-ai/InvokeAI/fake_node.py
+
+**Example Node Graph:** https://github.com/invoke-ai/InvokeAI/fake_node_graph.json
+
+**Output Examples**
+
+![InvokeAI](https://invoke-ai.github.io/InvokeAI/assets/invoke_ai_banner.png)
+```
diff --git a/mkdocs.yml b/mkdocs.yml
index 42672fc0ab..7d3e0e0b85 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -104,10 +104,10 @@ nav:
   - Installation:
       - Overview: 'installation/index.md'
       - Installing with the Automated Installer: 'installation/010_INSTALL_AUTOMATED.md'
-      - Installing manually: 'installation/010_INSTALL_AUTOMATED.md'
+      - Installing manually: 'installation/020_INSTALL_MANUAL.md'
       - NVIDIA Cuda / AMD ROCm: 'installation/030_INSTALL_CUDA_AND_ROCM.md'
      - Installing with Docker: 'installation/040_INSTALL_DOCKER.md'
-      - Installing Models: 'installation/040_INSTALL_DOCKER.md'
+      - Installing Models: 'installation/050_INSTALLING_MODELS.md'
      - Installing PyPatchMatch: 'installation/060_INSTALL_PATCHMATCH.md'
      - Installing xFormers: 'installation/070_INSTALL_XFORMERS.md'
      - Developers Documentation: 'installation/Developers_documentation/BUILDING_BINARY_INSTALLERS.md'
@@ -115,10 +115,13 @@ nav:
      - Binary Installer: 'installation/deprecated_documentation/INSTALL_BINARY.md'
      - Running InvokeAI on Google Colab: 'installation/deprecated_documentation/INSTALL_JUPYTER.md'
      - Manual Installation on Linux: 'installation/deprecated_documentation/INSTALL_LINUX.md'
-      - Manual Installation on macOS: 'installation/deprecated_documentation/INSTALL_macOS.md'
+      - Manual
Installation on Windows: 'installation/deprecated_documentation/INSTALL_WINDOWS.md' - Installing Invoke with pip: 'installation/deprecated_documentation/INSTALL_PCP.md' - Source Installer: 'installation/deprecated_documentation/INSTALL_SOURCE.md' + - Community Nodes: + - Community Nodes: 'nodes/communityNodes.md' + - Overview: 'nodes/overview.md' - Features: - Overview: 'features/index.md' - Concepts: 'features/CONCEPTS.md' @@ -139,18 +142,24 @@ nav: - Other: 'features/OTHER.md' - Contributing: - How to Contribute: 'contributing/CONTRIBUTING.md' - - Development: 'contributing/contribution_guides/development.md' + - Development: + - Overview: 'contributing/contribution_guides/development.md' + - InvokeAI Architecture: 'contributing/ARCHITECTURE.md' + - Frontend Documentation: 'contributing/contribution_guides/development_guides/contributingToFrontend.md' + - Local Development: 'contributing/LOCAL_DEVELOPMENT.md' - Documentation: 'contributing/contribution_guides/documentation.md' - Translation: 'contributing/contribution_guides/translation.md' - Tutorials: 'contributing/contribution_guides/tutorials.md' - - Local Development: contributing/LOCAL_DEVELOPMENT.md - Changelog: 'CHANGELOG.md' - Deprecated: - Command Line Interface: 'deprecated/CLI.md' - Embiggen: 'deprecated/EMBIGGEN.md' - - Inpainting: 'deprecated/INPAINTING.MD' - - Outpainting: 'deprecated/OUTPAINTING.MD' + - Inpainting: 'deprecated/INPAINTING.md' + - Outpainting: 'deprecated/OUTPAINTING.md' - Help: - Sampler Convergence: 'help/SAMPLER_CONVERGENCE.md' + - Other: + - Contributors: 'other/CONTRIBUTORS.md' + - CompViz-README: 'other/README-CompViz.md' From c291b82b94bdc82fa3cd1fac87f7d47f9be0abef Mon Sep 17 00:00:00 2001 From: Millun Atluri Date: Wed, 19 Jul 2023 23:56:38 +1000 Subject: [PATCH 04/51] Added contribution disclaimer --- docs/contributing/CONTRIBUTING.md | 13 ++++++++++++- 1 file changed, 12 insertions(+), 1 deletion(-) diff --git a/docs/contributing/CONTRIBUTING.md 
b/docs/contributing/CONTRIBUTING.md index fe18345ec9..4b38e30633 100644 --- a/docs/contributing/CONTRIBUTING.md +++ b/docs/contributing/CONTRIBUTING.md @@ -32,7 +32,18 @@ This project is a combined effort of dedicated people from across the world. [C ### Code of Conduct -The InvokeAI community is a welcoming place, and we want your help in maintaining that. Please review our [Code of Conduct](../../CODE_OF_CONDUCT.md) to learn more. +The InvokeAI community is a welcoming place, and we want your help in maintaining that. Please review our [Code of Conduct](../../CODE_OF_CONDUCT.md) to learn more - it's essential to maintaining a respectful and inclusive environment. + +By making a contribution to this project, you certify that: + +1. The contribution was created in whole or in part by you and you have the right to submit it under the open-source license indicated in this project’s GitHub repository; or +2. The contribution is based upon previous work that, to the best of your knowledge, is covered under an appropriate open-source license and you have the right under that license to submit that work with modifications, whether created in whole or in part by you, under the same open-source license (unless you are permitted to submit under a different license); or +3. The contribution was provided directly to you by some other person who certified (1) or (2) and you have not modified it; or +4. You understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information you submit with it, including your sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open-source license(s) involved. + +This disclaimer is not a license and does not grant any rights or permissions. You must obtain necessary permissions and licenses, including from third parties, before contributing to this project. 
+ +This disclaimer is provided "as is" without warranty of any kind, whether expressed or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, or non-infringement. In no event shall the authors or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the contribution or the use or other dealings in the contribution. ### Support From c557402dbb631a417db16a3f3bf2bc83ac92ff88 Mon Sep 17 00:00:00 2001 From: psychedelicious <4822129+psychedelicious@users.noreply.github.com> Date: Sun, 9 Jul 2023 21:21:14 +1000 Subject: [PATCH 05/51] feat(api): use next available port Resolves #3515 @ebr @brandonrising can't imagine this would cause issues but just FYI --- invokeai/app/api_app.py | 14 +++++++++++++- 1 file changed, 13 insertions(+), 1 deletion(-) diff --git a/invokeai/app/api_app.py b/invokeai/app/api_app.py index 4afa3aa161..fa4643d934 100644 --- a/invokeai/app/api_app.py +++ b/invokeai/app/api_app.py @@ -4,6 +4,7 @@ import sys from inspect import signature import uvicorn +import socket from fastapi import FastAPI from fastapi.middleware.cors import CORSMiddleware @@ -193,9 +194,20 @@ app.mount("/", ) def invoke_api(): + def find_port(port: int): + """Find a port not in use starting at given port""" + # Taken from https://waylonwalker.com/python-find-available-port/, thanks Waylon! 
+ # https://github.com/WaylonWalker + with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: + if s.connect_ex(("localhost", port)) == 0: + return find_port(port=port + 1) + else: + return port + + port = find_port(app_config.port) # Start our own event loop for eventing usage loop = asyncio.new_event_loop() - config = uvicorn.Config(app=app, host=app_config.host, port=app_config.port, loop=loop) + config = uvicorn.Config(app=app, host=app_config.host, port=port, loop=loop) # Use access_log to turn off logging server = uvicorn.Server(config) loop.run_until_complete(server.serve()) From 509514f11d41054132753d9561831917351e6342 Mon Sep 17 00:00:00 2001 From: psychedelicious <4822129+psychedelicious@users.noreply.github.com> Date: Thu, 13 Jul 2023 21:42:26 +1000 Subject: [PATCH 06/51] feat(api): display warning when port is in use --- invokeai/app/api_app.py | 2 ++ 1 file changed, 2 insertions(+) diff --git a/invokeai/app/api_app.py b/invokeai/app/api_app.py index fa4643d934..824b697d54 100644 --- a/invokeai/app/api_app.py +++ b/invokeai/app/api_app.py @@ -205,6 +205,8 @@ def invoke_api(): return port port = find_port(app_config.port) + if port != app_config.port: + logger.warn(f"Port {app_config.port} in use, using port {port}") # Start our own event loop for eventing usage loop = asyncio.new_event_loop() config = uvicorn.Config(app=app, host=app_config.host, port=port, loop=loop) From a683379ddac6ce7a4c51992378b4c39a120cc914 Mon Sep 17 00:00:00 2001 From: Millun Atluri Date: Thu, 20 Jul 2023 09:28:21 +1000 Subject: [PATCH 07/51] Updated docs to be more accurate based on Lincoln's feedback --- docs/contributing/contribution_guides/development.md | 2 +- .../development_guides/contributingToFrontend.md | 10 +++++----- .../development_guides/contributingToNodes.md | 1 - docs/nodes/communityNodes.md | 2 +- 4 files changed, 7 insertions(+), 8 deletions(-) delete mode 100644 docs/contributing/contribution_guides/development_guides/contributingToNodes.md diff --git 
a/docs/contributing/contribution_guides/development.md b/docs/contributing/contribution_guides/development.md index 6c42efd051..584fb5a4ed 100644 --- a/docs/contributing/contribution_guides/development.md +++ b/docs/contributing/contribution_guides/development.md @@ -66,7 +66,7 @@ git commit -m "Insert a short message of the changes made here" git push origin branch-name-here ``` -1. Submit a pull request to the **development** branch of the InvokeAI repository. +1. Submit a pull request to the **main** branch of the InvokeAI repository. 2. Title the pull request with a short description of the changes made and the issue or bug number associated with your change. For example, you can title an issue like so "Added more log outputting to resolve #1234". 3. In the description of the pull request, explain the changes that you made, any issues you think exist with the pull request you made, and any questions you have for the maintainer. It's OK if your pull request is not perfect (no pull request is), the reviewer will be able to help you fix any problems and improve it! 4. Wait for the pull request to be reviewed by other collaborators. diff --git a/docs/contributing/contribution_guides/development_guides/contributingToFrontend.md b/docs/contributing/contribution_guides/development_guides/contributingToFrontend.md index ee0a4ef1cb..08f7c69ce7 100644 --- a/docs/contributing/contribution_guides/development_guides/contributingToFrontend.md +++ b/docs/contributing/contribution_guides/development_guides/contributingToFrontend.md @@ -10,7 +10,7 @@ The UI is a fairly straightforward Typescript React app, with the Unified Canvas being more complex. -Code is located in `invokeai/frontend/web/` for review. +Code is located in `invokeai/frontend/web/src` for review. ## Stack @@ -45,10 +45,10 @@ Install [node](https://nodejs.org/en/download/) and [yarn classic](https://cl From `invokeai/frontend/web/` run `yarn install` to get everything set up. Start everything in dev mode: - -1. 
Start the dev server: `yarn dev`
-2. Start the InvokeAI Nodes backend: `python scripts/invokeai-web.py # run from the repo root`
-3. Point your browser to the dev server address e.g. [http://localhost:5173/](http://localhost:5173/)
+1. Ensure your virtual environment is running
+2. Start the dev server: `yarn dev`
+3. Start the InvokeAI Nodes backend: `python scripts/invokeai-web.py # run from the repo root`
+4. Point your browser to the dev server address e.g. [http://localhost:5173/](http://localhost:5173/)
 
 ### VSCode Remote Dev
diff --git a/docs/contributing/contribution_guides/development_guides/contributingToNodes.md b/docs/contributing/contribution_guides/development_guides/contributingToNodes.md
deleted file mode 100644
index f5f0962bd0..0000000000
--- a/docs/contributing/contribution_guides/development_guides/contributingToNodes.md
+++ /dev/null
@@ -1 +0,0 @@
-# Contributing to Nodes
\ No newline at end of file
diff --git a/docs/nodes/communityNodes.md b/docs/nodes/communityNodes.md
index 2c6941d3c8..a17ac05f3b 100644
--- a/docs/nodes/communityNodes.md
+++ b/docs/nodes/communityNodes.md
@@ -4,7 +4,7 @@ These are nodes that have been developed by the community for the community. If
 
 If you'd like to submit a node for the community, please refer to the [node creation overview](overview.md).
 
-To download a node, simply download the `.py` node file from the link and add it to the `/app/invocations/` folder in your Invoke AI install location. Along with the node, an example node graph should be provided to help you get started with the node.
+To download a node, simply download the `.py` node file from the link and add it to the `invoke/app/invocations/` folder in your Invoke AI install location. Along with the node, an example node graph should be provided to help you get started with the node.
 
 To use a community node graph, download the `.json` node graph file and load it into Invoke AI via the **Load Nodes** button on the Node Editor.
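The port-probing helper introduced in PATCH 05 and PATCH 06 above can be exercised on its own. Below is a minimal standalone sketch: the `find_port` body mirrors the `api_app.py` change, while the demo harness around it is illustrative and not part of the patches.

```python
import socket

def find_port(port: int) -> int:
    """Find a port not in use, starting at the given port.

    Same probe as the api_app.py change above: connect_ex() returns 0
    when something is already accepting connections on that port.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        if s.connect_ex(("localhost", port)) == 0:
            return find_port(port + 1)  # port busy, try the next one
        return port

if __name__ == "__main__":
    # Occupy an ephemeral port, then confirm the probe skips past it.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("localhost", 0))
    server.listen(1)
    busy = server.getsockname()[1]
    print(find_port(busy) > busy)  # expect True while `busy` has a listener
    server.close()
```

Like the patched code, this only detects ports that something is actively listening on; it does not reserve the port, so a race remains possible between the probe and uvicorn binding the chosen port.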
From 53ed2521681ba0c605edab6bf56ad754a88dc1d8 Mon Sep 17 00:00:00 2001
From: Millun Atluri
Date: Thu, 20 Jul 2023 09:34:16 +1000
Subject: [PATCH 08/51] Fixed typos in docs

---
 docs/nodes/communityNodes.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/nodes/communityNodes.md b/docs/nodes/communityNodes.md
index a17ac05f3b..8bd9a613a0 100644
--- a/docs/nodes/communityNodes.md
+++ b/docs/nodes/communityNodes.md
@@ -4,7 +4,7 @@ These are nodes that have been developed by the community for the community. If
 
 If you'd like to submit a node for the community, please refer to the [node creation overview](overview.md).
 
-To download a node, simply download the `.py` node file from the link and add it to the `invoke/app/invocations/` folder in your Invoke AI install location. Along with the node, an example node graph should be provided to help you get started with the node.
+To download a node, simply download the `.py` node file from the link and add it to the `invokeai/app/invocations/` folder in your Invoke AI install location. Along with the node, an example node graph should be provided to help you get started with the node.
 
 To use a community node graph, download the `.json` node graph file and load it into Invoke AI via the **Load Nodes** button on the Node Editor.

From a0b5930340db2b363174c2356ba8c06d92a7b0e8 Mon Sep 17 00:00:00 2001
From: Millun Atluri
Date: Thu, 20 Jul 2023 09:35:09 +1000
Subject: [PATCH 09/51] Updated Code of Conduct URL

---
 docs/contributing/CONTRIBUTING.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/contributing/CONTRIBUTING.md b/docs/contributing/CONTRIBUTING.md
index 4b38e30633..415b214f48 100644
--- a/docs/contributing/CONTRIBUTING.md
+++ b/docs/contributing/CONTRIBUTING.md
@@ -32,7 +32,7 @@ This project is a combined effort of dedicated people from across the world. [C
 
 ### Code of Conduct
 
-The InvokeAI community is a welcoming place, and we want your help in maintaining that.
Please review our [Code of Conduct](../../CODE_OF_CONDUCT.md) to learn more - it's essential to maintaining a respectful and inclusive environment. +The InvokeAI community is a welcoming place, and we want your help in maintaining that. Please review our [Code of Conduct](https://github.com/invoke-ai/InvokeAI/blob/main/CODE_OF_CONDUCT.md) to learn more - it's essential to maintaining a respectful and inclusive environment. By making a contribution to this project, you certify that: From 1e5310793ce85939939690e4caacd26240bc4dbc Mon Sep 17 00:00:00 2001 From: Millun Atluri Date: Thu, 20 Jul 2023 09:46:05 +1000 Subject: [PATCH 10/51] Updated PR template --- pull_request_template.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/pull_request_template.md b/pull_request_template.md index e365120f24..04d9a96a99 100644 --- a/pull_request_template.md +++ b/pull_request_template.md @@ -5,6 +5,7 @@ - [ ] Bug Fix - [ ] Optimization - [ ] Documentation Update +- [ ] Community Node ## Have you discussed this change with the InvokeAI team? @@ -12,6 +13,11 @@ - [ ] No, because: +## Have you updated relevant documentation? 
+- [ ] Yes +- [ ] No + + ## Description From 12cae33dcdf037fcd95e60c79545ccbda0cdcc21 Mon Sep 17 00:00:00 2001 From: Lincoln Stein Date: Wed, 19 Jul 2023 20:57:14 -0400 Subject: [PATCH 11/51] fix inpaint model detection (#3843) Co-authored-by: Lincoln Stein --- invokeai/backend/model_management/model_probe.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/invokeai/backend/model_management/model_probe.py b/invokeai/backend/model_management/model_probe.py index c8e33db393..6a65401675 100644 --- a/invokeai/backend/model_management/model_probe.py +++ b/invokeai/backend/model_management/model_probe.py @@ -39,6 +39,7 @@ class ModelProbe(object): CLASS2TYPE = { 'StableDiffusionPipeline' : ModelType.Main, + 'StableDiffusionInpaintPipeline' : ModelType.Main, 'StableDiffusionXLPipeline' : ModelType.Main, 'StableDiffusionXLImg2ImgPipeline' : ModelType.Main, 'AutoencoderKL' : ModelType.Vae, @@ -401,7 +402,7 @@ class PipelineFolderProbe(FolderProbeBase): in_channels = conf['in_channels'] if in_channels == 9: - return ModelVariantType.Inpainting + return ModelVariantType.Inpaint elif in_channels == 5: return ModelVariantType.Depth elif in_channels == 4: From 4e3f58552c92eeee73ba496bc0eec42c1b707d9f Mon Sep 17 00:00:00 2001 From: user1 Date: Wed, 19 Jul 2023 18:52:30 -0700 Subject: [PATCH 12/51] Added controlnet_utils.py with code from lvmin for high quality resize, crop+resize, fill+resize --- invokeai/app/util/controlnet_utils.py | 235 ++++++++++++++++++++++++++ 1 file changed, 235 insertions(+) create mode 100644 invokeai/app/util/controlnet_utils.py diff --git a/invokeai/app/util/controlnet_utils.py b/invokeai/app/util/controlnet_utils.py new file mode 100644 index 0000000000..920bca081b --- /dev/null +++ b/invokeai/app/util/controlnet_utils.py @@ -0,0 +1,235 @@ +import torch +import numpy as np +import cv2 +from PIL import Image +from diffusers.utils import PIL_INTERPOLATION + +from einops import rearrange +from controlnet_aux.util import HWC3, 
resize_image + +################################################################### +# Copy of scripts/lvminthin.py from Mikubill/sd-webui-controlnet +################################################################### +# High Quality Edge Thinning using Pure Python +# Written by Lvmin Zhangu +# 2023 April +# Stanford University +# If you use this, please Cite "High Quality Edge Thinning using Pure Python", Lvmin Zhang, In Mikubill/sd-webui-controlnet. + +lvmin_kernels_raw = [ + np.array([ + [-1, -1, -1], + [0, 1, 0], + [1, 1, 1] + ], dtype=np.int32), + np.array([ + [0, -1, -1], + [1, 1, -1], + [0, 1, 0] + ], dtype=np.int32) +] + +lvmin_kernels = [] +lvmin_kernels += [np.rot90(x, k=0, axes=(0, 1)) for x in lvmin_kernels_raw] +lvmin_kernels += [np.rot90(x, k=1, axes=(0, 1)) for x in lvmin_kernels_raw] +lvmin_kernels += [np.rot90(x, k=2, axes=(0, 1)) for x in lvmin_kernels_raw] +lvmin_kernels += [np.rot90(x, k=3, axes=(0, 1)) for x in lvmin_kernels_raw] + +lvmin_prunings_raw = [ + np.array([ + [-1, -1, -1], + [-1, 1, -1], + [0, 0, -1] + ], dtype=np.int32), + np.array([ + [-1, -1, -1], + [-1, 1, -1], + [-1, 0, 0] + ], dtype=np.int32) +] + +lvmin_prunings = [] +lvmin_prunings += [np.rot90(x, k=0, axes=(0, 1)) for x in lvmin_prunings_raw] +lvmin_prunings += [np.rot90(x, k=1, axes=(0, 1)) for x in lvmin_prunings_raw] +lvmin_prunings += [np.rot90(x, k=2, axes=(0, 1)) for x in lvmin_prunings_raw] +lvmin_prunings += [np.rot90(x, k=3, axes=(0, 1)) for x in lvmin_prunings_raw] + + +def remove_pattern(x, kernel): + objects = cv2.morphologyEx(x, cv2.MORPH_HITMISS, kernel) + objects = np.where(objects > 127) + x[objects] = 0 + return x, objects[0].shape[0] > 0 + + +def thin_one_time(x, kernels): + y = x + is_done = True + for k in kernels: + y, has_update = remove_pattern(y, k) + if has_update: + is_done = False + return y, is_done + + +def lvmin_thin(x, prunings=True): + y = x + for i in range(32): + y, is_done = thin_one_time(y, lvmin_kernels) + if is_done: + break + if 
prunings: + y, _ = thin_one_time(y, lvmin_prunings) + return y + + +def nake_nms(x): + f1 = np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], dtype=np.uint8) + f2 = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=np.uint8) + f3 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.uint8) + f4 = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], dtype=np.uint8) + y = np.zeros_like(x) + for f in [f1, f2, f3, f4]: + np.putmask(y, cv2.dilate(x, kernel=f) == x, x) + return y + + +########################################################################### +# Copied from detectmap_proc method in scripts/detectmap_proc.py in Mikubill/sd-webui-controlnet +# modified for InvokeAI +########################################################################### +# def detectmap_proc(detected_map, module, resize_mode, h, w): +@staticmethod +def np_img_resize( + np_img: np.ndarray, + resize_mode: str, + h: int, + w: int, + device: torch.device = torch.device('cpu') +): + print("in np_img_resize") + # if 'inpaint' in module: + # np_img = np_img.astype(np.float32) + # else: + # np_img = HWC3(np_img) + np_img = HWC3(np_img) + + def safe_numpy(x): + # A very safe method to make sure that Apple/Mac works + y = x + + # below is very boring but do not change these. If you change these Apple or Mac may fail. + y = y.copy() + y = np.ascontiguousarray(y) + y = y.copy() + return y + + def get_pytorch_control(x): + # A very safe method to make sure that Apple/Mac works + y = x + + # below is very boring but do not change these. If you change these Apple or Mac may fail. 
+ y = torch.from_numpy(y) + y = y.float() / 255.0 + y = rearrange(y, 'h w c -> 1 c h w') + y = y.clone() + # y = y.to(devices.get_device_for("controlnet")) + y = y.to(device) + y = y.clone() + return y + + def high_quality_resize(x: np.ndarray, + size): + # Written by lvmin + # Super high-quality control map up-scaling, considering binary, seg, and one-pixel edges + inpaint_mask = None + if x.ndim == 3 and x.shape[2] == 4: + inpaint_mask = x[:, :, 3] + x = x[:, :, 0:3] + + new_size_is_smaller = (size[0] * size[1]) < (x.shape[0] * x.shape[1]) + new_size_is_bigger = (size[0] * size[1]) > (x.shape[0] * x.shape[1]) + unique_color_count = np.unique(x.reshape(-1, x.shape[2]), axis=0).shape[0] + is_one_pixel_edge = False + is_binary = False + if unique_color_count == 2: + is_binary = np.min(x) < 16 and np.max(x) > 240 + if is_binary: + xc = x + xc = cv2.erode(xc, np.ones(shape=(3, 3), dtype=np.uint8), iterations=1) + xc = cv2.dilate(xc, np.ones(shape=(3, 3), dtype=np.uint8), iterations=1) + one_pixel_edge_count = np.where(xc < x)[0].shape[0] + all_edge_count = np.where(x > 127)[0].shape[0] + is_one_pixel_edge = one_pixel_edge_count * 2 > all_edge_count + + if 2 < unique_color_count < 200: + interpolation = cv2.INTER_NEAREST + elif new_size_is_smaller: + interpolation = cv2.INTER_AREA + else: + interpolation = cv2.INTER_CUBIC # Must be CUBIC because we now use nms. 
NEVER CHANGE THIS + + y = cv2.resize(x, size, interpolation=interpolation) + if inpaint_mask is not None: + inpaint_mask = cv2.resize(inpaint_mask, size, interpolation=interpolation) + + if is_binary: + y = np.mean(y.astype(np.float32), axis=2).clip(0, 255).astype(np.uint8) + if is_one_pixel_edge: + y = nake_nms(y) + _, y = cv2.threshold(y, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU) + y = lvmin_thin(y, prunings=new_size_is_bigger) + else: + _, y = cv2.threshold(y, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU) + y = np.stack([y] * 3, axis=2) + + if inpaint_mask is not None: + inpaint_mask = (inpaint_mask > 127).astype(np.float32) * 255.0 + inpaint_mask = inpaint_mask[:, :, None].clip(0, 255).astype(np.uint8) + y = np.concatenate([y, inpaint_mask], axis=2) + + return y + + # if resize_mode == external_code.ResizeMode.RESIZE: + if resize_mode == "just_resize": # RESIZE + print("just resizing") + np_img = high_quality_resize(np_img, (w, h)) + np_img = safe_numpy(np_img) + return get_pytorch_control(np_img), np_img + + old_h, old_w, _ = np_img.shape + old_w = float(old_w) + old_h = float(old_h) + k0 = float(h) / old_h + k1 = float(w) / old_w + + safeint = lambda x: int(np.round(x)) + + # if resize_mode == external_code.ResizeMode.OUTER_FIT: + if resize_mode == "fill_resize": # OUTER_FIT + print("fill + resizing") + k = min(k0, k1) + borders = np.concatenate([np_img[0, :, :], np_img[-1, :, :], np_img[:, 0, :], np_img[:, -1, :]], axis=0) + high_quality_border_color = np.median(borders, axis=0).astype(np_img.dtype) + if len(high_quality_border_color) == 4: + # Inpaint hijack + high_quality_border_color[3] = 255 + high_quality_background = np.tile(high_quality_border_color[None, None], [h, w, 1]) + np_img = high_quality_resize(np_img, (safeint(old_w * k), safeint(old_h * k))) + new_h, new_w, _ = np_img.shape + pad_h = max(0, (h - new_h) // 2) + pad_w = max(0, (w - new_w) // 2) + high_quality_background[pad_h:pad_h + new_h, pad_w:pad_w + new_w] = np_img + np_img = 
high_quality_background + np_img = safe_numpy(np_img) + return get_pytorch_control(np_img), np_img + else: # resize_mode == "crop_resize" (INNER_FIT) + print("crop + resizing") + k = max(k0, k1) + np_img = high_quality_resize(np_img, (safeint(old_w * k), safeint(old_h * k))) + new_h, new_w, _ = np_img.shape + pad_h = max(0, (new_h - h) // 2) + pad_w = max(0, (new_w - w) // 2) + np_img = np_img[pad_h:pad_h + h, pad_w:pad_w + w] + np_img = safe_numpy(np_img) + return get_pytorch_control(np_img), np_img From 3a987b2e722e0f16e35f007d285109d254cca0e0 Mon Sep 17 00:00:00 2001 From: user1 Date: Wed, 19 Jul 2023 19:01:14 -0700 Subject: [PATCH 13/51] Added revised prepare_control_image() that leverages lvmin high quality resizing --- invokeai/app/util/controlnet_utils.py | 61 +++++++++++++++++++++++++-- 1 file changed, 57 insertions(+), 4 deletions(-) diff --git a/invokeai/app/util/controlnet_utils.py b/invokeai/app/util/controlnet_utils.py index 920bca081b..67fd7bb43e 100644 --- a/invokeai/app/util/controlnet_utils.py +++ b/invokeai/app/util/controlnet_utils.py @@ -107,7 +107,6 @@ def np_img_resize( w: int, device: torch.device = torch.device('cpu') ): - print("in np_img_resize") # if 'inpaint' in module: # np_img = np_img.astype(np.float32) # else: @@ -192,7 +191,6 @@ def np_img_resize( # if resize_mode == external_code.ResizeMode.RESIZE: if resize_mode == "just_resize": # RESIZE - print("just resizing") np_img = high_quality_resize(np_img, (w, h)) np_img = safe_numpy(np_img) return get_pytorch_control(np_img), np_img @@ -207,7 +205,6 @@ def np_img_resize( # if resize_mode == external_code.ResizeMode.OUTER_FIT: if resize_mode == "fill_resize": # OUTER_FIT - print("fill + resizing") k = min(k0, k1) borders = np.concatenate([np_img[0, :, :], np_img[-1, :, :], np_img[:, 0, :], np_img[:, -1, :]], axis=0) high_quality_border_color = np.median(borders, axis=0).astype(np_img.dtype) @@ -224,7 +221,6 @@ def np_img_resize( np_img = safe_numpy(np_img) return 
get_pytorch_control(np_img), np_img else: # resize_mode == "crop_resize" (INNER_FIT) - print("crop + resizing") k = max(k0, k1) np_img = high_quality_resize(np_img, (safeint(old_w * k), safeint(old_h * k))) new_h, new_w, _ = np_img.shape @@ -233,3 +229,60 @@ def np_img_resize( np_img = np_img[pad_h:pad_h + h, pad_w:pad_w + w] np_img = safe_numpy(np_img) return get_pytorch_control(np_img), np_img + +def prepare_control_image( + # image used to be Union[PIL.Image.Image, List[PIL.Image.Image], torch.Tensor, List[torch.Tensor]] + # but now should be able to assume that image is a single PIL.Image, which simplifies things + image: Image, + # FIXME: need to fix hardwiring of width and height, change to basing on latents dimensions? + # latents_to_match_resolution, # TorchTensor of shape (batch_size, 3, height, width) + width=512, # should be 8 * latent.shape[3] + height=512, # should be 8 * latent height[2] + # batch_size=1, # currently no batching + # num_images_per_prompt=1, # currently only single image + device="cuda", + dtype=torch.float16, + do_classifier_free_guidance=True, + control_mode="balanced", + resize_mode="just_resize_simple", +): + # FIXME: implement "crop_resize_simple" and "fill_resize_simple", or pull them out + if (resize_mode == "just_resize_simple" or + resize_mode == "crop_resize_simple" or + resize_mode == "fill_resize_simple"): + image = image.convert("RGB") + if (resize_mode == "just_resize_simple"): + image = image.resize((width, height), resample=PIL_INTERPOLATION["lanczos"]) + elif (resize_mode == "crop_resize_simple"): # not yet implemented + pass + elif (resize_mode == "fill_resize_simple"): # not yet implemented + pass + nimage = np.array(image) + nimage = nimage[None, :] + nimage = np.concatenate([nimage], axis=0) + # normalizing RGB values to [0,1] range (in PIL.Image they are [0-255]) + nimage = np.array(nimage).astype(np.float32) / 255.0 + nimage = nimage.transpose(0, 3, 1, 2) + timage = torch.from_numpy(nimage) + + # use fancy lvmin 
controlnet resizing + elif (resize_mode == "just_resize" or resize_mode == "crop_resize" or resize_mode == "fill_resize"): + nimage = np.array(image) + timage, nimage = np_img_resize( + np_img=nimage, + resize_mode=resize_mode, + h=height, + w=width, + # device=torch.device('cpu') + device=device, + ) + else: + pass + print("ERROR: invalid resize_mode ==> ", resize_mode) + exit(1) + + timage = timage.to(device=device, dtype=dtype) + cfg_injection = (control_mode == "more_control" or control_mode == "unbalanced") + if do_classifier_free_guidance and not cfg_injection: + timage = torch.cat([timage] * 2) + return timage From f2515d9480b7569b32c94228d0a6296f5d652246 Mon Sep 17 00:00:00 2001 From: Lincoln Stein Date: Wed, 19 Jul 2023 22:13:56 -0400 Subject: [PATCH 14/51] fix v1-finetune.yaml is not in the subpath of "" (#3848) Co-authored-by: Lincoln Stein --- invokeai/app/services/config.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/invokeai/app/services/config.py b/invokeai/app/services/config.py index 0a70c1dd5d..35334b1bef 100644 --- a/invokeai/app/services/config.py +++ b/invokeai/app/services/config.py @@ -277,7 +277,7 @@ class InvokeAISettings(BaseSettings): @classmethod def _excluded_from_yaml(self)->List[str]: # combination of deprecated parameters and internal ones that shouldn't be exposed as invokeai.yaml options - return ['type','initconf', 'gpu_mem_reserved', 'max_loaded_models', 'version', 'from_file', 'model', 'restore'] + return ['type','initconf', 'gpu_mem_reserved', 'max_loaded_models', 'version', 'from_file', 'model', 'restore', 'root'] class Config: env_file_encoding = 'utf-8' @@ -446,7 +446,7 @@ setting environment variables INVOKEAI_. 
Path to the runtime root directory ''' if self.root: - return Path(self.root).expanduser() + return Path(self.root).expanduser().absolute() else: return self.find_root() From f6d5e930201723079eb70dba9ed8a8145cadc390 Mon Sep 17 00:00:00 2001 From: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com> Date: Thu, 20 Jul 2023 14:16:32 +1200 Subject: [PATCH 15/51] fix: Model List not scrolling through checkpoints (#3849) --- .../subpanels/ModelManagerPanel/ModelList.tsx | 81 ++++++++++--------- 1 file changed, 43 insertions(+), 38 deletions(-) diff --git a/invokeai/frontend/web/src/features/ui/components/tabs/ModelManager/subpanels/ModelManagerPanel/ModelList.tsx b/invokeai/frontend/web/src/features/ui/components/tabs/ModelManager/subpanels/ModelManagerPanel/ModelList.tsx index c9f8384b9a..722bd83b6e 100644 --- a/invokeai/frontend/web/src/features/ui/components/tabs/ModelManager/subpanels/ModelManagerPanel/ModelList.tsx +++ b/invokeai/frontend/web/src/features/ui/components/tabs/ModelManager/subpanels/ModelManagerPanel/ModelList.tsx @@ -75,42 +75,49 @@ const ModelList = (props: ModelListProps) => { labelPos="side" /> - {['images', 'diffusers'].includes(modelFormatFilter) && - filteredDiffusersModels.length > 0 && ( - - - - Diffusers - - {filteredDiffusersModels.map((model) => ( - - ))} - - - )} - {['images', 'checkpoint'].includes(modelFormatFilter) && - filteredCheckpointModels.length > 0 && ( - - - - Checkpoint - - {filteredCheckpointModels.map((model) => ( - - ))} - - - )} + + {['images', 'diffusers'].includes(modelFormatFilter) && + filteredDiffusersModels.length > 0 && ( + + + + Diffusers + + {filteredDiffusersModels.map((model) => ( + + ))} + + + )} + {['images', 'checkpoint'].includes(modelFormatFilter) && + filteredCheckpointModels.length > 0 && ( + + + + Checkpoints + + {filteredCheckpointModels.map((model) => ( + + ))} + + + )} + ); @@ -146,8 +153,6 @@ const StyledModelContainer = (props: PropsWithChildren) => { return ( Date: Wed, 19 Jul 2023 
22:16:56 -0400 Subject: [PATCH 16/51] change GET to POST method for model synchronization route --- invokeai/app/api/routers/models.py | 54 +++--------------------------- 1 file changed, 4 insertions(+), 50 deletions(-) diff --git a/invokeai/app/api/routers/models.py b/invokeai/app/api/routers/models.py index cc6d09f761..fb83fb36dd 100644 --- a/invokeai/app/api/routers/models.py +++ b/invokeai/app/api/routers/models.py @@ -315,20 +315,21 @@ async def list_ckpt_configs( return ApiDependencies.invoker.services.model_manager.list_checkpoint_configs() -@models_router.get( +@models_router.post( "/sync", operation_id="sync_to_config", responses={ 201: { "description": "synchronization successful" }, }, status_code = 201, - response_model = None + response_model = bool ) async def sync_to_config( )->None: """Call after making changes to models.yaml, autoimport directories or models directory to synchronize in-memory data structures with disk data structures.""" - return ApiDependencies.invoker.services.model_manager.sync_to_config() + ApiDependencies.invoker.services.model_manager.sync_to_config() + return True @models_router.put( "/merge/{base_model}", @@ -373,50 +374,3 @@ async def merge_models( except ValueError as e: raise HTTPException(status_code=400, detail=str(e)) return response - -# The rename operation is now supported by update_model and no longer needs to be -# a standalone route. 
-# @models_router.post( -# "/rename/{base_model}/{model_type}/{model_name}", -# operation_id="rename_model", -# responses= { -# 201: {"description" : "The model was renamed successfully"}, -# 404: {"description" : "The model could not be found"}, -# 409: {"description" : "There is already a model corresponding to the new name"}, -# }, -# status_code=201, -# response_model=ImportModelResponse -# ) -# async def rename_model( -# base_model: BaseModelType = Path(description="Base model"), -# model_type: ModelType = Path(description="The type of model"), -# model_name: str = Path(description="current model name"), -# new_name: Optional[str] = Query(description="new model name", default=None), -# new_base: Optional[BaseModelType] = Query(description="new model base", default=None), -# ) -> ImportModelResponse: -# """ Rename a model""" - -# logger = ApiDependencies.invoker.services.logger - -# try: -# result = ApiDependencies.invoker.services.model_manager.rename_model( -# base_model = base_model, -# model_type = model_type, -# model_name = model_name, -# new_name = new_name, -# new_base = new_base, -# ) -# logger.debug(result) -# logger.info(f'Successfully renamed {model_name}=>{new_name}') -# model_raw = ApiDependencies.invoker.services.model_manager.list_model( -# model_name=new_name or model_name, -# base_model=new_base or base_model, -# model_type=model_type -# ) -# return parse_obj_as(ImportModelResponse, model_raw) -# except ModelNotFoundException as e: -# logger.error(str(e)) -# raise HTTPException(status_code=404, detail=str(e)) -# except ValueError as e: -# logger.error(str(e)) -# raise HTTPException(status_code=409, detail=str(e)) From 6affe42310c0bdce08d87fcb8e10315e98972821 Mon Sep 17 00:00:00 2001 From: user1 Date: Wed, 19 Jul 2023 19:17:24 -0700 Subject: [PATCH 17/51] Added resize_mode param to ControlNet node --- invokeai/app/invocations/controlnet_image_processors.py | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git 
a/invokeai/app/invocations/controlnet_image_processors.py b/invokeai/app/invocations/controlnet_image_processors.py index 43cad3dcaf..911fede8fb 100644 --- a/invokeai/app/invocations/controlnet_image_processors.py +++ b/invokeai/app/invocations/controlnet_image_processors.py @@ -85,8 +85,8 @@ CONTROLNET_DEFAULT_MODELS = [ CONTROLNET_NAME_VALUES = Literal[tuple(CONTROLNET_DEFAULT_MODELS)] CONTROLNET_MODE_VALUES = Literal[tuple( ["balanced", "more_prompt", "more_control", "unbalanced"])] -# crop and fill options not ready yet -# CONTROLNET_RESIZE_VALUES = Literal[tuple(["just_resize", "crop_resize", "fill_resize"])] +CONTROLNET_RESIZE_VALUES = Literal[tuple( + ["just_resize", "crop_resize", "fill_resize", "just_resize_simple",])] class ControlNetModelField(BaseModel): @@ -111,7 +111,8 @@ class ControlField(BaseModel): description="When the ControlNet is last applied (% of total steps)") control_mode: CONTROLNET_MODE_VALUES = Field( default="balanced", description="The control mode to use") - # resize_mode: CONTROLNET_RESIZE_VALUES = Field(default="just_resize", description="The resize mode to use") + resize_mode: CONTROLNET_RESIZE_VALUES = Field( + default="just_resize", description="The resize mode to use") @validator("control_weight") def validate_control_weight(cls, v): @@ -161,6 +162,7 @@ class ControlNetInvocation(BaseInvocation): end_step_percent: float = Field(default=1, ge=0, le=1, description="When the ControlNet is last applied (% of total steps)") control_mode: CONTROLNET_MODE_VALUES = Field(default="balanced", description="The control mode used") + resize_mode: CONTROLNET_RESIZE_VALUES = Field(default="just_resize", description="The resize mode used") # fmt: on class Config(InvocationConfig): @@ -187,6 +189,7 @@ class ControlNetInvocation(BaseInvocation): begin_step_percent=self.begin_step_percent, end_step_percent=self.end_step_percent, control_mode=self.control_mode, + resize_mode=self.resize_mode, ), ) From 6e36c275c99fe3c8f9e9b314f020a274d16562b4 Mon 
Sep 17 00:00:00 2001 From: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com> Date: Thu, 20 Jul 2023 14:17:51 +1200 Subject: [PATCH 18/51] feat: Add Setting Switch Component (#3847) --- invokeai/frontend/web/public/locales/en.json | 7 ++- .../web/src/common/components/IAISwitch.tsx | 4 +- .../SettingsModal/SettingSwitch.tsx | 57 +++++++++++++++++++ .../SettingsModal/SettingsModal.tsx | 48 +++++++++------- 4 files changed, 90 insertions(+), 26 deletions(-) create mode 100644 invokeai/frontend/web/src/features/system/components/SettingsModal/SettingSwitch.tsx diff --git a/invokeai/frontend/web/public/locales/en.json b/invokeai/frontend/web/public/locales/en.json index 32ff574925..f5922a6ff4 100644 --- a/invokeai/frontend/web/public/locales/en.json +++ b/invokeai/frontend/web/public/locales/en.json @@ -547,7 +547,8 @@ "saveSteps": "Save images every n steps", "confirmOnDelete": "Confirm On Delete", "displayHelpIcons": "Display Help Icons", - "useCanvasBeta": "Use Canvas Beta Layout", + "alternateCanvasLayout": "Alternate Canvas Layout", + "enableNodesEditor": "Enable Nodes Editor", "enableImageDebugging": "Enable Image Debugging", "useSlidersForAll": "Use Sliders For All Options", "showProgressInViewer": "Show Progress Images in Viewer", @@ -564,7 +565,9 @@ "ui": "User Interface", "favoriteSchedulers": "Favorite Schedulers", "favoriteSchedulersPlaceholder": "No schedulers favorited", - "showAdvancedOptions": "Show Advanced Options" + "showAdvancedOptions": "Show Advanced Options", + "experimental": "Experimental", + "beta": "Beta" }, "toast": { "serverError": "Server Error", diff --git a/invokeai/frontend/web/src/common/components/IAISwitch.tsx b/invokeai/frontend/web/src/common/components/IAISwitch.tsx index d25ab0d87e..9803626397 100644 --- a/invokeai/frontend/web/src/common/components/IAISwitch.tsx +++ b/invokeai/frontend/web/src/common/components/IAISwitch.tsx @@ -9,7 +9,7 @@ import { } from '@chakra-ui/react'; import { memo } from 'react'; 
-interface Props extends SwitchProps { +export interface IAISwitchProps extends SwitchProps { label?: string; width?: string | number; formControlProps?: FormControlProps; @@ -20,7 +20,7 @@ interface Props extends SwitchProps { /** * Customized Chakra FormControl + Switch multi-part component. */ -const IAISwitch = (props: Props) => { +const IAISwitch = (props: IAISwitchProps) => { const { label, isDisabled = false, diff --git a/invokeai/frontend/web/src/features/system/components/SettingsModal/SettingSwitch.tsx b/invokeai/frontend/web/src/features/system/components/SettingsModal/SettingSwitch.tsx new file mode 100644 index 0000000000..e035b90d3a --- /dev/null +++ b/invokeai/frontend/web/src/features/system/components/SettingsModal/SettingSwitch.tsx @@ -0,0 +1,57 @@ +import { Badge, BadgeProps, Flex, Text, TextProps } from '@chakra-ui/react'; +import IAISwitch, { IAISwitchProps } from 'common/components/IAISwitch'; +import { useTranslation } from 'react-i18next'; + +type SettingSwitchProps = IAISwitchProps & { + label: string; + useBadge?: boolean; + badgeLabel?: string; + textProps?: TextProps; + badgeProps?: BadgeProps; +}; + +export default function SettingSwitch(props: SettingSwitchProps) { + const { t } = useTranslation(); + + const { + label, + textProps, + useBadge = false, + badgeLabel = t('settings.experimental'), + badgeProps, + ...rest + } = props; + + return ( + + + + {label} + + {useBadge && ( + + {badgeLabel} + + )} + + + + ); +} diff --git a/invokeai/frontend/web/src/features/system/components/SettingsModal/SettingsModal.tsx b/invokeai/frontend/web/src/features/system/components/SettingsModal/SettingsModal.tsx index ccc4a9aa24..5ec7a09b67 100644 --- a/invokeai/frontend/web/src/features/system/components/SettingsModal/SettingsModal.tsx +++ b/invokeai/frontend/web/src/features/system/components/SettingsModal/SettingsModal.tsx @@ -11,13 +11,12 @@ import { Text, useDisclosure, } from '@chakra-ui/react'; -import { createSelector, current } from 
'@reduxjs/toolkit'; +import { createSelector } from '@reduxjs/toolkit'; import { VALID_LOG_LEVELS } from 'app/logging/useLogger'; import { LOCALSTORAGE_KEYS, LOCALSTORAGE_PREFIX } from 'app/store/constants'; import { useAppDispatch, useAppSelector } from 'app/store/storeHooks'; import IAIButton from 'common/components/IAIButton'; import IAIMantineSelect from 'common/components/IAIMantineSelect'; -import IAISwitch from 'common/components/IAISwitch'; import { systemSelector } from 'features/system/store/systemSelectors'; import { SystemState, @@ -48,8 +47,9 @@ import { } from 'react'; import { useTranslation } from 'react-i18next'; import { LogLevelName } from 'roarr'; -import SettingsSchedulers from './SettingsSchedulers'; +import SettingSwitch from './SettingSwitch'; import SettingsClearIntermediates from './SettingsClearIntermediates'; +import SettingsSchedulers from './SettingsSchedulers'; const selector = createSelector( [systemSelector, uiSelector], @@ -206,7 +206,7 @@ const SettingsModal = ({ children, config }: SettingsModalProps) => { {t('settings.general')} - ) => @@ -214,7 +214,7 @@ const SettingsModal = ({ children, config }: SettingsModalProps) => { } /> {shouldShowAdvancedOptionsSettings && ( - ) => @@ -231,37 +231,29 @@ const SettingsModal = ({ children, config }: SettingsModalProps) => { {t('settings.ui')} - ) => dispatch(setShouldDisplayGuides(e.target.checked)) } /> - {shouldShowBetaLayout && ( - ) => - dispatch(setShouldUseCanvasBetaLayout(e.target.checked)) - } - /> - )} - ) => dispatch(setShouldUseSliders(e.target.checked)) } /> - ) => dispatch(setShouldShowProgressInViewer(e.target.checked)) } /> - ) => @@ -270,9 +262,21 @@ const SettingsModal = ({ children, config }: SettingsModalProps) => { ) } /> + {shouldShowBetaLayout && ( + ) => + dispatch(setShouldUseCanvasBetaLayout(e.target.checked)) + } + /> + )} {shouldShowNodesToggle && ( - @@ -282,7 +286,7 @@ const SettingsModal = ({ children, config }: SettingsModalProps) => { 
{shouldShowDeveloperSettings && ( {t('settings.developer')} - { value={consoleLogLevel} data={VALID_LOG_LEVELS.concat()} /> - ) => From e918168f7aabced1833381371cb7f4552745e7e8 Mon Sep 17 00:00:00 2001 From: user1 Date: Wed, 19 Jul 2023 19:21:17 -0700 Subject: [PATCH 19/51] Removed diffusers_pipeline prepare_control_image() -- replaced with controlnet_utils.prepare_control_image() Added resize_mode to ControlNetData class. --- .../stable_diffusion/diffusers_pipeline.py | 53 +------------------ 1 file changed, 2 insertions(+), 51 deletions(-) diff --git a/invokeai/backend/stable_diffusion/diffusers_pipeline.py b/invokeai/backend/stable_diffusion/diffusers_pipeline.py index 228fbd0585..8acfb100a6 100644 --- a/invokeai/backend/stable_diffusion/diffusers_pipeline.py +++ b/invokeai/backend/stable_diffusion/diffusers_pipeline.py @@ -219,6 +219,7 @@ class ControlNetData: begin_step_percent: float = Field(default=0.0) end_step_percent: float = Field(default=1.0) control_mode: str = Field(default="balanced") + resize_mode: str = Field(default="just_resize") @dataclass @@ -653,7 +654,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline): if cfg_injection: # Inferred ControlNet only for the conditional batch. # To apply the output of ControlNet to both the unconditional and conditional batches, - # add 0 to the unconditional batch to keep it unchanged. 
+ # prepend zeros for unconditional batch down_samples = [torch.cat([torch.zeros_like(d), d]) for d in down_samples] mid_sample = torch.cat([torch.zeros_like(mid_sample), mid_sample]) @@ -954,53 +955,3 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline): debug_image( img, f"latents {msg} {i+1}/{len(decoded)}", debug_status=True ) - - # Copied from diffusers pipeline_stable_diffusion_controlnet.py - # Returns torch.Tensor of shape (batch_size, 3, height, width) - @staticmethod - def prepare_control_image( - image, - # FIXME: need to fix hardwiring of width and height, change to basing on latents dimensions? - # latents, - width=512, # should be 8 * latent.shape[3] - height=512, # should be 8 * latent height[2] - batch_size=1, - num_images_per_prompt=1, - device="cuda", - dtype=torch.float16, - do_classifier_free_guidance=True, - control_mode="balanced" - ): - - if not isinstance(image, torch.Tensor): - if isinstance(image, PIL.Image.Image): - image = [image] - - if isinstance(image[0], PIL.Image.Image): - images = [] - for image_ in image: - image_ = image_.convert("RGB") - image_ = image_.resize((width, height), resample=PIL_INTERPOLATION["lanczos"]) - image_ = np.array(image_) - image_ = image_[None, :] - images.append(image_) - image = images - image = np.concatenate(image, axis=0) - image = np.array(image).astype(np.float32) / 255.0 - image = image.transpose(0, 3, 1, 2) - image = torch.from_numpy(image) - elif isinstance(image[0], torch.Tensor): - image = torch.cat(image, dim=0) - - image_batch_size = image.shape[0] - if image_batch_size == 1: - repeat_by = batch_size - else: - # image batch size is the same as prompt batch size - repeat_by = num_images_per_prompt - image = image.repeat_interleave(repeat_by, dim=0) - image = image.to(device=device, dtype=dtype) - cfg_injection = (control_mode == "more_control" or control_mode == "unbalanced") - if do_classifier_free_guidance and not cfg_injection: - image = torch.cat([image] * 2) - return image 
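The `fill_resize` (OUTER_FIT) and `crop_resize` (INNER_FIT) branches that these patches thread through `np_img_resize` and `ControlNetData` differ only in which scale factor they pick before padding or cropping: OUTER_FIT scales by `min(k0, k1)` so the whole control image fits inside the target (the remainder is padded), while INNER_FIT scales by `max(k0, k1)` and center-crops the overflow. A minimal sketch of just that dimension math, with an illustrative function name that is not part of the InvokeAI API:

```python
# Hedged sketch of the OUTER_FIT / INNER_FIT scale-factor choice used by the
# resize modes in np_img_resize(). Names here are illustrative only.

def fit_dimensions(old_w, old_h, target_w, target_h, mode):
    """Return the (new_w, new_h) an image is resized to before being
    padded (fill_resize / OUTER_FIT) or center-cropped (crop_resize /
    INNER_FIT) to the target size."""
    k0 = target_h / old_h
    k1 = target_w / old_w
    # OUTER_FIT: scale down until the whole image fits inside the target.
    # INNER_FIT: scale up until the image fully covers the target.
    k = min(k0, k1) if mode == "fill_resize" else max(k0, k1)
    return int(round(old_w * k)), int(round(old_h * k))

# A 1024x512 source into a 512x512 target:
print(fit_dimensions(1024, 512, 512, 512, "fill_resize"))  # -> (512, 256), then letterboxed
print(fit_dimensions(1024, 512, 512, 512, "crop_resize"))  # -> (1024, 512), then center-cropped
```

With matching aspect ratios both modes degenerate to a plain resize, which is why only mismatched ratios expose the difference between them.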
From c2b99e7545311325348c9ffb559d6921a85534bd Mon Sep 17 00:00:00 2001 From: user1 Date: Wed, 19 Jul 2023 19:26:49 -0700 Subject: [PATCH 20/51] Switching over to controlnet_utils prepare_control_image(), with added resize_mode. --- invokeai/app/invocations/latent.py | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-) diff --git a/invokeai/app/invocations/latent.py b/invokeai/app/invocations/latent.py index cd15fe156b..b4c3454c88 100644 --- a/invokeai/app/invocations/latent.py +++ b/invokeai/app/invocations/latent.py @@ -30,6 +30,7 @@ from .compel import ConditioningField from .controlnet_image_processors import ControlField from .image import ImageOutput from .model import ModelInfo, UNetField, VaeField +from invokeai.app.util.controlnet_utils import prepare_control_image from diffusers.models.attention_processor import ( AttnProcessor2_0, @@ -288,7 +289,7 @@ class TextToLatentsInvocation(BaseInvocation): # and add in batch_size, num_images_per_prompt? # and do real check for classifier_free_guidance? 
# prepare_control_image should return torch.Tensor of shape(batch_size, 3, height, width) - control_image = model.prepare_control_image( + control_image = prepare_control_image( image=input_image, do_classifier_free_guidance=do_classifier_free_guidance, width=control_width_resize, @@ -298,13 +299,18 @@ class TextToLatentsInvocation(BaseInvocation): device=control_model.device, dtype=control_model.dtype, control_mode=control_info.control_mode, + resize_mode=control_info.resize_mode, ) control_item = ControlNetData( - model=control_model, image_tensor=control_image, + model=control_model, + image_tensor=control_image, weight=control_info.control_weight, begin_step_percent=control_info.begin_step_percent, end_step_percent=control_info.end_step_percent, control_mode=control_info.control_mode, + # any resizing needed should currently be happening in prepare_control_image(), + # but adding resize_mode to ControlNetData in case needed in the future + resize_mode=control_info.resize_mode, ) control_data.append(control_item) # MultiControlNetModel has been refactored out, just need list[ControlNetData] @@ -601,7 +607,7 @@ class ResizeLatentsInvocation(BaseInvocation): antialias: bool = Field( default=False, description="Whether or not to antialias (applied in bilinear and bicubic modes only)") - + class Config(InvocationConfig): schema_extra = { "ui": { @@ -647,7 +653,7 @@ class ScaleLatentsInvocation(BaseInvocation): antialias: bool = Field( default=False, description="Whether or not to antialias (applied in bilinear and bicubic modes only)") - + class Config(InvocationConfig): schema_extra = { "ui": { From d76bf4444c62889faf429183d32f3f9b14ea76a2 Mon Sep 17 00:00:00 2001 From: Lincoln Stein Date: Wed, 19 Jul 2023 22:46:49 -0400 Subject: [PATCH 21/51] Update invokeai/app/api/routers/models.py Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com> --- invokeai/app/api/routers/models.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff 
--git a/invokeai/app/api/routers/models.py b/invokeai/app/api/routers/models.py index fb83fb36dd..870ca33534 100644 --- a/invokeai/app/api/routers/models.py +++ b/invokeai/app/api/routers/models.py @@ -325,7 +325,7 @@ async def list_ckpt_configs( response_model = bool ) async def sync_to_config( -)->None: +)->bool: """Call after making changes to models.yaml, autoimport directories or models directory to synchronize in-memory data structures with disk data structures.""" ApiDependencies.invoker.services.model_manager.sync_to_config() From 039091c5d45781bdf8216d5ddb9e5568f8f2dc02 Mon Sep 17 00:00:00 2001 From: Millun Atluri Date: Thu, 20 Jul 2023 13:16:55 +1000 Subject: [PATCH 22/51] Updated frontend docs to be more accurate --- .../contribution_guides/development.md | 26 ++++++++++--------- .../contributingToFrontend.md | 16 +++++++++--- pull_request_template.md | 2 +- 3 files changed, 28 insertions(+), 16 deletions(-) diff --git a/docs/contributing/contribution_guides/development.md b/docs/contributing/contribution_guides/development.md index 584fb5a4ed..59c2b05c0e 100644 --- a/docs/contributing/contribution_guides/development.md +++ b/docs/contributing/contribution_guides/development.md @@ -41,37 +41,39 @@ Before starting these steps, ensure you have your local environment [configured git clone https://github.com/your-GitHub-username/InvokeAI.git ``` -1. Create a new branch for your fix using: +If you're unfamiliar with using Git through the command line, [GitHub Desktop](https://desktop.github.com) is an easy-to-use alternative with a UI. You can do all the same steps listed here, but through the interface. + +4. Create a new branch for your fix using: ```bash git checkout -b branch-name-here ``` -1. Make the appropriate changes for the issue you are trying to address or the feature that you want to add. -2. Add the file contents of the changed files to the "snapshot" git uses to manage the state of the project, also known as the index: +5. 
Make the appropriate changes for the issue you are trying to address or the feature that you want to add. +6. Add the file contents of the changed files to the "snapshot" git uses to manage the state of the project, also known as the index: ```bash git add insert-paths-of-changed-files-here ``` -1. Store the contents of the index with a descriptive message. +7. Store the contents of the index with a descriptive message. ```bash git commit -m "Insert a short message of the changes made here" ``` -1. Push the changes to the remote repository using +8. Push the changes to the remote repository using ```markdown git push origin branch-name-here ``` -1. Submit a pull request to the **main** branch of the InvokeAI repository. -2. Title the pull request with a short description of the changes made and the issue or bug number associated with your change. For example, you can title an issue like so "Added more log outputting to resolve #1234". -3. In the description of the pull request, explain the changes that you made, any issues you think exist with the pull request you made, and any questions you have for the maintainer. It's OK if your pull request is not perfect (no pull request is), the reviewer will be able to help you fix any problems and improve it! -4. Wait for the pull request to be reviewed by other collaborators. -5. Make changes to the pull request if the reviewer(s) recommend them. -6. Celebrate your success after your pull request is merged! +9. Submit a pull request to the **main** branch of the InvokeAI repository. +10. Title the pull request with a short description of the changes made and the issue or bug number associated with your change. For example, you can title an issue like so "Added more log outputting to resolve #1234". +11. In the description of the pull request, explain the changes that you made, any issues you think exist with the pull request you made, and any questions you have for the maintainer. 
It's OK if your pull request is not perfect (no pull request is), the reviewer will be able to help you fix any problems and improve it! +12. Wait for the pull request to be reviewed by other collaborators. +13. Make changes to the pull request if the reviewer(s) recommend them. +14. Celebrate your success after your pull request is merged! If you’d like to learn more about contributing to Open Source projects, here is a [Getting Started Guide](https://opensource.com/article/19/7/create-pull-request-github). @@ -81,7 +83,7 @@ If you need help, you can ask questions in the [#dev-chat](https://discord.com/c For frontend related work, **@pyschedelicious** is the best person to reach out to. -For backend related work, please reach out to **@pyschedelicious, @blessedcoolant** or **@lstein**. +For backend related work, please reach out to **@blessedcoolant**, **@lstein**, **@StAlKeR7779** or **@pyschedelicious**. ## **What does the Code of Conduct mean for me?** diff --git a/docs/contributing/contribution_guides/development_guides/contributingToFrontend.md b/docs/contributing/contribution_guides/development_guides/contributingToFrontend.md index 08f7c69ce7..d1f0fb7d38 100644 --- a/docs/contributing/contribution_guides/development_guides/contributingToFrontend.md +++ b/docs/contributing/contribution_guides/development_guides/contributingToFrontend.md @@ -10,7 +10,7 @@ The UI is a fairly straightforward Typescript React app, with the Unified Canvas being more complex. -Code is located in `invokeai/frontend/web/src` for review. +Code is located in `invokeai/frontend/web/` for review. ## Stack @@ -24,7 +24,7 @@ The API client and associated types are generated from the OpenAPI schema. See A Communication with server is a mix of HTTP and [socket.io](https://github.com/socketio/socket.io-client) (with a simple socket.io redux middleware to help). -[Chakra-UI](https://github.com/chakra-ui/chakra-ui) & Mantine for components and styling. 
+[Chakra-UI](https://github.com/chakra-ui/chakra-ui) & [Mantine](https://github.com/mantinedev/mantine) for components and styling. [Konva](https://github.com/konvajs/react-konva) for the canvas, but we are pushing the limits of what is feasible with it (and HTML canvas in general). We plan to rebuild it with [PixiJS](https://github.com/pixijs/pixijs) to take advantage of WebGL's improved raster handling. @@ -40,7 +40,17 @@ We encourage you to ping @psychedelicious and @blessedcoolant on [Discord](http ### Dev Environment -Install [node](https://nodejs.org/en/download/) and [yarn classic](https://classic.yarnpkg.com/lang/en/). +**Setup** + +1. Install [node](https://nodejs.org/en/download/). You can confirm node is installed with: +```bash +node --version +``` +2. Install [yarn classic](https://classic.yarnpkg.com/lang/en/) and confirm it is installed by running this: +```bash +npm install --global yarn +yarn --version +``` From `invokeai/frontend/web/` run `yarn install` to get everything set up. diff --git a/pull_request_template.md b/pull_request_template.md index 04d9a96a99..7a0203ef92 100644 --- a/pull_request_template.md +++ b/pull_request_template.md @@ -5,7 +5,7 @@ - [ ] Bug Fix - [ ] Optimization - [ ] Documentation Update -- [ ] Community Node +- [ ] Community Node Submission ## Have you discussed this change with the InvokeAI team? 
From 187cf906fa53330c53ca07b3ffd85abe83cdf014 Mon Sep 17 00:00:00 2001 From: psychedelicious <4822129+psychedelicious@users.noreply.github.com> Date: Thu, 20 Jul 2023 15:44:22 +1000 Subject: [PATCH 23/51] ui: enhance intermediates clear, enhance board auto-add (#3851) * feat(ui): enhance clear intermediates feature - retrieve the # of intermediates using a new query (just uses list images endpoint w/ limit of 0) - display the count in the UI - add types for clearIntermediates mutation - minor styling and verbiage changes * feat(ui): remove unused settings option for guides * feat(ui): use solid badge variant consistent with the rest of the usage of badges * feat(ui): update board ctx menu, add board auto-add - add context menu to system boards - only open is select board. did this so that you dont think its broken when you click it - add auto-add board. you can right click a user board to enable it for auto-add, or use the gallery settings popover to select it. the invoke button has a tooltip on a short delay to remind you that you have auto-add enabled - made useBoardName hook, provide it a board id and it gets your the board name - removed `boardIdToAdTo` state & logic, updated workflows to auto-switch and auto-add on image generation * fix(ui): clear controlnet when clearing intermediates * feat: Make Add Board icon a button * feat(db, api): clear intermediates now clears all of them * feat(ui): make reset webui text subtext style * feat(ui): board name change submits on blur --------- Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com> --- invokeai/app/api/routers/images.py | 22 ++-- invokeai/app/services/image_record_storage.py | 31 +++++ invokeai/app/services/images.py | 60 ++++----- .../socketio/socketInvocationComplete.ts | 33 +++-- .../components/Boards/BoardAutoAddSelect.tsx | 80 ++++++++++++ .../components/Boards/BoardContextMenu.tsx | 60 +++++++++ .../Boards/BoardsList/AddBoardButton.tsx | 12 +- 
.../Boards/BoardsList/AllAssetsBoard.tsx | 1 + .../Boards/BoardsList/AllImagesBoard.tsx | 1 + .../Boards/BoardsList/BatchBoard.tsx | 1 + .../Boards/BoardsList/GalleryBoard.tsx | 92 ++++++------- .../Boards/BoardsList/GenericBoard.tsx | 121 ++++++++++-------- .../Boards/BoardsList/NoBoardBoard.tsx | 1 + .../Boards/GalleryBoardContextMenuItems.tsx | 79 ++++++++++++ .../Boards/SystemBoardContextMenuItems.tsx | 12 ++ .../gallery/components/GalleryBoardName.tsx | 59 ++++----- .../components/GallerySettingsPopover.tsx | 18 +-- .../features/gallery/store/gallerySlice.ts | 25 +++- .../ProcessButtons/InvokeButton.tsx | 108 ++++++++++------ .../components/ProcessButtons/Loopback.tsx | 33 ----- .../ProcessButtons/ProcessButtons.tsx | 1 - .../SettingsClearIntermediates.tsx | 59 +++++---- .../SettingsModal/SettingsModal.tsx | 22 +--- .../src/features/system/store/systemSlice.ts | 9 -- .../ModelManagerPanel/ModelListItem.tsx | 11 +- .../web/src/services/api/endpoints/images.ts | 11 +- .../src/services/api/hooks/useBoardName.ts | 26 ++++ 27 files changed, 647 insertions(+), 341 deletions(-) create mode 100644 invokeai/frontend/web/src/features/gallery/components/Boards/BoardAutoAddSelect.tsx create mode 100644 invokeai/frontend/web/src/features/gallery/components/Boards/BoardContextMenu.tsx create mode 100644 invokeai/frontend/web/src/features/gallery/components/Boards/GalleryBoardContextMenuItems.tsx create mode 100644 invokeai/frontend/web/src/features/gallery/components/Boards/SystemBoardContextMenuItems.tsx delete mode 100644 invokeai/frontend/web/src/features/parameters/components/ProcessButtons/Loopback.tsx create mode 100644 invokeai/frontend/web/src/services/api/hooks/useBoardName.ts diff --git a/invokeai/app/api/routers/images.py b/invokeai/app/api/routers/images.py index 559afa3b37..01e7dd2c26 100644 --- a/invokeai/app/api/routers/images.py +++ b/invokeai/app/api/routers/images.py @@ -1,8 +1,7 @@ import io from typing import Optional -from fastapi import (Body, 
HTTPException, Path, Query, Request, Response, - UploadFile) +from fastapi import Body, HTTPException, Path, Query, Request, Response, UploadFile from fastapi.responses import FileResponse from fastapi.routing import APIRouter from PIL import Image @@ -11,9 +10,11 @@ from invokeai.app.invocations.metadata import ImageMetadata from invokeai.app.models.image import ImageCategory, ResourceOrigin from invokeai.app.services.image_record_storage import OffsetPaginatedResults from invokeai.app.services.item_storage import PaginatedResults -from invokeai.app.services.models.image_record import (ImageDTO, - ImageRecordChanges, - ImageUrlsDTO) +from invokeai.app.services.models.image_record import ( + ImageDTO, + ImageRecordChanges, + ImageUrlsDTO, +) from ..dependencies import ApiDependencies @@ -84,15 +85,16 @@ async def delete_image( # TODO: Does this need any exception handling at all? pass + @images_router.post("/clear-intermediates", operation_id="clear_intermediates") async def clear_intermediates() -> int: - """Clears first 100 intermediates""" + """Clears all intermediates""" try: - count_deleted = ApiDependencies.invoker.services.images.delete_many(is_intermediate=True) + count_deleted = ApiDependencies.invoker.services.images.delete_intermediates() return count_deleted except Exception as e: - # TODO: Does this need any exception handling at all? + raise HTTPException(status_code=500, detail="Failed to clear intermediates") pass @@ -130,6 +132,7 @@ async def get_image_dto( except Exception as e: raise HTTPException(status_code=404) + @images_router.get( "/{image_name}/metadata", operation_id="get_image_metadata", @@ -254,7 +257,8 @@ async def list_image_dtos( default=None, description="Whether to list intermediate images." ), board_id: Optional[str] = Query( - default=None, description="The board id to filter by. Use 'none' to find images without a board." + default=None, + description="The board id to filter by. 
Use 'none' to find images without a board.", ), offset: int = Query(default=0, description="The page offset"), limit: int = Query(default=10, description="The number of images per page"), diff --git a/invokeai/app/services/image_record_storage.py b/invokeai/app/services/image_record_storage.py index 09c3bdcc3e..eb69679a35 100644 --- a/invokeai/app/services/image_record_storage.py +++ b/invokeai/app/services/image_record_storage.py @@ -122,6 +122,11 @@ class ImageRecordStorageBase(ABC): """Deletes many image records.""" pass + @abstractmethod + def delete_intermediates(self) -> list[str]: + """Deletes all intermediate image records, returning a list of deleted image names.""" + pass + @abstractmethod def save( self, @@ -461,6 +466,32 @@ class SqliteImageRecordStorage(ImageRecordStorageBase): finally: self._lock.release() + + def delete_intermediates(self) -> list[str]: + try: + self._lock.acquire() + self._cursor.execute( + """--sql + SELECT image_name FROM images + WHERE is_intermediate = TRUE; + """ + ) + result = cast(list[sqlite3.Row], self._cursor.fetchall()) + image_names = list(map(lambda r: r[0], result)) + self._cursor.execute( + """--sql + DELETE FROM images + WHERE is_intermediate = TRUE; + """ + ) + self._conn.commit() + return image_names + except sqlite3.Error as e: + self._conn.rollback() + raise ImageRecordDeleteException from e + finally: + self._lock.release() + def save( self, image_name: str, diff --git a/invokeai/app/services/images.py b/invokeai/app/services/images.py index 13c6c04719..6fdb6237f8 100644 --- a/invokeai/app/services/images.py +++ b/invokeai/app/services/images.py @@ -6,21 +6,33 @@ from typing import TYPE_CHECKING, Optional from PIL.Image import Image as PILImageType from invokeai.app.invocations.metadata import ImageMetadata -from invokeai.app.models.image import (ImageCategory, - InvalidImageCategoryException, - InvalidOriginException, ResourceOrigin) -from invokeai.app.services.board_image_record_storage import \ - 
BoardImageRecordStorageBase +from invokeai.app.models.image import ( + ImageCategory, + InvalidImageCategoryException, + InvalidOriginException, + ResourceOrigin, +) +from invokeai.app.services.board_image_record_storage import BoardImageRecordStorageBase from invokeai.app.services.image_file_storage import ( - ImageFileDeleteException, ImageFileNotFoundException, - ImageFileSaveException, ImageFileStorageBase) + ImageFileDeleteException, + ImageFileNotFoundException, + ImageFileSaveException, + ImageFileStorageBase, +) from invokeai.app.services.image_record_storage import ( - ImageRecordDeleteException, ImageRecordNotFoundException, - ImageRecordSaveException, ImageRecordStorageBase, OffsetPaginatedResults) + ImageRecordDeleteException, + ImageRecordNotFoundException, + ImageRecordSaveException, + ImageRecordStorageBase, + OffsetPaginatedResults, +) from invokeai.app.services.item_storage import ItemStorageABC -from invokeai.app.services.models.image_record import (ImageDTO, ImageRecord, - ImageRecordChanges, - image_record_to_dto) +from invokeai.app.services.models.image_record import ( + ImageDTO, + ImageRecord, + ImageRecordChanges, + image_record_to_dto, +) from invokeai.app.services.resource_name import NameServiceBase from invokeai.app.services.urls import UrlServiceBase from invokeai.app.util.metadata import get_metadata_graph_from_raw_session @@ -109,12 +121,10 @@ class ImageServiceABC(ABC): pass @abstractmethod - def delete_many(self, is_intermediate: bool) -> int: - """Deletes many images.""" + def delete_intermediates(self) -> int: + """Deletes all intermediate images.""" pass - - @abstractmethod def delete_images_on_board(self, board_id: str): """Deletes all images on a board.""" @@ -401,21 +411,13 @@ class ImageService(ImageServiceABC): except Exception as e: self._services.logger.error("Problem deleting image records and files") raise e - - def delete_many(self, is_intermediate: bool): + + def delete_intermediates(self) -> int: try: - # only clears 
100 at a time - images = self._services.image_records.get_many(offset=0, limit=100, is_intermediate=is_intermediate,) - count = len(images.items) - image_name_list = list( - map( - lambda r: r.image_name, - images.items, - ) - ) - for image_name in image_name_list: + image_names = self._services.image_records.delete_intermediates() + count = len(image_names) + for image_name in image_names: self._services.image_files.delete(image_name) - self._services.image_records.delete_many(image_name_list) return count except ImageRecordDeleteException: self._services.logger.error(f"Failed to delete image records") diff --git a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/socketio/socketInvocationComplete.ts b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/socketio/socketInvocationComplete.ts index c2c57e0913..97cccfa05c 100644 --- a/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/socketio/socketInvocationComplete.ts +++ b/invokeai/frontend/web/src/app/store/middleware/listenerMiddleware/listeners/socketio/socketInvocationComplete.ts @@ -6,11 +6,7 @@ import { imageSelected, } from 'features/gallery/store/gallerySlice'; import { progressImageSet } from 'features/system/store/systemSlice'; -import { - SYSTEM_BOARDS, - imagesAdapter, - imagesApi, -} from 'services/api/endpoints/images'; +import { imagesAdapter, imagesApi } from 'services/api/endpoints/images'; import { isImageOutput } from 'services/api/guards'; import { sessionCanceled } from 'services/api/thunks/session'; import { @@ -32,8 +28,7 @@ export const addInvocationCompleteEventListener = () => { ); const session_id = action.payload.data.graph_execution_state_id; - const { cancelType, isCancelScheduled, boardIdToAddTo } = - getState().system; + const { cancelType, isCancelScheduled } = getState().system; // Handle scheduled cancelation if (cancelType === 'scheduled' && isCancelScheduled) { @@ -88,26 +83,28 @@ export const 
addInvocationCompleteEventListener = () => { ) ); - // add image to the board if we had one selected - if (boardIdToAddTo && !SYSTEM_BOARDS.includes(boardIdToAddTo)) { + const { autoAddBoardId } = gallery; + + // add image to the board if auto-add is enabled + if (autoAddBoardId) { dispatch( imagesApi.endpoints.addImageToBoard.initiate({ - board_id: boardIdToAddTo, + board_id: autoAddBoardId, imageDTO, }) ); } - const { selectedBoardId } = gallery; - - if (boardIdToAddTo && boardIdToAddTo !== selectedBoardId) { - dispatch(boardIdSelected(boardIdToAddTo)); - } else if (!boardIdToAddTo) { - dispatch(boardIdSelected('all')); - } + const { selectedBoardId, shouldAutoSwitch } = gallery; // If auto-switch is enabled, select the new image - if (getState().gallery.shouldAutoSwitch) { + if (shouldAutoSwitch) { + // if auto-add is enabled, switch the board as the image comes in + if (autoAddBoardId && autoAddBoardId !== selectedBoardId) { + dispatch(boardIdSelected(autoAddBoardId)); + } else if (!autoAddBoardId) { + dispatch(boardIdSelected('images')); + } dispatch(imageSelected(imageDTO.image_name)); } } diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardAutoAddSelect.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardAutoAddSelect.tsx new file mode 100644 index 0000000000..827d49c88e --- /dev/null +++ b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardAutoAddSelect.tsx @@ -0,0 +1,80 @@ +import { SelectItem } from '@mantine/core'; +import { createSelector } from '@reduxjs/toolkit'; +import { stateSelector } from 'app/store/store'; +import { useAppDispatch, useAppSelector } from 'app/store/storeHooks'; +import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions'; +import IAIMantineSearchableSelect from 'common/components/IAIMantineSearchableSelect'; +import IAIMantineSelectItemWithTooltip from 'common/components/IAIMantineSelectItemWithTooltip'; +import { autoAddBoardIdChanged } from 
'features/gallery/store/gallerySlice'; +import { useCallback, useRef } from 'react'; +import { useListAllBoardsQuery } from 'services/api/endpoints/boards'; + +const selector = createSelector( + [stateSelector], + ({ gallery }) => { + const { autoAddBoardId } = gallery; + + return { + autoAddBoardId, + }; + }, + defaultSelectorOptions +); + +const BoardAutoAddSelect = () => { + const dispatch = useAppDispatch(); + const { autoAddBoardId } = useAppSelector(selector); + const inputRef = useRef(null); + const { boards, hasBoards } = useListAllBoardsQuery(undefined, { + selectFromResult: ({ data }) => { + const boards: SelectItem[] = [ + { + label: 'None', + value: 'none', + }, + ]; + data?.forEach(({ board_id, board_name }) => { + boards.push({ + label: board_name, + value: board_id, + }); + }); + return { + boards, + hasBoards: boards.length > 1, + }; + }, + }); + + const handleChange = useCallback( + (v: string | null) => { + if (!v) { + return; + } + + dispatch(autoAddBoardIdChanged(v === 'none' ? 
null : v)); + }, + [dispatch] + ); + + return ( + + item.label?.toLowerCase().includes(value.toLowerCase().trim()) || + item.value.toLowerCase().includes(value.toLowerCase().trim()) + } + onChange={handleChange} + /> + ); +}; + +export default BoardAutoAddSelect; diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardContextMenu.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardContextMenu.tsx new file mode 100644 index 0000000000..fa3a6b03be --- /dev/null +++ b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardContextMenu.tsx @@ -0,0 +1,60 @@ +import { Box, MenuItem, MenuList } from '@chakra-ui/react'; +import { useAppDispatch } from 'app/store/storeHooks'; +import { ContextMenu, ContextMenuProps } from 'chakra-ui-contextmenu'; +import { boardIdSelected } from 'features/gallery/store/gallerySlice'; +import { memo, useCallback } from 'react'; +import { FaFolder } from 'react-icons/fa'; +import { BoardDTO } from 'services/api/types'; +import { menuListMotionProps } from 'theme/components/menu'; +import GalleryBoardContextMenuItems from './GalleryBoardContextMenuItems'; +import SystemBoardContextMenuItems from './SystemBoardContextMenuItems'; + +type Props = { + board?: BoardDTO; + board_id: string; + children: ContextMenuProps['children']; + setBoardToDelete?: (board?: BoardDTO) => void; +}; + +const BoardContextMenu = memo( + ({ board, board_id, setBoardToDelete, children }: Props) => { + const dispatch = useAppDispatch(); + const handleSelectBoard = useCallback(() => { + dispatch(boardIdSelected(board?.board_id ?? 
board_id)); + }, [board?.board_id, board_id, dispatch]); + return ( + + + menuProps={{ size: 'sm', isLazy: true }} + menuButtonProps={{ + bg: 'transparent', + _hover: { bg: 'transparent' }, + }} + renderMenu={() => ( + + } onClickCapture={handleSelectBoard}> + Select Board + + {!board && } + {board && ( + + )} + + )} + > + {children} + + + ); + } +); + +BoardContextMenu.displayName = 'HoverableBoard'; + +export default BoardContextMenu; diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/AddBoardButton.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/AddBoardButton.tsx index a08fdec07f..7a07680878 100644 --- a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/AddBoardButton.tsx +++ b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/AddBoardButton.tsx @@ -1,5 +1,6 @@ -import IAIButton from 'common/components/IAIButton'; +import IAIIconButton from 'common/components/IAIIconButton'; import { useCallback } from 'react'; +import { FaPlus } from 'react-icons/fa'; import { useCreateBoardMutation } from 'services/api/endpoints/boards'; const DEFAULT_BOARD_NAME = 'My Board'; @@ -12,15 +13,14 @@ const AddBoardButton = () => { }, [createBoard]); return ( - } isLoading={isLoading} + tooltip="Add Board" aria-label="Add Board" onClick={handleCreateBoard} size="sm" - sx={{ px: 4 }} - > - Add Board - + /> ); }; diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/AllAssetsBoard.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/AllAssetsBoard.tsx index 5f4f1cbeb0..76f6238cd0 100644 --- a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/AllAssetsBoard.tsx +++ b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/AllAssetsBoard.tsx @@ -38,6 +38,7 @@ const AllAssetsBoard = ({ isSelected }: { isSelected: boolean }) => { return ( { return ( { return ( { const dispatch = 
useAppDispatch(); + const selector = useMemo( + () => + createSelector( + stateSelector, + ({ gallery }) => { + const isSelectedForAutoAdd = + board.board_id === gallery.autoAddBoardId; + + return { isSelectedForAutoAdd }; + }, + defaultSelectorOptions + ), + [board.board_id] + ); + + const { isSelectedForAutoAdd } = useAppSelector(selector); const { currentData: coverImage } = useGetImageDTOQuery( board.cover_image_name ?? skipToken @@ -53,10 +79,6 @@ const GalleryBoard = memo( updateBoard({ board_id, changes: { board_name: newBoardName } }); }; - const handleDeleteBoard = useCallback(() => { - setBoardToDelete(board); - }, [board, setBoardToDelete]); - const droppableData: MoveBoardDropData = useMemo( () => ({ id: board_id, @@ -68,37 +90,10 @@ const GalleryBoard = memo( return ( - - menuProps={{ size: 'sm', isLazy: true }} - menuButtonProps={{ - bg: 'transparent', - _hover: { bg: 'transparent' }, - }} - renderMenu={() => ( - - {board.image_count > 0 && ( - <> - {/* } - onClickCapture={handleAddBoardToBatch} - > - Add Board to Batch - */} - - )} - } - onClickCapture={handleDeleteBoard} - > - Delete Board - - - )} + {(ref) => ( - {board.image_count} + + {board.image_count} + { handleUpdateBoardName(nextValue); }} @@ -205,7 +209,7 @@ const GalleryBoard = memo( )} - + ); } diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GenericBoard.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GenericBoard.tsx index 5067dac33a..226100c490 100644 --- a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GenericBoard.tsx +++ b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GenericBoard.tsx @@ -2,9 +2,12 @@ import { As, Badge, Flex } from '@chakra-ui/react'; import { TypesafeDroppableData } from 'app/components/ImageDnd/typesafeDnd'; import IAIDroppable from 'common/components/IAIDroppable'; import { IAINoContentFallback } from 'common/components/IAIImageFallback'; +import 
{ BoardId } from 'features/gallery/store/gallerySlice'; import { ReactNode } from 'react'; +import BoardContextMenu from '../BoardContextMenu'; type GenericBoardProps = { + board_id: BoardId; droppableData?: TypesafeDroppableData; onClick: () => void; isSelected: boolean; @@ -22,6 +25,7 @@ const formatBadgeCount = (count: number) => const GenericBoard = (props: GenericBoardProps) => { const { + board_id, droppableData, onClick, isSelected, @@ -32,67 +36,72 @@ const GenericBoard = (props: GenericBoardProps) => { } = props; return ( - - - + + {(ref) => ( - {badgeCount !== undefined && ( - {formatBadgeCount(badgeCount)} - )} + + + + {badgeCount !== undefined && ( + {formatBadgeCount(badgeCount)} + )} + + + + + {label} + - - - - {label} - - + )} + ); }; diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/NoBoardBoard.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/NoBoardBoard.tsx index a47d4cd49f..772f5af97d 100644 --- a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/NoBoardBoard.tsx +++ b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/NoBoardBoard.tsx @@ -39,6 +39,7 @@ const NoBoardBoard = ({ isSelected }: { isSelected: boolean }) => { return ( Move} onClick={handleClick} diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/GalleryBoardContextMenuItems.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/GalleryBoardContextMenuItems.tsx new file mode 100644 index 0000000000..5d39eaaf28 --- /dev/null +++ b/invokeai/frontend/web/src/features/gallery/components/Boards/GalleryBoardContextMenuItems.tsx @@ -0,0 +1,79 @@ +import { MenuItem } from '@chakra-ui/react'; +import { createSelector } from '@reduxjs/toolkit'; +import { stateSelector } from 'app/store/store'; +import { useAppDispatch, useAppSelector } from 'app/store/storeHooks'; +import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions'; +import { 
autoAddBoardIdChanged } from 'features/gallery/store/gallerySlice'; +import { memo, useCallback, useMemo } from 'react'; +import { FaMinus, FaPlus, FaTrash } from 'react-icons/fa'; +import { BoardDTO } from 'services/api/types'; + +type Props = { + board: BoardDTO; + setBoardToDelete?: (board?: BoardDTO) => void; +}; + +const GalleryBoardContextMenuItems = ({ board, setBoardToDelete }: Props) => { + const dispatch = useAppDispatch(); + + const selector = useMemo( + () => + createSelector( + stateSelector, + ({ gallery }) => { + const isSelectedForAutoAdd = + board.board_id === gallery.autoAddBoardId; + + return { isSelectedForAutoAdd }; + }, + defaultSelectorOptions + ), + [board.board_id] + ); + + const { isSelectedForAutoAdd } = useAppSelector(selector); + + const handleDelete = useCallback(() => { + if (!setBoardToDelete) { + return; + } + setBoardToDelete(board); + }, [board, setBoardToDelete]); + + const handleToggleAutoAdd = useCallback(() => { + dispatch( + autoAddBoardIdChanged(isSelectedForAutoAdd ? null : board.board_id) + ); + }, [board.board_id, dispatch, isSelectedForAutoAdd]); + + return ( + <> + {board.image_count > 0 && ( + <> + {/* } + onClickCapture={handleAddBoardToBatch} + > + Add Board to Batch + */} + + )} + : } + onClickCapture={handleToggleAutoAdd} + > + {isSelectedForAutoAdd ? 
'Disable Auto-Add' : 'Auto-Add to this Board'} + + } + onClickCapture={handleDelete} + > + Delete Board + + + ); +}; + +export default memo(GalleryBoardContextMenuItems); diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/SystemBoardContextMenuItems.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/SystemBoardContextMenuItems.tsx new file mode 100644 index 0000000000..58eb6d2c0c --- /dev/null +++ b/invokeai/frontend/web/src/features/gallery/components/Boards/SystemBoardContextMenuItems.tsx @@ -0,0 +1,12 @@ +import { BoardId } from 'features/gallery/store/gallerySlice'; +import { memo } from 'react'; + +type Props = { + board_id: BoardId; +}; + +const SystemBoardContextMenuItems = ({ board_id }: Props) => { + return <>; +}; + +export default memo(SystemBoardContextMenuItems); diff --git a/invokeai/frontend/web/src/features/gallery/components/GalleryBoardName.tsx b/invokeai/frontend/web/src/features/gallery/components/GalleryBoardName.tsx index 4aa65b234e..12454dd15b 100644 --- a/invokeai/frontend/web/src/features/gallery/components/GalleryBoardName.tsx +++ b/invokeai/frontend/web/src/features/gallery/components/GalleryBoardName.tsx @@ -1,20 +1,18 @@ import { ChevronUpIcon } from '@chakra-ui/icons'; -import { Button, Flex, Text } from '@chakra-ui/react'; +import { Box, Button, Flex, Spacer, Text } from '@chakra-ui/react'; import { createSelector } from '@reduxjs/toolkit'; import { stateSelector } from 'app/store/store'; import { useAppSelector } from 'app/store/storeHooks'; import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions'; import { memo } from 'react'; -import { useListAllBoardsQuery } from 'services/api/endpoints/boards'; +import { useBoardName } from 'services/api/hooks/useBoardName'; const selector = createSelector( [stateSelector], (state) => { const { selectedBoardId } = state.gallery; - return { - selectedBoardId, - }; + return { selectedBoardId }; }, defaultSelectorOptions ); @@ -27,25 +25,7 
@@ type Props = { const GalleryBoardName = (props: Props) => { const { isOpen, onToggle } = props; const { selectedBoardId } = useAppSelector(selector); - const { selectedBoardName } = useListAllBoardsQuery(undefined, { - selectFromResult: ({ data }) => { - let selectedBoardName = ''; - if (selectedBoardId === 'images') { - selectedBoardName = 'All Images'; - } else if (selectedBoardId === 'assets') { - selectedBoardName = 'All Assets'; - } else if (selectedBoardId === 'no_board') { - selectedBoardName = 'No Board'; - } else if (selectedBoardId === 'batch') { - selectedBoardName = 'Batch'; - } else { - const selectedBoard = data?.find((b) => b.board_id === selectedBoardId); - selectedBoardName = selectedBoard?.board_name || 'Unknown Board'; - } - - return { selectedBoardName }; - }, - }); + const boardName = useBoardName(selectedBoardId); return ( { size="sm" variant="ghost" sx={{ + position: 'relative', + gap: 2, w: 'full', justifyContent: 'center', alignItems: 'center', @@ -64,19 +46,22 @@ const GalleryBoardName = (props: Props) => { }, }} > - - {selectedBoardName} - + + + + {boardName} + + + { /> } > - + { dispatch(shouldAutoSwitchChanged(e.target.checked)) } /> + ); diff --git a/invokeai/frontend/web/src/features/gallery/store/gallerySlice.ts b/invokeai/frontend/web/src/features/gallery/store/gallerySlice.ts index 340559561f..314f933e9b 100644 --- a/invokeai/frontend/web/src/features/gallery/store/gallerySlice.ts +++ b/invokeai/frontend/web/src/features/gallery/store/gallerySlice.ts @@ -25,6 +25,7 @@ export type BoardId = type GalleryState = { selection: string[]; shouldAutoSwitch: boolean; + autoAddBoardId: string | null; galleryImageMinimumWidth: number; selectedBoardId: BoardId; batchImageNames: string[]; @@ -34,6 +35,7 @@ type GalleryState = { export const initialGalleryState: GalleryState = { selection: [], shouldAutoSwitch: true, + autoAddBoardId: null, galleryImageMinimumWidth: 96, selectedBoardId: 'images', batchImageNames: [], @@ -123,14 +125,34 @@ 
export const gallerySlice = createSlice({ state.batchImageNames = []; state.selection = []; }, + autoAddBoardIdChanged: (state, action: PayloadAction) => { + state.autoAddBoardId = action.payload; + }, }, extraReducers: (builder) => { builder.addMatcher( boardsApi.endpoints.deleteBoard.matchFulfilled, (state, action) => { - if (action.meta.arg.originalArgs === state.selectedBoardId) { + const deletedBoardId = action.meta.arg.originalArgs; + if (deletedBoardId === state.selectedBoardId) { state.selectedBoardId = 'images'; } + if (deletedBoardId === state.autoAddBoardId) { + state.autoAddBoardId = null; + } + } + ); + builder.addMatcher( + boardsApi.endpoints.listAllBoards.matchFulfilled, + (state, action) => { + const boards = action.payload; + if (!state.autoAddBoardId) { + return; + } + + if (!boards.map((b) => b.board_id).includes(state.autoAddBoardId)) { + state.autoAddBoardId = null; + } } ); }, @@ -147,6 +169,7 @@ export const { isBatchEnabledChanged, imagesAddedToBatch, imagesRemovedFromBatch, + autoAddBoardIdChanged, } = gallerySlice.actions; export default gallerySlice.reducer; diff --git a/invokeai/frontend/web/src/features/parameters/components/ProcessButtons/InvokeButton.tsx b/invokeai/frontend/web/src/features/parameters/components/ProcessButtons/InvokeButton.tsx index ab4949392d..3880f717b9 100644 --- a/invokeai/frontend/web/src/features/parameters/components/ProcessButtons/InvokeButton.tsx +++ b/invokeai/frontend/web/src/features/parameters/components/ProcessButtons/InvokeButton.tsx @@ -1,6 +1,9 @@ -import { Box, ChakraProps } from '@chakra-ui/react'; +import { Box, ChakraProps, Tooltip } from '@chakra-ui/react'; +import { createSelector } from '@reduxjs/toolkit'; import { userInvoked } from 'app/store/actions'; +import { stateSelector } from 'app/store/store'; import { useAppDispatch, useAppSelector } from 'app/store/storeHooks'; +import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions'; import IAIButton, { IAIButtonProps } from 
'common/components/IAIButton'; import IAIIconButton, { IAIIconButtonProps, @@ -8,11 +11,13 @@ import IAIIconButton, { import { useIsReadyToInvoke } from 'common/hooks/useIsReadyToInvoke'; import { clampSymmetrySteps } from 'features/parameters/store/generationSlice'; import ProgressBar from 'features/system/components/ProgressBar'; +import { selectIsBusy } from 'features/system/store/systemSelectors'; import { activeTabNameSelector } from 'features/ui/store/uiSelectors'; import { useCallback } from 'react'; import { useHotkeys } from 'react-hotkeys-hook'; import { useTranslation } from 'react-i18next'; import { FaPlay } from 'react-icons/fa'; +import { useBoardName } from 'services/api/hooks/useBoardName'; const IN_PROGRESS_STYLES: ChakraProps['sx'] = { _disabled: { @@ -26,6 +31,20 @@ const IN_PROGRESS_STYLES: ChakraProps['sx'] = { }, }; +const selector = createSelector( + [stateSelector, activeTabNameSelector, selectIsBusy], + ({ gallery }, activeTabName, isBusy) => { + const { autoAddBoardId } = gallery; + + return { + isBusy, + autoAddBoardId, + activeTabName, + }; + }, + defaultSelectorOptions +); + interface InvokeButton extends Omit { iconButton?: boolean; @@ -35,8 +54,8 @@ export default function InvokeButton(props: InvokeButton) { const { iconButton = false, ...rest } = props; const dispatch = useAppDispatch(); const isReady = useIsReadyToInvoke(); - const activeTabName = useAppSelector(activeTabNameSelector); - const isProcessing = useAppSelector((state) => state.system.isProcessing); + const { isBusy, autoAddBoardId, activeTabName } = useAppSelector(selector); + const autoAddBoardName = useBoardName(autoAddBoardId); const handleInvoke = useCallback(() => { dispatch(clampSymmetrySteps()); @@ -75,43 +94,52 @@ export default function InvokeButton(props: InvokeButton) { )} - {iconButton ? 
( - } - isDisabled={!isReady || isProcessing} - onClick={handleInvoke} - tooltip={t('parameters.invoke')} - tooltipProps={{ placement: 'top' }} - colorScheme="accent" - id="invoke-button" - {...rest} - sx={{ - w: 'full', - flexGrow: 1, - ...(isProcessing ? IN_PROGRESS_STYLES : {}), - }} - /> - ) : ( - - Invoke - - )} + + {iconButton ? ( + } + isDisabled={!isReady || isBusy} + onClick={handleInvoke} + tooltip={t('parameters.invoke')} + tooltipProps={{ placement: 'top' }} + colorScheme="accent" + id="invoke-button" + {...rest} + sx={{ + w: 'full', + flexGrow: 1, + ...(isBusy ? IN_PROGRESS_STYLES : {}), + }} + /> + ) : ( + + Invoke + + )} + ); diff --git a/invokeai/frontend/web/src/features/parameters/components/ProcessButtons/Loopback.tsx b/invokeai/frontend/web/src/features/parameters/components/ProcessButtons/Loopback.tsx deleted file mode 100644 index 3bd405d1ce..0000000000 --- a/invokeai/frontend/web/src/features/parameters/components/ProcessButtons/Loopback.tsx +++ /dev/null @@ -1,33 +0,0 @@ -import { createSelector } from '@reduxjs/toolkit'; -import { useAppDispatch, useAppSelector } from 'app/store/storeHooks'; -import IAIIconButton from 'common/components/IAIIconButton'; -import { postprocessingSelector } from 'features/parameters/store/postprocessingSelectors'; -import { setShouldLoopback } from 'features/parameters/store/postprocessingSlice'; -import { useTranslation } from 'react-i18next'; -import { FaRecycle } from 'react-icons/fa'; - -const loopbackSelector = createSelector( - postprocessingSelector, - ({ shouldLoopback }) => shouldLoopback -); - -const LoopbackButton = () => { - const dispatch = useAppDispatch(); - const shouldLoopback = useAppSelector(loopbackSelector); - - const { t } = useTranslation(); - - return ( - } - onClick={() => { - dispatch(setShouldLoopback(!shouldLoopback)); - }} - /> - ); -}; - -export default LoopbackButton; diff --git a/invokeai/frontend/web/src/features/parameters/components/ProcessButtons/ProcessButtons.tsx 
b/invokeai/frontend/web/src/features/parameters/components/ProcessButtons/ProcessButtons.tsx index 4449866ef2..f132092012 100644 --- a/invokeai/frontend/web/src/features/parameters/components/ProcessButtons/ProcessButtons.tsx +++ b/invokeai/frontend/web/src/features/parameters/components/ProcessButtons/ProcessButtons.tsx @@ -9,7 +9,6 @@ const ProcessButtons = () => { return ( - {/* {activeTabName === 'img2img' && } */} ); diff --git a/invokeai/frontend/web/src/features/system/components/SettingsModal/SettingsClearIntermediates.tsx b/invokeai/frontend/web/src/features/system/components/SettingsModal/SettingsClearIntermediates.tsx index d75eb4d4c2..9d095f3511 100644 --- a/invokeai/frontend/web/src/features/system/components/SettingsModal/SettingsClearIntermediates.tsx +++ b/invokeai/frontend/web/src/features/system/components/SettingsModal/SettingsClearIntermediates.tsx @@ -1,60 +1,71 @@ -import { useAppDispatch, useAppSelector } from 'app/store/storeHooks'; -import { useCallback, useEffect, useState } from 'react'; -import { StyledFlex } from './SettingsModal'; import { Heading, Text } from '@chakra-ui/react'; +import { useAppDispatch } from 'app/store/storeHooks'; +import { useCallback, useEffect } from 'react'; import IAIButton from '../../../../common/components/IAIButton'; -import { useClearIntermediatesMutation } from '../../../../services/api/endpoints/images'; -import { addToast } from '../../store/systemSlice'; +import { + useClearIntermediatesMutation, + useGetIntermediatesCountQuery, +} from '../../../../services/api/endpoints/images'; import { resetCanvas } from '../../../canvas/store/canvasSlice'; +import { addToast } from '../../store/systemSlice'; +import { StyledFlex } from './SettingsModal'; +import { controlNetReset } from 'features/controlNet/store/controlNetSlice'; export default function SettingsClearIntermediates() { const dispatch = useAppDispatch(); - const [isDisabled, setIsDisabled] = useState(false); + + const { data: intermediatesCount, 
refetch: updateIntermediatesCount } = + useGetIntermediatesCountQuery(); const [clearIntermediates, { isLoading: isLoadingClearIntermediates }] = useClearIntermediatesMutation(); const handleClickClearIntermediates = useCallback(() => { - clearIntermediates({}) + clearIntermediates() .unwrap() .then((response) => { + dispatch(controlNetReset()); dispatch(resetCanvas()); dispatch( addToast({ - title: - response === 0 - ? `No intermediates to clear` - : `Successfully cleared ${response} intermediates`, + title: `Cleared ${response} intermediates`, status: 'info', }) ); - if (response < 100) { - setIsDisabled(true); - } }); }, [clearIntermediates, dispatch]); + useEffect(() => { + // update the count on mount + updateIntermediatesCount(); + }, [updateIntermediatesCount]); + + const buttonText = intermediatesCount + ? `Clear ${intermediatesCount} Intermediate${ + intermediatesCount > 1 ? 's' : '' + }` + : 'No Intermediates to Clear'; + return ( Clear Intermediates - {isDisabled ? 'Intermediates Cleared' : 'Clear 100 Intermediates'} + {buttonText} - - Will permanently delete first 100 intermediates found on disk and in - database + + Clearing intermediates will reset your Canvas and ControlNet state. - This will also clear your canvas state. - + Intermediate images are byproducts of generation, different from the - result images in the gallery. Purging intermediates will free disk - space. Your gallery images will not be deleted. + result images in the gallery. Clearing intermediates will free disk + space. + Your gallery images will not be deleted. 
); } diff --git a/invokeai/frontend/web/src/features/system/components/SettingsModal/SettingsModal.tsx b/invokeai/frontend/web/src/features/system/components/SettingsModal/SettingsModal.tsx index 5ec7a09b67..31dd6162ec 100644 --- a/invokeai/frontend/web/src/features/system/components/SettingsModal/SettingsModal.tsx +++ b/invokeai/frontend/web/src/features/system/components/SettingsModal/SettingsModal.tsx @@ -24,7 +24,6 @@ import { setEnableImageDebugging, setIsNodesEnabled, setShouldConfirmOnDelete, - setShouldDisplayGuides, shouldAntialiasProgressImageChanged, shouldLogToConsoleChanged, } from 'features/system/store/systemSlice'; @@ -56,7 +55,6 @@ const selector = createSelector( (system: SystemState, ui: UIState) => { const { shouldConfirmOnDelete, - shouldDisplayGuides, enableImageDebugging, consoleLogLevel, shouldLogToConsole, @@ -73,7 +71,6 @@ const selector = createSelector( return { shouldConfirmOnDelete, - shouldDisplayGuides, enableImageDebugging, shouldUseCanvasBetaLayout, shouldUseSliders, @@ -139,7 +136,6 @@ const SettingsModal = ({ children, config }: SettingsModalProps) => { const { shouldConfirmOnDelete, - shouldDisplayGuides, enableImageDebugging, shouldUseCanvasBetaLayout, shouldUseSliders, @@ -195,7 +191,7 @@ const SettingsModal = ({ children, config }: SettingsModalProps) => { @@ -231,14 +227,6 @@ const SettingsModal = ({ children, config }: SettingsModalProps) => { {t('settings.ui')} - ) => - dispatch(setShouldDisplayGuides(e.target.checked)) - } - /> - { {shouldShowResetWebUiText && ( <> - {t('settings.resetWebUIDesc1')} - {t('settings.resetWebUIDesc2')} + + {t('settings.resetWebUIDesc1')} + + + {t('settings.resetWebUIDesc2')} + )} diff --git a/invokeai/frontend/web/src/features/system/store/systemSlice.ts b/invokeai/frontend/web/src/features/system/store/systemSlice.ts index 5bbaa2265f..1f2b452151 100644 --- a/invokeai/frontend/web/src/features/system/store/systemSlice.ts +++ b/invokeai/frontend/web/src/features/system/store/systemSlice.ts @@ 
-38,7 +38,6 @@ export interface SystemState { currentIteration: number; totalIterations: number; currentStatusHasSteps: boolean; - shouldDisplayGuides: boolean; isCancelable: boolean; enableImageDebugging: boolean; toastQueue: UseToastOptions[]; @@ -84,14 +83,12 @@ export interface SystemState { shouldAntialiasProgressImage: boolean; language: keyof typeof LANGUAGES; isUploading: boolean; - boardIdToAddTo?: string; isNodesEnabled: boolean; } export const initialSystemState: SystemState = { isConnected: false, isProcessing: false, - shouldDisplayGuides: true, isGFPGANAvailable: true, isESRGANAvailable: true, shouldConfirmOnDelete: true, @@ -134,9 +131,6 @@ export const systemSlice = createSlice({ setShouldConfirmOnDelete: (state, action: PayloadAction) => { state.shouldConfirmOnDelete = action.payload; }, - setShouldDisplayGuides: (state, action: PayloadAction) => { - state.shouldDisplayGuides = action.payload; - }, setIsCancelable: (state, action: PayloadAction) => { state.isCancelable = action.payload; }, @@ -204,7 +198,6 @@ export const systemSlice = createSlice({ */ builder.addCase(appSocketSubscribed, (state, action) => { state.sessionId = action.payload.sessionId; - state.boardIdToAddTo = action.payload.boardId; state.canceledSession = ''; }); @@ -213,7 +206,6 @@ export const systemSlice = createSlice({ */ builder.addCase(appSocketUnsubscribed, (state) => { state.sessionId = null; - state.boardIdToAddTo = undefined; }); /** @@ -390,7 +382,6 @@ export const { setIsProcessing, setShouldConfirmOnDelete, setCurrentStatus, - setShouldDisplayGuides, setIsCancelable, setEnableImageDebugging, addToast, diff --git a/invokeai/frontend/web/src/features/ui/components/tabs/ModelManager/subpanels/ModelManagerPanel/ModelListItem.tsx b/invokeai/frontend/web/src/features/ui/components/tabs/ModelManager/subpanels/ModelManagerPanel/ModelListItem.tsx index 224b0ac003..4de5131f65 100644 --- 
a/invokeai/frontend/web/src/features/ui/components/tabs/ModelManager/subpanels/ModelManagerPanel/ModelListItem.tsx +++ b/invokeai/frontend/web/src/features/ui/components/tabs/ModelManager/subpanels/ModelManagerPanel/ModelListItem.tsx @@ -98,16 +98,7 @@ export default function ModelListItem(props: ModelListItemProps) { onClick={handleSelectModel} > - + { modelBaseTypeMap[ model.base_model as keyof typeof modelBaseTypeMap diff --git a/invokeai/frontend/web/src/services/api/endpoints/images.ts b/invokeai/frontend/web/src/services/api/endpoints/images.ts index a37edd48aa..52f410e315 100644 --- a/invokeai/frontend/web/src/services/api/endpoints/images.ts +++ b/invokeai/frontend/web/src/services/api/endpoints/images.ts @@ -127,6 +127,13 @@ export const imagesApi = api.injectEndpoints({ // 24 hours - reducing this to a few minutes would reduce memory usage. keepUnusedDataFor: 86400, }), + getIntermediatesCount: build.query({ + query: () => ({ url: getListImagesUrl({ is_intermediate: true }) }), + providesTags: ['IntermediatesCount'], + transformResponse: (response: OffsetPaginatedResults_ImageDTO_) => { + return response.total; + }, + }), getImageDTO: build.query({ query: (image_name) => ({ url: `images/${image_name}` }), providesTags: (result, error, arg) => { @@ -148,8 +155,9 @@ export const imagesApi = api.injectEndpoints({ }, keepUnusedDataFor: 86400, // 24 hours }), - clearIntermediates: build.mutation({ + clearIntermediates: build.mutation({ query: () => ({ url: `images/clear-intermediates`, method: 'POST' }), + invalidatesTags: ['IntermediatesCount'], }), deleteImage: build.mutation({ query: ({ image_name }) => ({ @@ -617,6 +625,7 @@ export const imagesApi = api.injectEndpoints({ }); export const { + useGetIntermediatesCountQuery, useListImagesQuery, useLazyListImagesQuery, useGetImageDTOQuery, diff --git a/invokeai/frontend/web/src/services/api/hooks/useBoardName.ts b/invokeai/frontend/web/src/services/api/hooks/useBoardName.ts new file mode 100644 index 
0000000000..d63b6e0425 --- /dev/null +++ b/invokeai/frontend/web/src/services/api/hooks/useBoardName.ts @@ -0,0 +1,26 @@ +import { BoardId } from 'features/gallery/store/gallerySlice'; +import { useListAllBoardsQuery } from '../endpoints/boards'; + +export const useBoardName = (board_id: BoardId | null | undefined) => { + const { boardName } = useListAllBoardsQuery(undefined, { + selectFromResult: ({ data }) => { + let boardName = ''; + if (board_id === 'images') { + boardName = 'All Images'; + } else if (board_id === 'assets') { + boardName = 'All Assets'; + } else if (board_id === 'no_board') { + boardName = 'No Board'; + } else if (board_id === 'batch') { + boardName = 'Batch'; + } else { + const selectedBoard = data?.find((b) => b.board_id === board_id); + boardName = selectedBoard?.board_name || 'Unknown Board'; + } + + return { boardName }; + }, + }); + + return boardName; +}; From 82eb1f107564aeea80f42741e5125c997786f06f Mon Sep 17 00:00:00 2001 From: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com> Date: Thu, 20 Jul 2023 18:50:43 +1200 Subject: [PATCH 24/51] feat: Add Sync Models to UI --- invokeai/frontend/web/public/locales/en.json | 7 +- .../fields/ModelInputFieldComponent.tsx | 31 +++++---- .../MainModel/ParamMainModelSelect.tsx | 28 +++++--- .../tabs/ModelManager/ModelManagerTab.tsx | 12 +++- .../subpanels/ModelManagerSettingsPanel.tsx | 10 +++ .../ModelManagerSettingsPanel/SyncModels.tsx | 35 ++++++++++ .../SyncModelsButton.tsx | 66 +++++++++++++++++++ .../web/src/services/api/endpoints/models.ts | 13 ++++ .../frontend/web/src/services/api/schema.d.ts | 24 +++---- 9 files changed, 190 insertions(+), 36 deletions(-) create mode 100644 invokeai/frontend/web/src/features/ui/components/tabs/ModelManager/subpanels/ModelManagerSettingsPanel.tsx create mode 100644 invokeai/frontend/web/src/features/ui/components/tabs/ModelManager/subpanels/ModelManagerSettingsPanel/SyncModels.tsx create mode 100644 
invokeai/frontend/web/src/features/ui/components/tabs/ModelManager/subpanels/ModelManagerSettingsPanel/SyncModelsButton.tsx diff --git a/invokeai/frontend/web/public/locales/en.json b/invokeai/frontend/web/public/locales/en.json index 32ff574925..20b11a61e9 100644 --- a/invokeai/frontend/web/public/locales/en.json +++ b/invokeai/frontend/web/public/locales/en.json @@ -455,7 +455,12 @@ "addDifference": "Add Difference", "pickModelType": "Pick Model Type", "selectModel": "Select Model", - "importModels": "Import Models" + "importModels": "Import Models", + "settings": "Settings", + "syncModels": "Sync Models", + "syncModelsDesc": "If your models are out of sync with the backend, you can refresh them using this option. This is generally handy in cases where you manually update your models.yaml file or add models to the InvokeAI root folder after the application has booted.", + "modelsSynced": "Models Synced", + "modelSyncFailed": "Model Sync Failed" }, "parameters": { "general": "General", diff --git a/invokeai/frontend/web/src/features/nodes/components/fields/ModelInputFieldComponent.tsx b/invokeai/frontend/web/src/features/nodes/components/fields/ModelInputFieldComponent.tsx index 67878ed82c..b578298149 100644 --- a/invokeai/frontend/web/src/features/nodes/components/fields/ModelInputFieldComponent.tsx +++ b/invokeai/frontend/web/src/features/nodes/components/fields/ModelInputFieldComponent.tsx @@ -5,10 +5,12 @@ import { ModelInputFieldTemplate, } from 'features/nodes/types/types'; +import { Box, Flex } from '@chakra-ui/react'; import { SelectItem } from '@mantine/core'; import IAIMantineSearchableSelect from 'common/components/IAIMantineSearchableSelect'; import { MODEL_TYPE_MAP } from 'features/parameters/types/constants'; import { modelIdToMainModelParam } from 'features/parameters/util/modelIdToMainModelParam'; +import SyncModelsButton from 'features/ui/components/tabs/ModelManager/subpanels/ModelManagerSettingsPanel/SyncModelsButton'; import { forEach } from 
'lodash-es'; import { memo, useCallback, useMemo } from 'react'; import { useTranslation } from 'react-i18next'; @@ -88,18 +90,23 @@ const ModelInputFieldComponent = ( data={[]} /> ) : ( - 0 ? 'Select a model' : 'No models available'} - data={data} - error={data.length === 0} - disabled={data.length === 0} - onChange={handleChangeModel} - /> + + 0 ? 'Select a model' : 'No models available'} + data={data} + error={data.length === 0} + disabled={data.length === 0} + onChange={handleChangeModel} + /> + + + + ); }; diff --git a/invokeai/frontend/web/src/features/parameters/components/Parameters/MainModel/ParamMainModelSelect.tsx b/invokeai/frontend/web/src/features/parameters/components/Parameters/MainModel/ParamMainModelSelect.tsx index eee2a36d1b..75f1bc8bd9 100644 --- a/invokeai/frontend/web/src/features/parameters/components/Parameters/MainModel/ParamMainModelSelect.tsx +++ b/invokeai/frontend/web/src/features/parameters/components/Parameters/MainModel/ParamMainModelSelect.tsx @@ -4,6 +4,7 @@ import { useTranslation } from 'react-i18next'; import { useAppDispatch, useAppSelector } from 'app/store/storeHooks'; import IAIMantineSearchableSelect from 'common/components/IAIMantineSearchableSelect'; +import { Box, Flex } from '@chakra-ui/react'; import { SelectItem } from '@mantine/core'; import { createSelector } from '@reduxjs/toolkit'; import { stateSelector } from 'app/store/store'; @@ -11,6 +12,7 @@ import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions'; import { modelSelected } from 'features/parameters/store/actions'; import { MODEL_TYPE_MAP } from 'features/parameters/types/constants'; import { modelIdToMainModelParam } from 'features/parameters/util/modelIdToMainModelParam'; +import SyncModelsButton from 'features/ui/components/tabs/ModelManager/subpanels/ModelManagerSettingsPanel/SyncModelsButton'; import { forEach } from 'lodash-es'; import { useGetMainModelsQuery } from 'services/api/endpoints/models'; @@ -84,16 +86,22 @@ const 
ParamMainModelSelect = () => { data={[]} /> ) : ( - 0 ? 'Select a model' : 'No models available'} - data={data} - error={data.length === 0} - disabled={data.length === 0} - onChange={handleChangeModel} - /> + + 0 ? 'Select a model' : 'No models available'} + data={data} + error={data.length === 0} + disabled={data.length === 0} + onChange={handleChangeModel} + w="100%" + /> + + + + ); }; diff --git a/invokeai/frontend/web/src/features/ui/components/tabs/ModelManager/ModelManagerTab.tsx b/invokeai/frontend/web/src/features/ui/components/tabs/ModelManager/ModelManagerTab.tsx index d397058795..1c8ea3a735 100644 --- a/invokeai/frontend/web/src/features/ui/components/tabs/ModelManager/ModelManagerTab.tsx +++ b/invokeai/frontend/web/src/features/ui/components/tabs/ModelManager/ModelManagerTab.tsx @@ -4,8 +4,13 @@ import { ReactNode, memo } from 'react'; import ImportModelsPanel from './subpanels/ImportModelsPanel'; import MergeModelsPanel from './subpanels/MergeModelsPanel'; import ModelManagerPanel from './subpanels/ModelManagerPanel'; +import ModelManagerSettingsPanel from './subpanels/ModelManagerSettingsPanel'; -type ModelManagerTabName = 'modelManager' | 'importModels' | 'mergeModels'; +type ModelManagerTabName = + | 'modelManager' + | 'importModels' + | 'mergeModels' + | 'settings'; type ModelManagerTabInfo = { id: ModelManagerTabName; @@ -29,6 +34,11 @@ const tabs: ModelManagerTabInfo[] = [ label: i18n.t('modelManager.mergeModels'), content: , }, + { + id: 'settings', + label: i18n.t('modelManager.settings'), + content: , + }, ]; const ModelManagerTab = () => { diff --git a/invokeai/frontend/web/src/features/ui/components/tabs/ModelManager/subpanels/ModelManagerSettingsPanel.tsx b/invokeai/frontend/web/src/features/ui/components/tabs/ModelManager/subpanels/ModelManagerSettingsPanel.tsx new file mode 100644 index 0000000000..eebd46de6d --- /dev/null +++ b/invokeai/frontend/web/src/features/ui/components/tabs/ModelManager/subpanels/ModelManagerSettingsPanel.tsx @@ 
-0,0 +1,10 @@ +import { Flex } from '@chakra-ui/react'; +import SyncModels from './ModelManagerSettingsPanel/SyncModels'; + +export default function ModelManagerSettingsPanel() { + return ( + + + + ); +} diff --git a/invokeai/frontend/web/src/features/ui/components/tabs/ModelManager/subpanels/ModelManagerSettingsPanel/SyncModels.tsx b/invokeai/frontend/web/src/features/ui/components/tabs/ModelManager/subpanels/ModelManagerSettingsPanel/SyncModels.tsx new file mode 100644 index 0000000000..e92b118566 --- /dev/null +++ b/invokeai/frontend/web/src/features/ui/components/tabs/ModelManager/subpanels/ModelManagerSettingsPanel/SyncModels.tsx @@ -0,0 +1,35 @@ +import { Flex, Text } from '@chakra-ui/react'; +import { useTranslation } from 'react-i18next'; +import SyncModelsButton from './SyncModelsButton'; + +export default function SyncModels() { + const { t } = useTranslation(); + + return ( + + + {t('modelManager.syncModels')} + + {t('modelManager.syncModelsDesc')} + + + + + ); +} diff --git a/invokeai/frontend/web/src/features/ui/components/tabs/ModelManager/subpanels/ModelManagerSettingsPanel/SyncModelsButton.tsx b/invokeai/frontend/web/src/features/ui/components/tabs/ModelManager/subpanels/ModelManagerSettingsPanel/SyncModelsButton.tsx new file mode 100644 index 0000000000..e42794c0b4 --- /dev/null +++ b/invokeai/frontend/web/src/features/ui/components/tabs/ModelManager/subpanels/ModelManagerSettingsPanel/SyncModelsButton.tsx @@ -0,0 +1,66 @@ +import { makeToast } from 'app/components/Toaster'; +import { useAppDispatch } from 'app/store/storeHooks'; +import IAIButton from 'common/components/IAIButton'; +import IAIIconButton from 'common/components/IAIIconButton'; +import { addToast } from 'features/system/store/systemSlice'; +import { useTranslation } from 'react-i18next'; +import { FaSync } from 'react-icons/fa'; +import { useSyncModelsMutation } from 'services/api/endpoints/models'; + +type SyncModelsButtonProps = { + iconMode?: boolean; +}; + +export default 
function SyncModelsButton(props: SyncModelsButtonProps) { + const { iconMode = false } = props; + const dispatch = useAppDispatch(); + const { t } = useTranslation(); + + const [syncModels, { isLoading }] = useSyncModelsMutation(); + + const syncModelsHandler = () => { + syncModels() + .unwrap() + .then((_) => { + dispatch( + addToast( + makeToast({ + title: `${t('modelManager.modelsSynced')}`, + status: 'success', + }) + ) + ); + }) + .catch((error) => { + if (error) { + dispatch( + addToast( + makeToast({ + title: `${t('modelManager.modelSyncFailed')}`, + status: 'error', + }) + ) + ); + } + }); + }; + + return !iconMode ? ( + + Sync Models + + ) : ( + } + tooltip={t('modelManager.syncModels')} + aria-label={t('modelManager.syncModels')} + isLoading={isLoading} + onClick={syncModelsHandler} + size="sm" + /> + ); +} diff --git a/invokeai/frontend/web/src/services/api/endpoints/models.ts b/invokeai/frontend/web/src/services/api/endpoints/models.ts index 27e9aefcdb..ff82bc2802 100644 --- a/invokeai/frontend/web/src/services/api/endpoints/models.ts +++ b/invokeai/frontend/web/src/services/api/endpoints/models.ts @@ -93,6 +93,9 @@ type AddMainModelArg = { type AddMainModelResponse = paths['/api/v1/models/add']['post']['responses']['201']['content']['application/json']; +type SyncModelsResponse = + paths['/api/v1/models/sync']['post']['responses']['201']['content']['application/json']; + export type SearchFolderResponse = paths['/api/v1/models/search']['get']['responses']['200']['content']['application/json']; @@ -244,6 +247,15 @@ export const modelsApi = api.injectEndpoints({ }, invalidatesTags: [{ type: 'MainModel', id: LIST_TAG }], }), + syncModels: build.mutation({ + query: () => { + return { + url: `models/sync`, + method: 'POST', + }; + }, + invalidatesTags: [{ type: 'MainModel', id: LIST_TAG }], + }), getLoRAModels: build.query, void>({ query: () => ({ url: 'models/', params: { model_type: 'lora' } }), providesTags: (result, error, arg) => { @@ -423,6 +435,7 @@ 
export const { useAddMainModelsMutation, useConvertMainModelsMutation, useMergeMainModelsMutation, + useSyncModelsMutation, useGetModelsInFolderQuery, useGetCheckpointConfigsQuery, } = modelsApi; diff --git a/invokeai/frontend/web/src/services/api/schema.d.ts b/invokeai/frontend/web/src/services/api/schema.d.ts index 26b2e8e37f..6a2e176ffd 100644 --- a/invokeai/frontend/web/src/services/api/schema.d.ts +++ b/invokeai/frontend/web/src/services/api/schema.d.ts @@ -126,7 +126,7 @@ export type paths = { * @description Call after making changes to models.yaml, autoimport directories or models directory to synchronize * in-memory data structures with disk data structures. */ - get: operations["sync_to_config"]; + post: operations["sync_to_config"]; }; "/api/v1/models/merge/{base_model}": { /** @@ -1279,7 +1279,7 @@ export type components = { * @description The nodes in this graph */ nodes?: { - [key: string]: (components["schemas"]["LoadImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | 
components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRawPromptInvocation"] | components["schemas"]["SDXLRefinerRawPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["TextToLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SDXLTextToLatentsInvocation"] | components["schemas"]["SDXLLatentsToLatentsInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["ParamIntInvocation"] | components["schemas"]["ParamFloatInvocation"] | components["schemas"]["ParamStringInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["InpaintInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | 
components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LatentsToLatentsInvocation"]) | undefined; + [key: string]: (components["schemas"]["LoadImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | 
components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRawPromptInvocation"] | components["schemas"]["SDXLRefinerRawPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["TextToLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["InpaintInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["ParamIntInvocation"] | components["schemas"]["ParamFloatInvocation"] | components["schemas"]["ParamStringInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SDXLTextToLatentsInvocation"] | components["schemas"]["SDXLLatentsToLatentsInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | 
components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LatentsToLatentsInvocation"]) | undefined; }; /** * Edges @@ -1322,7 +1322,7 @@ export type components = { * @description The results of node executions */ results: { - [key: string]: (components["schemas"]["ImageOutput"] | components["schemas"]["MaskOutput"] | components["schemas"]["ControlOutput"] | components["schemas"]["ModelLoaderOutput"] | components["schemas"]["LoraLoaderOutput"] | components["schemas"]["VaeLoaderOutput"] | components["schemas"]["MetadataAccumulatorOutput"] | components["schemas"]["CompelOutput"] | components["schemas"]["ClipSkipInvocationOutput"] | components["schemas"]["LatentsOutput"] | components["schemas"]["SDXLModelLoaderOutput"] | components["schemas"]["SDXLRefinerModelLoaderOutput"] | components["schemas"]["PromptOutput"] | components["schemas"]["PromptCollectionOutput"] | components["schemas"]["IntOutput"] | components["schemas"]["FloatOutput"] | components["schemas"]["StringOutput"] | components["schemas"]["IntCollectionOutput"] | components["schemas"]["FloatCollectionOutput"] | components["schemas"]["ImageCollectionOutput"] | components["schemas"]["NoiseOutput"] | 
components["schemas"]["GraphInvocationOutput"] | components["schemas"]["IterateInvocationOutput"] | components["schemas"]["CollectInvocationOutput"]) | undefined; + [key: string]: (components["schemas"]["ImageOutput"] | components["schemas"]["MaskOutput"] | components["schemas"]["ControlOutput"] | components["schemas"]["ModelLoaderOutput"] | components["schemas"]["LoraLoaderOutput"] | components["schemas"]["VaeLoaderOutput"] | components["schemas"]["MetadataAccumulatorOutput"] | components["schemas"]["IntCollectionOutput"] | components["schemas"]["FloatCollectionOutput"] | components["schemas"]["ImageCollectionOutput"] | components["schemas"]["CompelOutput"] | components["schemas"]["ClipSkipInvocationOutput"] | components["schemas"]["LatentsOutput"] | components["schemas"]["IntOutput"] | components["schemas"]["FloatOutput"] | components["schemas"]["NoiseOutput"] | components["schemas"]["StringOutput"] | components["schemas"]["PromptOutput"] | components["schemas"]["PromptCollectionOutput"] | components["schemas"]["SDXLModelLoaderOutput"] | components["schemas"]["SDXLRefinerModelLoaderOutput"] | components["schemas"]["GraphInvocationOutput"] | components["schemas"]["IterateInvocationOutput"] | components["schemas"]["CollectInvocationOutput"]) | undefined; }; /** * Errors @@ -5323,12 +5323,6 @@ export type components = { */ image?: components["schemas"]["ImageField"]; }; - /** - * StableDiffusion1ModelFormat - * @description An enumeration. - * @enum {string} - */ - StableDiffusion1ModelFormat: "checkpoint" | "diffusers"; /** * StableDiffusion2ModelFormat * @description An enumeration. @@ -5341,6 +5335,12 @@ export type components = { * @enum {string} */ StableDiffusionXLModelFormat: "checkpoint" | "diffusers"; + /** + * StableDiffusion1ModelFormat + * @description An enumeration. 
+ * @enum {string} + */ + StableDiffusion1ModelFormat: "checkpoint" | "diffusers"; }; responses: never; parameters: never; @@ -5451,7 +5451,7 @@ export type operations = { }; requestBody: { content: { - "application/json": components["schemas"]["LoadImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRawPromptInvocation"] | components["schemas"]["SDXLRefinerRawPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["TextToLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SDXLTextToLatentsInvocation"] | components["schemas"]["SDXLLatentsToLatentsInvocation"] | 
components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["ParamIntInvocation"] | components["schemas"]["ParamFloatInvocation"] | components["schemas"]["ParamStringInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["InpaintInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | 
components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LatentsToLatentsInvocation"]; + "application/json": components["schemas"]["LoadImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRawPromptInvocation"] | components["schemas"]["SDXLRefinerRawPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["TextToLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | 
components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["InpaintInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["ParamIntInvocation"] | components["schemas"]["ParamFloatInvocation"] | components["schemas"]["ParamStringInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SDXLTextToLatentsInvocation"] | components["schemas"]["SDXLLatentsToLatentsInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | 
components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LatentsToLatentsInvocation"]; }; }; responses: { @@ -5488,7 +5488,7 @@ export type operations = { }; requestBody: { content: { - "application/json": components["schemas"]["LoadImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRawPromptInvocation"] | components["schemas"]["SDXLRefinerRawPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["TextToLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | 
components["schemas"]["SDXLTextToLatentsInvocation"] | components["schemas"]["SDXLLatentsToLatentsInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["ParamIntInvocation"] | components["schemas"]["ParamFloatInvocation"] | components["schemas"]["ParamStringInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["InpaintInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | 
components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LatentsToLatentsInvocation"]; + "application/json": components["schemas"]["LoadImageInvocation"] | components["schemas"]["ShowImageInvocation"] | components["schemas"]["ImageCropInvocation"] | components["schemas"]["ImagePasteInvocation"] | components["schemas"]["MaskFromAlphaInvocation"] | components["schemas"]["ImageMultiplyInvocation"] | components["schemas"]["ImageChannelInvocation"] | components["schemas"]["ImageConvertInvocation"] | components["schemas"]["ImageBlurInvocation"] | components["schemas"]["ImageResizeInvocation"] | components["schemas"]["ImageScaleInvocation"] | components["schemas"]["ImageLerpInvocation"] | components["schemas"]["ImageInverseLerpInvocation"] | components["schemas"]["ControlNetInvocation"] | components["schemas"]["ImageProcessorInvocation"] | components["schemas"]["MainModelLoaderInvocation"] | components["schemas"]["LoraLoaderInvocation"] | components["schemas"]["VaeLoaderInvocation"] | components["schemas"]["MetadataAccumulatorInvocation"] | components["schemas"]["RangeInvocation"] | components["schemas"]["RangeOfSizeInvocation"] | components["schemas"]["RandomRangeInvocation"] | components["schemas"]["ImageCollectionInvocation"] | components["schemas"]["CompelInvocation"] | components["schemas"]["SDXLCompelPromptInvocation"] | components["schemas"]["SDXLRefinerCompelPromptInvocation"] | components["schemas"]["SDXLRawPromptInvocation"] | components["schemas"]["SDXLRefinerRawPromptInvocation"] | components["schemas"]["ClipSkipInvocation"] | components["schemas"]["CvInpaintInvocation"] | components["schemas"]["TextToLatentsInvocation"] | components["schemas"]["LatentsToImageInvocation"] | 
components["schemas"]["ResizeLatentsInvocation"] | components["schemas"]["ScaleLatentsInvocation"] | components["schemas"]["ImageToLatentsInvocation"] | components["schemas"]["InpaintInvocation"] | components["schemas"]["InfillColorInvocation"] | components["schemas"]["InfillTileInvocation"] | components["schemas"]["InfillPatchMatchInvocation"] | components["schemas"]["AddInvocation"] | components["schemas"]["SubtractInvocation"] | components["schemas"]["MultiplyInvocation"] | components["schemas"]["DivideInvocation"] | components["schemas"]["RandomIntInvocation"] | components["schemas"]["NoiseInvocation"] | components["schemas"]["ParamIntInvocation"] | components["schemas"]["ParamFloatInvocation"] | components["schemas"]["ParamStringInvocation"] | components["schemas"]["FloatLinearRangeInvocation"] | components["schemas"]["StepParamEasingInvocation"] | components["schemas"]["DynamicPromptInvocation"] | components["schemas"]["PromptsFromFileInvocation"] | components["schemas"]["SDXLModelLoaderInvocation"] | components["schemas"]["SDXLRefinerModelLoaderInvocation"] | components["schemas"]["SDXLTextToLatentsInvocation"] | components["schemas"]["SDXLLatentsToLatentsInvocation"] | components["schemas"]["ESRGANInvocation"] | components["schemas"]["GraphInvocation"] | components["schemas"]["IterateInvocation"] | components["schemas"]["CollectInvocation"] | components["schemas"]["CannyImageProcessorInvocation"] | components["schemas"]["HedImageProcessorInvocation"] | components["schemas"]["LineartImageProcessorInvocation"] | components["schemas"]["LineartAnimeImageProcessorInvocation"] | components["schemas"]["OpenposeImageProcessorInvocation"] | components["schemas"]["MidasDepthImageProcessorInvocation"] | components["schemas"]["NormalbaeImageProcessorInvocation"] | components["schemas"]["MlsdImageProcessorInvocation"] | components["schemas"]["PidiImageProcessorInvocation"] | components["schemas"]["ContentShuffleImageProcessorInvocation"] | 
components["schemas"]["ZoeDepthImageProcessorInvocation"] | components["schemas"]["MediapipeFaceProcessorInvocation"] | components["schemas"]["LeresImageProcessorInvocation"] | components["schemas"]["TileResamplerProcessorInvocation"] | components["schemas"]["SegmentAnythingProcessorInvocation"] | components["schemas"]["LatentsToLatentsInvocation"]; }; }; responses: { @@ -5927,7 +5927,7 @@ export type operations = { /** @description synchronization successful */ 201: { content: { - "application/json": unknown; + "application/json": boolean; }; }; }; From 09dfcc42776357cdeba43e6a897cdbc8b9367c08 Mon Sep 17 00:00:00 2001 From: user1 Date: Thu, 20 Jul 2023 00:38:20 -0700 Subject: [PATCH 25/51] Added pixel_perfect_resolution() method to controlnet_utils.py, but not using yet. To be usable this will likely require modification of ControlNet preprocessors --- invokeai/app/util/controlnet_utils.py | 56 ++++++++++++++++++++++++++- 1 file changed, 55 insertions(+), 1 deletion(-) diff --git a/invokeai/app/util/controlnet_utils.py b/invokeai/app/util/controlnet_utils.py index 67fd7bb43e..342fa147c5 100644 --- a/invokeai/app/util/controlnet_utils.py +++ b/invokeai/app/util/controlnet_utils.py @@ -94,12 +94,66 @@ def nake_nms(x): return y +################################################################################ +# copied from Mikubill/sd-webui-controlnet external_code.py and modified for InvokeAI +################################################################################ +# FIXME: not using yet, if used in the future will most likely require modification of preprocessors +def pixel_perfect_resolution( + image: np.ndarray, + target_H: int, + target_W: int, + resize_mode: str, +) -> int: + """ + Calculate the estimated resolution for resizing an image while preserving aspect ratio. + + The function first calculates scaling factors for height and width of the image based on the target + height and width. 
Then, based on the chosen resize mode, it either takes the smaller or the larger + scaling factor to estimate the new resolution. + + If the resize mode is OUTER_FIT, the function uses the smaller scaling factor, ensuring the whole image + fits within the target dimensions, potentially leaving some empty space. + + If the resize mode is not OUTER_FIT, the function uses the larger scaling factor, ensuring the target + dimensions are fully filled, potentially cropping the image. + + After calculating the estimated resolution, the function prints some debugging information. + + Args: + image (np.ndarray): A 3D numpy array representing an image. The dimensions represent [height, width, channels]. + target_H (int): The target height for the image. + target_W (int): The target width for the image. + resize_mode (ResizeMode): The mode for resizing. + + Returns: + int: The estimated resolution after resizing. + """ + raw_H, raw_W, _ = image.shape + + k0 = float(target_H) / float(raw_H) + k1 = float(target_W) / float(raw_W) + + if resize_mode == "fill_resize": + estimation = min(k0, k1) * float(min(raw_H, raw_W)) + else: # "crop_resize" or "just_resize" (or possibly "just_resize_simple"?) 
+ estimation = max(k0, k1) * float(min(raw_H, raw_W)) + + # print(f"Pixel Perfect Computation:") + # print(f"resize_mode = {resize_mode}") + # print(f"raw_H = {raw_H}") + # print(f"raw_W = {raw_W}") + # print(f"target_H = {target_H}") + # print(f"target_W = {target_W}") + # print(f"estimation = {estimation}") + + return int(np.round(estimation)) + + ########################################################################### # Copied from detectmap_proc method in scripts/detectmap_proc.py in Mikubill/sd-webui-controlnet # modified for InvokeAI ########################################################################### # def detectmap_proc(detected_map, module, resize_mode, h, w): -@staticmethod def np_img_resize( np_img: np.ndarray, resize_mode: str, From 6cb9167a1b4fc04527815f644acfd5115284e4a5 Mon Sep 17 00:00:00 2001 From: user1 Date: Wed, 19 Jul 2023 18:52:30 -0700 Subject: [PATCH 26/51] Added controlnet_utils.py with code from lvmin for high quality resize, crop+resize, fill+resize --- invokeai/app/util/controlnet_utils.py | 235 ++++++++++++++++++++++++++ 1 file changed, 235 insertions(+) create mode 100644 invokeai/app/util/controlnet_utils.py diff --git a/invokeai/app/util/controlnet_utils.py b/invokeai/app/util/controlnet_utils.py new file mode 100644 index 0000000000..920bca081b --- /dev/null +++ b/invokeai/app/util/controlnet_utils.py @@ -0,0 +1,235 @@ +import torch +import numpy as np +import cv2 +from PIL import Image +from diffusers.utils import PIL_INTERPOLATION + +from einops import rearrange +from controlnet_aux.util import HWC3, resize_image + +################################################################### +# Copy of scripts/lvminthin.py from Mikubill/sd-webui-controlnet +################################################################### +# High Quality Edge Thinning using Pure Python +# Written by Lvmin Zhangu +# 2023 April +# Stanford University +# If you use this, please Cite "High Quality Edge Thinning using Pure Python", Lvmin Zhang, In 
Mikubill/sd-webui-controlnet. + +lvmin_kernels_raw = [ + np.array([ + [-1, -1, -1], + [0, 1, 0], + [1, 1, 1] + ], dtype=np.int32), + np.array([ + [0, -1, -1], + [1, 1, -1], + [0, 1, 0] + ], dtype=np.int32) +] + +lvmin_kernels = [] +lvmin_kernels += [np.rot90(x, k=0, axes=(0, 1)) for x in lvmin_kernels_raw] +lvmin_kernels += [np.rot90(x, k=1, axes=(0, 1)) for x in lvmin_kernels_raw] +lvmin_kernels += [np.rot90(x, k=2, axes=(0, 1)) for x in lvmin_kernels_raw] +lvmin_kernels += [np.rot90(x, k=3, axes=(0, 1)) for x in lvmin_kernels_raw] + +lvmin_prunings_raw = [ + np.array([ + [-1, -1, -1], + [-1, 1, -1], + [0, 0, -1] + ], dtype=np.int32), + np.array([ + [-1, -1, -1], + [-1, 1, -1], + [-1, 0, 0] + ], dtype=np.int32) +] + +lvmin_prunings = [] +lvmin_prunings += [np.rot90(x, k=0, axes=(0, 1)) for x in lvmin_prunings_raw] +lvmin_prunings += [np.rot90(x, k=1, axes=(0, 1)) for x in lvmin_prunings_raw] +lvmin_prunings += [np.rot90(x, k=2, axes=(0, 1)) for x in lvmin_prunings_raw] +lvmin_prunings += [np.rot90(x, k=3, axes=(0, 1)) for x in lvmin_prunings_raw] + + +def remove_pattern(x, kernel): + objects = cv2.morphologyEx(x, cv2.MORPH_HITMISS, kernel) + objects = np.where(objects > 127) + x[objects] = 0 + return x, objects[0].shape[0] > 0 + + +def thin_one_time(x, kernels): + y = x + is_done = True + for k in kernels: + y, has_update = remove_pattern(y, k) + if has_update: + is_done = False + return y, is_done + + +def lvmin_thin(x, prunings=True): + y = x + for i in range(32): + y, is_done = thin_one_time(y, lvmin_kernels) + if is_done: + break + if prunings: + y, _ = thin_one_time(y, lvmin_prunings) + return y + + +def nake_nms(x): + f1 = np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], dtype=np.uint8) + f2 = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=np.uint8) + f3 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.uint8) + f4 = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], dtype=np.uint8) + y = np.zeros_like(x) + for f in [f1, f2, f3, f4]: + np.putmask(y, 
cv2.dilate(x, kernel=f) == x, x) + return y + + +########################################################################### +# Copied from detectmap_proc method in scripts/detectmap_proc.py in Mikubill/sd-webui-controlnet +# modified for InvokeAI +########################################################################### +# def detectmap_proc(detected_map, module, resize_mode, h, w): +@staticmethod +def np_img_resize( + np_img: np.ndarray, + resize_mode: str, + h: int, + w: int, + device: torch.device = torch.device('cpu') +): + print("in np_img_resize") + # if 'inpaint' in module: + # np_img = np_img.astype(np.float32) + # else: + # np_img = HWC3(np_img) + np_img = HWC3(np_img) + + def safe_numpy(x): + # A very safe method to make sure that Apple/Mac works + y = x + + # below is very boring but do not change these. If you change these Apple or Mac may fail. + y = y.copy() + y = np.ascontiguousarray(y) + y = y.copy() + return y + + def get_pytorch_control(x): + # A very safe method to make sure that Apple/Mac works + y = x + + # below is very boring but do not change these. If you change these Apple or Mac may fail. 
+ y = torch.from_numpy(y) + y = y.float() / 255.0 + y = rearrange(y, 'h w c -> 1 c h w') + y = y.clone() + # y = y.to(devices.get_device_for("controlnet")) + y = y.to(device) + y = y.clone() + return y + + def high_quality_resize(x: np.ndarray, + size): + # Written by lvmin + # Super high-quality control map up-scaling, considering binary, seg, and one-pixel edges + inpaint_mask = None + if x.ndim == 3 and x.shape[2] == 4: + inpaint_mask = x[:, :, 3] + x = x[:, :, 0:3] + + new_size_is_smaller = (size[0] * size[1]) < (x.shape[0] * x.shape[1]) + new_size_is_bigger = (size[0] * size[1]) > (x.shape[0] * x.shape[1]) + unique_color_count = np.unique(x.reshape(-1, x.shape[2]), axis=0).shape[0] + is_one_pixel_edge = False + is_binary = False + if unique_color_count == 2: + is_binary = np.min(x) < 16 and np.max(x) > 240 + if is_binary: + xc = x + xc = cv2.erode(xc, np.ones(shape=(3, 3), dtype=np.uint8), iterations=1) + xc = cv2.dilate(xc, np.ones(shape=(3, 3), dtype=np.uint8), iterations=1) + one_pixel_edge_count = np.where(xc < x)[0].shape[0] + all_edge_count = np.where(x > 127)[0].shape[0] + is_one_pixel_edge = one_pixel_edge_count * 2 > all_edge_count + + if 2 < unique_color_count < 200: + interpolation = cv2.INTER_NEAREST + elif new_size_is_smaller: + interpolation = cv2.INTER_AREA + else: + interpolation = cv2.INTER_CUBIC # Must be CUBIC because we now use nms. 
NEVER CHANGE THIS + + y = cv2.resize(x, size, interpolation=interpolation) + if inpaint_mask is not None: + inpaint_mask = cv2.resize(inpaint_mask, size, interpolation=interpolation) + + if is_binary: + y = np.mean(y.astype(np.float32), axis=2).clip(0, 255).astype(np.uint8) + if is_one_pixel_edge: + y = nake_nms(y) + _, y = cv2.threshold(y, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU) + y = lvmin_thin(y, prunings=new_size_is_bigger) + else: + _, y = cv2.threshold(y, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU) + y = np.stack([y] * 3, axis=2) + + if inpaint_mask is not None: + inpaint_mask = (inpaint_mask > 127).astype(np.float32) * 255.0 + inpaint_mask = inpaint_mask[:, :, None].clip(0, 255).astype(np.uint8) + y = np.concatenate([y, inpaint_mask], axis=2) + + return y + + # if resize_mode == external_code.ResizeMode.RESIZE: + if resize_mode == "just_resize": # RESIZE + print("just resizing") + np_img = high_quality_resize(np_img, (w, h)) + np_img = safe_numpy(np_img) + return get_pytorch_control(np_img), np_img + + old_h, old_w, _ = np_img.shape + old_w = float(old_w) + old_h = float(old_h) + k0 = float(h) / old_h + k1 = float(w) / old_w + + safeint = lambda x: int(np.round(x)) + + # if resize_mode == external_code.ResizeMode.OUTER_FIT: + if resize_mode == "fill_resize": # OUTER_FIT + print("fill + resizing") + k = min(k0, k1) + borders = np.concatenate([np_img[0, :, :], np_img[-1, :, :], np_img[:, 0, :], np_img[:, -1, :]], axis=0) + high_quality_border_color = np.median(borders, axis=0).astype(np_img.dtype) + if len(high_quality_border_color) == 4: + # Inpaint hijack + high_quality_border_color[3] = 255 + high_quality_background = np.tile(high_quality_border_color[None, None], [h, w, 1]) + np_img = high_quality_resize(np_img, (safeint(old_w * k), safeint(old_h * k))) + new_h, new_w, _ = np_img.shape + pad_h = max(0, (h - new_h) // 2) + pad_w = max(0, (w - new_w) // 2) + high_quality_background[pad_h:pad_h + new_h, pad_w:pad_w + new_w] = np_img + np_img = 
high_quality_background + np_img = safe_numpy(np_img) + return get_pytorch_control(np_img), np_img + else: # resize_mode == "crop_resize" (INNER_FIT) + print("crop + resizing") + k = max(k0, k1) + np_img = high_quality_resize(np_img, (safeint(old_w * k), safeint(old_h * k))) + new_h, new_w, _ = np_img.shape + pad_h = max(0, (new_h - h) // 2) + pad_w = max(0, (new_w - w) // 2) + np_img = np_img[pad_h:pad_h + h, pad_w:pad_w + w] + np_img = safe_numpy(np_img) + return get_pytorch_control(np_img), np_img From b8e0810ed1acf1af630c44d8e95caeb64ccb02a5 Mon Sep 17 00:00:00 2001 From: user1 Date: Wed, 19 Jul 2023 19:01:14 -0700 Subject: [PATCH 27/51] Added revised prepare_control_image() that leverages lvmin high quality resizing --- invokeai/app/util/controlnet_utils.py | 61 +++++++++++++++++++++++++-- 1 file changed, 57 insertions(+), 4 deletions(-) diff --git a/invokeai/app/util/controlnet_utils.py b/invokeai/app/util/controlnet_utils.py index 920bca081b..67fd7bb43e 100644 --- a/invokeai/app/util/controlnet_utils.py +++ b/invokeai/app/util/controlnet_utils.py @@ -107,7 +107,6 @@ def np_img_resize( w: int, device: torch.device = torch.device('cpu') ): - print("in np_img_resize") # if 'inpaint' in module: # np_img = np_img.astype(np.float32) # else: @@ -192,7 +191,6 @@ def np_img_resize( # if resize_mode == external_code.ResizeMode.RESIZE: if resize_mode == "just_resize": # RESIZE - print("just resizing") np_img = high_quality_resize(np_img, (w, h)) np_img = safe_numpy(np_img) return get_pytorch_control(np_img), np_img @@ -207,7 +205,6 @@ def np_img_resize( # if resize_mode == external_code.ResizeMode.OUTER_FIT: if resize_mode == "fill_resize": # OUTER_FIT - print("fill + resizing") k = min(k0, k1) borders = np.concatenate([np_img[0, :, :], np_img[-1, :, :], np_img[:, 0, :], np_img[:, -1, :]], axis=0) high_quality_border_color = np.median(borders, axis=0).astype(np_img.dtype) @@ -224,7 +221,6 @@ def np_img_resize( np_img = safe_numpy(np_img) return 
get_pytorch_control(np_img), np_img else: # resize_mode == "crop_resize" (INNER_FIT) - print("crop + resizing") k = max(k0, k1) np_img = high_quality_resize(np_img, (safeint(old_w * k), safeint(old_h * k))) new_h, new_w, _ = np_img.shape @@ -233,3 +229,60 @@ def np_img_resize( np_img = np_img[pad_h:pad_h + h, pad_w:pad_w + w] np_img = safe_numpy(np_img) return get_pytorch_control(np_img), np_img + +def prepare_control_image( + # image used to be Union[PIL.Image.Image, List[PIL.Image.Image], torch.Tensor, List[torch.Tensor]] + # but now should be able to assume that image is a single PIL.Image, which simplifies things + image: Image, + # FIXME: need to fix hardwiring of width and height, change to basing on latents dimensions? + # latents_to_match_resolution, # TorchTensor of shape (batch_size, 3, height, width) + width=512, # should be 8 * latent.shape[3] + height=512, # should be 8 * latent height[2] + # batch_size=1, # currently no batching + # num_images_per_prompt=1, # currently only single image + device="cuda", + dtype=torch.float16, + do_classifier_free_guidance=True, + control_mode="balanced", + resize_mode="just_resize_simple", +): + # FIXME: implement "crop_resize_simple" and "fill_resize_simple", or pull them out + if (resize_mode == "just_resize_simple" or + resize_mode == "crop_resize_simple" or + resize_mode == "fill_resize_simple"): + image = image.convert("RGB") + if (resize_mode == "just_resize_simple"): + image = image.resize((width, height), resample=PIL_INTERPOLATION["lanczos"]) + elif (resize_mode == "crop_resize_simple"): # not yet implemented + pass + elif (resize_mode == "fill_resize_simple"): # not yet implemented + pass + nimage = np.array(image) + nimage = nimage[None, :] + nimage = np.concatenate([nimage], axis=0) + # normalizing RGB values to [0,1] range (in PIL.Image they are [0-255]) + nimage = np.array(nimage).astype(np.float32) / 255.0 + nimage = nimage.transpose(0, 3, 1, 2) + timage = torch.from_numpy(nimage) + + # use fancy lvmin 
controlnet resizing + elif (resize_mode == "just_resize" or resize_mode == "crop_resize" or resize_mode == "fill_resize"): + nimage = np.array(image) + timage, nimage = np_img_resize( + np_img=nimage, + resize_mode=resize_mode, + h=height, + w=width, + # device=torch.device('cpu') + device=device, + ) + else: + pass + print("ERROR: invalid resize_mode ==> ", resize_mode) + exit(1) + + timage = timage.to(device=device, dtype=dtype) + cfg_injection = (control_mode == "more_control" or control_mode == "unbalanced") + if do_classifier_free_guidance and not cfg_injection: + timage = torch.cat([timage] * 2) + return timage From f2f49bd8d02b04eee31af2d8f2dde13e80b26ac2 Mon Sep 17 00:00:00 2001 From: user1 Date: Wed, 19 Jul 2023 19:17:24 -0700 Subject: [PATCH 28/51] Added resize_mode param to ControlNet node --- invokeai/app/invocations/controlnet_image_processors.py | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/invokeai/app/invocations/controlnet_image_processors.py b/invokeai/app/invocations/controlnet_image_processors.py index 43cad3dcaf..911fede8fb 100644 --- a/invokeai/app/invocations/controlnet_image_processors.py +++ b/invokeai/app/invocations/controlnet_image_processors.py @@ -85,8 +85,8 @@ CONTROLNET_DEFAULT_MODELS = [ CONTROLNET_NAME_VALUES = Literal[tuple(CONTROLNET_DEFAULT_MODELS)] CONTROLNET_MODE_VALUES = Literal[tuple( ["balanced", "more_prompt", "more_control", "unbalanced"])] -# crop and fill options not ready yet -# CONTROLNET_RESIZE_VALUES = Literal[tuple(["just_resize", "crop_resize", "fill_resize"])] +CONTROLNET_RESIZE_VALUES = Literal[tuple( + ["just_resize", "crop_resize", "fill_resize", "just_resize_simple",])] class ControlNetModelField(BaseModel): @@ -111,7 +111,8 @@ class ControlField(BaseModel): description="When the ControlNet is last applied (% of total steps)") control_mode: CONTROLNET_MODE_VALUES = Field( default="balanced", description="The control mode to use") - # resize_mode: CONTROLNET_RESIZE_VALUES = 
Field(default="just_resize", description="The resize mode to use") + resize_mode: CONTROLNET_RESIZE_VALUES = Field( + default="just_resize", description="The resize mode to use") @validator("control_weight") def validate_control_weight(cls, v): @@ -161,6 +162,7 @@ class ControlNetInvocation(BaseInvocation): end_step_percent: float = Field(default=1, ge=0, le=1, description="When the ControlNet is last applied (% of total steps)") control_mode: CONTROLNET_MODE_VALUES = Field(default="balanced", description="The control mode used") + resize_mode: CONTROLNET_RESIZE_VALUES = Field(default="just_resize", description="The resize mode used") # fmt: on class Config(InvocationConfig): @@ -187,6 +189,7 @@ class ControlNetInvocation(BaseInvocation): begin_step_percent=self.begin_step_percent, end_step_percent=self.end_step_percent, control_mode=self.control_mode, + resize_mode=self.resize_mode, ), ) From bab8b6d24048442be0d67a9e37461ed991464aed Mon Sep 17 00:00:00 2001 From: user1 Date: Wed, 19 Jul 2023 19:21:17 -0700 Subject: [PATCH 29/51] Removed diffusers_pipeline prepare_control_image() -- replaced with controlnet_utils.prepare_control_image() Added resize_mode to ControlNetData class. 
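[Editorial note, not part of the patch] The scaling-factor selection that `resize_mode` controls, shared by `pixel_perfect_resolution()` and `np_img_resize()` in the patches above, can be sketched as follows. The helper name `fit_scale` is illustrative and does not appear in the commits:

```python
def fit_scale(raw_h: int, raw_w: int, target_h: int, target_w: int, resize_mode: str) -> float:
    # Per-axis scaling factors from source to target dimensions,
    # mirroring k0/k1 in the patched functions.
    k0 = target_h / raw_h
    k1 = target_w / raw_w
    if resize_mode == "fill_resize":
        # OUTER_FIT: take the smaller factor so the whole image fits
        # inside the target, possibly leaving padded borders.
        return min(k0, k1)
    # crop_resize / just_resize: take the larger factor so the target
    # is fully covered, possibly cropping the image.
    return max(k0, k1)

# A 768x512 (HxW) map into a 512x512 target:
print(fit_scale(768, 512, 512, 512, "fill_resize"))  # 512/768, image letterboxed
print(fit_scale(768, 512, 512, 512, "crop_resize"))  # 1.0, height cropped
```

Note that `pixel_perfect_resolution()` then multiplies the chosen factor by `min(raw_H, raw_W)` to collapse it into a single detector resolution, whereas `np_img_resize()` applies it to both axes before padding or cropping.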
--- .../stable_diffusion/diffusers_pipeline.py | 53 +------------------ 1 file changed, 2 insertions(+), 51 deletions(-) diff --git a/invokeai/backend/stable_diffusion/diffusers_pipeline.py b/invokeai/backend/stable_diffusion/diffusers_pipeline.py index 228fbd0585..8acfb100a6 100644 --- a/invokeai/backend/stable_diffusion/diffusers_pipeline.py +++ b/invokeai/backend/stable_diffusion/diffusers_pipeline.py @@ -219,6 +219,7 @@ class ControlNetData: begin_step_percent: float = Field(default=0.0) end_step_percent: float = Field(default=1.0) control_mode: str = Field(default="balanced") + resize_mode: str = Field(default="just_resize") @dataclass @@ -653,7 +654,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline): if cfg_injection: # Inferred ControlNet only for the conditional batch. # To apply the output of ControlNet to both the unconditional and conditional batches, - # add 0 to the unconditional batch to keep it unchanged. + # prepend zeros for unconditional batch down_samples = [torch.cat([torch.zeros_like(d), d]) for d in down_samples] mid_sample = torch.cat([torch.zeros_like(mid_sample), mid_sample]) @@ -954,53 +955,3 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline): debug_image( img, f"latents {msg} {i+1}/{len(decoded)}", debug_status=True ) - - # Copied from diffusers pipeline_stable_diffusion_controlnet.py - # Returns torch.Tensor of shape (batch_size, 3, height, width) - @staticmethod - def prepare_control_image( - image, - # FIXME: need to fix hardwiring of width and height, change to basing on latents dimensions? 
- # latents, - width=512, # should be 8 * latent.shape[3] - height=512, # should be 8 * latent height[2] - batch_size=1, - num_images_per_prompt=1, - device="cuda", - dtype=torch.float16, - do_classifier_free_guidance=True, - control_mode="balanced" - ): - - if not isinstance(image, torch.Tensor): - if isinstance(image, PIL.Image.Image): - image = [image] - - if isinstance(image[0], PIL.Image.Image): - images = [] - for image_ in image: - image_ = image_.convert("RGB") - image_ = image_.resize((width, height), resample=PIL_INTERPOLATION["lanczos"]) - image_ = np.array(image_) - image_ = image_[None, :] - images.append(image_) - image = images - image = np.concatenate(image, axis=0) - image = np.array(image).astype(np.float32) / 255.0 - image = image.transpose(0, 3, 1, 2) - image = torch.from_numpy(image) - elif isinstance(image[0], torch.Tensor): - image = torch.cat(image, dim=0) - - image_batch_size = image.shape[0] - if image_batch_size == 1: - repeat_by = batch_size - else: - # image batch size is the same as prompt batch size - repeat_by = num_images_per_prompt - image = image.repeat_interleave(repeat_by, dim=0) - image = image.to(device=device, dtype=dtype) - cfg_injection = (control_mode == "more_control" or control_mode == "unbalanced") - if do_classifier_free_guidance and not cfg_injection: - image = torch.cat([image] * 2) - return image From 909f538fb5a62b432c8aa0f61bd2e1a17141766e Mon Sep 17 00:00:00 2001 From: user1 Date: Wed, 19 Jul 2023 19:26:49 -0700 Subject: [PATCH 30/51] Switching over to controlnet_utils prepare_control_image(), with added resize_mode. 
--- invokeai/app/invocations/latent.py | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-) diff --git a/invokeai/app/invocations/latent.py b/invokeai/app/invocations/latent.py index cd15fe156b..b4c3454c88 100644 --- a/invokeai/app/invocations/latent.py +++ b/invokeai/app/invocations/latent.py @@ -30,6 +30,7 @@ from .compel import ConditioningField from .controlnet_image_processors import ControlField from .image import ImageOutput from .model import ModelInfo, UNetField, VaeField +from invokeai.app.util.controlnet_utils import prepare_control_image from diffusers.models.attention_processor import ( AttnProcessor2_0, @@ -288,7 +289,7 @@ class TextToLatentsInvocation(BaseInvocation): # and add in batch_size, num_images_per_prompt? # and do real check for classifier_free_guidance? # prepare_control_image should return torch.Tensor of shape(batch_size, 3, height, width) - control_image = model.prepare_control_image( + control_image = prepare_control_image( image=input_image, do_classifier_free_guidance=do_classifier_free_guidance, width=control_width_resize, @@ -298,13 +299,18 @@ class TextToLatentsInvocation(BaseInvocation): device=control_model.device, dtype=control_model.dtype, control_mode=control_info.control_mode, + resize_mode=control_info.resize_mode, ) control_item = ControlNetData( - model=control_model, image_tensor=control_image, + model=control_model, + image_tensor=control_image, weight=control_info.control_weight, begin_step_percent=control_info.begin_step_percent, end_step_percent=control_info.end_step_percent, control_mode=control_info.control_mode, + # any resizing needed should currently be happening in prepare_control_image(), + # but adding resize_mode to ControlNetData in case needed in the future + resize_mode=control_info.resize_mode, ) control_data.append(control_item) # MultiControlNetModel has been refactored out, just need list[ControlNetData] @@ -601,7 +607,7 @@ class ResizeLatentsInvocation(BaseInvocation): antialias: bool 
= Field( default=False, description="Whether or not to antialias (applied in bilinear and bicubic modes only)") - + class Config(InvocationConfig): schema_extra = { "ui": { @@ -647,7 +653,7 @@ class ScaleLatentsInvocation(BaseInvocation): antialias: bool = Field( default=False, description="Whether or not to antialias (applied in bilinear and bicubic modes only)") - + class Config(InvocationConfig): schema_extra = { "ui": { From 70fec9ddab9c0e3c4f44ecff9e5a2f686ee61461 Mon Sep 17 00:00:00 2001 From: user1 Date: Thu, 20 Jul 2023 00:38:20 -0700 Subject: [PATCH 31/51] Added pixel_perfect_resolution() method to controlnet_utils.py, but not using yet. To be usable this will likely require modification of ControlNet preprocessors --- invokeai/app/util/controlnet_utils.py | 56 ++++++++++++++++++++++++++- 1 file changed, 55 insertions(+), 1 deletion(-) diff --git a/invokeai/app/util/controlnet_utils.py b/invokeai/app/util/controlnet_utils.py index 67fd7bb43e..342fa147c5 100644 --- a/invokeai/app/util/controlnet_utils.py +++ b/invokeai/app/util/controlnet_utils.py @@ -94,12 +94,66 @@ def nake_nms(x): return y +################################################################################ +# copied from Mikubill/sd-webui-controlnet external_code.py and modified for InvokeAI +################################################################################ +# FIXME: not using yet, if used in the future will most likely require modification of preprocessors +def pixel_perfect_resolution( + image: np.ndarray, + target_H: int, + target_W: int, + resize_mode: str, +) -> int: + """ + Calculate the estimated resolution for resizing an image while preserving aspect ratio. + + The function first calculates scaling factors for height and width of the image based on the target + height and width. Then, based on the chosen resize mode, it either takes the smaller or the larger + scaling factor to estimate the new resolution. 
+ + If the resize mode is OUTER_FIT, the function uses the smaller scaling factor, ensuring the whole image + fits within the target dimensions, potentially leaving some empty space. + + If the resize mode is not OUTER_FIT, the function uses the larger scaling factor, ensuring the target + dimensions are fully filled, potentially cropping the image. + + After calculating the estimated resolution, the function prints some debugging information. + + Args: + image (np.ndarray): A 3D numpy array representing an image. The dimensions represent [height, width, channels]. + target_H (int): The target height for the image. + target_W (int): The target width for the image. + resize_mode (ResizeMode): The mode for resizing. + + Returns: + int: The estimated resolution after resizing. + """ + raw_H, raw_W, _ = image.shape + + k0 = float(target_H) / float(raw_H) + k1 = float(target_W) / float(raw_W) + + if resize_mode == "fill_resize": + estimation = min(k0, k1) * float(min(raw_H, raw_W)) + else: # "crop_resize" or "just_resize" (or possibly "just_resize_simple"?) 
+ estimation = max(k0, k1) * float(min(raw_H, raw_W)) + + # print(f"Pixel Perfect Computation:") + # print(f"resize_mode = {resize_mode}") + # print(f"raw_H = {raw_H}") + # print(f"raw_W = {raw_W}") + # print(f"target_H = {target_H}") + # print(f"target_W = {target_W}") + # print(f"estimation = {estimation}") + + return int(np.round(estimation)) + + ########################################################################### # Copied from detectmap_proc method in scripts/detectmap_proc.py in Mikubill/sd-webui-controlnet # modified for InvokeAI ########################################################################### # def detectmap_proc(detected_map, module, resize_mode, h, w): -@staticmethod def np_img_resize( np_img: np.ndarray, resize_mode: str, From b7cdda07812db5dd3ec7a108ddfb5b71014beb36 Mon Sep 17 00:00:00 2001 From: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com> Date: Thu, 20 Jul 2023 22:48:35 +1200 Subject: [PATCH 32/51] feat: Add ControlNet Resize Mode to Linear UI --- .../controlNet/components/ControlNet.tsx | 8 +-- .../parameters/ParamControlNetResizeMode.tsx | 62 +++++++++++++++++++ .../controlNet/store/controlNetSlice.ts | 26 ++++++-- .../addControlNetToLinearGraph.ts | 2 + .../frontend/web/src/services/api/schema.d.ts | 26 ++++++-- 5 files changed, 109 insertions(+), 15 deletions(-) create mode 100644 invokeai/frontend/web/src/features/controlNet/components/parameters/ParamControlNetResizeMode.tsx diff --git a/invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx b/invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx index 368b9f727c..3ba702d3bb 100644 --- a/invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx +++ b/invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx @@ -24,6 +24,7 @@ import ParamControlNetShouldAutoConfig from './ParamControlNetShouldAutoConfig'; import ParamControlNetBeginEnd from './parameters/ParamControlNetBeginEnd'; import 
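The scale-factor selection in `pixel_perfect_resolution()` is easy to check numerically. A minimal standalone restatement (the function name and the worked numbers below are illustrative; the mode logic mirrors the patch):

```python
import numpy as np

def estimate_resolution(raw_h: int, raw_w: int, target_h: int, target_w: int, resize_mode: str) -> int:
    # Scaling factors for height and width relative to the target.
    k0 = target_h / raw_h
    k1 = target_w / raw_w
    if resize_mode == "fill_resize":
        # Whole image must fit inside the target -> use the smaller factor.
        k = min(k0, k1)
    else:
        # Target must be fully covered ("crop_resize" / "just_resize")
        # -> use the larger factor.
        k = max(k0, k1)
    # Apply the chosen factor to the image's short side.
    return int(np.round(k * min(raw_h, raw_w)))
```

For a 512x768 source resized toward a 256x256 target, the factors are k0 = 0.5 and k1 = 1/3, so `fill_resize` estimates round(1/3 * 512) = 171 while `crop_resize` estimates round(0.5 * 512) = 256.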
ParamControlNetControlMode from './parameters/ParamControlNetControlMode'; import ParamControlNetProcessorSelect from './parameters/ParamControlNetProcessorSelect'; +import ParamControlNetResizeMode from './parameters/ParamControlNetResizeMode'; type ControlNetProps = { controlNetId: string; @@ -151,7 +152,7 @@ const ControlNet = (props: ControlNetProps) => { /> )} - + { )} - - - + + diff --git a/invokeai/frontend/web/src/features/controlNet/components/parameters/ParamControlNetResizeMode.tsx b/invokeai/frontend/web/src/features/controlNet/components/parameters/ParamControlNetResizeMode.tsx new file mode 100644 index 0000000000..4b31ebfc64 --- /dev/null +++ b/invokeai/frontend/web/src/features/controlNet/components/parameters/ParamControlNetResizeMode.tsx @@ -0,0 +1,62 @@ +import { createSelector } from '@reduxjs/toolkit'; +import { stateSelector } from 'app/store/store'; +import { useAppDispatch, useAppSelector } from 'app/store/storeHooks'; +import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions'; +import IAIMantineSelect from 'common/components/IAIMantineSelect'; +import { + ResizeModes, + controlNetResizeModeChanged, +} from 'features/controlNet/store/controlNetSlice'; +import { useCallback, useMemo } from 'react'; +import { useTranslation } from 'react-i18next'; + +type ParamControlNetResizeModeProps = { + controlNetId: string; +}; + +const RESIZE_MODE_DATA = [ + { label: 'Resize', value: 'just_resize' }, + { label: 'Crop', value: 'crop_resize' }, + { label: 'Fill', value: 'fill_resize' }, +]; + +export default function ParamControlNetResizeMode( + props: ParamControlNetResizeModeProps +) { + const { controlNetId } = props; + const dispatch = useAppDispatch(); + const selector = useMemo( + () => + createSelector( + stateSelector, + ({ controlNet }) => { + const { resizeMode, isEnabled } = + controlNet.controlNets[controlNetId]; + return { resizeMode, isEnabled }; + }, + defaultSelectorOptions + ), + [controlNetId] + ); + + const { 
resizeMode, isEnabled } = useAppSelector(selector); + + const { t } = useTranslation(); + + const handleResizeModeChange = useCallback( + (resizeMode: ResizeModes) => { + dispatch(controlNetResizeModeChanged({ controlNetId, resizeMode })); + }, + [controlNetId, dispatch] + ); + + return ( + + ); +} diff --git a/invokeai/frontend/web/src/features/controlNet/store/controlNetSlice.ts b/invokeai/frontend/web/src/features/controlNet/store/controlNetSlice.ts index 663edfd65f..2f8668115a 100644 --- a/invokeai/frontend/web/src/features/controlNet/store/controlNetSlice.ts +++ b/invokeai/frontend/web/src/features/controlNet/store/controlNetSlice.ts @@ -3,6 +3,7 @@ import { RootState } from 'app/store/store'; import { ControlNetModelParam } from 'features/parameters/types/parameterSchemas'; import { cloneDeep, forEach } from 'lodash-es'; import { imagesApi } from 'services/api/endpoints/images'; +import { components } from 'services/api/schema'; import { isAnySessionRejected } from 'services/api/thunks/session'; import { appSocketInvocationError } from 'services/events/actions'; import { controlNetImageProcessed } from './actions'; @@ -16,11 +17,13 @@ import { RequiredControlNetProcessorNode, } from './types'; -export type ControlModes = - | 'balanced' - | 'more_prompt' - | 'more_control' - | 'unbalanced'; +export type ControlModes = NonNullable< + components['schemas']['ControlNetInvocation']['control_mode'] +>; + +export type ResizeModes = NonNullable< + components['schemas']['ControlNetInvocation']['resize_mode'] +>; export const initialControlNet: Omit = { isEnabled: true, @@ -29,6 +32,7 @@ export const initialControlNet: Omit = { beginStepPct: 0, endStepPct: 1, controlMode: 'balanced', + resizeMode: 'just_resize', controlImage: null, processedControlImage: null, processorType: 'canny_image_processor', @@ -45,6 +49,7 @@ export type ControlNetConfig = { beginStepPct: number; endStepPct: number; controlMode: ControlModes; + resizeMode: ResizeModes; controlImage: string | 
null; processedControlImage: string | null; processorType: ControlNetProcessorType; @@ -215,6 +220,16 @@ export const controlNetSlice = createSlice({ const { controlNetId, controlMode } = action.payload; state.controlNets[controlNetId].controlMode = controlMode; }, + controlNetResizeModeChanged: ( + state, + action: PayloadAction<{ + controlNetId: string; + resizeMode: ResizeModes; + }> + ) => { + const { controlNetId, resizeMode } = action.payload; + state.controlNets[controlNetId].resizeMode = resizeMode; + }, controlNetProcessorParamsChanged: ( state, action: PayloadAction<{ @@ -342,6 +357,7 @@ export const { controlNetBeginStepPctChanged, controlNetEndStepPctChanged, controlNetControlModeChanged, + controlNetResizeModeChanged, controlNetProcessorParamsChanged, controlNetProcessorTypeChanged, controlNetReset, diff --git a/invokeai/frontend/web/src/features/nodes/util/graphBuilders/addControlNetToLinearGraph.ts b/invokeai/frontend/web/src/features/nodes/util/graphBuilders/addControlNetToLinearGraph.ts index 0f882f248d..578c4371f2 100644 --- a/invokeai/frontend/web/src/features/nodes/util/graphBuilders/addControlNetToLinearGraph.ts +++ b/invokeai/frontend/web/src/features/nodes/util/graphBuilders/addControlNetToLinearGraph.ts @@ -48,6 +48,7 @@ export const addControlNetToLinearGraph = ( beginStepPct, endStepPct, controlMode, + resizeMode, model, processorType, weight, @@ -60,6 +61,7 @@ export const addControlNetToLinearGraph = ( begin_step_percent: beginStepPct, end_step_percent: endStepPct, control_mode: controlMode, + resize_mode: resizeMode, control_model: model as ControlNetInvocation['control_model'], control_weight: weight, }; diff --git a/invokeai/frontend/web/src/services/api/schema.d.ts b/invokeai/frontend/web/src/services/api/schema.d.ts index 6a2e176ffd..3ecef092af 100644 --- a/invokeai/frontend/web/src/services/api/schema.d.ts +++ b/invokeai/frontend/web/src/services/api/schema.d.ts @@ -167,7 +167,7 @@ export type paths = { 
"/api/v1/images/clear-intermediates": { /** * Clear Intermediates - * @description Clears first 100 intermediates + * @description Clears all intermediates */ post: operations["clear_intermediates"]; }; @@ -800,6 +800,13 @@ export type components = { * @enum {string} */ control_mode?: "balanced" | "more_prompt" | "more_control" | "unbalanced"; + /** + * Resize Mode + * @description The resize mode to use + * @default just_resize + * @enum {string} + */ + resize_mode?: "just_resize" | "crop_resize" | "fill_resize" | "just_resize_simple"; }; /** * ControlNetInvocation @@ -859,6 +866,13 @@ export type components = { * @enum {string} */ control_mode?: "balanced" | "more_prompt" | "more_control" | "unbalanced"; + /** + * Resize Mode + * @description The resize mode used + * @default just_resize + * @enum {string} + */ + resize_mode?: "just_resize" | "crop_resize" | "fill_resize" | "just_resize_simple"; }; /** ControlNetModelConfig */ ControlNetModelConfig: { @@ -5324,11 +5338,11 @@ export type components = { image?: components["schemas"]["ImageField"]; }; /** - * StableDiffusion2ModelFormat + * StableDiffusion1ModelFormat * @description An enumeration. * @enum {string} */ - StableDiffusion2ModelFormat: "checkpoint" | "diffusers"; + StableDiffusion1ModelFormat: "checkpoint" | "diffusers"; /** * StableDiffusionXLModelFormat * @description An enumeration. @@ -5336,11 +5350,11 @@ export type components = { */ StableDiffusionXLModelFormat: "checkpoint" | "diffusers"; /** - * StableDiffusion1ModelFormat + * StableDiffusion2ModelFormat * @description An enumeration. 
* @enum {string} */ - StableDiffusion1ModelFormat: "checkpoint" | "diffusers"; + StableDiffusion2ModelFormat: "checkpoint" | "diffusers"; }; responses: never; parameters: never; @@ -6125,7 +6139,7 @@ export type operations = { }; /** * Clear Intermediates - * @description Clears first 100 intermediates + * @description Clears all intermediates */ clear_intermediates: { responses: { From 2872ae2aab6894534c69e11c6254fc51bf3dc64f Mon Sep 17 00:00:00 2001 From: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com> Date: Thu, 20 Jul 2023 22:53:45 +1200 Subject: [PATCH 33/51] fix: Adjust layout of Resize Mode dropdown Moved it next to ControlMode to make it more compact --- .../web/src/features/controlNet/components/ControlNet.tsx | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx b/invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx index 3ba702d3bb..78959e6695 100644 --- a/invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx +++ b/invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx @@ -184,8 +184,10 @@ const ControlNet = (props: ControlNetProps) => { )} - - + + + + From 603989dc0d443dc965421c7ec7ba3eb21865964c Mon Sep 17 00:00:00 2001 From: Lincoln Stein Date: Thu, 20 Jul 2023 08:33:01 -0400 Subject: [PATCH 34/51] added get_log_level and set_log_level operations to the app route --- invokeai/app/api/routers/app_info.py | 37 ++++++++++++++++++++++++++++ 1 file changed, 37 insertions(+) diff --git a/invokeai/app/api/routers/app_info.py b/invokeai/app/api/routers/app_info.py index 8e2955c9aa..e1bfeda4a1 100644 --- a/invokeai/app/api/routers/app_info.py +++ b/invokeai/app/api/routers/app_info.py @@ -1,9 +1,22 @@ +from enum import Enum +from fastapi import Body from fastapi.routing import APIRouter from pydantic import BaseModel, Field from invokeai.backend.image_util.patchmatch import PatchMatch from 
invokeai.version import __version__ +from ..dependencies import ApiDependencies +from invokeai.backend.util.logging import logging + +class LogLevel(int, Enum): + NotSet = logging.NOTSET + Debug = logging.DEBUG + Info = logging.INFO + Warning = logging.WARNING + Error = logging.ERROR + Critical = logging.CRITICAL + app_router = APIRouter(prefix="/v1/app", tags=["app"]) @@ -34,3 +47,27 @@ async def get_config() -> AppConfig: if PatchMatch.patchmatch_available(): infill_methods.append('patchmatch') return AppConfig(infill_methods=infill_methods) + +@app_router.get( + "/logging", + operation_id="get_log_level", + responses={200: {"description" : "The operation was successful"}}, + response_model = LogLevel, +) +async def get_log_level( +) -> LogLevel: + """Returns the log level""" + return LogLevel(ApiDependencies.invoker.services.logger.level) + +@app_router.post( + "/logging", + operation_id="set_log_level", + responses={200: {"description" : "The operation was successful"}}, + response_model = LogLevel, +) +async def set_log_level( + level: LogLevel = Body(description="New log verbosity level"), +) -> LogLevel: + """Sets the log verbosity level""" + ApiDependencies.invoker.services.logger.setLevel(level) + return LogLevel(ApiDependencies.invoker.services.logger.level) From 190ba5af59075a41593a105ec512ad32405f8ac5 Mon Sep 17 00:00:00 2001 From: psychedelicious <4822129+psychedelicious@users.noreply.github.com> Date: Thu, 20 Jul 2023 21:32:34 +1000 Subject: [PATCH 35/51] feat(ui): boards styling --- .../web/src/common/components/IAIDndImage.tsx | 20 +- .../src/common/components/IAIDroppable.tsx | 3 +- .../components/Boards/BoardContextMenu.tsx | 54 ++-- .../Boards/BoardsList/BoardsList.tsx | 99 ++++--- .../Boards/BoardsList/BoardsSearch.tsx | 65 ++++- .../Boards/BoardsList/GalleryBoard.tsx | 271 +++++++++++------- .../Boards/BoardsList/GenericBoard.tsx | 2 +- .../Boards/BoardsList/SystemBoardButton.tsx | 53 ++++ .../gallery/components/GalleryPanel.tsx | 2 +- 
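The `LogLevel` enum added to the app route above is the stdlib `logging` levels wrapped in an int-enum, which is why it round-trips cleanly through `Logger.setLevel()` and `Logger.level`. A small self-contained sketch of that round trip (the logger name here is arbitrary):

```python
import logging
from enum import Enum

class LogLevel(int, Enum):
    NotSet = logging.NOTSET
    Debug = logging.DEBUG
    Info = logging.INFO
    Warning = logging.WARNING
    Error = logging.ERROR
    Critical = logging.CRITICAL

# set_log_level() passes the enum straight to setLevel(); since LogLevel is an
# int subclass the logger stores the numeric value, and get_log_level() can
# reconstruct the enum from Logger.level.
logger = logging.getLogger("loglevel-sketch")
logger.setLevel(LogLevel.Warning)
```

This is also why the endpoint can return `LogLevel(...logger.level)` directly: constructing the enum from the stored integer recovers the symbolic level.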
.../components/ImageGrid/GalleryImage.tsx | 59 ++-- .../src/features/ui/components/InvokeTabs.tsx | 2 +- .../ui/components/tabs/ResizeHandle.tsx | 2 +- .../src/services/api/hooks/useBoardName.ts | 4 +- .../web/src/theme/components/editable.ts | 56 ++++ invokeai/frontend/web/src/theme/theme.ts | 12 + .../src/theme/util/getInputOutlineStyles.ts | 3 + 16 files changed, 482 insertions(+), 225 deletions(-) create mode 100644 invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/SystemBoardButton.tsx create mode 100644 invokeai/frontend/web/src/theme/components/editable.ts diff --git a/invokeai/frontend/web/src/common/components/IAIDndImage.tsx b/invokeai/frontend/web/src/common/components/IAIDndImage.tsx index 6082843c55..57d54e155e 100644 --- a/invokeai/frontend/web/src/common/components/IAIDndImage.tsx +++ b/invokeai/frontend/web/src/common/components/IAIDndImage.tsx @@ -17,13 +17,13 @@ import { } from 'common/components/IAIImageFallback'; import ImageMetadataOverlay from 'common/components/ImageMetadataOverlay'; import { useImageUploadButton } from 'common/hooks/useImageUploadButton'; +import ImageContextMenu from 'features/gallery/components/ImageContextMenu/ImageContextMenu'; import { MouseEvent, ReactElement, SyntheticEvent, memo } from 'react'; import { FaImage, FaUndo, FaUpload } from 'react-icons/fa'; import { ImageDTO, PostUploadAction } from 'services/api/types'; import { mode } from 'theme/util/mode'; import IAIDraggable from './IAIDraggable'; import IAIDroppable from './IAIDroppable'; -import ImageContextMenu from 'features/gallery/components/ImageContextMenu/ImageContextMenu'; type IAIDndImageProps = { imageDTO: ImageDTO | undefined; @@ -148,7 +148,9 @@ const IAIDndImage = (props: IAIDndImageProps) => { maxH: 'full', borderRadius: 'base', shadow: isSelected ? 'selected.light' : undefined, - _dark: { shadow: isSelected ? 'selected.dark' : undefined }, + _dark: { + shadow: isSelected ? 
'selected.dark' : undefined, + }, ...imageSx, }} /> @@ -183,13 +185,6 @@ const IAIDndImage = (props: IAIDndImageProps) => { )} {!imageDTO && isUploadDisabled && noContentFallback} - {!isDropDisabled && ( - - )} {imageDTO && !isDragDisabled && ( { onClick={onClick} /> )} + {!isDropDisabled && ( + + )} {onClickReset && withResetIcon && imageDTO && ( ; }; const IAIDroppable = (props: IAIDroppableProps) => { - const { dropLabel, data, disabled } = props; + const { dropLabel, data, disabled, hoverRef } = props; const dndId = useRef(uuidv4()); const { isOver, setNodeRef, active } = useDroppable({ diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardContextMenu.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardContextMenu.tsx index fa3a6b03be..3b3303f0c8 100644 --- a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardContextMenu.tsx +++ b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardContextMenu.tsx @@ -23,34 +23,32 @@ const BoardContextMenu = memo( dispatch(boardIdSelected(board?.board_id ?? 
board_id)); }, [board?.board_id, board_id, dispatch]); return ( - - - menuProps={{ size: 'sm', isLazy: true }} - menuButtonProps={{ - bg: 'transparent', - _hover: { bg: 'transparent' }, - }} - renderMenu={() => ( - - } onClickCapture={handleSelectBoard}> - Select Board - - {!board && } - {board && ( - - )} - - )} - > - {children} - - + + menuProps={{ size: 'sm', isLazy: true }} + menuButtonProps={{ + bg: 'transparent', + _hover: { bg: 'transparent' }, + }} + renderMenu={() => ( + + } onClickCapture={handleSelectBoard}> + Select Board + + {!board && } + {board && ( + + )} + + )} + > + {children} + ); } ); diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/BoardsList.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/BoardsList.tsx index 61b8856ff9..60be0c4ab3 100644 --- a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/BoardsList.tsx +++ b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/BoardsList.tsx @@ -1,27 +1,21 @@ -import { - Collapse, - Flex, - Grid, - GridItem, - useDisclosure, -} from '@chakra-ui/react'; +import { ButtonGroup, Collapse, Flex, Grid, GridItem } from '@chakra-ui/react'; import { createSelector } from '@reduxjs/toolkit'; import { stateSelector } from 'app/store/store'; import { useAppSelector } from 'app/store/storeHooks'; import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions'; +import IAIIconButton from 'common/components/IAIIconButton'; +import { AnimatePresence, motion } from 'framer-motion'; import { OverlayScrollbarsComponent } from 'overlayscrollbars-react'; -import { memo, useState } from 'react'; +import { memo, useCallback, useState } from 'react'; +import { FaSearch } from 'react-icons/fa'; import { useListAllBoardsQuery } from 'services/api/endpoints/boards'; +import { BoardDTO } from 'services/api/types'; import { useFeatureStatus } from '../../../../system/hooks/useFeatureStatus'; +import 
DeleteBoardModal from '../DeleteBoardModal'; import AddBoardButton from './AddBoardButton'; -import AllAssetsBoard from './AllAssetsBoard'; -import AllImagesBoard from './AllImagesBoard'; -import BatchBoard from './BatchBoard'; import BoardsSearch from './BoardsSearch'; import GalleryBoard from './GalleryBoard'; -import NoBoardBoard from './NoBoardBoard'; -import DeleteBoardModal from '../DeleteBoardModal'; -import { BoardDTO } from 'services/api/types'; +import SystemBoardButton from './SystemBoardButton'; const selector = createSelector( [stateSelector], @@ -48,7 +42,10 @@ const BoardsList = (props: Props) => { ) : boards; const [boardToDelete, setBoardToDelete] = useState(); - const [searchMode, setSearchMode] = useState(false); + const [isSearching, setIsSearching] = useState(false); + const handleClickSearchIcon = useCallback(() => { + setIsSearching((v) => !v); + }, []); return ( <> @@ -64,7 +61,54 @@ const BoardsList = (props: Props) => { }} > - + + {isSearching ? ( + + + + ) : ( + + + + + + + + )} + + } + /> { - {!searchMode && ( - <> - - - - - - - - - - {isBatchEnabled && ( - - - - )} - - )} {filteredBoards && filteredBoards.map((board) => ( diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/BoardsSearch.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/BoardsSearch.tsx index fffe50f6a7..f556b83d24 100644 --- a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/BoardsSearch.tsx +++ b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/BoardsSearch.tsx @@ -10,7 +10,14 @@ import { stateSelector } from 'app/store/store'; import { useAppDispatch, useAppSelector } from 'app/store/storeHooks'; import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions'; import { setBoardSearchText } from 'features/gallery/store/boardSlice'; -import { memo } from 'react'; +import { + ChangeEvent, + KeyboardEvent, + memo, + useCallback, + useEffect, + useRef, +} 
from 'react'; const selector = createSelector( [stateSelector], @@ -22,31 +29,60 @@ const selector = createSelector( ); type Props = { - setSearchMode: (searchMode: boolean) => void; + setIsSearching: (isSearching: boolean) => void; }; const BoardsSearch = (props: Props) => { - const { setSearchMode } = props; + const { setIsSearching } = props; const dispatch = useAppDispatch(); const { searchText } = useAppSelector(selector); + const inputRef = useRef(null); - const handleBoardSearch = (searchTerm: string) => { - setSearchMode(searchTerm.length > 0); - dispatch(setBoardSearchText(searchTerm)); - }; - const clearBoardSearch = () => { - setSearchMode(false); + const handleBoardSearch = useCallback( + (searchTerm: string) => { + dispatch(setBoardSearchText(searchTerm)); + }, + [dispatch] + ); + + const clearBoardSearch = useCallback(() => { dispatch(setBoardSearchText('')); - }; + setIsSearching(false); + }, [dispatch, setIsSearching]); + + const handleKeydown = useCallback( + (e: KeyboardEvent) => { + // exit search mode on escape + if (e.key === 'Escape') { + clearBoardSearch(); + } + }, + [clearBoardSearch] + ); + + const handleChange = useCallback( + (e: ChangeEvent) => { + handleBoardSearch(e.target.value); + }, + [handleBoardSearch] + ); + + useEffect(() => { + // focus the search box on mount + if (!inputRef.current) { + return; + } + inputRef.current.focus(); + }, []); return ( { - handleBoardSearch(e.target.value); - }} + onKeyDown={handleKeydown} + onChange={handleChange} /> {searchText && searchText.length && ( @@ -55,7 +91,8 @@ const BoardsSearch = (props: Props) => { size="xs" variant="ghost" aria-label="Clear Search" - icon={} + opacity={0.5} + icon={} /> )} diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx index 46e7cbcca8..b41b54fff8 100644 --- 
a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx +++ b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx @@ -6,9 +6,9 @@ import { EditableInput, EditablePreview, Flex, + Icon, Image, Text, - useColorMode, } from '@chakra-ui/react'; import { createSelector } from '@reduxjs/toolkit'; import { skipToken } from '@reduxjs/toolkit/dist/query'; @@ -17,14 +17,12 @@ import { stateSelector } from 'app/store/store'; import { useAppDispatch, useAppSelector } from 'app/store/storeHooks'; import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions'; import IAIDroppable from 'common/components/IAIDroppable'; -import { IAINoContentFallback } from 'common/components/IAIImageFallback'; import { boardIdSelected } from 'features/gallery/store/gallerySlice'; -import { memo, useCallback, useMemo } from 'react'; -import { FaUser } from 'react-icons/fa'; +import { memo, useCallback, useMemo, useState } from 'react'; +import { FaFolder } from 'react-icons/fa'; import { useUpdateBoardMutation } from 'services/api/endpoints/boards'; import { useGetImageDTOQuery } from 'services/api/endpoints/images'; import { BoardDTO } from 'services/api/types'; -import { mode } from 'theme/util/mode'; import BoardContextMenu from '../BoardContextMenu'; const AUTO_ADD_BADGE_STYLES: ChakraProps['sx'] = { @@ -66,8 +64,9 @@ const GalleryBoard = memo( board.cover_image_name ?? 
skipToken ); - const { colorMode } = useColorMode(); const { board_name, board_id } = board; + const [localBoardName, setLocalBoardName] = useState(board_name); + const handleSelectBoard = useCallback(() => { dispatch(boardIdSelected(board_id)); }, [board_id, dispatch]); @@ -75,10 +74,6 @@ const GalleryBoard = memo( const [updateBoard, { isLoading: isUpdateBoardLoading }] = useUpdateBoardMutation(); - const handleUpdateBoardName = (newBoardName: string) => { - updateBoard({ board_id, changes: { board_name: newBoardName } }); - }; - const droppableData: MoveBoardDropData = useMemo( () => ({ id: board_id, @@ -88,59 +83,116 @@ [board_id] ); + const handleSubmit = useCallback( + (newBoardName: string) => { + if (!newBoardName) { + // empty strings are not allowed + setLocalBoardName(board_name); + return; + } + if (newBoardName === board_name) { + // don't update the board name if it hasn't changed + return; + } + updateBoard({ board_id, changes: { board_name: newBoardName } }) + .unwrap() + .then((response) => { + // update local state + setLocalBoardName(response.board_name); + }) + .catch(() => { + // revert on error + setLocalBoardName(board_name); + }); + }, + [board_id, board_name, updateBoard] + ); + + const handleChange = useCallback((newBoardName: string) => { + setLocalBoardName(newBoardName); + }, []); + return ( - - + - {(ref) => ( - + + {(ref) => ( - {board.cover_image_name && coverImage?.thumbnail_url && ( - - )} - {!(board.cover_image_name && coverImage?.thumbnail_url) && ( - - )} + + {coverImage?.thumbnail_url ? 
( + + ) : ( + + + + )} + + + + + + + + + Move} /> - - - { - handleUpdateBoardName(nextValue); - }} - sx={{ maxW: 'full' }} - > - - - - - - )} - + )} + + ); } diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GenericBoard.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GenericBoard.tsx index 226100c490..99c0a4681f 100644 --- a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GenericBoard.tsx +++ b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GenericBoard.tsx @@ -92,7 +92,7 @@ const GenericBoard = (props: GenericBoardProps) => { h: 'full', alignItems: 'center', fontWeight: isSelected ? 600 : undefined, - fontSize: 'xs', + fontSize: 'sm', color: isSelected ? 'base.900' : 'base.700', _dark: { color: isSelected ? 'base.50' : 'base.200' }, }} diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/SystemBoardButton.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/SystemBoardButton.tsx new file mode 100644 index 0000000000..b538eee9d1 --- /dev/null +++ b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/SystemBoardButton.tsx @@ -0,0 +1,53 @@ +import { createSelector } from '@reduxjs/toolkit'; +import { stateSelector } from 'app/store/store'; +import { useAppDispatch, useAppSelector } from 'app/store/storeHooks'; +import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions'; +import IAIButton from 'common/components/IAIButton'; +import { boardIdSelected } from 'features/gallery/store/gallerySlice'; +import { memo, useCallback, useMemo } from 'react'; +import { useBoardName } from 'services/api/hooks/useBoardName'; + +type Props = { + board_id: 'images' | 'assets' | 'no_board'; +}; + +const SystemBoardButton = ({ board_id }: Props) => { + const dispatch = useAppDispatch(); + + const selector = useMemo( + () => + createSelector( + [stateSelector], + ({ gallery }) => 
{ + const { selectedBoardId } = gallery; + return { isSelected: selectedBoardId === board_id }; + }, + defaultSelectorOptions + ), + [board_id] + ); + + const { isSelected } = useAppSelector(selector); + + const boardName = useBoardName(board_id); + + const handleClick = useCallback(() => { + dispatch(boardIdSelected(board_id)); + }, [board_id, dispatch]); + + return ( + + {boardName} + + ); +}; + +export default memo(SystemBoardButton); diff --git a/invokeai/frontend/web/src/features/gallery/components/GalleryPanel.tsx b/invokeai/frontend/web/src/features/gallery/components/GalleryPanel.tsx index 2aa44e50a1..1bbec03f3e 100644 --- a/invokeai/frontend/web/src/features/gallery/components/GalleryPanel.tsx +++ b/invokeai/frontend/web/src/features/gallery/components/GalleryPanel.tsx @@ -109,7 +109,7 @@ const GalleryDrawer = () => { isResizable={true} isOpen={shouldShowGallery} onClose={handleCloseGallery} - minWidth={337} + minWidth={400} > diff --git a/invokeai/frontend/web/src/features/gallery/components/ImageGrid/GalleryImage.tsx b/invokeai/frontend/web/src/features/gallery/components/ImageGrid/GalleryImage.tsx index dcce3a1b18..bf627b9591 100644 --- a/invokeai/frontend/web/src/features/gallery/components/ImageGrid/GalleryImage.tsx +++ b/invokeai/frontend/web/src/features/gallery/components/ImageGrid/GalleryImage.tsx @@ -1,4 +1,4 @@ -import { Box } from '@chakra-ui/react'; +import { Box, Flex } from '@chakra-ui/react'; import { createSelector } from '@reduxjs/toolkit'; import { TypesafeDraggableData } from 'app/components/ImageDnd/typesafeDnd'; import { stateSelector } from 'app/store/store'; @@ -86,38 +86,31 @@ const GalleryImage = (props: HoverableImageProps) => { return ( - - {(ref) => ( - - } - // resetTooltip="Delete image" - // withResetIcon // removed bc it's too easy to accidentally delete images - /> - - )} - + + } + // resetTooltip="Delete image" + // withResetIcon // removed bc it's too easy to accidentally delete images + /> + ); }; diff --git 
a/invokeai/frontend/web/src/features/ui/components/InvokeTabs.tsx b/invokeai/frontend/web/src/features/ui/components/InvokeTabs.tsx index 94195a27c1..6c683470e7 100644 --- a/invokeai/frontend/web/src/features/ui/components/InvokeTabs.tsx +++ b/invokeai/frontend/web/src/features/ui/components/InvokeTabs.tsx @@ -105,7 +105,7 @@ const enabledTabsSelector = createSelector( } ); -const MIN_GALLERY_WIDTH = 300; +const MIN_GALLERY_WIDTH = 350; const DEFAULT_GALLERY_PCT = 20; export const NO_GALLERY_TABS: InvokeTabName[] = ['modelManager']; diff --git a/invokeai/frontend/web/src/features/ui/components/tabs/ResizeHandle.tsx b/invokeai/frontend/web/src/features/ui/components/tabs/ResizeHandle.tsx index 7ef0b48784..57f2e89ef0 100644 --- a/invokeai/frontend/web/src/features/ui/components/tabs/ResizeHandle.tsx +++ b/invokeai/frontend/web/src/features/ui/components/tabs/ResizeHandle.tsx @@ -3,7 +3,7 @@ import { memo } from 'react'; import { PanelResizeHandle } from 'react-resizable-panels'; import { mode } from 'theme/util/mode'; -type ResizeHandleProps = FlexProps & { +type ResizeHandleProps = Omit & { direction?: 'horizontal' | 'vertical'; }; diff --git a/invokeai/frontend/web/src/services/api/hooks/useBoardName.ts b/invokeai/frontend/web/src/services/api/hooks/useBoardName.ts index d63b6e0425..cbe0ec1808 100644 --- a/invokeai/frontend/web/src/services/api/hooks/useBoardName.ts +++ b/invokeai/frontend/web/src/services/api/hooks/useBoardName.ts @@ -6,9 +6,9 @@ export const useBoardName = (board_id: BoardId | null | undefined) => { selectFromResult: ({ data }) => { let boardName = ''; if (board_id === 'images') { - boardName = 'All Images'; + boardName = 'Images'; } else if (board_id === 'assets') { - boardName = 'All Assets'; + boardName = 'Assets'; } else if (board_id === 'no_board') { boardName = 'No Board'; } else if (board_id === 'batch') { diff --git a/invokeai/frontend/web/src/theme/components/editable.ts b/invokeai/frontend/web/src/theme/components/editable.ts new file 
mode 100644 index 0000000000..19321e5968 --- /dev/null +++ b/invokeai/frontend/web/src/theme/components/editable.ts @@ -0,0 +1,56 @@ +import { editableAnatomy as parts } from '@chakra-ui/anatomy'; +import { + createMultiStyleConfigHelpers, + defineStyle, +} from '@chakra-ui/styled-system'; +import { mode } from '@chakra-ui/theme-tools'; + +const { definePartsStyle, defineMultiStyleConfig } = + createMultiStyleConfigHelpers(parts.keys); + +const baseStylePreview = defineStyle({ + borderRadius: 'md', + py: '1', + transitionProperty: 'common', + transitionDuration: 'normal', +}); + +const baseStyleInput = defineStyle((props) => ({ + borderRadius: 'md', + py: '1', + transitionProperty: 'common', + transitionDuration: 'normal', + width: 'full', + _focusVisible: { boxShadow: 'outline' }, + _placeholder: { opacity: 0.6 }, + '::selection': { + color: mode('accent.900', 'accent.50')(props), + bg: mode('accent.200', 'accent.400')(props), + }, +})); + +const baseStyleTextarea = defineStyle({ + borderRadius: 'md', + py: '1', + transitionProperty: 'common', + transitionDuration: 'normal', + width: 'full', + _focusVisible: { boxShadow: 'outline' }, + _placeholder: { opacity: 0.6 }, +}); + +const invokeAI = definePartsStyle((props) => ({ + preview: baseStylePreview, + input: baseStyleInput(props), + textarea: baseStyleTextarea, +})); + +export const editableTheme = defineMultiStyleConfig({ + variants: { + invokeAI, + }, + defaultProps: { + size: 'sm', + variant: 'invokeAI', + }, +}); diff --git a/invokeai/frontend/web/src/theme/theme.ts b/invokeai/frontend/web/src/theme/theme.ts index 42a5a12c3f..7fc515c2fe 100644 --- a/invokeai/frontend/web/src/theme/theme.ts +++ b/invokeai/frontend/web/src/theme/theme.ts @@ -20,6 +20,7 @@ import { tabsTheme } from './components/tabs'; import { textTheme } from './components/text'; import { textareaTheme } from './components/textarea'; import { tooltipTheme } from './components/tooltip'; +import { editableTheme } from './components/editable'; 
export const theme: ThemeOverride = { config: { @@ -74,12 +75,23 @@ export const theme: ThemeOverride = { '0px 0px 0px 1px var(--invokeai-colors-base-150), 0px 0px 0px 4px var(--invokeai-colors-accent-400)', dark: '0px 0px 0px 1px var(--invokeai-colors-base-900), 0px 0px 0px 4px var(--invokeai-colors-accent-400)', }, + hoverSelected: { + light: + '0px 0px 0px 1px var(--invokeai-colors-base-150), 0px 0px 0px 4px var(--invokeai-colors-accent-500)', + dark: '0px 0px 0px 1px var(--invokeai-colors-base-900), 0px 0px 0px 4px var(--invokeai-colors-accent-300)', + }, + hoverUnselected: { + light: + '0px 0px 0px 1px var(--invokeai-colors-base-150), 0px 0px 0px 4px var(--invokeai-colors-accent-200)', + dark: '0px 0px 0px 1px var(--invokeai-colors-base-900), 0px 0px 0px 4px var(--invokeai-colors-accent-600)', + }, nodeSelectedOutline: `0 0 0 2px var(--invokeai-colors-accent-450)`, }, colors: InvokeAIColors, components: { Button: buttonTheme, // Button and IconButton Input: inputTheme, + Editable: editableTheme, Textarea: textareaTheme, Tabs: tabsTheme, Progress: progressTheme, diff --git a/invokeai/frontend/web/src/theme/util/getInputOutlineStyles.ts b/invokeai/frontend/web/src/theme/util/getInputOutlineStyles.ts index 8cf64cbd94..ba5fc9e4c1 100644 --- a/invokeai/frontend/web/src/theme/util/getInputOutlineStyles.ts +++ b/invokeai/frontend/web/src/theme/util/getInputOutlineStyles.ts @@ -37,4 +37,7 @@ export const getInputOutlineStyles = (props: StyleFunctionProps) => ({ _placeholder: { color: mode('base.700', 'base.400')(props), }, + '::selection': { + bg: mode('accent.200', 'accent.400')(props), + }, }); From f32bd5dd104c0b48b2c7c6ac2911f9eb4c6016a1 Mon Sep 17 00:00:00 2001 From: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com> Date: Thu, 20 Jul 2023 23:48:28 +1200 Subject: [PATCH 36/51] fix: Minor color tweaks to the name plate on boards --- .../gallery/components/Boards/BoardsList/GalleryBoard.tsx | 2 +- invokeai/frontend/web/src/theme/theme.ts | 4 ++-- 2 
files changed, 3 insertions(+), 3 deletions(-) diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx index b41b54fff8..bcf269ccc2 100644 --- a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx +++ b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx @@ -241,7 +241,7 @@ const GalleryBoard = memo( borderBottomRadius: 'base', bg: 'accent.400', color: isSelected ? 'base.50' : 'base.100', - _dark: { color: 'base.800', bg: 'accent.200' }, + _dark: { color: 'base.200', bg: 'accent.500' }, lineHeight: 'short', fontSize: 'xs', }} diff --git a/invokeai/frontend/web/src/theme/theme.ts b/invokeai/frontend/web/src/theme/theme.ts index 7fc515c2fe..6f7a719e85 100644 --- a/invokeai/frontend/web/src/theme/theme.ts +++ b/invokeai/frontend/web/src/theme/theme.ts @@ -4,6 +4,7 @@ import { InvokeAIColors } from './colors/colors'; import { accordionTheme } from './components/accordion'; import { buttonTheme } from './components/button'; import { checkboxTheme } from './components/checkbox'; +import { editableTheme } from './components/editable'; import { formLabelTheme } from './components/formLabel'; import { inputTheme } from './components/input'; import { menuTheme } from './components/menu'; @@ -20,7 +21,6 @@ import { tabsTheme } from './components/tabs'; import { textTheme } from './components/text'; import { textareaTheme } from './components/textarea'; import { tooltipTheme } from './components/tooltip'; -import { editableTheme } from './components/editable'; export const theme: ThemeOverride = { config: { @@ -73,7 +73,7 @@ export const theme: ThemeOverride = { selected: { light: '0px 0px 0px 1px var(--invokeai-colors-base-150), 0px 0px 0px 4px var(--invokeai-colors-accent-400)', - dark: '0px 0px 0px 1px var(--invokeai-colors-base-900), 0px 0px 0px 4px 
var(--invokeai-colors-accent-400)', + dark: '0px 0px 0px 1px var(--invokeai-colors-base-900), 0px 0px 0px 4px var(--invokeai-colors-accent-500)', }, hoverSelected: { light: From ab9b5f3b958d9a6cb134a9992a6c6ddd1279d614 Mon Sep 17 00:00:00 2001 From: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com> Date: Thu, 20 Jul 2023 23:52:12 +1200 Subject: [PATCH 37/51] fix: Possible fix to the name plate getting displaced --- .../gallery/components/Boards/BoardsList/GalleryBoard.tsx | 1 + 1 file changed, 1 insertion(+) diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx index bcf269ccc2..d2a4693c36 100644 --- a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx +++ b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx @@ -233,6 +233,7 @@ const GalleryBoard = memo( sx={{ position: 'absolute', bottom: 0, + left: 0, p: 1, justifyContent: 'center', alignItems: 'center', From da523fa32fc3430e7798e69014a29f705fd382eb Mon Sep 17 00:00:00 2001 From: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com> Date: Thu, 20 Jul 2023 23:54:11 +1200 Subject: [PATCH 38/51] fix: Editable text aligning left instead of in place. 
--- .../gallery/components/Boards/BoardsList/GalleryBoard.tsx | 1 + 1 file changed, 1 insertion(+) diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx index d2a4693c36..d28cb33200 100644 --- a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx +++ b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx @@ -272,6 +272,7 @@ const GalleryBoard = memo( p: 0, _focusVisible: { p: 0, + textAlign: 'center', // get rid of the edit border boxShadow: 'none', }, From d52355655817923185e9a0e942ccae0f2f059063 Mon Sep 17 00:00:00 2001 From: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com> Date: Fri, 21 Jul 2023 00:03:10 +1200 Subject: [PATCH 39/51] fix: Truncate board name if longer than 20 chars --- .../web/src/features/gallery/components/GalleryBoardName.tsx | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/invokeai/frontend/web/src/features/gallery/components/GalleryBoardName.tsx b/invokeai/frontend/web/src/features/gallery/components/GalleryBoardName.tsx index 12454dd15b..897cf4e7e8 100644 --- a/invokeai/frontend/web/src/features/gallery/components/GalleryBoardName.tsx +++ b/invokeai/frontend/web/src/features/gallery/components/GalleryBoardName.tsx @@ -58,7 +58,9 @@ const GalleryBoardName = (props: Props) => { }, }} > - {boardName} + {boardName.length > 20 + ? 
`${boardName.substring(0, 20)}...` + : boardName} From 1e3cebbf423792aaa01039438f19063283ef94c3 Mon Sep 17 00:00:00 2001 From: psychedelicious <4822129+psychedelicious@users.noreply.github.com> Date: Thu, 20 Jul 2023 22:18:30 +1000 Subject: [PATCH 40/51] feat(ui): add useBoardTotal hook to get total items in board actually not using it now, but it's there --- .../Boards/BoardsList/GenericBoard.tsx | 2 +- .../gallery/components/GalleryBoardName.tsx | 12 +++-- .../src/services/api/hooks/useBoardTotal.ts | 53 +++++++++++++++++++ 3 files changed, 62 insertions(+), 5 deletions(-) create mode 100644 invokeai/frontend/web/src/services/api/hooks/useBoardTotal.ts diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GenericBoard.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GenericBoard.tsx index 99c0a4681f..fa7f944a24 100644 --- a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GenericBoard.tsx +++ b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GenericBoard.tsx @@ -17,7 +17,7 @@ type GenericBoardProps = { badgeCount?: number; }; -const formatBadgeCount = (count: number) => +export const formatBadgeCount = (count: number) => Intl.NumberFormat('en-US', { notation: 'compact', maximumFractionDigits: 1, diff --git a/invokeai/frontend/web/src/features/gallery/components/GalleryBoardName.tsx b/invokeai/frontend/web/src/features/gallery/components/GalleryBoardName.tsx index 897cf4e7e8..2e2efb8e25 100644 --- a/invokeai/frontend/web/src/features/gallery/components/GalleryBoardName.tsx +++ b/invokeai/frontend/web/src/features/gallery/components/GalleryBoardName.tsx @@ -4,7 +4,7 @@ import { createSelector } from '@reduxjs/toolkit'; import { stateSelector } from 'app/store/store'; import { useAppSelector } from 'app/store/storeHooks'; import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions'; -import { memo } from 'react'; +import { memo, useMemo } from 
'react'; import { useBoardName } from 'services/api/hooks/useBoardName'; const selector = createSelector( @@ -26,6 +26,12 @@ const GalleryBoardName = (props: Props) => { const { isOpen, onToggle } = props; const { selectedBoardId } = useAppSelector(selector); const boardName = useBoardName(selectedBoardId); + const formattedBoardName = useMemo(() => { + if (boardName.length > 20) { + return `${boardName.substring(0, 20)}...`; + } + return boardName; + }, [boardName]); return ( { }, }} > - {boardName.length > 20 - ? `${boardName.substring(0, 20)}...` - : boardName} + {formattedBoardName} diff --git a/invokeai/frontend/web/src/services/api/hooks/useBoardTotal.ts b/invokeai/frontend/web/src/services/api/hooks/useBoardTotal.ts new file mode 100644 index 0000000000..8deccd8947 --- /dev/null +++ b/invokeai/frontend/web/src/services/api/hooks/useBoardTotal.ts @@ -0,0 +1,53 @@ +import { skipToken } from '@reduxjs/toolkit/dist/query'; +import { + ASSETS_CATEGORIES, + BoardId, + IMAGE_CATEGORIES, + INITIAL_IMAGE_LIMIT, +} from 'features/gallery/store/gallerySlice'; +import { useMemo } from 'react'; +import { ListImagesArgs, useListImagesQuery } from '../endpoints/images'; + +const baseQueryArg: ListImagesArgs = { + offset: 0, + limit: INITIAL_IMAGE_LIMIT, + is_intermediate: false, +}; + +const imagesQueryArg: ListImagesArgs = { + categories: IMAGE_CATEGORIES, + ...baseQueryArg, +}; + +const assetsQueryArg: ListImagesArgs = { + categories: ASSETS_CATEGORIES, + ...baseQueryArg, +}; + +const noBoardQueryArg: ListImagesArgs = { + board_id: 'none', + ...baseQueryArg, +}; + +export const useBoardTotal = (board_id: BoardId | null | undefined) => { + const queryArg = useMemo(() => { + if (!board_id) { + return; + } + if (board_id === 'images') { + return imagesQueryArg; + } else if (board_id === 'assets') { + return assetsQueryArg; + } else if (board_id === 'no_board') { + return noBoardQueryArg; + } else { + return { board_id, ...baseQueryArg }; + } + }, [board_id]); + + const { 
total } = useListImagesQuery(queryArg ?? skipToken, { + selectFromResult: ({ currentData }) => ({ total: currentData?.total }), + }); + + return total; +}; From a481607d3fd9ce633a1191a3aa9ff23e6c7722d2 Mon Sep 17 00:00:00 2001 From: psychedelicious <4822129+psychedelicious@users.noreply.github.com> Date: Thu, 20 Jul 2023 22:47:59 +1000 Subject: [PATCH 41/51] feat(ui): boards are only punch-you-in-the-face-purple if selected --- .../gallery/components/Boards/BoardsList/GalleryBoard.tsx | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx index d28cb33200..ef21d51257 100644 --- a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx +++ b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx @@ -240,9 +240,12 @@ const GalleryBoard = memo( w: 'full', maxW: 'full', borderBottomRadius: 'base', - bg: 'accent.400', + bg: isSelected ? 'accent.400' : 'base.600', color: isSelected ? 'base.50' : 'base.100', - _dark: { color: 'base.200', bg: 'accent.500' }, + _dark: { + bg: isSelected ? 'accent.500' : 'base.600', + color: isSelected ? 
'base.50' : 'base.100', + }, lineHeight: 'short', fontSize: 'xs', }} From 2771328853ca2e63404d090e362d5c240ec7a733 Mon Sep 17 00:00:00 2001 From: psychedelicious <4822129+psychedelicious@users.noreply.github.com> Date: Thu, 20 Jul 2023 22:48:20 +1000 Subject: [PATCH 42/51] feat(ui): reduce saturation by 8% for 1337 contrast --- invokeai/frontend/web/src/theme/colors/colors.ts | 15 ++++++++++----- 1 file changed, 10 insertions(+), 5 deletions(-) diff --git a/invokeai/frontend/web/src/theme/colors/colors.ts b/invokeai/frontend/web/src/theme/colors/colors.ts index bcb2e43c0b..99260ee071 100644 --- a/invokeai/frontend/web/src/theme/colors/colors.ts +++ b/invokeai/frontend/web/src/theme/colors/colors.ts @@ -2,11 +2,16 @@ import { InvokeAIThemeColors } from 'theme/themeTypes'; import { generateColorPalette } from 'theme/util/generateColorPalette'; const BASE = { H: 220, S: 16 }; -const ACCENT = { H: 250, S: 52 }; -const WORKING = { H: 47, S: 50 }; -const WARNING = { H: 28, S: 50 }; -const OK = { H: 113, S: 50 }; -const ERROR = { H: 0, S: 50 }; +const ACCENT = { H: 250, S: 42 }; +// const ACCENT = { H: 250, S: 52 }; +const WORKING = { H: 47, S: 42 }; +// const WORKING = { H: 47, S: 50 }; +const WARNING = { H: 28, S: 42 }; +// const WARNING = { H: 28, S: 50 }; +const OK = { H: 113, S: 42 }; +// const OK = { H: 113, S: 50 }; +const ERROR = { H: 0, S: 42 }; +// const ERROR = { H: 0, S: 50 }; export const InvokeAIColors: InvokeAIThemeColors = { base: generateColorPalette(BASE.H, BASE.S), From 9e27fd9b906520651d4991656f2f61c46369850d Mon Sep 17 00:00:00 2001 From: psychedelicious <4822129+psychedelicious@users.noreply.github.com> Date: Thu, 20 Jul 2023 22:49:27 +1000 Subject: [PATCH 43/51] feat(ui): color tweak on board --- .../gallery/components/Boards/BoardsList/GalleryBoard.tsx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx 
b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx index ef21d51257..5d76ad743c 100644 --- a/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx +++ b/invokeai/frontend/web/src/features/gallery/components/Boards/BoardsList/GalleryBoard.tsx @@ -240,7 +240,7 @@ const GalleryBoard = memo( w: 'full', maxW: 'full', borderBottomRadius: 'base', - bg: isSelected ? 'accent.400' : 'base.600', + bg: isSelected ? 'accent.400' : 'base.500', color: isSelected ? 'base.50' : 'base.100', _dark: { bg: isSelected ? 'accent.500' : 'base.600', From 8dfe196c4fe60c22d88f52950137cd1a73808b26 Mon Sep 17 00:00:00 2001 From: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com> Date: Fri, 21 Jul 2023 00:54:03 +1200 Subject: [PATCH 44/51] feat: Add Image Count to Board Name --- .../gallery/components/GalleryBoardName.tsx | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-) diff --git a/invokeai/frontend/web/src/features/gallery/components/GalleryBoardName.tsx b/invokeai/frontend/web/src/features/gallery/components/GalleryBoardName.tsx index 2e2efb8e25..27565a52aa 100644 --- a/invokeai/frontend/web/src/features/gallery/components/GalleryBoardName.tsx +++ b/invokeai/frontend/web/src/features/gallery/components/GalleryBoardName.tsx @@ -6,6 +6,7 @@ import { useAppSelector } from 'app/store/storeHooks'; import { defaultSelectorOptions } from 'app/store/util/defaultMemoizeOptions'; import { memo, useMemo } from 'react'; import { useBoardName } from 'services/api/hooks/useBoardName'; +import { useBoardTotal } from 'services/api/hooks/useBoardTotal'; const selector = createSelector( [stateSelector], @@ -26,12 +27,17 @@ const GalleryBoardName = (props: Props) => { const { isOpen, onToggle } = props; const { selectedBoardId } = useAppSelector(selector); const boardName = useBoardName(selectedBoardId); + const numOfBoardImages = useBoardTotal(selectedBoardId); + const formattedBoardName = useMemo(() 
=> { - if (boardName.length > 20) { - return `${boardName.substring(0, 20)}...`; + if (!boardName || !numOfBoardImages) { + return ''; } - return boardName; - }, [boardName]); + if (boardName.length > 20) { + return `${boardName.substring(0, 20)}... (${numOfBoardImages})`; + } + return `${boardName} (${numOfBoardImages})`; + }, [boardName, numOfBoardImages]); return ( Date: Fri, 21 Jul 2023 01:04:16 +1200 Subject: [PATCH 45/51] fix: Layout shift on the ControlNet Panel --- .../web/src/features/controlNet/components/ControlNet.tsx | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx b/invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx index 78959e6695..4674023c75 100644 --- a/invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx +++ b/invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx @@ -152,7 +152,7 @@ const ControlNet = (props: ControlNetProps) => { /> )} - + { h: 28, w: 28, aspectRatio: '1/1', - mt: 3, }} > From 8fb970d436f5b3dd3b4fb77dd44eafd884efb9e0 Mon Sep 17 00:00:00 2001 From: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com> Date: Fri, 21 Jul 2023 01:07:00 +1200 Subject: [PATCH 46/51] fix: Use layout gap to control layout instead of margin --- .../web/src/features/controlNet/components/ControlNet.tsx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx b/invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx index 4674023c75..e2f8bc125c 100644 --- a/invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx +++ b/invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx @@ -69,7 +69,7 @@ const ControlNet = (props: ControlNetProps) => { { /> )} - + Date: Fri, 21 Jul 2023 01:14:19 +1200 Subject: [PATCH 47/51] fix: Expand chevron icon being too small --- 
.../web/src/features/controlNet/components/ControlNet.tsx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx b/invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx index e2f8bc125c..da44eba7ab 100644 --- a/invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx +++ b/invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx @@ -122,7 +122,7 @@ const ControlNet = (props: ControlNetProps) => { icon={ Date: Fri, 21 Jul 2023 01:21:04 +1200 Subject: [PATCH 48/51] fix: Chevron icon styling --- .../src/features/controlNet/components/ControlNet.tsx | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx b/invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx index da44eba7ab..d858e46fdb 100644 --- a/invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx +++ b/invokeai/frontend/web/src/features/controlNet/components/ControlNet.tsx @@ -118,11 +118,16 @@ const ControlNet = (props: ControlNetProps) => { tooltip={isExpanded ? 'Hide Advanced' : 'Show Advanced'} aria-label={isExpanded ? 
'Hide Advanced' : 'Show Advanced'} onClick={toggleIsExpanded} - variant="link" + variant="ghost" + sx={{ + _hover: { + bg: 'none', + }, + }} icon={ Date: Thu, 20 Jul 2023 09:33:21 -0400 Subject: [PATCH 49/51] if updating intermediate, dont add to gallery list cache --- invokeai/frontend/web/src/services/api/endpoints/images.ts | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/invokeai/frontend/web/src/services/api/endpoints/images.ts b/invokeai/frontend/web/src/services/api/endpoints/images.ts index 52f410e315..dc17b28798 100644 --- a/invokeai/frontend/web/src/services/api/endpoints/images.ts +++ b/invokeai/frontend/web/src/services/api/endpoints/images.ts @@ -258,7 +258,12 @@ export const imagesApi = api.injectEndpoints({ */ const patches: PatchCollection[] = []; - const { image_name, board_id, image_category } = oldImageDTO; + const { image_name, board_id, image_category, is_intermediate } = oldImageDTO; + + const isChangingFromIntermediate = changes.is_intermediate === false; + // do not add intermediates to gallery cache + if (is_intermediate && !isChangingFromIntermediate) return; + const categories = IMAGE_CATEGORIES.includes(image_category) ? 
IMAGE_CATEGORIES : ASSETS_CATEGORIES; From 9dc28373d863834d24152c93804a3730ef7fedd2 Mon Sep 17 00:00:00 2001 From: Mary Hipp Date: Thu, 20 Jul 2023 09:38:52 -0400 Subject: [PATCH 50/51] use brackets --- invokeai/frontend/web/src/services/api/endpoints/images.ts | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/invokeai/frontend/web/src/services/api/endpoints/images.ts b/invokeai/frontend/web/src/services/api/endpoints/images.ts index dc17b28798..9901003f28 100644 --- a/invokeai/frontend/web/src/services/api/endpoints/images.ts +++ b/invokeai/frontend/web/src/services/api/endpoints/images.ts @@ -262,7 +262,9 @@ export const imagesApi = api.injectEndpoints({ const isChangingFromIntermediate = changes.is_intermediate === false; // do not add intermediates to gallery cache - if (is_intermediate && !isChangingFromIntermediate) return; + if (is_intermediate && !isChangingFromIntermediate) { + return; + } const categories = IMAGE_CATEGORIES.includes(image_category) ? IMAGE_CATEGORIES From cd21d2f2b6bfda74d61372f6215de0b860070516 Mon Sep 17 00:00:00 2001 From: psychedelicious <4822129+psychedelicious@users.noreply.github.com> Date: Thu, 20 Jul 2023 17:59:55 +1000 Subject: [PATCH 51/51] fix(ui): fix no_board cache not updating two areas marked TODO were not TODONE! 
--- .../web/src/services/api/endpoints/images.ts | 99 +++++++++---------- 1 file changed, 48 insertions(+), 51 deletions(-) diff --git a/invokeai/frontend/web/src/services/api/endpoints/images.ts b/invokeai/frontend/web/src/services/api/endpoints/images.ts index 9901003f28..5eeb86d9c5 100644 --- a/invokeai/frontend/web/src/services/api/endpoints/images.ts +++ b/invokeai/frontend/web/src/services/api/endpoints/images.ts @@ -169,10 +169,11 @@ export const imagesApi = api.injectEndpoints({ ], async onQueryStarted(imageDTO, { dispatch, queryFulfilled }) { /** - * Cache changes for deleteImage: - * - Remove from "All Images" - * - Remove from image's `board_id` if it has one, or "No Board" if not - * - Remove from "Batch" + * Cache changes for `deleteImage`: + * - *remove* from "All Images" / "All Assets" + * - IF it has a board: + * - THEN *remove* from its own board + * - ELSE *remove* from "No Board" */ const { image_name, board_id, image_category } = imageDTO; @@ -181,22 +182,23 @@ // That means constructing the possible query args that are serialized into the cache key... const removeFromCacheKeys: ListImagesArgs[] = []; + + // determine `categories`, i.e. do we update "All Images" or "All Assets" const categories = IMAGE_CATEGORIES.includes(image_category) ? IMAGE_CATEGORIES : ASSETS_CATEGORIES; - // All Images board (e.g. 
no board) + // remove from "All Images" removeFromCacheKeys.push({ categories }); - // Board specific if (board_id) { + // remove from its own board removeFromCacheKeys.push({ board_id }); } else { - // TODO: No Board + // remove from "No Board" + removeFromCacheKeys.push({ board_id: 'none' }); } - // TODO: Batch - const patches: PatchCollection[] = []; removeFromCacheKeys.forEach((cacheKey) => { patches.push( @@ -240,25 +242,24 @@ { imageDTO: oldImageDTO, changes: _changes }, { dispatch, queryFulfilled, getState } ) { - // TODO: Should we handle changes to boards via this mutation? Seems reasonable... - // let's be extra-sure we do not accidentally change categories const changes = omit(_changes, 'image_category'); /** - * Cache changes for `updateImage`: - * - Update the ImageDTO - * - Update the image in "All Images" board: - * - IF it is in the date range represented by the cache: - * - add the image IF it is not already in the cache & update the total - * - ELSE update the image IF it is already in the cache + * Cache changes for "updateImage": + * - *update* "getImageDTO" cache + * - for "All Images" || "All Assets": + * - IF it is not already in the cache + * - THEN *add* it to "All Images" / "All Assets" and update the total + * - ELSE *update* it * - IF the image has a board: - * - Update the image in it's own board - * - ELSE Update the image in the "No Board" board (TODO) + * - THEN *update* its own board + * - ELSE *update* the "No Board" board */ const patches: PatchCollection[] = []; - const { image_name, board_id, image_category, is_intermediate } = oldImageDTO; + const { image_name, board_id, image_category, is_intermediate } = + oldImageDTO; const isChangingFromIntermediate = changes.is_intermediate === false; // do not add intermediates to gallery cache @@ -266,13 +267,12 @@ return; } + // determine `categories`, i.e. 
do we update "All Images" or "All Assets" const categories = IMAGE_CATEGORIES.includes(image_category) ? IMAGE_CATEGORIES : ASSETS_CATEGORIES; - // TODO: No Board - - // Update `getImageDTO` cache + // update `getImageDTO` cache patches.push( dispatch( imagesApi.util.updateQueryData( @@ -288,9 +288,13 @@ // Update the "All Image" or "All Assets" board const queryArgsToUpdate: ListImagesArgs[] = [{ categories }]; + // IF the image has a board: if (board_id) { - // We also need to update the user board + // THEN update its own board queryArgsToUpdate.push({ board_id }); + } else { + // ELSE update the "No Board" board + queryArgsToUpdate.push({ board_id: 'none' }); } queryArgsToUpdate.forEach((queryArg) => { @@ -378,12 +382,12 @@ return; } - // Add the image to the "All Images" / "All Assets" board - const queryArg = { - categories: IMAGE_CATEGORIES.includes(image_category) - ? IMAGE_CATEGORIES - : ASSETS_CATEGORIES, - }; + // determine `categories`, i.e. do we update "All Images" or "All Assets" + const categories = IMAGE_CATEGORIES.includes(image_category) + ? 
IMAGE_CATEGORIES + : ASSETS_CATEGORIES; + + const queryArg = { categories }; dispatch( imagesApi.util.updateQueryData('listImages', queryArg, (draft) => { @@ -417,16 +421,14 @@ export const imagesApi = api.injectEndpoints({ { dispatch, queryFulfilled, getState } ) { /** - * Cache changes for addImageToBoard: - * - Remove from "No Board" - * - Remove from `old_board_id` if it has one - * - Add to new `board_id` - * - IF the image's `created_at` is within the range of the board's cached images + * Cache changes for `addImageToBoard`: + * - *update* the `getImageDTO` cache + * - *remove* from "No Board" + * - IF the image has an old `board_id`: + * - THEN *remove* from it's old `board_id` + * - IF the image's `created_at` is within the range of the board's cached images * - OR the board cache has length of 0 or 1 - * - Update the `total` for each board whose cache is updated - * - Update the ImageDTO - * - * TODO: maybe total should just be updated in the boards endpoints? + * - THEN *add* it to new `board_id` */ const { image_name, board_id: old_board_id } = oldImageDTO; @@ -434,13 +436,10 @@ export const imagesApi = api.injectEndpoints({ // Figure out the `listImages` caches that we need to update const removeFromQueryArgs: ListImagesArgs[] = []; - // TODO: No Board - // TODO: Batch - - // Remove from No Board + // remove from "No Board" removeFromQueryArgs.push({ board_id: 'none' }); - // Remove from old board + // remove from old board if (old_board_id) { removeFromQueryArgs.push({ board_id: old_board_id }); } @@ -541,17 +540,15 @@ export const imagesApi = api.injectEndpoints({ { dispatch, queryFulfilled, getState } ) { /** - * Cache changes for removeImageFromBoard: - * - Add to "No Board" - * - IF the image's `created_at` is within the range of the board's cached images - * - Remove from `old_board_id` - * - Update the ImageDTO + * Cache changes for `removeImageFromBoard`: + * - *update* `getImageDTO` + * - IF the image's `created_at` is within the range of the 
board's cached images + * - THEN *add* to "No Board" + * - *remove* from `old_board_id` */ const { image_name, board_id: old_board_id } = imageDTO; - // TODO: Batch - const patches: PatchCollection[] = []; // Updated imageDTO with new board_id
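The core of the `no_board` fix in patch 51/51 is how each mutation chooses which `listImages` cache keys to touch: the category group ("All Images" vs. "All Assets"), the image's own board if it has one, and otherwise the `'none'` pseudo-`board_id` for "No Board". The sketch below isolates that selection as a plain function. The category constant values and the `ListImagesArgs` shape are assumptions standing in for the real definitions in `images.ts`, kept minimal so the logic runs on its own.

```typescript
// Hypothetical stand-ins for the real constants/types in images.ts (assumptions).
type ImageCategory = 'general' | 'control' | 'mask' | 'user' | 'other';
const IMAGE_CATEGORIES: ImageCategory[] = ['general'];
const ASSETS_CATEGORIES: ImageCategory[] = ['control', 'mask', 'user', 'other'];

interface ListImagesArgs {
  categories?: ImageCategory[];
  board_id?: string;
}

// Mirrors the patch's `deleteImage` cache-key logic: remove from
// "All Images" / "All Assets", then from the image's own board, or from
// the "No Board" pseudo-board (`board_id: 'none'`) when it has none.
function getRemoveFromCacheKeys(
  image_category: ImageCategory,
  board_id?: string
): ListImagesArgs[] {
  const removeFromCacheKeys: ListImagesArgs[] = [];

  // determine `categories`, i.e. do we update "All Images" or "All Assets"
  const categories = IMAGE_CATEGORIES.includes(image_category)
    ? IMAGE_CATEGORIES
    : ASSETS_CATEGORIES;
  removeFromCacheKeys.push({ categories });

  if (board_id) {
    // remove from its own board
    removeFromCacheKeys.push({ board_id });
  } else {
    // remove from "No Board" — the key the TODOs had left un-handled
    removeFromCacheKeys.push({ board_id: 'none' });
  }

  return removeFromCacheKeys;
}
```

Before the fix, the `else` branch was a bare `// TODO: No Board`, so boardless images were removed from the category cache but lingered in the "No Board" cache; pushing `{ board_id: 'none' }` is what closes that gap.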
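The `PatchCollection[]` arrays these mutations accumulate follow RTK Query's optimistic-update pattern: `onQueryStarted` patches each affected cache immediately and keeps a handle to roll the patch back if the request fails. Below is a minimal, dependency-free model of that pattern; `updateQueryData` and `PatchHandle` here are local stand-ins (assumptions), not the real `api.util.updateQueryData` or `PatchCollection`, which operate on Immer drafts inside the Redux store.

```typescript
type Draft = { ids: string[] };

interface PatchHandle {
  undo: () => void;
}

// Apply a mutation to a cache immediately and return a handle that can
// revert it, mirroring how onQueryStarted patches caches before the
// network request settles.
function updateQueryData(cache: Draft, mutate: (draft: Draft) => void): PatchHandle {
  const before = [...cache.ids];
  mutate(cache);
  return {
    undo: () => {
      cache.ids = before;
    },
  };
}

// Optimistically remove an image from a listImages-style cache; on a
// failed request, onQueryStarted would call `undo()` on every collected patch.
const listImagesCache: Draft = { ids: ['img-1', 'img-2'] };
const patch = updateQueryData(listImagesCache, (draft) => {
  draft.ids = draft.ids.filter((id) => id !== 'img-1');
});
// listImagesCache.ids is now ['img-2']; patch.undo() restores ['img-1', 'img-2']
```

This is why the mutations above loop over query args and push one patch per cache key: every affected cache gets its own revertible patch.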