Invoke AI - Generative AI for Professional Creatives
Professional Creative Tools for Stable Diffusion, Custom-Trained Models, and more.
To learn more about Invoke AI, get started instantly, or implement our Business solutions, visit invoke.ai
InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry leading Web Interface, interactive Command Line Interface, and also serves as the foundation for multiple commercial products.
Quick links: [How to Install] [Discord Server] [Documentation and Tutorials] [Bug Reports] [Discussion, Ideas & Q&A] [Contributing]
Table of Contents 📝
Getting Started
More About Invoke
Supporting the Project
- 🤝 Contributing
- 👥 Contributors
- 💕 Support
Quick Start
For full installation and upgrade instructions, please see: InvokeAI Installation Overview
If upgrading from version 2.3, please read Migrating a 2.3 root directory to 3.0 first.
Automatic Installer (suggested for 1st time users)
1. Go to the bottom of the Latest Release Page.

2. Download the .zip file for your OS (Windows/macOS/Linux).

3. Unzip the file.

4. Windows: double-click on the install.bat script. macOS: Open a Terminal window, drag the file install.sh from Finder into the Terminal, and press return. Linux: run install.sh.

5. You'll be asked to confirm the location of the folder in which to install InvokeAI and its image generation model files. Pick a location with at least 15 GB of free disk space. More if you plan on installing lots of models.

6. Wait while the installer does its thing. After installing the software, the installer will launch a script that lets you configure InvokeAI and select a set of starting image generation models.

7. Find the folder that InvokeAI was installed into (it is not the same as the unpacked zip file directory!). The default location of this folder (if you didn't change it in step 5) is ~/invokeai on Linux/Mac systems, and C:\Users\YourName\invokeai on Windows. This directory will contain launcher scripts named invoke.sh and invoke.bat.

8. On Windows systems, double-click on the invoke.bat file. On macOS, open a Terminal window, drag invoke.sh from the folder into the Terminal, and press return. On Linux, run invoke.sh (see the sketch after this list).

9. Press 2 to open the "browser-based UI", press enter/return, wait a minute or two for Stable Diffusion to start up, then open your browser and go to http://localhost:9090.

10. Type banana sushi in the box on the top left and click Invoke.
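On Linux or macOS, the launch portion of these steps boils down to something like the following. This is a sketch only, assuming the default ~/invokeai install location from step 7:

```bash
# Launch InvokeAI after the automatic install (default location assumed)
cd ~/invokeai
./invoke.sh          # choose option 2 for the browser-based UI
# then open http://localhost:9090 in your browser
```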
Command-Line Installation (for developers and users familiar with Terminals)
You must have Python 3.10 through 3.11 installed on your machine. Earlier or later versions are not supported. Node.js also needs to be installed, along with yarn (which can be installed with the command npm install -g yarn if needed).
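To confirm these prerequisites before starting, a quick check from the shell might look like this (a minimal sketch; exact version output will differ on your machine):

```bash
# Verify the prerequisites described above
python --version                         # should report Python 3.10.x or 3.11.x
node --version                           # confirm Node.js is present
yarn --version || npm install -g yarn    # install yarn via npm if it is missing
```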
1. Open a command-line window on your machine. PowerShell is recommended for Windows.

2. Create a directory to install InvokeAI into. You'll need at least 15 GB of free space:

   mkdir invokeai

3. Create a virtual environment named .venv inside this directory and activate it:

   cd invokeai
   python -m venv .venv --prompt InvokeAI

4. Activate the virtual environment (do it every time you run InvokeAI).

   For Linux/Mac users:

   source .venv/bin/activate

   For Windows users:

   .venv\Scripts\activate

5. Install the InvokeAI module and its dependencies. Choose the command suited for your platform & GPU.

   For Windows/Linux with an NVIDIA GPU:

   pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121

   For Linux with an AMD GPU:

   pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.4.2

   For non-GPU systems:

   pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu

   For Macintoshes, either Intel or M1/M2/M3:

   pip install InvokeAI --use-pep517

6. Configure InvokeAI and install a starting set of image generation models (you only need to do this once):

   invokeai-configure --root .

   Don't miss the dot at the end!

7. Launch the web server (do it every time you run InvokeAI):

   invokeai-web

8. Point your browser to http://localhost:9090 to bring up the web interface.

9. Type banana sushi in the box on the top left and click Invoke.

Be sure to activate the virtual environment each time before re-launching InvokeAI, using source .venv/bin/activate or .venv\Scripts\activate. A consolidated example of the full sequence follows.
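Put together, a typical first-time command-line install on a Linux machine with an NVIDIA GPU might look like the following sketch (swap the pip command for your platform as listed in step 5):

```bash
# Sketch: first-time command-line install on Linux + NVIDIA GPU
mkdir invokeai && cd invokeai
python -m venv .venv --prompt InvokeAI
source .venv/bin/activate
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
invokeai-configure --root .    # one-time: select a starting set of models
invokeai-web                   # then open http://localhost:9090 in your browser
```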
Detailed Installation Instructions
This fork is supported across Linux, Windows and Macintosh. Linux users can use either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm driver). For full installation and upgrade instructions, please see: InvokeAI Installation Overview
Migrating a v2.3 InvokeAI root directory
The InvokeAI root directory is where the InvokeAI startup file, installed models, and generated images are stored. It is ordinarily named invokeai and located in your home directory. The contents and layout of this directory have changed between versions 2.3 and 3.0 and cannot be used directly.

We currently recommend that you use the installer to create a new root directory named differently from the 2.3 one, e.g. invokeai-3, and then use a migration script to copy your 2.3 models into the new location. However, if you choose, you can upgrade the 2.3 directory in place. This section gives both recipes.
Creating a new root directory and migrating old models
This is the safer recipe because it leaves your old root directory in place to fall back on.
1. Follow the instructions above to create and install InvokeAI in a directory that has a different name from the 2.3 invokeai directory. In this example, we will use "invokeai-3".

2. When you are prompted to select models to install, select a minimal set of models, such as stable-diffusion-v1.5 only.

3. After installation is complete, launch invokeai.sh (Linux/Mac) or invokeai.bat and select option 8 "Open the developers console". This will take you to the command line.

4. Issue the command invokeai-migrate3 --from /path/to/v2.3-root --to /path/to/invokeai-3-root. Provide the correct --from and --to paths for your v2.3 and v3.0 root directories respectively.
This will copy and convert your old models from 2.3 format to 3.0 format and create a new models directory in the 3.0 directory. The old models directory (which contains the models selected at install time) will be renamed models.orig and can be deleted once you have confirmed that the migration was successful.

If you wish, you can pass the 2.3 root directory to both --from and --to in order to update in place. Warning: this directory will no longer be usable with InvokeAI 2.3.
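For example, with the default directory names used in this document, the two variants of the command might look like this (the paths are illustrative; substitute your own):

```bash
# Migrate models from a 2.3 root into a separate new 3.0 root
invokeai-migrate3 --from ~/invokeai --to ~/invokeai-3

# Or convert the 2.3 root in place (it will no longer be usable with InvokeAI 2.3)
invokeai-migrate3 --from ~/invokeai --to ~/invokeai
```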
Migrating in place
For the adventurous, you may do an in-place upgrade from 2.3 to 3.0 without touching the command line. Note: this recipe does not work on Windows platforms due to a bug in the Windows version of the 2.3 upgrade script. See the next section for a Windows recipe.
For Mac and Linux Users:
1. Launch the InvokeAI launcher script in your current v2.3 root directory.

2. Select option [9] "Update InvokeAI" to bring up the updater dialog.

3. Select option [1] to upgrade to the latest release.

4. Once the upgrade is finished you will be returned to the launcher menu. Select option [7] "Re-run the configure script to fix a broken install or to complete a major upgrade".
This will run the configure script against the v2.3 directory and update it to the 3.0 format. The following files will be replaced:
- The invokeai.init file, replaced by invokeai.yaml
- The models directory
- The configs/models.yaml model index
The original versions of these files will be saved with the suffix ".orig" appended to the end. Once you have confirmed that the upgrade worked, you can safely remove these files. Alternatively you can restore a working v2.3 directory by removing the new files and restoring the ".orig" files' original names.
For Windows Users:
Windows users can upgrade with the following steps:

- Enter the 2.3 root directory you wish to upgrade
- Launch invoke.sh or invoke.bat
- Select the "Developer's console" option [8]
- Type the following commands:
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.0" --use-pep517 --upgrade
invokeai-configure --root .
(Replace v3.0.0 with the current release number if this document is out of date.)
The first command will install and upgrade new software to run InvokeAI. The second will prepare the 2.3 directory for use with 3.0. You may now launch the WebUI in the usual way, by selecting option [1] from the launcher script.
Migrating Images
The migration script will migrate your invokeai settings and models, including textual inversion models, LoRAs and merges that you may have installed previously. However, it does not migrate the generated images stored in your 2.3-format outputs directory. To do this, you need to run an additional step:
1. From a working InvokeAI 3.0 root directory, start the launcher and enter menu option [8] to open the "developer's console".

2. At the developer's console command line, type the command:

   invokeai-import-images

3. This will lead you through the process of confirming the desired source and destination for the imported images. The images will appear in the gallery board of your choice, and contain the original prompt, model name, and other parameters used to generate the image.
(Many kudos to techjedi for contributing this script.)
Hardware Requirements
InvokeAI is supported across Linux, Windows and macOS. Linux users can use either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm driver).
System
You will need one of the following:
- An NVIDIA-based graphics card with 4 GB or more of VRAM. 6-8 GB of VRAM is highly recommended for rendering using the Stable Diffusion XL models.
- An Apple computer with an M1 chip.
- An AMD-based graphics card with 4 GB or more of VRAM (Linux only), 6-8 GB for XL rendering.
We do not recommend the GTX 1650 or 1660 series video cards. They are unable to run in half-precision mode and do not have sufficient VRAM to render 512x512 images.
Memory - At least 12 GB Main Memory RAM.
Disk - At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.
Features
Feature documentation can be reviewed by navigating to the InvokeAI Documentation page
Web Server & UI
InvokeAI offers a locally hosted Web Server & React Frontend, with an industry leading user experience. The Web-based UI allows for simple and intuitive workflows, and is responsive for use on mobile devices and tablets accessing the web server.
Unified Canvas
The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.
Workflows & Nodes
InvokeAI offers a fully featured workflow management solution, enabling users to combine the power of node-based workflows with the ease of a UI. This allows for customizable generation pipelines to be developed and shared by users looking to create specific workflows to support their production use-cases.
Board & Gallery Management
Invoke AI provides an organized gallery system for easily storing, accessing, and remixing your content in the Invoke workspace. Images can be dragged/dropped onto any image-based UI element in the application, and rich metadata within the image allows for easy recall of key prompts or settings used in your workflow.
Other features
- Support for both ckpt and diffusers models
- SD 2.0, 2.1, XL support
- Upscaling Tools
- Embedding Manager & Support
- Model Manager & Support
- Workflow creation & management
- Node-Based Architecture
Latest Changes
For our latest changes, view our Release Notes and the CHANGELOG.
Troubleshooting
Please check out our Troubleshooting Guide to get solutions for common installation problems and other issues. For more help, please join our Discord.
Contributing
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code cleanup, testing, or code reviews, is very much encouraged to do so.
Get started with contributing by reading our Contribution documentation, joining the #dev-chat or the GitHub discussion board.
If you are unfamiliar with how to contribute to GitHub projects, we have a new contributor checklist you can follow to get started contributing: New Contributor Checklist.
We hope you enjoy using our software as much as we enjoy creating it, and we hope that some of you who are reading this will elect to become part of our community.
Welcome to InvokeAI!
Contributors
This fork is a combined effort of various people from across the world. Check out the list of all these amazing people. We thank them for their time, hard work and effort.
Support
For support, please use this repository's GitHub Issues tracking service, or join the Discord.
Original portions of the software are Copyright (c) 2023 by respective contributors.