InvokeAI: A Stable Diffusion Toolkit

InvokeAI is a leading creative engine built to empower professionals and enthusiasts alike. Create stunning visual media using the latest AI-driven technologies. InvokeAI offers an industry-leading Web Interface and an interactive Command Line Interface, and also serves as the foundation for multiple commercial products.

Quick links: [How to Install] [Discord Server] [Documentation and Tutorials] [Code and Downloads] [Bug Reports] [Discussion, Ideas & Q&A]

Note: InvokeAI is rapidly evolving. Please use the Issues tab to report bugs and make feature requests. Be sure to use the provided templates. They will help us diagnose issues faster.

canvas preview

Getting Started with InvokeAI

For full installation and upgrade instructions, please see: InvokeAI Installation Overview

  1. Go to the bottom of the Latest Release Page
  2. Download the .zip file for your OS (Windows/macOS/Linux).
  3. Unzip the file.
  4. If you are on Windows, double-click on the install.bat script. On macOS, open a Terminal window, drag the file install.sh from Finder into the Terminal, and press return. On Linux, run install.sh.
  5. Wait for the installation to finish.
  6. The folder from which you ran the installer will now contain many new files. If you are on Windows, double-click on the invoke.bat file. On macOS, open a Terminal window, drag invoke.sh from the folder into the Terminal, and press return. On Linux, run invoke.sh.
  7. Press 2 to open the "browser-based UI", press enter/return, wait a minute or two for Stable Diffusion to start up, then open your browser and go to http://localhost:9090.
  8. Type banana sushi in the box on the top left and click Invoke.
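On Linux or macOS, the steps above boil down to a short terminal session. The archive and folder names below are illustrative; use the actual filenames from the release page:

```
# Unpack the release bundle downloaded from the Latest Release page
# (the real archive name will include a version number).
unzip InvokeAI-installer.zip
cd InvokeAI-Installer

# Run the installer and wait for it to complete.
./install.sh

# Launch InvokeAI; choose option 2 for the browser-based UI,
# then browse to http://localhost:9090.
./invoke.sh
```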

Table of Contents

  1. Installation
  2. Hardware Requirements
  3. Features
  4. Latest Changes
  5. Troubleshooting
  6. Contributing
  7. Contributors
  8. Support
  9. Further Reading

Installation

InvokeAI runs on Linux, Windows, and macOS. For full installation and upgrade instructions, please see: InvokeAI Installation Overview

Hardware Requirements

InvokeAI is supported across Linux, Windows, and macOS. Linux users can use either an NVIDIA-based card (with CUDA support) or an AMD card (using the ROCm driver).

System

You will need one of the following:

  • An NVIDIA-based graphics card with 4 GB or more of VRAM.
  • An Apple computer with an M1 chip.

We do not recommend the GTX 1650 or 1660 series video cards. They are unable to run in half-precision mode and do not have sufficient VRAM to render 512x512 images.
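If you do end up on one of these cards, forcing full-precision mode may work around the half-precision limitation, at the cost of additional VRAM. The exact flag spelling is version-dependent, so treat this as a sketch and check the configuration documentation:

```
# Launch with full precision (flag name may differ in your version).
invokeai --precision=full
```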

Memory

  • At least 12 GB of main memory (RAM).

Disk

  • At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.

Features

Feature documentation can be reviewed by navigating to the InvokeAI Documentation page.

Web Server & UI

InvokeAI offers a locally hosted Web Server & React Frontend with an industry-leading user experience. The web-based UI allows simple and intuitive workflows, and is responsive for use on mobile devices and tablets accessing the web server.

Unified Canvas

The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.

Advanced Prompt Syntax

InvokeAI's advanced prompt syntax supports token weighting, cross-attention control, and prompt blending, enabling fine-grained tuning of your invocations and exploration of the latent space.
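For illustration, prompts using these features can look roughly like the following. The operator spellings shown here are assumptions and vary between versions; consult the prompt-syntax page of the documentation for the authoritative forms:

```
# Token weighting: emphasize part of the prompt with a numeric weight
a mountain landscape, (dramatic clouds)1.2

# Prompt blending: interpolate between two prompts
("a lush forest", "a snowy tundra").blend(0.7, 0.3)

# Cross-attention control: substitute one concept for another
a cat.swap(tiger) sitting on a sofa
```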

Command Line Interface

For users utilizing a terminal-based environment, or who want to take advantage of CLI features, InvokeAI offers an extensive and actively supported command-line interface that provides the full suite of generation functionality available in the tool.
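An interactive session might look like the sketch below. The flags shown (-s for steps, -W/-H for width and height, -n for the number of images) follow the CLI's conventions at the time of writing, but are assumptions here; check the built-in help for your version:

```
$ invokeai
invoke> "banana sushi" -s 50 -W 512 -H 512 -n 2
invoke> q
```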

Other features

  • Support for both ckpt and diffusers models
  • SD 2.0, 2.1 support
  • Noise Control & Thresholding
  • Popular Sampler Support
  • Upscaling & Face Restoration Tools
  • Embedding Manager & Support
  • Model Manager & Support

Coming Soon

  • Node-Based Architecture & UI
  • And more...

Latest Changes

For our latest changes, view our Release Notes and the CHANGELOG.

Troubleshooting

Please check out our Q&A to get solutions for common installation problems and other issues.

Contributing

Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code cleanup, testing, or code reviews, is very much encouraged to do so.

To join, just raise your hand on the InvokeAI Discord server (#dev-chat) or the GitHub discussion board.

If you are unfamiliar with how to contribute to GitHub projects, here is a Getting Started Guide. A full set of contribution guidelines, along with templates, are in progress. You can make your pull request against the "main" branch.

We hope you enjoy using our software as much as we enjoy creating it, and we hope that some of those of you who are reading this will elect to become part of our community.

Welcome to InvokeAI!

Contributors

InvokeAI is a combined effort of people from across the world. Check out the list of all these amazing people. We thank them for their time and hard work.

Support

For support, please use this repository's GitHub Issues tracking service, or join the Discord.

Original portions of the software are Copyright (c) 2023 by respective contributors.