---
title: Home
---
InvokeAI is an implementation of Stable Diffusion, the open source text-to-image and image-to-image generator. It provides a streamlined process with various new features and options to aid the image generation process. It runs on Windows, Mac and Linux machines, and on GPU cards with as little as 4 GB of RAM.
Quick links: [Discord Server] [Code and Downloads] [Bug Reports] [Discussion, Ideas & Q&A]
!!! note

    This fork is rapidly evolving. Please use the [Issues tab](https://github.com/invoke-ai/InvokeAI/issues) to report bugs and make feature requests. Be sure to use the provided templates, as they will help us diagnose issues faster.
## :octicons-package-dependencies-24: Installation
This fork is supported across Linux, Windows and Macintosh. Linux users can use either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm driver). For full installation and upgrade instructions, please see: InvokeAI Installation Overview
Linux users who wish to make use of the PyPatchMatch inpainting functions will need to perform a bit of extra work to enable this module. Instructions can be found at Installing PyPatchMatch.
## :fontawesome-solid-computer: Hardware Requirements

### :octicons-cpu-24: System

You will need one of the following:

- :simple-nvidia: An NVIDIA-based graphics card with 4 GB or more of VRAM.
- :simple-amd: An AMD-based graphics card with 4 GB or more of VRAM (Linux only).
- :fontawesome-brands-apple: An Apple computer with an M1 chip.

### :fontawesome-solid-memory: Memory

- At least 12 GB of main memory (RAM).

### :fontawesome-regular-hard-drive: Disk

- At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.
!!! info

    If you have an NVIDIA 10xx series card (e.g. the 1080ti), please run the invoke script in
    full-precision mode as shown below. Similarly, specify full-precision mode on Apple M1 hardware.

    Precision is auto-configured based on the device. If, however, you encounter errors like
    `expected type Float but found Half` or `not implemented for Half`, you can try starting
    `invoke.py` with the `--precision=float32` flag:

    ```bash
    (invokeai) ~/InvokeAI$ python scripts/invoke.py --precision=float32
    ```
## :octicons-gift-24: InvokeAI Features
- Miscellaneous
## :octicons-log-16: Latest Changes

### v2.1.3 (13 November 2022)

- A choice of installer scripts that automate installation and configuration. See Installation.
- A streamlined manual installation process that works for both Conda and PIP-only installs. See Manual Installation.
- The ability to save frequently-used startup options (model to load, steps, sampler, etc.) in a `.invokeai` file. See Client, and the sketch after this list.
- Support for AMD GPU cards (non-CUDA) on Linux machines.
- Multiple bugs and edge cases squashed.
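A minimal sketch of what such a `.invokeai` file might contain, assuming it accepts the same switches as the `invoke.py` command line (the model name and values below are placeholders, not a tested configuration):

```bash
# .invokeai -- hypothetical example; lines mirror invoke.py startup switches
--model stable-diffusion-1.5
--steps 30
--web
```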
### v2.1.0 (2 November 2022)

- Inpainting support in the WebGUI
- Greatly improved navigation and user experience in the WebGUI
- The prompt syntax has been enhanced with prompt weighting, cross-attention and prompt merging.
- You can now load multiple models and switch among them quickly without leaving the CLI.
- The installation process (via `scripts/configure_invokeai.py`) now lets you select among several popular Stable Diffusion models and downloads and installs them on your behalf. Among other models, this script will install the current Stable Diffusion 1.5 model as well as a StabilityAI variable autoencoder (VAE) which improves face generation.
- Tired of struggling with photoeditors to get the masked region for inpainting just right? Let the AI make the mask for you using text masking. This feature allows you to specify the part of the image to paint over using just English-language phrases (see the CLI sketch after this list).
- Tired of seeing the head of your subjects cropped off? Uncrop them in the CLI with the outcrop feature.
- Tired of seeing your subject's bodies duplicated or mangled when generating larger-dimension images? Check out the `--hires` option in the CLI, or select the corresponding toggle in the WebGUI.
- We now support textual inversion and fine-tune .bin styles and subjects from the Hugging Face archive of SD Concepts. Load the .bin file using the `--embedding_path` option. (The next version will support merging and loading of multiple simultaneous models.)
- ...
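A hedged CLI sketch of several of these features. The `--hires` and `--embedding_path` spellings come from this changelog; the text-masking flag (`-tm`) and the `!fix --outcrop` form are assumptions based on the InvokeAI CLI documentation, and all paths, prompts, and sizes are placeholders:

```bash
# Start the CLI with a downloaded SD Concepts embedding (path is a placeholder).
(invokeai) ~/InvokeAI$ python scripts/invoke.py --embedding_path ./embeddings/my-concept.bin

# Text masking: repaint only the region described by an English phrase.
invoke> a red baseball cap -I ./outputs/portrait.png -tm hat

# Outcrop: extend a cropped image upward by 64 pixels.
invoke> !fix ./outputs/portrait.png --outcrop top 64

# High-resolution fix: avoid duplicated subjects at larger dimensions.
invoke> a watercolor landscape -W 1024 -H 768 --hires
```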
### v2.0.1 (13 October 2022)

- fix noisy images at high step count when using k* samplers
- dream.py script now calls invoke.py module directly rather than via a new python process (which could break the environment)
### v2.0.0 (9 October 2022)

- `dream.py` script renamed `invoke.py`. A `dream.py` script wrapper remains for backward compatibility.
- Completely new WebGUI - launch with `python3 scripts/invoke.py --web`
- Support for inpainting and outpainting
- img2img runs on all k* samplers
- Support for negative prompts
- Support for CodeFormer face reconstruction
- Support for Textual Inversion on Macintoshes
- Support in both WebGUI and CLI for post-processing of previously-generated images using facial reconstruction, ESRGAN upscaling, outcropping (similar to DALL-E infinite canvas), and "embiggen" upscaling. See the `!fix` command.
- New `--hires` option on the `invoke>` line allows larger images to be created without duplicating elements, at the cost of some performance.
- New `--perlin` and `--threshold` options allow you to add and control variation during image generation (see Thresholding and Perlin Noise Initialization).
- Extensive metadata now written into PNG files, allowing reliable regeneration of images and tweaking of previous settings.
- Command-line completion in `invoke.py` now works on Windows, Linux and Mac platforms.
- Improved command-line completion behavior. New commands added (sketched after this list):
    - List command-line history with `!history`
    - Search command-line history with `!search`
    - Clear history with `!clear`
- Deprecated `--full_precision` / `-F`. Simply omit it and `invoke.py` will auto-configure. To switch away from auto, use the new flag like `--precision=float32`.
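For instance, the history commands might be used like this at the `invoke>` prompt (a sketch; the search term is a placeholder):

```bash
invoke> !history        # list previously entered prompts
invoke> !search puppy   # show only history entries containing "puppy"
invoke> !clear          # clear the prompt history
```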
For older changelogs, please visit the CHANGELOG.
## :material-target: Troubleshooting
Please check out our :material-frequently-asked-questions: Q&A to get solutions for common installation problems and other issues.
## :octicons-repo-push-24: Contributing

Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code cleanup, testing, or code reviews, is very much encouraged to do so. If you are unfamiliar with how to contribute to GitHub projects, here is a Getting Started Guide.

A full set of contribution guidelines, along with templates, is in progress. For now, the most important thing is to make your pull request against the "development" branch, and not against "main", as in the sketch below. This will help keep public breakage to a minimum and will allow you to propose more radical changes.
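A minimal git sketch of that workflow, assuming you work from a personal fork (`<your-username>` and the branch name are placeholders):

```bash
# Clone your fork and track the upstream repository.
git clone https://github.com/<your-username>/InvokeAI.git
cd InvokeAI
git remote add upstream https://github.com/invoke-ai/InvokeAI.git

# Branch from upstream's development branch, not from main.
git fetch upstream
git checkout -b my-feature upstream/development

# ...commit your changes, then push and open the pull request
# with "development" as its base branch.
git push -u origin my-feature
```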
## :octicons-person-24: Contributors
This fork is a combined effort of various people from across the world. Check out the list of all these amazing people. We thank them for their time, hard work and effort.
## :octicons-question-24: Support
For support, please use this repository's GitHub Issues tracking service. Feel free to send me an email if you use and like the script.
Original portions of the software are Copyright (c) 2020 Lincoln D. Stein
## :octicons-book-24: Further Reading
Please see the original README for more information on this software and underlying algorithm, located in the file README-CompViz.md.