mirror of
https://github.com/invoke-ai/InvokeAI
synced 2024-08-30 20:32:17 +00:00
merge draft docs
This commit is contained in: 7d64a5849f

README.md (69 lines changed)
@ -1,8 +1,11 @@

<div align="center">

![project logo](https://github.com/invoke-ai/InvokeAI/raw/main/docs/assets/invoke_ai_banner.png)
![project hero](https://github.com/invoke-ai/InvokeAI/assets/31807370/1a917d94-e099-4fa1-a70f-7dd8d0691018)

# Invoke AI - Generative AI for Professional Creatives
## Image Generation for Stable Diffusion, Custom-Trained Models, and more.

Learn more about us and get started instantly at [invoke.ai](https://invoke.ai)

# InvokeAI: A Stable Diffusion Toolkit

[![discord badge]][discord link]
@ -68,18 +71,23 @@ the foundation for multiple commercial products.

## Table of Contents

1. [Quick Start](#getting-started-with-invokeai)
2. [Installation](#detailed-installation-instructions)
3. [Hardware Requirements](#hardware-requirements)
4. [Features](#features)
5. [Latest Changes](#latest-changes)
6. [Troubleshooting](#troubleshooting)
7. [Contributing](#contributing)
8. [Contributors](#contributors)
9. [Support](#support)
10. [Further Reading](#further-reading)

## Getting Started with InvokeAI

Table of Contents 📝

**Getting Started**

1. 🏁 [Quick Start](#quick-start)
2. 🖥️ [Hardware Requirements](#hardware-requirements)

**More About Invoke**

1. 🌟 [Features](#features)
2. 📣 [Latest Changes](#latest-changes)
3. 🛠️ [Troubleshooting](#troubleshooting)

**Supporting the Project**

1. 🤝 [Contributing](#contributing)
2. 👥 [Contributors](#contributors)
3. 💕 [Support](#support)

## Quick Start

For full installation and upgrade instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/)
@ -95,9 +103,8 @@ directory to 3.0](#migrating-to-3) first.

3. Unzip the file.

4. **Windows:** double-click on the `install.bat` script. **macOS:** open a Terminal window, drag the file `install.sh` from Finder
   into the Terminal, and press return. **Linux:** run `install.sh`.

5. You'll be asked to confirm the location of the folder in which
   to install InvokeAI and its image generation model files. Pick a
@ -123,7 +130,7 @@ and go to http://localhost:9090.

10. Type `banana sushi` in the box on the top left and click `Invoke`

### Command-Line Installation (for developers and users familiar with Terminals)

You must have Python 3.9 or 3.10 installed on your machine. Earlier or later versions are
not supported.
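As a rough sketch of what a pip-based install can look like (the package extras and PyTorch index URL vary by GPU and OS, so treat this as illustrative and follow the installation docs linked above for the exact commands):

```bash
# Illustrative only: create an isolated environment with a supported Python,
# install the InvokeAI package from PyPI, then start the web UI.
python3.10 -m venv invokeai-venv
source invokeai-venv/bin/activate
pip install --upgrade pip
pip install InvokeAI
invokeai --web
```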
@ -306,13 +313,9 @@ We do not recommend the GTX 1650 or 1660 series video cards. They are
unable to run in half-precision mode and do not have sufficient VRAM
to render 512x512 images.

**Memory** - At least 12 GB of main memory (RAM).

**Disk** - At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.

## Features
@ -328,7 +331,7 @@ The Unified Canvas is a fully integrated canvas implementation with support for

### *Advanced Prompt Syntax*

InvokeAI's advanced prompt syntax allows for token weighting, cross-attention control, and prompt blending, allowing for fine-tuned tweaking of your invocations and exploration of the latent space.

### *Command Line Interface*
@ -338,16 +341,12 @@ For users utilizing a terminal-based environment, or who want to take advantage

- *Support for both ckpt and diffusers models*
- *SD 2.0, 2.1 support*
- *Noise Control & Thresholding*
- *Popular Sampler Support*
- *Upscaling & Face Restoration Tools*
- *Embedding Manager & Support*
- *Model Manager & Support*
- *Node-Based Architecture*
- *Node-Based Plug-&-Play UI (Beta)*
- *Boards & Gallery Management*

### Latest Changes
@ -355,12 +354,12 @@ For our latest changes, view our [Release
Notes](https://github.com/invoke-ai/InvokeAI/releases) and the
[CHANGELOG](docs/CHANGELOG.md).

### Troubleshooting

Please check out our **[Q&A](https://invoke-ai.github.io/InvokeAI/help/TROUBLESHOOT/#faq)** to get solutions for common installation
problems and other issues.

## 🤝 Contributing

Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so.

@ -379,14 +378,12 @@ to become part of our community.

Welcome to InvokeAI!

### 👥 Contributors

This fork is a combined effort of various people from across the world.
[Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for
their time, hard work and effort.

Thanks to [Weblate](https://weblate.org/) for generously providing translation services to this project.

### Support

For support, please use this repository's GitHub Issues tracking service, or join the Discord.
@ -4,6 +4,236 @@ title: Changelog

# :octicons-log-16: **Changelog**

## v2.3.5 <small>(22 May 2023)</small>

This release (along with the post1 and post2 follow-on releases) expands support for additional LoRA and LyCORIS models, upgrades diffusers versions, and fixes a few bugs.

### LoRA and LyCORIS Support Improvement

A number of LoRA/LyCORIS fine-tune files (those which alter the text encoder as well as the unet model) were not having the desired effect in InvokeAI. This bug has now been fixed. Full documentation of LoRA support is available at InvokeAI LoRA Support.

Previously, InvokeAI did not distinguish between LoRA/LyCORIS models based on Stable Diffusion v1.5 and those based on v2.0 and 2.1, leading to a crash when an incompatible model was loaded. This has now been fixed. In addition, the web pulldown menus for LoRA and Textual Inversion selection have been enhanced to show only those files that are compatible with the currently selected Stable Diffusion model.

Support for the newer LoKR LyCORIS files has been added.

### Library Updates and Speed/Reproducibility Advancements

The major enhancement in this version is that NVIDIA users no longer need to decide between speed and reproducibility. Previously, if you activated the Xformers library, you would see improvements in speed and memory usage, but multiple images generated with the same seed and other parameters would be slightly different from each other. This is no longer the case. Relative to 2.3.4 you will see improved performance when running without Xformers, and even better performance when Xformers is activated. In both cases, images generated with the same settings will be identical.

Here are the new library versions:

| Library   | Version |
| --------- | ------- |
| Torch     | 2.0.0   |
| Diffusers | 0.16.1  |
| Xformers  | 0.0.19  |
| Compel    | 1.1.5   |

### Other Improvements

### Performance Improvements

When a model is loaded for the first time, InvokeAI calculates its checksum for incorporation into the PNG metadata. This process could take up to a minute on network-mounted disks and WSL mounts. This release noticeably speeds up the process.

### Bug Fixes

- The "import models from directory" and "import from URL" functionality in the console-based model installer has now been fixed.
- When running the WebUI, we have reduced the number of times that InvokeAI reaches out to HuggingFace to fetch the list of embeddable Textual Inversion models. We have also caught and fixed a problem with the updater not correctly detecting when another instance of the updater is running.
## v2.3.4 <small>(7 April 2023)</small>

### What's New in 2.3.4

This features release adds support for LoRA (Low-Rank Adaptation) and LyCORIS (Lora beYond Conventional) models, as well as some minor bug fixes.

### LoRA and LyCORIS Support

LoRA files contain fine-tuning weights that enable particular styles, subjects or concepts to be applied to generated images. LyCORIS files are an extended variant of LoRA. InvokeAI supports the most common LoRA/LyCORIS format, which ends in the suffix `.safetensors`. You will find numerous LoRA and LyCORIS models for download at Civitai, and a small but growing number at Hugging Face. Full documentation of LoRA support is available at InvokeAI LoRA Support. (Pre-release note: this page will only be available after release.)

To use LoRA/LyCORIS models in InvokeAI:

1. Download the `.safetensors` files of your choice and place them in `/path/to/invokeai/loras`. This directory was not present in earlier versions of InvokeAI but will be created for you the first time you run the command-line or web client. You can also create the directory manually.

2. Add `withLora(lora-file,weight)` to your prompts. The weight is optional and will default to 1.0. A few examples, assuming that a LoRA file named `loras/sushi.safetensors` is present:

    - `family sitting at dinner table eating sushi withLora(sushi,0.9)`
    - `family sitting at dinner table eating sushi withLora(sushi, 0.75)`
    - `family sitting at dinner table eating sushi withLora(sushi)`

    Multiple `withLora()` prompt fragments are allowed. The weight can be arbitrarily large, but the useful range is roughly 0.5 to 1.0. Higher weights make the LoRA's influence stronger. Negative weights are also allowed, which can lead to some interesting effects.

3. Generate as you usually would! If you find that the image is too "crisp", try reducing the overall CFG value or reducing individual LoRA weights. As is the case with all fine-tunes, you'll get the best results when running the LoRA on top of a model similar to, or identical with, the one that was used during the LoRA's training. Don't try to load an SD 1.x-trained LoRA into an SD 2.x model, or vice versa. This will trigger a non-fatal error message and generation will not proceed.

You can change the location of the loras directory by passing the `--lora_directory` option to `invokeai`.
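For example (the path below is illustrative, not a real location):

```bash
# Point InvokeAI at a custom LoRA folder instead of the default loras directory.
invokeai --lora_directory /data/models/my-loras
```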
### New WebUI LoRA and Textual Inversion Buttons

This version adds two new web interface buttons for inserting LoRA and Textual Inversion triggers into the prompt, as shown in the screenshot below.

Clicking on one or the other of the buttons will bring up a menu of available LoRA/LyCORIS or Textual Inversion trigger terms. Select a menu item to insert the properly formatted `withLora()` or `<textual-inversion>` prompt fragment into the positive prompt. The number in parentheses indicates the number of trigger terms currently in the prompt. You may click the button again and deselect the LoRA or trigger to remove it from the prompt, or simply edit the prompt directly.

Currently terms are inserted into the positive prompt textbox only. However, some textual inversion embeddings are designed to be used with negative prompts. To move a textual inversion trigger into the negative prompt, simply cut and paste it.

By default the Textual Inversion menu only shows locally installed models found at startup time in `/path/to/invokeai/embeddings`. However, InvokeAI has the ability to dynamically download and install additional Textual Inversion embeddings from the HuggingFace Concepts Library. You may choose to display the most popular of these (those with five or more likes) in the Textual Inversion menu by going to Settings and turning on "Show Textual Inversions from HF Concepts Library." When this option is activated, the locally installed TI embeddings will be shown first, followed by uninstalled terms from Hugging Face. See The Hugging Face Concepts Library and Importing Textual Inversion files for more information.

### Minor features and fixes

This release changes model switching behavior so that the command-line and Web UIs save the last model used and restore it the next time they are launched. It also improves the behavior of the installer so that the pip utility is kept up to date.

### Known Bugs in 2.3.4

These are known bugs in the release.

- The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
- Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
## v2.3.3 <small>(28 March 2023)</small>

This is a bugfix and minor feature release.

### Bugfixes

Since version 2.3.2 the following bugs have been fixed:

- When using legacy checkpoints with an external VAE, the VAE file is now scanned for malware prior to loading. Previously only the main model weights file was scanned.
- Textual inversion will select an appropriate batch size based on whether xformers is active, and will default to xformers enabled if the library is detected.
- The batch script log file names have been fixed to be compatible with Windows.
- Occasional corruption of the `.next_prefix` file (which stores the next output file name in sequence) on Windows systems is now detected and corrected.
- Support loading of legacy config files that have no personalization (textual inversion) section.
- An infinite loop when opening the developer's console from within the `invoke.sh` script has been corrected.
- Documentation fixes, including a recipe for detecting and fixing problems with the AMD GPU ROCm driver.

### Enhancements

- It is now possible to load and run several community-contributed SD-2.0-based models, including the often-requested "Illuminati" model.
- The "NegativePrompts" embedding file, and others like it, can now be loaded by placing it in the InvokeAI embeddings directory.
- If no `--model` is specified at launch time, InvokeAI will remember the last model used and restore it the next time it is launched.
- On Linux systems, the `invoke.sh` launcher now uses a prettier console-based interface. To take advantage of it, install the `dialog` package using your package manager (e.g. `sudo apt install dialog`).
- When loading legacy models (safetensors/ckpt) you can specify a custom config file and/or a VAE by placing like-named files in the same directory as the model, following this example:
    - `my-favorite-model.ckpt`
    - `my-favorite-model.yaml`
    - `my-favorite-model.vae.pt` (or `my-favorite-model.vae.safetensors`)

### Known Bugs in 2.3.3

These are known bugs in the release.

- The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
- Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
## v2.3.2 <small>(11 March 2023)</small>

This is a bugfix and minor feature release.

### Bugfixes

Since version 2.3.1 the following bugs have been fixed:

- Black images appearing for potential NSFW images when generating with legacy checkpoint models and both `--no-nsfw_checker` and `--ckpt_convert` turned on.
- Black images appearing when generating from models fine-tuned on Stable-Diffusion-2-1-base. When importing V2-derived models, you may be asked to select whether the model was derived from a "base" model (512 pixels) or the 768-pixel SD-2.1 model.
- The "Use All" button was not restoring the Hi-Res Fix setting on the WebUI.
- When using the model installer console app, models failed to import correctly when importing from directories with spaces in their names. A similar issue with the output directory was also fixed.
- Crashes that occurred during model merging.
- Restore previous naming of Stable Diffusion base and 768 models.
- Upgraded to latest versions of the diffusers, transformers, safetensors and accelerate libraries upstream. We hope that this will fix the "assertion NDArray > 2**32" issue that macOS users have had when generating images larger than 768x768 pixels. Please report back.

As part of the upgrade to diffusers, the location of the diffusers-based models has changed from models/diffusers to models/hub. When you launch InvokeAI for the first time, it will prompt you to OK a one-time move. This should be quick and harmless, but if you have modified your models/diffusers directory in some way, for example using symlinks, you may wish to cancel the migration and make appropriate adjustments.

### New "invokeai-batch" script

2.3.2 introduces a new command-line-only script called invokeai-batch that can be used to generate hundreds of images from prompts and settings that vary systematically. This can be used to try the same prompt across multiple combinations of models, steps, CFG settings and so forth. It also allows you to template prompts and generate a combinatorial list like:

    a shack in the mountains, photograph
    a shack in the mountains, watercolor
    a shack in the mountains, oil painting
    a chalet in the mountains, photograph
    a chalet in the mountains, watercolor
    a chalet in the mountains, oil painting
    a shack in the desert, photograph
    ...

If you have a system with multiple GPUs, or a single GPU with lots of VRAM, you can parallelize generation across the combinatorial set, reducing wait times and using your system's resources efficiently (make sure you have good GPU cooling).

To try invokeai-batch out, launch the "developer's console" using the invoke launcher script, or activate the invokeai virtual environment manually. From the console, give the command `invokeai-batch --help` in order to learn how the script works and create your first template file for dynamic prompt generation.
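A minimal way to begin (only the `--help` invocation is taken from these notes; the script's own help output documents the template file format and the remaining options):

```bash
# From the developer's console, or with the invokeai virtual environment active,
# print the batch script's usage and template documentation.
invokeai-batch --help
```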
### Known Bugs in 2.3.2

These are known bugs in the release.

- The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.
- Windows Defender will sometimes raise a Trojan alert for the codeformer.pth face restoration model. As far as we have been able to determine, this is a false positive and can be safely whitelisted.
## v2.3.1 <small>(22 February 2023)</small>

This is primarily a bugfix release, but it does provide several new features that will improve the user experience.

### Enhanced support for model management

InvokeAI now makes it convenient to add, remove and modify models. You can individually import models that are stored on your local system, scan an entire folder and its subfolders for models and import them automatically, and even directly import models from the internet by providing their download URLs. You also have the option of designating a local folder to scan for new models each time InvokeAI is restarted.

There are three ways of accessing the model management features:

1. **From the WebUI**: click on the cube to the right of the model selection menu. This will bring up a form that allows you to import models individually from your local disk or scan a directory for models to import.

2. **Using the Model Installer App**: choose option (5) "download and install models" from the invoke launcher script to start a new console-based application for model management. You can use this to select from a curated set of starter models, or import checkpoint, safetensors, and diffusers models from a local disk or the internet. The example below shows importing two checkpoint URLs from popular SD sites and a HuggingFace diffusers model using its Repository ID. It also shows how to designate a folder to be scanned at startup time for new models to import. Command-line users can start this app using the command `invokeai-model-install`.

3. **Using the Command Line Client (CLI)**: the `!install_model` and `!convert_model` commands have been enhanced to allow entering of URLs and local directories to scan and import. The first command installs `.ckpt` and `.safetensors` files as-is. The second one converts them into the faster diffusers format before installation (see the example below).
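For example, at the `invoke>` prompt you might enter `!install_model https://example.com/some-model.safetensors` to install a checkpoint as-is, or `!convert_model /path/to/some-model.ckpt` to convert it to diffusers format during installation (the URL and path here are placeholders, not real models).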
Internally InvokeAI is able to probe the contents of a .ckpt or .safetensors file to distinguish among v1.x, v2.x and inpainting models. This means that you do not need to include "inpaint" in your model names to use an inpainting model. Note that Stable Diffusion v2.x models will be autoconverted into a diffusers model the first time you use them.

Please see INSTALLING MODELS for more information on model management.
### An Improved Installer Experience

The installer now launches a console-based UI for setting and changing commonly used startup options:

After selecting the desired options, the installer installs several support models needed by InvokeAI's face reconstruction and upscaling features and then launches the interface for selecting and installing models shown earlier. At any time, you can edit the startup options by launching invoke.sh/invoke.bat and entering option (6) "change InvokeAI startup options".

Command-line users can launch the new configure app using `invokeai-configure`.

This release also comes with a renewed updater. To do an update without going through a whole reinstallation, launch invoke.sh or invoke.bat and choose option (9) "update InvokeAI". This will bring you to a screen that prompts you to update to the latest released version, to the most current development version, or to any released or unreleased version you choose by selecting the tag or branch of the desired version.

Command-line users can run this interface by typing `invokeai-update`.
### Image Symmetry Options

There are now features to generate horizontal and vertical symmetry during generation. The way these work is to wait until a selected step in the generation process and then to turn on a mirror image effect. In addition to generating some cool images, you can also use this to make side-by-side comparisons of how an image will look with more or fewer steps. Access this option from the WebUI by selecting Symmetry from the image generation settings, or within the CLI by using the options `--h_symmetry_time_pct` and `--v_symmetry_time_pct` (these can be abbreviated to `--h_sym` and `--v_sym` like all other options).
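For example, adding `--h_symmetry_time_pct 0.3` to a prompt would turn on horizontal mirroring roughly 30% of the way through the generation steps; the value shown here is purely illustrative, and useful settings depend on the image.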
### A New Unified Canvas Look

This release introduces a beta version of the WebUI Unified Canvas. To try it out, open up the settings dialogue in the WebUI (gear icon) and select "Use Canvas Beta Layout":

Refresh the screen and go to the Unified Canvas (left side of screen, third icon from the top). The new layout is designed to provide more space to work in and to keep the image controls close to the image itself:

### Model conversion and merging within the WebUI

The WebUI now has an intuitive interface for model merging, as well as for permanent conversion of models from the legacy .ckpt/.safetensors formats into diffusers format. These options are also available directly from the invoke.sh/invoke.bat scripts.

### An easier way to contribute translations to the WebUI

We have migrated our translation efforts to Weblate, a FOSS translation product. Maintaining the growing project's translations is now far simpler for the maintainers and community. Please review our brief translation guide for more information on how to contribute.

### Bug Fixes

Numerous internal bugfixes and performance issues have been addressed. This release quashes multiple bugs that were reported in 2.3.0. Major internal changes include upgrading to diffusers 0.13.0, and using the compel library for prompt parsing. See the Detailed Change Log for a detailed list of bugs caught and squished.
Summary of InvokeAI command line scripts (all accessible via the launcher menu):

| Command                  | Description                                                          |
| ------------------------ | -------------------------------------------------------------------- |
| `invokeai`               | Command line interface                                               |
| `invokeai --web`         | Web interface                                                        |
| `invokeai-model-install` | Model installer with console forms-based front end                   |
| `invokeai-ti --gui`      | Textual inversion, with a console forms-based front end              |
| `invokeai-merge --gui`   | Model merging, with a console forms-based front end                  |
| `invokeai-configure`     | Startup configuration; can also be used to reinstall support models  |
| `invokeai-update`        | InvokeAI software updater                                            |
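For example, from an activated InvokeAI environment (both commands are also reachable from the launcher menu):

```bash
# Start the browser-based interface
invokeai --web

# Update an existing installation in place
invokeai-update
```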
### Known Bugs in 2.3.1

These are known bugs in the release.

- macOS users generating 768x768 pixel images or greater using diffusers models may experience a hard crash with assertion `NDArray > 2**32`. This appears to be an issue...

## v2.3.0 <small>(15 January 2023)</small>

**Transition to diffusers
@ -264,7 +494,7 @@ sections describe what's new for InvokeAI.
  [Manual Installation](installation/020_INSTALL_MANUAL.md).
- The ability to save frequently-used startup options (model to load, steps,
  sampler, etc) in a `.invokeai` file. See
  [Client](deprecated/CLI.md)
- Support for AMD GPU cards (non-CUDA) on Linux machines.
- Multiple bugs and edge cases squashed.

@ -387,7 +617,7 @@ sections describe what's new for InvokeAI.
- `dream.py` script renamed `invoke.py`. A `dream.py` script wrapper remains for
  backward compatibility.
- Completely new WebGUI - launch with `python3 scripts/invoke.py --web`
- Support for [inpainting](deprecated/INPAINTING.md) and
  [outpainting](features/OUTPAINTING.md)
- img2img runs on all k\* samplers
- Support for

@ -399,7 +629,7 @@ sections describe what's new for InvokeAI.
  using facial reconstruction, ESRGAN upscaling, outcropping (similar to DALL-E
  infinite canvas), and "embiggen" upscaling. See the `!fix` command.
- New `--hires` option on `invoke>` line allows
  [larger images to be created without duplicating elements](deprecated/CLI.md#this-is-an-example-of-txt2img),
  at the cost of some performance.
- New `--perlin` and `--threshold` options allow you to add and control
  variation during image generation (see

@ -408,7 +638,7 @@ sections describe what's new for InvokeAI.
  of images and tweaking of previous settings.
- Command-line completion in `invoke.py` now works on Windows, Linux and Mac
  platforms.
- Improved [command-line completion behavior](deprecated/CLI.md) New commands
  added:
  - List command-line history with `!history`
  - Search command-line history with `!search`
BIN docs/assets/features/restoration-montage.png (new file, 4.0 MiB; binary file not shown)
BIN docs/assets/features/upscale-dialog.png (new file, 310 KiB; binary file not shown)
BIN docs/assets/features/upscaling-montage.png (new file, 8.3 MiB; binary file not shown)

docs/contributing/CONTRIBUTING.md (new file, 54 lines)

@ -0,0 +1,54 @@
## Welcome to Invoke AI

We're thrilled to have you here and we're excited for you to contribute.

Invoke AI originated as a project built by the community, and that vision carries forward today as we aim to build the best pro-grade tools available. We work together to incorporate the latest in AI/ML research, making these tools available in over 20 languages to artists and creatives around the world as part of our fully permissive OSS project designed for individual users to self-host and use.

Here are some guidelines to help you get started:

### Technical Prerequisites

Front-end: You'll need a working knowledge of React and TypeScript.

Back-end: Depending on the scope of your contribution, you may need to know SQLite, FastAPI, Python, and Socketio. Also, a good majority of the backend logic involved in processing images is built in a modular way using a concept called "Nodes", which are isolated functions that carry out individual, discrete operations. This design allows for easy contributions of novel pipelines and capabilities.

### How to Submit Contributions

To start contributing, please follow these steps:

1. Familiarize yourself with our roadmap and open projects to see where your skills and interests align. These documents can serve as a source of inspiration.
2. Open a Pull Request (PR) with a clear description of the feature you're adding or the problem you're solving. Make sure your contribution aligns with the project's vision.
3. Adhere to general best practices. This includes assuming interoperability with other nodes, keeping the scope of your functions as small as possible, and organizing your code according to our architecture documents.

### Types of Contributions We're Looking For

We welcome all contributions that improve the project. Right now, we're especially looking for:

1. Quality of life (QOL) enhancements on the front-end.
2. New backend capabilities added through nodes.
3. Incorporating additional optimizations from the broader open-source software community.

### Communication and Decision-making Process

Project maintainers and code owners review PRs to ensure they align with the project's goals. They may provide design or architectural guidance, suggestions on user experience, or provide more significant feedback on the contribution itself. Expect to receive feedback on your submissions, and don't hesitate to ask questions or propose changes.

For more robust discussions, or if you're planning to add capabilities not currently listed on our roadmap, please reach out to us on our Discord server. That way, we can ensure your proposed contribution aligns with the project's direction before you start writing code.

### Code of Conduct and Contribution Expectations

We want everyone in our community to have a positive experience. To facilitate this, we've established a code of conduct and a statement of values that we expect all contributors to adhere to. Please take a moment to review these documents; they're essential to maintaining a respectful and inclusive environment.

By making a contribution to this project, you certify that:

1. The contribution was created in whole or in part by you and you have the right to submit it under the open-source license indicated in this project's GitHub repository; or
2. The contribution is based upon previous work that, to the best of your knowledge, is covered under an appropriate open-source license and you have the right under that license to submit that work with modifications, whether created in whole or in part by you, under the same open-source license (unless you are permitted to submit under a different license); or
3. The contribution was provided directly to you by some other person who certified (1) or (2) and you have not modified it; or
4. You understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information you submit with it, including your sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open-source license(s) involved.

This disclaimer is not a license and does not grant any rights or permissions. You must obtain necessary permissions and licenses, including from third parties, before contributing to this project.

This disclaimer is provided "as is" without warranty of any kind, whether expressed or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, or non-infringement. In no event shall the authors or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the contribution or the use or other dealings in the contribution.

---

Remember, your contributions help make this project great. We're excited to see what you'll bring to our community!
@ -205,14 +205,14 @@ Here are the invoke> commands that apply to txt2img:
| `--seamless` | | `False` | Activate seamless tiling for interesting effects |
| `--seamless_axes` | | `x,y` | Specify which axes to use circular convolution on. |
| `--log_tokenization` | `-t` | `False` | Display a color-coded list of the parsed tokens derived from the prompt |
| `--skip_normalization` | `-x` | `False` | Weighted subprompts will not be normalized. See [Weighted Prompts](../features/OTHER.md#weighted-prompts) |
| `--upscale <int> <float>` | `-U <int> <float>` | `-U 1 0.75` | Upscale image by magnification factor (2, 4), and set strength of upscaling (0.0-1.0). If strength not set, will default to 0.75. |
| `--facetool_strength <float>` | `-G <float>` | `-G0` | Fix faces (defaults to using the GFPGAN algorithm); argument indicates how hard the algorithm should try (0.0-1.0) |
| `--facetool <name>` | `-ft <name>` | `-ft gfpgan` | Select face restoration algorithm to use: gfpgan, codeformer |
| `--codeformer_fidelity` | `-cf <float>` | `0.75` | Used along with CodeFormer. Takes values between 0 and 1. 0 produces high quality but low accuracy. 1 produces high accuracy but low quality |
| `--save_original` | `-save_orig` | `False` | When upscaling or fixing faces, this will cause the original image to be saved rather than replaced. |
| `--variation <float>` | `-v<float>` | `0.0` | Add a bit of noise (0.0=none, 1.0=high) to the image in order to generate a series of variations. Usually used in combination with `-S<seed>` and `-n<int>` to generate a series of riffs on a starting image. See [Variations](../features/VARIATIONS.md). |
| `--with_variations <pattern>` | | `None` | Combine two or more variations. See [Variations](../features/VARIATIONS.md) for how to use this. |
| `--save_intermediates <n>` | | `None` | Save the image from every nth step into an "intermediates" folder inside the output directory |
| `--h_symmetry_time_pct <float>` | | `None` | Create symmetry along the X axis at the desired percent complete of the generation process. (Must be between 0.0 and 1.0; set to a very small number like 0.0001 for just after the first step of generation.) |
| `--v_symmetry_time_pct <float>` | | `None` | Create symmetry along the Y axis at the desired percent complete of the generation process. (Must be between 0.0 and 1.0; set to a very small number like 0.0001 for just after the first step of generation.) |
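As an illustration (the prompt and values are arbitrary), several of these options can be combined on one line: `invoke> a cottage by a lake -U 2 0.75 -G 0.6 --seamless`.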
@ -257,7 +257,7 @@ additional options:
by `-M`. You may also supply just a single initial image with the areas
to overpaint made transparent, but you must be careful not to destroy
the pixels underneath when you create the transparent areas. See
[Inpainting](INPAINTING.md) for details.

inpainting accepts all the arguments used for txt2img and img2img, as well as
the `--mask` (`-M`) and `--text_mask` (`-tm`) arguments:

@ -297,7 +297,7 @@ invoke> a piece of cake -I /path/to/breakfast.png -tm bagel 0.6

You can load and use hundreds of community-contributed Textual
Inversion models just by typing the appropriate trigger phrase. Please
see [Concepts Library](../features/CONCEPTS.md) for more details.

## Other Commands
@ -65,39 +65,21 @@ find out what each concept is for, you can browse the
[Hugging Face concepts library](https://huggingface.co/sd-concepts-library) and
look at examples of what each concept produces.

When you have an idea of a concept you wish to try, go to the command-line
client (CLI) and type a `<` character and the beginning of the Hugging Face
concept name you wish to load. Press ++tab++, and the CLI will show you all
matching concepts. You can also type `<` and hit ++tab++ to get a listing of all
~800 concepts, but be prepared to scroll up to see them all! If there is more
than one match you can continue to type and ++tab++ until the concept is
completed.

!!! example

    if you type in `<x` and hit ++tab++, you'll be prompted with the completions:

    ```py
    <xatu2> <xatu> <xbh> <xi> <xidiversity> <xioboma> <xuna> <xyz>
    ```

    Now type `id` and press ++tab++. It will be autocompleted to `<xidiversity>`
    because this is a unique match.

Finish your prompt and generate as usual. You may include multiple concept terms
in the prompt.

If you have never used this concept before, you will see a message that the TI
model is being downloaded and installed. After this, the concept will be saved
locally (in the `models/sd-concepts-library` directory) for future use.

Several steps happen during downloading and installation, including a scan of
the file for malicious code. Should any errors occur, you will be warned and the
concept will fail to load. Generation will then continue treating the trigger
term as a normal string of characters (e.g. as literal `<ghibli-face>`).

You can also use `<concept-names>` in the WebGUI's prompt textbox. There is no
autocompletion at this time.

To load concepts, you will need to open the Web UI's configuration
dialogue and activate "Show Textual Inversions from HF Concepts
Library". This will then add a list of HF Concepts to the dropdown
"Add Textual Inversion" menu. Select the concept(s) of your choice and
they will be incorporated into the positive prompt. A few concepts are
designed for the negative prompt, in which case you can add them to
the negative prompt box by selecting the down arrow icon next to the
textual inversion menu.

There are nearly 1000 HF concepts, more than will fit into a menu. For
this reason we only show the most popular concepts (those which have
received 5 or more likes). If you wish to use a concept that is not on
the list, you may simply type its name surrounded by brackets. For
example, to load the concept named "xidiversity", add `<xidiversity>`
to the positive or negative prompt text.

## Installing your Own TI Files
@ -112,18 +94,11 @@ At startup time, InvokeAI will scan the `embeddings` directory and load any TI
files it finds there. At startup you will see a message similar to this one:

```bash
>> Current embedding manager terms: *, <HOI4-Leader>, <princess-knight>
```

Note the `*` trigger term. This is a placeholder term that many early TI
tutorials taught people to use rather than a more descriptive term.
Unfortunately, if you have multiple TI files that all use this term, only the
first one loaded will be triggered by use of the term.

To avoid this problem, you can use the `merge_embeddings.py` script to merge two
or more TI files together. If it encounters a collision of terms, the script
will prompt you to select new terms that do not collide. See
[Textual Inversion](TEXTUAL_INVERSION.md) for details.

The terms you can use will appear in the "Add Textual Inversion"
dropdown menu above the HF Concepts.
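As a concrete illustration (the file name and install path below are placeholders; use your own InvokeAI root):

```bash
# Copy a downloaded textual inversion file into the embeddings directory,
# then restart InvokeAI so the startup scan picks up its trigger term.
cp ~/Downloads/princess-knight.pt /path/to/invokeai/embeddings/
```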
## Further Reading

docs/features/CONTROLNET.md (new file, 92 lines)

@ -0,0 +1,92 @@
---
title: ControlNet
---

# :material-loupe: ControlNet

## ControlNet

ControlNet is a powerful set of features developed by the open-source community (notably, Stanford researcher [**@lllyasviel**](https://github.com/lllyasviel)) that allows you to apply a secondary neural network model to your image generation process in Invoke.

With ControlNet, you can get more control over the output of your image generation, providing you with a way to direct the network towards generating images that better fit your desired style or outcome.

### How it works

ControlNet works by analyzing an input image, pre-processing that image to identify relevant information that can be interpreted by each specific ControlNet model, and then inserting that control information into the generation process. This can be used to adjust the style, composition, or other aspects of the image to better achieve a specific result.

### Models

As part of the model installation, ControlNet models can be selected, including a variety of pre-trained models that have been added to achieve different effects or styles in your generated images. Further ControlNet models may require additional code functionality to also be incorporated into Invoke's Invocations folder. You should expect to follow any installation instructions for ControlNet models loaded outside the default models provided by Invoke. The default models include:

**Canny**:

When the Canny model is used in ControlNet, Invoke will attempt to generate images that match the edges detected.

Canny edge detection works by detecting the edges in an image by looking for abrupt changes in intensity. It is known for its ability to detect edges accurately while reducing noise and false edges, and the preprocessor can identify more information by decreasing the thresholds.

**M-LSD**:

M-LSD is another edge detection algorithm used in ControlNet. It stands for Multi-Scale Line Segment Detector.

It detects straight line segments in an image by analyzing the local structure of the image at multiple scales. It can be useful for architectural imagery, or anything where straight-line structural information is needed for the resulting output.

**Lineart**:

The Lineart model in ControlNet generates line drawings from an input image. The resulting pre-processed image is a simplified version of the original, with only the outlines of objects visible. The Lineart model in ControlNet is known for its ability to accurately capture the contours of the objects in an input sketch.

**Lineart Anime**:

A variant of the Lineart model that generates line drawings with a distinct style inspired by anime and manga art styles.

**Depth**:

A model that generates depth maps of images, allowing you to create more realistic 3D models or to simulate depth effects in post-processing.

**Normal Map (BAE)**:

A model that generates normal maps from input images, allowing for more realistic lighting effects in 3D rendering.

**Image Segmentation**:

A model that divides input images into segments or regions, each of which corresponds to a different object or part of the image. (More details coming soon.)

**Openpose**:

The OpenPose control model allows for the identification of the general pose of a character by pre-processing an existing image with a clear human structure. With advanced options, Openpose can also detect the face or hands in the image.

**Mediapipe Face**:

The MediaPipe Face identification processor is able to clearly identify facial features in order to capture vivid expressions of human faces.

**Tile (experimental)**:

The Tile model fills out details in the image to match the image, rather than the prompt. The Tile model is a versatile tool that offers a range of functionalities. Its primary capabilities can be boiled down to two main behaviors:

- It can reinterpret specific details within an image and create fresh, new elements.
- It has the ability to disregard global instructions if there's a discrepancy between them and the local context or specific parts of the image. In such cases, it uses the local context to guide the process.

The Tile model can be a powerful tool in your arsenal for enhancing image quality and details. If there are undesirable elements in your images, such as blurriness caused by resizing, this model can effectively eliminate these issues, resulting in cleaner, crisper images. Moreover, it can generate and add refined details to your images, improving their overall quality and appeal.

**Pix2Pix (experimental)**:

With Pix2Pix, you can input an image into the ControlNet, and then "instruct" the model to change it using your prompt. For example, you can say "Make it winter" to add more wintry elements to a scene.

**Inpaint**: Coming soon. Currently this model is available but not functional on the Canvas. An upcoming release will provide additional capabilities for using this model when inpainting.

Each of these models can be adjusted and combined with other ControlNet models to achieve different results, giving you even more control over your image generation process.
|
## Using ControlNet
|
||||||
|
|
||||||
|
To use ControlNet, you can simply select the desired model and adjust both the ControlNet and Pre-processor settings to achieve the desired result. You can also use multiple ControlNet models at the same time, allowing you to achieve even more complex effects or styles in your generated images.
|
||||||
|
|
||||||
|
|
||||||
|
Each ControlNet has two settings that are applied to the ControlNet.
|
||||||
|
|
||||||
|
Weight - Strength of the Controlnet model applied to the generation for the section, defined by start/end.
|
||||||
|
|
||||||
|
Start/End - 0 represents the start of the generation, 1 represents the end. The Start/end setting controls what steps during the generation process have the ControlNet applied.
|
||||||
|
|
||||||
|
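For intuition, the two settings can be thought of as a per-step multiplier on the ControlNet's influence. The sketch below is an illustration of the idea only, not InvokeAI's actual code; the function name and signature are hypothetical:

```python
def controlnet_scale(step: int, total_steps: int, weight: float = 1.0,
                     begin: float = 0.0, end: float = 1.0) -> float:
    """Return the ControlNet conditioning scale for one denoising step.

    `begin`/`end` are fractions of the generation (0 = first step, 1 = last);
    outside that window the ControlNet contributes nothing."""
    progress = step / max(total_steps - 1, 1)
    return weight if begin <= progress <= end else 0.0
```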
Additionally, each ControlNet section can be expanded in order to manipulate settings for the image pre-processor, which adjusts your uploaded image before it is used when you Invoke.
@ -4,86 +4,13 @@ title: Image-to-Image

# :material-image-multiple: Image-to-Image

InvokeAI provides an "img2img" feature that lets you seed your
creations with an initial drawing or photo. This is a really cool
feature that tells stable diffusion to build the prompt on top of the
image you provide, preserving the original's basic shape and layout.

For a walkthrough of using Image-to-Image in the Web UI, see [InvokeAI
Web Server](./WEB.md#image-to-image).

## How does it actually work, though?

The main difference between `img2img` and `prompt2img` is the starting point.
While `prompt2img` always starts with pure gaussian noise and progressively
@ -99,10 +26,6 @@ seed `1592514025` develops something like this:

!!! example ""

    <figure markdown>
    ![latent steps](../assets/img2img/000019.steps.png){ width=720 }
    </figure>
@ -157,17 +80,8 @@ Diffusion has less chance to refine itself, so the result ends up inheriting all

the problems of my bad drawing.

If you want to try this out yourself, all of these are using a seed of
`1592514025` with a width/height of `384`, step count `10`, the
`k_lms` sampler, and the single-word prompt `"fire"`.

### Compensating for the reduced step count
@ -180,10 +94,6 @@ give each generation 20 steps.

Here's strength `0.4` (note step count `50`, which is `20 ÷ 0.4` to make sure SD
does `20` steps from my image):

<figure markdown>
![000035.1592514025](../assets/img2img/000035.1592514025.png)
</figure>
@ -191,10 +101,6 @@ invoke> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4

and here is strength `0.7` (note step count `30`, which is roughly `20 ÷ 0.7` to
make sure SD does `20` steps from my image):

<figure markdown>
![000046.1592514025](../assets/img2img/000046.1592514025.png)
</figure>
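If it helps to see the bookkeeping spelled out, here is a minimal sketch of the arithmetic used above (an illustration only, not InvokeAI's actual code):

```python
def effective_steps(requested_steps: int, strength: float) -> int:
    """Approximate number of denoising steps img2img actually runs."""
    return round(requested_steps * strength)

def steps_to_request(desired_steps: int, strength: float) -> int:
    """Compensate for the reduced step count by over-requesting steps."""
    return round(desired_steps / strength)

# steps_to_request(20, 0.4) == 50 and steps_to_request(20, 0.7) == 29 (roughly 30),
# matching the step counts used in the examples above.
```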
@ -71,6 +71,3 @@ under the selected name and register it with InvokeAI.

use InvokeAI conventions - only alphanumeric letters and the
characters ".+-".
@ -31,10 +31,22 @@ turned on and off on the command line using `--nsfw_checker` and

At installation time, InvokeAI will ask whether the checker should be
activated by default (neither argument given on the command line). The
response is stored in the InvokeAI initialization file
(`invokeai.yaml` in the InvokeAI root directory). You can change the
default at any time by opening this file in a text editor and
changing the line `nsfw_checker:` from true to false or vice-versa:

```
...
Features:
  esrgan: true
  internet_available: true
  log_tokenization: false
  nsfw_checker: true
  patchmatch: true
  restore: true
```

## Caveats
@ -79,11 +91,3 @@ generates. However, it does write metadata into the PNG data area,

including the prompt used to generate the image and relevant parameter
settings. These fields can be examined using the `sd-metadata.py`
script that comes with the InvokeAI package.
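If you prefer to poke at the metadata yourself, the same information can be read with a few lines of Python. The file path below is a hypothetical example, and the exact chunk names InvokeAI writes are an assumption; this simply prints whatever PNG text chunks are present:

```python
from PIL import Image

# InvokeAI stores its generation parameters in PNG text chunks
# (for example "Dream" / "sd-metadata"); print whatever is there.
png = Image.open("outputs/img-samples/000001.1234567890.png")  # hypothetical file
for key, value in png.text.items():
    print(f"{key}: {value}")
```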
@ -18,43 +18,16 @@ Output Example:

## **Seamless Tiling**

The seamless tiling mode causes each generated image to tile seamlessly
with itself, creating repetitive wallpaper-like patterns. To use it,
activate the Seamless Tiling option in the Web GUI and then select
whether to tile on the X (horizontal) and/or Y (vertical) axes. Tiling
will then be active for the next set of generations.

A nice prompt to test seamless tiling with is:

```
pond garden with lotus by claude monet
```
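Under the hood, seamless tiling is typically implemented by switching the model's convolutions to circular padding so that features wrap around the image edges. This is a rough sketch of the idea, not InvokeAI's exact code:

```python
import torch.nn as nn

def set_seamless(model: nn.Module, enabled: bool = True) -> None:
    """Toggle circular padding on every Conv2d so the output wraps at the edges."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            module.padding_mode = "circular" if enabled else "zeros"
```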

---
@ -73,66 +46,27 @@ This will tell the sampler to invest 25% of its effort on the tabby cat aspect o

on the white duck aspect (surprisingly, this example actually works). The prompt weights can use any
combination of integers and floating point numbers, and they do not need to add up to 1.

## **Thresholding and Perlin Noise Initialization Options**

Under the Noise section of the Web UI, you will find two options named
Perlin Noise and Noise Threshold. [Perlin
noise](https://en.wikipedia.org/wiki/Perlin_noise) is a type of
structured noise used to simulate terrain and other natural
textures. The slider controls the percentage of perlin noise that will
be mixed into the image at the beginning of generation. Adding a little
perlin noise to a generation will alter the image substantially.

The noise threshold limits the range of the latent values during
sampling and helps combat the oversharpening seen with higher CFG
scale values.

For better intuition into what these options do in practice:

![here is a graphic demonstrating them both](../assets/truncation_comparison.jpg)

In generating this graphic, perlin noise at initialization was
programmatically varied going across on the diagram by values 0.0,
0.1, 0.2, 0.4, 0.5, 0.6, 0.8, 0.9, 1.0; and the threshold was varied
going down from 0, 1, 2, 3, 4, 5, 10, 20, 100. The other options were
fixed, using the prompt "a portrait of a beautiful young lady", a CFG of
20, 100 steps, and a seed of 1950357039.
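Mechanically, the two controls can be sketched roughly as below. This is an illustration only, using upsampled low-frequency gaussian noise as a crude stand-in for true perlin noise:

```python
import torch
import torch.nn.functional as F

def structured_noise(shape, scale=8):
    """Crude stand-in for perlin noise: low-resolution gaussian noise
    upsampled to the latent size, giving smooth, terrain-like structure."""
    b, c, h, w = shape
    coarse = torch.randn(b, c, h // scale, w // scale)
    return F.interpolate(coarse, size=(h, w), mode="bilinear", align_corners=False)

def initial_latents(shape, perlin_amount=0.0):
    """Mix a fraction of structured noise into the usual gaussian start point."""
    gaussian = torch.randn(shape)
    if perlin_amount > 0:
        gaussian = (1 - perlin_amount) * gaussian + perlin_amount * structured_noise(shape)
    return gaussian

def apply_threshold(latents, threshold=0.0):
    """Clamp latent values to +/- threshold; 0 disables thresholding."""
    return latents if threshold == 0 else latents.clamp(-threshold, threshold)
```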
@ -8,12 +8,6 @@ title: Postprocessing

This extension provides the ability to restore faces and upscale images.

## Face Fixing

The default face restoration module is GFPGAN. The default upscale is
@ -23,8 +17,7 @@ Real-ESRGAN. For an alternative face restoration module, see

As of version 1.14, environment.yaml will install the Real-ESRGAN package into
the standard install location for python packages, and will put GFPGAN into a
subdirectory of "src" in the InvokeAI directory. Upscaling with Real-ESRGAN
should "just work" without further intervention. Simply indicate the desired scale on
the popup in the Web GUI.

**GFPGAN** requires a series of downloadable model files to work. These are
@ -41,48 +34,75 @@ reconstruction.

### Upscaling

Open the upscaling dialog by clicking on the "expand" icon located
above the image display area in the Web UI:

<figure markdown>
![upscale1](../assets/features/upscale-dialog.png)
</figure>

There are three different upscaling parameters that you can
adjust. The first is the scale itself, either 2x or 4x.

The second is the "Denoising Strength." Higher values will smooth out
the image and remove digital chatter, but may lose fine detail.

Third, "Upscale Strength" lets you set the scaling strength between
`0` and `1.0` to control the intensity of the scaling. AI upscalers
generally tend to smooth out texture details. If you wish to retain
some of those for natural looking results, we recommend using values
between `0.5` and `0.8`.

[This figure](../assets/features/upscaling-montage.png) illustrates
the effects of denoising and strength. The original image was 512x512,
4x scaled to 2048x2048. The "original" version on the upper left was
scaled using simple pixel averaging. The remainder use the ESRGAN
upscaling algorithm at different levels of denoising and strength.

<figure markdown>
![upscaling](../assets/features/upscaling-montage.png){ width=720 }
</figure>

Both denoising and strength default to 0.75.
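One common way to implement that kind of "strength" control is to blend the AI-upscaled result back toward a plain resize of the original, so that some of the original texture survives. The sketch below illustrates the idea and is not necessarily InvokeAI's exact implementation:

```python
from PIL import Image

def apply_upscale_strength(original: Image.Image, upscaled: Image.Image,
                           strength: float = 0.75) -> Image.Image:
    """Blend the AI-upscaled image with a plain resize of the original
    (1.0 = fully upscaled, 0.0 = plain resize)."""
    plain = original.resize(upscaled.size, Image.LANCZOS)
    return Image.blend(plain, upscaled, strength)
```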
### Face Restoration

InvokeAI offers two alternative face restoration algorithms,
[GFPGAN](https://github.com/TencentARC/GFPGAN) and
[CodeFormer](https://huggingface.co/spaces/sczhou/CodeFormer). These
algorithms improve the appearance of faces, particularly eyes and
mouths. Issues with faces are less common with the latest set of
Stable Diffusion models than with the original 1.4 release, but the
restoration algorithms can still make a noticeable improvement in
certain cases. You can also apply restoration to old photographs you
upload.

To access face restoration, click the "smiley face" icon in the
toolbar above the InvokeAI image panel. You will be presented with a
dialog that offers a choice between the two algorithms and sliders that
allow you to adjust their parameters. Alternatively, you may open the
left-hand accordion panel labeled "Face Restoration" and have the
restoration algorithm of your choice applied to generated images
automatically.

Like upscaling, there are a number of parameters that adjust the face
restoration output. GFPGAN has a single parameter, `strength`, which
controls how much the algorithm is allowed to adjust the
image. CodeFormer has two parameters, `strength` and `fidelity`,
which together control the quality of the output image as described in
the [CodeFormer project
page](https://shangchenzhou.com/projects/CodeFormer/). Default values
are 0.75 for both parameters, which achieves a reasonable balance
between changing the image too much and not enough.

[This figure](../assets/features/restoration-montage.png) illustrates
the effects of adjusting GFPGAN and CodeFormer parameters.

<figure markdown>
![upscaling](../assets/features/restoration-montage.png){ width=720 }
</figure>

!!! note
@ -95,69 +115,8 @@ invoke> "a man wearing a pineapple hat" -I path/to/your/file.png -U 2 0.5 -G 0.6

process is complete. While the image generation is taking place, you will still be able to preview
the base images.

## How to disable

If, for some reason, you do not wish to load the GFPGAN and/or ESRGAN libraries,
you can disable them on the invoke.py command line with the `--no_restore` and
`--no_esrgan` options, respectively.
@ -4,77 +4,12 @@ title: Prompting-Features

# :octicons-command-palette-24: Prompting-Features

## **Negative and Unconditioned Prompts**

Any words between a pair of square brackets will instruct Stable
Diffusion to attempt to ban the concept from the generated image. The
same effect is achieved by placing words in the "Negative Prompts"
textbox in the Web UI.

```text
this is a test prompt [not really] to make you understand [cool] how this works.
```
@ -87,7 +22,9 @@ Here's a prompt that depicts what it does.

original prompt:

`#!bash "A fantastical translucent pony made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve"`

`#!bash parameters: steps=20, dimensions=512x768, CFG=7.5, Scheduler=k_euler_a, seed=1654590180`

<figure markdown>
@ -99,7 +36,8 @@ That image has a woman, so if we want the horse without a rider, we can

influence the image not to have a woman by putting [woman] in the prompt, like
this:

`#!bash "A fantastical translucent poney made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve [woman]"`

(same parameters as above)

<figure markdown>
@ -110,7 +48,8 @@ this:

That's nice - but say we also don't want the image to be quite so blue. We can
add "blue" to the list of negative prompts, so it's now [woman blue]:

`#!bash "A fantastical translucent poney made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve [woman blue]"`

(same parameters as above)

<figure markdown>
@ -121,7 +60,8 @@ add "blue" to the list of negative prompts, so it's now [woman blue]:

Getting close - but there's no sense in having a saddle when our horse doesn't
have a rider, so we'll add one more negative prompt: [woman blue saddle].

`#!bash "A fantastical translucent poney made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve [woman blue saddle]"`

(same parameters as above)

<figure markdown>
@ -261,19 +201,6 @@ Prompt2prompt `.swap()` is not compatible with xformers, which will be temporari

The `prompt2prompt` code is based off
[bloc97's colab](https://github.com/bloc97/CrossAttentionControl).

### Escaping parentheses () and speech marks ""

If the model you are using has parentheses () or speech marks "" as part of its
@ -374,6 +301,5 @@ summoning up the concept of some sort of scifi creature? Let's find out.

Indeed, removing the word "hybrid" produces an image that is more like what we'd
expect.

In conclusion, prompt blending is great for exploring creative space,
but takes some trial and error to achieve the desired effect.
@ -46,11 +46,19 @@ start the front end by selecting choice (3):

```sh
Do you want to generate images using the
1: Browser-based UI
2: Command-line interface
3: Run textual inversion training
4: Merge models (diffusers type only)
5: Download and install models
6: Change InvokeAI startup options
7: Re-run the configure script to fix a broken install
8: Open the developer console
9: Update InvokeAI
10: Command-line help
Q: Quit

Please enter 1-10, Q: [1]
```

From the command line, with the InvokeAI virtual environment active,
@ -6,9 +6,7 @@ title: Variations

## Intro

InvokeAI's support for variations enables you to do the following:

1. Generate a series of systematic variations of an image, given a prompt. The
   amount of variation from one image to the next can be controlled.
@ -30,19 +28,7 @@ The prompt we will use throughout is:

This will be indicated as `#!bash "prompt"` in the examples below.

First we let SD create a series of images in the usual way, in this case
requesting six iterations.

<figure markdown>
![var1](../assets/variation_walkthru/000001.3357757885.png)
@ -53,22 +39,16 @@ Outputs:

## Step 2 - Generating Variations

Let's try to generate some variations on this image. We select the "*"
symbol in the line of icons above the image in order to fix the prompt
and seed. Then we open up the "Variations" section of the generation
panel and use the slider to set the variation amount to 0.2. The
higher this value, the more each generated image will differ from the
previous one.

Now we run the prompt a second time, requesting six iterations. You
will see six images that are thematically related to each other. Try
increasing and decreasing the variation amount and see what happens.
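Variation amounts are typically implemented by nudging the seed's initial noise toward noise drawn from a fresh sub-seed, often by spherical interpolation. The sketch below is an illustration of that idea, not InvokeAI's exact code; the seeds are arbitrary examples:

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Spherical interpolation between two noise tensors."""
    a_n, b_n = a.flatten(), b.flatten()
    omega = torch.acos(torch.clamp(
        torch.dot(a_n / a_n.norm(), b_n / b_n.norm()), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < 1e-6:                      # nearly parallel: plain lerp
        return (1.0 - t) * a + t * b
    return (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b

# The base noise comes from the fixed seed; each variation blends in noise from
# a new sub-seed by the variation amount (0.2 here).
gen = lambda seed: torch.randn(4, 64, 64, generator=torch.Generator().manual_seed(seed))
varied_noise = slerp(0.2, gen(1234), gen(5678))
```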
### **Variation Sub Seeding**
@ -299,14 +299,6 @@ initial image" icons are located.

See the [Unified Canvas Guide](UNIFIED_CANVAS.md)

## Reference

### Additional Options
@ -349,11 +341,9 @@ the settings configured in the toolbar.

See below for additional documentation related to each feature:

- [Variations](./VARIATIONS.md)
- [Upscaling](./POSTPROCESS.md#upscaling)
- [Image to Image](./IMG2IMG.md)
- [Other](./OTHER.md)

#### Invocation Gallery
|
@ -13,28 +13,16 @@ Build complex scenes by combine and modifying multiple images in a stepwise
|
|||||||
fashion. This feature combines img2img, inpainting and outpainting in
|
fashion. This feature combines img2img, inpainting and outpainting in
|
||||||
a single convenient digital artist-optimized user interface.
|
a single convenient digital artist-optimized user interface.
|
||||||
|
|
||||||
### * The [Command Line Interface (CLI)](CLI.md)
|
|
||||||
Scriptable access to InvokeAI's features.
|
|
||||||
|
|
||||||
## Image Generation
|
## Image Generation
|
||||||
### * [Prompt Engineering](PROMPTS.md)
|
### * [Prompt Engineering](PROMPTS.md)
|
||||||
Get the images you want with the InvokeAI prompt engineering language.
|
Get the images you want with the InvokeAI prompt engineering language.
|
||||||
|
|
||||||
## * [Post-Processing](POSTPROCESS.md)
|
|
||||||
Restore mangled faces and make images larger with upscaling. Also see the [Embiggen Upscaling Guide](EMBIGGEN.md).
|
|
||||||
|
|
||||||
## * The [Concepts Library](CONCEPTS.md)
|
## * The [Concepts Library](CONCEPTS.md)
|
||||||
Add custom subjects and styles using HuggingFace's repository of embeddings.
|
Add custom subjects and styles using HuggingFace's repository of embeddings.
|
||||||
|
|
||||||
### * [Image-to-Image Guide for the CLI](IMG2IMG.md)
|
### * [Image-to-Image Guide](IMG2IMG.md)
|
||||||
Use a seed image to build new creations in the CLI.
|
Use a seed image to build new creations in the CLI.
|
||||||
|
|
||||||
### * [Inpainting Guide for the CLI](INPAINTING.md)
|
|
||||||
Selectively erase and replace portions of an existing image in the CLI.
|
|
||||||
|
|
||||||
### * [Outpainting Guide for the CLI](OUTPAINTING.md)
|
|
||||||
Extend the borders of the image with an "outcrop" function within the CLI.
|
|
||||||
|
|
||||||
### * [Generating Variations](VARIATIONS.md)
|
### * [Generating Variations](VARIATIONS.md)
|
||||||
Have an image you like and want to generate many more like it? Variations
|
Have an image you like and want to generate many more like it? Variations
|
||||||
are the ticket.
|
are the ticket.
|
||||||
|
docs/index.md
@ -13,6 +13,7 @@ title: Home

<div align="center" markdown>

[![project logo](assets/invoke_ai_banner.png)](https://github.com/invoke-ai/InvokeAI)

[![discord badge]][discord link]
@ -131,17 +132,13 @@ This method is recommended for those familiar with running Docker containers

- [WebUI overview](features/WEB.md)
- [WebUI hotkey reference guide](features/WEBUIHOTKEYS.md)
- [WebUI Unified Canvas for Img2Img, inpainting and outpainting](features/UNIFIED_CANVAS.md)

<!-- separator -->
### Image Management
- [Image2Image](features/IMG2IMG.md)
- [Adding custom styles and subjects](features/CONCEPTS.md)
- [Upscaling and Face Reconstruction](features/POSTPROCESS.md)
- [Other Features](features/OTHER.md)

<!-- separator -->
@ -156,83 +153,60 @@ This method is recommended for those familiar with running Docker containers

- [Prompt Syntax](features/PROMPTS.md)
- [Generating Variations](features/VARIATIONS.md)

## :octicons-log-16: Important Changes Since Version 2.3

### Nodes

Behind the scenes, InvokeAI has been completely rewritten to support
"nodes," small unitary operations that can be combined into graphs to
form arbitrary workflows. For example, there is a prompt node that
processes the prompt string and feeds it to a text2latent node that
generates a latent image. The latents are then fed to a latent2image
node that translates the latent image into a PNG.

The WebGUI has a node editor that allows you to graphically design and
execute custom node graphs. The ability to save and load graphs is
still a work in progress, but coming soon.
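Conceptually, a workflow is just a small graph wired from such nodes. The sketch below is purely illustrative; the node types, field names, and structure are hypothetical stand-ins, not InvokeAI's actual node schema:

```python
# A hypothetical three-node text-to-image graph (illustrative names only).
graph = {
    "nodes": {
        "prompt_1": {"type": "prompt", "text": "pond garden with lotus by claude monet"},
        "t2l_1": {"type": "text2latent", "steps": 30, "cfg_scale": 7.5},
        "l2i_1": {"type": "latent2image", "format": "png"},
    },
    "edges": [
        ("prompt_1", "t2l_1"),  # prompt conditioning feeds the latent generator
        ("t2l_1", "l2i_1"),     # latents are decoded into a PNG
    ],
}
```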
When you perform a new install of version 2.3.0, you will be offered the option to install the `diffusers` versions of a number of popular SD models, including Stable Diffusion versions 1.5 and 2.1 (including the 768x768 pixel version of 2.1). These will act and work just like the checkpoint versions. Do not be concerned if you already have a lot of ".ckpt" or ".safetensors" models on disk! InvokeAI 2.3.0 can still load these and generate images from them without any extra intervention on your part.
|
### Command-Line Interface Retired
|
||||||
|
|
||||||
To take advantage of the optimized loading times of `diffusers` models, InvokeAI offers options to convert legacy checkpoint models into optimized `diffusers` models. If you use the `invokeai` command line interface, the relevant commands are:
|
The original "invokeai" command-line interface has been retired. The
|
||||||
|
`invokeai` command will now launch a new command-line client that can
|
||||||
|
be used by developers to create and test nodes. It is not intended to
|
||||||
|
be used for routine image generation or manipulation.
|
||||||
|
|
||||||
* `!convert_model` -- Take the path to a local checkpoint file or a URL that is pointing to one, convert it into a `diffusers` model, and import it into InvokeAI's models registry file.
|
To launch the Web GUI from the command-line, use the command
|
||||||
* `!optimize_model` -- If you already have a checkpoint model in your InvokeAI models file, this command will accept its short name and convert it into a like-named `diffusers` model, optionally deleting the original checkpoint file.
|
`invokeai-web` rather than the traditional `invokeai --web`.
|
||||||
* `!import_model` -- Take the local path of either a checkpoint file or a `diffusers` model directory and import it into InvokeAI's registry file. You may also provide the ID of any diffusers model that has been published on the [HuggingFace models repository](https://huggingface.co/models?pipeline_tag=text-to-image&sort=downloads) and it will be downloaded and installed automatically.
|
|
||||||
|
|
||||||
The WebGUI offers similar functionality for model management.
|
### ControlNet

This version of InvokeAI features ControlNet, a system that allows you
to achieve exact poses for human and animal figures by providing a
model to follow. Full details are found in [ControlNet](features/CONTROLNET.md).
### New Schedulers

The list of schedulers has been completely revamped and brought up to date:

| **Short Name** | **Scheduler**                   | **Notes**                   |
|----------------|---------------------------------|-----------------------------|
| **ddim**       | DDIMScheduler                   |                             |
| **ddpm**       | DDPMScheduler                   |                             |
| **deis**       | DEISMultistepScheduler          |                             |
| **lms**        | LMSDiscreteScheduler            |                             |
| **pndm**       | PNDMScheduler                   |                             |
| **heun**       | HeunDiscreteScheduler           | original noise schedule     |
| **heun_k**     | HeunDiscreteScheduler           | using karras noise schedule |
| **euler**      | EulerDiscreteScheduler          | original noise schedule     |
| **euler_k**    | EulerDiscreteScheduler          | using karras noise schedule |
| **kdpm_2**     | KDPM2DiscreteScheduler          |                             |
| **kdpm_2_a**   | KDPM2AncestralDiscreteScheduler |                             |
| **dpmpp_2s**   | DPMSolverSinglestepScheduler    |                             |
| **dpmpp_2m**   | DPMSolverMultistepScheduler     | original noise schedule     |
| **dpmpp_2m_k** | DPMSolverMultistepScheduler     | using karras noise schedule |
| **unipc**      | UniPCMultistepScheduler         | CPU only                    |

Please see [3.0.0 Release Notes](https://github.com/invoke-ai/InvokeAI/releases/tag/v3.0.0) for further details.
#### Migration to Stable Diffusion `diffusers` Models

Previous versions of InvokeAI supported the original model file format introduced with Stable Diffusion 1.4. In the original format, known variously as the "checkpoint" or "legacy" format, there is a single large weights file ending with `.ckpt` or `.safetensors`. Though this format has served the community well, it has a number of disadvantages, including large file size, slow loading times, and a variety of non-standard variants that require special-case code to handle. In addition, because checkpoint files are actually a bundle of multiple machine learning sub-models, it is hard to swap different sub-models in and out, or to share common sub-models.

A new format, introduced by the StabilityAI company in collaboration with HuggingFace, is called `diffusers` and consists of a directory of individual models. The most immediate benefit of `diffusers` is that they load from disk very quickly. A longer-term benefit is that in the near future `diffusers` models will be able to share common sub-models, dramatically reducing disk space when you have multiple fine-tune models derived from the same base.

When you perform a new install of version 2.3.0, you will be offered the option to install the `diffusers` versions of a number of popular SD models, including Stable Diffusion versions 1.5 and 2.1 (including the 768x768 pixel version of 2.1). These will act and work just like the checkpoint versions. Do not be concerned if you already have a lot of ".ckpt" or ".safetensors" models on disk! InvokeAI 2.3.0 can still load these and generate images from them without any extra intervention on your part.
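To make the difference concrete, here is a rough sketch of how the two formats typically appear on disk. The directory and file names below are illustrative placeholders, not the exact layout of your installation:

```bash
# Legacy "checkpoint" format: one monolithic weights file per model
ls ~/models/checkpoints/
# my-fine-tune.ckpt  sd-v1-5.safetensors

# `diffusers` format: a directory of individual sub-models per model
ls ~/models/diffusers/stable-diffusion-v1-5/
# model_index.json  scheduler/  text_encoder/  tokenizer/  unet/  vae/
```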
To take advantage of the optimized loading times of `diffusers` models, InvokeAI offers options to convert legacy checkpoint models into optimized `diffusers` models. If you use the `invokeai` command line interface, the relevant commands are:

* `!convert_model` -- Take the path to a local checkpoint file or a URL that is pointing to one, convert it into a `diffusers` model, and import it into InvokeAI's models registry file.
* `!optimize_model` -- If you already have a checkpoint model in your InvokeAI models file, this command will accept its short name and convert it into a like-named `diffusers` model, optionally deleting the original checkpoint file.
* `!import_model` -- Take the local path of either a checkpoint file or a `diffusers` model directory and import it into InvokeAI's registry file. You may also provide the ID of any diffusers model that has been published on the [HuggingFace models repository](https://huggingface.co/models?pipeline_tag=text-to-image&sort=downloads) and it will be downloaded and installed automatically.
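As a rough sketch, a conversion session in the command-line client might look like the following. The model path, short name, and repository ID are placeholders, not files that ship with InvokeAI:

```bash
# Start the interactive CLI, then issue the commands at the invoke> prompt:
invokeai
# invoke> !convert_model /path/to/some-model.safetensors
# invoke> !optimize_model my-registered-checkpoint
# invoke> !import_model runwayml/stable-diffusion-v1-5
```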
The WebGUI offers similar functionality for model management.

For advanced users, new command-line options provide additional functionality. Launching `invokeai` with the argument `--autoconvert <path to directory>` takes the path to a directory of checkpoint files, automatically converts them into `diffusers` models and imports them. Each time the script is launched, the directory will be scanned for new checkpoint files to be loaded. Alternatively, the `--ckpt_convert` argument will cause any checkpoint or safetensors model that is already registered with InvokeAI to be converted into a `diffusers` model on the fly, allowing you to take advantage of future diffusers-only features without explicitly converting the model and saving it to disk.
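For example (the directory path is a placeholder for your own checkpoint folder):

```bash
# Scan a directory at every launch, converting and importing new checkpoints:
invokeai --autoconvert ~/stable-diffusion-checkpoints

# Or convert already-registered checkpoint/safetensors models on the fly:
invokeai --ckpt_convert
```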
Please see [INSTALLING MODELS](https://invoke-ai.github.io/InvokeAI/installation/050_INSTALLING_MODELS/) for more information on model management in both the command-line and Web interfaces.
#### Support for the `XFormers` Memory-Efficient Cross-Attention Package

On CUDA (Nvidia) systems, version 2.3.0 supports the `XFormers` library. Once installed, the `xformers` package dramatically reduces the memory footprint of loaded Stable Diffusion model files and modestly increases image generation speed. `xformers` will be installed and activated automatically if you specify a CUDA system at install time.

The caveat with using `xformers` is that it introduces slightly non-deterministic behavior: images generated using the same seed and other settings will be subtly different between invocations. Generally the changes are unnoticeable unless you rapidly shift back and forth between images, but to disable `xformers` and restore fully deterministic behavior, you may launch InvokeAI using the `--no-xformers` option. This is most conveniently done by opening the file `invokeai/invokeai.init` with a text editor and adding the line `--no-xformers` at the bottom.
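For example, either of the following achieves this; the path below assumes a default-named `invokeai` runtime directory in your home folder:

```bash
# One-off launch with xformers disabled (fully deterministic output):
invokeai --no-xformers

# Or make the setting permanent by appending the flag to invokeai.init:
echo "--no-xformers" >> ~/invokeai/invokeai.init
```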
#### A Negative Prompt Box in the WebUI

There is now a separate text input box for negative prompts in the WebUI. This is convenient for stashing frequently-used negative prompts ("mangled limbs, bad anatomy"). The `[negative prompt]` syntax continues to work in the main prompt box as well.

To see exactly how your prompts are being parsed, launch `invokeai` with the `--log_tokenization` option. The console window will then display the tokenization process for both positive and negative prompts.
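For example (the prompt text is just an illustration of the bracketed negative-prompt syntax):

```bash
# Launch with tokenization logging enabled, then enter a prompt at the invoke> prompt:
invokeai --log_tokenization
# invoke> a watercolor landscape at dusk [mangled limbs, bad anatomy]
```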
#### Model Merging

Version 2.3.0 offers an intuitive user interface for merging up to three Stable Diffusion models. Model merging allows you to mix the behavior of models to achieve very interesting effects. To use it, each of the models must already be imported into InvokeAI and saved in `diffusers` format. Launch the merger from a new menu item in the InvokeAI launcher script (`invoke.sh`, `invoke.bat`) or directly from the command line with `invokeai-merge --gui`. You will be prompted to select the models to merge, the proportions in which to mix them, and the mixing algorithm. The script will create a new merged `diffusers` model and import it into InvokeAI for your use.

See [MODEL MERGING](https://invoke-ai.github.io/InvokeAI/features/MODEL_MERGING/) for more details.

#### Textual Inversion Training

Textual Inversion (TI) is a technique for training a Stable Diffusion model to emit a particular subject or style when triggered by a keyword phrase. You can perform TI training by placing a small number of images of the subject or style in a directory and choosing a distinctive trigger phrase, such as "pointillist-style". After successful training, the subject or style will be activated by including `<pointillist-style>` in your prompt.

Previous versions of InvokeAI were able to perform TI, but it required using a command-line script with dozens of obscure command-line arguments. Version 2.3.0 features an intuitive TI frontend that will build a TI model on top of any `diffusers` model. To access training, launch it from a new item in the launcher script or from the command line using `invokeai-ti --gui`.

See [TEXTUAL INVERSION](https://invoke-ai.github.io/InvokeAI/features/TEXTUAL_INVERSION/) for further details.

#### A New Installer Experience

The InvokeAI installer has been upgraded to provide a smoother and hopefully more glitch-free experience. In addition, InvokeAI is now packaged as a PyPI project, allowing developers and power users to install InvokeAI with the command `pip install InvokeAI --use-pep517`. Please see [Installation](#installation) for details.
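For example, a minimal install into a fresh virtual environment might look like this (the environment name is arbitrary):

```bash
# Create and activate an isolated virtual environment, then install from PyPI:
python -m venv invokeai-env
source invokeai-env/bin/activate
pip install InvokeAI --use-pep517
```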
Developers should be aware that the `pip` installation procedure has been simplified and that the `conda` method is no longer supported at all. Accordingly, the `environments_and_requirements` directory has been deleted from the repository.

#### Command-Line Name Changes

All of InvokeAI's functionality, including the WebUI, command-line interface, textual inversion training, and model merging, can be accessed from the `invoke.sh` and `invoke.bat` launcher scripts, whose menu of options has been expanded to cover the new features. For the convenience of developers and power users, we have normalized the names of the InvokeAI command-line scripts:

* `invokeai` -- Command-line client
* `invokeai --web` -- Web GUI
* `invokeai-merge --gui` -- Model merging script with graphical front end
* `invokeai-ti --gui` -- Textual inversion script with graphical front end
* `invokeai-configure` -- Configuration tool for initializing the `invokeai` directory and selecting popular starter models

For backward compatibility, the old command names are also recognized, including `invoke.py` and `configure-invokeai.py`. However, these are deprecated and will eventually be removed.

Developers should be aware that the locations of the scripts' source code have changed. The new locations are:

* `invokeai` => `ldm/invoke/CLI.py`
* `invokeai-configure` => `ldm/invoke/config/configure_invokeai.py`
* `invokeai-ti` => `ldm/invoke/training/textual_inversion.py`
* `invokeai-merge` => `ldm/invoke/merge_diffusers`

Developers are strongly encouraged to perform an "editable" install of InvokeAI using `pip install -e . --use-pep517` in the Git repository, and then to call the scripts using their 2.3.0 names rather than executing the scripts directly. Developers should also be aware that several important data files have been relocated into a new directory named `invokeai`. This includes the WebGUI's `frontend` and `backend` directories and the `INITIAL_MODELS.yaml` files used by the installer to select starter models. Eventually all InvokeAI modules will live in subdirectories of `invokeai`.
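For example, a typical developer setup might look like this (the clone location is up to you):

```bash
# Clone the repository and perform an "editable" install, then call the
# scripts by their 2.3.0 names:
git clone https://github.com/invoke-ai/InvokeAI.git
cd InvokeAI
pip install -e . --use-pep517
invokeai-configure   # initialize the invokeai directory and choose starter models
```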
Please see [2.3.0 Release Notes](https://github.com/invoke-ai/InvokeAI/releases/tag/v2.3.0) for further details.

For older changelogs, please visit the
**[CHANGELOG](CHANGELOG/#v223-2-december-2022)**.
## :material-target: Troubleshooting

Original portions of the software are Copyright (c) 2022-23
by [The InvokeAI Team](https://github.com/invoke-ai).
## :octicons-book-24: Further Reading

Please see the original README for more information on this software and
underlying algorithm, located in the file
[README-CompViz.md](other/README-CompViz.md).