Mirror of https://github.com/invoke-ai/InvokeAI, synced 2024-08-30 20:32:17 +00:00

Merge branch 'main' into webui-model-conversion

This commit is contained in: 9e0250c0b4

README.md (146 lines changed)
@@ -1,6 +1,6 @@

<div align="center">

[image: project logo]

# InvokeAI: A Stable Diffusion Toolkit
@@ -41,38 +41,136 @@ requests. Be sure to use the provided templates. They will help us diagnose issues.

<div align="center">

[image]

</div>
## Table of Contents

1. [Quick Start](#getting-started-with-invokeai)
2. [Installation](#detailed-installation-instructions)
3. [Hardware Requirements](#hardware-requirements)
4. [Features](#features)
5. [Latest Changes](#latest-changes)
6. [Troubleshooting](#troubleshooting)
7. [Contributing](#contributing)
8. [Contributors](#contributors)
9. [Support](#support)
10. [Further Reading](#further-reading)
## Getting Started with InvokeAI

For full installation and upgrade instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/)

### Automatic Installer (suggested for 1st time users)
1. Go to the bottom of the [Latest Release Page](https://github.com/invoke-ai/InvokeAI/releases/latest)

2. Download the .zip file for your OS (Windows/macOS/Linux).

3. Unzip the file.

4. If you are on Windows, double-click on the `install.bat` script. On macOS, open a Terminal window, drag the file `install.sh` from Finder into the Terminal, and press return. On Linux, run `install.sh`.

5. You'll be asked to confirm the location of the folder in which to install InvokeAI and its image generation model files. Pick a location with at least 15 GB of free disk space. More if you plan on installing lots of models.

6. Wait while the installer does its thing. After installing the software, the installer will launch a script that lets you configure InvokeAI and select a set of starting image generation models.

7. Find the folder that InvokeAI was installed into (it is not the same as the unpacked zip file directory!). The default location of this folder (if you didn't change it in step 5) is `~/invokeai` on Linux/Mac systems, and `C:\Users\YourName\invokeai` on Windows. This directory will contain launcher scripts named `invoke.sh` and `invoke.bat`.

8. On Windows systems, double-click on the `invoke.bat` file. On macOS, open a Terminal window, drag `invoke.sh` from the folder into the Terminal, and press return. On Linux, run `invoke.sh`.

9. Press 2 to open the "browser-based UI", press enter/return, wait a minute or two for Stable Diffusion to start up, then open your browser and go to http://localhost:9090.

10. Type `banana sushi` in the box on the top left and click `Invoke`.
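Step 5's 15 GB free-space requirement can be checked up front before picking an install location. This is a minimal illustrative sketch, not part of InvokeAI; the helper name and the default threshold are assumptions:

```python
import shutil

def has_free_space(path: str = ".", required_gb: int = 15) -> bool:
    """Return True if the filesystem holding `path` has at least `required_gb` GB free."""
    free_bytes = shutil.disk_usage(path).free
    return free_bytes >= required_gb * 1024**3

if __name__ == "__main__":
    location = "."  # hypothetical install location
    print(f"Enough space at {location}: {has_free_space(location)}")
```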
### Command-Line Installation (for users familiar with Terminals)

You must have Python 3.9 or 3.10 installed on your machine. Earlier or later versions are not supported.

1. Open a command-line window on your machine. PowerShell is recommended for Windows.

2. Create a directory to install InvokeAI into. You'll need at least 15 GB of free space:

   ```terminal
   mkdir invokeai
   ```

3. Create a virtual environment named `.venv` inside this directory and activate it:

   ```terminal
   cd invokeai
   python -m venv .venv --prompt InvokeAI
   ```

4. Activate the virtual environment (do it every time you run InvokeAI):

   _For Linux/Mac users:_

   ```sh
   source .venv/bin/activate
   ```

   _For Windows users:_

   ```ps
   .venv\Scripts\activate
   ```

5. Install the InvokeAI module and its dependencies. Choose the command suited for your platform & GPU.

   _For Windows/Linux with an NVIDIA GPU:_

   ```terminal
   pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
   ```

   _For Linux with an AMD GPU:_

   ```sh
   pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.2
   ```

   _For Macintoshes, either Intel or M1/M2:_

   ```sh
   pip install InvokeAI --use-pep517
   ```

6. Configure InvokeAI and install a starting set of image generation models (you only need to do this once):

   ```terminal
   invokeai-configure
   ```

7. Launch the web server (do it every time you run InvokeAI):

   ```terminal
   invokeai --web
   ```

8. Point your browser to http://localhost:9090 to bring up the web interface.

9. Type `banana sushi` in the box on the top left and click `Invoke`.

Be sure to activate the virtual environment each time before re-launching InvokeAI, using `source .venv/bin/activate` or `.venv\Scripts\activate`.

### Detailed Installation Instructions
This fork is supported across Linux, Windows and Macintosh. Linux
users can use either an Nvidia-based card (with CUDA support) or an

@@ -80,13 +178,13 @@ AMD card (using the ROCm driver). For full installation and upgrade

instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/INSTALL_SOURCE/)
## Hardware Requirements

InvokeAI is supported across Linux, Windows and macOS. Linux
users can use either an Nvidia-based card (with CUDA support) or an
AMD card (using the ROCm driver).

### System

You will need one of the following:

@@ -98,11 +196,11 @@ We do not recommend the GTX 1650 or 1660 series video cards. They are

unable to run in half-precision mode and do not have sufficient VRAM
to render 512x512 images.

### Memory

- At least 12 GB Main Memory RAM.

### Disk

- At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.

@@ -152,7 +250,7 @@ Notes](https://github.com/invoke-ai/InvokeAI/releases) and the

Please check out our **[Q&A](https://invoke-ai.github.io/InvokeAI/help/TROUBLESHOOT/#faq)** to get solutions for common installation
problems and other issues.

## Contributing

Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so.
@@ -30,21 +30,31 @@ Installation](010_INSTALL_AUTOMATED.md), and in many cases will

already be installed (if, for example, you have used your system for
gaming):

* **Python**

  version 3.9 or 3.10 (3.11 is not recommended).

* **CUDA Tools**

  For those with _NVidia GPUs_, you will need to
  install the [CUDA toolkit and optionally the XFormers library](070_INSTALL_XFORMERS.md).

* **ROCm Tools**

  For _Linux users with AMD GPUs_, you will need
  to install the [ROCm toolkit](./030_INSTALL_CUDA_AND_ROCM.md). Note that
  InvokeAI does not support AMD GPUs on Windows systems due to
  lack of a Windows ROCm library.

* **Visual C++ Libraries**

  _Windows users_ must install the free
  [Visual C++ libraries from Microsoft](https://learn.microsoft.com/en-US/cpp/windows/latest-supported-vc-redist?view=msvc-170)

* **The Xcode command line tools**

  for _Macintosh users_. Instructions are available at
  [Free Code Camp](https://www.freecodecamp.org/news/install-xcode-command-line-tools/)

* _Macintosh users_ may also need to run the `Install Certificates` command
  if model downloads give lots of certificate errors. Run:
@@ -75,7 +85,7 @@ manager, please follow these steps:

    === "Linux/Mac"

        ```bash
        export INVOKEAI_ROOT=~/invokeai
        mkdir $INVOKEAI_ROOT
        ```
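The change above drops the quotes around `~/invokeai`: a quoted tilde is not expanded by the shell, so the variable would point at a literal directory named `~`. Python's `os.path.expanduser` mimics the shell's unquoted behavior, which makes the distinction easy to demonstrate:

```python
import os

# The quoted shell form (export INVOKEAI_ROOT="~/invokeai") stores the
# tilde literally; the unquoted form expands it to $HOME first.
literal = "~/invokeai"                   # what the quoted form stores
expanded = os.path.expanduser(literal)   # what the unquoted form produces

print(literal, "->", expanded)
```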
@@ -99,35 +109,30 @@ manager, please follow these steps:

    Windows environment variable using the Advanced System Settings dialogue.
    Refer to your operating system documentation for details.

    ```terminal
    cd $INVOKEAI_ROOT
    python -m venv .venv --prompt InvokeAI
    ```
4. Activate the new environment:

    === "Linux/Mac"

        ```bash
        source .venv/bin/activate
        ```

    === "Windows"

        ```ps
        .venv\Scripts\activate
        ```

    If you get a permissions error at this point, run this command and try again:

    `Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser`

    The command-line prompt should change to show `(InvokeAI)` at the
    beginning of the prompt. Note that all the following steps should be
    run while inside the INVOKEAI_ROOT directory.
@@ -137,38 +142,45 @@ manager, please follow these steps:

        python -m pip install --upgrade pip
        ```
6. Install the InvokeAI Package. The `--extra-index-url` option is used to select among
   CUDA, ROCm and CPU/MPS drivers as shown below:

    === "CUDA (NVidia)"

        ```bash
        pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
        ```

    === "ROCm (AMD)"

        ```bash
        pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.2
        ```

    === "CPU (Intel Macs & non-GPU systems)"

        ```bash
        pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu
        ```

    === "MPS (M1 and M2 Macs)"

        ```bash
        pip install InvokeAI --use-pep517
        ```

7. Deactivate and reactivate your runtime directory so that the invokeai-specific commands
   become available in the environment:

    === "Linux/Macintosh"

        ```bash
        deactivate && source .venv/bin/activate
        ```

    === "Windows"

        ```ps
        deactivate
        .venv\Scripts\activate
        ```
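The four tabs above differ only in the extra index URL handed to pip. As an illustrative sketch (not part of InvokeAI — the platform keys and helper name are assumptions), the choice reduces to a small lookup:

```python
PYTORCH_WHEEL_INDEX = {
    "cuda": "https://download.pytorch.org/whl/cu117",
    "rocm": "https://download.pytorch.org/whl/rocm5.2",
    "cpu": "https://download.pytorch.org/whl/cpu",
    "mps": None,  # M1/M2 Macs use the default PyPI wheels
}

def pip_install_command(platform: str, xformers: bool = False) -> str:
    """Build the pip command line for one of the platform tabs above."""
    package = "InvokeAI[xformers]" if xformers else "InvokeAI"
    cmd = f"pip install {package} --use-pep517"
    index = PYTORCH_WHEEL_INDEX[platform]
    if index is not None:
        cmd += f" --extra-index-url {index}"
    return cmd

print(pip_install_command("cuda", xformers=True))
```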
@@ -179,7 +191,7 @@ manager, please follow these steps:

    models, model config files, directory for textual inversion embeddings, and
    your outputs.

    ```terminal
    invokeai-configure
    ```
@@ -283,7 +295,6 @@ on your system, please see the [Git Installation
Guide](https://github.com/git-guides/install-git)

1. From the command line, run this command:

    ```bash
    git clone https://github.com/invoke-ai/InvokeAI.git
    ```
@@ -314,7 +325,7 @@ installation protocol (important!)

    === "MPS (M1 and M2 Macs)"

        ```bash
        pip install -e . --use-pep517
        ```

    Be sure to pass `-e` (for an editable install) and don't forget the
@@ -330,5 +341,29 @@ installation protocol (important!)

repository. You can then use GitHub functions to create and submit
pull requests to contribute improvements to the project.

Please see [Contributing](../index.md#contributing) for hints
on getting started.

### Unsupported Conda Install

Congratulations, you found the "secret" Conda installation
instructions. If you really **really** want to use Conda with InvokeAI
you can do so using this unsupported recipe:

```
mkdir ~/invokeai
conda create -n invokeai python=3.10
conda activate invokeai
pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu117
invokeai-configure --root ~/invokeai
invokeai --root ~/invokeai --web
```

The `pip install` command shown in this recipe is for Linux/Windows
systems with an NVIDIA GPU. See step (6) above for the command to use
with other platforms/GPU combinations. If you don't wish to pass the
`--root` argument to `invokeai` with each launch, you may set the
environment variable INVOKEAI_ROOT to point to the installation directory.

Note that if you run into problems with the Conda installation, the InvokeAI
staff will **not** be able to help you out. Caveat Emptor!
@@ -321,6 +321,7 @@ class Generate:

            codeformer_fidelity = None,
            save_original = False,
            upscale = None,
            upscale_denoise_str = 0.75,
            # this is specific to inpainting and causes more extreme inpainting
            inpaint_replace = 0.0,
            # This controls the size at which inpaint occurs (scaled up for inpaint, then back down for the result)
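The hunks in this region thread a new `upscale_denoise_str` keyword (default 0.75) from the top-level signature down to the ESRGAN call. A stripped-down, runnable sketch of that threading pattern — the classes here are stand-ins for illustration, not InvokeAI's real ones:

```python
class FakeUpscaler:
    """Stand-in for the real ESRGAN wrapper; it just records its arguments."""
    def process(self, image, strength, seed, scale, denoise_str=0.75):
        return {"scale": scale, "strength": strength, "denoise_str": denoise_str}

class Generate:
    def __init__(self):
        self.esrgan = FakeUpscaler()

    # The public entry point grows the new keyword with a default of 0.75...
    def prompt2image(self, image, upscale=None, upscale_denoise_str=0.75):
        return self.upscale_and_reconstruct(image, upscale=upscale,
                                            upscale_denoise_str=upscale_denoise_str)

    # ...and each intermediate layer forwards it unchanged to the upscaler.
    def upscale_and_reconstruct(self, image, upscale, upscale_denoise_str=0.75):
        if len(upscale) < 2:
            upscale.append(0.75)   # default strength when only a scale is given
        return self.esrgan.process(image, upscale[1], seed=42,
                                   scale=int(upscale[0]),
                                   denoise_str=upscale_denoise_str)

result = Generate().prompt2image("img", upscale=[2], upscale_denoise_str=0.5)
print(result)
```

Defaulting the keyword at every layer keeps existing callers working while letting the CLI override the denoise strength in one place.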
@@ -560,6 +561,7 @@ class Generate:

            if upscale is not None or facetool_strength > 0:
                self.upscale_and_reconstruct(results,
                                             upscale = upscale,
                                             upscale_denoise_str = upscale_denoise_str,
                                             facetool = facetool,
                                             strength = facetool_strength,
                                             codeformer_fidelity = codeformer_fidelity,
@@ -633,6 +635,7 @@ class Generate:

            facetool_strength = 0.0,
            codeformer_fidelity = 0.75,
            upscale = None,
            upscale_denoise_str = 0.75,
            out_direction = None,
            outcrop = [],
            save_original = True, # to get new name
@@ -684,6 +687,7 @@ class Generate:

            codeformer_fidelity = codeformer_fidelity,
            save_original = save_original,
            upscale = upscale,
            upscale_denoise_str = upscale_denoise_str,
            image_callback = callback,
            prefix = prefix,
        )
@@ -952,6 +956,7 @@ class Generate:

            image_list,
            facetool = 'gfpgan',
            upscale = None,
            upscale_denoise_str = 0.75,
            strength = 0.0,
            codeformer_fidelity = 0.75,
            save_original = False,
@@ -982,7 +987,7 @@ class Generate:

                    if len(upscale) < 2:
                        upscale.append(0.75)
                    image = self.esrgan.process(
                        image, upscale[1], seed, int(upscale[0]), denoise_str=upscale_denoise_str)
                else:
                    print(">> ESRGAN is disabled. Image not upscaled.")
        except Exception as e:
@@ -58,12 +58,9 @@ def main():

    print(f'>> Internet connectivity is {Globals.internet_available}')

    if not args.conf:
        config_file = os.path.join(Globals.root, 'configs', 'models.yaml')
        if not os.path.exists(config_file):
            report_model_error(opt, FileNotFoundError(f"The file {config_file} could not be found."))

    print(f'>> {ldm.invoke.__app_name__}, version {ldm.invoke.__version__}')
    print(f'>> InvokeAI runtime directory is "{Globals.root}"')
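The rewritten check above replaces commented-out error prints with a real `FileNotFoundError` handed to the error reporter. A self-contained sketch of the same pattern — the reporter is stubbed here, since the real `report_model_error` lives elsewhere in the CLI:

```python
import os

def report_model_error(opt, e):
    """Stub for the CLI's real error reporter."""
    print(f"** error loading model config: {e}")

def check_models_config(root: str, opt=None) -> bool:
    """Mirror the diff's logic: report a FileNotFoundError if models.yaml is absent."""
    config_file = os.path.join(root, 'configs', 'models.yaml')
    if not os.path.exists(config_file):
        report_model_error(opt, FileNotFoundError(f"The file {config_file} could not be found."))
        return False
    return True

ok = check_models_config('/nonexistent/invokeai')
```

Passing the exception object (rather than printing and exiting inline) lets one reporting path handle logging, user guidance, and cleanup.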
@@ -658,7 +655,9 @@ def import_ckpt_model(path_or_url: Union[Path, str], gen, opt, completer) -> Optional[str]:

        model_description=default_description
    )
    config_file = None
    default = Path(Globals.root, 'configs/stable-diffusion/v1-inpainting-inference.yaml') \
        if re.search('inpaint', default_name, flags=re.IGNORECASE) \
        else Path(Globals.root, 'configs/stable-diffusion/v1-inference.yaml')

    completer.complete_extensions(('.yaml', '.yml'))
    completer.set_line(str(default))
@@ -709,12 +708,21 @@ def _get_model_name_and_desc(model_manager, completer, model_name: str = '', model_description: str = ''):

    model_description = input(f'Description for this model [{model_description}]: ').strip() or model_description
    return model_name, model_description

def _is_inpainting(model_name_or_path: str) -> bool:
    if re.search('inpaint', model_name_or_path, flags=re.IGNORECASE):
        return not input('Is this an inpainting model? [y] ').startswith(('n', 'N'))
    else:
        return input('Is this an inpainting model? [n] ').startswith(('y', 'Y'))

def optimize_model(model_name_or_path: str, gen, opt, completer):
    manager = gen.model_manager
    ckpt_path = None
    original_config_file = None

    if model_name_or_path == gen.model_name:
        print("** Can't convert the active model. !switch to another model first. **")
        return
    elif (model_info := manager.model_info(model_name_or_path)):
        if 'weights' in model_info:
            ckpt_path = Path(model_info['weights'])
            original_config_file = Path(model_info['config'])
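The new `_is_inpainting` helper picks its prompt default from the model name: names containing "inpaint" default to yes, everything else to no (so only an explicit `y` answers yes there). A testable variant with the prompt function injected — the injection parameter is an addition for illustration, not part of the CLI:

```python
import re

def is_inpainting(model_name_or_path: str, ask=input) -> bool:
    """Ask whether a model is an inpainting model, defaulting by name."""
    if re.search('inpaint', model_name_or_path, flags=re.IGNORECASE):
        # Default yes: anything except an explicit 'n'/'N' counts as yes.
        return not ask('Is this an inpainting model? [y] ').startswith(('n', 'N'))
    # Default no: only an explicit 'y'/'Y' counts as yes.
    return ask('Is this an inpainting model? [n] ').startswith(('y', 'Y'))
```

Injecting `ask` keeps the interactive behavior identical while allowing the default-selection logic to be exercised without a terminal.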
@@ -731,7 +739,7 @@ def optimize_model(model_name_or_path: str, gen, opt, completer):

        ckpt_path.stem,
        f'Converted model {ckpt_path.stem}'
    )
    is_inpainting = _is_inpainting(model_name_or_path)
    original_config_file = Path(
        'configs',
        'stable-diffusion',
@@ -889,6 +897,7 @@ def do_postprocess(gen, opt, callback):

        codeformer_fidelity = opt.codeformer_fidelity,
        save_original = opt.save_original,
        upscale = opt.upscale,
        upscale_denoise_str = opt.esrgan_denoise_str,
        out_direction = opt.out_direction,
        outcrop = opt.outcrop,
        callback = callback,
@@ -950,7 +959,7 @@ def prepare_image_metadata(

        print(f"** The filename format contains an unknown key '{e.args[0]}'. Will use '{{prefix}}.{{seed}}.png' instead")
        filename = f'{prefix}.{seed}.png'
    except IndexError:
        print("** The filename format is broken or incomplete. Will use '{prefix}.{seed}.png' instead")
        filename = f'{prefix}.{seed}.png'

    if opt.variation_amount > 0:
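The fallback logic above can be sketched in isolation: try the user-supplied filename format, and fall back to `{prefix}.{seed}.png` on either an unknown key or a stray positional field. (Illustrative only; the function and variable names here are assumptions.)

```python
def format_filename(fmt: str, prefix: str, seed: int) -> str:
    """Apply a user-supplied filename format, falling back on malformed formats."""
    try:
        return fmt.format(prefix=prefix, seed=seed)
    except KeyError as e:
        print(f"** The filename format contains an unknown key '{e.args[0]}'. "
              "Will use '{prefix}.{seed}.png' instead")
    except IndexError:
        print("** The filename format is broken or incomplete. "
              "Will use '{prefix}.{seed}.png' instead")
    return f'{prefix}.{seed}.png'
```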