Merge branch 'main' into feat/refactor_generation_backend

psychedelicious committed 2023-08-14 13:01:28 +10:00 (via GitHub)
commit 46a8eed33e
11 changed files with 38 additions and 119 deletions

Binary file not shown (before: 310 KiB, after: 297 KiB).


@@ -4,35 +4,13 @@ title: Postprocessing
 
 # :material-image-edit: Postprocessing
 
-## Intro
-
-This extension provides the ability to restore faces and upscale images.
+This section details the ability to improve faces and upscale images.
 
 ## Face Fixing
 
-The default face restoration module is GFPGAN. The default upscale is
-Real-ESRGAN. For an alternative face restoration module, see
-[CodeFormer Support](#codeformer-support) below.
-
-As of version 1.14, environment.yaml will install the Real-ESRGAN package into
-the standard install location for python packages, and will put GFPGAN into a
-subdirectory of "src" in the InvokeAI directory. Upscaling with Real-ESRGAN
-should "just work" without further intervention. Simply indicate the desired scale on
-the popup in the Web GUI.
-
-**GFPGAN** requires a series of downloadable model files to work. These are
-loaded when you run `invokeai-configure`. If GFPAN is failing with an
-error, please run the following from the InvokeAI directory:
-
-```bash
-invokeai-configure
-```
-
-If you do not run this script in advance, the GFPGAN module will attempt to
-download the models files the first time you try to perform facial
-reconstruction.
-
-### Upscaling
+As of InvokeAI 3.0, the easiest way to improve faces created during image generation is through the Inpainting functionality of the Unified Canvas. Simply add the image containing the faces that you would like to improve to the canvas, mask the face to be improved and run the invocation. For best results, make sure to use an inpainting specific model; these are usually identified by the "-inpainting" term in the model name.
+
+## Upscaling
 
 Open the upscaling dialog by clicking on the "expand" icon located
 above the image display area in the Web UI:
@@ -41,82 +19,23 @@ above the image display area in the Web UI:
 ![upscale1](../assets/features/upscale-dialog.png)
 </figure>
 
-There are three different upscaling parameters that you can
-adjust. The first is the scale itself, either 2x or 4x.
-
-The second is the "Denoising Strength." Higher values will smooth out
-the image and remove digital chatter, but may lose fine detail at
-higher values.
-
-Third, "Upscale Strength" allows you to adjust how the You can set the
-scaling stength between `0` and `1.0` to control the intensity of the
-scaling. AI upscalers generally tend to smooth out texture details. If
-you wish to retain some of those for natural looking results, we
-recommend using values between `0.5 to 0.8`.
-
-[This figure](../assets/features/upscaling-montage.png) illustrates
-the effects of denoising and strength. The original image was 512x512,
-4x scaled to 2048x2048. The "original" version on the upper left was
-scaled using simple pixel averaging. The remainder use the ESRGAN
-upscaling algorithm at different levels of denoising and strength.
-
-<figure markdown>
-![upscaling](../assets/features/upscaling-montage.png){ width=720 }
-</figure>
-
-Both denoising and strength default to 0.75.
-
-### Face Restoration
-
-InvokeAI offers alternative two face restoration algorithms,
-[GFPGAN](https://github.com/TencentARC/GFPGAN) and
-[CodeFormer](https://huggingface.co/spaces/sczhou/CodeFormer). These
-algorithms improve the appearance of faces, particularly eyes and
-mouths. Issues with faces are less common with the latest set of
-Stable Diffusion models than with the original 1.4 release, but the
-restoration algorithms can still make a noticeable improvement in
-certain cases. You can also apply restoration to old photographs you
-upload.
-
-To access face restoration, click the "smiley face" icon in the
-toolbar above the InvokeAI image panel. You will be presented with a
-dialog that offers a choice between the two algorithm and sliders that
-allow you to adjust their parameters. Alternatively, you may open the
-left-hand accordion panel labeled "Face Restoration" and have the
-restoration algorithm of your choice applied to generated images
-automatically.
-
-Like upscaling, there are a number of parameters that adjust the face
-restoration output. GFPGAN has a single parameter, `strength`, which
-controls how much the algorithm is allowed to adjust the
-image. CodeFormer has two parameters, `strength`, and `fidelity`,
-which together control the quality of the output image as described in
-the [CodeFormer project
-page](https://shangchenzhou.com/projects/CodeFormer/). Default values
-are 0.75 for both parameters, which achieves a reasonable balance
-between changing the image too much and not enough.
-
-[This figure](../assets/features/restoration-montage.png) illustrates
-the effects of adjusting GFPGAN and CodeFormer parameters.
-
-<figure markdown>
-![upscaling](../assets/features/restoration-montage.png){ width=720 }
-</figure>
+The default upscaling option is Real-ESRGAN x2 Plus, which will scale your image by a factor of two. This means upscaling a 512x512 image will result in a new 1024x1024 image.
+
+Other options are the x4 upscalers, which will scale your image by a factor of 4.
 
 !!! note
-    GFPGAN and Real-ESRGAN are both memory intensive. In order to avoid crashes and memory overloads
+    Real-ESRGAN is memory intensive. In order to avoid crashes and memory overloads
     during the Stable Diffusion process, these effects are applied after Stable Diffusion has completed
     its work.
 
     In single image generations, you will see the output right away but when you are using multiple
-    iterations, the images will first be generated and then upscaled and face restored after that
+    iterations, the images will first be generated and then upscaled after that
     process is complete. While the image generation is taking place, you will still be able to preview
     the base images.
 
 ## How to disable
 
-If, for some reason, you do not wish to load the GFPGAN and/or ESRGAN libraries,
-you can disable them on the invoke.py command line with the `--no_restore` and
-`--no_esrgan` options, respectively.
+If, for some reason, you do not wish to load the ESRGAN libraries,
+you can disable them on the invoke.py command line with the `--no_esrgan` option.
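
The updated upscaling section above works with fixed scale factors: x2 for Real-ESRGAN x2 Plus and x4 for the other upscalers. A trivial sketch of the resulting output dimensions, purely illustrative and not InvokeAI code:

```python
# Output size for the fixed upscale factors described in the docs above.
# Illustrative only; not part of the InvokeAI codebase.
def upscaled_size(width: int, height: int, factor: int) -> tuple[int, int]:
    return width * factor, height * factor


assert upscaled_size(512, 512, 2) == (1024, 1024)  # Real-ESRGAN x2 Plus
assert upscaled_size(512, 512, 4) == (2048, 2048)  # x4 upscalers
```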


@@ -775,7 +775,7 @@ def main():
     if not config.model_conf_path.exists():
         logger.info("Your InvokeAI root directory is not set up. Calling invokeai-configure.")
-        from invokeai.frontend.install import invokeai_configure
+        from invokeai.frontend.install.invokeai_configure import invokeai_configure
         invokeai_configure()
         sys.exit(0)
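
The import change in this hunk swaps a package-level import for a fully qualified one. In Python, `from package import name` can bind a submodule rather than a function when the two share a name and the package's `__init__` does not re-export the callable. A minimal, self-contained sketch of that failure mode, using a hypothetical `demo_pkg` (the layout is assumed, not copied from InvokeAI):

```python
# Build a throwaway package whose submodule and function share a name, then
# show why the fully qualified import in this diff is the safer form.
# Package name and layout are hypothetical.
import sys
import tempfile
import types
from pathlib import Path

root = Path(tempfile.mkdtemp())
pkg = root / "demo_pkg"
pkg.mkdir()
(pkg / "__init__.py").write_text("")  # does NOT re-export the function
(pkg / "invokeai_configure.py").write_text(
    "def invokeai_configure():\n    return 'configured'\n"
)
sys.path.insert(0, str(root))

# Package-level form: the name resolves to the submodule, which is not callable.
from demo_pkg import invokeai_configure as maybe_callable
print(isinstance(maybe_callable, types.ModuleType))  # True

# Fully qualified form (as in this commit): the name resolves to the function.
from demo_pkg.invokeai_configure import invokeai_configure
print(invokeai_configure())  # configured
```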


@@ -1,4 +1,4 @@
-import{B as m,g7 as Je,A as y,a5 as Ka,g8 as Xa,af as va,aj as d,g9 as b,ga as t,gb as Ya,gc as h,gd as ua,ge as Ja,gf as Qa,aL as Za,gg as et,ad as rt,gh as at}from"./index-815faab3.js";import{s as fa,n as o,t as tt,o as ha,p as ot,q as ma,v as ga,w as ya,x as it,y as Sa,z as pa,A as xr,B as nt,D as lt,E as st,F as xa,G as $a,H as ka,J as dt,K as _a,L as ct,M as bt,N as vt,O as ut,Q as wa,R as ft,S as ht,T as mt,U as gt,V as yt,W as St,e as pt,X as xt}from"./menu-e9f8a36e.js";var za=String.raw,Ca=za`
+import{B as m,g7 as Je,A as y,a5 as Ka,g8 as Xa,af as va,aj as d,g9 as b,ga as t,gb as Ya,gc as h,gd as ua,ge as Ja,gf as Qa,aL as Za,gg as et,ad as rt,gh as at}from"./index-deaa1f26.js";import{s as fa,n as o,t as tt,o as ha,p as ot,q as ma,v as ga,w as ya,x as it,y as Sa,z as pa,A as xr,B as nt,D as lt,E as st,F as xa,G as $a,H as ka,J as dt,K as _a,L as ct,M as bt,N as vt,O as ut,Q as wa,R as ft,S as ht,T as mt,U as gt,V as yt,W as St,e as pt,X as xt}from"./menu-b4489359.js";var za=String.raw,Ca=za`
 :root,
 :host {
   --chakra-vh: 100vh;


@@ -12,7 +12,7 @@
       margin: 0;
     }
   </style>
-  <script type="module" crossorigin src="./assets/index-815faab3.js"></script>
+  <script type="module" crossorigin src="./assets/index-deaa1f26.js"></script>
 </head>
 <body dir="ltr">


@@ -20,7 +20,7 @@ export const addStagingAreaImageSavedListener = () => {
       // we may need to add it to the autoadd board
       const { autoAddBoardId } = getState().gallery;
-      if (autoAddBoardId) {
+      if (autoAddBoardId && autoAddBoardId !== 'none') {
         await dispatch(
           imagesApi.endpoints.addImageToBoard.initiate({
             imageDTO: newImageDTO,
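
The frontend change in this hunk tightens a truthiness check: the gallery state apparently stores the string 'none' as a sentinel for "no auto-add board", and a non-empty string is truthy, so the old `if (autoAddBoardId)` guard still fired for it. A small sketch of the corrected guard logic, transposed to Python for illustration (names are illustrative, not the actual frontend code):

```python
# Sentinel-aware guard mirroring the TypeScript change above:
# a plain truthiness test is not enough when 'none' is a valid stored value.
from typing import Optional


def should_auto_add(auto_add_board_id: Optional[str]) -> bool:
    return bool(auto_add_board_id) and auto_add_board_id != "none"


assert should_auto_add("some-board-id") is True  # real board selected
assert should_auto_add("none") is False          # sentinel: no auto-add board
assert should_auto_add(None) is False            # nothing selected at all
```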


@@ -1 +1 @@
-__version__ = "3.0.2"
+__version__ = "3.0.2post1"
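
The version bump appends a post-release tag rather than a new patch number. A quick check of how PEP 440 treats the new string (requires the `packaging` library; not part of the diff):

```python
# "3.0.2post1" normalizes to the post-release "3.0.2.post1", which sorts
# after the base 3.0.2 release, so installers treat it as an upgrade.
from packaging.version import Version

assert Version("3.0.2post1") == Version("3.0.2.post1")
assert Version("3.0.2post1") > Version("3.0.2")
print(Version("3.0.2post1"))  # 3.0.2.post1
```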


@@ -2,7 +2,7 @@
 # Copyright (c) 2022 Lincoln D. Stein (https://github.com/lstein)
 import warnings
 
-from invokeai.frontend.install import invokeai_configure as configure
+from invokeai.frontend.install.invokeai_configure import invokeai_configure as configure
 
 if __name__ == "__main__":
     warnings.warn("configure_invokeai.py is deprecated, running 'invokeai-configure'...", DeprecationWarning)
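
This last file keeps the deprecated `configure_invokeai.py` entry point as a thin shim: warn, then call the real function. A minimal sketch of the same pattern with stand-in names (the callable here is hypothetical, not InvokeAI's):

```python
# Deprecation-shim pattern: emit a DeprecationWarning, then delegate.
# Warnings raised from code running in __main__ are shown by default
# (Python 3.7+), so users of the old script still see the notice.
import warnings


def new_entry_point() -> None:  # stand-in for the real configure function
    print("running the real configuration step")


if __name__ == "__main__":
    warnings.warn(
        "old_script.py is deprecated, running the new entry point...",
        DeprecationWarning,
    )
    new_entry_point()
```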