mirror of
https://github.com/invoke-ai/InvokeAI
synced 2024-08-30 20:32:17 +00:00
Update-docs (#1382)
* update IMG2IMG.md
* update INPAINTING.md
* update WEBUIHOTKEYS.md
* more doc updates (mostly fix formatting):
  - OUTPAINTING.md
  - POSTPROCESS.md
  - PROMPTS.md
  - VARIATIONS.md
  - WEB.md
  - WEBUIHOTKEYS.md
This commit is contained in:
parent fcdefa0620
commit 7b9a4564b1
@@ -6,10 +6,11 @@ title: Image-to-Image

## `img2img`

This script also provides an `img2img` feature that lets you seed your creations
with an initial drawing or photo. This is a really cool feature that tells
Stable Diffusion to build the prompt on top of the image you provide, preserving
the original's basic shape and layout. To use it, provide the `--init_img`
option as shown here:

```commandline
tree on a hill with a river, nature photograph, national geographic -I./test-pictures/tree-and-river-sketch.png -f 0.85
```
@@ -18,31 +19,33 @@ tree on a hill with a river, nature photograph, national geographic -I./test-pic

This will take the original image shown here:

<figure markdown>
![](https://user-images.githubusercontent.com/50542132/193946000-c42a96d8-5a74-4f8a-b4c3-5213e6cadcce.png)
</figure>

and generate a new image based on it as shown here:

<figure markdown>
![](https://user-images.githubusercontent.com/111189/194135515-53d4c060-e994-4016-8121-7c685e281ac9.png)
</figure>

The `--init_img` (`-I`) option gives the path to the seed picture. `--strength`
(`-f`) controls how much the original will be modified, ranging from `0.0` (keep
the original intact) to `1.0` (ignore the original completely). The default is
`0.75`, and values in the range `0.25-0.90` give interesting results. Other
relevant options include `-C` (classifier-free guidance scale) and `-s` (steps).
Unlike `txt2img`, adding steps will continuously change the resulting image and
it will not converge.
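To illustrate, here is a hypothetical invocation combining these options (the
sketch path is reused from the example above; the flag values are arbitrary
choices for the example, not recommendations from the text):

```commandline
tree on a hill with a river, nature photograph, national geographic -I./test-pictures/tree-and-river-sketch.png -f 0.5 -C 7.5 -s 50
```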
You may also pass a `-v<variation_amount>` option to generate `-n<iterations>`
count variants on the original image. This is done by passing the first
generated image back into img2img the requested number of times. It generates
interesting variants.
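For example, this hypothetical invocation (again reusing the sketch from above)
would ask for four variants at a variation amount of 0.2:

```commandline
tree on a hill with a river, nature photograph, national geographic -I./test-pictures/tree-and-river-sketch.png -v0.2 -n4
```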
Note that the prompt makes a big difference. For example, this slight variation
on the prompt produces a very different image:

<figure markdown>
![](https://user-images.githubusercontent.com/111189/194135220-16b62181-b60c-4248-8989-4834a8fd7fbd.png)
<caption markdown>photograph of a tree on a hill with a river</caption>
</figure>
@@ -52,27 +55,37 @@ a very different image:

be labeled "photograph" or "photorealistic." They will, however, be captioned
with the publication, photographer, camera model, or film settings.

If the initial image contains transparent regions, then Stable Diffusion will
only draw within the transparent regions, a process called
[`inpainting`](./INPAINTING.md#creating-transparent-regions-for-inpainting).
However, for this to work correctly, the color information underneath the
transparent regions needs to be preserved, not erased.
!!! warning

    **IMPORTANT ISSUE** `img2img` does not work properly on initial images smaller
    than 512x512. Please scale your image to at least 512x512 before using it.
    Larger images are not a problem, but may run out of VRAM on your GPU card. To
    fix this, use the `--fit` option, which downscales the initial image to fit
    within the box specified by width x height:

    ```
    tree on a hill with a river, national geographic -I./test-pictures/big-sketch.png -H512 -W512 --fit
    ```
## How does it actually work, though?

The main difference between `img2img` and `prompt2img` is the starting point.
While `prompt2img` always starts with pure gaussian noise and progressively
refines it over the requested number of steps, `img2img` skips some of these
earlier steps (how many it skips is indirectly controlled by the `--strength`
parameter), and uses instead your initial image mixed with gaussian noise as the
starting image.

**Let's start** by thinking about vanilla `prompt2img`, just generating an image
from a prompt. If the step count is 10, then the "latent space" (Stable
Diffusion's internal representation of the image) for the prompt "fire" with
seed `1592514025` develops something like this:

```commandline
invoke> "fire" -s10 -W384 -H384 -S1592514025
```
@@ -82,9 +95,16 @@ invoke> "fire" -s10 -W384 -H384 -S1592514025

<figure markdown>
![latent steps](../assets/img2img/000019.steps.png)
</figure>

Put simply: starting from a frame of fuzz/static, SD finds details in each frame
that it thinks look like "fire" and brings them a little bit more into focus,
gradually scrubbing out the fuzz until a clear image remains.
**When you use `img2img`** some of the earlier steps are cut, and instead an
initial image of your choice is used. But because of how the maths behind Stable
Diffusion works, this image needs to be mixed with just the right amount of
noise (fuzz/static) for where it is being inserted. This is where the strength
parameter comes in. Depending on the set strength, your image will be inserted
into the sequence at the appropriate point, with just the right amount of noise.

### A concrete example
@@ -94,7 +114,9 @@ I want SD to draw a fire based on this hand-drawn image:

<figure markdown>
![drawing of a fireplace](../assets/img2img/fire-drawing.png)
</figure>

Let's only do 10 steps, to make it easier to see what's happening. If strength
is `0.7`, this is what the internal steps the algorithm has to take will look
like:

<figure markdown>
![gravity32](../assets/img2img/000032.steps.gravity.png)
</figure>
@@ -106,31 +128,47 @@ With strength `0.4`, the steps look more like this:

<figure markdown>
![gravity30](../assets/img2img/000030.steps.gravity.png)
</figure>

Notice how much more fuzzy the starting image is for strength `0.7` compared to
`0.4`, and notice also how much longer the sequence is with `0.7`:

|                             | strength = 0.7 | strength = 0.4 |
| --------------------------- | -------------- | -------------- |
| initial image that SD sees  | ![](../assets/img2img/000032.step-0.png) | ![](../assets/img2img/000030.step-0.png) |
| steps argument to `invoke>` | `-s10` | `-s10` |
| steps actually taken        | 7 | 4 |
| latent space at each step   | ![gravity32](../assets/img2img/000032.steps.gravity.png) | ![gravity30](../assets/img2img/000030.steps.gravity.png) |
| output                      | ![000032.1592514025](../assets/img2img/000032.1592514025.png) | ![000030.1592514025](../assets/img2img/000030.1592514025.png) |

(The steps actually taken work out to roughly `strength × steps`: `0.7 × 10 = 7`
and `0.4 × 10 = 4`.)
Both of the outputs look kind of like what I was thinking of. With the strength
higher, my input becomes more vague, _and_ Stable Diffusion has more steps to
refine its output. But it's not really making what I want, which is a picture of
a cheery open fire. With the strength lower, my input is more clear, _but_
Stable Diffusion has less chance to refine itself, so the result ends up
inheriting all the problems of my bad drawing.
If you want to try this out yourself, all of these are using a seed of
`1592514025` with a width/height of `384`, step count `10`, the default sampler
(`k_lms`), and the single-word prompt `"fire"`:

```commandline
invoke> "fire" -s10 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png --strength 0.7
```

The code for rendering intermediates is on my (damian0815's) branch
[document-img2img](https://github.com/damian0815/InvokeAI/tree/document-img2img) -
run `invoke.py` and check your `outputs/img-samples/intermediates` folder while
generating an image.
### Compensating for the reduced step count

After putting this guide together I was curious to see what the difference would
be if I increased the step count to compensate, so that SD could have the same
amount of steps to develop the image regardless of the strength. So I ran the
generation again using the same seed, but this time adapting the step count to
give each generation 20 steps.

Here's strength `0.4` (note step count `50`, which is `20 ÷ 0.4` to make sure SD
does `20` steps from my image):

```commandline
invoke> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4
```
@@ -140,7 +178,8 @@ invoke> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4

<figure markdown>
![000035.1592514025](../assets/img2img/000035.1592514025.png)
</figure>

and here is strength `0.7` (note step count `30`, which is roughly `20 ÷ 0.7` to
make sure SD does `20` steps from my image):

```commandline
invoke> "fire" -s30 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.7
```
@@ -150,7 +189,11 @@ invoke> "fire" -s30 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.7

<figure markdown>
![000046.1592514025](../assets/img2img/000046.1592514025.png)
</figure>

In both cases the image is nice and clean and "finished", but because at
strength `0.7` Stable Diffusion has been given so much more freedom to improve
on my badly-drawn flames, they've come out looking much better. You can really
see the difference when looking at the latent steps. There's more noise on the
first image with strength `0.7`:

<figure markdown>
![gravity46](../assets/img2img/000046.steps.gravity.png)
</figure>
@@ -162,15 +205,19 @@ than there is for strength `0.4`:

<figure markdown>
![gravity35](../assets/img2img/000035.steps.gravity.png)
</figure>

and that extra noise gives the algorithm more choices when it is evaluating how
to denoise any particular pixel in the image.

Unfortunately, it seems that `img2img` is very sensitive to the step count.
Here's strength `0.7` with a step count of `29` (SD did 19 steps from my image):
<figure markdown>
![gravity45](../assets/img2img/000045.1592514025.png)
</figure>

By comparing the latents we can sort of see that something got interpreted
differently enough on the third or fourth step to lead to a rather different
interpretation of the flames.

<figure markdown>
![gravity46](../assets/img2img/000046.steps.gravity.png)
</figure>
@@ -180,4 +227,9 @@ By comparing the latents we can sort of see that something got interpreted diffe

<figure markdown>
![gravity45](../assets/img2img/000045.steps.gravity.png)
</figure>

This is the result of a difference in the de-noising "schedule" - basically the
noise has to be cleaned by a certain degree each step or the model won't
"converge" on the image properly (see
[stable diffusion blog](https://huggingface.co/blog/stable_diffusion) for more
about that). A different step count means a different schedule, which means
things get interpreted slightly differently at every step.
@@ -54,11 +54,9 @@ I got off the web.

You can selectively mask out the orange and replace it with a baseball in this
way:

```bash
invoke> a baseball -I /path/to/still_life.png -tm orange
```

<figure markdown>
![](../assets/still-life-inpainted.png)
</figure>
@@ -6,61 +6,63 @@ title: Outpainting

## Outpainting and outcropping

Outpainting is a process by which the AI generates parts of the image that are
outside its original frame. It can be used to fix up images in which the subject
is off center, or when some detail (often the top of someone's head!) is cut
off.

InvokeAI supports two versions of outpainting, one called "outpaint" and the
other "outcrop." They work slightly differently and each has its advantages and
drawbacks.
### Outpainting

Outpainting is the same as inpainting, except that the painting occurs in the
regions outside of the original image. To outpaint using the `invoke.py` command
line script, prepare an image in which the borders to be extended are pure
black. Add an alpha channel (if there isn't one already), and make the borders
completely transparent and the interior completely opaque. If you wish to modify
the interior as well, you may create transparent holes in the transparency
layer, which `img2img` will paint into as usual.

Pass the image as the argument to the `-I` switch as you would for regular
inpainting:

```bash
invoke> a stream by a river -I /path/to/transparent_img.png
```

You'll likely be delighted by the results.
### Tips

1. Do not try to expand the image too much at once. Generally it is best to
   expand the margins in 64-pixel increments. 128 pixels often works, but your
   mileage may vary depending on the nature of the image you are trying to
   outpaint into.

2. There are a series of switches that can be used to adjust how the inpainting
   algorithm operates. In particular, you can use these to minimize the seam
   that sometimes appears between the original image and the extended part.
   These switches are:

   | switch                     | default | description                                                             |
   | -------------------------- | ------- | ----------------------------------------------------------------------- |
   | `--seam_size SEAM_SIZE`    | `0`     | Size of the mask around the seam between original and outpainted image |
   | `--seam_blur SEAM_BLUR`    | `0`     | The amount to blur the seam inwards                                     |
   | `--seam_strength STRENGTH` | `0.7`   | The img2img strength to use when filling the seam                       |
   | `--seam_steps SEAM_STEPS`  | `10`    | The number of steps to use to fill the seam                             |
   | `--tile_size TILE_SIZE`    | `32`    | The tile size to use for filling outpaint areas                         |
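As an illustration only - this invocation is hypothetical; whether these
switches can be combined with `!fix`/`--outcrop` in your version, and which
values work best, are assumptions made for the example:

```bash
invoke> !fix images/curly.png --outcrop top 64 --seam_size 16 --seam_blur 8 --seam_strength 0.7 --seam_steps 20
```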
### Outcrop

The `outcrop` extension gives you a convenient `!fix` postprocessing command
that allows you to extend a previously-generated image in 64 pixel increments in
any direction. You can apply the module to any image previously generated by
InvokeAI. Note that it works with arbitrary PNG photographs, but not currently
with JPG or other formats. Outcropping is particularly effective when combined
with the
[runwayML custom inpainting model](INPAINTING.md#using-the-runwayml-inpainting-model).

Consider this image:
@@ -68,18 +70,17 @@ Consider this image:

<figure markdown>
![curly_woman](../assets/outpainting/curly.png)
</figure>

Pretty nice, but it's annoying that the top of her head is cut off. She's also a
bit off center. Let's fix that!

```bash
invoke> !fix images/curly.png --outcrop top 128 right 64 bottom 64
```

This is saying to apply the `outcrop` extension by extending the top of the
image by 128 pixels, and the right and bottom of the image by 64 pixels. You can
use any combination of top|left|right|bottom, and specify any number of pixels
to extend. You can also abbreviate `--outcrop` to `-c`.

The result looks like this:
@@ -87,39 +88,38 @@ The result looks like this:

<figure markdown>
![curly_woman_outcrop](../assets/outpainting/curly-outcrop-2.png)
</figure>

The new image is larger than the original (576x704) because pixels were added
to the top, right, and bottom sides. You will need enough VRAM to process an
image of this size.

A number of caveats:

1. Although you can specify any pixel values, they will be rounded up to the
   nearest multiple of 64. Smaller values are better. Larger extensions are more
   likely to generate artefacts. However, if you wish, you can run the `!fix`
   command repeatedly to cautiously expand the image.

2. The extension is stochastic, meaning that each time you run it you'll get a
   slightly different result. You can run it repeatedly until you get an image
   you like. Unfortunately `!fix` does not currently respect the `-n`
   (`--iterations`) argument.

3. Your results will be _much_ better if you use the `inpaint-1.5` model
   released by runwayML and installed by default by `scripts/preload_models.py`.
   This model was trained specifically to harmoniously fill in image gaps. The
   standard model will work as well, but you may notice color discontinuities at
   the border.

4. When using the `inpaint-1.5` model, you may notice subtle changes to the area
   within the original image. This is because the model performs an
   encoding/decoding on the image as a whole. This does not occur with the
   standard model.
## Outpaint

The `outpaint` extension does the same thing, but with subtle differences.
Starting with the same image, here is how we would add an additional 64 pixels
to the top of the image:

```bash
invoke> !fix images/curly.png --out_direction top 64
```
@@ -138,15 +138,15 @@ outcropping:

- You can only specify one direction to extend at a time.
- The image is **not** resized. Instead, the image is shifted by the specified
  number of pixels. If you look carefully, you'll see that less of the lady's
  torso is visible in the image.
- Because the image dimensions remain the same, there's no rounding to multiples
  of 64.
- Attempting to outpaint larger areas will frequently give rise to ugly ghosting
  effects.
- For best results, try increasing the step number.
- If you don't specify a pixel value in `-D`, it will default to half of the
  whole image, which is likely not what you want.

!!! tip
@@ -19,14 +19,14 @@ applied after the fact.

The default face restoration module is GFPGAN. The default upscale is
Real-ESRGAN. For an alternative face restoration module, see
[CodeFormer Support](#codeformer-support) below.

As of version 1.14, environment.yaml will install the Real-ESRGAN
package into the standard install location for python packages, and
will put GFPGAN into a subdirectory of "src" in the InvokeAI
directory. Upscaling with Real-ESRGAN should "just work" without
further intervention. Simply pass the `--upscale` (`-U`) option on the
`invoke>` command line, or indicate the desired scale on the popup in
the Web GUI.
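For example, a hypothetical invocation (the prompt is a made-up placeholder)
that requests a 2x upscale of the generated image:

```bash
invoke> "portrait of a woman, photorealistic" -U 2
```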
**GFPGAN** requires a series of downloadable model files to

@@ -6,14 +6,15 @@ title: Prompting-Features

## **Reading Prompts from a File**

You can automate `invoke.py` by providing a text file with the prompts you want
to run, one line per prompt. The text file must be composed with a text editor
(e.g. Notepad) and not a word processor. Each line should look like what you
would type at the `invoke>` prompt:

```bash
"a beautiful sunny day in the park, children playing" -n4 -C10
"stormy weather on a mountain top, goats grazing" -s100
"innovative packaging for a squid's dinner" -S137038382
```
Then pass this file's name to `invoke.py` when you invoke it:

@@ -22,7 +23,8 @@ Then pass this file's name to `invoke.py` when you invoke it:

```bash
(invokeai) ~/stable-diffusion$ python3 scripts/invoke.py --from_file "path/to/prompts.txt"
```

You may read a series of prompts from standard input by providing a filename of
`-`:

```bash
(invokeai) ~/stable-diffusion$ echo "a beautiful day" | python3 scripts/invoke.py --from_file -
```
@@ -32,14 +34,15 @@ You may read a series of prompts from standard input by providing a filename of

## **Negative and Unconditioned Prompts**

Any words between a pair of square brackets will instruct Stable Diffusion to
attempt to ban the concept from the generated image.

```text
this is a test prompt [not really] to make you understand [cool] how this works.
```

In the above statement, the words `not really cool` will be ignored by Stable
Diffusion.

Here's a prompt that depicts what it does.
@@ -51,7 +54,9 @@ original prompt:

<figure markdown>
![step1](../assets/negative_prompt_walkthru/step1.png)
</figure>

That image has a woman, so if we want the horse without a rider, we can
influence the image not to have a woman by putting [woman] in the prompt, like
this:

`#!bash "A fantastical translucent poney made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve [woman]" -s 20 -W 512 -H 768 -C 7.5 -A k_euler_a -S 1654590180`
@@ -59,7 +64,8 @@ That image has a woman, so if we want the horse without a rider, we can influenc

<figure markdown>
![step2](../assets/negative_prompt_walkthru/step2.png)
</figure>

That's nice - but say we also don't want the image to be quite so blue. We can
add "blue" to the list of negative prompts, so it's now [woman blue]:

`#!bash "A fantastical translucent poney made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve [woman blue]" -s 20 -W 512 -H 768 -C 7.5 -A k_euler_a -S 1654590180`
@@ -67,7 +73,8 @@ That's nice - but say we also don't want the image to be quite so blue. We can a

<figure markdown>
![step3](../assets/negative_prompt_walkthru/step3.png)
</figure>

Getting close - but there's no sense in having a saddle when our horse doesn't
have a rider, so we'll add one more negative prompt: [woman blue saddle].

`#!bash "A fantastical translucent poney made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve [woman blue saddle]" -s 20 -W 512 -H 768 -C 7.5 -A k_euler_a -S 1654590180`
@@ -88,91 +95,128 @@ Getting close - but there's no sense in having a saddle when our horse doesn't h

The InvokeAI prompting language has the following features:

### Attention weighting

Append a word or phrase with `-` or `+`, or a weight between `0` and `2`
(`1`=default), to decrease or increase "attention" (= a mix of per-token CFG
weighting multiplier and, for `-`, a weighted blend with the prompt without the
term).

The following syntax is recognised:

- single words without parentheses: `a tall thin man picking apricots+`
- single or multiple words with parentheses:
  `a tall thin man picking (apricots)+` `a tall thin man picking (apricots)-`
  `a tall thin man (picking apricots)+` `a tall thin man (picking apricots)-`
- more effect with more symbols: `a tall thin man (picking apricots)++`
- nesting: `a tall thin man (picking apricots+)++` (`apricots` effectively gets
  `+++`)
- all of the above with explicit numbers: `a tall thin man picking (apricots)1.1`
  `a tall thin man (picking (apricots)1.3)1.1`. (`+` is equivalent to 1.1, `++`
  is pow(1.1,2), `+++` is pow(1.1,3), etc; `-` means 0.9, `--` means pow(0.9,2),
  etc.)
- attention also applies to `[unconditioning]`, so
  `a tall thin man picking apricots [(ladder)0.01]` will _very gently_ nudge SD
  away from trying to draw the man on a ladder
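As a hypothetical invocation showing this syntax in context (the prompt and
settings here are invented for illustration, not taken from the text above):

```bash
invoke> "a tall thin man (picking apricots)1.3 in a (sunny)0.9 orchard" -s 20 -C 7.5
```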
You can use this to increase or decrease the amount of something. Starting from
this prompt of `a man picking apricots from a tree`, let's see what happens if
we increase and decrease how much attention we want Stable Diffusion to pay to
the word `apricots`:

![an AI generated image of a man picking apricots from a tree](../assets/prompt_syntax/apricots-0.png)
Using `-` to reduce apricot-ness:

| `a man picking apricots- from a tree` | `a man picking apricots-- from a tree` | `a man picking apricots--- from a tree` |
| --- | --- | --- |
| ![an AI generated image of a man picking apricots from a tree, with smaller apricots](../assets/prompt_syntax/apricots--1.png) | ![an AI generated image of a man picking apricots from a tree, with even smaller and fewer apricots](../assets/prompt_syntax/apricots--2.png) | ![an AI generated image of a man picking apricots from a tree, with very few very small apricots](../assets/prompt_syntax/apricots--3.png) |
Using `+` to increase apricot-ness:

| `a man picking apricots+ from a tree` | `a man picking apricots++ from a tree` | `a man picking apricots+++ from a tree` | `a man picking apricots++++ from a tree` | `a man picking apricots+++++ from a tree` |
| --- | --- | --- | --- | --- |
| ![an AI generated image of a man picking apricots from a tree, with larger, more vibrant apricots](../assets/prompt_syntax/apricots-1.png) | ![an AI generated image of a man picking apricots from a tree with even larger, even more vibrant apricots](../assets/prompt_syntax/apricots-2.png) | ![an AI generated image of a man picking apricots from a tree, but the man has been replaced by a pile of apricots](../assets/prompt_syntax/apricots-3.png) | ![an AI generated image of a man picking apricots from a tree, but the man has been replaced by a mound of giant melting-looking apricots](../assets/prompt_syntax/apricots-4.png) | ![an AI generated image of a man picking apricots from a tree, but the man and the leaves and parts of the ground have all been replaced by giant melting-looking apricots](../assets/prompt_syntax/apricots-5.png) |
You can also change the balance between different parts of a prompt. For
example, below is a `mountain man`:

![an AI generated image of a mountain man](../assets/prompt_syntax/mountain-man.png)

And here he is with more mountain:

| `mountain+ man` | `mountain++ man` | `mountain+++ man` |
| --- | --- | --- |
| ![](../assets/prompt_syntax/mountain1-man.png) | ![](../assets/prompt_syntax/mountain2-man.png) | ![](../assets/prompt_syntax/mountain3-man.png) |
Or, alternatively, with more man:

| `mountain man+` | `mountain man++` | `mountain man+++` | `mountain man++++` |
| --- | --- | --- | --- |
| ![](../assets/prompt_syntax/mountain-man1.png) | ![](../assets/prompt_syntax/mountain-man2.png) | ![](../assets/prompt_syntax/mountain-man3.png) | ![](../assets/prompt_syntax/mountain-man4.png) |
### Blending between prompts

- `("a tall thin man picking apricots", "a tall thin man picking pears").blend(1,1)`
- The existing prompt blending using `:<weight>` will continue to be supported -
  `("a tall thin man picking apricots", "a tall thin man picking pears").blend(1,1)`
  is equivalent to
  `a tall thin man picking apricots:1 a tall thin man picking pears:1` in the
  old syntax.
- Attention weights can be nested inside blends.
- Non-normalized blends are supported by passing `no_normalize` as an additional
  argument to the blend weights, e.g.
  `("a tall thin man picking apricots", "a tall thin man picking pears").blend(1,-1,no_normalize)`.
  Very fun to explore local maxima in the feature space, but also easy to
  produce garbage output.

See the section below on "Prompt Blending" for more information about how this
works.
### Cross-Attention Control ('prompt2prompt')

Sometimes an image you generate is almost right, and you just want to change one
detail without affecting the rest. You could use a photo editor and inpainting
to overpaint the area, but that's a pain. Here's where `prompt2prompt` comes in
handy.

Generate an image with a given prompt, record the seed of the image, and then
use the `prompt2prompt` syntax to substitute words in the original prompt for
words in a new prompt. This works for `img2img` as well.
* `a ("fluffy cat").swap("smiling dog") eating a hotdog`.
|
||||
* quotes optional: `a (fluffy cat).swap(smiling dog) eating a hotdog`.
|
||||
* for single word substitutions parentheses are also optional: `a cat.swap(dog) eating a hotdog`.
|
||||
* Supports options `s_start`, `s_end`, `t_start`, `t_end` (each 0-1) loosely corresponding to bloc97's `prompt_edit_spatial_start/_end` and `prompt_edit_tokens_start/_end` but with the math swapped to make it easier to intuitively understand.
|
||||
* Example usage:`a (cat).swap(dog, s_end=0.3) eating a hotdog` - the `s_end` argument means that the "spatial" (self-attention) edit will stop having any effect after 30% (=0.3) of the steps have been done, leaving Stable Diffusion with 70% of the steps where it is free to decide for itself how to reshape the cat-form into a dog form.
|
||||
* The numbers represent a percentage through the step sequence where the edits should happen. 0 means the start (noisy starting image), 1 is the end (final image).
|
||||
* For img2img, the step sequence does not start at 0 but instead at (1-strength) - so if strength is 0.7, s_start and s_end must both be greater than 0.3 (1-0.7) to have any effect.
|
||||
* Convenience option `shape_freedom` (0-1) to specify how much "freedom" Stable Diffusion should have to change the shape of the subject being swapped.
|
||||
* `a (cat).swap(dog, shape_freedom=0.5) eating a hotdog`.
|
||||
- `a ("fluffy cat").swap("smiling dog") eating a hotdog`.
|
||||
- quotes optional: `a (fluffy cat).swap(smiling dog) eating a hotdog`.
|
||||
- for single word substitutions parentheses are also optional:
|
||||
`a cat.swap(dog) eating a hotdog`.
|
||||
- Supports options `s_start`, `s_end`, `t_start`, `t_end` (each 0-1) loosely
|
||||
corresponding to bloc97's `prompt_edit_spatial_start/_end` and
|
||||
`prompt_edit_tokens_start/_end` but with the math swapped to make it easier to
|
||||
intuitively understand.
|
||||
- Example usage:`a (cat).swap(dog, s_end=0.3) eating a hotdog` - the `s_end`
|
||||
argument means that the "spatial" (self-attention) edit will stop having any
|
||||
effect after 30% (=0.3) of the steps have been done, leaving Stable
|
||||
Diffusion with 70% of the steps where it is free to decide for itself how to
|
||||
reshape the cat-form into a dog form.
|
||||
- The numbers represent a percentage through the step sequence where the edits
|
||||
should happen. 0 means the start (noisy starting image), 1 is the end (final
|
||||
image).
|
||||
- For img2img, the step sequence does not start at 0 but instead at
|
||||
(1-strength) - so if strength is 0.7, s_start and s_end must both be
|
||||
greater than 0.3 (1-0.7) to have any effect.
|
||||
- Convenience option `shape_freedom` (0-1) to specify how much "freedom" Stable
|
||||
Diffusion should have to change the shape of the subject being swapped.
|
||||
- `a (cat).swap(dog, shape_freedom=0.5) eating a hotdog`.
|
||||
|
||||
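Putting this together, a hypothetical session might look like the following (the
seed value is invented for the example; the point is that the second command
reuses the seed of the first):

```bash
invoke> a cat eating a hotdog -s 20 -S 1234554321
invoke> a (cat).swap(dog) eating a hotdog -s 20 -S 1234554321
```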
The `prompt2prompt` code is based off
[bloc97's colab](https://github.com/bloc97/CrossAttentionControl).

Note that `prompt2prompt` is not currently working with the runwayML inpainting
model, and may never work due to the way this model is set up. If you attempt to
use `prompt2prompt` you will get the original image back. However, since this
model is so good at inpainting, a good substitute is to use the `clipseg` text
masking option:

```bash
invoke> a fluffy cat eating a hotdog
Outputs:
[1010] outputs/000025.2182095108.png: a fluffy cat eating a hotdog
```
@@ -181,94 +225,96 @@ invoke> a smiling dog eating a hotdog -I 000025.2182095108.png -tm cat

### Escaping parentheses () and speech marks ""

If the model you are using has parentheses () or speech marks "" as part of its
syntax, you will need to "escape" these using a backslash, so that `(my_keyword)`
becomes `\(my_keyword\)`. Otherwise, the prompt parser will attempt to interpret
the parentheses as part of the prompt syntax and it will get confused.
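For instance, a hypothetical prompt (invented for illustration) that keeps
literal parentheses in the text:

```bash
invoke> a \(very large\) dog eating a hotdog
```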
---
## **Prompt Blending**

You may blend together different sections of the prompt to explore the AI's
latent semantic space and generate interesting (and often surprising!)
variations. The syntax is:

```bash
blue sphere:0.25 red cube:0.75 hybrid
```

This will tell the sampler to blend 25% of the concept of a blue sphere with 75%
of the concept of a red cube. The blend weights can use any combination of
integers and floating point numbers, and they do not need to add up to 1.
Everything to the left of the `:XX` up to the previous `:XX` is used for
merging, so the overall effect is:

```bash
0.25 * "blue sphere" + 0.75 * "red cube" + hybrid
```
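As a further illustration - a hypothetical line, assuming the weights are simply
normalized - the same 25%/75% blend could be written with integer weights:

```bash
blue sphere:1 red cube:3 hybrid
```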
Because you are exploring the "mind" of the AI, the AI's way of mixing two
concepts may not match yours, leading to surprising effects. To illustrate, here
are three images generated using various combinations of blend weights. As
usual, unless you fix the seed, the prompts will give you different results each
time you run them.

<figure markdown>
### "blue sphere, red cube, hybrid"
</figure>
This example doesn't use melding at all and represents the default way of mixing
concepts.

<figure markdown>
![blue-sphere-red-cube-hybrid](../assets/prompt-blending/blue-sphere-red-cube-hybrid.png)
</figure>

It's interesting to see how the AI expressed the concept of "cube" as the four
quadrants of the enclosing frame. If you look closely, there is depth there, so
the enclosing frame is actually a cube.
<figure markdown>
### "blue sphere:0.25 red cube:0.75 hybrid"

![blue-sphere-25-red-cube-75](../assets/prompt-blending/blue-sphere-0.25-red-cube-0.75-hybrid.png)

</figure>

Now that's interesting. We get neither a blue sphere nor a red cube, but a red
sphere embedded in a brick wall, which represents a melding of concepts within
the AI's "latent space" of semantic representations. Where is Ludwig
Wittgenstein when you need him?
<figure markdown>
|
||||
### "blue sphere:0.75 red cube:0.25 hybrid"
|
||||
|
||||
![blue-sphere-75-red-cube-25](../assets/prompt-blending/blue-sphere-0.75-red-cube-0.25-hybrid.png)
|
||||
|
||||
</figure>
|
||||
|
||||
Definitely more blue-spherey. The cube is gone entirely, but it's
|
||||
really cool abstract art.
|
||||
Definitely more blue-spherey. The cube is gone entirely, but it's really cool
|
||||
abstract art.
|
||||
|
||||
<figure markdown>
|
||||
### "blue sphere:0.5 red cube:0.5 hybrid"
|
||||
|
||||
![blue-sphere-5-red-cube-5-hybrid](../assets/prompt-blending/blue-sphere-0.5-red-cube-0.5-hybrid.png)
|
||||
|
||||
</figure>
|
||||
|
||||
Whoa...! I see blue and red, but no spheres or cubes. Is the word
|
||||
"hybrid" summoning up the concept of some sort of scifi creature?
|
||||
Let's find out.
|
||||
Whoa...! I see blue and red, but no spheres or cubes. Is the word "hybrid"
|
||||
summoning up the concept of some sort of scifi creature? Let's find out.
|
||||
|
||||
<figure markdown>
|
||||
### "blue sphere:0.5 red cube:0.5"
|
||||
|
||||
![blue-sphere-5-red-cube-5](../assets/prompt-blending/blue-sphere-0.5-red-cube-0.5.png)
|
||||
|
||||
</figure>
|
||||
|
||||
Indeed, removing the word "hybrid" produces an image that is more like
|
||||
what we'd expect.
|
||||
Indeed, removing the word "hybrid" produces an image that is more like what we'd
|
||||
expect.
|
||||
|
||||
In conclusion, prompt blending is great for exploring creative space,
|
||||
but can be difficult to direct. A forthcoming release of InvokeAI will
|
||||
feature more deterministic prompt weighting.
|
||||
In conclusion, prompt blending is great for exploring creative space, but can be
|
||||
difficult to direct. A forthcoming release of InvokeAI will feature more
|
||||
deterministic prompt weighting.

@ -16,12 +16,10 @@ You are able to do the following:

2. Given two or more variations that you like, you can combine them in a
weighted fashion.

---
!!! Information ""

This cheat sheet provides a quick guide for how this works in practice, using
variations to create the desired image of Xena, Warrior Princess.

---
This cheat sheet provides a quick guide for how this works in practice, using
variations to create the desired image of Xena, Warrior Princess.

## Step 1 -- Find a base image that you like

@ -4,56 +4,55 @@ title: InvokeAI Web Server

# :material-web: InvokeAI Web Server

As of version 2.0.0, this distribution comes with a full-featured web
server (see screenshot). To use it, run the `invoke.py` script by
adding the `--web` option:
As of version 2.0.0, this distribution comes with a full-featured web server
(see screenshot). To use it, run the `invoke.py` script by adding the `--web`
option:

```bash
(invokeai) ~/InvokeAI$ python3 scripts/invoke.py --web
```

You can then connect to the server by pointing your web browser at
http://localhost:9090. To reach the server from a different machine on
your LAN, you may launch the web server with the `--host` argument and
either the IP address of the host you are running it on, or the
wildcard `0.0.0.0`. For example:
http://localhost:9090. To reach the server from a different machine on your LAN,
you may launch the web server with the `--host` argument and either the IP
address of the host you are running it on, or the wildcard `0.0.0.0`. For
example:

```bash
(invokeai) ~/InvokeAI$ python3 scripts/invoke.py --web --host 0.0.0.0
```

# Quick guided walkthrough of the WebGUI's features
## Quick guided walkthrough of the WebGUI's features

While most of the WebGUI's features are intuitive, here is a guided
walkthrough of its various components.
While most of the WebGUI's features are intuitive, here is a guided walkthrough
of its various components.

![Invoke Web Server - Major Components](../assets/invoke-web-server-1.png){:width="640px"}

The screenshot above shows the Text to Image tab of the WebGUI. There
are three main sections:
The screenshot above shows the Text to Image tab of the WebGUI. There are three
main sections:

1. A **control panel** on the left, which contains various settings
for text to image generation. The most important part is the text
field (currently showing `strawberry sushi`) for entering the text
prompt, and the camera icon directly underneath that will render the
image. We'll call this the *Invoke* button from now on.
1. A **control panel** on the left, which contains various settings for text to
image generation. The most important part is the text field (currently
showing `strawberry sushi`) for entering the text prompt, and the camera icon
directly underneath that will render the image. We'll call this the _Invoke_
button from now on.

2. The **current image** section in the middle, which shows a large
format version of the image you are currently working on. A series of
buttons at the top ("image to image", "Use All", "Use Seed", etc.) lets
you modify the image in various ways.
2. The **current image** section in the middle, which shows a large format
version of the image you are currently working on. A series of buttons at the
top ("image to image", "Use All", "Use Seed", etc.) lets you modify the image
in various ways.

3. A **gallery** section on the right that contains a history of the
images you have generated. These images are read and written to the
directory specified at launch time in `--outdir`.
3. A **gallery** section on the right that contains a history of the images you
have generated. These images are read and written to the directory specified
at launch time in `--outdir`.

In addition to these three elements, there is a series of icons for
changing global settings, reporting bugs, and changing the theme on
the upper right.
In addition to these three elements, there is a series of icons for changing
global settings, reporting bugs, and changing the theme on the upper right.

There is also a series of icons to the left of the control panel (see the
highlighted area in the screenshot below) which select among a series
of tabs for performing different types of operations.
highlighted area in the screenshot below) which select among a series of tabs
for performing different types of operations.

<figure markdown>
![Invoke Web Server - Control Panel](../assets/invoke-web-server-2.png){:width="512px"}
@ -61,174 +60,167 @@ of tabs for performing different types of operations.

From top to bottom, these are:

1. Text to Image - generate images from text
2. Image to Image - from an uploaded starting image (drawing or photograph) generate a new one, modified by the text prompt
3. Inpainting (pending) - Interactively erase portions of a starting image and have the AI fill in the erased region from a text prompt.
4. Outpainting (pending) - Interactively add blank space to the borders of a starting image and fill in the background from a text prompt.
5. Postprocessing (pending) - Interactively postprocess generated images using a variety of filters.
1. Text to Image - generate images from text
2. Image to Image - from an uploaded starting image (drawing or photograph)
generate a new one, modified by the text prompt
3. Inpainting (pending) - Interactively erase portions of a starting image and
have the AI fill in the erased region from a text prompt.
4. Outpainting (pending) - Interactively add blank space to the borders of a
starting image and fill in the background from a text prompt.
5. Postprocessing (pending) - Interactively postprocess generated images using a
variety of filters.

The inpainting, outpainting and postprocessing tabs are currently in
development. However, limited versions of their features can already
be accessed through the Text to Image and Image to Image tabs.
development. However, limited versions of their features can already be accessed
through the Text to Image and Image to Image tabs.

## Walkthrough

The following walkthrough will exercise most (but not all) of the
WebGUI's feature set.
The following walkthrough will exercise most (but not all) of the WebGUI's
feature set.

### Text to Image

1. Launch the WebGUI using `python scripts/invoke.py --web` and
connect to it with your browser by accessing
`http://localhost:9090`. If the browser and server are running on
different machines on your LAN, add the option `--host 0.0.0.0` to the
launch command line and connect to the machine hosting the web server
using its IP address or domain name.
1. Launch the WebGUI using `python scripts/invoke.py --web` and connect to it
with your browser by accessing `http://localhost:9090`. If the browser and
server are running on different machines on your LAN, add the option
`--host 0.0.0.0` to the launch command line and connect to the machine
hosting the web server using its IP address or domain name.

2. If all goes well, the WebGUI should come up and you'll see a green
`connected` message on the upper right.
`connected` message on the upper right.

#### Basics

1. Generate an image by typing *strawberry sushi* into the large
prompt field on the upper left and then clicking on the Invoke button
(the one with the Camera icon). After a short wait, you'll see a large
image of sushi in the image panel, and a new thumbnail in the gallery
on the right.
1. Generate an image by typing _strawberry sushi_ into the large prompt field
on the upper left and then clicking on the Invoke button (the one with the
Camera icon). After a short wait, you'll see a large image of sushi in the
image panel, and a new thumbnail in the gallery on the right.

If you need more room on the screen, you can turn the gallery off
by clicking on the **x** to the right of "Your Invocations". You can
turn it back on later by clicking the image icon that appears in the
gallery's place.
If you need more room on the screen, you can turn the gallery off by
clicking on the **x** to the right of "Your Invocations". You can turn it
back on later by clicking the image icon that appears in the gallery's
place.

The images are written into the directory indicated by the `--outdir`
option provided at script launch time. By default, this is
`outputs/img-samples` under the InvokeAI directory.
The images are written into the directory indicated by the `--outdir` option
provided at script launch time. By default, this is `outputs/img-samples`
under the InvokeAI directory.

2. Generate a bunch of strawberry sushi images by increasing the
number of requested images by adjusting the Images counter just below
the Camera button. As each is generated, it will be added to the
gallery. You can switch the active image by clicking on the gallery
thumbnails.
2. Generate a bunch of strawberry sushi images by increasing the number of
requested images by adjusting the Images counter just below the Camera
button. As each is generated, it will be added to the gallery. You can
switch the active image by clicking on the gallery thumbnails.

3. Try playing with different settings, including image width and
height, the Sampler, the Steps and the CFG scale.
3. Try playing with different settings, including image width and height, the
Sampler, the Steps and the CFG scale.

Image *Width* and *Height* do what you'd expect. However, be aware that
Image _Width_ and _Height_ do what you'd expect. However, be aware that
larger images consume more VRAM and take longer to generate.

The *Sampler* controls how the AI selects the image to display. Some
samplers are more "creative" than others and will produce a wider
range of variations (see next section). Some samplers run faster than
others.
The _Sampler_ controls how the AI selects the image to display. Some
samplers are more "creative" than others and will produce a wider range of
variations (see next section). Some samplers run faster than others.

*Steps* controls how many noising/denoising/sampling steps the AI will
take. The higher this value, the more refined the image will be, but
the longer the image will take to generate. A typical strategy is to
generate images with a low number of steps in order to select one to
work on further, and then regenerate it using a higher number of
steps.
_Steps_ controls how many noising/denoising/sampling steps the AI will take.
The higher this value, the more refined the image will be, but the longer
the image will take to generate. A typical strategy is to generate images
with a low number of steps in order to select one to work on further, and
then regenerate it using a higher number of steps.

The *CFG Scale* controls how hard the AI tries to match the generated
image to the input prompt. You can go as high or low as you like, but
generally values greater than 20 won't improve things much, and values
lower than 5 will produce unexpected images. There are complex
interactions between *Steps*, *CFG Scale* and the *Sampler*, so
experiment to find out what works for you.
The _CFG Scale_ controls how hard the AI tries to match the generated image
to the input prompt. You can go as high or low as you like, but generally
values greater than 20 won't improve things much, and values lower than 5
will produce unexpected images. There are complex interactions between
_Steps_, _CFG Scale_ and the _Sampler_, so experiment to find out what works
for you.

6. To regenerate a previously-generated image, select the image you
want and click *Use All*. This loads the text prompt and other
original settings into the control panel. If you then press *Invoke*
it will regenerate the image exactly. You can also selectively modify
the prompt or other settings to tweak the image.
4. To regenerate a previously-generated image, select the image you want and
click _Use All_. This loads the text prompt and other original settings into
the control panel. If you then press _Invoke_ it will regenerate the image
exactly. You can also selectively modify the prompt or other settings to
tweak the image.

Alternatively, you may click on *Use Seed* to load just the image's
seed, and leave other settings unchanged.
Alternatively, you may click on _Use Seed_ to load just the image's seed,
and leave other settings unchanged.

7. To regenerate a Stable Diffusion image that was generated by
another SD package, you need to know its text prompt and its
*Seed*. Copy-paste the prompt into the prompt box, unset the
*Randomize Seed* control in the control panel, and copy-paste the
desired *Seed* into its text field. When you Invoke, you will get
something similar to the original image. It will not be exact unless
you also set the correct values for the original sampler, CFG,
steps and dimensions, but it will (usually) be close.
5. To regenerate a Stable Diffusion image that was generated by another SD
package, you need to know its text prompt and its _Seed_. Copy-paste the
prompt into the prompt box, unset the _Randomize Seed_ control in the
control panel, and copy-paste the desired _Seed_ into its text field. When
you Invoke, you will get something similar to the original image. It will
not be exact unless you also set the correct values for the original
sampler, CFG, steps and dimensions, but it will (usually) be close.
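
For reference, the same values can be pinned at the command line; this is a
sketch that assumes the `-S` (seed) and `-W`/`-H` (dimensions) switches
described in the [Command Line Interface](CLI.md) documentation, with
placeholder values:

```bash
invoke> "blue sphere, red cube, hybrid" -S 1234567 -s 50 -C 7.5 -W 512 -H 512
```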

#### Variations on a theme

1. Let's try generating some variations. Select your favorite sushi
image from the gallery to load it. Then select "Use All" from the list
of buttons above. This will load up all the settings used to generate
this image, including its unique seed.
1. Let's try generating some variations. Select your favorite sushi image from
the gallery to load it. Then select "Use All" from the list of buttons
above. This will load up all the settings used to generate this image,
including its unique seed.

Go down to the Variations section of the Control Panel and set the
button to On. Set Variation Amount to 0.2 to generate a modest
number of variations on the image, and also set the Image counter to
`4`. Press the `invoke` button. This will generate a series of related
images. To obtain smaller variations, just lower the Variation
Amount. You may also experiment with changing the Sampler. Some
samplers generate more variability than others. *k_euler_a* is
particularly creative, while *ddim* is pretty conservative.
Go down to the Variations section of the Control Panel and set the button to
On. Set Variation Amount to 0.2 to generate a modest number of variations on
the image, and also set the Image counter to `4`. Press the `invoke` button.
This will generate a series of related images. To obtain smaller variations,
just lower the Variation Amount. You may also experiment with changing the
Sampler. Some samplers generate more variability than others. _k_euler_a_ is
particularly creative, while _ddim_ is pretty conservative.
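
The command-line equivalent of this experiment uses the `-v` (variation
amount) and `-n` (iteration count) switches; the seed value below is only a
placeholder:

```bash
invoke> strawberry sushi -S 1234567 -v 0.2 -n 4
```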

2. For even more variations, experiment with increasing the setting
for *Perlin*. This adds a bit of noise to the image generation
process. Note that values of Perlin noise greater than 0.15 produce
poor images for several of the samplers.
2. For even more variations, experiment with increasing the setting for
_Perlin_. This adds a bit of noise to the image generation process. Note
that values of Perlin noise greater than 0.15 produce poor images for
several of the samplers.

#### Facial reconstruction and upscaling

Stable Diffusion frequently produces mangled faces, particularly when
there are multiple figures in the same scene. Stable Diffusion has
particular issues with generating realistic eyes. InvokeAI provides
the ability to reconstruct faces using either the GFPGAN or CodeFormer
libraries. For more information see [POSTPROCESS](POSTPROCESS.md).

1. Invoke a prompt that generates a mangled face. A prompt that often
gives this is "portrait of a lawyer, 3/4 shot" (this is not intended
as a slur against lawyers!) Once you have an image that needs some
touching up, load it into the Image panel, and press the button with
the face icon (highlighted in the first screenshot below). A dialog
box will appear. Leave *Strength* at 0.8 and press *Restore Faces*. If
all goes well, the eyes and other aspects of the face will be improved
(see the second screenshot).
Stable Diffusion frequently produces mangled faces, particularly when there are
multiple figures in the same scene. Stable Diffusion has particular issues with
generating realistic eyes. InvokeAI provides the ability to reconstruct faces
using either the GFPGAN or CodeFormer libraries. For more information see
[POSTPROCESS](POSTPROCESS.md).

1. Invoke a prompt that generates a mangled face. A prompt that often gives
this is "portrait of a lawyer, 3/4 shot" (this is not intended as a slur
against lawyers!) Once you have an image that needs some touching up, load
it into the Image panel, and press the button with the face icon
(highlighted in the first screenshot below). A dialog box will appear. Leave
_Strength_ at 0.8 and press _Restore Faces_. If all goes well, the eyes and
other aspects of the face will be improved (see the second screenshot).

![Invoke Web Server - Original Image](../assets/invoke-web-server-3.png)

![Invoke Web Server - Retouched Image](../assets/invoke-web-server-4.png)

The facial reconstruction *Strength* field adjusts how aggressively
the face library will try to alter the face. It can be as high as 1.0,
but be aware that this often softens the face to an airbrushed style,
losing some details. The default 0.8 is usually sufficient.
The facial reconstruction _Strength_ field adjusts how aggressively the face
library will try to alter the face. It can be as high as 1.0, but be aware
that this often softens the face to an airbrushed style, losing some
details. The default 0.8 is usually sufficient.

2. "Upscaling" is the process of increasing the size of an image while
retaining the sharpness. InvokeAI uses an external library called
"ESRGAN" to do this. To invoke upscaling, simply select an image and
press the *HD* button above it. You can select between 2X and 4X
upscaling, and adjust the upscaling strength, which has much the same
meaning as in facial reconstruction. Try running this on one of your
previously-generated images.
2. "Upscaling" is the process of increasing the size of an image while
retaining the sharpness. InvokeAI uses an external library called "ESRGAN"
to do this. To invoke upscaling, simply select an image and press the _HD_
button above it. You can select between 2X and 4X upscaling, and adjust the
upscaling strength, which has much the same meaning as in facial
reconstruction. Try running this on one of your previously-generated images.

3. Finally, you can run facial reconstruction and/or upscaling
automatically after each Invocation. Go to the Advanced Options
section of the Control Panel and turn on *Restore Face* and/or
*Upscale*.
3. Finally, you can run facial reconstruction and/or upscaling automatically
after each Invocation. Go to the Advanced Options section of the Control
Panel and turn on _Restore Face_ and/or _Upscale_.
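
For comparison, the CLI exposes the same postprocessing through command-line
switches; this is a sketch assuming the `-G` (facial reconstruction strength)
and `-U` (upscaling level and strength) options described in
[POSTPROCESS](POSTPROCESS.md):

```bash
invoke> portrait of a lawyer, 3/4 shot -G 0.8 -U 2 0.75
```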

### Image to Image

InvokeAI lets you take an existing image and use it as the basis for a
new creation. You can use any sort of image, including a photograph, a
scanned sketch, or a digital drawing, as long as it is in PNG or JPEG
format.
InvokeAI lets you take an existing image and use it as the basis for a new
creation. You can use any sort of image, including a photograph, a scanned
sketch, or a digital drawing, as long as it is in PNG or JPEG format.

For this tutorial, we'll use files named
[Lincoln-and-Parrot-512.png](../assets/Lincoln-and-Parrot-512.png),
and
[Lincoln-and-Parrot-512.png](../assets/Lincoln-and-Parrot-512.png), and
[Lincoln-and-Parrot-512-transparent.png](../assets/Lincoln-and-Parrot-512-transparent.png).
Download these images to your local machine now to continue with the walkthrough.
Download these images to your local machine now to continue with the
walkthrough.

1. Click on the *Image to Image* tab icon, which is the second icon
from the top on the left-hand side of the screen:
1. Click on the _Image to Image_ tab icon, which is the second icon from the
top on the left-hand side of the screen:

<figure markdown>
![Invoke Web Server - Image to Image Icon](../assets/invoke-web-server-5.png)
@ -240,93 +232,92 @@ from the top on the left-hand side of the screen:
![Invoke Web Server - Image to Image Tab](../assets/invoke-web-server-6.png){:width="640px"}
</figure>

2. Drag-and-drop the Lincoln-and-Parrot image into the Image panel, or
click the blank area to get an upload dialog. The image will load into
an area marked *Initial Image*. (The WebGUI will also load the most
recently-generated image from the gallery into a section on the left,
but this image will be replaced in the next step.)
2. Drag-and-drop the Lincoln-and-Parrot image into the Image panel, or click
the blank area to get an upload dialog. The image will load into an area
marked _Initial Image_. (The WebGUI will also load the most
recently-generated image from the gallery into a section on the left, but
this image will be replaced in the next step.)

3. Go to the prompt box and type *old sea captain with raven on
shoulder* and press Invoke. A derived image will appear to the right
of the original one:
3. Go to the prompt box and type _old sea captain with raven on shoulder_ and
press Invoke. A derived image will appear to the right of the original one:

![Invoke Web Server - Image to Image example](../assets/invoke-web-server-7.png){:width="640px"}

4. Experiment with the different settings. The most influential one
in Image to Image is *Image to Image Strength* located about midway
down the control panel. By default it is set to 0.75, but can range
from 0.0 to 0.99. The higher the value, the more of the original image
the AI will replace. A value of 0 will leave the initial image
completely unchanged, while 0.99 will replace it completely. However,
the Sampler and CFG Scale also influence the final result. You can
also generate variations in the same way as described in Text to
Image.
4. Experiment with the different settings. The most influential one in Image to
Image is _Image to Image Strength_ located about midway down the control
panel. By default it is set to 0.75, but can range from 0.0 to 0.99. The
higher the value, the more of the original image the AI will replace. A
value of 0 will leave the initial image completely unchanged, while 0.99
will replace it completely. However, the Sampler and CFG Scale also
influence the final result. You can also generate variations in the same way
as described in Text to Image.
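
The command-line form of the same experiment uses the `-I` (initial image) and
`-f` (strength) switches covered in the Image-to-Image documentation:

```bash
invoke> old sea captain with raven on shoulder -I ./Lincoln-and-Parrot-512.png -f 0.75
```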

5. What if we only want to change certain part(s) of the image and
leave the rest intact? This is called Inpainting, and a future version
of the InvokeAI web server will provide an interactive painting canvas
on which you can directly draw the areas you wish to Inpaint into. For
now, you can achieve this effect by using an external photo editor tool
to make one or more regions of the image transparent as described in
[INPAINTING.md](INPAINTING.md) and uploading that.
5. What if we only want to change certain part(s) of the image and leave the
rest intact? This is called Inpainting, and a future version of the InvokeAI
web server will provide an interactive painting canvas on which you can
directly draw the areas you wish to Inpaint into. For now, you can achieve
this effect by using an external photo editor tool to make one or more
regions of the image transparent as described in
[INPAINTING.md](INPAINTING.md) and uploading that.

The file
[Lincoln-and-Parrot-512-transparent.png](../assets/Lincoln-and-Parrot-512-transparent.png)
is a version of the earlier image in which the area around the parrot
has been replaced with transparency. Click on the "x" in the upper
right of the Initial Image and upload the transparent version. Using
the same prompt "old sea captain with raven on shoulder" try Invoking
an image. This time, only the parrot will be replaced, leaving the
rest of the original image intact:
is a version of the earlier image in which the area around the parrot has
been replaced with transparency. Click on the "x" in the upper right of the
Initial Image and upload the transparent version. Using the same prompt "old
sea captain with raven on shoulder" try Invoking an image. This time, only
the parrot will be replaced, leaving the rest of the original image intact:

<figure markdown>
![Invoke Web Server - Inpainting](../assets/invoke-web-server-8.png){:width="640px"}
</figure>
<figure markdown>
![Invoke Web Server - Inpainting](../assets/invoke-web-server-8.png){:width="640px"}
</figure>

6. Would you like to modify a previously-generated image using the
Image to Image facility? Easy! While in the Image to Image panel,
hover over any of the gallery images to see a little menu of icons pop
up. Click the picture icon to instantly send the selected image to
Image to Image as the initial image.
6. Would you like to modify a previously-generated image using the Image to
Image facility? Easy! While in the Image to Image panel, hover over any of
the gallery images to see a little menu of icons pop up. Click the picture
icon to instantly send the selected image to Image to Image as the initial
image.

You can do the same from the Text to Image tab by clicking on the
picture icon above the central image panel. The screenshot below
shows where the "use as initial image" icons are located.
You can do the same from the Text to Image tab by clicking on the picture icon
above the central image panel. The screenshot below shows where the "use as
initial image" icons are located.

![Invoke Web Server - Use as Image Links](../assets/invoke-web-server-9.png){:width="640px"}

## Parting remarks

This concludes the walkthrough, but there are several more features that you
can explore. Please check out the [Command Line Interface](CLI.md)
documentation for further explanation of the advanced features that
were not covered here.
This concludes the walkthrough, but there are several more features that you can
explore. Please check out the [Command Line Interface](CLI.md) documentation for
further explanation of the advanced features that were not covered here.

The WebGUI is under rapid development. Check back regularly for
updates!
The WebGUI is under rapid development. Check back regularly for updates!

## Reference

### Additional Options

parameter <img width=160 align="right"> | effect
-- | --
`--web_develop` | Starts the web server in development mode.
`--web_verbose` | Enables verbose logging
`--cors [CORS ...]` | Additional allowed origins, comma-separated
`--host HOST` | Web server: Host or IP to listen on. Set to 0.0.0.0 to accept traffic from other devices on your network.
`--port PORT` | Web server: Port to listen on
`--gui` | Start InvokeAI GUI - This is the "desktop mode" version of the web app. It uses Flask to create a desktop app experience of the webserver.
| parameter <img width=160 align="right"> | effect |
| --------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
| `--web_develop` | Starts the web server in development mode. |
| `--web_verbose` | Enables verbose logging |
| `--cors [CORS ...]` | Additional allowed origins, comma-separated |
| `--host HOST` | Web server: Host or IP to listen on. Set to 0.0.0.0 to accept traffic from other devices on your network. |
| `--port PORT` | Web server: Port to listen on |
| `--gui` | Start InvokeAI GUI - This is the "desktop mode" version of the web app. It uses Flask to create a desktop app experience of the webserver. |
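
As a worked example of combining these options, a LAN-visible server on a
custom port with one extra allowed origin might be launched as follows (the
port and origin values are placeholders):

```bash
(invokeai) ~/InvokeAI$ python3 scripts/invoke.py --web --host 0.0.0.0 --port 9091 --cors http://localhost:3000
```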

### Web Specific Features

The web experience offers an incredibly easy-to-use way of interacting with the InvokeAI toolkit.
For detailed guidance on individual features, see the Feature-specific help documents available in this directory.
Note that the latest functionality available in the CLI may not always be available in the Web interface.
The web experience offers an incredibly easy-to-use way of interacting
with the InvokeAI toolkit. For detailed guidance on individual features, see the
Feature-specific help documents available in this directory. Note that the
latest functionality available in the CLI may not always be available in the Web
interface.

#### Dark Mode & Light Mode

The InvokeAI interface is available in a nano-carbon black & purple Dark Mode, and a "burn your eyes out Nosferatu" Light Mode. These can be toggled by clicking the Sun/Moon icons at the top right of the interface.
The InvokeAI interface is available in a nano-carbon black & purple Dark Mode,
and a "burn your eyes out Nosferatu" Light Mode. These can be toggled by
clicking the Sun/Moon icons at the top right of the interface.

![InvokeAI Web Server - Dark Mode](../assets/invoke_web_dark.png)

@ -334,7 +325,10 @@ The InvokeAI interface is available in a nano-carbon black & purple Dark Mode, a

#### Invocation Toolbar

The left side of the InvokeAI interface is available for customizing the prompt and the settings used for invoking your new image. Typing your prompt into the open text field and clicking the Invoke button will produce the image based on the settings configured in the toolbar.
The left side of the InvokeAI interface is available for customizing the prompt
and the settings used for invoking your new image. Typing your prompt into the
open text field and clicking the Invoke button will produce the image based on
the settings configured in the toolbar.

See below for additional documentation related to each feature:

@ -347,11 +341,17 @@ See below for additional documentation related to each feature:

#### Invocation Gallery

The currently selected `--outdir` (or the default outputs folder) will display all previously generated files on load. As new invocations are generated, these will be dynamically added to the gallery, and can be previewed by selecting them. Each image also has a simple set of actions (e.g., Delete, Use Seed, Use All Parameters, etc.) that can be accessed by hovering over the image.
The currently selected `--outdir` (or the default outputs folder) will display all
previously generated files on load. As new invocations are generated, these will
be dynamically added to the gallery, and can be previewed by selecting them.
Each image also has a simple set of actions (e.g., Delete, Use Seed, Use All
Parameters, etc.) that can be accessed by hovering over the image.
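
For example, to have the gallery track a different output directory, the server
can be launched with the `--outdir` option mentioned above (the path here is a
placeholder):

```bash
(invokeai) ~/InvokeAI$ python3 scripts/invoke.py --web --outdir /tmp/invoke-outputs
```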

#### Image Workspace

When an image from the Invocation Gallery is selected, or is generated, the image will be displayed within the center of the interface. A quickbar of common image interactions is displayed along the top of the image, including:
When an image from the Invocation Gallery is selected, or is generated, the
image will be displayed within the center of the interface. A quickbar of common
image interactions is displayed along the top of the image, including:

- Use image in the `Image to Image` workflow
- Initialize Face Restoration on the selected file

@ -361,9 +361,9 @@ When an image from the Invocation Gallery is selected, or is generated, the imag

## Acknowledgements

A huge shout-out to the core team working to make this vision a
reality, including
[psychedelicious](https://github.com/psychedelicious),
A huge shout-out to the core team working to make this vision a reality,
including [psychedelicious](https://github.com/psychedelicious),
[Kyle0654](https://github.com/Kyle0654) and
[blessedcoolant](https://github.com/blessedcoolant). [hipsterusername](https://github.com/hipsterusername)
was the team's unofficial cheerleader and added tooltips/docs.
[blessedcoolant](https://github.com/blessedcoolant).
[hipsterusername](https://github.com/hipsterusername) was the team's unofficial
cheerleader and added tooltips/docs.

@ -1,58 +1,62 @@
---
title: WebUI Hotkey List
---

# **WebUI Hotkey List**

## General

| Setting | Hotkey |
| ------------ | ---------------------- |
| a | Set All Parameters |
| s | Set Seed |
| u | Upscale |
| r | Restoration |
| i | Show Metadata |
| Ddl | Delete Image |
| alt + a | Focus prompt input |
| shift + i | Send To Image to Image |
| ctrl + enter | Start processing |
| shift + x | Cancel Processing |
| shift + d | Toggle Dark Mode |
| ` | Toggle console |
| Setting | Hotkey |
| ----------------- | ---------------------- |
| ++a++ | Set All Parameters |
| ++s++ | Set Seed |
| ++u++ | Upscale |
| ++r++ | Restoration |
| ++i++ | Show Metadata |
| ++d++ ++d++ ++l++ | Delete Image |
| ++alt+a++ | Focus prompt input |
| ++shift+i++ | Send To Image to Image |
| ++ctrl+enter++ | Start processing |
| ++shift+x++ | Cancel Processing |
| ++shift+d++ | Toggle Dark Mode |
| ` | Toggle console |

## Tabs

| Setting | Hotkey |
| ------- | ------------------------- |
| 1 | Go to Text To Image Tab |
| 2 | Go to Image to Image Tab |
| 3 | Go to Inpainting Tab |
| 4 | Go to Outpainting Tab |
| 5 | Go to Nodes Tab |
| 6 | Go to Post Processing Tab |
| ++1++ | Go to Text To Image Tab |
| ++2++ | Go to Image to Image Tab |
| ++3++ | Go to Inpainting Tab |
| ++4++ | Go to Outpainting Tab |
| ++5++ | Go to Nodes Tab |
| ++6++ | Go to Post Processing Tab |

## Gallery

| Setting | Hotkey |
| ------------ | ------------------------------- |
| g | Toggle Gallery |
| left arrow | Go to previous image in gallery |
| right arrow | Go to next image in gallery |
| shift + p | Pin gallery |
| shift + up | Increase gallery image size |
| shift + down | Decrease gallery image size |
| shift + r | Reset image gallery size |
| Setting | Hotkey |
| -------------- | ------------------------------- |
| ++g++ | Toggle Gallery |
| ++left++ | Go to previous image in gallery |
| ++right++ | Go to next image in gallery |
| ++shift+p++ | Pin gallery |
| ++shift+up++ | Increase gallery image size |
| ++shift+down++ | Decrease gallery image size |
| ++shift+r++ | Reset image gallery size |

## Inpainting

| Setting | Hotkey |
| -------------------------- | --------------------- |
| [ | Decrease brush size |
| ] | Increase brush size |
| alt + [ | Decrease mask opacity |
| alt + ] | Increase mask opacity |
| b | Select brush |
| e | Select eraser |
| ctrl + z | Undo brush stroke |
| ctrl + shift + z, ctrl + y | Redo brush stroke |
| h | Hide mask |
| shift + m | Invert mask |
| shift + c | Clear mask |
| shift + j | Expand canvas |
| Setting | Hotkey |
| ---------------------------- | --------------------- |
| [ | Decrease brush size |
| ] | Increase brush size |
| alt + [ | Decrease mask opacity |
| alt + ] | Increase mask opacity |
| ++b++ | Select brush |
| ++e++ | Select eraser |
| ++ctrl+z++ | Undo brush stroke |
| ++ctrl+shift+z++, ++ctrl+y++ | Redo brush stroke |
| ++h++ | Hide mask |
| ++shift+m++ | Invert mask |
| ++shift+c++ | Clear mask |
| ++shift+j++ | Expand canvas |