# Merge branch 'main' into development

This commit syncs documentation and code, renaming the `dream` command-line script and its `dream>` prompt to `invoke` throughout.
# :octicons-log-16: Changelog

## v1.13

- Supports a Google Colab notebook for a standalone server running on Google
  hardware [Arturo Mendivil](https://github.com/artmen1516)
  [Kevin Gibbons](https://github.com/bakkot)
- WebUI supports incremental display of in-progress images during generation
  [Kevin Gibbons](https://github.com/bakkot)
- Output directory can be specified on the invoke> command line.
- The grid was displaying duplicated images when not enough images to fill the
  final row [Muhammad Usama](https://github.com/SMUsamaShah)
- Can specify --grid on invoke.py command line as the default.
- Miscellaneous internal bug and stability fixes.

---
- Improved file handling, including ability to read prompts from standard input
  (kudos to [Yunsaki](https://github.com/yunsaki)).
- The web server is now integrated with the invoke.py script. Invoke by adding
  --web to the invoke.py command arguments.
- Face restoration and upscaling via GFPGAN and Real-ESRGAN are now automatically
  enabled if the GFPGAN directory is located as a sibling to Stable Diffusion.
  VRAM requirements are modestly reduced. Thanks to both
  [Blessedcoolant](https://github.com/blessedcoolant) and
  [Oceanswave](https://github.com/oceanswave) for their work on this.
- You can now swap samplers on the invoke> command line.
  [Blessedcoolant](https://github.com/blessedcoolant)

---
  back to the previous command, but will work on all images generated with the
  -n# switch.
- Variant generation support temporarily disabled pending more general solution.
- Created a feature branch named **yunsaki-morphing-invoke** which adds
  experimental support for iteratively modifying the prompt and its parameters.
  Please see [Pull Request #86](https://github.com/lstein/stable-diffusion/pull/86) for
## v1.08 <small>(24 August 2022)</small>

- Escape single quotes on the invoke> command before trying to parse. This avoids
  parse errors.
- Removed instruction to get Python3.8 as first step in Windows install.
  Anaconda3 does it for you.

  can be regenerated with the indicated key

- It should no longer be possible for one image to overwrite another
- You can use the "cd" and "pwd" commands at the invoke> prompt to set and
  retrieve the path of the output directory.

## v1.04 <small>(22 August 2022 - after the drop)</small>

- added k_lms sampling. **Please run "conda env update -f environment.yaml" to
  load the k_lms dependencies!!**
- use half precision arithmetic by default, resulting in faster execution and
  lower memory requirements. Pass argument --full_precision to invoke.py to get
  slower but more accurate image generation
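A minimal sketch of that launch flag (assuming the script lives in `scripts/`, as in the examples later in this document):

```bash
# Disable the half-precision default; slower and more memory-hungry, slightly more accurate
python3 scripts/invoke.py --full_precision
```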
---

## **Interactive Command Line Interface**

The `invoke.py` script, located in `scripts/invoke.py`, provides an interactive
interface to image generation similar to the "invoke mothership" bot that Stable
AI provided on its Discord server.

Unlike the `txt2img.py` and `img2img.py` scripts provided in the original

The script is confirmed to work on Linux, Windows and Mac systems. It is
currently rudimentary, but a much better replacement is on its way.
```bash
(ldm) ~/stable-diffusion$ python3 ./scripts/invoke.py
* Initializing, be patient...
Loading model from models/ldm/text2img-large/model.ckpt
(...more initialization messages...)

* Initialization done! Awaiting your command...
invoke> ashley judd riding a camel -n2 -s150
Outputs:
outputs/img-samples/00009.png: "ashley judd riding a camel" -n2 -s150 -S 416354203
outputs/img-samples/00010.png: "ashley judd riding a camel" -n2 -s150 -S 1362479620

invoke> "there's a fly in my soup" -n6 -g
outputs/img-samples/00011.png: "there's a fly in my soup" -n6 -g -S 2685670268
seeds for individual rows: [2685670268, 1216708065, 2335773498, 822223658, 714542046, 3395302430]
invoke> q

# this shows how to retrieve the prompt stored in the saved image's metadata
(ldm) ~/stable-diffusion$ python ./scripts/images2prompt.py outputs/img_samples/*.png
00011.png: "there's a fly in my soup" -n6 -g -S 2685670268
```

![invoke-py-demo](../assets/dream-py-demo.png)

The `invoke>` prompt's arguments are pretty much identical to those used in the
Discord bot, except you don't need to type "!invoke" (it doesn't hurt if you do).
A significant change is that creation of individual images is now the default
unless `--grid` (`-g`) is given. A full list is given in
[List of prompt arguments](#list-of-prompt-arguments).
### List of arguments recognized at the command line

These command-line arguments can be passed to `invoke.py` when you first run it
from the Windows, Mac or Linux command line. Some set defaults that can be
overridden on a per-prompt basis (see [List of prompt arguments](#list-of-prompt-arguments)). Others

These arguments are deprecated but still work:

| Argument | Shortcut | Default | Description |
|----------|----------|---------|-------------|
| --laion400m | -l | False | Use older LAION400m weights; use `--model=laion400m` instead |
**A note on path names:** On Windows systems, you may run into
problems when passing the invoke script standard backslashed path
names because the Python interpreter treats "\" as an escape.
You can either double your slashes (ick): C:\\\\path\\\\to\\\\my\\\\file, or
use Linux/Mac style forward slashes (better): C:/path/to/my/file.
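As a quick sketch of the two styles at the `invoke>` prompt (the prompt and file name here are hypothetical):

```bash
# Doubled backslashes survive Python's escape processing
invoke> a red barn in winter -I C:\\users\\fred\\sketches\\barn.png

# Forward slashes work too, and are easier to read
invoke> a red barn in winter -I C:/users/fred/sketches/barn.png
```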
## List of prompt arguments

After the invoke.py script initializes, it will present you with an
**invoke>** prompt. Here you can enter information to generate images
from text (txt2img), to embellish an existing image or sketch
(img2img), or to selectively alter chosen regions of the image
(inpainting).
### This is an example of txt2img:

~~~~
invoke> waterfall and rainbow -W640 -H480
~~~~

This will create the requested image with the dimensions 640 (width)
and 480 (height).

Here are the invoke> commands that apply to txt2img:

| Argument | Shortcut | Default | Description |
|--------------------|------------|---------------------|--------------|
### This is an example of img2img:

~~~~
invoke> waterfall and rainbow -I./vacation-photo.png -W640 -H480 --fit
~~~~

This will modify the indicated vacation photograph by making it more
### This is an example of inpainting:

~~~~
invoke> waterfall and rainbow -I./vacation-photo.png -M./vacation-mask.png -W640 -H480 --fit
~~~~

This will do the same thing as img2img, but image alterations will
Some examples:

Upscale to 4X its original size and fix faces using codeformer:

~~~
invoke> !fix 0000045.4829112.png -G1 -U4 -ft codeformer
~~~

Use the GFPGAN algorithm to fix faces, then upscale to 3X using --embiggen:

~~~
invoke> !fix 0000045.4829112.png -G0.8 -ft gfpgan
>> fixing outputs/img-samples/0000045.4829112.png
>> retrieved seed 4829112 and prompt "boy enjoying a banana split"
>> GFPGAN - Restoring Faces for image seed:4829112
Outputs:
[1] outputs/img-samples/000017.4829112.gfpgan-00.png: !fix "outputs/img-samples/0000045.4829112.png" -s 50 -S -W 512 -H 512 -C 7.5 -A k_lms -G 0.8

invoke> !fix 000017.4829112.gfpgan-00.png --embiggen 3
...lots of text...
Outputs:
[2] outputs/img-samples/000018.2273800735.embiggen-00.png: !fix "outputs/img-samples/000017.243781548.gfpgan-00.png" -s 50 -S 2273800735 -W 512 -H 512 -C 7.5 -A k_lms --embiggen 3.0 0.75 0.25
~~~
You may provide either the name of a file in the current output directory, or
a full file path.

~~~
invoke> !fetch 0000015.8929913.png
# the script returns the next line, ready for editing and running:
invoke> a fantastic alien landscape -W 576 -H 512 -s 60 -A plms -C 7.5
~~~

Note that this command may behave unexpectedly if given a PNG file that
was not generated by InvokeAI.

## !history

The invoke script keeps track of all the commands you issue during a
session, allowing you to re-run them. On Mac and Linux systems, it
also writes the command-line history out to disk, giving you access to
the most recent 1000 commands issued.
You can re-run a command by typing `!NNN`, where "NNN" is the history line
number. For example:

~~~
invoke> !history
...
[14] happy woman sitting under tree wearing broad hat and flowing garment
[15] beautiful woman sitting under tree wearing broad hat and flowing garment
[20] watercolor of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
[21] surrealist painting of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
...
invoke> !20
invoke> watercolor of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
~~~
## !search <search string>

This is similar to !history but it only returns lines that contain
`search string`. For example:

~~~
invoke> !search surreal
[21] surrealist painting of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
~~~
- To paste a cut section back in, position the cursor where you want to paste, and type CTRL-Y

Windows users can get similar, but more limited, functionality if they
launch invoke.py with the "winpty" program and have the `pyreadline3`
library installed:

~~~
> winpty python scripts\invoke.py
~~~

On the Mac and Linux platforms, when you exit invoke.py, the last 1000
lines of your command-line history will be saved. When you restart
invoke.py, you can access the saved history using the up-arrow key.

In addition, limited command-line completion is installed. In various
contexts, you can start typing your command and press tab. A list of
possible completions will be presented to you.

The completer will attempt to complete pathnames for you. This is most handy
when entering image paths: start the path with a slash ("/") or "./". For example:

~~~
invoke> zebra with a mustache -I./test-pictures<TAB>
-I./test-pictures/Lincoln-and-Parrot.png -I./test-pictures/zebra.jpg -I./test-pictures/madonna.png
-I./test-pictures/bad-sketch.png -I./test-pictures/man_with_eagle/
~~~
---

Running Embiggen with 512x512 tiles on an existing image, scaling up by a factor of 2.5x,
and doing the same again (default ESRGAN strength is 0.75, default overlap between tiles is 0.25):

```bash
invoke > a photo of a forest at sunset -s 100 -W 512 -H 512 -I outputs/forest.png -f 0.4 -embiggen 2.5
invoke > a photo of a forest at sunset -s 100 -W 512 -H 512 -I outputs/forest.png -f 0.4 -embiggen 2.5 0.75 0.25
```

If your starting image was also 512x512 this should have taken 9 tiles
(scaling 512 px by 2.5 gives 1280 px per side, which 512 px tiles with 25% overlap cover in a 3x3 grid).

If there weren't enough clouds in the sky of that forest you just made, you can selectively regenerate those
tiles:

```bash
invoke> a photo of puffy clouds over a forest at sunset -s 100 -W 512 -H 512 -I outputs/000002.seed.png -f 0.5 -embiggen_tiles 1 2 3
```

## Fixing Previously-Generated Images

Use the syntax `!fix path/to/file.png <embiggen>`. For example, you can rewrite the
previous command to look like this:

~~~~
invoke> !fix ./outputs/000002.seed.png -embiggen_tiles 1 2 3
~~~~

A new file named `000002.seed.fixed.png` will be created in the output directory.
---

top of the image you provide, preserving the original's basic shape and layout. To use it, provide
the `--init_img` option as shown here:

```commandline
tree on a hill with a river, nature photograph, national geographic -I./test-pictures/tree-and-river-sketch.png -f 0.85
```

This will take the original image shown here:

<img src="https://user-images.githubusercontent.com/50542132/193946000-c42a96d8-5a74-4f8a-b4c3-5213e6cadcce.png" width=350>

and generate a new image based on it as shown here:

<img src="https://user-images.githubusercontent.com/111189/194135515-53d4c060-e994-4016-8121-7c685e281ac9.png" width=350>

The `--init_img (-I)` option gives the path to the seed picture. `--strength (-f)` controls how much
the original will be modified, ranging from `0.0` (keep the original intact), to `1.0` (ignore the
original completely). The default is `0.75`, and ranges from `0.25-0.90` give interesting results.
Other relevant options include `-C` (classifier-free guidance scale), and `-s` (steps). Unlike `txt2img`,
adding steps will continuously change the resulting image and it will not converge.

You may also pass a `-v<variation_amount>` option to generate `-n<iterations>` count variants on
the original image. This is done by passing the first generated image
back into img2img the requested number of times. It generates
interesting variants.
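For example, a sketch reusing the tree-and-river prompt from above (the particular values are illustrative):

```bash
# Produce 4 variants of the seed picture, each pass using a variation amount of 0.1
invoke> tree on a hill with a river, nature photograph, national geographic -I./test-pictures/tree-and-river-sketch.png -f 0.85 -v0.1 -n4
```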
Note that the prompt makes a big difference. For example, this slight variation on the prompt produces
a very different image:

`photograph of a tree on a hill with a river`

<img src="https://user-images.githubusercontent.com/111189/194135220-16b62181-b60c-4248-8989-4834a8fd7fbd.png" width=350>

(When designing prompts, think about how the images scraped from the internet were captioned. Very few photographs will
be labeled "photograph" or "photorealistic." They will, however, be captioned with the publication, photographer, camera
model, or film settings.)

If the initial image contains transparent regions, then Stable Diffusion will only draw within the
transparent regions, a process called "inpainting". However, for this to work correctly, the color
information underneath the transparent regions needs to be preserved, not erased.
More details can be found here:
[Creating Transparent Images For Inpainting](./INPAINTING.md#creating-transparent-regions-for-inpainting)

**IMPORTANT ISSUE** `img2img` does not work properly on initial images smaller than 512x512. Please scale your
image to at least 512x512 before using it. Larger images are not a problem, but may run out of VRAM on your
GPU card. To fix this, use the --fit option, which downscales the initial image to fit within the box specified
by width x height:

~~~
tree on a hill with a river, national geographic -I./test-pictures/big-sketch.png -H512 -W512 --fit
~~~
## How does it actually work, though?

The main difference between `img2img` and `prompt2img` is the starting point. While `prompt2img` always starts with pure
gaussian noise and progressively refines it over the requested number of steps,

**Let's start** by thinking about vanilla `prompt2img`, just generating an image from a prompt. If the step count is 10, then the "latent space" (Stable Diffusion's internal representation of the image) for the prompt "fire" with seed `1592514025` develops something like this:

```commandline
invoke> "fire" -s10 -W384 -H384 -S1592514025
```

![latent steps](../assets/img2img/000019.steps.png)
Notice how much more fuzzy the starting image is for strength `0.7` compared to `0.4`:

| | strength = 0.7 | strength = 0.4 |
| -- | -- | -- |
| initial image that SD sees | ![](../assets/img2img/000032.step-0.png) | ![](../assets/img2img/000030.step-0.png) |
| steps argument to `invoke>` | `-s10` | `-s10` |
| steps actually taken | 7 | 4 |
| latent space at each step | ![](../assets/img2img/000032.steps.gravity.png) | ![](../assets/img2img/000030.steps.gravity.png) |
| output | ![](../assets/img2img/000032.1592514025.png) | ![](../assets/img2img/000030.1592514025.png) |
Both of the outputs look kind of like what I was thinking of. With the strength

If you want to try this out yourself, all of these are using a seed of `1592514025` with a width/height of `384`, step count `10`, the default sampler (`k_lms`), and the single-word prompt `fire`:

```commandline
invoke> "fire" -s10 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png --strength 0.7
```

The code for rendering intermediates is on my (damian0815's) branch [document-img2img](https://github.com/damian0815/InvokeAI/tree/document-img2img) - run `invoke.py` and check your `outputs/img-samples/intermediates` folder while generating an image.
### Compensating for the reduced step count

After putting this guide together I was curious to see how the difference would

Here's strength `0.4` (note step count `50`, which is `20 ÷ 0.4` to make sure SD does `20` steps from my image):

```commandline
invoke> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4
```

|
||||
@ -97,7 +148,11 @@ dream> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4
|
||||
and strength `0.7` (note step count `30`, which is roughly `20 ÷ 0.7` to make sure SD does `20` steps from my image):
|
||||
|
||||
```commandline
|
||||
<<<<<<< HEAD
|
||||
dream> "fire" -s30 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.7
|
||||
=======
|
||||
invoke> "fire" -s30 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.7
|
||||
>>>>>>> main
|
||||
```
|
||||
|
||||

|
||||
|
---

Inpainting is really cool. To do it, you start with an initial image and use a photoeditor to make
one or more regions transparent (i.e. they have a "hole" in them). You then provide the path to this
image at the invoke> command line using the `-I` switch. Stable Diffusion will only paint within the
transparent region.

There's a catch. In the current implementation, you have to prepare the initial image correctly so

applications will by default erase the color information under the transparent pixels and replace
them with white or black, which will lead to suboptimal inpainting. You also must take care to
export the PNG file in such a way that the color information is preserved.

If your photoeditor is erasing the underlying color information, `invoke.py` will give you a big fat
warning. If you can't find a way to coax your photoeditor to retain color values under transparent
areas, then you can combine the `-I` and `-M` switches to provide both the original unedited image
and the masked (partially transparent) image:

```bash
invoke> "man with cat on shoulder" -I./images/man.png -M./images/man-transparent.png
```

We are hoping to get rid of the need for this workaround in an upcoming release.
2. Layer->Transparency->Add Alpha Channel
3. Use lasso tool to select region to mask
4. Choose Select -> Float to create a floating selection
5. Open the Layers toolbar (^L) and select "Floating Selection"
6. Set opacity to a value between 0% and 99%
7. Export as PNG
8. In the export dialogue, make sure the "Save colour values from
   transparent pixels" checkbox is selected.
![drawing](../assets/inpainting/000019.curly.hair.deselected.png)

7. After following the inpainting instructions above (either through the CLI or the Web UI), marvel at your newfound ability to selectively invoke. Lookin' good!

![drawing](../assets/inpainting/000024.801380492.png)
---

Output Example: ![Colab Notebook](../assets/colab_notebook.png)

The seamless tiling mode causes generated images to seamlessly tile with themselves. To use it, add the
`--seamless` option when starting the script, which will cause all generated images to tile, or
supply it for each `invoke>` prompt as shown here:

```bash
invoke> "pond garden with lotus by claude monet" --seamless -s100 -n4
```
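The same flag can instead be given once at script launch so that every image in the session tiles; a minimal sketch:

```bash
# All images generated in this session will tile seamlessly
python3 scripts/invoke.py --seamless
```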
---

Here's an example of using this to do a quick refinement. It also illustrates using the `-G`
switch to turn on upscaling and face enhancement (see previous section):

```bash
invoke> a cute child playing hopscotch -G0.5
[...]
outputs/img-samples/000039.3498014304.png: "a cute child playing hopscotch" -s50 -W512 -H512 -C7.5 -mk_lms -S3498014304

# I wonder what it will look like if I bump up the steps and set facial enhancement to full strength?
invoke> a cute child playing hopscotch -G1.0 -s100 -S -1
reusing previous seed 3498014304
[...]
outputs/img-samples/000040.3498014304.png: "a cute child playing hopscotch" -G1.0 -s100 -W512 -H512 -C7.5 -mk_lms -S3498014304
```
---

Pretty nice, but it's annoying that the top of her head is cut
off. She's also a bit off center. Let's fix that!

~~~~
invoke> !fix images/curly.png --outcrop top 64 right 64
~~~~

This is saying to apply the `outcrop` extension by extending the top

differences. Starting with the same image, here is how we would add an
additional 64 pixels to the top of the image:

~~~
invoke> !fix images/curly.png --out_direction top 64
~~~

(You can abbreviate `--out_direction` as `-D`.)
---

The default face restoration module is GFPGAN. The default upscale is
Real-ESRGAN. For an alternative face restoration module, see [CodeFormer
Support] below.

As of version 1.14, environment.yaml will install the Real-ESRGAN
package into the standard install location for python packages, and
will put GFPGAN into a subdirectory of "src" in the InvokeAI
directory. Upscaling with Real-ESRGAN should "just work" without
further intervention. Simply pass the --upscale (-U) option on the
invoke> command line, or indicate the desired scale on the popup in
the Web GUI.

**GFPGAN** requires a series of downloadable model files to
work. These are loaded when you run `scripts/preload_models.py`. If
GFPGAN is failing with an error, please run the following from the
InvokeAI directory:

~~~~
python scripts/preload_models.py
~~~~

Make sure that you're in the InvokeAI directory when you do this.

If you do not run this script in advance, the GFPGAN module will attempt
to download the model files the first time you try to perform facial
reconstruction.

Alternatively, if you have GFPGAN installed elsewhere, or if you are
using an earlier version of this package which asked you to install
GFPGAN in a sibling directory, you may use the `--gfpgan_dir` argument
with `invoke.py` to set a custom path to your GFPGAN directory. _There
are other GFPGAN related boot arguments if you wish to customize
further._
## Usage

### Example Usage

```bash
invoke> superman dancing with a panda bear -U 2 0.6 -G 0.4
```

This also works with img2img:

```bash
invoke> a man wearing a pineapple hat -I path/to/your/file.png -U 2 0.5 -G 0.6
```

!!! note
This repo also allows you to perform face restoration using
[CodeFormer](https://github.com/sczhou/CodeFormer).

In order to set up CodeFormer, you need to download the models
like with GFPGAN. You can do this either by running
`preload_models.py` or by manually downloading the [model
file](https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth)
and saving it to the `ldm/restoration/codeformer/weights` folder.

You can use the `-ft` prompt argument to swap between CodeFormer and the
default GFPGAN. The above-mentioned `-G` prompt argument will allow
you to control the strength of the restoration effect.
### Usage:

To fix a previously-generated file, just run:

```
invoke> !fix ./outputs/img-samples/000044.2945021133.png -G 0.8 -U 2
```

A new file named `000044.2945021133.fixed.png` will be created in the output directory.

### Disabling:

If, for some reason, you do not wish to load the GFPGAN and/or ESRGAN libraries,
you can disable them on the invoke.py command line with the `--no_restore` and
`--no_upscale` options, respectively.
---

## **Reading Prompts from a File**

You can automate `invoke.py` by providing a text file with the prompts you want to run, one line per
prompt. The text file must be composed with a text editor (e.g. Notepad) and not a word processor.
Each line should look like what you would type at the invoke> prompt:

```bash
a beautiful sunny day in the park, children playing -n4 -C10
stormy weather on a mountain top, goats grazing -s100
innovative packaging for a squid's dinner -S137038382
```

Then pass this file's name to `invoke.py` when you invoke it:

```bash
(ldm) ~/stable-diffusion$ python3 scripts/invoke.py --from_file "path/to/prompts.txt"
```

You may read a series of prompts from standard input by providing a filename of `-`:

```bash
(ldm) ~/stable-diffusion$ echo "a beautiful day" | python3 scripts/invoke.py --from_file -
```
---

## **Negative and Unconditioned Prompts**

Any words between a pair of square brackets will instruct Stable
Diffusion to attempt to ban the concept from the generated image.

```bash
this is a test prompt [not really] to make you understand [cool] how this works.
```
* You can provide multiple words within the same bracket.
* You can provide multiple brackets with multiple words in different places of your prompt. That works just fine.
* To improve typical anatomy problems, you can add negative prompts like `[bad anatomy, extra legs, extra arms, extra fingers, poorly drawn hands, poorly drawn feet, disfigured, out of frame, tiling, bad art, deformed, mutated]` (see the sketch below).
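A minimal sketch of such a prompt (the subject is made up; the bracketed terms come from the list above):

```bash
invoke> full length portrait of a knight [bad anatomy, extra legs, extra fingers, poorly drawn hands] -s50 -C7.5
```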
---

## **Prompt Blending**

You may blend together different sections of the prompt to explore the
AI's latent semantic space and generate interesting (and often
surprising!) variations. The syntax is:

```bash
blue sphere:0.25 red cube:0.75 hybrid
```

This will tell the sampler to blend 25% of the concept of a blue
sphere with 75% of the concept of a red cube. The blend weights can
use any combination of integers and floating point numbers, and they
do not need to add up to 1. Everything to the left of the `:XX` up to
the previous `:XX` is used for merging, so the overall effect is:

```bash
0.25 * "blue sphere" + 0.75 * "red cube" + hybrid
```

Because you are exploring the "mind" of the AI, the AI's way of mixing
two concepts may not match yours, leading to surprising effects. To
illustrate, here are several images generated using various combinations
of blend weights. As usual, unless you fix the seed, the prompts will give you
different results each time you run them.

### "blue sphere, red cube, hybrid"

This example doesn't use melding at all and represents the default way
of mixing concepts.

<img src="../assets/prompt-blending/blue-sphere-red-cube-hybrid.png" width=256>

It's interesting to see how the AI expressed the concept of "cube" as
the four quadrants of the enclosing frame. If you look closely, there
is depth there, so the enclosing frame is actually a cube.

### "blue sphere:0.25 red cube:0.75 hybrid"

<img src="../assets/prompt-blending/blue-sphere-0.25-red-cube-0.75-hybrid.png" width=256>

Now that's interesting. We get neither a blue sphere nor a red cube,
but a red sphere embedded in a brick wall, which represents a melding
of concepts within the AI's "latent space" of semantic
representations. Where is Ludwig Wittgenstein when you need him?

### "blue sphere:0.75 red cube:0.25 hybrid"

<img src="../assets/prompt-blending/blue-sphere-0.75-red-cube-0.25-hybrid.png" width=256>

Definitely more blue-spherey. The cube is gone entirely, but it's
really cool abstract art.

### "blue sphere:0.5 red cube:0.5 hybrid"

<img src="../assets/prompt-blending/blue-sphere-0.5-red-cube-0.5-hybrid.png" width=256>

Whoa...! I see blue and red, but no spheres or cubes. Is the word
"hybrid" summoning up the concept of some sort of scifi creature?
Let's find out.

### "blue sphere:0.5 red cube:0.5"

<img src="../assets/prompt-blending/blue-sphere-0.5-red-cube-0.5.png" width=256>

Indeed, removing the word "hybrid" produces an image that is more like
what we'd expect.

In conclusion, prompt blending is great for exploring creative space,
but can be difficult to direct. A forthcoming release of InvokeAI will
feature more deterministic prompt weighting.
---

## **Run the Model**

Once the model is trained, specify the trained .pt or .bin file when starting
invoke using

```bash
python3 ./scripts/invoke.py --embedding_path /path/to/embedding.pt
```

Then, to utilize your subject at the invoke prompt

```bash
invoke> "a photo of *"
```

This also works with image2image

```bash
invoke> "waterfall and rainbow in the style of *" --init_img=./init-images/crude_drawing.png --strength=0.5 -s100 -n4
```

For .pt files it's also possible to train multiple tokens (modify the
---

First we let SD create a series of images in the usual way, in this case
requesting six iterations:

```bash
invoke> lucy lawless as xena, warrior princess, character portrait, high resolution -n6
...
Outputs:
./outputs/Xena/000001.1579445059.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -S1579445059
```

differing by a variation amount of 0.2. This number ranges from `0` to `1.0`,
with higher numbers being larger amounts of variation.

```bash
invoke> "prompt" -n6 -S3357757885 -v0.2
...
Outputs:
./outputs/Xena/000002.784039624.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 784039624:0.2 -S3357757885
```

We combine the two variations using `-V` (`--with_variations`). Again, we must
provide the seed for the originally-chosen image in order for this to work.

```bash
invoke> "prompt" -S3357757885 -V3647897225,0.1,1614299449,0.1
Outputs:
./outputs/Xena/000003.1614299449.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1 -S3357757885
```

latter, using both the `-V` (combining) and `-v` (variation strength) options.
Note that we use `-n6` to generate 6 variations:

```bash
invoke> "prompt" -S3357757885 -V3647897225,0.1,1614299449,0.1 -v0.05 -n6
Outputs:
./outputs/Xena/000004.3279757577.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,3279757577:0.05 -S3357757885
./outputs/Xena/000004.2853129515.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,2853129515:0.05 -S3357757885
```
---
title: InvokeAI Web Server
---

# :material-web: InvokeAI Web Server

As of version 2.0.0, this distribution comes with a full-featured web
server (see screenshot). To use it, run the `invoke.py` script by
adding the `--web` option:

```bash
(ldm) ~/InvokeAI$ python3 scripts/invoke.py --web
```

You can then connect to the server by pointing your web browser at
http://localhost:9090. To reach the server from a different machine on
your LAN, you may launch the web server with the `--host` argument and
either the IP address of the host you are running it on, or the
wildcard `0.0.0.0`. For example:

```bash
(ldm) ~/InvokeAI$ python3 scripts/invoke.py --web --host 0.0.0.0
```
# Quick guided walkthrough of the WebGUI's features

While most of the WebGUI's features are intuitive, here is a guided
walkthrough of its various components.

<img src="../assets/invoke-web-server-1.png" width=640>

The screenshot above shows the Text to Image tab of the WebGUI. There
are three main sections:

1. A **control panel** on the left, which contains various settings
for text to image generation. The most important part is the text
field (currently showing `strawberry sushi`) for entering the text
prompt, and the camera icon directly underneath that will render the
image. We'll call this the *Invoke* button from now on.

2. The **current image** section in the middle, which shows a large
format version of the image you are currently working on. A series of
buttons at the top ("image to image", "Use All", "Use Seed", etc.) lets
you modify the image in various ways.

3. A **gallery** section on the right that contains a history of the
images you have generated. These images are read and written to the
directory specified at launch time in `--outdir`.

In addition to these three elements, there are a series of icons for
changing global settings, reporting bugs, and changing the theme on
the upper right.

There are also a series of icons to the left of the control panel (see
highlighted area in the screenshot below) which select among a series
of tabs for performing different types of operations.

<img src="../assets/invoke-web-server-2.png" width=512>

From top to bottom, these are:

1. Text to Image - generate images from text
2. Image to Image - from an uploaded starting image (drawing or photograph) generate a new one, modified by the text prompt
3. Inpainting (pending) - Interactively erase portions of a starting image and have the AI fill in the erased region from a text prompt.
4. Outpainting (pending) - Interactively add blank space to the borders of a starting image and fill in the background from a text prompt.
5. Postprocessing (pending) - Interactively postprocess generated images using a variety of filters.

The inpainting, outpainting and postprocessing tabs are currently in
development. However, limited versions of their features can already
be accessed through the Text to Image and Image to Image tabs.
## Walkthrough

The following walkthrough will exercise most (but not all) of the
WebGUI's feature set.

### Text to Image

1. Launch the WebGUI using `python scripts/invoke.py --web` and
connect to it with your browser by accessing
`http://localhost:9090`. If the browser and server are running on
different machines on your LAN, add the option `--host 0.0.0.0` to the
launch command line and connect to the machine hosting the web server
using its IP address or domain name.

2. If all goes well, the WebGUI should come up and you'll see a green
`connected` message on the upper right.

#### Basics

3. Generate an image by typing *strawberry sushi* into the large
prompt field on the upper left and then clicking on the Invoke button
(the one with the Camera icon). After a short wait, you'll see a large
image of sushi in the image panel, and a new thumbnail in the gallery
on the right.

If you need more room on the screen, you can turn the gallery off
by clicking on the **x** to the right of "Your Invocations". You can
turn it back on later by clicking the image icon that appears in the
gallery's place.

The images are written into the directory indicated by the `--outdir`
option provided at script launch time. By default, this is
`outputs/img-samples` under the InvokeAI directory.

4. Generate a bunch of strawberry sushi images by increasing the
number of requested images by adjusting the Images counter just below
the Camera button. As each is generated, it will be added to the
gallery. You can switch the active image by clicking on the gallery
thumbnails.

5. Try playing with different settings, including image width and
height, the Sampler, the Steps and the CFG scale.

Image *Width* and *Height* do what you'd expect. However, be aware that
larger images consume more VRAM and take longer to generate.

The *Sampler* controls how the AI selects the image to display. Some
samplers are more "creative" than others and will produce a wider
range of variations (see next section). Some samplers run faster than
others.

*Steps* controls how many noising/denoising/sampling steps the AI will
take. The higher this value, the more refined the image will be, but
the longer the image will take to generate. A typical strategy is to
generate images with a low number of steps in order to select one to
work on further, and then regenerate it using a higher number of
steps.

The *CFG Scale* controls how hard the AI tries to match the generated
image to the input prompt. You can go as high or low as you like, but
generally values greater than 20 won't improve things much, and values
lower than 5 will produce unexpected images. There are complex
interactions between *Steps*, *CFG Scale* and the *Sampler*, so
experiment to find out what works for you.

6. To regenerate a previously-generated image, select the image you
want and click *Use All*. This loads the text prompt and other
original settings into the control panel. If you then press *Invoke*
it will regenerate the image exactly. You can also selectively modify
the prompt or other settings to tweak the image.

Alternatively, you may click on *Use Seed* to load just the image's
seed, and leave other settings unchanged.

7. To regenerate a Stable Diffusion image that was generated by
another SD package, you need to know its text prompt and its
*Seed*. Copy-paste the prompt into the prompt box, unset the
*Randomize Seed* control in the control panel, and copy-paste the
desired *Seed* into its text field. When you Invoke, you will get
something similar to the original image. It will not be exact unless
you also set the correct values for the original sampler, CFG,
steps and dimensions, but it will (usually) be close.
#### Variations on a theme

8. Let's try generating some variations. Select your favorite sushi
image from the gallery to load it. Then select "Use All" from the list
of buttons above. This will load up all the settings used to generate
this image, including its unique seed.

Go down to the Variations section of the Control Panel and set the
button to On. Set Variation Amount to 0.2 to generate a modest
number of variations on the image, and also set the Image counter to
4. Press the `invoke` button. This will generate a series of related
images. To obtain smaller variations, just lower the Variation
Amount. You may also experiment with changing the Sampler. Some
samplers generate more variability than others. *k_euler_a* is
particularly creative, while *ddim* is pretty conservative.

9. For even more variations, experiment with increasing the setting
for *Perlin*. This adds a bit of noise to the image generation
process. Note that values of Perlin noise greater than 0.15 produce
poor images for several of the samplers.
#### Facial reconstruction and upscaling

Stable Diffusion frequently produces mangled faces, particularly when
there are multiple figures in the same scene. Stable Diffusion has
particular issues with generating realistic eyes. InvokeAI provides
the ability to reconstruct faces using either the GFPGAN or CodeFormer
libraries. For more information see [POSTPROCESS](POSTPROCESS.md).

10. Invoke a prompt that generates a mangled face. A prompt that often
gives this is "portrait of a lawyer, 3/4 shot" (this is not intended
as a slur against lawyers!) Once you have an image that needs some
touching up, load it into the Image panel, and press the button with
the face icon (highlighted in the first screenshot below). A dialog
box will appear. Leave *Strength* at 0.8 and press *Restore Faces*. If
all goes well, the eyes and other aspects of the face will be improved
(see the second screenshot).

<img src="../assets/invoke-web-server-3.png">
<img src="../assets/invoke-web-server-4.png">

The facial reconstruction *Strength* field adjusts how aggressively
the face library will try to alter the face. It can be as high as 1.0,
but be aware that this often softens the face in an airbrushed style,
losing some details. The default 0.8 is usually sufficient.

11. "Upscaling" is the process of increasing the size of an image while
retaining the sharpness. InvokeAI uses an external library called
"ESRGAN" to do this. To invoke upscaling, simply select an image and
press the *HD* button above it. You can select between 2X and 4X
upscaling, and adjust the upscaling strength, which has much the same
meaning as in facial reconstruction. Try running this on one of your
previously-generated images.

12. Finally, you can run facial reconstruction and/or upscaling
automatically after each Invocation. Go to the Advanced Options
section of the Control Panel and turn on *Restore Face* and/or
*Upscale*.
### Image to Image

InvokeAI lets you take an existing image and use it as the basis for a
new creation. You can use any sort of image, including a photograph, a
scanned sketch, or a digital drawing, as long as it is in PNG or JPEG
format.

For this tutorial, we'll use files named
[Lincoln-and-Parrot-512.png](../assets/Lincoln-and-Parrot-512.png)
and
[Lincoln-and-Parrot-512-transparent.png](../assets/Lincoln-and-Parrot-512-transparent.png).
Download these images to your local machine now to continue with the walkthrough.

13. Click on the *Image to Image* tab icon, which is the second icon
from the top on the left-hand side of the screen:

<img src="../assets/invoke-web-server-5.png">

This will bring you to a screen similar to the one shown here:

<img src="../assets/invoke-web-server-6.png" width=640>

Drag-and-drop the Lincoln-and-Parrot image into the Image panel, or
click the blank area to get an upload dialog. The image will load into
an area marked *Initial Image*. (The WebGUI will also load the most
recently-generated image from the gallery into a section on the left,
but this image will be replaced in the next step.)

14. Go to the prompt box and type *old sea captain with raven on
shoulder* and press Invoke. A derived image will appear to the right
of the original one:

<img src="../assets/invoke-web-server-7.png" width=640>

15. Experiment with the different settings. The most influential one
in Image to Image is *Image to Image Strength* located about midway
down the control panel. By default it is set to 0.75, but can range
from 0.0 to 0.99. The higher the value, the more of the original image
the AI will replace. A value of 0 will leave the initial image
completely unchanged, while 0.99 will replace it completely. However,
the Sampler and CFG Scale also influence the final result. You can
also generate variations in the same way as described in Text to
Image.

16. What if we only want to change certain part(s) of the image and
leave the rest intact? This is called Inpainting, and a future version
of the InvokeAI web server will provide an interactive painting canvas
on which you can directly draw the areas you wish to Inpaint into. For
now, you can achieve this effect by using an external photoeditor tool
to make one or more regions of the image transparent as described in
[INPAINTING.md](INPAINTING.md) and uploading that.

The file
[Lincoln-and-Parrot-512-transparent.png](../assets/Lincoln-and-Parrot-512-transparent.png)
is a version of the earlier image in which the area around the parrot
has been replaced with transparency. Click on the "x" in the upper
right of the Initial Image and upload the transparent version. Using
the same prompt "old sea captain with raven on shoulder" try Invoking
an image. This time, only the parrot will be replaced, leaving the
rest of the original image intact:

<img src="../assets/invoke-web-server-8.png" width=640>
## Parting remarks

This concludes the walkthrough, but there are several more features that you
can explore. Please check out the [Command Line Interface](CLI.md)
documentation for further explanation of the advanced features that
were not covered here.

The WebGUI is under rapid development. Check back regularly for
updates!

# Reference

## Additional Options

`--web_develop` - Starts the web server in development mode.

`--web_verbose` - Enables verbose logging.
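A sketch combining these flags with the launch command shown earlier:

```bash
(ldm) ~/InvokeAI$ python3 scripts/invoke.py --web --web_verbose
```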
## Acknowledgements

A huge shout-out to the core team working to make this vision a reality, including [psychedelicious](https://github.com/psychedelicious), [Kyle0654](https://github.com/Kyle0654) and [blessedcoolant](https://github.com/blessedcoolant). [hipsterusername](https://github.com/hipsterusername) was the team's unofficial cheerleader and added tooltips/docs.