Merge branch 'development' into patch-2

This commit is contained in:
Lincoln Stein
2022-09-28 18:00:37 -04:00
committed by GitHub
194 changed files with 19653 additions and 2570 deletions

docs/features/CHANGELOG.md (new file)

@ -0,0 +1,143 @@
---
title: Changelog
---
# :octicons-log-16: Changelog
## v1.13 <small>(in process)</small>
- Supports a Google Colab notebook for a standalone server running on Google
hardware [Arturo Mendivil](https://github.com/artmen1516)
- WebUI supports GFPGAN/ESRGAN facial reconstruction and upscaling
[Kevin Gibbons](https://github.com/bakkot)
- WebUI supports incremental display of in-progress images during generation
[Kevin Gibbons](https://github.com/bakkot)
- Output directory can be specified on the dream> command line.
- Fixed a bug in which the grid displayed duplicated images when there were not
  enough images to fill the final row [Muhammad Usama](https://github.com/SMUsamaShah)
- Can specify --grid on dream.py command line as the default.
- Miscellaneous internal bug and stability fixes.
---
## v1.12 <small>(28 August 2022)</small>
- Improved file handling, including the ability to read prompts from standard input.
  (kudos to [Yunsaki](https://github.com/yunsaki))
- The web server is now integrated with the dream.py script. Invoke by adding
--web to the dream.py command arguments.
- Face restoration and upscaling via GFPGAN and Real-ESRGAN are now automatically
enabled if the GFPGAN directory is located as a sibling to Stable Diffusion.
VRAM requirements are modestly reduced. Thanks to both
[Blessedcoolant](https://github.com/blessedcoolant) and
[Oceanswave](https://github.com/oceanswave) for their work on this.
- You can now swap samplers on the dream> command line.
[Blessedcoolant](https://github.com/blessedcoolant)
---
## v1.11 <small>(26 August 2022)</small>
- NEW FEATURE: Support upscaling and face enhancement using the GFPGAN module.
(kudos to [Oceanswave](https://github.com/Oceanswave))
- You now can specify a seed of -1 to use the previous image's seed, -2 to use
the seed for the image generated before that, etc. Seed memory only extends
back to the previous command, but will work on all images generated with the
-n# switch.
- Variant generation support temporarily disabled pending a more general solution.
- Created a feature branch named **yunsaki-morphing-dream** which adds
experimental support for iteratively modifying the prompt and its parameters.
  Please see
  [Pull Request #86](https://github.com/lstein/stable-diffusion/pull/86) for
  a synopsis of how this works. Note that when this feature is eventually added
  to the main branch, it may be modified significantly.
---
## v1.10 <small>(25 August 2022)</small>
- A barebones but fully functional interactive web server for online generation
of txt2img and img2img.
---
## v1.09 <small>(24 August 2022)</small>
- A new -v option allows you to generate multiple variants of an initial image
  in img2img mode (kudos to [Oceanswave](https://github.com/Oceanswave)).
- [See this discussion in the PR for examples and details on use](https://github.com/lstein/stable-diffusion/pull/71#issuecomment-1226700810)
- Added ability to personalize text to image generation (kudos to
[Oceanswave](https://github.com/Oceanswave) and
[nicolai256](https://github.com/nicolai256))
- Enabled all of the samplers from k_diffusion
---
## v1.08 <small>(24 August 2022)</small>
- Escape single quotes on the dream> command before trying to parse. This avoids
parse errors.
- Removed instruction to get Python3.8 as first step in Windows install.
Anaconda3 does it for you.
- Added bounds checks for numeric arguments that could cause crashes.
- Cleaned up the copyright and license agreement files.
---
## v1.07 <small>(23 August 2022)</small>
- Image filenames will now never fill gaps in the sequence, but will be assigned
the next higher name in the chosen directory. This ensures that the alphabetic
and chronological sort orders are the same.
---
## v1.06 <small>(23 August 2022)</small>
- Added weighted prompt support contributed by
[xraxra](https://github.com/xraxra)
- Example of using weighted prompts to tweak a demonic figure contributed by
[bmaltais](https://github.com/bmaltais)
---
## v1.05 <small>(22 August 2022 - after the drop)</small>
- Filenames now use the following formats:
    - `000010.95183149.png` and `000010.26742632.png` -- two files produced by the same command (e.g. `-n2`), distinguished by different seeds.
    - `000011.455191342.01.png` and `000011.455191342.02.png` -- two files produced by the same command using a batch size > 1 (e.g. `-b2`); they share the same seed.
    - `000011.4160627868.grid#1-4.png` -- a grid of four images (`-g`); the whole grid can be regenerated with the indicated key.
- It should no longer be possible for one image to overwrite another
- You can use the "cd" and "pwd" commands at the dream> prompt to set and
retrieve the path of the output directory.
## v1.04 <small>(22 August 2022 - after the drop)</small>
- Updated README to reflect installation of the released weights.
- Suppressed very noisy and inconsequential warning when loading the frozen CLIP
tokenizer.
## v1.03 <small>(22 August 2022)</small>
- The original txt2img and img2img scripts from the CompViz repository have been
moved into a subfolder named "orig_scripts", to reduce confusion.
## v1.02 <small>(21 August 2022)</small>
- A copy of the prompt and all of its switches and options is now stored in the
corresponding image in a tEXt metadata field named "Dream". You can read the
prompt using scripts/images2prompt.py, or an image editor that allows you to
explore the full metadata. **Please run "conda env update -f environment.yaml"
to load the k_lms dependencies!!**
## v1.01 <small>(21 August 2022)</small>
- added k_lms sampling. **Please run "conda env update -f environment.yaml" to
load the k_lms dependencies!!**
- use half precision arithmetic by default, resulting in faster execution and
  lower memory requirements. Pass the argument --full_precision to dream.py to get
  slower but more accurate image generation

docs/features/CLI.md

@ -1,19 +1,39 @@
---
title: CLI
hide:
  - toc
---

# :material-bash: CLI

## **Interactive Command Line Interface**

The `dream.py` script, located in `scripts/dream.py`, provides an interactive
interface to image generation similar to the "dream mothership" bot that Stable
AI provided on its Discord server.

Unlike the `txt2img.py` and `img2img.py` scripts provided in the original
[CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion) source
code repository, the time-consuming initialization of the AI model only happens
once. After that, image generation from the command-line interface is very fast.

The script uses the readline library to allow for in-line editing, command
history (++up++ and ++down++), autocompletion, and more. To help keep track of
which prompts generated which images, the script writes a log file of image
names and prompts to the selected output directory.

In addition, as of version 1.02, it also writes the prompt into the PNG file's
metadata, where it can be retrieved using `scripts/images2prompt.py`.

The script is confirmed to work on Linux, Windows and Mac systems.

!!! note

    This script runs from the command-line or can be used as a Web application. The Web GUI is
    currently rudimentary, but a much better replacement is on its way.

```bash
(ldm) ~/stable-diffusion$ python3 ./scripts/dream.py
* Initializing, be patient...
Loading model from models/ldm/text2img-large/model.ckpt
@ -37,53 +57,55 @@ dream> q
00011.png: "there's a fly in my soup" -n6 -g -S 2685670268
```
![dream-py-demo](../assets/dream-py-demo.png)
The `dream>` prompt's arguments are pretty much identical to those used in the
Discord bot, except you don't need to type "!dream" (it doesn't hurt if you do).
A significant change is that creation of individual images is now the default
unless `--grid` (`-g`) is given. A full list is given in
[List of prompt arguments](#list-of-prompt-arguments).
## Arguments

The script itself also recognizes a series of command-line switches that will
change important global defaults, such as the directory for image outputs and
the location of the model weight files.

### List of arguments recognized at the command line

These command-line arguments can be passed to `dream.py` when you first run it
from the Windows, Mac or Linux command line. Some set defaults that can be
overridden on a per-prompt basis (see
[List of prompt arguments](#list-of-prompt-arguments)). Others
| Argument <img width="240" align="right"/> | Shortcut <img width="100" align="right"/> | Default <img width="320" align="right"/> | Description |
| ----------------------------------------- | ----------------------------------------- | ---------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
| `--help` | `-h` | | Print a concise help message. |
| `--outdir <path>` | `-o<path>` | `outputs/img_samples` | Location for generated images. |
| `--prompt_as_dir` | `-p` | `False` | Name output directories using the prompt text. |
| `--from_file <path>` | | `None` | Read list of prompts from a file. Use `-` to read from standard input |
| `--model <modelname>` | | `stable-diffusion-1.4` | Loads model specified in configs/models.yaml. Currently one of "stable-diffusion-1.4" or "laion400m" |
| `--full_precision` | `-F` | `False` | Run in slower full-precision mode. Needed for Macintosh M1/M2 hardware and some older video cards. |
| `--web` | | `False` | Start in web server mode |
| `--host <ip addr>` | | `localhost` | Which network interface web server should listen on. Set to 0.0.0.0 to listen on any. |
| `--port <port>` | | `9090` | Which port web server should listen for requests on. |
| `--config <path>` | | `configs/models.yaml` | Configuration file for models and their weights. |
| `--iterations <int>` | `-n<int>` | `1` | How many images to generate per prompt. |
| `--grid` | `-g` | `False` | Save all image series as a grid rather than individually. |
| `--sampler <sampler>` | `-A<sampler>` | `k_lms` | Sampler to use. Use `-h` to get list of available samplers. |
| `--seamless` | | `False` | Create interesting effects by tiling elements of the image. |
| `--embedding_path <path>` | | `None` | Path to pre-trained embedding manager checkpoints, for custom models |
| `--gfpgan_dir` | | `src/gfpgan` | Path to where GFPGAN is installed. |
| `--gfpgan_model_path` | | `experiments/pretrained_models/GFPGANv1.4.pth` | Path to GFPGAN model file, relative to `--gfpgan_dir`. |
| `--device <device>` | `-d<device>` | `torch.cuda.current_device()` | Device to run SD on, e.g. "cuda:0" |
| `--free_gpu_mem` | | `False` | Free GPU memory after sampling, to allow image decoding and saving in low VRAM conditions |
| `--precision` | | `auto` | Set model precision, default is selected by device. Options: auto, float32, float16, autocast |
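
As an illustration, a startup command that combines several of these switches might look like the
following; the output directory, sampler and port shown here are just examples, not recommendations:

```bash
# Launch dream.py as a web server on port 9090, writing images to a custom folder
# and generating two images per prompt by default.
(ldm) ~/stable-diffusion$ python3 scripts/dream.py \
    --outdir outputs/experiments \
    --sampler k_euler_a \
    --iterations 2 \
    --web --host 0.0.0.0 --port 9090
```
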
#### deprecated
These arguments are deprecated but still work:
| Argument | Shortcut | Default | Description |
|--------------------|------------|---------------------|--------------|
| --weights <path> | | None | Path to weights file; use `--model stable-diffusion-1.4` instead |
@ -121,7 +143,7 @@ Here are the dream> command that apply to txt2img:
| --height <int> | -H<int> | 512 | Height of generated image |
| --iterations <int> | -n<int> | 1 | How many images to generate from this prompt |
| --steps <int> | -s<int> | 50 | How many steps of refinement to apply |
| --cfg_scale <float>| -C<float> | 7.5 | How hard to try to match the prompt to the generated image; any number greater than 1.0 works, but the useful range is roughly 5.0 to 20.0 |
| --seed <int> | -S<int> | None | Set the random seed for the next series of images. This can be used to recreate an image generated previously.|
| --sampler <sampler>| -A<sampler>| k_lms | Sampler to use. Use -h to get list of available samplers. |
| --grid | -g | False | Turn on grid mode to return a single image combining all the images generated by this prompt |
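
For instance, a txt2img invocation at the dream> prompt that exercises several of these switches
might look like this (the prompt, seed and values are purely illustrative):

```bash
dream> "a watercolor painting of a lighthouse at dawn" -s50 -W512 -H512 -C7.5 -n3 -A k_euler_a -S1234567
```
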
@ -183,6 +205,85 @@ well as the --mask (-M) argument:
| --init_mask <path> | -M<path> | None |Path to an image the same size as the initial_image, with areas for inpainting made transparent.|
# Convenience commands
In addition to the standard image generation arguments, there is a
series of convenience commands that begin with `!`:
## !fix
This command runs a post-processor on a previously-generated image. It
takes a PNG filename or path and applies your choice of the -U, -G, or
--embiggen switches in order to fix faces or upscale. If you provide a
filename, the script will look for it in the current output
directory. Otherwise you can provide a full or partial path to the
desired file.
Some examples:
Upscale to 4X its original size and fix faces using codeformer:
~~~
dream> !fix 0000045.4829112.png -G1 -U4 -ft codeformer
~~~
Use the GFPGAN algorithm to fix faces, then upscale to 3X using --embiggen:
~~~
dream> !fix 0000045.4829112.png -G0.8 -ft gfpgan
>> fixing outputs/img-samples/0000045.4829112.png
>> retrieved seed 4829112 and prompt "boy enjoying a banana split"
>> GFPGAN - Restoring Faces for image seed:4829112
Outputs:
[1] outputs/img-samples/000017.4829112.gfpgan-00.png: !fix "outputs/img-samples/0000045.4829112.png" -s 50 -S -W 512 -H 512 -C 7.5 -A k_lms -G 0.8
dream> !fix 000017.4829112.gfpgan-00.png --embiggen 3
...lots of text...
Outputs:
[2] outputs/img-samples/000018.2273800735.embiggen-00.png: !fix "outputs/img-samples/000017.243781548.gfpgan-00.png" -s 50 -S 2273800735 -W 512 -H 512 -C 7.5 -A k_lms --embiggen 3.0 0.75 0.25
~~~
## !fetch
This command retrieves the generation parameters from a previously
generated image and either loads them into the command line
(Linux|Mac), or prints them out in a comment for copy-and-paste
(Windows). You may provide either the name of a file in the current
output directory, or a full file path.
~~~
dream> !fetch 0000015.8929913.png
# the script returns the next line, ready for editing and running:
dream> a fantastic alien landscape -W 576 -H 512 -s 60 -A plms -C 7.5
~~~
Note that this command may behave unexpectedly if given a PNG file that
was not generated by InvokeAI.
## !history
The dream script keeps track of all the commands you issue during a
session, allowing you to re-run them. On Mac and Linux systems, it
also writes the command-line history out to disk, giving you access to
the most recent 1000 commands issued.
The `!history` command will return a numbered list of all the commands
issued during the session (Windows), or the most recent 1000 commands
(Mac|Linux). You can then repeat a command by using the command !NNN,
where "NNN" is the history line number. For example:
~~~
dream> !history
...
[14] happy woman sitting under tree wearing broad hat and flowing garment
[15] beautiful woman sitting under tree wearing broad hat and flowing garment
[18] beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6
[20] watercolor of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
[21] surrealist painting of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
...
dream> !20
dream> watercolor of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
~~~
# Command-line editing and completion
If you are on a Macintosh or Linux machine, the command-line offers
@ -220,9 +321,8 @@ the path with a slash ("/") or "./". For example:
dream> zebra with a mustache -I./test-pictures<TAB>
-I./test-pictures/Lincoln-and-Parrot.png -I./test-pictures/zebra.jpg -I./test-pictures/madonna.png
-I./test-pictures/bad-sketch.png -I./test-pictures/man_with_eagle/
```
You can then type ++z++, hit ++tab++ again, and it will autofill to `zebra.jpg`.
More text completion features (such as autocompleting seeds) are on their way.

docs/features/EMBIGGEN.md (new file)

@ -0,0 +1,154 @@
---
title: Embiggen
---
# :material-loupe: Embiggen
**upscale your images on limited memory machines**
GFPGAN and Real-ESRGAN are both memory intensive. In order to avoid
crashes and memory overloads during the Stable Diffusion process,
these effects are applied after Stable Diffusion has completed its
work.
In single image generations, you will see the output right away but
when you are using multiple iterations, the images will first be
generated and then upscaled and face restored after that process is
complete. While the image generation is taking place, you will still
be able to preview the base images.
If you wish to stop during the image generation but want to upscale or
face restore a particular generated image, pass it again with the same
prompt and generated seed along with the `-U` and `-G` prompt
arguments to perform those actions.
## Embiggen
If you want to be able to do more (pixels) without running out of VRAM,
or you want to upscale with details that couldn't possibly appear
without the context of a prompt, this is the feature to try out.
Embiggen automates the process of taking an init image, upscaling it,
cutting it into smaller tiles that slightly overlap, running all the
tiles through img2img to refine details with respect to the prompt,
and "stitching" the tiles back together into a cohesive image.
It automatically computes how many tiles are needed, and so it can be fed
*ANY* size init image and perform Img2Img on it (though it will be run only
one tile at a time, which can cause problems, see the Note at the end).
If you're familiar with "GoBig" (a la [progrock-stable](https://github.com/lowfuel/progrock-stable))
it's similar to that, except it can work up to an arbitrarily large size
(instead of just 2x), with tile overlaps configurable as a ratio, and
has extra logic to re-run any number of the tile sub-sections of the image
if for example a small part of a huge run got messed up.
## Usage
`-embiggen <scaling_factor> <esrgan_strength> <overlap_ratio OR overlap_pixels>`
Takes a scaling factor relative to the size of the `--init_img` (`-I`), followed by
ESRGAN upscaling strength (0 - 1.0), followed by minimum amount of overlap
between tiles as a decimal ratio (0 - 1.0) *OR* a number of pixels.
The scaling factor is how much larger than the `--init_img` the output
should be, and will multiply both x and y axis, so an image that is a
scaling factor of 3.0 has 3*3= 9 times as many pixels, and will take
(at least) 9 times as long (see overlap for why it might be
longer). If the `--init_img` is already the right size, use `-embiggen 1`;
the factor can also be less than one if the init_img is too big.

Esrgan_strength defaults to 0.75, and the overlap_ratio defaults to
0.25; both are optional.
Unlike Img2Img, the `--width` (`-W`) and `--height` (`-H`) arguments
do not control the size of the image as a whole, but the size of the
tiles used to Embiggen the image.
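
For instance, a run that uses smaller 384x384 tiles (which can help on cards with limited VRAM)
might look like this; the file name, strength and scaling factor are only illustrative:

```bash
dream> "a photo of a forest at sunset" -s 100 -W 384 -H 384 -I outputs/forest.png -f 0.4 -embiggen 2.5
```
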
ESRGAN is used to upscale the `--init_img` prior to cutting it into
tiles/pieces to run through img2img and then stitch back
together. Embiggen can be run without ESRGAN; just set the strength to
zero (e.g. `-embiggen 1.75 0`). The output of Embiggen can also be
upscaled after it's finished (`-U`).
The overlap is the minimum amount that tiles will overlap with adjacent
tiles, specified as either a ratio or a number of pixels. How much the
tiles overlap determines how likely the tiling is to be noticeable:
really small overlaps (e.g. a couple of pixels) may produce noticeable
grid-like fuzzy distortions in the final stitched image. On the other
hand, the overlapping space doesn't contribute to making the image bigger,
so the larger the overlap, the more tiles (and the more time) it will
take to finish.
Because the overlapping parts of tiles don't "contribute" to
increasing size, every tile after the first in a row or column
effectively only covers an extra `1 - overlap_ratio` on each axis. If
the input/`--init_img` is the same size as a tile, the ideal (for time)
scaling factors with the default overlap (0.25) are 1.75, 2.5, 3.25,
4.0, etc.
`-embiggen_tiles <spaced list of tiles>`
An advanced usage useful if you only want to alter parts of the image
while running Embiggen. It takes a list of tiles by number to run and
replace onto the initial image e.g. `1 3 5`. It's useful for either
fixing problem spots from a previous Embiggen run, or selectively
altering the prompt for sections of an image - for creative or
coherency reasons.
Tiles are numbered starting with one, and left-to-right,
top-to-bottom. So, if you are generating a 3x3 tiled image, the
middle row would be `4 5 6`.
## Example Usage
Running Embiggen with 512x512 tiles on an existing image, scaling up by a factor of 2.5x,
and doing the same again with the defaults written out explicitly (ESRGAN strength 0.75, overlap between tiles 0.25):
```bash
dream > a photo of a forest at sunset -s 100 -W 512 -H 512 -I outputs/forest.png -f 0.4 -embiggen 2.5
dream > a photo of a forest at sunset -s 100 -W 512 -H 512 -I outputs/forest.png -f 0.4 -embiggen 2.5 0.75 0.25
```
If your starting image was also 512x512 this should have taken 9 tiles.
If there weren't enough clouds in the sky of that forest you just made
(and that image is about 1280 pixels (512*2.5) wide, A.K.A. three
512x512 tiles with 0.25 overlaps wide), we can replace that top row of
tiles:
```bash
dream> a photo of puffy clouds over a forest at sunset -s 100 -W 512 -H 512 -I outputs/000002.seed.png -f 0.5 -embiggen_tiles 1 2 3
```
## Fixing Previously-Generated Images
It is easy to apply embiggen to any previously-generated file without having to
look up the original prompt and provide an initial image. Just use the
syntax `!fix path/to/file.png <embiggen>`. For example, you can rewrite the
previous command to look like this:
~~~~
dream> !fix ./outputs/000002.seed.png -embiggen_tiles 1 2 3
~~~~
A new file named `000002.seed.fixed.png` will be created in the output directory. Note that
the `!fix` command does not replace the original file, unlike the behavior at generate time.
You do not need to provide the prompt, and `!fix` automatically selects a good strength for
embiggen-ing.
**Note**
Because the same prompt is used on all the tiled images, and the model
doesn't have the context of anything outside the tile being run - it
can end up creating repeated patterns (also called 'motifs') across all
the tiles based on that prompt. The best way to combat this is
lowering the `--strength` (`-f`) to stay more true to the init image,
and increasing the number of steps so there is more compute-time to
create the detail. Anecdotally `--strength` 0.35-0.45 works pretty
well on most things. It may also work great in some examples even with
the `--strength` set high for patterns, landscapes, or subjects that
are more abstract. Because this is (relatively) fast, you can also
preserve the best parts from each.
Author: [Travco](https://github.com/travco)

docs/features/IMG2IMG.md

@ -1,30 +1,30 @@
---
title: Image-to-Image
---

# :material-image-multiple: **IMG2IMG**

This script also provides an `img2img` feature that lets you seed your creations with an initial
drawing or photo. This is a really cool feature that tells stable diffusion to build the prompt on
top of the image you provide, preserving the original's basic shape and layout. To use it, provide
the `--init_img` option as shown here:

```bash
dream> "waterfall and rainbow" --init_img=./init-images/crude_drawing.png --strength=0.5 -s100 -n4
```
The `--init_img (-I)` option gives the path to the seed picture. `--strength (-f)` controls how much
the original will be modified, ranging from `0.0` (keep the original intact), to `1.0` (ignore the
original completely). The default is `0.75`, and ranges from `0.25-0.75` give interesting results.

You may also pass a `-v<variation_amount>` option to generate `-n<iterations>` count variants on
the original image. This is done by passing the first generated image back into img2img the
requested number of times, which produces interesting variants.
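
For example, to generate four variants of the drawing above with a modest variation amount, a
command along these lines could be used (the values are illustrative):

```bash
dream> "waterfall and rainbow" --init_img=./init-images/crude_drawing.png --strength=0.5 -v0.2 -n4
```
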
If the initial image contains transparent regions, then Stable Diffusion will only draw within the
transparent regions, a process called "inpainting". However, for this to work correctly, the color
information underneath the transparent areas needs to be preserved, not erased.

More details can be found here:
[Creating Transparent Images For Inpainting](./INPAINTING.md#creating-transparent-regions-for-inpainting)

docs/features/INPAINTING.md

@ -1,31 +1,35 @@
---
title: Inpainting
---

# :octicons-paintbrush-16: Inpainting

## **Creating Transparent Regions for Inpainting**

Inpainting is really cool. To do it, you start with an initial image and use a photoeditor to make
one or more regions transparent (i.e. they have a "hole" in them). You then provide the path to this
image at the dream> command line using the `-I` switch. Stable Diffusion will only paint within the
transparent region.

There's a catch. In the current implementation, you have to prepare the initial image correctly so
that the underlying colors are preserved under the transparent area. Many image editing
applications will by default erase the color information under the transparent pixels and replace
them with white or black, which will lead to suboptimal inpainting. You also must take care to
export the PNG file in such a way that the color information is preserved.

If your photoeditor is erasing the underlying color information, `dream.py` will give you a big fat
warning. If you can't find a way to coax your photoeditor to retain color values under transparent
areas, then you can combine the `-I` and `-M` switches to provide both the original unedited image
and the masked (partially transparent) image:

```bash
dream> "man with cat on shoulder" -I./images/man.png -M./images/man-transparent.png
```
We are hoping to get rid of the need for this workaround in an upcoming release.
---
## Recipe for GIMP
[GIMP](https://www.gimp.org/) is a popular Linux photoediting tool.
@ -34,7 +38,7 @@ We are hoping to get rid of the need for this workaround in an upcoming release.
2. Layer->Transparency->Add Alpha Channel
3. Use the lasso tool to select the region to mask
4. Choose Select -> Float to create a floating selection
5. Open the Layers toolbar (^L) and select "Floating Selection"
5. Open the Layers toolbar (++ctrl+l++) and select "Floating Selection"
6. Set opacity to 0%
7. Export as PNG
8. In the export dialogue, Make sure the "Save colour values from
@ -44,35 +48,29 @@ We are hoping to get rid of the need for this workaround in an upcoming release.
## Recipe for Adobe Photoshop
1. Open image in Photoshop
![step1](../assets/step1.png)
2. Use any of the selection tools (Marquee, Lasso, or Wand) to select the area you desire to inpaint.
![step2](../assets/step2.png)

3. Because we'll be applying a mask over the area we want to preserve, you should now select the inverse by using the ++shift+ctrl+i++ shortcut, or right clicking and using the "Select Inverse" option.

4. You'll now create a mask by selecting the image layer, and Masking the selection. Make sure that you don't delete any of the underlying image, or your inpainting results will be dramatically impacted.

![step4](../assets/step4.png)
5. Make sure to hide any background layers that are present. You should see the mask applied to your image layer, and the image on your canvas should display the checkered background.
![step5](../assets/step5.png)

6. Save the image as a transparent PNG by using the "Save a Copy" option in the File menu, or using the Alt + Ctrl + S keyboard shortcut.
![step6](../assets/step6.png)
7. After following the inpainting instructions above (either through the CLI or the Web UI), marvel at your newfound ability to selectively dream. Lookin' good!
![step7](../assets/step7.png)
8. In the export dialogue, make sure the "Save colour values from transparent pixels" checkbox is selected.

docs/features/OTHERS.md

@ -1,65 +1,47 @@
---
title: Others
---
# :fontawesome-regular-share-from-square: Others
## **Google Colab**
Stable Diffusion AI Notebook: <a
href="https://colab.research.google.com/github/lstein/stable-diffusion/blob/main/notebooks/Stable_Diffusion_AI_Notebook.ipynb"
target="_parent">
<img
src="https://colab.research.google.com/assets/colab-badge.svg"
alt="Open In Colab"/></a> <br> Open and follow instructions to use an isolated environment running
Dream.<br>

Output Example: ![Colab Notebook](../assets/colab_notebook.png)
---
## **Seamless Tiling**
The seamless tiling mode causes generated images to seamlessly tile with themselves. To use it, add
the `--seamless` option when starting the script, which will cause all generated images to tile, or
add it to an individual `dream>` prompt as shown here:

```bash
dream> "pond garden with lotus by claude monet" --seamless -s100 -n4
```
---
## **Reading Prompts from a File**
You can automate `dream.py` by providing a text file with the prompts
you want to run, one line per prompt. The text file must be composed
with a text editor (e.g. Notepad) and not a word processor. Each line
should look like what you would type at the dream> prompt:
```
a beautiful sunny day in the park, children playing -n4 -C10
stormy weather on a mountain top, goats grazing -s100
innovative packaging for a squid's dinner -S137038382
```
Then pass this file's name to `dream.py` when you invoke it:
```
(ldm) ~/stable-diffusion$ python3 scripts/dream.py --from_file "path/to/prompts.txt"
```
You may read a series of prompts from standard input by providing a filename of `-`:
```
(ldm) ~/stable-diffusion$ echo "a beautiful day" | python3 scripts/dream.py --from_file -
```
---
## **Shortcuts: Reusing Seeds**
Since it is so common to reuse seeds while refining a prompt, there is now a shortcut as of version
1.11. Provide a `-S` (or `--seed`) switch of `-1` to use the seed of the most recent image
generated. If you produced multiple images with the `-n` switch, then you can go back further
using -2, -3, etc. up to the first image generated by the previous command. Sorry, but you can't go
back further than one command.

Here's an example of using this to do a quick refinement. It also illustrates using the new `-G`
switch to turn on upscaling and face enhancement (see previous section):

```bash
dream> a cute child playing hopscotch -G0.5
[...]
outputs/img-samples/000039.3498014304.png: "a cute child playing hopscotch" -s50 -W512 -H512 -C7.5 -mk_lms -S3498014304
@ -73,29 +55,13 @@ outputs/img-samples/000040.3498014304.png: "a cute child playing hopscotch" -G1.
---
## **Weighted Prompts**
You may weight different sections of the prompt to tell the sampler to attach different levels of
priority to them, by adding `:(number)` to the end of the section you wish to up- or downweight.
For example consider this prompt:
```
tabby cat:0.25 white duck:0.75 hybrid
```
This will tell the sampler to invest 25% of its effort on the tabby
cat aspect of the image and 75% on the white duck aspect
(surprisingly, this example actually works). The prompt weights can
use any combination of integers and floating point numbers, and they
do not need to add up to 1.
---
## **Simplified API**
For programmers who wish to incorporate stable-diffusion into other products, this repository
includes a simplified API for text to image generation, which lets you create images from a prompt
in just three lines of code:

```python
from ldm.generate import Generate
g = Generate()
outputs = g.txt2img("a unicorn in manhattan")
@ -109,16 +75,14 @@ Please see ldm/generate.py for more information. A set of example scripts is com
## **Preload Models**
In situations where you have limited internet connectivity or are blocked behind a firewall, you can
use the preload script to preload the required files for Stable Diffusion to run.

The preload script `scripts/preload_models.py` needs to be run at least once while connected to the
internet. In the following runs, it will load up the cached versions of the required files from the
`.cache` directory of the system.

```bash
(ldm) ~/stable-diffusion$ python3 ./scripts/preload_models.py
preloading bert tokenizer...
Downloading: 100%|██████████████████████████████████| 28.0/28.0 [00:00<00:00, 49.3kB/s]

docs/features/OUTPAINTING.md (new file)

@ -0,0 +1,78 @@
---
title: Outpainting
---
# :octicons-paintbrush-16: Outpainting
## Continuous outpainting

This extension uses the inpainting code to extend an existing image in
any of the directions "top", "right", "bottom" or "left". To use it you
need to provide an initial image with -I and an extension direction
with -D (direction). When extending using outpainting, a higher img2img
strength value of 0.83 is the default.
The code is not foolproof. Sometimes it will do a good job extending
the image, and other times it will generate ghost images and other
artifacts. In addition, the code works best on images that were
generated by dream.py, because it will be able to recover the original
prompt that generated the file and "understand" what you are trying to
achieve.
### Basic Usage
To illustrate, consider this image generated with the prompt "fantasy
portrait of elven princess." It's nice, but rather annoying that the
top of the head has been cropped off.
![elven_princess](../assets/outpainting/elven_princess.png)
We can fix that using the `!fix` command!
~~~~
dream> !fix my_images/elven_princess.png -D top 50
~~~~
This is telling dream.py to open up a rectangle 50 pixels high at the
top of the image and outpaint into it. The result is:
![elven_princess.fixed](../assets/outpainting/elven_princess.outpainted.png)
Voila! You can similarly specify `bottom`, `left` or `right` to
outpaint into these margins.
There are some limitations to be aware of:
1. You cannot change the size of the image rectangle. In the example,
notice that the whole image is shifted downwards by 50 pixels, rather
than the top being extended upwards.
2. Attempting to outpaint larger areas will frequently give rise to ugly
ghosting effects.
3. For best results, try increasing the step number.
4. If you don't specify a pixel value in -D, it will default to half
of the whole image, which is likely not what you want.
You can do more with `!fix`, including upscaling and facial
reconstruction of previously-generated images. See
[UPSCALE.md](./UPSCALE.md#fixing-previously-generated-images) for the details.
### Advanced Usage
For more control over the outpainting process, you can provide the
`-D` option at image generation time. This allows you to apply all the
controls, including the ability to resize the image and apply face-fixing
and upscaling. For example:
~~~~
dream> man with cat on shoulder -I./images/man.png -D bottom 100 -W960 -H960 -fit
~~~~
Or even shorter, since the prompt is read from the metadata of the old image:
~~~~
dream> -I./images/man.png -D bottom 100 -W960 -H960 -fit -U2 -G1
~~~~

docs/features/PROMPTS.md (new file)

@ -0,0 +1,90 @@
---
title: Prompting Features
---
# :octicons-command-palette-24: Prompting Features
## **Reading Prompts from a File**
You can automate `dream.py` by providing a text file with the prompts you want to run, one line per
prompt. The text file must be composed with a text editor (e.g. Notepad) and not a word processor.
Each line should look like what you would type at the dream> prompt:
```bash
a beautiful sunny day in the park, children playing -n4 -C10
stormy weather on a mountain top, goats grazing -s100
innovative packaging for a squid's dinner -S137038382
```
Then pass this file's name to `dream.py` when you invoke it:
```bash
(ldm) ~/stable-diffusion$ python3 scripts/dream.py --from_file "path/to/prompts.txt"
```
You may read a series of prompts from standard input by providing a filename of `-`:
```bash
(ldm) ~/stable-diffusion$ echo "a beautiful day" | python3 scripts/dream.py --from_file -
```
---
## **Weighted Prompts**
You may weight different sections of the prompt to tell the sampler to attach different levels of
priority to them, by adding `:(number)` to the end of the section you wish to up- or downweight. For
example consider this prompt:
```bash
tabby cat:0.25 white duck:0.75 hybrid
```
This will tell the sampler to invest 25% of its effort on the tabby cat aspect of the image and 75%
on the white duck aspect (surprisingly, this example actually works). The prompt weights can use any
combination of integers and floating point numbers, and they do not need to add up to 1.
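
Used at the `dream>` prompt, the weighted example above might be invoked like this (the switches
shown are arbitrary):

```bash
dream> "tabby cat:0.25 white duck:0.75 hybrid" -s 50 -W 512 -H 512 -C 7.5
```
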
---
## **Negative and Unconditioned Prompts**
Stable Diffusion's model will try to ignore any words between a pair of square brackets during image generation.
```bash
this is a test prompt [not really] to make you understand [cool] how this works.
```
In the above statement, the words `not really` and `cool` will be ignored by Stable Diffusion.

Here's a walkthrough that shows what it does.
original prompt:
`#!bash "A fantastical translucent poney made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve" -s 20 -W 512 -H 768 -C 7.5 -A k_euler_a -S 1654590180`
![step1](../assets/negative_prompt_walkthru/step1.png)
That image has a woman, so if we want the horse without a rider, we can influence the image not to have a woman by putting [woman] in the prompt, like this:
`#!bash "A fantastical translucent poney made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve [woman]" -s 20 -W 512 -H 768 -C 7.5 -A k_euler_a -S 1654590180`
![step2](../assets/negative_prompt_walkthru/step2.png)
That's nice - but say we also don't want the image to be quite so blue. We can add "blue" to the list of negative prompts, so it's now [woman blue]:
`#!bash "A fantastical translucent poney made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve [woman blue]" -s 20 -W 512 -H 768 -C 7.5 -A k_euler_a -S 1654590180`
![step3](../assets/negative_prompt_walkthru/step3.png)
Getting close - but there's no sense in having a saddle when our horse doesn't have a rider, so we'll add one more negative prompt: [woman blue saddle].
`#!bash "A fantastical translucent poney made of water and foam, ethereal, radiant, hyperalism, scottish folklore, digital painting, artstation, concept art, smooth, 8 k frostbite 3 engine, ultra detailed, art by artgerm and greg rutkowski and magali villeneuve [woman blue saddle]" -s 20 -W 512 -H 768 -C 7.5 -A k_euler_a -S 1654590180`
![step4](../assets/negative_prompt_walkthru/step4.png)
!!! note "Notes about this feature:"
* The only requirement for words to be ignored is that they are in between a pair of square brackets.
* You can provide multiple words within the same bracket.
* You can provide multiple brackets with multiple words in different places of your prompt. That works just fine.
* To improve typical anatomy problems, you can add negative prompts like `[bad anatomy, extra legs, extra arms, extra fingers, poorly drawn hands, poorly drawn feet, disfigured, out of frame, tiling, bad art, deformed, mutated]`.
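
For instance, a prompt that uses more than one bracketed group in different places (the subject and
switches here are purely illustrative) could look like:

```bash
dream> "a portrait of a knight [helmet] standing in a field [fog, rain]" -s 30 -C 7.5
```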

docs/features/TEXTUAL_INVERSION.md

@ -1,70 +1,91 @@
---
title: TEXTUAL_INVERSION
---

# :material-file-document-plus-outline: TEXTUAL_INVERSION

## **Personalizing Text-to-Image Generation**

You may personalize the generated images to provide your own styles or objects
by training a new LDM checkpoint and introducing a new vocabulary to the fixed
model as a (.pt) embeddings file. Alternatively, you may use or train
HuggingFace Concepts embeddings files (.bin) from
<https://huggingface.co/sd-concepts-library> and its associated notebooks.

## **Training**

To train, prepare a folder that contains images sized at 512x512 and execute the
following:

### WINDOWS

As the default backend is not available on Windows, if you're using that
platform, set the environment variable `PL_TORCH_DISTRIBUTED_BACKEND` to `gloo`

```bash
python3 ./main.py --base ./configs/stable-diffusion/v1-finetune.yaml \
    --actual_resume ./models/ldm/stable-diffusion-v1/model.ckpt \
    -t \
    -n my_cat \
    --gpus 0 \
    --data_root D:/textual-inversion/my_cat \
    --init_word 'cat'
```

During the training process, files will be created in
`/logs/[project][time][project]/` where you can see the process.
Conditioning contains the training prompt inputs, reconstruction the input
images for the training epoch samples, samples scaled for a sample of the prompt
and one with the init word provided.

On an RTX 3090, the process for SD will take ~1h @ 1.6 iterations/sec.

!!! note

    According to the associated paper, the optimal number of
    images is 3-5. Your model may not converge if you use more images than
    that.

Training will run indefinitely, but you may wish to stop it (with ctrl-c) before
the heat death of the universe, when you find a low loss epoch or around ~5000
iterations. Note that you can set a fixed limit on the number of training steps
by decreasing the "max_steps" option in
configs/stable_diffusion/v1-finetune.yaml (currently set to 4000000)

## **Run the Model**

Once the model is trained, specify the trained .pt or .bin file when starting
dream using

```bash
python3 ./scripts/dream.py --embedding_path /path/to/embedding.pt
```

Then, to utilize your subject at the dream prompt
```bash
dream> "a photo of *"
```
This also works with image2image
```bash
dream> "waterfall and rainbow in the style of *" --init_img=./init-images/crude_drawing.png --strength=0.5 -s100 -n4
```
For .pt files it's also possible to train multiple tokens (modify the
placeholder string in `configs/stable-diffusion/v1-finetune.yaml`) and combine
LDM checkpoints using:

```bash
python3 ./scripts/merge_embeddings.py \
    --manager_ckpts /path/to/first/embedding.pt \
    [</path/to/second/embedding.pt>,[...]] \
    --output_path /path/to/output/embedding.pt
```
Credit goes to rinongal and
[the repository](https://github.com/rinongal/textual_inversion). Please see the
repository and associated paper for details and limitations.

docs/features/UPSCALE.md

@ -1,105 +1,178 @@
---
title: Upscale
---

## Intro

The script provides the ability to restore faces and upscale. You can apply
these operations at the time you generate the images, or at any time to a
previously-generated PNG file, using the
[!fix](#fixing-previously-generated-images) command.
## Face Fixing
The default face restoration module is GFPGAN. The default upscale is
Real-ESRGAN. For an alternative face restoration module, see the CodeFormer
Support section below.
As of version 1.14, environment.yaml will install the Real-ESRGAN package into
the standard install location for python packages, and will put GFPGAN into a
subdirectory of "src" in the InvokeAI directory. (The reason for this is
that the standard GFPGAN distribution has a minor bug that adversely affects
image color.) Upscaling with Real-ESRGAN should "just work" without further
intervention. Simply pass the --upscale (-U) option on the dream> command line,
or indicate the desired scale on the popup in the Web GUI.
For **GFPGAN** to work, there is one additional step needed. You will need to
download and copy the GFPGAN
[models file](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth)
into **src/gfpgan/experiments/pretrained_models**. On Mac and Linux systems,
here's how you'd do it using **wget**:

```bash
wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth -P src/gfpgan/experiments/pretrained_models/
```

Make sure that you're in the InvokeAI directory when you do this.

Alternatively, if you have GFPGAN installed elsewhere, or if you are using an
earlier version of this package which asked you to install GFPGAN in a sibling
directory, you may use the `--gfpgan_dir` argument with `dream.py` to set a
custom path to your GFPGAN directory. _There are other GFPGAN related boot
arguments if you wish to customize further._
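
For example, a minimal invocation pointing `dream.py` at a custom GFPGAN
location might look like this (the path below is purely illustrative):

```bash
# point dream.py at a GFPGAN checkout that lives outside the InvokeAI tree;
# replace the path with wherever you actually installed GFPGAN
python3 scripts/dream.py --gfpgan_dir /home/me/projects/GFPGAN
```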

!!! warning "Internet connection needed"

    Users whose GPU machines are isolated from the Internet (e.g. on a
    University cluster) should be aware that the first time you run dream.py
    with GFPGAN and Real-ESRGAN turned on, it will try to download model files
    from the Internet. To rectify this, you may run
    `python3 scripts/preload_models.py` after you have installed GFPGAN and
    all its dependencies.
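
As a sketch of that offline workflow, the models can be fetched once while the
machine still has network access, after which `dream.py` can be started
normally:

```bash
# run once while the machine can still reach the Internet;
# the downloaded model files are cached for later offline use
python3 scripts/preload_models.py

# subsequent runs no longer need to download anything
python3 scripts/dream.py
```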
## Usage
You will now have access to two new prompt arguments.

### Upscaling

`-U : <upscaling_factor> <upscaling_strength>`

The upscaling prompt argument takes two values. The first value is a scaling
factor and should be set to either `2` or `4` only. This will either scale the
image 2x or 4x respectively using different models.

You can set the scaling strength between `0` and `1.0` to control the intensity
of the scaling. This is handy because AI upscalers generally tend to smooth out
texture details. If you wish to retain some of those for natural looking
results, we recommend using values between `0.5` and `0.8`.

If you do not explicitly specify an upscaling_strength, it will default to 0.75.
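
For instance (the prompt is just a placeholder), both of the following forms
are valid; the second relies on the 0.75 default described above:

```bash
# upscale 4x while keeping some texture (strength 0.6)
dream> a lighthouse on a rocky coast at dawn -U 4 0.6

# upscale 2x with the default strength of 0.75
dream> a lighthouse on a rocky coast at dawn -U 2
```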

### Face Restoration

`-G : <gfpgan_strength>`

This prompt argument controls the strength of the face restoration that is
being applied. Similar to upscaling, values between `0.5` and `0.8` are
recommended.

You can use either one or both without any conflicts. In cases where you use
both, the image will be first upscaled and then the face restoration process
will be executed to ensure you get the highest quality facial features.

`--save_orig`

When you use either `-U` or `-G`, the final result you get is upscaled or face
modified. If you want to save the original Stable Diffusion generation, you can
use the `--save_orig` prompt argument to save the original unaffected version
too.
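
As an illustration (again with a placeholder prompt), the following keeps the
untouched original alongside the upscaled, face-restored result:

```bash
# writes both the processed image and the raw Stable Diffusion output
dream> portrait of an old sea captain -U 2 0.6 -G 0.6 --save_orig
```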

### Example Usage

```bash
dream> superman dancing with a panda bear -U 2 0.6 -G 0.4
```
This also works with img2img:
```bash
dream> a man wearing a pineapple hat -I path/to/your/file.png -U 2 0.5 -G 0.6
```

!!! note

    GFPGAN and Real-ESRGAN are both memory intensive. In order to avoid crashes
    and memory overloads during the Stable Diffusion process, these effects are
    applied after Stable Diffusion has completed its work.

    In single image generations, you will see the output right away but when
    you are using multiple iterations, the images will first be generated and
    then upscaled and face restored after that process is complete. While the
    image generation is taking place, you will still be able to preview the
    base images.

If you wish to stop during the image generation but want to upscale or face
restore a particular generated image, pass it again with the same prompt and
generated seed along with the `-U` and `-G` prompt arguments to perform those
actions.
## CodeFormer Support
This repo also allows you to perform face restoration using
[CodeFormer](https://github.com/sczhou/CodeFormer).

In order to set up CodeFormer to work, you need to download the models like
with GFPGAN. You can do this either by running `preload_models.py` or by
manually downloading the
[model file](https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth)
and saving it to the `ldm/restoration/codeformer/weights` folder.
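
If you take the manual route, a sketch of the download (run from the InvokeAI
directory, and assuming `wget` is available) looks like this:

```bash
# fetch the CodeFormer weights into the folder the restoration code expects
wget https://github.com/sczhou/CodeFormer/releases/download/v0.1.0/codeformer.pth -P ldm/restoration/codeformer/weights/
```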

You can use the `-ft` prompt argument to swap between CodeFormer and the
default GFPGAN. The above-mentioned `-G` prompt argument will allow you to
control the strength of the restoration effect.
### Usage:
The following command will perform face restoration with CodeFormer instead of
the default GFPGAN:

`<prompt> -G 0.8 -ft codeformer`
### Other Options:
- `-cf` - CodeFormer Fidelity takes values between `0` and `1`. `0` produces
  high-quality results but low accuracy, and `1` produces lower-quality results
  but higher accuracy to your original face.

The following command will perform face restoration with CodeFormer. CodeFormer
will output a result that closely matches the input face:

`<prompt> -G 1.0 -ft codeformer -cf 0.9`

The following command will perform face restoration with CodeFormer. CodeFormer
will output the best restoration possible. This may deviate slightly from the
original face. This is an excellent option to use in situations where there is
very little facial data to work with:

`<prompt> -G 1.0 -ft codeformer -cf 0.1`
## Fixing Previously-Generated Images
It is easy to apply face restoration and/or upscaling to any
previously-generated file. Just use the syntax
`!fix path/to/file.png <options>`. For example, to apply GFPGAN at strength 0.8
and upscale 2X for a file named `./outputs/img-samples/000044.2945021133.png`,
just run:
```bash
dream> !fix ./outputs/img-samples/000044.2945021133.png -G 0.8 -U 2
```
A new file named `000044.2945021133.fixed.png` will be created in the output
directory. Note that the `!fix` command does not replace the original file,
unlike the behavior at generate time.
### Disabling:
If, for some reason, you do not wish to load the GFPGAN and/or ESRGAN libraries,
you can disable them on the dream.py command line with the `--no_restore` and
`--no_upscale` options, respectively.
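
A sketch of such an invocation, using only the two flags named above:

```bash
# start dream.py without loading the face restoration and upscaling models
python3 scripts/dream.py --no_restore --no_upscale
```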

---
title: Variations
---

# :material-tune-variant: Variations

## Intro
Release 1.13 of SD-Dream adds support for image variations.
You are able to do the following:

1. Generate a series of systematic variations of an image, given a prompt. The
   amount of variation from one image to the next can be controlled.

2. Given two or more variations that you like, you can combine them in a
   weighted fashion.

---
This cheat sheet provides a quick guide for how this works in practice, using
variations to create the desired image of Xena, Warrior Princess.

---
## Step 1 -- Find a base image that you like
The prompt we will use throughout is
`lucy lawless as xena, warrior princess, character portrait, high resolution`.

This will be indicated as `prompt` in the examples below.

First we let SD create a series of images in the usual way, in this case
requesting six iterations:
```bash
dream> lucy lawless as xena, warrior princess, character portrait, high resolution -n6
...
Outputs:
./outputs/Xena/000001.3357757885.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -S3357757885
```
The one with seed 3357757885 looks nice:

![var1](../assets/variation_walkthru/000001.3357757885.png)

---
## Step 2 - Generating Variations
Let's try to generate some variations. Using the same seed, we pass the argument
`-v0.2` (or `--variant_amount`), which generates a series of variations each
differing by a variation amount of 0.2. This number ranges from `0` to `1.0`,
with higher numbers being larger amounts of variation.
```bash
dream> "prompt" -n6 -S3357757885 -v0.2
...
Outputs:
...
```
### Variation Sub Seeding
Note that the output for each image has a `-V` option giving the "variant
subseed" for that image, consisting of a seed followed by the variation amount
used to generate it.

This gives us a series of closely-related variations, including the two shown
here.

![var2](../assets/variation_walkthru/000002.3647897225.png)

![var3](../assets/variation_walkthru/000002.1614299449.png)

I like the expression on Xena's face in the first one (subseed 3647897225), and
the armor on her shoulder in the second one (subseed 1614299449). Can we combine
them to get the best of both worlds?

We combine the two variations using `-V` (`--with_variations`). Again, we must
provide the seed for the originally-chosen image in order for this to work.

```bash
dream> "prompt" -S3357757885 -V3647897225,0.1,1614299449,0.1
Outputs:
./outputs/Xena/000003.1614299449.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1 -S3357757885
```

Here we are providing equal weights (0.1 and 0.1) for both the subseeds. The
resulting image is close, but not exactly what I wanted:

![var4](../assets/variation_walkthru/000003.1614299449.png)

We could either try combining the images with different weights, or we can
generate more variations around the almost-but-not-quite image. We do the
latter, using both the `-V` (combining) and `-v` (variation strength) options.
Note that we use `-n6` to generate 6 variations:

```bash
dream> "prompt" -S3357757885 -V3647897225,0.1,1614299449,0.1 -v0.05 -n6
Outputs:
./outputs/Xena/000004.3279757577.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,3279757577:0.05 -S3357757885
./outputs/Xena/000004.2853129515.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,2853129515:0.05 -S3357757885
./outputs/Xena/000004.2183375608.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,2183375608:0.05 -S3357757885
```

This produces six images, all slight variations on the combination of the
chosen two images. Here's the one I like best:

![var5](../assets/variation_walkthru/000004.3747154981.png)

As you can see, this is a very powerful tool, which, when combined with
subprompt weighting, gives you great control over the content and quality of
your generated images.

---
title: Barebones Web Server
---

# :material-web: Barebones Web Server

As of version 1.10, this distribution comes with a bare bones web server (see
screenshot). To use it, run the `dream.py` script by adding the `--web`
option.

```bash
(ldm) ~/stable-diffusion$ python3 scripts/dream.py --web
```

You can then connect to the server by pointing your web browser at
http://localhost:9090, or to the network name or IP address of the server.
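
For a quick sanity check that the server is answering (this assumes `curl` is
available on your system; it is not part of this distribution), you can fetch
the first few lines of the web page from the same machine:

```bash
# should print the beginning of the web interface's HTML
curl -s http://localhost:9090 | head -n 5
```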

Kudos to [Tesseract Cat](https://github.com/TesseractCat) for contributing this
code, and to [dagf2101](https://github.com/dagf2101) for refining it.

![Dream Web Server](../assets/dream_web_server.png)