mkdocs fixes, PR #1032

Squashed commit of the following:

commit 2c1e0168bb03a2cd625f2d4aca40eee0fdf7e4af
Merge: 2325c6c 31f2733
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Oct 11 08:33:18 2022 -0400

    Merge branch 'mkdocs-fixes' of https://github.com/mauwii/stable-diffusion into mauwii-mkdocs-fixes

commit 31f2733e89
Merge: d9d6d3a a61a690
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Tue Oct 11 08:05:52 2022 -0400

    Merge branch 'main' into mkdocs-fixes

commit d9d6d3af3f
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 08:13:04 2022 +0200

    some more minor, overlooked fixes to IMG2IMG

commit 4ab5a2aeba
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 07:49:11 2022 +0200

    add forgotten alt-text to images

commit f778bd9c0f
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 07:18:11 2022 +0200

    update OTHER.md
    - fix codeblocks, add admonitions, embed graphic
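
For orientation: the "admonitions" mentioned in these commit messages are mkdocs-material call-out blocks. A rough sketch of the syntax (illustrative only, not copied from this PR's diff):

```markdown
!!! tip

    Body text indented four spaces renders inside the call-out box.
```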

commit a19f148a8e
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 06:51:29 2022 +0200

    update IMG2IMG.md

commit c1f1dfa714
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 06:10:25 2022 +0200

    update EMBIGGEN.md
    - fix codeblocks
    - fix toc
    - use admonitions

commit 791e6c63ef
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 05:58:53 2022 +0200

    better admonitions for CLI.md

commit e078025f00
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 05:50:32 2022 +0200

    huge update to CLI.md
    way too many updates to list them all, including:
    - render keys for keyboard-shortcuts
    - quote commands and "unhide" parameter-values (like `<int>`, `<string>`)
    - fix codeblocks
    - quote commands
    - quote filenames
    - use admonitions
    - ....
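
The "render keys" item above uses mkdocs-material's keycap markup (the pymdownx.keys extension), in which `++ctrl+a++` renders as actual keyboard keys; two lines from the CLI.md diff further below show the pattern:

```markdown
- To move to the very beginning of the command, type ++ctrl+a++ (or ++command+a++ on the Mac)
- To move to the end of the command, type ++ctrl+e++.
```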

commit bd98dd2307
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 04:49:57 2022 +0200

    fix INPAINTING.md
    - fix numbered list
    - replace text key combos with actual rendered keyboard keys

commit 5392000335
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 04:30:11 2022 +0200

    fix numbered list and codeblocks in INSTALL_WINDOWS

commit ffe9276f1e
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 04:12:56 2022 +0200

    fix numbered list in INSTALL_LINUX.md
    also fix blank lines, codeblocks and admonition

commit 2c6a6a567f
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 03:51:03 2022 +0200

    upgrade INSTALL_MAC.md:
    - use annotations and content-tabs

    yes, this looks ugly in the repo afterwards, but please also look at the rendered mkdocs site:
    https://mauwii.github.io/stable-diffusion/installation/INSTALL_MAC/
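
Content tabs in mkdocs-material are written as `===` blocks with an indented body; a minimal sketch with hypothetical tab labels (the real labels live in INSTALL_MAC.md):

```markdown
=== "Intel"

    Intel-Mac installation steps go here.

=== "Apple Silicon"

    Apple-Silicon installation steps go here.
```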

commit 8f6c544480
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 01:43:11 2022 +0200

    comment out PR part in mkdocs-flow.yml

commit b52c14a67f
Merge: 97ebe58 a1b0b91
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 01:17:28 2022 +0200

    Merge branch 'mkdocs-fixes' of github.com:mauwii/stable-diffusion into mkdocs-fixes

commit a1b0b91bb3
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:59:44 2022 +0200

    fix conda env in codeblock

commit 5f9f9a266e
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:43:46 2022 +0200

    fix forgotten title in TEXTUAL_INVERSION

commit 8f025b034e
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:41:52 2022 +0200

    quote repo_url and repo_name
    otherwise the version/stars/forks did not appear
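
In mkdocs.yml terms the fix amounts to quoting the two values, roughly as follows (a sketch; the exact strings are whatever the repository's mkdocs.yml carries):

```yaml
# Quoting these values lets the version/stars/forks badge render.
repo_name: 'lstein/stable-diffusion'
repo_url: 'https://github.com/lstein/stable-diffusion'
```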

commit 3a52b7deb3
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:39:54 2022 +0200

    fix TEXTUAL_INVERSION headline to fit the others

commit 389b21f966
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:35:48 2022 +0200

    fix SAMPLER_CONVERGENCE and add emoji

commit f26fc79a18
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:32:04 2022 +0200

    fix INSTALL_DOCKER.md:
    - fix title (Docker instead of "Before you begin")
    - add headline with Emoji
    - fix headlines so the TOC renders correctly

commit cbc3520489
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:24:58 2022 +0200

    add headline with emoji to INSTALL_MAC.md

commit 25f0614d66
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:21:01 2022 +0200

    add log emoji to docs/CHANGELOG.md

commit 42005688fa
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:20:47 2022 +0200

    use better fitting Icon for new Name

commit 0c65bad7f5
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:09:07 2022 +0200

    add Headline with Emoji to WEB and POSTPROCESS

commit 1c1cf2692e
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 23:56:16 2022 +0200

    update index.md:
    - remove unused template reference
    - make headline rendered bold and underlined, add (kind of) subtitle
    - update discord badge and link
    - update Quick links to look like in GH-Readme
      - also remove self reference to docs
    - add screenshot as in GH-Readme
    - add note pointing to issues tab
    - update path in command line to reflect new Repo Name

commit 0e29b0737e
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 23:23:10 2022 +0200

    change site_name to `Stable Diffusion Toolkit Docs`

commit ad8a60d992
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 23:00:02 2022 +0200

    fix repo_url in mkdocs.yml

commit 234569d6b6
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 22:54:39 2022 +0200

    fix link to upscaling in WEB.md and TOC
    - TOC fixed by adding `#` to every headline after `## Parting remarks`
    - add missing blank lines
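
Schematically, the TOC fix demotes every later section by one level so it nests under `## Parting remarks` (hypothetical heading name):

```markdown
## Parting remarks

### Further reading <!-- was "## Further reading"; the extra "#" nests it in the TOC -->
```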

commit 97c84ad824
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 22:25:32 2022 +0200

    fix broken links in docs/CHANGELOG.md

commit bce62b3a32
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 22:15:37 2022 +0200

    add title to CHANGELOG.md to render TOC without `**`
    alternatively remove `**` around headline

commit 97ebe58b5b
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:59:44 2022 +0200

    fix conda env in codeblock

commit 87ac217e43
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:43:46 2022 +0200

    fix forgotten title in TEXTUAL_INVERSION

commit 91439e8a52
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:41:52 2022 +0200

    quote repo_url and repo_name
    otherwise the version/stars/forks did not appear

commit 8a632a9e8f
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:39:54 2022 +0200

    fix TEXTUAL_INVERSION headline to fit the others

commit 7c8ffe2feb
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:35:48 2022 +0200

    fix SAMPLER_CONVERGENCE and add emoji

commit e2e86d2d11
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:32:04 2022 +0200

    fix INSTALL_DOCKER.md:
    - fix title (Docker instead of "Before you begin")
    - add headline with Emoji
    - fix headlines so the TOC renders correctly

commit 8b54c083fe
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:24:58 2022 +0200

    add headline with emoji to INSTALL_MAC.md

commit 8d8a032434
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:21:01 2022 +0200

    add log emoji to docs/CHANGELOG.md

commit 76519f6fa4
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:20:47 2022 +0200

    use better fitting Icon for new Name

commit aff0725533
Author: mauwii <Mauwii@outlook.de>
Date:   Tue Oct 11 00:09:07 2022 +0200

    add Headline with Emoji to WEB and POSTPROCESS

commit 0f7898cbdd
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 23:56:16 2022 +0200

    update index.md:
    - remove unused template reference
    - make headline rendered bold and underlined, add (kind of) subtitle
    - update discord badge and link
    - update Quick links to look like in GH-Readme
      - also remove self reference to docs
    - add screenshot as in GH-Readme
    - add note pointing to issues tab
    - update path in command line to reflect new Repo Name

commit f4c04eadf8
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 23:23:10 2022 +0200

    change site_name to `Stable Diffusion Toolkit Docs`

commit 6e624827c0
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 23:00:02 2022 +0200

    fix repo_url in mkdocs.yml

commit 158848dd7e
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 22:54:39 2022 +0200

    fix link to upscaling in WEB.md and TOC
    - TOC fixed by adding `#` to every headline after `## Parting remarks`
    - add missing blank lines

commit 533736e135
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 22:29:46 2022 +0200

    fix link to truncation_comparison.jpg in OTHER.md

commit dd335142df
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 22:25:32 2022 +0200

    fix broken links in docs/CHANGELOG.md

commit 374dd54f30
Author: mauwii <Mauwii@outlook.de>
Date:   Mon Oct 10 22:15:37 2022 +0200

    add title to CHANGELOG.md to render TOC without `**`
    alternatively remove `**` around headline
Committed by: Lincoln Stein, 2022-10-11 08:35:21 -04:00
Parent: 2325c6cd40
Commit: 79e79b78aa
17 changed files with 500 additions and 411 deletions

Changed file: mkdocs-flow.yml

@@ -3,9 +3,9 @@ on:
   push:
     branches:
      - main
-  pull_request:
-    branches:
-      - main
+  # pull_request:
+  #   branches:
+  #     - main
 jobs:
   build:
     name: Deploy docs to GitHub Pages

Changed file: docs/CHANGELOG.md

@@ -1,4 +1,8 @@
-# **Changelog**
+---
+title: Changelog
+---
+
+# :octicons-log-16: **Changelog**

 - v2.0.0 (9 October 2022)
@@ -39,7 +43,7 @@
 - v1.13 (3 September 2022
-  - Support image variations (see [VARIATIONS](docs/features/VARIATIONS.md)
+  - Support image variations (see [VARIATIONS](features/VARIATIONS.md)
   ([Kevin Gibbons](https://github.com/bakkot) and many contributors and reviewers)
 - Supports a Google Colab notebook for a standalone server running on Google hardware
   [Arturo Mendivil](https://github.com/artmen1516)
@@ -179,4 +183,4 @@
 ## Links

-- **[Read Me](../readme.md)**
+- **[Read Me](index.md)**

Changed file: CLI.md

@@ -60,7 +60,7 @@ invoke> q
 ![invoke-py-demo](../assets/dream-py-demo.png)

 The `invoke>` prompt's arguments are pretty much identical to those used in the
-Discord bot, except you don't need to type "!invoke" (it doesn't hurt if you do).
+Discord bot, except you don't need to type `!invoke` (it doesn't hurt if you do).
 A significant change is that creation of individual images is now the default
 unless `--grid` (`-g`) is given. A full list is given in
 [List of prompt arguments](#list-of-prompt-arguments).
@@ -75,8 +75,7 @@ the location of the model weight files.
 These command-line arguments can be passed to `invoke.py` when you first run it
 from the Windows, Mac or Linux command line. Some set defaults that can be
-overridden on a per-prompt basis (see [List of prompt arguments]
-(#list-of-prompt-arguments). Others
+overridden on a per-prompt basis (see [List of prompt arguments](#list-of-prompt-arguments). Others

 | Argument <img width="240" align="right"/> | Shortcut <img width="100" align="right"/> | Default <img width="320" align="right"/> | Description |
 | ----------------------------------------- | ----------------------------------------- | ---------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
@@ -101,165 +100,181 @@ overridden on a per-prompt basis (see [List of prompt arguments]
 | `--free_gpu_mem` | | `False` | Free GPU memory after sampling, to allow image decoding and saving in low VRAM conditions |
 | `--precision` | | `auto` | Set model precision, default is selected by device. Options: auto, float32, float16, autocast |

-#### deprecated
+!!! warning deprecated

-These arguments are deprecated but still work:
+    These arguments are deprecated but still work:

-| Argument | Shortcut | Default | Description |
-|--------------------|------------|---------------------|--------------|
-| --weights <path> | | None | Pth to weights file; use `--model stable-diffusion-1.4` instead |
-| --laion400m | -l | False | Use older LAION400m weights; use `--model=laion400m` instead |
+    <div align="center" markdown>
+
+    | Argument | Shortcut | Default | Description |
+    |--------------------|------------|---------------------|--------------|
+    | `--weights <path>` | | `None` | Pth to weights file; use `--model stable-diffusion-1.4` instead |
+    | `--laion400m` | `-l` | `False` | Use older LAION400m weights; use `--model=laion400m` instead |
+
+    </div>

-**A note on path names:** On Windows systems, you may run into
-problems when passing the invoke script standard backslashed path
-names because the Python interpreter treats "\" as an escape.
-You can either double your slashes (ick): C:\\\\path\\\\to\\\\my\\\\file, or
-use Linux/Mac style forward slashes (better): C:/path/to/my/file.
+!!! tip
+
+    On Windows systems, you may run into
+    problems when passing the invoke script standard backslashed path
+    names because the Python interpreter treats "\" as an escape.
+    You can either double your slashes (ick): `C:\\path\\to\\my\\file`, or
+    use Linux/Mac style forward slashes (better): `C:/path/to/my/file`.

 ## List of prompt arguments

 After the invoke.py script initializes, it will present you with a
-**invoke>** prompt. Here you can enter information to generate images
-from text (txt2img), to embellish an existing image or sketch
-(img2img), or to selectively alter chosen regions of the image
-(inpainting).
+`invoke>` prompt. Here you can enter information to generate images
+from text ([txt2img](#txt2img)), to embellish an existing image or sketch
+([img2img](#img2img)), or to selectively alter chosen regions of the image
+([inpainting](#inpainting)).

-### This is an example of txt2img:
+### txt2img

-~~~~
-invoke> waterfall and rainbow -W640 -H480
-~~~~
+!!! example

-This will create the requested image with the dimensions 640 (width)
-and 480 (height).
+    ```bash
+    invoke> waterfall and rainbow -W640 -H480
+    ```
+
+    This will create the requested image with the dimensions 640 (width)
+    and 480 (height).

 Here are the invoke> command that apply to txt2img:

-| Argument | Shortcut | Default | Description |
+| Argument <img width="680" align="right"/> | Shortcut <img width="420" align="right"/> | Default <img width="480" align="right"/> | Description |
 |--------------------|------------|---------------------|--------------|
-| "my prompt" | | | Text prompt to use. The quotation marks are optional. |
-| --width <int> | -W<int> | 512 | Width of generated image |
-| --height <int> | -H<int> | 512 | Height of generated image |
-| --iterations <int> | -n<int> | 1 | How many images to generate from this prompt |
-| --steps <int> | -s<int> | 50 | How many steps of refinement to apply |
-| --cfg_scale <float>| -C<float> | 7.5 | How hard to try to match the prompt to the generated image; any number greater than 1.0 works, but the useful range is roughly 5.0 to 20.0 |
-| --seed <int> | -S<int> | None | Set the random seed for the next series of images. This can be used to recreate an image generated previously.|
-| --sampler <sampler>| -A<sampler>| k_lms | Sampler to use. Use -h to get list of available samplers. |
-| --hires_fix | | | Larger images often have duplication artefacts. This option suppresses duplicates by generating the image at low res, and then using img2img to increase the resolution |
-| --grid | -g | False | Turn on grid mode to return a single image combining all the images generated by this prompt |
-| --individual | -i | True | Turn off grid mode (deprecated; leave off --grid instead) |
-| --outdir <path> | -o<path> | outputs/img_samples | Temporarily change the location of these images |
-| --seamless | | False | Activate seamless tiling for interesting effects |
-| --log_tokenization | -t | False | Display a color-coded list of the parsed tokens derived from the prompt |
-| --skip_normalization| -x | False | Weighted subprompts will not be normalized. See [Weighted Prompts](./OTHER.md#weighted-prompts) |
-| --upscale <int> <float> | -U <int> <float> | -U 1 0.75| Upscale image by magnification factor (2, 4), and set strength of upscaling (0.0-1.0). If strength not set, will default to 0.75. |
-| --gfpgan_strength <float> | -G <float> | -G0 | Fix faces using the GFPGAN algorithm; argument indicates how hard the algorithm should try (0.0-1.0) |
-| --save_original | -save_orig| False | When upscaling or fixing faces, this will cause the original image to be saved rather than replaced. |
-| --variation <float> |-v<float>| 0.0 | Add a bit of noise (0.0=none, 1.0=high) to the image in order to generate a series of variations. Usually used in combination with -S<seed> and -n<int> to generate a series a riffs on a starting image. See [Variations](./VARIATIONS.md). |
-| --with_variations <pattern> | -V<pattern>| None | Combine two or more variations. See [Variations](./VARIATIONS.md) for now to use this. |
+| `"my prompt"` | | | Text prompt to use. The quotation marks are optional. |
+| `--width <int>` | `-W<int>` | `512` | Width of generated image |
+| `--height <int>` | `-H<int>` | `512` | Height of generated image |
+| `--iterations <int>` | `-n<int>` | `1` | How many images to generate from this prompt |
+| `--steps <int>` | `-s<int>` | `50` | How many steps of refinement to apply |
+| `--cfg_scale <float>`| `-C<float>` | `7.5` | How hard to try to match the prompt to the generated image; any number greater than 1.0 works, but the useful range is roughly 5.0 to 20.0 |
+| `--seed <int>` | `-S<int>` | `None` | Set the random seed for the next series of images. This can be used to recreate an image generated previously.|
+| `--sampler <sampler>`| `-A<sampler>`| `k_lms` | Sampler to use. Use -h to get list of available samplers. |
+| `--hires_fix` | | | Larger images often have duplication artefacts. This option suppresses duplicates by generating the image at low res, and then using img2img to increase the resolution |
+| `--grid` | `-g` | `False` | Turn on grid mode to return a single image combining all the images generated by this prompt |
+| `--individual` | `-i` | `True` | Turn off grid mode (deprecated; leave off `--grid` instead) |
+| `--outdir <path>` | `-o<path>` | `outputs/img_samples` | Temporarily change the location of these images |
+| `--seamless` | | `False` | Activate seamless tiling for interesting effects |
+| `--log_tokenization` | `-t` | `False` | Display a color-coded list of the parsed tokens derived from the prompt |
+| `--skip_normalization`| `-x` | `False` | Weighted subprompts will not be normalized. See [Weighted Prompts](./OTHER.md#weighted-prompts) |
+| `--upscale <int> <float>` | `-U <int> <float>` | `-U 1 0.75`| Upscale image by magnification factor (2, 4), and set strength of upscaling (0.0-1.0). If strength not set, will default to 0.75. |
+| `--gfpgan_strength <float>` | `-G <float>` | `-G0` | Fix faces using the GFPGAN algorithm; argument indicates how hard the algorithm should try (0.0-1.0) |
+| `--save_original` | `-save_orig`| `False` | When upscaling or fixing faces, this will cause the original image to be saved rather than replaced. |
+| `--variation <float>` |`-v<float>`| `0.0` | Add a bit of noise (0.0=none, 1.0=high) to the image in order to generate a series of variations. Usually used in combination with `-S<seed>` and `-n<int>` to generate a series a riffs on a starting image. See [Variations](./VARIATIONS.md). |
+| `--with_variations <pattern>` | `-V<pattern>`| `None` | Combine two or more variations. See [Variations](./VARIATIONS.md) for now to use this. |

-Note that the width and height of the image must be multiples of
-64. You can provide different values, but they will be rounded down to
-the nearest multiple of 64.
+!!! note
+
+    The width and height of the image must be multiples of
+    64. You can provide different values, but they will be rounded down to
+    the nearest multiple of 64.

-### This is an example of img2img:
+### img2img

-~~~~
-invoke> waterfall and rainbow -I./vacation-photo.png -W640 -H480 --fit
-~~~~
+!!! example

-This will modify the indicated vacation photograph by making it more
-like the prompt. Results will vary greatly depending on what is in the
-image. We also ask to --fit the image into a box no bigger than
-640x480. Otherwise the image size will be identical to the provided
-photo and you may run out of memory if it is large.
+    ```bash
+    invoke> waterfall and rainbow -I./vacation-photo.png -W640 -H480 --fit
+    ```
+
+    This will modify the indicated vacation photograph by making it more
+    like the prompt. Results will vary greatly depending on what is in the
+    image. We also ask to `--fit` the image into a box no bigger than
+    640x480. Otherwise the image size will be identical to the provided
+    photo and you may run out of memory if it is large.

 In addition to the command-line options recognized by txt2img, img2img
 accepts additional options:

-| Argument | Shortcut | Default | Description |
-|--------------------|------------|---------------------|--------------|
-| --init_img <path> | -I<path> | None | Path to the initialization image |
-| --fit | -F | False | Scale the image to fit into the specified -H and -W dimensions |
-| --strength <float> | -s<float> | 0.75 | How hard to try to match the prompt to the initial image. Ranges from 0.0-0.99, with higher values replacing the initial image completely.|
+| Argument <img width="160" align="right"/> | Shortcut | Default | Description |
+|----------------------|-------------|-----------------|--------------|
+| `--init_img <path>` | `-I<path>` | `None` | Path to the initialization image |
+| `--fit` | `-F` | `False` | Scale the image to fit into the specified -H and -W dimensions |
+| `--strength <float>` | `-s<float>` | `0.75` | How hard to try to match the prompt to the initial image. Ranges from 0.0-0.99, with higher values replacing the initial image completely.|

-### This is an example of inpainting:
+### inpainting

-~~~~
-invoke> waterfall and rainbow -I./vacation-photo.png -M./vacation-mask.png -W640 -H480 --fit
-~~~~
+!!! example

-This will do the same thing as img2img, but image alterations will
-only occur within transparent areas defined by the mask file specified
-by -M. You may also supply just a single initial image with the areas
-to overpaint made transparent, but you must be careful not to destroy
-the pixels underneath when you create the transparent areas. See
-[Inpainting](./INPAINTING.md) for details.
+    ```bash
+    invoke> waterfall and rainbow -I./vacation-photo.png -M./vacation-mask.png -W640 -H480 --fit
+    ```
+
+    This will do the same thing as img2img, but image alterations will
+    only occur within transparent areas defined by the mask file specified
+    by `-M`. You may also supply just a single initial image with the areas
+    to overpaint made transparent, but you must be careful not to destroy
+    the pixels underneath when you create the transparent areas. See
+    [Inpainting](./INPAINTING.md) for details.

 inpainting accepts all the arguments used for txt2img and img2img, as
 well as the --mask (-M) argument:

-| Argument | Shortcut | Default | Description |
+| Argument <img width="100" align="right"/> | Shortcut | Default | Description |
 |--------------------|------------|---------------------|--------------|
-| --init_mask <path> | -M<path> | None |Path to an image the same size as the initial_image, with areas for inpainting made transparent.|
+| `--init_mask <path>` | `-M<path>` | `None` |Path to an image the same size as the initial_image, with areas for inpainting made transparent.|

-# Convenience commands
+## Convenience commands

 In addition to the standard image generation arguments, there are a
 series of convenience commands that begin with !:

-## !fix
+### `!fix`

 This command runs a post-processor on a previously-generated image. It
-takes a PNG filename or path and applies your choice of the -U, -G, or
---embiggen switches in order to fix faces or upscale. If you provide a
+takes a PNG filename or path and applies your choice of the `-U`, `-G`, or
+`--embiggen` switches in order to fix faces or upscale. If you provide a
 filename, the script will look for it in the current output
 directory. Otherwise you can provide a full or partial path to the
 desired file.

 Some examples:

-Upscale to 4X its original size and fix faces using codeformer:
-~~~
-invoke> !fix 0000045.4829112.png -G1 -U4 -ft codeformer
-~~~
+!!! example ""
+
+    Upscale to 4X its original size and fix faces using codeformer:
+
+    ```bash
+    invoke> !fix 0000045.4829112.png -G1 -U4 -ft codeformer
+    ```

-Use the GFPGAN algorithm to fix faces, then upscale to 3X using --embiggen:
-~~~
-invoke> !fix 0000045.4829112.png -G0.8 -ft gfpgan
->> fixing outputs/img-samples/0000045.4829112.png
->> retrieved seed 4829112 and prompt "boy enjoying a banana split"
->> GFPGAN - Restoring Faces for image seed:4829112
-Outputs:
-[1] outputs/img-samples/000017.4829112.gfpgan-00.png: !fix "outputs/img-samples/0000045.4829112.png" -s 50 -S -W 512 -H 512 -C 7.5 -A k_lms -G 0.8
-invoke> !fix 000017.4829112.gfpgan-00.png --embiggen 3
-...lots of text...
-Outputs:
-[2] outputs/img-samples/000018.2273800735.embiggen-00.png: !fix "outputs/img-samples/000017.243781548.gfpgan-00.png" -s 50 -S 2273800735 -W 512 -H 512 -C 7.5 -A k_lms --embiggen 3.0 0.75 0.25
-~~~
+!!! example ""
+
+    Use the GFPGAN algorithm to fix faces, then upscale to 3X using --embiggen:
+
+    ```bash
+    invoke> !fix 0000045.4829112.png -G0.8 -ft gfpgan
+    >> fixing outputs/img-samples/0000045.4829112.png
+    >> retrieved seed 4829112 and prompt "boy enjoying a banana split"
+    >> GFPGAN - Restoring Faces for image seed:4829112
+    Outputs:
+    [1] outputs/img-samples/000017.4829112.gfpgan-00.png: !fix "outputs/img-samples/0000045.4829112.png" -s 50 -S -W 512 -H 512 -C 7.5 -A k_lms -G 0.8
+    invoke> !fix 000017.4829112.gfpgan-00.png --embiggen 3
+    ...lots of text...
+    Outputs:
+    [2] outputs/img-samples/000018.2273800735.embiggen-00.png: !fix "outputs/img-samples/000017.243781548.gfpgan-00.png" -s 50 -S 2273800735 -W 512 -H 512 -C 7.5 -A k_lms --embiggen 3.0 0.75 0.25
+    ```

-## !fetch
+### `!fetch`

 This command retrieves the generation parameters from a previously
 generated image and either loads them into the command line. You may
 provide either the name of a file in the current output directory, or
 a full file path.

-~~~
+```bash
 invoke> !fetch 0000015.8929913.png
 # the script returns the next line, ready for editing and running:
 invoke> a fantastic alien landscape -W 576 -H 512 -s 60 -A plms -C 7.5
-~~~
+```

 Note that this command may behave unexpectedly if given a PNG file that
 was not generated by InvokeAI.

-## !history
+### `!history`

 The invoke script keeps track of all the commands you issue during a
 session, allowing you to re-run them. On Mac and Linux systems, it
@@ -268,10 +283,10 @@ the most recent 1000 commands issued.
 The `!history` command will return a numbered list of all the commands
 issued during the session (Windows), or the most recent 1000 commands
-(Mac|Linux). You can then repeat a command by using the command !NNN,
+(Mac|Linux). You can then repeat a command by using the command `!NNN`,
 where "NNN" is the history line number. For example:

-~~~
+```bash
 invoke> !history
 ...
 [14] happy woman sitting under tree wearing broad hat and flowing garment
@@ -282,58 +297,58 @@ invoke> !history
 ...
 invoke> !20
 invoke> watercolor of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
-~~~
+```

-## !search <search string>
+### `!search <search string>`

 This is similar to !history but it only returns lines that contain
 `search string`. For example:

-~~~
+```bash
 invoke> !search surreal
 [21] surrealist painting of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
-~~~
+```

-## !clear
+### `!clear`

 This clears the search history from memory and disk. Be advised that
 this operation is irreversible and does not issue any warnings!

-# Command-line editing and completion
+## Command-line editing and completion

 The command-line offers convenient history tracking, editing, and
 command completion.

-- To scroll through previous commands and potentially edit/reuse them, use the up and down cursor keys.
-- To edit the current command, use the left and right cursor keys to position the cursor, and then backspace, delete or insert characters.
-- To move to the very beginning of the command, type CTRL-A (or command-A on the Mac)
-- To move to the end of the command, type CTRL-E.
-- To cut a section of the command, position the cursor where you want to start cutting and type CTRL-K.
-- To paste a cut section back in, position the cursor where you want to paste, and type CTRL-Y
+- To scroll through previous commands and potentially edit/reuse them, use the ++up++ and ++down++ keys.
+- To edit the current command, use the ++left++ and ++right++ keys to position the cursor, and then ++backspace++, ++delete++ or insert characters.
+- To move to the very beginning of the command, type ++ctrl+a++ (or ++command+a++ on the Mac)
+- To move to the end of the command, type ++ctrl+e++.
+- To cut a section of the command, position the cursor where you want to start cutting and type ++ctrl+k++
+- To paste a cut section back in, position the cursor where you want to paste, and type ++ctrl+y++

 Windows users can get similar, but more limited, functionality if they
-launch invoke.py with the "winpty" program and have the `pyreadline3`
+launch `invoke.py` with the `winpty` program and have the `pyreadline3`
 library installed:

-~~~
+```batch
 > winpty python scripts\invoke.py
-~~~
+```

 On the Mac and Linux platforms, when you exit invoke.py, the last 1000
 lines of your command-line history will be saved. When you restart
-invoke.py, you can access the saved history using the up-arrow key.
+`invoke.py`, you can access the saved history using the ++up++ key.

 In addition, limited command-line completion is installed. In various
-contexts, you can start typing your command and press tab. A list of
+contexts, you can start typing your command and press ++tab++. A list of
 potential completions will be presented to you. You can then type a
-little more, hit tab again, and eventually autocomplete what you want.
+little more, hit ++tab++ again, and eventually autocomplete what you want.

 When specifying file paths using the one-letter shortcuts, the CLI
 will attempt to complete pathnames for you. This is most handy for the
--I (init image) and -M (init mask) paths. To initiate completion, start
-the path with a slash ("/") or "./". For example:
+`-I` (init image) and `-M` (init mask) paths. To initiate completion, start
+the path with a slash (`/`) or `./`. For example:

-~~~
+```bash
 invoke> zebra with a mustache -I./test-pictures<TAB>
 -I./test-pictures/Lincoln-and-Parrot.png -I./test-pictures/zebra.jpg -I./test-pictures/madonna.png
 -I./test-pictures/bad-sketch.png -I./test-pictures/man_with_eagle/

Changed file: EMBIGGEN.md

@@ -43,7 +43,7 @@ it's similar to that, except it can work up to an arbitrarily large size
 has extra logic to re-run any number of the tile sub-sections of the image
 if for example a small part of a huge run got messed up.

-## Usage
+### Usage

 `-embiggen <scaling_factor> <esrgan_strength> <overlap_ratio OR overlap_pixels>`
@@ -100,26 +100,30 @@ Tiles are numbered starting with one, and left-to-right,
 top-to-bottom. So, if you are generating a 3x3 tiled image, the
 middle row would be `4 5 6`.

-## Example Usage
+### Examples

-Running Embiggen with 512x512 tiles on an existing image, scaling up by a factor of 2.5x;
-and doing the same again (default ESRGAN strength is 0.75, default overlap between tiles is 0.25):
+!!! example ""

-```bash
-invoke > a photo of a forest at sunset -s 100 -W 512 -H 512 -I outputs/forest.png -f 0.4 -embiggen 2.5
-invoke > a photo of a forest at sunset -s 100 -W 512 -H 512 -I outputs/forest.png -f 0.4 -embiggen 2.5 0.75 0.25
-```
+    Running Embiggen with 512x512 tiles on an existing image, scaling up by a factor of 2.5x;
+    and doing the same again (default ESRGAN strength is 0.75, default overlap between tiles is 0.25):
+
+    ```bash
+    invoke > a photo of a forest at sunset -s 100 -W 512 -H 512 -I outputs/forest.png -f 0.4 -embiggen 2.5
+    invoke > a photo of a forest at sunset -s 100 -W 512 -H 512 -I outputs/forest.png -f 0.4 -embiggen 2.5 0.75 0.25
+    ```
+
+    If your starting image was also 512x512 this should have taken 9 tiles.

-If your starting image was also 512x512 this should have taken 9 tiles.
+!!! example ""

-If there weren't enough clouds in the sky of that forest you just made
-(and that image is about 1280 pixels (512*2.5) wide A.K.A. three
-512x512 tiles with 0.25 overlaps wide) we can replace that top row of
-tiles:
+    If there weren't enough clouds in the sky of that forest you just made
+    (and that image is about 1280 pixels (512*2.5) wide A.K.A. three
+    512x512 tiles with 0.25 overlaps wide) we can replace that top row of
+    tiles:

-```bash
-invoke> a photo of puffy clouds over a forest at sunset -s 100 -W 512 -H 512 -I outputs/000002.seed.png -f 0.5 -embiggen_tiles 1 2 3
-```
+    ```bash
+    invoke> a photo of puffy clouds over a forest at sunset -s 100 -W 512 -H 512 -I outputs/000002.seed.png -f 0.5 -embiggen_tiles 1 2 3
+    ```

 ## Fixing Previously-Generated Images
@@ -128,27 +132,27 @@ look up the original prompt and provide an initial image. Just use the
 syntax `!fix path/to/file.png <embiggen>`. For example, you can rewrite the
 previous command to look like this:

-~~~~
+```bash
 invoke> !fix ./outputs/000002.seed.png -embiggen_tiles 1 2 3
-~~~~
+```

 A new file named `000002.seed.fixed.png` will be created in the output directory. Note that
 the `!fix` command does not replace the original file, unlike the behavior at generate time.
 You do not need to provide the prompt, and `!fix` automatically selects a good strength for
 embiggen-ing.

-**Note**
-Because the same prompt is used on all the tiled images, and the model
-doesn't have the context of anything outside the tile being run - it
-can end up creating repeated pattern (also called 'motifs') across all
-the tiles based on that prompt. The best way to combat this is
-lowering the `--strength` (`-f`) to stay more true to the init image,
-and increasing the number of steps so there is more compute-time to
-create the detail. Anecdotally `--strength` 0.35-0.45 works pretty
-well on most things. It may also work great in some examples even with
-the `--strength` set high for patterns, landscapes, or subjects that
-are more abstract. Because this is (relatively) fast, you can also
-preserve the best parts from each.
+!!! note
+
+    Because the same prompt is used on all the tiled images, and the model
+    doesn't have the context of anything outside the tile being run - it
+    can end up creating repeated pattern (also called 'motifs') across all
+    the tiles based on that prompt. The best way to combat this is
+    lowering the `--strength` (`-f`) to stay more true to the init image,
+    and increasing the number of steps so there is more compute-time to
+    create the detail. Anecdotally `--strength` 0.35-0.45 works pretty
+    well on most things. It may also work great in some examples even with
+    the `--strength` set high for patterns, landscapes, or subjects that
+    are more abstract. Because this is (relatively) fast, you can also
+    preserve the best parts from each.

 Author: [Travco](https://github.com/travco)

Changed file: IMG2IMG.md

@@ -2,7 +2,9 @@
 title: Image-to-Image
 ---

-# :material-image-multiple: **IMG2IMG**
+# :material-image-multiple: Image-to-Image
+
+## `img2img`

 This script also provides an `img2img` feature that lets you seed your creations with an initial
 drawing or photo. This is a really cool feature that tells stable diffusion to build the prompt on
@@ -15,13 +17,17 @@ tree on a hill with a river, nature photograph, national geographic -I./test-pic
 This will take the original image shown here:

+<div align="center" markdown>
 <img src="https://user-images.githubusercontent.com/50542132/193946000-c42a96d8-5a74-4f8a-b4c3-5213e6cadcce.png" width=350>
+</div>

 and generate a new image based on it as shown here:

+<div align="center" markdown>
 <img src="https://user-images.githubusercontent.com/111189/194135515-53d4c060-e994-4016-8121-7c685e281ac9.png" width=350>
+</div>

-The `--init_img (-I)` option gives the path to the seed picture. `--strength (-f)` controls how much
+The `--init_img` (`-I`) option gives the path to the seed picture. `--strength` (`-f`) controls how much
 the original will be modified, ranging from `0.0` (keep the original intact), to `1.0` (ignore the
 original completely). The default is `0.75`, and ranges from `0.25-0.90` give interesting results.
 Other relevant options include `-C` (classification free guidance scale), and `-s` (steps). Unlike `txt2img`,
@@ -37,26 +43,32 @@ a very different image:
 `photograph of a tree on a hill with a river`

+<div align="center" markdown>
 <img src="https://user-images.githubusercontent.com/111189/194135220-16b62181-b60c-4248-8989-4834a8fd7fbd.png" width=350>
+</div>

-(When designing prompts, think about how the images scraped from the internet were captioned. Very few photographs will
-be labeled "photograph" or "photorealistic." They will, however, be captioned with the publication, photographer, camera
-model, or film settings.)
+!!! tip
+
+    When designing prompts, think about how the images scraped from the internet were captioned. Very few photographs will
+    be labeled "photograph" or "photorealistic." They will, however, be captioned with the publication, photographer, camera
+    model, or film settings.

 If the initial image contains transparent regions, then Stable Diffusion will only draw within the
-transparent regions, a process called "inpainting". However, for this to work correctly, the color
+transparent regions, a process called [`inpainting`](./INPAINTING.md#creating-transparent-regions-for-inpainting). However, for this to work correctly, the color
 information underneath the transparent needs to be preserved, not erased.

-More details can be found here:
-[Creating Transparent Images For Inpainting](./INPAINTING.md#creating-transparent-regions-for-inpainting)
+!!! warning

-**IMPORTANT ISSUE** `img2img` does not work properly on initial images smaller than 512x512. Please scale your
-image to at least 512x512 before using it. Larger images are not a problem, but may run out of VRAM on your
-GPU card. To fix this, use the --fit option, which downscales the initial image to fit within the box specified
-by width x height:
+    `img2img` does not work properly on initial images smaller than 512x512. Please scale your
+    image to at least 512x512 before using it. Larger images are not a problem, but may run out of VRAM on your
+    GPU card.

-~~~
-tree on a hill with a river, national geographic -I./test-pictures/big-sketch.png -H512 -W512 --fit
-~~~
+To fix this, use the `--fit` option, which downscales the initial image to fit within the box specified
+by width x height:
+
+```bash
+invoke> "tree on a hill with a river, national geographic" -I./test-pictures/big-sketch.png -H512 -W512 --fit
+```

 ## How does it actually work, though?
@@ -66,11 +78,13 @@ gaussian noise and progressively refines it over the requested number of steps,
 **Let's start** by thinking about vanilla `prompt2img`, just generating an image from a prompt. If the step count is 10, then the "latent space" (Stable Diffusion's internal representation of the image) for the prompt "fire" with seed `1592514025` develops something like this:

-```commandline
+```bash
 invoke> "fire" -s10 -W384 -H384 -S1592514025
 ```

+<div align="center" markdown>
 ![latent steps](../assets/img2img/000019.steps.png)
+</div>

 Put simply: starting from a frame of fuzz/static, SD finds details in each frame that it thinks look like "fire" and brings them a little bit more into focus, gradually scrubbing out the fuzz until a clear image remains.
@@ -78,34 +92,39 @@ Put simply: starting from a frame of fuzz/static, SD finds details in each frame
 ### A concrete example

-Say I want SD to draw a fire based on this hand-drawn image:
+I want SD to draw a fire based on this hand-drawn image:

+<div align="center" markdown>
 ![drawing of a fireplace](../assets/img2img/fire-drawing.png)
+</div>

 Let's only do 10 steps, to make it easier to see what's happening. If strength is `0.7`, this is what the internal steps the algorithm has to take will look like:

-![](../assets/img2img/000032.steps.gravity.png)
+<div align="center" markdown>
+![gravity32](../assets/img2img/000032.steps.gravity.png)
+</div>

 With strength `0.4`, the steps look more like this:

-![](../assets/img2img/000030.steps.gravity.png)
+<div align="center" markdown>
+![gravity30](../assets/img2img/000030.steps.gravity.png)
+</div>

 Notice how much more fuzzy the starting image is for strength `0.7` compared to `0.4`, and notice also how much longer the sequence is with `0.7`:

 | | strength = 0.7 | strength = 0.4 |
-| -- | -- | -- |
-| initial image that SD sees | ![](../assets/img2img/000032.step-0.png) | ![](../assets/img2img/000030.step-0.png) |
+| -- | :--: | :--: |
+| initial image that SD sees | ![step-0-32](../assets/img2img/000032.step-0.png) | ![step-0-30](../assets/img2img/000030.step-0.png) |
 | steps argument to `dream>` | `-S10` | `-S10` |
 | steps actually taken | 7 | 4 |
-| latent space at each step | ![](../assets/img2img/000032.steps.gravity.png) | ![](../assets/img2img/000030.steps.gravity.png) |
-| output | ![](../assets/img2img/000032.1592514025.png) | ![](../assets/img2img/000030.1592514025.png) |
+| latent space at each step | ![gravity32](../assets/img2img/000032.steps.gravity.png) | ![gravity30](../assets/img2img/000030.steps.gravity.png) |
+| output | ![000032.1592514025](../assets/img2img/000032.1592514025.png) | ![000030.1592514025](../assets/img2img/000030.1592514025.png) |

 Both of the outputs look kind of like what I was thinking of. With the strength higher, my input becomes more vague, *and* Stable Diffusion has more steps to refine its output. But it's not really making what I want, which is a picture of cheery open fire. With the strength lower, my input is more clear, *but* Stable Diffusion has less chance to refine itself, so the result ends up inheriting all the problems of my bad drawing.

-If you want to try this out yourself, all of these are using a seed of `1592514025` with a width/height of `384`, step count `10`, the default sampler (`k_lms`), and the single-word prompt `fire`:
+If you want to try this out yourself, all of these are using a seed of `1592514025` with a width/height of `384`, step count `10`, the default sampler (`k_lms`), and the single-word prompt `"fire"`:

-```commandline
+```bash
 invoke> "fire" -s10 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png --strength 0.7
 ```
@@ -117,37 +136,43 @@ After putting this guide together I was curious to see how the difference would
 Here's strength `0.4` (note step count `50`, which is `20 ÷ 0.4` to make sure SD does `20` steps from my image):

-```commandline
+```bash
 invoke> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4
 ```

-![](../assets/img2img/000035.1592514025.png)
+<div align="center" markdown>
+![000035.1592514025](../assets/img2img/000035.1592514025.png)
+</div>

-and strength `0.7` (note step count `30`, which is roughly `20 ÷ 0.7` to make sure SD does `20` steps from my image):
+and here is strength `0.7` (note step count `30`, which is roughly `20 ÷ 0.7` to make sure SD does `20` steps from my image):

-```commandline
+```bash
 invoke> "fire" -s30 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.7
 ```

-![](../assets/img2img/000046.1592514025.png)
+<div align="center" markdown>
+![000046.1592514025](../assets/img2img/000046.1592514025.png)
+</div>

 In both cases the image is nice and clean and "finished", but because at strength `0.7` Stable Diffusion has been give so much more freedom to improve on my badly-drawn flames, they've come out looking much better. You can really see the difference when looking at the latent steps. There's more noise on the first image with strength `0.7`:

-![](../assets/img2img/000046.steps.gravity.png)
+![gravity46](../assets/img2img/000046.steps.gravity.png)

 than there is for strength `0.4`:

-![](../assets/img2img/000035.steps.gravity.png)
+![gravity35](../assets/img2img/000035.steps.gravity.png)

 and that extra noise gives the algorithm more choices when it is evaluating how to denoise any particular pixel in the image.

 Unfortunately, it seems that `img2img` is very sensitive to the step count. Here's strength `0.7` with a step count of `29` (SD did 19 steps from my image):

-![](../assets/img2img/000045.1592514025.png)
+<div align="center" markdown>
+![gravity45](../assets/img2img/000045.1592514025.png)
+</div>

 By comparing the latents we can sort of see that something got interpreted differently enough on the third or fourth step to lead to a rather different interpretation of the flames.

-![](../assets/img2img/000046.steps.gravity.png)
-![](../assets/img2img/000045.steps.gravity.png)
+![gravity46](../assets/img2img/000046.steps.gravity.png)
+![gravity45](../assets/img2img/000045.steps.gravity.png)

-This is the result of a difference in the de-noising "schedule" - basically the noise has to be cleaned by a certain degree each step or the model won't "converge" on the image properly (see https://huggingface.co/blog/stable_diffusion for more about that). A different step count means a different schedule, which means things get interpreted slightly differently at every step.
+This is the result of a difference in the de-noising "schedule" - basically the noise has to be cleaned by a certain degree each step or the model won't "converge" on the image properly (see [stable diffusion blog](https://huggingface.co/blog/stable_diffusion) for more about that). A different step count means a different schedule, which means things get interpreted slightly differently at every step.

Changed file: INPAINTING.md

@@ -35,42 +35,43 @@ We are hoping to get rid of the need for this workaround in an upcoming release.
 [GIMP](https://www.gimp.org/) is a popular Linux photoediting tool.

 1. Open image in GIMP.
-2. Layer->Transparency->Add Alpha Channel
+2. Layer --> Transparency --> Add Alpha Channel
 3. Use lasoo tool to select region to mask
-4. Choose Select -> Float to create a floating selection
-5. Open the Layers toolbar (^L) and select "Floating Selection"
+4. Choose Select --> Float to create a floating selection
+5. Open the Layers toolbar (++ctrl+l++) and select "Floating Selection"
 6. Set opacity to a value between 0% and 99%
 7. Export as PNG
 8. In the export dialogue, Make sure the "Save colour values from
    transparent pixels" checkbox is selected.

+---
+
 ## Recipe for Adobe Photoshop

 1. Open image in Photoshop
 ![step1](../assets/step1.png)
 2. Use any of the selection tools (Marquee, Lasso, or Wand) to select the area you desire to inpaint.
 ![step2](../assets/step2.png)
 3. Because we'll be applying a mask over the area we want to preserve, you should now select the inverse by using the ++shift+ctrl+i++ shortcut, or right clicking and using the "Select Inverse" option.
 4. You'll now create a mask by selecting the image layer, and Masking the selection. Make sure that you don't delete any of the undrlying image, or your inpainting results will be dramatically impacted.
 ![step4](../assets/step4.png)
 5. Make sure to hide any background layers that are present. You should see the mask applied to your image layer, and the image on your canvas should display the checkered background.
 ![step5](../assets/step5.png)
-6. Save the image as a transparent PNG by using the "Save a Copy" option in the File menu, or using the Alt + Ctrl + S keyboard shortcut
+6. Save the image as a transparent PNG by using `File`-->`Save a Copy` from the menu bar, or by using the keyboard shortcut ++alt+ctrl+s++
 ![step6](../assets/step6.png)
 7. After following the inpainting instructions above (either through the CLI or the Web UI), marvel at your newfound ability to selectively invoke. Lookin' good!
 ![step7](../assets/step7.png)
 8. In the export dialogue, Make sure the "Save colour values from transparent pixels" checkbox is selected.


@@ -6,13 +6,9 @@ title: Others
## **Google Colab**
Stable Diffusion AI Notebook: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/lstein/stable-diffusion/blob/main/notebooks/Stable_Diffusion_AI_Notebook.ipynb)
Open and follow instructions to use an isolated environment running Dream.
Output Example: ![Colab Notebook](../assets/colab_notebook.png)
@@ -71,24 +67,30 @@ combination of integers and floating point numbers, and they do not need to add
---
## **Thresholding and Perlin Noise Initialization Options**
Two new options are the thresholding (`--threshold`) and the perlin noise initialization (`--perlin`) options. Thresholding limits the range of the latent values during optimization, which helps combat oversaturation with higher CFG scale values. Perlin noise initialization starts with a percentage (a value ranging from 0 to 1) of perlin noise mixed into the initial noise. Both features allow for more variations and options in the course of generating images.
For better intuition into what these options do in practice:
![here is a graphic demonstrating them both](../assets/truncation_comparison.jpg)
In generating this graphic, perlin noise at initialization was programmatically varied going across on the diagram by values 0.0, 0.1, 0.2, 0.4, 0.5, 0.6, 0.8, 0.9, 1.0; and the threshold was varied going down from
0, 1, 2, 3, 4, 5, 10, 20, 100. The other options are fixed, so the initial prompt is as follows (no thresholding or perlin noise):
```bash
invoke> "a portrait of a beautiful young lady" -S 1950357039 -s 100 -C 20 -A k_euler_a --threshold 0 --perlin 0
```
Here's an example of another prompt used when setting the threshold to 5 and perlin noise to 0.2:
```bash
invoke> "a portrait of a beautiful young lady" -S 1950357039 -s 100 -C 20 -A k_euler_a --threshold 5 --perlin 0.2
```
!!! note
    Currently the thresholding feature is only implemented for the k-diffusion style samplers, and empirically appears to work best with `k_euler_a` and `k_dpm_2_a`. Using 0 disables thresholding. Using 0 for perlin noise disables using perlin noise for initialization. Finally, using 1 for perlin noise uses only perlin noise for initialization.
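A further illustrative data point from the middle of the grid above, with a stronger threshold and more perlin noise mixed in (the specific values are just one plausible pick, not taken from the original docs):

```bash
invoke> "a portrait of a beautiful young lady" -S 1950357039 -s 100 -C 20 -A k_dpm_2_a --threshold 10 --perlin 0.5
```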
---


@@ -1,8 +1,9 @@
---
title: Postprocessing
---
# :material-image-edit: Postprocessing
## Intro
This extension provides the ability to restore faces and upscale


@@ -1,8 +1,8 @@
---
title: Textual Inversion
---
# :material-file-document-plus-outline: Textual Inversion
## **Personalizing Text-to-Image Generation**


@@ -2,6 +2,8 @@
title: InvokeAI Web Server
---
# :material-web: InvokeAI Web Server
As of version 2.0.0, this distribution comes with a full-featured web
server (see screenshot). To use it, run the `invoke.py` script by
adding the `--web` option:
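For instance, following the launch pattern used elsewhere in these docs (the `(invokeai)` prefix assumes the conda environment is already active):

```bash
(invokeai) ~/InvokeAI$ python scripts/invoke.py --web
```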
@@ -294,9 +296,9 @@ were not covered here.
The WebGUI is under rapid development. Check back regularly for
updates!
## Reference
### Additional Options
`--web_develop` - Starts the web server in development mode.
`--web_verbose` - Enables verbose logging
@@ -311,36 +313,41 @@ updates!
`--gui` - Start InvokeAI GUI - This is the "desktop mode" version of the web app. It uses Flask
to create a desktop app experience of the webserver.
### Web Specific Features
The web experience offers an incredibly easy-to-use experience for interacting with the InvokeAI toolkit.
For detailed guidance on individual features, see the Feature-specific help documents available in this directory.
Note that the latest functionality available in the CLI may not always be available in the Web interface.
#### Dark Mode & Light Mode
The InvokeAI interface is available in a nano-carbon black & purple Dark Mode, and a "burn your eyes out Nosferatu" Light Mode. These can be toggled by clicking the Sun/Moon icons at the top right of the interface.
![InvokeAI Web Server - Dark Mode](../assets/invoke_web_dark.png)
![InvokeAI Web Server - Light Mode](../assets/invoke_web_light.png)
#### Invocation Toolbar
The left side of the InvokeAI interface is available for customizing the prompt and the settings used for invoking your new image. Typing your prompt into the open text field and clicking the Invoke button will produce the image based on the settings configured in the toolbar.
See below for additional documentation related to each feature:
- [Core Prompt Settings](./CLI.md)
- [Variations](./VARIATIONS.md)
- [Upscaling](./POSTPROCESS.md#upscaling)
- [Image to Image](./IMG2IMG.md)
- [Inpainting](./INPAINTING.md)
- [Other](./OTHER.md)
#### Invocation Gallery
The currently selected `--outdir` (or the default outputs folder) will display all previously generated files on load. As new invocations are generated, these will be dynamically added to the gallery, and can be previewed by selecting them. Each image also has a simple set of actions (e.g., Delete, Use Seed, Use All Parameters, etc.) that can be accessed by hovering over the image.
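As an example, the gallery location can be chosen when starting the server (the output path below is just an illustration):

```bash
(invokeai) ~/InvokeAI$ python scripts/invoke.py --web --outdir /home/myuser/invoke-outputs
```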
#### Image Workspace
When an image from the Invocation Gallery is selected, or is generated, the image will be displayed within the center of the interface. A quickbar of common image interactions is displayed along the top of the image, including:
- Use image in the `Image to Image` workflow
- Initialize Face Restoration on the selected file
- Initialize Upscaling on the selected file


@@ -1,8 +1,8 @@
---
title: Sampler Convergence
---
# :material-palette-advanced: *Sampler Convergence*
As features keep increasing, making the right choices for your needs can become increasingly difficult. What sampler to use? And for how many steps? Do you change the CFG value? Do you use prompt weighting? Do you allow variations?
@@ -21,6 +21,8 @@ Looking for a short version? Here's a TL;DR in 3 tables.
| `K_HEUN` and `K_DPM_2` converge in fewer steps (but are slower). |
| `K_DPM_2_A` and `K_EULER_A` incorporate a lot of creativity/variability. |
<div align="center" markdown>
| Sampler | (3 sample avg) it/s (M1 Max 64GB, 512x512) |
|---|---|
| `DDIM` | 1.89 |
@@ -32,6 +34,8 @@ Looking for a short version? Here's a TL;DR in 3 tables.
| `K_DPM_2_A` | 0.95 (slower) |
| `K_EULER_A` | 1.86 |
</div>
| Suggestions |
|:---|
| For most use cases, `K_LMS`, `K_HEUN` and `K_DPM_2` are the best choices (the latter 2 run 0.5x as quick, but tend to converge 2x as quick as `K_LMS`). At very low steps (≤ `-s8`), `K_HEUN` and `K_DPM_2` are not recommended. Use `K_LMS` instead.|
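Putting that suggestion into practice, a rough like-for-like comparison might look as follows; the prompt is arbitrary, and the step counts are chosen so `K_HEUN` gets about half the steps of `K_LMS`, matching the convergence ratio described above:

```bash
invoke> "a fantasy landscape" -A k_lms -s 30
invoke> "a fantasy landscape" -A k_heun -s 15
```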


@@ -1,6 +1,5 @@
---
title: Home
---
<!--
@@ -13,7 +12,7 @@ template: main.html
-->
<div align="center" markdown>
# ^^**InvokeAI: A Stable Diffusion Toolkit**^^ :tools: <br> <small>Formerly known as lstein/stable-diffusion</small>
![project logo](assets/logo.png)
@@ -29,8 +28,8 @@ template: main.html
[CI checks on dev link]: https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Adevelopment
[CI checks on main badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
[CI checks on main link]: https://github.com/invoke-ai/InvokeAI/actions/workflows/test-invoke-conda.yml
[discord badge]: https://flat.badgen.net/discord/members/ZmtBAhwWhy?icon=discord
[discord link]: https://discord.gg/ZmtBAhwWhy
[github forks badge]: https://flat.badgen.net/github/forks/invoke-ai/InvokeAI?icon=github
[github forks link]: https://useful-forks.github.io/?repo=lstein%2Fstable-diffusion
[github open issues badge]: https://flat.badgen.net/github/open-issues/invoke-ai/InvokeAI?icon=github
@@ -53,14 +52,13 @@ various new features and options to aid the image generation
process. It runs on Windows, Mac and Linux machines, and runs on GPU
cards with as little as 4 GB of RAM.
**Quick links**: [<a href="https://discord.gg/NwVCmKwY">Discord Server</a>] [<a href="https://github.com/invoke-ai/InvokeAI/">Code and Downloads</a>] [<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>] [<a href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion, Ideas & Q&A</a>]
<div align="center"><img src="assets/invoke-web-server-1.png" width=640></div>
!!! note
    This fork is rapidly evolving. Please use the [Issues tab](https://github.com/invoke-ai/InvokeAI/issues) to report bugs and make feature requests. Be sure to use the provided templates. They will help diagnose issues faster.
## :octicons-package-dependencies-24: Installation
@@ -98,7 +96,7 @@ You will need one of the following:
To run in full-precision mode, start `invoke.py` with the `--full_precision` flag:
```bash
(invokeai) ~/InvokeAI$ python scripts/invoke.py --full_precision
```
## :octicons-log-16: Latest Changes


@@ -1,4 +1,10 @@
---
title: Docker
---
# :fontawesome-brands-docker: Docker
## Before you begin
- For end users: Install Stable Diffusion locally using the instructions for
your OS.
@@ -6,7 +12,7 @@
deployment to other environments (on-premises or cloud), follow these
instructions. For general use, install locally to leverage your machine's GPU.
## Why containers?
They provide a flexible, reliable way to build and deploy Stable Diffusion.
You'll also use a Docker volume to store the largest model files and image
@@ -26,11 +32,11 @@ development purposes it's fine. Once you're done with development tasks on your
laptop you can build for the target platform and architecture and deploy to
another environment with NVIDIA GPUs on-premises or in the cloud.
## Installation on a Linux container
### Prerequisites
#### Get the data files
Go to
[Hugging Face](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original),
@@ -44,14 +50,14 @@ cd ~/Downloads
wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth
```
#### Install [Docker](https://github.com/santisbon/guides#docker)
On the Docker Desktop app, go to Preferences, Resources, Advanced. Increase the
CPUs and Memory to avoid this
[Issue](https://github.com/invoke-ai/InvokeAI/issues/342). You may need to
increase Swap and Disk image size too.
### Setup
Set the fork you want to use and other variables.
@@ -132,9 +138,9 @@ docker run -it \
$TAG_STABLE_DIFFUSION
```
## Usage (time to have fun)
### Startup
If you're on a **Linux container** the `invoke` script is **automatically
started** and the output dir set to the Docker volume you created earlier.
@@ -158,7 +164,7 @@ invoke> -h
invoke> q
```
### Text to Image
For quick (but bad) image results test with 5 steps (default 50) and 1 sample
image. This will let you know that everything is set up correctly.
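A smoke test along those lines might look like the following; the prompt is arbitrary, with `-s` setting the step count and `-n` the number of samples:

```bash
invoke> "a red apple on a wooden table" -s5 -n1
```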
@@ -188,7 +194,7 @@ volume):
docker cp dummy:/data/000001.928403745.png /Users/<your-user>/Pictures
```
### Image to Image
You can also do text-guided image-to-image translation. For example, turning a
sketch into a detailed drawing.
@@ -225,7 +231,7 @@ If you're on a Linux container on your Mac
invoke> "A fantasy landscape, trending on artstation" -I /data/sketch-mountains-input.jpg --strength 0.75 --steps 50 -n1 invoke> "A fantasy landscape, trending on artstation" -I /data/sketch-mountains-input.jpg --strength 0.75 --steps 50 -n1
``` ```
## Web Interface ### Web Interface
You can use the `invoke` script with a graphical web interface. Start the web You can use the `invoke` script with a graphical web interface. Start the web
server with: server with:
@@ -238,7 +244,7 @@ If it's running on your Mac point your Mac web browser to http://127.0.0.1:9090
Press Control-C at the command line to stop the web server.
### Notes
Some text you can add at the end of the prompt to make it very pretty:


@@ -26,38 +26,36 @@ title: Linux
3. Copy the InvokeAI source code from GitHub:
```bash
(base) ~$ git clone https://github.com/invoke-ai/InvokeAI.git
```
This will create an InvokeAI folder where you will follow the rest of the steps.
4. Enter the newly-created InvokeAI folder. From this step forward make sure that you are working in the InvokeAI directory!
```bash
(base) ~$ cd InvokeAI
(base) ~/InvokeAI$
```
5. Use anaconda to copy necessary python packages, create a new python
environment named `invokeai` and activate the environment.
```bash
(base) ~/InvokeAI$ conda env create
(base) ~/InvokeAI$ conda activate invokeai
(invokeai) ~/InvokeAI$
```
After these steps, your command prompt will be prefixed by `(invokeai)` as shown
above.
6. Load a couple of small machine-learning models required by stable diffusion:
```bash
(invokeai) ~/InvokeAI$ python3 scripts/preload_models.py
```
!!! note
@@ -79,33 +77,31 @@ This will create an InvokeAI folder where you will follow the rest of the steps.
This will create a symbolic link from the stable-diffusion model.ckpt file to
the true location of the `sd-v1-4.ckpt` file.
```bash
(invokeai) ~/InvokeAI$ mkdir -p models/ldm/stable-diffusion-v1
(invokeai) ~/InvokeAI$ ln -sf /path/to/sd-v1-4.ckpt models/ldm/stable-diffusion-v1/model.ckpt
```
8. Start generating images!
```bash
# for the pre-release weights use the -l or --laion400m switch
(invokeai) ~/InvokeAI$ python3 scripts/invoke.py -l
# for the post-release weights do not use the switch
(invokeai) ~/InvokeAI$ python3 scripts/invoke.py
# for additional configuration switches and arguments, use -h or --help
(invokeai) ~/InvokeAI$ python3 scripts/invoke.py -h
```
9. Subsequently, to relaunch the script, be sure to run `conda activate invokeai` (step 5, second command), enter the `InvokeAI` directory, and then launch the invoke script (step 8), as sketched below. If you forget to activate the 'invokeai' environment, the script will fail with multiple `ModuleNotFound` errors.
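Put together, a typical relaunch looks like this (paths assume the layout created in the steps above):

```bash
(base) ~$ conda activate invokeai
(invokeai) ~$ cd InvokeAI
(invokeai) ~/InvokeAI$ python3 scripts/invoke.py
```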
## Updating to newer versions of the script
This distribution is changing rapidly. If you used the `git clone` method (step 3) to download the InvokeAI directory, then to update to the latest and greatest version, launch the Anaconda window, enter `InvokeAI` and type:
```bash
(invokeai) ~/InvokeAI$ git pull
(invokeai) ~/InvokeAI$ conda env update -f environment.yml
```


@@ -2,6 +2,8 @@
title: macOS
---
# :fontawesome-brands-apple: macOS
Invoke AI runs quite well on M1 Macs and we have a number of M1 users
in the community.
@@ -26,98 +28,120 @@ First you need to download a large checkpoint file.
While that is downloading, open Terminal and run the following commands one at a time, reading the comments and taking care to run the appropriate command for your Mac's architecture (Intel or M1).

!!! todo "Homebrew"

    If you have no brew installation yet (otherwise skip):

    ```bash title="install brew (and Xcode command line tools)"
    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
    ```

!!! todo "Conda Installation"

    Now there are two different ways to set up the Python (miniconda) environment:

    1. Standalone
    2. with pyenv

    If you don't know what we are talking about, choose Standalone. If you are familiar with python environments, choose "with pyenv"

=== "Standalone"

    ```bash title="Install cmake, protobuf, and rust"
    brew install cmake protobuf rust
    ```

    Then choose the kind of your Mac and install miniconda:

    === "M1 arm64"

        ```bash title="Install miniconda for M1 arm64"
        curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh \
            -o Miniconda3-latest-MacOSX-arm64.sh
        /bin/bash Miniconda3-latest-MacOSX-arm64.sh
        ```

    === "Intel x86_64"

        ```bash title="Install miniconda for Intel"
        curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh \
            -o Miniconda3-latest-MacOSX-x86_64.sh
        /bin/bash Miniconda3-latest-MacOSX-x86_64.sh
        ```

=== "with pyenv"

    ```bash
    brew install pyenv-virtualenv
    pyenv install anaconda3-2022.05
    pyenv virtualenv anaconda3-2022.05
    eval "$(pyenv init -)"
    pyenv activate anaconda3-2022.05
    ```

!!! todo "Clone the Invoke AI repo"

    ```bash
    git clone https://github.com/invoke-ai/InvokeAI.git
    cd InvokeAI
    ```

!!! todo "Wait until the checkpoint-file download finished, then proceed"

    We will leave the big checkpoint wherever you stashed it for long-term storage,
    and make a link to it from the repo's folder. This allows you to use it for
    other repos, or if you need to delete Invoke AI, you won't have to download it again.

    ```{.bash .annotate}
    # Make the directory in the repo for the symlink
    mkdir -p models/ldm/stable-diffusion-v1/

    # This is the folder where you put the checkpoint file `sd-v1-4.ckpt`
    PATH_TO_CKPT="$HOME/Downloads" # (1)!

    # Create a link to the checkpoint
    ln -s "$PATH_TO_CKPT/sd-v1-4.ckpt" models/ldm/stable-diffusion-v1/model.ckpt
    ```

    1. replace `$HOME/Downloads` with the location where you actually stored the checkpoint (`sd-v1-4.ckpt`)

!!! todo "Create the environment & install packages"

    === "M1 Mac"

        ```bash
        PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac.yml
        ```

    === "Intel x86_64 Mac"

        ```bash
        PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-64 conda env create -f environment-mac.yml
        ```

    ```bash
    # Activate the environment (you need to do this every time you want to run SD)
    conda activate invokeai

    # This will download some bits and pieces and may take a while
    python scripts/preload_models.py

    # Run SD!
    python scripts/dream.py

    # or run the web interface!
    python scripts/invoke.py --web

    # The original scripts should work as well.
    python scripts/orig_scripts/txt2img.py \
        --prompt "a photograph of an astronaut riding a horse" \
        --plms
    ```

!!! info

    `export PIP_EXISTS_ACTION=w` is a precaution to fix `conda env
    create -f environment-mac.yml` never finishing in some situations. So
    it isn't required but won't hurt.
---
## Common problems
@@ -157,7 +181,6 @@ conda install \
-n ldm
```
If it takes forever to run `conda env create -f environment-mac.yml`, try this:
```bash
@@ -169,8 +192,8 @@ conda clean \
Or you could try to completely reset Anaconda:
```bash
conda update \
    --force-reinstall \
    -y \
    -n base \


@@ -39,7 +39,7 @@ in the wiki
4. Run the command:
```batch
git clone https://github.com/invoke-ai/InvokeAI.git
```
@@ -48,16 +48,20 @@ in the wiki
5. Enter the newly-created InvokeAI folder. From this step forward make sure that you are working in the InvokeAI directory!
```batch
cd InvokeAI
```
6. Run the following two commands:
```batch title="step 6a"
conda env create
```
```batch title="step 6b"
conda activate invokeai
```
This will install all python requirements and activate the "invokeai" environment
which sets PATH and other environment variables properly.
@@ -67,7 +71,7 @@ conda activate invokeai (step 6b)
7. Run the command:
```batch
python scripts\preload_models.py
```
@@ -79,45 +83,44 @@ conda activate invokeai (step 6b)
8. Now you need to install the weights for the big stable diffusion model.
    1. For running with the released weights, you will first need to set up an account with Hugging Face (https://huggingface.co).
    2. Use your credentials to log in, and then point your browser at https://huggingface.co/CompVis/stable-diffusion-v-1-4-original.
    3. You may be asked to sign a license agreement at this point.
    4. Click on "Files and versions" near the top of the page, and then click on the file named `sd-v1-4.ckpt`. You'll be taken to a page that prompts you to click the "download" link. Now save the file somewhere safe on your local machine.
    5. The weight file is >4 GB in size, so downloading may take a while.
Now run the following commands from **within the InvokeAI directory** to copy the weights file to the right place:
```batch
mkdir -p models\ldm\stable-diffusion-v1
copy C:\path\to\sd-v1-4.ckpt models\ldm\stable-diffusion-v1\model.ckpt
```
Please replace `C:\path\to\sd-v1-4.ckpt` with the correct path to wherever you stashed this file. If you prefer not to copy or move the .ckpt file,
you may instead create a shortcut to it from within `models\ldm\stable-diffusion-v1\`.
9. Start generating images!
```batch title="for the pre-release weights"
python scripts\invoke.py -l
```
```batch title="for the post-release weights"
python scripts\invoke.py
```
10. Subsequently, to relaunch the script, first activate the Anaconda command window (step 3), enter the InvokeAI directory (step 5, `cd \path\to\InvokeAI`), run `conda activate invokeai` (step 6b), and then launch the invoke script (step 9).
!!! tip "Tildebyte has written an alternative"
    ["Easy peasy Windows install"](https://github.com/invoke-ai/InvokeAI/wiki/Easy-peasy-Windows-install)
    which uses the Windows Powershell and pew. If you are having trouble with
    Anaconda on Windows, give this a try (or try it first!)
---
This distribution is changing rapidly. If you used the `git clone` method (step 4) to download the InvokeAI directory, then to update to the latest and greatest version, launch the Anaconda window, enter `InvokeAI`, and type:


@@ -1,12 +1,12 @@
# General
site_name: Stable Diffusion Toolkit Docs
site_url: https://invoke-ai.github.io/InvokeAI
site_author: mauwii
dev_addr: "127.0.0.1:8080"
# Repository
repo_name: 'invoke-ai/InvokeAI'
repo_url: 'https://github.com/invoke-ai/InvokeAI'
edit_uri: edit/main/docs/
# Copyright