Hotfix docs (#1376) (#1377)

This commit is contained in:
Matthias Wild 2022-11-04 20:47:31 +01:00 committed by GitHub
parent ef8b3ce639
commit fcdefa0620
5 changed files with 523 additions and 468 deletions


@ -8,9 +8,9 @@ hide:
## **Interactive Command Line Interface**

The `invoke.py` script, located in `scripts/`, provides an interactive interface
to image generation similar to the "invoke mothership" bot that Stable AI
provided on its Discord server.

Unlike the `txt2img.py` and `img2img.py` scripts provided in the original
[CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion) source
@ -60,9 +60,9 @@ invoke> q
![invoke-py-demo](../assets/dream-py-demo.png)

The `invoke>` prompt's arguments are pretty much identical to those used in the
Discord bot, except you don't need to type `!invoke` (it doesn't hurt if you
do). A significant change is that creation of individual images is now the
default unless `--grid` (`-g`) is given. A full list is given in
[List of prompt arguments](#list-of-prompt-arguments).

## Arguments
@ -75,7 +75,8 @@ the location of the model weight files.
These command-line arguments can be passed to `invoke.py` when you first run it
from the Windows, Mac or Linux command line. Some set defaults that can be
overridden on a per-prompt basis (see
[List of prompt arguments](#list-of-prompt-arguments)). Others
| Argument <img width="240" align="right"/> | Shortcut <img width="100" align="right"/> | Default <img width="320" align="right"/> | Description |
| ----------------------------------------- | ----------------------------------------- | ---------------------------------------- | ----------- |
@ -85,8 +86,8 @@ overridden on a per-prompt basis (see [List of prompt arguments](#list-of-prompt
| `--from_file <path>` | | `None` | Read list of prompts from a file. Use `-` to read from standard input |
| `--model <modelname>` | | `stable-diffusion-1.4` | Loads model specified in configs/models.yaml. Currently one of "stable-diffusion-1.4" or "laion400m" |
| `--full_precision` | `-F` | `False` | Run in slower full-precision mode. Needed for Macintosh M1/M2 hardware and some older video cards. |
| `--png_compression <0-9>` | `-z<0-9>` | `6` | Select level of compression for output files, from 0 (no compression) to 9 (max compression) |
| `--safety-checker` | | `False` | Activate safety checker for NSFW and other potentially disturbing imagery |
| `--web` | | `False` | Start in web server mode |
| `--host <ip addr>` | | `localhost` | Which network interface web server should listen on. Set to 0.0.0.0 to listen on any. |
| `--port <port>` | | `9090` | Which port web server should listen for requests on. |
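
For example, to expose the web interface on your local network you might launch the server like this (a hypothetical invocation using the flags above; adjust the address and port to taste):

```bash
python scripts/invoke.py --web --host 0.0.0.0 --port 9090
```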
@ -122,9 +123,9 @@ overridden on a per-prompt basis (see [List of prompt arguments](#list-of-prompt
## List of prompt arguments

After the invoke.py script initializes, it will present you with an `invoke>`
prompt. Here you can enter information to generate images from text
([txt2img](#txt2img)), to embellish an existing image or sketch
([img2img](#img2img)), or to selectively alter chosen regions of the image
([inpainting](#inpainting)).
@ -142,56 +143,55 @@ from text ([txt2img](#txt2img)), to embellish an existing image or sketch
Here are the `invoke>` commands that apply to txt2img:
| Argument <img width="680" align="right"/> | Shortcut <img width="420" align="right"/> | Default <img width="480" align="right"/> | Description |
| ----------------------------------------- | ----------------------------------------- | ---------------------------------------- | ----------- |
| "my prompt" | | | Text prompt to use. The quotation marks are optional. |
| `--width <int>` | `-W<int>` | `512` | Width of generated image |
| `--height <int>` | `-H<int>` | `512` | Height of generated image |
| `--iterations <int>` | `-n<int>` | `1` | How many images to generate from this prompt |
| `--steps <int>` | `-s<int>` | `50` | How many steps of refinement to apply |
| `--cfg_scale <float>` | `-C<float>` | `7.5` | How hard to try to match the prompt to the generated image; any number greater than 1.0 works, but the useful range is roughly 5.0 to 20.0 |
| `--seed <int>` | `-S<int>` | `None` | Set the random seed for the next series of images. This can be used to recreate an image generated previously. |
| `--sampler <sampler>` | `-A<sampler>` | `k_lms` | Sampler to use. Use `-h` to get list of available samplers. |
| `--karras_max <int>` | | `29` | When using k\_\* samplers, set the maximum number of steps before shifting from using the Karras noise schedule (good for low step counts) to the LatentDiffusion noise schedule (good for high step counts). This value is sticky. [29] |
| `--hires_fix` | | | Larger images often have duplication artefacts. This option suppresses duplicates by generating the image at low res, and then using img2img to increase the resolution |
| `--png_compression <0-9>` | `-z<0-9>` | `6` | Select level of compression for output files, from 0 (no compression) to 9 (max compression) |
| `--grid` | `-g` | `False` | Turn on grid mode to return a single image combining all the images generated by this prompt |
| `--individual` | `-i` | `True` | Turn off grid mode (deprecated; leave off `--grid` instead) |
| `--outdir <path>` | `-o<path>` | `outputs/img_samples` | Temporarily change the location of these images |
| `--seamless` | | `False` | Activate seamless tiling for interesting effects |
| `--seamless_axes` | | `x,y` | Specify which axes to use circular convolution on. |
| `--log_tokenization` | `-t` | `False` | Display a color-coded list of the parsed tokens derived from the prompt |
| `--skip_normalization` | `-x` | `False` | Weighted subprompts will not be normalized. See [Weighted Prompts](./OTHER.md#weighted-prompts) |
| `--upscale <int> <float>` | `-U <int> <float>` | `-U 1 0.75` | Upscale image by magnification factor (2, 4), and set strength of upscaling (0.0-1.0). If strength not set, will default to 0.75. |
| `--facetool_strength <float>` | `-G <float>` | `-G0` | Fix faces (defaults to using the GFPGAN algorithm); argument indicates how hard the algorithm should try (0.0-1.0) |
| `--facetool <name>` | `-ft <name>` | `-ft gfpgan` | Select face restoration algorithm to use: gfpgan, codeformer |
| `--codeformer_fidelity` | `-cf <float>` | `0.75` | Used along with CodeFormer. Takes values between 0 and 1. 0 produces high quality but low accuracy. 1 produces high accuracy but low quality |
| `--save_original` | `-save_orig` | `False` | When upscaling or fixing faces, this will cause the original image to be saved rather than replaced. |
| `--variation <float>` | `-v<float>` | `0.0` | Add a bit of noise (0.0=none, 1.0=high) to the image in order to generate a series of variations. Usually used in combination with `-S<seed>` and `-n<int>` to generate a series of riffs on a starting image. See [Variations](./VARIATIONS.md). |
| `--with_variations <pattern>` | | `None` | Combine two or more variations. See [Variations](./VARIATIONS.md) for how to use this. |
| `--save_intermediates <n>` | | `None` | Save the image from every nth step into an "intermediates" folder inside the output directory |
Note that the width and height of the image must be multiples of 64. You can
provide different values, but they will be rounded down to the nearest multiple
of 64.
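
As an illustration, a txt2img invocation that combines several of the arguments above might look like the following (the prompt and values here are hypothetical):

```bash
invoke> "a watercolor painting of a lighthouse at dawn" -W768 -H512 -s50 -C7.5 -n3 -S42
```

Here `-n3` generates three images from the same prompt, and `-S42` pins the seed so the series can be recreated later.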
### This is an example of img2img:

```bash
invoke> waterfall and rainbow -I./vacation-photo.png -W640 -H480 --fit
```

This will modify the indicated vacation photograph by making it more like the
prompt. Results will vary greatly depending on what is in the image. We also ask
to `--fit` the image into a box no bigger than 640x480. Otherwise the image size
will be identical to the provided photo and you may run out of memory if it is
large.

In addition to the command-line options recognized by txt2img, img2img accepts
additional options:
| Argument <img width="160" align="right"/> | Shortcut | Default | Description |
| ----------------------------------------- | ----------- | ------- | ----------- |
| `--init_img <path>` | `-I<path>` | `None` | Path to the initialization image |
| `--fit` | `-F` | `False` | Scale the image to fit into the specified `-H` and `-W` dimensions |
| `--strength <float>` | `-s<float>` | `0.75` | How hard to try to match the prompt to the initial image. Ranges from 0.0-0.99, with higher values replacing the initial image completely. |
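
For instance, a hypothetical img2img run that stays fairly close to the original photo might look like this (the file name is illustrative):

```bash
invoke> "a fauvist painting of a seaside village" -I./village-photo.png --strength 0.6 --fit
```

Lower `--strength` values preserve more of the original image; higher values let the prompt dominate.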
@ -211,41 +211,39 @@ accepts additional options:
the pixels underneath when you create the transparent areas. See
[Inpainting](./INPAINTING.md) for details.

inpainting accepts all the arguments used for txt2img and img2img, as well as
the `--mask` (`-M`) and `--text_mask` (`-tm`) arguments:
| Argument <img width="100" align="right"/> | Shortcut | Default | Description |
| ----------------------------------------- | ------------------------ | ------- | ----------- |
| `--init_mask <path>` | `-M<path>` | `None` | Path to an image the same size as the initial_image, with areas for inpainting made transparent. |
| `--invert_mask` | | `False` | If true, invert the mask so that transparent areas are opaque and vice versa. |
| `--text_mask <prompt> [<float>]` | `-tm <prompt> [<float>]` | `None` | Create a mask from a text prompt describing part of the image |
The mask may either be an image with transparent areas, in which case the
inpainting will occur in the transparent areas only, or a black and white image,
in which case all black areas will be painted into.

`--text_mask` (short form `-tm`) is a way to generate a mask using a text
description of the part of the image to replace. For example, if you have an
image of a breakfast plate with a bagel, toast and scrambled eggs, you can
selectively mask the bagel and replace it with a piece of cake this way:

```bash
invoke> a piece of cake -I /path/to/breakfast.png -tm bagel
```

The algorithm uses [clipseg](https://github.com/timojl/clipseg) to classify
different regions of the image. The classifier puts out a confidence score for
each region it identifies. Generally regions that score above 0.5 are reliable,
but if you are getting too much or too little masking you can adjust the
threshold down (to get more mask), or up (to get less). In this example, by
passing `-tm` a higher value, we are insisting on a more stringent
classification.

```bash
invoke> a piece of cake -I /path/to/breakfast.png -tm bagel 0.6
```
# Other Commands
@ -253,31 +251,26 @@ The CLI offers a number of commands that begin with "!".
## Postprocessing images

To postprocess a file using face restoration or upscaling, use the `!fix`
command.

### `!fix`

This command runs a post-processor on a previously-generated image. It takes a
PNG filename or path and applies your choice of the `-U`, `-G`, or `--embiggen`
switches in order to fix faces or upscale. If you provide a filename, the script
will look for it in the current output directory. Otherwise you can provide a
full or partial path to the desired file.

Some examples:
!!! example "Upscale to 4X its original size and fix faces using codeformer"

    ```bash
    invoke> !fix 0000045.4829112.png -G1 -U4 -ft codeformer
    ```
!!! example "Use the GFPGAN algorithm to fix faces, then upscale to 3X using --embiggen"

    ```bash
    invoke> !fix 0000045.4829112.png -G0.8 -ft gfpgan
@ -286,26 +279,27 @@ Some examples:
    >> GFPGAN - Restoring Faces for image seed:4829112
    Outputs:
    [1] outputs/img-samples/000017.4829112.gfpgan-00.png: !fix "outputs/img-samples/0000045.4829112.png" -s 50 -S -W 512 -H 512 -C 7.5 -A k_lms -G 0.8
    ```
### !mask

This command takes an image, a text prompt, and uses the `clipseg` algorithm to
automatically generate a mask of the area that matches the text prompt. It is
useful for debugging the text masking process prior to inpainting with the
`--text_mask` argument. See [INPAINTING.md](./INPAINTING.md) for details.
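
For example, to preview the mask that the description `hair` would produce before committing to an inpainting run (a full transcript of this command appears in the Inpainting documentation):

```bash
invoke> !mask ./test-pictures/curly.png -tm hair 0.5
```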
## Model selection and importation

The CLI allows you to add new models on the fly, as well as to switch among them
rapidly without leaving the script.

### !models

This prints out a list of the models defined in `config/models.yaml`. The active
model is bold-faced.

Example:

<pre>
laion400m              not loaded   <no description>
<b>stable-diffusion-1.4   active       Stable Diffusion v1.4</b>
@ -314,13 +308,12 @@ waifu-diffusion not loaded Waifu Diffusion v1.3
### !switch <model>

This quickly switches from one model to another without leaving the CLI script.
`invoke.py` uses a memory caching system; once a model has been loaded,
switching back and forth is quick. The following example shows this in action.
Note how the second column of the `!models` table changes to `cached` after a
model is first loaded, and that the long initialization step is not needed when
loading a cached model.

<pre>
invoke> !models
@ -360,24 +353,22 @@ waifu-diffusion cached Waifu Diffusion v1.3
### !import_model <path/to/model/weights>

This command imports a new model weights file into InvokeAI, makes it available
for image generation within the script, and writes out the configuration for the
model into `config/models.yaml` for use in subsequent sessions.

Provide `!import_model` with the path to a weights file ending in `.ckpt`. If
you type a partial path and press tab, the CLI will autocomplete. Although it
will also autocomplete to `.vae` files, these are not currently supported (but
will be soon).

When you hit return, the CLI will prompt you to fill in additional information
about the model, including the short name you wish to use for it with the
`!switch` command, a brief description of the model, the default image width and
height to use with this model, and the model's configuration file. The latter
three fields are automatically filled with reasonable defaults. In the example
below, the bold-faced text shows what the user typed in with the exception of
the width, height and configuration file paths, which were filled in
automatically.

Example:
@ -412,12 +403,13 @@ invoke>
### !edit_model <name_of_model>

The `!edit_model` command can be used to modify a model that is already defined
in `config/models.yaml`. Call it with the short name of the model you wish to
modify, and it will allow you to modify the model's `description`, `weights` and
other fields.

Example:

<pre>
invoke> <b>!edit_model waifu-diffusion</b>
>> Editing model waifu-diffusion from configuration file ./configs/models.yaml
@ -440,28 +432,28 @@ OK to import [n]? y
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
...
</pre>
```bash
invoke> !fix 000017.4829112.gfpgan-00.png --embiggen 3
...lots of text...
Outputs:
[2] outputs/img-samples/000018.2273800735.embiggen-00.png: !fix "outputs/img-samples/000017.243781548.gfpgan-00.png" -s 50 -S 2273800735 -W 512 -H 512 -C 7.5 -A k_lms --embiggen 3.0 0.75 0.25
```
## History processing

The CLI provides a series of convenient commands for reviewing previous actions,
retrieving them, modifying them, and re-running them.

### !history

The invoke script keeps track of all the commands you issue during a session,
allowing you to re-run them. On Mac and Linux systems, it also writes the
command-line history out to disk, giving you access to the most recent 1000
commands issued.

The `!history` command will return a numbered list of all the commands issued
during the session (Windows), or the most recent 1000 commands (Mac|Linux). You
can then repeat a command by using the command `!NNN`, where "NNN" is the
history line number. For example:

```bash
invoke> !history
@ -478,14 +470,13 @@ invoke> watercolor of beautiful woman sitting under tree wearing broad hat and f
### !fetch

This command retrieves the generation parameters from a previously generated
image and either loads them into the command line (Linux|Mac), or prints them
out in a comment for copy-and-paste (Windows). You may provide either the name
of a file in the current output directory, or a full file path. You may also
specify a path to a folder of image PNG files together with the wildcard
`*.png`; this retrieves the command used to generate each image and saves the
commands to a file `commands.txt` for further processing.

This example loads the generation command for a single png file:
@ -495,8 +486,8 @@ invoke> !fetch 0000015.8929913.png
invoke> a fantastic alien landscape -W 576 -H 512 -s 60 -A plms -C 7.5
```

This one fetches the generation commands from a batch of files and stores them
into `selected.txt`:

```bash
invoke> !fetch outputs\selected-imgs\*.png selected.txt
@ -506,12 +497,12 @@ invoke> !fetch outputs\selected-imgs\*.png selected.txt
This command replays a text file generated by `!fetch` or created manually.

```bash
invoke> !replay outputs\selected-imgs\selected.txt
```

Note that these commands may behave unexpectedly if given a PNG file that was
not generated by InvokeAI.
### !search <search string>
@ -525,42 +516,47 @@ invoke> !search surreal
### `!clear`

This clears the search history from memory and disk. Be advised that this
operation is irreversible and does not issue any warnings!

## Command-line editing and completion

The command-line offers convenient history tracking, editing, and command
completion.

- To scroll through previous commands and potentially edit/reuse them, use the
  ++up++ and ++down++ keys.
- To edit the current command, use the ++left++ and ++right++ keys to position
  the cursor, and then ++backspace++, ++delete++ or insert characters.
- To move to the very beginning of the command, type ++ctrl+a++ (or
  ++command+a++ on the Mac).
- To move to the end of the command, type ++ctrl+e++.
- To cut a section of the command, position the cursor where you want to start
  cutting and type ++ctrl+k++.
- To paste a cut section back in, position the cursor where you want to paste,
  and type ++ctrl+y++.
Windows users can get similar, but more limited, functionality if they launch
`invoke.py` with the `winpty` program and have the `pyreadline3` library
installed:

```batch
> winpty python scripts\invoke.py
```
On the Mac and Linux platforms, when you exit `invoke.py`, the last 1000 lines
of your command-line history will be saved. When you restart `invoke.py`, you
can access the saved history using the ++up++ key.

In addition, limited command-line completion is installed. In various contexts,
you can start typing your command and press ++tab++. A list of potential
completions will be presented to you. You can then type a little more, hit
++tab++ again, and eventually autocomplete what you want.

When specifying file paths using the one-letter shortcuts, the CLI will attempt
to complete pathnames for you. This is most handy for the `-I` (init image) and
`-M` (init mask) paths. To initiate completion, start the path with a slash
(`/`) or `./`. For example:

```bash
invoke> zebra with a mustache -I./test-pictures<TAB>
@ -6,29 +6,27 @@ title: Inpainting
## **Creating Transparent Regions for Inpainting**

Inpainting is really cool. To do it, you start with an initial image and use a
photoeditor to make one or more regions transparent (i.e. they have a "hole" in
them). You then provide the path to this image at the `invoke>` command line
using the `-I` switch. Stable Diffusion will only paint within the transparent
region.

There's a catch. In the current implementation, you have to prepare the initial
image correctly so that the underlying colors are preserved under the
transparent area. Many image editing applications will by default erase the
color information under the transparent pixels and replace them with white or
black, which will lead to suboptimal inpainting. It often helps to apply
incomplete transparency, such as any value between 1 and 99%.

You also must take care to export the PNG file in such a way that the color
information is preserved. There is often an option in the export dialog that
lets you specify this.

If your photoeditor is erasing the underlying color information, `invoke.py`
will give you a big fat warning. If you can't find a way to coax your
photoeditor to retain color values under transparent areas, then you can combine
the `-I` and `-M` switches to provide both the original unedited image and the
masked (partially transparent) image:

```bash
invoke> "man with cat on shoulder" -I./images/man.png -M./images/man-transparent.png
@ -36,47 +34,47 @@ invoke> "man with cat on shoulder" -I./images/man.png -M./images/man-transparent
## **Masking using Text**

You can also create a mask using a text prompt to select the part of the image
you want to alter, using the [clipseg](https://github.com/timojl/clipseg)
algorithm. This works on any image, not just ones generated by InvokeAI.

The `--text_mask` (short form `-tm`) option takes two arguments. The first
argument is a text description of the part of the image you wish to mask (paint
over). If the text description contains a space, you must surround it with
quotation marks. The optional second argument is the minimum threshold for the
mask classifier's confidence score, described in more detail below.

To see how this works in practice, here's an image of a still life painting that
I got off the web.

<figure markdown>
![still life scaled](../assets/still-life-scaled.jpg)
</figure>
You can selectively mask out the orange and replace it with a baseball in this
way:

```bash
invoke> a baseball -I /path/to/still_life.png -tm orange
```

<figure markdown>
![still life inpainted](../assets/still-life-inpainted.png)
</figure>

The clipseg classifier produces a confidence score for each region it
identifies. Generally regions that score above 0.5 are reliable, but if you are
getting too much or too little masking you can adjust the threshold down (to get
more mask), or up (to get less). In this example, by passing `-tm` a higher
value, we are insisting on a tighter mask. However, if you make it too high, the
orange may not be picked up at all!

```bash
invoke> a baseball -I /path/to/breakfast.png -tm orange 0.6
```

The `!mask` command may be useful for debugging problems with the text2mask
feature. The syntax is `!mask /path/to/image.png -tm <text> <threshold>`.

It will generate three files:
@ -84,19 +82,18 @@ It will generate three files:
  - it will be named XXXXX.<imagename>.<prompt>.selected.png
- The image with the un-selected area highlighted.
  - it will be named XXXXX.<imagename>.<prompt>.deselected.png
- The image with the selected area converted into a black and white image
  according to the threshold level
  - it will be named XXXXX.<imagename>.<prompt>.masked.png
The `.masked.png` file can then be directly passed to the `invoke>` prompt in
the CLI via the `-M` argument. Do not attempt this with the `selected.png` or
`deselected.png` files, as they contain some transparency throughout the image
and will not produce the desired results.

Here is an example of how `!mask` works:
```bash
invoke> !mask ./test-pictures/curly.png -tm hair 0.5
>> generating masks from ./test-pictures/curly.png
>> Initializing clipseg model for text to mask inference
@ -106,23 +103,30 @@ Outputs:
[941.3] outputs/img-samples/000019.curly.hair.masked.png: !mask ./test-pictures/curly.png -tm hair 0.5
```
<figure markdown>
![curly](../assets/outpainting/curly.png)
<figcaption>Original image "curly.png"</figcaption>
</figure>

<figure markdown>
![curly hair selected](../assets/inpainting/000019.curly.hair.selected.png)
<figcaption>000019.curly.hair.selected.png</figcaption>
</figure>

<figure markdown>
![curly hair deselected](../assets/inpainting/000019.curly.hair.deselected.png)
<figcaption>000019.curly.hair.deselected.png</figcaption>
</figure>

<figure markdown>
![curly hair masked](../assets/inpainting/000019.curly.hair.masked.png)
<figcaption>000019.curly.hair.masked.png</figcaption>
</figure>
It looks like we selected the hair pretty well at the 0.5 threshold (which is
the default, so we didn't actually have to specify it), so let's have some fun:

```bash
invoke> medusa with cobras -I ./test-pictures/curly.png -M 000019.curly.hair.masked.png -C20
>> loaded input image of size 512x512 from ./test-pictures/curly.png
...
@ -130,86 +134,83 @@ Outputs:
[946] outputs/img-samples/000024.801380492.png: "medusa with cobras" -s 50 -S 801380492 -W 512 -H 512 -C 20.0 -I ./test-pictures/curly.png -A k_lms -f 0.75
```
<figure markdown>
![medusa with cobras](../assets/inpainting/000024.801380492.png)
</figure>

You can also skip the `!mask` creation step and just select the masked
region directly:

```bash
invoke> medusa with cobras -I ./test-pictures/curly.png -tm hair -C20
```
## Using the RunwayML inpainting model

The
[RunwayML Inpainting Model v1.5](https://huggingface.co/runwayml/stable-diffusion-inpainting)
is a specialized version of
[Stable Diffusion v1.5](https://huggingface.co/spaces/runwayml/stable-diffusion-v1-5)
that contains extra channels specifically designed to enhance inpainting and
outpainting. While it can do regular `txt2img` and `img2img`, it really shines
when filling in missing regions. It has an almost uncanny ability to blend the
new regions with existing ones in a semantically coherent way.
To install the inpainting model, follow the
[instructions](../installation/INSTALLING_MODELS.md) for installing a new model.
You may use either the CLI (`invoke.py` script) or directly edit the
`configs/models.yaml` configuration file to do this. The main thing to watch out
for is that the model `config` option must be set up to use
`v1-inpainting-inference.yaml` rather than the `v1-inference.yaml` file that is
used by Stable Diffusion 1.4 and 1.5.
After installation, your `models.yaml` should contain an entry that looks like
this one:

    inpainting-1.5:
        weights: models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt
        description: SD inpainting v1.5
        config: configs/stable-diffusion/v1-inpainting-inference.yaml
        vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
        width: 512
        height: 512
As shown in the example, you may include a VAE fine-tuning weights file as well.
This is strongly recommended.

To use the custom inpainting model, launch `invoke.py` with the argument
`--model inpainting-1.5` or alternatively from within the script use the
`!switch inpainting-1.5` command to load and switch to the inpainting model.
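
For example, assuming the `models.yaml` entry shown above, either route might look like this sketch:

```bash
# launch the CLI with the inpainting model pre-loaded
python scripts/invoke.py --model inpainting-1.5

# or switch to it from within a running session
invoke> !switch inpainting-1.5
```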
You can now do inpainting and outpainting exactly as described above, but there
will (likely) be a noticeable improvement in coherence. Txt2img and Img2img will
work as well.

There are a few caveats to be aware of:
1. The inpainting model is larger than the standard model, and will use nearly 4
   GB of GPU VRAM. This makes it unlikely to run on a 4 GB graphics card.

2. When operating in Img2img mode, the inpainting model is much less steerable
   than the standard model. It is great for making small changes, such as
   changing the pattern of a fabric, or slightly changing a subject's expression
   or hair, but the model will resist making the dramatic alterations that the
   standard model lets you do.

3. While the `--hires` option works fine with the inpainting model, some special
   features, such as `--embiggen`, are disabled.

4. Prompt weighting (`banana++ sushi`) and merging work well with the inpainting
   model, but prompt swapping
   (`a ("fluffy cat").swap("smiling dog") eating a hotdog`) will not have any
   effect due to the way the model is set up. You may use text masking (with
   `-tm thing-to-mask`) as an effective replacement; see the example after this
   list.

5. The model tends to oversharpen the image if you use high step or CFG values.
   If you need to do large steps, use the standard model.

6. The `--strength` (`-f`) option has no effect on the inpainting model due to
   its fundamental differences with the standard model. It will always take the
   full number of steps you specify.
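
For instance, instead of prompt swapping you can mask the region by description and inpaint over it directly (a hypothetical command; the image path is illustrative):

```bash
invoke> a smiling dog eating a hotdog -I ./photos/cat-hotdog.png -tm "fluffy cat"
```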
## Troubleshooting
@ -217,23 +218,21 @@ Here are some troubleshooting tips for inpainting and outpainting.
## Inpainting is not changing the masked region enough!

One of the things to understand about how inpainting works is that it is
equivalent to running img2img on just the masked (transparent) area. img2img
builds on top of the existing image data, and therefore will attempt to preserve
colors, shapes and textures to the best of its ability. Unfortunately this means
that if you want to make a dramatic change in the inpainted region, for example
replacing a red wall with a blue one, the algorithm will fight you.
You have a couple of options. The first is to increase the values of the
requested steps (`-sXXX`), strength (`-f0.XX`), and/or condition-free guidance
(`-CXX.X`). If this is not working for you, a more extreme step is to provide
the `--inpaint_replace 0.X` (`-r0.X`) option. This value ranges from 0.0 to 1.0.
The higher it is the less attention the algorithm will pay to the data
underneath the masked region. At high values this will enable you to replace
colored regions entirely, but beware that the masked region may not blend in
with the surrounding unmasked regions as well.
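
As an illustration, a hypothetical command that pushes for a dramatic change by combining these options might look like this (the file names are made up):

```bash
invoke> a bright blue wall -I ./room.png -M ./room-mask.png -s80 -f0.9 -r0.9
```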
---
@ -248,8 +247,8 @@ surrounding unmasked regions as well.
5. Open the Layers toolbar (^L) and select "Floating Selection"
6. Set opacity to a value between 0% and 99%
7. Export as PNG
8. In the export dialogue, make sure the "Save colour values from transparent
   pixels" checkbox is selected.
---
@ -261,36 +260,47 @@ surrounding unmasked regions as well.
![step1](../assets/step1.png)
</figure>
2. Use any of the selection tools (Marquee, Lasso, or Wand) to select the area you desire to inpaint. 2. Use any of the selection tools (Marquee, Lasso, or Wand) to select the area
you desire to inpaint.
<figure markdown> <figure markdown>
![step2](../assets/step2.png) ![step2](../assets/step2.png)
</figure> </figure>

3. Because we'll be applying a mask over the area we want to preserve, you
   should now select the inverse by using the ++shift+ctrl+i++ shortcut, or
   right clicking and using the "Select Inverse" option.

4. You'll now create a mask by selecting the image layer, and masking the
   selection. Make sure that you don't delete any of the underlying image, or
   your inpainting results will be dramatically impacted.

   <figure markdown>
   ![step4](../assets/step4.png)
   </figure>

5. Make sure to hide any background layers that are present. You should see the
   mask applied to your image layer, and the image on your canvas should
   display the checkered background.

   <figure markdown>
   ![step5](../assets/step5.png)
   </figure>

6. Save the image as a transparent PNG by using `File`-->`Save a Copy` from the
   menu bar, or by using the keyboard shortcut ++alt+ctrl+s++

   <figure markdown>
   ![step6](../assets/step6.png)
   </figure>

7. After following the inpainting instructions above (either through the CLI or
   the Web UI), marvel at your newfound ability to selectively invoke. Lookin'
   good!

   <figure markdown>
   ![step7](../assets/step7.png)
   </figure>

8. In the export dialogue, make sure the "Save colour values from transparent
   pixels" checkbox is selected.

@@ -20,9 +20,9 @@ title: Manual Installation, Linux

~$ ./Anaconda3-2022.05-Linux-x86_64.sh
```

After installing Anaconda, you should log out of your system and log back
in. If the installation worked, your command prompt will be prefixed by the
name of the current Anaconda environment - `(base)`.

3. Copy the InvokeAI source code from GitHub:

@@ -30,9 +30,11 @@ title: Manual Installation, Linux

(base) ~$ git clone https://github.com/invoke-ai/InvokeAI.git
```

This will create an InvokeAI folder where you will follow the rest of the
steps.

4. Enter the newly-created InvokeAI folder. From this step forward make sure
   that you are working in the InvokeAI directory!

```bash
(base) ~$ cd InvokeAI

@@ -49,34 +51,52 @@ title: Manual Installation, Linux

(invokeai) ~/InvokeAI$
```

After these steps, your command prompt will be prefixed by `(invokeai)` as
shown above.

6. Load the big stable diffusion weights files and a couple of smaller
   machine-learning models:

```bash
(invokeai) ~/InvokeAI$ python3 scripts/preload_models.py
```

!!! note

    This script will lead you through the process of creating an account on Hugging Face,
    accepting the terms and conditions of the Stable Diffusion model license,
    and obtaining an access token for downloading. It will then download and
    install the weights files for you.

    Please look [here](INSTALLING_MODELS.md) for a manual process for doing
    the same thing.

7. Start generating images!

!!! todo "Run InvokeAI!"

    !!! warning "IMPORTANT"

        Make sure that the conda environment is activated, which should create
        `(invokeai)` in front of your prompt!

    === "CLI"

        ```bash
        python scripts/invoke.py
        ```

    === "local Webserver"

        ```bash
        python scripts/invoke.py --web
        ```

    === "Public Webserver"

        ```bash
        python scripts/invoke.py --web --host 0.0.0.0
        ```

To use an alternative model you may invoke the `!switch` command in
the CLI, or pass `--model <model_name>` during `invoke.py` launch for
@@ -84,18 +104,16 @@ either the CLI or the Web UI. See [Command Line
Client](../features/CLI.md#model-selection-and-importation). The
model names are defined in `configs/models.yaml`.
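
For example, assuming a model named `waifu-diffusion` has been defined in
`configs/models.yaml` (the name here is only illustrative), either of the
following would select it:

```bash
# from within the interactive CLI
invoke> !switch waifu-diffusion

# or when launching the script
python scripts/invoke.py --model waifu-diffusion
```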

8. Subsequently, to relaunch the script, be sure to run "conda activate
   invokeai" (step 5, second command), enter the `InvokeAI` directory, and then
   launch the invoke script (step 7), as sketched below. If you forget to
   activate the 'invokeai' environment, the script will fail with multiple
   `ModuleNotFound` errors.
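
A minimal relaunch sequence might look like this (assuming the default
directory layout created in the steps above):

```bash
# activate the environment, enter the source directory, and launch
(base) ~$ conda activate invokeai
(invokeai) ~$ cd InvokeAI
(invokeai) ~/InvokeAI$ python scripts/invoke.py
```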

## Updating to newer versions of the script

This distribution is changing rapidly. If you used the `git clone` method
(step 3) to download the InvokeAI directory, then to update to the latest and
greatest version, launch the Anaconda window, enter `InvokeAI` and type:

```bash
(invokeai) ~/InvokeAI$ git pull
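
# If the update changed dependencies, refreshing the conda environment
# afterwards is a sensible extra step (the environment file name is an
# assumption and may differ in your checkout):
(invokeai) ~/InvokeAI$ conda env update -f environment.yml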

@@ -4,17 +4,18 @@ title: Manual Installation, macOS

# :fontawesome-brands-apple: macOS

Invoke AI runs quite well on M1 Macs and we have a number of M1 users in the
community.

While the repo does run on Intel Macs, we only have a couple of reports. If you
have an Intel Mac and run into issues, please create an issue on GitHub and we
will do our best to help.

## Requirements

- macOS 12.3 Monterey or later
- About 10GB of storage (and 10GB of data if your internet connection has data
  caps)
- Any M1 Mac or an Intel Mac with 4GB+ of VRAM (ideally more)

## Installation

@@ -42,9 +43,7 @@ an issue on Github and we will do our best to help.

brew install cmake protobuf rust
```

```bash title="Clone the InvokeAI repository"
# Clone the Invoke AI repo
git clone https://github.com/invoke-ai/InvokeAI.git
cd InvokeAI

@@ -102,6 +101,7 @@ an issue on Github and we will do our best to help.

```bash
# Activate the environment (you need to do this every time you want to run SD)
conda activate invokeai
```

!!! info

@@ -116,20 +116,36 @@ files for you. It will lead you through the process of getting a Hugging Face
account, accepting the Stable Diffusion model weight license agreement, and
creating a download token:

```bash
# This will take some time, depending on the speed of your internet connection
# and will consume about 10GB of space
python scripts/preload_models.py --no-interactive
```

!!! todo "Run InvokeAI!"

    !!! warning "IMPORTANT"

        Make sure that the conda environment is activated, which should create
        `(invokeai)` in front of your prompt!

    === "CLI"

        ```bash
        python scripts/invoke.py
        ```

    === "local Webserver"

        ```bash
        python scripts/invoke.py --web
        ```

    === "Public Webserver"

        ```bash
        python scripts/invoke.py --web --host 0.0.0.0
        ```

To use an alternative model you may invoke the `!switch` command in
the CLI, or pass `--model <model_name>` during `invoke.py` launch for
@@ -141,8 +157,8 @@ model names are defined in `configs/models.yaml`.

## Common problems

After you've followed all the instructions and tried to run invoke.py, you
might get several errors. Here are the errors I've seen and found solutions
for.

### Is it slow?

@@ -158,13 +174,12 @@ python ./scripts/orig_scripts/txt2img.py \

### Doesn't work anymore?

PyTorch nightly includes support for MPS. Because of this, this setup is
inherently unstable. One morning I woke up and it no longer worked no matter
what I did until I switched to miniforge. However, I have another Mac that works
just fine with Anaconda. If you can't get it to work, please search a little
first because many of the errors will get posted and solved. If you can't find a
solution please [create an issue](https://github.com/invoke-ai/InvokeAI/issues).

One debugging step is to update to the latest version of PyTorch nightly.

@@ -201,9 +216,9 @@ conda update \

There are several causes of these errors:

1. Did you remember to `conda activate invokeai`? If your terminal prompt
   begins with "(invokeai)" then you activated it. If it begins with "(base)"
   or something else you haven't.

2. You might've run `./scripts/preload_models.py` or `./scripts/invoke.py`
   instead of `python ./scripts/preload_models.py` or

@@ -220,10 +235,10 @@ There are several causes of these errors:

conda env create -f environment-mac.yml
```

4. If you have activated the invokeai virtual environment and tried rebuilding
   it, maybe the problem could be that I have something installed that you
   don't and you'll just need to manually install it. Make sure you activate
   the virtual environment so it installs there instead of globally.

```bash
conda activate invokeai

@@ -382,11 +397,11 @@ curl \

### How come `--seed` doesn't work?

!!! Information

    Completely reproducible results are not guaranteed across PyTorch releases,
    individual commits, or different platforms. Furthermore, results may not be
    reproducible between CPU and GPU executions, even when using identical seeds.

[PyTorch docs](https://pytorch.org/docs/stable/notes/randomness.html)
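
In practice this means a fixed seed reproduces an image only on the same
machine, platform and PyTorch build. A quick way to check your own setup (the
prompt text is just an example; `-S` sets the seed in the interactive CLI):

```bash
invoke> "test prompt" -S 42
invoke> "test prompt" -S 42
# on the same setup, both runs should produce the same image
```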

@@ -447,11 +462,11 @@ C.

### I just got Rickrolled! Do I have a virus?

You don't have a virus. It's part of the project. Here's
[Rick](https://github.com/invoke-ai/InvokeAI/blob/main/assets/rick.jpeg) and
here's
[the code](https://github.com/invoke-ai/InvokeAI/blob/69ae4b35e0a0f6ee1af8bb9a5d0016ccb27e36dc/scripts/txt2img.py#L79)
that swaps him in. It's an NSFW filter, which IMO, doesn't work very well (and
we call this "computer vision", sheesh).

---

@@ -474,9 +489,9 @@ return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backen

RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
```

Update to the latest version of invoke-ai/InvokeAI. We were patching pytorch but
we found a file in stable-diffusion that we could change instead. This is a
32-bit vs 16-bit problem.

### The processor must support the Intel bla bla bla

@@ -508,4 +523,3 @@ Abort trap: 6

/Users/[...]/opt/anaconda3/envs/invokeai/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
```

@@ -72,28 +72,45 @@ in the wiki

7. Load the big stable diffusion weights files and a couple of smaller
   machine-learning models:

```bash
python scripts/preload_models.py
```

!!! note

    This script will lead you through the process of creating an account on Hugging Face,
    accepting the terms and conditions of the Stable Diffusion model license, and
    obtaining an access token for downloading. It will then download and install the
    weights files for you.

    Please look [here](INSTALLING_MODELS.md) for a manual process for doing the
    same thing.

8. Start generating images!

!!! example ""

    !!! warning "IMPORTANT"

        Make sure that the conda environment is activated, which should create
        `(invokeai)` in front of your prompt!

    === "CLI"

        ```bash
        python scripts/invoke.py
        ```

    === "local Webserver"

        ```bash
        python scripts/invoke.py --web
        ```

    === "Public Webserver"

        ```bash
        python scripts/invoke.py --web --host 0.0.0.0
        ```

To use an alternative model you may invoke the `!switch` command in
the CLI, or pass `--model <model_name>` during `invoke.py` launch for