mirror of https://github.com/invoke-ai/InvokeAI
synced 2024-08-30 20:32:17 +00:00

Merge branch 'development' into development

This commit is contained in:

141 docs/features/CHANGELOG.md Normal file

@@ -0,0 +1,141 @@

---
title: Changelog
---

## v1.13 (in process)

- Supports a Google Colab notebook for a standalone server running on Google
  hardware [Arturo Mendivil](https://github.com/artmen1516)
- WebUI supports GFPGAN/ESRGAN facial reconstruction and upscaling
  [Kevin Gibbons](https://github.com/bakkot)
- WebUI supports incremental display of in-progress images during generation
  [Kevin Gibbons](https://github.com/bakkot)
- Output directory can be specified on the dream> command line.
- The grid was displaying duplicated images when there were not enough images
  to fill the final row [Muhammad Usama](https://github.com/SMUsamaShah)
- Can specify --grid on the dream.py command line as the default.
- Miscellaneous internal bug and stability fixes.

---

## v1.12 (28 August 2022)

- Improved file handling, including the ability to read prompts from standard
  input (kudos to [Yunsaki](https://github.com/yunsaki))
- The web server is now integrated with the dream.py script. Invoke by adding
  --web to the dream.py command arguments.
- Face restoration and upscaling via GFPGAN and Real-ESRGAN are now automatically
  enabled if the GFPGAN directory is located as a sibling to Stable Diffusion.
  VRAM requirements are modestly reduced. Thanks to both
  [Blessedcoolant](https://github.com/blessedcoolant) and
  [Oceanswave](https://github.com/oceanswave) for their work on this.
- You can now swap samplers on the dream> command line.
  [Blessedcoolant](https://github.com/blessedcoolant)

---

## v1.11 (26 August 2022)

- NEW FEATURE: Support upscaling and face enhancement using the GFPGAN module.
  (kudos to [Oceanswave](https://github.com/Oceanswave))
- You can now specify a seed of -1 to use the previous image's seed, -2 to use
  the seed for the image generated before that, etc. Seed memory only extends
  back to the previous command, but will work on all images generated with the
  -n# switch.
- Variant generation support temporarily disabled pending a more general solution.
- Created a feature branch named **yunsaki-morphing-dream** which adds
  experimental support for iteratively modifying the prompt and its parameters.
  Please see [Pull Request #86](https://github.com/lstein/stable-diffusion/pull/86)
  for a synopsis of how this works. Note that when this feature is eventually
  added to the main branch, it may be modified significantly.

---

## v1.10 (25 August 2022)

- A barebones but fully functional interactive web server for online generation
  of txt2img and img2img.

---

## v1.09 (24 August 2022)

- A new -v option allows you to generate multiple variants of an initial image
  in img2img mode (kudos to [Oceanswave](https://github.com/Oceanswave)).
- [See this discussion in the PR for examples and details on use](https://github.com/lstein/stable-diffusion/pull/71#issuecomment-1226700810)
- Added ability to personalize text-to-image generation (kudos to
  [Oceanswave](https://github.com/Oceanswave) and
  [nicolai256](https://github.com/nicolai256))
- Enabled all of the samplers from k_diffusion

---

## v1.08 (24 August 2022)

- Escape single quotes on the dream> command before trying to parse. This
  avoids parse errors.
- Removed instruction to get Python 3.8 as the first step in the Windows
  install; Anaconda3 does it for you.
- Added bounds checks for numeric arguments that could cause crashes.
- Cleaned up the copyright and license agreement files.

---

## v1.07 (23 August 2022)

- Image filenames will now never fill gaps in the sequence, but will be assigned
  the next higher name in the chosen directory. This ensures that the alphabetic
  and chronological sort orders are the same.

---

## v1.06 (23 August 2022)

- Added weighted prompt support contributed by
  [xraxra](https://github.com/xraxra)
- Example of using weighted prompts to tweak a demonic figure contributed by
  [bmaltais](https://github.com/bmaltais)

---

## v1.05 (22 August 2022 - after the drop)

- Filenames now use the following formats:
  000010.95183149.png -- two files produced by the same command (e.g. -n2),
  000010.26742632.png -- distinguished by a different seed.
  000011.455191342.01.png -- two files produced by the same command using
  000011.455191342.02.png -- a batch size >1 (e.g. -b2). They have the same seed.
  000011.4160627868.grid#1-4.png -- a grid of four images (-g); the whole grid
  can be regenerated with the indicated key
- It should no longer be possible for one image to overwrite another.
- You can use the "cd" and "pwd" commands at the dream> prompt to set and
  retrieve the path of the output directory.

## v1.04 (22 August 2022 - after the drop)

- Updated README to reflect installation of the released weights.
- Suppressed a very noisy and inconsequential warning when loading the frozen
  CLIP tokenizer.

## v1.03 (22 August 2022)

- The original txt2img and img2img scripts from the CompViz repository have been
  moved into a subfolder named "orig_scripts", to reduce confusion.

## v1.02 (21 August 2022)

- A copy of the prompt and all of its switches and options is now stored in the
  corresponding image in a tEXt metadata field named "Dream". You can read the
  prompt using scripts/images2prompt.py, or an image editor that allows you to
  explore the full metadata. **Please run "conda env update -f environment.yaml"
  to load the k_lms dependencies!!**

## v1.01 (21 August 2022)

- Added k_lms sampling. **Please run "conda env update -f environment.yaml" to
  load the k_lms dependencies!!**
- Use half precision arithmetic by default, resulting in faster execution and
  lower memory requirements. Pass argument --full_precision to dream.py to get
  slower but more accurate image generation.

@@ -1,32 +1,29 @@

---
title: CLI
---

## **Interactive Command Line Interface**

The `dream.py` script, located in `scripts/dream.py`, provides an interactive interface to image
generation similar to the "dream mothership" bot that Stable AI provided on its Discord server.

Unlike the txt2img.py and img2img.py scripts provided in the original CompViz/stable-diffusion
source code repository, the time-consuming initialization of the AI model only happens once. After
that, image generation from the command-line interface is very fast.

The script uses the readline library to allow for in-line editing, command history (up and down
arrows), autocompletion, and more. To help keep track of which prompts generated which images, the
script writes a log file of image names and prompts to the selected output directory.

In addition, as of version 1.02, it also writes the prompt into the PNG file's metadata, where it
can be retrieved using scripts/images2prompt.py.

The script is confirmed to work on Linux, Windows and Mac systems.

_Note:_ This script runs from the command-line or can be used as a Web application. The Web GUI is
currently rudimentary, but a much better replacement is on its way.

```bash
(ldm) ~/stable-diffusion$ python3 ./scripts/dream.py
* Initializing, be patient...
Loading model from models/ldm/text2img-large/model.ckpt
```

@@ -54,238 +51,175 @@ dream> q

<img src="../assets/dream-py-demo.png"/>
</p>

The `dream>` prompt's arguments are pretty much identical to those used in the Discord bot, except
you don't need to type "!dream" (it doesn't hurt if you do). A significant change is that creation
of individual images is now the default unless --grid (-g) is given. A full list is given in
[List of prompt arguments](#list-of-prompt-arguments).

## Arguments

The script itself also recognizes a series of command-line switches that will change important
global defaults, such as the directory for image outputs and the location of the model weight
files.

## List of arguments recognized at the command line

These command-line arguments can be passed to dream.py when you first run it from the Windows, Mac
or Linux command line. Some set defaults that can be overridden on a per-prompt basis (see
[List of prompt arguments](#list-of-prompt-arguments)); others apply globally.

| Argument                | Shortcut    | Default                                      | Description                                                                                           |
| :---------------------- | :---------: | -------------------------------------------- | ----------------------------------------------------------------------------------------------------- |
| --help                  | -h          |                                              | Print a concise help message.                                                                         |
| --outdir <path>         | -o<path>    | outputs/img_samples                          | Location for generated images.                                                                        |
| --prompt_as_dir         | -p          | False                                        | Name output directories using the prompt text.                                                        |
| --from_file <path>      |             | None                                         | Read list of prompts from a file. Use "-" to read from standard input                                 |
| --model <modelname>     |             | stable-diffusion-1.4                         | Loads model specified in configs/models.yaml. Currently one of "stable-diffusion-1.4" or "laion400m"  |
| --full_precision        | -F          | False                                        | Run in slower full-precision mode. Needed for Macintosh M1/M2 hardware and some older video cards.    |
| --web                   |             | False                                        | Start in web server mode                                                                              |
| --host <ip addr>        |             | localhost                                    | Which network interface web server should listen on. Set to 0.0.0.0 to listen on any.                 |
| --port <port>           |             | 9090                                         | Which port web server should listen for requests on.                                                  |
| --config <path>         |             | configs/models.yaml                          | Configuration file for models and their weights.                                                      |
| --iterations <int>      | -n<int>     | 1                                            | How many images to generate per prompt.                                                               |
| --grid                  | -g          | False                                        | Save all image series as a grid rather than individually.                                             |
| --sampler <sampler>     | -A<sampler> | k_lms                                        | Sampler to use. Use -h to get list of available samplers.                                             |
| --seamless              |             | False                                        | Create interesting effects by tiling elements of the image.                                           |
| --embedding_path <path> |             | None                                         | Path to pre-trained embedding manager checkpoints, for custom models                                  |
| --gfpgan_dir            |             | src/gfpgan                                   | Path to where GFPGAN is installed.                                                                    |
| --gfpgan_model_path     |             | experiments/pretrained_models/GFPGANv1.3.pth | Path to GFPGAN model file, relative to --gfpgan_dir.                                                  |
| --device <device>       | -d<device>  | torch.cuda.current_device()                  | Device to run SD on, e.g. "cuda:0"                                                                    |
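
For instance, a session that writes images to a custom folder and also serves the web interface
could be launched as follows (an illustrative combination of the switches above; the output path is
hypothetical):

```bash
(ldm) ~/stable-diffusion$ python3 ./scripts/dream.py --outdir ./my-outputs --web --port 9090
```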

These arguments are deprecated but still work:

| Argument         | Shortcut | Default | Description                                                      |
| ---------------- | -------- | ------- | ---------------------------------------------------------------- |
| --weights <path> |          | None    | Path to weights file; use `--model stable-diffusion-1.4` instead |
| --laion400m      | -l       | False   | Use older LAION400m weights; use `--model=laion400m` instead     |
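
For instance, a launch that previously used the deprecated `--laion400m` flag would now select the
same weights through `--model` (an illustrative invocation):

```bash
# deprecated form
(ldm) ~/stable-diffusion$ python3 ./scripts/dream.py --laion400m
# current form
(ldm) ~/stable-diffusion$ python3 ./scripts/dream.py --model laion400m
```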

### **A note on path names**

On Windows systems, you may run into problems when passing the dream script standard backslashed
path names because the Python interpreter treats "\" as an escape. You can either double your
slashes (ick): C:\\\\path\\\\to\\\\my\\\\file, or use Linux/Mac style forward slashes (better):
C:/path/to/my/file.

### List of prompt arguments

After the dream.py script initializes, it will present you with a **dream>** prompt. Here you can
enter information to generate images from text (txt2img), to embellish an existing image or sketch
(img2img), or to selectively alter chosen regions of the image (inpainting).

### This is an example of txt2img

```bash
dream> "waterfall and rainbow" -W640 -H480
```

This will create the requested image with the dimensions 640 (width) and 480 (height).

These are the `dream>` commands that apply to txt2img:

| Argument                    | Shortcut         | Default             | Description                                                                                                                                                                                                                                    |
| --------------------------- | ---------------- | ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| "my prompt"                 |                  |                     | Text prompt to use. The quotation marks are optional.                                                                                                                                                                                          |
| --width <int>               | -W<int>          | 512                 | Width of generated image                                                                                                                                                                                                                       |
| --height <int>              | -H<int>          | 512                 | Height of generated image                                                                                                                                                                                                                      |
| --iterations <int>          | -n<int>          | 1                   | How many images to generate from this prompt                                                                                                                                                                                                   |
| --steps <int>               | -s<int>          | 50                  | How many steps of refinement to apply                                                                                                                                                                                                          |
| --cfg_scale <float>         | -C<float>        | 7.5                 | How hard to try to match the prompt to the generated image; any number greater than 0.0 works, but the useful range is roughly 5.0 to 20.0                                                                                                     |
| --seed <int>                | -S<int>          | None                | Set the random seed for the next series of images. This can be used to recreate an image generated previously.                                                                                                                                 |
| --sampler <sampler>         | -A<sampler>      | k_lms               | Sampler to use. Use -h to get list of available samplers.                                                                                                                                                                                      |
| --grid                      | -g               | False               | Turn on grid mode to return a single image combining all the images generated by this prompt                                                                                                                                                   |
| --individual                | -i               | True                | Turn off grid mode (deprecated; leave off --grid instead)                                                                                                                                                                                      |
| --outdir <path>             | -o<path>         | outputs/img_samples | Temporarily change the location of these images                                                                                                                                                                                                |
| --seamless                  |                  | False               | Activate seamless tiling for interesting effects                                                                                                                                                                                               |
| --log_tokenization          | -t               | False               | Display a color-coded list of the parsed tokens derived from the prompt                                                                                                                                                                        |
| --skip_normalization        | -x               | False               | Weighted subprompts will not be normalized. See [Weighted Prompts](./OTHER.md#weighted-prompts)                                                                                                                                                |
| --upscale <int> <float>     | -U <int> <float> | -U 1 0.75           | Upscale image by magnification factor (2, 4), and set strength of upscaling (0.0-1.0). If strength not set, will default to 0.75.                                                                                                              |
| --gfpgan_strength <float>   | -G <float>       | -G0                 | Fix faces using the GFPGAN algorithm; argument indicates how hard the algorithm should try (0.0-1.0)                                                                                                                                           |
| --save_original             | -save_orig       | False               | When upscaling or fixing faces, this will cause the original image to be saved rather than replaced.                                                                                                                                           |
| --variation <float>         | -v<float>        | 0.0                 | Add a bit of noise (0.0=none, 1.0=high) to the image in order to generate a series of variations. Usually used in combination with -S<seed> and -n<int> to generate a series of riffs on a starting image. See [Variations](./VARIATIONS.md). |
| --with_variations <pattern> | -V<pattern>      | None                | Combine two or more variations. See [Variations](./VARIATIONS.md) for how to use this.                                                                                                                                                         |

Note that the width and height of the image must be multiples of 64. You can provide different
values, but they will be rounded down to the nearest multiple of 64.
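
For example (the sizes here are illustrative), asking for a 500x768 image actually produces a
448x768 one, because 500 rounds down to 7 x 64 = 448 while 768 is already a multiple of 64:

```bash
dream> "waterfall and rainbow" -W500 -H768
```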

### This is an example of img2img

```bash
dream> "waterfall and rainbow" -I./vacation-photo.png -W640 -H480 --fit
```

This will modify the indicated vacation photograph by making it more like the prompt. Results will
vary greatly depending on what is in the image. We also ask to --fit the image into a box no bigger
than 640x480. Otherwise the image size will be identical to the provided photo and you may run out
of memory if it is large.

In addition to the command-line options recognized by txt2img, img2img accepts additional options:

| Argument           | Shortcut  | Default | Description                                                                                                                                 |
| ------------------ | --------- | ------- | -------------------------------------------------------------------------------------------------------------------------------------------- |
| --init_img <path>  | -I<path>  | None    | Path to the initialization image                                                                                                            |
| --fit              | -F        | False   | Scale the image to fit into the specified -H and -W dimensions                                                                              |
| --strength <float> | -f<float> | 0.75    | How hard to try to match the prompt to the initial image. Ranges from 0.0-0.99, with higher values replacing the initial image completely. |
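
A sketch of how `--strength` shifts the balance (the file name is the same hypothetical photo used
above): a low value stays close to the original, while a high value lets the prompt dominate:

```bash
dream> "waterfall and rainbow" -I./vacation-photo.png --strength=0.3
dream> "waterfall and rainbow" -I./vacation-photo.png --strength=0.9
```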

### This is an example of inpainting

```bash
dream> "waterfall and rainbow" -I./vacation-photo.png -M./vacation-mask.png -W640 -H480 --fit
```

This will do the same thing as img2img, but image alterations will only occur within transparent
areas defined by the mask file specified by -M. You may also supply just a single initial image with
the areas to overpaint made transparent, but you must be careful not to destroy the pixels
underneath when you create the transparent areas. See [Inpainting](./INPAINTING.md) for details.

Inpainting accepts all the arguments used for txt2img and img2img, as well as the --mask (-M)
argument:

| Argument           | Shortcut | Default | Description                                                                                       |
| ------------------ | -------- | ------- | ------------------------------------------------------------------------------------------------- |
| --init_mask <path> | -M<path> | None    | Path to an image the same size as the initial_image, with areas for inpainting made transparent.  |

## Command-line editing and completion

If you are on a Macintosh or Linux machine, the command-line offers convenient history tracking,
editing, and command completion.

Since one so frequently refers back to a previously-generated seed or image, dream.py provides an
easy shortcut that avoids having to cut and paste these values.

Here's how it works. Say you generated 6 images of a man-eating snail:

```bash
dream> man-eating snail -n6
...
>> Usage stats:
>> 6 image(s) generated in 79.85s
>> Max VRAM used for this generation: 3.36G. Current VRAM utilization:2.21G
>> Max VRAM used since script start: 3.36G
Outputs:
[1] outputs/img-samples/000210.1414805682.png: "man-eating snail" -s50 -W512 -H512 -C7.5 -Ak_lms -S1414805682
[2] outputs/img-samples/000210.3312885013.png: "man-eating snail" -s50 -W512 -H512 -C7.5 -Ak_lms -S3312885013
[3] outputs/img-samples/000210.1398528919.png: "man-eating snail" -s50 -W512 -H512 -C7.5 -Ak_lms -S1398528919
[4] outputs/img-samples/000210.92626031.png: "man-eating snail" -s50 -W512 -H512 -C7.5 -Ak_lms -S92626031
[5] outputs/img-samples/000210.1733666373.png: "man-eating snail" -s50 -W512 -H512 -C7.5 -Ak_lms -S1733666373
[6] outputs/img-samples/000210.2453524229.png: "man-eating snail" -s50 -W512 -H512 -C7.5 -Ak_lms -S2453524229
```

The last image generated (with seed 2453524229) looks really good. So let's pick that one for
variation generation. Instead of cutting and pasting the argument -S2453524229, we can simply refer
to the most recent seed as -1, and write:

```bash
dream> man-eating snail -v0.1 -n10 -S-1
>> Reusing previous seed 2453524229
...etc...
```

You can use -2 to refer to the second-to-last seed, -3 to the third-to-last, etc. It works with both
individual images and grids. However, the numbering system only extends across the last group of
images generated and doesn't reach back to earlier commands.

The initial image (-I or --init_img) argument works in a similar way. To use the
second-to-most-recent snail image as the initial image for an img2img render, you could refer to it
as -I-2:

```bash
dream> glowing science-fiction snail -I -2 -n4
>> Reusing previous image outputs/img-samples/000213.2150458613.png
...etc...
```

- To scroll through previous commands and potentially edit/reuse them, use the up and down cursor
  keys.
- To edit the current command, use the left and right cursor keys to position the cursor, and then
  backspace, delete or insert characters.
- To move to the very beginning of the command, type CTRL-A (or command-A on the Mac).
- To move to the end of the command, type CTRL-E.
- To cut a section of the command, position the cursor where you want to start cutting and type
  CTRL-K.
- To paste a cut section back in, position the cursor where you want to paste, and type CTRL-Y.

Windows users can get similar, but more limited, functionality if they launch dream.py with the
"winpty" program:

```
> winpty python scripts\dream.py
```

On the Mac and Linux platforms, when you exit dream.py, the last 1000 lines of your command-line
history will be saved. When you restart dream.py, you can access the saved history using the
up-arrow key.

In addition, limited command-line completion is installed. In various contexts, you can start typing
your command and press tab. A list of potential completions will be presented to you. You can then
type a little more, hit tab again, and eventually autocomplete what you want.

When specifying file paths using the one-letter shortcuts, the CLI will attempt to complete
pathnames for you. This is most handy for the -I (init image) and -M (init mask) paths. To initiate
completion, start the path with a slash ("/") or "./". For example:

```
dream> zebra with a mustache -I./test-pictures<TAB>
-I./test-pictures/Lincoln-and-Parrot.png -I./test-pictures/zebra.jpg -I./test-pictures/madonna.png
-I./test-pictures/bad-sketch.png -I./test-pictures/man_with_eagle/
```

You can then type "z", hit tab again, and it will autofill to "zebra.jpg".

More text completion features (such as autocompleting seeds) are on their way.

@@ -1,30 +1,29 @@

---
title: Image-to-Image
---

## **IMG2IMG**

This script also provides an `img2img` feature that lets you seed your creations with an initial
drawing or photo. This is a really cool feature that tells stable diffusion to build the prompt on
top of the image you provide, preserving the original's basic shape and layout. To use it, provide
the `--init_img` option as shown here:

```bash
dream> "waterfall and rainbow" --init_img=./init-images/crude_drawing.png --strength=0.5 -s100 -n4
```

The `--init_img (-I)` option gives the path to the seed picture. `--strength (-f)` controls how much
the original will be modified, ranging from `0.0` (keep the original intact) to `1.0` (ignore the
original completely). The default is `0.75`, and ranges from `0.25-0.75` give interesting results.

You may also pass a `-v<variation_amount>` option to generate `-n<iterations>` count variants on
the original image. This is done by passing the first generated image back into img2img the
requested number of times. It generates interesting variants.
|
||||
Diffusion will only draw within the transparent regions, a process
|
||||
called "inpainting". However, for this to work correctly, the color
|
||||
information underneath the transparent needs to be preserved, not
|
||||
erased. See [Creating Transparent Images For
|
||||
Inpainting](./INPAINTING.md#creating-transparent-regions-for-inpainting)
|
||||
for details.
|
||||
If the initial image contains transparent regions, then Stable Diffusion will only draw within the
|
||||
transparent regions, a process called "inpainting". However, for this to work correctly, the color
|
||||
information underneath the transparent needs to be preserved, not erased.
|
||||
|
||||
More Details can be found here:
|
||||
[Creating Transparent Images For Inpainting](./INPAINTING.md#creating-transparent-regions-for-inpainting)
|
||||
|

@@ -1,27 +1,27 @@

---
title: Inpainting
---

## **Creating Transparent Regions for Inpainting**

Inpainting is really cool. To do it, you start with an initial image and use a photoeditor to make
one or more regions transparent (i.e. they have a "hole" in them). You then provide the path to this
image at the dream> command line using the `-I` switch. Stable Diffusion will only paint within the
transparent region.

There's a catch. In the current implementation, you have to prepare the initial image correctly so
that the underlying colors are preserved under the transparent area. Many image editing
applications will by default erase the color information under the transparent pixels and replace
them with white or black, which will lead to suboptimal inpainting. You also must take care to
export the PNG file in such a way that the color information is preserved.

If your photoeditor is erasing the underlying color information, `dream.py` will give you a big fat
warning. If you can't find a way to coax your photoeditor to retain color values under transparent
areas, then you can combine the `-I` and `-M` switches to provide both the original unedited image
and the masked (partially transparent) image:

```bash
dream> "man with cat on shoulder" -I./images/man.png -M./images/man-transparent.png
```

We are hoping to get rid of the need for this workaround in an upcoming release.

@@ -37,5 +37,5 @@ We are hoping to get rid of the need for this workaround in an upcoming release.

5. Open the Layers toolbar (^L) and select "Floating Selection"
6. Set opacity to 0%
7. Export as PNG
8. In the export dialogue, make sure the "Save colour values from transparent pixels" checkbox is
   selected.

@@ -1,25 +1,28 @@

---
title: Others
---

## **Google Colab**

Stable Diffusion AI Notebook: <a
href="https://colab.research.google.com/github/lstein/stable-diffusion/blob/main/notebooks/Stable_Diffusion_AI_Notebook.ipynb"
target="_parent">
<img
src="https://colab.research.google.com/assets/colab-badge.svg"
alt="Open In Colab"/></a> <br> Open and follow instructions to use an isolated environment running
Dream.<br>

Output Example: 

---

## **Seamless Tiling**

The seamless tiling mode causes each generated image to tile seamlessly with itself. To use it, add
the `--seamless` option when starting the script, which will make all generated images tile, or add
it to an individual `dream>` prompt as shown here:

```bash
dream> "pond garden with lotus by claude monet" --seamless -s100 -n4
```

@@ -27,12 +30,11 @@ dream> "pond garden with lotus by claude monet" --seamless -s100 -n4

## **Reading Prompts from a File**

You can automate `dream.py` by providing a text file with the prompts you want to run, one line per
prompt. The text file must be composed with a text editor (e.g. Notepad) and not a word processor.
Each line should look like what you would type at the dream> prompt:

```bash
a beautiful sunny day in the park, children playing -n4 -C10
stormy weather on a mountain top, goats grazing -s100
innovative packaging for a squid's dinner -S137038382
```

@@ -40,13 +42,13 @@ innovative packaging for a squid's dinner -S137038382

Then pass this file's name to `dream.py` when you invoke it:

```bash
(ldm) ~/stable-diffusion$ python3 scripts/dream.py --from_file "path/to/prompts.txt"
```

You may read a series of prompts from standard input by providing a filename of `-`:

```bash
(ldm) ~/stable-diffusion$ echo "a beautiful day" | python3 scripts/dream.py --from_file -
```

@@ -54,12 +56,16 @@ You may read a series of prompts from standard input by providing a filename of

## **Shortcuts: Reusing Seeds**

Since it is so common to reuse seeds while refining a prompt, there is now a shortcut as of version
1.11. Provide a `-S` (or `--seed`) switch of `-1` to use the seed of the most recent image
generated. If you produced multiple images with the `-n` switch, then you can go back further using
-2, -3, etc. up to the first image generated by the previous command. Sorry, but you can't go back
further than one command.

Here's an example of using this to do a quick refinement. It also illustrates using the new `-G`
switch to turn on upscaling and face enhancement (see previous section):

```bash
dream> a cute child playing hopscotch -G0.5
[...]
outputs/img-samples/000039.3498014304.png: "a cute child playing hopscotch" -s50 -W512 -H512 -C7.5 -mk_lms -S3498014304
```

@@ -76,18 +82,16 @@ outputs/img-samples/000040.3498014304.png: "a cute child playing hopscotch" -G1.

## **Weighted Prompts**

You may weight different sections of the prompt to tell the sampler to attach different levels of
priority to them, by adding `:(number)` to the end of the section you wish to up- or downweight. For
example, consider this prompt:

```bash
tabby cat:0.25 white duck:0.75 hybrid
```

This will tell the sampler to invest 25% of its effort on the tabby cat aspect of the image and 75%
on the white duck aspect (surprisingly, this example actually works). The prompt weights can use any
combination of integers and floating point numbers, and they do not need to add up to 1.
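
Because the weights are normalized by default, an equivalent emphasis can be written with numbers
that do not sum to 1 (an illustrative rewrite of the prompt above, in the same 1:3 ratio):

```bash
tabby cat:1 white duck:3 hybrid
```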

---

@@ -114,9 +118,11 @@ Note: currently the thresholding feature is only implemented for the k-diffusion

## **Simplified API**

For programmers who wish to incorporate stable-diffusion into other products, this repository
includes a simplified API for text to image generation, which lets you create images from a prompt
in just three lines of code:

```python
from ldm.generate import Generate
g = Generate()
outputs = g.txt2img("a unicorn in manhattan")
```

@@ -130,16 +136,14 @@ Please see ldm/generate.py for more information. A set of example scripts is com

## **Preload Models**

In situations where you have limited internet connectivity or are blocked behind a firewall, you can
use the preload script to preload the required files for Stable Diffusion to run.

The preload script `scripts/preload_models.py` needs to be run at least once while connected to the
internet. In subsequent runs, it will load up the cached versions of the required files from the
`.cache` directory of the system.

```bash
(ldm) ~/stable-diffusion$ python3 ./scripts/preload_models.py
preloading bert tokenizer...
Downloading: 100%|██████████████████████████████████| 28.0/28.0 [00:00<00:00, 49.3kB/s]
```

@@ -1,70 +1,91 @@

---
title: TEXTUAL_INVERSION
---

## **Personalizing Text-to-Image Generation**

You may personalize the generated images to provide your own styles or objects
by training a new LDM checkpoint and introducing a new vocabulary to the fixed
model as a (.pt) embeddings file. Alternatively, you may use or train
HuggingFace Concepts embeddings files (.bin) from
<https://huggingface.co/sd-concepts-library> and its associated notebooks.

## **Training**

To train, prepare a folder that contains images sized at 512x512 and execute the
following:

### WINDOWS

As the default backend is not available on Windows, if you're using that
platform, set the environment variable `PL_TORCH_DISTRIBUTED_BACKEND` to `gloo`

```bash
python3 ./main.py --base ./configs/stable-diffusion/v1-finetune.yaml \
    --actual_resume ./models/ldm/stable-diffusion-v1/model.ckpt \
    -t \
    -n my_cat \
    --gpus 0 \
    --data_root D:/textual-inversion/my_cat \
    --init_word 'cat'
```

During the training process, files will be created in
`/logs/[project][time][project]/`, where you can see the process.

Conditioning contains the training prompt inputs; reconstruction, the input
images for the training epoch samples; and samples, scaled for a sample of the
prompt and one with the init word provided.

On an RTX 3090, the process for SD will take ~1h @1.6 iterations/sec.

!!! Info _Note_

    According to the associated paper, the optimal number of images is 3-5.
    Your model may not converge if you use more images than that.

Training will run indefinitely, but you may wish to stop it (with ctrl-c) before
the heat death of the universe, when you find a low loss epoch or around ~5000
iterations. Note that you can set a fixed limit on the number of training steps
by decreasing the "max_steps" option in
configs/stable_diffusion/v1-finetune.yaml (currently set to 4000000).

## **Run the Model**

Once the model is trained, specify the trained .pt or .bin file when starting
dream using:

```bash
python3 ./scripts/dream.py \
    --embedding_path /path/to/embedding.pt \
    --full_precision
```

Then, to utilize your subject at the dream prompt:

```bash
dream> "a photo of *"
```

This also works with image2image:

```bash
dream> "waterfall and rainbow in the style of *" --init_img=./init-images/crude_drawing.png --strength=0.5 -s100 -n4
```

For .pt files it's also possible to train multiple tokens (modify the
placeholder string in `configs/stable-diffusion/v1-finetune.yaml`) and combine
LDM checkpoints using:

```bash
python3 ./scripts/merge_embeddings.py \
    --manager_ckpts /path/to/first/embedding.pt \
    [</path/to/second/embedding.pt>,[...]] \
    --output_path /path/to/output/embedding.pt
```

Credit goes to rinongal. Please see
[the repository](https://github.com/rinongal/textual_inversion) and associated
paper for details and limitations.
@ -1,105 +1,99 @@
|
||||
# **GFPGAN and Real-ESRGAN Support**
|
||||
---
|
||||
title: Upscale
|
||||
---
|
||||
|
||||
The script also provides the ability to do face restoration and
|
||||
upscaling with the help of GFPGAN and Real-ESRGAN respectively.
|
||||
## **GFPGAN and Real-ESRGAN Support**
|
||||
|
||||
As of version 1.14, environment.yaml will install the Real-ESRGAN package into the
|
||||
standard install location for python packages, and will put GFPGAN into a subdirectory of "src"
|
||||
in the stable-diffusion directory.
|
||||
(The reason for this is that the standard GFPGAN distribution has a minor bug that adversely affects image
|
||||
color.) Upscaling with Real-ESRGAN should "just work" without further intervention. Simply pass the --upscale (-U)
|
||||
option on the dream> command line, or indicate the desired scale on the popup in the Web GUI.
|
||||
The script also provides the ability to do face restoration and upscaling with the help of GFPGAN
|
||||
and Real-ESRGAN respectively.

As of version 1.14, environment.yaml will install the Real-ESRGAN package into the standard install
location for python packages, and will put GFPGAN into a subdirectory of "src" in the
stable-diffusion directory. (The reason for this is that the standard GFPGAN distribution has a
minor bug that adversely affects image color.) Upscaling with Real-ESRGAN should "just work" without
further intervention. Simply pass the --upscale (-U) option on the dream> command line, or indicate
the desired scale on the popup in the Web GUI.

For **GFPGAN** to work, there is one additional step needed. You will need to download and copy the
GFPGAN [models file](https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth)
into **src/gfpgan/experiments/pretrained_models**. On Mac and Linux systems, here's how you'd do it
using **wget**:

```bash
# download the GFPGAN weights into the directory dream.py expects (-P sets the destination)
wget -P src/gfpgan/experiments/pretrained_models/ \
  https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth
```

Make sure that you're in the stable-diffusion directory when you do this.

Alternatively, if you have GFPGAN installed elsewhere, or if you are using an earlier version of
this package which asked you to install GFPGAN in a sibling directory, you may use the
`--gfpgan_dir` argument with `dream.py` to set a custom path to your GFPGAN directory. _There are
other GFPGAN related boot arguments if you wish to customize further._

**Note: Internet connection needed:** Users whose GPU machines are isolated from the Internet (e.g.
on a University cluster) should be aware that the first time you run dream.py with GFPGAN and
Real-ESRGAN turned on, it will try to download model files from the Internet. To rectify this, you
may run `python3 scripts/preload_models.py` after you have installed GFPGAN and all its
dependencies.
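
For example, once GFPGAN and its dependencies are installed, you might run this one time while the
machine still has Internet access:

```bash
# one-time download of the GFPGAN/Real-ESRGAN model files, so later runs can work offline
(ldm) ~/stable-diffusion$ python3 scripts/preload_models.py
```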

## **Usage**

You will now have access to two new prompt arguments.

### **Upscaling**

`-U : <upscaling_factor> <upscaling_strength>`

The upscaling prompt argument takes two values. The first value is a scaling factor and should be
set to either `2` or `4` only. This will either scale the image 2x or 4x respectively using
different models.

You can set the scaling strength between `0` and `1.0` to control the intensity of the scaling.
This is handy because AI upscalers generally tend to smooth out texture details. If you wish to
retain some of those for natural-looking results, we recommend using values between `0.5` and `0.8`.

If you do not explicitly specify an `upscaling_strength`, it will default to 0.75.
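
For example, here is a sketch of a 2x upscale that retains some of the original texture (the prompt
is purely illustrative):

```bash
dream> a detailed pencil sketch of a lighthouse -U 2 0.6
```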

### **Face Restoration**

`-G : <gfpgan_strength>`

This prompt argument controls the strength of the face restoration that is being applied. Similar
to upscaling, values between `0.5` and `0.8` are recommended.
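
For example (again, an illustrative prompt):

```bash
dream> portrait of an elderly fisherman -G 0.6
```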

You can use either one or both without any conflicts. In cases where you use both, the image will
be first upscaled and then the face restoration process will be executed to ensure you get the
highest quality facial features.
`--save_orig`

When you use either `-U` or `-G`, the final result you get is upscaled or face modified. If you
want to save the original Stable Diffusion generation, you can use the `--save_orig` prompt
argument to save the original unaffected version too.
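
For instance, this sketch upscales at the default strength of 0.75 while also keeping the untouched
original (the prompt is illustrative):

```bash
dream> a man wearing a pineapple hat -U 2 --save_orig
```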
### **Example Usage**

```bash
dream> superman dancing with a panda bear -U 2 0.6 -G 0.4
```

This also works with img2img:

```bash
dream> a man wearing a pineapple hat -I path/to/your/file.png -U 2 0.5 -G 0.6
```
### **Note**

GFPGAN and Real-ESRGAN are both memory intensive. In order to avoid crashes and memory overloads
during the Stable Diffusion process, these effects are applied after Stable Diffusion has completed
its work.

In single image generations, you will see the output right away but when you are using multiple
iterations, the images will first be generated and then upscaled and face restored after that
process is complete. While the image generation is taking place, you will still be able to preview
the base images.

If you wish to stop during the image generation but want to upscale or face restore a particular
generated image, pass it again with the same prompt and generated seed along with the `-U` and `-G`
prompt arguments to perform those actions.
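
As a sketch, if the image you want to post-process was reported with seed 3357757885, the re-run
could look like this (prompt and seed shown for illustration):

```bash
dream> superman dancing with a panda bear -S3357757885 -U 2 0.6 -G 0.4
```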
@ -1,26 +1,33 @@

---
title: Variations
---

Release 1.13 of SD-Dream adds support for image variations.
You are able to do the following:

1. Generate a series of systematic variations of an image, given a prompt. The
   amount of variation from one image to the next can be controlled.

2. Given two or more variations that you like, you can combine them in a
   weighted fashion.

---

This cheat sheet provides a quick guide for how this works in practice, using
variations to create the desired image of Xena, Warrior Princess.

---
## Step 1 -- Find a base image that you like

The prompt we will use throughout is
`lucy lawless as xena, warrior princess, character portrait, high resolution.`

This will be indicated as `prompt` in the examples below.

First we let SD create a series of images in the usual way, in this case
requesting six iterations:

```bash
dream> lucy lawless as xena, warrior princess, character portrait, high resolution -n6
```
@ -36,17 +43,18 @@ Outputs:
The one with seed 3357757885 looks nice:

![](../assets/variation_walkthru/000001.3357757885.png)

---
## Step 2 - Generating Variations

Let's try to generate some variations. Using the same seed, we pass the argument
`-v0.2` (or `--variant_amount`), which generates a series of variations each
differing by a variation amount of 0.2. This number ranges from `0` to `1.0`,
with higher numbers being larger amounts of variation.

```bash
dream> "prompt" -n6 -S3357757885 -v0.2
...
Outputs:
```
@ -60,33 +68,41 @@ Outputs:
### **Variation Sub Seeding**

Note that the output for each image has a `-V` option giving the "variant
subseed" for that image, consisting of a seed followed by the variation amount
used to generate it.

This gives us a series of closely-related variations, including the two shown
here.

![](../assets/variation_walkthru/000002.3647897225.png)
![](../assets/variation_walkthru/000002.1614299449.png)

I like the expression on Xena's face in the first one (subseed 3647897225), and
the armor on her shoulder in the second one (subseed 1614299449). Can we combine
them to get the best of both worlds?

We combine the two variations using `-V` (`--with_variations`). Again, we must
provide the seed for the originally-chosen image in order for this to work.

```bash
dream> "prompt" -S3357757885 -V3647897225,0.1,1614299449,0.1
Outputs:
./outputs/Xena/000003.1614299449.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1 -S3357757885
```

Here we are providing equal weights (0.1 and 0.1) for both the subseeds. The
resulting image is close, but not exactly what I wanted:

![](../assets/variation_walkthru/000003.1614299449.png)

We could either try combining the images with different weights, or we can
generate more variations around the almost-but-not-quite image. We do the
latter, using both the `-V` (combining) and `-v` (variation strength) options.
Note that we use `-n6` to generate 6 variations:

```bash
dream> "prompt" -S3357757885 -V3647897225,0.1,1614299449,0.1 -v0.05 -n6
Outputs:
./outputs/Xena/000004.3279757577.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,3279757577:0.05 -S3357757885
./outputs/Xena/000004.2853129515.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,2853129515:0.05 -S3357757885
@ -96,9 +112,11 @@ Outputs:
./outputs/Xena/000004.2183375608.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,2183375608:0.05 -S3357757885
```

This produces six images, all slight variations on the combination of the chosen
two images. Here's the one I like best:

![](../assets/variation_walkthru/000004.3747154981.png)

As you can see, this is a very powerful tool, which when combined with subprompt
weighting, gives you great control over the content and quality of your
generated images.

@ -1,13 +1,19 @@

---
title: Barebones Web Server
---

As of version 1.10, this distribution comes with a bare bones web server (see
screenshot). To use it, run the `dream.py` script by adding the `--web`
option:

```bash
(ldm) ~/stable-diffusion$ python3 scripts/dream.py --web
```

You can then connect to the server by pointing your web browser at
http://localhost:9090, or to the network name or IP address of the server.

Kudos to [Tesseract Cat](https://github.com/TesseractCat) for contributing this
code, and to [dagf2101](https://github.com/dagf2101) for refining it.

