Docs Update (#466)
Authored-by: @blessedcoolant
Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
docs/features/CLI.md (new file)
@@ -0,0 +1,50 @@
# **Interactive Command-Line Interface**

The `dream.py` script, located in `scripts/dream.py`, provides an interactive interface to image generation similar to the "dream mothership" bot that Stable AI provided on its Discord server.

Unlike the `txt2img.py` and `img2img.py` scripts provided in the original CompVis/stable-diffusion source code repository, the time-consuming initialization of the AI model happens only once. After that, image generation from the command-line interface is very fast.

The script uses the readline library to allow for in-line editing, command history (up and down arrows), autocompletion, and more. To help keep track of which prompts generated which images, the script writes a log file of image names and prompts to the selected output directory.

In addition, as of version 1.02, it also writes the prompt into the PNG file's metadata, where it can be retrieved using `scripts/images2prompt.py`.

The script is confirmed to work on Linux, Windows and Mac systems.

_Note:_ This script runs from the command line or can be used as a Web application. The Web GUI is currently rudimentary, but a much better replacement is on its way.

```
(ldm) ~/stable-diffusion$ python3 ./scripts/dream.py
* Initializing, be patient...
Loading model from models/ldm/text2img-large/model.ckpt
(...more initialization messages...)

* Initialization done! Awaiting your command...
dream> ashley judd riding a camel -n2 -s150
Outputs:
outputs/img-samples/00009.png: "ashley judd riding a camel" -n2 -s150 -S 416354203
outputs/img-samples/00010.png: "ashley judd riding a camel" -n2 -s150 -S 1362479620

dream> "there's a fly in my soup" -n6 -g
outputs/img-samples/00011.png: "there's a fly in my soup" -n6 -g -S 2685670268
seeds for individual rows: [2685670268, 1216708065, 2335773498, 822223658, 714542046, 3395302430]
dream> q

# this shows how to retrieve the prompt stored in the saved image's metadata
(ldm) ~/stable-diffusion$ python ./scripts/images2prompt.py outputs/img_samples/*.png
00009.png: "ashley judd riding a camel" -s150 -S 416354203
00010.png: "ashley judd riding a camel" -s150 -S 1362479620
00011.png: "there's a fly in my soup" -n6 -g -S 2685670268
```

<p align='center'>
<img src="../assets/dream-py-demo.png"/>
</p>

The `dream>` prompt's arguments are pretty much identical to those used in the Discord bot, except you don't need to type "!dream" (it doesn't hurt if you do). A significant change is that creation of individual images is now the default unless `--grid` (`-g`) is given.

For backward compatibility, the `-i` switch is recognized. For command-line help, type `-h` (or `--help`) at the `dream>` prompt.

The script itself also recognizes a series of command-line switches that will change important global defaults, such as the directory for image outputs and the location of the model weight files.
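For example, a hypothetical invocation that overrides the default output directory at startup might look like this (the `--outdir` switch name is shown for illustration only; consult the script's help text for the authoritative list of switches):

```
(ldm) ~/stable-diffusion$ python3 scripts/dream.py --outdir ./outputs/my-project
```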
docs/features/IMG2IMG.md (new file)
@@ -0,0 +1,15 @@
# **Image-to-Image**

This script also provides an img2img feature that lets you seed your creations with an initial drawing or photo. This is a really cool feature that tells Stable Diffusion to build the prompt on top of the image you provide, preserving the original's basic shape and layout. To use it, provide the `--init_img` option as shown here:

```
dream> "waterfall and rainbow" --init_img=./init-images/crude_drawing.png --strength=0.5 -s100 -n4
```

The `--init_img (-I)` option gives the path to the seed picture. `--strength (-f)` controls how much the original will be modified, ranging from `0.0` (keep the original intact) to `1.0` (ignore the original completely). The default is `0.75`, and values between `0.25` and `0.75` give interesting results.

You may also pass a `-v<count>` option to generate `<count>` variants of the original image. This is done by passing the first generated image back into img2img the requested number of times, and it produces interesting variants.
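For example, a hypothetical invocation (using the switches described above) that asks for three variants of the waterfall sketch might look like this:

```
dream> "waterfall and rainbow" -I./init-images/crude_drawing.png -f0.6 -s100 -v3
```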
If the initial image contains transparent regions, then Stable Diffusion will only draw within the transparent regions, a process called "inpainting". However, for this to work correctly, the color information underneath the transparent regions needs to be preserved, not erased. See [Creating Transparent Images For Inpainting](./INPAINTING.md#creating-transparent-regions-for-inpainting) for details.
docs/features/INPAINTING.md (new file)
@@ -0,0 +1,27 @@
# **Creating Transparent Regions for Inpainting**

Inpainting is really cool. To do it, you start with an initial image and use a photo editor to make one or more regions transparent (i.e. they have a "hole" in them). You then provide the path to this image at the `dream>` command line using the `-I` switch. Stable Diffusion will only paint within the transparent region.

There's a catch. In the current implementation, you have to prepare the initial image correctly so that the underlying colors are preserved under the transparent area. Many image editing applications will by default erase the color information under the transparent pixels and replace it with white or black, which will lead to suboptimal inpainting. You also must take care to export the PNG file in such a way that the color information is preserved.

If your photo editor is erasing the underlying color information, `dream.py` will give you a big fat warning. If you can't find a way to coax your photo editor to retain color values under transparent areas, then you can combine the `-I` and `-M` switches to provide both the original unedited image and the masked (partially transparent) image:

```
dream> man with cat on shoulder -I./images/man.png -M./images/man-transparent.png
```

We are hoping to get rid of the need for this workaround in an upcoming release.
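If you prefer to prepare the transparent region programmatically rather than in a photo editor, a minimal sketch using Pillow (assuming the region to erase is a simple rectangle, and using hypothetical file names) illustrates the key point: set the alpha channel to zero while leaving the RGB values untouched.

```python
from PIL import Image

# Hypothetical input/output filenames and region; adjust to your own image.
img = Image.open("photo.png").convert("RGBA")   # ensure an alpha channel exists
pixels = img.load()

left, upper, right, lower = 100, 100, 300, 300  # rectangle to make transparent
for x in range(left, right):
    for y in range(upper, lower):
        r, g, b, _ = pixels[x, y]
        pixels[x, y] = (r, g, b, 0)             # zero the alpha, keep the color values

img.save("photo-transparent.png")               # PNG preserves the underlying RGB
```

The exported PNG can then be passed to the `-I` switch as described above.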
## Recipe for GIMP

[GIMP](https://www.gimp.org/) is a popular Linux photo editing tool.

1. Open the image in GIMP.
2. Layer -> Transparency -> Add Alpha Channel
3. Use the lasso tool to select the region to mask.
4. Choose Select -> Float to create a floating selection.
5. Open the Layers toolbar (^L) and select "Floating Selection".
6. Set the opacity to 0%.
7. Export as PNG.
8. In the export dialogue, make sure the "Save colour values from transparent pixels" checkbox is selected.
docs/features/OTHER.md (new file)
@@ -0,0 +1,116 @@
## **Google Colab**

Stable Diffusion AI Notebook: <a href="https://colab.research.google.com/github/lstein/stable-diffusion/blob/main/Stable_Diffusion_AI_Notebook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> <br>
Open the notebook and follow its instructions to use an isolated environment running Dream.<br>

Output Example:


---

## **Seamless Tiling**

The seamless tiling mode causes generated images to tile seamlessly with themselves. To use it, add the `--seamless` option when starting the script, which will cause all generated images to tile, or add it to an individual `dream>` prompt as shown here:

```
dream> "pond garden with lotus by claude monet" --seamless -s100 -n4
```
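For example, to make every image in a session tile without repeating the flag at each prompt, the option can be given at startup (illustrative invocation):

```
(ldm) ~/stable-diffusion$ python3 scripts/dream.py --seamless
```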
---

## **Reading Prompts from a File**

You can automate `dream.py` by providing a text file containing the prompts you want to run, one prompt per line. The file must be created with a plain text editor (e.g. Notepad), not a word processor. Each line should look like what you would type at the `dream>` prompt:

```
a beautiful sunny day in the park, children playing -n4 -C10
stormy weather on a mountain top, goats grazing -s100
innovative packaging for a squid's dinner -S137038382
```

Then pass this file's name to `dream.py` when you invoke it:

```
(ldm) ~/stable-diffusion$ python3 scripts/dream.py --from_file "path/to/prompts.txt"
```

You may also read a series of prompts from standard input by providing a filename of `-`:

```
(ldm) ~/stable-diffusion$ echo "a beautiful day" | python3 scripts/dream.py --from_file -
```
---

## **Shortcuts: Reusing Seeds**

Since it is so common to reuse seeds while refining a prompt, there is now a shortcut as of version 1.11. Provide a `-S` (or `--seed`) switch of `-1` to use the seed of the most recent image generated. If you produced multiple images with the `-n` switch, then you can go back further using `-2`, `-3`, etc., up to the first image generated by the previous command. Sorry, but you can't go back further than one command.

Here's an example of using this to do a quick refinement. It also illustrates using the new `-G` switch to turn on upscaling and face enhancement (see previous section):

```
dream> a cute child playing hopscotch -G0.5
[...]
outputs/img-samples/000039.3498014304.png: "a cute child playing hopscotch" -s50 -W512 -H512 -C7.5 -mk_lms -S3498014304

# I wonder what it will look like if I bump up the steps and set facial enhancement to full strength?
dream> a cute child playing hopscotch -G1.0 -s100 -S -1
reusing previous seed 3498014304
[...]
outputs/img-samples/000040.3498014304.png: "a cute child playing hopscotch" -G1.0 -s100 -W512 -H512 -C7.5 -mk_lms -S3498014304
```
---

## **Weighted Prompts**

You may weight different sections of the prompt to tell the sampler to attach different levels of priority to them, by adding `:(number)` to the end of the section you wish to up- or down-weight. For example, consider this prompt:

```
tabby cat:0.25 white duck:0.75 hybrid
```

This will tell the sampler to invest 25% of its effort on the tabby cat aspect of the image and 75% on the white duck aspect (surprisingly, this example actually works). The prompt weights can use any combination of integers and floating point numbers, and they do not need to add up to 1.
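As a further illustration that the weights need not sum to 1, a made-up prompt such as the following is equally valid; the numbers are presumably interpreted relative to one another:

```
oil painting of a lighthouse:2 pencil sketch:0.5 hybrid
```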
---

## **Simplified API**

For programmers who wish to incorporate stable-diffusion into other products, this repository includes a simplified API for text-to-image generation, which lets you create images from a prompt in just three lines of code:

```
from ldm.simplet2i import T2I
model = T2I()
outputs = model.txt2img("a unicorn in manhattan")
```

`outputs` is a list of lists in the format `[[filename1, seed1], [filename2, seed2], ...]`.
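Given that format, a short sketch of how a caller might consume the results (building on the snippet above, which is assumed to have run successfully):

```python
# outputs comes from model.txt2img() above; each entry pairs an image path with its seed.
for filename, seed in outputs:
    print(f"{filename} was generated with seed {seed}")
```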
Please see `ldm/simplet2i.py` for more information. A set of example scripts is coming real soon now.

---
## **Preload Models**

In situations where you have limited internet connectivity or are blocked behind a firewall, you can use the preload script to preload the files required for Stable Diffusion to run.

The preload script `scripts/preload_models.py` needs to be run at least once while connected to the internet. On subsequent runs, it will load the cached versions of the required files from the system's `.cache` directory.

```
(ldm) ~/stable-diffusion$ python3 ./scripts/preload_models.py
preloading bert tokenizer...
Downloading: 100%|██████████████████████████████████| 28.0/28.0 [00:00<00:00, 49.3kB/s]
Downloading: 100%|██████████████████████████████████| 226k/226k [00:00<00:00, 2.79MB/s]
Downloading: 100%|██████████████████████████████████| 455k/455k [00:00<00:00, 4.36MB/s]
Downloading: 100%|██████████████████████████████████| 570/570 [00:00<00:00, 477kB/s]
...success
preloading kornia requirements...
Downloading: "https://github.com/DagnyT/hardnet/raw/master/pretrained/train_liberty_with_aug/checkpoint_liberty_with_aug.pth" to /u/lstein/.cache/torch/hub/checkpoints/checkpoint_liberty_with_aug.pth
100%|███████████████████████████████████████████████| 5.10M/5.10M [00:00<00:00, 101MB/s]
...success
```
docs/features/TEXTUAL_INVERSION.md (new file)
@@ -0,0 +1,57 @@
# **Personalizing Text-to-Image Generation**

You may personalize the generated images to provide your own styles or objects by training a new LDM checkpoint and introducing a new vocabulary to the fixed model.

To train, prepare a folder that contains 512x512 images and execute the following:

**WINDOWS**: As the default backend is not available on Windows, if you're using that platform, set the environment variable `PL_TORCH_DISTRIBUTED_BACKEND=gloo`.
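For example, on a Windows command prompt you would set the variable before launching the training command shown below (an illustrative session; in PowerShell use `$env:PL_TORCH_DISTRIBUTED_BACKEND = "gloo"` instead):

```
C:\stable-diffusion> set PL_TORCH_DISTRIBUTED_BACKEND=gloo
```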
```
(ldm) ~/stable-diffusion$ python3 ./main.py --base ./configs/stable-diffusion/v1-finetune.yaml \
    -t \
    --actual_resume ./models/ldm/stable-diffusion-v1/model.ckpt \
    -n my_cat \
    --gpus 0, \
    --data_root D:/textual-inversion/my_cat \
    --init_word 'cat'
```
During the training process, files will be created in `/logs/[project][time][project]/`, where you can follow the progress. `conditioning` contains the training prompt inputs, `reconstruction` contains the input images for the training epoch, and `samples` contains scaled samples of the prompt as well as one with the init word provided.
On an RTX 3090, the process for SD will take ~1h at ~1.6 iterations/sec.

_Note_: According to the associated paper, the optimal number of images is 3-5. Your model may not converge if you use more images than that.

Training will run indefinitely, but you may wish to stop it before the heat death of the universe, when you find a low-loss epoch or at around ~5000 iterations.

Once the model is trained, specify the trained .pt file when starting dream using:

```
(ldm) ~/stable-diffusion$ python3 ./scripts/dream.py --embedding_path /path/to/embedding.pt --full_precision
```

Then, to utilize your subject at the dream prompt:

```
dream> "a photo of *"
```

This also works with image2image:

```
dream> "waterfall and rainbow in the style of *" --init_img=./init-images/crude_drawing.png --strength=0.5 -s100 -n4
```

It's also possible to train multiple tokens (modify the placeholder string in `configs/stable-diffusion/v1-finetune.yaml`) and to combine LDM checkpoints using:

```
(ldm) ~/stable-diffusion$ python3 ./scripts/merge_embeddings.py \
    --manager_ckpts /path/to/first/embedding.pt /path/to/second/embedding.pt [...] \
    --output_path /path/to/output/embedding.pt
```

Credit goes to rinongal and the repository located at https://github.com/rinongal/textual_inversion. Please see the repository and associated paper for details and limitations.
docs/features/UPSCALE.md (new file)
@@ -0,0 +1,98 @@
# **GFPGAN and Real-ESRGAN Support**

The script also provides the ability to do face restoration and upscaling with the help of GFPGAN and Real-ESRGAN respectively.

To use this capability, clone the **[GFPGAN repository](https://github.com/TencentARC/GFPGAN)** and follow their installation instructions. By default, we expect GFPGAN to be installed in a 'GFPGAN' sibling directory. Be sure that the `"ldm"` conda environment is active as you install GFPGAN.

You can use the `--gfpgan_dir` argument with `dream.py` to set a custom path to your GFPGAN directory. _There are other GFPGAN-related boot arguments if you wish to customize further._
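For instance, a typical layout might look like this (the clone location shown here is only an illustration; any path passed to `--gfpgan_dir` will do):

```
(ldm) ~$ git clone https://github.com/TencentARC/GFPGAN.git
# ...follow the GFPGAN installation instructions inside that directory...
(ldm) ~/stable-diffusion$ python3 scripts/dream.py --gfpgan_dir ../GFPGAN
```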
You can install **Real-ESRGAN** by typing the following command:

```
pip install realesrgan
```

**Note: Internet connection needed:**
Users whose GPU machines are isolated from the Internet (e.g. on a university cluster) should be aware that the first time you run dream.py with GFPGAN and Real-ESRGAN turned on, it will try to download model files from the Internet. To rectify this, you may run `python3 scripts/preload_models.py` after you have installed GFPGAN and all its dependencies.
**Usage**

You will now have access to two new prompt arguments.

**Upscaling**

`-U : <upscaling_factor> <upscaling_strength>`

The upscaling prompt argument takes two values. The first value is a scaling factor and should be set to either `2` or `4` only. This will scale the image 2x or 4x respectively, using different models.

You can set the scaling strength between `0` and `1.0` to control the intensity of the scaling. This is handy because AI upscalers generally tend to smooth out texture details. If you wish to retain some of those for natural-looking results, we recommend using values between `0.5` and `0.8`.

If you do not explicitly specify an upscaling_strength, it will default to `0.75`.
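For example, a hypothetical prompt that upscales 4x, relying on the default strength described above, might be:

```
dream> a distant view of a lighthouse at dawn -U 4
```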
**Face Restoration**

`-G : <gfpgan_strength>`

This prompt argument controls the strength of the face restoration that is being applied. Similar to upscaling, values between `0.5` and `0.8` are recommended.

You can use either one or both without any conflicts. In cases where you use both, the image will first be upscaled and then the face restoration process will be executed, to ensure you get the highest quality facial features.

`--save_orig`

When you use either `-U` or `-G`, the final result you get is upscaled or face-modified. If you want to save the original Stable Diffusion generation, you can use the `--save_orig` prompt argument to save the original unaffected version too.
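For instance, to keep both the upscaled result and the unmodified original (an illustrative prompt):

```
dream> an astronaut riding a horse -U 2 0.6 --save_orig
```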
**Example Usage**

```
dream> superman dancing with a panda bear -U 2 0.6 -G 0.4
```

This also works with img2img:

```
dream> a man wearing a pineapple hat -I path/to/your/file.png -U 2 0.5 -G 0.6
```

**Note**

GFPGAN and Real-ESRGAN are both memory intensive. In order to avoid crashes and memory overloads during the Stable Diffusion process, these effects are applied after Stable Diffusion has completed its work.

In single image generations, you will see the output right away, but when you are using multiple iterations, the images will first be generated and then upscaled and face-restored after that process is complete. While the image generation is taking place, you will still be able to preview the base images.

If you stop during the image generation but want to upscale or face-restore a particular generated image, pass it again with the same prompt and generated seed along with the `-U` and `-G` prompt arguments to perform those actions.
docs/features/VARIATIONS.md (new file)
@@ -0,0 +1,104 @@
# **Variations**

Release 1.13 of SD-Dream adds support for image variations.

You are able to do the following:

1. Generate a series of systematic variations of an image, given a prompt. The amount of variation from one image to the next can be controlled.

2. Given two or more variations that you like, you can combine them in a weighted fashion.

---

This cheat sheet provides a quick guide for how this works in practice, using variations to create the desired image of Xena, Warrior Princess.

---
## Step 1 -- Find a base image that you like

The prompt we will use throughout is `lucy lawless as xena, warrior princess, character portrait, high resolution`.

This will be indicated as `prompt` in the examples below.

First we let SD create a series of images in the usual way, in this case requesting six iterations:

```
dream> lucy lawless as xena, warrior princess, character portrait, high resolution -n6
...
Outputs:
./outputs/Xena/000001.1579445059.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -S1579445059
./outputs/Xena/000001.1880768722.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -S1880768722
./outputs/Xena/000001.332057179.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -S332057179
./outputs/Xena/000001.2224800325.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -S2224800325
./outputs/Xena/000001.465250761.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -S465250761
./outputs/Xena/000001.3357757885.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -S3357757885
```

The one with seed 3357757885 looks nice:

<img src="assets/variation_walkthru/000001.3357757885.png"/>

---
## Step 2 - Generating Variations

Let's try to generate some variations. Using the same seed, we pass the argument `-v0.2` (or `--variant_amount`), which generates a series of variations, each differing by a variation amount of 0.2. This number ranges from `0` to `1.0`, with higher numbers producing larger amounts of variation.

```
dream> "prompt" -n6 -S3357757885 -v0.2
...
Outputs:
./outputs/Xena/000002.784039624.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 784039624:0.2 -S3357757885
./outputs/Xena/000002.3647897225.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.2 -S3357757885
./outputs/Xena/000002.917731034.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 917731034:0.2 -S3357757885
./outputs/Xena/000002.4116285959.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 4116285959:0.2 -S3357757885
./outputs/Xena/000002.1614299449.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 1614299449:0.2 -S3357757885
./outputs/Xena/000002.1335553075.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 1335553075:0.2 -S3357757885
```
### **Variation Sub Seeding**

Note that the output for each image has a `-V` option giving the "variant subseed" for that image, consisting of a seed followed by the variation amount used to generate it.

This gives us a series of closely-related variations, including the two shown here.

<img src="assets/variation_walkthru/000002.3647897225.png">
<img src="assets/variation_walkthru/000002.1614299449.png">

I like the expression on Xena's face in the first one (subseed 3647897225), and the armor on her shoulder in the second one (subseed 1614299449). Can we combine them to get the best of both worlds?

We combine the two variations using `-V` (`--with_variations`). Again, we must provide the seed for the originally-chosen image in order for this to work.

```
dream> "prompt" -S3357757885 -V3647897225,0.1;1614299449,0.1
Outputs:
./outputs/Xena/000003.1614299449.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1 -S3357757885
```

Here we are providing equal weights (0.1 and 0.1) for both subseeds. The resulting image is close, but not exactly what I wanted:

<img src="assets/variation_walkthru/000003.1614299449.png">

We could either try combining the images with different weights, or we can generate more variations around the almost-but-not-quite image. We do the latter, using both the `-V` (combining) and `-v` (variation strength) options. Note that we use `-n6` to generate six variations:

```
dream> "prompt" -S3357757885 -V3647897225,0.1;1614299449,0.1 -v0.05 -n6
Outputs:
./outputs/Xena/000004.3279757577.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,3279757577:0.05 -S3357757885
./outputs/Xena/000004.2853129515.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,2853129515:0.05 -S3357757885
./outputs/Xena/000004.3747154981.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,3747154981:0.05 -S3357757885
./outputs/Xena/000004.2664260391.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,2664260391:0.05 -S3357757885
./outputs/Xena/000004.1642517170.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,1642517170:0.05 -S3357757885
./outputs/Xena/000004.2183375608.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,2183375608:0.05 -S3357757885
```

This produces six images, all slight variations on the combination of the two chosen images. Here's the one I like best:

<img src="assets/variation_walkthru/000004.3747154981.png">

As you can see, this is a very powerful tool, which, when combined with subprompt weighting, gives you great control over the content and quality of your generated images.
docs/features/WEB.md (new file)
@@ -0,0 +1,13 @@
# Barebones Web Server

As of version 1.10, this distribution comes with a barebones web server (see screenshot). To use it, run the `dream.py` script with the `--web` option:

```
(ldm) ~/stable-diffusion$ python3 scripts/dream.py --web
```

You can then connect to the server by pointing your web browser at http://localhost:9090, or at the network name or IP address of the server.

Kudos to [Tesseract Cat](https://github.com/TesseractCat) for contributing this code, and to [dagf2101](https://github.com/dagf2101) for refining it.

