remove all references to CLI
committed by psychedelicious
parent 9fa8e38163
commit d27392cc2d
@@ -28,8 +28,9 @@ by placing them in the designated directory for the compatible model type

 ### An Example

-Here are a few examples to illustrate how it works. All these images were
-generated using the command-line client and the Stable Diffusion 1.5 model:
+Here are a few examples to illustrate how it works. All these images
+were generated using the legacy command-line client and the Stable
+Diffusion 1.5 model:

 | Japanese gardener | Japanese gardener <ghibli-face> | Japanese gardener <hoi4-leaders> | Japanese gardener <cartoona-animals> |
 | :--------------------------------: | :-----------------------------------: | :------------------------------------: | :----------------------------------------: |
@@ -82,7 +82,7 @@ format of YAML files can be found
 [here](https://circleci.com/blog/what-is-yaml-a-beginner-s-guide/).

 You can fix a broken `invokeai.yaml` by deleting it and running the
-configuration script again -- option [7] in the launcher, "Re-run the
+configuration script again -- option [6] in the launcher, "Re-run the
 configure script".

 #### Reading Environment Variables
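
For reference, a minimal command-line sketch of the fix described in the hunk above. It assumes a default root directory of `~/invokeai` and that the launcher's "Re-run the configure script" option maps to the `invokeai-configure` console script; both are assumptions and may differ per install.

```bash
# Delete the broken config, then regenerate it (path and script name are assumptions).
rm ~/invokeai/invokeai.yaml
invokeai-configure   # re-creates invokeai.yaml interactively
```
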
@@ -46,7 +46,7 @@ Diffuser-style ControlNet models are available at HuggingFace
 (http://huggingface.co) and accessed via their repo IDs (identifiers
 in the format "author/modelname"). The easiest way to install them is
 to use the InvokeAI model installer application. Use the
-`invoke.sh`/`invoke.bat` launcher to select item [5] and then navigate
+`invoke.sh`/`invoke.bat` launcher to select item [4] and then navigate
 to the CONTROLNETS section. Select the models you wish to install and
 press "APPLY CHANGES". You may also enter additional HuggingFace
 repo_ids in the "Additional models" textbox:
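
To illustrate the "author/modelname" repo ID format mentioned above, entries pasted into the "Additional models" textbox would look like the following. These two IDs are offered only as hedged examples of publicly hosted SD 1.5 ControlNets; they are not taken from this documentation page.

```
lllyasviel/sd-controlnet-canny
lllyasviel/sd-controlnet-depth
```
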
@@ -145,8 +145,8 @@ Additionally, each ControlNet section can be expanded in order to manipulate set
 #### Installation
 There are several ways to install IP-Adapter models with an existing InvokeAI installation:

-1. Through the command line interface launched from the invoke.sh / invoke.bat scripts, option [5] to download models.
-2. Through the Model Manager UI with models from the *Tools* section of [www.models.invoke.ai](www.models.invoke.ai). To do this, copy the repo ID from the desired model page, and paste it in the Add Model field of the model manager. **Note** Both the IP-Adapter and the Image Encoder must be installed for IP-Adapter to work. For example, the [SD 1.5 IP-Adapter](https://models.invoke.ai/InvokeAI/ip_adapter_plus_sd15) and [SD1.5 Image Encoder](https://models.invoke.ai/InvokeAI/ip_adapter_sd_image_encoder) must be installed to use IP-Adapter with SD1.5 based models.
+1. Through the command line interface launched from the invoke.sh / invoke.bat scripts, option [4] to download models.
+2. Through the Model Manager UI with models from the *Tools* section of [www.models.invoke.ai](https://www.models.invoke.ai). To do this, copy the repo ID from the desired model page, and paste it in the Add Model field of the model manager. **Note** Both the IP-Adapter and the Image Encoder must be installed for IP-Adapter to work. For example, the [SD 1.5 IP-Adapter](https://models.invoke.ai/InvokeAI/ip_adapter_plus_sd15) and [SD1.5 Image Encoder](https://models.invoke.ai/InvokeAI/ip_adapter_sd_image_encoder) must be installed to use IP-Adapter with SD1.5 based models.
 3. **Advanced -- Not recommended** Manually downloading the IP-Adapter and Image Encoder files - Image Encoder folders should be placed in the `models\any\clip_vision` folders. IP Adapter Model folders should be placed in the relevant `ip-adapter` folder of the relevant base model folder of the Invoke root directory. For example, for the SDXL IP-Adapter, files should be added to the `model/sdxl/ip_adapter/` folder.

 #### Using IP-Adapter
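
To make the manual option above more concrete, here is a sketch of the resulting folder layout. The folder names come from the passage above; the root location (`~/invokeai`) and the `models/` spelling of the SDXL path are assumptions and may differ per install.

```bash
# Hypothetical layout for a manual SDXL IP-Adapter install (paths per the passage above):
#   ~/invokeai/models/any/clip_vision/<image-encoder-folder>/   <- Image Encoder
#   ~/invokeai/models/sdxl/ip_adapter/<ip-adapter-folder>/      <- IP-Adapter model
mkdir -p ~/invokeai/models/any/clip_vision
mkdir -p ~/invokeai/models/sdxl/ip_adapter
```
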
@@ -16,9 +16,10 @@ Model Merging can be done by navigating to the Model Manager and clicking the
 display all the diffusers-style models that InvokeAI knows about.
 If you do not see the model you are looking for, then it is probably
 a legacy checkpoint model and needs to be converted using the
-`invoke` command-line client and its `!optimize` command. You
-must select at least two models to merge. The third can be left at
-"None" if you desire.
+"Convert" option in the Web-based Model Manager tab.
+
+You must select at least two models to merge. The third can be left
+at "None" if you desire.

 * Alpha: This is the ratio to use when combining models. It ranges
 from 0 to 1. The higher the value, the more weight is given to the
@@ -8,7 +8,7 @@ title: Command-line Utilities

 InvokeAI comes with several scripts that are accessible via the
 command line. To access these commands, start the "developer's
-console" from the launcher (`invoke.bat` menu item [8]). Users who are
+console" from the launcher (`invoke.bat` menu item [7]). Users who are
 familiar with Python can alternatively activate InvokeAI's virtual
 environment (typically, but not necessarily `invokeai/.venv`).

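For readers who take the manual route mentioned in this hunk, activating the virtual environment would look roughly like the following. The venv path comes from the passage above and may differ on your install; the Windows path is an assumption based on the standard venv layout.

```bash
# Linux / macOS (venv location per the passage above; adjust to your install)
source invokeai/.venv/bin/activate

# Windows (assumed standard venv layout)
# invokeai\.venv\Scripts\activate.bat
```
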
@@ -34,7 +34,7 @@ invokeai-web --ram 7

 ## **invokeai-merge**

-This is the model merge script, the same as launcher option [4]. Call
+This is the model merge script, the same as launcher option [3]. Call
 it with the `--gui` command-line argument to start the interactive
 console-based GUI. Alternatively, you can run it non-interactively
 using command-line arguments as illustrated in the example below which
@@ -48,7 +48,7 @@ invokeai-merge --force --base-model sd-1 --models stable-diffusion-1.5 inkdiffus
 ## **invokeai-ti**

 This is the textual inversion training script that is run by launcher
-option [3]. Call it with `--gui` to run the interactive console-based
+option [2]. Call it with `--gui` to run the interactive console-based
 front end. It can also be run non-interactively. It has about a
 zillion arguments, but a typical training session can be launched
 with:
@@ -68,7 +68,7 @@ in Windows).
 ## **invokeai-install**

 This is the console-based model install script that is run by launcher
-option [5]. If called without arguments, it will launch the
+option [4]. If called without arguments, it will launch the
 interactive console-based interface. It can also be used
 non-interactively to list, add and remove models as shown by these
 examples:
@@ -148,7 +148,7 @@ launch the web server against it with `invokeai-web --root InvokeAI-New`.
 ## **invokeai-update**

 This is the interactive console-based script that is run by launcher
-menu item [9] to update to a new version of InvokeAI. It takes no
+menu item [8] to update to a new version of InvokeAI. It takes no
 command-line arguments.

 ## **invokeai-metadata**
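
As a hedged summary of the utility scripts touched by the renumbering above, the invocations below are limited to forms that already appear in this diff's context; any other flags would need to be checked against each script's `--help`.

```bash
invokeai-web --ram 7    # web server (from the hunk context above)
invokeai-merge --gui    # interactive model-merge front end
invokeai-ti --gui       # interactive textual inversion training front end
invokeai-update         # interactive updater; takes no command-line arguments
```
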
@@ -1,131 +0,0 @@
----
-title: Variations
----
-
-# :material-tune-variant: Variations
-
-## Intro
-
-InvokeAI's support for variations enables you to do the following:
-
-1. Generate a series of systematic variations of an image, given a prompt. The
-   amount of variation from one image to the next can be controlled.
-
-2. Given two or more variations that you like, you can combine them in a
-   weighted fashion.
-
-!!! Information ""
-
-    This cheat sheet provides a quick guide for how this works in practice, using
-    variations to create the desired image of Xena, Warrior Princess.
-
-## Step 1 -- Find a base image that you like
-
-The prompt we will use throughout is:
-
-`#!bash "lucy lawless as xena, warrior princess, character portrait, high resolution."`
-
-This will be indicated as `#!bash "prompt"` in the examples below.
-
-First we let SD create a series of images in the usual way, in this case
-requesting six iterations.
-
-<figure markdown>
-
-<figcaption> Seed 3357757885 looks nice </figcaption>
-</figure>
-
----
-
-## Step 2 - Generating Variations
-
-Let's try to generate some variations on this image. We select the "*"
-symbol in the line of icons above the image in order to fix the prompt
-and seed. Then we open up the "Variations" section of the generation
-panel and use the slider to set the variation amount to 0.2. The
-higher this value, the more each generated image will differ from the
-previous one.
-
-Now we run the prompt a second time, requesting six iterations. You
-will see six images that are thematically related to each other. Try
-increasing and decreasing the variation amount and see what happens.
-
-### **Variation Sub Seeding**
-
-Note that the output for each image has a `-V` option giving the "variant
-subseed" for that image, consisting of a seed followed by the variation amount
-used to generate it.
-
-This gives us a series of closely-related variations, including the two shown
-here.
-
-<figure markdown>
-
-<figcaption>subseed 3647897225</figcaption>
-</figure>
-
-<figure markdown>
-
-<figcaption>subseed 1614299449</figcaption>
-</figure>
-
-I like the expression on Xena's face in the first one (subseed 3647897225), and
-the armor on her shoulder in the second one (subseed 1614299449). Can we combine
-them to get the best of both worlds?
-
-We combine the two variations using `-V` (`--with_variations`). Again, we must
-provide the seed for the originally-chosen image in order for this to work.
-
-```bash
-invoke> "prompt" -S3357757885 -V3647897225,0.1,1614299449,0.1
-Outputs:
-./outputs/Xena/000003.1614299449.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1 -S3357757885
-```
-
-Here we are providing equal weights (0.1 and 0.1) for both the subseeds. The
-resulting image is close, but not exactly what I wanted:
-
-<figure markdown>
-
-<figcaption> subseed 1614299449 </figcaption>
-</figure>
-
-We could either try combining the images with different weights, or we can
-generate more variations around the almost-but-not-quite image. We do the
-latter, using both the `-V` (combining) and `-v` (variation strength) options.
-Note that we use `-n6` to generate 6 variations:
-
-```bash
-invoke> "prompt" -S3357757885 -V3647897225,0.1,1614299449,0.1 -v0.05 -n6
-Outputs:
-./outputs/Xena/000004.3279757577.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,3279757577:0.05 -S3357757885
-./outputs/Xena/000004.2853129515.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,2853129515:0.05 -S3357757885
-./outputs/Xena/000004.3747154981.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,3747154981:0.05 -S3357757885
-./outputs/Xena/000004.2664260391.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,2664260391:0.05 -S3357757885
-./outputs/Xena/000004.1642517170.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,1642517170:0.05 -S3357757885
-./outputs/Xena/000004.2183375608.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,2183375608:0.05 -S3357757885
-```
-
-This produces six images, all slight variations on the combination of the chosen
-two images. Here's the one I like best:
-
-<figure markdown>
-
-<figcaption> subseed 3747154981 </figcaption>
-</figure>
-
-As you can see, this is a very powerful tool, which when combined with subprompt
-weighting, gives you great control over the content and quality of your
-generated images.
-
-## Variations and Samplers
-
-The sampler you choose has a strong effect on variation strength. Some
-samplers, such as `k_euler_a` are very "creative" and produce significant
-amounts of image-to-image variation even when the seed is fixed and the
-`-v` argument is very low. Others are more deterministic. Feel free to
-experiment until you find the combination that you like.
-
-Also be aware of the [Perlin Noise](OTHER.md#thresholding-and-perlin-noise-initialization-options)
-feature, which provides another way of introducing variability into your
-image generation requests.
@@ -28,7 +28,7 @@ Learn how to install and use ControlNet models for fine control over
 image output.

 ### * [Image-to-Image Guide](IMG2IMG.md)
-Use a seed image to build new creations in the CLI.
+Use a seed image to build new creations.

 ## Model Management
