Mirror of https://github.com/invoke-ai/InvokeAI (synced 2024-08-30 20:32:17 +00:00)
rebrand CLI from "dream" to "invoke"
- rename dream.py to invoke.py
- create a compatibility script named dream.py that execs() invoke.py
- redo documentation
- change help message in args
- this does **not** rename the libraries, which are still ldm.dream.util, etc
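The compatibility script itself does not appear in the hunks below. A minimal sketch of what such a shim could look like, assuming it simply re-execs the renamed entry point (the actual file in the repository may differ), is:

```python
#!/usr/bin/env python3
# Hypothetical sketch of the scripts/dream.py compatibility shim described in
# the commit message: it re-execs the renamed scripts/invoke.py, passing all
# command-line arguments through unchanged.
import os
import sys

# Resolve invoke.py relative to this script so the shim works from any CWD.
script_dir = os.path.dirname(os.path.abspath(__file__))
invoke_path = os.path.join(script_dir, "invoke.py")

print("scripts/dream.py is deprecated; forwarding to scripts/invoke.py", file=sys.stderr)

# Replace the current process with the new entry point.
os.execv(sys.executable, [sys.executable, invoke_path] + sys.argv[1:])
```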
@@ -136,7 +136,7 @@ $TAG_STABLE_DIFFUSION

 ## Startup

-If you're on a **Linux container** the `dream` script is **automatically
+If you're on a **Linux container** the `invoke` script is **automatically
 started** and the output dir set to the Docker volume you created earlier.

 If you're **directly on macOS follow these startup instructions**.
@@ -148,14 +148,14 @@ half-precision requires autocast and won't work.
 By default the images are saved in `outputs/img-samples/`.

 ```Shell
-python3 scripts/dream.py --full_precision
+python3 scripts/invoke.py --full_precision
 ```

 You'll get the script's prompt. You can see available options or quit.

 ```Shell
-dream> -h
-dream> q
+invoke> -h
+invoke> q
 ```

 ## Text to Image
@@ -166,10 +166,10 @@ Then increase steps to 100 or more for good (but slower) results.
 The prompt can be in quotes or not.

 ```Shell
-dream> The hulk fighting with sheldon cooper -s5 -n1
-dream> "woman closeup highly detailed" -s 150
+invoke> The hulk fighting with sheldon cooper -s5 -n1
+invoke> "woman closeup highly detailed" -s 150
 # Reuse previous seed and apply face restoration
-dream> "woman closeup highly detailed" --steps 150 --seed -1 -G 0.75
+invoke> "woman closeup highly detailed" --steps 150 --seed -1 -G 0.75
 ```

 You'll need to experiment to see if face restoration is making it better or
@@ -210,28 +210,28 @@ If you're on a Docker container, copy your input image into the Docker volume
 docker cp /Users/<your-user>/Pictures/sketch-mountains-input.jpg dummy:/data/
 ```

-Try it out generating an image (or more). The `dream` script needs absolute
+Try it out generating an image (or more). The `invoke` script needs absolute
 paths to find the image so don't use `~`.

 If you're on your Mac

 ```Shell
-dream> "A fantasy landscape, trending on artstation" -I /Users/<your-user>/Pictures/sketch-mountains-input.jpg --strength 0.75 --steps 100 -n4
+invoke> "A fantasy landscape, trending on artstation" -I /Users/<your-user>/Pictures/sketch-mountains-input.jpg --strength 0.75 --steps 100 -n4
 ```

 If you're on a Linux container on your Mac

 ```Shell
-dream> "A fantasy landscape, trending on artstation" -I /data/sketch-mountains-input.jpg --strength 0.75 --steps 50 -n1
+invoke> "A fantasy landscape, trending on artstation" -I /data/sketch-mountains-input.jpg --strength 0.75 --steps 50 -n1
 ```

 ## Web Interface

-You can use the `dream` script with a graphical web interface. Start the web
+You can use the `invoke` script with a graphical web interface. Start the web
 server with:

 ```Shell
-python3 scripts/dream.py --full_precision --web
+python3 scripts/invoke.py --full_precision --web
 ```

 If it's running on your Mac point your Mac web browser to http://127.0.0.1:9090
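As a quick sanity check after starting the web server, you can probe the default address the docs point to (http://127.0.0.1:9090). This snippet is illustrative only and assumes the default host and port:

```python
# Illustrative check that the invoke.py web UI is reachable at the default
# address mentioned above (127.0.0.1:9090); adjust if you changed host/port.
import urllib.request

try:
    with urllib.request.urlopen("http://127.0.0.1:9090", timeout=5) as resp:
        print("Web UI reachable, HTTP status:", resp.status)
except OSError as err:
    print("Web UI not reachable yet:", err)
```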
@@ -89,16 +89,16 @@ This will create InvokeAI folder where you will follow the rest of the steps.

 ```
 # for the pre-release weights use the -l or --liaon400m switch
-(ldm) ~/InvokeAI$ python3 scripts/dream.py -l
+(ldm) ~/InvokeAI$ python3 scripts/invoke.py -l

 # for the post-release weights do not use the switch
-(ldm) ~/InvokeAI$ python3 scripts/dream.py
+(ldm) ~/InvokeAI$ python3 scripts/invoke.py

 # for additional configuration switches and arguments, use -h or --help
-(ldm) ~/InvokeAI$ python3 scripts/dream.py -h
+(ldm) ~/InvokeAI$ python3 scripts/invoke.py -h
 ```

-9. Subsequently, to relaunch the script, be sure to run "conda activate ldm" (step 5, second command), enter the `InvokeAI` directory, and then launch the dream script (step 8). If you forget to activate the ldm environment, the script will fail with multiple `ModuleNotFound` errors.
+9. Subsequently, to relaunch the script, be sure to run "conda activate ldm" (step 5, second command), enter the `InvokeAI` directory, and then launch the invoke script (step 8). If you forget to activate the ldm environment, the script will fail with multiple `ModuleNotFound` errors.

 ## Updating to newer versions of the script
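The ModuleNotFound failure called out in step 9 almost always means the `ldm` conda environment is not active. A small, hypothetical pre-flight check, relying on the `CONDA_DEFAULT_ENV` variable that conda sets when an environment is activated, looks like this:

```python
# Hypothetical pre-flight check before launching scripts/invoke.py: verify the
# "ldm" conda environment is active, the usual cause of ModuleNotFound errors.
import os
import sys

env = os.environ.get("CONDA_DEFAULT_ENV", "<none>")
print("Active conda environment:", env)
print("Python interpreter:", sys.executable)
if env != "ldm":
    print("Run `conda activate ldm` first, then relaunch scripts/invoke.py.")
```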
@@ -137,10 +137,10 @@ ln -s "$PATH_TO_CKPT/sd-v1-4.ckpt" \
 python scripts/preload_models.py

 # now you can run SD in CLI mode
-python scripts/dream.py --full_precision # (1)!
+python scripts/invoke.py --full_precision # (1)!

 # or run the web interface!
-python scripts/dream.py --web
+python scripts/invoke.py --web

 # The original scripts should work as well.
 python scripts/orig_scripts/txt2img.py \
@@ -155,7 +155,7 @@ it isn't required but wont hurt.

 ## Common problems

-After you followed all the instructions and try to run dream.py, you might
+After you followed all the instructions and try to run invoke.py, you might
 get several errors. Here's the errors I've seen and found solutions for.

 ### Is it slow?
@@ -220,9 +220,9 @@ There are several causes of these errors:
 "(ldm)" then you activated it. If it begins with "(base)" or something else
 you haven't.

-2. You might've run `./scripts/preload_models.py` or `./scripts/dream.py`
+2. You might've run `./scripts/preload_models.py` or `./scripts/invoke.py`
 instead of `python ./scripts/preload_models.py` or
-`python ./scripts/dream.py`. The cause of this error is long so it's below.
+`python ./scripts/invoke.py`. The cause of this error is long so it's below.

 <!-- I could not find out where the error is, otherwise would have marked it as a footnote -->
@@ -519,7 +519,7 @@ use ARM packages, and use `nomkl` as described above.
 May appear when just starting to generate, e.g.:

 ```bash
-dream> clouds
+invoke> clouds
 Generating: 0%| | 0/1 [00:00<?, ?it/s]/Users/[...]/dev/stable-diffusion/ldm/modules/embedding_manager.py:152: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1662016319283/work/aten/src/ATen/mps/MPSFallback.mm:11.)
 placeholder_idx = torch.where(
 loc("mps_add"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/20d6c351-ee94-11ec-bcaf-7247572f23b4/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":219:0)): error: input types 'tensor<2x1280xf32>' and 'tensor<*xf16>' are not broadcast compatible
@@ -101,13 +101,13 @@ you may instead create a shortcut to it from within `models\ldm\stable-diffusion

 ```bash
 # for the pre-release weights
-python scripts\dream.py -l
+python scripts\invoke.py -l

 # for the post-release weights
-python scripts\dream.py
+python scripts\invoke.py
 ```

-10. Subsequently, to relaunch the script, first activate the Anaconda command window (step 3),enter the InvokeAI directory (step 5, `cd \path\to\InvokeAI`), run `conda activate ldm` (step 6b), and then launch the dream script (step 9).
+10. Subsequently, to relaunch the script, first activate the Anaconda command window (step 3),enter the InvokeAI directory (step 5, `cd \path\to\InvokeAI`), run `conda activate ldm` (step 6b), and then launch the invoke script (step 9).

 **Note:** Tildebyte has written an alternative
 ["Easy peasy Windows install"](https://github.com/invoke-ai/InvokeAI/wiki/Easy-peasy-Windows-install)