use Admonitions for notes, warnings, ...

This commit is contained in:
mauwii 2022-09-18 04:54:20 +02:00
parent 6d09f8c6b2
commit 0071f43b2c
No known key found for this signature in database
GPG Key ID: D923DB04ADB3F5AB
7 changed files with 65 additions and 55 deletions

View File

@@ -22,7 +22,9 @@ be retrieved using scripts/images2prompt.py
The script is confirmed to work on Linux, Windows and Mac systems.
_Note:_ This script runs from the command-line or can be used as a Web application. The Web GUI is
!!! note
This script runs from the command-line or can be used as a Web application. The Web GUI is
currently rudimentary, but a much better replacement is on its way.
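For illustration only (assuming the note refers to `dream.py` and the `--web` flag this fork documents for its bare-bones web server), the two modes would be launched roughly like this:
```bash
# interactive command-line session
python3 scripts/dream.py

# same script, but serving the rudimentary web GUI instead
python3 scripts/dream.py --web
```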
```bash
@@ -99,10 +101,10 @@ These arguments are deprecated but still work:
| `--weights <path>` | | `None` | Path to weights file; use `--model stable-diffusion-1.4` instead |
| `--laion400m` | `-l` | `False` | Use older LAION400m weights; use `--model=laion400m` instead |
### **A note on path names:**
!!! note
On Windows systems, you may run into problems when passing the dream script standard backslashed
path names because the Python interpreter treats "\" as an escape. You can either double your
path names because the Python interpreter treats `\` as an escape. You can either double your
slashes (ick): `C:\\\\path\\\\to\\\\my\\\\file`, or use Linux/Mac style forward slashes (better):
`C:/path/to/my/file`.
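For example (the prompt and path are illustrative; `-I` is the init-image flag used in the examples later in this document), both of these point at the same file on Windows:
```bash
# doubled backslashes stop Python from treating "\" as an escape
dream> "portrait of a robot" -I C:\\Users\\me\\Pictures\\init.png

# forward slashes are accepted too and are easier to read
dream> "portrait of a robot" -I C:/Users/me/Pictures/init.png
```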
@@ -144,7 +146,9 @@ Those are the `dream` commands that apply to txt2img:
| `--variation <float>` | `-v<float>` | `0.0` | Add a bit of noise (0.0=none, 1.0=high) to the image<br>in order to generate a series of variations. Usually<br>used in combination with `-S<seed>` and `-n<int>`<br>to generate a series of riffs on a starting image.<br>See [Variations](./VARIATIONS.md). |
| `--with_variations <pattern>` | `-V<pattern>` | `None` | Combine two or more variations. See [Variations](./VARIATIONS.md)<br>for how to use this. |
Note that the width and height of the image must be multiples of 64. You can provide different
!!! note
The width and height of the image must be multiples of 64. You can provide different
values, but they will be rounded down to the nearest multiple of 64.
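As a quick sketch of that rounding (the prompt and sizes are made up): 960 is already a multiple of 64, while 540 is rounded down to 512, so the request below yields a 960x512 image.
```bash
dream> "a wide panorama of mountains at dawn" -W 960 -H 540
# effective output size: 960x512 (540 rounded down to the nearest multiple of 64)
```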
### This is an example of img2img

View File

@@ -59,7 +59,6 @@ and it can also be less than one if the init_img is too big.
The `esrgan_strength` defaults to 0.75 and the `overlap_ratio` defaults to 0.25; both are optional.
Unlike Img2Img, the `--width` (`-W`) and `--height` (`-H`) arguments
do not control the size of the image as a whole, but the size of the
tiles used to Embiggen the image.
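As a hedged sketch (assuming `-embiggen` accepts a scale factor followed by the optional `esrgan_strength` and `overlap_ratio` values described above), this would upscale the init image 2x while rendering it in 512x512 tiles:
```bash
# -W/-H set the size of each tile, not the final image;
# the final image is the init image enlarged by the Embiggen factor (2x here)
dream> "a photo of puffy clouds over a forest at sunset" -I outputs/000002.seed.png -embiggen 2 0.75 0.25 -W 512 -H 512
```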
@@ -120,7 +119,7 @@ tiles:
dream> a photo of puffy clouds over a forest at sunset -s 100 -W 512 -H 512 -I outputs/000002.seed.png -f 0.5 -embiggen_tiles 1 2 3
```
## Note
!!! note
Because the same prompt is used on all the tiled images, and the model
doesn't have the context of anything outside the tile being run - it

View File

@@ -39,7 +39,7 @@ and one with the init word provided.
On an RTX3090, the process for SD will take ~1h at 1.6 iterations/sec.
!!! Info _Note_
!!! note
According to the associated paper, the optimal number of
images is 3-5. Your model may not converge if you use more images than

View File

@@ -30,7 +30,9 @@ this package which asked you to install GFPGAN in a sibling directory, you may u
`--gfpgan_dir` argument with `dream.py` to set a custom path to your GFPGAN directory. _There are
other GFPGAN related boot arguments if you wish to customize further._
**Note: Internet connection needed:** Users whose GPU machines are isolated from the Internet (e.g.
!!! warning "Internet connection needed"
Users whose GPU machines are isolated from the Internet (e.g.
on a University cluster) should be aware that the first time you run dream.py with GFPGAN and
Real-ESRGAN turned on, it will try to download model files from the Internet. To rectify this, you
may run `python3 scripts/preload_models.py` after you have installed GFPGAN and all its
@@ -83,7 +85,7 @@ This also works with img2img:
dream> a man wearing a pineapple hat -I path/to/your/file.png -U 2 0.5 -G 0.6
```
### **Note**
!!! note
GFPGAN and Real-ESRGAN are both memory intensive. In order to avoid crashes and memory overloads
during the Stable Diffusion process, these effects are applied after Stable Diffusion has completed

View File

@@ -29,7 +29,7 @@ This will be indicated as `prompt` in the examples below.
First we let SD create a series of images in the usual way, in this case
requesting six iterations:
```
```bash
dream> lucy lawless as xena, warrior princess, character portrait, high resolution -n6
...
Outputs:
@@ -102,6 +102,7 @@ generate more variations around the almost-but-not-quite image. We do the
latter, using both the `-V` (combining) and `-v` (variation strength) options.
Note that we use `-n6` to generate 6 variations:
```bash
dream> "prompt" -S3357757885 -V3647897225,0.1,1614299449,0.1 -v0.05 -n6
Outputs:
./outputs/Xena/000004.3279757577.png: "prompt" -s50 -W512 -H512 -C7.5 -Ak_lms -V 3647897225:0.1,1614299449:0.1,3279757577:0.05 -S3357757885

View File

@@ -34,9 +34,11 @@ source text-to-image generator. It provides a streamlined process with various n
options to aid the image generation process. It runs on Windows, Mac and Linux machines, and runs on
GPU cards with as little as 4 GB of RAM.
_Note: This fork is rapidly evolving. Please use the
!!! note
This fork is rapidly evolving. Please use the
[Issues](https://github.com/lstein/stable-diffusion/issues) tab to report bugs and make feature
requests. Be sure to use the provided templates. They will help diagnose issues faster._
requests. Be sure to use the provided templates. They will help diagnose issues faster.
## Installation
@@ -64,7 +66,7 @@ You will need one of the following:
- At least 6 GB of free disk space for the machine learning model, Python, and all its dependencies.
### Note
!!! note
If you have an Nvidia 10xx series card (e.g. the 1080ti), please run the dream script in
full-precision mode as shown below.
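A minimal sketch of that invocation, assuming the fork's `--full_precision` flag:
```bash
# run the dream script in 32-bit (full) precision, as recommended above for GTX 10xx cards
python3 scripts/dream.py --full_precision
```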

View File

@@ -57,7 +57,9 @@ title: Linux
(ldm) ~/stable-diffusion$ python3 scripts/preload_models.py
```
Note that this step is necessary because I modified the original just-in-time
!!! note
This step is necessary because I modified the original just-in-time
model loading scheme to allow the script to work on GPU machines that are not
internet connected. See [Preload Models](../features/OTHER.md#preload-models)