Merge branch 'development' into Improved-fetch-and-option-to-replay-commands-from-file

commit dfd5899611
ArDiouscuros 2022-10-08 13:26:22 +02:00, committed by GitHub
No known key found for this signature in database; GPG Key ID: 4AEE18F83AFDEB23
239 changed files with 8262 additions and 3944 deletions


@ -108,13 +108,14 @@ you can try starting `dream.py` with the `--precision=float32` flag:
- [Image To Image](docs/features/IMG2IMG.md)
- [Inpainting Support](docs/features/INPAINTING.md)
- [Outpainting Support](docs/features/OUTPAINTING.md)
- [GFPGAN and Real-ESRGAN Support](docs/features/UPSCALE.md)
- [Upscaling, face-restoration and outpainting](docs/features/POSTPROCESS.md)
- [Seamless Tiling](docs/features/OTHER.md#seamless-tiling)
- [Google Colab](docs/features/OTHER.md#google-colab)
- [Web Server](docs/features/WEB.md)
- [Reading Prompts From File](docs/features/PROMPTS.md#reading-prompts-from-a-file)
- [Shortcut: Reusing Seeds](docs/features/OTHER.md#shortcuts-reusing-seeds)
- [Weighted Prompts](docs/features/PROMPTS.md#weighted-prompts)
- [Thresholding and Perlin Noise Initialization Options](/docs/features/OTHER.md#thresholding-and-perlin-noise-initialization-options)
- [Negative/Unconditioned Prompts](docs/features/PROMPTS.md#negative-and-unconditioned-prompts)
- [Variations](docs/features/VARIATIONS.md)
- [Personalizing Text-to-Image Generation](docs/features/TEXTUAL_INVERSION.md)

File diff suppressed because it is too large


@ -486,6 +486,8 @@ def parameters_to_generated_image_metadata(parameters):
"variations",
"steps",
"cfg_scale",
"threshold",
"perlin",
"step_number",
"width",
"height",


@ -107,4 +107,4 @@ lightning:
    benchmark: True
    max_steps: 4000000
    # max_steps: 4000


@ -30,9 +30,9 @@ model:
      target: ldm.modules.embedding_manager.EmbeddingManager
      params:
        placeholder_strings: ["*"]
        initializer_words: ["sculpture"]
        initializer_words: ['face', 'man', 'photo', 'africanmale']
        per_image_tokens: false
        num_vectors_per_token: 1
        num_vectors_per_token: 6
        progressive_words: False
    unet_config:


@ -0,0 +1,110 @@
model:
  base_learning_rate: 5.0e-03
  target: ldm.models.diffusion.ddpm.LatentDiffusion
  params:
    linear_start: 0.00085
    linear_end: 0.0120
    num_timesteps_cond: 1
    log_every_t: 200
    timesteps: 1000
    first_stage_key: image
    cond_stage_key: caption
    image_size: 64
    channels: 4
    cond_stage_trainable: true # Note: different from the one we trained before
    conditioning_key: crossattn
    monitor: val/loss_simple_ema
    scale_factor: 0.18215
    use_ema: False
    embedding_reg_weight: 0.0

    personalization_config:
      target: ldm.modules.embedding_manager.EmbeddingManager
      params:
        placeholder_strings: ["*"]
        initializer_words: ['face', 'man', 'photo', 'africanmale']
        per_image_tokens: false
        num_vectors_per_token: 6
        progressive_words: False

    unet_config:
      target: ldm.modules.diffusionmodules.openaimodel.UNetModel
      params:
        image_size: 32 # unused
        in_channels: 4
        out_channels: 4
        model_channels: 320
        attention_resolutions: [ 4, 2, 1 ]
        num_res_blocks: 2
        channel_mult: [ 1, 2, 4, 4 ]
        num_heads: 8
        use_spatial_transformer: True
        transformer_depth: 1
        context_dim: 768
        use_checkpoint: True
        legacy: False

    first_stage_config:
      target: ldm.models.autoencoder.AutoencoderKL
      params:
        embed_dim: 4
        monitor: val/rec_loss
        ddconfig:
          double_z: true
          z_channels: 4
          resolution: 256
          in_channels: 3
          out_ch: 3
          ch: 128
          ch_mult:
          - 1
          - 2
          - 4
          - 4
          num_res_blocks: 2
          attn_resolutions: []
          dropout: 0.0
        lossconfig:
          target: torch.nn.Identity

    cond_stage_config:
      target: ldm.modules.encoders.modules.FrozenCLIPEmbedder

data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 1
    num_workers: 2
    wrap: false
    train:
      target: ldm.data.personalized.PersonalizedBase
      params:
        size: 512
        set: train
        per_image_tokens: false
        repeats: 100
    validation:
      target: ldm.data.personalized.PersonalizedBase
      params:
        size: 512
        set: val
        per_image_tokens: false
        repeats: 10

lightning:
  modelcheckpoint:
    params:
      every_n_train_steps: 500
  callbacks:
    image_logger:
      target: main.ImageLogger
      params:
        batch_frequency: 500
        max_images: 5
        increase_log_steps: False
  trainer:
    benchmark: False
    max_steps: 6200
    # max_steps: 4000

[25 binary image files (docs assets) not shown: 23 added, ranging from 6.6 KiB to 1.3 MiB, and 2 removed (572 KiB and 538 KiB)]

@ -146,6 +146,7 @@ Here are the dream> commands that apply to txt2img:
| --cfg_scale <float>| -C<float> | 7.5 | How hard to try to match the prompt to the generated image; any number greater than 1.0 works, but the useful range is roughly 5.0 to 20.0 |
| --seed <int> | -S<int> | None | Set the random seed for the next series of images. This can be used to recreate an image generated previously.|
| --sampler <sampler>| -A<sampler>| k_lms | Sampler to use. Use -h to get list of available samplers. |
| --hires_fix | | | Larger images often have duplication artefacts. This option suppresses duplicates by generating the image at low res, and then using img2img to increase the resolution |
| --grid | -g | False | Turn on grid mode to return a single image combining all the images generated by this prompt |
| --individual | -i | True | Turn off grid mode (deprecated; leave off --grid instead) |
| --outdir <path> | -o<path> | outputs/img_samples | Temporarily change the location of these images |
@ -249,9 +250,9 @@ generated image and either loads them into the command line
(Linux|Mac), or prints them out in a comment for copy-and-paste
(Windows). You may provide either the name of a file in the current
output directory, or a full file path.
Given a wildcard path to a folder with image png files, the
command will retrieve the dream command used to generate the images,
and save them to a file commands.txt for further processing
Specify the path to a folder of PNG image files, plus the wildcard *.png,
to retrieve the dream commands used to generate the images
and save them to a file commands.txt for further processing.
The name of the saved file can be set as the second argument to !fetch.
~~~
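Under the hood, `!fetch` amounts to reading the generation command back out of each PNG's metadata. Here is a hedged Python sketch of the idea, assuming the command lives in a `Dream` PNG text chunk as it does for images generated by this fork; the function name and paths are illustrative, not the actual implementation:

```python
from pathlib import Path
from PIL import Image

def fetch_commands(folder: str, outfile: str = "commands.txt") -> None:
    """Collect the dream command embedded in each PNG into one text file."""
    with open(outfile, "w") as out:
        for png in sorted(Path(folder).glob("*.png")):
            command = Image.open(png).text.get("Dream")  # PNG tEXt metadata
            if command:
                out.write(command + "\n")

fetch_commands("outputs/img-samples")  # writes commands.txt
```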
@ -299,10 +300,25 @@ dream> !20
dream> watercolor of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
~~~
## !search <search string>
This is similar to !history but it only returns lines that contain
`search string`. For example:
~~~
dream> !search surreal
[21] surrealist painting of beautiful woman sitting under tree wearing broad hat and flowing garment -v0.2 -n6 -S2878767194
~~~
## !clear
This clears the search history from memory and disk. Be advised that
this operation is irreversible and does not issue any warnings!
# Command-line editing and completion
If you are on a Macintosh or Linux machine, the command-line offers
convenient history tracking, editing, and command completion.
The command-line offers convenient history tracking, editing, and
command completion.
- To scroll through previous commands and potentially edit/reuse them, use the up and down cursor keys.
- To edit the current command, use the left and right cursor keys to position the cursor, and then backspace, delete or insert characters.
@ -312,7 +328,8 @@ convenient history tracking, editing, and command completion.
- To paste a cut section back in, position the cursor where you want to paste, and type CTRL-Y
Windows users can get similar, but more limited, functionality if they
launch dream.py with the "winpty" program:
launch dream.py with the "winpty" program and have the `pyreadline3`
library installed:
~~~
> winpty python scripts\dream.py


@ -9,7 +9,7 @@ drawing or photo. This is a really cool feature that tells stable diffusion to build on
top of the image you provide, preserving the original's basic shape and layout. To use it, provide
the `--init_img` option as shown here:
```bash
```commandline
dream> "waterfall and rainbow" --init_img=./init-images/crude_drawing.png --strength=0.5 -s100 -n4
```
@ -26,5 +26,99 @@ If the initial image contains transparent regions, then Stable Diffusion will only draw within the
transparent regions, a process called "inpainting". However, for this to work correctly, the color
information underneath the transparent regions needs to be preserved, not erased.
More Details can be found here:
More details can be found here:
[Creating Transparent Images For Inpainting](./INPAINTING.md#creating-transparent-regions-for-inpainting)
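As a quick sanity check that your image editor preserved that color information, the hedged sketch below (hypothetical file path, PIL assumed) inspects the RGB values hiding under fully transparent pixels:

```python
from PIL import Image

im = Image.open("init-images/drawing_with_hole.png").convert("RGBA")  # hypothetical path
# Collect the color values sitting under fully transparent pixels
hidden = [(r, g, b) for (r, g, b, a) in im.getdata() if a == 0]
print(f"{len(hidden)} transparent pixels; sample underlying colors: {hidden[:3]}")
# If every sample is (0, 0, 0), the editor erased the color data and
# inpainting will produce poor results
```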
## How does it actually work, though?
The main difference between `img2img` and `prompt2img` is the starting point. While `prompt2img` always starts with pure
gaussian noise and progressively refines it over the requested number of steps, `img2img` skips some of these earlier steps
(how many it skips is indirectly controlled by the `--strength` parameter), and uses instead your initial image mixed with gaussian noise as the starting image.
**Let's start** by thinking about vanilla `prompt2img`, just generating an image from a prompt. If the step count is 10, then the "latent space" (Stable Diffusion's internal representation of the image) for the prompt "fire" with seed `1592514025` develops something like this:
```commandline
dream> "fire" -s10 -W384 -H384 -S1592514025
```
![latent steps](../assets/img2img/000019.steps.png)
Put simply: starting from a frame of fuzz/static, SD finds details in each frame that it thinks look like "fire" and brings them a little bit more into focus, gradually scrubbing out the fuzz until a clear image remains.
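In pseudo-Python the loop looks something like the sketch below; `schedule` and `denoise_step` are toy stand-ins for the sampler internals, not real InvokeAI functions:

```python
import torch

def schedule(steps: int) -> torch.Tensor:
    return torch.linspace(1.0, 0.0, steps)   # toy noise schedule

def denoise_step(latents, t, cond):
    return latents * t + cond * (1 - t)      # toy step: nudge latents toward the prompt

def txt2img(cond: torch.Tensor, steps: int = 10) -> torch.Tensor:
    latents = torch.randn_like(cond)         # the initial frame of fuzz/static
    for t in schedule(steps):                # each step scrubs out a bit more fuzz
        latents = denoise_step(latents, t, cond)
    return latents                           # the VAE then decodes this to pixels

fire = txt2img(torch.zeros(1, 4, 48, 48))
```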
**When you use `img2img`** some of the earlier steps are cut, and instead an initial image of your choice is used. But because of how the maths behind Stable Diffusion works, this image needs to be mixed with just the right amount of noise (fuzz/static) for where it is being inserted. This is where the strength parameter comes in. Depending on the set strength, your image will be inserted into the sequence at the appropriate point, with just the right amount of noise.
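A hedged sketch of that bookkeeping; the step count matches the table in the next section, while the noise mixing is a toy linear blend rather than Stable Diffusion's actual scaling:

```python
import torch

def steps_actually_taken(steps: int, strength: float) -> int:
    # img2img skips the earliest steps; roughly strength * steps remain
    return int(steps * strength)

print(steps_actually_taken(10, 0.7), steps_actually_taken(10, 0.4))  # 7 4

def noised_init(init_latents: torch.Tensor, t: float) -> torch.Tensor:
    # mix the init image with just the right amount of noise for its
    # insertion point t in the schedule (toy formula)
    return init_latents * (1 - t) + torch.randn_like(init_latents) * t
```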
### A concrete example
Say I want SD to draw a fire based on this hand-drawn image:
![drawing of a fireplace](../assets/img2img/fire-drawing.png)
Let's only do 10 steps, to make it easier to see what's happening. If strength is `0.7`, this is what the algorithm's internal steps will look like:
![](../assets/img2img/000032.steps.gravity.png)
With strength `0.4`, the steps look more like this:
![](../assets/img2img/000030.steps.gravity.png)
Notice how much more fuzzy the starting image is for strength `0.7` compared to `0.4`, and notice also how much longer the sequence is with `0.7`:
| | strength = 0.7 | strength = 0.4 |
| -- | -- | -- |
| initial image that SD sees | ![](../assets/img2img/000032.step-0.png) | ![](../assets/img2img/000030.step-0.png) |
| steps argument to `dream>` | `-s10` | `-s10` |
| steps actually taken | 7 | 4 |
| latent space at each step | ![](../assets/img2img/000032.steps.gravity.png) | ![](../assets/img2img/000030.steps.gravity.png) |
| output | ![](../assets/img2img/000032.1592514025.png) | ![](../assets/img2img/000030.1592514025.png) |
Both of the outputs look kind of like what I was thinking of. With the strength higher, my input becomes more vague, *and* Stable Diffusion has more steps to refine its output. But it's not really making what I want, which is a picture of a cheery open fire. With the strength lower, my input is more clear, *but* Stable Diffusion has less chance to refine itself, so the result ends up inheriting all the problems of my bad drawing.
If you want to try this out yourself, all of these are using a seed of `1592514025` with a width/height of `384`, step count `10`, the default sampler (`k_lms`), and the single-word prompt `fire`:
```commandline
dream> "fire" -s10 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png --strength 0.7
```
The code for rendering intermediates is on my (damian0815's) branch [document-img2img](https://github.com/damian0815/InvokeAI/tree/document-img2img) - run `dream.py` and check your `outputs/img-samples/intermediates` folder while generating an image.
### Compensating for the reduced step count
After putting this guide together I was curious to see how different the result would be if I increased the step count to compensate, so that SD could have the same number of steps to develop the image regardless of the strength. So I ran the generation again using the same seed, but this time adapting the step count to give each generation 20 steps.
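The arithmetic is just a division, sketched here for reference (a simplification; the step counts in the examples below are rounded to taste):

```python
import math

def compensated_steps(target: int, strength: float) -> int:
    # ask for enough total steps that strength * total is about target
    return math.ceil(target / strength)

print(compensated_steps(20, 0.4))  # 50
print(compensated_steps(20, 0.7))  # 29; the example below rounds up to 30
```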
Here's strength `0.4` (note step count `50`, which is `20 ÷ 0.4` to make sure SD does `20` steps from my image):
```commandline
dream> "fire" -s50 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.4
```
![](../assets/img2img/000035.1592514025.png)
and strength `0.7` (note step count `30`, which is roughly `20 ÷ 0.7` to make sure SD does `20` steps from my image):
```commandline
dream> "fire" -s30 -W384 -H384 -S1592514025 -I /tmp/fire-drawing.png -f 0.7
```
![](../assets/img2img/000046.1592514025.png)
In both cases the image is nice and clean and "finished", but because at strength `0.7` Stable Diffusion has been given so much more freedom to improve on my badly-drawn flames, they've come out looking much better. You can really see the difference when looking at the latent steps. There's more noise on the first image with strength `0.7`:
![](../assets/img2img/000046.steps.gravity.png)
than there is for strength `0.4`:
![](../assets/img2img/000035.steps.gravity.png)
and that extra noise gives the algorithm more choices when it is evaluating how to denoise any particular pixel in the image.
Unfortunately, it seems that `img2img` is very sensitive to the step count. Here's strength `0.7` with a step count of `29` (SD did 19 steps from my image):
![](../assets/img2img/000045.1592514025.png)
By comparing the latents we can sort of see that something got interpreted differently enough on the third or fourth step to lead to a rather different interpretation of the flames.
![](../assets/img2img/000046.steps.gravity.png)
![](../assets/img2img/000045.steps.gravity.png)
This is the result of a difference in the de-noising "schedule" - basically the noise has to be cleaned by a certain degree each step or the model won't "converge" on the image properly (see https://huggingface.co/blog/stable_diffusion for more about that). A different step count means a different schedule, which means things get interpreted slightly differently at every step.
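A toy illustration of why (the numbers here are invented, not SD's real schedule): with a different step count, the noise level the model sees differs at every step after the first, so the two runs diverge almost immediately.

```python
import numpy as np

def toy_schedule(n: int) -> np.ndarray:
    # invented endpoints, linear spacing for simplicity
    return np.linspace(14.6, 0.03, n)

print(toy_schedule(20)[:4])  # [14.6  13.83 13.07 12.3 ]
print(toy_schedule(19)[:4])  # [14.6  13.79 12.98 12.17] - differs from step 2 on
```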


@ -55,6 +55,43 @@ outputs/img-samples/000040.3498014304.png: "a cute child playing hopscotch" -G1.
---
## **Weighted Prompts**
You may weight different sections of the prompt to tell the sampler to attach different levels of
priority to them, by adding `:(number)` to the end of the section you wish to up- or downweight. For
example consider this prompt:
```bash
tabby cat:0.25 white duck:0.75 hybrid
```
This will tell the sampler to invest 25% of its effort on the tabby cat aspect of the image and 75%
on the white duck aspect (surprisingly, this example actually works). The prompt weights can use any
combination of integers and floating point numbers, and they do not need to add up to 1.
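For illustration, here is a hedged Python sketch of how such a prompt could be split into (section, weight) pairs; it mirrors the syntax described above but is not InvokeAI's actual parser:

```python
import re

def parse_weighted(prompt: str) -> list[tuple[str, float]]:
    # split on ":<number>" markers; trailing unweighted text defaults to 1.0
    parts = re.split(r":([\d.]+)", prompt)
    pairs = [(parts[i].strip(), float(parts[i + 1]))
             for i in range(0, len(parts) - 1, 2)]
    if parts[-1].strip():
        pairs.append((parts[-1].strip(), 1.0))
    return pairs

print(parse_weighted("tabby cat:0.25 white duck:0.75 hybrid"))
# [('tabby cat', 0.25), ('white duck', 0.75), ('hybrid', 1.0)]
```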
---
## Thresholding and Perlin Noise Initialization Options
Two new options are thresholding (`--threshold`) and Perlin noise initialization (`--perlin`). Thresholding limits the range of the latent values during optimization, which helps combat oversaturation with higher CFG scale values. Perlin noise initialization starts with a percentage (a value ranging from 0 to 1) of Perlin noise mixed into the initial noise. Both features allow for more variations and options in the course of generating images.
For better intuition into what these options do in practice, [here is a graphic demonstrating them both](static/truncation_comparison.jpg) in use. In generating this graphic, the Perlin noise at initialization was programmatically varied across the diagram through the values 0.0, 0.1, 0.2, 0.4, 0.5, 0.6, 0.8, 0.9, 1.0, and the threshold was varied going down through
0, 1, 2, 3, 4, 5, 10, 20, 100. The other options are fixed, so the initial prompt is as follows (no thresholding or Perlin noise):
```
a portrait of a beautiful young lady -S 1950357039 -s 100 -C 20 -A k_euler_a --threshold 0 --perlin 0
```
Here's an example of another prompt used when setting the threshold to 5 and perlin noise to 0.2:
```
a portrait of a beautiful young lady -S 1950357039 -s 100 -C 20 -A k_euler_a --threshold 5 --perlin 0.2
```
Note: currently the thresholding feature is only implemented for the k-diffusion style samplers, and empirically appears to work best with `k_euler_a` and `k_dpm_2_a`. A threshold of 0 disables thresholding. A Perlin value of 0 disables Perlin initialization entirely, while a value of 1 initializes from Perlin noise alone.
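A hedged sketch of the two mechanisms as described above; the clamp and the linear blend are illustrative, and the uniform-noise stand-in is not true Perlin noise:

```python
import torch

def apply_threshold(latents: torch.Tensor, threshold: float) -> torch.Tensor:
    # --threshold 0 disables; otherwise limit the latent value range
    return latents if threshold == 0 else latents.clamp(-threshold, threshold)

def initial_noise(shape, perlin: float) -> torch.Tensor:
    # --perlin 0 gives pure gaussian noise; --perlin 1 gives only "perlin" noise
    gaussian = torch.randn(shape)
    perlin_noise = torch.rand(shape) * 2 - 1  # stand-in for real Perlin noise
    return (1 - perlin) * gaussian + perlin * perlin_noise
```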
---
## **Simplified API**
For programmers who wish to incorporate stable-diffusion into other products, this repository


@ -4,75 +4,95 @@ title: Outpainting
# :octicons-paintbrush-16: Outpainting
## Continuous outpainting
## Outpainting and outcropping
This extension uses the inpainting code to extend an existing image to
any direction of "top", "right", "bottom" or "left". To use it you
need to provide an initial image with -I and an extension direction
with -D (direction). When extending using outpainting a higher img2img
strength value of 0.83 is the default.
Outpainting is a process by which the AI generates parts of the image
that are outside its original frame. It can be used to fix up images
in which the subject is off center, or when some detail (often the top
of someone's head!) is cut off.
The code is not foolproof. Sometimes it will do a good job extending
the image, and other times it will generate ghost images and other
artifacts. In addition, the code works best on images that were
generated by dream.py, because it will be able to recover the original
prompt that generated the file and "understand" what you are trying to
achieve.
InvokeAI supports two versions of outpainting, one called "outpaint"
and the other "outcrop." They work slightly differently and each has
its advantages and drawbacks.
### Basic Usage
### Outcrop
To illustrate, consider this image generated with the prompt "fantasy
portrait of elven princess." It's nice, but rather annoying that the
top of the head has been cropped off.
The `outcrop` extension allows you to extend the image in 64 pixel
increments in any dimension. You can apply the module to any image
previously-generated by InvokeAI. Note that it will **not** work with
arbitrary photographs or Stable Diffusion images created by other
implementations.
![elven_princess](../assets/outpainting/elven_princess.png)
Consider this image:
We can fix that using the `!fix` command!
![curly_woman](../assets/outpainting/curly.png)
Pretty nice, but it's annoying that the top of her head is cut
off. She's also a bit off center. Let's fix that!
~~~~
dream> !fix my_images/elven_princess.png -D top 50
dream> !fix images/curly.png --outcrop top 64 right 64
~~~~
This is telling dream.py to open up a rectangle 50 pixels high at the
top of the image and outpaint into it. The result is:
This is saying to apply the `outcrop` extension by extending the top
of the image by 64 pixels, and the right of the image by the same
amount. You can use any combination of top|left|right|bottom, and
specify any number of pixels to extend. You can also abbreviate
`--outcrop` to `-c`.
![elven_princess.fixed](../assets/outpainting/elven_princess.outpainted.png)
The result looks like this:
Voilà! You can similarly specify `bottom`, `left` or `right` to
outpaint into these margins.
![curly_woman_outcrop](../assets/outpainting/curly-outcrop.png)
There are some limitations to be aware of:
The new image is actually slightly larger than the original (576x576),
because 64 pixels were added to the top and right sides.
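The size arithmetic is simple enough to sketch; `outcrop_size` is a hypothetical helper, and per the caveats below, requested extensions round up to multiples of 64:

```python
import math

def outcrop_size(width: int, height: int, **extend: int) -> tuple[int, int]:
    round64 = lambda px: math.ceil(px / 64) * 64  # round up to a multiple of 64
    width += round64(extend.get("left", 0)) + round64(extend.get("right", 0))
    height += round64(extend.get("top", 0)) + round64(extend.get("bottom", 0))
    return width, height

print(outcrop_size(512, 512, top=64, right=64))  # (576, 576)
```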
1. You cannot change the size of the image rectangle. In the example,
notice that the whole image is shifted downwards by 50 pixels, rather
than the top being extended upwards.
A number of caveats:
2. Attempting to outpaint larger areas will frequently give rise to ugly
1. Although you can specify any pixel values, they will be rounded up
to the nearest multiple of 64. Smaller values are better. Larger
extensions are more likely to generate artefacts. However, if you wish
you can run the !fix command repeatedly to cautiously expand the
image.
2. The extension is stochastic, meaning that each time you run it
you'll get a slightly different result. You can run it repeatedly
until you get an image you like. Unfortunately `!fix` does not
currently respect the `-n` (`--iterations`) argument.
## Outpaint
The `outpaint` extension does the same thing, but with subtle
differences. Starting with the same image, here is how we would add an
additional 64 pixels to the top of the image:
~~~
dream> !fix images/curly.png --out_direction top 64
~~~
(You can abbreviate `--out_direction` to `-D`.)
The result is shown here:
![curly_woman_outpaint](../assets/outpainting/curly-outpaint.png)
Although the effect is similar, there are significant differences from
outcropping:
1. You can only specify one direction to extend at a time.
2. The image is **not** resized. Instead, the image is shifted by the specified
number of pixels. If you look carefully, you'll see that less of the lady's
torso is visible in the image.
3. Because the image dimensions remain the same, there's no rounding
to multiples of 64.
4. Attempting to outpaint larger areas will frequently give rise to ugly
ghosting effects.
3. For best results, try increasing the step number.
4. If you don't specify a pixel value in -D, it will default to half
5. For best results, try increasing the step number.
6. If you don't specify a pixel value in -D, it will default to half
of the whole image, which is likely not what you want.
You can do more with `!fix`, including upscaling and facial
reconstruction of previously-generated images. See
[Fixing Previously-Generated Images](./UPSCALE.md#fixing-previously-generated-images) for the details.
### Advanced Usage
For more control over the outpainting process, you can provide the
`-D` option at image generation time. This allows you to apply all the
controls, including the ability to resize the image and apply face-fixing
and upscaling. For example:
~~~~
dream> man with cat on shoulder -I./images/man.png -D bottom 100 -W960 -H960 -fit
~~~~
Or even shorter, since the prompt is read from the metadata of the old image:
~~~~
dream> -I./images/man.png -D bottom 100 -W960 -H960 -fit -U2 -G1
~~~~
Neither `outpaint` nor `outcrop` are perfect, but we continue to tune
and improve them. If one doesn't work, try the other. You may also
wish to experiment with other `img2img` arguments, such as `-C`, `-f`
and `-s`.


@ -1,14 +1,18 @@
---
title: Upscale
title: Postprocessing
---
## Intro
The script provides the ability to restore faces and upscale. You can apply
these operations at the time you generate the images, or at any time to a
previously-generated PNG file, using the
[!fix](#fixing-previously-generated-images) command.
This extension provides the ability to restore faces and upscale
images.
Face restoration and upscaling can be applied at the time you generate
the images, or at any later time against a previously-generated PNG
file, using the [!fix](#fixing-previously-generated-images)
command. [Outpainting and outcropping](OUTPAINTING.md) can only be
applied after the fact.
## Face Fixing
@ -31,7 +35,7 @@ into **src/gfpgan/experiments/pretrained_models**. On Mac and Linux systems,
here's how you'd do it using **wget**:
```bash
wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth src/gfpgan/experiments/pretrained_models/
wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth -P src/gfpgan/experiments/pretrained_models/
```
Make sure that you're in the InvokeAI directory when you do this.
@ -158,9 +162,9 @@ situations when there is very little facial data to work with.
## Fixing Previously-Generated Images
It is easy to apply face restoration and/or upscaling to any
previously-generated file. Just use the syntax
`!fix path/to/file.png <options>`. For example, to apply GFPGAN at strength 0.8
and upscale 2X for a file named `./outputs/img-samples/000044.2945021133.png`,
previously-generated file. Just use the syntax `!fix path/to/file.png
<options>`. For example, to apply GFPGAN at strength 0.8 and upscale
2X for a file named `./outputs/img-samples/000044.2945021133.png`,
just run:
```


@ -1,12 +1,15 @@
---
title: Barebones Web Server
title: InvokeAI Web UI & Server
---
# :material-web: Barebones Web Server
# :material-web: InvokeAI Web Server
As of version 1.10, this distribution comes with a bare bones web server (see
screenshot). To use it, run the `dream.py` script by adding the `--web`
option.
As of version 2.0, this distribution's web server has been updated to include
an all-new UI, with optimizations to improve common workflows for image generation.
## Getting Started & Initialization Commands
To start the web server, run the `dream.py` script with the `--web` parameter.
```bash
(ldm) ~/stable-diffusion$ python3 scripts/dream.py --web
@ -15,7 +18,58 @@ option.
You can then connect to the server by pointing your web browser at
http://localhost:9090, or to the network name or IP address of the server.
Kudos to [Tesseract Cat](https://github.com/TesseractCat) for contributing this
code, and to [dagf2101](https://github.com/dagf2101) for refining it.
### Additional Options
`--web_develop` - Starts the web server in development mode.
`--web_verbose` - Enables verbose logging.
`--cors [CORS ...]` - Additional allowed origins, comma-separated.
`--host HOST` - Web server: Host or IP to listen on. Set to 0.0.0.0 to
accept traffic from other devices on your network.
`--port PORT` - Web server: Port to listen on.
`--gui` - Start InvokeAI GUI - This is the "desktop mode" version of the web app. It uses Flask
to create a desktop app experience of the webserver.
## Web Specific Features
The web interface offers an incredibly easy-to-use experience for interacting with the InvokeAI toolkit.
For detailed guidance on individual features, see the Feature-specific help documents available in this directory.
Note that the latest functionality available in the CLI may not always be available in the Web interface.
### Dark Mode & Light Mode
The InvokeAI interface is available in a nano-carbon black & purple Dark Mode, and a "burn your eyes out Nosferatu" Light Mode. These can be toggled by clicking the Sun/Moon icons at the top right of the interface.
![InvokeAI Web Server - Dark Mode](../assets/invoke_web_dark.png)
![InvokeAI Web Server - Light Mode](../assets/invoke_web_light.png)
### Invocation Toolbar
The left side of the InvokeAI interface is available for customizing the prompt and the settings used for invoking your new image. Typing your prompt into the open text field and clicking the Invoke button will produce the image based on the settings configured in the toolbar.
See below for additional documentation related to each feature:
- [Core Prompt Settings](./CLI.md)
- [Variations](./VARIATIONS.md)
- [Upscaling](./UPSCALE.md)
- [Image to Image](./IMG2IMG.md)
- [Inpainting](./INPAINTING.md)
- [Other](./OTHER.md)
### Invocation Gallery
The currently selected --outdir (or the default outputs folder) will display all previously generated files on load. As new invocations are generated, these will be dynamically added to the gallery, and can be previewed by selecting them. Each image also has a simple set of actions (e.g., Delete, Use Seed, Use All Parameters, etc.) that can be accessed by hovering over the image.
### Image Workspace
When an image from the Invocation Gallery is selected, or is generated, the image will be displayed within the center of the interface. A quickbar of common image interactions is displayed along the top of the image, including:
- Use image in the `Image to Image` workflow
- Initialize Face Restoration on the selected file
- Initialize Upscaling on the selected file
- View File metadata and details
- Delete the file
## Acknowledgements
A huge shout-out to the core team working to make this vision a reality, including [psychedelicious](https://github.com/psychedelicious), [Kyle0654](https://github.com/Kyle0654) and [blessedcoolant](https://github.com/blessedcoolant). [hipsterusername](https://github.com/hipsterusername) was the team's unofficial cheerleader and added tooltips/docs.
![Dream Web Server](../assets/dream_web_server.png)


@ -58,6 +58,7 @@ We thank them for all of their time and hard work.
- [rabidcopy](https://github.com/rabidcopy)
- [Dominic Letz](https://github.com/dominicletz)
- [Dmitry T.](https://github.com/ArDiouscuros)
- [Kent Keirsey](https://github.com/hipsterusername)
## **Original CompVis Authors:**


@ -33,13 +33,13 @@ dependencies:
- openh264==2.3.0
- onnx==1.12.0
- onnxruntime==1.12.1
- protobuf==3.20.1
- protobuf==3.19.4
- pudb==2022.1
- pytorch-lightning==1.6.5
- pytorch-lightning==1.7.5
- scipy==1.9.1
- streamlit==1.12.2
- sympy==1.10.1
- tensorboard==2.9.0
- tensorboard==2.10.0
- torchmetrics==0.9.3
- pip:
- flask==2.1.3


@ -23,6 +23,7 @@ dependencies:
- send2trash==1.8.0
- pillow==9.2.0
- einops==0.3.0
- pyreadline3
- torch-fidelity==0.3.0
- transformers==4.19.2
- torchmetrics==0.6.0
@ -34,6 +35,6 @@ dependencies:
- kornia==0.6.0
- -e git+https://github.com/openai/CLIP.git@main#egg=clip
- -e git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers
- -e git+https://github.com/lstein/k-diffusion.git@master#egg=k-diffusion
- -e git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k_diffusion
- -e git+https://github.com/lstein/GFPGAN@fix-dark-cast-images#egg=gfpgan
- -e .


@ -2,24 +2,22 @@
## Run
- `python backend/server.py` serves both frontend and backend at http://localhost:9090
- `python scripts/dream.py --web` serves both frontend and backend at
http://localhost:9090
## Environment
Install [node](https://nodejs.org/en/download/) (includes npm) and optionally
[yarn](https://yarnpkg.com/getting-started/install).
From `frontend/` run `npm install` / `yarn install` to install the frontend packages.
From `frontend/` run `npm install` / `yarn install` to install the frontend
packages.
## Dev
1. From `frontend/`, run `npm run dev` / `yarn dev` to start the dev server.
2. Note the address it starts up on (probably `http://localhost:5173/`).
3. Edit `backend/server.py`'s `additional_allowed_origins` to include this address, e.g.
`additional_allowed_origins = ['http://localhost:5173']`.
4. Leaving the dev server running, open a new terminal and go to the project root.
5. Run `python backend/server.py`.
6. Navigate to the dev server address e.g. `http://localhost:5173/`.
2. Run `python scripts/dream.py --web`.
3. Navigate to the dev server address e.g. `http://localhost:5173/`.
To build for dev: `npm run build-dev` / `yarn build-dev`
@ -28,10 +26,3 @@ To build for production: `npm run build` / `yarn build`
## TODO
- Search repo for "TODO"
- My one gripe with Chakra: no way to disable all animations right now and drop the dependence on
`framer-motion`. I would prefer to save the ~30kb on bundle and have zero animations. This is on
the Chakra roadmap. See https://github.com/chakra-ui/chakra-ui/pull/6368 for last discussion on
this. Need to check in on this issue periodically.
- Mobile friendly layout
- Proper image gallery/viewer/manager
- Help tooltips and such

[Binary files not shown: one file modified; frontend/dist/assets/Inter.b9a8e5e2.ttf added (vendored); one 116 KiB image added]

File diff suppressed because one or more lines are too long


@ -1 +0,0 @@
.checkerboard{background-position:0px 0px,10px 10px;background-size:20px 20px;background-image:linear-gradient(45deg,#eee 25%,transparent 25%,transparent 75%,#eee 75%,#eee 100%),linear-gradient(45deg,#eee 25%,white 25%,white 75%,#eee 75%,#eee 100%)}

File diff suppressed because one or more lines are too long

frontend/dist/assets/index.dd3155db.js (new vendored file, 483 lines): file diff suppressed because one or more lines are too long
frontend/dist/assets/logo.13003d72.png (new vendored binary image, 43 KiB)

@ -1,14 +1,18 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>InvokeAI Stable Diffusion Dream Server</title>
<script type="module" crossorigin src="/assets/index.1332a4e9.js"></script>
<link rel="stylesheet" href="/assets/index.447eb2a9.css">
</head>
<body>
<div id="root"></div>
</body>
</html>
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>InvokeAI - A Stable Diffusion Toolkit</title>
<link rel="shortcut icon" type="icon" href="/assets/favicon.0d253ced.ico" />
<script type="module" crossorigin src="/assets/index.dd3155db.js"></script>
<link rel="stylesheet" href="/assets/index.a0250964.css">
</head>
<body>
<div id="root"></div>
</body>
</html>

frontend/favicon.ico (new binary file, 116 KiB)

@ -1,12 +1,16 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>InvokeAI Stable Diffusion Dream Server</title>
</head>
<body>
<div id="root"></div>
<script type="module" src="/src/main.tsx"></script>
</body>
</html>
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>InvokeAI - A Stable Diffusion Toolkit</title>
<link rel="shortcut icon" type="icon" href="favicon.ico" />
</head>
<body>
<div id="root"></div>
<script type="module" src="/src/main.tsx"></script>
</body>
</html>


@ -19,12 +19,15 @@
"dateformat": "^5.0.3",
"framer-motion": "^7.2.1",
"lodash": "^4.17.21",
"re-resizable": "^6.9.9",
"react": "^18.2.0",
"react-dom": "^18.2.0",
"react-dropzone": "^14.2.2",
"react-hotkeys-hook": "^3.4.7",
"react-icons": "^4.4.0",
"react-redux": "^8.0.2",
"redux-persist": "^6.0.0",
"socket.io": "^4.5.2",
"socket.io-client": "^4.5.2",
"uuid": "^9.0.0"
},
@ -38,6 +41,7 @@
"eslint": "^8.23.0",
"eslint-plugin-prettier": "^4.2.1",
"eslint-plugin-react-hooks": "^4.6.0",
"sass": "^1.55.0",
"tsc-watch": "^5.0.3",
"typescript": "^4.6.4",
"vite": "^3.0.7",

frontend/src/app/App.scss (new file, 17 lines)

@ -0,0 +1,17 @@
@use '../styles/Mixins/' as *;
.App {
display: grid;
}
.app-content {
display: grid;
row-gap: 1rem;
margin: 0.6rem;
padding: 1rem;
border-radius: 0.5rem;
background-color: var(--background-color);
grid-auto-rows: max-content;
width: $app-width;
height: $app-height;
}


@ -1,16 +1,14 @@
import { Grid, GridItem } from '@chakra-ui/react';
import { useEffect, useState } from 'react';
import CurrentImageDisplay from '../features/gallery/CurrentImageDisplay';
import ImageGallery from '../features/gallery/ImageGallery';
import ProgressBar from '../features/system/ProgressBar';
import SiteHeader from '../features/system/SiteHeader';
import OptionsAccordion from '../features/options/OptionsAccordion';
import ProcessButtons from '../features/options/ProcessButtons';
import PromptInput from '../features/options/PromptInput';
import LogViewer from '../features/system/LogViewer';
import Console from '../features/system/Console';
import Loading from '../Loading';
import { useAppDispatch } from './store';
import { requestSystemConfig } from './socketio/actions';
import { keepGUIAlive } from './utils';
import InvokeTabs from '../features/tabs/InvokeTabs';
keepGUIAlive();
const App = () => {
const dispatch = useAppDispatch();
@ -22,43 +20,14 @@ const App = () => {
}, [dispatch]);
return isReady ? (
<>
<Grid
width="100vw"
height="100vh"
templateAreas={`
"header header header header"
"progressBar progressBar progressBar progressBar"
"menu prompt processButtons imageRoll"
"menu currentImage currentImage imageRoll"`}
gridTemplateRows={'36px 10px 100px auto'}
gridTemplateColumns={'350px auto 100px 388px'}
gap={2}
>
<GridItem area={'header'} pt={1}>
<SiteHeader />
</GridItem>
<GridItem area={'progressBar'}>
<ProgressBar />
</GridItem>
<GridItem pl="2" area={'menu'} overflowY="scroll">
<OptionsAccordion />
</GridItem>
<GridItem area={'prompt'}>
<PromptInput />
</GridItem>
<GridItem area={'processButtons'}>
<ProcessButtons />
</GridItem>
<GridItem area={'currentImage'}>
<CurrentImageDisplay />
</GridItem>
<GridItem pr="2" area={'imageRoll'} overflowY="scroll">
<ImageGallery />
</GridItem>
</Grid>
<LogViewer />
</>
<div className="App">
<ProgressBar />
<div className="app-content">
<SiteHeader />
<InvokeTabs />
</div>
<Console />
</div>
) : (
<Loading />
);


@ -15,13 +15,15 @@ export const SAMPLERS: Array<string> = [
// Valid image widths
export const WIDTHS: Array<number> = [
64, 128, 192, 256, 320, 384, 448, 512, 576, 640, 704, 768, 832, 896, 960,
1024,
1024, 1088, 1152, 1216, 1280, 1344, 1408, 1472, 1536, 1600, 1664, 1728, 1792,
1856, 1920, 1984, 2048,
];
// Valid image heights
export const HEIGHTS: Array<number> = [
64, 128, 192, 256, 320, 384, 448, 512, 576, 640, 704, 768, 832, 896, 960,
1024,
1024, 1088, 1152, 1216, 1280, 1344, 1408, 1472, 1536, 1600, 1664, 1728, 1792,
1856, 1920, 1984, 2048,
];
// Valid upscaling levels


@ -7,12 +7,12 @@ type FeatureHelpInfo = {
export enum Feature {
PROMPT,
GALLERY,
OUTPUT,
SEED_AND_VARIATION,
ESRGAN,
OTHER,
SEED,
VARIATIONS,
UPSCALE,
FACE_CORRECTION,
IMAGE_TO_IMAGE,
SAMPLER,
}
export const FEATURES: Record<Feature, FeatureHelpInfo> = {
@ -26,18 +26,23 @@ export const FEATURES: Record<Feature, FeatureHelpInfo> = {
href: 'link/to/docs/feature3.html',
guideImage: 'asset/path.gif',
},
[Feature.OUTPUT]: {
text: 'The Height and Width of generations can be controlled here. If you experience errors, you may be generating an image too large for your system. The seamless option will more often result in repeating patterns in outputs.',
[Feature.OTHER]: {
text: 'Additional Options',
href: 'link/to/docs/feature3.html',
guideImage: 'asset/path.gif',
},
[Feature.SEED_AND_VARIATION]: {
text: 'Seed values provide an initial set of noise which guide the denoising process. Try a variation with an amount of between 0 and 1 to change the output image for that seed.',
[Feature.SEED]: {
text: 'Seed values provide an initial set of noise which guide the denoising process.',
href: 'link/to/docs/feature3.html',
guideImage: 'asset/path.gif',
},
[Feature.ESRGAN]: {
text: 'The ESRGAN setting can be used to increase the output resolution without requiring a higher width/height in the initial generation.',
[Feature.VARIATIONS]: {
text: 'Try a variation with an amount of between 0 and 1 to change the output image for the set seed.',
href: 'link/to/docs/feature3.html',
guideImage: 'asset/path.gif',
},
[Feature.UPSCALE]: {
text: 'Using ESRGAN you can increase the output resolution without requiring a higher width/height in the initial generation.',
href: 'link/to/docs/feature1.html',
guideImage: 'asset/path.gif',
},
@ -51,9 +56,4 @@ export const FEATURES: Record<Feature, FeatureHelpInfo> = {
href: 'link/to/docs/feature3.html',
guideImage: 'asset/path.gif',
},
[Feature.SAMPLER]: {
text: 'This setting allows for different denoising samplers to be used, as well as the number of denoising steps used, which will change the resulting output.',
href: 'link/to/docs/feature3.html',
guideImage: 'asset/path.gif',
},
};


@ -129,6 +129,7 @@ export declare type SystemStatus = {
totalIterations: number;
currentStatus: string;
currentStatusHasSteps: boolean;
hasError: boolean;
};
export declare type SystemConfig = {
@ -159,10 +160,8 @@ export declare type ErrorResponse = {
};
export declare type GalleryImagesResponse = {
images: Array<Omit<Image, 'uuid'>>;
nextPage: number;
offset: number;
onlyNewImages: boolean;
images: Array<Omit<Image, 'uuid'>>;
areMoreImagesAvailable: boolean;
};
export declare type ImageUrlAndUuidResponse = {


@ -6,6 +6,7 @@ import {
addLogEntry,
setIsProcessing,
} from '../../features/system/systemSlice';
import { tabMap, tab_dict } from '../../features/tabs/InvokeTabs';
import * as InvokeAI from '../invokeai';
/**
@ -23,8 +24,14 @@ const makeSocketIOEmitters = (
emitGenerateImage: () => {
dispatch(setIsProcessing(true));
const options = { ...getState().options };
if (tabMap[options.activeTab] === 'txt2img') {
options.shouldUseInitImage = false;
}
const { generationParameters, esrganParameters, gfpganParameters } =
frontendToBackendParameters(getState().options, getState().system);
frontendToBackendParameters(options, getState().system);
socketio.emit(
'generateImage',
@ -50,7 +57,10 @@ const makeSocketIOEmitters = (
const esrganParameters = {
upscale: [upscalingLevel, upscalingStrength],
};
socketio.emit('runESRGAN', imageToProcess, esrganParameters);
socketio.emit('runPostprocessing', imageToProcess, {
type: 'esrgan',
...esrganParameters,
});
dispatch(
addLogEntry({
timestamp: dateFormat(new Date(), 'isoDateTime'),
@ -68,7 +78,10 @@ const makeSocketIOEmitters = (
const gfpganParameters = {
gfpgan_strength: gfpganStrength,
};
socketio.emit('runGFPGAN', imageToProcess, gfpganParameters);
socketio.emit('runPostprocessing', imageToProcess, {
type: 'gfpgan',
...gfpganParameters,
});
dispatch(
addLogEntry({
timestamp: dateFormat(new Date(), 'isoDateTime'),
@ -84,16 +97,12 @@ const makeSocketIOEmitters = (
socketio.emit('deleteImage', url, uuid);
},
emitRequestImages: () => {
const { nextPage, offset } = getState().gallery;
socketio.emit('requestImages', nextPage, offset);
const { earliest_mtime } = getState().gallery;
socketio.emit('requestImages', earliest_mtime);
},
emitRequestNewImages: () => {
const { nextPage, offset, images } = getState().gallery;
if (images.length > 0) {
socketio.emit('requestImages', nextPage, offset, images[0].mtime);
} else {
socketio.emit('requestImages', nextPage, offset);
}
const { latest_mtime } = getState().gallery;
socketio.emit('requestLatestImages', latest_mtime);
},
emitCancelProcessing: () => {
socketio.emit('cancel');


@ -11,6 +11,8 @@ import {
setSystemStatus,
setCurrentStatus,
setSystemConfig,
processingCanceled,
errorOccurred,
} from '../../features/system/systemSlice';
import {
@ -25,7 +27,7 @@ import {
setInitialImagePath,
setMaskPath,
} from '../../features/options/optionsSlice';
import { requestNewImages } from './actions';
import { requestImages, requestNewImages } from './actions';
/**
* Returns an object containing listener callbacks for socketio events.
@ -44,7 +46,11 @@ const makeSocketIOListeners = (
try {
dispatch(setIsConnected(true));
dispatch(setCurrentStatus('Connected'));
dispatch(requestNewImages());
if (getState().gallery.latest_mtime) {
dispatch(requestNewImages());
} else {
dispatch(requestImages());
}
} catch (e) {
console.error(e);
}
@ -90,7 +96,6 @@ const makeSocketIOListeners = (
message: `Image generated: ${url}`,
})
);
dispatch(setIsProcessing(false));
} catch (e) {
console.error(e);
}
@ -116,7 +121,6 @@ const makeSocketIOListeners = (
message: `Intermediate image generated: ${url}`,
})
);
dispatch(setIsProcessing(false));
} catch (e) {
console.error(e);
}
@ -124,7 +128,7 @@ const makeSocketIOListeners = (
/**
* Callback to run when we receive an 'esrganResult' event.
*/
onESRGANResult: (data: InvokeAI.ImageResultResponse) => {
onPostprocessingResult: (data: InvokeAI.ImageResultResponse) => {
try {
const { url, metadata, mtime } = data;
@ -140,10 +144,9 @@ const makeSocketIOListeners = (
dispatch(
addLogEntry({
timestamp: dateFormat(new Date(), 'isoDateTime'),
message: `Upscaled: ${url}`,
message: `Postprocessed: ${url}`,
})
);
dispatch(setIsProcessing(false));
} catch (e) {
console.error(e);
}
@ -204,7 +207,7 @@ const makeSocketIOListeners = (
level: 'error',
})
);
dispatch(setIsProcessing(false));
dispatch(errorOccurred());
dispatch(clearIntermediateImage());
} catch (e) {
console.error(e);
@ -214,7 +217,7 @@ const makeSocketIOListeners = (
* Callback to run when we receive a 'galleryImages' event.
*/
onGalleryImages: (data: InvokeAI.GalleryImagesResponse) => {
const { images, nextPage, offset } = data;
const { images, areMoreImagesAvailable } = data;
/**
* the logic here ideally would be in the reducer but we have a side effect:
@ -232,7 +235,9 @@ const makeSocketIOListeners = (
};
});
dispatch(addGalleryImages({ images: preparedImages, nextPage, offset }));
dispatch(
addGalleryImages({ images: preparedImages, areMoreImagesAvailable })
);
dispatch(
addLogEntry({
@ -245,7 +250,7 @@ const makeSocketIOListeners = (
* Callback to run when we receive a 'processingCanceled' event.
*/
onProcessingCanceled: () => {
dispatch(setIsProcessing(false));
dispatch(processingCanceled());
const { intermediateImage } = getState().gallery;
@ -259,6 +264,7 @@ const makeSocketIOListeners = (
);
dispatch(clearIntermediateImage());
}
dispatch(
addLogEntry({
timestamp: dateFormat(new Date(), 'isoDateTime'),
@ -273,6 +279,17 @@ const makeSocketIOListeners = (
onImageDeleted: (data: InvokeAI.ImageUrlAndUuidResponse) => {
const { url, uuid } = data;
dispatch(removeImage(uuid));
const { initialImagePath, maskPath } = getState().options;
if (initialImagePath === url) {
dispatch(setInitialImagePath(''));
}
if (maskPath === url) {
dispatch(setMaskPath(''));
}
dispatch(
addLogEntry({
timestamp: dateFormat(new Date(), 'isoDateTime'),


@ -35,8 +35,7 @@ export const socketioMiddleware = () => {
onConnect,
onDisconnect,
onError,
onESRGANResult,
onGFPGANResult,
onPostprocessingResult,
onGenerationResult,
onIntermediateResult,
onProgressUpdate,
@ -76,12 +75,9 @@ export const socketioMiddleware = () => {
onGenerationResult(data)
);
socketio.on('esrganResult', (data: InvokeAI.ImageResultResponse) =>
onESRGANResult(data)
);
socketio.on('gfpganResult', (data: InvokeAI.ImageResultResponse) =>
onGFPGANResult(data)
socketio.on(
'postprocessingResult',
(data: InvokeAI.ImageResultResponse) => onPostprocessingResult(data)
);
socketio.on('intermediateResult', (data: InvokeAI.ImageResultResponse) =>
@ -153,7 +149,6 @@ export const socketioMiddleware = () => {
break;
}
case 'socketio/cancelProcessing': {
emitCancelProcessing();
break;


@ -7,6 +7,7 @@ import storage from 'redux-persist/lib/storage'; // defaults to localStorage for
import optionsReducer from '../features/options/optionsSlice';
import galleryReducer from '../features/gallery/gallerySlice';
import systemReducer from '../features/system/systemSlice';
import { socketioMiddleware } from './socketio/middleware';

frontend/src/app/utils.ts (new file, 25 lines)

@ -0,0 +1,25 @@
export function keepGUIAlive() {
async function getRequest(url = '') {
const response = await fetch(url, {
method: 'GET',
cache: 'no-cache',
});
return response;
}
const keepAliveServer = () => {
const url = document.location;
const route = '/flaskwebgui-keep-server-alive';
getRequest(url + route).then((data) => {
return data;
});
};
if (!import.meta.env.NODE_ENV || import.meta.env.NODE_ENV === 'production') {
document.addEventListener('DOMContentLoaded', () => {
const intervalRequest = 3 * 1000;
keepAliveServer();
setInterval(keepAliveServer, intervalRequest);
});
}
}

[Binary files not shown: two modified, two images added (336 KiB and 43 KiB)]

@ -0,0 +1,20 @@
.guide-popover-arrow {
background-color: var(--tab-panel-bg) !important;
box-shadow: none !important;
}
.guide-popover-content {
background-color: var(--background-color-secondary) !important;
border: none !important;
}
.guide-popover-guide-content {
background: var(--tab-panel-bg);
border: 2px solid var(--tab-hover-color);
border-radius: 0.4rem;
padding: 0.75rem 1rem 0.75rem 1rem;
display: grid;
grid-template-rows: repeat(auto-fill, 1fr);
grid-row-gap: 0.5rem;
justify-content: space-between;
}


@ -3,8 +3,6 @@ import {
PopoverArrow,
PopoverContent,
PopoverTrigger,
PopoverHeader,
Flex,
Box,
} from '@chakra-ui/react';
import { SystemState } from '../../features/system/systemSlice';
@ -12,7 +10,7 @@ import { useAppSelector } from '../../app/store';
import { RootState } from '../../app/store';
import { createSelector } from '@reduxjs/toolkit';
import { ReactElement } from 'react';
import { Feature, FEATURES } from '../../app/features';
import { Feature, FEATURES } from '../../app/features';
type GuideProps = {
children: ReactElement;
@ -33,14 +31,13 @@ const GuidePopover = ({ children, feature }: GuideProps) => {
<Box>{children}</Box>
</PopoverTrigger>
<PopoverContent
className={`guide-popover-content`}
maxWidth="400px"
onClick={(e) => e.preventDefault()}
cursor={'initial'}
>
<PopoverArrow />
<Flex alignItems={'center'} gap={2} p={4}>
{text}
</Flex>
<PopoverArrow className="guide-popover-arrow" />
<div className="guide-popover-guide-content">{text}</div>
</PopoverContent>
</Popover>
) : (


@ -0,0 +1,24 @@
import { Button, ButtonProps, Tooltip } from '@chakra-ui/react';
interface Props extends ButtonProps {
label: string;
tooltip?: string;
}
/**
* Reusable customized button component. Originally was more customized - now probably unnecessary.
*
* TODO: Get rid of this.
*/
const IAIButton = (props: Props) => {
const { label, tooltip = '', size = 'sm', ...rest } = props;
return (
<Tooltip label={tooltip}>
<Button size={size} {...rest}>
{label}
</Button>
</Tooltip>
);
};
export default IAIButton;


@ -0,0 +1,21 @@
import { IconButtonProps, IconButton, Tooltip } from '@chakra-ui/react';
interface Props extends IconButtonProps {
tooltip?: string;
}
/**
* Reusable customized button component. Originally was more customized - now probably unnecessary.
*
* TODO: Get rid of this.
*/
const IAIIconButton = (props: Props) => {
const { tooltip = '', onClick, ...rest } = props;
return (
<Tooltip label={tooltip}>
<IconButton {...rest} cursor={onClick ? 'pointer' : 'unset'} onClick={onClick}/>
</Tooltip>
);
};
export default IAIIconButton;


@ -0,0 +1,34 @@
.input {
display: grid;
grid-template-columns: max-content auto;
column-gap: 1rem;
align-items: center;
.input-label {
color: var(--text-color-secondary);
margin-right: 0;
}
.input-entry {
background-color: var(--background-color-secondary);
border: 2px solid var(--border-color);
border-radius: 0.2rem;
font-weight: bold;
&:focus {
outline: none;
border: 2px solid var(--prompt-border-color);
box-shadow: 0 0 10px 0 var(--prompt-box-shadow-color);
}
&:disabled {
opacity: 0.2;
}
&[aria-invalid='true'] {
outline: none;
border: 2px solid var(--border-color-invalid);
box-shadow: 0 0 10px 0 var(--box-shadow-color-invalid);
}
}
}


@ -0,0 +1,41 @@
import { FormControl, FormLabel, Input, InputProps } from '@chakra-ui/react';
import { ChangeEvent } from 'react';
interface IAIInputProps extends InputProps {
styleClass?: string;
label?: string;
width?: string | number;
value: string;
onChange: (e: ChangeEvent<HTMLInputElement>) => void;
}
export default function IAIInput(props: IAIInputProps) {
const {
label,
styleClass,
isDisabled = false,
fontSize = '1rem',
width,
isInvalid,
...rest
} = props;
return (
<FormControl
className={`input ${styleClass}`}
isInvalid={isInvalid}
isDisabled={isDisabled}
flexGrow={1}
>
<FormLabel
fontSize={fontSize}
marginBottom={1}
whiteSpace="nowrap"
className="input-label"
>
{label}
</FormLabel>
<Input {...rest} className="input-entry" size={'sm'} width={width} />
</FormControl>
);
}


@ -0,0 +1,52 @@
.number-input {
display: grid;
grid-template-columns: max-content auto;
column-gap: 1rem;
align-items: center;
.number-input-label {
color: var(--text-color-secondary);
margin-right: 0;
}
.number-input-field {
display: grid;
grid-template-columns: auto max-content;
column-gap: 0.5rem;
align-items: center;
background-color: var(--background-color-secondary);
border: 2px solid var(--border-color);
border-radius: 0.2rem;
}
.number-input-entry {
border: none;
font-weight: bold;
width: 100%;
padding-inline-end: 0;
&:focus {
outline: none;
border: 2px solid var(--prompt-border-color);
box-shadow: 0 0 10px 0 var(--prompt-box-shadow-color);
}
&:disabled {
opacity: 0.2;
}
}
.number-input-stepper {
display: grid;
padding-right: 0.7rem;
svg {
width: 12px;
height: 12px;
}
.number-input-stepper-button {
border: none;
}
}
}


@ -0,0 +1,141 @@
import {
FormControl,
NumberInput,
NumberInputField,
NumberIncrementStepper,
NumberDecrementStepper,
NumberInputProps,
FormLabel,
} from '@chakra-ui/react';
import _ from 'lodash';
import { FocusEvent, useEffect, useState } from 'react';
const numberStringRegex = /^-?(0\.)?\.?$/;
interface Props extends Omit<NumberInputProps, 'onChange'> {
styleClass?: string;
label?: string;
width?: string | number;
showStepper?: boolean;
value: number;
onChange: (v: number) => void;
min: number;
max: number;
clamp?: boolean;
isInteger?: boolean;
}
/**
* Customized Chakra FormControl + NumberInput multi-part component.
*/
const IAINumberInput = (props: Props) => {
const {
label,
styleClass,
isDisabled = false,
showStepper = true,
fontSize = '1rem',
size = 'sm',
width,
textAlign,
isInvalid,
value,
onChange,
min,
max,
isInteger = true,
...rest
} = props;
/**
* Using a controlled input with a value that accepts decimals needs special
* handling. If the user starts to type in "1.5", by the time they press the
* 5, the value has been parsed from "1." to "1" and they end up with "15".
*
* To resolve this, this component keeps the value as a string internally,
* and the UI component uses that. When a change is made, that string is parsed
* as a number and given to the `onChange` function.
*/
const [valueAsString, setValueAsString] = useState<string>(String(value));
/**
* When `value` changes (e.g. from a different source than this component), we need
* to update the internal `valueAsString`, but only if the actual value is different
* from the current value.
*/
useEffect(() => {
if (!valueAsString.match(numberStringRegex) && value !== Number(valueAsString)) {
setValueAsString(String(value));
}
}, [value, valueAsString]);
const handleOnChange = (v: string) => {
setValueAsString(v);
// This allows negatives and decimals e.g. '-123', `.5`, `-0.2`, etc.
if (!v.match(numberStringRegex)) {
// Cast the value to number. Floor it if it should be an integer.
onChange(isInteger ? Math.floor(Number(v)) : Number(v));
}
};
/**
* Clicking the steppers allows the value to go outside bounds; we need to
* clamp it on blur and floor it if needed.
*/
const handleBlur = (e: FocusEvent<HTMLInputElement>) => {
const clamped = _.clamp(
isInteger ? Math.floor(Number(e.target.value)) : Number(e.target.value),
min,
max
);
setValueAsString(String(clamped));
onChange(clamped);
};
return (
<FormControl
isDisabled={isDisabled}
isInvalid={isInvalid}
className={`number-input ${styleClass}`}
>
{label && (
<FormLabel
fontSize={fontSize}
marginBottom={1}
flexGrow={2}
whiteSpace="nowrap"
className="number-input-label"
>
{label}
</FormLabel>
)}
<NumberInput
size={size}
{...rest}
className="number-input-field"
value={valueAsString}
keepWithinRange={true}
clampValueOnBlur={false}
onChange={handleOnChange}
onBlur={handleBlur}
>
<NumberInputField
fontSize={fontSize}
className="number-input-entry"
width={width}
textAlign={textAlign}
/>
<div
className="number-input-stepper"
style={showStepper ? { display: 'block' } : { display: 'none' }}
>
<NumberIncrementStepper className="number-input-stepper-button" />
<NumberDecrementStepper className="number-input-stepper-button" />
</div>
</NumberInput>
</FormControl>
);
};
export default IAINumberInput;
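For illustration, a caller might wire this up as follows (a minimal sketch; the Steps setting, its bounds and the local state are assumptions, not part of this diff):

import { useState } from 'react';
import IAINumberInput from './IAINumberInput';

const StepsSetting = () => {
  // Local state stands in for whatever store the real app uses.
  const [steps, setSteps] = useState<number>(50);
  return (
    <IAINumberInput
      label="Steps"
      value={steps}
      onChange={setSteps} // receives the already-parsed number
      min={1}
      max={250}
      isInteger={true}
    />
  );
};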

View File

@ -0,0 +1,28 @@
.iai-select {
display: grid;
grid-template-columns: repeat(2, max-content);
column-gap: 1rem;
align-items: center;
width: max-content;
.iai-select-label {
color: var(--text-color-secondary);
margin-right: 0;
}
.iai-select-picker {
border: 2px solid var(--border-color);
background-color: var(--background-color-secondary);
font-weight: bold;
&:focus {
outline: none;
border: 2px solid var(--prompt-border-color);
box-shadow: 0 0 10px 0 var(--prompt-box-shadow-color);
}
}
.iai-select-option {
background-color: var(--background-color-secondary);
}
}

View File

@ -0,0 +1,56 @@
import { FormControl, FormLabel, Select, SelectProps } from '@chakra-ui/react';
interface Props extends SelectProps {
label: string;
styleClass?: string;
validValues:
| Array<number | string>
| Array<{ key: string; value: string | number }>;
}
/**
* Customized Chakra FormControl + Select multi-part component.
*/
const IAISelect = (props: Props) => {
const {
label,
isDisabled,
validValues,
size = 'sm',
fontSize = 'md',
styleClass,
...rest
} = props;
return (
<FormControl isDisabled={isDisabled} className={`iai-select ${styleClass}`}>
<FormLabel
fontSize={fontSize}
marginBottom={1}
flexGrow={2}
whiteSpace="nowrap"
className="iai-select-label"
>
{label}
</FormLabel>
<Select
fontSize={fontSize}
size={size}
{...rest}
className="iai-select-picker"
>
{validValues.map((opt) => {
return typeof opt === 'string' || typeof opt === 'number' ? (
<option key={opt} value={opt} className="iai-select-option">
{opt}
</option>
) : (
<option key={opt.value} value={opt.value} className="iai-select-option">
{opt.key}
</option>
);
})}
</Select>
</FormControl>
);
};
export default IAISelect;
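A hypothetical usage sketch showing both accepted shapes of `validValues` (the option lists here are invented for illustration):

// Plain values double as both key and label:
<IAISelect label="Sampler" validValues={['ddim', 'plms', 'k_euler']} />

// Key/value pairs separate the displayed label from the stored value:
<IAISelect
  label="Upscaling Level"
  validValues={[
    { key: '2x', value: 2 },
    { key: '4x', value: 4 },
  ]}
/>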

View File

@ -0,0 +1,18 @@
.chakra-switch,
.switch-button {
span {
background-color: var(--switch-bg-color);
span {
background-color: var(--white);
}
}
span[data-checked] {
background: var(--switch-bg-active-color);
span {
background-color: var(--white);
}
}
}

View File

@ -14,13 +14,13 @@ interface Props extends SwitchProps {
/**
* Customized Chakra FormControl + Switch multi-part component.
*/
const SDSwitch = (props: Props) => {
const IAISwitch = (props: Props) => {
const {
label,
isDisabled = false,
fontSize = 'md',
size = 'md',
width,
width = 'auto',
...rest
} = props;
return (
@ -36,10 +36,10 @@ const SDSwitch = (props: Props) => {
{label}
</FormLabel>
)}
<Switch size={size} {...rest} />
<Switch size={size} className="switch-button" {...rest} />
</Flex>
</FormControl>
);
};
export default SDSwitch;
export default IAISwitch;
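Because the remaining Chakra SwitchProps still flow through `...rest`, callers keep the standard Switch API (a sketch; the `seamless` state and its setter are assumptions):

<IAISwitch
  label="Seamless Tiling"
  isChecked={seamless}
  onChange={(e) => setSeamless(e.target.checked)}
/>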

View File

@ -0,0 +1,65 @@
import { Button, useToast } from '@chakra-ui/react';
import React, { useCallback } from 'react';
import { FileRejection } from 'react-dropzone';
import { useAppDispatch } from '../../app/store';
import ImageUploader from '../../features/options/ImageUploader';
interface InvokeImageUploaderProps {
label?: string;
icon?: any;
onMouseOver?: any;
onMouseOut?: any;
dispatcher: any;
styleClass?: string;
}
export default function InvokeImageUploader(props: InvokeImageUploaderProps) {
const { label, icon, dispatcher, styleClass, onMouseOver, onMouseOut } =
props;
const toast = useToast();
const dispatch = useAppDispatch();
// Callbacks for handling file upload attempts
const fileAcceptedCallback = useCallback(
(file: File) => dispatch(dispatcher(file)),
[dispatch, dispatcher]
);
const fileRejectionCallback = useCallback(
(rejection: FileRejection) => {
const msg = rejection.errors.reduce(
(acc: string, cur: { message: string }) => acc + '\n' + cur.message,
''
);
toast({
title: 'Upload failed',
description: msg,
status: 'error',
isClosable: true,
});
},
[toast]
);
return (
<ImageUploader
fileAcceptedCallback={fileAcceptedCallback}
fileRejectionCallback={fileRejectionCallback}
styleClass={styleClass}
>
<Button
size={'sm'}
fontSize={'md'}
fontWeight={'normal'}
onMouseOver={onMouseOver}
onMouseOut={onMouseOut}
leftIcon={icon}
width={'100%'}
>
{label ? label : null}
</Button>
</ImageUploader>
);
}
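A sketch of a possible call site (the `uploadInitialImage` action and the icon are hypothetical stand-ins):

<InvokeImageUploader
  label="Upload Image"
  icon={<FaUpload />}
  dispatcher={uploadInitialImage}
  styleClass="image-upload-button"
/>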

View File

@ -1,21 +0,0 @@
import { Button, ButtonProps } from '@chakra-ui/react';
interface Props extends ButtonProps {
label: string;
}
/**
* Reusable customized button component. Originally it was more customized; now it is probably unnecessary.
*
* TODO: Get rid of this.
*/
const SDButton = (props: Props) => {
const { label, size = 'sm', ...rest } = props;
return (
<Button size={size} {...rest}>
{label}
</Button>
);
};
export default SDButton;

View File

@ -1,59 +0,0 @@
import {
FormControl,
NumberInput,
NumberInputField,
NumberInputStepper,
NumberIncrementStepper,
NumberDecrementStepper,
Text,
FormLabel,
NumberInputProps,
Flex,
} from '@chakra-ui/react';
interface Props extends NumberInputProps {
label?: string;
width?: string | number;
}
/**
* Customized Chakra FormControl + NumberInput multi-part component.
*/
const SDNumberInput = (props: Props) => {
const {
label,
isDisabled = false,
fontSize = 'md',
size = 'sm',
width,
isInvalid,
...rest
} = props;
return (
<FormControl isDisabled={isDisabled} width={width} isInvalid={isInvalid}>
<Flex gap={2} justifyContent={'space-between'} alignItems={'center'}>
{label && (
<FormLabel marginBottom={1}>
<Text fontSize={fontSize} whiteSpace="nowrap">
{label}
</Text>
</FormLabel>
)}
<NumberInput
size={size}
{...rest}
keepWithinRange={false}
clampValueOnBlur={true}
>
<NumberInputField fontSize={'md'} />
<NumberInputStepper>
<NumberIncrementStepper />
<NumberDecrementStepper />
</NumberInputStepper>
</NumberInput>
</Flex>
</FormControl>
);
};
export default SDNumberInput;

View File

@ -1,56 +0,0 @@
import {
Flex,
FormControl,
FormLabel,
Select,
SelectProps,
Text,
} from '@chakra-ui/react';
interface Props extends SelectProps {
label: string;
validValues:
| Array<number | string>
| Array<{ key: string; value: string | number }>;
}
/**
* Customized Chakra FormControl + Select multi-part component.
*/
const SDSelect = (props: Props) => {
const {
label,
isDisabled,
validValues,
size = 'sm',
fontSize = 'md',
marginBottom = 1,
whiteSpace = 'nowrap',
...rest
} = props;
return (
<FormControl isDisabled={isDisabled}>
<Flex justifyContent={'space-between'} alignItems={'center'}>
<FormLabel marginBottom={marginBottom}>
<Text fontSize={fontSize} whiteSpace={whiteSpace}>
{label}
</Text>
</FormLabel>
<Select fontSize={fontSize} size={size} {...rest}>
{validValues.map((opt) => {
return typeof opt === 'string' || typeof opt === 'number' ? (
<option key={opt} value={opt}>
{opt}
</option>
) : (
<option key={opt.value} value={opt.value}>
{opt.key}
</option>
);
})}
</Select>
</Flex>
</FormControl>
);
};
export default SDSelect;

View File

@ -0,0 +1,16 @@
import React from 'react';
import Img2ImgPlaceHolder from '../../../assets/images/image2img.png';
export const ImageToImageWIP = () => {
return (
<div className="work-in-progress txt2img-work-in-progress">
<img src={Img2ImgPlaceHolder} alt="img2img_placeholder" />
<h1>Image To Image</h1>
<p>
Image to Image is already available in the WebUI. You can access it
from the Advanced Options menu of the Text to Image tab. A dedicated
UI for Image To Image will be released soon.
</p>
</div>
);
};

View File

@ -0,0 +1,14 @@
import React from 'react';
export default function InpaintingWIP() {
return (
<div className="work-in-progress inpainting-work-in-progress">
<h1>Inpainting</h1>
<p>
Inpainting is available as part of the Invoke AI Command Line
Interface. A dedicated WebUI for it will be released in the near
future.
</p>
</div>
);
}

View File

@ -0,0 +1,13 @@
import React from 'react';
export default function NodesWIP() {
return (
<div className="work-in-progress nodes-work-in-progress">
<h1>Nodes</h1>
<p>
A node-based system for generating images is currently under
development. Stay tuned for updates on this feature.
</p>
</div>
);
}

View File

@ -0,0 +1,14 @@
import React from 'react';
export default function OutpaintingWIP() {
return (
<div className="work-in-progress outpainting-work-in-progress">
<h1>Outpainting</h1>
<p>
Outpainting is available as part of the Invoke AI Command Line
Interface. A dedicated WebUI for it will be released in the near
future.
</p>
</div>
);
}

View File

@ -0,0 +1,19 @@
import React from 'react';
export const PostProcessingWIP = () => {
return (
<div className="work-in-progress post-processing-work-in-progress">
<h1>Post Processing</h1>
<p>
Invoke AI offers a wide variety of post-processing features. Image
Upscaling and Face Restoration are already available in the WebUI. You
can access them from the Advanced Options menu of the Text To Image tab.
A dedicated UI will be released soon.
</p>
<p>
The Invoke AI Command Line Interface offers various other features,
including Embiggen and High Resolution Fixing.
</p>
</div>
);
};

View File

@ -0,0 +1,24 @@
@use '../../../styles/Mixins/' as *;
.work-in-progress {
display: grid;
width: 100%;
height: $app-content-height;
grid-auto-rows: max-content;
background-color: var(--background-color-secondary);
border-radius: 0.4rem;
place-content: center;
place-items: center;
row-gap: 1rem;
h1 {
font-size: 2rem;
font-weight: bold;
}
p {
text-align: center;
max-width: 50rem;
color: var(--subtext-color-bright);
}
}

View File

@ -4,10 +4,11 @@ import { useMemo } from 'react';
import { useAppSelector } from '../../app/store';
import { RootState } from '../../app/store';
import { OptionsState } from '../../features/options/optionsSlice';
import { SystemState } from '../../features/system/systemSlice';
import { validateSeedWeights } from '../util/seedWeightPairs';
const optionsSelector = createSelector(
export const optionsSelector = createSelector(
(state: RootState) => state.options,
(options: OptionsState) => {
return {
@ -17,6 +18,7 @@ const optionsSelector = createSelector(
maskPath: options.maskPath,
initialImagePath: options.initialImagePath,
seed: options.seed,
activeTab: options.activeTab,
};
},
{
@ -26,7 +28,7 @@ const optionsSelector = createSelector(
}
);
const systemSelector = createSelector(
export const systemSelector = createSelector(
(state: RootState) => state.system,
(system: SystemState) => {
return {
@ -46,20 +48,26 @@ const systemSelector = createSelector(
* This is used to prevent the 'Generate' button from being clicked.
*/
const useCheckParameters = (): boolean => {
const { prompt } = useAppSelector(optionsSelector);
const {
prompt,
shouldGenerateVariations,
seedWeights,
maskPath,
initialImagePath,
seed,
activeTab,
} = useAppSelector(optionsSelector);
const { isProcessing, isConnected } = useAppSelector(systemSelector);
return useMemo(() => {
// Cannot generate without a prompt
if (!prompt) {
if (!prompt || Boolean(prompt.match(/^[\s\r\n]+$/))) {
return false;
}
if (prompt && !initialImagePath && activeTab === 1) {
return false;
}
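The tightened guard rejects prompts that contain only whitespace; a quick trace of the pattern (note that \s already covers \r and \n, so the extra characters are redundant but harmless):

Boolean('   \n'.match(/^[\s\r\n]+$/)); // true  -> generation stays disabled
Boolean('a dog'.match(/^[\s\r\n]+$/)); // false -> prompt check passes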

View File

@ -0,0 +1,17 @@
import { createIcon } from "@chakra-ui/react";
const ImageToImageIcon = createIcon({
displayName: 'ImageToImageIcon',
viewBox: '0 0 3543 3543',
path: (
<g transform="matrix(1.10943,0,0,1.10943,-206.981,-213.533)">
<path
fill="currentColor"
fillRule="evenodd"
clipRule="evenodd"
d="M688.533,2405.95L542.987,2405.95C349.532,2405.95 192.47,2248.89 192.47,2055.44L192.47,542.987C192.47,349.532 349.532,192.47 542.987,192.47L2527.88,192.47C2721.33,192.47 2878.4,349.532 2878.4,542.987L2878.4,1172.79L3023.94,1172.79C3217.4,1172.79 3374.46,1329.85 3374.46,1523.3C3374.46,1523.3 3374.46,3035.75 3374.46,3035.75C3374.46,3229.21 3217.4,3386.27 3023.94,3386.27L1039.05,3386.27C845.595,3386.27 688.533,3229.21 688.533,3035.75L688.533,2405.95ZM3286.96,2634.37L3286.96,1523.3C3286.96,1378.14 3169.11,1260.29 3023.94,1260.29C3023.94,1260.29 1039.05,1260.29 1039.05,1260.29C893.887,1260.29 776.033,1378.14 776.033,1523.3L776.033,2489.79L1440.94,1736.22L2385.83,2775.59L2880.71,2200.41L3286.96,2634.37ZM2622.05,1405.51C2778.5,1405.51 2905.51,1532.53 2905.51,1688.98C2905.51,1845.42 2778.5,1972.44 2622.05,1972.44C2465.6,1972.44 2338.58,1845.42 2338.58,1688.98C2338.58,1532.53 2465.6,1405.51 2622.05,1405.51ZM2790.9,1172.79L1323.86,1172.79L944.882,755.906L279.97,1509.47L279.97,542.987C279.97,397.824 397.824,279.97 542.987,279.97C542.987,279.97 2527.88,279.97 2527.88,279.97C2673.04,279.97 2790.9,397.824 2790.9,542.987L2790.9,1172.79ZM2125.98,425.197C2282.43,425.197 2409.45,552.213 2409.45,708.661C2409.45,865.11 2282.43,992.126 2125.98,992.126C1969.54,992.126 1842.52,865.11 1842.52,708.661C1842.52,552.213 1969.54,425.197 2125.98,425.197Z"
/>
</g>
),
});
export default ImageToImageIcon;

File diff suppressed because one or more lines are too long

View File

@ -0,0 +1,16 @@
import { createIcon } from "@chakra-ui/react";
const NodesIcon = createIcon({
displayName: 'NodesIcon',
viewBox: '0 0 3543 3543',
path: (
<path
fill="currentColor"
fillRule="evenodd"
clipRule="evenodd"
d="M3543.31,770.787C3543.31,515.578 3336.11,308.38 3080.9,308.38L462.407,308.38C207.197,308.38 0,515.578 0,770.787L0,2766.03C0,3021.24 207.197,3228.44 462.407,3228.44L3080.9,3228.44C3336.11,3228.44 3543.31,3021.24 3543.31,2766.03C3543.31,2766.03 3543.31,770.787 3543.31,770.787ZM3427.88,770.787L3427.88,2766.03C3427.88,2957.53 3272.4,3113.01 3080.9,3113.01C3080.9,3113.01 462.407,3113.01 462.407,3113.01C270.906,3113.01 115.431,2957.53 115.431,2766.03L115.431,770.787C115.431,579.286 270.906,423.812 462.407,423.812L3080.9,423.812C3272.4,423.812 3427.88,579.286 3427.88,770.787ZM1214.23,1130.69L1321.47,1130.69C1324.01,1130.69 1326.54,1130.53 1329.05,1130.2C1329.05,1130.2 1367.3,1125.33 1397.94,1149.8C1421.63,1168.72 1437.33,1204.3 1437.33,1265.48L1437.33,2078.74L1220.99,2078.74C1146.83,2078.74 1086.61,2138.95 1086.61,2213.12L1086.61,2762.46C1086.61,2836.63 1146.83,2896.84 1220.99,2896.84L1770.34,2896.84C1844.5,2896.84 1904.71,2836.63 1904.71,2762.46L1904.71,2213.12C1904.71,2138.95 1844.5,2078.74 1770.34,2078.74L1554,2078.74L1554,1604.84C1625.84,1658.19 1703.39,1658.1 1703.39,1658.1C1703.54,1658.1 1703.69,1658.11 1703.84,1658.11L2362.2,1658.11L2362.2,1874.44C2362.2,1948.61 2422.42,2008.82 2496.58,2008.82L3045.93,2008.82C3120.09,2008.82 3180.3,1948.61 3180.3,1874.44L3180.3,1325.1C3180.3,1250.93 3120.09,1190.72 3045.93,1190.72L2496.58,1190.72C2422.42,1190.72 2362.2,1250.93 2362.2,1325.1L2362.2,1558.97L2362.2,1541.44L1704.23,1541.44C1702.2,1541.37 1650.96,1539.37 1609.51,1499.26C1577.72,1468.49 1554,1416.47 1554,1331.69L1554,1265.48C1554,1153.86 1513.98,1093.17 1470.76,1058.64C1411.24,1011.1 1338.98,1012.58 1319.15,1014.03L1214.23,1014.03L1214.23,796.992C1214.23,722.828 1154.02,662.617 1079.85,662.617L530.507,662.617C456.343,662.617 396.131,722.828 396.131,796.992L396.131,1346.34C396.131,1420.5 456.343,1480.71 530.507,1480.71L1079.85,1480.71C1154.02,1480.71 1214.23,1420.5 1214.23,1346.34L1214.23,1130.69Z"
/>
),
});
export default NodesIcon;

File diff suppressed because one or more lines are too long

View File

@ -0,0 +1,16 @@
import { createIcon } from '@chakra-ui/react';
const PostprocessingIcon = createIcon({
displayName: 'PostprocessingIcon',
viewBox: '0 0 3543 3543',
path: (
<path
fill="currentColor"
fillRule="evenodd"
clipRule="evenodd"
d="M709.477,1596.53L992.591,1275.66L2239.09,2646.81L2891.95,1888.03L3427.88,2460.51L3427.88,994.78C3427.88,954.66 3421.05,916.122 3408.5,880.254L3521.9,855.419C3535.8,899.386 3543.31,946.214 3543.31,994.78L3543.31,2990.02C3543.31,3245.23 3336.11,3452.43 3080.9,3452.43C3080.9,3452.43 462.407,3452.43 462.407,3452.43C207.197,3452.43 -0,3245.23 -0,2990.02L-0,994.78C-0,739.571 207.197,532.373 462.407,532.373L505.419,532.373L504.644,532.546L807.104,600.085C820.223,601.729 832.422,607.722 841.77,617.116C850.131,625.517 855.784,636.21 858.055,647.804L462.407,647.804C270.906,647.804 115.431,803.279 115.431,994.78L115.431,2075.73L-0,2101.5L115.431,2127.28L115.431,2269.78L220.47,2150.73L482.345,2209.21C503.267,2211.83 522.722,2221.39 537.63,2236.37C552.538,2251.35 562.049,2270.9 564.657,2291.93L671.84,2776.17L779.022,2291.93C781.631,2270.9 791.141,2251.35 806.05,2236.37C820.958,2221.39 840.413,2211.83 861.334,2209.21L1353.15,2101.5L861.334,1993.8C840.413,1991.18 820.958,1981.62 806.05,1966.64C791.141,1951.66 781.631,1932.11 779.022,1911.08L709.477,1596.53ZM671.84,1573.09L725.556,2006.07C726.863,2016.61 731.63,2026.4 739.101,2033.91C746.573,2041.42 756.323,2046.21 766.808,2047.53L1197.68,2101.5L766.808,2155.48C756.323,2156.8 746.573,2161.59 739.101,2169.09C731.63,2176.6 726.863,2186.4 725.556,2196.94L671.84,2629.92L618.124,2196.94C616.817,2186.4 612.05,2176.6 604.579,2169.09C597.107,2161.59 587.357,2156.8 576.872,2155.48L146.001,2101.5L576.872,2047.53C587.357,2046.21 597.107,2041.42 604.579,2033.91C612.05,2026.4 616.817,2016.61 618.124,2006.07L671.84,1573.09ZM609.035,1710.36L564.657,1911.08C562.049,1932.11 552.538,1951.66 537.63,1966.64C522.722,1981.62 503.267,1991.18 482.345,1993.8L328.665,2028.11L609.035,1710.36ZM2297.12,938.615L2451.12,973.003C2480.59,976.695 2507.99,990.158 2528.99,1011.26C2549.99,1032.37 2563.39,1059.9 2567.07,1089.52L2672.73,1566.9C2634.5,1580.11 2593.44,1587.29 2550.72,1587.29C2344.33,1587.29 2176.77,1419.73 2176.77,1213.34C2176.77,1104.78 2223.13,1006.96 2297.12,938.615ZM2718.05,76.925L2793.72,686.847C2795.56,701.69 2802.27,715.491 2812.8,726.068C2823.32,736.644 2837.06,743.391 2851.83,745.242L3458.78,821.28L2851.83,897.318C2837.06,899.168 2823.32,905.916 2812.8,916.492C2802.27,927.068 2795.56,940.87 2793.72,955.712L2718.05,1565.63L2642.38,955.712C2640.54,940.87 2633.83,927.068 2623.3,916.492C2612.78,905.916 2599.04,899.168 2584.27,897.318L1977.32,821.28L2584.27,745.242C2599.04,743.391 2612.78,736.644 2623.3,726.068C2633.83,715.491 2640.54,701.69 2642.38,686.847L2718.05,76.925ZM2883.68,1043.06C2909.88,1094.13 2924.67,1152.02 2924.67,1213.34C2924.67,1335.4 2866.06,1443.88 2775.49,1512.14L2869.03,1089.52C2871.07,1073.15 2876.07,1057.42 2883.68,1043.06ZM925.928,201.2L959.611,472.704C960.431,479.311 963.42,485.455 968.105,490.163C972.79,494.871 978.904,497.875 985.479,498.698L1255.66,532.546L985.479,566.395C978.904,567.218 972.79,570.222 968.105,574.93C963.42,579.638 960.431,585.781 959.611,592.388L925.928,863.893L892.245,592.388C891.425,585.781 888.436,579.638 883.751,574.93C879.066,570.222 872.952,567.218 866.378,566.395L596.195,532.546L866.378,498.698C872.952,497.875 879.066,494.871 883.751,490.163C888.436,485.455 891.425,479.311 892.245,472.704L925.928,201.2ZM2864.47,532.373L3080.9,532.373C3258.7,532.373 3413.2,632.945 3490.58,780.281L3319.31,742.773C3257.14,683.925 3173.2,647.804 3080.9,647.804L2927.07,647.804C2919.95,642.994 2913.25,637.473 2907.11,631.298C2886.11,610.194 2872.71,582.655 
2869.03,553.04L2864.47,532.373ZM1352.36,532.373L2571.64,532.373L2567.07,553.04C2563.39,582.655 2549.99,610.194 2528.99,631.298C2522.85,637.473 2516.16,642.994 2509.03,647.804L993.801,647.804C996.072,636.21 1001.73,625.517 1010.09,617.116C1019.43,607.722 1031.63,601.729 1044.75,600.085L1353.15,532.546L1352.36,532.373Z"
/>
),
});
export default PostprocessingIcon;

File diff suppressed because one or more lines are too long

View File

@ -0,0 +1,7 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg width="100%" height="100%" viewBox="0 0 3543 3543" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" xml:space="preserve" xmlns:serif="http://www.serif.com/" style="fill-rule:evenodd;clip-rule:evenodd;stroke-linejoin:round;stroke-miterlimit:2;">
<g transform="matrix(1.10943,0,0,1.10943,-206.981,-213.533)">
<path d="M688.533,2405.95L542.987,2405.95C349.532,2405.95 192.47,2248.89 192.47,2055.44L192.47,542.987C192.47,349.532 349.532,192.47 542.987,192.47L2527.88,192.47C2721.33,192.47 2878.4,349.532 2878.4,542.987L2878.4,1172.79L3023.94,1172.79C3217.4,1172.79 3374.46,1329.85 3374.46,1523.3C3374.46,1523.3 3374.46,3035.75 3374.46,3035.75C3374.46,3229.21 3217.4,3386.27 3023.94,3386.27L1039.05,3386.27C845.595,3386.27 688.533,3229.21 688.533,3035.75L688.533,2405.95ZM3286.96,2634.37L3286.96,1523.3C3286.96,1378.14 3169.11,1260.29 3023.94,1260.29C3023.94,1260.29 1039.05,1260.29 1039.05,1260.29C893.887,1260.29 776.033,1378.14 776.033,1523.3L776.033,2489.79L1440.94,1736.22L2385.83,2775.59L2880.71,2200.41L3286.96,2634.37ZM2622.05,1405.51C2778.5,1405.51 2905.51,1532.53 2905.51,1688.98C2905.51,1845.42 2778.5,1972.44 2622.05,1972.44C2465.6,1972.44 2338.58,1845.42 2338.58,1688.98C2338.58,1532.53 2465.6,1405.51 2622.05,1405.51ZM2790.9,1172.79L1323.86,1172.79L944.882,755.906L279.97,1509.47L279.97,542.987C279.97,397.824 397.824,279.97 542.987,279.97C542.987,279.97 2527.88,279.97 2527.88,279.97C2673.04,279.97 2790.9,397.824 2790.9,542.987L2790.9,1172.79ZM2125.98,425.197C2282.43,425.197 2409.45,552.213 2409.45,708.661C2409.45,865.11 2282.43,992.126 2125.98,992.126C1969.54,992.126 1842.52,865.11 1842.52,708.661C1842.52,552.213 1969.54,425.197 2125.98,425.197Z"/>
</g>
</svg>


Some files were not shown because too many files have changed in this diff.