- installer now installs the pretty dialog-based console launcher
- added dialogrc for custom colors
- added an updater to download the new launcher when users do an update
- Imported V2 legacy models will now autoconvert into diffusers at load
time, regardless of the setting of `--ckpt_convert`.
- model manager `heuristic_import()` function now looks for side-by-side
yaml and vae files for custom configuration and VAE respectively.
Example of this:
illuminati-v1.1.safetensors
illuminati-v1.1.vae.safetensors
illuminati-v1.1.yaml
When the user tries to import `illuminati-v1.1.safetensors`, the yaml
file will be used for its configuration, and the VAE will be used for
its VAE. Conversion to diffusers will happen if needed, and the yaml
file will be used to determine which V2 format (if any) to apply.
NOTE that the changes to `ckpt_to_diffusers.py` were previously reviewed
by @JPPhoto on the `main` branch and approved.
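For illustration, here is a minimal sketch of the side-by-side lookup. The function name and extension list are hypothetical, not the actual `heuristic_import()` code:

```python
from pathlib import Path

def find_sidecar_files(checkpoint: Path):
    """Look for a custom .yaml config and a .vae.* file next to a checkpoint.

    Hypothetical helper for illustration; the real logic lives in the
    model manager's heuristic_import().
    """
    config = checkpoint.with_suffix(".yaml")  # e.g. illuminati-v1.1.yaml
    vae = None
    for ext in (".vae.safetensors", ".vae.ckpt", ".vae.pt"):
        candidate = checkpoint.with_suffix(ext)  # e.g. illuminati-v1.1.vae.safetensors
        if candidate.exists():
            vae = candidate
            break
    return (config if config.exists() else None), vae
```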
Several related security fixes:
1. Port #2946 from main to 2.3.2 branch - this closes a hole that allows
a pickle checkpoint file to masquerade as a safetensors file.
2. Add pickle scanning to the checkpoint to diffusers conversion script.
3. Pickle-scan non-safetensors VAE files.
4. Avoid running the scanner twice on the same file during the probing and
conversion process.
5. Clean up diagnostic messages.
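A minimal sketch of the scan described in items 2 and 3, assuming the `picklescan` package InvokeAI already uses; the function name and the place it is called from are illustrative:

```python
from pathlib import Path
from picklescan.scanner import scan_file_path

def scan_before_conversion(path: Path) -> None:
    """Abort the checkpoint -> diffusers conversion if the file fails a pickle scan.

    Illustrative only; caching the result is what avoids scanning the same
    file twice during probing and conversion (item 4).
    """
    result = scan_file_path(str(path))
    if result.scan_err or result.infected_files > 0:
        raise RuntimeError(f"Pickle scan failed for {path}; refusing to convert it")
```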
- The command `invokeai-batch --invoke` was creating a time-stamped
logfile with colons in its name, which is a Windows no-no. This corrects
the problem by writing the timestamp out as "13-06-2023_8-35-10".
- Closes #3005
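A sketch of the Windows-safe timestamp using `datetime.strftime`; the exact format string and logfile name in the PR may differ slightly:

```python
from datetime import datetime

# Colons are illegal in Windows filenames, so the timestamp is written with
# dashes and an underscore instead, e.g. "13-06-2023_8-35-10".
timestamp = datetime.now().strftime("%d-%m-%Y_%H-%M-%S")
logfile_name = f"invokeai_batch_{timestamp}.log"  # illustrative filename
```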
- Since 2.3.2, invokeai stores the next PNG file's numeric prefix in a
file named `.next_prefix` in the outputs directory. This avoids the
overhead of doing a directory listing to find out what file number comes
next.
- The code uses advisory locking to prevent corruption of this file in
the event that multiple invokeai instances try to access it simultaneously,
but some users have experienced corruption of the file nevertheless.
- This PR addresses the problem by detecting a potentially corrupted
`.next_prefix` file and falling back to the directory listing method. A
fixed version of the file is then written out.
- Closes #3001
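A rough sketch of the fallback logic, with hypothetical names and without the advisory locking that the real code wraps around the read/write:

```python
from pathlib import Path

def next_prefix(outdir: Path) -> int:
    """Return the next PNG prefix, falling back to a directory listing when
    .next_prefix is missing or unreadable (hypothetical helper)."""
    marker = outdir / ".next_prefix"
    try:
        prefix = int(marker.read_text().strip())
    except (OSError, ValueError):
        # Corrupted or missing marker: rebuild the counter from the filenames.
        numbers = [
            int(p.name.split(".")[0])
            for p in outdir.glob("*.png")
            if p.name.split(".")[0].isdigit()
        ]
        prefix = max(numbers, default=-1) + 1
    marker.write_text(str(prefix + 1))  # write a repaired value back out
    return prefix
```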
This PR addresses issues raised by #3008.
1. Update documentation to indicate the correct maximum batch size for
TI training when xformers is and isn't used.
2. Update textual inversion code so that the default batch size is
aware of xformers availability.
3. Add documentation for how to launch TI with distributed learning.
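A sketch of the idea in item 2, choosing the default batch size from xformers availability; the specific numbers are placeholders, not the values in the TI documentation:

```python
# Pick a default batch size based on whether xformers is importable; the
# numbers below are placeholders, not the documented maximums.
try:
    import xformers  # noqa: F401
    xformers_available = True
except ImportError:
    xformers_available = False

default_batch_size = 8 if xformers_available else 4
```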
Lots of little bugs have been squashed since 2.3.2 and a new minor point
release is imminent. This PR updates the version number in preparation
for an RC.
Two related security fixes:
1. Port #2946 from main to 2.3.2 branch - this closes a hole that
allows a pickle checkpoint file to masquerade as a safetensors
file.
2. Add pickle scanning to the checkpoint to diffusers conversion
script. This will be ported to main in a separate PR.
This commit enhances support for V2 variant (epsilon and v-predict)
import and conversion to diffusers by prompting the user to select the
proper config file during startup-time autoimport as well as in the
invokeai installer script. Previously the user was only prompted when
doing an `!import` from the command line or when using the WebUI Model
Manager.
At some point `pyproject.toml` was modified to remove the
invokeai-update and invokeai-model-install scripts. This PR fixes the
issue.
If this was an intentional change, let me know and we'll discuss.
# Support SD version 2 "epsilon" and "v-predict" inference configurations in v2.3
This is a port of the `main` PR #2870 back into V2.3. It allows both
"epsilon" inference V2 models (e.g. "v2-base") and "v-predict" models
(e.g. "V2-768") to be imported and converted into correct diffusers
models. This depends on picking the right configuration file to use, and
since there is no intrinsic difference between the two types of models,
when we detect that a V2 model is being imported, we fall back to asking
the user to select the model type.
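Roughly, the selection boils down to mapping the user's answer onto one of the two Stability AI inference configs; the paths, dictionary, and helper below are illustrative, not the actual probing code:

```python
from pathlib import Path

# Which config applies cannot be read from the weights, so the user is asked.
# Paths and prompt text are illustrative.
V2_CONFIGS = {
    "epsilon": Path("configs/stable-diffusion/v2-inference.yaml"),      # e.g. v2-base
    "v-predict": Path("configs/stable-diffusion/v2-inference-v.yaml"),  # e.g. v2-768
}

def pick_v2_config(ask) -> Path:
    choice = ask("Is this model 'epsilon' (v2-base style) or 'v-predict' (v2-768 style)? ")
    return V2_CONFIGS[choice.strip()]
```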
This PR ports the `main` PR #2871 to the v2.3 branch. This adjusts the
global diffusers model cache to work with the 0.14 diffusers layout of
placing models in `HF_HOME/hub` rather than `HF_HOME/diffusers`. It also
implements the one-time migration action to the new layout.
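For reference, the layout change amounts to moving the cache from `HF_HOME/diffusers` to `HF_HOME/hub`; the snippet below only illustrates that move and is not the PR's migration code, which is more careful about merging existing entries:

```python
import os
import shutil
from pathlib import Path

# diffusers >= 0.14 caches models under $HF_HOME/hub instead of
# $HF_HOME/diffusers; move entries that are not already present.
hf_home = Path(os.environ.get("HF_HOME", Path.home() / ".cache" / "huggingface"))
legacy_dir, hub_dir = hf_home / "diffusers", hf_home / "hub"

if legacy_dir.is_dir():
    hub_dir.mkdir(parents=True, exist_ok=True)
    for entry in legacy_dir.iterdir():
        target = hub_dir / entry.name
        if not target.exists():
            shutil.move(str(entry), str(target))
```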
# Programmatically generate a large number of images varying by prompt and other image generation parameters
This is a little standalone script named `dynamic_prompting.py` that
enables the generation of dynamic prompts. Using YAML syntax, you
specify a template of prompt phrases and lists of generation parameters,
and the script will generate a cross product of prompts and generation
settings for you. You can save these prompts to disk for later use, or
pipe them to the invokeai CLI to generate the images on the fly.
Typical uses are testing step and CFG values systematically while
holding the seed and prompt constant, testing out various artists'
styles, and comparing the results of the same prompt across different
models.
A typical template will look like this:
```
model: stable-diffusion-1.5
steps: 30;50;10
seed: 50
dimensions: 512x512
cfg:
  - 7
  - 12
sampler:
  - k_euler_a
  - k_lms
prompt:
  style:
    - greg rutkowski
    - gustav klimt
  location:
    - the mountains
    - a desert
  object:
    - luxurious dwelling
    - crude tent
  template: a {object} in {location}, in the style of {style}
```
This will generate 96 different images, each of which varies by one of
the dimensions specified in the template. For example, the prompt axis
will generate a cross product list like:
```
a luxurious dwelling in the mountains, in the style of greg rutkowski
a luxurious dwelling in the mountains, in the style of gustav klimt
a luxurious dwelling in a desert, in the style of greg rutkowski
... etc
```
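Under the hood this is just a cross product over the listed values; here is a minimal sketch with `itertools.product` (the real script handles arbitrary template fields and the non-prompt parameters the same way):

```python
from itertools import product

styles = ["greg rutkowski", "gustav klimt"]
locations = ["the mountains", "a desert"]
objects = ["luxurious dwelling", "crude tent"]
template = "a {object} in {location}, in the style of {style}"

prompts = [
    template.format(object=o, location=loc, style=s)
    for o, loc, s in product(objects, locations, styles)
]
# 2 x 2 x 2 = 8 prompts; crossed with 3 step values, 2 cfg values and
# 2 samplers, that is the 96 images mentioned above.
```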
A typical usage would be:
```
python scripts/dynamic_prompts.py --invoke --outdir=/tmp/scanning my_template.yaml
```
This will populate `/tmp/scanning` with each of the requested images,
and also generate a `log.md` file which you can open with an e-book
reader to show something like this:
![image](https://user-images.githubusercontent.com/111189/221970165-4bbd9070-3f32-4d89-8ff2-b03a82ada575.png)
Full instructions can be obtained using the `--instructions` switch, and
an example template can be printed out using `--example`:
```
python scripts/dynamic_prompts.py --instructions
python scripts/dynamic_prompts.py --example > my_first_template.yaml
```