This PR ports `main` PR #2871 to the v2.3 branch. It adjusts
the global diffusers model cache to work with the diffusers 0.14
layout, which places models in HF_HOME/hub rather than HF_HOME/diffusers.
# Programmatically generate a large number of images varying by prompt and other image generation parameters
This is a little standalone script named `dynamic_prompts.py` that
enables the generation of dynamic prompts. Using YAML syntax, you
specify a template of prompt phrases and lists of generation parameters,
and the script will generate a cross product of prompts and generation
settings for you. You can save these prompts to disk for later use, or
pipe them to the invokeai CLI to generate the images on the fly.
Typical uses are testing step and CFG values systematically while
holding the seed and prompt constant, testing out various artists'
styles, and comparing the results of the same prompt across different
models.
A typical template will look like this:
```
model: stable-diffusion-1.5
steps: 30;50;10
seed: 50
dimensions: 512x512
cfg:
- 7
- 12
sampler:
- k_euler_a
- k_lms
prompt:
  style:
    - greg rutkowski
    - gustav klimt
  location:
    - the mountains
    - a desert
  object:
    - luxurious dwelling
    - crude tent
  template: a {object} in {location}, in the style of {style}
```
This will generate 96 different images, one for each combination of
the dimensions specified in the template. For example, the prompt axis
alone expands into a cross-product list like:
```
a luxurious dwelling in the mountains, in the style of greg rutkowski
a luxurious dwelling in the mountains, in the style of gustav klimt
a luxurious dwelling in a desert, in the style of greg rutkowski
... etc
```
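For reference, here is a minimal sketch (not the script's actual code) of how the nested prompt fields shown above could be expanded into that cross product:
```python
from itertools import product

# Illustrative only: a nested prompt section mirroring the example template above.
prompt = {
    "style": ["greg rutkowski", "gustav klimt"],
    "location": ["the mountains", "a desert"],
    "object": ["luxurious dwelling", "crude tent"],
    "template": "a {object} in {location}, in the style of {style}",
}

# Take the cross product of every list-valued field and substitute each
# combination into the template string, yielding 2 x 2 x 2 = 8 prompts.
fields = {key: val for key, val in prompt.items() if isinstance(val, list)}
for combo in product(*fields.values()):
    print(prompt["template"].format(**dict(zip(fields.keys(), combo))))
```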
A typical usage would be:
```
python scripts/dynamic_prompts.py --invoke --outdir=/tmp/scanning my_template.yaml
```
This will populate `/tmp/scanning` with each of the requested images,
and also generate a `log.md` file which you can open with an e-book
reader to show something like this:
![image](https://user-images.githubusercontent.com/111189/221970165-4bbd9070-3f32-4d89-8ff2-b03a82ada575.png)
Full instructions can be obtained using the `--instructions` switch, and
an example template can be printed out using `--example`:
```
python scripts/dynamic_prompts.py --instructions
python scripts/dynamic_prompts.py --example > my_first_template.yaml
```
When a legacy ckpt model was converted into diffusers in RAM, the
built-in NSFW checker was not being disabled, in contrast to models
converted and saved to disk. Because InvokeAI does its NSFW checking as
a separate post-processing step (in order to generate blurred images
rather than black ones), this defeated the
--nsfw and --no-nsfw switches.
This closes #2836 and #2580.
Note - this fix will be applied to `main` as a separate PR.
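For context, disabling the built-in checker on a diffusers pipeline held in RAM amounts to something like the sketch below (the actual fix lives in InvokeAI's ckpt-to-diffusers conversion path, which is not shown here):
```python
from diffusers import StableDiffusionPipeline

def disable_builtin_nsfw_checker(pipe: StableDiffusionPipeline) -> StableDiffusionPipeline:
    # Illustrative only: with the pipeline's own checker removed, InvokeAI's
    # separate post-processing NSFW step (which blurs rather than blacks out
    # images) remains the single point of control.
    pipe.safety_checker = None
    return pipe
```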
This is a different source/base branch from
https://github.com/invoke-ai/InvokeAI/pull/2823 but is otherwise the
same content. `yarn build` was run on this clean branch.
## What was the problem/requirement? (What/Why)
As part of a [change in
2.3.0](d74c4009cb),
the high resolution fix was no longer being applied when 'Use all' was
selected. This effectively meant that users had to manually analyze
images to ensure that the parameters were set to match.
~~Additionally, and never actually working, Upscaling and Face
Restoration parameters were also not pulling through with the action,
causing a similar usability issue.~~ See:
https://github.com/invoke-ai/InvokeAI/pull/2823#issuecomment-1445530362
## What was the solution? (How)
This change adds a new reducer to the `postprocessingSlice` file,
mimicking the `generationSlice` reducer to assign all parameters
appropriate for the post processing options. This reducer assigns:
* Hi-res's toggle button only if the type is `txt2img`, since `img2img`
hi-res was removed previously
* ~~Upscaling's toggle button, scale, denoising strength, and upscale
strength~~
* ~~Face Restoration's toggle button, type, strength, and fidelity (if
present/applicable)~~
### Minor
* Added `endOfLine: 'crlf'` to Prettier's config to prevent every file
from appearing modified on Windows checkouts because of line-ending
differences (changes Git does not pick up as real modifications, which
left ghost-modified files behind)
### Revision 2:
* Removed the upscaling and face restoration parameter pulling
### Revision 3:
* More defensive coding for when `hires_fix` is not present (assume false)
### Out of Scope
* Hi-res strength (applied as img2img strength to the initially
generated image) is not in the metadata of the final image and can't
easily be reconstructed
* Upscaling and face restoration have some peculiarities for multi-post
processing outside of the UI, which complicate things enough to scope
them out of this PR.
## How were these changes tested?
* `yarn dev` => Server started successfully
* Manual testing on the development server to ensure parameters pulled
correctly
* `yarn build` => Success
## Notes
As with `generationSlice`, this code assumes `action.payload.image` is
valid and doesn't do a formal check on it to ensure it is valid.
- Crash would occur at the end of this sequence:
  - launch CLI
  - !convert <URL pointing to a legacy ckpt file>
  - Answer "Y" when asked to delete original .ckpt file
- This commit modifies model_manager.heuristic_import() to silently
  delete the downloaded legacy file after it has been converted into a
  diffusers model (see the sketch below). The user is no longer asked
  to approve deletion.
NB: This should be cherry-picked into main once refactor is done.
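A rough sketch of the new behavior (the helper below is hypothetical; the real change is inside `model_manager.heuristic_import()`):
```python
from pathlib import Path

def convert_and_cleanup(ckpt_path: Path, convert_to_diffusers) -> None:
    """Hypothetical helper: convert a downloaded legacy .ckpt file into a
    diffusers model, then silently remove the original checkpoint instead
    of prompting the user for confirmation."""
    convert_to_diffusers(ckpt_path)
    ckpt_path.unlink(missing_ok=True)  # delete the legacy file without asking
```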
For your consideration, here is a revised set of codeowners for the v2.3
branch. The previous set had the bad property that both @blessedcoolant
and @lstein were codeowners of everything, meaning that we had the
superpower of being able to put in a PR and get full approval if any
other member of the team (not a codeowner) approved.
The proposed file is a bit more sensible but needs many eyes on it.
Please take a look and make improvements. I wasn't sure where to put
some people, such as @netsvetaev or @GreggHelt2.
I don't think it makes sense to tinker with the `main` CODEOWNERS until
the "Big Freeze" code reorganization happens.
I subscribed everyone to this PR. Apologies.
Simple script to generate a file of InvokeAI prompts and settings
that scan across steps and other parameters.
To use, create a file named "template.yaml" (or similar) formatted like this
>>> cut here <<<
steps: "30:50:1"
seed: 50
cfg:
- 7
- 8
- 12
sampler:
- ddim
- k_lms
prompt:
- a sunny meadow in the mountains
- a gathering storm in the mountains
>>> cut here <<<
Create sections named "steps", "seed", "cfg", "sampler" and "prompt".
- Each section can have a constant value such as this:
     steps: 50
- Or a range of numeric values in the format:
     steps: "<start>:<stop>:<step>"
- Or a list of values in the format:
     - value1
     - value2
     - value3
Be careful to: 1) put quotation marks around numeric ranges; 2) put a
space between the "-" and the value in a list of values; and 3) use spaces,
not tabs, at the beginnings of indented lines.
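A minimal sketch of how one section's value could be expanded under those rules (the function name is illustrative, and an inclusive upper bound is assumed for numeric ranges):
```python
def expand_section(value):
    # A list is used as-is; a "<start>:<stop>:<step>" string becomes a numeric
    # range (assumed to include <stop>); anything else is a single constant.
    if isinstance(value, list):
        return value
    if isinstance(value, str) and value.count(":") == 2:
        start, stop, step = (int(part) for part in value.split(":"))
        return list(range(start, stop + 1, step))
    return [value]

print(expand_section("30:50:10"))   # [30, 40, 50]
print(expand_section([7, 8, 12]))   # [7, 8, 12]
print(expand_section(50))           # [50]
```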
When you run this script, capture the output into a text file like this:
python generate_param_scan.py template.yaml > output_prompts.txt
"output_prompts.txt" will now contain an expansion of all the list
values you provided. You can examine it in a text editor such as
Notepad.
Now start the CLI, and feed the expanded prompt file to it using the
"!replay" command:
!replay output_prompts.txt
Alternatively, you can directly feed the output of this script
by issuing a command like this from the developer's console:
python generate_param_scan.py template.yaml | invokeai
You can use the web interface to view the resulting images and their
metadata.
- stable-diffusion-2.1-base: base model from stabilityai/stable-diffusion-2-1-base
- stable-diffusion-2.1-768: 768 pixel model from stabilityai/stable-diffusion-2-1-768
- sd-inpainting-2.0: 512 pixel inpainting model from runwayml/stable-diffusion-inpainting
This PR also bumps the version number up to v2.3.1.post2
Adding base node architecture
Fix type annotation errors
Runs and generates, but breaks in saving session
Fix default model value setting. Fix deprecation warning.
Fixed node api
Adding markdown docs
Simplifying Generate construction in apps
[nodes] A few minor changes (#2510)
* Pin api-related requirements
* Remove confusing extra CORS origins list
* Adds response models for HTTP 200
[nodes] Adding graph_execution_state to soon replace session. Adding tests with pytest.
Minor typing fixes
[nodes] Fix some small output query hookups
[node] Fixing some additional typing issues
[nodes] Move and expand graph code. Add base item storage and sqlite implementation.
Update startup to match new code
[nodes] Add callbacks to item storage
[nodes] Adding an InvocationContext object to use for invocations to provide easier extensibility
[nodes] New execution model that handles iteration
[nodes] Fixing the CLI
[nodes] Adding a note to the CLI
[nodes] Split processing thread into separate service
[node] Add error message on node processing failure
Removing old files and duplicated packages
Adding python-multipart