- In the CLI, the argument is `--png_compression <0..9>` (`-z<0..9>`)
- In the API, pass `compress_level` to `PngWriter.save_image_and_prompt_to_png()`
Compression ranges from 0 (no compression) to 9 (maximum compression).
The default value is 6, matching the Pillow package's own default.
This addresses an issue first raised in #652.
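For reference, the option maps onto Pillow's `compress_level` parameter for PNG output. A minimal sketch of the underlying call (file names are illustrative):
```python
from PIL import Image

img = Image.open('sample.png')
# compress_level follows zlib levels: 0 = no compression, 9 = maximum;
# Pillow defaults to 6 when the argument is omitted.
img.save('sample_recompressed.png', compress_level=9)
```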
On the command line, the new option is `--text_mask` (`-tm`).
Example:
```
invoke> a baseball -I /path/to/still_life.png -tm orange
```
This will find the orange fruit in the still life painting and replace
it with an image of a baseball.
- `--inpaint_replace 0.X` causes inpainting to ignore what is under
  the masked region, with a strength ranging from 0 (don't ignore at all)
  to 1.0 (ignore completely); see the usage sketch after this list.
- sync with upstream development
- update docs
- add a `--inpaint_replace` option that fills masked regions with
latent noise. This allows radical changes to inpainted regions
at the cost of losing context.
- fix up readline, arg processing and metadata writing to accommodate
this change
- fixed bug in storage and retrieval of variations, discovered incidentally
during testing
- update documentation
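A usage sketch of the new flag combined with the text-masking option described earlier (the prompt, image path, and replace strength are illustrative):
```
invoke> a baseball -I /path/to/still_life.png -tm orange --inpaint_replace 0.8
```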
- Error checks for invalid models
- Add a !del_model command to invoke.py (example below)
- Add a del_model() method to model_cache
- The autocompleter is kept in sync with model additions and deletions.
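For example, assuming the command takes a model's stanza name from `models.yaml`:
```
invoke> !del_model waifu-diffusion
```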
At step counts greater than ~75, the ksamplers start producing noisy
images when using the Karras noise schedule. This PR reverts to using
the model's own noise schedule, which eliminates the problem at the
cost of slower convergence at low step counts.
This PR also introduces a new CLI argument, `--save_intermediates <n>`,
which saves every nth intermediate image into a subdirectory
named `intermediates/<image_prefix>`.
Addresses issue #1083.
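For example, a run that saves every fifth intermediate image (the prompt is illustrative, and `-s` is assumed to be the step-count flag):
```
invoke> a baseball -s 100 --save_intermediates 5
```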
- `!import_model <path/to/model/weights>` will import a new model,
  prompt the user for its name and description, write it to the
  `models.yaml` file, and load it.
- `!edit_model <model_name>` will bring up a previously-defined model
  and prompt the user to edit its descriptive fields.
Example of !import_model
<pre>
invoke> <b>!import_model models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt</b>
>> Model import in process. Please enter the values needed to configure this model:
Name for this model: <b>waifu-diffusion</b>
Description of this model: <b>Waifu Diffusion v1.3</b>
Configuration file for this model: <b>configs/stable-diffusion/v1-inference.yaml</b>
Default image width: <b>512</b>
Default image height: <b>512</b>
>> New configuration:
waifu-diffusion:
     config: configs/stable-diffusion/v1-inference.yaml
     description: Waifu Diffusion v1.3
     height: 512
     weights: models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
     width: 512
OK to import [n]? <b>y</b>
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
| LatentDiffusion: Running in eps-prediction mode
| DiffusionWrapper has 859.52 M params.
| Making attention of type 'vanilla' with 512 in_channels
| Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
| Making attention of type 'vanilla' with 512 in_channels
| Using faster float16 precision
</pre>
Example of !edit_model
<pre>
invoke> <b>!edit_model waifu-diffusion</b>
>> Editing model waifu-diffusion from configuration file ./configs/models.yaml
description: <b>Waifu diffusion v1.4beta</b>
weights: models/ldm/stable-diffusion-v1/<b>model-epoch10-float16.ckpt</b>
config: configs/stable-diffusion/v1-inference.yaml
width: 512
height: 512
>> New configuration:
waifu-diffusion:
     config: configs/stable-diffusion/v1-inference.yaml
     description: Waifu diffusion v1.4beta
     weights: models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
     height: 512
     width: 512
OK to import [n]? y
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
...
</pre>
This commit "reverts" the new API changes by extracting the old
functionality into new files.
The work is based on the commit `803a51d5adca7e6e28491fc414fd3937bee7cb79`
PngWriter regained PromptFormatter as old server used that.
`server_legacy.py` is the old server that `dream.py` used.
Finally `legacy_api.py` is what `dream.py` used to be at the mentioned
commit.
One manually run test has been added in order to be able to test
compatibility with the old API, currently just testing that the API
endpoint works the same way + the image hash is the same as it used to
be before.
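A minimal sketch of the kind of hash-comparison check described above. The endpoint URL, request fields, and reference digest are illustrative assumptions, not the actual test code:
```python
import hashlib
import requests

def test_legacy_api_image_hash():
    # Assumed legacy server address and form fields; adjust to match
    # the real server_legacy.py endpoint.
    resp = requests.post(
        'http://localhost:9090/',
        data={'prompt': 'test', 'seed': 42},
    )
    resp.raise_for_status()
    # Compare the returned image bytes against a known-good digest
    # captured before the refactor ('<reference-hash>' is a placeholder).
    digest = hashlib.sha256(resp.content).hexdigest()
    assert digest == '<reference-hash>'
```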
- This PR enables two new commands in the invoke.py script:
  !models          -- list the available models and their cache status
  !switch <model>  -- switch to the indicated model
Example:
<pre>
invoke> !models
laion400m              not loaded  Latent Diffusion LAION400M model
stable-diffusion-1.4   active      Stable Diffusion inference model version 1.4
waifu-1.3              cached      Waifu anime model version 1.3
invoke> !switch waifu-1.3
>> Caching model stable-diffusion-1.4 in system RAM
>> Retrieving model waifu-1.3 from system RAM cache
</pre>
The names and descriptions of the models are taken from
`configs/models.yaml`. A future enhancement to `model_cache.py` will
allow new model stanzas to be added to the file programmatically. This
will be useful for the WebGUI.
More details:
- Use fast switching algorithm described in PR #948
- Models are selected using their configuration stanza name
given in models.yaml.
- To avoid filling up CPU RAM with cached models, this PR
  implements an LRU cache that monitors available CPU RAM; a sketch
  of the idea appears at the end of this entry.
- The caching code allows the minimum value of available RAM
to be adjusted, but invoke.py does not currently have a
command-line argument that allows you to set it. The
minimum free RAM is arbitrarily set to 2 GB.
- Add optional description field to configs/models.yaml
Unrelated fixes:
- Added ">>" to CompVis model loading messages to make the user
  experience more consistent.
- When generating an image larger than the default size, the warning
  about possibly filling VRAM is now printed only the first time.
- Fixed a bug that caused the help message to be printed twice. This
  involved moving the import line for the web backend into the section
  where it is called.
Coauthored by: @ArDiouscuros
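A minimal sketch of the RAM-aware LRU idea described above. This is not the actual `model_cache.py` implementation; the class and method names are illustrative, and `psutil` is assumed for the free-RAM check:
```python
from collections import OrderedDict
import psutil

GIG = 2 ** 30
MIN_FREE_RAM = 2 * GIG  # the arbitrary 2 GB floor mentioned above

class ModelCache:
    def __init__(self):
        self._models = OrderedDict()  # name -> model object, oldest first

    def get(self, name):
        model = self._models.pop(name)  # KeyError if the model is absent
        self._models[name] = model      # re-insert as most recently used
        return model

    def put(self, name, model):
        self._models[name] = model
        # Evict least-recently-used models while free RAM is below the floor
        while (len(self._models) > 1
               and psutil.virtual_memory().available < MIN_FREE_RAM):
            evicted, _ = self._models.popitem(last=False)
            print(f'>> Evicting {evicted} from system RAM cache')
```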
- better readability in dark mode since the color change
- better-looking changelog
- fix images which were not loading
- also center most of the images
- fix some syntax errors, such as:
  - headlines ending with a colon
  - codeblocks with wrong fences
  - codeblocks without a shell language tag
- update conda prompts from ldm to invokeai
- ...