- ldm.generate.Generate() now takes an argument named `max_load_models`.
This is an integer that limits the model cache size. When the cache
reaches the limit, older models are purged from the cache.
- The CLI takes an argument --max_load_models, which defaults to 2. This keeps
one model in GPU and the other in CPU memory, switching back and forth
quickly (see the sketch below).
- To disable model caching entirely, pass --max_load_models=1
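A minimal API sketch of the same cache limit, assuming the keyword named
above; the exact Generate() constructor signature may differ:
~~~
# Hypothetical API usage based on the description above; the exact
# constructor signature of ldm.generate.Generate() may differ.
from ldm.generate import Generate

gr = Generate(max_load_models=2)  # keep at most two models cached
# Passing max_load_models=1 disables caching, mirroring --max_load_models=1.
~~~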
- The user can choose to download just the recommended models, customize the
list to download, or skip downloading altogether.
- Downloads go directly to the models directory instead of to the HuggingFace
cache.
- Interrupted downloads can be resumed.
- The user can select which weight files to download via the HuggingFace cache.
- The user must log in to HuggingFace, generate an access token, and accept the
license terms the very first time this is run. After that, everything works
automatically.
- Added a placeholder for documentation on installing models.
- Also removed unused config files; as far as I can tell they were not needed
for textual inversion.
This was a difficult merge because both PR #1108 and #1243 made
changes to obscure parts of the diffusion code.
- prompt weighting, merging and cross-attention working
- cross-attention does not work with the runwayML inpainting
model, but weighting and merging are tested and working
- CLI command parsing code rewritten in order to get embedded
quotes right
- --hires now works with the runwayML inpainting model
- --embiggen does not work with runwayML and will give an error
- Added an --invert option to invert masks applied to inpainting (see the
example below)
- Updated documentation
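For example, a masked inpainting call might add the new flag like this (an
illustrative sketch; -I and -M are the usual init-image and mask switches, and
the exact spelling of the inversion flag follows the text above):
~~~
# Illustrative invocation only.
invoke> "a ruined castle" -I photo.png -M photo_mask.png --invert
~~~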
Now you can activate the Hugging Face `diffusers` library safety check
for NSFW and other potentially disturbing imagery.
To turn on the safety check, pass --safety_checker at the command
line. For developers, the flag is `safety_checker=True` passed to
ldm.generate.Generate() (see the sketch below). Once the safety checker
is turned on, it cannot be turned off unless you create a new Generate
object.
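A minimal API sketch, assuming only the constructor keyword named above; other
arguments are omitted and the exact signature may differ:
~~~
# Hypothetical usage based on the description above.
from ldm.generate import Generate

gr = Generate(safety_checker=True)  # checking stays on for this object's lifetime
# To generate without the checker, construct a separate Generate() instance.
~~~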
When the safety checker is active, suspect images will be blurred and
a warning icon is added. There is also a warning message printed in
the CLI, but it can be a little hard to see because of its positioning
in the output stream.
There is a slight but noticeable delay when the safety checker runs.
Note that invisible watermarking is *not* currently implemented. The
watermarking code in the CompViz distribution uses a library that does
not seem to be able to retrieve the watermarks it creates, and it does
not appear that Hugging Face `diffusers` or other SD distributions are
doing any watermarking.
To add a VAE autoencoder to an existing model:
1. Download the appropriate autoencoder and put it into
models/ldm/stable-diffusion
Note that you MUST use a VAE that was written for the
original CompViz Stable Diffusion codebase. For v1.4,
that would be the file named vae-ft-mse-840000-ema-pruned.ckpt
that you can download from https://huggingface.co/stabilityai/sd-vae-ft-mse-original
2. Edit config/models.yaml to contain the following stanza, modifying `weights`
and `vae` as required to match the weights and vae model file names. There is
no requirement to rename the VAE file.
~~~
stable-diffusion-1.4:
  weights: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
  description: Stable Diffusion v1.4
  config: configs/stable-diffusion/v1-inference.yaml
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  width: 512
  height: 512
~~~
3. Alternatively from within the `invoke.py` CLI, you may use the command
`!editmodel stable-diffusion-1.4` to bring up a simple editor that will
allow you to add the path to the VAE.
4. If you are just installing InvokeAI for the first time, you can also
use `!import_model models/ldm/stable-diffusion/sd-v1.4.ckpt` instead
to create the configuration from scratch.
5. That's it!
- code for committing config changes to models.yaml is now in a module
rather than in the invoke script
- the model marked "default" is now loaded if no model is specified on
the command line (see the sketch below)
- uncache changed models when edited, so that they reload properly
- removed laion from models.yaml and added stable-diffusion-1.5
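For illustration, a stanza marked as the default might look like the
following; the `default` key name and the weights path are assumptions based
on the description above:
~~~
stable-diffusion-1.5:
  weights: models/ldm/stable-diffusion-v1/sd-v1-5.ckpt   # illustrative path
  description: Stable Diffusion v1.5
  config: configs/stable-diffusion/v1-inference.yaml
  width: 512
  height: 512
  default: true   # hypothetical flag: load this model when none is given on the command line
~~~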
- The !mask command takes an image path, a text prompt, and
(optionally) a masking threshold. It creates a mask over the region
indicated by the prompt, and outputs several files that show which
regions will be masked by the chosen prompt and threshold.
- The mask images should not be passed directly to img2img because
they are designed for visualization only. Instead, use the
--text_mask option to pass the selected prompt and threshold (see
the example below).
- See docs/features/INPAINTING.md for details.
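An illustrative session; the !mask argument spellings are assumptions based on
the description above, with -tm standing in for --text_mask:
~~~
# Hypothetical session; exact switch spellings may differ.
invoke> !mask outputs/img-samples/000018.1234567.png -tm "the dog" 0.5
invoke> "a golden retriever" -I outputs/img-samples/000018.1234567.png --text_mask "the dog" 0.5
~~~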
- The directory "models" in the main InvokeAI directory was conflicting
with loading "models.clipseg". To fix this issue, I have renamed the
models.clipseg to clipseg_models.clipseg, and applied this change to
the 'models-rename' branch of invoke-ai's fork of clipseg.
- In the CLI: the argument is --png_compression <0..9> (-z<0..9>)
- In the API, pass `compress_level` to PngWriter.save_image_and_prompt_to_png()
(see the sketch below)
Compression ranges from 0 (no compression) to 9 (maximum compression).
The default value is 6 (the default specified by the Pillow package).
This addresses an issue first raised in #652.
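A minimal API sketch; only the `compress_level` keyword comes from the text
above, while the import path and the other arguments are assumptions about the
writer's signature:
~~~
# Hypothetical usage; argument order and names other than compress_level are assumed.
from PIL import Image
from ldm.dream.pngwriter import PngWriter

writer = PngWriter('outputs/img-samples')
image = Image.open('outputs/img-samples/000056.292144555.png')
# compress_level: 0 = no compression ... 9 = maximum; Pillow's default is 6.
writer.save_image_and_prompt_to_png(image, '"a sunlit meadow" -s50', '000057.123456789.png',
                                    compress_level=9)
~~~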
- --inpaint_replace 0.X will cause inpainting to ignore what is under
the masked region with a strength ranging from 0 (don't ignore at all)
to 1.0 (ignore completely)
- sync with upstream development
- update docs
- Error checks for invalid model
- Add !del_model command to invoke.py (see the example below)
- Add del_model() method to model_cache
- Autocompleter kept in sync with model addition/removal.
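An illustrative session; the confirmation message printed by !del_model is
made up:
~~~
invoke> !del_model waifu-1.3
>> Deleting waifu-1.3 from config/models.yaml
~~~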
At step counts greater than ~75, the ksamplers start producing noisy
images when using the Karras noise schedule. This PR reverts to using
the model's own noise schedule, which eliminates the problem at the
cost of slowing convergence at lower step counts.
This PR also introduces a new CLI `--save_intermediates <n>` argument,
which will save every nth intermediate image into a subdirectory
named `intermediates/<image_prefix>` (see the example below).
Addresses issue #1083.
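An illustrative invocation of the new flag; only --save_intermediates itself
comes from the text above:
~~~
# Save every 5th intermediate image of a 100-step render.
invoke> "a lighthouse at dusk" -s 100 --save_intermediates 5
# intermediates are written to a subdirectory named intermediates/<image_prefix>
~~~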
- !import_model <path/to/model/weights> will import a new model,
prompt the user for its name and description, write it to the
models.yaml file, and load it.
- !edit_model <model_name> will bring up a previously-defined model
and prompt the user to edit its descriptive fields.
Example of !import_model
<pre>
invoke> <b>!import_model models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt</b>
>> Model import in process. Please enter the values needed to configure this model:
Name for this model: <b>waifu-diffusion</b>
Description of this model: <b>Waifu Diffusion v1.3</b>
Configuration file for this model: <b>configs/stable-diffusion/v1-inference.yaml</b>
Default image width: <b>512</b>
Default image height: <b>512</b>
>> New configuration:
waifu-diffusion:
config: configs/stable-diffusion/v1-inference.yaml
description: Waifu Diffusion v1.3
height: 512
weights: models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
width: 512
OK to import [n]? <b>y</b>
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
| LatentDiffusion: Running in eps-prediction mode
| DiffusionWrapper has 859.52 M params.
| Making attention of type 'vanilla' with 512 in_channels
| Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
| Making attention of type 'vanilla' with 512 in_channels
| Using faster float16 precision
</pre>
Example of !edit_model
<pre>
invoke> <b>!edit_model waifu-diffusion</b>
>> Editing model waifu-diffusion from configuration file ./configs/models.yaml
description: <b>Waifu diffusion v1.4beta</b>
weights: models/ldm/stable-diffusion-v1/<b>model-epoch10-float16.ckpt</b>
config: configs/stable-diffusion/v1-inference.yaml
width: 512
height: 512
>> New configuration:
waifu-diffusion:
config: configs/stable-diffusion/v1-inference.yaml
description: Waifu diffusion v1.4beta
weights: models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
height: 512
width: 512
OK to import [n]? y
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
...
</pre>
This commit "reverts" the new API changes by extracting the old
functionality into new files.
The work is based on commit `803a51d5adca7e6e28491fc414fd3937bee7cb79`.
PngWriter regained PromptFormatter, since the old server used it.
`server_legacy.py` is the old server that `dream.py` used.
Finally, `legacy_api.py` is what `dream.py` used to be at the mentioned
commit.
One manually-run test has been added so that compatibility with the old
API can be verified; currently it just checks that the API endpoint works
the same way and that the image hash is the same as it was before.
- This PR enables two new commands in the invoke.py script
!models -- list the available models and their cache status
!switch <model> -- switch to the indicated model
Example:
invoke> !models
laion400m not loaded Latent Diffusion LAION400M model
stable-diffusion-1.4 active Stable Diffusion inference model version 1.4
waifu-1.3 cached Waifu anime model version 1.3
invoke> !switch waifu-1.3
>> Caching model stable-diffusion-1.4 in system RAM
>> Retrieving model waifu-1.3 from system RAM cache
The name and descriptions of the models are taken from
`config/models.yaml`. A future enhancement to `model_cache.py` will be
to enable new model stanzas to be added to the file
programmatically. This will be useful for the WebGUI.
More details:
- Use fast switching algorithm described in PR #948
- Models are selected using their configuration stanza name
given in models.yaml.
- To avoid filling up CPU RAM with cached models, this PR
implements an LRU cache that monitors available CPU RAM
(see the sketch after this list).
- The caching code allows the minimum value of available RAM
to be adjusted, but invoke.py does not currently have a
command-line argument that allows you to set it. The
minimum free RAM is arbitrarily set to 2 GB.
- Add optional description field to configs/models.yaml
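A minimal sketch of the kind of LRU eviction described above, not the
project's actual model_cache implementation; the psutil-based free-RAM check
and the 2 GB floor mirror the description:
~~~
# Illustrative only; the real model_cache differs in detail.
from collections import OrderedDict
import psutil

MIN_FREE_RAM_GB = 2  # arbitrary floor, as described above

class SimpleModelCache:
    def __init__(self):
        self.models = OrderedDict()  # name -> model, oldest first

    def get(self, name, loader):
        if name in self.models:
            self.models.move_to_end(name)      # mark as most recently used
        else:
            self._make_room()
            self.models[name] = loader(name)   # load and cache the model
        return self.models[name]

    def _make_room(self):
        # Evict least-recently-used models while free CPU RAM is below the floor.
        while self.models and psutil.virtual_memory().available < MIN_FREE_RAM_GB * 2**30:
            evicted, _ = self.models.popitem(last=False)
            print(f'>> Evicting {evicted} from RAM cache')
~~~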
Unrelated fixes:
- Added ">>" to CompViz model loading messages in order to make user experience
more consistent.
- When generating an image larger than the default dimensions, the warning
about possibly filling VRAM is now printed only the first time.
- Fixed a bug that was causing the help message to be printed twice. This
involved moving the import line for the web backend into the section where
it is called.
Coauthored by: @ArDiouscuros
- rename dream.py to invoke.py
- create a compatibility script named dream.py that exec()s invoke.py
(see the sketch below)
- redo documentation
- change help message in args
- this does **not** rename the libraries, which are still ldm.dream.util, etc
- updated environment-mac.yml #932
- use the upstream GFPGAN library now that the color-changing issues are
fixed and facial recognition is improved #905
- preload_models fixed to download additional models needed by gfpgan
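A minimal sketch of what such a compatibility shim could look like; the
message text, the script path, and the exec mechanism are assumptions rather
than the script actually committed:
~~~
# dream.py (hypothetical compatibility shim) -- forwards to invoke.py so
# that old invocations keep working.
import sys

print('>> "dream.py" has been renamed to "invoke.py"; forwarding...', file=sys.stderr)
with open('scripts/invoke.py') as f:
    exec(f.read())  # run invoke.py in this interpreter
~~~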
- Added support for pyreadline3 so that Windows users can benefit.
- Added the !search command to search the history for a matching string:
~~~
!search puppies
[20] puppies at the food bowl -Ak_lms
[54] house overrun by hungry puppies -C20 -s100
~~~
- Added the !clear command to clear the in-memory and on-disk
command history.
- When generating multiple images, the first seed was being reused for the
second and subsequent files. This should only happen when variations are
being generated. Now fixed.
- img2img confirmed working with all samplers
- inpainting working on ddim & plms. Changes to k-diffusion
module seem to be needed for inpainting support.
- switched the k-diffusion noise schedule to the original Karras schedule,
which reduces the number of steps needed for good results
- If readline.set_auto_history() is not implemented, as in pyreadline3, the
script falls back gracefully to automatic history saving. The only issue
with this is that !history commands will be recorded in the history.
- !fetch on a missing file no longer crashes the script.
- !history is now one of the autocomplete commands.
- .dream_history is now stored in the output directory rather than the ~user
directory.
An important limitation of the last feature is that the history is
loaded and saved to the .dream_history file in the --outdir directory
specified at script launch time. It is not swapped around when the
--outdir is changed during the session.
- When --save_orig *not* provided during image generation with
upscaling/face fixing, an extra image file was being created. This
PR fixes the problem.
- Also generalizes the tab autocomplete for image paths such that
autocomplete searches the output directory for all path-modifying
options except for --outdir.
- normalized how filenames are written out when postprocessing invoked
- various fixes of bugs encountered during testing
- updated documentation
- updated help text
- Enhance tab completion functionality
- Each of the switches that reads a filepath (e.g. --init_img) will trigger
file path completion. The -S switch will display a list of recently-used
seeds.
- Added a new !fetch command to retrieve the metadata from a previously-generated
image and populate the readline line buffer with the appropriate editable
command to regenerate it (see the example below).
- Added a new !history command to display previous commands and reload them
for modification.
- The !fetch and !fix commands both autocomplete *and* search automatically
through the current outdir for files.
- The completer maintains a list of recently used seeds and will try to
autocomplete them.
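An illustrative session; the fetched command line and the history entries are
made up, following the format of the !search example earlier in these notes:
~~~
invoke> !fetch outputs/img-samples/000011.455191633.png
invoke> "a fantastical translucent pony" -s20 -W512 -H512 -S455191633
invoke> !history
[1] "a fantastical translucent pony" -s20 -W512 -H512
[2] !fetch outputs/img-samples/000011.455191633.png
~~~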
Note: in Warp Terminal, the URL printed at this point cannot be clicked
directly to open the browser, and Chrome shows a blank page. It should open
properly once you remove it.
1. Add ldm/dream/restoration/__init__.py file that was inadvertently not
committed earlier.
2. Add '.' to sys.path to address weird mac problem reported in #723
- Adapted from PR #489, author Dominic Letz [https://github.com/dominicletz]
- Too many upstream changes to merge, so frankensteined it in.
- Added support for !fix syntax
- Added documentation
- The seed printed needs to be the one generated prior to the
initial noising operation. To do this, I added a new "first_seed"
argument to the image callback in dream.py (see the sketch below).
- Closes #641
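A minimal sketch of a callback that accepts the new argument; the exact
signature used by dream.py is an assumption based on the description above:
~~~
# Hypothetical callback; the real signature in dream.py may differ.
def image_writer(image, seed, first_seed=None, upscaled=False):
    # first_seed is the seed generated before the initial noising step,
    # so it is the value that should be printed/recorded for the image.
    shown = first_seed if first_seed is not None else seed
    print(f'>> writing image with seed {shown}')
~~~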
- modify strength of embiggen to reduce tiling ghosts
- normalize naming of postprocessed files (could improve more to avoid
name collisions)
- move restoration modules under ldm.dream
- supports gfpgan, esrgan, codeformer and embiggen
- To use:
dream> !fix ./outputs/img-samples/000056.292144555.png -ft gfpgan -U2 -G0.8
dream> !fix ./outputs/img-samples/000056.292144555.png -ft codeformer -G 0.8
dream> !fix ./outputs/img-samples/000056.29214455.png -U4
dream> !fix ./outputs/img-samples/000056.292144555.png -embiggen 1.5
The first example invokes gfpgan to fix faces and esrgan to upscale 2X.
The second example invokes codeformer to fix faces, with no upscaling.
The third example uses esrgan to upscale 4X.
The fourth example runs embiggen to enlarge 1.5X.
- This is very preliminary work. There are some anomalies to note:
1. The syntax is non-obvious. I would prefer something like:
!fix esrgan,gfpgan
!fix esrgan
!fix embiggen,codeformer
However, this will require refactoring the gfpgan and embiggen
code.
2. Images generated using gfpgan, esrgan or codeformer all are named
"xxxxxx.xxxxxx.postprocessed.png" and the original is saved.
However, the prefix is a new one that is not related to the
original.
3. Images generated using embiggen are named "xxxxx.xxxxxxx.png",
and once again the prefix is new. I'm not sure whether the
prefix should be aligned with the original file's prefix or not.
Probably not, but opinions welcome.
Allowed values are 'auto', 'float32', 'autocast' and 'float16'. If not
specified, or set to 'auto', a working precision is automatically selected
based on the torch device (see the sketch below).
Context: #526
Deprecated --full_precision / -F
Tested on both cuda and cpu by calling scripts/dream.py without arguments and
checking that the auto configuration worked. With
--precision=auto/float32/autocast/float16 it performs as expected, either
working or failing with a reasonable error. Also checked img2img.
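A minimal sketch of the kind of 'auto' selection described above, assuming
CUDA devices get autocast and CPU gets full float32; this is illustrative, not
the committed implementation:
~~~
# Illustrative only; the real selection logic may consider more cases.
import torch

def choose_autoconfigured_precision(device: torch.device) -> str:
    if device.type == 'cuda':
        return 'autocast'  # mixed precision on the GPU
    return 'float32'       # full precision on the CPU

print(choose_autoconfigured_precision(
    torch.device('cuda' if torch.cuda.is_available() else 'cpu')))
~~~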