- ldm.generate.Generate() now takes an argument named `max_load_models`.
  This is an integer that limits the size of the model cache. When the
  cache reaches the limit, the oldest models are purged from it.
- The CLI takes an argument --max_load_models, which defaults to 2. This
  keeps one model in GPU and the other in CPU memory, allowing quick
  switching between the two.
- To disable model caching entirely, pass --max_load_models=1
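A minimal sketch of the Python-side equivalent, assuming the constructor
keyword matches the CLI flag:

```python
from ldm.generate import Generate

# Cache up to two models, mirroring the CLI default of --max_load_models=2.
gr = Generate(max_load_models=2)

# Disable caching entirely, mirroring --max_load_models=1.
gr_uncached = Generate(max_load_models=1)
```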
- Users can now select which weight files to download via the Hugging
  Face cache
- The very first time this is run, the user must log in to Hugging Face,
  generate an access token, and accept the license terms. After that,
  everything works automatically (see the sketch below)
- Added a placeholder for documentation on installing models
- Also removed unused config files; as far as I can tell they were not
  needed for textual inversion
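For reference, a hedged sketch of the one-time login from Python using
the `huggingface_hub` library (the token string is a placeholder;
`huggingface-cli login` does the same thing from the shell):

```python
from huggingface_hub import login

# One-time authentication; generate the token at
# https://huggingface.co/settings/tokens
login(token="hf_...")  # placeholder token
```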
The Args object would fail when trying to retrieve metadata from an
image file that did not contain InvokeAI-generated metadata, such as a
JPG. This corrects that by returning dummy values (a seed of zero and an
empty prompt) to avoid downstream breakage.
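A sketch of the fallback behavior; the `sd-metadata` key is an
assumption about where InvokeAI embeds its PNG metadata:

```python
import json
from PIL import Image

def metadata_or_defaults(path: str) -> dict:
    """Return embedded metadata, or dummy values for foreign images (e.g. JPGs)."""
    raw = Image.open(path).info.get('sd-metadata')  # assumed key name
    if raw is None:
        return {'seed': 0, 'prompt': ''}  # dummies to avoid downstream breakage
    return json.loads(raw)
```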
The k_samplers come with a "karras" noise schedule which performs very
well at low step counts but becomes noisy at higher ones. This commit
introduces a threshold (currently 30 steps) at which the k-samplers
switch over from the Karras schedule to the model's original noise
schedule.
- This sets a step-count switchover point at which the k-samplers stop
  using the Karras noise schedule and start using the LatentDiffusion
  noise schedule. The advantage is that the Karras schedule produces
  excellent results at low step counts but becomes unstable at high
  step counts.
- A new command-line argument, --karras_max, lets the user set where the
  switchover occurs. The default is 29 (steps 1-29 use Karras; 30 or
  greater use LDM). See the sketch after this list.
- Tildebyte, sorry to do a fast-forward three-way merge for this, but
  rebasing was just too painful due to extensive recent changes to the
  diffuser code.
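A minimal sketch of the switchover logic, with hypothetical function and
constant names:

```python
KARRAS_MAX = 29  # default switchover; overridable with --karras_max

def use_karras_sigmas(steps: int, karras_max: int = KARRAS_MAX) -> bool:
    """Karras noise schedule for short runs, the native LDM schedule otherwise."""
    return steps <= karras_max
```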
This was a difficult merge because PRs #1108 and #1243 both made changes
to obscure parts of the diffusion code.
- Prompt weighting, merging, and cross-attention are working
- Cross-attention does not work with the runwayML inpainting model, but
  weighting and merging are tested and working
- The CLI command-parsing code was rewritten to handle embedded quotes
  correctly (see the sketch after this list)
- --hires now works with the runwayML inpainting model
- --embiggen does not work with runwayML and will give an error
- Added an --invert option to invert masks applied to inpainting
- Updated documentation
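For illustration, a sketch of quote-aware tokenization in the spirit of
the rewrite, using Python's standard `shlex` module (the actual parser
may differ):

```python
import shlex

# A command line with an embedded, escaped quote inside the prompt.
line = '"a man with a \\"crow\\" on his shoulder" -s 30'

# shlex honors quoting and escapes, so the prompt survives as one token:
# ['a man with a "crow" on his shoulder', '-s', '30']
print(shlex.split(line))
```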
- change default model back to 1.4
- remove --fnformat from canonicalized dream prompt arguments
(not needed for image reproducibility)
- add -tm to canonicalized dream prompt arguments
(definitely needed for image reproducibility)
- The plms sampler now works with the custom inpainting model
- Quashed a bug that was causing generation on normal models to fail
  (oops!)
- Can now generate non-square images with the custom inpainting model
Credits for advice and assistance during porting:
@any-winter-4079 (http://github.com/any-winter-4079)
@db3000 (Danny Beer http://github.com/db3000)
This is still a work in progress but seems functional. It supports
inpainting, txt2img and img2img on the ddim and k* samplers (plms
still needs work, but I know what to do).
To test this, get the file `sd-v1-5-inpainting.ckpt` from
https://huggingface.co/runwayml/stable-diffusion-inpainting and place it
at `models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt`
Launch invoke.py with --model inpainting-1.5 and proceed as usual.
Caveats:
1. The inpainting model takes about 800 MB more memory than the standard
   1.5 model. It will not work on 4 GB cards.
2. The inpainting model is temperamental. It wants you to describe the
entire scene and not just the masked area to replace. So if you want
to replace the parrot on a man's shoulder with a crow, the prompt
"crow" may fail. Try "man with a crow on shoulder" instead. The
symptom of a failed inpainting is that the area will be erased and
replaced with background.
3. This has not been tested well. Please report bugs.
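For convenience, a minimal sketch of the same workflow from Python; the
file paths are hypothetical and the keyword names are assumptions based
on the CLI flags above:

```python
from ldm.generate import Generate

# Load the custom inpainting model instead of the default.
gr = Generate(model='inpainting-1.5')

# Per caveat 2, describe the whole scene rather than just the masked region.
results = gr.prompt2image(
    prompt='man with a crow on shoulder',
    init_img='photos/man_with_parrot.png',   # hypothetical input image
    init_mask='photos/parrot_mask.png',      # hypothetical mask
)
```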
- This is a merge of the final version of PR #1218 "Inpainting
Improvements"
Various merge conflicts made it easier to commit directly.
Author: Kyle0654
Co-Author: lstein
Now you can activate the Hugging Face `diffusers` library safety check
for NSFW and other potentially disturbing imagery.
To turn on the safety check, pass --safety_checker at the command
line. For developers, the flag is `safety_checker=True` passed to
ldm.generate.Generate(). Once the safety checker is turned on, it
cannot be turned off without creating a new Generate object.
When the safety checker is active, suspect images are blurred and a
warning icon is added. There is also a warning message printed in
the CLI, but it can be a little hard to see because of its positioning
in the output stream.
There is a slight but noticeable delay when the safety checker runs.
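A minimal sketch, using the flag exactly as described above:

```python
from ldm.generate import Generate

# Safety checking stays on for the lifetime of this object; to turn it
# off, construct a new Generate instance without the flag.
gr = Generate(safety_checker=True)
```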
Note that invisible watermarking is *not* currently implemented. The
watermarking code in the CompVis distribution uses a library that does
not seem to be able to retrieve the watermarks it creates, and neither
Hugging Face `diffusers` nor other SD distributions appear to do any
watermarking.
1. If tensors are passed to inpaint as init_image and/or init_mask, then
the post-generation image fixup code will be skipped.
2. Post-generation image fixup will work with a black-and-white "L"
   mask, an "RGB" mask, or an "RGBA" mask.