InvokeAI/ldm/invoke/generator
commit b159b2fe42 (Lincoln Stein, 2022-10-23 22:26:18 -04:00): add support for safety checker (NSFW filter)
You can now activate the Hugging Face `diffusers` library's safety
checker, which screens for NSFW and other potentially disturbing imagery.

To turn on the safety check, pass `--safety_checker` at the command
line. For developers, pass `safety_checker=True` to
`ldm.generate.Generate()`. Once the safety checker is turned on it
cannot be turned off; you must create a new `Generate` object.
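
For example (a minimal sketch; the prompt and the `prompt2image()`
arguments are illustrative, and only `safety_checker=True` comes from
this change):

```python
from ldm.generate import Generate

# Safety checking is fixed at construction time; to turn it off you
# must build a fresh Generate object.
gr = Generate(safety_checker=True)
results = gr.prompt2image(prompt='a sunlit meadow', iterations=1)
```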

When the safety checker is active, suspect images are blurred and
overlaid with a warning icon. A warning message is also printed in the
CLI, but it can be easy to miss because of where it lands in the output
stream.
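
For reference, the underlying check is the standard `diffusers` safety
checker. The following standalone sketch shows roughly how that checker
is invoked; the helper name and the preprocessing details are
assumptions, not this commit's code:

```python
import numpy as np
from diffusers.pipelines.stable_diffusion.safety_checker import (
    StableDiffusionSafetyChecker,
)
from transformers import AutoFeatureExtractor

SAFETY_MODEL_ID = 'CompVis/stable-diffusion-safety-checker'
safety_checker = StableDiffusionSafetyChecker.from_pretrained(SAFETY_MODEL_ID)
feature_extractor = AutoFeatureExtractor.from_pretrained(SAFETY_MODEL_ID)

def is_flagged(pil_image) -> bool:
    """Hypothetical helper: True if the checker flags the image."""
    clip_input = feature_extractor(pil_image, return_tensors='pt')
    # The checker expects a batch of float images scaled to [0, 1].
    np_image = np.array(pil_image, dtype=np.float32)[None] / 255.0
    _, has_nsfw = safety_checker(
        images=np_image, clip_input=clip_input.pixel_values
    )
    return has_nsfw[0]
```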

There is a slight but noticeable delay when the safety checker runs.

Note that invisible watermarking is *not* currently implemented. The
watermarking code shipped with the CompVis distribution uses a library
that does not seem able to retrieve the watermarks it creates, and
neither Hugging Face `diffusers` nor other SD distributions appear to
do any watermarking.
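
For reference, the CompVis scripts use the `invisible-watermark`
package. A round-trip attempt looks roughly like this (the file path
and payload are placeholders); it is the decode step that fails to
recover the embedded payload:

```python
import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

bgr = cv2.imread('output.png')  # placeholder path

encoder = WatermarkEncoder()
encoder.set_watermark('bytes', b'test')
marked = encoder.encode(bgr, 'dwtDct')

decoder = WatermarkDecoder('bytes', 32)  # payload length in bits
recovered = decoder.decode(marked, 'dwtDct')
# Per the note above, `recovered` often does not match b'test'.
print(recovered)
```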
| File | Last commit | Date |
| --- | --- | --- |
| `__init__.py` | rename all modules from ldm.dream to ldm.invoke | 2022-10-08 11:37:23 -04:00 |
| `base.py` | add support for safety checker (NSFW filter) | 2022-10-23 22:26:18 -04:00 |
| `embiggen.py` | rename all modules from ldm.dream to ldm.invoke | 2022-10-08 11:37:23 -04:00 |
| `img2img.py` | ported code refactor changes from PR #1221 | 2022-10-23 09:33:15 -04:00 |
| `inpaint.py` | minor fixes to inpaint code | 2022-10-23 09:33:15 -04:00 |
| `txt2img.py` | add clipseg support for creating inpaint masks from text | 2022-10-18 08:27:48 -04:00 |
| `txt2img2img.py` | rename all modules from ldm.dream to ldm.invoke | 2022-10-08 11:37:23 -04:00 |