InvokeAI/ldm
Lincoln Stein b159b2fe42 add support for safety checker (NSFW filter)
Now you can activate the Hugging Face `diffusers` library's safety
checker, which screens generated images for NSFW and other potentially
disturbing imagery.

To turn on the safety check, pass `--safety_checker` on the command
line. For developers, the equivalent is passing `safety_checker=True`
to ldm.generate.Generate(). Once the safety checker is turned on, it
cannot be turned off without creating a new Generate object.

When the safety checker is active, suspect images are blurred and
overlaid with a warning icon. A warning message is also printed in
the CLI, though it can be easy to miss because of where it lands in
the output stream.
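The behavior can be approximated with the `diffusers` safety checker
directly. The sketch below is not InvokeAI's exact implementation; the
blur radius, warning text, and helper name are assumptions, while the
checker model id and its calling convention follow the `diffusers`
Stable Diffusion pipeline.

```python
# Sketch: run the diffusers safety check on one image and blur it if
# flagged. Illustrative only; blur radius and message are assumptions.
import numpy as np
from PIL import Image, ImageFilter
from diffusers.pipelines.stable_diffusion.safety_checker import (
    StableDiffusionSafetyChecker,
)
from transformers import AutoFeatureExtractor

SAFETY_MODEL_ID = "CompVis/stable-diffusion-safety-checker"
checker = StableDiffusionSafetyChecker.from_pretrained(SAFETY_MODEL_ID)
extractor = AutoFeatureExtractor.from_pretrained(SAFETY_MODEL_ID)

def check_and_blur(image: Image.Image) -> Image.Image:
    features = extractor([image], return_tensors="pt")
    # The checker expects a batch of HWC float arrays in [0, 1];
    # it returns (possibly-modified images, per-image NSFW flags).
    batch = np.array(image)[None].astype(np.float32) / 255.0
    _, has_nsfw = checker(images=batch, clip_input=features.pixel_values)
    if has_nsfw[0]:
        print(">> Potential NSFW content detected; image has been blurred.")
        image = image.filter(ImageFilter.GaussianBlur(radius=32))
    return image
```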

There is a slight but noticeable delay when the safety checker runs.

Note that invisible watermarking is *not* currently implemented. The
watermarking code in the CompVis distribution uses a library that does
not seem to be able to retrieve the watermarks it creates, and neither
Hugging Face `diffusers` nor other SD distributions appear to be doing
any watermarking.
2022-10-23 22:26:18 -04:00
data Textual Inversion for M1 2022-09-27 01:39:17 +02:00
invoke add support for safety checker (NSFW filter) 2022-10-23 22:26:18 -04:00
models Fix typo 2022-10-18 08:29:26 -04:00
modules Merge branch 'development' into fix-high-step-count 2022-10-21 06:55:17 -04:00
generate.py add support for safety checker (NSFW filter) 2022-10-23 22:26:18 -04:00
lr_scheduler.py prettified all the code using "blue" at the urging of @tildebyte 2022-08-26 03:15:42 -04:00
simplet2i.py Squashed commit of the following: 2022-09-12 14:31:48 -04:00
util.py add ability to import and edit alternative models online 2022-10-13 23:48:07 -04:00