Commit Graph

334 Commits

Author SHA1 Message Date
Lincoln Stein
b159b2fe42 add support for safety checker (NSFW filter)
Now you can activate the Hugging Face `diffusers` library safety check
for NSFW and other potentially disturbing imagery.

To turn on the safety check, pass --safety_checker at the command
line. For developers, the flag is `safety_checker=True` passed to
ldm.generate.Generate(). Once the safety checker is turned on, it
cannot be turned off unless you create a new Generate object.

When the safety checker is active, suspect images are blurred and
a warning icon is added. There is also a warning message printed in
the CLI, but it can be a little hard to see because of its positioning
in the output stream.

There is a slight but noticeable delay when the safety checker runs.

Note that invisible watermarking is *not* currently implemented. The
watermark code in the CompViz distribution uses a library
that does not seem to be able to retrieve the watermarks it creates,
and it does not appear that Hugging Face `diffusers` or other SD
distributions are doing any watermarking.
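
A minimal sketch of the developer-facing usage described above. Only the
`safety_checker=True` keyword comes from this commit; the `prompt2image()`
call and its return shape are assumptions.

~~~
# Sketch only: safety_checker=True comes from the commit message above;
# prompt2image() and its return shape are assumptions, not guaranteed API.
from ldm.generate import Generate

gr = Generate(safety_checker=True)            # the filter stays on for the life of this object
results = gr.prompt2image('a sunny meadow')   # assumed to return a list of (image, seed) pairs
~~~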
2022-10-23 22:26:18 -04:00
Lincoln Stein
f25c1f900f add support for loading VAE autoencoders
To add a VAE autoencoder to an existing model:

1. Download the appropriate autoencoder and put it into
   models/ldm/stable-diffusion-v1

   Note that you MUST use a VAE that was written for the
   original CompViz Stable Diffusion codebase. For v1.4,
   that would be the file named vae-ft-mse-840000-ema-pruned.ckpt
   that you can download from https://huggingface.co/stabilityai/sd-vae-ft-mse-original

2. Edit configs/models.yaml to contain the following stanza, modifying `weights`
   and `vae` as required to match the weights and vae model file names. There is
   no requirement to rename the VAE file.

~~~
stable-diffusion-1.4:
  weights: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
  description: Stable Diffusion v1.4
  config: configs/stable-diffusion/v1-inference.yaml
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  width: 512
  height: 512
~~~

3. Alternatively, from within the `invoke.py` CLI, you may use the command
   `!editmodel stable-diffusion-1.4` to bring up a simple editor that will
   allow you to add the path to the VAE.

4. If you are just installing InvokeAI for the first time, you can also
   use `!import_model models/ldm/stable-diffusion-v1/sd-v1-4.ckpt` instead
   to create the configuration from scratch.

5. That's it!
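
As a quick sanity check (illustrative only), you can confirm that the stanza above
parses and that the `vae` path is picked up. This assumes the OmegaConf package,
which the repo's environment already provides.

~~~
# Illustrative sanity check; assumes OmegaConf is installed.
from omegaconf import OmegaConf

models = OmegaConf.load('configs/models.yaml')
print(models['stable-diffusion-1.4'].vae)
# models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
~~~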
2022-10-23 09:33:15 -04:00
Lincoln Stein
ce6d618e3b outcropping improvements
- catch syntax errors in the outcrop coordinates
- work (after a fashion) on non-Invoke generated images
2022-10-23 09:33:00 -04:00
Lincoln Stein
b2bf2b08ff Merge branch 'model-switching' into development 2022-10-21 21:27:59 -04:00
Lincoln Stein
c9f9eed04e resolve numerous small merge bugs
- This merges PR #882

Coauthor: ArDiouscuros
2022-10-21 12:57:15 -04:00
Lincoln Stein
be7de4849c
Merge branch 'development' into model-switching 2022-10-21 00:55:52 -04:00
Lincoln Stein
83e6ab08aa further improvements to model loading
- code for committing config changes to models.yaml now in module
  rather than in invoke script
- model marked "default" is now loaded if model not specified on
  command line
- uncache changed models when edited, so that they reload properly
- removed laion400m from models.yaml and added stable-diffusion-1.5
2022-10-21 00:28:54 -04:00
psychedelicious
899ba975a6 Improves logic to determine if clipseg weights should be downloaded 2022-10-20 06:56:50 -04:00
psychedelicious
bfa65560eb Fixes torch.load() for MPS/CPU 2022-10-20 06:56:50 -04:00
psychedelicious
ed9307f469 Fix typo 2022-10-20 06:56:50 -04:00
Lincoln Stein
a357bf4f19 add !mask command to view output of clipseg
- The !mask command takes an image path, a text prompt, and
  (optionally) a masking threshold. It creates a mask over the region
  indicated by the prompt, and outputs several files that show which
  regions will be masked by the chosen prompt and threshold.

- The mask images should not be passed directly to img2img because
  they are designed for visualization only. Instead, use the
  --text_mask option to pass the selected prompt and threshold.

- See docs/features/INPAINTING.md for details.
2022-10-20 06:56:50 -04:00
Lincoln Stein
2ca4242f5f fix clipseg loading problems
- The directory "models" in the main InvokeAI directory was conflicting
  with loading "models.clipseg". To fix this issue, I have renamed
  models.clipseg to clipseg_models.clipseg, and applied this change to
  the 'models-rename' branch of invoke-ai's fork of clipseg.
2022-10-20 06:56:50 -04:00
Lincoln Stein
62dd3b7d7d resolve models.clipseg vs clipseg ambiguity 2022-10-18 23:09:26 -04:00
Lincoln Stein
66f6ef1b35
fix syntax errors in preload 2022-10-18 19:25:18 -04:00
Lincoln Stein
a488b14373 prevent preload warning message 2022-10-18 17:09:17 -04:00
Lincoln Stein
32122e0312 clipseg library and environment in place 2022-10-18 08:27:48 -04:00
Lincoln Stein
c974c95e2b Merge branch 'development' of github.com:invoke-ai/InvokeAI into development 2022-10-17 23:14:55 -04:00
Lincoln Stein
3b2590243c ^C at invoke> cmd line exits gracefully 2022-10-17 23:14:32 -04:00
Lincoln Stein
0cf11ce488 add option to CLI and pngwriter that allows user to set PNG compression level
- In CLI: the argument is --png_compression <0..9> (-z<0..9>)
- In API, pass `compress_level` to PngWriter.save_image_and_prompt_to_png()

Compression ranges from 0 (no compression) to 9 (maximum compression).
The default value is 6 (the default specified by the Pillow package).

This addresses an issue first raised in #652.
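
A hedged sketch of the API usage: only the `compress_level` keyword comes from this
commit; the module path and the other argument names are assumptions.

~~~
# Only compress_level comes from the commit message; the module path and the
# other argument names here are assumptions.
from PIL import Image
from ldm.invoke.pngwriter import PngWriter

image = Image.new('RGB', (512, 512))              # placeholder for a generated image
pw = PngWriter('./outputs/img-samples')
pw.save_image_and_prompt_to_png(image,
                                dream_prompt='a sunset -S42',
                                name='000001.42.png',
                                compress_level=9)  # 0 = no compression ... 9 = maximum
~~~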
2022-10-17 22:27:47 -04:00
Lincoln Stein
ef2058824a add a strength value to inpaint_replace
- --inpaint_replace 0.X will cause inpainting to ignore what is under
  the masked region with a strength ranging from 0 (don't ignore at all)
  to 1.0 (ignore completely); see the sketch below
- sync with upstream development
- update docs
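
The --inpaint_replace strength described above can be illustrated with a small sketch
of its semantics. This is not the actual implementation; it assumes the masked latents
are blended toward fresh noise, with mask == 1 inside the repainted region.

~~~
# Semantics illustration only, not the project's code. Assumes mask == 1 inside
# the region being repainted.
import torch

def apply_inpaint_replace(latents, mask, strength):
    noise = torch.randn_like(latents)
    keep = 1.0 - mask * strength                         # 0.0 keeps everything underneath
    return latents * keep + noise * (mask * strength)    # 1.0 ignores it completely
~~~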
2022-10-16 10:06:47 -04:00
Lincoln Stein
a705a5a0aa enhance support for model switching and editing
- Error checks for invalid model
- Add !del_model command to invoke.py
- Add del_model() method to model_cache
- Autocompleter kept in sync with model addition/removal.
2022-10-15 15:46:29 -04:00
Lincoln Stein
c4fb8e304b fix noisy images at high step counts
At step counts greater than ~75, the ksamplers start producing noisy
images when using the Karras noise schedule. This PR reverts to using
the model's own noise schedule, which eliminates the problem at the
cost of slowing convergence at lower step counts.

This PR also introduces a new CLI `--save_intermediates <n>` argument,
which will save every nth intermediate image into a subdirectory
named `intermediates/<image_prefix>`.

Addresses issue #1083.
2022-10-14 16:19:45 -04:00
Lincoln Stein
fe2a2cfc8b
Merge branch 'development' into model-switching 2022-10-14 13:18:59 -04:00
db3000
1ea541baa6 Reword deprecation warning for dream.py
- this plus previous commit closes #1087
2022-10-14 07:33:10 -04:00
db3000
82b7c118c4 Forward dream.py to invoke.py using the same interpreter, add deprecation warning 2022-10-14 07:31:35 -04:00
Lincoln Stein
1c501333e8 minor doc fixes 2022-10-14 07:30:26 -04:00
db3000
ce5e57d828 Generalize facetool strength argument 2022-10-14 00:03:06 -04:00
Lincoln Stein
e98fe9c22d fix noisy images at high step counts
At step counts greater than ~75, the ksamplers start producing noisy
images when using the Karras noise schedule. This PR reverts to using
the model's own noise schedule, which eliminates the problem at the
cost of slowing convergence at lower step counts.

This PR also introduces a new CLI `--save_intermediates <n>` argument,
which will save every nth intermediate image into a subdirectory
named `intermediates/<image_prefix>`.

Addresses issue #1083.
2022-10-14 00:01:59 -04:00
Lincoln Stein
6afc0f9b38 add ability to import and edit alternative models online
- !import_model <path/to/model/weights> will import a new model,
  prompt the user for its name and description, write it to the
  models.yaml file, and load it.

- !edit_model <model_name> will bring up a previously-defined model
  and prompt the user to edit its descriptive fields.

Example of !import_model

<pre>
invoke> <b>!import_model models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt</b>
>> Model import in process. Please enter the values needed to configure this model:

Name for this model: <b>waifu-diffusion</b>
Description of this model: <b>Waifu Diffusion v1.3</b>
Configuration file for this model: <b>configs/stable-diffusion/v1-inference.yaml</b>
Default image width: <b>512</b>
Default image height: <b>512</b>
>> New configuration:
waifu-diffusion:
  config: configs/stable-diffusion/v1-inference.yaml
  description: Waifu Diffusion v1.3
  height: 512
  weights: models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
  width: 512
OK to import [n]? <b>y</b>
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
   | LatentDiffusion: Running in eps-prediction mode
   | DiffusionWrapper has 859.52 M params.
   | Making attention of type 'vanilla' with 512 in_channels
   | Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
   | Making attention of type 'vanilla' with 512 in_channels
   | Using faster float16 precision
</pre>

Example of !edit_model

<pre>
invoke> <b>!edit_model waifu-diffusion</b>
>> Editing model waifu-diffusion from configuration file ./configs/models.yaml
description: <b>Waifu diffusion v1.4beta</b>
weights: models/ldm/stable-diffusion-v1/<b>model-epoch10-float16.ckpt</b>
config: configs/stable-diffusion/v1-inference.yaml
width: 512
height: 512

>> New configuration:
waifu-diffusion:
  config: configs/stable-diffusion/v1-inference.yaml
  description: Waifu diffusion v1.4beta
  weights: models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
  height: 512
  width: 512

OK to import [n]? y
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
...
</pre>
2022-10-13 23:48:07 -04:00
db3000
7f491fd2d2 Reword deprecation warning for dream.py 2022-10-13 12:12:05 -04:00
db3000
203a6d8a00 Forward dream.py to invoke.py using the same interpreter, add deprecation warning 2022-10-13 12:12:05 -04:00
CapableWeb
6c0dd9b5ef Add back old dream.py as legacy_api.py
This commit "reverts" the new API changes by extracting the old
functionality into new files.

The work is based on the commit `803a51d5adca7e6e28491fc414fd3937bee7cb79`

PngWriter regained PromptFormatter, as the old server used it.

`server_legacy.py` is the old server that `dream.py` used.

Finally `legacy_api.py` is what `dream.py` used to be at the mentioned
commit.

One manually-run test has been added to check compatibility with the old
API; currently it just verifies that the API endpoint behaves the same way
and that the image hash matches what it produced before.
2022-10-12 20:35:56 -04:00
Lincoln Stein
aa6aa68753 proposed fix to work on mps systems 2022-10-12 11:08:27 -04:00
Lincoln Stein
488334710b enable fast switching between models in invoke.py
- This PR enables two new commands in the invoke.py script

 !models         -- list the available models and their cache status
 !switch <model> -- switch to the indicated model

Example:

 invoke> !models
   laion400m            not loaded  Latent Diffusion LAION400M model
   stable-diffusion-1.4     active  Stable Diffusion inference model version 1.4
   waifu-1.3                cached  Waifu anime model version 1.3
 invoke> !switch waifu-1.3
   >> Caching model stable-diffusion-1.4 in system RAM
   >> Retrieving model waifu-1.3 from system RAM cache

The names and descriptions of the models are taken from
`configs/models.yaml`. A future enhancement to `model_cache.py` will be
to enable new model stanzas to be added to the file
programmatically. This will be useful for the WebGUI.

More details:

- Use fast switching algorithm described in PR #948
- Models are selected using their configuration stanza name
  given in models.yaml.
- To avoid filling up CPU RAM with cached models, this PR
  implements an LRU cache that monitors available CPU RAM (sketched below).
- The caching code allows the minimum value of available RAM
  to be adjusted, but invoke.py does not currently have a
  command-line argument that allows you to set it. The
  minimum free RAM is arbitrarily set to 2 GB.
- Add optional description field to configs/models.yaml

Unrelated fixes:
- Added ">>" to CompViz model loading messages in order to make user experience
  more consistent.
- When generating an image larger than the default size, warn about possibly
  filling VRAM only the first time.
- Fixed bug that was causing help message to be printed twice. This involved
  moving the import line for the web backend into the section where it is
  called.

Coauthored by: @ArDiouscuros
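
A minimal illustration of the RAM-aware LRU cache described above. This is not the
project's model_cache implementation; the class and method names are hypothetical,
and psutil is assumed to be available.

~~~
# Illustration only; not the project's model_cache. Class/method names are
# hypothetical and psutil is assumed to be installed.
from collections import OrderedDict
import psutil

MIN_FREE_RAM_GB = 2   # matches the arbitrary floor mentioned above

class RamAwareLRUCache:
    def __init__(self):
        self._models = OrderedDict()              # name -> model, oldest first

    def get(self, name, loader):
        if name in self._models:
            self._models.move_to_end(name)        # mark as most recently used
            return self._models[name]
        self._evict_until_ram_available()
        model = loader(name)                      # load from checkpoint on disk
        self._models[name] = model
        return model

    def _evict_until_ram_available(self):
        while self._models and psutil.virtual_memory().available < MIN_FREE_RAM_GB * 2**30:
            evicted, _ = self._models.popitem(last=False)   # drop least recently used
            print(f'>> Evicting {evicted} from the RAM cache')
~~~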
2022-10-12 02:37:42 -04:00
Lincoln Stein
2b1aaf4ee7 rename all modules from ldm.dream to ldm.invoke
- scripts and documentation updated to match
- ran preflight checks on both web and CLI and seems to be working
2022-10-08 11:37:23 -04:00
Lincoln Stein
98fe044dee rebrand CLI from "dream" to "invoke"
- rename dream.py to invoke.py
- create a compatibility script named dream.py that execs() invoke.py
- redo documentation
- change help message in args
- this does **not** rename the libraries, which are still ldm.dream.util, etc
2022-10-08 09:32:06 -04:00
plucked
6499b99dad revert accidental edit 2022-10-07 10:26:14 +00:00
plucked
395445e7b0 using string.format for filename formatting 2022-10-07 09:24:39 +00:00
plucked
89c6c11214 feat: adding filename format template 2022-10-07 08:32:39 +00:00
Lincoln Stein
2154dd2349 prevent crashes due to uninitialized free_gpu_mem 2022-10-06 10:54:05 -04:00
Lincoln Stein
4d5b208601 multiple fixes in response to preflight testing bug reports
- updated environment-mac.yml #932
- use the upstream GFPGAN library now that the color-changing issues are fixed
  and facial recognition is improved #905
- preload_models fixed to download additional models needed by gfpgan
2022-10-05 12:44:16 -04:00
Lincoln Stein
488890e6bb preload script more robust for downloading gfpgan 2022-10-05 11:29:48 -04:00
Lincoln Stein
d16f0c8a8f add enhancements to CLI command-line completion and history
- Added support for pyreadline3 so that Windows users can benefit.
- Added the !search command to search the history for a matching string:
~~~
!search puppies
[20] puppies at the food bowl -Ak_lms
[54] house overrun by hungry puppies -C20 -s100
~~~
- Added the !clear command to clear the in-memory and on-disk
  command history.
2022-10-04 22:53:07 -04:00
Lincoln Stein
609983ffa8 add outcrop postprocessor 2022-10-04 22:37:25 -04:00
blessedcoolant
b8e4c13746 Add New WebUI and Desktop Mode
Co-Authored-By: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2022-10-03 23:28:53 -04:00
Lincoln Stein
ffcb31faef fix incorrect seed in metadata
- When generating multiple images, the first seed was being used for the second
  and subsequent files. This should only happen when variations are being
  generated. Now fixed.
2022-10-03 17:09:48 -04:00
Lincoln Stein
958d7650dd img2img works with all samplers, inpainting working with ddim & plms
- img2img confirmed working with all samplers
- inpainting working on ddim & plms. Changes to k-diffusion
  module seem to be needed for inpainting support.
- switched the k-diffusion noise schedule to the original Karras schedule,
  which reduces the step number needed for good results
2022-10-02 14:58:21 -04:00
db3000
72834ad16c Gracefully handle invalid files passed to !fix and !fetch 2022-10-01 18:13:10 -04:00
Lincoln Stein
a53e1125e6 remove debugging message 2022-10-01 15:24:05 -04:00
Lincoln Stein
1a0cf1320b improve behavior and fix bugs in CLI history handling and completion
- If readline.set_auto_history() is not implemented, as in pyreadline3, the CLI
  falls back gracefully to automatic history saving (sketched below). The only
  issue with this is that !history commands are themselves recorded in the history.
- !fetch on a missing file no longer crashes the script.
- !history is now one of the autocompleted commands.
- .dream_history is now stored in the output directory rather than the ~user directory.

An important limitation of the last feature is that the history is
loaded and saved to the .dream_history file in the --outdir directory
specified at script launch time. It is not swapped around when the
--outdir is changed during the session.
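
A hedged sketch of the fallback described in the first bullet; it relies only on the
standard readline interface and assumes that a backend lacking set_auto_history()
raises AttributeError.

~~~
# Sketch of the graceful fallback; assumes a backend without set_auto_history()
# raises AttributeError (as happens when the attribute simply does not exist).
import readline

try:
    readline.set_auto_history(False)   # history entries will be added manually
    manual_history = True
except (AttributeError, NotImplementedError):
    manual_history = False             # fall back to automatic history saving
~~~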
2022-10-01 15:24:04 -04:00
Lincoln Stein
8c9f2ae705 fix duplicate files created during upscaling/facial reconstruction
- When --save_orig is *not* provided during image generation with
  upscaling/face fixing, an extra image file was being created. This
  PR fixes the problem.

- Also generalizes the tab autocomplete for image paths such that
  autocomplete searches the output directory for all path-modifying
  options except for --outdir.
2022-09-29 17:29:04 -04:00
Lincoln Stein
97ec1b156c fix module-not-found error on startup of new Invoke WebGUI 2022-09-28 14:31:34 -04:00
Lincoln Stein
6e54f504e7
Merge branch 'development' into webui-integration 2022-09-28 14:21:00 -04:00
Lincoln Stein
dff4850a82 add documentation and bug fixes
- normalized how filenames are written out when postprocessing is invoked
- various fixes of bugs encountered during testing
- updated documentation
- updated help text
2022-09-28 14:20:12 -04:00
Lincoln Stein
800f9615c2 implement history viewing & replaying in CLI
- Enhance tab completion functionality
- Each of the switches that read a filepath (e.g. --init_img) will trigger file path completion. The
  -S switch will display a list of recently-used seeds.
- Added new !fetch command to retrieve the metadata from a previously-generated image and populate the
  readline linebuffer with the appropriate editable command to regenerate (see the sketch below).
- Added new !history command to display previous commands and reload them for modification.
- The !fetch and !fix commands both autocomplete *and* search automatically through the current
  outdir for files.
- The completer maintains a list of recently used seeds and will try to autocomplete them.
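
A minimal sketch (not the project's code) of how a !fetch-style command can pre-load
the next prompt's line buffer with a command recovered from an image's metadata, using
only standard readline calls; the recovered command string here is illustrative.

~~~
# Not the project's code; shows the readline mechanism only.
import readline

def prefill(command):
    # The startup hook runs just before the next prompt and injects the text
    # into the editable line buffer.
    readline.set_startup_hook(lambda: readline.insert_text(command))

prefill('a walk in the park -S4242 -s50')   # illustrative recovered command
~~~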
2022-09-28 14:20:12 -04:00
blessedcoolant
af8383c770 Integrate New WebUI with dream.py 2022-09-28 16:06:41 +13:00
blessedcoolant
d80fff70f2 ESRGAN Improvements 2022-09-26 03:28:23 -04:00
ArDiouscuros
485fe67c92 Add workaround for low memory crash during image saving
Sample model is moved to the CPU before samples are processed into an image
2022-09-26 02:15:51 -04:00
Lincoln Stein
66896dcbbe
Merge branch 'development' into patch-2 2022-09-25 08:28:55 -04:00
H4rk
af8d73a8e8
Update dream.py
In the case of this "point", Warp Terminal cannot be clicked directly to open the browser, and Chrome shows a blank page. It should open properly once the "point" is removed.
2022-09-25 18:04:56 +08:00
blessedcoolant
53b4c3cc60 Upgrade GFPGAN to Version 1.4 2022-09-24 05:09:45 -04:00
blessedcoolant
660641e720 Missing ux warning 2022-09-23 13:17:06 +02:00
blessedcoolant
cd8be1d0e9 Restoration Modules are now optional 2022-09-23 10:51:41 +02:00
Lincoln Stein
0ab5f2159d fix img2img incorrectly loading previous prompt 2022-09-21 15:18:59 -04:00
Lincoln Stein
bdbc76fcd4 Merged two different attempts to create __init__.py 2022-09-21 11:34:53 -04:00
Lincoln Stein
110c4f70df fix module loading errors
1. Add ldm/dream/restoration/__init__.py file that was inadvertently not
   committed earlier.
2. Add '.' to sys.path to address weird mac problem reported in #723
2022-09-21 11:31:15 -04:00
psychedelicious
28f06c7200 Reverts divergent fix to resto module path 2022-09-21 11:18:09 -04:00
blessedcoolant
8c751d342d Bug Fix Patch 2022-09-21 11:18:09 -04:00
blessedcoolant
069f91f930 Bug Fix Patch 2022-09-21 10:07:43 -04:00
Lincoln Stein
2cf294e6de Add outpainting functionality.
- Adapted from PR #489, author Dominic Letz [https://github.com/dominicletz]
- Too many upstream changes to merge, so frankensteined it in.
- Added support for !fix syntax
- Added documentation
2022-09-21 02:44:46 -04:00
Lincoln Stein
b93f04ee38 fix normalized prompt when a variation is generated
- The seed printed needs to be the one generated prior to the
  initial noising operation. To do this, I added a new "first_seed"
  argument to the image callback in dream.py (sketched below).
- Closes #641
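
A hedged illustration of the callback change: only the first_seed parameter comes from
this commit; the other names are placeholders rather than dream.py's actual signature.

~~~
# Only first_seed comes from the commit message; other names are placeholders.
def image_done(image, seed, first_seed=None):
    # first_seed is the seed chosen before variation noise was applied; it is
    # the value that belongs in the normalized prompt written to the PNG.
    return first_seed if first_seed is not None else seed
~~~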
2022-09-21 00:32:29 -04:00
Lincoln Stein
f408ef2e6c resolved multiple conflicts between PR #683 and subsequent PRs 2022-09-20 23:41:43 -04:00
Lincoln Stein
283a0d72c7 tweaks to make postprocess fixing work better
- modify strength of embiggen to reduce tiling ghosts
- normalize naming of postprocessed files (could improve more to avoid
name collisions)
- move restoration modules under ldm.dream
2022-09-20 23:38:04 -04:00
Lincoln Stein
cd69d258aa combine PRs #690 and #683 2022-09-20 23:38:03 -04:00
blessedcoolant
1b5013ab72 GFPGAN and Real ESRGAN Implementation Refactor 2022-09-20 23:37:19 -04:00
Lincoln Stein
e8bb39370c add ability to post-process images from the CLI
- supports gfpgan, esrgan, codeformer and embiggen
- To use:
   dream> !fix ./outputs/img-samples/000056.292144555.png -ft gfpgan -U2 -G0.8
   dream> !fix ./outputs/img-samples/000056.292144555.png -ft codeformer -G 0.8
   dream> !fix ./outputs/img-samples/000056.29214455.png  -U4
   dream> !fix ./outputs/img-samples/000056.292144555.png -embiggen 1.5

   The first example invokes gfpgan to fix faces and esrgan to upscale.
   The second example invokes codeformer to fix faces, no upscaling
   The third example uses esrgan to upscale 4X
   The fourth example runs embiggen to enlarge 1.5X

- This is very preliminary work. There are some anomalies to note:
  1. The syntax is non-obvious. I would prefer something like:
     !fix esrgan,gfpgan
     !fix esrgan
     !fix embiggen,codeformer

     However, this will require refactoring the gfpgan and embiggen
     code.

   2. Images generated using gfpgan, esrgan or codeformer all are named
      "xxxxxx.xxxxxx.postprocessed.png" and the original is saved.
      However, the prefix is a new one that is not related to the
      original.

   3. Images generated using embiggen are named "xxxxx.xxxxxxx.png",
      and once again the prefix is new. I'm not sure whether the
      prefix should be aligned with the original file's prefix or not.
      Probably not, but opinions welcome.
2022-09-20 23:33:20 -04:00
psychedelicious
810112577f Fixes #717 2022-09-20 19:14:20 -04:00
Lincoln Stein
fc61ddab3c
Merge branch 'development' into postprocessing-commands 2022-09-20 18:48:42 -04:00
Mihail Dumitrescu
d176fb07cd Replace --full_precision with --precision that works even if not specified
Allowed values are 'auto', 'float32', 'autocast', and 'float16'. If not specified, or set to 'auto', a working precision is automatically selected based on the torch device.
Context: #526
Deprecated --full_precision / -F

Tested on both cuda and cpu by calling scripts/dream.py without arguments and checking that the auto configuration worked. With --precision=auto/float32/autocast/float16 it performs as expected, either working or failing with a reasonable error. Also checked Img2Img.
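
A hedged sketch of the auto-selection idea described above; the function name and the
exact rules are assumptions, not the PR's actual code.

~~~
# Assumption-laden sketch; not the PR's actual selection logic.
import torch

def choose_precision(device, requested='auto'):
    if requested != 'auto':
        return requested            # 'float32', 'autocast', or 'float16'
    if device.type == 'cuda':
        return 'autocast'           # mixed precision on CUDA GPUs
    return 'float32'                # CPU (and MPS) stay in full precision

print(choose_precision(torch.device('cuda' if torch.cuda.is_available() else 'cpu')))
~~~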
2022-09-20 17:08:00 -04:00
冯不游
30de9fcfae switch to ldm.dream.log 2022-09-20 16:54:50 -04:00
冯不游
a28636dd4a add markdown log option 2022-09-20 16:54:50 -04:00
Joseph Dries III
27b238999f Use model_url instead of model_path for gfpgan and CodeFormer. Fixes lstein/stable-diffusion#678 2022-09-20 07:15:54 -04:00
Lincoln Stein
23af057e5c tweaks to make postprocess fixing work better
- modify strength of embiggen to reduce tiling ghosts
- normalize naming of postprocessed files (could improve more to avoid
name collisions)
- move restoration modules under ldm.dream
2022-09-19 14:54:52 -04:00
Lincoln Stein
c14bdcb8fd combine PRs #690 and #683 2022-09-19 13:59:43 -04:00
Lincoln Stein
f816526d0d add ability to post-process images from the CLI
- supports gfpgan, esrgan, codeformer and embiggen
- To use:
   dream> !fix ./outputs/img-samples/000056.292144555.png -ft gfpgan -U2 -G0.8
   dream> !fix ./outputs/img-samples/000056.292144555.png -ft codeformer -G 0.8
   dream> !fix ./outputs/img-samples/000056.29214455.png  -U4
   dream> !fix ./outputs/img-samples/000056.292144555.png -embiggen 1.5

   The first example invokes gfpgan to fix faces and esrgan to upscale.
   The second example invokes codeformer to fix faces, no upscaling
   The third example uses esrgan to upscale 4X
   The fourth example runs embiggen to enlarge 1.5X

- This is very preliminary work. There are some anomalies to note:
  1. The syntax is non-obvious. I would prefer something like:
     !fix esrgan,gfpgan
     !fix esrgan
     !fix embiggen,codeformer

     However, this will require refactoring the gfpgan and embiggen
     code.

   2. Images generated using gfpgan, esrgan or codeformer all are named
      "xxxxxx.xxxxxx.postprocessed.png" and the original is saved.
      However, the prefix is a new one that is not related to the
      original.

   3. Images generated using embiggen are named "xxxxx.xxxxxxx.png",
      and once again the prefix is new. I'm not sure whether the
      prefix should be aligned with the original file's prefix or not.
      Probably not, but opinions welcome.
2022-09-19 12:38:14 -04:00
blessedcoolant
7b0cbb34d6 GFPGAN and Real ESRGAN Implementation Refactor 2022-09-19 23:38:56 +12:00
tildebyte
a0a9b12daf
fix(scripts): dream.py typo (#675)
Signed-off-by: Ben Alkov <ben.alkov@gmail.com>

Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
2022-09-18 15:03:11 -04:00
blessedcoolant
f3292a6953
Implement CodeFormer Face Restoration (#669)
* Implement CodeFormer Face Restoration

* fix codeformer model destination path

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-09-18 15:01:05 -04:00
Lincoln Stein
062f3e8f31
Gfpgan download fix (#655)
* restore ability to save files to directories named after prompt

* don't download gfpgan model file if it exists

- Fixes #647
2022-09-18 14:42:43 -04:00
Lincoln Stein
b68cb521ba
restore ability to save files to directories named after prompt (#654) 2022-09-18 08:37:08 -04:00
Lincoln Stein
239f41f3e0 add sd-metadata metadata_loads() and metadata_dumps() functions 2022-09-17 13:28:37 -04:00
Lincoln Stein
2faa116238 tidy up generation of prompt when variations in use 2022-09-17 11:59:47 -04:00
Lincoln Stein
31daf1f0d7 preload_models.py now downloads gfpgan model 2022-09-17 01:32:31 -04:00
Lincoln Stein
5b692f4720 include width and height in png dream prompt 2022-09-17 01:14:00 -04:00
Lincoln Stein
b89aadb3c9 fix crash on second prompt #636 2022-09-17 00:57:35 -04:00
blessedcoolant
42072fc15c Bug Fixes 2022-09-17 14:12:35 +12:00
Lincoln Stein
fe12c6c099 Squashed commit of the following:
commit 67fbaa7c31
Author: Lincoln Stein <lincoln.stein@gmail.com>
Date:   Fri Sep 16 16:57:54 2022 -0400

    reconciled conflicting changes to pngwriter call

commit ddc68b01f7
Merge: f9feaac cbac95b
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Sat Sep 17 06:39:22 2022 +1000

    Merge remote-tracking branch 'upstream/development' into development

commit f9feaac8c7
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Sat Sep 17 06:16:16 2022 +1000

    Fixes metadata related to new args

commit d1de1e357a
Author: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Date:   Sat Sep 17 06:15:55 2022 +1000

    Fixes PromptFormatter import bug
2022-09-16 16:58:16 -04:00
Lincoln Stein
403d02d94f
implementation of RFC #266 (#587)
* Feature complete for #266 with the exception of several small deviations:
1. initial image and model weight hashes use full sha256 hash rather than first 8 digits
2. Initialization parameters for post-processing steps not provided
3. Uses top-level "images" tags for both a single image and a grid of images. This change was suggested in a comment.

* Added scripts/sd_metadata.py to retrieve and print metadata from PNG files (illustrated below)
* New ldm.dream.args.Args class is a namespace-like object which holds all defaults and can be modified during execution to hold current settings.
* Modified dream.py and server.py to accommodate Args class.
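
An illustrative read-back in the spirit of the scripts/sd_metadata.py helper mentioned
above; the 'sd-metadata' text-chunk key and the file path are assumptions.

~~~
# Illustrative only; the 'sd-metadata' key name and the path are assumptions.
from PIL import Image

img = Image.open('outputs/img-samples/000001.1234567890.png')
print(img.text.get('sd-metadata'))   # JSON blob holding the generation settings
~~~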
2022-09-16 13:09:04 -04:00
Any-Winter-4079
60b731e7ab
Update dream.py. k_euler_a and k_dpm_2_a M1 fix (#579)
* Update dream.py. k_euler_a and k_dpm_2_a M1 fix

Make results reproducible (so runs with the same seed produce the same result).
Implements fix by @wbowling referenced in https://github.com/lstein/stable-diffusion/issues/397#issuecomment-1240679294

* Update dream.py. Remove import torch from dream.py

* generate.py: k_euler_a and k_dpm_2_a M1 fix #579

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2022-09-15 11:02:17 -04:00
Dominic Letz
340189fa0d fixup 2022-09-15 15:52:36 +02:00