Commit Graph

758 Commits

Author SHA1 Message Date
Lincoln Stein
62e1cb48fd developer documentation fixes 2022-10-25 00:31:43 -04:00
Lincoln Stein
543464182f inpainting fix per PR #1218
- This is a merge of the final version of PR #1218 "Inpainting
  Improvements"

  Various merge conflicts made it easier to commit directly.

Author: Kyle0654
Co-Author: lstein
2022-10-25 00:31:42 -04:00
Lincoln Stein
83a3cc9eb4 start support for 1.5 inpainting model, not complete 2022-10-25 00:30:48 -04:00
Damian at mba
61a4897b71 re-enable tokenization logging 2022-10-24 11:49:47 +02:00
Damian at mba
194c8e1c2e Merge branch 'development' into fix-prompts 2022-10-24 11:28:37 +02:00
Damian at mba
44e4090909 re-enable legacy blend syntax 2022-10-24 11:16:52 +02:00
Damian at mba
0564397ee6 cleanup logs 2022-10-24 11:16:43 +02:00
Lincoln Stein
37d38f196e fix a few more metadata bugs
- facetool and upscale arguments now written into metadata
- cleaned up handling of !fetch command
2022-10-23 23:01:32 -04:00
Lincoln Stein
17aee48734 fix incorrect handling of single quotes in prompts 2022-10-23 23:01:32 -04:00
Lincoln Stein
9cdd78c6cb developer documentation fixes 2022-10-23 22:56:58 -04:00
Lincoln Stein
5561a95232 inpainting fix per PR #1218
- This is a merge of the final version of PR #1218 "Inpainting
  Improvements"

  Various merge conflicts made it easier to commit directly.

Author: Kyle0654
Co-Author: lstein
2022-10-23 22:52:32 -04:00
Lincoln Stein
27f0f3e52b Merge branch 'inpaint-improvement' of https://github.com/Kyle0654/InvokeAI into add-safety-checker 2022-10-23 22:37:43 -04:00
Lincoln Stein
b159b2fe42 add support for safety checker (NSFW filter)
Now you can activate the Hugging Face `diffusers` library safety check
for NSFW and other potentially disturbing imagery.

To turn on the safety check, pass --safety_checker at the command
line. For developers, the flag is `safety_checker=True` passed to
ldm.generate.Generate(). Once the safety checker is turned on, it
cannot be turned off unless you reinitialize a new Generate object.

When the safety checker is active, suspect images will be blurred and
a warning icon added. A warning message is also printed in the CLI,
but it can be a little hard to see because of its position in the
output stream.

There is a slight but noticeable delay when the safety checker runs.

Note that invisible watermarking is *not* currently implemented. The
watermark code shipped with the CompVis distribution uses a library
that does not seem to be able to retrieve the watermarks it creates,
and neither Hugging Face `diffusers` nor other SD distributions appear
to be doing any watermarking.
2022-10-23 22:26:18 -04:00
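
For reference, a minimal sketch of enabling the safety checker through the developer API named above; only the `safety_checker=True` constructor flag comes from the commit message, and the `prompt2image` call is illustrative:

```python
from ldm.generate import Generate

# Turn the checker on at construction time; per the commit above it cannot be
# disabled again without creating a new Generate object.
gr = Generate(safety_checker=True)

# Illustrative generation call; suspect results come back blurred with a warning icon.
results = gr.prompt2image("a walk in the park", steps=50)
```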
Damian at mba
63902f3d34 also apply conditioning during hires fix upscale 2022-10-24 02:08:55 +02:00
Damian at mba
1fb15d5c81 fix hires fix 2022-10-24 02:02:42 +02:00
Damian at mba
cc2042bd4c keep the effect of _start and _end arguments consistent across k* and other samplers 2022-10-24 01:43:35 +02:00
Damian at mba
ee4273d760 fix step count on ddim 2022-10-24 01:23:43 +02:00
Damian at mba
2619a0b286 allow longer substitutions without quotes for cross attention swap 2022-10-24 00:22:14 +02:00
Damian at mba
92c6a3812d catch fewer exceptions in prompt2image 2022-10-24 00:06:53 +02:00
Kyle Schouviller
0c34554170 Merge branch 'inpaint-improvement' of https://github.com/Kyle0654/InvokeAI into inpaint-improvement 2022-10-23 14:02:52 -07:00
Damian at mba
b0eb864a25 move attention weighting operations to postfix 2022-10-23 23:01:53 +02:00
Kyle Schouviller
1264cc2d36 Switch from dilate to erode to fix inpaint edges. Default model to 1.4 instead of 1.5. 2022-10-23 14:01:06 -07:00
Damian at mba
f7cd98c238 tweak default cross-attention values 2022-10-23 20:38:28 +02:00
Damian at mba
8e7d744c60 fix bad math 2022-10-23 19:43:35 +02:00
Damian at mba
9210bf7d3a also parse shape_freedom keyword 2022-10-23 19:40:00 +02:00
Damian at mba
8f35819ddf add shape_freedom arg to .swap() 2022-10-23 19:38:31 +02:00
Damian at mba
04d93f0445 for k* samplers, estimate step_index from sigma 2022-10-23 16:26:50 +02:00
Lincoln Stein
7e27f189cf minor fixes to inpaint code
1. If tensors are passed to inpaint as init_image and/or init_mask, then
   the post-generation image fixup code will be skipped.

2. Post-generation image fixup will work with either a black and white "L"
   or "RGB"  mask, or an "RGBA" mask.
2022-10-23 09:33:15 -04:00
Lincoln Stein
9472945299 ported code refactor changes from PR #1221
- pass a PIL.Image to img2img and inpaint rather than tensor
- To support clipseg, inpaint needs to accept an "L" or "1" format
  mask. Made the appropriate change.
2022-10-23 09:33:15 -04:00
Lincoln Stein
f25c1f900f add support for loading VAE autoencoders
To add a VAE autoencoder to an existing model:

1. Download the appropriate autoencoder and put it into
   models/ldm/stable-diffusion

   Note that you MUST use a VAE that was written for the
   original CompVis Stable Diffusion codebase. For v1.4,
   that would be the file named vae-ft-mse-840000-ema-pruned.ckpt
   that you can download from https://huggingface.co/stabilityai/sd-vae-ft-mse-original

2. Edit config/models.yaml to contain the following stanza, modifying `weights`
   and `vae` as required to match the weights and vae model file names. There is
   no requirement to rename the VAE file.

~~~
stable-diffusion-1.4:
  weights: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
  description: Stable Diffusion v1.4
  config: configs/stable-diffusion/v1-inference.yaml
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  width: 512
  height: 512
~~~

3. Alternatively from within the `invoke.py` CLI, you may use the command
   `!editmodel stable-diffusion-1.4` to bring up a simple editor that will
   allow you to add the path to the VAE.

4. If you are just installing InvokeAI for the first time, you can also
   use `!import_model models/ldm/stable-diffusion/sd-v1.4.ckpt` instead
   to create the configuration from scratch.

5. That's it!
2022-10-23 09:33:15 -04:00
Kyle Schouviller
493eaa7389 Improve inpainting by color-correcting result and pasting init image over result using mask 2022-10-23 09:33:15 -04:00
Lincoln Stein
ce6d618e3b outcropping improvements
- catch syntax errors in the outcrop coordinates
- work (after a fashion) on non-Invoke generated images
2022-10-23 09:33:00 -04:00
wfng92
8254ca9492 Removed duplicate fix_func for MPS 2022-10-23 09:32:59 -04:00
Damian at mba
7d677a63b8 cross attention control options 2022-10-23 14:58:25 +02:00
Lincoln Stein
93cba3fba5
Kyle0654 inpaint improvement - with refactoring from PR #1221 (#1)
* Removed duplicate fix_func for MPS

* add support for loading VAE autoencoders

To add a VAE autoencoder to an existing model:

1. Download the appropriate autoencoder and put it into
   models/ldm/stable-diffusion

   Note that you MUST use a VAE that was written for the
   original CompVis Stable Diffusion codebase. For v1.4,
   that would be the file named vae-ft-mse-840000-ema-pruned.ckpt
   that you can download from https://huggingface.co/stabilityai/sd-vae-ft-mse-original

2. Edit config/models.yaml to contain the following stanza, modifying `weights`
   and `vae` as required to match the weights and vae model file names. There is
   no requirement to rename the VAE file.

~~~
stable-diffusion-1.4:
  weights: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
  description: Stable Diffusion v1.4
  config: configs/stable-diffusion/v1-inference.yaml
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  width: 512
  height: 512
~~~

3. Alternatively from within the `invoke.py` CLI, you may use the command
   `!editmodel stable-diffusion-1.4` to bring up a simple editor that will
   allow you to add the path to the VAE.

4. If you are just installing InvokeAI for the first time, you can also
   use `!import_model models/ldm/stable-diffusion/sd-v1.4.ckpt` instead
   to create the configuration from scratch.

5. That's it!

* ported code refactor changes from PR #1221

- pass a PIL.Image to img2img and inpaint rather than tensor
- To support clipseg, inpaint needs to accept an "L" or "1" format
  mask. Made the appropriate change.

* minor fixes to inpaint code

1. If tensors are passed to inpaint as init_image and/or init_mask, then
   the post-generation image fixup code will be skipped.

2. Post-generation image fixup will work with either a black and white "L"
   or "RGB"  mask, or an "RGBA" mask.

Co-authored-by: wfng92 <43742196+wfng92@users.noreply.github.com>
2022-10-22 20:09:38 -07:00
Lincoln Stein
3e48b9ff85 cut over from karras to model noise schedule for higher steps
The k_samplers come with a "karras" noise schedule which performs
very well at low step counts but becomes noisy at higher ones.

This commit introduces a threshold (currently 30 steps) at which the
k samplers will switch over from using karras to the older model
noise schedule.
2022-10-22 23:02:50 -04:00
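
A rough sketch of the switch-over logic described above, assuming the k-diffusion wrapper API (`get_sigmas_karras` and the denoiser's `get_sigmas`/`sigmas`); the constant name and exact wiring are assumptions, not the shipped code:

```python
import k_diffusion as K

KARRAS_MAX_STEPS = 30  # threshold mentioned above; the name is an assumption

def choose_sigmas(model_wrap, steps, device="cuda"):
    """Use the karras schedule at low step counts, the model's own schedule above."""
    if steps <= KARRAS_MAX_STEPS:
        sigma_min = model_wrap.sigmas[0].item()
        sigma_max = model_wrap.sigmas[-1].item()
        return K.sampling.get_sigmas_karras(
            n=steps, sigma_min=sigma_min, sigma_max=sigma_max, device=device
        )
    return model_wrap.get_sigmas(steps)
```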
Lincoln Stein
a6e7aa8f97
Merge branch 'development' into patch-1 2022-10-22 19:28:50 -04:00
Lincoln Stein
51fdbe22d2 add support for loading VAE autoencoders
To add a VAE autoencoder to an existing model:

1. Download the appropriate autoencoder and put it into
   models/ldm/stable-diffusion

   Note that you MUST use a VAE that was written for the
   original CompVis Stable Diffusion codebase. For v1.4,
   that would be the file named vae-ft-mse-840000-ema-pruned.ckpt
   that you can download from https://huggingface.co/stabilityai/sd-vae-ft-mse-original

2. Edit config/models.yaml to contain the following stanza, modifying `weights`
   and `vae` as required to match the weights and vae model file names. There is
   no requirement to rename the VAE file.

~~~
stable-diffusion-1.4:
  weights: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
  description: Stable Diffusion v1.4
  config: configs/stable-diffusion/v1-inference.yaml
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  width: 512
  height: 512
~~~

3. Alternatively from within the `invoke.py` CLI, you may use the command
   `!editmodel stable-diffusion-1.4` to bring up a simple editor that will
   allow you to add the path to the VAE.

4. If you are just installing InvokeAI for the first time, you can also
   use `!import_model models/ldm/stable-diffusion/sd-v1.4.ckpt` instead
   to create the configuration from scratch.

5. That's it!
2022-10-22 19:27:46 -04:00
Kyle Schouviller
3b01e6e423 Improve inpainting by color-correcting result and pasting init image over result using mask 2022-10-22 14:56:33 -07:00
Lincoln Stein
2e14ba8716 Let the text-to-mask .mask.png file be used as a mask
Ironically, the black and white mask file generated by the
`invoke> !mask` command could not be passed as the mask to
`img2img`. This is now fixed and the documentation updated.
2022-10-22 13:53:23 -04:00
Lincoln Stein
7308022bc7 outcropping improvements
- catch syntax errors in the outcrop coordinates
- work (after a fashion) on non-Invoke generated images
2022-10-22 13:38:32 -04:00
Damian at mba
8273c04575 wip implementing options in diffuse step 2022-10-22 12:15:34 +02:00
Damian at mba
ee7d4d712a parsing CrossAttentionControlSubstitute options works 2022-10-22 11:27:56 +02:00
Lincoln Stein
554445a985 remove debug statement 2022-10-21 21:31:41 -04:00
Lincoln Stein
b2bf2b08ff Merge branch 'model-switching' into development 2022-10-21 21:27:59 -04:00
wfng92
e7573ac90f
Removed duplicate fix_func for MPS 2022-10-22 09:03:31 +08:00
Damian at mba
cdb664f6e5 Merge branch 'development' into fix-prompts 2022-10-21 21:34:09 +02:00
Lincoln Stein
1ca517d73b Merge branch 'fix-high-step-count' of https://github.com/holstvoogd/InvokeAI into holstvoogd-fix-high-step-count 2022-10-21 13:58:00 -04:00
Lincoln Stein
c9f9eed04e resolve numerous small merge bugs
- This merges PR #882

Coauthor: ArDiouscuros
2022-10-21 12:57:15 -04:00
Lincoln Stein
55db9dba0a Merge branch 'Improved-fetch-and-option-to-replay-commands-from-file' of https://github.com/ArDiouscuros/stable-diffusion into ArDiouscuros-Improved-fetch-and-option-to-replay-commands-from-file
- various small conflicts fixed
2022-10-21 10:12:35 -04:00
Damian at mba
64051d081c cleanup 2022-10-21 15:07:11 +02:00
Lincoln Stein
ddb007af65
Merge branch 'development' into fix-high-step-count 2022-10-21 06:55:17 -04:00
Damian at mba
e574a1574f txt2mask.py now tracking development again 2022-10-21 12:42:07 +02:00
Damian at mba
2bf9f1f0d8 rename StrcuturedConditioning to ExtraConditioningInfo 2022-10-21 12:18:40 +02:00
Damian at mba
8142b72bcd Merge remote-tracking branch 'upstream/development' into fix-prompts 2022-10-21 11:59:44 +02:00
Damian at mba
dc2f30a34e put back txt2mask import 2022-10-21 11:59:42 +02:00
Lincoln Stein
be7de4849c
Merge branch 'development' into model-switching 2022-10-21 00:55:52 -04:00
Lincoln Stein
83e6ab08aa further improvements to model loading
- code for committing config changes to models.yaml now in module
  rather than in invoke script
- model marked "default" is now loaded if model not specified on
  command line
- uncache changed models when edited, so that they reload properly
- removed liaon from models.yaml and added stable-diffusion-1.5
2022-10-21 00:28:54 -04:00
Damian at mba
b385fdd7de non-normalized blend 2022-10-21 04:34:53 +02:00
Damian at mba
d965540103 more blend fixes 2022-10-21 04:23:19 +02:00
Damian at mba
404d59b1b8 fix blend 2022-10-21 04:18:17 +02:00
Damian at mba
4c1267338b bring in attention etc. 2022-10-21 03:54:13 +02:00
Damian at mba
2e0b1c4c8b ok now we're cooking 2022-10-21 03:29:50 +02:00
Damian at mba
da75876639 better support for word.swap(otherWord) without parentheses or quotes 2022-10-21 00:08:28 +02:00
Jan Skurovec
d4d1014c9f fix for 'model is not defined' when loading embedding 2022-10-20 17:31:46 -04:00
wfng92
3e0a7b6229 Correct color channels in upscale using array slicing 2022-10-20 16:52:07 -04:00
Damian at mba
da88097aba fix prompt handling in conditioning.py 2022-10-20 21:41:32 +02:00
Damian at mba
3f13dd3ae8 prompt parsing is now much more robust 2022-10-20 21:05:36 +02:00
Damian at mba
79b4afeae7 parser working with basic escapes 2022-10-20 16:56:34 +02:00
Damian at mba
da223dfe81 wip re-writing parts of prompt parser 2022-10-20 15:56:46 +02:00
Lincoln Stein
a357bf4f19 add !mask command to view output of clipseg
- The !mask command takes an image path, a text prompt, and
  (optionally) a masking threshold. It creates a mask over the region
  indicated by the prompt, and outputs several files that show which
  regions will be masked by the chosen prompt and threshold.

- The mask images should not be passed directly to img2img because
  they are designed for visualization only. Instead, use the
  --text_mask option to pass the selected prompt and threshold.

- See docs/features/INPAINTING.md for details.
2022-10-20 06:56:50 -04:00
Lincoln Stein
2ca4242f5f fix clipseg loading problems
- The directory "models" in the main InvokeAI directory was conflicting
  with loading "models.clipseg". To fix this issue, I have renamed the
  models.clipseg to clipseg_models.clipseg, and applied this change to
  the 'models-rename' branch of invoke-ai's fork of clipseg.
2022-10-20 06:56:50 -04:00
Damian at mba
c9d27634b4 bring in prompt parser from fix-prompts branch
attention is parsed but ignored, the old blend syntax doesn't work,
conjunctions are parsed but ignored; the only parts used here are the
new .blend() syntax and cross-attention control using .swap()
2022-10-20 12:01:48 +02:00
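
For orientation, the two syntaxes this commit refers to look roughly like the following illustrative prompts; the `word.swap(otherWord)` form is confirmed by later commits in this log, while the exact `.blend()` argument form is an assumption:

```
invoke> a cat.swap(dog) sitting on a car -s 20 -S 12345
invoke> ("blue sphere", "red cube").blend(0.4, 0.6) -s 20 -S 12345
```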
psychedelicious
87469a5fdd Flips channels using array slicing instead of using OpenCV 2022-10-19 23:44:47 -04:00
psychedelicious
4101127011 Corrects color channels in face restoration; Fixes #1167 2022-10-19 23:32:57 -04:00
Damian at mba
42883545f9 add prompt language support for cross-attention .swap 2022-10-20 01:42:04 +02:00
Damian at mba
61357e4e6e be less verbose when assembling prompt 2022-10-19 21:12:07 +02:00
Damian at mba
c6ae9f1176 remove unnecessary assertion 2022-10-19 21:12:07 +02:00
Damian at mba
11d7e6b92f undo unwanted changes 2022-10-19 21:12:07 +02:00
Damian at mba
c3b992db96 Squashed commit of the following:
commit 9bb0b5d0036c4dffbb72ce11e097fae4ab63defd
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sat Oct 15 23:43:41 2022 +0200

    undo local_files_only stuff

commit eed93f5d30c34cfccaf7497618ae9af17a5ecfbb
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sat Oct 15 23:40:37 2022 +0200

    Revert "Merge branch 'development-invoke' into fix-prompts"

    This reverts commit 7c40892a9f184f7e216f14d14feb0411c5a90e24, reversing
    changes made to e3f2dd62b0548ca6988818ef058093a4f5b022f2.

commit f06d6024e345c69e6d5a91ab5423925a68ee95a7
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Thu Oct 13 23:30:16 2022 +0200

    more efficiently handle multiple conditioning

commit 5efdfcbcd980ce6202ab74e7f90e7415ce7260da
Merge: b9c0dc5 ac08bb6
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Thu Oct 13 14:51:01 2022 +0200

    Merge branch 'optional-disable-karras-schedule' into fix-prompts

commit ac08bb6fd25e19a9d35cf6c199e66500fb604af1
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Thu Oct 13 14:50:43 2022 +0200

    append '*use_model_sigmas*' to prompt string to use model sigmas

commit 70d8c05a3ff329409f76204f4af94e55d468ab8b
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Thu Oct 13 12:12:17 2022 +0200

    make karras scheduling switchable

    commit d60df54f69 replaced the model's
    own scheduling with karras scheduling. this has changed image generation
    (seems worse now?)

    this commit wraps the change in a bool.

commit b9c0dc5f1a658a0e6c3936000e9ae559e1c7a1db
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 20:16:00 2022 +0200

    add test of more complex conjunction

commit 9ac0c15cc0d7b5f6df3289d3ad474260972a17be
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 17:18:25 2022 +0200

    improve comments

commit ad33bce60590b87b2a93e90f16dc9d3e935d04a5
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 17:04:46 2022 +0200

    put back thresholding stuff

commit 4852c698a325049834ba0d4b358f07210bc7171a
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 14:25:02 2022 +0200

    notes on improving conjunction efficiency

commit a53bb1e5b68025d09642b935ae6a9a015cfaf2d6
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 14:14:33 2022 +0200

    optional weights support for Conjunction

commit fec79ab15e4f0c84dd61cb1b45a5e6a72ae4aaeb
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 12:07:27 2022 +0200

    fix blend error and log parsing output

commit 1f751c2a039f9c97af57b18e0f019512631d5a25
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 10:33:33 2022 +0200

    fix broken euler sampler

commit 02f8148d17efe4b6bde8d29b827092a0626363ee
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 10:24:20 2022 +0200

    cleanup prompt parser

commit 8028d49ae6c16c0d6ec9c9de9c12d56c32201421
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 10:14:18 2022 +0200

    explicit conjunction, improve flattening logic

commit 8a1710892185f07eb77483f7edae0fc4d6bbb250
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 22:59:30 2022 +0200

    adapt multi-conditioning to also work with ddim

commit 53802a839850d0d1ff017c6bafe457c4bed750b0
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 22:31:42 2022 +0200

    unconditioning is also fancy-prompt-syntaxable

commit 7c40892a9f184f7e216f14d14feb0411c5a90e24
Merge: e3f2dd6 dbe0da4
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 21:39:54 2022 +0200

    Merge branch 'development-invoke' into fix-prompts

commit e3f2dd62b0548ca6988818ef058093a4f5b022f2
Merge: eef0e48 06f542e
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 21:38:09 2022 +0200

    Merge remote-tracking branch 'upstream/development' into fix-prompts

commit eef0e484c2eaa1bd4e0e0b1d3f8d7bba38478144
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 21:26:25 2022 +0200

    fix run-on paren-less attention, add some comments

commit fd29afdf0e9f5e0cdc60239e22480c36ca0aaeca
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 21:03:02 2022 +0200

    python 3.9 compatibility

commit 26f7646eef7f39bc8f7ce805e747df0f723464da
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 20:58:42 2022 +0200

    first pass connecting PromptParser to conditioning

commit ae53dff3796d7b9a5e7ed30fa1edb0374af6cd8d
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 20:51:15 2022 +0200

    update frontend dist

commit 9be4a59a2d76f49e635474b5984bfca826a5dab4
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 19:01:39 2022 +0200

    fix issues with correctness checking FlattenedPrompt

commit 3be212323eab68e72a363a654124edd9809e4cf0
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 18:43:16 2022 +0200

    parsing nested seems to work pretty ok

commit acd73eb08cf67c27cac8a22934754321256f56a9
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 18:26:17 2022 +0200

    wip introducing FlattenedPrompt class

commit 71698d5c7c2ac855b690d8ef67e8830148c59eda
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 15:59:42 2022 +0200

    recursive attention weighting seems to actually work

commit a4e1ec6b20deb7cc0cd12737bdbd266e56144709
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 15:06:24 2022 +0200

    now apparently almost supported nested attention

commit da76fd1ddf22a3888cdc08fd4fed38d8b178e524
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 13:23:37 2022 +0200

    wip prompt parsing

commit dbe0da4572c2ac22f26a7afd722349a5680a9e47
Author: Kyle Schouviller <kyle0654@hotmail.com>
Date:   Mon Oct 10 22:32:35 2022 -0700

    Adding node-based invocation apps

commit 8f2a2ffc083366de74d7dae471b50b6f98a7c5f8
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Mon Oct 10 19:03:18 2022 +0200

    fix merge issues

commit 73118dee2a8f4891700756e014caf1c9ca629267
Merge: fd00844 12413b0
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Mon Oct 10 12:42:48 2022 +0200

    Merge remote-tracking branch 'upstream/development' into fix-prompts

commit fd0084413541013c2cf71e006af0392719bef53d
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Mon Oct 10 12:39:38 2022 +0200

    wip prompt parsing

commit 0be9363db9307859d2b65cffc6af01f57d7873a4
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Mon Oct 10 03:20:06 2022 +0200

    better +/- attention parsing

commit 5383f691874a58ab01cda1e4fac6cf330146526a
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Mon Oct 10 02:27:47 2022 +0200

    prompt parser seems to work

commit 591d098a33ce35462428d8c169501d8ed73615ab
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 9 20:25:37 2022 +0200

    supports weighting unconditioning, cross-attention with |

commit 7a7220563aa05a2980235b5b908362f66b728309
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 9 18:15:56 2022 +0200

    i think cross attention might be working?

commit 951ed391e7126bff228c18b2db304ad28d59644a
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 9 16:04:54 2022 +0200

    weighted CFG denoiser working with a single item

commit ee532a0c2827368c9e45a6a5f3975666402873da
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 9 06:33:40 2022 +0200

    wip probably doesn't work or compile

commit 14654bcbd207b9ca28a6cbd37dbd967d699b062d
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 7 18:11:48 2022 +0200

    use tan() to calculate embedding weight for <1 attentions

commit 1a8e76b31aa5abf5150419ebf3b29d4658d07f2b
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 7 16:14:54 2022 +0200

    fix bad math.max reference

commit f697ff896875876ccaa1e5527405bdaa7ed27cde
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 7 15:55:57 2022 +0200

    respect http[s]x protocol when making socket.io middleware

commit 41d3dd4eeae8d4efb05dfb44fc6d8aac5dc468ab
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 7 13:29:54 2022 +0200

    fractional weighting works, by blending with prompts excluding the word

commit 087fb6dfb3e8f5e84de8c911f75faa3e3fa3553c
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 7 10:52:03 2022 +0200

    wip doing weights <1 by averaging with conditioning absent the lower-weighted fragment

commit 3c49e3f3ec7c18dc60f3e18ed2f7f0d97aad3a47
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 7 10:36:15 2022 +0200

    notate CFGDenoiser, perhaps

commit d2bcf1bb522026ebf209ad0103f6b370383e5070
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Thu Oct 6 05:04:47 2022 +0200

    hack blending syntax to test attention weighting more extensively

commit 94904ef2cf917f74ec23ef7a570e12ff8255b048
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Thu Oct 6 04:56:37 2022 +0200

    conditioning works, apparently

commit 7c6663ddd70f665fd1308b6dd74f92ca393a8df5
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Thu Oct 6 02:20:24 2022 +0200

    attention weighting, definitely works in positive direction

commit 5856d453a9b020bc1a28ff643ae1f58c12c9be73
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 4 19:02:14 2022 +0200

    wip bubbling weights down

commit a2ed14fd9b7d3cb36b6c5348018b364c76d1e892
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 4 17:35:39 2022 +0200

    bring in changes from PC
2022-10-19 21:12:07 +02:00
Damian at mba
1ffd4a9e06 refactored single diffusion path seems to be working for all samplers 2022-10-19 21:08:03 +02:00
Damian at mba
147d39cb7c wip refactoring shared InvokeAI diffuser mixin to component 2022-10-19 21:08:03 +02:00
Damian at mba
824cb201b1 pass img2img ddim/plms edited conditioning through kwargs 2022-10-19 21:08:03 +02:00
Damian at mba
582880b314 add cross-attention support to im2img; prevent inpainting from crashing 2022-10-19 21:08:03 +02:00
Damian at mba
2b79a716aa wip hi-res fix 2022-10-19 21:08:03 +02:00
Damian at mba
d572af2acf fix cross-attention on k* samplers 2022-10-19 21:08:03 +02:00
Damian at mba
54e6a68acb wip bringing cross-attention to PLMS and DDIM 2022-10-19 21:08:03 +02:00
Damian at mba
09f62032ec cleanup and clarify comments 2022-10-19 21:08:03 +02:00
Damian at mba
711ffd238f cleanup 2022-10-19 21:08:03 +02:00
Damian at mba
056cb0d8a8 sliced cross-attention wrangler works 2022-10-19 21:08:03 +02:00
Damian at mba
37a204324b go back to using InvokeAI attention 2022-10-19 21:08:03 +02:00
Damian at mba
1fc1f8bf05 cross-attention working with placeholder {} syntax 2022-10-19 21:06:42 +02:00
Damian at mba
8ff507b03b runs but doesn't work properly - see below for test prompt
test prompt:
"a cat sitting on a car {a dog sitting on a car}" -W 384 -H 256 -s 10 -S 12346 -A k_euler
note that substitution of dog for cat is currently hard-coded
(ksampler.py lines 43-44)
2022-10-19 21:06:42 +02:00
Damian at mba
33d6603fef cleanup initial experiments 2022-10-19 21:06:42 +02:00
Damian at mba
b0b1993918 initial experiments 2022-10-19 21:06:42 +02:00
Lincoln Stein
92d4dfaabf Merge branch 'asymmetric-tiling' of https://github.com/carson-katri/InvokeAI into carson-katri-asymmetric-tiling 2022-10-19 13:46:07 -04:00
Lincoln Stein
62dd3b7d7d resolve models.clipseg vs clipseg ambiguity 2022-10-18 23:09:26 -04:00
Carson Katri
15aa7593f6
Merge branch 'development' into asymmetric-tiling 2022-10-18 22:37:18 -04:00
Lincoln Stein
9b3ac92c24
fix incorrect import of clipseg 2022-10-18 19:28:30 -04:00
Carson Katri
9d19213b8a Merge branch 'development' of github.com:lstein/stable-diffusion into asymmetric-tiling 2022-10-18 13:34:10 -04:00
Lincoln Stein
3a1003f702 Fix typo
Taken from `main` PR #1147 
Author: eltociear
2022-10-18 08:29:26 -04:00
Lincoln Stein
20551857da add clipseg support for creating inpaint masks from text
On the command line, the new option is --text_mask or -tm.
Example:

```
invoke> a baseball -I /path/to/still_life.png -tm orange
```

This will find the orange fruit in the still life painting and replace
it with an image of a baseball.
2022-10-18 08:27:48 -04:00
Lincoln Stein
32122e0312 clipseg library and environment in place 2022-10-18 08:27:48 -04:00
Lincoln Stein
230de023ff resolve doc conflicts during merge 2022-10-18 08:27:33 -04:00
Lincoln Stein
e6fc8af249
Fix typo
Taken from `main` PR #1147 
Author: eltociear
2022-10-18 08:08:58 -04:00
wfng92
1c2bd275fe Fix img2img DDIM index out of bound
Added a [community solution](https://github.com/CompVis/stable-diffusion/issues/111#issuecomment-1229483511) to fix an index-out-of-bounds error when doing img2img generation with the `ddim` sampler. Also restored `steps_out` to be `ddim_timesteps + 1`, since the removal was meant to fix the [1000 steps issue](https://github.com/CompVis/stable-diffusion/issues/111)
2022-10-17 22:32:15 -04:00
Lincoln Stein
0cf11ce488 add option to CLI and pngwriter that allows user to set PNG compression level
- In CLI: the argument is --png_compression <0..9> (-z<0..9>)
- In API, pass `compress_level` to PngWriter.save_image_and_prompt_to_png()

Compression ranges from 0 (no compression) to 9 (maximum compression).
Default value is 6 (as specified by Pillow package).

This addresses an issue first raised in #652.
2022-10-17 22:27:47 -04:00
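
A hedged sketch of the API usage described above, assuming the post-rename module path `ldm.invoke.pngwriter`; only `compress_level` is named by the commit, and the other argument names are assumptions:

```python
from PIL import Image
from ldm.invoke.pngwriter import PngWriter

image = Image.new("RGB", (512, 512))        # stand-in for a generated image
writer = PngWriter("outputs/img-samples")

# compress_level ranges from 0 (no compression) to 9 (maximum); default is 6.
# Argument names other than compress_level are assumptions.
path = writer.save_image_and_prompt_to_png(
    image,
    dream_prompt='"a walk in the park" -s50 -S12345',
    name="000001.12345.png",
    compress_level=9,
)
```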
Carson Katri
3b79b935a3
Merge branch 'development' into asymmetric-tiling 2022-10-17 20:15:42 -04:00
Carson Katri
99581dbbf7 Split seamless config into separate file 2022-10-17 19:31:20 -04:00
db3000
9e599c65c5 Only output facetool parameters if enhancing faces 2022-10-17 11:49:07 -04:00
Carson Katri
e03a3fcf68 Add seamless_axes options 2022-10-16 22:45:18 -04:00
Lincoln Stein
ef2058824a add a strength value to inpaint_replace
- --inpaint_replace 0.X will cause inpainting to ignore what is under
  the masked region with a strength ranging from 0 (don't ignore at all)
  to 1.0 (ignore completely)
- sync with upstream development
- update docs
2022-10-16 10:06:47 -04:00
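
Conceptually, the strength value blends latent noise into the masked area before inpainting begins. A minimal sketch of that idea follows; the tensor names and the exact blending formula are assumptions, not the shipped code:

```python
import torch

def inpaint_replace(init_latent: torch.Tensor, mask: torch.Tensor, strength: float):
    """strength 0.0 keeps the original latents under the mask; 1.0 replaces them with noise."""
    noise = torch.randn_like(init_latent)
    replaced = strength * noise + (1.0 - strength) * init_latent
    # mask == 1 inside the region to be inpainted
    return init_latent * (1 - mask) + replaced * mask
```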
Lincoln Stein
6f93dc7712 cleanup inpainting and img2img
- add a `--inpaint_replace` option that fills masked regions with
  latent noise. This allows radical changes to inpainted regions
  at the cost of losing context.
- fix up readline, arg processing and metadata writing to accommodate
  this change
- fixed bug in storage and retrieval of variations, discovered incidentally
  during testing
- update documentation
2022-10-16 08:50:55 -04:00
Conor Reid
a3a50bb886 Update generate.py
Fixed spelling mistake (open source king)
2022-10-15 16:02:14 -04:00
Lincoln Stein
a705a5a0aa enhance support for model switching and editing
- Error checks for invalid model
- Add !del_model command to invoke.py
- Add del_model() method to model_cache
- Autocompleter kept in sync with model addition/subtraction.
2022-10-15 15:46:29 -04:00
Lincoln Stein
c4fb8e304b fix noisy images at high step counts
At step counts greater than ~75, the ksamplers start producing noisy
images when using the Karras noise schedule. This PR reverts to using
the model's own noise schedule, which eliminates the problem at the
cost of slowing convergence at lower step counts.

This PR also introduces a new CLI `--save_intermediates <n>` argument,
which will save every nth intermediate image into a subdirectory
named `intermediates/<image_prefix>`.

Addresses issue #1083.
2022-10-14 16:19:45 -04:00
Lincoln Stein
fe2a2cfc8b
Merge branch 'development' into model-switching 2022-10-14 13:18:59 -04:00
Lincoln Stein
32dab7d4bf close #1094, dangling gfpgan_strength reference 2022-10-14 07:45:10 -04:00
db3000
ce5e57d828 Generalize facetool strength argument 2022-10-14 00:03:06 -04:00
Lincoln Stein
e98fe9c22d fix noisy images at high step counts
At step counts greater than ~75, the ksamplers start producing noisy
images when using the Karras noise schedule. This PR reverts to using
the model's own noise schedule, which eliminates the problem at the
cost of slowing convergence at lower step counts.

This PR also introduces a new CLI `--save_intermediates <n>` argument,
which will save every nth intermediate image into a subdirectory
named `intermediates/<image_prefix>`.

Addresses issue #1083.
2022-10-14 00:01:59 -04:00
Lincoln Stein
6afc0f9b38 add ability to import and edit alternative models online
- !import_model <path/to/model/weights> will import a new model,
  prompt the user for its name and description, write it to the
  models.yaml file, and load it.

- !edit_model <model_name> will bring up a previously-defined model
  and prompt the user to edit its descriptive fields.

Example of !import_model

<pre>
invoke> <b>!import_model models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt</b>
>> Model import in process. Please enter the values needed to configure this model:

Name for this model: <b>waifu-diffusion</b>
Description of this model: <b>Waifu Diffusion v1.3</b>
Configuration file for this model: <b>configs/stable-diffusion/v1-inference.yaml</b>
Default image width: <b>512</b>
Default image height: <b>512</b>
>> New configuration:
waifu-diffusion:
  config: configs/stable-diffusion/v1-inference.yaml
  description: Waifu Diffusion v1.3
  height: 512
  weights: models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
  width: 512
OK to import [n]? <b>y</b>
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch08-float16.ckpt
   | LatentDiffusion: Running in eps-prediction mode
   | DiffusionWrapper has 859.52 M params.
   | Making attention of type 'vanilla' with 512 in_channels
   | Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
   | Making attention of type 'vanilla' with 512 in_channels
   | Using faster float16 precision
</pre>

Example of !edit_model

<pre>
invoke> <b>!edit_model waifu-diffusion</b>
>> Editing model waifu-diffusion from configuration file ./configs/models.yaml
description: <b>Waifu diffusion v1.4beta</b>
weights: models/ldm/stable-diffusion-v1/<b>model-epoch10-float16.ckpt</b>
config: configs/stable-diffusion/v1-inference.yaml
width: 512
height: 512

>> New configuration:
waifu-diffusion:
  config: configs/stable-diffusion/v1-inference.yaml
  description: Waifu diffusion v1.4beta
  weights: models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
  height: 512
  width: 512

OK to import [n]? y
>> Caching model stable-diffusion-1.4 in system RAM
>> Loading waifu-diffusion from models/ldm/stable-diffusion-v1/model-epoch10-float16.ckpt
...
</pre>
2022-10-13 23:48:07 -04:00
Lincoln Stein
916f5bfbb2 gracefully recover from failed model load 2022-10-13 12:27:04 -04:00
Jan Skurovec
cac3f5fc61 fix for "1 leaked semaphore objects to clean up at shutdown" on M1
Implements fix by @Any-Winter-4079 referenced in https://github.com/invoke-ai/InvokeAI/issues/1016#issuecomment-1276825640
2022-10-13 13:33:59 +02:00
hipsterusername
7e33560010 Hires Addition
Updated ImageMetaDataViewer with correct values
Updated tooltip text
Add arguments for Hires & Seamless Metadata
2022-10-13 23:57:24 +13:00
Daniel Manzke
057fc95aa3 Print out the device type which is used
Print out the device type which is used for generating images.
2022-10-12 20:36:43 -04:00
CapableWeb
6c0dd9b5ef Add back old dream.py as legacy_api.py
This commit "reverts" the new API changes by extracting the old
functionality into new files.

The work is based on the commit `803a51d5adca7e6e28491fc414fd3937bee7cb79`

PngWriter regained PromptFormatter, since the old server used it.

`server_legacy.py` is the old server that `dream.py` used.

Finally `legacy_api.py` is what `dream.py` used to be at the mentioned
commit.

One manually run test has been added to check compatibility with the
old API; currently it just verifies that the API endpoint works the
same way and that the image hash is the same as it was before.
2022-10-12 20:35:56 -04:00
Lincoln Stein
1c102c71fc final fixups to memory_cache
- fixed backwards calculation of minimum available memory
- only execute m.padding adjustment code once upon load
2022-10-12 15:56:06 -04:00
Lincoln Stein
aa6aa68753 proposed fix to work on mps systems 2022-10-12 11:08:27 -04:00
Lincoln Stein
b537e92789 move tokenizer into cpu cache as well 2022-10-12 03:03:29 -04:00
Lincoln Stein
488334710b enable fast switching between models in invoke.py
- This PR enables two new commands in the invoke.py script

 !models         -- list the available models and their cache status
 !switch <model> -- switch to the indicated model

Example:

 invoke> !models
   laion400m            not loaded  Latent Diffusion LAION400M model
   stable-diffusion-1.4     active  Stable Diffusion inference model version 1.4
   waifu-1.3                cached  Waifu anime model version 1.3
 invoke> !switch waifu-1.3
   >> Caching model stable-diffusion-1.4 in system RAM
   >> Retrieving model waifu-1.3 from system RAM cache

The name and descriptions of the models are taken from
`config/models.yaml`. A future enhancement to `model_cache.py` will be
to enable new model stanzas to be added to the file
programmatically. This will be useful for the WebGUI.

More details:

- Use fast switching algorithm described in PR #948
- Models are selected using their configuration stanza name
  given in models.yaml.
- To avoid filling up CPU RAM with cached models, this PR
  implements an LRU cache that monitors available CPU RAM.
- The caching code allows the minimum value of available RAM
  to be adjusted, but invoke.py does not currently have a
  command-line argument that allows you to set it. The
  minimum free RAM is arbitrarily set to 2 GB.
- Add optional description field to configs/models.yaml

Unrelated fixes:
- Added ">>" to the CompVis model loading messages to make the user experience
  more consistent.
- When generating an image larger than the default size, the script now warns
  about possibly filling VRAM only the first time.
- Fixed a bug that caused the help message to be printed twice. This involved
  moving the import line for the web backend into the section where it is
  called.

Coauthored by: @ArDiouscuros
2022-10-12 02:37:42 -04:00
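
The caching idea can be sketched roughly as below; the class and method names, and the use of `psutil`, are assumptions rather than the actual `model_cache.py` interface:

```python
from collections import OrderedDict
import psutil

MIN_FREE_RAM_BYTES = 2 * 2**30   # the arbitrary 2 GB floor mentioned above

class ModelCache:
    """LRU cache of models kept in CPU RAM while another model is active on the GPU."""

    def __init__(self):
        self._models = OrderedDict()   # model name -> model object (held on the CPU)

    def get(self, name, loader):
        if name in self._models:
            self._models.move_to_end(name)            # mark as most recently used
            return self._models[name]
        # evict least-recently-used entries until enough CPU RAM is free
        while self._models and psutil.virtual_memory().available < MIN_FREE_RAM_BYTES:
            self._models.popitem(last=False)
        model = loader(name)                          # e.g. load weights from disk
        self._models[name] = model
        return model
```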
Lincoln Stein
b9e910b5f4 add mostly functional model caching module 2022-10-11 17:24:10 -04:00
Jan Skurovec
101cac6a21 reintroduce fix for m1 from PR#579 missing after merge
Make results reproducible (so runs with the same seed produce the same result).
Implements fix by @wbowling referenced in https://github.com/invoke-ai/InvokeAI/issues/397#issuecomment-1240679294
2022-10-11 23:00:20 +02:00
Jan Skurovec
8ea07f3bb0 reintroduce fix for m1 from PR#579 missing after merge
Make results reproducible (so runs with the same seed produce the same result).
Implements fix by @wbowling referenced in https://github.com/invoke-ai/InvokeAI/issues/397#issuecomment-1240679294
2022-10-11 21:50:59 +02:00
Lincoln Stein
c5344acb25 run make_schedule() if it hasn't already been called
- fixes #984
2022-10-09 15:30:23 -04:00
Lincoln Stein
8faa06fb15 Merge branch 'main' into development
- this syncs documentation and code
2022-10-09 14:47:27 -04:00
ArDiouscuros
0b7ca6a326 Allow user to generate images with initial noise as on M1 / mps system 2022-10-09 12:25:22 -04:00
Peter Baylies
27f62999c9 * Fix for Perlin noise issue for cuda as well. 2022-10-09 08:24:02 -04:00
Peter Baylies
89d130edf4 * Fix for Perlin noise issue for cuda as well. 2022-10-09 08:23:23 -04:00
Rainer Bernhardt
0e551a3844
Merge branch 'development' into fnformat 2022-10-09 13:43:09 +02:00
Lincoln Stein
31869885d9 enhance the in-line -h command help text
- the prompt argument comes before the optional arguments
- usage statement shows 'invoke>' rather than 'invoke.py'
- use the pydoc pager to display the long help message
2022-10-08 13:55:05 -04:00
Lincoln Stein
4c026d9d92 enhance the in-line -h command help text
- the prompt argument comes before the optional arguments
- usage statement shows 'invoke>' rather than 'invoke.py'
- use the pydoc pager to display the long help message
2022-10-08 13:53:56 -04:00
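
The pager change amounts to routing the formatted help text through `pydoc.pager`; a minimal sketch, with the parser construction purely illustrative:

```python
import argparse
import pydoc

parser = argparse.ArgumentParser(prog="invoke>", description="Generate images from a prompt")
parser.add_argument("prompt")                    # the prompt comes before the optional arguments
parser.add_argument("-s", "--steps", type=int, default=50)

# page the (potentially very long) help message instead of dumping it to stdout
pydoc.pager(parser.format_help())
```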
Lincoln Stein
2b1aaf4ee7 rename all modules from ldm.dream to ldm.invoke
- scripts and documentation updated to match
- ran preflight checks on both web and CLI and seems to be working
2022-10-08 11:37:23 -04:00
Lincoln Stein
98fe044dee rebrand CLI from "dream" to "invoke"
- rename dream.py to invoke.py
- create a compatibility script named dream.py that execs() invoke.py
- redo documentation
- change help message in args
- this does **not** rename the libraries, which are still ldm.dream.util, etc
2022-10-08 09:32:06 -04:00
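
The compatibility script can be as simple as re-exec'ing the new entry point; a sketch under that assumption (the deprecation notice wording is invented):

```python
#!/usr/bin/env python
"""dream.py -- thin compatibility wrapper that hands off to invoke.py."""
import os
import sys

script = os.path.join(os.path.dirname(os.path.abspath(__file__)), "invoke.py")
print(">> dream.py is deprecated; forwarding to invoke.py", file=sys.stderr)  # wording is an assumption
os.execv(sys.executable, [sys.executable, script] + sys.argv[1:])
```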
ArDiouscuros
02b1040264 Fix typo in ldm/dream/readline.py during merge, add more exception handling 2022-10-08 13:41:31 +02:00
ArDiouscuros
dfd5899611
Merge branch 'development' into Improved-fetch-and-option-to-replay-commands-from-file 2022-10-08 13:26:22 +02:00
Lincoln Stein
7a701506a4 restore ability of ksamplers to process -v variation options
- supersedes PR #977
- works with both img2img and txt2img
2022-10-07 16:25:58 -04:00
Lincoln Stein
5157cbeda1 restore ability of ksamplers to process -v variation options
- supersedes #977
2022-10-07 16:21:16 -04:00
Lincoln Stein
3d7bc074cf autorotate init images using exif orientation tag 2022-10-07 12:06:50 -04:00
Lincoln Stein
b296933ba0 autorotate init images using exif orientation tag 2022-10-07 12:06:40 -04:00
Jakub Kolčář
70bb7f4a61 fixed perlin noise generation for mps (macos) - fix for cpu fallback 2022-10-07 10:36:45 -04:00
Jakub Kolčář
45cc867b0c fixed perlin noise generation for mps (macos) - fix for cpu fallback 2022-10-07 10:35:42 -04:00
Rainer Bernhardt
173dc34194
Merge branch 'development' into fnformat 2022-10-07 15:39:41 +02:00
Lincoln Stein
333219be35 fix broken image generation on plms and ddim samplers 2022-10-07 08:26:53 -04:00
spezialspezial
c1230da3ab remove duplicated code 2022-10-07 08:13:34 -04:00
spezialspezial
a7515624b2 remove duplicated code 2022-10-07 08:12:55 -04:00
Lincoln Stein
9f34ddfcea fix crash on len(Nonetype) in k_sampler 2022-10-07 08:05:13 -04:00
plucked
395445e7b0 using string.format for filename formatting 2022-10-07 09:24:39 +00:00
plucked
89c6c11214 feat: adding filename format template 2022-10-07 08:32:39 +00:00
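
The filename-template approach in the two commits above boils down to `str.format` with named fields; a tiny sketch with assumed field names (the feature's actual template keys may differ):

```python
# assumed field names -- the real template keys may differ
fn_format = "{prefix}.{seed}.png"
filename = fn_format.format(prefix="000012", seed=3957727296)
# -> "000012.3957727296.png"
```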
Lincoln Stein
c6a7be63b8 fix crash in generate._transparency_check_and_warning() 2022-10-06 21:00:27 -04:00
Lincoln Stein
75165957c9 Revert "realesrgan inherits precision setting from main program"
This reverts commit 5f42d08945.

This fix was intended to solve issue #939, in which ESRGAN generates
dark images when upscaling 4X on certain GTX cards. However, the fix
apparently causes conflicts with some versions of the ESRGAN library,
so it will have to wait until after the 2.0 release.
2022-10-06 20:52:38 -04:00
Lincoln Stein
d60df54f69 fix k_samplers in img2img - probably correct now 2022-10-06 18:53:54 -04:00
Lincoln Stein
7541c7cf5d fix k_samplers in img2img - probably correct now 2022-10-06 18:31:04 -04:00
Lincoln Stein
440065f7f8 revert previous change 2022-10-06 14:57:06 -04:00
Lincoln Stein
2c27e759cd fix #889 - fuzzy k* img2img at low strength 2022-10-06 14:16:56 -04:00
Lincoln Stein
5f42d08945 realesrgan inherits precision setting from main program 2022-10-06 12:23:30 -04:00
Lincoln Stein
2154dd2349 prevent crashes due to uninitialized free_gpu_mem 2022-10-06 10:54:05 -04:00
Lincoln Stein
f3050fefce bug and warning message fixes
- txt2img2img back to using DDIM as img2img sampler; results produced
  by some k* samplers are just not reliable enough for good user
  experience
- img2img progress message clarifies why img2img steps taken != steps requested
- warn of potential problems when user tries to run img2img on a small init image
2022-10-06 10:39:08 -04:00
Arthur Holstvoogd
595d15455a Fix generation of image with s>1000 2022-10-06 15:49:35 +02:00
Lincoln Stein
183b98384f set perlin & threshold to zero on generator initialization 2022-10-06 09:35:04 -04:00
Lincoln Stein
a99aab6309 enable --hires to use k* samplers 2022-10-05 20:10:21 -04:00
ArDiouscuros
5a40f7ad15 Fix for crashes in txt2img hires fix mode 2022-10-05 20:10:06 -04:00
Lincoln Stein
2f29b78a00 enable --hires to use k* samplers 2022-10-05 17:18:32 -04:00
ArDiouscuros
bcb6e2e506 Fix for crashes in txt2img hires fix mode 2022-10-05 17:13:43 -04:00
Lincoln Stein
44e3995425 remove dangling -V from normalized dream command 2022-10-05 00:48:17 -04:00
Lincoln Stein
5dd9e30c2f Merge branch 'development' of github.com:invoke-ai/InvokeAI into development 2022-10-04 22:58:09 -04:00
Lincoln Stein
f368f682e1 remove unnecessary import of pyreadline3 2022-10-04 22:53:07 -04:00
Lincoln Stein
d16f0c8a8f add enhancements to CLI command-line completion and history
- Added support for pyreadline3 so that Windows users can benefit.
- Added the !search command to search the history for a matching string:
~~~
!search puppies
[20] puppies at the food bowl -Ak_lms
[54] house overrun by hungry puppies -C20 -s100
~~~
- Added the !clear command to clear the in-memory and on-disk
  command history.
2022-10-04 22:53:07 -04:00
Lincoln Stein
18e667f98e fix miscellaneous regressions:
- embiggen needs to use ddim sampler due to low step count
- --hires_fix option needs to be written to log and command string
- fix call signature of _init_image_mask()
2022-10-04 22:48:19 -04:00
Lincoln Stein
a09c64a1fe merge outcropping and postprocessing fixes 2022-10-04 22:47:21 -04:00
Lincoln Stein
4c482fe24a refactor how postprocessors work
- similar call structures for outpainting, outcropping and face restoration modules
- added documentation for outcropping
- post-processing steps now leave a provenance chain (of sorts) in the sd-metadata field:

~~~
scripts/sd-metadata.py outputs/img-samples/curly.942491079.upscale.png
outputs/img-samples/curly.942491079.upscale.png:
 {
    "model": "stable diffusion",
    "model_id": "stable-diffusion-1.4",
    "model_hash": "fe4efff1e174c627256e44ec2991ba279b3816e364b49f9be2abc0b3ff3f8556",
    "app_id": "lstein/stable-diffusion",
    "app_version": "v1.15",
    "image": {
        "height": 512,
        "width": 512,
        "steps": 50,
        "cfg_scale": 7.5,
        "seed": 942491079,
        "prompt": [
            {
                "prompt": "pretty curly-haired redhead woman",
                "weight": 1.0
            }
        ],
        "postprocessing": [
            {
                "tool": "outcrop",
                "dream_command": "!fix \"test-pictures/curly.png\" -s 50 -S 942491079 -W 512 -H 512 -C 7.5 -A k_lms -c top 64 right 64"
            },
            {
                "tool": "gfpgan",
                "dream_command": "!fix \"outputs/img-samples/curly.942491079.outcrop-02.png\" -s 50 -S 942491079 -W 512 -H 512 -C 7.5 -A k_lms -G 0.8"
            },
            {
                "tool": "upscale",
                "dream_command": "!fix \"outputs/img-samples/curly.942491079.gfpgan.png\" -s 50 -S 942491079 -W 512 -H 512 -C 7.5 -A k_lms -U 4.0 0.75"
            }
        ],
        "sampler": "k_lms",
        "variations": [],
        "type": "txt2img"
    }
}
~~~
2022-10-04 22:37:25 -04:00
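
Reading the provenance chain back out of an image is straightforward with Pillow, assuming the metadata is stored as a PNG text chunk named "sd-metadata" (as the `sd-metadata.py` example above suggests):

```python
import json
from PIL import Image

img = Image.open("outputs/img-samples/curly.942491079.upscale.png")
meta = json.loads(img.text["sd-metadata"])          # PNG text chunk written at save time
for step in meta["image"].get("postprocessing", []):
    print(step["tool"], "->", step["dream_command"])
```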
Lincoln Stein
609983ffa8 add outcrop postprocessor 2022-10-04 22:37:25 -04:00
Lincoln Stein
0f9bff66bc Merge branch 'development' of github.com:pbaylies/stable-diffusion into pbaylies-development 2022-10-04 22:31:11 -04:00
Hideyuki Katsushiro
e7fb9f342c add argument --outdir 2022-10-05 10:08:53 +09:00
Peter Baylies
5d911b43c0
Merge branch 'development' into development 2022-10-04 18:40:42 -04:00
Lincoln Stein
483097f31c fix off-by-one error 2022-10-05 00:15:47 +02:00
Lincoln Stein
7a3eae4572 revert to original k* noise schedule 2022-10-05 00:15:47 +02:00
Peter Baylies
db349aa3ce * Fix meshgrid warning for perlin noise. 2022-10-04 16:58:22 -04:00
Peter Baylies
b5c114c5b7 * Changes for PR 2022-10-04 14:37:32 -04:00
Lincoln Stein
f34279b3e7 add perlin options to formatted dream cmd 2022-10-04 14:32:18 -04:00
Peter Baylies
f041510659
Merge branch 'development' into development 2022-10-03 23:53:19 -04:00
Lincoln Stein
feb405f19a reorder mask creation and scaling to avoid deleting colors under transparency 2022-10-03 23:29:09 -04:00
blessedcoolant
b8e4c13746 Add New WebUI and Desktop Mode
Co-Authored-By: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2022-10-03 23:28:53 -04:00
Lincoln Stein
0a217b5f15 Merge branch 'Any-Winter-4079-Textual_Inversion' of github.com:Any-Winter-4079/InvokeAI into development 2022-10-03 19:43:46 -04:00
psychedelicious
88a9f33422 Fixes #875 2022-10-03 19:42:12 -04:00
Lincoln Stein
ea67040ef1 restore behavior of --grid argument 2022-10-03 17:09:48 -04:00
Peter Baylies
e79069a957
Merge branch 'development' into development 2022-10-03 08:33:44 -04:00
Lincoln Stein
1ab09e7a06 use right value for step arg in img_callback in decode() 2022-10-03 05:47:32 -04:00
Lincoln Stein
7c6dbcb14a use right value for step arg in img_callback 2022-10-03 05:46:23 -04:00
Lincoln Stein
8e97bc24a4 restore step argument to step_callback 2022-10-03 05:38:43 -04:00
ArDiouscuros
935a9d3c75 Update !fetch command, add documentation and autocomplete list
-- !fetch takes a second optional argument: the name of the file to save commands to
2022-10-03 10:38:22 +02:00