Commit Graph

737 Commits

Author SHA1 Message Date
Lincoln Stein
16e7cbdb38 tweaks to documentation and call signature for advanced prompting 2022-10-27 08:30:09 -04:00
Damian at mba
135c62f1a4 fix issue with hot-dog, improve () suppression 2022-10-27 07:37:48 -04:00
Lincoln Stein
799dc6d0df acceptable integration of new prompting system and inpainting
This was a difficult merge because both PR #1108 and #1243 made
changes to obscure parts of the diffusion code.

- prompt weighting, merging and cross-attention working
  - cross-attention does not work with runwayML inpainting
    model, but weighting and merging are tested and working
- CLI command parsing code rewritten in order to get embedded
  quotes right
- --hires now works with runwayML inpainting
- --embiggen does not work with runwayML and will give an error
- Added an --invert option to invert masks applied to inpainting
- Updated documentation
2022-10-27 01:51:35 -04:00
Damian at mba
79689e87ce fix crash making embeddings from too-long prompts with attention weights 2022-10-26 22:42:17 -04:00
Lincoln Stein
0d0481ce75 inpaint model progress
- working with plain prompts, weighted prompts and merge prompts
- not tested with prompt2prompt
2022-10-26 22:40:01 -04:00
Lincoln Stein
2daf187bdb working with 1.4, 1.5, not with inpainting 1.5 2022-10-26 18:25:48 -04:00
Lincoln Stein
9b7159720f resolve conflicts between PR #1108 and #1243 2022-10-26 15:37:24 -04:00
Kyle Schouviller
d05373d35a Force RGB for img2img 2022-10-26 12:12:08 -07:00
Kyle Schouviller
bd8bb8c80b Adding outpainting implementation (as part of inpaint). 2022-10-26 12:12:08 -07:00
Kyle Schouviller
dac1ab0a05 Better inpainting color-correction 2022-10-26 12:12:08 -07:00
Lincoln Stein
2f1c1e7695 Merge branch 'fix-prompts' of https://github.com/damian0815/InvokeAI into merge-prompt-and-inpaint-model 2022-10-26 08:50:55 -04:00
Lincoln Stein
2b6d78e436 minor cleanups
- remove --fnformat from canonicalized dream prompt arguments
  (not needed for image reproducibility)
- add -tm to canonicalized dream prompt arguments
  (definitely needed for image reproducibility)
2022-10-26 08:32:54 -04:00
Lincoln Stein
b1da13a984 minor cleanups
- change default model back to 1.4
- remove --fnformat from canonicalized dream prompt arguments
  (not needed for image reproducibility)
- add -tm to canonicalized dream prompt arguments
  (definitely needed for image reproducibility)
2022-10-26 08:29:56 -04:00
Lincoln Stein
906dafe3cd make variations work with inpainting model 2022-10-26 00:18:31 -04:00
Lincoln Stein
d3047c7cb0 do not encode init image in starting latent 2022-10-25 22:44:42 -04:00
tyler
62412f8398 fixing aspect ratio on hires 2022-10-25 21:28:50 -05:00
Lincoln Stein
ca2f579f43 prevent crash when providing empty quoted prompt ("") 2022-10-25 15:56:07 -04:00
Lincoln Stein
3c1ef48fe2 fix crash when doing img2img with ddim sampler and SD 1.5 2022-10-25 13:57:42 -04:00
Lincoln Stein
c732fd0740 Merge branch 'inpaint-model' of github.com:invoke-ai/InvokeAI into inpaint-model 2022-10-25 13:21:00 -04:00
Lincoln Stein
4352eb6628 stop crashes on non-square images 2022-10-25 13:17:06 -04:00
Lincoln Stein
1ae269b8e0 Merge branch 'development' into inpaint-model 2022-10-25 11:50:08 -04:00
Lincoln Stein
e33971fe2c plms works, bugs quashed
- The plms sampler now works with custom inpainting model
- Quashed bug that was causing generation on normal models to fail (oops!)
- Can now generate non-square images with custom inpainting model

Credits for advice and assistance during porting:

@any-winter-4079 (http://github.com/any-winter-4079)
@db3000 (Danny Beer http://github.com/db3000)
2022-10-25 11:44:01 -04:00
Lincoln Stein
b101be041b add support for runwayML custom inpainting model
This is still a work in progress but seems functional. It supports
inpainting, txt2img and img2img on the ddim and k* samplers (plms
still needs work, but I know what to do).

To test this, get the file `sd-v1-5-inpainting.ckpt` from
https://huggingface.co/runwayml/stable-diffusion-inpainting and place it
at `models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt`

Launch invoke.py with --model inpainting-1.5 and proceed as usual.

Caveats:

1. The inpainting model takes about 800 MB more memory than the standard
   1.5 model. This model will not work on 4 GB cards.

2. The inpainting model is temperamental. It wants you to describe the
   entire scene and not just the masked area to replace. So if you want
   to replace the parrot on a man's shoulder with a crow, the prompt
   "crow" may fail. Try "man with a crow on shoulder" instead. The
   symptom of a failed inpainting is that the area will be erased and
   replaced with background.

3. This has not been tested well. Please report bugs.
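
For illustration, here is a minimal, hedged sketch of driving the new model from
the Python API instead of the CLI; only `Generate` and `prompt2image` are named
elsewhere in this log, and the keyword arguments shown are assumptions that
mirror the CLI options.

```python
# Hedged sketch only: exercising the runwayML inpainting model from Python.
# The keyword names below (model, init_img, init_mask) mirror the CLI flags
# described in this log and are assumptions, not verified signatures.
from ldm.generate import Generate

gen = Generate(model='inpainting-1.5')

# Per caveat 2: describe the whole scene, not just the masked region.
# A prompt of just "crow" tends to erase the area and fill it with background.
results = gen.prompt2image(
    "man with a crow on his shoulder",
    init_img="outputs/man.png",             # hypothetical source image
    init_mask="outputs/shoulder_mask.png",  # hypothetical mask over the parrot
    steps=50,
)
```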
2022-10-25 10:45:15 -04:00
Lincoln Stein
aaf7a4f1d3 inpaint and txt2img working with ddim sampler 2022-10-25 10:00:28 -04:00
Lincoln Stein
99d23c4d81 fix merge conflicts 2022-10-25 07:30:26 -04:00
Lincoln Stein
5e8d1ca19f resolve conflicts 2022-10-25 07:17:54 -04:00
Lincoln Stein
a2e53892ec fixed syntax errors; now a channel mismatch issue remains 2022-10-25 00:47:13 -04:00
Lincoln Stein
be8a992b85 add missing file 2022-10-25 00:38:24 -04:00
Lincoln Stein
9bef643bf5 fix a few more metadata bugs
- facetool and upscale arguments now written into metadata
- cleaned up handling of !fetch command
2022-10-25 00:31:43 -04:00
Lincoln Stein
f6b31d51e0 fix incorrect handling of single quotes in prompts 2022-10-25 00:31:43 -04:00
Lincoln Stein
62e1cb48fd developer documentation fixes 2022-10-25 00:31:43 -04:00
Lincoln Stein
543464182f inpainting fix per PR #1218
- This is a merge of the final version of PR #1218 "Inpainting
  Improvements"

  Various merge conflicts made it easier to commit directly.

Author: Kyle0654
Co-Author: lstein
2022-10-25 00:31:42 -04:00
Lincoln Stein
83a3cc9eb4 start support for 1.5 inpainting model, not complete 2022-10-25 00:30:48 -04:00
Damian at mba
61a4897b71 re-enable tokenization logging 2022-10-24 11:49:47 +02:00
Damian at mba
194c8e1c2e Merge branch 'development' into fix-prompts 2022-10-24 11:28:37 +02:00
Damian at mba
44e4090909 re-enable legacy blend syntax 2022-10-24 11:16:52 +02:00
Damian at mba
0564397ee6 cleanup logs 2022-10-24 11:16:43 +02:00
Lincoln Stein
37d38f196e fix a few more metadata bugs
- facetool and upscale arguments now written into metadata
- cleaned up handling of !fetch command
2022-10-23 23:01:32 -04:00
Lincoln Stein
17aee48734 fix incorrect handling of single quotes in prompts 2022-10-23 23:01:32 -04:00
Lincoln Stein
9cdd78c6cb developer documentation fixes 2022-10-23 22:56:58 -04:00
Lincoln Stein
5561a95232 inpainting fix per PR #1218
- This is a merge of the final version of PR #1218 "Inpainting
  Improvements"

  Various merge conflicts made it easier to commit directly.

Author: Kyle0654
Co-Author: lstein
2022-10-23 22:52:32 -04:00
Lincoln Stein
27f0f3e52b Merge branch 'inpaint-improvement' of https://github.com/Kyle0654/InvokeAI into add-safety-checker 2022-10-23 22:37:43 -04:00
Lincoln Stein
b159b2fe42 add support for safety checker (NSFW filter)
Now you can activate the Hugging Face `diffusers` library safety check
for NSFW and other potentially disturbing imagery.

To turn on the safety check, pass --safety_checker at the command
line. For developers, the flag is `safety_checker=True` passed to
ldm.generate.Generate(). Once the safety checker is turned on, it
cannot be turned off unless you create a new Generate object.

When the safety checker is active, suspect images will be blurred and
a warning icon is added. There is also a warning message printed in
the CLI, but it can be a little hard to see because of its positioning
in the output stream.

There is a slight but noticeable delay when the safety checker runs.
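
A minimal sketch of the developer path just described, assuming only what this
message states (the `safety_checker=True` flag on `ldm.generate.Generate()`);
the rest is illustrative.

```python
# Sketch of enabling the safety checker via the API route described above.
# Only safety_checker=True comes from this message; other arguments are assumed.
from ldm.generate import Generate

gen = Generate(safety_checker=True)  # cannot be switched off again on this instance
images = gen.prompt2image("portrait of a sailor", steps=30)
# Suspect images come back blurred with a warning icon; a warning line is
# also printed to the CLI output stream.
```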

Note that invisible watermarking is *not* currently implemented. The
watermark code in the CompVis distribution uses a library
that does not seem to be able to retrieve the watermarks it creates,
and it does not appear that Hugging Face `diffusers` or other SD
distributions are doing any watermarking.
2022-10-23 22:26:18 -04:00
Damian at mba
63902f3d34 also apply conditioning during hires fix upscale 2022-10-24 02:08:55 +02:00
Damian at mba
1fb15d5c81 fix hires fix 2022-10-24 02:02:42 +02:00
Damian at mba
cc2042bd4c keep the effect of _start and _end arguments consistent across k* and other samplers 2022-10-24 01:43:35 +02:00
Damian at mba
ee4273d760 fix step count on ddim 2022-10-24 01:23:43 +02:00
Damian at mba
2619a0b286 allow longer substitutions without quotes for cross attention swap 2022-10-24 00:22:14 +02:00
Damian at mba
92c6a3812d catch fewer exceptions in prompt2image 2022-10-24 00:06:53 +02:00
Kyle Schouviller
0c34554170 Merge branch 'inpaint-improvement' of https://github.com/Kyle0654/InvokeAI into inpaint-improvement 2022-10-23 14:02:52 -07:00
Damian at mba
b0eb864a25 move attention weighting operations to postfix 2022-10-23 23:01:53 +02:00
Kyle Schouviller
1264cc2d36 Switch from dilate to erode to fix inpaint edges. Default model to 1.4 instead of 1.5. 2022-10-23 14:01:06 -07:00
Damian at mba
f7cd98c238 tweak default cross-attention values 2022-10-23 20:38:28 +02:00
Damian at mba
8e7d744c60 fix bad math 2022-10-23 19:43:35 +02:00
Damian at mba
9210bf7d3a also parse shape_freedom keyword 2022-10-23 19:40:00 +02:00
Damian at mba
8f35819ddf add shape_freedom arg to .swap() 2022-10-23 19:38:31 +02:00
Damian at mba
04d93f0445 for k* samplers, estimate step_index from sigma 2022-10-23 16:26:50 +02:00
Lincoln Stein
7e27f189cf minor fixes to inpaint code
1. If tensors are passed to inpaint as init_image and/or init_mask, then
   the post-generation image fixup code will be skipped.

2. Post-generation image fixup will work with either a black and white "L"
   or "RGB"  mask, or an "RGBA" mask.
2022-10-23 09:33:15 -04:00
Lincoln Stein
9472945299 ported code refactor changes from PR #1221
- pass a PIL.Image to img2img and inpaint rather than tensor
- To support clipseg, inpaint needs to accept an "L" or "1" format
  mask. Made the appropriate change.
2022-10-23 09:33:15 -04:00
Lincoln Stein
f25c1f900f add support for loading VAE autoencoders
To add a VAE autoencoder to an existing model:

1. Download the appropriate autoencoder and put it into
   models/ldm/stable-diffusion

   Note that you MUST use a VAE that was written for the
   original CompVis Stable Diffusion codebase. For v1.4,
   that would be the file named vae-ft-mse-840000-ema-pruned.ckpt
   that you can download from https://huggingface.co/stabilityai/sd-vae-ft-mse-original

2. Edit config/models.yaml to contain the following stanza, modifying `weights`
   and `vae` as required to match the weights and vae model file names. There is
   no requirement to rename the VAE file.

~~~
stable-diffusion-1.4:
  weights: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
  description: Stable Diffusion v1.4
  config: configs/stable-diffusion/v1-inference.yaml
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  width: 512
  height: 512
~~~

3. Alternatively from within the `invoke.py` CLI, you may use the command
   `!editmodel stable-diffusion-1.4` to bring up a simple editor that will
   allow you to add the path to the VAE.

4. If you are just installing InvokeAI for the first time, you can also
   use `!import_model models/ldm/stable-diffusion/sd-v1.4.ckpt` instead
   to create the configuration from scratch.

5. That's it!
2022-10-23 09:33:15 -04:00
Kyle Schouviller
493eaa7389 Improve inpainting by color-correcting result and pasting init image over result using mask 2022-10-23 09:33:15 -04:00
Lincoln Stein
ce6d618e3b outcropping improvements
- catch syntax errors in the outcrop coordinates
- work (after a fashion) on non-Invoke generated images
2022-10-23 09:33:00 -04:00
wfng92
8254ca9492 Removed duplicate fix_func for MPS 2022-10-23 09:32:59 -04:00
Damian at mba
7d677a63b8 cross attention control options 2022-10-23 14:58:25 +02:00
Lincoln Stein
93cba3fba5 Kyle0654 inpaint improvement - with refactoring from PR #1221 (#1)
* Removed duplicate fix_func for MPS

* add support for loading VAE autoencoders

To add a VAE autoencoder to an existing model:

1. Download the appropriate autoencoder and put it into
   models/ldm/stable-diffusion

   Note that you MUST use a VAE that was written for the
   original CompVis Stable Diffusion codebase. For v1.4,
   that would be the file named vae-ft-mse-840000-ema-pruned.ckpt
   that you can download from https://huggingface.co/stabilityai/sd-vae-ft-mse-original

2. Edit config/models.yaml to contain the following stanza, modifying `weights`
   and `vae` as required to match the weights and vae model file names. There is
   no requirement to rename the VAE file.

~~~
stable-diffusion-1.4:
  weights: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
  description: Stable Diffusion v1.4
  config: configs/stable-diffusion/v1-inference.yaml
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  width: 512
  height: 512
~~~

3. Alternatively from within the `invoke.py` CLI, you may use the command
   `!editmodel stable-diffusion-1.4` to bring up a simple editor that will
   allow you to add the path to the VAE.

4. If you are just installing InvokeAI for the first time, you can also
   use `!import_model models/ldm/stable-diffusion/sd-v1.4.ckpt` instead
   to create the configuration from scratch.

5. That's it!

* ported code refactor changes from PR #1221

- pass a PIL.Image to img2img and inpaint rather than tensor
- To support clipseg, inpaint needs to accept an "L" or "1" format
  mask. Made the appropriate change.

* minor fixes to inpaint code

1. If tensors are passed to inpaint as init_image and/or init_mask, then
   the post-generation image fixup code will be skipped.

2. Post-generation image fixup will work with either a black and white "L"
   or "RGB"  mask, or an "RGBA" mask.

Co-authored-by: wfng92 <43742196+wfng92@users.noreply.github.com>
2022-10-22 20:09:38 -07:00
Lincoln Stein
3e48b9ff85 cut over from karras to model noise schedule for higher steps
The k_samplers come with a "karras" noise schedule which performs
very well at low step counts but becomes noisy at higher ones.

This commit introduces a threshold (currently 30 steps) at which the
k samplers will switch over from using karras to the older model
noise schedule.
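
For illustration, a sketch of the switch-over against the k-diffusion helpers;
the 30-step constant comes from this message, while the wrapper and wiring are
assumptions rather than the actual InvokeAI code.

```python
# Illustrative only: use the karras schedule at low step counts and the
# model's own schedule above the threshold. Everything except the
# k-diffusion calls is an assumption, not the InvokeAI implementation.
import k_diffusion as K

KARRAS_MAX_STEPS = 30  # threshold mentioned in this commit

def choose_sigmas(denoiser, steps):
    """denoiser is assumed to be a K.external.CompVisDenoiser-style wrapper."""
    if steps <= KARRAS_MAX_STEPS:
        return K.sampling.get_sigmas_karras(
            n=steps,
            sigma_min=denoiser.sigmas[0].item(),
            sigma_max=denoiser.sigmas[-1].item(),
        )
    return denoiser.get_sigmas(steps)  # the model's own noise schedule
```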
2022-10-22 23:02:50 -04:00
Lincoln Stein
a6e7aa8f97 Merge branch 'development' into patch-1 2022-10-22 19:28:50 -04:00
Lincoln Stein
51fdbe22d2 add support for loading VAE autoencoders
To add a VAE autoencoder to an existing model:

1. Download the appropriate autoencoder and put it into
   models/ldm/stable-diffusion

   Note that you MUST use a VAE that was written for the
   original CompVis Stable Diffusion codebase. For v1.4,
   that would be the file named vae-ft-mse-840000-ema-pruned.ckpt
   that you can download from https://huggingface.co/stabilityai/sd-vae-ft-mse-original

2. Edit config/models.yaml to contain the following stanza, modifying `weights`
   and `vae` as required to match the weights and vae model file names. There is
   no requirement to rename the VAE file.

~~~
stable-diffusion-1.4:
  weights: models/ldm/stable-diffusion-v1/sd-v1-4.ckpt
  description: Stable Diffusion v1.4
  config: configs/stable-diffusion/v1-inference.yaml
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
  width: 512
  height: 512
~~~

3. Alternatively from within the `invoke.py` CLI, you may use the command
   `!editmodel stable-diffusion-1.4` to bring up a simple editor that will
   allow you to add the path to the VAE.

4. If you are just installing InvokeAI for the first time, you can also
   use `!import_model models/ldm/stable-diffusion/sd-v1.4.ckpt` instead
   to create the configuration from scratch.

5. That's it!
2022-10-22 19:27:46 -04:00
Kyle Schouviller
3b01e6e423 Improve inpainting by color-correcting result and pasting init image over result using mask 2022-10-22 14:56:33 -07:00
Lincoln Stein
2e14ba8716 Let the text-to-mask .mask.png file be used as a mask
Ironically, the black and white mask file generated by the
`invoke> !mask` command could not be passed as the mask to
`img2img`. This is now fixed and the documentation updated.
2022-10-22 13:53:23 -04:00
Lincoln Stein
7308022bc7 outcropping improvements
- catch syntax errors in the outcrop coordinates
- work (after a fashion) on non-Invoke generated images
2022-10-22 13:38:32 -04:00
Damian at mba
8273c04575 wip implementing options in diffuse step 2022-10-22 12:15:34 +02:00
Damian at mba
ee7d4d712a parsing CrossAttentionControlSubstitute options works 2022-10-22 11:27:56 +02:00
Lincoln Stein
554445a985 remove debug statement 2022-10-21 21:31:41 -04:00
Lincoln Stein
b2bf2b08ff Merge branch 'model-switching' into development 2022-10-21 21:27:59 -04:00
wfng92
e7573ac90f Removed duplicate fix_func for MPS 2022-10-22 09:03:31 +08:00
Damian at mba
cdb664f6e5 Merge branch 'development' into fix-prompts 2022-10-21 21:34:09 +02:00
Lincoln Stein
1ca517d73b Merge branch 'fix-high-step-count' of https://github.com/holstvoogd/InvokeAI into holstvoogd-fix-high-step-count 2022-10-21 13:58:00 -04:00
Lincoln Stein
c9f9eed04e resolve numerous small merge bugs
- This merges PR #882

Coauthor: ArDiouscuros
2022-10-21 12:57:15 -04:00
Lincoln Stein
55db9dba0a Merge branch 'Improved-fetch-and-option-to-replay-commands-from-file' of https://github.com/ArDiouscuros/stable-diffusion into ArDiouscuros-Improved-fetch-and-option-to-replay-commands-from-file
- various small conflicts fixed
2022-10-21 10:12:35 -04:00
Damian at mba
64051d081c cleanup 2022-10-21 15:07:11 +02:00
Lincoln Stein
ddb007af65 Merge branch 'development' into fix-high-step-count 2022-10-21 06:55:17 -04:00
Damian at mba
e574a1574f txt2mask.py now tracking development again 2022-10-21 12:42:07 +02:00
Damian at mba
2bf9f1f0d8 rename StructuredConditioning to ExtraConditioningInfo 2022-10-21 12:18:40 +02:00
Damian at mba
8142b72bcd Merge remote-tracking branch 'upstream/development' into fix-prompts 2022-10-21 11:59:44 +02:00
Damian at mba
dc2f30a34e put back txt2mask import 2022-10-21 11:59:42 +02:00
Lincoln Stein
be7de4849c Merge branch 'development' into model-switching 2022-10-21 00:55:52 -04:00
Lincoln Stein
83e6ab08aa further improvements to model loading
- code for committing config changes to models.yaml now in module
  rather than in invoke script
- model marked "default" is now loaded if model not specified on
  command line
- uncache changed models when edited, so that they reload properly
- removed laion from models.yaml and added stable-diffusion-1.5
2022-10-21 00:28:54 -04:00
Damian at mba
b385fdd7de non-normalized blend 2022-10-21 04:34:53 +02:00
Damian at mba
d965540103 more blend fixes 2022-10-21 04:23:19 +02:00
Damian at mba
404d59b1b8 fix blend 2022-10-21 04:18:17 +02:00
Damian at mba
4c1267338b bring in attention etc. 2022-10-21 03:54:13 +02:00
Damian at mba
2e0b1c4c8b ok now we're cooking 2022-10-21 03:29:50 +02:00
Damian at mba
da75876639 better support for word.swap(otherWord) without parentheses or quotes 2022-10-21 00:08:28 +02:00
Jan Skurovec
d4d1014c9f fix for 'model is not defined' when loading embedding 2022-10-20 17:31:46 -04:00
wfng92
3e0a7b6229 Correct color channels in upscale using array slicing 2022-10-20 16:52:07 -04:00
Damian at mba
da88097aba fix prompt handling in conditioning.py 2022-10-20 21:41:32 +02:00
Damian at mba
3f13dd3ae8 prompt parsing is now much more robust 2022-10-20 21:05:36 +02:00
Damian at mba
79b4afeae7 parser working with basic escapes 2022-10-20 16:56:34 +02:00
Damian at mba
da223dfe81 wip re-writing parts of prompt parser 2022-10-20 15:56:46 +02:00
Lincoln Stein
a357bf4f19 add !mask command to view output of clipseg
- The !mask command takes an image path, a text prompt, and
  (optionally) a masking threshold. It creates a mask over the region
  indicated by the prompt, and outputs several files that show which
  regions will be masked by the chosen prompt and threshold.

- The mask images should not be passed directly to img2img because
  they are designed for visualization only. Instead, use the
  --text_mask option to pass the selected prompt and threshold.

- See docs/features/INPAINTING.md for details.
2022-10-20 06:56:50 -04:00
Lincoln Stein
2ca4242f5f fix clipseg loading problems
- The directory "models" in the main InvokeAI directory was conflicting
  with loading "models.clipseg". To fix this issue, I have renamed the
  models.clipseg to clipseg_models.clipseg, and applied this change to
  the 'models-rename' branch of invoke-ai's fork of clipseg.
2022-10-20 06:56:50 -04:00
Damian at mba
c9d27634b4 bring in prompt parser from fix-prompts branch
attention is parsed but ignored, the old blend syntax doesn't work,
conjunctions are parsed but ignored; the only part that's used
here is the new .blend() syntax and cross-attention control
using .swap()
2022-10-20 12:01:48 +02:00
psychedelicious
87469a5fdd Flips channels using array slicing instead of using OpenCV 2022-10-19 23:44:47 -04:00
psychedelicious
4101127011 Corrects color channels in face restoration; Fixes #1167 2022-10-19 23:32:57 -04:00
Damian at mba
42883545f9 add prompt language support for cross-attention .swap 2022-10-20 01:42:04 +02:00
Damian at mba
61357e4e6e be less verbose when assembling prompt 2022-10-19 21:12:07 +02:00
Damian at mba
c6ae9f1176 remove unnecessary assertion 2022-10-19 21:12:07 +02:00
Damian at mba
11d7e6b92f undo unwanted changes 2022-10-19 21:12:07 +02:00
Damian at mba
c3b992db96 Squashed commit of the following:
commit 9bb0b5d0036c4dffbb72ce11e097fae4ab63defd
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sat Oct 15 23:43:41 2022 +0200

    undo local_files_only stuff

commit eed93f5d30c34cfccaf7497618ae9af17a5ecfbb
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sat Oct 15 23:40:37 2022 +0200

    Revert "Merge branch 'development-invoke' into fix-prompts"

    This reverts commit 7c40892a9f184f7e216f14d14feb0411c5a90e24, reversing
    changes made to e3f2dd62b0548ca6988818ef058093a4f5b022f2.

commit f06d6024e345c69e6d5a91ab5423925a68ee95a7
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Thu Oct 13 23:30:16 2022 +0200

    more efficiently handle multiple conditioning

commit 5efdfcbcd980ce6202ab74e7f90e7415ce7260da
Merge: b9c0dc5 ac08bb6
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Thu Oct 13 14:51:01 2022 +0200

    Merge branch 'optional-disable-karras-schedule' into fix-prompts

commit ac08bb6fd25e19a9d35cf6c199e66500fb604af1
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Thu Oct 13 14:50:43 2022 +0200

    append '*use_model_sigmas*' to prompt string to use model sigmas

commit 70d8c05a3ff329409f76204f4af94e55d468ab8b
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Thu Oct 13 12:12:17 2022 +0200

    make karras scheduling switchable

    commit d60df54f69 replaced the model's
    own scheduling with karras scheduling. this has changed image generation
    (seems worse now?)

    this commit wraps the change in a bool.

commit b9c0dc5f1a658a0e6c3936000e9ae559e1c7a1db
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 20:16:00 2022 +0200

    add test of more complex conjunction

commit 9ac0c15cc0d7b5f6df3289d3ad474260972a17be
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 17:18:25 2022 +0200

    improve comments

commit ad33bce60590b87b2a93e90f16dc9d3e935d04a5
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 17:04:46 2022 +0200

    put back thresholding stuff

commit 4852c698a325049834ba0d4b358f07210bc7171a
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 14:25:02 2022 +0200

    notes on improving conjunction efficiency

commit a53bb1e5b68025d09642b935ae6a9a015cfaf2d6
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 14:14:33 2022 +0200

    optional weights support for Conjunction

commit fec79ab15e4f0c84dd61cb1b45a5e6a72ae4aaeb
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 12:07:27 2022 +0200

    fix blend error and log parsing output

commit 1f751c2a039f9c97af57b18e0f019512631d5a25
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 10:33:33 2022 +0200

    fix broken euler sampler

commit 02f8148d17efe4b6bde8d29b827092a0626363ee
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 10:24:20 2022 +0200

    cleanup prompt parser

commit 8028d49ae6c16c0d6ec9c9de9c12d56c32201421
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Wed Oct 12 10:14:18 2022 +0200

    explicit conjunction, improve flattening logic

commit 8a1710892185f07eb77483f7edae0fc4d6bbb250
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 22:59:30 2022 +0200

    adapt multi-conditioning to also work with ddim

commit 53802a839850d0d1ff017c6bafe457c4bed750b0
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 22:31:42 2022 +0200

    unconditioning is also fancy-prompt-syntaxable

commit 7c40892a9f184f7e216f14d14feb0411c5a90e24
Merge: e3f2dd6 dbe0da4
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 21:39:54 2022 +0200

    Merge branch 'development-invoke' into fix-prompts

commit e3f2dd62b0548ca6988818ef058093a4f5b022f2
Merge: eef0e48 06f542e
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 21:38:09 2022 +0200

    Merge remote-tracking branch 'upstream/development' into fix-prompts

commit eef0e484c2eaa1bd4e0e0b1d3f8d7bba38478144
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 21:26:25 2022 +0200

    fix run-on paren-less attention, add some comments

commit fd29afdf0e9f5e0cdc60239e22480c36ca0aaeca
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 21:03:02 2022 +0200

    python 3.9 compatibility

commit 26f7646eef7f39bc8f7ce805e747df0f723464da
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 20:58:42 2022 +0200

    first pass connecting PromptParser to conditioning

commit ae53dff3796d7b9a5e7ed30fa1edb0374af6cd8d
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 20:51:15 2022 +0200

    update frontend dist

commit 9be4a59a2d76f49e635474b5984bfca826a5dab4
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 19:01:39 2022 +0200

    fix issues with correctness checking FlattenedPrompt

commit 3be212323eab68e72a363a654124edd9809e4cf0
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 18:43:16 2022 +0200

    parsing nested seems to work pretty ok

commit acd73eb08cf67c27cac8a22934754321256f56a9
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 18:26:17 2022 +0200

    wip introducing FlattenedPrompt class

commit 71698d5c7c2ac855b690d8ef67e8830148c59eda
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 15:59:42 2022 +0200

    recursive attention weighting seems to actually work

commit a4e1ec6b20deb7cc0cd12737bdbd266e56144709
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 15:06:24 2022 +0200

    now apparently almost supported nested attention

commit da76fd1ddf22a3888cdc08fd4fed38d8b178e524
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 11 13:23:37 2022 +0200

    wip prompt parsing

commit dbe0da4572c2ac22f26a7afd722349a5680a9e47
Author: Kyle Schouviller <kyle0654@hotmail.com>
Date:   Mon Oct 10 22:32:35 2022 -0700

    Adding node-based invocation apps

commit 8f2a2ffc083366de74d7dae471b50b6f98a7c5f8
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Mon Oct 10 19:03:18 2022 +0200

    fix merge issues

commit 73118dee2a8f4891700756e014caf1c9ca629267
Merge: fd00844 12413b0
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Mon Oct 10 12:42:48 2022 +0200

    Merge remote-tracking branch 'upstream/development' into fix-prompts

commit fd0084413541013c2cf71e006af0392719bef53d
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Mon Oct 10 12:39:38 2022 +0200

    wip prompt parsing

commit 0be9363db9307859d2b65cffc6af01f57d7873a4
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Mon Oct 10 03:20:06 2022 +0200

    better +/- attention parsing

commit 5383f691874a58ab01cda1e4fac6cf330146526a
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Mon Oct 10 02:27:47 2022 +0200

    prompt parser seems to work

commit 591d098a33ce35462428d8c169501d8ed73615ab
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 9 20:25:37 2022 +0200

    supports weighting unconditioning, cross-attention with |

commit 7a7220563aa05a2980235b5b908362f66b728309
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 9 18:15:56 2022 +0200

    i think cross attention might be working?

commit 951ed391e7126bff228c18b2db304ad28d59644a
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 9 16:04:54 2022 +0200

    weighted CFG denoiser working with a single item

commit ee532a0c2827368c9e45a6a5f3975666402873da
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Sun Oct 9 06:33:40 2022 +0200

    wip probably doesn't work or compile

commit 14654bcbd207b9ca28a6cbd37dbd967d699b062d
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 7 18:11:48 2022 +0200

    use tan() to calculate embedding weight for <1 attentions

commit 1a8e76b31aa5abf5150419ebf3b29d4658d07f2b
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 7 16:14:54 2022 +0200

    fix bad math.max reference

commit f697ff896875876ccaa1e5527405bdaa7ed27cde
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 7 15:55:57 2022 +0200

    respect http[s]x protocol when making socket.io middleware

commit 41d3dd4eeae8d4efb05dfb44fc6d8aac5dc468ab
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 7 13:29:54 2022 +0200

    fractional weighting works, by blending with prompts excluding the word

commit 087fb6dfb3e8f5e84de8c911f75faa3e3fa3553c
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 7 10:52:03 2022 +0200

    wip doing weights <1 by averaging with conditioning absent the lower-weighted fragment

commit 3c49e3f3ec7c18dc60f3e18ed2f7f0d97aad3a47
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Fri Oct 7 10:36:15 2022 +0200

    notate CFGDenoiser, perhaps

commit d2bcf1bb522026ebf209ad0103f6b370383e5070
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Thu Oct 6 05:04:47 2022 +0200

    hack blending syntax to test attention weighting more extensively

commit 94904ef2cf917f74ec23ef7a570e12ff8255b048
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Thu Oct 6 04:56:37 2022 +0200

    conditioning works, apparently

commit 7c6663ddd70f665fd1308b6dd74f92ca393a8df5
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Thu Oct 6 02:20:24 2022 +0200

    attention weighting, definitely works in positive direction

commit 5856d453a9b020bc1a28ff643ae1f58c12c9be73
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 4 19:02:14 2022 +0200

    wip bubbling weights down

commit a2ed14fd9b7d3cb36b6c5348018b364c76d1e892
Author: Damian at mba <damian@frey.NOSPAMco.nz>
Date:   Tue Oct 4 17:35:39 2022 +0200

    bring in changes from PC
2022-10-19 21:12:07 +02:00
Damian at mba
1ffd4a9e06 refactored single diffusion path seems to be working for all samplers 2022-10-19 21:08:03 +02:00
Damian at mba
147d39cb7c wip refactoring shared InvokeAI diffuser mixin to component 2022-10-19 21:08:03 +02:00
Damian at mba
824cb201b1 pass img2img ddim/plms edited conditioning through kwargs 2022-10-19 21:08:03 +02:00
Damian at mba
582880b314 add cross-attention support to im2img; prevent inpainting from crashing 2022-10-19 21:08:03 +02:00
Damian at mba
2b79a716aa wip hi-res fix 2022-10-19 21:08:03 +02:00
Damian at mba
d572af2acf fix cross-attention on k* samplers 2022-10-19 21:08:03 +02:00
Damian at mba
54e6a68acb wip bringing cross-attention to PLMS and DDIM 2022-10-19 21:08:03 +02:00
Damian at mba
09f62032ec cleanup and clarify comments 2022-10-19 21:08:03 +02:00
Damian at mba
711ffd238f cleanup 2022-10-19 21:08:03 +02:00
Damian at mba
056cb0d8a8 sliced cross-attention wrangler works 2022-10-19 21:08:03 +02:00
Damian at mba
37a204324b go back to using InvokeAI attention 2022-10-19 21:08:03 +02:00
Damian at mba
1fc1f8bf05 cross-attention working with placeholder {} syntax 2022-10-19 21:06:42 +02:00
Damian at mba
8ff507b03b runs but doesn't work properly - see below for test prompt
test prompt:
"a cat sitting on a car {a dog sitting on a car}" -W 384 -H 256 -s 10 -S 12346 -A k_euler
note that the substitution of dog for cat is currently hard-coded (ksampler.py, lines 43-44)
2022-10-19 21:06:42 +02:00
Damian at mba
33d6603fef cleanup initial experiments 2022-10-19 21:06:42 +02:00
Damian at mba
b0b1993918 initial experiments 2022-10-19 21:06:42 +02:00
Lincoln Stein
92d4dfaabf Merge branch 'asymmetric-tiling' of https://github.com/carson-katri/InvokeAI into carson-katri-asymmetric-tiling 2022-10-19 13:46:07 -04:00
Lincoln Stein
62dd3b7d7d resolve models.clipseg vs clipseg ambiguity 2022-10-18 23:09:26 -04:00
Carson Katri
15aa7593f6 Merge branch 'development' into asymmetric-tiling 2022-10-18 22:37:18 -04:00
Lincoln Stein
9b3ac92c24 fix incorrect import of clipseg 2022-10-18 19:28:30 -04:00
Carson Katri
9d19213b8a Merge branch 'development' of github.com:lstein/stable-diffusion into asymmetric-tiling 2022-10-18 13:34:10 -04:00
Lincoln Stein
3a1003f702 Fix typo
Taken from `main` PR #1147 
Author: eltociear
2022-10-18 08:29:26 -04:00
Lincoln Stein
20551857da add clipseg support for creating inpaint masks from text
On the command line, the new option is --text_mask or -tm.
Example:

```
invoke> a baseball -I /path/to/still_life.png -tm orange
```

This will find the orange fruit in the still life painting and replace
it with an image of a baseball.
2022-10-18 08:27:48 -04:00
Lincoln Stein
32122e0312 clipseg library and environment in place 2022-10-18 08:27:48 -04:00
Lincoln Stein
230de023ff resolve doc conflicts during merge 2022-10-18 08:27:33 -04:00
Lincoln Stein
e6fc8af249 Fix typo
Taken from `main` PR #1147 
Author: eltociear
2022-10-18 08:08:58 -04:00
wfng92
1c2bd275fe Fix img2img DDIM index out of bound
Added a [community solution](https://github.com/CompVis/stable-diffusion/issues/111#issuecomment-1229483511) to fix an index-out-of-bounds error when doing img2img generation with the `ddim` sampler. Also restored `steps_out` to be `ddim_timesteps + 1`, since the removal was meant to fix the [1000 steps issue](https://github.com/CompVis/stable-diffusion/issues/111)
2022-10-17 22:32:15 -04:00
Lincoln Stein
0cf11ce488 add option to CLI and pngwriter that allows user to set PNG compression level
- In CLI: the argument is --png_compression <0..9> (-z<0..9>)
- In API, pass `compress_level` to PngWriter.save_image_and_prompt_to_png()

Compression ranges from 0 (no compression) to 9 (maximum compression).
Default value is 6 (as specified by the Pillow package).

This addresses an issue first raised in #652.
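
A minimal sketch of the API route; only the `compress_level` keyword is taken
from this message, and the import path and surrounding arguments are assumptions.

```python
# Sketch only: saving with an explicit PNG compression level.
# compress_level comes from this commit; the other details are assumed.
from PIL import Image
from ldm.invoke.pngwriter import PngWriter  # import path assumed

image = Image.new("RGB", (512, 512))        # stand-in for a generated image
writer = PngWriter("./outputs")
writer.save_image_and_prompt_to_png(
    image,
    dream_prompt='"a sunset over the sea" -s 30 -S 42',
    name="000001.42.png",
    compress_level=9,                       # 0 = none ... 9 = maximum (default 6)
)
```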
2022-10-17 22:27:47 -04:00
Carson Katri
3b79b935a3 Merge branch 'development' into asymmetric-tiling 2022-10-17 20:15:42 -04:00
Carson Katri
99581dbbf7 Split seamless config into separate file 2022-10-17 19:31:20 -04:00
db3000
9e599c65c5 Only output facetool parameters if enhancing faces 2022-10-17 11:49:07 -04:00
Carson Katri
e03a3fcf68 Add seamless_axes options 2022-10-16 22:45:18 -04:00
Lincoln Stein
ef2058824a add a strength value to inpaint_replace
- --inpaint_replace 0.X will cause inpainting to ignore what is under
  the masked region with a strength ranging from 0 (don't ignore at all)
  to 1.0 (ignore completely)
- sync with upstream development
- update docs
2022-10-16 10:06:47 -04:00
Lincoln Stein
6f93dc7712 cleanup inpainting and img2img
- add a `--inpaint_replace` option that fills masked regions with
  latent noise. This allows radical changes to inpainted regions
  at the cost of losing context.
- fix up readline, arg processing and metadata writing to accommodate
  this change
- fixed bug in storage and retrieval of variations, discovered incidentally
  during testing
- update documentation
2022-10-16 08:50:55 -04:00
Conor Reid
a3a50bb886 Update generate.py
Fixed spelling mistake (open source king)
2022-10-15 16:02:14 -04:00
Lincoln Stein
a705a5a0aa enhance support for model switching and editing
- Error checks for invalid model
- Add !del_model command to invoke.py
- Add del_model() method to model_cache
- Autocompleter kept in sync with model addition/subtraction.
2022-10-15 15:46:29 -04:00
Lincoln Stein
c4fb8e304b fix noisy images at high step counts
At step counts greater than ~75, the ksamplers start producing noisy
images when using the Karras noise schedule. This PR reverts to using
the model's own noise schedule, which eliminates the problem at the
cost of slowing convergence at lower step counts.

This PR also introduces a new CLI `--save_intermediates <n>` argument,
which will save every nth intermediate image into a subdirectory
named `intermediates/<image_prefix>`.

Addresses issue #1083.
2022-10-14 16:19:45 -04:00
Lincoln Stein
fe2a2cfc8b Merge branch 'development' into model-switching 2022-10-14 13:18:59 -04:00
Lincoln Stein
32dab7d4bf close #1094, dangling gfpgan_strength reference 2022-10-14 07:45:10 -04:00
db3000
ce5e57d828 Generalize facetool strength argument 2022-10-14 00:03:06 -04:00
Lincoln Stein
e98fe9c22d fix noisy images at high step counts
At step counts greater than ~75, the ksamplers start producing noisy
images when using the Karras noise schedule. This PR reverts to using
the model's own noise schedule, which eliminates the problem at the
cost of slowing convergence at lower step counts.

This PR also introduces a new CLI `--save_intermediates <n>` argument,
which will save every nth intermediate image into a subdirectory
named `intermediates/<image_prefix>`.

Addresses issue #1083.
2022-10-14 00:01:59 -04:00