Commit Graph

95 Commits

Author SHA1 Message Date
psychedelicious
b194180f76 feat(backend): make fast latents method static 2023-03-16 20:03:08 +11:00
psychedelicious
fb30b7d17a feat(backend): add image_to_dataURL util 2023-03-16 20:03:08 +11:00
Jonathan
44f489d581 Merge branch 'main' into fix-seampaint 2023-03-14 06:19:25 -05:00
blessedcoolant
0a761d7c43 fix(inpaint): Seam painting being broken 2023-03-15 00:00:08 +13:00
Damian Stewart
a0f47aa72e Merge branch 'main' into main 2023-03-14 11:41:29 +01:00
Lincoln Stein
d840c597b5 fix --png_compression command line argument
- The value of png_compression was always 6, regardless of the value provided to the
  --png_compression argument. This commit fixes that bug.
- It also fixes an inconsistency between the maximum range of png_compression
  and the help text.

- Closes #2945
2023-03-14 00:24:05 -04:00
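
A minimal sketch of what a working --png_compression flag looks like, assuming Pillow is the PNG encoder; the option and save call below are illustrative, not the actual InvokeAI code.

```
# Illustrative sketch only: a CLI option whose value actually reaches the PNG
# encoder. Pillow's compress_level for PNG ranges from 0 to 9 (default 6),
# which is the range the help text should describe.
import argparse
from PIL import Image

parser = argparse.ArgumentParser()
parser.add_argument(
    "--png_compression",
    type=int,
    choices=range(0, 10),
    default=6,
    help="zlib compression level for PNG output, 0 (none) to 9 (maximum)",
)
args = parser.parse_args()

image = Image.new("RGB", (64, 64))
# Forward the parsed value instead of silently falling back to the default of 6.
image.save("out.png", compress_level=args.png_compression)
```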
jeremy
e0e01f6c50 Reduced Pickle ACE attack surface
Prior to this commit, all models would be loaded with the extremely unsafe `torch.load` method, except those with the exact extension `.safetensors`. Even a change in casing (e.g. `saFetensors`, `Safetensors`, etc.) would cause the file to be loaded with `torch.load` instead of the much safer `safetensors.torch.load_file`.
If a malicious actor renamed an infected `.ckpt` to something like `.SafeTensors` or `.SAFETENSORS`, an unsuspecting user would think they were loading a safe `.safetensors` file, but would in fact be parsing an unsafe pickle file and executing the attacker's payload. This commit fixes the vulnerability by reversing the loading-method decision logic so that the unsafe `torch.load` is used only when the file extension is exactly `.ckpt`.
2023-03-13 16:16:30 -04:00
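
A minimal sketch of the reversed decision logic described above, assuming a hypothetical `load_model_weights` helper; only a file whose suffix is exactly `.ckpt` falls back to `torch.load`, everything else goes through the safetensors loader.

```
# Hypothetical helper illustrating the reversed dispatch: torch.load is used
# only for files whose suffix is exactly ".ckpt"; everything else is handled by
# the safetensors loader, so a renamed pickle (".SafeTensors", ".SAFETENSORS",
# ...) fails to parse instead of executing attacker code.
from pathlib import Path

import torch
import safetensors.torch


def load_model_weights(path: str) -> dict:
    if Path(path).suffix == ".ckpt":
        # Legacy pickle checkpoint: unsafe, but only reached for an exact ".ckpt".
        return torch.load(path, map_location="cpu")
    return safetensors.torch.load_file(path, device="cpu")
```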
JPPhoto
596ba754b1 Removed seed from get_make_image. 2023-03-13 08:15:46 -05:00
JPPhoto
b980e563b9 Fix bug #2931 2023-03-13 08:11:09 -05:00
Kyle Schouviller
3ee2798ede [fix] Get the model again if current model is empty 2023-03-12 22:26:11 -04:00
Lincoln Stein
6a77634b34 remove unneeded generate initializer routines 2023-03-11 17:14:03 -05:00
Lincoln Stein
8ca91b1774 add restoration services to nodes 2023-03-11 17:00:00 -05:00
Lincoln Stein
3aa1ee1218 restore NSFW checker 2023-03-11 16:16:44 -05:00
Lincoln Stein
c14241436b move ModelManager initialization into its own module and restore embedding support 2023-03-11 10:56:53 -05:00
Lincoln Stein
d612f11c11 initialize InvokeAIGenerator object with model, not manager 2023-03-11 09:06:46 -05:00
Lincoln Stein
250b0ab182 add seamless tiling support 2023-03-11 08:33:23 -05:00
Lincoln Stein
675dd12b6c add attention map images to output object 2023-03-11 08:07:01 -05:00
Lincoln Stein
7e76eea059 add embiggen, remove complicated constructor 2023-03-11 07:50:39 -05:00
Lincoln Stein
fe75b95464 Merge branch 'refactor/nodes-on-generator' of github.com:invoke-ai/InvokeAI into refactor/nodes-on-generator 2023-03-10 19:36:40 -05:00
Lincoln Stein
95954188b2 remove factory pattern
The factory pattern is now removed. Typical usage of InvokeAIGenerator is now:

```
from invokeai.backend.generator import (
    InvokeAIGeneratorBasicParams,
    Txt2Img,
    Img2Img,
    Inpaint,
)

# `manager` is an existing ModelManager instance
params = InvokeAIGeneratorBasicParams(
    model_name='stable-diffusion-1.5',
    steps=30,
    scheduler='k_lms',
    cfg_scale=8.0,
    height=640,
    width=640,
)
print('=== TXT2IMG TEST ===')
txt2img = Txt2Img(manager, params)
outputs = txt2img.generate(prompt='banana sushi', iterations=2)

for output in outputs:
    print(f'image={output.image}, seed={output.seed}, '
          f'model={output.params.model_name}, hash={output.model_hash}, '
          f'steps={output.params.steps}')
```

The `params` argument is optional, so if you wish to accept default
parameters and selectively override them, just do this:

```
outputs = Txt2Img(manager).generate(
    prompt='banana sushi',
    steps=50,
    scheduler='k_heun',
    model_name='stable-diffusion-2.1',
)
```
2023-03-10 19:33:04 -05:00
Jonathan
370e8281b3 Merge branch 'main' into refactor/nodes-on-generator 2023-03-10 12:34:00 -06:00
Lincoln Stein
685df33584 fix bug that caused black images when converting ckpts to diffusers in RAM (#2914)
The cause of the problem was inadvertent activation of the safety checker.

When conversion occurs on disk, the safety checker is disabled during loading.
However, when converting in RAM, the safety checker was not removed, resulting
in it activating even when the user specified --no-nsfw_checker.

This PR fixes the problem by detecting when the caller has requested the InvokeAI
StableDiffusionGeneratorPipeline class to be returned and setting the safety checker
to None. This is not done for diffusers models destined for disk, because that would
make them incompatible with the merge script.

Closes #2836
2023-03-10 18:11:32 +00:00
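
A hedged sketch of the behaviour described above; the conversion helper and flag names are illustrative, not the actual InvokeAI API.

```
# Illustrative only: when the caller asks for an in-RAM pipeline, the safety
# checker is dropped so --no-nsfw_checker is honored; models saved to disk
# keep it so they remain compatible with the merge script.
def convert_ckpt(ckpt_path, return_pipeline_in_ram=False, output_dir="converted-model"):
    pipe = convert_ckpt_to_diffusers(ckpt_path)  # hypothetical conversion helper
    if return_pipeline_in_ram:
        pipe.safety_checker = None  # honor --no-nsfw_checker for in-memory use
        return pipe
    # Pipelines destined for disk are saved with the safety checker intact.
    pipe.save_pretrained(output_dir)
    return pipe
```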
Lincoln Stein
14c8738a71 fix dangling reference to _model_to_cpu and missing variable model_description 2023-03-09 21:41:45 -05:00
Kevin Turner
1a829bb998 pipeline: remove code for legacy model 2023-03-09 18:15:12 -08:00
Kevin Turner
9d339e94f2 backend..conditioning: remove code for legacy model 2023-03-09 18:15:12 -08:00
Kevin Turner
ad7b1fa6fb model_manager: model to/from CPU methods are implemented on the Pipeline 2023-03-09 18:15:12 -08:00
Kevin Turner
42355b70c2 fix(Pipeline.debug_latents): fix import for moved utility function 2023-03-09 18:15:12 -08:00
Kevin Turner
faa2558e2f chore: add new argument to overridden method to match new signature upstream 2023-03-09 18:15:12 -08:00
Kevin Turner
081397737b typo: docstring spelling fixes
looks like they've already been corrected in the upstream copy
2023-03-09 18:15:12 -08:00
Kevin Turner
55d36eaf4f fix: image_resized_to_grid_as_tensor: reconnect dropped multiple_of argument 2023-03-09 18:15:12 -08:00
Lincoln Stein
c11e823ff3 remove unused _wrap_results 2023-03-09 16:30:06 -05:00
Lincoln Stein
cde0b6ae8d Merge branch 'main' into refactor/nodes-on-generator 2023-03-09 01:52:45 -05:00
Lincoln Stein
b679a6ba37 model manager defaults to consistent values of device and precision 2023-03-09 01:09:54 -05:00
Lincoln Stein
5d37fa6e36 node-based txt2img working without generate 2023-03-09 00:18:29 -05:00
Damian Stewart
9ee648e0c3 Merge branch 'main' into feat_longer_prompts 2023-03-09 00:13:01 +01:00
Jonathan
2db180d909 Make img2img strength 1 behave the same as txt2img (#2895)
* Fix img2img and inpainting code so a strength of 1 behaves the same as txt2img.

* Make generated images identical to their txt2img counterparts when strength is 1.
2023-03-08 22:50:16 +01:00
damian
69e2dc0404 update for compel changes 2023-03-08 20:45:01 +01:00
Damian Stewart
a38b75572f don't log excess tokens as truncated 2023-03-08 20:00:18 +01:00
damian
768e969c90 cleanup and fix kwarg 2023-03-08 18:00:54 +01:00
Damian Stewart
57db66634d longer prompts wip 2023-03-08 14:25:48 +01:00
Lincoln Stein
87789c1de8 add InvokeAIGenerator and InvokeAIGeneratorFactory classes 2023-03-07 23:52:53 -05:00
Jonathan
c3ff9e6be8 Fixed startup issues with the web UI. (#2876) 2023-03-06 18:29:28 -05:00
Lincoln Stein
7c60068388 Merge branch 'main' into bugfix/fix-convert-sd-to-diffusers-error 2023-03-06 08:20:29 -05:00
Lincoln Stein
94daaa4abf fix call signature of import_diffuser_model() 2023-03-05 23:37:59 -05:00
Lincoln Stein
2f9dcd7906 support both epsilon and v-prediction v2 inference
There are actually two Stable Diffusion v2 legacy checkpoint
configurations:

1) "epsilon" prediction type for Stable Diffusion v2 Base
2) "v-prediction" type for Stable Diffusion v2-768

This commit adds the configuration file needed for epsilon prediction
type models as well as the UI that prompts the user to select the
appropriate configuration file when the code can't do so
automatically.
2023-03-05 22:51:40 -05:00
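
A small sketch of the configuration choice described above, expressed as a lookup plus a user prompt; the config file names are assumptions based on the commit message, not confirmed paths.

```
# Hypothetical lookup illustrating the two legacy SD v2 checkpoint configs:
# "epsilon" prediction for v2 Base and "v-prediction" for v2-768. File names
# are illustrative; when the type cannot be detected automatically, the user
# is asked to choose.
from typing import Optional

V2_CONFIGS = {
    "epsilon": "configs/stable-diffusion/v2-inference.yaml",
    "v_prediction": "configs/stable-diffusion/v2-inference-v.yaml",
}


def pick_v2_config(prediction_type: Optional[str]) -> str:
    if prediction_type in V2_CONFIGS:
        return V2_CONFIGS[prediction_type]
    # Cannot be determined automatically: prompt the user to choose.
    choice = input("Checkpoint type? [e]psilon (v2 Base) or [v]-prediction (v2-768): ")
    return V2_CONFIGS["v_prediction" if choice.lower().startswith("v") else "epsilon"]
```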
blessedcoolant
e537b5d8e1 Revert "Merge branch 'main' into bugfix/reenable-ckpt-conversion-to-ram"
This reverts commit e0e70c9222, reversing
changes made to 0b184913b9.
2023-03-06 14:29:39 +13:00
blessedcoolant
e0e70c9222 Merge branch 'main' into bugfix/reenable-ckpt-conversion-to-ram 2023-03-06 14:27:30 +13:00
Lincoln Stein
0b184913b9 Merge branch 'main' into bugfix/reenable-ckpt-conversion-to-ram 2023-03-05 12:37:43 -05:00
Lincoln Stein
92d012a92d Merge branch 'main' into enhance/use-new-diffusers-path 2023-03-05 12:30:24 -05:00
Lincoln Stein
fc187f263e deal with non-directories in diffusers/ 2023-03-05 12:29:52 -05:00