Commit Graph

989 Commits

Author SHA1 Message Date
Ryan Dick
7e94350351 Tidy DenoiseLatentsInvocation.prep_control_data(...) and fix some type errors. 2024-06-25 11:31:52 -07:00
Ryan Dick
c4e8549c73 Make DenoiseLatentsInvocation.prep_control_data(...) a staticmethod so that it can be called externally. 2024-06-25 11:31:52 -07:00
Ryan Dick
350a210835 Copy TiledStableDiffusionRefineInvocation as a starting point for TiledMultiDiffusionDenoiseLatents.py 2024-06-25 11:31:52 -07:00
Ryan Dick
ed781dbb0c Change tiling strategy to make TiledStableDiffusionRefineInvocation work with more tile shapes and overlaps. 2024-06-25 11:31:52 -07:00
Ryan Dick
b41ea963e7 Expose a few more params from TiledStableDiffusionRefineInvocation. 2024-06-25 11:31:52 -07:00
Ryan Dick
da5d105049 Add support for LoRA models in TiledStableDiffusionRefineInvocation. 2024-06-25 11:31:52 -07:00
Ryan Dick
5301770525 Add naive ControlNet support to TiledStableDiffusionRefineInvocation 2024-06-25 11:31:52 -07:00
Ryan Dick
534640ccde Rough prototype of TiledStableDiffusionRefineInvocation is working. 2024-06-25 11:31:52 -07:00
Ryan Dick
d5ab8cab5c WIP - TiledStableDiffusionRefine 2024-06-25 11:31:52 -07:00
Ryan Dick
4767301ad3 Minor improvements to LatentsToImageInvocation type hints. 2024-06-25 11:31:52 -07:00
Ryan Dick
21d7ca45e6 Expose vae_decode(...) as a staticmethod on LatentsToImageInvocation. 2024-06-25 11:31:52 -07:00
Ryan Dick
020e8eb413 Fix return type of prepare_noise_and_latents(...). 2024-06-25 11:31:52 -07:00
Ryan Dick
3d49541c09 Make init_scheduler() a staticmethod on DenoiseLatentsInvocation so that it can be called externally. 2024-06-25 11:31:52 -07:00
Ryan Dick
1ef266845a Only allow a single positive/negative prompt conditioning input for tiled refine. 2024-06-25 11:31:52 -07:00
Ryan Dick
a37589ca5f WIP on TiledStableDiffusionRefine 2024-06-25 11:31:52 -07:00
Ryan Dick
171a505f5e Convert several methods in DenoiseLatentsInvocation to staticmethods so that they can be called externally. 2024-06-25 11:31:52 -07:00
Ryan Dick
8004a0d5f5 Simplify the logic in prepare_noise_and_latents(...). 2024-06-25 11:31:52 -07:00
Ryan Dick
610a1fd611 Split out the prepare_noise_and_latents(...) logic in DenoiseLatentsInvocation so that it can be called from other invocations. 2024-06-25 11:31:52 -07:00
Ryan Dick
43108eec13 (minor) Add a TODO note to get_scheduler(...). 2024-06-25 11:31:52 -07:00
Ryan Dick
79ceac2f82 (minor) Use SilenceWarnings as a decorator rather than a context manager to save an indentation level. 2024-06-18 15:06:22 -04:00
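For context, the decorator form avoids the extra indentation level that `with SilenceWarnings(): ...` requires. A minimal sketch of the pattern via `contextlib.ContextDecorator` (not the actual InvokeAI class):

```python
import warnings
from contextlib import ContextDecorator


class SilenceWarnings(ContextDecorator):
    """Usable as `with SilenceWarnings():` or as the decorator `@SilenceWarnings()`."""

    def __enter__(self):
        self._catcher = warnings.catch_warnings()
        self._catcher.__enter__()
        warnings.simplefilter("ignore")
        return self

    def __exit__(self, *exc):
        self._catcher.__exit__(*exc)
        return False


@SilenceWarnings()  # decorator form: the function body needs no extra indentation
def noisy_step():
    warnings.warn("suppressed")
```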
Ryan Dick
d13aafb514 Tidy denoise_latents.py imports to all use absolute import paths. 2024-06-18 15:06:22 -04:00
Ryan Dick
785bb1d9e4 Fix all comparisons against the DEFAULT_PRECISION constant. DEFAULT_PRECISION is a torch.dtype. Previously, it was compared to a str in a number of places where it would always resolve to False. This is a bugfix that results in a change to the default behavior. In practice, this will not change the behavior for most users, because it only causes a change in behavior if a user has configured float32 as their default precision. 2024-06-14 11:26:10 -07:00
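The pitfall behind this fix is easy to reproduce: a `torch.dtype` never compares equal to a string, so branches guarded by such comparisons were dead code. A small standalone illustration (not the InvokeAI code itself):

```python
import torch

DEFAULT_PRECISION = torch.float32

print(DEFAULT_PRECISION == "float32")      # False - a dtype never equals a str
print(DEFAULT_PRECISION == torch.float32)  # True  - compare dtype to dtype instead
```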
Lincoln Stein
7d19af2caa
Merge branch 'main' into lstein/feat/simple-mm2-api 2024-06-08 18:55:06 -04:00
Ryan Dick
52c0c4a32f Rename latent.py -> denoise_latents.py. 2024-06-07 09:28:42 -04:00
Ryan Dick
8f1afc032a Move SchedulerInvocation to a new file. No functional changes. 2024-06-07 09:28:42 -04:00
Ryan Dick
854bca668a Move CreateDenoiseMaskInvocation to its own file. No functional changes. 2024-06-07 09:28:42 -04:00
Ryan Dick
fea9013cad Move CreateGradientMaskInvocation to its own file. No functional changes. 2024-06-07 09:28:42 -04:00
Ryan Dick
045caddee1 Move LatentsToImageInvocation to its own file. No functional changes. 2024-06-07 09:28:42 -04:00
Ryan Dick
58697141bf Move ImageToLatentsInvocation to its own file. No functional changes. 2024-06-07 09:28:42 -04:00
Ryan Dick
5e419dbb56 Move ScaleLatentsInvocation and ResizeLatentsInvocation to their own file. No functional changes. 2024-06-07 09:28:42 -04:00
Ryan Dick
595096bdcf Move BlendLatentsInvocation to its own file. No functional changes. 2024-06-07 09:28:42 -04:00
Ryan Dick
ed03d281e6 Move CropLatentsCoreInvocation to its own file. No functional changes. 2024-06-07 09:28:42 -04:00
Ryan Dick
0b37496c57 Move IdealSizeInvocation to its own file. No functional changes. 2024-06-07 09:28:42 -04:00
psychedelicious
fde58ce0a3 Merge remote-tracking branch 'origin/main' into lstein/feat/simple-mm2-api 2024-06-07 14:23:41 +10:00
Lincoln Stein
dc134935c8 replace load_and_cache_model() with load_remote_model() and load_local_model() 2024-06-07 14:12:16 +10:00
Lincoln Stein
2871676f79
LoRA patching optimization (#6439)
* allow model patcher to optimize away the unpatching step when feasible

* remove lazy_offloading functionality

* do not save original weights if there is a CPU copy of state dict (see the sketch after this entry)

* Update invokeai/backend/model_manager/load/load_base.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* documentation fixes added during penultimate review

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-06-06 13:53:35 +00:00
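A minimal sketch of the optimization idea from #6439 (hypothetical function names; the real model patcher is more involved): snapshot original weights only when no CPU copy of the state dict exists, and restore from that copy instead of arithmetically unpatching.

```python
import torch


@torch.no_grad()
def apply_patches(model, patches, cpu_state_dict=None):
    """Add weight deltas in place; save originals only if there is no CPU copy."""
    originals = {}
    params = dict(model.named_parameters())
    for key, delta in patches.items():
        if cpu_state_dict is None:
            originals[key] = params[key].detach().clone()
        params[key].add_(delta.to(params[key].device, params[key].dtype))
    return originals


@torch.no_grad()
def unpatch(model, originals, cpu_state_dict=None):
    if cpu_state_dict is not None:
        # Fast path: reload pristine weights; no per-patch unpatching needed.
        model.load_state_dict(cpu_state_dict)
    else:
        params = dict(model.named_parameters())
        for key, saved in originals.items():
            params[key].copy_(saved)
```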
psychedelicious
14372e3818 fix(nodes): blend latents with weight=0 with DPMSolverSDEScheduler
- Pass the seed from `latents_a` to the output latents. Fixed an issue where using `BlendLatentsInvocation` could result in different outputs during denoising even when the alpha or slerp weight was 0.

## Explanation

`LatentsField` has an optional `seed` field. During denoising, if this `seed` field is not present, we **fall back to 0 for the seed**. The seed is used during denoising in a few ways:

1. Initializing the scheduler.

The seed is used in two places in `invokeai/app/invocations/latent.py`.

The `get_scheduler()` utility function has special handling for `DPMSolverSDEScheduler`, which appears to need a seed for deterministic outputs.

`DenoiseLatentsInvocation.init_scheduler()` has special handling for schedulers that accept a generator - the generator needs to be seeded in a particular way. At the time of this commit, these are the Invoke-supported schedulers that need this seed (a generator-seeding sketch follows the list):
  - DDIMScheduler
  - DDPMScheduler
  - DPMSolverMultistepScheduler
  - EulerAncestralDiscreteScheduler
  - EulerDiscreteScheduler
  - KDPM2AncestralDiscreteScheduler
  - LCMScheduler
  - TCDScheduler
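
A minimal sketch of that generator handling (assumed helper name; the real logic is in `DenoiseLatentsInvocation.init_scheduler()`): only schedulers whose `step()` accepts a `generator` get one, seeded from `LatentsField.seed`, which is why a missing seed silently becomes 0.

```python
import inspect

import torch


def build_extra_step_kwargs(scheduler, seed):
    """Pass a seeded generator only to schedulers whose step() accepts one."""
    if "generator" not in inspect.signature(scheduler.step).parameters:
        return {}
    generator = torch.Generator(device="cpu").manual_seed(seed if seed is not None else 0)
    return {"generator": generator}
```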

2. Adding noise during inpainting.

If a mask is used for denoising, and we are not using an inpainting model, we add noise to the unmasked area. If, for some reason, we have a mask but no noise, the seed is used to add noise.

I wonder if we should instead assert that if a mask is provided, we also have noise.

This is done in `invokeai/backend/stable_diffusion/diffusers_pipeline.py` in `StableDiffusionGeneratorPipeline.latents_from_embeddings()`.

When we create noise to be used in denoising, we are expected to set `LatentsField.seed` to the seed used to create the noise. This introduces some awkwardness when we manipulate any "latents" that will be used for denoising. We have to pass the seed along for every operation.

If the wrong seed or no seed is passed along, we can get unexpected outputs during denoising. One notable case relates to blending latents (slerping tensors).

If we slerp two noise tensors (`LatentsField`s) _without_ passing along the seed from the source latents, when we denoise with a seed-dependent scheduler*, the schedulers use the fallback seed of 0 and we get the wrong output. This is most obvious when slerping with a weight of 0, in which case we expect the exact same output after denoising.

*It looks like only the DPMSolver* schedulers are affected, but I haven't tested all of them.

Passing the seed along in the output fixes this issue.
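
A minimal sketch of the fix's idea (simplified stand-ins, not the real `LatentsField`/`BlendLatentsInvocation` code): whatever we do to the latents, the output field must carry the seed of the source noise so seed-dependent schedulers don't fall back to 0 downstream.

```python
from dataclasses import dataclass

import torch


@dataclass
class LatentsField:
    latents: torch.Tensor
    seed: int | None = None  # seed used to create the noise, if any


def blend_latents(a: LatentsField, b: LatentsField, alpha: float) -> LatentsField:
    # A linear blend stands in for the real slerp; the point is the seed handling.
    blended = torch.lerp(a.latents, b.latents, alpha)  # alpha=0 returns a.latents exactly
    # Propagate latents_a's seed so e.g. DPMSolver* schedulers are seeded correctly.
    return LatentsField(latents=blended, seed=a.seed)
```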
2024-06-05 00:02:52 +10:00
Lincoln Stein
756108f6bd Update invokeai/app/invocations/latent.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-06-03 11:41:47 -07:00
Lincoln Stein
68d628dc14 use zip to iterate over image prompts and adapters 2024-06-03 11:41:47 -07:00
Lincoln Stein
93c9852142 fix ruff 2024-06-03 11:41:47 -07:00
Lincoln Stein
493f81788c added a few comments to document design choices 2024-06-03 11:41:47 -07:00
Lincoln Stein
f13427e3f4 refactor redundant code and fix typechecking errors 2024-06-03 11:41:47 -07:00
Lincoln Stein
e28737fc8b add check for congruence between # of ip_adapters and image_prompts 2024-06-03 11:41:47 -07:00
Lincoln Stein
7391c126d3 handle case of no IP adapters requested 2024-06-03 11:41:47 -07:00
Lincoln Stein
1c59fce6ad reduce peak VRAM memory usage of IP adapter 2024-06-03 11:41:47 -07:00
psychedelicious
521f907f58 tidy(nodes): infill
- Set `self._context=context` instead of passing it as an arg
2024-06-03 09:43:25 +10:00
psychedelicious
ccdecf21a3 tidy(nodes): cnet processors
- Set `self._context=context` instead of changing the type signature of `run_processor` (a sketch of the pattern follows this entry)
- Tidy a few typing things
2024-06-03 09:41:17 +10:00
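A minimal sketch of that pattern (hypothetical class and context methods, not the real processor nodes): the context is stashed on the instance inside `invoke()`, so helper methods keep their original signatures.

```python
class ImageProcessorInvocation:
    """Simplified stand-in for a ControlNet processor invocation."""

    def invoke(self, context):
        # Stash the context instead of threading it through run_processor's signature.
        self._context = context
        return self.run_processor(context.load_image())  # load_image() is illustrative

    def run_processor(self, image):
        processed = image  # ...real processors transform the image here...
        self._context.save_image(processed)  # save_image() is illustrative
        return processed
```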
Lincoln Stein
2276f327e5
Merge branch 'main' into lstein/feat/simple-mm2-api 2024-06-02 09:45:31 -04:00
psychedelicious
50d3030471 feat(app): dynamic type adapters for invocations & outputs
Keep track of whether or not the typeadapter needs to be updated. Allows for dynamic invocation and output unions.
2024-05-30 12:03:38 +10:00
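A minimal sketch of the lazy-rebuild idea (hypothetical registry class; the real bookkeeping lives in InvokeAI's invocation machinery): cache a pydantic `TypeAdapter` over the union of registered classes and rebuild it only when the set of classes has changed.

```python
from typing import Union

from pydantic import BaseModel, TypeAdapter


class InvocationRegistry:
    def __init__(self):
        self._classes: list[type[BaseModel]] = []
        self._adapter: TypeAdapter | None = None
        self._dirty = True  # set whenever the union of invocations changes

    def register(self, cls: type[BaseModel]) -> None:
        self._classes.append(cls)
        self._dirty = True

    def type_adapter(self) -> TypeAdapter:
        if self._dirty or self._adapter is None:
            self._adapter = TypeAdapter(Union[tuple(self._classes)])
            self._dirty = False
        return self._adapter
```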
psychedelicious
2f9ebdec69 fix(app): openapi schema generation
Some tech debt related to dynamic pydantic schemas for invocations became problematic. Including the invocations and results in the event schemas was breaking pydantic's handling of ref schemas. I don't really understand why - I think it's a pydantic bug in a remote edge case that we are hitting.

After many failed attempts I landed on this implementation, which is actually much tidier than what was in there before.

- Create pydantic-enabled types for `AnyInvocation` and `AnyInvocationOutput` and use these in place of the janky dynamic unions. Actually, they are kinda the same, but better encapsulated. Use these in `Graph`, `GraphExecutionState`, `InvocationEventBase` and `InvocationCompleteEvent`.
- Revise the custom openapi function to work with the new models.
- Split out the custom openapi function to a separate file. Add a `post_transform` callback so consumers can customize the output schema (sketched after this entry).
- Update makefile scripts.
2024-05-30 12:03:03 +10:00
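A minimal sketch of the `post_transform` hook (assuming FastAPI's standard `get_openapi` helper; the wrapper name is illustrative):

```python
from typing import Any, Callable, Optional

from fastapi import FastAPI
from fastapi.openapi.utils import get_openapi


def get_openapi_func(app: FastAPI, post_transform: Optional[Callable[[dict], dict]] = None):
    def openapi() -> dict[str, Any]:
        schema = get_openapi(title=app.title, version=app.version, routes=app.routes)
        if post_transform is not None:
            schema = post_transform(schema)  # consumers can customize the schema here
        app.openapi_schema = schema  # cache it, as FastAPI does internally
        return schema

    return openapi


app = FastAPI()
app.openapi = get_openapi_func(app, post_transform=lambda s: s)  # identity transform
```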