Commit Graph

932 Commits

Author SHA1 Message Date
psychedelicious
14372e3818 fix(nodes): blend latents with weight=0 with DPMSolverSDEScheduler
- Pass the seed from `latents_a` to the output latents. This fixes an issue where `BlendLatentsInvocation` could produce different outputs during denoising even when the alpha (slerp weight) was 0.

## Explanation

`LatentsField` has an optional `seed` field. During denoising, if this `seed` field is not present, we **fall back to 0 for the seed**. The seed is used during denoising in a few ways:

1. Initializing the scheduler.

The seed is used in two places in `invokeai/app/invocations/latent.py`.

The `get_scheduler()` utility function has special handling for `DPMSolverSDEScheduler`, which appears to need a seed for deterministic outputs.

`DenoiseLatentsInvocation.init_scheduler()` has special handling for schedulers that accept a generator - the generator needs to be seeded in a particular way. At the time of this commit, these are the Invoke-supported schedulers that need this seed:
  - DDIMScheduler
  - DDPMScheduler
  - DPMSolverMultistepScheduler
  - EulerAncestralDiscreteScheduler
  - EulerDiscreteScheduler
  - KDPM2AncestralDiscreteScheduler
  - LCMScheduler
  - TCDScheduler
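
A minimal sketch of that generator handling, assuming illustrative names (the actual logic in `init_scheduler()` differs in detail):

```python
import inspect

import torch
from diffusers.schedulers.scheduling_utils import SchedulerMixin


def scheduler_step_kwargs(scheduler: SchedulerMixin, seed: int | None) -> dict:
    """Hypothetical helper: build extra kwargs for scheduler.step().

    Schedulers like EulerAncestralDiscreteScheduler accept a `generator`;
    if it is not seeded consistently, outputs drift between runs. Note the
    fallback to 0 when the LatentsField carries no seed.
    """
    kwargs: dict = {}
    if "generator" in inspect.signature(scheduler.step).parameters:
        generator = torch.Generator(device="cpu").manual_seed(seed if seed is not None else 0)
        kwargs["generator"] = generator
    return kwargs
```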

2. Adding noise during inpainting.

If a mask is used for denoising, and we are not using an inpainting model, we add noise to the unmasked area. If, for some reason, we have a mask but no noise tensor, the seed is used to generate that noise.

I wonder if we should instead assert that if a mask is provided, we also have noise.

This is done in `invokeai/backend/stable_diffusion/diffusers_pipeline.py` in `StableDiffusionGeneratorPipeline.latents_from_embeddings()`.
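
A rough sketch of how that fallback noise could be generated from the seed; the function name and signature here are hypothetical, not the pipeline's actual code:

```python
import torch


def noise_from_seed(latents: torch.Tensor, seed: int) -> torch.Tensor:
    """Hypothetical helper: generate noise from the (possibly fallback) seed
    when a mask is present but no noise tensor was supplied."""
    generator = torch.Generator(device="cpu").manual_seed(seed)
    noise = torch.randn(
        latents.shape,
        generator=generator,
        dtype=latents.dtype,
        device="cpu",
    )
    return noise.to(device=latents.device)
```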

When we create noise to be used in denoising, we are expected to set `LatentsField.seed` to the seed used to create the noise. This introduces some awkwardness when we manipulate any "latents" that will be used for denoising. We have to pass the seed along for every operation.

If the wrong seed or no seed is passed along, we can get unexpected outputs during denoising. One notable case relates to blending latents (slerping tensors).

If we slerp two noise tensors (`LatentsField`s) _without_ passing along the seed from the source latents, then when we denoise with a seed-dependent scheduler*, the scheduler falls back to a seed of 0 and we get the wrong output. This is most obvious when slerping with a weight of 0, in which case we expect exactly the same output after denoising as we would get from the unblended source latents.

*It looks like only the DPMSolver family of schedulers is affected, but I haven't tested all of them.

Passing the seed along in the output fixes this issue.
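
A simplified sketch of the blend with the fix applied; the names and the slerp implementation are illustrative, not the node's actual code - the point is that the blended tensor is returned together with `latents_a`'s seed:

```python
import torch


def blend_latents(
    latents_a: torch.Tensor,
    latents_b: torch.Tensor,
    alpha: float,
    seed_a: int | None,
) -> tuple[torch.Tensor, int | None]:
    """Slerp latents_b into latents_a; carry latents_a's seed to the output."""

    def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
        # Treat the whole tensors as flat vectors for the angle computation.
        dot = torch.sum(v0 * v1) / (v0.norm() * v1.norm() + eps)
        theta = torch.acos(dot.clamp(-1.0, 1.0))
        if theta.abs() < eps:
            return (1.0 - t) * v0 + t * v1  # nearly parallel: plain lerp
        return (torch.sin((1.0 - t) * theta) * v0 + torch.sin(t * theta) * v1) / torch.sin(theta)

    blended = slerp(alpha, latents_a, latents_b)
    # The fix: the output inherits latents_a's seed instead of dropping it,
    # so denoising with alpha == 0 matches denoising latents_a directly.
    return blended, seed_a
```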
2024-06-05 00:02:52 +10:00
Lincoln Stein
756108f6bd Update invokeai/app/invocations/latent.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-06-03 11:41:47 -07:00
Lincoln Stein
68d628dc14 use zip to iterate over image prompts and adapters 2024-06-03 11:41:47 -07:00
Lincoln Stein
93c9852142 fix ruff 2024-06-03 11:41:47 -07:00
Lincoln Stein
493f81788c added a few comments to document design choices 2024-06-03 11:41:47 -07:00
Lincoln Stein
f13427e3f4 refactor redundant code and fix typechecking errors 2024-06-03 11:41:47 -07:00
Lincoln Stein
e28737fc8b add check for congruence between # of ip_adapters and image_prompts 2024-06-03 11:41:47 -07:00
Lincoln Stein
7391c126d3 handle case of no IP adapters requested 2024-06-03 11:41:47 -07:00
Lincoln Stein
1c59fce6ad reduce peak VRAM memory usage of IP adapter 2024-06-03 11:41:47 -07:00
psychedelicious
50d3030471 feat(app): dynamic type adapters for invocations & outputs
Keep track of whether or not the `TypeAdapter` needs to be updated. Allows for dynamic invocation and output unions.
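
A rough sketch of the dirty-flag idea, with hypothetical names (not the actual registry code):

```python
from typing import Union

from pydantic import TypeAdapter


class InvocationTypeRegistry:
    """Hypothetical registry: rebuild the TypeAdapter only when the set of
    registered invocation classes changes, so the union can grow dynamically."""

    def __init__(self) -> None:
        self._classes: list[type] = []
        self._adapter: TypeAdapter | None = None
        self._needs_update = False

    def register(self, cls: type) -> None:
        self._classes.append(cls)
        self._needs_update = True  # the union has changed

    def get_adapter(self) -> TypeAdapter:
        if self._adapter is None or self._needs_update:
            self._adapter = TypeAdapter(Union[tuple(self._classes)])  # type: ignore[arg-type]
            self._needs_update = False
        return self._adapter
```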
2024-05-30 12:03:38 +10:00
psychedelicious
2f9ebdec69 fix(app): openapi schema generation
Some tech debt related to dynamic pydantic schemas for invocations became problematic. Including the invocations and results in the event schemas was breaking pydantic's handling of ref schemas. I don't really understand why - I think it's a pydantic bug in a remote edge case that we are hitting.

After many failed attempts I landed on this implementation, which is actually much tidier than what was in there before.

- Create pydantic-enabled types for `AnyInvocation` and `AnyInvocationOutput` and use these in place of the janky dynamic unions. Actually, they are kinda the same, but better encapsulated. Use these in `Graph`, `GraphExecutionState`, `InvocationEventBase` and `InvocationCompleteEvent`.
- Revise the custom openapi function to work with the new models.
- Split out the custom openapi function to a separate file. Add a `post_transform` callback so consumers can customize the output schema.
- Update makefile scripts.
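
A minimal sketch of the `post_transform` hook, with illustrative names and signatures rather than the exact module that was split out:

```python
from typing import Any, Callable, Optional

from fastapi import FastAPI
from fastapi.openapi.utils import get_openapi


def get_openapi_func(
    app: FastAPI,
    post_transform: Optional[Callable[[dict[str, Any]], dict[str, Any]]] = None,
) -> Callable[[], dict[str, Any]]:
    """Build a custom `app.openapi` callable that lets consumers transform the schema."""

    def openapi() -> dict[str, Any]:
        if app.openapi_schema:
            return app.openapi_schema
        schema = get_openapi(title=app.title, version=app.version, routes=app.routes)
        if post_transform is not None:
            schema = post_transform(schema)  # consumer-supplied customization
        app.openapi_schema = schema
        return schema

    return openapi


# Usage (hypothetical): app.openapi = get_openapi_func(app, post_transform=my_transform)
```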
2024-05-30 12:03:03 +10:00
Lincoln Stein
532f82cb97
Optimize RAM to VRAM transfer (#6312)
* avoid copying model back from cuda to cpu

* handle models that don't have state dicts

* add assertions that models need a `device()` method

* do not rely on torch.nn.Module having the device() method

* apply all patches after model is on the execution device

* fix model patching in latents too

* log patched tokenizer

* closes #6375

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-05-24 17:06:09 +00:00
psychedelicious
281bd31db2 feat(nodes): make ModelIdentifierInvocation a prototype 2024-05-19 20:14:01 +10:00
psychedelicious
2680ef52c2 feat(nodes): add ModelIdentifierInvocation
This node allows a user to select _any_ model, outputting a `ModelIdentifierField` for that model.
2024-05-19 20:14:01 +10:00
psychedelicious
fe7ed72c9c feat(nodes): make all ModelIdentifierField inputs accept connections 2024-05-19 20:14:01 +10:00
psychedelicious
ef89c7e537 feat(nodes): add LoRASelectorInvocation, LoRACollectionLoader, SDXLLoRACollectionLoader
These simplify loading multiple LoRAs. Instead of requiring chained LoRA loader nodes, configure each LoRA (model & weight) with a selector, collect them, then send the collection to the collection loader to apply all of the LoRAs to the UNet/CLIP models.

The collection loaders accept a single LoRA or a collection of LoRAs.
2024-05-15 14:09:44 +10:00
psychedelicious
93da75209c feat(nodes): use new blur_if_nsfw method 2024-05-14 07:23:38 +10:00
blessedcoolant
da61396b1c cleanup: seamless unused older code cleanup 2024-05-13 08:11:08 +10:00
psychedelicious
33a9f9a4dc fix(nodes): fix constraints in cnet processors
There were some invalid constraints on the processors - a minimum of 0 for resolution, and a multiple-of-64 requirement for resolution.

Changed the minimum to 1px and removed the multiple-of constraints.
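
A hedged sketch of the constraint change using plain pydantic fields; the actual nodes use Invoke's `InputField`, and the field name here is illustrative:

```python
from pydantic import BaseModel, Field


class ProcessorBefore(BaseModel):
    # invalid: a resolution of 0 was allowed, and a multiple-of-64 constraint applied
    detect_resolution: int = Field(default=512, ge=0, multiple_of=64)


class ProcessorAfter(BaseModel):
    # fixed: minimum of 1px, no multiple-of constraint
    detect_resolution: int = Field(default=512, ge=1)
```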
2024-05-02 12:24:04 +10:00
blessedcoolant
dce8b88aaf fix: change eta only for TCD Scheduler 2024-05-01 12:47:46 +05:30
blessedcoolant
1bdcbe3284 cleanup: use dict update to actually update the scheduler keyword args 2024-05-01 12:22:39 +05:30
blessedcoolant
2ddb82200c fix: Manually update eta (gamma) to 1.0 for TCDScheduler
Seems to work best with Invoke at 4 steps
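
A rough sketch of the intent; the real kwargs are built inside the denoise invocation, and the helper name here is hypothetical:

```python
from diffusers import TCDScheduler
from diffusers.schedulers.scheduling_utils import SchedulerMixin


def build_step_kwargs(scheduler: SchedulerMixin) -> dict:
    # TCDScheduler.step() accepts an `eta` (gamma) argument; override it to 1.0
    # only for that scheduler and leave every other scheduler's defaults alone.
    kwargs: dict = {}
    if isinstance(scheduler, TCDScheduler):
        kwargs.update({"eta": 1.0})
    return kwargs
```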
2024-05-01 01:20:53 +05:30
psychedelicious
e822897b1c feat(nodes): add prototype heuristic image resize node
Uses the fancy cnet resize that retains edges.
2024-04-30 08:10:59 -04:00
dunkeroni
f262b9032d fix: changed validation to not error on connection 2024-04-28 12:48:56 -04:00
dunkeroni
71c3197eab fix: denoise latents accepts CFG lists as input 2024-04-28 12:48:56 -04:00
psychedelicious
5b8f77f990 tidy(nodes): move cnet mode literals to utils
Now they can be used in type signatures without circular imports.
2024-04-25 13:20:09 +10:00
psychedelicious
1bef13db37 feat(nodes): restore unet check on CreateGradientMaskInvocation
Special handling for inpainting models
2024-04-23 07:32:53 -04:00
dunkeroni
bc12d6654e chore: comments and ruff 2024-04-23 07:32:53 -04:00
dunkeroni
6d7c8d5f57 remove unet test 2024-04-23 07:32:53 -04:00
dunkeroni
781de914f4 fix threshold 2024-04-23 07:32:53 -04:00
dunkeroni
c094bad233 add unet check in gradient mask node 2024-04-23 07:32:53 -04:00
dunkeroni
0063014f2b gradient mask node test for inpaint 2024-04-23 07:32:53 -04:00
psychedelicious
a00e703144 feat(nodes): image mask to tensor invocation
Thanks @JPPhoto!
2024-04-20 11:32:08 -04:00
psychedelicious
ea527f5fe1 feat(nodes): add beta classification to mask tensor nodes 2024-04-19 09:32:56 -04:00
psychedelicious
aace364677 feat(nodes): add InvertTensorMaskInvocation 2024-04-19 09:32:56 -04:00
psychedelicious
602a59066e fix(nodes): handle invert in alpha_mask_to_tensor 2024-04-19 09:32:56 -04:00
psychedelicious
8911017bd1 feat(ui): selectable & draggable layers 2024-04-19 09:32:56 -04:00
psychedelicious
fc26f3e430 feat(nodes): add alpha mask to tensor invocation 2024-04-19 09:32:56 -04:00
blessedcoolant
6bab040d24 Merge branch 'main' into ip-adapter-style-comp 2024-04-16 21:14:06 +05:30
blessedcoolant
b39ce642b6 cleanup: raise ValueErrors when target_blocks don't match the base model 2024-04-16 04:12:30 +05:30
Lincoln Stein
e93f4d632d
[util] Add generic torch device class (#6174)
* introduce new abstraction layer for GPU devices

* add unit test for device abstraction

* fix ruff

* convert TorchDeviceSelect into a stateless class

* move logic to select context-specific execution device into context API

* add mock hardware environments to pytest

* remove dangling mocker fixture

* fix unit test for running on non-CUDA systems

* remove unimplemented get_execution_device() call

* remove autocast precision

* Multiple changes:

1. Remove TorchDeviceSelect.get_execution_device(), as well as calls to
   context.models.get_execution_device().
2. Rename TorchDeviceSelect to TorchDevice
3. Added back the legacy public API defined in `invocation_api`, including
   choose_precision().
4. Added a config file migration script to accommodate removal of precision=autocast.

* add deprecation warnings to choose_torch_device() and choose_precision()

* fix test crash

* remove app_config argument from choose_torch_device() and choose_torch_dtype()
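
A hedged sketch of a stateless device/dtype chooser in the spirit of `TorchDevice`; names and fallback logic are illustrative, not the actual implementation:

```python
import torch


class TorchDeviceSketch:
    """Stateless utility: pick an execution device and a matching dtype."""

    @classmethod
    def choose_torch_device(cls) -> torch.device:
        if torch.cuda.is_available():
            return torch.device("cuda")
        if torch.backends.mps.is_available():
            return torch.device("mps")
        return torch.device("cpu")

    @classmethod
    def choose_torch_dtype(cls, device: torch.device) -> torch.dtype:
        # Half precision on accelerators, full precision on CPU; the real
        # utility also consults the app config and per-model overrides.
        return torch.float16 if device.type in ("cuda", "mps") else torch.float32
```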

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-04-15 13:12:49 +00:00
blessedcoolant
2d5786d3bb fix: Incorrect composition blocks for SD1.5 2024-04-13 13:52:10 +05:30
blessedcoolant
27466ffa1a chore: update the ip adapter node version 2024-04-13 13:39:08 +05:30
blessedcoolant
9fc73743b2 feat: support SD1.5 2024-04-13 12:30:39 +05:30
blessedcoolant
d4393e4170 chore: linter fixes 2024-04-13 12:14:45 +05:30
blessedcoolant
7a67fd6a06 Revert "chore: ruff fixes"
This reverts commit af36fe8c1e.
2024-04-13 12:10:20 +05:30
blessedcoolant
af36fe8c1e chore: ruff fixes 2024-04-13 12:08:52 +05:30
blessedcoolant
e9f16ac8c7 feat: add UI for IP Adapter Method 2024-04-13 12:06:59 +05:30
blessedcoolant
6ea183f0d4 wip: Initial Implementation IP Adapter Style & Comp Modes 2024-04-13 11:09:45 +05:30
Lincoln Stein
dedf0c6ffa fix ruff issues 2024-04-12 07:19:16 +10:00