psychedelicious 14372e3818 fix(nodes): blend latents with weight=0 with DPMSolverSDEScheduler
- Pass the seed from `latents_a` to the output latents. This fixes an issue where using `BlendLatentsInvocation` could result in different outputs during denoising even when the alpha or slerp weight was 0.

## Explanation

`LatentsField` has an optional `seed` field. During denoising, if this `seed` field is not present, we **fall back to 0 for the seed**. The seed is used during denoising in a few ways:

1. Initializing the scheduler.

The seed is used in two places in `invokeai/app/invocations/latent.py`.

The `get_scheduler()` utility function has special handling for `DPMSolverSDEScheduler`, which appears to need a seed for deterministic outputs.
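As a rough sketch of what that special handling could look like (the exact config merging in `get_scheduler()` differs, and the helper name here is illustrative; `noise_sampler_seed` is the diffusers `DPMSolverSDEScheduler` parameter that seeds its internal noise sampler):

```python
from diffusers import DPMSolverSDEScheduler


def make_scheduler(scheduler_class, scheduler_config: dict, seed: int):
    # DPMSolverSDEScheduler draws its own Brownian noise internally, so it
    # needs an explicit seed to be reproducible; other schedulers are built
    # from their config unchanged.
    if scheduler_class is DPMSolverSDEScheduler:
        scheduler_config = {**scheduler_config, "noise_sampler_seed": seed}
    return scheduler_class.from_config(scheduler_config)
```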

`DenoiseLatentsInvocation.init_scheduler()` has special handling for schedulers that accept a generator - the generator needs to be seeded in a particular way (a simplified sketch follows the list below). At the time of this commit, these are the Invoke-supported schedulers that need this seed:
  - DDIMScheduler
  - DDPMScheduler
  - DPMSolverMultistepScheduler
  - EulerAncestralDiscreteScheduler
  - EulerDiscreteScheduler
  - KDPM2AncestralDiscreteScheduler
  - LCMScheduler
  - TCDScheduler
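
A minimal sketch of that generator handling, assuming support is detected by inspecting `scheduler.step()` (the exact seed derivation in `init_scheduler()` is more involved; names here are illustrative):

```python
import inspect

import torch


def scheduler_step_kwargs(scheduler, seed: int, device: torch.device) -> dict:
    """If the scheduler's step() accepts a generator, seed one from the
    latents' seed so ancestral/stochastic schedulers stay deterministic."""
    kwargs: dict = {}
    if "generator" in inspect.signature(scheduler.step).parameters:
        kwargs["generator"] = torch.Generator(device=device).manual_seed(seed)
    return kwargs
```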

2. Adding noise during inpainting.

If a mask is used for denoising, and we are not using an inpainting model, we add noise to the unmasked area. If, for some reason, we have a mask but no noise, the seed is used to add noise.

I wonder if we should instead assert that if a mask is provided, we also have noise.

This is done in `invokeai/backend/stable_diffusion/diffusers_pipeline.py` in `StableDiffusionGeneratorPipeline.latents_from_embeddings()`.
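
In outline, the fallback looks something like the following sketch (illustrative names, not the exact pipeline code): when a mask arrives without noise, noise is synthesized from the latents' seed - and this is also where the seed-of-0 fallback bites if the `LatentsField` lost its seed.

```python
import torch


def noise_for_masked_denoise(
    latents: torch.Tensor, noise: torch.Tensor | None, seed: int | None
) -> torch.Tensor:
    if noise is not None:
        return noise
    # Fall back to 0 when the LatentsField carried no seed - the fallback
    # described above.
    generator = torch.Generator(device="cpu").manual_seed(seed if seed is not None else 0)
    return torch.randn(latents.shape, generator=generator, dtype=latents.dtype).to(latents.device)
```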

When we create noise to be used in denoising, we are expected to set `LatentsField.seed` to the seed used to create the noise. This introduces some awkwardness when we manipulate any "latents" that will be used for denoising. We have to pass the seed along for every operation.

If the wrong seed or no seed is passed along, we can get unexpected outputs during denoising. One notable case relates to blending latents (slerping tensors).

If we slerp two noise tensors (`LatentsField`s) _without_ passing along the seed from the source latents, then when we denoise with a seed-dependent scheduler*, the scheduler falls back to a seed of 0 and we get the wrong output. This is most obvious when slerping with a weight of 0, where we expect exactly the same output after denoising.

*It looks like only the `DPMSolver*` schedulers are affected, but I haven't tested all of them.

Passing the seed along in the output fixes this issue.
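
A simplified sketch of the fix (field and output names are illustrative rather than the exact invocation API): slerp the tensors, then carry `latents_a`'s seed into the output instead of emitting seedless latents that would later fall back to seed 0.

```python
import torch


def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation over [B, C, H, W] latent tensors."""
    a_flat, b_flat = a.flatten(1), b.flatten(1)
    dot = (a_flat * b_flat).sum(1) / (a_flat.norm(dim=1) * b_flat.norm(dim=1) + eps)
    omega = torch.acos(dot.clamp(-1 + eps, 1 - eps))
    so = torch.sin(omega)
    coef_a = (torch.sin((1.0 - t) * omega) / so).view(-1, 1, 1, 1)
    coef_b = (torch.sin(t * omega) / so).view(-1, 1, 1, 1)
    return coef_a * a + coef_b * b  # at t == 0 this is (numerically) just `a`


def blend_latents(tensor_a: torch.Tensor, tensor_b: torch.Tensor, seed_a: int | None, alpha: float):
    blended = slerp(alpha, tensor_a, tensor_b)
    # The fix: propagate the seed that produced `tensor_a`, so a blend with
    # alpha == 0 denoises exactly like the unblended latents would.
    return blended, seed_a
```

In practice, slerp implementations typically fall back to plain linear interpolation when the two tensors are nearly parallel; the point here is only that the blended output keeps `latents_a`'s seed.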


Invoke - Professional Creative AI Tools for Visual Media

To learn more about Invoke, or implement our Business solutions, visit invoke.com


Invoke is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. Invoke offers an industry-leading web-based UI, and serves as the foundation for multiple commercial products.

Installation and Updates - Documentation and Tutorials - Bug Reports - Contributing

Highlighted Features - Canvas and Workflows

Quick Start

  1. Download and unzip the installer from the bottom of the latest release.

  2. Run the installer script.

    • Windows: Double-click on the install.bat script.
    • macOS: Open a Terminal window, drag the file install.sh from Finder into the Terminal, and press enter.
    • Linux: Run install.sh.
  3. When prompted, enter a location for the install and select your GPU type.

  4. Once the install finishes, find the directory you selected during install. The default location is C:\Users\Username\invokeai for Windows or ~/invokeai for Linux/macOS.

  5. Run the launcher script (invoke.bat for Windows, invoke.sh for macOS and Linux) the same way you ran the installer script in step 2.

  6. Select option 1 to start the application. Once it starts up, open your browser and go to http://localhost:9090.

  7. Open the model manager tab to install a starter model and then you'll be ready to generate.

More detail, including hardware requirements and manual install instructions, is available in the installation documentation.

Troubleshooting, FAQ and Support

Please review our FAQ for solutions to common installation problems and other issues.

For more help, please join our Discord.

Features

Full details on features can be found in our documentation.

Web Server & UI

Invoke runs a locally hosted web server & React UI with an industry-leading user experience.

Unified Canvas

The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/out-painting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.

Workflows & Nodes

Invoke offers a fully featured workflow management solution, enabling users to combine the power of node-based workflows with the ease of a UI. This allows for customizable generation pipelines to be developed and shared by users looking to create specific workflows to support their production use-cases.

Invoke features an organized gallery system for easily storing, accessing, and remixing your content in the Invoke workspace. Images can be dragged and dropped onto any image-based UI element in the application, and rich metadata within the image allows for easy recall of key prompts or settings used in your workflow.

Other features

  • Support for both ckpt and diffusers models
  • SD1.5, SD2.0, and SDXL support
  • Upscaling Tools
  • Embedding Manager & Support
  • Model Manager & Support
  • Workflow creation & management
  • Node-Based Architecture

Contributing

Anyone who wishes to contribute to this project - whether documentation, features, bug fixes, code cleanup, testing, or code reviews - is very much encouraged to do so.

Get started with contributing by reading our contribution documentation, joining the #dev-chat or the GitHub discussion board.

We hope you enjoy using Invoke as much as we enjoy creating it, and we hope you will elect to become part of our community.

Thanks

Invoke is a combined effort of passionate and talented people from across the world. We thank them for their time, hard work and effort.

Original portions of the software are Copyright © 2024 by respective contributors.