Commit Graph

12109 Commits

Author SHA1 Message Date
Ryan Dick
9db0e9d696 Add better documentation/errors around the possibility that inpainting models may be incorrectly labelled as non-inpainting models. 2024-07-26 10:18:10 -04:00
Sergey Borisov
416d29fb83 Ruff format 2024-07-24 01:17:28 +03:00
Sergey Borisov
19c00241c6 Use non-inverted mask generally (except inpaint model handling) 2024-07-24 00:59:13 +03:00
Sergey Borisov
c323a760a5 Suggested changes
Co-Authored-By: Ryan Dick <14897797+RyanJDick@users.noreply.github.com>
2024-07-23 23:34:28 +03:00
Sergey Borisov
9d1fcba415 Fix create gradient mask node output 2024-07-23 23:29:28 +03:00
Sergey Borisov
87eb018380 Revert debug change 2024-07-22 23:49:20 +03:00
Sergey Borisov
5003e5d763 Same changes as in other PRs; add a check for running inpainting on an inpaint model without a source image
Co-Authored-By: Ryan Dick <14897797+RyanJDick@users.noreply.github.com>
2024-07-22 23:47:39 +03:00
Sergey Borisov
58f3072b91 Handle inpainting on normal models 2024-07-21 22:17:29 +03:00
Sergey Borisov
9e7b470189 Handle inpaint models 2024-07-21 20:45:55 +03:00
Ryan Dick
f9c61f1b6c
Fix function call that we forgot to update in #6606 (#6636)
## Summary

Fix function call that we forgot to update in #6606

## QA Instructions

Run a TiledMultiDiffusionDenoiseLatents invocation and make sure it
doesn't crash.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
2024-07-19 17:19:32 -04:00
Ryan Dick
a8cc5caf96 Fix function call that we forgot to update in https://github.com/invoke-ai/InvokeAI/pull/6606 2024-07-19 17:07:52 -04:00
Mary Hipp
930ff559e4 add sdxl tile to starter models 2024-07-19 16:49:33 -04:00
Ryan Dick
473f4cc1c3
Base of modular backend (#6606)
## Summary

Base code of the new modular backend from #6577.
It contains normal generation and regional prompt support.
A preview extension is also included, to test that the extension logic works.

## Related Issues / Discussions


https://invokeai.notion.site/Modular-Stable-Diffusion-Backend-Design-Document-e8952daab5d5472faecdc4a72d377b0d

## QA Instructions

Run with and without the `USE_MODULAR_DENOISE` environment variable set.
Currently only normal and regional conditioning is supported, so just
generate some images and compare them with `main`'s output.

## Merge Plan

Discuss the injection point names a bit more? For example, if the UNet
becomes overridable in the future, the current `pre_unet`/`post_unet`
naming assumes the override is named `unet`, which feels a bit odd.
Also `apply_cfg`: a future implementation could ignore or not use CFG,
in which case `combine_noise_predictions`/`combine_noise` seems more
suitable.
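
For context, a minimal sketch of what a callback-style injection point
registry could look like (purely illustrative; names like `pre_unet` come
from the discussion above, and the PR's actual `ExtensionsManager` may
differ):

```python
from collections import defaultdict
from typing import Any, Callable

class ExtensionsManager:
    """Illustrative registry only, not the implementation in this PR."""

    def __init__(self) -> None:
        self._callbacks: dict[str, list[Callable[..., Any]]] = defaultdict(list)

    def register(self, injection_point: str, callback: Callable[..., Any]) -> None:
        # Extensions attach callbacks to named injection points, e.g. "pre_unet".
        self._callbacks[injection_point].append(callback)

    def run(self, injection_point: str, *args: Any, **kwargs: Any) -> None:
        # The denoise loop invokes this at each injection point.
        for cb in self._callbacks[injection_point]:
            cb(*args, **kwargs)

manager = ExtensionsManager()
manager.register("pre_unet", lambda step: print(f"before UNet call, step {step}"))
manager.run("pre_unet", 0)
```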

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-07-19 16:37:57 -04:00
Ryan Dick
78d2b1b650 Merge branch 'main' into stalker-backend_base 2024-07-19 16:25:20 -04:00
Sergey Borisov
39e10d894c Add invocation cancellation logic to patchers 2024-07-19 23:17:01 +03:00
Ryan Dick
e16faa6370 Add gradient blending to tile seams in MultiDiffusion. 2024-07-19 13:05:50 -07:00
Ryan Dick
83a86abce2 Add unit tests for ExtensionsManager and ExtensionBase. 2024-07-19 14:15:46 -04:00
Sergey Borisov
0c56d4a581 Ryan's suggested changes to extension manager/extensions
Co-Authored-By: Ryan Dick <14897797+RyanJDick@users.noreply.github.com>
2024-07-18 23:49:44 +03:00
Lincoln Stein
97a7f51721
don't use cpu state_dict for model unpatching when executing on cpu (#6631)
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-07-18 15:34:01 -04:00
StAlKeR7779
710dc6b487
Merge branch 'main' into stalker7779/backend_base 2024-07-18 01:08:04 +03:00
Sergey Borisov
2ef3b49a79 Add run cancelling logic to extension manager 2024-07-17 04:39:15 +03:00
Sergey Borisov
3f79467f7b Ruff format 2024-07-17 04:24:45 +03:00
Sergey Borisov
2c2ec8f0bc Comments, a bit of refactoring 2024-07-17 04:20:31 +03:00
Sergey Borisov
79e35bd0d3 Minor fixes 2024-07-17 03:48:37 +03:00
Sergey Borisov
137202b77c Remove patch_unet logic for now 2024-07-17 03:40:27 +03:00
Sergey Borisov
03e22c257b Convert conditioning_mode to enum 2024-07-17 03:37:11 +03:00
Sergey Borisov
ae6d4fbc78 Move out _concat_conditionings_for_batch submethods 2024-07-17 03:31:26 +03:00
Sergey Borisov
cd1bc1595a Rename sequential to a private variable 2024-07-17 03:24:11 +03:00
Ryan Dick
0583101c1c
Add Spandrel upscale starter models (#6605)
## Summary

This PR adds some spandrel upscale models to the starter model list.

In the future we may also want to add:
- Some DAT models
(https://drive.google.com/drive/folders/1iBdf_-LVZuz_PAbFtuxSKd_11RL1YKxM)

## QA Instructions

I installed the starter models via the model manager UI, and tested that
I could use them in a workflow.

## Merge Plan

- [ ] Merge the preceding Spandrel PRs first, then change the target
branch to `main`.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
2024-07-16 16:04:52 -04:00
Ryan Dick
f866b49255 Add some ESRGAN and SwinIR upscale models to the starter models list. 2024-07-16 15:55:10 -04:00
Sergey Borisov
b7c6c63005 Added some comments 2024-07-16 22:52:44 +03:00
Ryan Dick
95e9f5323b
Add tiling to SpandrelImageToImageInvocation (#6594)
## Summary

Add tiling to the `SpandrelImageToImageInvocation` node so that it can
process large images.

Tiling enables this node to run on effectively any input image
dimension. Of course, the computation time increases quadratically with
the image dimension.
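
A hypothetical sketch of the tiling idea (split, process each tile,
reassemble; real implementations also overlap tiles to hide seams, which
is omitted here, and the helper name is illustrative):

```python
from typing import Callable

import torch

def upscale_tiled(
    image: torch.Tensor,  # (1, C, H, W)
    model: Callable[[torch.Tensor], torch.Tensor],  # e.g. a 4x upscaler
    tile: int = 512,
    scale: int = 4,
) -> torch.Tensor:
    # Process the image tile by tile so peak VRAM stays bounded by tile size.
    _, c, h, w = image.shape
    out = torch.zeros(1, c, h * scale, w * scale)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            patch = image[:, :, y : y + tile, x : x + tile]
            ph, pw = patch.shape[2], patch.shape[3]
            out[:, :, y * scale : (y + ph) * scale, x * scale : (x + pw) * scale] = model(patch)
    return out
```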

Some profiling results on an RTX4090:
- Input 1024x1024, 4x upscale, 4x UltraSharp ESRGAN: `13 secs`, `<4 GB
VRAM`
- Input 4096x4096, 4x upscale, 4x UltraSharp ESRGAN: `46 secs`, `<4 GB
VRAM`
- Input 4096x4096, 2x upscale, SwinIR: `165 secs`, `<5 GB VRAM`

A lot of the time is spent PNG encoding the final image:
- PNG encoding of a 16384x16384 image takes `83 secs @
pil_compress_level=7`, `24 secs @ pil_compress_level=1`

Callout: If we want to start building workflows that pass large images
between nodes, we are going to have to find a way to avoid the PNG
encode/decode roundtrip that we are currently doing. As is, we will be
incurring a huge penalty for every node that receives/produces a large
image.
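
For reference, a small sketch of the PNG compression tradeoff with Pillow
(assuming `pil_compress_level` maps to Pillow's `compress_level`
parameter; timings vary by machine):

```python
import io
import time

from PIL import Image

# Measure PNG encode time at two zlib compression levels (sketch only;
# a 16384x16384 RGB image needs roughly 0.75 GB of RAM, so use 4096 here).
img = Image.new("RGB", (4096, 4096), color=(128, 128, 128))

for level in (7, 1):
    buf = io.BytesIO()
    start = time.perf_counter()
    img.save(buf, format="PNG", compress_level=level)
    elapsed = time.perf_counter() - start
    print(f"compress_level={level}: {elapsed:.2f}s, {buf.tell() / 1e6:.1f} MB")
```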

## QA Instructions

- [x] Tested with tiling up to 4096x4096 -> 16384x16384.
- [x] Test on images with an alpha channel (the alpha channel is
dropped).
- [x] Test on images with odd dimensions.
- [x] Test no tiling (`tile_size=0`)

## Merge Plan

- [x] Merge #6556 first, and change the target branch to `main`.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
2024-07-16 15:51:15 -04:00
Ryan Dick
6b0ca88177
Merge branch 'main' into ryan/spandrel-upscale-tiling 2024-07-16 15:40:14 -04:00
Ryan Dick
7ad32dcad2
Add support for Spandrel Image-to-Image models (e.g. ESRGAN, Real-ESRGAN, Swin-IR, DAT, etc.) (#6556)
## Summary

- Add support for all
[spandrel](https://github.com/chaiNNer-org/spandrel) image-to-image
models. This is a collection of many popular super-resolution models
(e.g. ESRGAN, Real-ESRGAN, SwinIR, DAT, etc.); see the loading sketch
after the examples below.

Examples of supported models:

- DAT:
https://drive.google.com/drive/folders/1iBdf_-LVZuz_PAbFtuxSKd_11RL1YKxM
- SwinIR: https://github.com/JingyunLiang/SwinIR/releases
- Any ESRGAN / Real-ESRGAN model
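
A minimal loading sketch using spandrel's `ModelLoader` (assumed from the
spandrel README; not necessarily the exact integration code in this PR,
and the checkpoint path is illustrative):

```python
import torch
from spandrel import ImageModelDescriptor, ModelLoader

# Load any supported architecture (ESRGAN, SwinIR, DAT, ...) from a checkpoint.
model = ModelLoader().load_from_file("4x_ESRGAN.pth")
assert isinstance(model, ImageModelDescriptor)  # image-to-image models only

model.cuda().eval()

# Inputs/outputs are BCHW float tensors in [0, 1].
image = torch.rand(1, 3, 256, 256, device="cuda")
with torch.no_grad():
    upscaled = model(image)  # (1, 3, 256 * model.scale, 256 * model.scale)
```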

## Related Issues

Closes #6394 

## QA Instructions

- [x] Test that unsupported models still fail the probe (i.e. no false
positive spandrel models)
- [x] Test adding a few non-spandrel model types
- [x] Test adding a handful of spandrel model types: ESRGAN,
Real-ESRGAN, SwinIR, DAT
- [x] Verify model size estimation for the model cache
- [x] Test using the spandrel models in a practical image upscaling
workflow

## Merge Plan

- [x] Get approval from @brandonrising and @maryhipp before merging -
this PR has commercial implications.
- [x] Merge #6571 and change the target branch to `main`

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
2024-07-16 15:37:20 -04:00
Ryan Dick
81991e072b Merge branch 'main' into ryan/spandrel-upscale 2024-07-16 15:14:08 -04:00
Sergey Borisov
cec345cb5c Change attention processor apply logic 2024-07-16 20:03:29 +03:00
Sergey Borisov
608cbe3f5c Separate inputs in denoise context 2024-07-16 19:30:29 +03:00
psychedelicious
7905a46ca4 chore: bump version to 4.2.6post1 2024-07-16 09:09:04 +10:00
psychedelicious
38343917f8 fix(backend): revert non-blocking device transfer
In #6490 we enabled non-blocking torch device transfers throughout the model manager's memory management code. When using this torch feature, torch attempts to wait until the tensor transfer has completed before allowing any access to the tensor. Theoretically, that should make this a safe feature to use.

This provides a small performance improvement but causes race conditions in some situations. Specific platforms/systems are affected, and complicated data dependencies can make this unsafe.

- Intermittent black images on MPS devices - reported on discord and #6545, fixed with special handling in #6549.
- Intermittent OOMs and black images on a P4000 GPU on Windows - reported in #6613, fixed in this commit.

On my system, I haven't experienced any issues with generation, but targeted testing of non-blocking ops did expose a race condition when moving tensors from CUDA to CPU.

One workaround is to use torch streams with manual sync points. Our application logic is complicated enough that this would be a lot of work and feels ripe for edge cases and missed spots.

Much safer is to fully revert non-blocking transfers - which is what this change does.
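
A minimal repro sketch of the kind of race being reverted (illustrative, not InvokeAI code):

```python
import torch

# A non-blocking CUDA -> CPU copy into pageable memory can return before the
# data has landed; reading the CPU tensor too early may observe stale values.
src = torch.randn(4096, 4096, device="cuda")
dst = src.to("cpu", non_blocking=True)

# Unsafe: dst may not be fully populated yet.
# print(dst.sum())

torch.cuda.synchronize()  # manual sync point; only now is dst safe to read
print(dst.sum())
```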
2024-07-16 08:59:42 +10:00
Sergey Borisov
9f088d1bf5 Multiple small fixes 2024-07-16 00:51:25 +03:00
Sergey Borisov
fd8d1c12d4 Remove 'del' operator overload 2024-07-16 00:43:32 +03:00
Sergey Borisov
d623bd429b Fix conditionings logic 2024-07-16 00:31:56 +03:00
psychedelicious
5a0c99816c chore: bump version to v4.2.6 2024-07-15 14:16:31 +10:00
psychedelicious
24bf1ea65a fix(ui): boards cut off when search open 2024-07-15 14:07:20 +10:00
psychedelicious
28e79c4c5e chore: ruff
It looks like an upstream change to ruff resulted in this file being flagged as a violation.
2024-07-15 14:05:04 +10:00
psychedelicious
d7d59d704b chore: update default workflows
- Update all existing defaults
- Add Tiled MultiDiffusion workflow
2024-07-15 14:05:04 +10:00
Riccardo Giovanetti
8539c601e6 translationBot(ui): update translation (Italian)
Currently translated at 98.4% (1262 of 1282 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-07-15 11:54:45 +10:00
psychedelicious
5cbe9fafb2 fix(ui): clear selection when deleting last image in board 2024-07-15 08:57:13 +10:00
psychedelicious
3ecd14f394 chore: bump version to 4.2.6rc1 2024-07-13 14:55:21 +10:00
psychedelicious@windows
7c0dfd74a5 fix(api): deleting large images fails
This issue is caused by a race condition. When a large image is served to the client, it is done using a streaming `FileResponse`. This concurrently serves the image straight from disk. The file is kept open by FastAPI until the image is fully served.

When a user deletes an image before the file is done serving, the delete fails because the file is still held by FastAPI.

To reproduce the issue:
- Create a very large image (8k reliably creates the issue).
- Create a smaller image, so that the first image in the gallery is not the large image.
- Refresh the app. The small image should be selected.
- Select the large image and immediately delete it. You have to be fast to delete it before it finishes loading.
- In the terminal, we expect to see an error saying `Failed to delete image file`, and the image does not disappear from the UI.
- After a short wait, once the image has fully loaded, try deleting it again. We expect this to work.

The workaround is to instead serve the image from memory.

Loading the image to memory is very fast, so there is only a tiny window in which we could create the race condition, but it technically could still occur, because FastAPI is asynchronous and handles requests concurrently.

Once we load the image into memory, deletions of that image will work. Then we return a normal `Response` object with the image bytes. This is essentially what `FileResponse` does - except it uses `anyio.open_file`, which is async.

The tradeoff is that the server thread is blocked while opening the file. I think this is a fair tradeoff.

A future enhancement could be to implement soft deletion of images (db is already set up for this), and then clean up deleted image files on startup/shutdown. We could move back to using the async `FileResponse` for best responsiveness in the server without any risk of race conditions.
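
A minimal sketch of the serve-from-memory approach (route and directory are illustrative, not InvokeAI's actual API):

```python
from pathlib import Path

from fastapi import FastAPI, Response

app = FastAPI()
IMAGES_DIR = Path("outputs/images")  # illustrative location

@app.get("/images/{image_name}")
def get_image_full(image_name: str) -> Response:
    # Read the whole file into memory; the handle is released immediately,
    # so a concurrent delete no longer fails because the file is held open.
    image_bytes = (IMAGES_DIR / image_name).read_bytes()
    # Sync handler: FastAPI runs it in a worker thread, which blocks during
    # the read (the tradeoff described above).
    return Response(content=image_bytes, media_type="image/png")
```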
2024-07-13 14:46:41 +10:00