Compare commits


1753 Commits

Author SHA1 Message Date
3dfa8ce11f feat(ui): upgrade to reactflow 12 2024-08-10 20:40:34 +10:00
09d1e190e7 show warning for maxUpscaleDimension if model tab is disabled 2024-08-09 14:07:55 -04:00
17ff8196cb Remove tmp code 2024-08-07 22:06:05 -04:00
68f993998a Add support for norm layer 2024-08-07 22:06:05 -04:00
7da6120b39 Fix LoKR refactor bug 2024-08-07 22:06:05 -04:00
6cd40965c4 Depth Anything V2 (#6674)
- Updated the previous DepthAnything manual implementation to use the
`transformers` implementation instead, so we can pick up upstream features.
- Plugged in the DepthAnything models to be handled by Invoke's Model
Manager.
- `small_v2` model will use DepthAnythingV2. This has been added as a
new model option and is now also the default in the Linear UI.


![opera_TxRhmbFole](https://github.com/user-attachments/assets/2a25abe3-ba0b-4f97-b75a-2ce5fd6246e6)


## Merge Plan

Review and merge.
2024-08-07 20:26:58 +05:30
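A minimal sketch of the `transformers`-based flow this commit moves to (the model ID and input file are illustrative assumptions, not necessarily what Invoke registers):

```python
from PIL import Image
from transformers import pipeline

# Illustrative Hub ID for the small Depth Anything V2 checkpoint.
depth_estimator = pipeline("depth-estimation", model="depth-anything/Depth-Anything-V2-Small-hf")

result = depth_estimator(Image.open("photo.jpg").convert("RGB"))
depth_map = result["depth"]  # a PIL image, matching the later "assert ... PIL image" fix
depth_map.save("depth.png")
```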
408a1d6dbb Merge branch 'main' into depth_anything_v2 2024-08-07 10:45:56 -04:00
140670d00e translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-08-06 17:54:47 +10:00
70233fae5d translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 98.1% (1296 of 1321 strings)

Co-authored-by: Phrixus2023 <920414016@qq.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2024-08-06 17:54:47 +10:00
6f457a6c4c translationBot(ui): update translation (German)
Currently translated at 65.1% (860 of 1321 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-08-06 17:54:47 +10:00
5c319f5356 translationBot(ui): update translation (German)
Currently translated at 64.8% (857 of 1321 strings)

Co-authored-by: B N <berndnieschalk@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-08-06 17:54:47 +10:00
991a04f090 translationBot(ui): update translation (Italian)
Currently translated at 98.6% (1303 of 1321 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.6% (1302 of 1320 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.6% (1294 of 1312 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-08-06 17:54:47 +10:00
c39fa75113 docs(ui): add comment in useIsTooLargeToUpscale 2024-08-06 11:49:35 +10:00
f7863e17ce docs(ui): add docstring for maxUpscaleDimension 2024-08-06 11:49:35 +10:00
7c526390ed fix(ui): compare upscaledPixels vs square of max dimension 2024-08-06 11:49:35 +10:00
2cff20f87a update translations, change config value to be dimension instead of total pixels 2024-08-06 11:49:35 +10:00
90ec757802 lint 2024-08-06 11:49:35 +10:00
4b85dfcefe (ui): restore optional limit on upscale output resolution 2024-08-06 11:49:35 +10:00
21deefdc41 (ui): add image resolution badge to initial upscale image 2024-08-06 11:49:35 +10:00
4d4f921a4e build: exclude matplotlib 3.9.1
There was a problem with this release on Windows and the builds were pulled from PyPI. When installing Invoke on Windows, pip attempts to build from source, but most (all?) systems won't have the prerequisites for this and installs fail.

This also affects GH actions.

The simple fix is to exclude version 3.9.1 from our deps.

For more information, see https://github.com/matplotlib/matplotlib/issues/28551
2024-08-05 08:38:44 +10:00
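A hedged illustration of the exclusion in requirement-specifier terms (the real dependency entry lives in the project metadata and may be written differently):

```python
from packaging.specifiers import SpecifierSet

# A specifier that accepts any matplotlib 3.x except the pulled 3.9.1 release.
spec = SpecifierSet(">=3.0,!=3.9.1")
assert "3.9.0" in spec
assert "3.9.1" not in spec
assert "3.9.2" in spec
```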
98db8f395b feat(app): clean up DiskImageStorage types 2024-08-04 09:43:20 +10:00
f465a956a3 feat(ui): remove "images can be restored" messages 2024-08-04 09:43:20 +10:00
9edb02d7ef build: remove send2trash dependency 2024-08-04 09:43:20 +10:00
6c4cf58a31 feat(app): delete model_images instead of using send2trash 2024-08-04 09:43:20 +10:00
08993c0d29 feat(app): delete images instead of using send2trash
Closes #6709
2024-08-04 09:43:20 +10:00
4f8a4b0f22 Merge branch 'main' into depth_anything_v2 2024-08-03 00:38:57 +05:30
a743f3c9b5 fix: implement model to func for depth anything 2024-08-03 00:37:17 +05:30
571ba87e13 fix(ui): include upscale metadata for SDXL multidiffusion 2024-08-01 21:30:42 -04:00
f27b6e2b44 Add Grounded SAM support (text prompt image segmentation) (#6701)
## Summary

This PR enables Grounded SAM workflows
(https://arxiv.org/pdf/2401.14159) via the following:
- `GroundingDinoInvocation` for running a Grounding DINO model.
- `SegmentAnythingModelInvocation` for running a SAM model.
- `MaskTensorToImageInvocation` for convenient visualization.

Other notes:
- Uses the transformers implementation of Grounding DINO and SAM.
- The new models are treated as 'utility models' meaning that they are
not visible in the Models tab, and are downloaded automatically the
first time that they are used.

<img width="874" alt="image"
src="https://github.com/user-attachments/assets/1cbaa97d-0e27-4943-86b1-dc7327ba8675">

## Example

Input image

![be10ec0c-20a8-4ac7-840e-d1a05fffdb6a](https://github.com/user-attachments/assets/bf21572c-635d-4703-b4ab-7aba658a9671)

Prompt: "wheels", all other configs default
Result:

![2221c44e-64e6-4b18-b4cb-610514b7a554](https://github.com/user-attachments/assets/344b91f4-7f4a-4b70-8e2e-3b4a0e55176d)

## Related Issues / Discussions

Thanks to @blessedcoolant for the initial draft here:
https://github.com/invoke-ai/InvokeAI/pull/6678

## QA Instructions

Manual tests:
- [ ] Test that default settings work well.
- [ ] Test with / without apply_polygon_refinement
- [ ] Test mask_filter options
- [ ] Test detection_threshold values
- [ ] Test RGB input image
- [ ] Test RGBA input image
- [ ] Test grayscale input image
- [ ] Smoke test that an empty mask is returned when 0 objects are
detected
- [ ] Test on CPU
- [ ] Test on MPS (works on macOS, but had to force both models to run
on CPU instead of MPS)

Performance:
- Peak GPU memory utilization with both Grounding DINO and SAM models
loaded is ~4.5GB. (The models do not need to be loaded at the same time,
so could be offloaded by the MM if needed.)
- On an RTX4090, with the models already cached, node execution takes
~0.6 secs.
- On my CPU, with the models cached, node execution takes ~10 secs.

## Merge Plan

No special instructions.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
2024-08-01 20:40:18 +02:00
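A condensed sketch of the `transformers`-based Grounding DINO → SAM flow the PR describes (model IDs, file name, and threshold are illustrative assumptions):

```python
import torch
from PIL import Image
from transformers import SamModel, SamProcessor, pipeline

image = Image.open("input.png").convert("RGB")

# 1. Grounding DINO: text-prompted detection, e.g. "wheels" -> bounding boxes.
detector = pipeline("zero-shot-object-detection", model="IDEA-Research/grounding-dino-tiny")
detections = detector(image, candidate_labels=["wheels"], threshold=0.3)
boxes = [[d["box"]["xmin"], d["box"]["ymin"], d["box"]["xmax"], d["box"]["ymax"]] for d in detections]

# 2. SAM: refine each detected box into a segmentation mask.
sam = SamModel.from_pretrained("facebook/sam-vit-base")
processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
inputs = processor(image, input_boxes=[boxes], return_tensors="pt")
with torch.no_grad():
    outputs = sam(**inputs)
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks.cpu(), inputs["original_sizes"], inputs["reshaped_input_sizes"]
)
```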
981475a624 Merge branch 'main' into ryan/grounded-sam 2024-08-01 20:30:35 +02:00
27ac61a4fb Expose all model options in the GroundingDinoInvocation and the SegmentAnythingInvocation. 2024-08-01 14:23:32 -04:00
675ffc2757 Remove BoundingBoxInvocation field name overrides. 2024-08-01 14:05:44 -04:00
44b21f10f1 Add a pydantic model_validator to BoundingBoxField to check the validity of the coords. 2024-08-01 14:00:57 -04:00
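A minimal pydantic sketch of such a coordinate check (field names are assumptions, not the actual `BoundingBoxField` schema):

```python
from pydantic import BaseModel, model_validator

class BoundingBoxField(BaseModel):
    x_min: int
    x_max: int
    y_min: int
    y_max: int

    @model_validator(mode="after")
    def check_coords(self) -> "BoundingBoxField":
        # Reject boxes whose min coordinate exceeds the corresponding max.
        if self.x_min > self.x_max or self.y_min > self.y_max:
            raise ValueError("Invalid bounding box: min coordinate exceeds max")
        return self
```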
c6d49e8b1f Shorten SegmentAnythingInvocation and GroundingDinoInvocatino docstrings, since they are used as the invocation descriptions in the UI. 2024-08-01 10:17:42 -04:00
e6a512aa86 (minor) Tweak order of mask operations. 2024-08-01 10:12:24 -04:00
c3a6a6fb22 Rename SegmentAnythingModelInvocation -> SegmentAnythingInvocation. 2024-08-01 10:00:36 -04:00
b9dc3460ba Rename SegmentAnythingModel -> SegmentAnythingPipeline. 2024-08-01 09:57:47 -04:00
63581ec980 (minor) Add None check to fix static type checking error. 2024-08-01 09:51:53 -04:00
08b1feeed7 add base prop for destination to direct users to different tabs on initial load (#6706)
## Summary
- We want a way to load the studio while directing users to a specific
tab; this PR introduces a `destination` prop to achieve that.

2024-07-31 19:25:36 -04:00
f5cfdcf32d feat: Add BoundingBox Primitive Node 2024-08-01 04:09:08 +05:30
e78fb428f0 simplify destination prop handling 2024-07-31 18:06:22 -04:00
31e270e32c add base prop for destination to direct users to different tabs 2024-07-31 17:20:51 -04:00
b5832768dc Return a MaskOutput from SegmentAnythingModelInvocation. And add a MaskTensorToImageInvocation. 2024-07-31 17:16:14 -04:00
4ce64b69cb Modular backend - LoRA/LyCORIS (#6667)
## Summary

Code for lora patching from #6577.
Additionally, patching was extended so that a LoRA can patch not only
`weight` but also `bias`, since some LoRAs do this.

## Related Issues / Discussions

#6606 

https://invokeai.notion.site/Modular-Stable-Diffusion-Backend-Design-Document-e8952daab5d5472faecdc4a72d377b0d

## QA Instructions

Run with and without the `USE_MODULAR_DENOISE` environment variable set.

## Merge Plan

Replace the old LoRA patcher with the new one after review is done.
If you think there should be some kind of tests, feel free to add them.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-07-31 21:31:31 +02:00
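An illustrative sketch of the idea, not the actual patcher: apply a scaled delta to both `weight` and `bias`, saving the originals so the patch can be undone:

```python
import torch

def patch_layer(
    module: torch.nn.Linear,
    weight_delta: torch.Tensor,
    bias_delta: torch.Tensor | None,
    scale: float,
    originals: dict[str, torch.Tensor],
) -> None:
    """Apply a LoRA-style delta in place; `originals` records tensors for later restore."""
    originals.setdefault("weight", module.weight.detach().clone())
    module.weight.data += scale * weight_delta
    if bias_delta is not None and module.bias is not None:
        originals.setdefault("bias", module.bias.detach().clone())
        module.bias.data += scale * bias_delta
```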
5a9173f766 Merge branch 'main' into stalker-modular_lora 2024-07-31 15:13:22 -04:00
0bb7ed44f6 Add some docs to OriginalWeightsStorage and fix type hints. 2024-07-31 15:08:24 -04:00
332bc9da5b fix: Update depth anything node default to v2 2024-07-31 23:52:29 +05:30
08def3da95 fix: Update canvas depth anything processor default to v2 2024-07-31 23:50:13 +05:30
daf899f9c4 fix: Move the manual image resizing out of the depth anything pipeline 2024-07-31 23:38:12 +05:30
13fb2d1f49 fix: Add Depth Anything V2 as a new option
It is also now the default in the UI replacing Depth Anything V1 small
2024-07-31 23:29:43 +05:30
95dde802ea fix: assert the return depth map to be a PIL image 2024-07-31 23:22:01 +05:30
fca119773b Split invokeai/backend/image_util/segment_anything/ dir into grounding_dino/ and segment_anything/ 2024-07-31 12:28:47 -04:00
0193267a53 Split GroundedSamInvocation into GroundingDinoInvocation and SegmentAnythingModelInvocation. 2024-07-31 12:20:23 -04:00
b4cf78a95d fix: make DA Pipeline a subclass of RawModel 2024-07-31 21:14:49 +05:30
73386826d6 Make GroundingDinoPipeline and SegmentAnythingModel subclasses of RawModel for type checking purposes. 2024-07-31 10:25:34 -04:00
9f448fecb7 Move invokeai/backend/grounded_sam -> invokeai/backend/image_util/grounded_sam 2024-07-31 10:00:30 -04:00
bcd1483a14 Re-order GroundedSAMInvocation._to_numpy_masks(...) to do slightly more work on the GPU. 2024-07-31 09:51:14 -04:00
e206890e25 Use staticmethods rather than inner functions for the Grounding DINO and SAM model loaders. 2024-07-31 09:28:52 -04:00
0a7048f650 (minor) Simplify GroundedSAMInvocation._merge_masks(...). 2024-07-31 08:58:51 -04:00
e8ecf5e155 (minor) Move apply_polygon_refinement condition up a layer. 2024-07-31 08:50:56 -04:00
33e8604b57 Make Grounding DINO DetectionResult a Pydantic model. 2024-07-31 08:47:00 -04:00
cec7399366 (minor) Use a new variable name to satisfy type checks. 2024-07-31 08:27:01 -04:00
bdae81e429 (minor) Simplify GroundedSAMInvocation._filter_detections() 2024-07-31 08:25:19 -04:00
67c32f3d6c Fix typo: zip(..., strict=True) 2024-07-31 08:15:28 -04:00
94d64b8a78 Fix gradient mask values range (#6688)
## Summary

The gradient mask node outputs a mask tensor with values in the range [-1, 1],
which is an unexpected range for a mask.
The denoise node handles it in a way that translates it to a [0, 2] mask, which
looks even more wrong.
From discussion with @dunkeroni, I understand he expected negative values to be
treated the same as 0, so clamping the values does not change the intended node
logic.

## Related Issues / Discussions

#6643 

## QA Instructions

\-

## Merge Plan

\-

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-07-31 06:37:32 +05:30
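The fix amounts to clamping the mask into the expected range; a minimal torch sketch:

```python
import torch

mask = torch.linspace(-1.0, 1.0, steps=5)  # tensor([-1.0, -0.5, 0.0, 0.5, 1.0])
clamped = mask.clamp(0.0, 1.0)             # negatives become 0; range is now [0, 1]
```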
fa3c0c81b3 Merge branch 'main' into stalker7779/fix_gradient_mask 2024-07-31 06:30:44 +05:30
66547b99c1 Add more karras schedulers (#6695)
## Summary

Add Karras variants of the `deis`, `unipc`, `kdpm2` and `kdpm_2_a`
schedulers.
Also added `dpmpp_3` schedulers, but `dpmpp_3s` is currently bugged, so
only 3m was added:
https://github.com/huggingface/diffusers/issues/9007

## Related Issues / Discussions

\-

## QA Instructions

\-

## Merge Plan

~~@psychedelicious We need to decide what to do with the schedulers order, as
it looks a bit broken:~~

![image](https://github.com/user-attachments/assets/e41674af-d87c-4432-8014-c90bd86965a6)

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-07-31 06:09:26 +05:30
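In diffusers, Karras variants are typically opt-in per scheduler class via the `use_karras_sigmas` flag; a hedged sketch:

```python
from diffusers import DEISMultistepScheduler, KDPM2DiscreteScheduler, UniPCMultistepScheduler

# Each Karras variant reuses the base scheduler with a Karras noise schedule.
deis_karras = DEISMultistepScheduler(use_karras_sigmas=True)
unipc_karras = UniPCMultistepScheduler(use_karras_sigmas=True)
kdpm2_karras = KDPM2DiscreteScheduler(use_karras_sigmas=True)
```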
328e58be4c Merge branch 'main' into stalker7779/new_karras_schedulers 2024-07-31 05:56:13 +05:30
18f89ed5ed fix: Make DepthAnything work with Invoke's Model Management 2024-07-31 03:57:54 +05:30
5701c79fab Prevent Grounding DINO and Segment Anything from being moved to MPS - they don't work on MPS devices. 2024-07-30 23:04:15 +02:00
2da9f913f3 Add detection_result.py - was forgotten in a prior commit 2024-07-30 16:04:29 -04:00
6b10b59abe Make GroundedSAMInvocation work with any input image mode (RGB, RGBA, grayscale). 2024-07-30 15:55:57 -04:00
918f77bce0 Move some logic from GroundedSAMInvocation to the backend classes. 2024-07-30 15:34:33 -04:00
f170697ebe Merge branch 'main' into depth_anything_v2 2024-07-31 00:53:32 +05:30
556c6a1d84 fix: Update DepthAnything to use the transformers implementation 2024-07-31 00:51:55 +05:30
aca2a2fa13 Add mask_filter and detection_threshold options to the GroundedSAMInvocation. 2024-07-30 14:22:40 -04:00
ff6398f7d8 Add a GroundedSamInvocation for image segmentation from a text prompt (Grounding DINO + Segment Anything Model). 2024-07-30 11:12:26 -04:00
cf996472b9 Suggested changes
Co-Authored-By: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-07-30 04:50:56 +03:00
156d14c349 Run api regen 2024-07-30 04:05:21 +03:00
86f705bf48 Optimize weights handling 2024-07-30 03:39:01 +03:00
1fd9631f2d Comments fix
Co-Authored-By: Ryan Dick <14897797+RyanJDick@users.noreply.github.com>
2024-07-30 00:39:50 +03:00
2227a2357f Suggested changes + simplify weights logic in patching
Co-Authored-By: Ryan Dick <14897797+RyanJDick@users.noreply.github.com>
2024-07-30 00:34:37 +03:00
58e7ab157d Ruff format 2024-07-29 22:59:17 +03:00
8d16fa6a49 Remove dpmpp_3s schedulers as they are bugged now 2024-07-29 22:55:45 +03:00
55e810efa3 Add dpmpp_3 schedulers 2024-07-29 22:52:15 +03:00
2755316021 update delete board modal to be more descriptive (#6690)
2024-07-29 13:43:17 -04:00
6525f18610 Merge branch 'main' into chainchompa/board-delete-info 2024-07-29 12:52:36 -04:00
2ad13ac7eb Modular backend - inpaint (#6643)
## Summary

Code for inpainting and inpaint model handling from
https://github.com/invoke-ai/InvokeAI/pull/6577.
Separated into 2 extensions as briefly discussed before, so awaiting
discussion about this implementation.

## Related Issues / Discussions

#6606

https://invokeai.notion.site/Modular-Stable-Diffusion-Backend-Design-Document-e8952daab5d5472faecdc4a72d377b0d

## QA Instructions

Run with and without the `USE_MODULAR_DENOISE` environment variable set.
Try and compare outputs between the backends in these cases:
- Normal generation on inpaint model
- Inpainting on inpaint model
- Inpainting on normal model

## Merge Plan

None.
If you think there should be some kind of tests, feel free to add them.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-07-29 10:27:25 -04:00
693a3eaff5 Merge branch 'main' into stalker-modular_inpaint-2 2024-07-29 10:14:45 -04:00
ffca792d5b edited copy for deleted boards message 2024-07-29 09:46:08 -04:00
86a92bb6b5 Add more karras schedulers 2024-07-29 15:14:34 +03:00
171a4e6d80 fix(ui): race condition when deleting a board and resetting selected/auto-add
We were checking the selected and auto-add board ids against the query cache to see if they still exist. If not, we reset.

This only works if the query cache is updated by the time we do the check - race condition!

We already have the board id from the query args, so there's no need to check the query cache - just compare the deleted board ID directly.

Previously, this file's several listeners were all in a single listener, and I had adapted/split its logic up a bit wonkily, introducing these problems.
2024-07-29 11:36:03 +10:00
e3a75a8adf fix(ui): fix logic to reset selected/auto-add boards when toggling show archived boards
The logic was incorrect in two ways:
1. We only ran the logic when we _enable_ showing archived boards. It should be run when we _disable_ showing archived boards.
2. If we couldn't find the selected board in the query cache, we didn't do the reset. This is wrong - if the board isn't in the query cache, we _should_ do the reset. This inverted logic made more sense before the fix for issue 1.
2024-07-29 11:36:03 +10:00
ee7503ce13 Modular backend - T2I Adapter (#6662)
## Summary

T2I Adapter code from #6577.

## Related Issues / Discussions

#6606 

https://invokeai.notion.site/Modular-Stable-Diffusion-Backend-Design-Document-e8952daab5d5472faecdc4a72d377b0d

## QA Instructions

Run with and without the `USE_MODULAR_DENOISE` environment variable set.

## Merge Plan

None.
If you think there should be some kind of tests, feel free to add them.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-07-28 15:52:04 -04:00
8500bac3ca Use logger for warning 2024-07-28 22:51:52 +03:00
310719eb4c Merge branch 'main' into stalker-modular_t2i_adapter 2024-07-28 15:30:00 -04:00
e8e24822ec Modular backend - Seamless (#6651)
## Summary

Seamless code from #6577.

## Related Issues / Discussions

#6606 

https://invokeai.notion.site/Modular-Stable-Diffusion-Backend-Design-Document-e8952daab5d5472faecdc4a72d377b0d

## QA Instructions

Run with and without the `USE_MODULAR_DENOISE` environment variable set.

## Merge Plan

None.
If you think there should be some kind of tests, feel free to add them.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-07-28 13:57:38 -04:00
c57a7afb87 Merge branch 'main' into stalker7779/modular_seamless 2024-07-28 13:49:43 -04:00
84d028898c Revert wrong comment copy 2024-07-27 13:20:58 +03:00
ed0174fbc6 Suggested changes
Co-Authored-By: Ryan Dick <14897797+RyanJDick@users.noreply.github.com>
2024-07-27 13:18:28 +03:00
9e582563eb Suggested changes
Co-Authored-By: Ryan Dick <14897797+RyanJDick@users.noreply.github.com>
2024-07-27 04:25:15 +03:00
faa88f72bf Make lora into separate extensions 2024-07-27 02:39:53 +03:00
0d69a31df0 Merge branch 'main' into chainchompa/board-delete-info 2024-07-26 14:03:18 -04:00
daa5a88eb2 Update docker image to use pnpm version 8 2024-07-26 13:57:33 -04:00
5b84e117b2 Suggested changes
Co-Authored-By: Ryan Dick <14897797+RyanJDick@users.noreply.github.com>
2024-07-26 20:51:12 +03:00
eb257d2d28 update delete board modal to be more descriptive 2024-07-26 13:34:25 -04:00
5810cee6c9 Suggested changes
Co-Authored-By: Ryan Dick <14897797+RyanJDick@users.noreply.github.com>
2024-07-26 19:47:28 +03:00
eef88d1f83 Update gradient mask node version 2024-07-26 19:33:41 +03:00
78f6850fc0 Fix gradient mask values range 2024-07-26 19:28:00 +03:00
bd8890be11 Revert "Fix create gradient mask node output"
This reverts commit 9d1fcba415.
2024-07-26 19:24:46 +03:00
adf1a977ea Suggested changes
Co-Authored-By: Ryan Dick <14897797+RyanJDick@users.noreply.github.com>
2024-07-26 19:22:26 +03:00
e1509bcb45 bump version to 4.2.7 2024-07-26 09:11:17 -07:00
edcaf8287d feat(app): remove beta from multidiffusion workflows 2024-07-26 13:47:51 +10:00
39bd30f2a0 feat(app): update default workflows
- Update `MultiDiffusion SDXL (Beta)`
- Add `MultiDiffusion SD1.5 (Beta)`
2024-07-26 13:47:51 +10:00
102b47190f feat(ui): update qr code cnet starter model
- For SD1.5, use the new V2 version
- Add the SDXL version
2024-07-26 13:34:32 +10:00
269fe2e3bb track accordions in tabs separately so open/close state isn't shared 2024-07-26 08:20:24 +10:00
b32aa1c77f fix missing quote in translation 2024-07-26 08:20:24 +10:00
6656544ed5 tooltip copy updates
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-07-26 08:20:24 +10:00
4c75b93410 feat(ui): add informational popovers for upscale params 2024-07-26 08:20:24 +10:00
5be0de967d feat(ui): close generation and advanced accordions when switching to upscale tab 2024-07-26 08:20:24 +10:00
f8e27b837b fix(ui): memoize model manager components 2024-07-26 07:52:10 +10:00
47414be1e6 fix(ui): dropped model config cache breaking model edit UI
The model edit UI's composition allows for the model edit form to be instantiated before the model's config has been received. This results in the form having no values - all the fields are blank instead of populated by the model config.

Part of the fix is to pass the model config around directly instead of relying on _all_ components to fetch the model directly.

I also fixed a crapload of performance issues related to improper use of redux selectors.
2024-07-26 07:52:10 +10:00
74cef38bcf fix(backend): add refiner to single-file load_classes
Fixes single-file refiner loading.
2024-07-26 05:08:01 +10:00
bb876b8d4e fix(ui): copied edges must have new ids set
Problems this was causing:
- Deleting an edge that was a copy of another edge deletes both edges
- Deleting a node that was a copy-with-edges of another node deletes its edges and its original edges, leaving what I will call "ghost noodles" behind
2024-07-26 04:54:33 +10:00
ba747373db feat(ui): add button to disable info popovers from info popover 2024-07-25 08:06:41 -04:00
95661c8b21 feat(ui): enable info popovers by default 2024-07-25 08:06:41 -04:00
e5d9ca013e fix: use v1 models for large and base versions 2024-07-25 17:24:12 +05:30
4166c756ce wip: depth_anything_v2 init lint fixes 2024-07-25 14:41:22 +05:30
4f0dfbd34d wip: depth_anything_v2 initial implementation 2024-07-25 13:53:06 +05:30
b70ac88684 perf(ui): throttle page changes
Previously you could spam the next/prev buttons and really thrash the server. Throttled to 500ms, which feels like a happy medium between responsive and not-thrash-y.
2024-07-25 11:57:54 +10:00
24609da6ab feat(ui): tweak pagination styles 2024-07-25 11:57:54 +10:00
524647b1f1 fix(ui): jumpto interactions
- Autofocus on popover open
- Autoselect number on popover open
- Enter works to change page when input is focused
- Esc works to close popover when input is focused
2024-07-25 11:57:54 +10:00
cf1af94f53 feat(ui): make jump to page a popover 2024-07-25 11:57:54 +10:00
2a9fdc6314 feat(ui): add jump to option for gallery pagination 2024-07-25 11:57:54 +10:00
46c632e7cc Change layer detection keys according to LyCORIS repository 2024-07-25 02:10:47 +03:00
653f63ae71 Add layer keys check 2024-07-25 02:03:08 +03:00
8a9e2f57a4 Handle bias in full/diff lora layer 2024-07-25 02:02:37 +03:00
31949ed2f2 Refactor code a bit 2024-07-25 02:00:30 +03:00
3657285b1b chore: bump version v4.2.7rc1 2024-07-25 06:23:50 +10:00
e4b5975305 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-07-25 06:09:04 +10:00
b59825edc0 translationBot(ui): update translation (Spanish)
Currently translated at 34.4% (448 of 1300 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2024-07-25 06:09:04 +10:00
25788f6869 translationBot(ui): update translation (Italian)
Currently translated at 98.6% (1289 of 1307 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.5% (1277 of 1296 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-07-25 06:09:04 +10:00
0ccb304b8b Ruff format 2024-07-24 16:01:29 +03:00
ca5a4ee59d fix(ui): few cases where board totals don't update when moving 2024-07-24 22:30:44 +10:00
4fdefe58c7 feat(ui): clear gallery search on esc key 2024-07-24 14:10:16 +10:00
9870f5a96f fix(ui): race condition with gallery search
It was possible to clear the search term while a debounced setSearchTerm was still pending. This resulted in the gallery getting out of sync with the search term.

To fix this, we need to lift the state up a bit and cancel any pending debounced setSearchTerm calls when closing the search or clearing the search term box.
2024-07-24 14:10:16 +10:00
c296ae8cfe feat(ui): add useAssertSingleton hook
Use this to enforce singleton components and hooks.
2024-07-24 14:10:16 +10:00
17493f4ae0 fix(ui): close boards search when toggling panel 2024-07-24 14:10:16 +10:00
2503dca813 fix(ui): show boards panel when opening board search 2024-07-24 14:10:16 +10:00
cb61ef9bb1 feat(ui): use color instead of super tiny icon change to indicate board search toggle state
You can't even see the icon, no point in changing it. Blue = active/open, Grey = closed.
2024-07-24 14:10:16 +10:00
1831ed620f fix(ui): gallery tabs layout 2024-07-24 14:10:16 +10:00
c385e76356 fix(ui): DeleteBoardModal must be a singleton 2024-07-24 14:10:16 +10:00
ff1972fbb3 fix(ui): spacing issue w/ boards search 2024-07-24 14:10:16 +10:00
c4b3405bfa fix(ui): make uncategorized and board components same height 2024-07-24 14:10:16 +10:00
ab2548c0cd feat(ui): minor padding tweaks in boardslist 2024-07-24 14:10:16 +10:00
dc2a3363b0 feat(ui): layout shift when using a collapse w/ flex gap
The gap isn't handled smoothly; there's always a jump. Cannot use gap in the collapsible's container.
2024-07-24 14:10:16 +10:00
d7a5fe2805 feat(ui): make arrow icon rotate on boards list 2024-07-24 14:10:16 +10:00
4e49689d46 feat(ui): make isPrivate required on BoardsList 2024-07-24 14:10:16 +10:00
ca8441a32f fix(ui): alignment & overflow on gallery header 2024-07-24 14:10:16 +10:00
44284d671c feat(ui): tweak padding for boards in list 2024-07-24 14:10:16 +10:00
e89de1d5b7 feat(ui): tweak board tooltip styles
When the totals were high enough, the image looked really off. Also fixed some inconsistent padding.
2024-07-24 14:10:16 +10:00
6db63349f8 fix(ui): missing key on list element 2024-07-24 14:10:16 +10:00
7f6f892533 fix circular dep 2024-07-24 14:10:16 +10:00
d1bbd0cf80 cleanup 2024-07-24 14:10:16 +10:00
bd73b6b2af reorganize the gallery - move board name to top of image grid, add hide/view boards button for toggle 2024-07-24 14:10:16 +10:00
0d40a7d865 exclude uncategorized from search and make sure list is always correct 2024-07-24 14:10:16 +10:00
c2f6b80246 move Uncategorized back to private board list 2024-07-24 14:10:16 +10:00
80f5f8210a increase font size of Move for boards 2024-07-24 14:10:16 +10:00
b7383cc0e5 board UI updates: always show search for boards and images if a term is entered, clear search when view is toggled off 2024-07-24 14:10:16 +10:00
2172e4d292 board UI updates: font tweaks, add cover image to tooltip, move uncategorized out of board list, allow collapsible board list if private enabled 2024-07-24 14:10:16 +10:00
ab0bfa709a Handle loras in modular denoise 2024-07-24 05:07:29 +03:00
6af659b1da Handle t2i adapter in modular denoise 2024-07-24 02:55:33 +03:00
db664afc49 fix(ui): model select overflowing when model names are too long 2024-07-24 09:35:32 +10:00
b99a53e64e tidy(ui): organise postprocessing listeners 2024-07-24 08:22:46 +10:00
5f4ce6fda3 tidy(ui): organise postprocessing files 2024-07-24 08:22:46 +10:00
93e95ce53f chore(ui): lint 2024-07-24 08:22:46 +10:00
2997f0a1f8 fix(ui): ts issue 2024-07-24 08:22:46 +10:00
40b262bcc2 tidy(ui): "simpleUpscale" -> "postProcessing" 2024-07-24 08:22:46 +10:00
a26f050cbb feat(ui): rename ad-hoc upscale stuff to post-processing 2024-07-24 08:22:46 +10:00
94b5b2a467 feat(ui): improve starter model search for spandrel models 2024-07-24 08:22:46 +10:00
b4519ea61f tidy(ui): remove unused maxUpscalePixels config 2024-07-24 08:22:46 +10:00
7f7ce291b5 feat(ui): revised simple upscale warning UI 2024-07-24 08:22:46 +10:00
aeb53563ff feat(ui): use graph util for ad-hoc upscale graph 2024-07-24 08:22:46 +10:00
e8d2e2330e fix(ui): set board in ad-hoc upscale graph 2024-07-24 08:22:46 +10:00
4c6b9ce7c9 fix(ui): use spandrel autoscale node in upscaling tab 2024-07-24 08:22:46 +10:00
87a2221d72 chore(ui): typegen 2024-07-24 08:22:46 +10:00
76aa6bdf05 feat(nodes): split spandrel node
`spandrel_image_to_image` now just runs the model with no changes.

`spandrel_image_to_image_autoscale` runs the model repeatedly until the desired scale is reached. Previously, `spandrel_image_to_image` did this.
2024-07-24 08:22:46 +10:00
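A rough sketch of the autoscale behaviour described above; the names are illustrative, not the node's actual implementation:

```python
def autoscale(image, model, target_scale: float):
    """Run a Spandrel image-to-image model repeatedly until the target scale is reached."""
    achieved = 1.0
    while achieved < target_scale:
        image = model(image)     # each pass upscales by the model's native factor
        achieved *= model.scale  # e.g. 4.0 for a 4x ESRGAN
    return image
```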
416d29fb83 Ruff format 2024-07-24 01:17:28 +03:00
0c1994d682 fix(ui): restore pnpm-lock.yaml
#6645 inadvertently removed the lockfile
2024-07-24 08:07:32 +10:00
19c00241c6 Use non-inverted mask generally(except inpaint model handling) 2024-07-24 00:59:13 +03:00
633bbb4e85 [MM2] Use typed ModelRecordChanges for model_install() rather than untyped dict (#6645)
* [MM2] replace untyped config dict passed to install_model with typed ModelRecordChanges

- adjusted frontend to work with new schema
- used this facility to assign "starter model" names and descriptions to the installed
  models.

* documentation fix

* remove v9 pnpm lockfile

* regenerate schema.ts

* prettified

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-07-23 21:41:00 +00:00
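A hedged sketch of the typed-changes idea (field names are assumptions, not the real schema):

```python
from typing import Optional
from pydantic import BaseModel

class ModelRecordChanges(BaseModel):
    """Typed, validated subset of model-record fields an install may set."""
    name: Optional[str] = None
    description: Optional[str] = None

def install_model(source: str, config: Optional[ModelRecordChanges] = None) -> None:
    # A typed model replaces the old untyped dict, so bad keys fail fast at validation.
    changes = config or ModelRecordChanges()
    print(f"installing {source} with changes {changes.model_dump(exclude_none=True)}")
```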
a221ab2fb6 fix(ui): upsell menuitem styling 2024-07-24 06:58:27 +10:00
0279a27f66 fix(ui): render settingsmenu in portal, no zindex 2024-07-24 06:58:27 +10:00
54aef4959c cleanup 2024-07-24 06:56:02 +10:00
4017609b91 clean up useIsAllowedToUpscale since it's no longer necessary 2024-07-24 06:56:02 +10:00
cb0bffedd5 fix board handling for simple upscale 2024-07-24 06:56:02 +10:00
1fd2a91ccd only show warning for simple upscale if no simple upscale model is available 2024-07-24 06:56:02 +10:00
c323a760a5 Suggested changes
Co-Authored-By: Ryan Dick <14897797+RyanJDick@users.noreply.github.com>
2024-07-23 23:34:28 +03:00
9d1fcba415 Fix create gradient mask node output 2024-07-23 23:29:28 +03:00
075e0405f9 Update Simple Upscale Button to work with spandrel models (#6649)
## Summary
Update the Simple Upscale button to work with spandrel models, add an
UpscaleWarning when models aren't available, and clean up the ESRGAN logic.

2024-07-23 13:33:01 -04:00
bf6066d834 Merge branch 'main' into chainchompa/simple-upscale-updates 2024-07-23 13:27:48 -04:00
5635f65ee9 feat(ui): add upsells for pro edition to settings menu 2024-07-23 13:27:00 -04:00
6317cf8ef9 move handleSimpleUpscaleModels logic into handleSpandrelImageToImageModels listener 2024-07-23 13:13:21 -04:00
9e1daf06f7 Merge branch 'main' into chainchompa/simple-upscale-updates 2024-07-23 12:16:44 -04:00
e1a718b512 cleanup 2024-07-23 12:16:35 -04:00
cbce89162b update simple upscale metadata to match upscale metadata 2024-07-23 12:15:26 -04:00
b46b20210d handle simple upscale models on modelsLoaded 2024-07-23 11:53:43 -04:00
8e89157a83 reuse ParamSpandrelModel for simple upscale 2024-07-23 11:36:46 -04:00
ca21996a97 Remove old seamless class 2024-07-23 18:04:33 +03:00
62aa064e56 Handle seamless in modular denoise 2024-07-23 18:03:59 +03:00
7c975f0d00 Modular backend - add ControlNet (#6642)
## Summary

ControlNet code from #6577.

## Related Issues / Discussions

#6606

https://invokeai.notion.site/Modular-Stable-Diffusion-Backend-Design-Document-e8952daab5d5472faecdc4a72d377b0d

## QA Instructions

Run with and without the `USE_MODULAR_DENOISE` environment variable set.

## Merge Plan

Merge #6641 first, to be able to see the output difference properly.
If you think there should be some kind of tests, feel free to add them.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-07-23 10:37:25 -04:00
8107884c8d Merge branch 'main' into chainchompa/simple-upscale-updates 2024-07-23 10:28:11 -04:00
a2f49ef7c1 cleanup esrgan frontend code 2024-07-23 10:22:38 -04:00
e2e47fd606 Merge branch 'main' into stalker-modular_controlnet 2024-07-23 10:19:12 -04:00
c098edc6b2 updated simple upscale to use spandrel node and list of available spandrel models 2024-07-23 10:15:31 -04:00
bc1d9748ce updated upscale warning to work for simple upscale 2024-07-23 10:04:31 -04:00
7b8e25f525 Modular backend - add FreeU (#6641)
## Summary

FreeU code from https://github.com/invoke-ai/InvokeAI/pull/6577.
Also fixes an issue with sometimes slightly different output.

## Related Issues / Discussions

#6606 

https://invokeai.notion.site/Modular-Stable-Diffusion-Backend-Design-Document-e8952daab5d5472faecdc4a72d377b0d

## QA Instructions

Run with and without the `USE_MODULAR_DENOISE` environment variable set.

## Merge Plan

None.
If you think there should be some kind of tests, feel free to add them.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-07-23 10:02:56 -04:00
db52f5606f Merge branch 'main' into stalker-modular_freeu 2024-07-23 09:53:32 -04:00
de39c5ed21 Modular backend - add rescale cfg (#6640)
## Summary

Rescale CFG code from #6577.

## Related Issues / Discussions

#6606 

https://invokeai.notion.site/Modular-Stable-Diffusion-Backend-Design-Document-e8952daab5d5472faecdc4a72d377b0d

## QA Instructions

Run with and without the `USE_MODULAR_DENOISE` environment variable set.
~~Note: for some reason there is slightly different output from run to
run, but I was sometimes able to get the same output on main and this branch.~~
Fix presented in #6641.

## Merge Plan

~~None.~~ Merge #6641 first, to be able to see the output difference
properly.
If you think there should be some kind of tests, feel free to add them.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-07-23 09:45:30 -04:00
d014dc94fd Merge branch 'main' into stalker7779/modular_rescale_cfg 2024-07-23 09:34:22 -04:00
39e804d0f8 Use consistent param names in patch_extension(...) functions: context -> ctx. 2024-07-23 09:18:04 -04:00
154e8f6e78 chore(ui): lint 2024-07-23 15:42:16 +10:00
2d31b82e60 feat(ui): tweak layout for warning message 2024-07-23 15:42:16 +10:00
8f934747f3 feat(ui): updated upscale tab warnings 2024-07-23 15:42:16 +10:00
4352341a00 feat(ui): starter models filter matches spandrel models to "upscale" search term 2024-07-23 15:42:16 +10:00
d7e0ec52ff feat(ui): make model install tab controlled
This lets us navigate directly to eg the Starter Models tab
2024-07-23 15:42:16 +10:00
1072b74c0e fix(ui): edge cases in starter models search 2024-07-23 15:42:16 +10:00
46dc8c6641 chore(ui): lint 2024-07-23 15:42:16 +10:00
a8bc6ab5b1 fix(ui): typos 2024-07-23 15:42:16 +10:00
bd91bd4a84 Math Updates 2024-07-23 15:42:16 +10:00
8756a6b8c3 fix(ui): remove sharpness param 2024-07-23 10:55:54 +10:00
2e0cebb571 fix(ui): bug where viewer would disappear on upscaling tab 2024-07-23 10:55:54 +10:00
c3a8184431 feat(ui): add number input to scale slider 2024-07-23 10:55:54 +10:00
ffa39d74b3 feat(ui): remove first unsharp from upscale graph 2024-07-23 10:55:54 +10:00
f9d3966ea2 feat(ui): add scale param to upscaling tab 2024-07-23 10:55:54 +10:00
7cee4e42a7 feat(ui): add addEdgeToMetadata graph helper 2024-07-23 10:55:54 +10:00
071c7c7c7e chore(ui): typegen 2024-07-23 10:55:54 +10:00
818045f678 tidy(ui): use × instead of translation string 2024-07-23 10:55:54 +10:00
7edefbefff feat(ui): add translation for upscaling tab 2024-07-23 10:55:54 +10:00
29efab70b7 feat(nodes): spandrel_image_to_image.scale defaults to 4.0 2024-07-23 10:55:54 +10:00
ac6adc392a feat(nodes): add scale and fit_to_multiple_of_8 to spandrel node 2024-07-23 10:55:54 +10:00
a2ef5d56ee feat(nodes): split out spandrel node upscale logic into utils 2024-07-23 10:55:54 +10:00
13f3560e55 more lint fixes 2024-07-23 10:55:54 +10:00
c4bd60e00f knip fix 2024-07-23 10:55:54 +10:00
54eda9163c remove tiledVAE option and make it true 2024-07-23 10:55:54 +10:00
582f384fff lint fix 2024-07-23 10:55:54 +10:00
a43211e650 math updates for controlnet tiles 2024-07-23 10:55:54 +10:00
6cb0581b0d add description to upscale model dropdown tooltip 2024-07-23 10:55:54 +10:00
845d77916e lint fix 2024-07-23 10:55:54 +10:00
f18431a999 use fn to get width/height of output image 2024-07-23 10:55:54 +10:00
5060bf2f62 lint fix 2024-07-23 10:55:54 +10:00
7854d913b2 add upscaling data to metadata 2024-07-23 10:55:54 +10:00
890a3ce32a add limited metadata 2024-07-23 10:55:54 +10:00
fb4b3f3350 fix creativity/sharpness/structure scales, move where loras are added, get scale const working 2024-07-23 10:55:54 +10:00
d166b08b6a restore scale but hardcode it to 2 regardless of upscale model 2024-07-23 10:55:54 +10:00
5266e9e682 fix(ui): remove unused scale param 2024-07-23 10:55:54 +10:00
d0265e21b0 fix(ui): use spandrel node for upscaling 2024-07-23 10:55:54 +10:00
3126e8e49a chore(ui): typegen 2024-07-23 10:55:54 +10:00
9e3412d776 translations and lint fix 2024-07-23 10:55:54 +10:00
4a09cc57be use the tile controlnet in graph instead of Mary's hardcoded key 2024-07-23 10:55:54 +10:00
5ab36e0433 add warning if no upscale model or no tile controlnet for base model 2024-07-23 10:55:54 +10:00
d2bf3629bf base scale off of upscale model selected 2024-07-23 10:55:54 +10:00
d9b217d908 hook up sharpness, structure, and creativity 2024-07-23 10:55:54 +10:00
2847f1b5ac add vae toggle, lint fix 2024-07-23 10:55:54 +10:00
bc30850f3a hardcode Mary's tile cnet key 2024-07-23 10:55:54 +10:00
7668dc68a0 cleanup, add loras 2024-07-23 10:55:54 +10:00
ea449f5a0a upscale graph built, no multidiffusion yet 2024-07-23 10:55:54 +10:00
5a1ed99ca1 restore adhoc upscale button 2024-07-23 10:55:54 +10:00
3a2707ac02 disable invoke button properly for upscaling tab 2024-07-23 10:55:54 +10:00
ce5b1103ed add send to upscale to context menu 2024-07-23 10:55:54 +10:00
fd91b83d86 build out the rest of the accordions 2024-07-23 10:55:54 +10:00
a0a54348e8 removed upscale button, created spandrel model dropdown, created upscale initial image that works with dnd 2024-07-23 10:55:54 +10:00
43b3e242b0 tidy(ui): refactor parameters panel components to be 1:1 with tabs 2024-07-23 10:55:54 +10:00
4e8dcb7a1a Suggested changes
Co-Authored-By: Ryan Dick <14897797+RyanJDick@users.noreply.github.com>
2024-07-23 01:46:29 +03:00
3cb13d6288 Rename as suggested in other PRs
Co-Authored-By: Ryan Dick <14897797+RyanJDick@users.noreply.github.com>
2024-07-23 01:01:18 +03:00
4f01c0f2d3 fix: update uncategorized board totals when deleting and moving images (#6646)
## Summary
- Currently, the total for uncategorized images does not update when
moving or deleting images; this PR updates that count when those
actions are taken.

2024-07-22 17:10:52 -04:00
87eb018380 Revert debug change 2024-07-22 23:49:20 +03:00
5003e5d763 Same changes as in other PRs, add check for running inpainting on inpaint model without source image
Co-Authored-By: Ryan Dick <14897797+RyanJDick@users.noreply.github.com>
2024-07-22 23:47:39 +03:00
e92af52fb8 fix moving items to uncategorized updating 2024-07-22 16:11:36 -04:00
5f0fe3c8a9 Suggested changes
Co-Authored-By: Ryan Dick <14897797+RyanJDick@users.noreply.github.com>
2024-07-22 23:09:11 +03:00
339dddd018 update uncategorized board totals when deleting and moving images 2024-07-22 16:03:01 -04:00
1b359b55cb Suggested changes
Co-Authored-By: Ryan Dick <14897797+RyanJDick@users.noreply.github.com>
2024-07-22 22:17:29 +03:00
d0435575c1 chore(deps): bump fastapi-events to the next minor version 2024-07-22 04:15:36 -07:00
58f3072b91 Handle inpainting on normal models 2024-07-21 22:17:29 +03:00
9e7b470189 Handle inpaint models 2024-07-21 20:45:55 +03:00
42356ec866 Add ControlNet support to denoise 2024-07-21 20:01:30 +03:00
1748848b7b Ruff fixes 2024-07-21 18:37:20 +03:00
5772965f09 Fix slightly different output with old backend 2024-07-21 18:31:30 +03:00
e046e60e1c Add FreeU support to denoise 2024-07-21 18:31:10 +03:00
9a1420280e Add rescale cfg support to denoise 2024-07-21 17:33:43 +03:00
f9c61f1b6c Fix function call that we forgot to update in #6606 (#6636)
## Summary

Fix function call that we forgot to update in #6606

## QA Instructions

Run a TiledMultiDiffusionDenoiseLatents invocation and make sure it
doesn't crash.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
2024-07-19 17:19:32 -04:00
a8cc5caf96 Fix function call that we forgot to update in https://github.com/invoke-ai/InvokeAI/pull/6606 2024-07-19 17:07:52 -04:00
930ff559e4 add sdxl tile to starter models 2024-07-19 16:49:33 -04:00
473f4cc1c3 Base of modular backend (#6606)
## Summary

Base code of the new modular backend from #6577.
Contains normal generation and regional prompt support.
A preview extension is also included, to test that the extensions logic works.

## Related Issues / Discussions


https://invokeai.notion.site/Modular-Stable-Diffusion-Backend-Design-Document-e8952daab5d5472faecdc4a72d377b0d

## QA Instructions

Run with and without the `USE_MODULAR_DENOISE` environment variable set.
Currently only normal and regional conditioning is supported, so just
generate some images and compare with main's output.

## Merge Plan

Discuss the injection point names a bit more?
For example, if in the future the UNet becomes overridable, the current
`pre_unet`/`post_unet` naming assumes the override is named `unet`, which
feels a bit odd.
Also `apply_cfg`: a future implementation could ignore/not use CFG, in
which case `combine_noise_predictions`/`combine_noise` seems more
suitable.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-07-19 16:37:57 -04:00
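A toy sketch of the callback/injection-point pattern the PR discusses (`pre_unet` is named in the PR; the class shape here is an assumption, not the actual `ExtensionsManager`):

```python
from collections import defaultdict
from typing import Any, Callable

class ExtensionsManager:
    """Ordered callbacks registered at named injection points."""

    def __init__(self) -> None:
        self._callbacks: dict[str, list[tuple[int, Callable]]] = defaultdict(list)

    def register(self, injection_point: str, order: int, fn: Callable) -> None:
        self._callbacks[injection_point].append((order, fn))
        self._callbacks[injection_point].sort(key=lambda pair: pair[0])

    def run(self, injection_point: str, ctx: Any) -> None:
        for _, fn in self._callbacks[injection_point]:
            fn(ctx)

# e.g. a preview extension hooking each denoise step:
manager = ExtensionsManager()
manager.register("pre_unet", order=0, fn=lambda ctx: print("before UNet step"))
manager.run("pre_unet", ctx=None)
```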
78d2b1b650 Merge branch 'main' into stalker-backend_base 2024-07-19 16:25:20 -04:00
39e10d894c Add invocation cancellation logic to patchers 2024-07-19 23:17:01 +03:00
e16faa6370 Add gradient blending to tile seams in MultiDiffusion. 2024-07-19 13:05:50 -07:00
83a86abce2 Add unit tests for ExtensionsManager and ExtensionBase. 2024-07-19 14:15:46 -04:00
0c56d4a581 Ryan's suggested changes to extension manager/extensions
Co-Authored-By: Ryan Dick <14897797+RyanJDick@users.noreply.github.com>
2024-07-18 23:49:44 +03:00
97a7f51721 don't use cpu state_dict for model unpatching when executing on cpu (#6631)
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-07-18 15:34:01 -04:00
710dc6b487 Merge branch 'main' into stalker7779/backend_base 2024-07-18 01:08:04 +03:00
2ef3b49a79 Add run cancelling logic to extension manager 2024-07-17 04:39:15 +03:00
3f79467f7b Ruff format 2024-07-17 04:24:45 +03:00
2c2ec8f0bc Comments, a bit of refactoring 2024-07-17 04:20:31 +03:00
79e35bd0d3 Minor fixes 2024-07-17 03:48:37 +03:00
137202b77c Remove patch_unet logic for now 2024-07-17 03:40:27 +03:00
03e22c257b Convert conditioning_mode to enum 2024-07-17 03:37:11 +03:00
ae6d4fbc78 Move out _concat_conditionings_for_batch submethods 2024-07-17 03:31:26 +03:00
cd1bc1595a Rename sequential to a private variable 2024-07-17 03:24:11 +03:00
0583101c1c Add Spandrel upscale starter models (#6605)
## Summary

This PR adds some spandrel upscale models to the starter model list.

In the future we may also want to add:
- Some DAT models
(https://drive.google.com/drive/folders/1iBdf_-LVZuz_PAbFtuxSKd_11RL1YKxM)

## QA Instructions

I installed the starter models via the model manager UI, and tested that
I could use them in a workflow.

## Merge Plan

- [ ] Merge the preceding Spandrel PRs first, then change the target
branch to `main`.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
2024-07-16 16:04:52 -04:00
f866b49255 Add some ESRGAN and SwinIR upscale models to the starter models list. 2024-07-16 15:55:10 -04:00
b7c6c63005 Added some comments 2024-07-16 22:52:44 +03:00
95e9f5323b Add tiling to SpandrelImageToImageInvocation (#6594)
## Summary

Add tiling to the `SpandrelImageToImageInvocation` node so that it can
process large images.

Tiling enables this node to run on effectively any input image
dimension. Of course, the computation time increases quadratically with
the image dimension.

Some profiling results on an RTX4090:
- Input 1024x1024, 4x upscale, 4x UltraSharp ESRGAN: `13 secs`, `<4 GB
VRAM`
- Input 4096x4096, 4x upscale, 4x UltraSharp ESRGAN: `46 secs`, `<4 GB
VRAM`
- Input 4096x4096, 2x upscale, SwinIR: `165 secs`, `<5 GB VRAM`

A lot of the time is spent PNG encoding the final image:
- PNG encoding of a 16384x16384 image takes `83secs @
pil_compress_level=7`, `24secs @ pil_compress_level=1`

Callout: If we want to start building workflows that pass large images
between nodes, we are going to have to find a way to avoid the PNG
encode/decode roundtrip that we are currently doing. As is, we will be
incurring a huge penalty for every node that receives/produces a large
image.

## QA Instructions

- [x] Tested with tiling up to 4096x4096 -> 16384x16384.
- [x] Test on images with an alpha channel (the alpha channel is
dropped).
- [x] Test on images with odd dimension.
- [x] Test no tiling (`tile_size=0`)

## Merge Plan

- [x] Merge #6556 first, and change the target branch to `main`.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
2024-07-16 15:51:15 -04:00
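A minimal sketch of the tile iteration such a node needs (overlap handling and seam blending omitted):

```python
def iter_tiles(height: int, width: int, tile_size: int):
    """Yield (top, left, bottom, right) boxes covering an image of the given size."""
    for top in range(0, height, tile_size):
        for left in range(0, width, tile_size):
            yield top, left, min(top + tile_size, height), min(left + tile_size, width)

# e.g. a 4096x4096 input with 512px tiles -> 64 tiles, each run through the model
tiles = list(iter_tiles(4096, 4096, 512))
assert len(tiles) == 64
```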
6b0ca88177 Merge branch 'main' into ryan/spandrel-upscale-tiling 2024-07-16 15:40:14 -04:00
7ad32dcad2 Add support for Spandrel Image-to-Image models (e.g. ESRGAN, Real-ESRGAN, Swin-IR, DAT, etc.) (#6556)
## Summary

- Add support for all
[spandrel](https://github.com/chaiNNer-org/spandrel) image-to-image
models - this is a collection of many popular super-resolution models
(e.g. ESRGAN, Real-ESRGAN, SwinIR, DAT, etc.)

Examples of supported models:

- DAT:
https://drive.google.com/drive/folders/1iBdf_-LVZuz_PAbFtuxSKd_11RL1YKxM
- SwinIR: https://github.com/JingyunLiang/SwinIR/releases
- Any ESRGAN / Real-ESRGAN model

## Related Issues

Closes #6394 

## QA Instructions

- [x] Test that unsupported models still fail the probe (i.e. no false
positive spandrel models)
- [x] Test adding a few non-spandrel model types
- [x] Test adding a handful of spandrel model types: ESRGAN,
Real-ESRGAN, SwinIR, DAT
- [x] Verify model size estimation for the model cache
- [x] Test using the spandrel models in a practical image upscaling
workflow

## Merge Plan

- [x] Get approval from @brandonrising and @maryhipp before merging -
this PR has commercial implications.
- [x] Merge #6571 and change the target branch to `main`

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
2024-07-16 15:37:20 -04:00
81991e072b Merge branch 'main' into ryan/spandrel-upscale 2024-07-16 15:14:08 -04:00
cec345cb5c Change attention processor apply logic 2024-07-16 20:03:29 +03:00
608cbe3f5c Separate inputs in denoise context 2024-07-16 19:30:29 +03:00
7905a46ca4 chore: bump version to 4.2.6post1 2024-07-16 09:09:04 +10:00
38343917f8 fix(backend): revert non-blocking device transfer
In #6490 we enabled non-blocking torch device transfers throughout the model manager's memory management code. When using this torch feature, torch attempts to wait until the tensor transfer has completed before allowing any access to the tensor. Theoretically, that should make this a safe feature to use.

This provides a small performance improvement but causes race conditions in some situations. Specific platforms/systems are affected, and complicated data dependencies can make this unsafe.

- Intermittent black images on MPS devices - reported on discord and #6545, fixed with special handling in #6549.
- Intermittent OOMs and black images on a P4000 GPU on Windows - reported in #6613, fixed in this commit.

On my system, I haven't experienced any issues with generation, but targeted testing of non-blocking ops did expose a race condition when moving tensors from CUDA to CPU.

One workaround is to use torch streams with manual sync points. Our application logic is complicated enough that this would be a lot of work and feels ripe for edge cases and missed spots.

Much safer is to fully revert non-blocking - which is what this change does.
2024-07-16 08:59:42 +10:00
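A sketch of the hazard being reverted, assuming a CUDA device: a non-blocking device-to-host copy can be read before it completes unless a sync point is added:

```python
import torch

gpu_tensor = torch.randn(1024, device="cuda")

# Non-blocking copy returns immediately; the data may still be in flight.
cpu_tensor = gpu_tensor.to("cpu", non_blocking=True)

torch.cuda.synchronize()  # without an explicit sync, reading cpu_tensor races the copy
print(cpu_tensor.sum())
```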
9f088d1bf5 Multiple small fixes 2024-07-16 00:51:25 +03:00
fd8d1c12d4 Remove 'del' operator overload 2024-07-16 00:43:32 +03:00
d623bd429b Fix condtionings logic 2024-07-16 00:31:56 +03:00
5a0c99816c chore: bump version to v4.2.6 2024-07-15 14:16:31 +10:00
24bf1ea65a fix(ui): boards cut off when search open 2024-07-15 14:07:20 +10:00
28e79c4c5e chore: ruff
Looks like an upstream change to ruff resulted in this file being a violation.
2024-07-15 14:05:04 +10:00
d7d59d704b chore: update default workflows
- Update all existing defaults
- Add Tiled MultiDiffusion workflow
2024-07-15 14:05:04 +10:00
8539c601e6 translationBot(ui): update translation (Italian)
Currently translated at 98.4% (1262 of 1282 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-07-15 11:54:45 +10:00
5cbe9fafb2 fix(ui): clear selection when deleting last image in board 2024-07-15 08:57:13 +10:00
3ecd14f394 chore: bump version to 4.2.6rc1 2024-07-13 14:55:21 +10:00
7c0dfd74a5 fix(api): deleting large images fails
This issue is caused by a race condition. When a large image is served to the client, it is done using a streaming `FileResponse`. This concurrently serves the image straight from disk. The file is kept open by FastAPI until the image is fully served.

When a user deletes an image before the file is done serving, the delete fails because the file is still held by FastAPI.

To reproduce the issue:
- Create a very large image (8k reliably creates the issue).
- Create a smaller image, so that the first image in the gallery is not the large image.
- Refresh the app. The small image should be selected.
- Select the large image and immediately delete it. You have to be fast to delete it before it finishes loading.
- In the terminal, we expect to see an error saying `Failed to delete image file`, and the image does not disappear from the UI.
- After a short wait, once the image has fully loaded, try deleting it again. We expect this to work.

The workaround is to instead serve the image from memory.

Loading the image to memory is very fast, so there is only a tiny window in which we could create the race condition, but it technically could still occur, because FastAPI is asynchronous and handles requests concurrently.

Once we load the image into memory, deletions of that image will work. Then we return a normal `Response` object with the image bytes. This is essentially what `FileResponse` does - except it uses `anyio.open_file`, which is async.

The tradeoff is that the server thread is blocked while opening the file. I think this is a fair tradeoff.

A future enhancement could be to implement soft deletion of images (db is already set up for this), and then clean up deleted image files on startup/shutdown. We could move back to using the async `FileResponse` for best responsiveness in the server without any risk of race conditions.
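
A minimal sketch of the workaround, with a hypothetical FastAPI endpoint standing in for the real route:

```
from pathlib import Path

from fastapi import FastAPI, Response

app = FastAPI()

@app.get("/images/{image_name}")
def get_image(image_name: str) -> Response:
    # Read the whole file up front so no handle is held open while the
    # response is sent - a delete can no longer race with a streaming
    # FileResponse that keeps the file open.
    image_bytes = Path("outputs/images", image_name).read_bytes()
    return Response(content=image_bytes, media_type="image/png")
```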
2024-07-13 14:46:41 +10:00
2c1a91241e fix(app): windows indefinite hang while finding port
For some reason, I started getting this indefinite hang when the app checks if port 9090 is available. After some fiddling around, I found that adding a timeout resolves the issue.

I confirmed that the util still works by starting the app on 9090, then starting a second instance. The second instance correctly saw 9090 in use and moved to 9091.
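
A minimal sketch of the idea, assuming a connect-with-timeout probe (the real util may differ in detail):

```
import socket

def port_is_free(port: int, timeout: float = 0.25) -> bool:
    # Without a timeout, the connect attempt can hang indefinitely on some
    # Windows setups; a short timeout bounds the check.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex(("127.0.0.1", port)) != 0  # nonzero => not in use

def find_port(start: int = 9090) -> int:
    port = start
    while not port_is_free(port):
        port += 1
    return port
```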
2024-07-13 14:46:41 +10:00
84f136e737 translationBot(ui): update translation (Italian)
Currently translated at 98.4% (1262 of 1282 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-07-13 08:38:22 +10:00
499e4d4fde Add preview extension to check logic 2024-07-13 00:45:04 +03:00
e961dd1dec Remove remains of priority logic 2024-07-13 00:44:21 +03:00
7e00526999 Remove overrides logic for now 2024-07-13 00:28:56 +03:00
3a9dda9177 Renames 2024-07-12 22:44:00 +03:00
bd8ae5d896 Simplify guidance modes 2024-07-12 22:01:37 +03:00
87e96e1be2 Rename modifiers to callbacks, convert order to int, unify injection points a bit 2024-07-12 22:01:05 +03:00
0bc60378d3 Rework conditioning conversion to unet kwargs a bit 2024-07-12 20:43:32 +03:00
9cc852cf7f Base code from draft PR 2024-07-12 20:31:26 +03:00
712cf00a82 fix(app): vae tile size field description 2024-07-12 06:30:27 -07:00
fb1130c644 fix(ui): do not invalidate image dto cache when deleting image 2024-07-12 14:25:38 +10:00
0f65a12cf3 fix(ui): handle archived boards like other boards when they are visible, do not reset board selection when autoadd board is hidden 2024-07-12 14:25:38 +10:00
84abdc5780 fix(ui): prevent cutoff of last board 2024-07-12 14:25:38 +10:00
2320701929 Do not crash if there are invalid model configs in the DB (#6593)
## Summary

This PR changes the handling of invalid model configs in the DB to log a
warning rather than crashing the app.

This change is being made in preparation for some upcoming new model
additions. Previously, if a user rolled back from an app version that
added a new model type, the app would not launch until the DB was fixed.
This PR changes this behaviour to allow rollbacks of this type (with
warnings).

**Keep in mind that this change is only helpful to users _rolling back
to a version that has this fix_. I.e. it offers no help in the first
version that includes it.**
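
A minimal sketch of the warn-and-skip behaviour, with `ModelConfig` standing in for the app's real config types:

```
import logging

from pydantic import BaseModel, ValidationError

logger = logging.getLogger(__name__)

class ModelConfig(BaseModel):  # stand-in for the real model config union
    key: str
    type: str

def load_model_configs(rows: list[dict]) -> list[ModelConfig]:
    configs: list[ModelConfig] = []
    for row in rows:
        try:
            configs.append(ModelConfig.model_validate(row))
        except ValidationError:
            # Previously this raised and prevented the app from launching.
            logger.warning("Skipping invalid model config in DB: %r", row.get("key"))
    return configs
```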

## QA Instructions

1. Run the Spandrel model branch, which adds a new model type
https://github.com/invoke-ai/InvokeAI/pull/6556.
2. Add a spandrel model via the model manager.
3. Rollback to main. The app will crash on launch due to the invalid
spandrel model config.
4. Checkout this branch. The app should now run with warnings about the
invalid model config.


## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
2024-07-11 21:15:51 -04:00
69af099532 Warn on invalid model configs in the DB rather than crashing. 2024-07-11 21:05:55 -04:00
0428ce73a9 Add early cancellation to SpandrelImageToImageInvocation. 2024-07-11 15:42:33 -04:00
5795617f86 translationBot(ui): update translation (German)
Currently translated at 67.0% (859 of 1282 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-07-11 19:23:28 +10:00
b533bc072e translationBot(ui): update translation (French)
Currently translated at 25.2% (322 of 1275 strings)

Co-authored-by: Nathan <bonnemainsnathan@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fr/
Translation: InvokeAI/Web UI
2024-07-11 19:23:28 +10:00
d7199c7ca6 translationBot(ui): update translation (Russian)
Currently translated at 100.0% (1282 of 1282 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (1280 of 1280 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (1275 of 1275 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (1273 of 1273 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-07-11 19:23:28 +10:00
a69284367b translationBot(ui): update translation (Italian)
Currently translated at 98.2% (1260 of 1282 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.4% (1260 of 1280 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.4% (1255 of 1275 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.4% (1253 of 1273 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.4% (1245 of 1265 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-07-11 19:23:28 +10:00
c4d2fe9c65 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 76.5% (968 of 1265 strings)

Co-authored-by: Phrixus2023 <920414016@qq.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2024-07-11 19:23:28 +10:00
fe0d56de5c translationBot(ui): update translation files
Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-07-11 19:23:28 +10:00
HAL
7aec5624f7 translationBot(ui): update translation (Japanese)
Currently translated at 50.4% (636 of 1261 strings)

Co-authored-by: HAL <HALQME@users.noreply.hosted.weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translation: InvokeAI/Web UI
2024-07-11 19:23:28 +10:00
B N
2f3ec41f94 translationBot(ui): update translation (German)
Currently translated at 67.3% (849 of 1261 strings)

Co-authored-by: B N <berndnieschalk@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-07-11 19:23:28 +10:00
de1235c980 chore: bump version to 4.2.6a1 2024-07-11 10:34:53 +10:00
d0d2955992 Reduce peak VRAM utilization of SpandrelImageToImageInvocation. 2024-07-10 14:25:19 -04:00
d868d5d584 Make SpandrelImageToImage tiling much faster. 2024-07-10 14:25:19 -04:00
ab775726b7 Add tiling support to the SpandrelImageToImage node. 2024-07-10 14:25:19 -04:00
650902dc29 Fix broken unit test caused by non-existent model path. 2024-07-10 13:59:17 -04:00
88c3a71586 fix(ui): fix bug with usePanel 2024-07-10 04:27:24 -07:00
ec1b429d45 feat(ui): add divider between board search and list 2024-07-10 04:27:24 -07:00
146e3a3377 feat(ui): tweak board tooltip behaviour 2024-07-10 04:27:24 -07:00
38622b0d91 feat(ui): board list title verbiage 2024-07-10 04:27:24 -07:00
7db767b7c3 feat(ui): sticky board list header 2024-07-10 04:27:24 -07:00
b70e87f25b feat(ui): tweak add board button style 2024-07-10 04:27:24 -07:00
fea1ec9085 feat(ui): updated boards resizable panel logic 2024-07-10 04:27:24 -07:00
2e7a95998c feat(ui): add support for default size in usePanel 2024-07-10 04:27:24 -07:00
788f90a7d5 feat(ui): tweak resizehandle styling 2024-07-10 04:27:24 -07:00
6bf29b20af fix(ui): fix edge case in panels
Not sure why I didn't figure out how to do this before - we should only reset a panel if it's too small.
2024-07-10 04:27:24 -07:00
8f0edcd4f4 fix(ui): edge cases when deleting, archiving, updating boards
Need to handle different cases where the selected or auto-add board is hidden - fall back to uncategorized in these situations.
2024-07-10 04:27:24 -07:00
a7c44b4a98 feat(ui): rename gallery boards on double click 2024-07-10 04:27:24 -07:00
48a57f0da8 feat(ui): boards styling
- Refine layout
- Update colors - more minimal, fewer shaded boxes
- Add indicator for search icons showing a search term is entered
- Handle new `projectName` and `projectUrl` ui props
2024-07-10 04:27:24 -07:00
dfd94bbd0b feat(ui): remove galleryHeader in favor of projectUrl & projectName 2024-07-10 04:27:24 -07:00
2edfb2356d remove extra boardname 2024-07-10 04:27:24 -07:00
58d2c1557d prettier 2024-07-10 04:27:24 -07:00
8fdff33cf8 update board header styling, toggle board search, resizing gallery panels 2024-07-10 04:27:24 -07:00
a96e34d2d1 remove collapsibles and update board title 2024-07-10 04:27:24 -07:00
8826adad24 filter out uncategorized when not included in search 2024-07-10 04:27:24 -07:00
cdacf2ecd0 clear out boards search when adding a new board 2024-07-10 04:27:24 -07:00
f193a576a6 move boardname back and make collapsible again 2024-07-10 04:27:24 -07:00
b7ebdca70a update image and assets tabs styling 2024-07-10 04:27:24 -07:00
7b5d4935b4 Merge branch 'main' into ryan/spandrel-upscale 2024-07-09 13:47:11 -04:00
c90b5541e8 Boards UI update and add support for private boards (#6588)
## Summary
Updates the Boards UI in the gallery and adds support for creating and displaying private boards.

## QA Instructions
You can view private boards by setting `config.allowPrivateBoards` to `true`.

## Checklist

- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-07-09 10:52:01 -04:00
a79e9caab1 Merge branch 'main' into boards-ui-update 2024-07-09 10:00:26 -04:00
4313578d8e fix(docker): ensure 'chown' does not break on read-only fs; fixes #6264 2024-07-09 09:47:29 -04:00
42c2dea202 fix(docker): change 'nvidia' profile name to 'cuda' 2024-07-09 09:47:29 -04:00
b672cc37a7 docs: overhaul Docker documentation, add to main README 2024-07-09 09:47:29 -04:00
476ebd13ae feat(ui): add board button tooltip when private boards enabled 2024-07-09 22:51:08 +10:00
9ae808712e Demote error log to warning for models treated as having size 0 (#6589)
## Summary

Demote error log to warning for models treated as having size 0.

## Related Issues / Discussions

Closes #6587 

I looked into handling ESRGAN model sizes properly. They load a
state_dict with a bit of an unusual nested-dict structure. Rather than
figure out how to accurately calculate their size, we can just wait for
https://github.com/invoke-ai/InvokeAI/pull/6556. ESRGAN model size
handling should work properly when loaded through that pathway.

## QA Instructions

Loaded an ESRGAN model, and confirmed that the warning log is at the
warning level.

## Merge Plan

No special instructions.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
2024-07-09 08:51:00 -04:00
2460689c00 feat(ui): style board name 2024-07-09 22:47:03 +10:00
781b800ef7 feat(ui): boards lists start collapsed 2024-07-09 22:40:50 +10:00
d38d513d23 fix(ui): autoadd badge doesn't flex shrink 2024-07-09 22:39:32 +10:00
80e1b87b9e fix(ui): autoadd badge hides when editing name 2024-07-09 22:39:17 +10:00
6014382c7b feat(ui): select a board when it is created 2024-07-09 22:37:41 +10:00
af63c538ed Demote error log to warning for models treated as having size 0. 2024-07-09 08:35:43 -04:00
060d698a12 feat(ui): restore image count for boards 2024-07-09 22:19:20 +10:00
637802d803 fix(ui): restore auto-add indicator 2024-07-09 22:14:21 +10:00
2faf1e2ed3 fix(ui): show uncategorized board when private boards disabled 2024-07-09 22:02:54 +10:00
81cf47dd99 feat(ui): boards list layout & style tweaking 2024-07-09 21:58:48 +10:00
907b257984 remove unused file and addressed pr feedback 2024-07-08 23:20:50 -04:00
e2667f957c prettier 2024-07-08 22:16:31 -04:00
40c3b5e727 generate types again 2024-07-08 22:13:12 -04:00
38c5804457 remove unused disclosure 2024-07-08 22:09:23 -04:00
faf65c988a Merge branch 'main' into boards-ui-update 2024-07-08 22:06:26 -04:00
1785825690 add current gallery board name 2024-07-08 22:03:42 -04:00
0e092c0fb5 update is_private name 2024-07-08 22:03:12 -04:00
79a7b11214 remove old boards list 2024-07-08 15:02:22 -04:00
3a85ab15a1 update BoardRecord 2024-07-08 14:55:04 -04:00
9ca6980c7a cleanup and bug fixes 2024-07-08 13:29:53 -04:00
bdf4fcda23 Fixed 404 error on latest release link (line 16):
This commit corrects a broken link on line 16 that pointed to the latest release but returned a 404 error (page not found) when clicked. The cause was a trailing dot at the end of the URL, which has now been removed, ensuring users can access the intended latest release page.
2024-07-07 08:35:06 -07:00
ecbff2aa44 Whoops... forgot to commit this file. 2024-07-05 14:57:05 -04:00
0ce6ec634d Do not assign the result of SpandrelImageToImageModel.load_from_file(...) during probe to ensure that the model is immediately gc'd. 2024-07-05 14:05:12 -04:00
d09999736c Rename spandrel models to 'Image-to-Image Model' throughout the UI. 2024-07-05 14:04:08 -04:00
35f8781ea2 Fix static type errors with SCHEDULER_NAME_VALUES. And, avoid bi-directional cross-directory imports, which contribute to circular import issues. 2024-07-05 07:38:35 -07:00
3a24d70279 Update the PR template QA instructions (#6580)
## Summary

This PR tweaks the wording of the PR template QA instructions with the
goals of:
1. Make it more clear that PR authors are responsible for testing their
PRs.
2. Encouraging sufficient detail in the test descriptions.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
2024-07-04 21:20:08 +05:30
7c8846e309 Update the PR template QA instructions to 1) make it clear that authors are responsible for testing their PRs, and 2) encourage sufficient detail in the QA section. 2024-07-04 11:30:38 -04:00
bd42b75d1e Delete unused duplicate libc_util.py file (#6579)
## Summary
 
Delete an unused duplicate libc_util.py file. The active version is at
`invokeai/backend/model_manager/libc_util.py`

## QA Instructions

I ran a smoke test to confirm that memory snapshotting still works.

## Merge Plan

- [x] Change target branch to `main` before merging.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
2024-07-04 20:15:39 +05:30
36202d6d25 Delete unused duplicate libc_util.py file. The active version is at invokeai/backend/model_manager/libc_util.py. 2024-07-04 10:30:40 -04:00
b35f5b3877 Enforce absolute imports with ruff (#6576)
## Summary

This PR migrates all relative imports to absolute imports, and adds a
ruff check to enforce this going forward.

The justification for this change is here:
https://github.com/invoke-ai/InvokeAI/issues/6575

## QA Instructions

Smoke test all common workflows. Most of the relative -> absolute
conversions could be completed automatically, so the risk is relatively
low.

## Merge Plan

As with any far-reaching change like this, it is likely to cause some
merge conflicts with some in-flight branches. Unfortunately, there's no
way around this, but let me know if you can think of in-flight work that
will be significantly disrupted by this.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_ N/A
- [x] _Documentation added / updated (if applicable)_ N/A
2024-07-04 10:29:01 -04:00
1d449097cc Apply ruff rule to disallow all relative imports. 2024-07-04 09:35:37 -04:00
9da5925287 Add ruff rule to disallow relative parent imports. 2024-07-04 09:35:37 -04:00
7bbd793064 Fix some models treated as having size 0 in the model cache (#6571)
## Summary

This PR fixes a regression that caused the following models to be
treated as having size 0 in the model cache: `(TextualInversionModelRaw,
IPAdapter, LoRAModelRaw)`.

Changes:
- Call the correct model size calculation for all supported model types.
- Log an error message if an unexpected model type is loaded, to prevent
similar regressions in the future.
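
A rough sketch of the dispatch idea (simplified; the real `calc_model_size_by_data` handles the specific model classes named above):

```
import logging

import torch

logger = logging.getLogger(__name__)

def calc_model_size_by_data(model: object) -> int:
    # Size every known representation; log loudly instead of silently
    # falling back to 0 for unknown types.
    if isinstance(model, torch.nn.Module):
        return sum(p.numel() * p.element_size() for p in model.parameters())
    if isinstance(model, dict):  # a raw state_dict of tensors
        return sum(
            t.numel() * t.element_size()
            for t in model.values()
            if isinstance(t, torch.Tensor)
        )
    logger.error("Unexpected model type %s; treating size as 0", type(model))
    return 0
```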

## QA Instructions

I tested the following features and verified that no models fell back to
using a size of 0 unexpectedly:
- Text-to-image
- Textual Inversion
- LoRA
- IP-Adapter
- ControlNet
(All tested with both SD1.5 and SDXL.)

I compared the model cache switching behavior before and after this
change with a large number of LoRAs (10). Since LoRAs are small compared
to the main models, the changes in behaviour are minimal. Nonetheless,
it makes sense to get this in for correctness. And it might make a
difference for some usage patterns with limited RAM.

## Merge Plan

No special instructions.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
2024-07-04 09:21:30 -04:00
414750a45d Update calc_model_size_by_data(...) to handle all expected model types, and to log an error if an unexpected model type is received. 2024-07-04 09:08:25 -04:00
0fe92cd406 [MM bugfix] Put model install errors on the event bus (#6578)
* fix access token lookup

* fix bug preventing model install error events from being reported

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-07-03 22:44:34 -04:00
a405f14ea2 Fix SpandrelImageToImageModel size calculation for the model cache. 2024-07-03 16:38:16 -04:00
9d3739244f Prettier formatting. 2024-07-03 16:28:21 -04:00
534528b85a Re-generate schema.ts 2024-07-03 16:28:21 -04:00
114320ee69 (minor) typo 2024-07-03 16:28:21 -04:00
6161aa73af Move pil_to_tensor() and tensor_to_pil() utilities to the SpandrelImageToImage class. 2024-07-03 16:28:21 -04:00
1ab20f43c8 Tidy spandrel model probe logic, and document the reasons behind the current implementation. 2024-07-03 16:28:21 -04:00
9328c17ded Add Spandrel models to the list of models in the Model Manager tab. 2024-07-03 16:28:21 -04:00
c1c8e55e8e Fix static check errors. 2024-07-03 16:28:21 -04:00
504a42fe61 typo: fix UIType on Spandrel Upscaling node. 2024-07-03 16:28:21 -04:00
29c8ddfb88 WIP - A bunch of boilerplate to support Spandrel Image-to-Image models throughout the model manager and the frontend. 2024-07-03 16:28:21 -04:00
95079dc7d4 Use a ModelIdentifierField to identify the spandrel model in the UpscaleSpandrelInvocation. 2024-07-03 16:28:21 -04:00
2a1514272f Set the dtype correctly for SpandrelImageToImageModels when they are loaded. 2024-07-03 16:28:21 -04:00
59ce9cf41c WIP - Begin to integrate SpandrelImageToImageModel type into the model manager. 2024-07-03 16:28:21 -04:00
e6abea7bc5 (minor) Remove redundant else clause on a for-loop with no break statement. 2024-07-03 16:28:21 -04:00
c335f92345 (minor) simplify startswith(...) syntax. 2024-07-03 16:28:21 -04:00
c1afe35704 Add prototype invocation for running upscaling models with spandrel. 2024-07-03 16:28:21 -04:00
6437ef3f82 add view that displays private boards with shared boards 2024-07-03 14:25:36 -04:00
bb6ff4cf37 chore(ci): update pnpm github action 2024-07-03 13:16:25 -04:00
e719018ba1 fix sort order 2024-07-03 09:20:08 -07:00
a11dc62c2e fix access token lookup 2024-07-03 13:31:08 +10:00
7c01b69c12 fix(ui): revise image selection after deletion
- For single image deletion, select the image in the same slot as the deleted image
- For multiple image deletion, empty selection
- On list images, if no images are currently selected, select the first image
2024-07-03 13:20:40 +10:00
5578660ccb fix(ui): reset page when search term changes 2024-07-03 13:20:40 +10:00
e4813f800a Update calc_model_size_by_data(...) to handle all expected model types, and to log an error if an unexpected model type is received. 2024-07-02 21:51:45 -04:00
e9936c27fb Make the VAE tile size configurable for tiled VAE (#6555)
## Summary

- This PR exposes a `tile_size` field on `ImageToLatentsInvocation` and
`LatentsToImageInvocation`.
  - Setting `tile_size = 0` preserves the default behaviour.
- This feature is primarily intended to support upscaling workflows that
require VAE encoding/decoding high resolution images. In the future, we
may want to expose the tile size as a global application config, but
that's a separate conversation.
- As a general rule, larger tile sizes produce better results at the
cost of higher memory usage.
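
A minimal sketch of how such an override can be scoped, assuming a diffusers-style `tile_sample_min_size` attribute:

```
from contextlib import contextmanager

@contextmanager
def patch_vae_tiling_params(vae, tile_size: int):
    # Temporarily override the VAE's tile size, restoring the original on
    # exit so other invocations are unaffected. tile_size == 0 means
    # "preserve the default behaviour".
    if tile_size == 0:
        yield
        return
    original = vae.tile_sample_min_size
    vae.tile_sample_min_size = tile_size
    try:
        yield
    finally:
        vae.tile_sample_min_size = original
```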

### Example:

Original (5472x5472)

![orig](https://github.com/invoke-ai/InvokeAI/assets/14897797/af0a975d-11ed-4f3c-9e53-84f3da6c997e)

VAE roundtrip with 512x512 tiles (note the discoloration)

![vae_roundtrip_512x512](https://github.com/invoke-ai/InvokeAI/assets/14897797/d589ae3e-fe93-410a-904c-f61f0fc0f1f2)

VAE roundtrip with 1024x1024 tiles (some discoloration still present,
but less severe than at 512x512)

![vae_roundtrip_1024x1024](https://github.com/invoke-ai/InvokeAI/assets/14897797/d0bb9752-3bfa-444f-88c9-39a3ca89c748)


## Related Issues / Discussions

Related: #6144 

## QA Instructions

- [x] Test image generation via the Linear tab
- [x] Test VAE roundtrip with tiling disabled
- [x] Test VAE roundtrip with tiling and tile_size = 0
- [x] Test VAE roundtrip with tiling and tile_size > 0

## Merge Plan

No special instructions.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
2024-07-02 09:16:07 -04:00
3752509066 Expose the VAE tile_size on the VAE encode and decode invocations. 2024-07-02 09:07:03 -04:00
a1b7dbfa54 Add unit test for patch_vae_tiling_params(). 2024-07-02 09:07:03 -04:00
79640ba14e Add context manager for overriding VAE tiling params. 2024-07-02 09:07:03 -04:00
4075a81676 feat(ui): gallery image selection ux
The selection logic is a bit complicated. We have image selection and pagination, both of which can be triggered using the mouse or hotkeys. We have viewer image selection and comparison image selection, which is determined by the alt key.

This change ties the room together with these behaviours:

- Changing the page using pagination buttons never changes the selection.
- Changing the selected image using arrows may change the page, if the arrow key pressed would select an image off the current page.
  - `right` on the last image of the current page goes to the next page
  - `down` on the last row of images goes to the next page
  - `left` on the first image of the current page goes to the previous page
  - `up` on the first row of images goes to the previous page
- If `alt` is held when using arrow keys, we change the page, but we only change the comparison image selection.
- When using arrow keys, if the page has changed since the last image was selected, the selection is reset to the first image on the page.
- The next/previous buttons on the image viewer do the same thing as `left` and `right` without `alt`.
- When clicking an image in the gallery:
  - If no modifier keys are held, the image is exclusively selected.
  - If `ctrl` or `meta` are held, the image's selection status is toggled.
  - If `shift` is held, all images from the last-selected image to the image are selected. If there are no images on the current page, the selection is unchanged.
  - If `alt` is held, the image is set as the compare image.
- `ctrl+a` and `meta+a` add the current page to the selection.
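
A minimal pure-Python sketch of the shift-click rule above (all names hypothetical):

```
def shift_select(page_images: list[str], selection: list[str], clicked: str) -> list[str]:
    # Select everything from the last-selected image to the clicked image.
    if not selection or selection[-1] not in page_images:
        return [clicked]
    start = page_images.index(selection[-1])
    end = page_images.index(clicked)
    lo, hi = min(start, end), max(start, end)
    return page_images[lo : hi + 1]
```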

The logic for gallery navigation and selection is now pretty hairy. It's spread across 3 hooks, a listener, a redux slice, and several components.

When we next make changes to this part of the app, we should consider consolidating some of the related logic. Probably most of it can go into a single listener and make it much simpler to grok.
2024-07-02 13:52:32 +10:00
4d39976909 feat(ui): restore loading spinner in search box
@maryhipp you were right, after trying loading bars and different placements, this feels like the best place for it.
2024-07-02 13:52:32 +10:00
d14894b3ae (ui) clarify auto-add options 2024-07-02 06:44:09 +10:00
6f5c5b0757 lint fix 2024-07-01 15:36:06 -04:00
93caa23ef8 undo 2024-07-01 15:36:06 -04:00
977a77f4e6 fix(ui): dont mess up redux if 403 gets thrown 2024-07-01 15:36:06 -04:00
57c0fcb93d (ui) clarify auto-add options 2024-07-01 15:36:06 -04:00
8b55900035 Update README.md
Updated to include more context confirming the community edition is in fact free for commercial use.
2024-07-01 09:12:31 -07:00
b1cc413bbd tidy(ui): remove search term fetching indicator
Don't like this UI (even though I suggested it). No need to prevent the user from interacting with the search term field during fetching. Let's figure out a nicer way to present this in a followup.
2024-07-01 20:06:28 +10:00
face94ce33 feat(ui): tweak search term placeholder verbiage 2024-07-01 20:06:28 +10:00
f0b1f0e5b6 feat(ui): pass search term as-is to query
The images service does not add the query filter if the search term is an empty string.
2024-07-01 20:06:28 +10:00
390dc47db5 feat(app): transform search term to lowercase 2024-07-01 20:06:28 +10:00
20d5c3a8bf (ui): improve loader/fetching state while searching, make search term a string in redux 2024-07-01 20:06:28 +10:00
134d831ebf (api) simplify query 2024-07-01 20:06:28 +10:00
b65ed8e8f2 fix commented out migration 2024-07-01 20:06:28 +10:00
93951dcf82 (api) ruff 2024-07-01 20:06:28 +10:00
da05034e20 feat(ui): debounced gallery search 2024-07-01 20:06:28 +10:00
d579aefb3e feat(api): add optional search_term query param to image list to search metadata 2024-07-01 20:06:28 +10:00
5d1f6db414 fix(app): fix SQL query w/ enum for python 3.11 (#6557)
## Summary

Python 3.11 has a wonderfully devious breaking change where enum classes
that inherit from `str` or `int` _sometimes_ do not behave the same way
as they do in 3.10 when used in string formatting/interpolation.

This breaks the new gallery sort queries. The fix is to use
`order_dir.value` instead of `order_dir` in the query.

This was not an issue during development because the feature was
developed w/ python 3.10.
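
A minimal reproduction of the difference:

```
from enum import Enum

class OrderDir(str, Enum):
    ASC = "ASC"
    DESC = "DESC"

order_dir = OrderDir.DESC

# Python 3.10: "... ORDER BY created_at DESC"
# Python 3.11: "... ORDER BY created_at OrderDir.DESC"  (broken SQL)
print(f"SELECT * FROM images ORDER BY created_at {order_dir};")

# The fix - interpolate the value explicitly, identical on both versions:
print(f"SELECT * FROM images ORDER BY created_at {order_dir.value};")
```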

## Related Issues / Discussions

Thanks to @JPPhoto for reporting and troubleshooting:
https://discord.com/channels/1020123559063990373/1149513625321603162/1256211815982039173

## QA Instructions

JP's fancy python 3.11 system should work on this PR.

## Merge Plan

n/a

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-06-29 18:50:16 +05:30
f9961eceb7 fix(app): fix SQL query w/ enum for python 3.11 2024-06-29 11:07:39 +10:00
10076fb1e8 feat(ui): tweak gallery settings popover divider styling 2024-06-28 18:01:01 +10:00
d6e85e5f67 tidy(ui): rename GalleryBulkSelect -> GallerySelectionCountTag 2024-06-28 18:01:01 +10:00
1ce459198c chore(ui): knip 2024-06-28 18:01:01 +10:00
17d337169d fix(ui): do not reset limit when changing gallery view 2024-06-28 18:01:01 +10:00
1468f4d37e perf(ui): split out gallery settings popover components
This was taking over 15ms (!) to render each time a setting changed, wtf
2024-06-28 18:01:01 +10:00
2b744480d6 feat(ui): update UI for sorting 2024-06-28 18:01:01 +10:00
abb8d34b56 chore(ui): typegen 2024-06-28 18:01:01 +10:00
9e664d7c58 feat(api): remove order_by in favor of starred_first for images records 2024-06-28 18:01:01 +10:00
c96ccae70b feat(app): remove order_by in favor of starred_first for images records 2024-06-28 18:01:01 +10:00
f268fe126e feat(api): add order_by and order_dir to list images for sorting 2024-06-28 18:01:01 +10:00
6109a06f04 feat(ui): gallery sort by created at or starred, asc or desc 2024-06-28 18:01:01 +10:00
5df2a79549 Update starter models 2024-06-28 17:49:45 +10:00
10b9088312 update controlnet starter models 2024-06-28 17:49:45 +10:00
41f46b846b chore: ruff 2024-06-28 10:36:05 +10:00
6dfc406c52 tests: update test_bulk_download.py after addition of archived field 2024-06-28 10:36:05 +10:00
0d4b80780b feat(ui): handle edge cases when archiving/deleting boards
If the currently selected or auto-add board is archived or deleted, we should reset them. There are some edge cases that weren't handled in the previous implementation.

All handling of this logic is moved to the (renamed) listener.
2024-06-28 10:36:05 +10:00
15b9ece411 chore(ui): typegen 2024-06-28 10:36:05 +10:00
89fcab34d0 feat(app): BoardRecord.archived is a required field 2024-06-28 10:36:05 +10:00
132289de55 chore: ruff E721
Looks like in the latest version of ruff, E721 was added or changed and now catches something it didn't before.
2024-06-28 10:36:05 +10:00
9f93e9d120 fix(app): when creating image, skip adding to board if board doesn't exist
Before this change, if you attempted to create an image with a nonexistent board, we'd get an unhandled error when adding the image to the board. Due to the structure of the code, the record would be created but the file would not.

With this change, we now log a warning if we have a problem adding the image to the board, but the record and file are still created.

A future improvement would be to create a transaction for this part of the code, preventing some other situation that could result in only the record or only the file being saved.
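
A minimal sketch of the described flow (all names hypothetical):

```
import logging

logger = logging.getLogger(__name__)

def create_image(records, files, image, board_id=None):
    # The record and file are always created; a bad board only logs a warning.
    records.save(image)
    files.save(image)
    if board_id is not None:
        try:
            records.add_image_to_board(image_name=image.name, board_id=board_id)
        except Exception:
            logger.warning("Failed to add image to board %s; skipping", board_id)
```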
2024-06-28 10:36:05 +10:00
b5f23292d4 lint fix 2024-06-28 10:36:05 +10:00
a63dbb2c2d (api) change query param to include_archived 2024-06-28 10:36:05 +10:00
740bf80f3e (ui): update query param to include_archived, fix cache when archiving boards 2024-06-28 10:36:05 +10:00
dc90de600d (ui) allow auto-add on archived boards, reset to uncategorized if auto-add board is not currently visible due to archived view 2024-06-28 10:36:05 +10:00
5709f82e5f feat(ui): separate context menu for no board board
Much easier to not need to handle the board being optional in the component.
2024-06-28 10:36:05 +10:00
20042d99ec tidy(ui): archived icon component 2024-06-28 10:36:05 +10:00
8fce168dc5 fix tsc errors 2024-06-28 10:36:05 +10:00
a7ea096b28 ruff format 2024-06-28 10:36:05 +10:00
29eb3c8b62 lint fix 2024-06-28 10:36:05 +10:00
071e8bcee4 feat(ui): make archiving and auto-add mutually exclusive 2024-06-28 10:36:05 +10:00
68c0aa898f feat(ui): add ability to archive/unarchive boards, add toggle to gallery settings to show/hide archived boards in list 2024-06-28 10:36:05 +10:00
5120a76ce5 cleanup 2024-06-28 10:36:05 +10:00
38a948ac9f feat(api): add archived query param to board list endpoint to include them in the response 2024-06-28 10:36:05 +10:00
c33111468e feat(api): ability to archive boards 2024-06-28 10:36:05 +10:00
3e0fb45dd7 Load single-file checkpoints directly without conversion (#6510)
* use model_class.load_singlefile() instead of converting; works, but performance is poor

* adjust the convert api - not right just yet

* working, needs sql migrator update

* rename migration_11 before conflict merge with main

* Update invokeai/backend/model_manager/load/model_loaders/stable_diffusion.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* Update invokeai/backend/model_manager/load/model_loaders/stable_diffusion.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* implement lightweight version-by-version config migration

* simplified config schema migration code

* associate sdxl config with sdxl VAEs

* remove use of original_config_file in load_single_file()

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-06-27 17:31:28 -04:00
aba16085a5 fix(backend): mps should not use non_blocking (#6549)
## Summary

We can get black outputs when moving tensors from CPU to MPS. It appears
MPS to CPU is fine. See:
- https://github.com/pytorch/pytorch/issues/107455
-
https://discuss.pytorch.org/t/should-we-set-non-blocking-to-true/38234/28

Changes:
- Add properties for each device on `TorchDevice` as a convenience.
- Add `get_non_blocking` static method on `TorchDevice`. This utility
takes a torch device and returns the flag to be used for non_blocking
when moving a tensor to the device provided.
- Update model patching and caching APIs to use this new utility.

## Related Issues / Discussions

Fixes: #6545

## QA Instructions

For both MPS and CUDA:
- Generate at least 5 images using LoRAs
- Generate at least 5 images using IP Adapters

## Merge Plan

We have pagination merged into `main` but aren't ready for that to be
released.

Once this fix is tested and merged, we will probably want to create a
`v4.2.5post1` branch off the `v4.2.5` tag, cherry-pick the fix and do a
release from the hotfix branch.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_ @RyanJDick @lstein This
feels testable but I'm not sure how.
- [ ] _Documentation added / updated (if applicable)_
2024-06-27 10:11:53 -04:00
14775cc9c4 ruff format 2024-06-27 09:45:13 -04:00
c7562dd6c0 fix(backend): mps should not use non_blocking
We can get black outputs when moving tensors from CPU to MPS. It appears MPS to CPU is fine. See:
- https://github.com/pytorch/pytorch/issues/107455
- https://discuss.pytorch.org/t/should-we-set-non-blocking-to-true/38234/28

Changes:
- Add properties for each device on `TorchDevice` as a convenience.
- Add `get_non_blocking` static method on `TorchDevice`. This utility takes a torch device and returns the flag to be used for non_blocking when moving a tensor to the device provided.
- Update model patching and caching APIs to use this new utility.

Fixes: #6545
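
A minimal sketch of what such a utility can look like (the real one is a static method on `TorchDevice` and may differ in detail):

```
import torch

def get_non_blocking(to_device: torch.device) -> bool:
    # Only CUDA transfers keep the non_blocking fast path; transfers to MPS
    # (and anywhere else) must block to avoid reading unsynced memory.
    return to_device.type == "cuda"

# usage:
# tensor = tensor.to(device, non_blocking=get_non_blocking(device))
```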
2024-06-27 19:15:23 +10:00
a0a0c57789 chore(ui): knip 2024-06-27 13:48:40 +10:00
32ebf82d1a feat(ui): better pagination buttons 2024-06-27 13:48:40 +10:00
2dd172c2c6 feat(ui): gallery bulk select styling 2024-06-27 13:48:40 +10:00
280ec9d4b3 fix(ui): invalidate getImageDTO caches when images are mutated 2024-06-27 13:48:40 +10:00
fde8fc7575 perf(ui): optimistic updates for getImageDTO query cache 2024-06-27 13:48:40 +10:00
6dcdc87eb1 fix(ui): control adapter image preview 2024-06-27 13:48:40 +10:00
93ffcb642e lint fix 2024-06-27 13:48:40 +10:00
4c914ef2e8 use correct query params for boardIdSelected listener 2024-06-27 13:48:40 +10:00
c0ad5bc4a4 fix when deleting first image in list 2024-06-27 13:48:40 +10:00
8c58a180de GG another fix 2024-06-27 13:48:40 +10:00
715dd983b0 appease the knip 2024-06-27 13:48:40 +10:00
84ffd36071 lint fix 2024-06-27 13:48:40 +10:00
9f30f1bfec fix circular dep 2024-06-27 13:48:40 +10:00
bdff5c4e87 only show selected when greater than 0 2024-06-27 13:48:40 +10:00
afb0651f91 clear selection when board or gallery view changes 2024-06-27 13:48:40 +10:00
66e25628c3 fix neg pages 2024-06-27 13:48:40 +10:00
3a531a3c88 remove rest of cache, add bulk select UI 2024-06-27 13:48:40 +10:00
f01df49128 lint fix 2024-06-27 13:48:40 +10:00
7bbe236107 implement custom sort to replace images adapter logic 2024-06-27 13:48:40 +10:00
719c066ac4 feat(ui): more efficient board totals fetching
We only need to show the totals in the tooltip. Tooltips accept a component for the tooltip label. The component isn't rendered until the tooltip is triggered.

Move the board total fetching into a tooltip component for the boards. Now we only fire these requests when the user mouses over the board.
2024-06-27 13:48:40 +10:00
689dc30f87 feat(ui): tweak pagination buttons
- Fix off-by-one error when going to last page
- Update component to have minimal/no layout shift
2024-06-27 13:48:40 +10:00
1f22f6ae02 feat(ui): iterate on dynamic gallery limit
- Simplify the gallery layout
- Set an initial gallery limit to load _some_ images immediately.
- Refactor the resize observer to use the actual rendered image component to calculate the number of images per row/col. This prevents inaccuracies caused by image padding that could result in the wrong number of images.
- Debounce the limit update to not thrash the API
- Use absolute positioning trick to ensure the gallery container is always exactly the right size
- Minimum of `imagesPerRow` images loaded at all times
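
A back-of-the-envelope sketch of the limit calculation (the real logic lives in the React resize observer; names hypothetical):

```
import math

def calc_gallery_limit(container_w: float, container_h: float,
                       image_w: float, image_h: float) -> int:
    # Fit a whole number of columns from the *rendered* image size, then
    # load enough rows to fill the container - never less than one row.
    images_per_row = max(1, math.floor(container_w / image_w))
    rows = max(1, math.ceil(container_h / image_h))
    return images_per_row * rows
```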
2024-06-27 13:48:40 +10:00
9c931d9ca0 fix(ui): gallery content overflow
This is one of those unexpected CSS quirks. Flex containers need min-width or min-height for their children to not overflow. Add `minH={0}` to gallery container.
2024-06-27 13:48:40 +10:00
e0a241fa4f wip change limit based on size of gallery 2024-06-27 13:48:40 +10:00
6a4b4ee340 trying to invalidate all the tags 2024-06-27 13:48:40 +10:00
488bf21925 fix single pagers 2024-06-27 13:48:40 +10:00
c9c39c02b6 handle generations coming in, fix pagination to use total from list query so it updates as that changes 2024-06-27 13:48:40 +10:00
5101dc4bef some cleanup, add page buttons 2024-06-27 13:48:40 +10:00
98c77a3ed1 pull in spencers work 2024-06-27 13:48:40 +10:00
4fca62680d Update invokeai_version.py 2024-06-27 10:41:01 +10:00
f76282a5ff Fix handling of 0-step denoising process (#6544)
## Summary

https://github.com/invoke-ai/InvokeAI/pull/6522 introduced a change in
behavior in cases where start/end were set such that there are 0
timesteps. This PR reverts that change.

cc @StAlKeR7779 

## QA Instructions

Run with euler, 5 steps, start: 0.0, end: 0.05. I ran this test before
#6522, after #6522, and on this branch. This branch restores the
behavior to pre-#6522, i.e. noise is injected even if no denoising steps
are applied.


## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
2024-06-26 13:01:58 -04:00
9a3b8c6fcb Fix handling of init_timestep in StableDiffusionGeneratorPipeline and improve its documentation. 2024-06-26 12:51:51 -04:00
bd74b84cc5 Revert "Remove the redundant init_timestep parameter that was being passed around. It is simply the first element of the timesteps array."
This reverts commit fa40061eca.
2024-06-26 12:51:51 -04:00
dc23bebebf Run ruff 2024-06-26 21:46:59 +10:00
38b6f90c02 Update prevention exception message 2024-06-26 21:46:59 +10:00
cd9dfefe3c Fix inpainting mask shape assertions. 2024-06-25 11:31:52 -07:00
b9946e50f9 Use image-space tile dimensions on the TiledMultiDiffusionDenoiseLatents invocation. This is more natural for many users. 2024-06-25 11:31:52 -07:00
06f49a30f6 Mark TiledMultiDiffusionDenoiseLatents as a Beta node. 2024-06-25 11:31:52 -07:00
e1af78c702 Make the tile_overlap input to MultiDiffusion *strictly* control the amount of overlap rather than being a lower bound. 2024-06-25 11:31:52 -07:00
c5588e1ff7 Add TODO comment explaining why some schedulers do not interact well with MultiDiffusion. 2024-06-25 11:31:52 -07:00
07ac292680 Consolidate _region_step() function - the separation wasn't really adding any value. 2024-06-25 11:31:52 -07:00
7c032ea604 (minor) Fix some documentation typos. 2024-06-25 11:31:52 -07:00
c5ee415607 Add progress image callbacks to TiledMultiDiffusionDenoiseLatentsInvocation. 2024-06-25 11:31:52 -07:00
fa40061eca Remove the redundant init_timestep parameter that was being passed around. It is simply the first element of the timesteps array. 2024-06-25 11:31:52 -07:00
7cafd78d6e Revert "Expose vae_decode(...) as a staticmethod on LatentsToImageInvocation."
This reverts commit 753239b48d.
2024-06-25 11:31:52 -07:00
8a43656cf9 (minor) Address a few small TODOs. 2024-06-25 11:31:52 -07:00
bd3b6ca11b Remove TiledStableDiffusionRefineInvocation. It was a proof-of-concept that has been superseded by TiledMultiDiffusionDenoiseLatents. 2024-06-25 11:31:52 -07:00
ceae5fe1db (minor) typo 2024-06-25 11:31:52 -07:00
25067e4f0d Delete rough notes. 2024-06-25 11:31:52 -07:00
fb0aaa3e6d Fix advanced scheduler behaviour in MultiDiffusionPipeline. 2024-06-25 11:31:52 -07:00
c22526b9d0 Fix handling of stateful schedulers in MultiDiffusionPipeline. 2024-06-25 11:31:52 -07:00
c881882f73 Connect TiledMultiDiffusionDenoiseLatents to the MultiDiffusionPipeline backend. 2024-06-25 11:31:52 -07:00
36473fc52a Remove regional conditioning logic from MultiDiffusionPipeline - it is not yet supported. 2024-06-25 11:31:52 -07:00
b9964ecc4a Initial (untested) implementation of MultiDiffusionPipeline. 2024-06-25 11:31:52 -07:00
051af802fe Remove inpainting support from MultiDiffusionPipeline. 2024-06-25 11:31:52 -07:00
3ff2e558d9 Remove IP-Adapter and T2I-Adapter support from MultiDiffusionPipeline. 2024-06-25 11:31:52 -07:00
fc187c9253 Document plan for the rest of the MultiDiffusion implementation. 2024-06-25 11:31:52 -07:00
605f460c7d Add detailed docstring to latents_from_embeddings(). 2024-06-25 11:31:52 -07:00
60d1e686d8 Copy StableDiffusionGeneratorPipeline as a starting point for a new MultiDiffusionPipeline. 2024-06-25 11:31:52 -07:00
22704dd542 Simplify handling of inpainting models. Improve the in-code documentation around inpainting. 2024-06-25 11:31:52 -07:00
875673c9ba Minor tidying of latents_from_embeddings(...). 2024-06-25 11:31:52 -07:00
f604575862 Consolidate latents_from_embeddings(...) and generate_latents_from_embeddings(...) into a single function. 2024-06-25 11:31:52 -07:00
80a67572f1 Fix invocation name of tiled_multi_diffusion_denoise_latents. 2024-06-25 11:31:52 -07:00
60ac937698 Improve clarity of comments regarded when 'noise' and 'latents' are expected to be set. 2024-06-25 11:31:52 -07:00
1e41949a02 Fix static check errors on imports in diffusers_pipeline.py. 2024-06-25 11:31:52 -07:00
5f0e330ed2 Remove a condition for handling inpainting models that never resolves to True. The same logic is already applied earlier by AddsMaskLatents. 2024-06-25 11:31:52 -07:00
9dd779b414 Add clarifying comment to explain why noise might be None in latents_from_embedding(). 2024-06-25 11:31:52 -07:00
fa183025ac Remove unused are_like_tensors() function. 2024-06-25 11:31:52 -07:00
d3c85aa91a Remove unused StableDiffusionGeneratorPipeline.use_ip_adapter member. 2024-06-25 11:31:52 -07:00
82619602a5 Remove unused StableDiffusionGeneratorPipeline.control_model. 2024-06-25 11:31:52 -07:00
196f3b721d Stricter typing for the is_gradient_mask: bool. 2024-06-25 11:31:52 -07:00
244c28859d Fix typing of control_data to reflect that it can be None. 2024-06-25 11:31:52 -07:00
40ae174c41 Fix typing of timesteps and init_timestep. 2024-06-25 11:31:52 -07:00
afaebdf151 Fix typing to reflect that the callback arg to latents_from_embeddings is never None. 2024-06-25 11:31:52 -07:00
d661517d94 Move seed above optional params. 2024-06-25 11:31:52 -07:00
82a69a54ac Simplify handling of AddsMaskGuidance, and fix some related type errors. 2024-06-25 11:31:52 -07:00
ffc28176fe Remove unused num_inference_steps. 2024-06-25 11:31:52 -07:00
230e205541 WIP TiledMultiDiffusionDenoiseLatents. Updated parameter list and first half of the logic. 2024-06-25 11:31:52 -07:00
7e94350351 Tidy DenoiseLatentsInvocation.prep_control_data(...) and fix some type errors. 2024-06-25 11:31:52 -07:00
c4e8549c73 Make DenoiseLatentsInvocation.prep_control_data(...) a staticmethod so that it can be called externally. 2024-06-25 11:31:52 -07:00
350a210835 Copy TiledStableDiffusionRefineInvocation as a starting point for TiledMultiDiffusionDenoiseLatents.py 2024-06-25 11:31:52 -07:00
ed781dbb0c Change tiling strategy to make TiledStableDiffusionRefineInvocation work with more tile shapes and overlaps. 2024-06-25 11:31:52 -07:00
b41ea963e7 Expose a few more params from TiledStableDiffusionRefineInvocation. 2024-06-25 11:31:52 -07:00
da5d105049 Add support for LoRA models in TiledStableDiffusionRefineInvocation. 2024-06-25 11:31:52 -07:00
5301770525 Add naive ControlNet support to TiledStableDiffusionRefineInvocation 2024-06-25 11:31:52 -07:00
d08e405017 Fix ControlNetModel type hint import source. 2024-06-25 11:31:52 -07:00
534640ccde Rough prototype of TiledStableDiffusionRefineInvocation is working. 2024-06-25 11:31:52 -07:00
d5ab8cab5c WIP - TiledStableDiffusionRefine 2024-06-25 11:31:52 -07:00
4767301ad3 Minor improvements to LatentsToImageInvocation type hints. 2024-06-25 11:31:52 -07:00
21d7ca45e6 Expose vae_decode(...) as a staticmethod on LatentsToImageInvocation. 2024-06-25 11:31:52 -07:00
020e8eb413 Fix return type of prepare_noise_and_latents(...). 2024-06-25 11:31:52 -07:00
3d49541c09 Make init_scheduler() a staticmethod on DenoiseLatentsInvocation so that it can be called externally. 2024-06-25 11:31:52 -07:00
1ef266845a Only allow a single positive/negative prompt conditioning input for tiled refine. 2024-06-25 11:31:52 -07:00
a37589ca5f WIP on TiledStableDiffusionRefine 2024-06-25 11:31:52 -07:00
171a505f5e Convert several methods in DenoiseLatentsInvocation to staticmethods so that they can be called externally. 2024-06-25 11:31:52 -07:00
8004a0d5f5 Simplify the logic in prepare_noise_and_latents(...). 2024-06-25 11:31:52 -07:00
610a1fd611 Split out the prepare_noise_and_latents(...) logic in DenoiseLatentsInvocation so that it can be called from other invocations. 2024-06-25 11:31:52 -07:00
43108eec13 (minor) Add a TODO note to get_scheduler(...). 2024-06-25 11:31:52 -07:00
b03073d888 [MM] Add support for probing and loading SDXL VAE checkpoint files (#6524)
* add support for probing and loading SDXL VAE checkpoint files

* broaden regexp probe for SDXL VAEs

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-06-20 02:57:27 +00:00
a43d602f16 fix(queue): add clear_queue_on_startup config to clear problematic queues 2024-06-19 11:39:25 +10:00
7e9a89f8c6 Tidy SilenceWarnings context manager (#6493)
## Summary

No functional changes, just cleaning some things up as I touch the code.
This PR cleans up the `SilenceWarnings` context manager:
- Fix type errors
- Enable SilenceWarnings to be used as both a context manager and a
decorator
- Remove duplicate implementation
- Check the initial verbosity on `__enter__()` rather than `__init__()`
- Save an indentation level in DenoiseLatents

## QA Instructions

I generated an image to confirm that warnings are still muted.

## Merge Plan

- [x] ⚠️ Merge https://github.com/invoke-ai/InvokeAI/pull/6492 first,
then change the target branch to `main`.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
2024-06-18 15:23:32 -04:00
79ceac2f82 (minor) Use SilenceWarnings as a decorator rather than a context manager to save an indentation level. 2024-06-18 15:06:22 -04:00
8e47e005a7 Tidy SilenceWarnings context manager:
- Fix type errors
- Enable SilenceWarnings to be used as both a context manager and a decorator
- Remove duplicate implementation
- Check the initial verbosity on __enter__() rather than __init__()
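
A minimal sketch of the dual-use pattern via `contextlib.ContextDecorator` (the real class may differ in detail):

```
import warnings
from contextlib import ContextDecorator

class SilenceWarnings(ContextDecorator):
    """Usable as `with SilenceWarnings(): ...` or `@SilenceWarnings()`."""

    def __enter__(self):
        # Capture the current filter state on entry (not in __init__), so
        # the state is correct even if the instance is created early.
        self._catcher = warnings.catch_warnings()
        self._catcher.__enter__()
        warnings.simplefilter("ignore")
        return self

    def __exit__(self, *exc):
        self._catcher.__exit__(*exc)
        return False
```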
2024-06-18 15:06:22 -04:00
d13aafb514 Tidy denoise_latents.py imports to all use absolute import paths. 2024-06-18 15:06:22 -04:00
63a7e19dbf Run ruff 2024-06-18 10:38:29 -04:00
fbc5a8ec65 Ignore validation on improperly formatted hashes (pytest) 2024-06-18 10:38:29 -04:00
8ce6e4540e Run ruff 2024-06-18 10:38:29 -04:00
f14f377ede Update validator list 2024-06-18 10:38:29 -04:00
1925f83f5e Update validator list 2024-06-18 10:38:29 -04:00
3a5ad6d112 Update validator list 2024-06-18 10:38:29 -04:00
41a6bb45f3 Initial functionality 2024-06-18 10:38:29 -04:00
70e40fa6c1 added route to install huggingface models from model marketplace (#6515)
## Summary
added route to install huggingface models from model marketplace

## QA Instructions
Test by going to
http://localhost:5173/api/v2/models/install/huggingface?source=${hfRepo}

## Checklist

- [ ] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-06-16 21:13:58 -04:00
e26125b734 tests: fix test_model_install.py 2024-06-17 10:57:11 +10:00
cd70937b7f feat(api): improved model install confirmation page styling & messaging 2024-06-17 10:51:08 +10:00
f002bca2fa feat(ui): handle new model_install_download_started event
When a model install is initiated from outside the client, we now trigger the model manager tab's model install list to update.

- Handle new `model_install_download_started` event
- Handle `model_install_download_complete` event (this event is not new but was never handled)
- Update optimistic updates/cache invalidation logic to efficiently update the model install list
2024-06-17 10:07:10 +10:00
56771de856 feat(ui): add redux actions for model_install_download_started event 2024-06-17 09:52:46 +10:00
c11478a94a chore(ui): typegen 2024-06-17 09:51:18 +10:00
fb694b3e17 feat(app): add model_install_download_started event
Previously, we used `model_install_download_progress` for both download starting and progressing. When handling this event, we couldn't tell which of the two it represented.

Add `model_install_download_started` event to explicitly represent a model download started event.
2024-06-17 09:50:25 +10:00
1bc98abc76 docs(ui): explain model install events 2024-06-17 09:33:46 +10:00
7f03b04b2f Merge branch 'main' into chainchompa/model-install-deeplink 2024-06-14 17:16:25 -04:00
4029972530 formatting 2024-06-14 17:15:55 -04:00
328f160e88 refetch model installs when a new model install starts 2024-06-14 17:09:07 -04:00
aae318425d added route for installing huggingface model from model marketplace 2024-06-14 17:08:39 -04:00
785bb1d9e4 Fix all comparisons against the DEFAULT_PRECISION constant. DEFAULT_PRECISION is a torch.dtype. Previously, it was compared to a str in a number of places where it would always resolve to False. This is a bugfix that results in a change to the default behavior. In practice, this will not change the behavior for many users, because it only causes a change in behavior if a user has configured float32 as their default precision. 2024-06-14 11:26:10 -07:00
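
An illustration of the comparison bug described in the commit above:

```
import torch

DEFAULT_PRECISION = torch.float16  # a torch.dtype, not a str

# Buggy: a dtype never equals a str, so this condition was always False.
if DEFAULT_PRECISION == "float32":
    print("never reached")

# Fixed: compare dtypes to dtypes.
if DEFAULT_PRECISION == torch.float32:
    print("float32 path")
```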
a3cb5da130 Improve RAM<->VRAM memory copy performance in LoRA patching and elsewhere (#6490)
* allow model patcher to optimize away the unpatching step when feasible

* remove lazy_offloading functionality

* allow model patcher to optimize away the unpatching step when feasible

* remove lazy_offloading functionality

* do not save original weights if there is a CPU copy of state dict

* Update invokeai/backend/model_manager/load/load_base.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* documentation fixes requested during penultimate review

* add non-blocking=True parameters to several torch.nn.Module.to() calls, for slight performance increases

* fix ruff errors

* prevent crash on non-cuda-enabled systems

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-06-13 17:10:03 +00:00
568a4844f7 fix: other recursive imports 2024-06-10 04:12:20 -07:00
b1e56e2485 fix: SchedulerOutput not being imported correctly 2024-06-10 04:12:20 -07:00
9432336e2b Add simplified model manager install API to InvocationContext (#6132)
## Summary

This PR adds three model manager-related methods to the InvocationContext
uniform API. They are accessible via `context.models.*`:

1. **`load_local_model(model_path: Path, loader:
Optional[Callable[[Path], AnyModel]] = None) ->
LoadedModelWithoutConfig`**

*Load the model located at the indicated path.*

This will load a local model (.safetensors, .ckpt or diffusers
directory) into the model manager RAM cache and return its
`LoadedModelWithoutConfig`. If the optional loader argument is provided,
the loader will be invoked to load the model into memory. Otherwise the
method will call `safetensors.torch.load_file()`, `torch.load()` (with a
pickle scan), or `from_pretrained()` as appropriate to the path type.

Be aware that the `LoadedModelWithoutConfig` object differs from
`LoadedModel` by having no `config` attribute.

Here is an example of usage:

```
def invoke(self, context: InvocationContext) -> ImageOutput:
    model_path = Path('/opt/models/RealESRGAN_x4plus.pth')
    loadnet = context.models.load_local_model(model_path)
    with loadnet as loadnet_model:
        upscaler = RealESRGAN(loadnet=loadnet_model, ...)
```

---

2. **`load_remote_model(source: str | AnyHttpUrl, loader:
Optional[Callable[[Path], AnyModel]] = None) ->
LoadedModelWithoutConfig`**

*Load the model located at the indicated URL or repo_id.*

This is similar to `load_local_model()` but it accepts either a
HuggingFace repo_id (as a string) or a URL. The model's file(s) will be
downloaded to `models/.download_cache` and then loaded, returning a
`LoadedModelWithoutConfig`. Here is an example of usage:

```
def invoke(self, context: InvocationContext) -> ImageOutput:
    model_url = 'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth'
    loadnet = context.models.load_remote_model(model_url)
    with loadnet as loadnet_model:
        upscaler = RealESRGAN(loadnet=loadnet_model, ...)
```
---

3. **`download_and_cache_model( source: str | AnyHttpUrl, access_token:
Optional[str] = None, timeout: Optional[int] = 0) -> Path`**

Download the model file located at source to the models cache and return
its Path. This will check `models/.download_cache` for the desired model
file and download it from the indicated source if not already present.
The local Path to the downloaded file is then returned.
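
For symmetry with the examples above, a hypothetical usage sketch:

```
def invoke(self, context: InvocationContext) -> ImageOutput:
    model_path = context.models.download_and_cache_model(
        'https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth'
    )
    # model_path now points at the cached file under models/.download_cache
    ...
```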

---

## Other Changes

This PR performs a migration, in which it renames `models/.cache` to
`models/.convert_cache`, and migrates previously-downloaded ESRGAN,
openpose, DepthAnything and Lama inpaint models from the `models/core`
directory into `models/.download_cache`.

There are a number of legacy model files in `models/core`, such as
GFPGAN, which are no longer used. This PR deletes them and tidies up the
`models/core` directory.

## Related Issues / Discussions

I have systematically replaced all the calls to
`download_with_progress_bar()`. This function is no longer used
elsewhere and has been removed.

## QA Instructions

I have added unit tests for the three new calls. You may test that the
`download_and_cache_model()` call is working by running the upscaler
within the web app. On first try, you will see the model file being
downloaded into the `models/.download_cache` directory. On subsequent
tries, the model will either load from RAM (if it hasn't been displaced)
or be loaded from the filesystem.

## Merge Plan

Squash merge when approved.

## Checklist

- [X] _The PR has a short but descriptive title, suitable for a
changelog_
- [X] _Tests added / updated (if applicable)_
- [X] _Documentation added / updated (if applicable)_
2024-06-08 16:24:31 -07:00
7d19af2caa Merge branch 'main' into lstein/feat/simple-mm2-api 2024-06-08 18:55:06 -04:00
0dbec3ad8b Split up latent.py (code reorganization, no functional changes) (#6491)
## Summary

I've started working towards a better tiled upscaling implementation. It
is going to require some refactoring of `DenoiseLatentsInvocation`. As a
first step, this PR splits up all of the invocations in latent.py into
their own files. That file had become a bit of a dumping ground - it
should be a bit more manageable to work with now.

This PR just re-organizes the code. There should be no functional
changes.

## QA Instructions

I've done some light smoke testing. I'll do some more before merging.
The main risk is that I missed a broken import, or some other copy-paste
error.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [x] _Tests added / updated (if applicable)_: N/A
- [x] _Documentation added / updated (if applicable)_: N/A
2024-06-07 12:01:56 -04:00
52c0c4a32f Rename latent.py -> denoise_latents.py. 2024-06-07 09:28:42 -04:00
8f1afc032a Move SchedulerInvocation to a new file. No functional changes. 2024-06-07 09:28:42 -04:00
854bca668a Move CreateDenoiseMaskInvocation to its own file. No functional changes. 2024-06-07 09:28:42 -04:00
fea9013cad Move CreateGradientMaskInvocation to its own file. No functional changes. 2024-06-07 09:28:42 -04:00
045caddee1 Move LatentsToImageInvocation to its own file. No functional changes. 2024-06-07 09:28:42 -04:00
58697141bf Move ImageToLatentsInvocation to its own file. No functional changes. 2024-06-07 09:28:42 -04:00
5e419dbb56 Move ScaleLatentsInvocation and ResizeLatentsInvocation to their own file. No functional changes. 2024-06-07 09:28:42 -04:00
595096bdcf Move BlendLatentsInvocation to its own file. No functional changes. 2024-06-07 09:28:42 -04:00
ed03d281e6 Move CropLatentsCoreInvocation to its own file. No functional changes. 2024-06-07 09:28:42 -04:00
0b37496c57 Move IdealSizeInvocation to its own file. No functional changes. 2024-06-07 09:28:42 -04:00
fde58ce0a3 Merge remote-tracking branch 'origin/main' into lstein/feat/simple-mm2-api 2024-06-07 14:23:41 +10:00
dc134935c8 replace load_and_cache_model() with load_remote_model() and load_local_model() 2024-06-07 14:12:16 +10:00
9f9379682e ruff fixes 2024-06-07 13:54:41 +10:00
f81b8bc9f6 add support for generic loading of diffusers directories 2024-06-07 13:54:30 +10:00
6d067e56f2 fix(ui): on page load, if CA processed image no longer exists, re-process it 2024-06-07 10:32:28 +10:00
2871676f79 LoRA patching optimization (#6439)
* allow model patcher to optimize away the unpatching step when feasible

* remove lazy_offloading functionality

* do not save original weights if there is a CPU copy of state dict

* Update invokeai/backend/model_manager/load/load_base.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* documentation fixes added during penultimate review

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-06-06 13:53:35 +00:00
1c5c3cdbd6 tidy(ui): organize control layers konva logic
- More comments, docstrings
- Move things into saner, less-coupled locations
2024-06-06 07:45:13 +10:00
3db69af220 refactor(ui): generalize stage event handlers
Create intermediary nanostores for values required by the event handlers. This allows the event handlers to be purely imperative, with no reactivity: instead of recreating/setting the handlers when a dependent piece of state changes, we use nanostores' imperative API to access dependent state.

For example, some handlers depend on brush size. If we used the standard declarative `useSelector` API, we'd need to recreate the event handler callback each time the brush size changed. This can be costly.

An intermediate `$brushSize` nanostore is set in a `useLayoutEffect()`, which responds to changes to the redux store. Then, in the event handler, we use the imperative API to access the brush size: `$brushSize.get()`.

This change allows the event handler logic to be shared with the pending canvas v2, and also more easily tested. It's a noticeable perf improvement, too, especially when changing brush size.
2024-06-06 07:45:13 +10:00
1823e446ac fix(ui): conditionally render CL preview
This fixes an issue where it sometimes gets out of sync, and fixes some konva errors.
2024-06-06 07:45:13 +10:00
311e44ad19 tidy(ui): clean up control layers renderers, docstrings 2024-06-06 07:45:13 +10:00
848ca79da8 Changed translated labels to static suffixes, cleanup. 2024-06-05 14:45:43 +10:00
9cba0dfac9 Providing fileName string directly to DataViewer as suggested 2024-06-05 14:45:43 +10:00
37b1f21bcf ... and the workflow 2024-06-05 14:45:43 +10:00
b2e005f6b5 Just realized we might want the same change made for the Graph JSON 2024-06-05 14:45:43 +10:00
52aac954c0 Prefixed JSON filenames with the image UUID #6469 2024-06-05 14:45:43 +10:00
ff01ceae99 Update invokeai_version.py 2024-06-05 05:53:19 +10:00
669d92d8db translationBot(ui): update translation (Chinese (Traditional))
Currently translated at 14.1% (179 of 1261 strings)

Co-authored-by: hugoalh <hugoalh@users.noreply.hosted.weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hant/
Translation: InvokeAI/Web UI
2024-06-05 00:08:03 +10:00
2903060154 translationBot(ui): update translation (German)
Currently translated at 67.0% (834 of 1243 strings)

Co-authored-by: Ettore Atalan <atalanttore@googlemail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-06-05 00:08:03 +10:00
4af8699a00 translationBot(ui): update translation (Spanish)
Currently translated at 34.3% (427 of 1243 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2024-06-05 00:08:03 +10:00
71fedd1a07 translationBot(ui): update translation (Spanish)
Currently translated at 34.3% (427 of 1243 strings)

Co-authored-by: Bruno Castillejo <soybrunocastillejo@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2024-06-05 00:08:03 +10:00
6bb1189c88 translationBot(ui): update translation (Italian)
Currently translated at 98.5% (1243 of 1261 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.5% (1243 of 1261 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.5% (1225 of 1243 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.5% (1225 of 1243 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-06-05 00:08:03 +10:00
c7546bc82e translationBot(ui): update translation (Russian)
Currently translated at 100.0% (1261 of 1261 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (1243 of 1243 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-06-05 00:08:03 +10:00
14372e3818 fix(nodes): blend latents with weight=0 with DPMSolverSDEScheduler
- Pass the seed from `latents_a` to the output latents. Fixed an issue where using `BlendLatentsInvocation` could result in different outputs during denoising even when the alpha or slerp weight was 0.

## Explanation

`LatentsField` has an optional `seed` field. During denoising, if this `seed` field is not present, we **fall back to 0 for the seed**. The seed is used during denoising in a few ways:

1. Initializing the scheduler.

The seed is used in two places in `invokeai/app/invocations/latent.py`.

The `get_scheduler()` utility function has special handling for `DPMSolverSDEScheduler`, which appears to need a seed for deterministic outputs.

`DenoiseLatentsInvocation.init_scheduler()` has special handling for schedulers that accept a generator - the generator needs to be seeded in a particular way. At the time of this commit, these are the Invoke-supported schedulers that need this seed:
  - DDIMScheduler
  - DDPMScheduler
  - DPMSolverMultistepScheduler
  - EulerAncestralDiscreteScheduler
  - EulerDiscreteScheduler
  - KDPM2AncestralDiscreteScheduler
  - LCMScheduler
  - TCDScheduler

2. Adding noise during inpainting.

If a mask is used for denoising, and we are not using an inpainting model, we add noise to the unmasked area. If, for some reason, we have a mask but no noise, the seed is used to add noise.

I wonder if we should instead assert that if a mask is provided, we also have noise.

This is done in `invokeai/backend/stable_diffusion/diffusers_pipeline.py` in `StableDiffusionGeneratorPipeline.latents_from_embeddings()`.

When we create noise to be used in denoising, we are expected to set `LatentsField.seed` to the seed used to create the noise. This introduces some awkwardness when we manipulate any "latents" that will be used for denoising. We have to pass the seed along for every operation.

If the wrong seed or no seed is passed along, we can get unexpected outputs during denoising. One notable case relates to blending latents (slerping tensors).

If we slerp two noise tensors (`LatentsField`s) _without_ passing along the seed from the source latents, when we denoise with a seed-dependent scheduler*, the schedulers use the fallback seed of 0 and we get the wrong output. This is most obvious when slerping with a weight of 0, in which case we expect the exact same output after denoising.

*It looks like only the DPMSolver* schedulers are affected, but I haven't tested all of them.

Passing the seed along in the output fixes this issue.
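
A minimal sketch of the fix (`slerp`, `load`, and `save` are assumed helpers, not the exact invocation code):

```
def blend(latents_a: LatentsField, latents_b: LatentsField, alpha: float) -> LatentsField:
    blended_tensor = slerp(alpha, load(latents_a), load(latents_b))
    name = save(blended_tensor)
    # The crucial part: carry the source seed into the output instead of
    # dropping it, so seed-dependent schedulers stay deterministic downstream.
    return LatentsField(latents_name=name, seed=latents_a.seed)
```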
2024-06-05 00:02:52 +10:00
64523c4b1b fix(ui): handle concat when recalling prompts
This required some minor reworking of the logic to recall multiple items. I split this into a utility function that includes some special handling for concat.

Closes #6478
2024-06-04 06:01:01 +10:00
89a764a359 fix(ui): improve model metadata parsing fallback
When the model in metadata's key no longer exists, fall back to fetching by name, base and type. This was the intention all along but the logic was never put in place.
2024-06-04 06:01:01 +10:00
756108f6bd Update invokeai/app/invocations/latent.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2024-06-03 11:41:47 -07:00
68d628dc14 use zip to iterate over image prompts and adapters 2024-06-03 11:41:47 -07:00
93c9852142 fix ruff 2024-06-03 11:41:47 -07:00
493f81788c added a few comments to document design choices 2024-06-03 11:41:47 -07:00
f13427e3f4 refactor redundant code and fix typechecking errors 2024-06-03 11:41:47 -07:00
e28737fc8b add check for congruence between # of ip_adapters and image_prompts 2024-06-03 11:41:47 -07:00
7391c126d3 handle case of no IP adapters requested 2024-06-03 11:41:47 -07:00
1c59fce6ad reduce peak VRAM memory usage of IP adapter 2024-06-03 11:41:47 -07:00
a9962fd104 chore: ruff 2024-06-03 11:53:20 +10:00
e7513f6088 docs(mm): add comment in move_model_to_device 2024-06-03 10:56:04 +10:00
c7f22b6a3b tidy(mm): remove extraneous docstring
It's inherited from the ABC.
2024-06-03 10:46:28 +10:00
99413256ce tidy(mm): pass enum member instead of string 2024-06-03 10:43:09 +10:00
aa9695e377 tidy(download): _download_job -> _multifile_job 2024-06-03 10:15:53 +10:00
c58ac1e80d tidy(mm): minor formatting 2024-06-03 10:11:08 +10:00
6cc6a45274 feat(download): add type for callback_name
Just a bit of typo protection in lieu of full type safety for these methods, which is difficult due to the typing of `DownloadEventHandler`.
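
A sketch of the idea (the exact callback names are assumptions):

```
from typing import Literal

# Misspelled callback names now fail type-checking instead of silently no-op'ing.
DownloadCallbackName = Literal["on_start", "on_progress", "on_complete", "on_cancelled", "on_error"]
```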
2024-06-03 10:05:52 +10:00
521f907f58 tidy(nodes): infill
- Set `self._context=context` instead of passing it as an arg
2024-06-03 09:43:25 +10:00
ccdecf21a3 tidy(nodes): cnet processors
- Set `self._context=context` instead of changing the type signature of `run_processor`
- Tidy a few typing things
2024-06-03 09:41:17 +10:00
b124440023 tidy(mm): move load_model_from_url from mm to invocation context 2024-06-03 08:51:21 +10:00
e3a70e598e docs(app): simplify docstring in invocation_context 2024-06-03 08:40:29 +10:00
132bbf330a tidy(app): remove unnecessary changes in invocation_context
- Any mypy issues are a misconfiguration of mypy
- Use simple conditionals instead of ternaries
- Consistent & standards-compliant docstring formatting
- Use `dict` instead of `typing.Dict`
2024-06-03 08:35:23 +10:00
2276f327e5 Merge branch 'main' into lstein/feat/simple-mm2-api 2024-06-02 09:45:31 -04:00
6b24424727 feat(ui): add help icon to compare toolbar 2024-06-02 15:30:00 +10:00
7153d846a9 feat(ui): add hotkey to cycle compare modes 2024-06-02 15:30:00 +10:00
9a0b77ad38 feat(ui): add hotkey to swap comparison images 2024-06-02 15:30:00 +10:00
220d45967e fix(ui): typo 2024-06-02 15:30:00 +10:00
038a482ef0 feat(ui): rework visibility conditions for image viewer 2024-06-02 15:30:00 +10:00
c325ad3432 feat(ui): add hotkey hint to exit compare button 2024-06-02 15:30:00 +10:00
449bc4dbe5 feat(ui): abstract out and share logic between comparisons 2024-06-02 15:30:00 +10:00
34d68a3663 feat(ui): hover comparison mode 2024-06-02 15:30:00 +10:00
8bb9571485 feat(ui): tweak slider divider styling 2024-06-02 15:30:00 +10:00
08bcc71e99 fix(ui): workflows fit on load 2024-06-02 15:30:00 +10:00
ff2b2fad83 feat(ui): revise drop zones
The main viewer area has two drop zones:
- Select for Viewer
- Select for Compare

These do what you'd imagine they would do.
2024-06-02 15:30:00 +10:00
0f0a6852f1 fix(ui): make compare image scale with first image when using contain fit 2024-06-02 15:30:00 +10:00
745140fa6b feat(ui): "first image"/"second image" -> "viewer image"/"compare image" 2024-06-02 15:30:00 +10:00
405fc46888 feat(ui): z/esc first exit compare before closing viewer 2024-06-02 15:30:00 +10:00
ca728ca29f fix(ui): ignore context menu in slider view
It doesn't make sense to allow context menu here, because the context menu will technically be on a div and not an image - there won't be any image options there.
2024-06-02 15:30:00 +10:00
d0fca53e67 fix(ui): only clear comparison image on alt click of gallery image
This logic can't be in the reducer, else it applies to dnd events, which isn't right
2024-06-02 15:30:00 +10:00
ad9740d72d feat(ui): alt-click comparison image exits compare 2024-06-02 15:30:00 +10:00
1c9c982b63 feat(ui): use appropriate cursor on slider 2024-06-02 15:30:00 +10:00
3cfd2755c2 fix(ui): when changing viewer state, always clear compare image 2024-06-02 15:30:00 +10:00
8ea4067f83 feat(ui): rework compare toolbar 2024-06-02 15:30:00 +10:00
940de6a5c5 fix(ui): allow drop of currently-selected image for compare 2024-06-02 15:30:00 +10:00
dd74e89127 fix(ui): close context menu on click select for compare 2024-06-02 15:30:00 +10:00
69da67e920 fix(ui): dnd on board
Copy-paste error broke this
2024-06-02 15:30:00 +10:00
76b1f241d7 fix(ui): useGalleryNavigation callback typing issue 2024-06-02 15:30:00 +10:00
0e5336d8fa feat(ui): rework comparison activation, add hotkeys 2024-06-02 15:30:00 +10:00
3501636018 feat(ui): add fill mode for slider comparison 2024-06-02 15:30:00 +10:00
e4ce188500 feat(ui): image selection gallery state & tweaks 2024-06-02 15:30:00 +10:00
e976571fba build(ui): remove unused dep 2024-06-02 15:30:00 +10:00
0da36c1238 feat(ui): use IAIDndImage for compare mode 2024-06-02 15:30:00 +10:00
4ef8cbd9d0 fix(ui): use isValidDrop in imageDropped listener
It was possible for a drop event to be invalid but still processed. Fixed by slightly changing the signature of isValidDrop.
2024-06-02 15:30:00 +10:00
8f8ddd620b feat(ui): add comparison modes, side-by-side view 2024-06-02 15:30:00 +10:00
1af53aed60 feat(ui): fix image comparison slider resizing/aspect ratio jank 2024-06-02 15:30:00 +10:00
7a4bbd092e feat(ui): revised image comparison slider
Should work for any components and image now.
2024-06-02 15:30:00 +10:00
72bbcb2d94 feat(ui): slider working for all aspect ratios 2024-06-02 15:30:00 +10:00
c2eef93476 feat(ui): wip slider implementations 2024-06-02 15:30:00 +10:00
cfb12615e1 fix: openapi stuff (#6454)
## Summary

Fix some issues with openapi schema generation. See commits for details.

## Related Issues / Discussions


https://discord.com/channels/1020123559063990373/1049495067846524939/1245141831394529352

## QA Instructions

App should work, workflows should work.

## Merge Plan

n/a

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
2024-05-30 08:22:34 +05:30
a983f27aad fix(ui): update types 2024-05-30 12:03:38 +10:00
7cb32d3d83 chore(ui): typegen 2024-05-30 12:03:38 +10:00
ac56ab79a7 fix(app): add dynamic validator to AnyInvocation & AnyInvocationOutput
This fixes the tests and slightly changes output types.
2024-05-30 12:03:38 +10:00
50d3030471 feat(app): dynamic type adapters for invocations & outputs
Keep track of whether or not the typeadapter needs to be updated. Allows for dynamic invocation and output unions.
2024-05-30 12:03:38 +10:00
5beec8211a feat(api): sort openapi schemas
Reduces the constant changes to the frontend client types due to inconsistent ordering of pydantic models.
2024-05-30 12:03:38 +10:00
5a4d10467b feat(ui): use updated types 2024-05-30 12:03:38 +10:00
7590f3005e chore(ui): typegen 2024-05-30 12:03:03 +10:00
2f9ebdec69 fix(app): openapi schema generation
Some tech debt related to dynamic pydantic schemas for invocations became problematic. Including the invocations and results in the event schemas was breaking pydantic's handling of ref schemas. I don't really understand why - I think it's a pydantic bug in a remote edge case that we are hitting.

After many failed attempts I landed on this implementation, which is actually much tidier than what was in there before.

- Create pydantic-enabled types for `AnyInvocation` and `AnyInvocationOutput` and use these in place of the janky dynamic unions. Actually, they are kinda the same, but better encapsulated. Use these in `Graph`, `GraphExecutionState`, `InvocationEventBase` and `InvocationCompleteEvent`.
- Revise the custom openapi function to work with the new models.
- Split out the custom openapi function to a separate file. Add a `post_transform` callback so consumers can customize the output schema.
- Update makefile scripts.
2024-05-30 12:03:03 +10:00
e257a72f94 chore: bump pydantic, fastapi to latest 2024-05-30 12:03:03 +10:00
843f82c837 fix(ui): remove overly strict constraints on control adapter weight 2024-05-29 19:01:28 -07:00
66858effa2 docs: add FAQ for fixing controlnet_aux 2024-05-29 18:19:06 -07:00
21a60af881 when unlocking models, offload_unlocked_models should prune to vram limit only (#6450)
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-05-29 03:01:21 +00:00
ead1748c54 issue a download progress event when install download starts 2024-05-28 19:30:42 -04:00
df91d1b849 Update TI handling for compatibility with transformers 4.40.0 (#6449)
## Summary

- Updated the documentation for `TextualInversionManager`
- Updated the `self.tokenizer.model_max_length` access to work with the
latest transformers version. Thanks to @skunkworxdark for looking into
this here:
https://github.com/invoke-ai/InvokeAI/issues/6445#issuecomment-2133098342

## Related Issues / Discussions

Closes #6445 

## QA Instructions

I tested with `transformers==4.41.1`, and compared the results against a
recent InvokeAI version before updating transformers - no change, as
expected.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_
2024-05-28 08:32:02 -04:00
829b9ad66b Add a callout about the hackiness of dropping tokens in the TextualInversionManager. 2024-05-28 05:11:54 -07:00
3aa1c8d3a8 Update TextualInversionManager for compatibility with the latest transformers release. See https://github.com/invoke-ai/InvokeAI/issues/6445. 2024-05-28 05:11:54 -07:00
994c61b67a Add docs to TextualInversionManager and improve types. No changes to functionality. 2024-05-28 05:11:54 -07:00
21aa42627b feat(events): add dynamic invocation & result validators
This is required to get these event fields to deserialize correctly. If omitted, pydantic uses `BaseInvocation`/`BaseInvocationOutput`, which is not correct.

This is similar to the workaround in the `Graph` and `GraphExecutionState` classes where we need to finagle pydantic with manual validation handling.
2024-05-28 05:11:37 -07:00
a4f88ff834 feat(events): add __event_name__ as ClassVar to EventBase
This improves types for event consumers that need to access the event name.
2024-05-28 05:11:37 -07:00
cd12ca6e85 add migration_11; fix typo 2024-05-27 22:40:01 -04:00
34e1eb19f9 merge with main and resolve conflicts 2024-05-27 22:20:34 -04:00
ddff9b4584 fix(events): typing for download event handler 2024-05-27 11:13:47 +10:00
b50133d5e1 feat(events): register event schemas
This allows for events to be dispatched using dicts as payloads, and have the dicts validated as pydantic schemas.
2024-05-27 11:13:47 +10:00
5388f5a817 fix(ui): edit variant for main models only
Closes #6444
2024-05-27 11:02:00 +10:00
27a3eb15f8 feat(ui): update event types 2024-05-27 10:17:02 +10:00
4b2d57a5e0 chore(ui): typegen
Note about the huge diff: I had a different version of pydantic installed at some point, which slightly altered a _ton_ of schema components. This typegen was done on the correct version of pydantic and un-does those alterations, in addition to the intentional changes to event models.
2024-05-27 10:17:02 +10:00
bbb90ff949 feat(events): restore whole invocation to event payloads
Removing this is a breaking API change - some consumers of the events need the whole invocation. Didn't realize that until now.
2024-05-27 10:17:02 +10:00
9d9801b2c2 feat(events): stronger generic typing for event registration 2024-05-27 10:17:02 +10:00
8498d4344b docs: update docstrings in sockets.py 2024-05-27 09:06:02 +10:00
dfad37a262 docs: update comments & docstrings 2024-05-27 09:06:02 +10:00
89dede7bad feat(ui): simplify client sio redux actions
- Add a simple helper to create socket actions in a less error-prone way
- Organize and tidy sio files
2024-05-27 09:06:02 +10:00
60784a4361 feat(ui): update client for removal of session events 2024-05-27 09:06:02 +10:00
3d8774d295 chore(ui): typegen 2024-05-27 09:06:02 +10:00
084cf26ed6 refactor: remove all session events
There's no longer any need for session-scoped events now that we have the session queue. Session started/completed/canceled map 1-to-1 to queue item status events, but queue item status events also have an event for failed state.

We can simplify queue and processor handling substantially by removing session events and instead using queue item events.

- Remove the session-scoped events entirely.
- Remove all event handling from session queue. The processor still needs to respond to some events from the queue: `QueueClearedEvent`, `BatchEnqueuedEvent` and `QueueItemStatusChangedEvent`.
- Pass an `is_canceled` callback to the invocation context instead of the cancel event
- Update processor logic to ensure the local instance of the current queue item is synced with the instance in the database. This prevents race conditions and ensures lifecycle callbacks do not receive stale queue items.
- Update docstrings and comments
- Add `complete_queue_item` method to session queue service as an explicit way to mark a queue item as successfully completed. Previously, the queue listened for session complete events to do this.

Closes #6442
2024-05-27 09:06:02 +10:00
8592f5c6e1 feat(events): move event sets outside sio class
This lets the event sets be consumed programmatically.
2024-05-27 09:06:02 +10:00
368127bd25 feat(events): register_events supports single event 2024-05-27 09:06:02 +10:00
c0aabcd8ea tidy(events): use tuple index access for event payloads 2024-05-27 09:06:02 +10:00
ed6c716ddc fix(mm): emit correct event when model load complete 2024-05-27 09:06:02 +10:00
eaf67b2150 feat(ui): add logging for session events 2024-05-27 09:06:02 +10:00
575943d0ad fix(processor): move session started event to session runner 2024-05-27 09:06:02 +10:00
25d1d2b591 tidy(processor): use separate handlers for each event type
Just a bit clearer without needing `isinstance` checks.
2024-05-27 09:06:02 +10:00
39415428de chore(ui): typegen 2024-05-27 09:06:02 +10:00
64d553f72c feat(events): restore temp handling of user/project 2024-05-27 09:06:02 +10:00
5b390bb11c tests: clean up tests after events changes 2024-05-27 09:06:02 +10:00
a9f773c03c fix(mm): port changes into new model_install_common file
Some subtle changes happened between this PR's last update and now. Bring them into the file.
2024-05-27 09:06:02 +10:00
585feccf82 fix(ui): update event handling to match new types 2024-05-27 09:06:02 +10:00
cbd3b15cae chore(ui): typegen 2024-05-27 09:06:02 +10:00
cc56918453 tidy(ui): remove old unused session subscribe actions 2024-05-27 09:06:02 +10:00
f82df2661a docs: clarify comment in api_app 2024-05-27 09:06:02 +10:00
a1d68eb319 fix(ui): denoise percentage 2024-05-27 09:06:02 +10:00
8b5caa7e57 chore(ui): typegen 2024-05-27 09:06:02 +10:00
b3a051250f feat(api): sort socket event names for openapi schema
Deterministic ordering prevents extraneous, non-functional changes to the autogenerated types
2024-05-27 09:06:02 +10:00
0f733c42fc fix(events): fix denoise progress percentage
- Restore calculation of step percentage but in the backend instead of client
- Simplify signatures for denoise progress event callbacks
- Clean up `step_callback.py` (types, do not recreate constant matrix on every step, formatting)
2024-05-27 09:06:02 +10:00
ec4f10aed3 chore(ui): typegen 2024-05-27 09:06:02 +10:00
d97186dfc8 feat(events): remove payload registry, add method to get event classes
We don't need to use the payload schema registry. All our events are dispatched as pydantic models, which are already validated on instantiation.

We do want to add all events to the OpenAPI schema, and we referred to the payload schema registry for this. To get all events, add a simple helper to EventBase. This is functionally identical to using the schema registry.
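
Such a helper can simply walk the subclass tree (a sketch, not the exact implementation):

```
from pydantic import BaseModel

class EventBase(BaseModel):
    @classmethod
    def get_events(cls) -> set[type["EventBase"]]:
        # Recursively collect every event subclass - functionally identical
        # to consulting a payload schema registry.
        found: set[type["EventBase"]] = set()
        for sub in cls.__subclasses__():
            found.add(sub)
            found |= sub.get_events()
        return found
```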
2024-05-27 09:06:02 +10:00
18b4f1b72a feat(ui): add missing socket events 2024-05-27 09:06:02 +10:00
5cdf71b72f feat(events): add missing events
These events weren't being emitted via socket.io:
- DownloadCancelledEvent
- DownloadCompleteEvent
- DownloadErrorEvent
- DownloadProgressEvent
- DownloadStartedEvent
- ModelInstallDownloadsCompleteEvent
2024-05-27 09:06:02 +10:00
88a2340b95 feat(events): use builder pattern for download events 2024-05-27 09:06:02 +10:00
1be4cab2d9 fix(events): dump events with mode="json"
Ensures all model events are serializable.
2024-05-27 09:06:02 +10:00
567b87cc50 docs(events): update event docstrings 2024-05-27 09:06:02 +10:00
4756920282 tests: move fixtures import to conftest.py 2024-05-27 09:06:02 +10:00
a876675448 tests: update tests to use new events 2024-05-27 09:06:02 +10:00
655f62008f fix(mm): check for presence of invoker before emitting model load event
The model loader emits events. During testing, it doesn't have access to a fully-mocked events service, so the test fails when attempting to call a nonexistent method. There was a check for this previously, but I accidentally removed it. Restored.
2024-05-27 09:06:02 +10:00
300725d1dd fix(ui): correct model load event format 2024-05-27 09:06:02 +10:00
bf03127c69 fix(events): add missing __event_name__ to EventBase 2024-05-27 09:06:02 +10:00
2dc752ea83 feat(events): simplify event classes
- Remove ABCs, they do not work well with pydantic
- Remove the event type classvar - unused
- Remove clever logic to require an event name - we already get validation for this during schema registration.
- Rename event bases to all end in "Base"
2024-05-27 09:06:02 +10:00
1b9bbaa5a4 fix(events): emit bulk download events in correct room 2024-05-27 09:06:02 +10:00
3abc182b44 chore(ui): tidy after rebase 2024-05-27 09:06:02 +10:00
8d79ce94aa feat(ui): update UI to use new events
- Use OpenAPI schema for event payload types
- Update all event listeners
- Add missing events / remove old nonexistent events
2024-05-27 09:06:02 +10:00
975dc14579 chore(ui): typegen 2024-05-27 09:06:02 +10:00
9bd78823a3 refactor(events): use pydantic schemas for events
Our events handling and implementation has a couple pain points:
- Adding or removing data from event payloads requires changes wherever the events are dispatched from.
- We have no type safety for events and need to rely on string matching and dict access when interacting with events.
- Frontend types for socket events must be manually typed. This has caused several bugs.

`fastapi-events` has a neat feature where you can create a pydantic model as an event payload, give it an `__event_name__` attr, and then dispatch the model directly.

This allows us to eliminate a layer of indirection and some unpleasant complexity:
- Event handler callbacks get type hints for their event payloads, and can use `isinstance` on them if needed.
- Event payload construction is now the responsibility of the event itself (a pydantic model), not the service. Every event model has a `build` class method, encapsulating this logic. The build methods are given as few args as possible. For example, `InvocationStartedEvent.build()` gets the invocation instance and queue item, and can choose the data it wants to include in the event payload.
- Frontend event types may be autogenerated from the OpenAPI schema. We use the payload registry feature of `fastapi-events` to collect all payload models into one place, making it trivial to keep our schema and frontend types in sync.

This commit moves the backend over to this improved event handling setup.
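
A condensed sketch of the pattern (class and field names are illustrative, not the exact Invoke events):

```
from fastapi_events.dispatcher import dispatch
from pydantic import BaseModel

class InvocationStartedEvent(BaseModel):
    """Illustrative event model; the real events carry more data."""

    __event_name__ = "invocation_started"

    queue_id: str
    invocation_id: str

    @classmethod
    def build(cls, queue_item, invocation) -> "InvocationStartedEvent":
        # The event model - not the dispatching service - chooses its payload.
        return cls(queue_id=queue_item.queue_id, invocation_id=invocation.id)

def on_invocation_started(queue_item, invocation) -> None:
    # Dispatch the model directly; fastapi-events reads __event_name__ from it.
    dispatch(InvocationStartedEvent.build(queue_item, invocation))
```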
2024-05-27 09:06:02 +10:00
461e857824 fix(ui): parameter not set translation 2024-05-26 08:21:06 -07:00
48db0b90e8 Bump transformers 2024-05-26 12:51:07 +10:00
c010ce49f7 Bump huggingface-hub 2024-05-26 12:51:07 +10:00
6df8b23c59 Bump transformers 2024-05-26 12:51:07 +10:00
dfe02b26c1 Bump accelerate 2024-05-26 12:51:07 +10:00
4142dc7141 Update deps to their latest version 2024-05-26 12:51:07 +10:00
86bfcc53a3 docs: fix typo (#6395)
may noise steps -> many noise steps
2024-05-24 18:02:17 +00:00
532f82cb97 Optimize RAM to VRAM transfer (#6312)
* avoid copying model back from cuda to cpu

* handle models that don't have state dicts

* add assertions that models need a `device()` method

* do not rely on torch.nn.Module having the device() method

* apply all patches after model is on the execution device

* fix model patching in latents too

* log patched tokenizer

* closes #6375

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-05-24 17:06:09 +00:00
7437085cac fix typo (#6255) 2024-05-24 15:26:05 +00:00
e9b80cf28f fix(ui): isLocal erroneously hardcoded 2024-05-25 00:05:44 +10:00
f5a775ae4e feat(ui): toast on queue item errors, improved error descriptions
Show error toasts on queue item error events instead of invocation error events. This allows errors that occurred outside node execution to be surfaced to the user.

The error description component is updated to show the new error message if available. Commercial handling is retained, but local now uses the same component to display the error message itself.
2024-05-24 20:02:24 +10:00
50dd569411 fix(processor): race condition that could result in node errors not getting reported
I had set the cancel event at some point during troubleshooting an unrelated issue. It seemed logical that it should be set there, and didn't seem to break anything. However, this is not correct.

The cancel event should not be set in response to a queue status change event. Doing so can cause a race condition when nodes are executed very quickly.

It's possible that a previously-executed session's queue item status change event is handled after the next session starts executing. The cancel event is set, and the session runner sees it, aborting the session run early.

In hindsight, it doesn't make sense to set the cancel event here either. It should be set in response to user action, e.g. the user cancelled the session or cleared the queue (which implicitly cancels the current session). These events actually trigger the queue item status changed event, so if we set the cancel event here, we'd be setting it twice per cancellation.
2024-05-24 20:02:24 +10:00
125e1d7eb4 tidy: remove unnecessary whitespace changes 2024-05-24 20:02:24 +10:00
2fbe5ecb00 fix(ui): correctly fallback to error message when traceback is empty string 2024-05-24 20:02:24 +10:00
ba4d27860f tidy(ui): remove extraneous condition in socketInvocationError 2024-05-24 20:02:24 +10:00
6fc7614b4a fix(ui): race condition with progress
There's a race condition where a canceled session may emit a progress event or two after it's been canceled, and the progress image isn't cleared out.

To resolve this, the system slice tracks canceled session ids. When a progress event comes in, we check the cancellations and skip setting the progress if canceled.
2024-05-24 20:02:24 +10:00
9c926f249f feat(processor): add debug log stmts to session running callbacks 2024-05-24 20:02:24 +10:00
80faeac913 fix(processor): fix race condition related to clearing the queue 2024-05-24 20:02:24 +10:00
418c932595 tidy(processor): remove test callbacks 2024-05-24 20:02:24 +10:00
9117db2673 tidy(queue): delete unused delete_queue_item method 2024-05-24 20:02:24 +10:00
4a48aa98a4 chore: ruff 2024-05-24 20:02:24 +10:00
e365d35c93 docs(processor): update docstrings, comments 2024-05-24 20:02:24 +10:00
aa329ea811 feat(ui): handle enriched events 2024-05-24 20:02:24 +10:00
1e622a5706 chore(ui): typegen 2024-05-24 20:02:24 +10:00
ae66d32b28 feat(app): update test event callbacks 2024-05-24 20:02:24 +10:00
2dd3a85ade feat(processor): update enriched errors & fail_queue_item() 2024-05-24 20:02:24 +10:00
a8492bd7e4 feat(events): add enriched errors to events 2024-05-24 20:02:24 +10:00
25954ea750 feat(queue): session queue error handling
- Add handling for new error columns `error_type`, `error_message`, `error_traceback`.
- Update queue item model to include the new data. The `error_traceback` field has an alias of `error` for backwards compatibility.
- Add `fail_queue_item` method. This was previously handled by `cancel_queue_item`. Splitting this functionality makes failing a queue item a bit more explicit. We also don't need to handle multiple optional error args.
2024-05-24 20:02:24 +10:00
887b73aece feat(db): add error_type, error_message, rename error -> error_traceback to session_queue table 2024-05-24 20:02:24 +10:00
3c41c67d13 fix(processor): restore missing update of session 2024-05-24 20:02:24 +10:00
6c79be7dc3 chore: ruff 2024-05-24 20:02:24 +10:00
097619ef51 feat(processor): get user/project from queue item w/ fallback 2024-05-24 20:02:24 +10:00
a1f7a9cd6f fix(app): fix logging of error classes instead of class names 2024-05-24 20:02:24 +10:00
25b9c19eed feat(app): handle preparation errors as node errors
We were not handling node preparation errors as node errors before. Here's the explanation, copied from a comment that is no longer required:

---

TODO(psyche): Sessions only support errors on nodes, not on the session itself. When an error occurs outside
node execution, it bubbles up to the processor where it is treated as a queue item error.

Nodes are pydantic models. When we prepare a node in `session.next()`, we set its inputs. This can cause a
pydantic validation error. For example, consider a resize image node which has a constraint on its `width`
input field - it must be greater than zero. During preparation, if the width is set to zero, pydantic will
raise a validation error.

When this happens, it breaks the flow before `invocation` is set. We can't set an error on the invocation
because we didn't get far enough to get it - we don't know its id. Hence, we just set it as a queue item error.

---

This change wraps the node preparation step with exception handling. A new `NodeInputError` exception is raised when there is a validation error. This error has the node (in the state it was in just prior to the error) and an identifier of the input that failed.

This allows us to mark the node that failed preparation as errored, correctly making such errors _node_ errors and not _processor_ errors. It's much easier to diagnose these situations. The error messages look like this:

> Node b5ac87c6-0678-4b8c-96b9-d215aee12175 has invalid incoming input for height

Some of the exception handling logic is cleaned up.
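
Roughly, the wrapping looks like this (a self-contained sketch with toy stand-ins, not the actual processor code):

```
from pydantic import BaseModel, Field, ValidationError

class NodeInputError(Exception):
    """Illustrative stand-in for the new exception; the real one differs."""

    def __init__(self, node: BaseModel, input_name: str):
        super().__init__(f"Node {getattr(node, 'id', '?')} has invalid incoming input for {input_name}")

class ResizeImageNode(BaseModel):
    id: str
    width: int = Field(gt=0)  # the constraint that can fail during preparation

def prepare_node(**inputs):
    try:
        return ResizeImageNode(**inputs)
    except ValidationError as e:
        # Surface the failing input as a *node* error, not a processor error.
        failed_input = str(e.errors()[0]["loc"][0])
        raise NodeInputError(ResizeImageNode.model_construct(**inputs), failed_input) from e

prepare_node(id="b5ac87c6", width=0)  # NodeInputError: ... invalid incoming input for width
```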
2024-05-24 20:02:24 +10:00
cc2d877699 docs(app): explain why errors are handled poorly 2024-05-24 20:02:24 +10:00
be82404759 tidy(app): "outputs" -> "output" 2024-05-24 20:02:24 +10:00
33f9fe2c86 tidy(app): rearrange proccessor 2024-05-24 20:02:24 +10:00
1d973f92ff feat(app): support multiple processor lifecycle callbacks 2024-05-24 20:02:24 +10:00
7f70cde038 feat(app): make things in session runner private 2024-05-24 20:02:24 +10:00
47722528a3 feat(app): iterate on processor split 2
- Use a protocol to define callbacks; this allows them to have kwargs (see the sketch after this list)
- Shuffle the profiler around a bit
- Move `thread_limit` and `polling_interval` to `__init__`; `start` is called programmatically and will never get these args in practice
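
A sketch of the protocol-typed callback (parameter names are assumptions, borrowed from the error fields described elsewhere in this branch):

```
from typing import Protocol

class OnNodeError(Protocol):
    # A Protocol can type keyword arguments, which Callable[..., None] cannot.
    def __call__(self, *, error_type: str, error_message: str, error_traceback: str) -> None: ...
```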
2024-05-24 20:02:24 +10:00
be41c84305 feat(app): iterate on processor split
- Add `OnNodeError` and `OnNonFatalProcessorError` callbacks
- Move all session/node callbacks to `SessionRunner` - this ensures we dump perf stats before resetting them and generally makes sense to me
- Remove `complete` event from `SessionRunner`, it's essentially the same as `OnAfterRunSession`
- Remove extraneous `next_invocation` block, which would treat a processor error as a node error
- Simplify loops
- Add some callbacks for testing, to be removed before merge
2024-05-24 20:02:24 +10:00
82b4298b03 Fix next node calling logic 2024-05-24 20:02:24 +10:00
fa6c7badd6 Run ruff 2024-05-24 20:02:24 +10:00
45d2504c1e Break apart session processor and the running of each session into separate classes 2024-05-24 20:02:24 +10:00
f1bb7e86c0 feat(ui): invalidate cache for queue item on status change
This query is only subscribed-to in the `QueueItemDetail` component, which is rendered only when the user clicks on a queue item in the queue. Invalidating this tag instead of optimistically updating it won't cause any meaningful change to network traffic.
2024-05-24 08:59:49 +10:00
93e4c3dbc2 feat(app): update queue item's session on session completion
The session is never updated in the queue after it is first enqueued. As a result, the queue detail view in the frontend never updates and the session itself doesn't show outputs, execution graph, etc.

We need a new method on the queue service to update a queue item's session, then call it before updating the queue item's status.

Queue item status may be updated via a session-type event _or_ queue-type event. Adding the updated session to all these events is hairy - simpler to just update the session before we do anything that could trigger a queue item status change event:
- Before calling `emit_session_complete` in the processor (handles session error, completed and cancel events and the corresponding queue events)
- Before calling `cancel_queue_item` in the processor (handles another way queue items can be canceled, outside the session execution loop)

When serializing the session, both in the new service method and the `get_queue_item` endpoint, we need to use `exclude_none=True` to prevent unexpected validation errors.
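
Roughly (a sketch; the `session` field name is taken from the description above):

```
def serialize_session(queue_item) -> str:
    # Drop None fields so the session deserializes without validation errors.
    return queue_item.session.model_dump_json(exclude_none=True)
```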
2024-05-24 08:59:49 +10:00
c3f28f7a35 translationBot(ui): update translation (Spanish)
Currently translated at 30.5% (380 of 1243 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2024-05-24 08:05:45 +10:00
c900a63842 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-05-24 08:05:45 +10:00
4eb5f004e6 Update invokeai_version.py 2024-05-24 08:00:03 +10:00
bcae735d7c fix(ui): initial image layers always ignored (#6434)
## Summary

Whoops!

## Related Issues / Discussions


https://discord.com/channels/1020123559063990373/1049495067846524939/1243186572115837009

## QA Instructions

- Generate w/ initial image layer

## Merge Plan

n/a

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-05-24 03:16:18 +05:30
861f06c459 Merge branch 'main' into psyche/fix/ui/initial-image-layer 2024-05-24 03:14:18 +05:30
c493628272 fix(ui): 'undefined' being used for metadata on uploaded images (#6433)
## Summary

TIL if you add `undefined` to a form data object, it gets stringified to
`'undefined'`. Whoops!

## Related Issues / Discussions

n/a

## QA Instructions

n/a

## Merge Plan

n/a

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-05-24 03:14:02 +05:30
46a90ca402 fix(ui): initial image layers always ignored
Whoops!
2024-05-24 06:40:48 +10:00
d45c33b446 fix(ui): 'undefined' being used for metadata on uploaded images 2024-05-24 06:17:07 +10:00
88025d32c2 feat(api): downgrade metadata parse warnings to debug
I set these to warn during testing and neglected to undo the change.
2024-05-23 22:48:34 +10:00
af64764082 fix: remove db maintenance script from launcher
It is broken.
2024-05-23 22:39:55 +10:00
70487f0c2e fix(ui): layers are "enabled", not "visible" 2024-05-23 10:14:34 +10:00
55d7d9cc75 fix(ui): control layers don't disable correctly
Closes #6424
2024-05-23 10:14:34 +10:00
106674175c add logo and change text for non-local; 2024-05-23 06:51:13 +10:00
dd1d5bdb25 use support URL for non-local 2024-05-23 06:51:13 +10:00
6259ac0bec translationBot(ui): update translation (Dutch)
Currently translated at 79.6% (973 of 1222 strings)

Co-authored-by: Dennis <dennis@vanzoerlandt.nl>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/nl/
Translation: InvokeAI/Web UI
2024-05-22 09:51:12 +10:00
ba31f8a9a9 translationBot(ui): update translation (Italian)
Currently translated at 98.5% (1210 of 1228 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.5% (1206 of 1224 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.5% (1204 of 1222 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-05-22 09:51:12 +10:00
0ba57d6dc5 feat(ui): close starter models toast when a model is installed 2024-05-22 09:40:46 +10:00
abc133e936 feat(ui): revised invocation error toast handling
Only display the session if local. Otherwise, just display the error message.
2024-05-22 09:40:46 +10:00
57743239d7 feat(ui): add updateDescription flag to toast API
If false, when updating a toast, the description is left alone. The count will still tick up.
2024-05-22 09:40:46 +10:00
4a394c60cf feat(ui): add isLocal flag to config 2024-05-22 09:40:46 +10:00
624d28a93d feat(ui): invocation error toasts do not autoclose 2024-05-22 09:40:46 +10:00
29e1ea59fc feat(ui): style copy button on ToastWithSessionRefDescription 2024-05-22 09:40:46 +10:00
2e5d24f272 tidy(ui): remove old comment 2024-05-22 09:40:46 +10:00
1afa340b1a fix(ui): show toast when recalling seed 2024-05-22 09:40:46 +10:00
3b381b5a8c tidy(ui): remove the ToastID enum
With the model install logic cleaned up, the enum is less useful
2024-05-22 09:40:46 +10:00
f2b9684de8 tidy(ui): split install model into helper hook
This was duplicated like 7 times or so
2024-05-22 09:40:46 +10:00
a66b3497e0 feat(ui): port all toasts to use new util 2024-05-22 09:40:46 +10:00
683ec8e5f2 feat(ui): add stateful toast utility
Small wrapper around chakra's toast system simplifies creating and updating toasts. See comments in toast.ts for details.
2024-05-22 09:40:46 +10:00
f31f0cf733 feat(ui): restore spellcheck on prompt boxes 2024-05-22 08:52:25 +10:00
38265b3123 docs(ui): update validateWorkflow comments 2024-05-21 05:17:10 -07:00
caca28286c tests(ui): add test for resource usage check 2024-05-21 05:17:10 -07:00
38320a5100 feat(ui): reset missing images, boards and models when loading workflows
These fields are reset back to `undefined` if not accessible. A warning toast is shown, and the full warning message is logged in the JS console.
2024-05-21 05:17:10 -07:00
7badaab17d docs: fix link to invoke ai models site 2024-05-20 20:48:42 -07:00
aa0c59bb51 fix(ui): crash when using notes nodes or missing node/field templates (#6412)
## Summary

Notes nodes used some overly-strict redux selectors. The selectors are
now more chill. Also fixed an issue where you couldn't edit a notes node
title.

Found another class of error related to the overly strict reducers that
caused errors when loading a workflow that had missing templates. Fixed
this with fallback wrapper component, works like an error boundary when
a template isn't found.

## Related Issues / Discussions


https://discord.com/channels/1020123559063990373/1149506274971631688/1242256425527545949

## QA Instructions

- Add a notes node to a workflow. Edit the notes title.
- Load a workflow that has nodes that aren't installed. Should get a
fallback UI for each missing node.
- Load a workflow that references a node with different inputs than are
in the template - like an old version of a node. Should get a fallback
field warning for both missing templates, or missing inputs.

## Merge Plan

n/a

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-05-21 07:59:43 +05:30
e4acaa5c8f chore: v4.2.2post1 2024-05-21 11:31:06 +10:00
9ba47cae20 fix(ui): unable to edit notes node title 2024-05-21 11:27:11 +10:00
bf4310ca71 fix(ui): errors when node template or field template doesn't exist
Some asserts were bubbling up in places where they shouldn't have, causing errors when a node has a field without a matching template, or vice-versa.

To resolve this without sacrificing the runtime safety provided by asserts, an `InvocationFieldCheck` component now wraps all field components. This component renders a fallback when a field doesn't exist, so the inner components can safely use the asserts.
2024-05-21 11:22:08 +10:00
e75f98317f fix(ui): notes node text not selectable 2024-05-21 10:06:25 +10:00
1249d4a6e3 fix(ui): crash when using a notes node 2024-05-21 10:06:09 +10:00
66c9f4708d Update invokeai_version.py 2024-05-21 07:11:09 +10:00
32277193b6 fix(ui): retain denoise strength and opacity when changing image 2024-05-20 18:27:51 +10:00
620ee2875e fix(ui): store hidden state of edges in workflows
This prevents a minor visual bug where collapsed edges between collapsed nodes didn't display correctly on first load of a workflow.
2024-05-20 11:36:47 +10:00
5553588147 fix(ui): ensure invocation edges have a type 2024-05-20 11:36:47 +10:00
1c29b3bd85 feat(ui): updated field type translations 2024-05-20 11:28:33 +10:00
e88b807a13 docs(ui): update field type docs & comments 2024-05-20 11:28:33 +10:00
9e55ef3d4b fix(ui): workflow migration field type
At some point, I made a mistake and imported the wrong types to some files for the old v1 and v2 workflow schema migration data.

The relevant zod schemas and inferred types have been restored.

This change doesn't alter runtime behaviour. Only type annotations.
2024-05-20 11:28:33 +10:00
8062a47d16 fix(ui): use new field type cardinality throughout app
Update business logic and tests.
2024-05-20 11:28:33 +10:00
dba8c43ecb feat(ui): explicit field type cardinality
Replace the `isCollection` and `isCollectionOrScalar` flags with a single enum value `cardinality`. Valid values are `SINGLE`, `COLLECTION` and `SINGLE_OR_COLLECTION`.

Why:
- The two flags were mutually exclusive, but this wasn't enforced. You could create a field type that had both `isCollection` and `isCollectionOrScalar` set to true, which makes no sense.
- There was no explicit declaration for scalar/single types.
- Checking if a type had only a single flag was tedious.

Thanks to a change a couple months back in which the workflows schema was revised, field types are internal implementation details. Changes to them are non-breaking.
2024-05-20 11:28:33 +10:00
8ebf2ddf15 fix(ui): fix t2i adapter dimensions error message
It now indicates the correct dimension of 64 (SD1.5) or 32 (SDXL) - before was hardcoded to 64.
2024-05-20 11:23:14 +10:00
f4625c2671 feat(ui): add canvas objects to metadata for all canvas graphs 2024-05-20 10:32:59 +10:00
c94742bde6 feat(ui): add canvas objects to metadata when saving canvas to gallery 2024-05-20 10:32:59 +10:00
a34faf0bd8 chore(ui): typegen 2024-05-20 10:32:59 +10:00
ecfff6cb1e feat(api): add metadata to upload route
Canvas images are saved by uploading a blob generated from the HTML canvas element. This means the existing metadata handling, inside the graph execution engine, is not available.

To save metadata to canvas images, we need to provide it when uploading that blob.

The upload route now has a `metadata` body param. If this is provided, we use it over any metadata embedded in the image.
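
A client-side sketch of what this enables (the route path and field names here are assumptions, not the documented API):

```
import json
import requests

blob = open("canvas.png", "rb").read()    # the HTML-canvas-generated blob
metadata = {"generation_mode": "canvas"}  # illustrative metadata

# Send the metadata alongside the blob; it takes precedence over any
# metadata embedded in the image itself.
requests.post(
    "http://localhost:9090/api/v1/images/upload",  # assumed path
    files={"file": ("canvas.png", blob, "image/png")},
    data={"metadata": json.dumps(metadata)},
)
```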
2024-05-20 10:32:59 +10:00
ba8bed6870 fix(ui): edge case resulting in no node templates when loading workflow, causing failure
Depending on the user behaviour and network conditions, it's possible that we could try to load a workflow before the invocation templates are available.

Fix is simple:
- Use the RTKQ query hook for openAPI schema in App.tsx
- Disable the load workflow buttons until we have templates parsed
2024-05-19 07:34:00 -07:00
ca186bca61 fix(ui): missed node execution state for progress images 2024-05-19 20:14:01 +10:00
e2f109807c fix(ui): delete edges when their source or target no longer exists 2024-05-19 20:14:01 +10:00
281bd31db2 feat(nodes): make ModelIdentifierInvocation a prototype 2024-05-19 20:14:01 +10:00
cea1874e00 perf(ui): memoize WorkflowName selectors 2024-05-19 20:14:01 +10:00
89b0e9e4de feat(ui): use connection validationResults directly in components 2024-05-19 20:14:01 +10:00
26d0d55d97 fix(ui): set nodeDragThreshold to prevent spurious position change events 2024-05-19 20:14:01 +10:00
059c5586a4 perf(ui): ignore all no-op node and edge changes 2024-05-19 20:14:01 +10:00
9ed5698aa8 fix(ui): do not remove exposed fields when updating workflows 2024-05-19 20:14:01 +10:00
0b5696c5d4 feat(ui): remove nodeExclusivelySelected action 2024-05-19 20:14:01 +10:00
a51142674a tidy(ui): more succinct syntax for edge and node updates 2024-05-19 20:14:01 +10:00
b8b671c0db feat(ui): remove selectionDeleted action 2024-05-19 20:14:01 +10:00
7cceafe0dd feat(ui): remove selectionPasted action 2024-05-19 20:14:01 +10:00
cbe32b647a feat(ui): remove selectedAll action 2024-05-19 20:14:01 +10:00
9a8e0842bb feat(ui): remove nodeReplaced action 2024-05-19 20:14:01 +10:00
1d7671298f fix(ui): group edge selection actions 2024-05-19 20:14:01 +10:00
e38d75c3dc feat(ui): get rid of nodeAdded 2024-05-19 20:14:01 +10:00
21fab9785a feat(ui): tweak edge styling 2024-05-19 20:14:01 +10:00
b3429553bb fix(ui): collapsed edges selected state 2024-05-19 20:14:01 +10:00
e480844042 fix(ui): edge styling 2024-05-19 20:14:01 +10:00
26029108f7 feat(ui): rework node and edge mutation logic
Remove our DIY'd reducers, consolidating all node and edge mutations to use `edgesChanged` and `nodesChanged`, which are called by reactflow. This makes the API for manipulating nodes and edges less tangly and error-prone.
2024-05-19 20:14:01 +10:00
504ac82077 fix(ui): duplicated edges when updating edge with lazy connect 2024-05-19 20:14:01 +10:00
6b11740dda chore(ui): knip 2024-05-19 20:14:01 +10:00
a80e3448f5 feat(ui): rework pendingConnection 2024-05-19 20:14:01 +10:00
4bda174eb9 tests(ui): coverage for getCollectItemType 2024-05-19 20:14:01 +10:00
b1e28c2f2c tests(ui): coverage for getFirstValidConnection 2024-05-19 20:14:01 +10:00
83000a4190 feat(ui): rework getFirstValidConnection with new helpers 2024-05-19 20:14:01 +10:00
c98205d0d7 tests(ui): candidate fields, getFirstValidConnection (wip) 2024-05-19 20:14:01 +10:00
ce2ad5903c feat(ui): extract logic for finding candidate fields to own function 2024-05-19 20:14:01 +10:00
fe3980a369 tests(ui): add buildNode convenience wrapper for buildInvocationNode 2024-05-19 20:14:01 +10:00
ea97ae5ae8 tidy(ui): extraneous vars in makeConnectionErrorSelector 2024-05-19 20:14:01 +10:00
3605b6b1a3 fix(ui): handling for in-progress edge updates during connection validation 2024-05-19 20:14:01 +10:00
fc31dddbf7 feat(ui): use new validateConnection 2024-05-19 20:14:01 +10:00
6ad01d824d feat(ui): add strict mode to validateConnection 2024-05-19 20:14:01 +10:00
78f9f3ee95 feat(ui): better types for validateConnection 2024-05-19 20:14:01 +10:00
972398d203 tests(ui): add iterate to test schema 2024-05-19 20:14:01 +10:00
857889d1fa tests(ui): coverage for getCollectItemType 2024-05-19 20:14:01 +10:00
8074a802d6 tests(ui): coverage for validateConnectionTypes 2024-05-19 20:14:01 +10:00
059d5a682c tidy(ui): validateConnection code clarity 2024-05-19 20:14:01 +10:00
00c2d8f95d tidy(ui): areTypesEqual var names 2024-05-19 20:14:01 +10:00
04a596179b tests(ui): finish test cases for validateConnection 2024-05-19 20:14:01 +10:00
3fcb2720d7 tests(ui): add tests for consolidated connection validation 2024-05-19 20:14:01 +10:00
6f7160b9fd fix(ui): call updateNodeInternals when making connections 2024-05-19 20:14:01 +10:00
6b4e464d17 fix(ui): rework edge update logic 2024-05-19 20:14:01 +10:00
9f7841a04b tidy(ui): clean up addnodepopover hotkeys 2024-05-19 20:14:01 +10:00
468644ab18 fix(ui): rebase conflict 2024-05-19 20:14:01 +10:00
9d127fee6b feat(ui): makeConnectionErrorSelector now creates a parameterized selector 2024-05-19 20:14:01 +10:00
6658897210 tidy(ui): tidy connection validation functions and logic 2024-05-19 20:14:01 +10:00
af7b194bec chore(ui): lint 2024-05-19 20:14:01 +10:00
de1ea50e6d fix(ui): rebase resolution 2024-05-19 20:14:01 +10:00
2680ef52c2 feat(nodes): add ModelIdentifierInvocation
This node allows a user to select _any_ model, outputting a `ModelIdentifierField` for that model.
2024-05-19 20:14:01 +10:00
a012bb6e07 feat(ui): add ModelIdentifierField field type
This new field type accepts _any_ model. A field renderer lets the user select any available model.
2024-05-19 20:14:01 +10:00
6a2c53f6c5 fix(ui): do not allow comparison between undefined original types 2024-05-19 20:14:01 +10:00
2cbf7d9221 fix(ui): stupid ts 2024-05-19 20:14:01 +10:00
fe7ed72c9c feat(nodes): make all ModelIdentifierField inputs accept connections 2024-05-19 20:14:01 +10:00
85a5a7c47a feat(ui): add originalType to FieldType, improved connection validation
We now keep track of the original field type, derived from the python type annotation in addition to the override type provided by `ui_type`.

This makes `ui_type` work more like it sound like it should work - change the UI input component only.

Connection validation is extended to also check the original types. If there is any match between two fields' "final" or original types, we consider the connection valid. This change is backwards-compatible; there is no workflow migration needed.
2024-05-19 20:14:01 +10:00
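As an illustration of the two-level check described above, a minimal sketch - names like `FieldType` and `validateConnectionTypes` are illustrative, not the app's exact code:

```ts
// Sketch: a connection is valid if any pairing of the two fields'
// final/original types matches.
type FieldType = { name: string; isCollection: boolean };

const areTypesEqual = (a: FieldType, b: FieldType): boolean =>
  a.name === b.name && a.isCollection === b.isCollection;

const validateConnectionTypes = (
  source: { type: FieldType; originalType?: FieldType },
  target: { type: FieldType; originalType?: FieldType },
): boolean => {
  const sourceTypes = [source.type, source.originalType].filter((t): t is FieldType => t !== undefined);
  const targetTypes = [target.type, target.originalType].filter((t): t is FieldType => t !== undefined);
  // Any match between the two fields' final or original types is accepted.
  return sourceTypes.some((s) => targetTypes.some((t) => areTypesEqual(s, t)));
};
```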
af3fd26d4e fix(ui): bug when clearing processor
When clearing the processor config, we shouldn't re-process the image. This logic wasn't handled correctly, but coincidentally the bug didn't cause a user-facing issue.

Without a config, we had a runtime error when trying to build the node for the processor graph and the listener failed.

So while we didn't re-process the image, it was because there was an error, not because the logic was correct.

Fix this by bailing if there is no image or config.
2024-05-19 07:25:48 +10:00
5127fd6320 fix(ui): control adapter autoprocess jank
If you change the control model and the new model has the same default processor, we would still re-process the image, even if there was no need to do so.

With this change, if the image and processor config are unchanged, we bail out.
2024-05-19 07:25:48 +10:00
124d34a8cc docs: add link for --extra-index-url 2024-05-19 00:56:31 +10:00
e8387d7523 docs: add link to tool on pytorch website 2024-05-19 00:56:31 +10:00
a5d08c981b docs: fix typo in --root arg of invokeai-web 2024-05-19 00:56:31 +10:00
811d0da0f0 docs: fix link to install reqs 2024-05-19 00:56:31 +10:00
987ee704a1 Merge branch 'main' into lstein/feat/simple-mm2-api 2024-05-17 22:54:03 -04:00
e77c7e40b7 fix ruff error 2024-05-17 22:53:45 -04:00
8aebc29b91 fix test to run on 32bit cpu 2024-05-17 22:48:54 -04:00
d968c6f379 refactor multifile download code 2024-05-17 22:29:19 -04:00
17e1fc5254 chore(app): ruff 2024-05-18 09:21:45 +10:00
84e031edc2 add nullable project also 2024-05-18 09:21:45 +10:00
b6b7e737e0 ruff 2024-05-18 09:21:45 +10:00
5f3e7afd45 add nullable user to invocation error events 2024-05-18 09:21:45 +10:00
b0cfca9d24 fix(app): pass image metadata as stringified json 2024-05-18 09:04:37 +10:00
985ef89825 fix(app): type annotations in images service 2024-05-18 09:04:37 +10:00
5928ade5fd feat(app): simplified create image API
Graph, metadata and workflow all take stringified JSON only. This makes the API consistent and means we don't need to do a round-trip of pydantic parsing when handling this data.

It also prevents a failure mode where an uploaded image's metadata, workflow or graph are old and don't match the current schema.

As before, the frontend does strict validation and parsing when loading these values.
2024-05-18 09:04:37 +10:00
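A rough sketch of what such a stringified-JSON payload might look like on the client - the shape and names here are assumptions, not the actual API:

```ts
// All three values travel as stringified JSON; the server stores the
// strings as-is, with no pydantic round-trip on upload.
interface CreateImagePayload {
  metadata?: string;
  workflow?: string;
  graph?: string;
}

const buildCreateImagePayload = (
  metadata?: unknown,
  workflow?: unknown,
  graph?: unknown,
): CreateImagePayload => ({
  // Serialize once on the client; parsing happens only on load.
  metadata: metadata === undefined ? undefined : JSON.stringify(metadata),
  workflow: workflow === undefined ? undefined : JSON.stringify(workflow),
  graph: graph === undefined ? undefined : JSON.stringify(graph),
});
```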
93ebc175c6 fix(app): retain graph in metadata when uploading images 2024-05-18 09:04:37 +10:00
386d552493 fix(ui): loading workflows from file 2024-05-18 09:04:37 +10:00
799cf06d20 fix(ui): loading library workflows 2024-05-18 09:04:37 +10:00
922716d2ab feat(ui): store graph in image metadata
The previous super-minimal implementation had a major issue - the saved workflow didn't take into account batched field values. When generating with multiple iterations or dynamic prompts, the same workflow with the first prompt, seed, etc was stored in each image.

As a result, when the batch results in multiple queue items, only one of the images has the correct workflow - the others are mismatched.

To work around this, we can store the _graph_ in the image metadata (alongside the workflow, if generated via workflow editor). When loading a workflow from an image, we can choose to load the workflow or the graph, preferring the workflow.

Internally, we need to update images router image-saving services. The changes are minimal.

To avoid pydantic errors deserializing the graph, when we extract it from the image, we will leave it as stringified JSON and let the frontend's more sophisticated and flexible parsing handle it. The workflow is also changed to just return stringified JSON, so the API is consistent.
2024-05-18 09:04:37 +10:00
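A minimal sketch of the load-side preference described above, with hypothetical parser stubs standing in for the frontend's strict parsing:

```ts
// Stubs standing in for the app's real zod-based parsers.
declare function parseWorkflow(data: unknown): unknown;
declare function parseGraph(data: unknown): unknown;

// Prefer the workflow when both are present; fall back to the graph.
const loadFromImageMetadata = (workflowJson?: string, graphJson?: string): unknown => {
  if (workflowJson) {
    return parseWorkflow(JSON.parse(workflowJson));
  }
  if (graphJson) {
    return parseGraph(JSON.parse(graphJson));
  }
  return null;
};
```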
66fc110b64 Revert "feat(ui): store workflow in generation tab images"
This reverts commit c9c4190fb45696088207b0ac3c69c2795d7f9694.
2024-05-18 09:04:37 +10:00
822f1e1f06 feat(ui): store workflow in generation tab images 2024-05-18 09:04:37 +10:00
5d60c3c8e1 fix(ui): jank when editing field title 2024-05-18 08:46:40 +10:00
4e21d01c7f feat(ui): dim field name when connected 2024-05-18 08:46:40 +10:00
6b7b0b3777 fix(ui): do not rearrange fields when connection/disconnecting 2024-05-18 08:46:40 +10:00
07feb5ba07 Revert "feat(ui): SDXL clip skip"
This reverts commit 40b4fa7238.
2024-05-17 15:08:04 -07:00
a18d7adad4 fix(ui): allow image dims multiple of 32 with SDXL and T2I adapter
See https://github.com/invoke-ai/InvokeAI/pull/6342#issuecomment-2109912452 for discussion.
2024-05-17 23:38:54 +10:00
32dff2c4e3 feat(ui): copy/paste input edges when copying node
- Copy edges to selected nodes on copy
- If pasted with `ctrl/meta-shift-v`, also paste the input edges
2024-05-17 23:12:29 +10:00
575ecb4028 feat(ui): prevent connections to direct-only inputs 2024-05-17 22:08:40 +10:00
ad8778df6c feat(ui): extract node execution state from nodesSlice
This state is ephemeral and not undoable.
2024-05-17 13:24:23 +10:00
d2f5103f9f fix(ui): ignore actions from other slices in nodesSlice history 2024-05-17 13:24:23 +10:00
dd42a56084 tests(ui): fix parseSchema test fixture
The schema fixture wasn't formatted quite right - doesn't affect the test but still.
2024-05-17 13:24:23 +10:00
23ac340a3f tests(ui): add test for parseSchema 2024-05-17 13:24:23 +10:00
6791b4eaa8 chore(ui): lint 2024-05-17 13:24:23 +10:00
a8b042177d feat(ui): connection validation for collection items types 2024-05-17 13:24:23 +10:00
76825f4261 fix(ui): allow collect node inputs to connect to multiple fields when using lazy connect 2024-05-17 13:24:23 +10:00
78cb4d75ad fix(ui): use elevateEdgesOnSelect so last-selected edge is the interactable one when updating edges 2024-05-17 13:24:23 +10:00
a18bbac262 fix(ui): jank interaction between edge update and autoconnect 2024-05-17 13:24:23 +10:00
9ff5596963 feat(ui): hide values for connected fields 2024-05-17 13:24:23 +10:00
8ea596b1e9 fix(ui): janky editable field title
- Do not allow whitespace-only field titles
- Make only preview text trigger editable
- Tooltip over the preview, not the whole "row"
2024-05-17 13:24:23 +10:00
e3a143eaed fix(ui): fix jank w/ stale connections 2024-05-17 13:24:23 +10:00
c359ab6d9b fix(ui): fix dependency tracking for copy/paste hotkeys 2024-05-17 13:24:23 +10:00
dbfaa07e03 feat(ui): add checks for undo/redo actions 2024-05-17 13:24:23 +10:00
7f78fe7a36 feat(ui): move viewport state to nanostores 2024-05-17 13:24:23 +10:00
6cf5b402c6 feat(ui): remove extraneous selectedEdges and selectedNodes state 2024-05-17 13:24:23 +10:00
b0c7c7cb47 feat(ui): remove remaining extraneous state from nodes slice 2024-05-17 13:24:23 +10:00
4d68cd8dbb feat(ui): recreate edge auto-add-node logic 2024-05-17 13:24:23 +10:00
2c1fa30639 feat(ui): recreate edge autoconnect logic 2024-05-17 13:24:23 +10:00
708c68413d tidy(ui): add type for templates 2024-05-17 13:24:23 +10:00
1d884fb794 feat(ui): move invocation templates out of redux
Templates are stored in nanostores. All hooks, selectors, etc are reworked to reference the nanostore.
2024-05-17 13:24:23 +10:00
f6a44681a8 feat(ui): move invocation templates out of redux (wip) 2024-05-17 13:24:23 +10:00
d4df312300 feat(ui): move nodes copy/paste out of slice 2024-05-17 13:24:23 +10:00
9c0d44b412 feat(ui): split workflow editor settings to separate slice
We need the undoable slice to be only undoable state - settings are not undoable.
2024-05-17 13:24:23 +10:00
27826369f0 feat(ui): make nodesSlice undoable 2024-05-17 13:24:23 +10:00
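A sketch of how an undoable nodes slice with per-slice action filtering (see the "ignore actions from other slices" commit above) might be wired up, assuming redux-undo - the filter predicate and history limit are illustrative:

```ts
import undoable from 'redux-undo';
import type { AnyAction, Reducer } from 'redux';

type NodesState = Record<string, unknown>;
declare const nodesReducer: Reducer<NodesState>; // the slice's plain reducer

const undoableNodesReducer = undoable(nodesReducer, {
  // Only nodes-slice actions create undo checkpoints; actions from other
  // slices (e.g. editor settings) pass through without touching history.
  filter: (action: AnyAction) => action.type.startsWith('nodes/'),
  limit: 64, // illustrative history depth
});
```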
2dae5eb7ad more refactoring; HF subfolders not working 2024-05-16 22:26:18 -04:00
31d8b50276 [Refactor] Update min and max values for LoRACard weight input 2024-05-17 10:38:26 +10:00
40b4fa7238 feat(ui): SDXL clip skip
Uses the same CLIP Skip value for both CLIP1 and CLIP2.

Adjusted SDXL CLIP Skip min/max/markers to be within the valid range (0 to 11).

Closes #4583
2024-05-16 07:49:30 -04:00
911a24479b add tests for model install file size reporting 2024-05-16 07:18:33 -04:00
3b1743b7c2 docs: fix install reqs link 2024-05-16 10:37:42 +10:00
f489c818f1 docs(ui): add comments to nsfw & watermarker helpers 2024-05-15 14:09:44 +10:00
af477fa295 tidy(ui): remove unused modelLoader from refiner helper 2024-05-15 14:09:44 +10:00
0ff0290735 tidy(ui): use Invocation<> helper type in canvas graph builders, elsewhere 2024-05-15 14:09:44 +10:00
67dbe6d949 tidy(ui): use Invocation<> helper type in OG control adapters 2024-05-15 14:09:44 +10:00
4c3c2297b9 tidy(ui): organise graph builder files 2024-05-15 14:09:44 +10:00
cadea55521 tidy(ui): organise graph builder files 2024-05-15 14:09:44 +10:00
c8f30b1392 tidy(ui): move testing-only types to test file 2024-05-15 14:09:44 +10:00
3d14a98abf tidy(ui): use Invocation<> type in control layers types 2024-05-15 14:09:44 +10:00
77024bfca7 fix(ui): fix sdxl generation mode metadata 2024-05-15 14:09:44 +10:00
4a1c3786a1 tidy(ui): organise CL graph builder 2024-05-15 14:09:44 +10:00
b239891986 tidy(ui): clean up base model handling in graph builder 2024-05-15 14:09:44 +10:00
9fb03d43ff tests(ui): get coverage to 100% for graph builder 2024-05-15 14:09:44 +10:00
bdc59786bd tidy(ui): clean up graph builder helper functions 2024-05-15 14:09:44 +10:00
fb6e926500 tidy(ui): remove extraneous graph validate calls 2024-05-15 14:09:44 +10:00
48ccd63dba feat(ui): use integrated metadata helper 2024-05-15 14:09:44 +10:00
ee647a05dc feat(ui): move metadata util to graph class
No good reason to have it be separate. A bit cleaner this way.
2024-05-15 14:09:44 +10:00
154b52ca4d docs(ui): update docstrings for Graph builder 2024-05-15 14:09:44 +10:00
5dd460c3ce chore(ui): knip 2024-05-15 14:09:44 +10:00
4897ce2a13 tidy(ui): remove unused files 2024-05-15 14:09:44 +10:00
5425526d50 feat(ui): use graph builder for generation tab sdxl 2024-05-15 14:09:44 +10:00
5a4b050e66 feat(ui): use asserts in graph builder 2024-05-15 14:09:44 +10:00
8d39520232 feat(ui): port NSFW and watermark nodes to graph builder 2024-05-15 14:09:44 +10:00
04d12a1e98 feat(ui): add HRF graph builder helper 2024-05-15 14:09:44 +10:00
39aa70963b docs(ui): update docstrings for addGenerationTabSeamless 2024-05-15 14:09:44 +10:00
5743254a41 fix(ui): use arrays for edge methods 2024-05-15 14:09:44 +10:00
c538ffea26 tidy(ui): remove console.log 2024-05-15 14:09:44 +10:00
e8d3a7c870 feat(ui): support multiple fields for getEdgesTo, getEdgesFrom, deleteEdgesTo, deleteEdgesFrom 2024-05-15 14:09:44 +10:00
2be66b1546 feat(ui): add deleteNode and getEdges to graph util 2024-05-15 14:09:44 +10:00
76e181fd44 build(ui): add eslint no-console rule 2024-05-15 14:09:44 +10:00
b5d42fbc66 tidy(ui): remove unused graph helper 2024-05-15 14:09:44 +10:00
b463cd763e tidy(ui): remove extraneous is_intermediate node fields 2024-05-15 14:09:44 +10:00
eb320df41d feat(ui): use new lora loaders, simplify VAE loader, seamless 2024-05-15 14:09:44 +10:00
de1869773f chore(ui): typegen 2024-05-15 14:09:44 +10:00
ef89c7e537 feat(nodes): add LoRASelectorInvocation, LoRACollectionLoader, SDXLLoRACollectionLoader
These simplify loading multiple LoRAs. Instead of requiring chained lora loader nodes, configure each LoRA (model & weight) with a selector, collect them, then send the collection to the collection loader to apply all of the LoRAs to the UNet/CLIP models.

The collection loaders accept a single lora or collection of loras.
2024-05-15 14:09:44 +10:00
008645d386 fix(ui): work through merge conflicts (wip) 2024-05-15 14:09:44 +10:00
f8042ffb41 WIP, sd1.5 works 2024-05-15 14:09:44 +10:00
dbe22be598 feat(ui): use graph utils in builders (wip) 2024-05-15 14:09:44 +10:00
8f6078d007 feat(ui): refine graph building util
Simpler types and API surface.
2024-05-15 14:09:44 +10:00
4020bf47e2 feat(ui): add MetadataUtil class
Provides methods for manipulating a graph's metadata.
2024-05-15 14:09:44 +10:00
9d685da759 feat(ui): add stateful Graph class
This stateful class provides abstractions for building a graph. It exposes graph methods like adding and removing nodes and edges.

The methods are documented, tested, and strongly typed.
2024-05-15 14:09:44 +10:00
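A minimal sketch of what such a stateful graph-builder class might look like - an illustrative API, not the app's exact implementation:

```ts
type Node = { id: string; type: string };
type Edge = { source: string; target: string };

class Graph {
  private nodes = new Map<string, Node>();
  private edges: Edge[] = [];

  addNode(node: Node): Node {
    if (this.nodes.has(node.id)) {
      throw new Error(`Node ${node.id} already exists`);
    }
    this.nodes.set(node.id, node);
    return node;
  }

  addEdge(source: string, target: string): Edge {
    const edge = { source, target };
    this.edges.push(edge);
    return edge;
  }

  deleteNode(id: string): void {
    this.nodes.delete(id);
    // Remove any edges touching the deleted node.
    this.edges = this.edges.filter((e) => e.source !== id && e.target !== id);
  }
}
```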
e3289856c0 feat(ui): add and use type helpers for invocations and invocation outputs 2024-05-15 14:09:44 +10:00
47b8153728 build(ui): enable TS strictPropertyInitialization
https://www.typescriptlang.org/tsconfig/#strictPropertyInitialization
2024-05-15 14:09:44 +10:00
7901e4c082 chore(ui): typegen 2024-05-15 14:09:44 +10:00
18b0977a31 feat(api): add InvocationOutputMap to OpenAPI schema
This dynamically generated schema object maps node types to their pydantic schemas. This makes it much simpler to infer node types in the UI.
2024-05-15 14:09:44 +10:00
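A sketch of how a type-to-output map lets the UI infer output types - the map entries here are made up for illustration:

```ts
// Generated alongside the OpenAPI types: node type -> output shape.
interface InvocationOutputMap {
  add: { value: number };
  noise: { noise_name: string; width: number; height: number };
}

type OutputFor<T extends keyof InvocationOutputMap> = InvocationOutputMap[T];

// e.g. OutputFor<'add'> resolves to { value: number }
const handleAddOutput = (output: OutputFor<'add'>): number => output.value;
```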
fc6b214470 tests(ui): set up vitest coverage 2024-05-15 14:09:44 +10:00
e22211dac0 fix: Fix Outpaint not applying the expanded mask correctly
In unscaled situations
2024-05-15 13:59:01 +10:00
f29c406fed refactor model_install to work with refactored download queue 2024-05-13 22:49:15 -04:00
287c679f7b clean up type checking for single file and multifile download job callbacks 2024-05-13 18:31:40 -04:00
e222484663 chore: v4.2.1 (#6362)
## Summary

Bump to v4.2.1

## Related Issues / Discussions

n/a

## QA Instructions

n/a

## Merge Plan

Do the release after merging.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-05-14 03:17:03 +05:30
2a9cea6689 Update invokeai_version.py
Bump to v4.2.1
2024-05-14 07:37:02 +10:00
93da75209c feat(nodes): use new blur_if_nsfw method 2024-05-14 07:23:38 +10:00
9c819f0fd8 fix(nodes): fix nsfw checker model download 2024-05-14 07:23:38 +10:00
eef6fcf286 translationBot(ui): update translation (Russian)
Currently translated at 100.0% (1210 of 1210 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-05-14 07:15:12 +10:00
e375d9f787 translationBot(ui): update translation (Italian)
Currently translated at 98.5% (1192 of 1210 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.5% (1192 of 1210 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.5% (1192 of 1210 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.5% (1192 of 1210 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-05-14 07:15:12 +10:00
ab18174774 translationBot(ui): update translation (Spanish)
Currently translated at 31.3% (379 of 1208 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2024-05-14 07:15:12 +10:00
9265841384 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-05-14 07:15:12 +10:00
c5fd08125d translationBot(ui): update translation (Italian)
Currently translated at 98.5% (1192 of 1210 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-05-14 07:15:12 +10:00
11d88dae7f translationBot(ui): update translation (Russian)
Currently translated at 100.0% (1210 of 1210 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-05-14 07:15:12 +10:00
3b495659b0 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-05-14 07:15:12 +10:00
15c9a3a4b6 translationBot(ui): update translation (Italian)
Currently translated at 98.3% (1189 of 1209 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.3% (1189 of 1209 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-05-14 07:15:12 +10:00
60e77e4ed6 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 77.8% (922 of 1185 strings)

Co-authored-by: flower_elf <miaoju2005@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2024-05-14 07:15:12 +10:00
fa832a8ac6 translationBot(ui): update translation (Russian)
Currently translated at 100.0% (1209 of 1209 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (1209 of 1209 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (1188 of 1188 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (1185 of 1185 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-05-14 07:15:12 +10:00
f7834d7d59 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-05-14 07:15:12 +10:00
63d7461510 translationBot(ui): update translation (German)
Currently translated at 71.9% (839 of 1166 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-05-14 07:15:12 +10:00
1de704160e translationBot(ui): update translation (Russian)
Currently translated at 97.3% (1154 of 1185 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (1174 of 1174 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (1173 of 1173 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (1166 of 1166 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (1165 of 1165 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (1149 of 1149 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (1147 of 1147 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-05-14 07:15:12 +10:00
b118a2565c translationBot(ui): update translation (Italian)
Currently translated at 96.0% (1138 of 1185 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.4% (1156 of 1174 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.3% (1155 of 1174 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.4% (1129 of 1147 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-05-14 07:15:12 +10:00
0bf14c2830 add multifile_download() method to download service 2024-05-12 20:14:00 -06:00
eb166baafe fix(ui): invoke button shows loading while queueing
Make the Invoke button show a loading spinner while queueing.

The queue mutations need to be awaited, else the `isLoading` state doesn't work as expected. I feel like I should understand why, but I don't...
2024-05-13 11:53:29 +10:00
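A sketch of the awaited-mutation pattern, assuming an RTK Query style hook - the hook and helper names are hypothetical:

```ts
declare function useEnqueueBatchMutation(): [
  (arg: unknown) => { unwrap: () => Promise<unknown> },
  { isLoading: boolean },
];
declare function buildBatchConfig(): unknown;

const [enqueueBatch, { isLoading }] = useEnqueueBatchMutation();

const onInvoke = async (): Promise<void> => {
  // Without the await, the handler returns immediately and the loading
  // state can resolve before the request actually settles.
  await enqueueBatch(buildBatchConfig()).unwrap();
};
```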
818d37f304 fix(api): retain cover image when converting model to diffusers
We need to retrieve and re-save the image, because a conversion to diffusers creates a new model record, with a new key.

See: https://old.reddit.com/r/StableDiffusion/comments/1cnx40d/invoke_42_control_layers_regional_guidance_w_text/l3bv152/
2024-05-13 08:46:07 +10:00
9cdb801c1c fix(api): add cover image to update model response
Fixes a bug where the image _appears_ to be reset when editing a model.

See: https://old.reddit.com/r/StableDiffusion/comments/1cnx40d/invoke_42_control_layers_regional_guidance_w_text/l3asdej/
2024-05-13 08:46:07 +10:00
5da8cde4fc fix(ui): disable listening on CA and II layers (#6332)
## Summary

Do not listen for mouse events on CA and II layers (which are not
interact-able).

## Related Issues / Discussions

Closes #6331

## QA Instructions

Move a CA or II layer above a regional guidance layer. The move tool
should now work.

## Merge Plan

n/a

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-05-13 04:07:27 +05:30
6ec3dc0c0d Merge branch 'main' into psyche/fix/ui/cl-listening-layers 2024-05-13 04:05:35 +05:30
6050dffb25 fix(ui): use translations for canvas layer select (#6357)
## Summary

Use translations instead of plain strings.

## Related Issues / Discussions


https://discord.com/channels/1020123559063990373/1054129386447716433/1239181243078279208

## QA Instructions

The layer select should still work.

## Merge Plan

n/a

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-05-13 04:04:13 +05:30
93efeafe30 Merge branch 'main' into psyche/fix/ui/canvas-layer-translations 2024-05-13 04:02:23 +05:30
f167e8a8d3 fix(ui): jank in depthanything model size select (#6335)
## Summary

The select had a default search value, which meant it only showed
"small" as an option on first load.

## Related Issues / Discussions

n/a

## QA Instructions

- Add a CA layer
- Expand advanced
- Set processor to depth anything
- Click the model size dropdown, it should show all 3 sizes

## Merge Plan

n/a

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-05-13 04:01:58 +05:30
124d49f35e fix(ui): use translations for canvas layer select 2024-05-13 08:30:18 +10:00
52d8efa892 Merge branch 'main' into psyche/fix/ui/depth-anything-select 2024-05-13 04:00:07 +05:30
4ea8416c68 fix(ui): use pluralization for invoke button tooltip 2024-05-13 08:29:31 +10:00
8dd0bfb068 feat(ui): use new model type grouping for control adapters in control layers 2024-05-13 08:29:31 +10:00
6ff1c7d541 feat(ui): add group by base & type to useGroupedModelCombobox hook
This allows comboboxes for models to have more granular groupings. For example, Control Adapter models can be grouped by base model & model type.

Before:
- `SD-1`
- `SDXL`

After:
- `SD-1 / ControlNet`
- `SD-1 / T2I Adapter`
- `SDXL / ControlNet`
- `SDXL / T2I Adapter`
2024-05-13 08:29:31 +10:00
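A minimal sketch of grouping models by base and type for a combobox - the types and labels are illustrative:

```ts
type ModelConfig = {
  name: string;
  base: 'sd-1' | 'sdxl';
  type: 'ControlNet' | 'T2I Adapter';
};

const groupByBaseAndType = (models: ModelConfig[]): Map<string, ModelConfig[]> => {
  const groups = new Map<string, ModelConfig[]>();
  for (const model of models) {
    const key = `${model.base.toUpperCase()} / ${model.type}`; // e.g. "SD-1 / ControlNet"
    const group = groups.get(key) ?? [];
    group.push(model);
    groups.set(key, group);
  }
  return groups;
};
```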
19f5a9c3a9 feat(ui): better invoke button checks
- Improved/more thorough checking before invoking for control layers
- Improved styling for the tooltip
2024-05-13 08:29:31 +10:00
d9ce9c62ac feat(ui): disable invoke button when t2i adapter used w/ image dims that are not multiples of 64 2024-05-13 08:29:31 +10:00
cdc468a38c Merge branch 'main' into psyche/fix/ui/depth-anything-select 2024-05-13 03:57:47 +05:30
2656f13a4a fix(ui): CA processor cancellation
When a control adapter processor config is changed, if we were already processing an image, that batch is immediately canceled. This prevents the processed image from getting stuck in a weird state if you change or reset the processor at the right (err, wrong?) moment.

- Update internal state for control adapters to track processor batches, instead of just having a flag indicating if the image is processing. Add a slice migration to not break the user's existing app state.
- Update preprocessor listener with more sophisticated logic to handle canceling the batch and resetting the processed image when the config changes or is reset.
- Fixed error handling that erroneously showed "failed to queue graph" errors when an active listener instance is canceled, need to check the abort signal.
2024-05-13 08:23:02 +10:00
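A rough sketch of the cancel-then-requeue flow described above - all names are hypothetical:

```ts
declare function cancelBatch(batchId: string): Promise<void>;
declare function enqueueProcessorBatch(config: unknown, signal: AbortSignal): Promise<string>;

const onProcessorConfigChanged = async (
  state: { pendingBatchId: string | null },
  config: unknown,
  signal: AbortSignal,
): Promise<void> => {
  if (state.pendingBatchId) {
    // Immediately cancel the stale in-flight batch.
    await cancelBatch(state.pendingBatchId);
    state.pendingBatchId = null;
  }
  try {
    state.pendingBatchId = await enqueueProcessorBatch(config, signal);
  } catch (e) {
    // A canceled listener aborts its own signal; that's not a
    // "failed to queue graph" error, so don't surface it.
    if (!signal.aborted) throw e;
  }
};
```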
da61396b1c cleanup: seamless unused older code cleanup 2024-05-13 08:11:08 +10:00
6c9fb617dc fix: fix seamless 2024-05-13 08:11:08 +10:00
5dd73fe53e fix(ui): jank in depthanything model size select 2024-05-10 09:52:30 +10:00
e6793be465 fix(ui): disable listening on CA and II layers
Closes #6331
2024-05-10 06:42:53 +10:00
63e62c5720 Update INSTALL_REQUIREMENTS.md - 'linux only' under AMD for SDXL.
Moved 'Linux only.' back from under NVIDIA to under AMD for the SDXL hardware requirements.
2024-05-09 10:56:23 -04:00
0848cb8ebd Update invokeai_version.py 2024-05-09 08:01:40 -04:00
1b777bb972 Revert "feat(ui): negative prompt boxes are italicized"
This reverts commit 49c4704379.
2024-05-09 07:52:52 -04:00
029ee90351 docs(ui): add comment & TODO for konva bug 2024-05-09 07:52:52 -04:00
2f9a064d48 feat(ui): ip adapter layers are selectable
This is largely an internal change, and it should have been this way from the start - less tip-toeing around layer types. The user-facing change is when you click an IP Adapter layer, it is highlighted. That's it.
2024-05-09 07:52:52 -04:00
b180666497 feat(ui): disable spellcheck on prompt boxes
These are almost guaranteed to have non-English words - disable the spellcheck to prevent red squigglies.
2024-05-09 07:52:52 -04:00
4740cd4f64 feat(ui): add "global" to global prompt placeholders 2024-05-09 07:52:52 -04:00
8b51298ba1 feat(ui): negative prompt boxes are italicized 2024-05-09 07:52:52 -04:00
1533429e54 feat(ui): optimized empty mask logic
Turns out, it's more efficient to just use the bbox logic for empty mask calculations. We already track if the bbox needs updating, so this calculation does minimal work.

The dedicated calculation wasn't able to use the bbox tracking so it ran far more often than the bbox calculation.

Removed the "fast" bbox calculation logic, bc the new logic means we are continually updating the bbox in the background - not only when the user switches to the move tool and/or selects a layer.

The bbox calculation logic is split out from the bbox rendering logic to support this.

Result - better perf overall, with the empty mask handling retained.
2024-05-09 07:52:52 -04:00
fc000214a5 feat(ui): check for transparency and clear masks if no pixel data
Mask vector data includes additive (brush, rect) shapes and subtractive (eraser) shapes. A different composite operation is used to draw a shape, depending on whether it is additive or subtractive.

This means that a mask may have vector objects, but once rendered, is _visually_ empty (fully transparent). The only way to determine if a mask is visually empty is to render it and check every pixel.

When we generate and save layer metadata, these fully erased masks are still used. Generating with an empty mask is a no-op in the backend, so we want to avoid this and not pollute graphs/metadata.

Previously, we did that pixel-based check when calculating the bbox, which we only did when using the move tool, and only for the selected layer.

This change introduces a simpler function to check if a mask is transparent, and if so, deletes all its objects to reset it. This allows us skip these no-op layers entirely.

This check is debounced to 300 ms, trailing edge only.
2024-05-09 07:52:52 -04:00
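A minimal sketch of a pixel-level transparency check using standard canvas APIs; the 300 ms trailing-edge debounce would live elsewhere:

```ts
// Render the mask offscreen, then scan alpha values; if every pixel is
// fully transparent, the mask is visually empty.
const isCanvasTransparent = (canvas: HTMLCanvasElement): boolean => {
  const ctx = canvas.getContext('2d');
  if (!ctx) return true;
  const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);
  // Alpha is every 4th byte in RGBA data.
  for (let i = 3; i < data.length; i += 4) {
    if (data[i] !== 0) return false;
  }
  return true;
};
```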
f631aea4ee fix(ui): skip RG layers with no mask
These do not need to be added to the graph or metadata, as they are no-ops on the backend.
2024-05-09 07:52:52 -04:00
32f4c1f966 fix(ui): memoize mouse event handlers
This prevents resetting the stage event handlers on every frame. Whoops!
2024-05-09 07:52:52 -04:00
adebe639e3 tidy(ui): remove errant console.logs 2024-05-09 07:52:52 -04:00
44280ed472 fix(ui): layer recall uses fresh ids
When layer metadata is stored, the layer IDs are included. When recalling the metadata, we need to assign fresh IDs, else we can end up with multiple layers with the same ID, which of course causes all sorts of issues.
2024-05-09 07:52:52 -04:00
cec8840038 fix(ui): handle disabled RG layers
Was missing a check for `layer.isEnabled`.
2024-05-09 07:52:52 -04:00
b48d4a049d bad implementation of diffusers folder download 2024-05-08 21:21:01 -07:00
fc7f484935 feat(ui): add data-testids to control layers components:
- Add Layer Menu Button: `control-layers-add-layer-menu-button`
- Delete All Layers Button: `control-layers-delete-all-layers-button`
- CL Layer List: `control-layers-layer-list`
- CL Canvas: `control-layers-canvas`
- Toggle Metadata Button: `toggle-show-metadata-button`
- Toggle Progress Button: `toggle-show-progress-button`
- Toggle Viewer Menu Button: `toggle-viewer-menu-button`
- Settings Tab Button: `generation-tab-settings-tab-button`
- Control Layers Tab Button: `generation-tab-control-layers-tab-button`
2024-05-09 07:03:13 +10:00
1aa7cd57c2 feat(ui): add invert brush scroll checkbox to control layers settings 2024-05-09 07:03:13 +10:00
722a91aedb fix(ui): canvas toolbar centering 2024-05-09 07:03:13 +10:00
03c24ca9cb lint fix 2024-05-08 15:49:37 -04:00
5820579237 switch to generation tab when someone sends to img2img 2024-05-08 15:49:37 -04:00
6c768bfe7e fix(ui): viewer toggle prevents progress toggle interaction 2024-05-08 08:39:18 -04:00
5ca794b94f feat(ui): show progress toggle on control layers toolbar 2024-05-08 08:39:18 -04:00
d20695260d feat(ui): open viewer on enqueue from generation tab 2024-05-08 08:39:18 -04:00
d8557d573b Revert "feat(ui): extend zod with an `is` typeguard method"
This reverts commit 0f45933791.
2024-05-08 08:39:18 -04:00
6c1fd584d2 feat(ui): pre-CL control adapter metadata recall 2024-05-08 08:39:18 -04:00
e8e764be20 feat(ui): revise image viewer
- Viewer only exists on Generation tab
- Viewer defaults to open
- When clicking the Control Layers tab on the left panel, close the viewer (i.e. open the CL editor)
- Do not switch to editor when adding layers (this is handled by clicking the Control Layers tab)
- Do not open viewer when single-clicking images in gallery
- _Do_ open viewer when _double_-clicking images in gallery
- Do not change viewer state when switching between app tabs (this no longer makes sense; the viewer only exists on generation tab)
- Change the button to a drop down menu that states what you are currently doing, e.g. Viewing vs Editing
2024-05-08 08:39:18 -04:00
e8023c44b0 chore(ui): lint 2024-05-08 08:39:18 -04:00
a3a6449786 feat(ui): versioned control layers metadata 2024-05-08 08:39:18 -04:00
e9d2ffe3d7 fix(ui): process control image on recall if no processed image 2024-05-08 08:39:18 -04:00
23ad6fb730 feat(ui): handle missing images/models when recalling control layers 2024-05-08 08:39:18 -04:00
00f36cb491 tidy(ui): clean up control layers graph builder 2024-05-08 08:39:18 -04:00
3f489c92c8 feat(ui): handle initial image layers in control layers helper 2024-05-08 08:39:18 -04:00
f147f99bef feat(ui): better metadata labels for layers 2024-05-08 08:39:18 -04:00
6107e3d281 fix(ui): fix zControlAdapterBase schema weight 2024-05-08 08:39:18 -04:00
de33d6e647 fix(ui): metadata "Layers" -> "Layer" 2024-05-08 08:39:18 -04:00
e36e5871a1 chore(ui): lint 2024-05-08 08:39:18 -04:00
8b25c1a62e tidy(ui): remove extraneous metadata handlers 2024-05-08 08:39:18 -04:00
dfbd7eb1cf feat(ui): individual layer recall 2024-05-08 08:39:18 -04:00
b43b2714cc feat(ui): add fracturedjsonjs to pretty-serialize objects
In use on the metadata viewer - makes it sooo much easier on the eyes.
2024-05-08 08:39:18 -04:00
e537de2f6d feat(ui): layers recall
This still needs some finessing - needs logic depending on the tab...
2024-05-08 08:39:18 -04:00
ccd399e277 feat(ui): add getIsVisible to metadata handlers 2024-05-08 08:39:18 -04:00
bfad814862 fix(ui): fix IPAdapterConfigV2 schema weight 2024-05-08 08:39:18 -04:00
6e8b7f9421 feat(ui): write layers to metadata 2024-05-08 08:39:18 -04:00
e47629cbe7 feat(ui): add zod schema for layers array 2024-05-08 08:39:18 -04:00
e840de27ed feat(ui): extend zod with an `is` typeguard method
Feels dangerous, but it's very handy.
2024-05-08 08:39:18 -04:00
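A standalone take on the same idea, assuming zod - a helper function rather than an actual extension of zod's prototype:

```ts
import { z } from 'zod';

// Narrow an unknown value against a schema without throwing.
const is = <T extends z.ZodTypeAny>(schema: T, value: unknown): value is z.infer<T> =>
  schema.safeParse(value).success;

// Usage: `data` is typed inside the guard.
const zLayer = z.object({ id: z.string(), isEnabled: z.boolean() });
const handle = (data: unknown): void => {
  if (is(zLayer, data)) {
    console.log(data.id); // typed as { id: string; isEnabled: boolean }
  }
};
```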
8342f32f2e refactor(ui): rewrite all types as zod schemas
This change prepares for safe metadata recall.
2024-05-08 08:39:18 -04:00
a7aa529b99 tidy(ui): "imageName" -> "name" 2024-05-08 08:39:18 -04:00
4adc592657 feat(ui): move strength to init image layer
This further splits the control layers state into its own thing.
2024-05-07 11:02:16 +10:00
e8d60e8d83 fix(ui): image metadata viewer stuck when spamming hotkey 2024-05-07 11:02:16 +10:00
886f5c90a3 feat(ui): move img2img strength out of advanced on canvas 2024-05-07 11:02:16 +10:00
5e684c11f1 Update invokeai_version.py 2024-05-07 09:09:10 +10:00
72ce239592 revert(ui): remove floating viewer
There are unresolved platform-specific issues with this component, and its utility is debatable.

Should be easy to just revert this commit to add it back in the future if desired.
2024-05-06 19:00:07 -04:00
a826f8f8c5 fix(ui): show total layer count in control layers tab 2024-05-06 19:00:07 -04:00
b6c19a8e47 feat(ui): close viewer when adding a RG layer 2024-05-06 19:00:07 -04:00
67d6cf19c6 fix(ui): switch to viewer if auto-switch is enabled 2024-05-06 19:00:07 -04:00
a9bf651c69 chore(ui): bump all deps 2024-05-06 19:00:07 -04:00
3bd5d9a8e4 fix(ui): memoize FloatingImageViewer
Maybe this will fix @JPPhoto's issue?
2024-05-06 19:00:07 -04:00
6249982d82 fix(ui): stuck viewer when spamming toggle
There are a number of bugs with `framer-motion` that can result in sync issues with AnimatePresence and the conditionally rendered component.

You can see this if you rapidly click an accordion, occasionally it gets out of sync and is closed when it should be open.

This is a bigger problem with the viewer where the user may hold down the `z` key. It's trivial to get it to lock up.

For now, just remove the animation entirely.

Upstream issues for reference:
https://github.com/framer/motion/issues/2023
https://github.com/framer/motion/issues/2618
https://github.com/framer/motion/issues/2554
2024-05-06 19:00:07 -04:00
6b98dba71d chore(ui): lint 2024-05-06 08:55:32 -04:00
c0065a65a0 feat(ui): floating viewer always shows progress, never shows metadata 2024-05-06 08:55:32 -04:00
cce3144c74 feat(ui): add floating image viewer 2024-05-06 08:55:32 -04:00
aab152a7e9 fix(ui): track mouse out flags correctly 2024-05-06 08:55:32 -04:00
c5b948bc3f feat(ui): fade layer selection color 2024-05-06 08:55:32 -04:00
44ecddae2e feat(ui): style Settings/Control Layers tabs like tabs 2024-05-06 08:55:32 -04:00
26847895b9 fix(ui): update hotkeys for viewer 2024-05-06 08:55:32 -04:00
f211c95dbc move access token regex matching into download queue 2024-05-05 21:00:31 -04:00
8e5e9b53d6 Merge branch 'main' into lstein/feat/simple-mm2-api 2024-05-04 17:01:15 -04:00
e4a640f0a7 feat(ui): optimized rendering of selected layer
Instead of caching on every stroke, we can use a compositing rect when the layer is being drawn to improve performance.
2024-05-04 12:03:28 -04:00
b5b6a96d94 feat(ui): dynamic brush spacing
Scaled to 10% of brush size, clamped between 5px and 15px. This makes drawing feel a bit smoother, but maintains reasonable performance.
2024-05-04 12:03:28 -04:00
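The spacing rule as a tiny sketch:

```ts
const clamp = (v: number, min: number, max: number): number =>
  Math.min(Math.max(v, min), max);

// Spacing scales with brush size (10%), clamped to 5-15px so strokes stay
// smooth without flooding the line with points.
const getBrushSpacingPx = (brushSizePx: number): number =>
  clamp(brushSizePx * 0.1, 5, 15);
```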
806a8f69c5 perf(ui): rerender of opacity sliders 2024-05-04 12:03:28 -04:00
ac0b9ba290 tidy(ui): $cursorPosition -> $lastCursorPos 2024-05-04 12:03:28 -04:00
7ca613d41c feat(ui): snap cursor pos when drawing rects
- Rects snap to stage edge when within a threshold (10 screen pixels)
- When mouse leaves stage, set last mousedown pos to null, preventing nonfunctional rect outlines

Partially addresses #6306.

There's a technical challenge to fully address the issue - mouse events are not fired when the mouse is outside the stage. While we could draw the rect even if the mouse leaves, we cannot update the rect's dimensions on mouse move, or complete the drawing on mouse up.

To fully address the issue, we'd need a way to forward window events back to the stage, or at least handle window events. We can explore this later.
2024-05-04 12:03:28 -04:00
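A minimal sketch of the edge-snapping rule, using the 10 px threshold from the commit:

```ts
type Vec2 = { x: number; y: number };

// Snap a cursor position to the stage edge when within the threshold.
const snapToStageEdge = (
  pos: Vec2,
  stageWidth: number,
  stageHeight: number,
  threshold = 10,
): Vec2 => ({
  x: pos.x < threshold ? 0 : pos.x > stageWidth - threshold ? stageWidth : pos.x,
  y: pos.y < threshold ? 0 : pos.y > stageHeight - threshold ? stageHeight : pos.y,
});
```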
5cb1ff8679 fix(ui): open viewer on image click, not select 2024-05-04 12:03:28 -04:00
8794b99d51 fix(ui): save upscaled images to gallery on canvas tab 2024-05-03 23:15:10 -04:00
6bdded85da fix(ui): do not auto-hide next/prev image buttons 2024-05-03 23:15:10 -04:00
26613f10c7 feat(ui): close viewer when user switches tabs 2024-05-03 23:15:10 -04:00
6d2fe3b691 tidy(ui): clean up layer reset logic 2024-05-03 23:15:10 -04:00
2888845f7c fix(ui): invalidate mask cache when moving layer 2024-05-03 23:15:10 -04:00
4beccea6e7 fix(ui): do not run HRO if using an initial image 2024-05-03 23:15:10 -04:00
68d1458c83 fix(ui): address feedback 2024-05-04 08:40:12 +10:00
f4dde883ca feat: improve the switch states of the control layers / viewer area 2024-05-04 08:40:12 +10:00
e9a20051bd refactor DWOpenPose and add type hints 2024-05-03 18:08:53 -04:00
be7eeb576b fix(ui): fix viewer getting stuck when spamming toggle 2024-05-03 20:57:18 +10:00
af9f0e0963 feat(ui): cache control layer mask images
When invoking with control layers, we were creating and uploading the mask images on every enqueue, even when the mask didn't change. The mask image can be cached to greatly reduce the number of uploads.

With this change, we are a bit smarter about the mask images:
- Check if there is an uploaded mask image name
- If so, attempt to retrieve its DTO. Typically it will be in the RTKQ cache, so there is no network request, but it will make a network request if not cached to confirm the image actually exists on the server.
- If we don't have an uploaded mask image name, or the request fails, we go ahead and upload the generated blob
- Update the layer's state with a reference to this uploaded image for next time
- Continue as before

Any time we modify the mask (drawing/erasing, resetting the layer), we invalidate that cached image name (set it to null).

We now only upload images when we need to and generation starts faster.
2024-05-03 20:57:18 +10:00
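A rough sketch of the caching flow - the helper names are hypothetical:

```ts
declare function getImageDTO(name: string): Promise<{ image_name: string } | null>;
declare function uploadBlob(blob: Blob): Promise<{ image_name: string }>;

const getMaskImageName = async (
  layer: { uploadedMaskImageName: string | null },
  renderMaskBlob: () => Promise<Blob>,
): Promise<string> => {
  if (layer.uploadedMaskImageName) {
    // Usually served from the RTKQ cache; falls through to upload if the
    // image no longer exists on the server.
    const dto = await getImageDTO(layer.uploadedMaskImageName).catch(() => null);
    if (dto) return dto.image_name;
  }
  const uploaded = await uploadBlob(await renderMaskBlob());
  layer.uploadedMaskImageName = uploaded.image_name; // remember for next enqueue
  return uploaded.image_name;
};
```

Drawing, erasing, or resetting the layer would set `uploadedMaskImageName` back to null, invalidating the cache.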
3cba53533d Update README.md 2024-05-03 17:31:50 +10:00
ab87511a03 Update INSTALLATION.md 2024-05-03 17:31:50 +10:00
af868b0ea6 Update 010_INSTALL_AUTOMATED.md 2024-05-03 17:31:50 +10:00
960eae8255 Update TRAINING.md 2024-05-03 17:30:42 +10:00
0787c6c746 Update invokeai_version.py 2024-05-03 13:23:19 +10:00
579d436934 fix(ui): floating param/gallery buttons 2024-05-02 23:09:26 -04:00
36f01988e8 chore(ui): lint 2024-05-02 23:09:26 -04:00
d9b92d19f9 feat(ui): clearer viewer/editor context switching 2024-05-02 23:09:26 -04:00
fdfc379a84 fix(ui): layer counts 2024-05-02 23:09:26 -04:00
2062cfe84a fix(ui): cursor when no renderable layers added 2024-05-02 23:09:26 -04:00
eb36e834b2 feat(ui): add fallback when no layers exist 2024-05-02 23:09:26 -04:00
2baa33730a fix(ui): fix control layer list layout 2024-05-02 23:09:26 -04:00
c30df7ce79 feat(ui): style settings/control layers tabs 2024-05-02 23:09:26 -04:00
f05ac5a7a5 chore(ui): bump @invoke-ai/ui-library 2024-05-02 23:09:26 -04:00
85dd78b8df fix(ui): handle deleting images in use in generation tab 2024-05-02 23:09:26 -04:00
4c7be03702 tidy(ui): rename generation tab graph builders 2024-05-02 23:09:26 -04:00
e354fee4f4 fix(ui): add img2img metadata to graphs 2024-05-02 23:09:26 -04:00
20e628297c fix(ui): smoother animations in current image preview 2024-05-02 23:09:26 -04:00
98664fc46f fix(ui): gallery prev/next buttons animations 2024-05-02 23:09:26 -04:00
33617fc06a feat(ui): rework image viewer
- Rework styling
- Replace "CurrentImageDisplay" entirely
- Add a super short fade to reduce jarring transition
- Make the viewer a singleton component, overlaid on everything else - reduces change when switching tabs
2024-05-02 23:09:26 -04:00
c05e52ebae fix(ui): do not delete all layers when using image as initial image 2024-05-02 23:09:26 -04:00
5734a97c55 fix(ui): do not attempt drawing when invalid layer type selected 2024-05-02 23:09:26 -04:00
94a73d5377 feat(ui): update mm-related translations 2024-05-02 23:09:26 -04:00
0f7fdabe9b feat(ui): rename tab identifiers
- "txt2img" -> "generation"
- "unifiedCanvas" -> "canvas"
- "modelManager" -> "models"
- "nodes" -> "workflows"
- Add UI slice migration setting the active tab to "generation"
2024-05-02 23:09:26 -04:00
7c1f1076b4 feat(ui): rename tabs
- "Text to Image" -> "Generation"
- "Unified Canvas" -> "Canvas"
- "Model Manager" -> "Models"
2024-05-02 23:09:26 -04:00
a6ac184211 tidy(ui): excise img2img tab 2024-05-02 23:09:26 -04:00
7d58908e32 fix(ui): fix img2img graphs w/ control layers 2024-05-02 23:09:26 -04:00
26d3ec3fce fix(ui): destroy initial image layer after deleting 2024-05-02 23:09:26 -04:00
dc81357152 feat(ui): add img2img via control layers to graph builders 2024-05-02 23:09:26 -04:00
c9886796f6 feat(ui): add image viewer overlay
- Works on txt2img, canvas and workflows tabs, img2img has its own side-by-side view
- In workflow editor, the viewer is closeable only if you are in edit mode, else it's always there
- Press `i` to open
- Press `esc` to close
- Selecting an image or changing image selection opens the viewer
- When generating, if auto-switch to new image is enabled, the viewer opens when an image comes in

To support this change, I organized and restructured some tab stuff.
2024-05-02 23:09:26 -04:00
209ddc2037 fix(ui): do not toggle layers on double click of opacity popover 2024-05-02 23:09:26 -04:00
8b6a283eab feat(ui): add opacity to initial image layer 2024-05-02 23:09:26 -04:00
75be6814bb feat(ui): add renderer for initial image 2024-05-02 23:09:26 -04:00
1d213067e8 feat(ui): add initial image layer to CL 2024-05-02 23:09:26 -04:00
d67480d92c feat(ui): add layerwrapper component 2024-05-02 23:09:26 -04:00
d55ea318ec tidy(ui): remove unused gallery hotkeys 2024-05-02 23:09:26 -04:00
474eab6f8a fix(ui): clamp incoming w/h to ensure always a multiple of 8
When recalling metadata and/or using control image dimensions, it was possible to set a width or height that was not a multiple of 8, resulting in generation failures.

Added a `clamp` option to the w/h actions to fix this. The option is used for all untrusted sources - everything except for the w/h number inputs, which clamp the values themselves.
2024-05-02 23:09:26 -04:00
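A minimal sketch of such a clamp; rounding to the nearest multiple is an assumption about the exact behavior:

```ts
// Round untrusted width/height values to the nearest multiple of 8,
// with a floor so dimensions stay valid.
const clampToMultipleOf8 = (value: number): number =>
  Math.max(8, Math.round(value / 8) * 8);

// e.g. clampToMultipleOf8(515) === 512
```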
1b13fee256 fix(ui): firefox drawing lag
Firefox v125.0.3 and below has a bug where `mouseenter` events are fired continually during mouse moves. The issue isn't present on FF v126.0b6 Developer Edition. It's not clear if the issue is present on FF nightly, and we're not sure if it will actually be fixed in the stable v126 release.

The control layers drawing logic relied on `mouseenter` events to create new lines, and `mousemove` to extend existing lines. On the affected version of FF, all line extensions are turned into new lines, resulting in very poor performance, noncontiguous lines, and way-too-big internal state.

To resolve this, the drawing handling was updated to not use `mouseenter` at all. As a bonus, resolving this issue has resulted in simpler logic for drawing on the canvas.
2024-05-02 23:09:26 -04:00
6363095b29 feat(ui): control adapter recall for control layers
- Add set of metadata handlers for the control layers CAs
- Use these conditionally depending on the active tab - when recalling on txt2img, the CAs go to control layers, else they go to the old CA area.
2024-05-02 23:09:26 -04:00
4cd78b9478 feat(ui): add getImageDTO imperative RTKQ helper 2024-05-02 23:09:26 -04:00
2cde8a643e tidy(ui): suffix a control adapter types/objects with V2
Prevent mixing the old and new implementations up
2024-05-02 23:09:26 -04:00
f9555f03f5 tidy(ui): "CONTROLNET_PROCESSORS" -> "CA_PROCESSOR_DATA" 2024-05-02 23:09:26 -04:00
b1d8f3a3f9 tidy(ui): revert changes to old CA implementation
These changes were left over from the previous attempt to handle control adapters in control layers with the same logic. Control Layers are now handled totally separately, so these changes may be reverted.
2024-05-02 23:09:26 -04:00
38df6f3702 fix ruff error 2024-05-02 21:22:33 -04:00
3b64e7a1fd Merge branch 'main' into lstein/feat/simple-mm2-api 2024-05-02 21:20:35 -04:00
33a9f9a4dc fix(nodes): fix constraints in cnet processors
There were some invalid constraints with the processors - minimum of 0 for resolution or multiple of 64 for resolution.

Made the minimum 1px and removed the multiple-of constraints.
2024-05-02 12:24:04 +10:00
c35625eb44 feat(ui): processor layout changes 2024-05-01 21:48:47 -04:00
6f572e1cce fix(ui): convert t2i to cnet and vice-versa when model changes 2024-05-01 21:48:47 -04:00
54acd3f2b1 ci(ui): restore error status for circular deps 2024-05-01 21:48:47 -04:00
6e966909ab chore(ui): lint 2024-05-01 21:48:47 -04:00
311ba8c04b fix(ui): ensure canvas size is correctly updated when model changed
Closes #6293
2024-05-01 21:48:47 -04:00
1b617768cf fix(ui): canvas infinite loop when setting bbox dims
When typing a number into the w/h number inputs, if the number is less than the step, it appears the value of 0 is used. This is unexpected; it means Chakra isn't clamping the value correctly (or maybe our wrapper isn't clamping it).

Add checks to bail if the width or height value from the number input component is 0.
2024-05-01 21:48:47 -04:00
8ceb94497e fix(ui): fix canvas rendering of control images 2024-05-01 21:48:47 -04:00
efb571401c feat(ui): tweak control adapter layout 2024-05-01 21:48:47 -04:00
ffba4871d0 tidy(ui): "scribble" -> "Scribble" 2024-05-01 21:48:47 -04:00
9437d701b2 fix(ui): disable clear processor when no processor selected 2024-05-01 21:48:47 -04:00
6effa19626 fix(ui): edge cases in auto-process 2024-05-01 21:48:47 -04:00
45c2ac41d5 feat(ui): processor layout/styling 2024-05-01 21:48:47 -04:00
ca1c3c0873 fix(ui): do not re-process if processor config hasn't changed 2024-05-01 21:48:47 -04:00
47ee08db91 fix(ui): processor select styling 2024-05-01 21:48:47 -04:00
c96b98fc9e feat(ui): auto-process for control layer CAs 2024-05-01 21:48:47 -04:00
905baf2787 refactor(ui): continue wiring up CA logic across (wip)
It works!
2024-05-01 21:48:47 -04:00
0e55488ff6 refactor(ui): wire up CA logic across (wip) 2024-05-01 21:48:47 -04:00
424a27eeda refactor(ui): add CA processor config components (wip) 2024-05-01 21:48:47 -04:00
6007218a51 refactor(ui): add CA config components (wip) 2024-05-01 21:48:47 -04:00
811e8a5a8b refactor(ui): rename & export actions from CL slice 2024-05-01 21:48:47 -04:00
121918352a refactor(ui): add control layers separate control adapter implementation (wip)
- Revise control adapter config types
- Recreate all control adapter mutations in control layers slice
- Bit of renaming along the way - typing 'RegionalGuidanceLayer' over and over again was getting tedious
2024-05-01 21:48:47 -04:00
3717321480 tidy(ui): organize layer components 2024-05-01 21:48:47 -04:00
4a250bdf9c Add TCD scheduler (#6086)
Adds the TCD scheduler to better support https://huggingface.co/h1t/TCD-SDXL-LoRA or checkpoints that have been made with TCD

Example:
TCD Lora with Euler A

![b0ad6174-cd2b-49fe-ae42-3a83bc6ae571](https://github.com/invoke-ai/InvokeAI/assets/82827604/d823cb2f-4d9c-4f93-9fc2-e63773a378b6)

TCD Lora with TCD scheduler

![74495a51-eeac-45e6-9983-fb6551a5bdef](https://github.com/invoke-ai/InvokeAI/assets/82827604/c87604d8-a44e-4fb9-a7be-ef2600784727)
2024-05-01 12:57:01 +05:30
dce8b88aaf fix: change eta only for TCD Scheduler 2024-05-01 12:47:46 +05:30
1bdcbe3284 cleanup: use dict update to actually update the scheduler keyword args 2024-05-01 12:22:39 +05:30
49c84cd423 Merge branch 'main' into lstein/feat/simple-mm2-api 2024-04-30 18:13:42 -04:00
88ac3bc7f0 Merge branch 'main' into main 2024-04-30 16:51:44 -04:00
abb3bb9f7e Update invokeai_version.py 2024-05-01 06:30:28 +10:00
2ddb82200c fix: Manually update eta(gamma) to 1.0 for TCDScheduler
seems to work best with invoke at 4 steps
2024-05-01 01:20:53 +05:30
38880cde5c chore: update schema 2024-05-01 01:20:22 +05:30
39ab4dd83e Merge branch 'main' into pr/6086 2024-05-01 00:37:06 +05:30
631878b212 feat(ui): border radius on canvas 2024-04-30 08:10:59 -04:00
7a5399e83c feat(ui): display message when no layers are added 2024-04-30 08:10:59 -04:00
e90775731d fix(ui): layer layout orientation 2024-04-30 08:10:59 -04:00
3f26880493 fix(ui): "Global Settings" -> "Settings" 2024-04-30 08:10:59 -04:00
21cf1004db fix(ui): layers default to expanded 2024-04-30 08:10:59 -04:00
d74cd12aa6 feat(ui): collapsible layers 2024-04-30 08:10:59 -04:00
cf1883585d chore(ui): lint 2024-04-30 08:10:59 -04:00
8a791d4f16 feat(ui): make control image opacity filter toggleable 2024-04-30 08:10:59 -04:00
1212698059 tidy(ui): more renaming of components 2024-04-30 08:10:59 -04:00
ba6db33b39 tidy(ui): more renaming of components 2024-04-30 08:10:59 -04:00
b3dbfdaa02 tidy(ui): more renaming of components 2024-04-30 08:10:59 -04:00
3441187c23 tidy(ui): "regional prompts" -> "control layers" 2024-04-30 08:10:59 -04:00
8de56fd77c tidy(ui): move regionalPrompts files to controlLayers 2024-04-30 08:10:59 -04:00
22bd33b7c6 chore(ui): lint 2024-04-30 08:10:59 -04:00
2af5c4be9f fix(ui): ip adapter layers are not selectable 2024-04-30 08:10:59 -04:00
415a41e21a perf(ui): reset maskobjects when layer has no bbox (all objects erased) 2024-04-30 08:10:59 -04:00
aa2ca03056 fix(ui): filter layers based on tab when disabling invoke button 2024-04-30 08:10:59 -04:00
a20faca20f feat(ui): layer layout tweaks 2024-04-30 08:10:59 -04:00
9d042baf48 fix(ui): ip adapter layers always at bottom of list 2024-04-30 08:10:59 -04:00
6195741814 feat(ui): move global mask opacity to settings popover 2024-04-30 08:10:59 -04:00
c2f8adf93e fix(ui): deselect other layers when new layer added 2024-04-30 08:10:59 -04:00
ace3955760 fix(ui): tool preview/cursor when non-interactable layer selected 2024-04-30 08:10:59 -04:00
720e16cea6 feat(ui): tweak layer list styling to better indicate selectablility 2024-04-30 08:10:59 -04:00
a357a1ac9d feat(ui): remove select layer on click in canvas
It's very easy to end up in a spot where you cannot select a layer at all to move it around. Too tricky to handle otherwise.
2024-04-30 08:10:59 -04:00
22f160bfcc fix(ui): unlink control adapter opacity from global mask opacity 2024-04-30 08:10:59 -04:00
fa637b5c59 fix(ui): add missed ca layer opacity logic
didn't stage the right changes a few commits back
2024-04-30 08:10:59 -04:00
1f68a60752 feat(ui): hold shift to use control image size w/o model constraints 2024-04-30 08:10:59 -04:00
048bd18e10 feat(ui): separate ca layer opacity 2024-04-30 08:10:59 -04:00
e5ec529f0f feat(ui): fix layer arranging 2024-04-30 08:10:59 -04:00
d884c15d0c feat(ui): update layer menus 2024-04-30 08:10:59 -04:00
9ee7cad613 feat(ui): make control layer ui exclusive to txt2img tab 2024-04-30 08:10:59 -04:00
629110784d fix(ui): delete control layers correctly 2024-04-30 08:10:59 -04:00
c1666a8b5a fix(ui): select default control/ip adapter models in control layers 2024-04-30 08:10:59 -04:00
d14b315bc6 fix(ui): use optimal size when using control image dims 2024-04-30 08:10:59 -04:00
fe459295ea fix(ui): exclude disabled control adapters on control layers 2024-04-30 08:10:59 -04:00
9d67ec9efe fix(ui): toggle control adapter layer vis 2024-04-30 08:10:59 -04:00
5bf4d37949 perf(ui): reduce control image processing to when it is needed
Only should reprocess if the processor settings or the image has changed.
2024-04-30 08:10:59 -04:00
387ab9cee7 feat(ui): reset controlnet model to null instead of disabling when base model changes 2024-04-30 08:10:59 -04:00
56050f7887 fix(ui): fix canvas scaling when window is zoomed
Konva doesn't react to changes to window zoom/scale. If you open the tab at, say, 90%, then bump to 100%, the pixel ratio of the canvas doesn't change. This results in lower-quality renders on the canvas (generation is unaffected).
2024-04-30 08:10:59 -04:00
c354470cd1 perf(ui): do not cache controlnet images unless required 2024-04-30 08:10:59 -04:00
ded8267505 WIP control adapters in regional 2024-04-30 08:10:59 -04:00
e822897b1c feat(nodes): add prototype heuristic image resize node
Uses the fancy cnet resize that retains edges.
2024-04-30 08:10:59 -04:00
2d7b8c2a1b fix(backend): do not round image dims to 64 in controlnet processor resize
Rounding the dims results in control images that are subtly different than the input. We round to the nearest 8px later, there's no need to round now.
2024-04-30 08:10:59 -04:00
ebeae41cb2 tidy(ui): minor ca component tidy 2024-04-30 08:10:59 -04:00
6f5f3381f9 feat(ui): revise internal state for RCC 2024-04-30 08:10:59 -04:00
2f6fec8c6c chore(ui): lint 2024-04-30 08:10:59 -04:00
cc4bef4859 refactor(ui): move size state to regional 2024-04-30 08:10:59 -04:00
b6a45e53f1 refactor(ui): move positive2 and negative2 prompt to regional 2024-04-30 08:10:59 -04:00
1cf1e53a6c refactor(ui): move positive and negative prompt to regional 2024-04-30 08:10:59 -04:00
c686625076 feat(ui): add 'control_layer' type 2024-04-30 08:10:59 -04:00
d861bc690e feat(mm): handle PC_PATH_MAX on external drives on macOS
`PC_PATH_MAX` doesn't exist for (some?) external drives on macOS. We need error handling when retrieving this value.

Also added error handling for `PC_NAME_MAX` just in case. This does work for me for external drives on macOS, though.

Closes #6277
2024-04-30 07:57:03 -04:00
1fe90c357c feat(backend): lift managed model loading out of depthanything class 2024-04-29 08:56:00 +10:00
fcb071f30c feat(backend): lift managed model loading out of lama class 2024-04-29 08:12:51 +10:00
57c831442e fix safe_filename() on windows 2024-04-28 14:42:40 -04:00
f65c7e2bfd Merge branch 'main' into lstein/feat/simple-mm2-api 2024-04-28 13:42:54 -04:00
7c39929758 support VRAM caching of dict models that lack to() 2024-04-28 13:41:06 -04:00
f262b9032d fix: changed validation to not error on connection 2024-04-28 12:48:56 -04:00
71c3197eab fix: denoise latents accepts CFG lists as input 2024-04-28 12:48:56 -04:00
a26667d3ca make download and convert cache keys safe for filename length 2024-04-28 12:24:36 -04:00
bb04f496e0 Merge branch 'main' into lstein/feat/simple-mm2-api 2024-04-28 11:33:26 -04:00
70903ef057 refactor load_ckpt_from_url() 2024-04-28 11:33:23 -04:00
241a1fdb57 feat(mm): support sdxl ckpt inpainting models
There are only a couple SDXL inpainting models, and my tests indicate they are not as good as SD1.5 inpainting, but at least we support them now.

- Add the config file. This matches what is used in A1111. The only difference from the non-inpainting SDXL config is the number of in-channels.
- Update the legacy config maps to use this config file.
2024-04-28 12:57:27 +10:00
3595beac1e docs: remove references to config script in CONFIGURATION.md 2024-04-25 17:49:32 -04:00
caa7c0f2bd docs: more pruning and tidying readme 2024-04-26 00:00:18 +10:00
d546823c4d docs: pruning and tidying readme 2024-04-26 00:00:18 +10:00
dac2d78da6 Update README.md 2024-04-26 00:00:18 +10:00
d72f272f16 Address change requests in first round of PR reviews.
Pending:

- Move model install calls into model manager and create passthrus in invocation_context.
- Consider splitting load_model_from_url() into a call to get the path and a call to load the path.
2024-04-24 23:53:30 -04:00
398f37c0ed tidy(backend): clean up controlnet_utils
- Use our adaptation of the HWC3 function with better types
- Extract some of the util functions, name them better, add comments
- Improve type annotations
- Remove unreachable codepaths
2024-04-25 13:20:09 +10:00
6b0bf59682 feat(backend): update nms util to make blur/thresholding optional 2024-04-25 13:20:09 +10:00
5b8f77f990 tidy(nodes): move cnet mode literals to utils
Now they can be used in type signatures without circular imports.
2024-04-25 13:20:09 +10:00
3207822738 Update invokeai_version.py 2024-04-25 12:31:59 +10:00
8d86fabf4b chore(ui): lint 2024-04-24 20:09:52 +10:00
af3e910ad3 fix(ui): fix layer arrangement 2024-04-24 20:09:52 +10:00
af25d00964 tidy(ui): use const for brush spacing 2024-04-24 20:09:52 +10:00
d4a30d08ef feat(ui): create new line when mouse held down, leaves canvas and comes back over 2024-04-24 20:09:52 +10:00
bd8a33e824 tidy(ui): clean up renderer functions
- Split logic to create layers/objects from the updating logic
- Organize and comment functions
2024-04-24 20:09:52 +10:00
b425646b7b chore(ui): lint 2024-04-24 20:09:52 +10:00
293e11cfa6 feat(ui): hide add prompt buttons when user has a prompt 2024-04-24 20:09:52 +10:00
c73aabdfbf feat(ui): regional control defaults to having a positive prompt 2024-04-24 20:09:52 +10:00
ca989c54b0 fix(ui): restore OG aspect ratio preview for non-t2i tabs 2024-04-24 20:09:52 +10:00
260e24733f fix: update SDXL IP Adapter starter model to be ViT-H 2024-04-24 00:08:21 -04:00
bb6e3e726d fix: update ip adapter starter models path (#6262)
2024-04-24 08:58:15 +05:30
6b394554e2 fix: update ip adapter starter models path 2024-04-24 08:48:25 +05:30
ae1955a1a8 feat(ui): update canvas graphs to provide unet 2024-04-23 07:32:53 -04:00
1bef13db37 feat(nodes): restore unet check on CreateGradientMaskInvocation
Special handling for inpainting models
2024-04-23 07:32:53 -04:00
a461537087 chore: ruff 2024-04-23 07:32:53 -04:00
99e28da19b feat(ui): add variant to model edit
Also simplify the layouting for all model view/edit components.
2024-04-23 07:32:53 -04:00
42a159beaa chore(ui): typegen 2024-04-23 07:32:53 -04:00
0aa5aadfe8 fix(mm): move variant to MainConfigBase
shoulda been here all along
2024-04-23 07:32:53 -04:00
2537d260e3 tests: add test for probing diffusers model variant type 2024-04-23 07:32:53 -04:00
bbf919a933 chore: frontend check error 2024-04-23 07:32:53 -04:00
01897ec576 remove extra inputs 2024-04-23 07:32:53 -04:00
bc12d6654e chore: comments and ruff 2024-04-23 07:32:53 -04:00
6d7c8d5f57 remove unet test 2024-04-23 07:32:53 -04:00
38604aa408 update canvas graphs 2024-04-23 07:32:53 -04:00
781de914f4 fix threshold 2024-04-23 07:32:53 -04:00
c094bad233 add unet check in gradient mask node 2024-04-23 07:32:53 -04:00
0063014f2b gradient mask node test for inpaint 2024-04-23 07:32:53 -04:00
d7b5ad02e8 tests: add object serializer test for dangling folders
- Ensure they are deleted on init if ephemeral
- Ensure they are _not_ deleted on init if _not_ ephemeral
2024-04-23 17:12:14 +10:00
2cee436ecf tidy(app): remove unused class 2024-04-23 17:12:14 +10:00
e6386d969f fix(app): only clear tempdirs if ephemeral and before creating tempdir
Also, this cleanup needs to happen in init, before the temp dir is created, else it deletes the temp dir created in init
2024-04-23 17:12:14 +10:00
4b2b983646 tidy(api): reverted unnecessary changes in dependencies.py 2024-04-23 17:12:14 +10:00
53808149fb moved cleanup routine into object_serializer_disk.py 2024-04-23 17:12:14 +10:00
21ba55d0a6 add an initialization function that removes dangling tmpdirs from outputs/tensors 2024-04-23 17:12:14 +10:00
28c28b2fc0 fix: 🐛 handle trigger phrase form submits 2024-04-23 16:42:40 +10:00
8b9c4c62a6 chore: v4.2.0a2 2024-04-23 13:08:26 +10:00
cf637ecaa6 fix(ui): disabled ip adapters applied to regional control 2024-04-23 13:08:26 +10:00
fca718bdd3 tidy(ui): remove extraneous cursor sync 2024-04-23 12:11:47 +10:00
5196a2efec fix(ui): minor canvas overflow 2024-04-23 12:11:47 +10:00
385e93443a feat(ui): rp hotkeys
- Shift+C: Reset selected layer mask (same as canvas)
- Shift+D: Delete selected layer (cannot be Del, that deletes an image in gallery)
- Shift+A: Add layer (cannot be Ctrl+Shift+N, that opens a new window)
- Ctrl/Cmd+Wheel: Brush size (same as canvas)
2024-04-23 12:11:47 +10:00
604217313a chore(ui): lint 2024-04-23 12:11:47 +10:00
229423b370 tidy(ui): memo aspectratiopreview 2024-04-23 12:11:47 +10:00
75a548e3eb perf(ui): debounce render wait = 300ms 2024-04-23 12:11:47 +10:00
24dbb65ebb perf(ui): add brush spacing
Only add a point to the line if the next point is 10 or more px from the last point
2024-04-23 12:11:47 +10:00
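
The spacing rule above is a simple distance threshold. A language-agnostic sketch in Python (the UI itself is TypeScript; names are illustrative):

```python
import math

BRUSH_SPACING_PX = 10  # the commit uses a 10 px threshold

def should_add_point(last_x: float, last_y: float, x: float, y: float) -> bool:
    """Only extend the line once the pointer has moved far enough."""
    return math.hypot(x - last_x, y - last_y) >= BRUSH_SPACING_PX
```
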
c915220965 feat(ui): aspect ratio preview is now the regional prompts canvas 2024-04-23 12:11:47 +10:00
bb37e25ed0 feat(ui): rp ui layout 2024-04-23 12:11:47 +10:00
dda1111f20 Make it alpha 2024-04-22 10:54:21 -04:00
9d71b91b7f chore: v4.2.0b1 2024-04-22 10:54:21 -04:00
714126b832 build(ui): temp disable circular dependency check
I'll need to think about how to fix this properly. For now, disable the check as the UI can still build fine.
2024-04-22 23:09:39 +10:00
a10c66797d chore(ui): lint 2024-04-22 23:09:39 +10:00
6dcaf75b5f feat(ui): regional prompts spray n pray
Trying a lot of different things as I iterated, so this is smooshed into one big commit... too hard to split it now.

- Iterated on IP adapter handling and UI. Unfortunately there is a bug related to undo/redo. The IP adapter state is split across the `controlAdapters` slice and the `regionalPrompts` slice, but only the `regionalPrompts` slice supports undo/redo. If you delete the IP adapter and then undo/redo to a history state where it existed, you'll get an error. The fix is likely to merge the slices... Maybe there's a workaround.
- Iterated on UI. I think the layers are OK now.
- Removed ability to disable RP globally for now. It's enabled if you have enabled RP layers.
- Many minor tweaks and fixes.
2024-04-22 23:09:39 +10:00
018845cda0 tidy(ui): regional prompts kind -> type 2024-04-22 23:09:39 +10:00
8c0a061fa0 fix(ui): hotkeys dependency array 2024-04-20 11:32:08 -04:00
4895875ded feat(ui): rects on regional prompt UI 2024-04-20 11:32:08 -04:00
cfddbda578 tidy(ui): clean up action names 2024-04-20 11:32:08 -04:00
58d3a9e7d4 refactor(ui): revise regional prompts state to support prompt-less mask layers
This structure is more adaptable to future features like IP-Adapter-only regions, controlnet layers, image masks, etc.
2024-04-20 11:32:08 -04:00
a00e703144 feat(nodes): image mask to tensor invocation
Thanks @JPPhoto!
2024-04-20 11:32:08 -04:00
e4024bdeb9 fix(ui): floor all pixel coords
This prevents rendering objects with sub-pixel positioning, which looks soft
2024-04-20 11:32:08 -04:00
944690ac8e feat(ui): remove drag distance on layers 2024-04-20 11:32:08 -04:00
a7d69aa0a9 fix(ui): brush preview cursor jank 2024-04-20 11:32:08 -04:00
15018fdbc0 fix(ui): brush preview not visible after hotkey 2024-04-20 11:32:08 -04:00
31ace9aff8 feat(ui): tool hotkeys for rp 2024-04-20 11:32:08 -04:00
3f4ea30113 fix(ui): fix missing bbox when a layer is empty 2024-04-20 11:32:08 -04:00
7edcadb371 fix(ui): bbox rendered slightly too small 2024-04-20 11:32:08 -04:00
d582203c62 chore(ui): lint 2024-04-20 14:54:49 +10:00
148a6c08ca fix(ui): fix bbox caching 2024-04-20 14:54:49 +10:00
1e904d281a feat(ui): hook up sd1.5 t2i graph to regional prompts 2024-04-20 14:54:49 +10:00
03d9a75720 feat(ui): better rp colors 2024-04-20 14:54:49 +10:00
5edce0a4de perf(ui): caching efficiency 2024-04-20 14:54:49 +10:00
604bf4e9ec perf(ui): use efficient group caching instead of a compositing rect
Seems to be the same speed and it's less complex.
2024-04-20 14:54:49 +10:00
39d036bb37 feat(ui): update move tool to show all bboxes, mouseover bbox strokes 2024-04-20 14:54:49 +10:00
8a69fbd336 perf(ui): more bbox optimizations
- Keep track of whether the bbox needs to be recalculated (e.g. had lines/points added)
- Keep track of whether the bbox has eraser strokes - if yes, we need to do the full pixel-perfect bbox calculation, otherwise we can use the faster getClientRect
- Use comparison rather than Math.min/max in bbox calculation (slightly faster)
- Return `null` if no pixel data at all in bbox
2024-04-20 14:54:49 +10:00
a71ed10b71 perf(ui): more efficient bbox method with smaller minimum offscreen canvas size 2024-04-20 14:54:49 +10:00
9d3978edcf fix(ui): give min dimensions to rp storybook 2024-04-20 14:54:49 +10:00
18e1d74917 fix(ui): group layer color change history 2024-04-20 14:54:49 +10:00
9276ecfd02 feat(ui): rp ui styling/layout 2024-04-19 09:32:56 -04:00
ea527f5fe1 feat(nodes): add beta classification to mask tensor nodes 2024-04-19 09:32:56 -04:00
d43f9732ab feat(ui): rp ui styling 2024-04-19 09:32:56 -04:00
c613839740 feat(ui): use translations for rp features 2024-04-19 09:32:56 -04:00
bb371cfeca feat(ui): minor styling rp 2024-04-19 09:32:56 -04:00
6a5510146c feat(ui): add default rp brush size 2024-04-19 09:32:56 -04:00
9667f77c41 feat(ui): rp editor styling 2024-04-19 09:32:56 -04:00
e93e0612af tidy(ui): selectedLayer -> selectedLayerId 2024-04-19 09:32:56 -04:00
9528287d56 feat(ui): move ephemeral tool state out of redux 2024-04-19 09:32:56 -04:00
14c722c265 tidy(ui): remove unused conditional 2024-04-19 09:32:56 -04:00
4b2cd2da9f feat(ui): remove special handling of main prompt
Until we have a good handle on what works best, leaving this to the user
2024-04-19 09:32:56 -04:00
3c5b728bee feat(ui): add enabled state for RP 2024-04-19 09:32:56 -04:00
9b5c47748d tidy(ui): isRegionalPromptLayer -> isRPLayer 2024-04-19 09:32:56 -04:00
eb781272f7 tidy(ui): organize rp layer components 2024-04-19 09:32:56 -04:00
642a0de3dd feat(ui): organize layer naming
prep for non-rp layer types
2024-04-19 09:32:56 -04:00
f3b4cecf2e feat(ui): invert tensor mask instead of loading mask image and converting to tensor second time
minor efficiency improvement
2024-04-19 09:32:56 -04:00
499e7a7b74 chore(ui): typegen 2024-04-19 09:32:56 -04:00
aace364677 feat(nodes): add InvertTensorMaskInvocation 2024-04-19 09:32:56 -04:00
c195094e91 fix(ui): do not open panels when collapsed and window resize 2024-04-19 09:32:56 -04:00
e6c57edf87 tidy(ui): shuffle graph builder logic 2024-04-19 09:32:56 -04:00
c217e052a8 tidy(ui): remove unused action 2024-04-19 09:32:56 -04:00
964e2236b9 feat(ui): do not add promptless conditioning nodes 2024-04-19 09:32:56 -04:00
a6e64423d9 feat(ui): per-layer autonegative 2024-04-19 09:32:56 -04:00
d3aa97ab99 feat(ui): add copy graph button to queue item detail view 2024-04-19 09:32:56 -04:00
0d8edd67ab fix(ui): group lines together in undo history 2024-04-19 09:32:56 -04:00
d9dd00ea20 feat(ui): undo/redo in regional prompts
using the `redux-undo` library
2024-04-19 09:32:56 -04:00
170763899a tidy(ui): tidy regional prompts graph helper, add comments 2024-04-19 09:32:56 -04:00
9e1a4a4a07 feat(ui): regional prompts default layer opacity 2024-04-19 09:32:56 -04:00
dcb4a40741 fix(ui): regional prompts brush preview wonkiness 2024-04-19 09:32:56 -04:00
f8bf985256 perf(ui): do not recreate map callback on every render 2024-04-19 09:32:56 -04:00
81f29b9624 tidy(ui): remove errant console.log 2024-04-19 09:32:56 -04:00
f2dde9a035 feat(ui): cleared selected layer styling 2024-04-19 09:32:56 -04:00
85f4a066fb feat(ui): use logger for stage renderer 2024-04-19 09:32:56 -04:00
b9e6b7ba48 feat(ui): restore layer arrange functionality 2024-04-19 09:32:56 -04:00
085f7bdbee feat(ui): add invert negative mode
Adds an additional negative conditioning using the inverted mask of the positive conditioning and the positive prompt. May be useful for mutually exclusive regions.
2024-04-19 09:32:56 -04:00
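
Conceptually, the inverted mask is just the complement of the positive region. A minimal torch sketch, assuming a float mask of 0s and 1s (illustrative, not the node's actual code):

```python
import torch

def invert_mask(mask: torch.Tensor) -> torch.Tensor:
    """Complement of a binary region mask: everything *outside* the region."""
    return 1.0 - mask
```
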
e4fcb6627a feat(ui): style regional prompt boxes 2024-04-19 09:32:56 -04:00
47aa6357d1 tidy(ui): organize regional prompts files 2024-04-19 09:32:56 -04:00
b81030fe27 tidy(ui): remove unused exports 2024-04-19 09:32:56 -04:00
a1a9f0da73 tidy(ui): remove more unused files 2024-04-19 09:32:56 -04:00
8f4f3b773c tidy(ui): remove unused files, code 2024-04-19 09:32:56 -04:00
00737efc31 tidy(ui): tidy naming of regional prompt utils 2024-04-19 09:32:56 -04:00
5924dc6ff6 feat(ui): transparency on regional prompts canvas 2024-04-19 09:32:56 -04:00
246fabf2a0 feat(ui): scaling regional prompt canvas 2024-04-19 09:32:56 -04:00
30e3e12513 feat(ui): layouting regional prompts 2024-04-19 09:32:56 -04:00
a5bfe2dccb feat(ui): support negative regional prompt 2024-04-19 09:32:56 -04:00
aa6bfc8645 fix(ui): wip misc regional prompting ui 2024-04-19 09:32:56 -04:00
20ccdb6c8f fix(ui): remove extra type in nodestate 2024-04-19 09:32:56 -04:00
8caa7bc2b1 feat(ui): abstract out bbox renderer 2024-04-19 09:32:56 -04:00
ede8826757 feat(ui): remove dep on stage in mouse handlers 2024-04-19 09:32:56 -04:00
ff7aa2558a feat(ui): display prompt when debugging regions 2024-04-19 09:32:56 -04:00
c9bf00b80b feat(ui): restore invoke button (wip) 2024-04-19 09:32:56 -04:00
1f8f429d55 feat(ui): abstract layer renderer 2024-04-19 09:32:56 -04:00
d34e431002 feat(ui): abstract brush preview logic 2024-04-19 09:32:56 -04:00
cdb481e836 feat(ui): use konva generics for types in selector functions 2024-04-19 09:32:56 -04:00
525e6d697c feat(ui): re-implement with imperative konva api (wip) 2024-04-19 09:32:56 -04:00
bbbb5479e8 feat(ui): re-implement with imperative konva api (wip) 2024-04-19 09:32:56 -04:00
ae7797f662 feat(ui): re-implement with imperative konva api (wip) 2024-04-19 09:32:56 -04:00
05deeb68fa feat(ui): draft of graph helper for regional prompts 2024-04-19 09:32:56 -04:00
602a59066e fix(nodes): handle invert in alpha_mask_to_tensor 2024-04-19 09:32:56 -04:00
d1db6198b5 perf(ui): memoize & otherwise optimize regional prompts ui 2024-04-19 09:32:56 -04:00
944fa1a847 chore(ui): lint 2024-04-19 09:32:56 -04:00
52e7daffe7 feat(ui): selected layer styling 2024-04-19 09:32:56 -04:00
cf4c1750cb fix(ui): caching broke layer rendering 2024-04-19 09:32:56 -04:00
de7ecc8e3e feat(ui): tweak bbox styling 2024-04-19 09:32:56 -04:00
6c0481ef51 fix(ui): do not reset layer position when toggling visibility 2024-04-19 09:32:56 -04:00
b9d0da44eb feat(ui): wip layer transparency 2024-04-19 09:32:56 -04:00
0a42d7d510 docs(ui): update docstrings for helper function 2024-04-19 09:32:56 -04:00
c1aae0815d feat(ui): regional prompting layout, styling 2024-04-19 09:32:56 -04:00
e7523bd1d9 fix(ui): fix layer debug 2024-04-19 09:32:56 -04:00
8911017bd1 feat(ui): selectable & draggable layers 2024-04-19 09:32:56 -04:00
fc26f3e430 feat(nodes): add alpha mask to tensor invocation 2024-04-19 09:32:56 -04:00
c89a24d1ea feat(ui): add util to get blobs from layers 2024-04-19 09:32:56 -04:00
52ba4966c9 feat(ui): wip regional prompting UI
- Add eraser tool, applies per layer
2024-04-19 09:32:56 -04:00
822dfa77fc feat(ui): wip regional prompting UI
- Arrange layers
- Layer visibility
- Layered brush preview
- Cleanup
2024-04-19 09:32:56 -04:00
83d359b681 feat(ui): wip regional prompting UI 2024-04-19 09:32:56 -04:00
f87eee810b feat(ui): rough out regional prompts components 2024-04-19 09:32:56 -04:00
1d1e4d02dc feat(ui): rough out regional prompts store 2024-04-19 09:32:56 -04:00
2b9f06dc4c Re-enable app shutdown actions (#6244)
* closes #6242

* only override sigINT during slow model scanning

* fix ruff formatting

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-04-19 06:45:42 -04:00
a35386f24c fix: IP Adapter Method having incorrect informational popover 2024-04-18 13:37:55 -04:00
ac1071a5e5 chore: v4.1.0 2024-04-18 07:19:22 +10:00
34cdfc61ab Merge branch 'main' into lstein/feat/simple-mm2-api 2024-04-17 17:18:13 -04:00
5295a398f3 translationBot(ui): update translation (Italian)
Currently translated at 98.4% (1122 of 1140 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-04-17 08:41:57 +10:00
0c7283c82d translationBot(ui): update translation (Turkish)
Currently translated at 50.8% (580 of 1140 strings)

translationBot(ui): update translation (Korean)

Currently translated at 43.3% (494 of 1140 strings)

translationBot(ui): update translation (Chinese (Simplified))

Currently translated at 80.9% (923 of 1140 strings)

translationBot(ui): update translation (Russian)

Currently translated at 98.8% (1127 of 1140 strings)

translationBot(ui): update translation (Dutch)

Currently translated at 63.7% (727 of 1140 strings)

translationBot(ui): update translation (Japanese)

Currently translated at 50.4% (575 of 1140 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.3% (1121 of 1140 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 27.8% (317 of 1140 strings)

translationBot(ui): update translation (German)

Currently translated at 72.2% (824 of 1140 strings)

Co-authored-by: Anonymous <noreply@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ko/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/nl/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2024-04-17 08:41:57 +10:00
73ad173c74 update labels for Style Only and CompositionOnly to be designated as beta 2024-04-17 08:29:10 +10:00
c828a4e59f Add IP Adapter Style & Composition Modes (#6213)
## Summary

Until now IP Adapter had complete control on the contents of the output.
With this PR, users are now able to select "Style Only" or "Composition
Only" to draw just the style or layout of the reference image.

Based off: https://arxiv.org/abs/2404.02733

### New IP Method Option

- `Full` - Both style and layout of the reference image are used.
- `Style Only` - Only the style of the image is used.
- `Composition Only` - Only the composition of the image is used.


![opera_0BkqZTwObO](https://github.com/invoke-ai/InvokeAI/assets/54517381/1b2fbbba-44c9-4c25-87cb-3711a17d13e3)

### Example Result


![demo](https://github.com/invoke-ai/InvokeAI/assets/54517381/703f3de5-e685-4691-acda-9338a4c10796)

### Notes

- Supports both SDXL and SD1.5

### Testing

- Just check and test if it works as expected with all IP Adapter models
- both SDXL and SD1.5

## Merge Plan

Good to merge once tested for all edge cases.
2024-04-16 14:23:36 -04:00
6bab040d24 Merge branch 'main' into ip-adapter-style-comp 2024-04-16 21:14:06 +05:30
f46bbaf8c4 fix: make ip-adapter weights not be optional 2024-04-16 21:12:45 +05:30
fce6b3e44c maybe solve race issue 2024-04-16 13:09:26 +10:00
d27907cc6d fix: entire reshaping block needs to be skipped 2024-04-16 04:29:53 +05:30
7ee3fef2db cleanup: better var names for the ip adapter weight collection block 2024-04-16 04:23:50 +05:30
b39ce642b6 cleanup: raise ValueErrors when target_blocks don't match base model 2024-04-16 04:12:30 +05:30
a148c4322c fix: IP Adapter weights being incorrectly applied
They were being overwritten rather than being appended
2024-04-16 04:10:41 +05:30
f6b7bc5d98 fix: Dynamically adapt height of control adapter opts 2024-04-16 01:18:43 +05:30
5f6c6abf9c chore: change IPAdapterAttentionWeights to a dataclass 2024-04-15 23:38:55 +05:30
cd76a31a8f fix: IP Adapter method not being recalled 2024-04-15 22:29:32 +05:30
470a39935c fix merge conflicts with main 2024-04-15 09:24:57 -04:00
f1e79d5a8f Merge branch 'main' into lstein/feat/simple-mm2-api 2024-04-15 09:14:55 -04:00
f055e1edb6 Merge branch 'lstein/feat/simple-mm2-api' of github.com:invoke-ai/InvokeAI into lstein/feat/simple-mm2-api 2024-04-15 09:14:37 -04:00
e93f4d632d [util] Add generic torch device class (#6174)
* introduce new abstraction layer for GPU devices

* add unit test for device abstraction

* fix ruff

* convert TorchDeviceSelect into a stateless class

* move logic to select context-specific execution device into context API

* add mock hardware environments to pytest

* remove dangling mocker fixture

* fix unit test for running on non-CUDA systems

* remove unimplemented get_execution_device() call

* remove autocast precision

* Multiple changes:

1. Remove TorchDeviceSelect.get_execution_device(), as well as calls to
   context.models.get_execution_device().
2. Rename TorchDeviceSelect to TorchDevice
3. Added back the legacy public API defined in `invocation_api`, including
   choose_precision().
4. Added a config file migration script to accommodate removal of precision=autocast.

* add deprecation warnings to choose_torch_device() and choose_precision()

* fix test crash

* remove app_config argument from choose_torch_device() and choose_torch_dtype()

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-04-15 13:12:49 +00:00
5a8489bbfc perf(ui): memoize infill components 2024-04-15 22:50:54 +10:00
a24c9d0f7a perf(ui): optimize useFeatureStatus 2024-04-15 22:50:54 +10:00
7a92afc117 perf(ui): fix rerenders in nodes
Unmemoized selector tanking perf
2024-04-15 22:50:54 +10:00
b508945b11 feat(ui): edge labels
Add setting to render labels with format `Source Node label -> Target Node label` on edges.
2024-04-15 22:48:46 +10:00
7cf788e658 Update deps to their latest versions (#6178)
* Update deps to their latest versions

* missed huggingface_hub

* bump accelerate

---------

Co-authored-by: Lincoln Stein <lincoln.stein@gmail.com>
2024-04-15 00:48:39 +00:00
06bc38d3f4 Remove tag excluder 2024-04-15 09:14:49 +10:00
d3b0212da5 Scope project files to src dir (enables --production) 2024-04-15 09:14:49 +10:00
c2b79ce14c Replace @knipignore with paths config 2024-04-15 09:14:49 +10:00
70185b0173 translationBot(ui): update translation (Russian)
Currently translated at 99.5% (1128 of 1133 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-04-15 09:12:38 +10:00
a83a0c6146 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 81.5% (924 of 1133 strings)

Co-authored-by: 怀瑾 <symant233@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2024-04-15 09:12:38 +10:00
12f41039cc translationBot(ui): update translation (Italian)
Currently translated at 98.4% (1122 of 1140 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.4% (1120 of 1138 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.4% (1115 of 1133 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-04-15 09:12:38 +10:00
b3b5b7e261 Include hardcoded count of one to avoid translation issues on missing keys 2024-04-15 09:10:15 +10:00
f706a13230 Adjust gallery image length handling 2024-04-15 09:10:15 +10:00
22c6400bb8 Refactor i18n pluralization 2024-04-15 09:10:15 +10:00
1ca152f6c8 Apply eslint/prettier fixes 2024-04-15 09:10:15 +10:00
982e255878 Add dynamic label to delete button located at the top toolbar 2024-04-15 09:10:15 +10:00
7899149144 Remove unnecessary word 2024-04-15 09:10:15 +10:00
bef97b46bf Apply eslint/prettier fixes 2024-04-15 09:10:15 +10:00
cc256fee0e Modify the modal title to include selected image array length 2024-04-15 09:10:15 +10:00
ec69a58c8d Include plural variation for delete image modal title 2024-04-15 09:10:15 +10:00
ec67ba61db Pass an array of selected images instead of imageDTO 2024-04-15 09:10:15 +10:00
66126996e7 Import image selection 2024-04-15 09:10:15 +10:00
4eb66a9198 remove hires fix badge from settings when using sdxl 2024-04-15 07:57:58 +10:00
14e41a1fd9 Remove unnecessary whitespace 2024-04-15 07:54:36 +10:00
fc55522003 Import hook in the main App script 2024-04-15 07:54:36 +10:00
cd6d8ae9cc Add a hook as a singleton to update favicon and title upon queueSize change 2024-04-15 07:54:36 +10:00
2933eb594d Remove unnecessary code 2024-04-15 07:54:36 +10:00
4e08fab3f5 Apply brand red color and a black border 2024-04-15 07:54:36 +10:00
8bca7e2aa2 Apply eslint/prettier fixes 2024-04-15 07:54:36 +10:00
3706cf0ad4 Add JSDoc strings 2024-04-15 07:54:36 +10:00
a459361376 Modify the processing to consider the active queue length instead of in_progress only 2024-04-15 07:54:36 +10:00
bb330d50a6 Increase favicon alert detail size 2024-04-15 07:54:36 +10:00
102cb62960 Apply eslint/prettier fixes 2024-04-15 07:54:36 +10:00
8eeab22ecd Replace let with const 2024-04-15 07:54:36 +10:00
4343852b83 Update HTML page title and favicon upon queue item event 2024-04-15 07:54:36 +10:00
0a9bf25bff Implement updatePageTitle and updatePageFavicon methods 2024-04-15 07:54:36 +10:00
4cd09850b8 Add ID to the HTML link element 2024-04-15 07:54:36 +10:00
dbc586e0b2 Add alert variation for Invoke favicon 2024-04-15 07:54:36 +10:00
fa6efac436 change names of convert and download caches and add migration script 2024-04-14 16:10:24 -04:00
3ead827d61 port dw_openpose, depth_anything, and lama processors to new model download scheme 2024-04-14 16:10:24 -04:00
c140d3b1df add invocation_context.load_ckpt_from_url() method 2024-04-14 16:10:24 -04:00
34438ce1af add simplified model manager install API to InvocationContext 2024-04-14 16:10:24 -04:00
3ddd7ced49 change names of convert and download caches and add migration script 2024-04-14 15:57:33 -04:00
41b909cbe3 port dw_openpose, depth_anything, and lama processors to new model download scheme 2024-04-14 15:57:03 -04:00
8426f1e7b2 fix(experimental): Possible fix for conflict with regional embed length mismatch
Pushing this so people can test it out and see if this needs to be handled in a different way.
2024-04-14 12:19:19 +05:30
c2e3c61f28 fix recall all when loras, controls, or hrf aren't present 2024-04-14 16:49:14 +10:00
fbfa29c2ef Update GALLERY.md 2024-04-14 16:46:31 +10:00
9ee7b951eb Update GALLERY.md 2024-04-14 16:46:31 +10:00
29dd1bb35b Update GALLERY.md 2024-04-14 16:46:31 +10:00
68d8a2497e Update GALLERY.md 2024-04-14 16:46:31 +10:00
4b171fa696 Creation of GALLERY.md and related images
First draft of the walkthrough of the Gallery right-hand panel
2024-04-14 16:46:31 +10:00
d0beb45431 Create GALLERY.md 2024-04-14 16:46:31 +10:00
e724781a80 Update WEB.md
Correct stated location of Gallery panel.
2024-04-14 16:46:31 +10:00
636ece323f Update INSTALL_DEVELOPMENT.md 2024-04-14 15:24:00 +10:00
77b3281f08 prettier 2024-04-14 15:22:33 +10:00
bd7c8cd517 added info popover back to model, updated description hover to combobox only 2024-04-14 15:22:33 +10:00
489d485907 added missing description to control adapters hover 2024-04-14 15:22:33 +10:00
6eed5ad531 added button for hiding bounding box 2024-04-14 15:22:33 +10:00
9cb0f63c44 refactor: fix a bunch of type issues in custom_attention 2024-04-13 14:17:25 +05:30
2d5786d3bb fix: Incorrect composition blocks for SD1.5 2024-04-13 13:52:10 +05:30
27466ffa1a chore: update the ip adapter node version 2024-04-13 13:39:08 +05:30
f50b156511 chore: do not include custom nodes in schema 2024-04-13 12:43:49 +05:30
9fc73743b2 feat: support SD1.5 2024-04-13 12:30:39 +05:30
d4393e4170 chore: linter fixes 2024-04-13 12:14:45 +05:30
145a0b029e Merge branch 'ip-adapter-style-comp' of https://github.com/blessedcoolant/InvokeAI into ip-adapter-style-comp 2024-04-13 12:13:06 +05:30
f2506cc769 chore: ruff fixes
Revert "chore: ruff fixes"

This reverts commit af36fe8c1e.

Revert "chore: ruff fixes"

This reverts commit af36fe8c1e.
2024-04-13 12:12:33 +05:30
7a67fd6a06 Revert "chore: ruff fixes"
This reverts commit af36fe8c1e.
2024-04-13 12:10:20 +05:30
af36fe8c1e chore: ruff fixes 2024-04-13 12:08:52 +05:30
e9f16ac8c7 feat: add UI for IP Adapter Method 2024-04-13 12:06:59 +05:30
6ea183f0d4 wip: Initial Implementation IP Adapter Style & Comp Modes 2024-04-13 11:09:45 +05:30
3a26c7bb9e fix merge conflicts 2024-04-12 00:58:11 -04:00
df5ebdbc4f add invocation_context.load_ckpt_from_url() method 2024-04-12 00:55:21 -04:00
af1b57a01f add simplified model manager install API to InvocationContext 2024-04-11 21:46:00 -04:00
24f2cde862 Remove type="submit" from all tsx files.
Fixes a problem on Firefox, at least for me.
2024-04-12 09:09:32 +10:00
b18442ded4 fix(queue): poll queue on finished queue item
When a queue item is finished (completed, canceled, failed), immediately poll the queue for the next queue item.

Closes #6189
2024-04-12 07:31:47 +10:00
651c0b39b1 clear cache on all exceptions 2024-04-12 07:19:16 +10:00
46d23cd868 catch RunTimeError during model to() call rather than OutOfMemoryError 2024-04-12 07:19:16 +10:00
dedf0c6ffa fix ruff issues 2024-04-12 07:19:16 +10:00
579082ac10 [mm] clear the cache entry for a model that got an OOM during loading 2024-04-12 07:19:16 +10:00
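
The pattern the two commits above describe, sketched with a hypothetical cache API. Note that `torch.cuda.OutOfMemoryError` is a `RuntimeError` subclass, which is why catching `RuntimeError` covers the OOM case:

```python
def load_to_vram(cache: dict, key: str, model, device) -> None:
    """Move a cached model to VRAM, dropping its cache entry if the move fails."""
    try:
        model.to(device)
    except RuntimeError:
        # Covers torch.cuda.OutOfMemoryError and other device errors;
        # drop the entry so a half-loaded model isn't reused.
        cache.pop(key, None)
        raise
```
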
7bc77ddb40 fix(nodes): doubly-noised latents
When using refiner with a mask (i.e. inpainting), we don't have noise provided as an input to the node.

This situation uniquely hits a code path that wasn't reviewed when gradient denoising was implemented.

That code path does two things wrong:
- It lerp'd the input latents. This was fixed in 5a1f4cb1ce.
- It added noise to the latents an extra time. This is fixed in this change.

We don't need to add noise in `latents_from_embeddings` because we do it just a few lines later in `AddsMaskGuidance`.

- Remove the extraneous call to `add_noise`
- Make `seed` a required arg. We never call the function without seed anyways. If we refactor this in the future, it will be clearer that we need to look at how seed is handled.
- Move the call to create the noise to a deeper conditional, just before we call `AddsMaskGuidance`. The created noise tensor is now only used in that function, no need to create it every time.

Note: Whether or not having both noise and latents as inputs on the node is correct is a separate conversation. This change just fixes the issue with the current setup.
2024-04-11 07:21:50 -04:00
026d095afe fix(nodes): do not set seed on output latents from denoise latents
`LatentsField` objects have an optional `seed` field. This should only be populated when the latents are noise, generated from a seed.

`DenoiseLatentsInvocation` needs a seed value for scheduler initialization. It's used in a few places, and there is some logic for determining the seed to use with a series of fallbacks:
- Use the seed from the noise (a `LatentsField` object)
- Use the seed from the latents (a `LatentsField` object - normally it won't have a seed)
- Use `0` as a final fallback

In `DenoiseLatentsInvocation`, we set the seed in the `LatentsOutput`, even though the output latents are not noise.

This is normally fine, but when we use refiner, we re-use those same latents for the refiner denoise. This causes that characteristic same-seed-fried look on the refiner pass.

Simple fix - do not set the field in the output latents.
2024-04-11 07:21:50 -04:00
7e2ade50e1 fix(ui): canvas staging area & batch handling fixes
Handful of intertwined fixes.

- Create and use helper function to reset staging area.
- Clear staging area when queue items are canceled, failed, cleared, etc. Fixes a bug where the bbox ends up offset and images are put into the wrong spot.
- Fix a number of similar bugs where canvas would "forget" it had pending generations, but they continued to generate. Canvas needs to track batches that should be displayed in it using `state.canvas.batchIds`, and this was getting cleared without actually canceling those batches.
- Disable the `discard current image` button on canvas if there is only one image. Prevents accidentally canceling all canvas batches if you spam the button.
2024-04-10 21:48:34 +10:00
c0d54d5414 Revert "always enqueue with fresh bounding box"
This reverts commit fae51da278b39c61cbbea5de88661b4bc546f1ce.
2024-04-10 21:48:34 +10:00
98bfbb73ac always enqueue with fresh bounding box 2024-04-10 21:48:34 +10:00
f9af32a6d1 Fix the padding behavior when max-pooling regional IP-Adapter masks to mirror the downscaling behavior of SD and SDXL. Prior to this change, denoising with input latent dimensions that were not evenly divisible by 8 would raise an exception. 2024-04-09 16:50:43 -04:00
fba40eb1bd Fix the padding behavior when max-pooling regional prompt masks to mirror the downscaling behavior of SD and SDXL. Prior to this change, denoising with input latent dimensions that were not evenly divisible by 8 would raise an exception. 2024-04-09 16:50:43 -04:00
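
SD/SDXL downscale images by a factor of 8 with ceiling semantics, so a pixel-space mask must be padded up to a multiple of 8 before max-pooling for its result to match the latent size. A hedged torch sketch (function name illustrative; assumes a float mask shaped `(1, 1, H, W)`):

```python
import torch.nn.functional as F

LATENT_SCALE_FACTOR = 8

def pool_mask_to_latent_size(mask):
    """Max-pool a pixel-space mask down to ceil(H/8) x ceil(W/8)."""
    h, w = mask.shape[-2:]
    pad_h = (-h) % LATENT_SCALE_FACTOR
    pad_w = (-w) % LATENT_SCALE_FACTOR
    mask = F.pad(mask, (0, pad_w, 0, pad_h))  # pad right and bottom with zeros
    return F.max_pool2d(mask, kernel_size=LATENT_SCALE_FACTOR, stride=LATENT_SCALE_FACTOR)
```
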
69f6c24f52 Fix field ordering (#6186)
Changed fields to go in w/h x/y order.

## Summary

The prior ordering of height, then width, and y, then x, doesn't match
up with the expected UX. This has been changed.

## Checklist

- [X] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
2024-04-10 01:00:22 +05:30
80d631118d Fix field ordering
Changed fields to go in w/h x/y order.
2024-04-09 14:17:55 -05:00
0c6dd32ece (minor) Fix IP-Adapter conditional logic in CustomAttnProcessor2_0. 2024-04-09 15:06:51 -04:00
0bdbfd4d1d Add support for IP-Adapter masks. 2024-04-09 15:06:51 -04:00
2e27ed5f3d Pass IP-Adapter scales through the cross_attn_kwargs pathway, since they are the same for all attention layers. This change also helps to prepare for adding IP-Adapter region masks. 2024-04-09 15:06:51 -04:00
babdc64b17 (minor) Fix typo in IP-Adapter field description. 2024-04-09 15:06:51 -04:00
54327ec4a7 Remove documentation references to prompt-to-prompt cross-attention control. 2024-04-09 10:57:02 -04:00
4a828818da Remove support for Prompt-to-Prompt cross-attention control (aka .swap()). This feature is not widely used. It does not work with SDXL and is incompatible with IP-Adapter and regional prompting. The implementation is also intertwined with both text embedding and the UNet attention layers, resulting in a high maintenance burden. For all of these reasons, we have decided to drop support. 2024-04-09 10:57:02 -04:00
fe386252f3 Revert "feat(nodes): add prompt region from image nodes"
This reverts commit 3a531c5097.
2024-04-09 08:12:12 -04:00
182810337c Add utility to_standard_float_mask(...) to convert various mask formats to a standardized format. 2024-04-09 08:12:12 -04:00
338bf808d6 Rename MaskField to be a generic TensorField. 2024-04-09 08:12:12 -04:00
5b5a4204a1 Fix dimensions of mask produced by ExtractMasksAndPromptsInvocation. Also, added a clearer error message in case the same error is introduced in the future. 2024-04-09 08:12:12 -04:00
75ef473748 Pull the upstream changes from diffusers' AttnProcessor2_0 into CustomAttnProcessor2_0. This fixes a bug in CustomAttnProcessor2_0 that was being triggered when peft was not installed. The bug was present in a block of code that was previously copied from diffusers. The bug seems to have been introduced during diffusers' migration to PEFT for their LoRA handling. The upstream bug was fixed in 531e719163. 2024-04-09 08:12:12 -04:00
926b8d0efe feat(nodes): add prompt region from image nodes 2024-04-09 08:12:12 -04:00
9d9d1761f3 (minor) The latest ruff version has _slightly_ different formatting preferences. 2024-04-09 08:12:12 -04:00
a78df8123f Update the diffusion logic to use the new regional prompting feature. 2024-04-09 08:12:12 -04:00
7ca677578e Create a UNetAttentionPatcher for patching UNet models with CustomAttnProcessor2_0 modules. 2024-04-09 08:12:12 -04:00
31c456c1e6 Update CustomAttention to support both IP-Adapters and regional prompting. 2024-04-09 08:12:12 -04:00
2ce79b61f5 Initialize a RegionalPromptAttnProcessor2_0 class by copying AttnProcessor2_0 from diffusers. 2024-04-09 08:12:12 -04:00
109e3f0e7f Add RegionalPromptData class for managing prompt region masks. 2024-04-09 08:12:12 -04:00
dc64fec771 Add support for lists of prompt embeddings to be passed to the DenoiseLatents invocation, and add handling of the conditioning region masks in DenoiseLatents. 2024-04-09 08:12:12 -04:00
d1e45585d0 Add TextConditioningRegions to the TextConditioningData data structure. 2024-04-09 08:12:12 -04:00
aba023e0c5 Improve documentation of conditioning_data.py. 2024-04-09 08:12:12 -04:00
e354c29b52 Rename ConditioningData -> TextConditioningData. 2024-04-09 08:12:12 -04:00
a7f363e654 Split ip_adapter_conditioning out from ConditioningData. 2024-04-09 08:12:12 -04:00
9b2162e564 Remove scheduler_args from ConditioningData structure. 2024-04-09 08:12:12 -04:00
4e64b26702 Update compel nodes to accept an optional prompt mask. 2024-04-09 08:12:12 -04:00
c22d772062 Add RectangleMaskInvocation. 2024-04-09 08:12:12 -04:00
d6be7662c9 Add a MaskField primitive, and add a mask to the ConditioningField primitive type. 2024-04-09 08:12:12 -04:00
95050088d1 chore: lint fixes 2024-04-09 14:13:10 +10:00
94b5084cd5 fix: one man's max is another man's min 2024-04-09 14:13:10 +10:00
ca0d60bee6 fix: set coherence denoise to 0.2 min for refiner models 2024-04-09 14:13:10 +10:00
fd1f240853 fix: SDXL Refiner not working properly with Inpainting 2024-04-09 14:13:10 +10:00
381b41a56e fix: Update SDXL Refiner graphs to use Gradient Mask 2024-04-09 14:13:10 +10:00
b58494c420 feat(ui): add graph-to-workflow debug helper
This is intended for debug usage, so it's hidden away in the workflow library `...` menu. Hold shift to see the button for it.

- Paste a graph (from a network request, for example) and then click the convert button to convert it to a workflow.
- Disable auto layout to stack the nodes with an offset (try it out). If you change this, you must re-convert to get the changes.
- Edit the workflow JSON if you need to tweak something before loading it.
2024-04-08 20:38:04 -04:00
dca30d5462 (feat) add a method to get the path of an image from the invocation context
Fixes #6175
2024-04-08 18:42:55 +10:00
9ab6655491 feat(backend): clean up choose_precision
- Allow user-defined precision on MPS.
- Use more explicit logic to handle all possible cases.
- Add comments.
- Remove the app_config args (they were effectively unused, just get the config using the singleton getter util)
2024-04-07 09:41:05 -04:00
29cfe5a274 fix(ui): handle multipleOf on number fields
This data is already in the template but it wasn't ever used.

One big place where this improves UX is the noise node. Previously, the UI let you change width and height in increments of 1, despite the template requiring a multiple of 8. It now works in multiples of 8.
2024-04-06 13:15:20 -04:00
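
The arithmetic behind snapping a field to its `multipleOf` constraint, as a tiny sketch (the actual UI logic is TypeScript; the helper name is illustrative):

```python
def round_to_multiple(value: int, multiple: int) -> int:
    """Snap a value to the nearest multiple (e.g. width/height to 8)."""
    return round(value / multiple) * multiple
```
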
2c45697f3d translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-04-06 15:19:20 +11:00
9a0a90e2a2 chore: v4.0.4 2024-04-06 15:15:16 +11:00
69f17da1a2 fix(nodes): add WithBoard to public API 2024-04-06 15:02:28 +11:00
4d0a49298c tidy(ui): remove extraneous zod schema 2024-04-06 14:54:12 +11:00
55f7a7737a feat(ui): shift around init image recall logic
Retrieving the DTO happens as part of the metadata parsing, not recall. This way, we don't show the option to recall a nonexistent image.

This matches the flow for other metadata entities like models - we don't show the model recall button if the model isn't available.
2024-04-06 14:54:12 +11:00
adc30045a6 addressed pr feedback 2024-04-06 14:54:12 +11:00
fdd0e57976 actually use the schema 2024-04-06 14:54:12 +11:00
9ba5ec4b67 fix typo Params set set 2024-04-06 14:54:12 +11:00
8a17616bf4 recall initial image from metadata and set to image2image 2024-04-06 14:54:12 +11:00
f56b9537cd added initial image to metadata viewer 2024-04-06 14:54:12 +11:00
a95756f3ed docs: update FAQ.md (shared GPU memory) 2024-04-06 14:35:36 +11:00
4068e817d6 fix(mm): typing issues in model cache 2024-04-06 14:35:36 +11:00
a09d705e4c fix(mm): remove vram check
This check prematurely reports insufficient VRAM on Windows. See #6106 for details.
2024-04-06 14:35:36 +11:00
540d506ec9 fix: Incorrect default clip vision opt in the node 2024-04-05 15:06:33 -04:00
e330966020 chore: v4.0.3 2024-04-05 15:32:30 +11:00
b783679b9f fix: typo, change shouldFitImageSize default value 2024-04-05 15:23:58 +11:00
d32e557e50 fix: add roundDownToMultiple 2024-04-05 15:23:58 +11:00
90686c7f9c feat: Unified Canvas Fit Image Size on Drop 2024-04-05 15:23:58 +11:00
4571986c63 fix misplaced lock call 2024-04-05 14:32:18 +11:00
fec989f015 navigate to workflow tab when clicking load workflow 2024-04-05 14:16:33 +11:00
b5c048d8bf translationBot(ui): update translation (Italian)
Currently translated at 98.4% (1108 of 1126 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-04-05 13:53:42 +11:00
577469be55 translationBot(ui): update translation (German)
Currently translated at 73.3% (826 of 1126 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-04-05 13:53:42 +11:00
812f10730f adjust free vram calculation for models that will be removed by lazy offloading (#6150)
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-04-04 22:51:12 -04:00
3006285d13 fix(ui): display refiner models in mm 2024-04-05 09:46:03 +11:00
5d4a571778 feat(ui): disable mosaic infill in graph builders 2024-04-05 08:49:13 +11:00
90bdd74f30 chore(ui): typegen 2024-04-05 08:49:13 +11:00
d6ccd5bc81 feat(nodes): disable mosaic fill
Needs a bit of tweaking; leaving the code in, just disabled/commented out.
2024-04-05 08:49:13 +11:00
f0b1bb0327 feat(nodes): redo tile infill
The previous algorithm errored if the image wasn't divisible by the tile size. I've reimplemented it from scratch to mitigate this issue.

The new algorithm is simpler. We create a pool of tiles, then use them to create an image composed completely of tiles. If there is any awkwardly sized space on the edge of the image, the tiles are cropped to fit.

Finally, paste the original image over the tile image.

I've added a jupyter notebook to do a smoke test of infilling methods, and 10 test images.

The other infill algorithms can be easily tested with the notebook on the same images, though I didn't set that up yet.

Tested and confirmed this gives results just as good as the earlier infill, though of course they aren't the same due to the change in the algorithm.
2024-04-05 08:49:13 +11:00
b061db414f tidy(nodes): abstractmethod is noop 2024-04-05 08:49:13 +11:00
e55ab5b3a1 ui: Color Infill UI 2024-04-05 08:49:13 +11:00
adb7966bb3 ui: initial mosaic infill ui
Need to change color picking.
2024-04-05 08:49:13 +11:00
3c195d74a5 fix: bypass edge pixels which cannot transform to tile size
Still need to fix this somehow
2024-04-05 08:49:13 +11:00
32a6b758cd wip: Initial Infill Methods Refactor 2024-04-05 08:49:13 +11:00
3659219f46 Fix IdealSizeInvocation (#6145) 2024-04-05 08:38:40 +11:00
d284e0567a fix: ip adapter clip selection being broken 2024-04-05 07:49:04 +11:00
13027891d9 fix(ui): discarding last single canvas image breaks canvas
We need to reset the staging area if we are discarding the last image.
2024-04-04 08:00:08 -04:00
8a32baf2dc chore: v4.0.2 2024-04-04 15:46:51 +11:00
8c15d14099 fix: use locale encoding
We have had a few bugs with v4 related to file encodings, especially on Windows.

Windows uses its own character encodings instead of `utf-8`, often `cp1252`. Some characters cannot be decoded using `utf-8`, causing `UnicodeDecodeError`.

There are a couple places where this can cause problems:
- In the installer bootstrap, we install or upgrade `pip` and decode the result, using `subprocess`.

  The input to this includes the user's home dir. In #6105, the user had one of the problematic characters in their username. `subprocess` attempts and fails to decode the username, which crashes the installer.

  To fix this, we need to use `locale.getpreferredencoding()` when executing the command.
- Similarly, in the model install service and config class, we attempt to load a yaml config file. If a problematic character is in the path to the file (which often includes the user's home dir), we can get the same error.

  One example is  #6129 in which the models.yaml migration fails.

  To fix this, we need to open the file with `locale.getpreferredencoding()`.
2024-04-04 15:30:47 +11:00
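
The fix pattern described above, sketched. Both `subprocess.run` and `open` accept an `encoding` argument; the command and file path here are illustrative:

```python
import locale
import subprocess

encoding = locale.getpreferredencoding()

# Decode subprocess output with the system's preferred encoding, not utf-8.
result = subprocess.run(["pip", "--version"], capture_output=True, encoding=encoding)

# Likewise, read config files with the preferred encoding.
with open("invokeai.yaml", "r", encoding=encoding) as f:
    config_text = f.read()
```
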
9cc1f20ad5 add simplified model manager install API to InvocationContext 2024-04-03 23:26:48 -04:00
38718d8c65 docs: update 020_INSTALL_MANUAL.md, remove conda section 2024-04-04 11:28:09 +11:00
98ab387e2b docs: update 020_INSTALL_MANUAL.md
Redo the install the package section. It was inaccurate with respect to extra index URLs.
2024-04-04 10:54:23 +11:00
a0ae2f37d7 docs: update 020_INSTALL_MANUAL.md
Tidy verbiage around the invokeai root and how it is determined
2024-04-04 10:54:23 +11:00
9c51abb46e fix(config): get root from venv
This logic was a bit wonky. It only selected the `venv` parent if there was already an `invokeai.yaml` file in it. Removed this constraint.
2024-04-04 10:54:23 +11:00
f887e030bb docs: update 010_INSTALL_AUTOMATED.md
Remove reference to the autodetect GPU device option.
2024-04-04 08:43:17 +11:00
52b58b4a80 feat(installer): remove extra GPU options
- Remove `CUDA_AND_DML`. This was for onnx, which we have since removed.
- Remove `AUTODETECT`. This option causes problems for Windows users, as it falls back on the default PyPI index, resulting in a non-CUDA torch being installed.
- Add more explicit settings for the extra index URL, based on the torch website
- Fix a bug where `xformers` wasn't installed on Linux and/or Windows when autodetect was selected
2024-04-04 08:43:17 +11:00
9fdfd4267c fix(ui): fix model name overflow
Closes #3897
2024-04-04 08:03:30 +11:00
c4a6d3ddc0 docs: update FAQ for missing models solution
This will be fairly common in v4 updates. The root cause is models not being added to the `models.yaml` file in v3, so we don't correctly migrate the models to the db.

The docs describe how to use `Scan Folder` to restore missing models.
2024-04-04 07:58:11 +11:00
25bbaa73b9 feat(ui): add inplace option to scan folder install ui 2024-04-04 07:58:11 +11:00
2383fb93c7 fix(ui): show model install progress as 100 if finished 2024-04-04 07:58:11 +11:00
63c60e6d63 feat(ui): refresh model scan results on completed model install 2024-04-04 07:58:11 +11:00
3a10062b53 feat(mm): more reliable mm scan folder
Compare the installed paths to determine if the model is already installed. Fixes an issue where installed models showed up as uninstalled or vice-versa. Related to relative vs absolute path handling.
2024-04-04 07:58:11 +11:00
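
Comparing installed paths reliably generally means normalizing both sides first; a minimal sketch of that idea (an assumed approach, not the literal commit code):

```python
from pathlib import Path

def is_same_model_path(installed: str, candidate: str) -> bool:
    """Treat relative and absolute spellings of one location as equal."""
    return Path(installed).resolve() == Path(candidate).resolve()
```
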
51ca59c088 Update probe to always use cpu for loading models 2024-04-04 07:34:43 +11:00
216b34ac44 tests: update model rename test 2024-04-04 07:17:38 +11:00
7ff2371c07 fix(mm): do not rename model file if model record is renamed
Renaming the model file to the model name introduces unnecessary constraints on model names.

For example, a model name can technically be any length, but a model _filename_ cannot be too long.

There are also constraints on valid characters for filenames which shouldn't be applied to model record names.

I believe the old behaviour is a holdover from the old system.
2024-04-04 07:17:38 +11:00
4927d1b7c9 add some test IDs for accordion targeting 2024-04-04 06:35:11 +11:00
85f53f94f8 feat(mm): include needed vs free in OOM
Gives us a bit more visibility into these errors, which seem to be popping up more frequently with the new MM.
2024-04-04 06:26:15 +11:00
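
A sketch of how needed-vs-free can be reported. `torch.cuda.mem_get_info` returns `(free, total)` in bytes; the message format here is an assumption:

```python
import torch

def oom_detail(needed_bytes: int, device: int = 0) -> str:
    """Describe an OOM condition in terms of needed vs. free VRAM."""
    free, total = torch.cuda.mem_get_info(device)
    return (
        f"Needed {needed_bytes / 1e9:.2f} GB but only "
        f"{free / 1e9:.2f} GB of {total / 1e9:.2f} GB VRAM is free"
    )
```
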
7da04b8333 IP-Adapter Safetensor Support (#6041)
## Summary

This PR adds support for IP Adapter safetensor files for direct usage
inside InvokeAI.

# TEST

You can download the [Composition
Adapters](https://huggingface.co/ostris/ip-composition-adapter) which
weren't previously supported in Invoke and try them out. Every other IP
Adapter model should work too.

If you pick a Safetensor IP Adapter model, you will also need to set
ViT-H or ViT-G next to it. This is a raw implementation. Can refine it
further based on feedback.

Prompt: `Spiderman holding a bunny` -- Exact same composition as the
adapter image.

![opera_UHlo1IyXPT](https://github.com/invoke-ai/InvokeAI/assets/54517381/00bf9f0b-149f-478d-87ca-3252b68d1054)
2024-04-03 22:46:45 +05:30
be574cb764 fix: incorrect suffix check in ip adapter checkpoint file 2024-04-03 22:38:28 +05:30
5f01de1993 chore: ruff and lint fixes 2024-04-03 20:41:51 +05:30
cf88bd3294 Merge branch 'main' into checkpoint-ip-adapter 2024-04-03 20:30:02 +05:30
e574815413 chore: clean up merge conflicts 2024-04-03 20:28:00 +05:30
fb293dcd84 Merge branch 'checkpoint-ip-adapter' of https://github.com/blessedcoolant/InvokeAI into checkpoint-ip-adapter 2024-04-03 20:23:07 +05:30
414851f2f0 fix: raise and present the runtime error from the exception 2024-04-03 20:21:50 +05:30
2dcbb7223b fix: use Path for ip_adapter_ckpt_path instead of str 2024-04-03 20:21:03 +05:30
132aadca15 fix(ui): cancel batch status button greyed out
Closes #6110
2024-04-03 08:23:31 -04:00
14a9f74b17 cleanup: use load_file of safetensors directly for loading ip adapters 2024-04-03 12:40:13 +05:30
1372ef15b3 fix: Fail when unexpected keys are found in IP Adapter models 2024-04-03 12:40:11 +05:30
dc1681a0de fix: clip vision model auto param
Setting it to 'auto' works only for InvokeAI configs: it auto-detects the SD model, but an explicit user setting overrides it. If 'auto' is used with checkpoint models, we raise an error. Checkpoints always need to be set to a non-auto value.
2024-04-03 12:40:11 +05:30
be1212de9a fix: Raise a better error when incorrect CLIP Vision model is used 2024-04-03 12:40:10 +05:30
a14ce0edab chore: rename IPAdapterDiffusersConfig to IPAdapterInvokeAIConfig 2024-04-03 12:40:10 +05:30
4a0dfc3b2d ui: improve the clip vision model picker layout 2024-04-03 12:40:08 +05:30
91a70c8d07 feat: Let users pick CLIP Vision model for Checkpoint IP Adapters 2024-04-03 12:40:05 +05:30
936b99bd3c chore: improve types in ip_adapter backend file 2024-04-03 12:40:02 +05:30
9ff729a7e6 fix: Update ModelView to accommodate for the new config changes to IP Adapter 2024-04-03 12:40:01 +05:30
5829b87b8d ui: update the new ip adapter configs on the frontend 2024-04-03 12:40:01 +05:30
79f7b61dfe fix: cleanup across various ip adapter files 2024-04-03 12:39:52 +05:30
b1c8266e22 feat: add base model recognition for ip adapter safetensor files 2024-04-03 12:39:52 +05:30
67afb1763e wip: Initial implementation of safetensor support for IP Adapter 2024-04-03 12:39:52 +05:30
8584171a49 docs: fix broken link (#6116)
## Summary

Fix a broken link

## Related Issues / Discussions


https://discord.com/channels/1020123559063990373/1049495067846524939/1224970148058763376

## QA Instructions

n/a

## Merge Plan

n/a

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_ n/a
- [x] _Documentation added / updated (if applicable)_
2024-04-03 12:35:17 +05:30
50951439bd docs: fix broken link 2024-04-03 17:36:15 +11:00
07cb6c944e chore(ui): typegen 2024-04-03 17:18:12 +11:00
1d45ef529b fix(ui): move tcd scheduler to current zod schemas
It was in the v2 schemas which should be immutable and only used for migrations
2024-04-03 17:08:02 +11:00
0259114d9c Merge branch 'main' into main 2024-04-03 17:03:19 +11:00
51e515b925 tidy: use lowercase for tcd scheduler identifier 2024-04-03 17:03:02 +11:00
8c509295f9 chore: ruff 2024-04-03 17:02:45 +11:00
7b93b554d7 fix(ui): add default coherence mode to generation slice migration
The valid values for this parameter changed when inpainting changed to gradient denoise. The generation slice's redux migration wasn't updated, resulting in a generation error until you change the setting or reset the web UI.
2024-04-03 08:46:31 +11:00
21b9e96a45 Run typegen, bump version 2024-04-02 10:14:39 -04:00
b6ad33ac1a perf(ui): reduce canvas max history to 100
This should further insulate canvas from excessive GCs.
2024-04-02 08:48:18 -04:00
69ec14c7bb perf(ui): use rfdc for deep copying of objects
- Add and use more performant `deepClone` method for deep copying throughout the UI.

Benchmarks indicate the Really Fast Deep Clone library (`rfdc`) is the best all-around way to deep-clone large objects.

This is particularly relevant in canvas. When drawing or otherwise manipulating canvas objects, we need to do a lot of deep cloning of the canvas layer state objects.

Previously, we were using lodash's `cloneDeep`.

I did some fairly realistic benchmarks with a handful of deep-cloning algorithms/libraries (including the native `structuredClone`). I used a snapshot of the canvas state as the data to be copied:

On Chromium, `rfdc` is by far the fastest, over an order of magnitude faster than `cloneDeep`.

On FF, `fastest-json-copy` and `recursiveDeepCopy` are even faster, but are rather limited in data types. `rfdc`, while only half as fast as the former 2, is still nearly an order of magnitude faster than `cloneDeep`.

On Safari, `structuredClone` is the fastest, about 2x as fast as `cloneDeep`. `rfdc` is only 30% faster than `cloneDeep`.

`rfdc`'s peak memory usage is about 10% more than `cloneDeep` on Chrome. I couldn't get memory measurements from FF and Safari, but let's just assume the memory usage is similar relative to the other algos.

Overall, `rfdc` is the best choice for a single algo for all browsers. It's definitely the best for Chromium, by far the most popular desktop browser and thus our primary target.

A future enhancement might be to detect the browser and use that to determine which algorithm to use.
2024-04-02 08:48:18 -04:00
a6c91979af fix(ui): prevent canvas history leak
There were two ways the canvas history could grow too large (past the `MAX_HISTORY` setting):
- Sometimes, when pushing to history, we didn't `shift` an item out when we exceeded the max history size.
- If the max history size was exceeded by more than one item, we still only `shift`, which removes one item.

These issue could appear after an extended canvas session, resulting in a memory leak and recurring major GCs/browser performance issues.

To fix these issues, a helper function is added for both past and future layer states, which uses slicing to ensure history never grows too large.
2024-04-02 08:48:18 -04:00
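
The slicing approach described above, as a language-agnostic sketch in Python (the canvas code itself is TypeScript; names are illustrative):

```python
MAX_HISTORY = 100  # matches the reduced canvas history cap

def push_history(history: list, item) -> list:
    """Append to history, always keeping at most the last MAX_HISTORY items.

    Slicing also handles a history that somehow exceeded the cap by more
    than one item, which a single shift/pop would not.
    """
    return (history + [item])[-MAX_HISTORY:]
```
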
e655399324 fix(config): handle windows paths in invokeai.yaml migration for legacy_conf_dir
The logic incorrectly set the `legacy_conf_dir` on windows, where the slashes go the other direction. Handle this case and update tests to catch it.
2024-04-02 08:06:59 -04:00
f75de8a35c feat(db): add migration 9 - empty session queue
Empties the session queue. This is done to prevent any lingering session queue items from causing pydantic errors due to changed schemas.
2024-04-02 13:25:14 +11:00
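
A minimal sketch of what such a migration can look like, assuming a SQLite cursor and a `session_queue` table (illustrative, not the literal migration code):

```python
import sqlite3

def migrate_empty_session_queue(cursor: sqlite3.Cursor) -> None:
    """Empty the queue so stale items can't fail validation against new schemas."""
    cursor.execute("DELETE FROM session_queue;")
```
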
d4be945dde fix(nodes): gracefully handle custom nodes init error
Previously, exceptions raised as custom nodes are initialized were fatal errors, causing the app to exit.

With this change, any error on import is caught and the error message printed. App continues to start up without the node.

For example, a custom node that isn't updated for v4.0.0 may raise an error on import if it is attempting to import things that no longer exist.
2024-04-02 13:25:14 +11:00
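
The pattern described above, sketched with a hypothetical loader (assumes `importlib` and a standard logger):

```python
import importlib
import logging

logger = logging.getLogger(__name__)

def load_custom_node(module_name: str):
    """Import a custom node module, logging instead of crashing on failure."""
    try:
        return importlib.import_module(module_name)
    except Exception as e:  # any import-time error is non-fatal
        logger.error("Failed to load custom node %s: %s", module_name, e)
        return None
```
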
ab33acad5c translationBot(ui): update translation (Russian)
Currently translated at 99.5% (1119 of 1124 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-04-02 13:15:11 +11:00
8f3d7b2946 translationBot(ui): update translation (Italian)
Currently translated at 98.3% (1106 of 1124 strings)

translationBot(ui): update translation (Italian)

Currently translated at 98.3% (1104 of 1122 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-04-02 13:15:11 +11:00
54a30f66cb translationBot(ui): update translation (German)
Currently translated at 72.4% (813 of 1122 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-04-02 13:15:11 +11:00
23390f1516 cleanup: use load_file of safetensors directly for loading ip adapters 2024-04-01 06:37:38 +05:30
23da3de915 Update constants.ts 2024-03-29 12:39:08 +01:00
97579770e1 Update common.ts 2024-03-29 12:35:42 +01:00
1a83936cdd Merge branch 'invoke-ai:main' into main 2024-03-29 11:14:28 +01:00
298cae5bb9 Update schema.ts 2024-03-29 12:41:10 +05:30
cd52e99bb9 Merge branch 'main' into checkpoint-ip-adapter 2024-03-29 12:39:53 +05:30
6e4c2d3685 fix: Fail when unexpected keys are found in IP Adapter models 2024-03-29 12:34:56 +05:30
56ed697c23 fix: clip vision model auto param
Setting this to 'auto' works only for InvokeAI-format models: the matching CLIP Vision model is auto-detected for the SD model, and an explicit user setting overrides the detection. If 'auto' is used with checkpoint models, we raise an error; checkpoints must always be set to a non-auto value.
2024-03-29 12:12:16 +05:30
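A rough sketch of the resolution logic described above (both functions are hypothetical stand-ins, not the actual code):

```python
def detect_clip_vision_for_invokeai_model() -> str:
    # Stand-in for config-based auto-detection of the matching CLIP Vision model.
    return "ViT-H"

def resolve_clip_vision(user_setting: str, is_checkpoint: bool) -> str:
    if user_setting != "auto":
        return user_setting  # an explicit user choice always wins
    if is_checkpoint:
        # 'auto' is not supported for checkpoint models.
        raise ValueError("Checkpoint IP Adapters must set the CLIP Vision model explicitly.")
    return detect_clip_vision_for_invokeai_model()
```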
cd078b1865 fix: Raise a better error when incorrect CLIP Vision model is used 2024-03-29 11:58:10 +05:30
0d8b535131 chore: rename IPAdapterDiffusersConfig to IPAdapterInvokeAIConfig 2024-03-29 11:50:18 +05:30
80e311a069 Update schedulers.py 2024-03-28 22:52:15 +01:00
b6e6bdc195 Update schedulers.py 2024-03-28 22:51:59 +01:00
1a93f56d06 ui: improve the clip vision model picker layout 2024-03-27 22:11:07 +05:30
16c366a060 feat: Let users pick CLIP Vision model for Checkpoint IP Adapters 2024-03-27 22:08:23 +05:30
688a0f30bb chore: improve types in ip_adapter backend file 2024-03-27 22:08:23 +05:30
318bc938fe fix: Update ModelView to accommodate for the new config changes to IP Adapter 2024-03-27 22:08:23 +05:30
c4a856de4a ui: update the new ip adapter configs on the frontend 2024-03-27 22:08:23 +05:30
4ed2bf53ca fix: cleanup across various ip adapter files 2024-03-27 22:08:14 +05:30
60bf0caca3 feat: add base model recognition for ip adapter safetensor files 2024-03-27 22:08:14 +05:30
b013d0e064 wip: Initial implementation of safetensor support for IP Adapter 2024-03-27 22:08:14 +05:30
962 changed files with 54826 additions and 25694 deletions

View File

@ -9,9 +9,9 @@ runs:
node-version: '18'
- name: setup pnpm
uses: pnpm/action-setup@v2
uses: pnpm/action-setup@v4
with:
version: 8
version: 8.15.6
run_install: false
- name: get pnpm store directory

View File

@ -8,7 +8,7 @@
## QA Instructions
<!--WHEN APPLICABLE: Describe how we can test the changes in this PR.-->
<!--WHEN APPLICABLE: Describe how you have tested the changes in this PR. Provide enough detail that a reviewer can reproduce your tests.-->
## Merge Plan

View File

@ -18,6 +18,7 @@ help:
@echo "frontend-typegen Generate types for the frontend from the OpenAPI schema"
@echo "installer-zip Build the installer .zip file for the current version"
@echo "tag-release Tag the GitHub repository with the current version (use at release time only!)"
@echo "openapi Generate the OpenAPI schema for the app, outputting to stdout"
# Runs ruff, fixing any safely-fixable errors and formatting
ruff:
@ -70,3 +71,6 @@ installer-zip:
tag-release:
cd installer && ./tag_release.sh
# Generate the OpenAPI Schema for the app
openapi:
python scripts/generate_openapi_schema.py

536
README.md
View File

@ -2,21 +2,141 @@
![project hero](https://github.com/invoke-ai/InvokeAI/assets/31807370/6e3728c7-e90e-4711-905c-3b55844ff5be)
# Invoke - Professional Creative AI Tools for Visual Media
## To learn more about Invoke, or implement our Business solutions, visit [invoke.com](https://www.invoke.com/about)
# Invoke - Professional Creative AI Tools for Visual Media
#### To learn more about Invoke, or implement our Business solutions, visit [invoke.com]
[![discord badge]][discord link] [![latest release badge]][latest release link] [![github stars badge]][github stars link] [![github forks badge]][github forks link] [![CI checks on main badge]][CI checks on main link] [![latest commit to main badge]][latest commit to main link] [![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link] [![translation status badge]][translation status link]
</div>
Invoke is a leading creative engine built to empower professionals and enthusiasts alike. Generate and create stunning visual media using the latest AI-driven technologies. Invoke offers an industry-leading web-based UI, and serves as the foundation for multiple commercial products.
Invoke is available in two editions:
| **Community Edition** | **Professional Edition** |
|----------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------|
| **For users looking for a locally installed, self-hosted and self-managed service** | **For users or teams looking for a cloud-hosted, fully managed service** |
| - Free to use under a commercially-friendly license | - Monthly subscription fee with three different plan levels |
| - Download and install on compatible hardware | - Offers additional benefits, including multi-user support, improved model training, and more |
| - Includes all core studio features: generate, refine, iterate on images, and build workflows | - Hosted in the cloud for easy, secure model access and scalability |
| Quick Start -> [Installation and Updates][installation docs] | More Information -> [www.invoke.com/pricing](https://www.invoke.com/pricing) |
[![discord badge]][discord link]
![Highlighted Features - Canvas and Workflows](https://github.com/invoke-ai/InvokeAI/assets/31807370/708f7a82-084f-4860-bfbe-e2588c53548d)
[![latest release badge]][latest release link] [![github stars badge]][github stars link] [![github forks badge]][github forks link]
# Documentation
| **Quick Links** |
|----------------------------------------------------------------------------------------------------------------------------|
| [Installation and Updates][installation docs] - [Documentation and Tutorials][docs home] - [Bug Reports][github issues] - [Contributing][contributing docs] |
[![CI checks on main badge]][CI checks on main link] [![latest commit to main badge]][latest commit to main link]
</div>
[![github open issues badge]][github open issues link] [![github open prs badge]][github open prs link] [![translation status badge]][translation status link]
## Quick Start
1. Download and unzip the installer from the bottom of the [latest release][latest release link].
2. Run the installer script.
- **Windows**: Double-click on the `install.bat` script.
- **macOS**: Open a Terminal window, drag the file `install.sh` from Finder into the Terminal, and press enter.
- **Linux**: Run `install.sh`.
3. When prompted, enter a location for the install and select your GPU type.
4. Once the install finishes, find the directory you selected during install. The default location is `C:\Users\Username\invokeai` for Windows or `~/invokeai` for Linux/macOS.
5. Run the launcher script (`invoke.bat` for Windows, `invoke.sh` for macOS and Linux) the same way you ran the installer script in step 2.
6. Select option 1 to start the application. Once it starts up, open your browser and go to <http://localhost:9090>.
7. Open the model manager tab to install a starter model and then you'll be ready to generate.
More details, including hardware requirements and manual install instructions, are available in the [installation documentation][installation docs].
## Docker Container
We publish official container images in GitHub Container Registry: https://github.com/invoke-ai/InvokeAI/pkgs/container/invokeai. Both CUDA and ROCm images are available. Check the above link for relevant tags.
> [!IMPORTANT]
> Ensure that Docker is set up to use the GPU. Refer to [NVIDIA][nvidia docker docs] or [AMD][amd docker docs] documentation.
### Generate!
Run the container, modifying the command as necessary:
```bash
docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
```
Then open `http://localhost:9090` and install some models using the Model Manager tab to begin generating.
For ROCm, add `--device /dev/kfd --device /dev/dri` to the `docker run` command.
### Persist your data
You will likely want to persist your workspace outside of the container. Use the `--volume /home/myuser/invokeai:/invokeai` flag to mount some local directory (using its **absolute** path) to the `/invokeai` path inside the container. Your generated images and models will reside there. You can use this directory with other InvokeAI installations, or switch between runtime directories as needed.
### DIY
Build your own image and customize the environment to match your needs using our `docker-compose` stack. See [README.md](./docker/README.md) in the [docker](./docker) directory.
## Troubleshooting, FAQ and Support
Please review our [FAQ][faq] for solutions to common installation problems and other issues.
For more help, please join our [Discord][discord link].
## Features
Full details on features can be found in [our documentation][features docs].
### Web Server & UI
Invoke runs a locally hosted web server & React UI with an industry-leading user experience.
### Unified Canvas
The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/out-painting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.
### Workflows & Nodes
Invoke offers a fully featured workflow management solution, enabling users to combine the power of node-based workflows with the ease of a UI. This allows for customizable generation pipelines to be developed and shared by users looking to create specific workflows to support their production use-cases.
### Board & Gallery Management
Invoke features an organized gallery system for easily storing, accessing, and remixing your content in the Invoke workspace. Images can be dragged/dropped onto any Image-based UI element in the application, and rich metadata within the Image allows for easy recall of key prompts or settings used in your workflow.
### Other features
- Support for both ckpt and diffusers models
- SD1.5, SD2.0, and SDXL support
- Upscaling Tools
- Embedding Manager & Support
- Model Manager & Support
- Workflow creation & management
- Node-Based Architecture
## Contributing
Anyone who wishes to contribute to this project - whether documentation, features, bug fixes, code cleanup, testing, or code reviews - is very much encouraged to do so.
Get started with contributing by reading our [contribution documentation][contributing docs], joining the [#dev-chat] or the GitHub discussion board.
We hope you enjoy using Invoke as much as we enjoy creating it, and we hope you will elect to become part of our community.
## Thanks
Invoke is a combined effort of [passionate and talented people from across the world][contributors]. We thank them for their time, hard work and effort.
Original portions of the software are Copyright © 2024 by respective contributors.
[features docs]: https://invoke-ai.github.io/InvokeAI/features/
[faq]: https://invoke-ai.github.io/InvokeAI/help/FAQ/
[contributors]: https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/
[invoke.com]: https://www.invoke.com/about
[github issues]: https://github.com/invoke-ai/InvokeAI/issues
[docs home]: https://invoke-ai.github.io/InvokeAI
[installation docs]: https://invoke-ai.github.io/InvokeAI/installation/INSTALLATION/
[#dev-chat]: https://discord.com/channels/1020123559063990373/1049495067846524939
[contributing docs]: https://invoke-ai.github.io/InvokeAI/contributing/CONTRIBUTING/
[CI checks on main badge]: https://flat.badgen.net/github/checks/invoke-ai/InvokeAI/main?label=CI%20status%20on%20main&cache=900&icon=github
[CI checks on main link]:https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Amain
[CI checks on main link]: https://github.com/invoke-ai/InvokeAI/actions?query=branch%3Amain
[discord badge]: https://flat.badgen.net/discord/members/ZmtBAhwWhy?icon=discord
[discord link]: https://discord.gg/ZmtBAhwWhy
[github forks badge]: https://flat.badgen.net/github/forks/invoke-ai/InvokeAI?icon=github
@ -30,402 +150,8 @@
[latest commit to main badge]: https://flat.badgen.net/github/last-commit/invoke-ai/InvokeAI/main?icon=github&color=yellow&label=last%20dev%20commit&cache=900
[latest commit to main link]: https://github.com/invoke-ai/InvokeAI/commits/main
[latest release badge]: https://flat.badgen.net/github/release/invoke-ai/InvokeAI/development?icon=github
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases
[latest release link]: https://github.com/invoke-ai/InvokeAI/releases/latest
[translation status badge]: https://hosted.weblate.org/widgets/invokeai/-/svg-badge.svg
[translation status link]: https://hosted.weblate.org/engage/invokeai/
</div>
InvokeAI is a leading creative engine built to empower professionals
and enthusiasts alike. Generate and create stunning visual media using
the latest AI-driven technologies. InvokeAI offers an industry leading
Web Interface, interactive Command Line Interface, and also serves as
the foundation for multiple commercial products.
**Quick links**: [[How to
Install](https://invoke-ai.github.io/InvokeAI/installation/INSTALLATION/)] [<a
href="https://discord.gg/ZmtBAhwWhy">Discord Server</a>] [<a
href="https://invoke-ai.github.io/InvokeAI/">Documentation and
Tutorials</a>]
[<a href="https://github.com/invoke-ai/InvokeAI/issues">Bug Reports</a>]
[<a
href="https://github.com/invoke-ai/InvokeAI/discussions">Discussion,
Ideas & Q&A</a>]
[<a
href="https://invoke-ai.github.io/InvokeAI/contributing/CONTRIBUTING/">Contributing</a>]
<div align="center">
![Highlighted Features - Canvas and Workflows](https://github.com/invoke-ai/InvokeAI/assets/31807370/708f7a82-084f-4860-bfbe-e2588c53548d)
</div>
## Table of Contents
Table of Contents 📝
**Getting Started**
1. 🏁 [Quick Start](#quick-start)
2. 🖥️ [Hardware Requirements](#hardware-requirements)
**More About Invoke**
1. 🌟 [Features](#features)
2. 📣 [Latest Changes](#latest-changes)
3. 🛠️ [Troubleshooting](#troubleshooting)
**Supporting the Project**
1. 🤝 [Contributing](#contributing)
2. 👥 [Contributors](#contributors)
3. 💕 [Support](#support)
## Quick Start
For full installation and upgrade instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/INSTALLATION/)
If upgrading from version 2.3, please read [Migrating a 2.3 root
directory to 3.0](#migrating-to-3) first.
### Automatic Installer (suggested for 1st time users)
1. Go to the bottom of the [Latest Release Page](https://github.com/invoke-ai/InvokeAI/releases/latest)
2. Download the .zip file for your OS (Windows/macOS/Linux).
3. Unzip the file.
4. **Windows:** double-click on the `install.bat` script. **macOS:** Open a Terminal window, drag the file `install.sh` from Finder
into the Terminal, and press return. **Linux:** run `install.sh`.
5. You'll be asked to confirm the location of the folder in which
to install InvokeAI and its image generation model files. Pick a
location with at least 15 GB of free disk space, and more if you plan on
installing lots of models.
6. Wait while the installer does its thing. After installing the software,
the installer will launch a script that lets you configure InvokeAI and
select a set of starting image generation models.
7. Find the folder that InvokeAI was installed into (it is not the
same as the unpacked zip file directory!) The default location of this
folder (if you didn't change it in step 5) is `~/invokeai` on
Linux/Mac systems, and `C:\Users\YourName\invokeai` on Windows. This directory will contain launcher scripts named `invoke.sh` and `invoke.bat`.
8. On Windows systems, double-click on the `invoke.bat` file. On
macOS, open a Terminal window, drag `invoke.sh` from the folder into
the Terminal, and press return. On Linux, run `invoke.sh`
9. Press 2 to open the "browser-based UI", press enter/return, wait a
minute or two for Stable Diffusion to start up, then open your browser
and go to http://localhost:9090.
10. Type `banana sushi` in the box on the top left and click `Invoke`
### Command-Line Installation (for developers and users familiar with Terminals)
You must have Python 3.10 through 3.11 installed on your machine. Earlier or
later versions are not supported.
Node.js also needs to be installed along with `pnpm` (can be installed with
the command `npm install -g pnpm` if needed)
1. Open a command-line window on your machine. PowerShell is recommended for Windows.
2. Create a directory to install InvokeAI into. You'll need at least 15 GB of free space:
```terminal
mkdir invokeai
```
3. Create a virtual environment named `.venv` inside this directory and activate it:
```terminal
cd invokeai
python -m venv .venv --prompt InvokeAI
```
4. Activate the virtual environment (do it every time you run InvokeAI)
_For Linux/Mac users:_
```sh
source .venv/bin/activate
```
_For Windows users:_
```ps
.venv\Scripts\activate
```
5. Install the InvokeAI module and its dependencies. Choose the command suited for your platform & GPU.
_For Windows/Linux with an NVIDIA GPU:_
```terminal
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
```
_For Linux with an AMD GPU:_
```sh
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.6
```
_For non-GPU systems:_
```terminal
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu
```
_For Macintoshes, either Intel or M1/M2/M3:_
```sh
pip install InvokeAI --use-pep517
```
6. Configure InvokeAI and install a starting set of image generation models (you only need to do this once):
```terminal
invokeai-configure --root .
```
Don't miss the dot at the end!
7. Launch the web server (do it every time you run InvokeAI):
```terminal
invokeai-web
```
8. Point your browser to http://localhost:9090 to bring up the web interface.
9. Type `banana sushi` in the box on the top left and click `Invoke`.
Be sure to activate the virtual environment each time before re-launching InvokeAI,
using `source .venv/bin/activate` or `.venv\Scripts\activate`.
## Detailed Installation Instructions
This fork is supported across Linux, Windows and Macintosh. Linux
users can use either an Nvidia-based card (with CUDA support) or an
AMD card (using the ROCm driver). For full installation and upgrade
instructions, please see:
[InvokeAI Installation Overview](https://invoke-ai.github.io/InvokeAI/installation/INSTALL_SOURCE/)
<a name="migrating-to-3"></a>
### Migrating a v2.3 InvokeAI root directory
The InvokeAI root directory is where the InvokeAI startup file,
installed models, and generated images are stored. It is ordinarily
named `invokeai` and located in your home directory. The contents and
layout of this directory has changed between versions 2.3 and 3.0 and
cannot be used directly.
We currently recommend that you use the installer to create a new root
directory named differently from the 2.3 one, e.g. `invokeai-3` and
then use a migration script to copy your 2.3 models into the new
location. However, if you choose, you can upgrade this directory in
place. This section gives both recipes.
#### Creating a new root directory and migrating old models
This is the safer recipe because it leaves your old root directory in
place to fall back on.
1. Follow the instructions above to create and install InvokeAI in a
directory that has a different name from the 2.3 invokeai directory.
In this example, we will use "invokeai-3"
2. When you are prompted to select models to install, select a minimal
set of models, such as stable-diffusion-v1.5 only.
3. After installation is complete launch `invokeai.sh` (Linux/Mac) or
`invokeai.bat` and select option 8 "Open the developers console". This
will take you to the command line.
4. Issue the command `invokeai-migrate3 --from /path/to/v2.3-root --to
/path/to/invokeai-3-root`. Provide the correct `--from` and `--to`
paths for your v2.3 and v3.0 root directories respectively.
This will copy and convert your old models from 2.3 format to 3.0
format and create a new `models` directory in the 3.0 directory. The
old models directory (which contains the models selected at install
time) will be renamed `models.orig` and can be deleted once you have
confirmed that the migration was successful.
If you wish, you can pass the 2.3 root directory to both `--from` and
`--to` in order to update in place. Warning: this directory will no
longer be usable with InvokeAI 2.3.
#### Migrating in place
For the adventurous, you may do an in-place upgrade from 2.3 to 3.0
without touching the command line. ***This recipe does not work on
Windows platforms due to a bug in the Windows version of the 2.3
upgrade script.** See the next section for a Windows recipe.
##### For Mac and Linux Users:
1. Launch the InvokeAI launcher script in your current v2.3 root directory.
2. Select option [9] "Update InvokeAI" to bring up the updater dialog.
3. Select option [1] to upgrade to the latest release.
4. Once the upgrade is finished you will be returned to the launcher
menu. Select option [6] "Re-run the configure script to fix a broken
install or to complete a major upgrade".
This will run the configure script against the v2.3 directory and
update it to the 3.0 format. The following files will be replaced:
- The invokeai.init file, replaced by invokeai.yaml
- The models directory
- The configs/models.yaml model index
The original versions of these files will be saved with the suffix
".orig" appended to the end. Once you have confirmed that the upgrade
worked, you can safely remove these files. Alternatively you can
restore a working v2.3 directory by removing the new files and
restoring the ".orig" files' original names.
##### For Windows Users:
Windows users can upgrade with the following steps:
1. Enter the 2.3 root directory you wish to upgrade
2. Launch `invoke.sh` or `invoke.bat`
3. Select the "Developer's console" option [8]
4. Type the following commands
```
pip install "invokeai @ https://github.com/invoke-ai/InvokeAI/archive/refs/tags/v3.0.0" --use-pep517 --upgrade
invokeai-configure --root .
```
(Replace `v3.0.0` with the current release number if this document is out of date).
The first command will install and upgrade new software to run
InvokeAI. The second will prepare the 2.3 directory for use with 3.0.
You may now launch the WebUI in the usual way, by selecting option [1]
from the launcher script
#### Migrating Images
The migration script will migrate your invokeai settings and models,
including textual inversion models, LoRAs and merges that you may have
installed previously. However it does **not** migrate the generated
images stored in your 2.3-format outputs directory. To do this, you
need to run an additional step:
1. From a working InvokeAI 3.0 root directory, start the launcher and
enter menu option [8] to open the "developer's console".
2. At the developer's console command line, type the command:
```bash
invokeai-import-images
```
3. This will lead you through the process of confirming the desired
source and destination for the imported images. The images will
appear in the gallery board of your choice, and contain the
original prompt, model name, and other parameters used to generate
the image.
(Many kudos to **techjedi** for contributing this script.)
## Hardware Requirements
InvokeAI is supported across Linux, Windows and macOS. Linux
users can use either an Nvidia-based card (with CUDA support) or an
AMD card (using the ROCm driver).
### System
You will need one of the following:
- An NVIDIA-based graphics card with 4 GB or more VRAM memory. 6-8 GB
of VRAM is highly recommended for rendering using the Stable
Diffusion XL models
- An Apple computer with an M1 chip.
- An AMD-based graphics card with 4GB or more VRAM memory (Linux
only), 6-8 GB for XL rendering.
We do not recommend the GTX 1650 or 1660 series video cards. They are
unable to run in half-precision mode and do not have sufficient VRAM
to render 512x512 images.
**Memory** - At least 12 GB of main memory (RAM).
**Disk** - At least 12 GB of free disk space for the machine learning model, Python, and all its dependencies.
## Features
Feature documentation can be reviewed by navigating to [the InvokeAI Documentation page](https://invoke-ai.github.io/InvokeAI/features/)
### *Web Server & UI*
InvokeAI offers a locally hosted Web Server & React Frontend, with an industry leading user experience. The Web-based UI allows for simple and intuitive workflows, and is responsive for use on mobile devices and tablets accessing the web server.
### *Unified Canvas*
The Unified Canvas is a fully integrated canvas implementation with support for all core generation capabilities, in/outpainting, brush tools, and more. This creative tool unlocks the capability for artists to create with AI as a creative collaborator, and can be used to augment AI-generated imagery, sketches, photography, renders, and more.
### *Workflows & Nodes*
InvokeAI offers a fully featured workflow management solution, enabling users to combine the power of node-based workflows with the ease of a UI. This allows for customizable generation pipelines to be developed and shared by users looking to create specific workflows to support their production use-cases.
### *Board & Gallery Management*
Invoke AI provides an organized gallery system for easily storing, accessing, and remixing your content in the Invoke workspace. Images can be dragged/dropped onto any Image-based UI element in the application, and rich metadata within the Image allows for easy recall of key prompts or settings used in your workflow.
### Other features
- *Support for both ckpt and diffusers models*
- *SD 2.0, 2.1, XL support*
- *Upscaling Tools*
- *Embedding Manager & Support*
- *Model Manager & Support*
- *Workflow creation & management*
- *Node-Based Architecture*
### Latest Changes
For our latest changes, view our [Release
Notes](https://github.com/invoke-ai/InvokeAI/releases) and the
[CHANGELOG](docs/CHANGELOG.md).
### Troubleshooting / FAQ
Please check out our **[FAQ](https://invoke-ai.github.io/InvokeAI/help/FAQ/)** to get solutions for common installation
problems and other issues. For more help, please join our [Discord][discord link]
## Contributing
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code
cleanup, testing, or code reviews, is very much encouraged to do so.
Get started with contributing by reading our [Contribution documentation](https://invoke-ai.github.io/InvokeAI/contributing/CONTRIBUTING/), joining the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) or the GitHub discussion board.
If you are unfamiliar with how
to contribute to GitHub projects, we have a new contributor checklist you can follow to get started contributing:
[New Contributor Checklist](https://invoke-ai.github.io/InvokeAI/contributing/contribution_guides/newContributorChecklist/).
We hope you enjoy using our software as much as we enjoy creating it,
and we hope that some of those of you who are reading this will elect
to become part of our community.
Welcome to InvokeAI!
### Contributors
This fork is a combined effort of various people from across the world.
[Check out the list of all these amazing people](https://invoke-ai.github.io/InvokeAI/other/CONTRIBUTORS/). We thank them for
their time, hard work and effort.
### Support
For support, please use this repository's GitHub Issues tracking service, or join the [Discord][discord link].
Original portions of the software are Copyright (c) 2023 by respective contributors.
[nvidia docker docs]: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
[amd docker docs]: https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html

View File

@ -19,8 +19,9 @@
## INVOKEAI_PORT is the port on which the InvokeAI web interface will be available
# INVOKEAI_PORT=9090
## GPU_DRIVER can be set to either `nvidia` or `rocm` to enable GPU support in the container accordingly.
# GPU_DRIVER=nvidia #| rocm
## GPU_DRIVER can be set to either `cuda` or `rocm` to enable GPU support in the container accordingly.
# GPU_DRIVER=cuda #| rocm
## CONTAINER_UID can be set to the UID of the user on the host system that should own the files in the container.
## It is usually not necessary to change this. Use `id -u` on the host system to find the UID.
# CONTAINER_UID=1000

View File

@ -55,6 +55,7 @@ RUN --mount=type=cache,target=/root/.cache/pip \
FROM node:20-slim AS web-builder
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack use pnpm@8.x
RUN corepack enable
WORKDIR /build

View File

@ -1,41 +1,75 @@
# InvokeAI Containerized
# Invoke in Docker
All commands should be run within the `docker` directory: `cd docker`
- Ensure that Docker can use the GPU on your system
- This documentation assumes Linux, but should work similarly under Windows with WSL2
- We don't recommend running Invoke in Docker on macOS at this time. It works, but very slowly.
## Quickstart :rocket:
## Quickstart :lightning:
On a known working Linux+Docker+CUDA (Nvidia) system, execute `./run.sh` in this directory. It will take a few minutes - depending on your internet speed - to install the core models. Once the application starts up, open `http://localhost:9090` in your browser to Invoke!
No `docker compose`, no persistence, just a simple one-liner using the official images:
For more configuration options (using an AMD GPU, custom root directory location, etc): read on.
**CUDA:**
## Detailed setup
```bash
docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
```
**ROCm:**
```bash
docker run --device /dev/kfd --device /dev/dri --publish 9090:9090 ghcr.io/invoke-ai/invokeai:main-rocm
```
Open `http://localhost:9090` in your browser once the container finishes booting, install some models, and generate away!
> [!TIP]
> To persist your data (including downloaded models) outside of the container, add a `--volume/-v` flag to the above command, e.g.: `docker run --volume /some/local/path:/invokeai <...the rest of the command>`
## Customize the container
We ship the `run.sh` script, which is a convenient wrapper around `docker compose` for cases where custom image build args are needed. Alternatively, the familiar `docker compose` commands work just as well.
```bash
cd docker
cp .env.sample .env
# edit .env to your liking if you need to; it is well commented.
./run.sh
```
It will take a few minutes to build the image the first time. Once the application starts up, open `http://localhost:9090` in your browser to invoke!
## Docker setup in detail
#### Linux
1. Ensure BuildKit is enabled in the Docker daemon settings (`/etc/docker/daemon.json`)
2. Install the `docker compose` plugin using your package manager, or follow a [tutorial](https://docs.docker.com/compose/install/linux/#install-using-the-repository).
- The deprecated `docker-compose` (hyphenated) CLI continues to work for now.
- The deprecated `docker-compose` (hyphenated) CLI probably won't work. Update to a recent version.
3. Ensure docker daemon is able to access the GPU.
- You may need to install [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
- [NVIDIA docs](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
- [AMD docs](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html)
#### macOS
> [!TIP]
> You'll be better off installing Invoke directly on your system, because Docker cannot use the GPU on macOS.
If you are still reading:
1. Ensure Docker has at least 16GB RAM
2. Enable VirtioFS for file sharing
3. Enable `docker compose` V2 support
This is done via Docker Desktop preferences
This is done via Docker Desktop preferences.
### Configure Invoke environment
### Configure the Invoke Environment
1. Make a copy of `.env.sample` and name it `.env` (`cp .env.sample .env` (Mac/Linux) or `copy example.env .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
a. the desired location of the InvokeAI runtime directory, or
b. an existing, v3.0.0 compatible runtime directory.
1. Make a copy of `.env.sample` and name it `.env` (`cp .env.sample .env` (Mac/Linux) or `copy example.env .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to the desired location of the InvokeAI runtime directory. It may be an existing directory from a previous installation (post 4.0.0).
1. Execute `run.sh`
The image will be built automatically if needed.
The runtime directory (holding models and outputs) will be created in the location specified by `INVOKEAI_ROOT`. The default location is `~/invokeai`. The runtime directory will be populated with the base configs and models necessary to start generating.
The runtime directory (holding models and outputs) will be created in the location specified by `INVOKEAI_ROOT`. The default location is `~/invokeai`. Navigate to the Model Manager tab and install some models before generating.
### Use a GPU
@ -43,9 +77,9 @@ The runtime directory (holding models and outputs) will be created in the locati
- WSL2 is *required* for Windows.
- only `x86_64` architecture is supported.
The Docker daemon on the system must be already set up to use the GPU. In case of Linux, this involves installing `nvidia-docker-runtime` and configuring the `nvidia` runtime as default. Steps will be different for AMD. Please see Docker documentation for the most up-to-date instructions for using your GPU with Docker.
The Docker daemon on the system must be already set up to use the GPU. In case of Linux, this involves installing `nvidia-docker-runtime` and configuring the `nvidia` runtime as default. Steps will be different for AMD. Please see Docker/NVIDIA/AMD documentation for the most up-to-date instructions for using your GPU with Docker.
To use an AMD GPU, set `GPU_DRIVER=rocm` in your `.env` file.
To use an AMD GPU, set `GPU_DRIVER=rocm` in your `.env` file before running `./run.sh`.
## Customize
@ -59,12 +93,12 @@ Values are optional, but setting `INVOKEAI_ROOT` is highly recommended. The defa
INVOKEAI_ROOT=/Volumes/WorkDrive/invokeai
HUGGINGFACE_TOKEN=the_actual_token
CONTAINER_UID=1000
GPU_DRIVER=nvidia
GPU_DRIVER=cuda
```
Any environment variables supported by InvokeAI can be set here - please see the [Configuration docs](https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/) for further detail.
Any environment variables supported by InvokeAI can be set here. See the [Configuration docs](https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/) for further detail.
## Even Moar Customizing!
## Even More Customizing!
See the `docker-compose.yml` file. The `command` instruction can be uncommented and used to run arbitrary startup commands. Some examples below.

View File

@ -1,7 +1,5 @@
# Copyright (c) 2023 Eugene Brodsky https://github.com/ebr
version: '3.8'
x-invokeai: &invokeai
image: "local/invokeai:latest"
build:
@ -32,7 +30,7 @@ x-invokeai: &invokeai
services:
invokeai-nvidia:
invokeai-cuda:
<<: *invokeai
deploy:
resources:

View File

@ -23,18 +23,18 @@ usermod -u ${USER_ID} ${USER} 1>/dev/null
# but it is useful to have the full SSH server e.g. on Runpod.
# (use SCP to copy files to/from the image, etc)
if [[ -v "PUBLIC_KEY" ]] && [[ ! -d "${HOME}/.ssh" ]]; then
apt-get update
apt-get install -y openssh-server
pushd "$HOME"
mkdir -p .ssh
echo "${PUBLIC_KEY}" > .ssh/authorized_keys
chmod -R 700 .ssh
popd
service ssh start
apt-get update
apt-get install -y openssh-server
pushd "$HOME"
mkdir -p .ssh
echo "${PUBLIC_KEY}" >.ssh/authorized_keys
chmod -R 700 .ssh
popd
service ssh start
fi
mkdir -p "${INVOKEAI_ROOT}"
chown --recursive ${USER} "${INVOKEAI_ROOT}"
chown --recursive ${USER} "${INVOKEAI_ROOT}" || true
cd "${INVOKEAI_ROOT}"
# Run the CMD as the Container User (not root).

View File

@ -8,11 +8,15 @@ run() {
local build_args=""
local profile=""
# create .env file if it doesn't exist, otherwise docker compose will fail
touch .env
# parse .env file for build args
build_args=$(awk '$1 ~ /=[^$]/ && $0 !~ /^#/ {print "--build-arg " $0 " "}' .env) &&
profile="$(awk -F '=' '/GPU_DRIVER/ {print $2}' .env)"
[[ -z "$profile" ]] && profile="nvidia"
# default to 'cuda' profile
[[ -z "$profile" ]] && profile="cuda"
local service_name="invokeai-$profile"

Binary files not shown: 8 new image assets added (sizes from 786 B to 221 KiB).

View File

@ -128,7 +128,8 @@ The queue operates on a series of download job objects. These objects
specify the source and destination of the download, and keep track of
the progress of the download.
The only job type currently implemented is `DownloadJob`, a pydantic object with the
Two job types are defined: `DownloadJob` and
`MultiFileDownloadJob`. The former is a pydantic object with the
following fields:
| **Field** | **Type** | **Default** | **Description** |
@ -138,7 +139,7 @@ following fields:
| `dest` | Path | | Where to download to |
| `access_token` | str | | [optional] string containing authentication token for access |
| `on_start` | Callable | | [optional] callback when the download starts |
| `on_progress` | Callable | | [optional] callback called at intervals during download progress |
| `on_complete` | Callable | | [optional] callback called after successful download completion |
| `on_error` | Callable | | [optional] callback called after an error occurs |
| `id` | int | auto assigned | Job ID, an integer >= 0 |
@ -190,6 +191,33 @@ A cancelled job will have status `DownloadJobStatus.ERROR` and an
`error_type` field of "DownloadJobCancelledException". In addition,
the job's `cancelled` property will be set to True.
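For example, assuming the queue exposes a `cancel_job()` method (an assumption based on the cancellation behavior described here; `queue` and `job` are from the surrounding examples):

```python
queue.cancel_job(job)  # assumed cancellation API
queue.join()           # wait for the cancellation to settle
if job.cancelled:
    print(job.status)      # DownloadJobStatus.ERROR
    print(job.error_type)  # "DownloadJobCancelledException"
```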
The `MultiFileDownloadJob` is used for diffusers model downloads,
which contain multiple files and directories under a common root:
| **Field** | **Type** | **Default** | **Description** |
|----------------|-----------------|---------------|-----------------|
| _Fields passed in at job creation time_ |
| `download_parts` | Set[DownloadJob]| | Component download jobs |
| `dest` | Path | | Where to download to |
| `on_start` | Callable | | [optional] callback when the download starts |
| `on_progress` | Callable | | [optional] callback called at intervals during download progress |
| `on_complete` | Callable | | [optional] callback called after successful download completion |
| `on_error` | Callable | | [optional] callback called after an error occurs |
| `id` | int | auto assigned | Job ID, an integer >= 0 |
| _Fields updated over the course of the download task_ |
| `status` | DownloadJobStatus| | Status code |
| `download_path` | Path | | Path to the root of the downloaded files |
| `bytes` | int | 0 | Bytes downloaded so far |
| `total_bytes` | int | 0 | Total size of the file at the remote site |
| `error_type` | str | | String version of the exception that caused an error during download |
| `error` | str | | String version of the traceback associated with an error |
| `cancelled` | bool | False | Set to true if the job was cancelled by the caller|
Note that the MultiFileDownloadJob does not support the `priority`,
`job_started`, `job_ended` or `content_type` attributes. You can get
these from the individual download jobs in `download_parts`.
### Callbacks
Download jobs can be associated with a series of callbacks, each with
@ -251,11 +279,40 @@ jobs using `list_jobs()`, fetch a single job by its ID, cancel
running jobs with `cancel_all_jobs()`, and wait for all jobs to finish
with `join()`.
#### job = queue.download(source, dest, priority, access_token)
#### job = queue.download(source, dest, priority, access_token, on_start, on_progress, on_complete, on_cancelled, on_error)
Create a new download job and put it on the queue, returning the
DownloadJob object.
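For example, a minimal single-file download with a progress callback might look like the sketch below (the URL is a placeholder, and the `TqdmProgress` import location is assumed from the multi-file example further down):

```python
from invokeai.app.services.download import DownloadQueueService, TqdmProgress

queue = DownloadQueueService()
queue.start()

job = queue.download(
    source="https://www.example.com/models/model.safetensors",  # placeholder URL
    dest="/tmp/downloads",
    on_progress=TqdmProgress().update,
)
queue.join()  # block until all queued jobs finish
print(f"status: {job.status}, downloaded to: {job.download_path}")
```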
#### multifile_job = queue.multifile_download(parts, dest, access_token, on_start, on_progress, on_complete, on_cancelled, on_error)
This is similar to download(), but instead of taking a single source,
it accepts a `parts` argument consisting of a list of
`RemoteModelFile` objects. Each part corresponds to a URL/Path pair,
where the URL is the location of the remote file, and the Path is the
destination.
`RemoteModelFile` can be imported from `invokeai.backend.model_manager.metadata`, and
consists of a url/path pair. Note that the path *must* be relative.
The method returns a `MultiFileDownloadJob`.
```
from invokeai.backend.model_manager.metadata import RemoteModelFile

remote_file_1 = RemoteModelFile(url='http://www.foo.bar/my/pytorch_model.safetensors',
                                path='my_model/textencoder/pytorch_model.safetensors'
                                )
remote_file_2 = RemoteModelFile(url='http://www.bar.baz/vae.ckpt',
                                path='my_model/vae/diffusers_model.safetensors'
                                )
job = queue.multifile_download(parts=[remote_file_1, remote_file_2],
                               dest='/tmp/downloads',
                               on_progress=TqdmProgress().update)
queue.wait_for_job(job)
print(f"The files were downloaded to {job.download_path}")
```
#### jobs = queue.list_jobs()
Return a list of all active and inactive `DownloadJob`s.

View File

@ -397,26 +397,25 @@ In the event you wish to create a new installer, you may use the
following initialization pattern:
```
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.config import get_config
from invokeai.app.services.model_records import ModelRecordServiceSQL
from invokeai.app.services.model_install import ModelInstallService
from invokeai.app.services.download import DownloadQueueService
from invokeai.app.services.shared.sqlite import SqliteDatabase
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.backend.util.logging import InvokeAILogger
config = InvokeAIAppConfig.get_config()
config.parse_args()
config = get_config()
logger = InvokeAILogger.get_logger(config=config)
db = SqliteDatabase(config, logger)
record_store = ModelRecordServiceSQL(db)
db = SqliteDatabase(config.db_path, logger)
record_store = ModelRecordServiceSQL(db, logger)
queue = DownloadQueueService()
queue.start()
installer = ModelInstallService(app_config=config,
                                record_store=record_store,
                                download_queue=queue
                                )
installer.start()
```
@ -1367,12 +1366,20 @@ the in-memory loaded model:
| `model` | AnyModel | The instantiated model (details below) |
| `locker` | ModelLockerBase | A context manager that mediates the movement of the model into VRAM |
Because the loader can return multiple model types, it is typed to
return `AnyModel`, a Union of `ModelMixin`, `torch.nn.Module`,
`IAIOnnxRuntimeModel`, `IPAdapter`, `IPAdapterPlus`, and
`EmbeddingModelRaw`. `ModelMixin` is the base class of all diffusers
models, `EmbeddingModelRaw` is used for LoRA and TextualInversion
models. The others are obvious.
### get_model_by_key(key, [submodel]) -> LoadedModel
The `get_model_by_key()` method will retrieve the model using its
unique database key. For example:
```
loaded_model = loader.get_model_by_key('f13dd932c0c35c22dcb8d6cda4203764', SubModelType('vae'))
```
`get_model_by_key()` may raise any of the following exceptions:
* `UnknownModelException` -- key not in database
* `ModelNotFoundException` -- key in database but model not found at path
* `NotImplementedException` -- the loader doesn't know how to load this type of model
### Using the Loaded Model in Inference
`LoadedModel` acts as a context manager. The context loads the model
into the execution device (e.g. VRAM on CUDA systems), locks the model
@ -1380,17 +1387,33 @@ in the execution device for the duration of the context, and returns
the model. Use it like this:
```
model_info = loader.get_model_by_key('f13dd932c0c35c22dcb8d6cda4203764', SubModelType('vae'))
with model_info as vae:
loaded_model = loader.get_model_by_key('f13dd932c0c35c22dcb8d6cda4203764', SubModelType('vae'))
with loaded_model as vae:
image = vae.decode(latents)[0]
```
`get_model_by_key()` may raise any of the following exceptions:
The object returned by the LoadedModel context manager is an
`AnyModel`, which is a Union of `ModelMixin`, `torch.nn.Module`,
`IAIOnnxRuntimeModel`, `IPAdapter`, `IPAdapterPlus`, and
`EmbeddingModelRaw`. `ModelMixin` is the base class of all diffusers
models, `EmbeddingModelRaw` is used for LoRA and TextualInversion
models. The others are obvious.
In addition, you may call `LoadedModel.model_on_device()`, a context
manager that returns a tuple of the model's state dict in CPU and the
model itself in VRAM. It is used to optimize the LoRA patching and
unpatching process:
```
loaded_model = loader.get_model_by_key('f13dd932c0c35c22dcb8d6cda4203764', SubModelType('vae'))
with loaded_model.model_on_device() as (state_dict, vae):
image = vae.decode(latents)[0]
```
Since not all models have state dicts, the `state_dict` return value
can be None.
* `UnknownModelException` -- key not in database
* `ModelNotFoundException` -- key in database but model not found at path
* `NotImplementedException` -- the loader doesn't know how to load this type of model
### Emitting model loading events
When the `context` argument is passed to `load_model_*()`, it will
@ -1578,3 +1601,59 @@ This method takes a model key, looks it up using the
`ModelRecordServiceBase` object in `mm.store`, and passes the returned
model configuration to `load_model_by_config()`. It may raise a
`NotImplementedException`.
## Invocation Context Model Manager API
Within invocations, the following methods are available from the
`InvocationContext` object:
### context.download_and_cache_model(source) -> Path
This method accepts a `source` of a remote model, downloads and caches
it locally, and then returns a Path to the local model. The source can
be a direct download URL or a HuggingFace repo_id.
In the case of HuggingFace repo_id, the following variants are
recognized:
* stabilityai/stable-diffusion-v4 -- default model
* stabilityai/stable-diffusion-v4:fp16 -- fp16 variant
* stabilityai/stable-diffusion-v4:fp16:vae -- the fp16 vae subfolder
* stabilityai/stable-diffusion-v4:onnx:vae -- the onnx variant vae subfolder
You can also point at an arbitrary individual file within a repo_id
directory using this syntax:
* stabilityai/stable-diffusion-v4::/checkpoints/sd4.safetensors
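For instance, inside an invocation's `invoke()` method (using the hypothetical repo_id from the list above):

```python
model_path = context.download_and_cache_model("stabilityai/stable-diffusion-v4:fp16")
print(model_path)  # local Path to the cached fp16 variant
```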
### context.load_local_model(model_path, [loader]) -> LoadedModel
This method loads a local model from the indicated path, returning a
`LoadedModel`. The optional loader is a Callable that accepts a Path
to the object, and returns a `AnyModel` object. If no loader is
provided, then the method will use `torch.load()` for a .ckpt or .bin
checkpoint file, `safetensors.torch.load_file()` for a safetensors
checkpoint file, or `cls.from_pretrained()` for a directory that looks
like a diffusers directory.
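A sketch of supplying a custom loader (the path and loader body are illustrative, and `context` is the `InvocationContext` described above):

```python
from pathlib import Path

import torch

def my_loader(path: Path):
    # Hypothetical loader for a raw torch checkpoint.
    return torch.load(path, map_location="cpu")

loaded_model = context.load_local_model(Path("/opt/models/custom.ckpt"), my_loader)
with loaded_model as model:
    ...  # the model is locked into the execution device inside this block
```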
### context.load_remote_model(source, [loader]) -> LoadedModel
This method accepts a `source` of a remote model, downloads and caches
it locally, loads it, and returns a `LoadedModel`. The source can be a
direct download URL or a HuggingFace repo_id.
In the case of HuggingFace repo_id, the following variants are
recognized:
* stabilityai/stable-diffusion-v4 -- default model
* stabilityai/stable-diffusion-v4:fp16 -- fp16 variant
* stabilityai/stable-diffusion-v4:fp16:vae -- the fp16 vae subfolder
* stabilityai/stable-diffusion-v4:onnx:vae -- the onnx variant vae subfolder
You can also point at an arbitrary individual file within a repo_id
directory using this syntax:
* stabilityai/stable-diffusion-v4::/checkpoints/sd4.safetensors
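As with the local variant, the returned `LoadedModel` is a context manager; a minimal sketch (the repo_id is the hypothetical one from the list above):

```python
with context.load_remote_model("stabilityai/stable-diffusion-v4:fp16") as model:
    ...  # downloaded and cached on first use, loaded to the execution device here
```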

View File

@ -117,13 +117,13 @@ Stateless fields do not store their value in the node, so their field instances
"Custom" fields will always be treated as stateless fields.
##### Collection and Scalar Fields
##### Single and Collection Fields
Field types have a name and two flags which may identify it as a **collection** or **collection or scalar** field.
Field types have a name and a cardinality property which identifies them as a **SINGLE**, **COLLECTION** or **SINGLE_OR_COLLECTION** field.
If a field is annotated in python as a list, its field type is parsed and flagged as a **collection** type (e.g. `list[int]`).
If it is annotated as a union of a type and list, the type will be flagged as a **collection or scalar** type (e.g. `Union[int, list[int]]`). Fields may not be unions of different types (e.g. `Union[int, list[str]]` and `Union[int, str]` are not allowed).
- If a field is annotated in python as a singular value or class, its field type is parsed as a **SINGLE** type (e.g. `int`, `ImageField`, `str`).
- If a field is annotated in python as a list, its field type is parsed as a **COLLECTION** type (e.g. `list[int]`).
- If it is annotated as a union of a type and list, the type will be parsed as a **SINGLE_OR_COLLECTION** type (e.g. `Union[int, list[int]]`). Fields may not be unions of different types (e.g. `Union[int, list[str]]` and `Union[int, str]` are not allowed).
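For example, the three cardinalities correspond to Python annotations like these (the class and field names are illustrative):

```python
from typing import Union

class ExampleInvocation:
    width: int                                # SINGLE
    widths: list[int]                         # COLLECTION
    width_or_widths: Union[int, list[int]]    # SINGLE_OR_COLLECTION
```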
## Implementation
@ -173,8 +173,7 @@ Field types are represented as structured objects:
```ts
type FieldType = {
name: string;
isCollection: boolean;
isCollectionOrScalar: boolean;
cardinality: 'SINGLE' | 'COLLECTION' | 'SINGLE_OR_COLLECTION';
};
```
@ -186,7 +185,7 @@ There are 4 general cases for field type parsing.
When a field is annotated as a primitive values (e.g. `int`, `str`, `float`), the field type parsing is fairly straightforward. The field is represented by a simple OpenAPI **schema object**, which has a `type` property.
We create a field type name from this `type` string (e.g. `string` -> `StringField`).
We create a field type name from this `type` string (e.g. `string` -> `StringField`). The cardinality is `"SINGLE"`.
##### Complex Types
@ -200,13 +199,13 @@ We need to **dereference** the schema to pull these out. Dereferencing may requi
When a field is annotated as a list of a single type, the schema object has an `items` property. They may be a schema object or reference object and must be parsed to determine the item type.
We use the item type for field type name, adding `isCollection: true` to the field type.
We use the item type for field type name. The cardinality is `"COLLECTION"`.
##### Collection or Scalar Types
##### Single or Collection Types
When a field is annotated as a union of a type and list of that type, the schema object has an `anyOf` property, which holds a list of valid types for the union.
After verifying that the union has two members (a type and list of the same type), we use the type for field type name, adding `isCollectionOrScalar: true` to the field type.
After verifying that the union has two members (a type and list of the same type), we use the type for field type name, with cardinality `"SINGLE_OR_COLLECTION"`.
##### Optional Fields
@ -300,7 +299,7 @@ Migration logic is in [migrations.ts].
[pydantic]: https://github.com/pydantic/pydantic 'pydantic'
[zod]: https://github.com/colinhacks/zod 'zod'
[openapi-types]: https://github.com/kogosoftwarellc/open-api/tree/main/packages/openapi-types 'openapi-types'
[reactflow]: https://github.com/xyflow/xyflow 'reactflow'
[reactflow]: https://github.com/xyflow/xyflow '@xyflow/react'
[reactflow-concepts]: https://reactflow.dev/learn/concepts/terms-and-definitions
[reactflow-events]: https://reactflow.dev/api-reference/react-flow#event-handlers
[buildWorkflow.ts]: https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/src/features/nodes/util/workflow/buildWorkflow.ts

View File

@ -51,13 +51,11 @@ The settings in this file will override the defaults. You only need
to change this file if the default for a particular setting doesn't
work for you.
You'll find an example file next to `invokeai.yaml` that shows the default values.
Some settings, like [Model Marketplace API Keys], require the YAML
to be formatted correctly. Here is a [basic guide to YAML files].
You can fix a broken `invokeai.yaml` by deleting it and running the
configuration script again -- option [6] in the launcher, "Re-run the
configure script".
#### Custom Config File Location
You can use any config file with the `--config` CLI arg. Pass in the path to the `invokeai.yaml` file you want to use.

View File

@ -165,7 +165,7 @@ Additionally, each section can be expanded with the "Show Advanced" button in o
There are several ways to install IP-Adapter models with an existing InvokeAI installation:
1. Through the command line interface launched from the invoke.sh / invoke.bat scripts, option [4] to download models.
2. Through the Model Manager UI with models from the *Tools* section of [www.models.invoke.ai](https://www.models.invoke.ai). To do this, copy the repo ID from the desired model page, and paste it in the Add Model field of the model manager. **Note** Both the IP-Adapter and the Image Encoder must be installed for IP-Adapter to work. For example, the [SD 1.5 IP-Adapter](https://models.invoke.ai/InvokeAI/ip_adapter_plus_sd15) and [SD1.5 Image Encoder](https://models.invoke.ai/InvokeAI/ip_adapter_sd_image_encoder) must be installed to use IP-Adapter with SD1.5 based models.
2. Through the Model Manager UI with models from the *Tools* section of [models.invoke.ai](https://models.invoke.ai). To do this, copy the repo ID from the desired model page, and paste it in the Add Model field of the model manager. **Note** Both the IP-Adapter and the Image Encoder must be installed for IP-Adapter to work. For example, the [SD 1.5 IP-Adapter](https://models.invoke.ai/InvokeAI/ip_adapter_plus_sd15) and [SD1.5 Image Encoder](https://models.invoke.ai/InvokeAI/ip_adapter_sd_image_encoder) must be installed to use IP-Adapter with SD1.5 based models.
3. **Advanced -- Not recommended** Manually downloading the IP-Adapter and Image Encoder files - Image Encoder folders should be placed in the `models\any\clip_vision` folders. IP Adapter Model folders should be placed in the relevant `ip-adapter` folder of the relevant base model folder of the Invoke root directory. For example, for the SDXL IP-Adapter, files should be added to the `model/sdxl/ip_adapter/` folder.
#### Using IP-Adapter

92
docs/features/GALLERY.md Normal file
View File

@ -0,0 +1,92 @@
---
title: InvokeAI Gallery Panel
---
# :material-web: InvokeAI Gallery Panel
## Quick guided walkthrough of the Gallery Panel's features
The Gallery Panel is a fast way to review, find, and make use of images you've
generated and loaded. The Gallery is divided into Boards. The Uncategorized board is always
present but you can create your own for better organization.
![image](../assets/gallery/gallery.png)
### Board Display and Settings
At the very top of the Gallery Panel are the boards disclosure and settings buttons.
![image](../assets/gallery/top_controls.png)
The disclosure button shows the name of the currently selected board and allows you to show and hide the board thumbnails (shown in the image below).
![image](../assets/gallery/board_thumbnails.png)
The settings button opens a list of options.
![image](../assets/gallery/board_settings.png)
- ***Image Size*** this slider lets you control the size of the image previews (images of three different sizes).
- ***Auto-Switch to New Images*** if you turn this on, whenever a new image is generated, it will automatically be loaded into the current image panel on the Text to Image tab and into the result panel on the [Image to Image](IMG2IMG.md) tab. This will happen invisibly if you are on any other tab when the image is generated.
- ***Auto-Assign Board on Click*** whenever an image is generated or saved, it always gets put in a board. The board it gets put into is marked with AUTO (image of board marked). Turning on Auto-Assign Board on Click will make whichever board you last selected be the destination when you click Invoke. That means you can click Invoke, select a different board, and then click Invoke again and the two images will be put in two different boards. **It's the board selected when Invoke is clicked that's used, not the board that's selected when the image is finished generating.** Turning this off enables the Auto-Add Board drop-down, which lets you set one specific board to always put generated images into. This also enables and disables the Auto-add to this Board menu item described below.
- ***Always Show Image Size Badge*** this toggles whether to show image sizes for each image preview (show two images, one with sizes shown, one without)
Below these two buttons, you'll see the Search Boards text entry area. Use this to search for specific boards by name.
Next to it is the Add Board (+) button which lets you add new boards. Boards can be renamed by clicking on the name of the board under its thumbnail and typing in the new name.
### Board Thumbnail Menu
Each board has a context menu (ctrl+click / right-click).
![image](../assets/gallery/thumbnail_menu.png)
- ***Auto-add to this Board*** if you've disabled Auto-Assign Board on Click in the board settings, you can use this option to set this board to be where new images are put.
- ***Download Board*** this will add all the images in the board into a zip file and provide a link to it in a notification (image of notification)
- ***Delete Board*** this will delete the board
> [!CAUTION]
> This will delete all the images in the board and the board itself.
### Board Contents
Every board is organized into two tabs: Images and Assets.
![image](../assets/gallery/board_tabs.png)
Images are the Invoke-generated images that are placed into the board. Assets are images that you upload into Invoke to be used as an [Image Prompt](https://support.invoke.ai/support/solutions/articles/151000159340-using-the-image-prompt-adapter-ip-adapter-) or in the [Image to Image](IMG2IMG.md) tab.
### Image Thumbnail Menu
Every image generated by Invoke has its generation information stored as text inside the image file itself. This can be read directly by selecting the image and clicking on the Info button ![image](../assets/gallery/info_button.png) in any of the image result panels.
Each image also has a context menu (ctrl+click / right-click).
![image](../assets/gallery/image_menu.png)
The options are (items marked with an * will not work with images that lack generation information):
- ***Open in New Tab*** this will open the image alone in a new browser tab, separate from the Invoke interface.
- ***Download Image*** this will trigger your browser to download the image.
- ***Load Workflow **** this will load any workflow settings into the Workflow tab and automatically open it.
- ***Remix Image **** this will load all of the image's generation information, **excluding its Seed**, into the left-hand control panel
- ***Use Prompt **** this will load only the image's text prompts into the left-hand control panel
- ***Use Seed **** this will load only the image's Seed into the left-hand control panel
- ***Use All **** this will load all of the image's generation information into the left-hand control panel
- ***Send to Image to Image*** this will put the image into the left-hand panel in the Image to Image tab and automatically open it
- ***Send to Unified Canvas*** this will **replace whatever is already present** in the Unified Canvas tab with the image and automatically open the tab
- ***Change Board*** this will open a small window that will let you move the image to a different board. This is the same as dragging the image to that board's thumbnail.
- ***Star Image*** this will add the image to the board's list of starred images that are always kept at the top of the gallery. This is the same as clicking on the star on the top right-hand side of the image that appears when you hover over the image with the mouse.
- ***Delete Image*** this will delete the image from the board
> [!CAUTION]
> This will delete the image entirely from Invoke.
## Summary
This walkthrough only covers the Gallery interface and Boards. Actually generating images is handled by [Prompts](PROMPTS.md), the [Image to Image](IMG2IMG.md) tab, and the [Unified Canvas](UNIFIED_CANVAS.md).
## Acknowledgements
A huge shout-out to the core team working to make the Web GUI a reality,
including [psychedelicious](https://github.com/psychedelicious),
[Kyle0654](https://github.com/Kyle0654) and
[blessedcoolant](https://github.com/blessedcoolant).
[hipsterusername](https://github.com/hipsterusername) was the team's unofficial
cheerleader and added tooltips/docs.

View File

@ -108,40 +108,6 @@ Can be used with .and():
Each will give you different results - try them out and see what you prefer!
### Cross-Attention Control ('prompt2prompt')
Sometimes an image you generate is almost right, and you just want to change one
detail without affecting the rest. You could use a photo editor and inpainting
to overpaint the area, but that's a pain. Here's where `prompt2prompt` comes in
handy.
Generate an image with a given prompt, record the seed of the image, and then
use the `prompt2prompt` syntax to substitute words in the original prompt for
words in a new prompt. This works for `img2img` as well.
For example, consider the prompt `a cat.swap(dog) playing with a ball in the forest`. Normally, because the words interact with each other when doing a stable diffusion image generation, these two prompts would generate different compositions:
- `a cat playing with a ball in the forest`
- `a dog playing with a ball in the forest`
| `a cat playing with a ball in the forest` | `a dog playing with a ball in the forest` |
| --- | --- |
| img | img |
- For multiple word swaps, use parentheses: `a (fluffy cat).swap(barking dog) playing with a ball in the forest`.
- To swap a comma, use quotes: `a ("fluffy, grey cat").swap("big, barking dog") playing with a ball in the forest`.
- Supports options `t_start` and `t_end` (each 0-1) loosely corresponding to [bloc97's](https://github.com/bloc97/CrossAttentionControl) `prompt_edit_tokens_start/_end`, but with the math swapped to make it easier to
intuitively understand. `t_start` and `t_end` are used to control on which steps cross-attention control should run. With the default values `t_start=0` and `t_end=1`, cross-attention control is active on every step of image generation. Other values can be used to turn cross-attention control off for part of the image generation process.
- For example, for a 10-step generation with the prompt `a cat.swap(dog, t_start=0.3, t_end=1.0) playing with a ball in the forest`, the first 3 steps will be run as `a cat playing with a ball in the forest`, while the last 7 steps will run as `a dog playing with a ball in the forest`, but the pixels that represent `dog` will be locked to the pixels that would have represented `cat` if the `cat` prompt had been used instead.
- Conversely, for `a cat.swap(dog, t_start=0, t_end=0.7) playing with a ball in the forest`, the first 7 steps will run as `a dog playing with a ball in the forest` with the pixels that represent `dog` locked to the same pixels that would have represented `cat` if the `cat` prompt was being used instead. The final 3 steps will just run `a cat playing with a ball in the forest`.
> For img2img, the step sequence does not start at 0 but instead at `(1.0-strength)` - so if the img2img `strength` is `0.7`, `t_start` and `t_end` must both be greater than `0.3` (`1.0-0.7`) to have any effect.
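A minimal sketch of the arithmetic above (not Invoke's actual implementation; the boundary handling at `t_start`/`t_end` is an assumption here):

```python
# Sketch only: which steps have cross-attention control active for given
# t_start/t_end values, including the img2img offset described above.
def swap_active(step: int, total_steps: int, t_start: float, t_end: float,
                strength: float = 1.0) -> bool:
    t0 = 1.0 - strength  # img2img effectively starts the schedule at (1.0 - strength)
    t = t0 + (1.0 - t0) * step / total_steps
    return t_start <= t < t_end  # boundary semantics assumed

# 10 steps, a cat.swap(dog, t_start=0.3, t_end=1.0):
print([s for s in range(10) if swap_active(s, 10, 0.3, 1.0)])
# [3, 4, 5, 6, 7, 8, 9] -> the last 7 steps render with "dog"
```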
Prompt2prompt `.swap()` is not compatible with xformers, which will be temporarily disabled when doing a `.swap()` - so you should expect to use more VRAM and run slower than with xformers enabled.
The `prompt2prompt` code is based on
[bloc97's colab](https://github.com/bloc97/CrossAttentionControl).
### Escaping parentheses and speech marks
If the model you are using has parentheses () or speech marks "" as part of its

View File

@ -4,278 +4,6 @@ title: Training
# :material-file-document: Training
# Textual Inversion Training
## **Personalizing Text-to-Image Generation**
Invoke Training has moved to its own repository, with a dedicated UI for accessing common scripts like Textual Inversion and LoRA training.
You may personalize the generated images to provide your own styles or objects
by training a new LDM checkpoint and introducing a new vocabulary to the fixed
model as a (.pt) embeddings file. Alternatively, you may use or train
HuggingFace Concepts embeddings files (.bin) from
<https://huggingface.co/sd-concepts-library> and its associated
notebooks.
## **Hardware and Software Requirements**
You will need a GPU to perform training in a reasonable length of
time, and at least 12 GB of VRAM. We recommend using the [`xformers`
library](../installation/070_INSTALL_XFORMERS.md) to accelerate the
training process further. During training, about 8 GB is temporarily
needed to store intermediate models, checkpoints and logs.
## **Preparing for Training**
To train, prepare a folder that contains 3-5 images that illustrate
the object or concept. It is good to provide a variety of examples or
poses to avoid overtraining the system. Format these images as PNG
(preferred) or JPG. You do not need to resize or crop the images in
advance, but for more control you may wish to do so.
Place the training images in a directory on the machine InvokeAI runs
on. We recommend placing them in a subdirectory of the
`text-inversion-training-data` folder located in the InvokeAI root
directory, ordinarily `~/invokeai` (Linux/Mac), or
`C:\Users\your_name\invokeai` (Windows). For example, to create an
embedding for the "psychedelic" style, you'd place the training images
into the directory
`~/invokeai/text-inversion-training-data/psychedelic`.
## **Launching Training Using the Console Front End**
InvokeAI 2.3 and higher comes with a text console-based training front
end. From within the `invoke.sh`/`invoke.bat` Invoke launcher script,
start the training tool by selecting choice (3):
```sh
1 "Generate images with a browser-based interface"
2 "Explore InvokeAI nodes using a command-line interface"
3 "Textual inversion training"
4 "Merge models (diffusers type only)"
5 "Download and install models"
6 "Change InvokeAI startup options"
7 "Re-run the configure script to fix a broken install or to complete a major upgrade"
8 "Open the developer console"
9 "Update InvokeAI"
```
Alternatively, you can select option (8) to open the developer console; from the command line,
with the InvokeAI virtual environment active, you can launch the front end with the command `invokeai-ti --gui`.
This will launch a text-based front end that will look like this:
<figure markdown>
![ti-frontend](../assets/textual-inversion/ti-frontend.png)
</figure>
The interface is keyboard-based. Move from field to field using
control-N (^N) to move to the next field and control-P (^P) to the
previous one. <Tab> and <shift-TAB> work as well. Once a field is
active, use the cursor keys. In a checkbox group, use the up and down
cursor keys to move from choice to choice, and <space> to select a
choice. In a scrollbar, use the left and right cursor keys to increase
and decrease the value of the scroll. In textfields, type the desired
values.
The number of parameters may look intimidating, but in most cases the
predefined defaults work fine. The red circled fields in the above
illustration are the ones you will adjust most frequently.
### Model Name
This will list all the diffusers models that are currently
installed. Select the one you wish to use as the basis for your
embedding. Be aware that if you use an SD-1.X-based model for your
training, you will only be able to use this embedding with other
SD-1.X-based models. Similarly, if you train on SD-2.X, you will only
be able to use the embeddings with models based on SD-2.X.
### Trigger Term
This is the prompt term you will use to trigger the embedding. Type a
single word or phrase you wish to use as the trigger, for example
"psychedelic" (without angle brackets). Within InvokeAI, you will then
be able to activate the trigger using the syntax `<psychedelic>`.
### Initializer
This is a single character that is used internally during the training
process as a placeholder for the trigger term. It defaults to "*" and
can usually be left alone.
### Resume from last saved checkpoint
As training proceeds, textual inversion will write a series of
intermediate files that can be used to resume training from where it
was left off in the case of an interruption. This checkbox will be
automatically selected if you provide a previously used trigger term
and at least one checkpoint file is found on disk.
Note that as of 20 January 2023, resume does not seem to be working
properly due to an issue with the upstream code.
### Data Training Directory
This is the location of the images to be used for training. When you
select a trigger term like "my-trigger", the frontend will prepopulate
this field with `~/invokeai/text-inversion-training-data/my-trigger`,
but you can change the path to wherever you want.
### Output Destination Directory
This is the location of the logs, checkpoint files, and embedding
files created during training. When you select a trigger term like
"my-trigger", the frontend will prepopulate this field with
`~/invokeai/text-inversion-output/my-trigger`, but you can change the
path to wherever you want.
### Image resolution
The images in the training directory will be automatically scaled to
the value you use here. For best results, you will want to use the
same default resolution as the underlying model (512 pixels for
SD-1.5, 768 for the larger version of SD-2.1).
### Center crop images
If this is selected, your images will be center cropped to make them
square before resizing them to the desired resolution. Center cropping
can indiscriminately cut off the top of subjects' heads for portrait
aspect images, so if you have images like this, you may wish to use a
photo editor to manually crop them to a square aspect ratio.
### Mixed precision
Select the floating point precision for the embedding. "no" will
result in full 32-bit precision, "fp16" will provide 16-bit
precision, and "bf16" will provide mixed precision (only available
when XFormers is used).
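For reference, these options correspond roughly to the standard torch dtypes; this mapping is a sketch, and the training script's internal handling may differ:

```python
import torch

# Rough mapping of the "Mixed precision" choices to torch dtypes (assumed).
PRECISION_DTYPES = {
    "no": torch.float32,    # full 32-bit precision
    "fp16": torch.float16,  # 16-bit half precision
    "bf16": torch.bfloat16, # bfloat16 mixed precision
}
```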
### Max training steps
How many steps the training will take before the model converges. Most
training sets will converge with 2000-3000 steps.
### Batch size
This adjusts how many training images are processed simultaneously in
each step. Higher values will cause the training process to run more
quickly, but use more memory. The default size will run on GPUs with
as little as 12 GB of VRAM.
### Learning rate
The rate at which the system adjusts its internal weights during
training. Higher values risk overtraining (getting the same image each
time), and lower values will take more steps to train a good
model. The default of 0.0005 is conservative; you may wish to increase
it to 0.005 to speed up training.
### Scale learning rate by number of GPUs, steps and batch size
If this is selected (the default) the system will adjust the provided
learning rate to improve performance.
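The underlying Hugging Face script applies a scaling of this form when the option is enabled (a sketch of the formula; exact behavior may vary by version):

```python
# Sketch of learning-rate scaling; values are the defaults discussed above.
learning_rate = 0.0005
train_batch_size = 8
gradient_accumulation_steps = 4
num_gpus = 1

scaled_lr = learning_rate * gradient_accumulation_steps * train_batch_size * num_gpus
print(scaled_lr)  # 0.016
```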
### Use xformers acceleration
This will activate XFormers memory-efficient attention. You need to
have XFormers installed for this to have an effect.
### Learning rate scheduler
This adjusts how the learning rate changes over the course of
training. The default "constant" means to use a constant learning rate
for the entire training session. The other values scale the learning
rate according to various formulas.
Only "constant" is supported by the XFormers library.
### Gradient accumulation steps
This is a parameter that allows you to use bigger batch sizes than
your GPU's VRAM would ordinarily accommodate, at the cost of some
performance.
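In other words, the effective batch size per weight update is the product of the two settings; a quick sketch:

```python
# Effective batch size under gradient accumulation (simple arithmetic).
train_batch_size = 8
gradient_accumulation_steps = 4
effective_batch_size = train_batch_size * gradient_accumulation_steps
print(effective_batch_size)  # 32 images contribute to each weight update
```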
### Warmup steps
If "constant_with_warmup" is selected in the learning rate scheduler,
then this provides the number of warmup steps. Warmup steps have a
very low learning rate, and are one way of preventing early
overtraining.
## The training run
Start the training run by advancing to the OK button (bottom right)
and pressing <enter>. A series of progress messages will be displayed
as the training process proceeds. This may take an hour or two,
depending on settings and the speed of your system. Various log and
checkpoint files will be written into the output directory (ordinarily
`~/invokeai/text-inversion-output/my-model/`).
At the end of successful training, the system will copy the file
`learned_embeds.bin` into the InvokeAI root directory's `embeddings`
directory, using a subdirectory named after the trigger token. For
example, if the trigger token was `psychedelic`, then look for the
embeddings file in
`~/invokeai/embeddings/psychedelic/learned_embeds.bin`.
You may now launch InvokeAI and try out a prompt that uses the trigger
term. For example `a plate of banana sushi in <psychedelic> style`.
## **Training with the Command-Line Script**
Training can also be done using a traditional command-line script. It
can be launched from within the "developer's console", or from the
command line after activating InvokeAI's virtual environment.
It accepts a large number of arguments, which can be summarized by
passing the `--help` argument:
```sh
invokeai-ti --help
```
Typical usage is shown here:
```sh
invokeai-ti \
--model=stable-diffusion-1.5 \
--resolution=512 \
--learnable_property=style \
--initializer_token='*' \
--placeholder_token='<psychedelic>' \
--train_data_dir=/home/lstein/invokeai/training-data/psychedelic \
--output_dir=/home/lstein/invokeai/text-inversion-training/psychedelic \
--scale_lr \
--train_batch_size=8 \
--gradient_accumulation_steps=4 \
--max_train_steps=3000 \
--learning_rate=0.0005 \
--resume_from_checkpoint=latest \
--lr_scheduler=constant \
--mixed_precision=fp16 \
--only_save_embeds
```
## Troubleshooting
### `Cannot load embedding for <trigger>. It was trained on a model with token dimension 1024, but the current model has token dimension 768`
Messages like this indicate you trained the embedding on a different base model than the currently selected one.
For example, in the error above, the training was done on SD2.1 (768x768) but it was used on SD1.5 (512x512).
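If you're unsure which base model an embedding was trained on, one way to check is to inspect the token dimension of the saved tensors directly (a sketch; assumes a diffusers-style `learned_embeds.bin`, and the file path is an example):

```python
import torch

# Print the trailing dimension of each saved embedding tensor:
# 768 -> SD-1.x, 1024 -> SD-2.x (file path is an example).
data = torch.load("learned_embeds.bin", map_location="cpu")
for token, tensor in data.items():
    print(token, tensor.shape[-1])
```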
## Reading
For more information on textual inversion, please see the following
resources:
* The [textual inversion repository](https://github.com/rinongal/textual_inversion) and
associated paper for details and limitations.
* [HuggingFace's textual inversion training
page](https://huggingface.co/docs/diffusers/training/text_inversion)
* [HuggingFace example script
documentation](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion)
(Note that this script is similar to, but not identical to,
`textual_inversion`, and produces embedding files that are completely compatible.)
---
copyright (c) 2023, Lincoln Stein and the InvokeAI Development Team
You can find more by visiting the repo at https://github.com/invoke-ai/invoke-training

View File

@ -54,7 +54,7 @@ main sections:
of buttons at the top lets you modify and manipulate the image in
various ways.
3. A **gallery** section on the left that contains a history of the images you
3. A **gallery** section on the right that contains a history of the images you
have generated. These images are read and written to the directory specified
in the `INVOKEAI_ROOT/invokeai.yaml` initialization file, usually a directory
named `outputs` in `INVOKEAI_ROOT`.

View File

@ -18,12 +18,47 @@ Note that any releases marked as _pre-release_ are in a beta state. You may expe
The Model Manager tab in the UI provides a few ways to install models, including using your already-downloaded models. You'll see a popup directing you there on first startup. For more information, see the [model install docs].
## Missing models after updating to v4
If you find some models are missing after updating to v4, it's likely they weren't correctly registered before the update and didn't get picked up in the migration.
You can use the `Scan Folder` tab in the Model Manager UI to fix this. The models will either be in the old, now-unused `autoimport` folder, or your `models` folder.
- Find and copy your install's old `autoimport` folder path, inside the main install folder.
- Go to the Model Manager and click `Scan Folder`.
- Paste the path and scan.
- IMPORTANT: Uncheck `Inplace install`.
- Click `Install All` to install all found models, or just install the models you want.
Next, find and copy your install's `models` folder path (this could be your custom models folder path, or the `models` folder inside the main install folder).
Follow the same steps to scan and import the missing models.
## Slow generation
- Check the [system requirements] to ensure that your system is capable of generating images.
- Check the `ram` setting in `invokeai.yaml`. This setting tells Invoke how much of your system RAM can be used to cache models. Having this too high or too low can slow things down. That said, it's generally safest to not set this at all and instead let Invoke manage it.
- Check the `vram` setting in `invokeai.yaml`. This setting tells Invoke how much of your GPU VRAM can be used to cache models. Counter-intuitively, if this setting is too high, Invoke will need to do a lot of shuffling of models as it juggles the VRAM cache and the currently-loaded model. The default value of 0.25 generally works well for GPUs without 16GB or more of VRAM. Even on a 24GB card, the default works well.
- Check that your generations are happening on your GPU (if you have one). InvokeAI will log what is being used for generation upon startup. If your GPU isn't used, re-install to ensure the correct versions of torch get installed.
- If you are on Windows, you may have exceeded your GPU's VRAM capacity and are using slower [shared GPU memory](#shared-gpu-memory-windows). There's a guide to opt out of this behaviour in the linked FAQ entry.
## Shared GPU Memory (Windows)
!!! tip "Nvidia GPUs with driver 536.40"
This only applies to current Nvidia cards with driver 536.40 or later, released in June 2023.
When the GPU doesn't have enough VRAM for a task, Windows is able to allocate some of its CPU RAM to the GPU. This is much slower than VRAM, but it does allow the system to generate when it otherwise might not have enough VRAM.
When shared GPU memory is used, generation slows down dramatically - but at least it doesn't crash.
If you'd like to opt out of this behavior and instead get an error when you exceed your GPU's VRAM, follow [this guide from Nvidia](https://nvidia.custhelp.com/app/answers/detail/a_id/5490).
Here's how to get the python path required in the linked guide:
- Run `invoke.bat`.
- Select option 2 for developer console.
- At least one python path will be printed. Copy the path that includes your invoke installation directory (typically the first).
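Alternatively, once the developer console is open, the interpreter can print its own path (a one-liner sketch; run it with your install's `python`):

```python
# Prints the path to paste into Nvidia's guide.
import sys
print(sys.executable)
```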
## Installer cannot find python (Windows)
@ -119,6 +154,18 @@ This is caused by an invalid setting in the `invokeai.yaml` configuration file.
Check the [configuration docs] for more detail about the settings and how to specify them.
## `ModuleNotFoundError: No module named 'controlnet_aux'`
`controlnet_aux` is a dependency of Invoke and appears to have been packaged or distributed strangely. Sometimes, it doesn't install correctly. This is outside our control.
If you encounter this error, the solution is to remove the package from the `pip` cache and re-run the Invoke installer so a fresh, working version of `controlnet_aux` can be downloaded and installed:
- Run the Invoke launcher
- Choose the developer console option
- Run this command: `pip cache remove controlnet_aux`
- Close the terminal window
- Download and run the [installer](https://github.com/invoke-ai/InvokeAI/releases/latest), selecting your current install location
## Out of Memory Issues
The models are large, VRAM is expensive, and you may find yourself

View File

@ -20,7 +20,7 @@ When you generate an image using text-to-image, multiple steps occur in latent s
4. The VAE decodes the final latent image from latent space into image space.
Image-to-image is a similar process, with only step 1 being different:
1. The input image is encoded from image space into latent space by the VAE. Noise is then added to the input latent image. Denoising Strength dictates how may noise steps are added, and the amount of noise added at each step. A Denoising Strength of 0 means there are 0 steps and no noise added, resulting in an unchanged image, while a Denoising Strength of 1 results in the image being completely replaced with noise and a full set of denoising steps are performance. The process is then the same as steps 2-4 in the text-to-image process.
1. The input image is encoded from image space into latent space by the VAE. Noise is then added to the input latent image. Denoising Strength dictates how many noise steps are added, and the amount of noise added at each step. A Denoising Strength of 0 means there are 0 steps and no noise added, resulting in an unchanged image, while a Denoising Strength of 1 results in the image being completely replaced with noise and a full set of denoising steps being performed. The process is then the same as steps 2-4 in the text-to-image process.
Furthermore, a model provides the CLIP prompt tokenizer, the VAE, and a U-Net (where noise prediction occurs given a prompt and initial noise tensor).
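To make the strength-to-steps relationship concrete, here is a minimal sketch (illustrative arithmetic only, not Invoke's scheduler code):

```python
# How Denoising Strength maps to the number of denoising steps actually run.
def img2img_steps(total_steps: int, strength: float) -> int:
    # strength=0 -> 0 steps (unchanged image); strength=1 -> all steps run.
    return round(total_steps * strength)

print(img2img_steps(30, 0.0))  # 0
print(img2img_steps(30, 0.7))  # 21
print(img2img_steps(30, 1.0))  # 30
```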

View File

@ -1,8 +1,10 @@
# Automatic Install
# Automatic Install & Updates
The installer is used for both new installs and updates.
**The same packaged installer file can be used for both new installs and updates.**
Using the installer for updates will leave everything you've added since installation in place, and just update the core libraries used to run Invoke.
Simply use the same path you installed to originally.
Both release and pre-release versions can be installed using it. It also supports install a wheel if needed.
Both release and pre-release versions can be installed using the installer. It also supports installing from a wheel if needed.
Be sure to review the [installation requirements] and ensure your system has everything it needs to install Invoke.
@ -44,7 +46,7 @@ The installation process is simple, with a few prompts:
- Select the version to install. Unless you have a specific reason to install a specific version, select the default (the latest version).
- Select location for the install. Be sure you have enough space in this folder for the base application, as described in the [installation requirements].
- Select a GPU device. If you are unsure, you can let the installer figure it out.
- Select a GPU device.
!!! info "Slow Installation"
@ -96,7 +98,7 @@ Updating is exactly the same as installing - download the latest installer, choo
If you have installation issues, please review the [FAQ]. You can also [create an issue] or ask for help on [discord].
[installation requirements]: INSTALLATION.md#installation-requirements
[installation requirements]: INSTALL_REQUIREMENTS.md
[FAQ]: ../help/FAQ.md
[install some models]: 050_INSTALLING_MODELS.md
[configuration docs]: ../features/CONFIGURATION.md

View File

@ -6,15 +6,11 @@
## Introduction
!!! tip "Conda"
As of InvokeAI v2.3.0 installation using the `conda` package manager is no longer being supported. It will likely still work, but we are not testing this installation method.
InvokeAI is distributed as a python package on PyPI, installable with `pip`. There are a few things that are handled by the installer that you'll need to manage manually, described in this guide.
InvokeAI is distributed as a python package on PyPI, installable with `pip`. There are a few things that are handled by the installer and launcher that you'll need to manage manually, described in this guide.
### Requirements
Before you start, go through the [installation requirements].
Before you start, go through the [installation requirements](./INSTALL_REQUIREMENTS.md).
### Installation Walkthrough
@ -40,11 +36,11 @@ Before you start, go through the [installation requirements].
1. Enter the root (invokeai) directory and create a virtual Python environment within it named `.venv`.
!!! info "Virtual Environment Location"
!!! warning "Virtual Environment Location"
While you may create the virtual environment anywhere in the file system, we recommend that you create it within the root directory as shown here. This allows the application to automatically detect its data directories.
If you choose a different location for the venv, then you must set the `INVOKEAI_ROOT` environment variable or pass the directory using the `--root` CLI arg.
If you choose a different location for the venv, then you _must_ set the `INVOKEAI_ROOT` environment variable or specify the root directory using the `--root` CLI arg.
```terminal
cd $INVOKEAI_ROOT
@ -81,31 +77,23 @@ Before you start, go through the [installation requirements].
python3 -m pip install --upgrade pip
```
1. Install the InvokeAI Package. The `--extra-index-url` option is used to select the correct `torch` backend:
1. Install the InvokeAI Package. The base command is `pip install InvokeAI --use-pep517`, but you may need to change this depending on your system and the desired features.
=== "CUDA (NVidia)"
- You may need to provide an [extra index URL](https://pip.pypa.io/en/stable/cli/pip_install/#cmdoption-extra-index-url). Select your platform configuration using [this tool on the PyTorch website](https://pytorch.org/get-started/locally/). Copy the `--extra-index-url` string from this and append it to your install command.
```bash
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
```
!!! example "Install with an extra index URL"
=== "ROCm (AMD)"
```bash
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
```
```bash
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.6
```
- If you have a CUDA GPU and want to install with `xformers`, you need to add an option to the package name. Note that `xformers` is not necessary. PyTorch includes an implementation of the SDP attention algorithm with the same performance.
=== "CPU (Intel Macs & non-GPU systems)"
!!! example "Install with `xformers`"
```bash
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu
```
=== "MPS (Apple Silicon)"
```bash
pip install InvokeAI --use-pep517
```
```bash
pip install "InvokeAI[xformers]" --use-pep517
```
1. Deactivate and reactivate your virtual environment so that the invokeai-specific commands become available in the environment:
@ -126,37 +114,6 @@ Before you start, go through the [installation requirements].
Run `invokeai-web` to start the UI. You must activate the virtual environment before running the app.
If the virtual environment you selected is NOT inside `INVOKEAI_ROOT`, then you must specify the path to the root directory by adding
`--root_dir \path\to\invokeai`.
!!! warning
!!! tip
You can permanently set the location of the runtime directory
by setting the environment variable `INVOKEAI_ROOT` to the
path of the directory. As mentioned previously, this is
recommended if your virtual environment is located outside of
your runtime directory.
## Unsupported Conda Install
Congratulations, you found the "secret" Conda installation instructions. If you really **really** want to use Conda with InvokeAI, you can do so using this unsupported recipe:
```sh
mkdir ~/invokeai
conda create -n invokeai python=3.11
conda activate invokeai
# Adjust this as described above for the appropriate torch backend
pip install "InvokeAI[xformers]" --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121
invokeai-web --root ~/invokeai
```
The `pip install` command shown in this recipe is for Linux/Windows
systems with an NVIDIA GPU. See step (6) above for the command to use
with other platforms/GPU combinations. If you don't wish to pass the
`--root` argument to `invokeai` with each launch, you may set the
environment variable `INVOKEAI_ROOT` to point to the installation directory.
Note that if you run into problems with the Conda installation, the InvokeAI
staff will **not** be able to help you out. Caveat Emptor!
[installation requirements]: INSTALL_REQUIREMENTS.md
If the virtual environment is _not_ inside the root directory, then you _must_ specify the path to the root directory with `--root \path\to\invokeai` or the `INVOKEAI_ROOT` environment variable.
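For example, either mechanism can be exercised like this (a sketch; the path below is an example, not a required location):

```python
import os
import subprocess

# Point invokeai-web at a root directory when the venv lives elsewhere.
root = "/home/user/invokeai"
subprocess.run(["invokeai-web", "--root", root])  # via the CLI arg
# or, equivalently, via the environment variable:
# subprocess.run(["invokeai-web"], env={**os.environ, "INVOKEAI_ROOT": root})
```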

View File

@ -4,50 +4,37 @@ title: Installing with Docker
# :fontawesome-brands-docker: Docker
!!! warning "macOS and AMD GPU Users"
!!! warning "macOS users"
We highly recommend to Install InvokeAI locally using [these instructions](INSTALLATION.md),
because Docker containers can not access the GPU on macOS.
!!! warning "AMD GPU Users"
Container support for AMD GPUs has been reported to work by the community, but has not received
extensive testing. Please make sure to set the `GPU_DRIVER=rocm` environment variable (see below), and
use the `build.sh` script to build the image for this to take effect at build time.
Docker can not access the GPU on macOS, so your generation speeds will be slow. [Install InvokeAI](INSTALLATION.md) instead.
!!! tip "Linux and Windows Users"
For optimal performance, configure your Docker daemon to access your machine's GPU.
Configure Docker to access your machine's GPU.
Docker Desktop on Windows [includes GPU support](https://www.docker.com/blog/wsl-2-gpu-support-for-docker-desktop-on-nvidia-gpus/).
Linux users should install and configure the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
## Why containers?
They provide a flexible, reliable way to build and deploy InvokeAI.
See [Processes](https://12factor.net/processes) under the Twelve-Factor App
methodology for details on why running applications in such a stateless fashion is important.
The container is configured for CUDA by default, but can be built to support AMD GPUs
by setting the `GPU_DRIVER=rocm` environment variable at Docker image build time.
Developers on Apple silicon (M1/M2/M3): You
[can't access your GPU cores from Docker containers](https://github.com/pytorch/pytorch/issues/81224)
and performance is reduced compared with running it directly on macOS but for
development purposes it's fine. Once you're done with development tasks on your
laptop you can build for the target platform and architecture and deploy to
another environment with NVIDIA GPUs on-premises or in the cloud.
Linux users should follow the [NVIDIA](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) or [AMD](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/docker.html) documentation.
## TL;DR
This assumes properly configured Docker on Linux or Windows/WSL2. Read on for detailed customization options.
Ensure your Docker setup is able to use your GPU. Then:
```bash
docker run --runtime=nvidia --gpus=all --publish 9090:9090 ghcr.io/invoke-ai/invokeai
```
Once the container starts up, open http://localhost:9090 in your browser, install some models, and start generating.
## Build-It-Yourself
All the docker materials are located inside the [docker](https://github.com/invoke-ai/InvokeAI/tree/main/docker) directory in the Git repo.
```bash
# docker compose commands should be run from the `docker` directory
cd docker
cp .env.sample .env
docker compose up
```
## Installation in a Linux container (desktop)
We also ship the `run.sh` convenience script. See the `docker/README.md` file for detailed instructions on how to customize the docker setup to your needs.
### Prerequisites
@ -58,18 +45,9 @@ Preferences, Resources, Advanced. Increase the CPUs and Memory to avoid this
[Issue](https://github.com/invoke-ai/InvokeAI/issues/342). You may need to
increase Swap and Disk image size too.
#### Get a Huggingface-Token
Besides the Docker Agent you will need an Account on
[huggingface.co](https://huggingface.co/join).
After you succesfully registered your account, go to
[huggingface.co/settings/tokens](https://huggingface.co/settings/tokens), create
a token and copy it, since you will need in for the next step.
### Setup
Set up your environmnent variables. In the `docker` directory, make a copy of `.env.sample` and name it `.env`. Make changes as necessary.
Set up your environment variables. In the `docker` directory, make a copy of `.env.sample` and name it `.env`. Make changes as necessary.
Any environment variables supported by InvokeAI can be set here - please see the [CONFIGURATION](../features/CONFIGURATION.md) for further detail.
@ -103,10 +81,9 @@ Once the container starts up (and configures the InvokeAI root directory if this
## Troubleshooting / FAQ
- Q: I am running on Windows under WSL2, and am seeing a "no such file or directory" error.
- A: Your `docker-entrypoint.sh` file likely has Windows (CRLF) as opposed to Unix (LF) line endings,
and you may have cloned this repository before the issue was fixed. To solve this, please change
the line endings in the `docker-entrypoint.sh` file to `LF`. You can do this in VSCode
- A: Your `docker-entrypoint.sh` might have Windows (CRLF) line endings, depending on how you cloned the repository.
To solve this, change the line endings in the `docker-entrypoint.sh` file to `LF`. You can do this in VSCode
(`Ctrl+P` and search for "line endings"), or by using the `dos2unix` utility in WSL.
Finally, you may delete `docker-entrypoint.sh` followed by `git pull; git checkout docker/docker-entrypoint.sh`
to reset the file to its most recent version.
For more information on this issue, please see the [Docker Desktop documentation](https://docs.docker.com/desktop/troubleshoot/topics/#avoid-unexpected-syntax-errors-use-unix-style-line-endings-for-files-in-containers)
For more information on this issue, see [Docker Desktop documentation](https://docs.docker.com/desktop/troubleshoot/topics/#avoid-unexpected-syntax-errors-use-unix-style-line-endings-for-files-in-containers)

View File

@ -1,4 +1,4 @@
# Installation Overview
# Installation and Updating Overview
Before installing, review the [installation requirements] to ensure your system is set up properly.
@ -6,14 +6,21 @@ See the [FAQ] for frequently-encountered installation issues.
If you need more help, join our [discord] or [create an issue].
<h2>Automatic Install</h2>
<h2>Automatic Install & Updates</h2>
✅ The automatic install is the best way to run InvokeAI. Check out the [installation guide] to get started.
⬆️ The same installer is also the best way to update InvokeAI - simply rerun it, selecting the same folder you installed to.
The installation process simply manages installation of the core libraries & application dependencies that run Invoke.
Any models, images, or other assets in the Invoke root folder won't be affected by the installation process.
<h2>Manual Install</h2>
If you are familiar with python and want more control over the packages that are installed, you can [install InvokeAI manually via PyPI].
Updates are managed by reinstalling the latest version through PyPI.
<h2>Developer Install</h2>
If you want to contribute to InvokeAI, consult the [developer install guide].

View File

@ -23,6 +23,7 @@ If you have an interest in how InvokeAI works, or you would like to add features
1. [Fork and clone] the [InvokeAI repo].
1. Follow the [manual installation] docs to create a new virtual environment for the development install.
- Create a new folder outside the repo root for the installation and create the venv inside that folder.
- When installing the InvokeAI package, add `-e` to the command so you get an [editable install].
1. Install the [frontend dev toolchain] and do a production build of the UI as described.
1. You can now run the app as described in the [manual installation] docs.
@ -32,5 +33,5 @@ As described in the [frontend dev toolchain] docs, you can run the UI using a de
[Fork and clone]: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/fork-a-repo
[InvokeAI repo]: https://github.com/invoke-ai/InvokeAI
[frontend dev toolchain]: ../contributing/frontend/OVERVIEW.md
[manual installation]: installation/020_INSTALL_MANUAL.md
[manual installation]: ./020_INSTALL_MANUAL.md
[editable install]: https://pip.pypa.io/en/latest/cli/pip_install/#cmdoption-e

View File

@ -37,13 +37,13 @@ Invoke runs best with a dedicated GPU, but will fall back to running on CPU, alb
=== "Nvidia"
```
Any GPU with at least 8GB VRAM. Linux only.
Any GPU with at least 8GB VRAM.
```
=== "AMD"
```
Any GPU with at least 16GB VRAM.
Any GPU with at least 16GB VRAM. Linux only.
```
=== "Mac"

View File

@ -3,6 +3,7 @@
InvokeAI installer script
"""
import locale
import os
import platform
import re
@ -316,7 +317,9 @@ def upgrade_pip(venv_path: Path) -> str | None:
python = str(venv_path.expanduser().resolve() / python)
try:
result = subprocess.check_output([python, "-m", "pip", "install", "--upgrade", "pip"]).decode()
result = subprocess.check_output([python, "-m", "pip", "install", "--upgrade", "pip"]).decode(
encoding=locale.getpreferredencoding()
)
except subprocess.CalledProcessError as e:
print(e)
result = None
@ -404,22 +407,29 @@ def get_torch_source() -> Tuple[str | None, str | None]:
# device can be one of: "cuda", "rocm", "cpu", "cuda_and_dml", "autodetect"
device = select_gpu()
# The correct extra index URLs for torch are inconsistent, see https://pytorch.org/get-started/locally/#start-locally
url = None
optional_modules = "[onnx]"
optional_modules: str | None = None
if OS == "Linux":
if device.value == "rocm":
url = "https://download.pytorch.org/whl/rocm5.6"
elif device.value == "cpu":
url = "https://download.pytorch.org/whl/cpu"
elif device.value == "cuda":
# CUDA uses the default PyPi index
optional_modules = "[xformers,onnx-cuda]"
elif OS == "Windows":
if device.value == "cuda":
url = "https://download.pytorch.org/whl/cu121"
optional_modules = "[xformers,onnx-cuda]"
if device.value == "cuda_and_dml":
url = "https://download.pytorch.org/whl/cu121"
optional_modules = "[xformers,onnx-directml]"
elif device.value == "cpu":
# CPU uses the default PyPi index, no optional modules
pass
elif OS == "Darwin":
# macOS uses the default PyPi index, no optional modules
pass
# in all other cases, Torch wheels should be coming from PyPi as of Torch 1.13
# Fall back to defaults
return (url, optional_modules)
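# Illustration (not part of the installer): how the returned pair might be
# assembled into a pip command. Names and values below are examples only.
def example_pip_command() -> str:
    url = "https://download.pytorch.org/whl/cu121"
    optional_modules = "[xformers,onnx-cuda]"
    cmd = ["pip", "install", f"InvokeAI{optional_modules or ''}", "--use-pep517"]
    if url:
        cmd += ["--extra-index-url", url]
    return " ".join(cmd)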

View File

@ -207,10 +207,8 @@ def dest_path(dest: Optional[str | Path] = None) -> Path | None:
class GpuType(Enum):
CUDA = "cuda"
CUDA_AND_DML = "cuda_and_dml"
ROCM = "rocm"
CPU = "cpu"
AUTODETECT = "autodetect"
def select_gpu() -> GpuType:
@ -226,10 +224,6 @@ def select_gpu() -> GpuType:
"an [gold1 b]NVIDIA[/] GPU (using CUDA™)",
GpuType.CUDA,
)
nvidia_with_dml = (
"an [gold1 b]NVIDIA[/] GPU (using CUDA™, and DirectML™ for ONNX) -- ALPHA",
GpuType.CUDA_AND_DML,
)
amd = (
"an [gold1 b]AMD[/] GPU (using ROCm™)",
GpuType.ROCM,
@ -238,27 +232,19 @@ def select_gpu() -> GpuType:
"Do not install any GPU support, use CPU for generation (slow)",
GpuType.CPU,
)
autodetect = (
"I'm not sure what to choose",
GpuType.AUTODETECT,
)
options = []
if OS == "Windows":
options = [nvidia, nvidia_with_dml, cpu]
options = [nvidia, cpu]
if OS == "Linux":
options = [nvidia, amd, cpu]
elif OS == "Darwin":
options = [cpu]
# future CoreML?
if len(options) == 1:
print(f'Your platform [gold1]{OS}-{ARCH}[/] only supports the "{options[0][1]}" driver. Proceeding with that.')
return options[0][1]
# "I don't know" is always added the last option
options.append(autodetect) # type: ignore
options = {str(i): opt for i, opt in enumerate(options, 1)}
console.rule(":space_invader: GPU (Graphics Card) selection :space_invader:")
@ -292,11 +278,6 @@ def select_gpu() -> GpuType:
),
)
if options[choice][1] is GpuType.AUTODETECT:
console.print(
"No problem. We will install CUDA support first :crossed_fingers: If Invoke does not detect a GPU, please re-run the installer and select one of the other GPU types."
)
return options[choice][1]

View File

@ -10,11 +10,10 @@ set INVOKEAI_ROOT=.
echo Desired action:
echo 1. Generate images with the browser-based interface
echo 2. Open the developer console
echo 3. Run the InvokeAI image database maintenance script
echo 4. Command-line help
echo 3. Command-line help
echo Q - Quit
echo.
echo To update, download and run the installer from https://github.com/invoke-ai/InvokeAI/releases/latest.
echo To update, download and run the installer from https://github.com/invoke-ai/InvokeAI/releases/latest
echo.
set /P choice="Please enter 1-3, Q: [1] "
if not defined choice set choice=1
@ -34,9 +33,6 @@ IF /I "%choice%" == "1" (
echo *** Type `exit` to quit this shell and deactivate the Python virtual environment ***
call cmd /k
) ELSE IF /I "%choice%" == "3" (
echo Running the db maintenance script...
python .venv\Scripts\invokeai-db-maintenance.exe
) ELSE IF /I "%choice%" == "4" (
echo Displaying command line help...
python .venv\Scripts\invokeai-web.exe --help %*
pause

View File

@ -47,11 +47,6 @@ do_choice() {
bash --init-file "$file_name"
;;
3)
clear
printf "Running the db maintenance script\n"
invokeai-db-maintenance --root ${INVOKEAI_ROOT}
;;
4)
clear
printf "Command-line help\n"
invokeai-web --help
@ -71,8 +66,7 @@ do_line_input() {
printf "What would you like to do?\n"
printf "1: Generate images using the browser-based interface\n"
printf "2: Open the developer console\n"
printf "3: Run the InvokeAI image database maintenance script\n"
printf "4: Command-line help\n"
printf "3: Command-line help\n"
printf "Q: Quit\n\n"
printf "To update, download and run the installer from https://github.com/invoke-ai/InvokeAI/releases/latest.\n\n"
read -p "Please enter 1-4, Q: [1] " yn

View File

@ -4,37 +4,39 @@ from logging import Logger
import torch
from invokeai.app.services.board_image_records.board_image_records_sqlite import SqliteBoardImageRecordStorage
from invokeai.app.services.board_images.board_images_default import BoardImagesService
from invokeai.app.services.board_records.board_records_sqlite import SqliteBoardRecordStorage
from invokeai.app.services.boards.boards_default import BoardService
from invokeai.app.services.bulk_download.bulk_download_default import BulkDownloadService
from invokeai.app.services.config.config_default import InvokeAIAppConfig
from invokeai.app.services.download.download_default import DownloadQueueService
from invokeai.app.services.events.events_fastapievents import FastAPIEventService
from invokeai.app.services.image_files.image_files_disk import DiskImageFileStorage
from invokeai.app.services.image_records.image_records_sqlite import SqliteImageRecordStorage
from invokeai.app.services.images.images_default import ImageService
from invokeai.app.services.invocation_cache.invocation_cache_memory import MemoryInvocationCache
from invokeai.app.services.invocation_services import InvocationServices
from invokeai.app.services.invocation_stats.invocation_stats_default import InvocationStatsService
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.model_images.model_images_default import ModelImageFileStorageDisk
from invokeai.app.services.model_manager.model_manager_default import ModelManagerService
from invokeai.app.services.model_records.model_records_sql import ModelRecordServiceSQL
from invokeai.app.services.names.names_default import SimpleNameService
from invokeai.app.services.object_serializer.object_serializer_disk import ObjectSerializerDisk
from invokeai.app.services.object_serializer.object_serializer_forward_cache import ObjectSerializerForwardCache
from invokeai.app.services.session_processor.session_processor_default import (
DefaultSessionProcessor,
DefaultSessionRunner,
)
from invokeai.app.services.session_queue.session_queue_sqlite import SqliteSessionQueue
from invokeai.app.services.shared.sqlite.sqlite_util import init_db
from invokeai.app.services.urls.urls_default import LocalUrlService
from invokeai.app.services.workflow_records.workflow_records_sqlite import SqliteWorkflowRecordsStorage
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningFieldData
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.version.invokeai_version import __version__
from ..services.board_image_records.board_image_records_sqlite import SqliteBoardImageRecordStorage
from ..services.board_images.board_images_default import BoardImagesService
from ..services.board_records.board_records_sqlite import SqliteBoardRecordStorage
from ..services.boards.boards_default import BoardService
from ..services.bulk_download.bulk_download_default import BulkDownloadService
from ..services.config import InvokeAIAppConfig
from ..services.download import DownloadQueueService
from ..services.image_files.image_files_disk import DiskImageFileStorage
from ..services.image_records.image_records_sqlite import SqliteImageRecordStorage
from ..services.images.images_default import ImageService
from ..services.invocation_cache.invocation_cache_memory import MemoryInvocationCache
from ..services.invocation_services import InvocationServices
from ..services.invocation_stats.invocation_stats_default import InvocationStatsService
from ..services.invoker import Invoker
from ..services.model_images.model_images_default import ModelImageFileStorageDisk
from ..services.model_manager.model_manager_default import ModelManagerService
from ..services.model_records import ModelRecordServiceSQL
from ..services.names.names_default import SimpleNameService
from ..services.session_processor.session_processor_default import DefaultSessionProcessor
from ..services.session_queue.session_queue_sqlite import SqliteSessionQueue
from ..services.urls.urls_default import LocalUrlService
from ..services.workflow_records.workflow_records_sqlite import SqliteWorkflowRecordsStorage
from .events import FastAPIEventService
# TODO: is there a better way to achieve this?
def check_internet() -> bool:
@ -93,17 +95,17 @@ class ApiDependencies:
conditioning = ObjectSerializerForwardCache(
ObjectSerializerDisk[ConditioningFieldData](output_folder / "conditioning", ephemeral=True)
)
download_queue_service = DownloadQueueService(event_bus=events)
download_queue_service = DownloadQueueService(app_config=configuration, event_bus=events)
model_images_service = ModelImageFileStorageDisk(model_images_folder / "model_images")
model_manager = ModelManagerService.build_model_manager(
app_config=configuration,
model_record_service=ModelRecordServiceSQL(db=db),
model_record_service=ModelRecordServiceSQL(db=db, logger=logger),
download_queue=download_queue_service,
events=events,
)
names = SimpleNameService()
performance_statistics = InvocationStatsService()
session_processor = DefaultSessionProcessor()
session_processor = DefaultSessionProcessor(session_runner=DefaultSessionRunner())
session_queue = SqliteSessionQueue(db=db)
urls = LocalUrlService()
workflow_records = SqliteWorkflowRecordsStorage(db=db)

View File

@ -1,52 +0,0 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
import asyncio
import threading
from queue import Empty, Queue
from typing import Any
from fastapi_events.dispatcher import dispatch
from ..services.events.events_base import EventServiceBase
class FastAPIEventService(EventServiceBase):
event_handler_id: int
__queue: Queue
__stop_event: threading.Event
def __init__(self, event_handler_id: int) -> None:
self.event_handler_id = event_handler_id
self.__queue = Queue()
self.__stop_event = threading.Event()
asyncio.create_task(self.__dispatch_from_queue(stop_event=self.__stop_event))
super().__init__()
def stop(self, *args, **kwargs):
self.__stop_event.set()
self.__queue.put(None)
def dispatch(self, event_name: str, payload: Any) -> None:
self.__queue.put({"event_name": event_name, "payload": payload})
async def __dispatch_from_queue(self, stop_event: threading.Event):
"""Get events on from the queue and dispatch them, from the correct thread"""
while not stop_event.is_set():
try:
event = self.__queue.get(block=False)
if not event: # Probably stopping
continue
dispatch(
event.get("event_name"),
payload=event.get("payload"),
middleware_id=self.event_handler_id,
)
except Empty:
await asyncio.sleep(0.1)
pass
except asyncio.CancelledError as e:
raise e # Raise a proper error

View File

@ -10,15 +10,13 @@ from fastapi import Body
from fastapi.routing import APIRouter
from pydantic import BaseModel, Field
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.invocations.upscale import ESRGAN_MODELS
from invokeai.app.services.invocation_cache.invocation_cache_common import InvocationCacheStatus
from invokeai.backend.image_util.patchmatch import PatchMatch
from invokeai.backend.image_util.safety_checker import SafetyChecker
from invokeai.backend.image_util.infill_methods.patchmatch import PatchMatch
from invokeai.backend.util.logging import logging
from invokeai.version import __version__
from ..dependencies import ApiDependencies
class LogLevel(int, Enum):
NotSet = logging.NOTSET
@ -100,7 +98,7 @@ async def get_app_deps() -> AppDependencyVersions:
@app_router.get("/config", operation_id="get_config", status_code=200, response_model=AppConfig)
async def get_config() -> AppConfig:
infill_methods = ["tile", "lama", "cv2"]
infill_methods = ["tile", "lama", "cv2", "color"] # TODO: add mosaic back
if PatchMatch.patchmatch_available():
infill_methods.append("patchmatch")
@ -109,9 +107,7 @@ async def get_config() -> AppConfig:
upscaling_models.append(str(Path(model).stem))
upscaler = Upscaler(upscaling_method="esrgan", upscaling_models=upscaling_models)
nsfw_methods = []
if SafetyChecker.safety_checker_available():
nsfw_methods.append("nsfw_checker")
nsfw_methods = ["nsfw_checker"]
watermarking_methods = ["invisible_watermark"]

View File

@ -2,7 +2,7 @@ from fastapi import Body, HTTPException
from fastapi.routing import APIRouter
from pydantic import BaseModel, Field
from ..dependencies import ApiDependencies
from invokeai.app.api.dependencies import ApiDependencies
board_images_router = APIRouter(prefix="/v1/board_images", tags=["boards"])

View File

@ -4,12 +4,11 @@ from fastapi import Body, HTTPException, Path, Query
from fastapi.routing import APIRouter
from pydantic import BaseModel, Field
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.services.board_records.board_records_common import BoardChanges
from invokeai.app.services.boards.boards_common import BoardDTO
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from ..dependencies import ApiDependencies
boards_router = APIRouter(prefix="/v1/boards", tags=["boards"])
@ -32,6 +31,7 @@ class DeleteBoardResult(BaseModel):
)
async def create_board(
board_name: str = Query(description="The name of the board to create"),
is_private: bool = Query(default=False, description="Whether the board is private"),
) -> BoardDTO:
"""Creates a board"""
try:
@ -118,15 +118,13 @@ async def list_boards(
all: Optional[bool] = Query(default=None, description="Whether to list all boards"),
offset: Optional[int] = Query(default=None, description="The page offset"),
limit: Optional[int] = Query(default=None, description="The number of boards per page"),
include_archived: bool = Query(default=False, description="Whether or not to include archived boards in list"),
) -> Union[OffsetPaginatedResults[BoardDTO], list[BoardDTO]]:
"""Gets a list of boards"""
if all:
return ApiDependencies.invoker.services.boards.get_all()
return ApiDependencies.invoker.services.boards.get_all(include_archived)
elif offset is not None and limit is not None:
return ApiDependencies.invoker.services.boards.get_many(
offset,
limit,
)
return ApiDependencies.invoker.services.boards.get_many(offset, limit, include_archived)
else:
raise HTTPException(
status_code=400,

View File

@ -8,13 +8,12 @@ from fastapi.routing import APIRouter
from pydantic.networks import AnyHttpUrl
from starlette.exceptions import HTTPException
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.services.download import (
DownloadJob,
UnknownJobIDException,
)
from ..dependencies import ApiDependencies
download_queue_router = APIRouter(prefix="/v1/download_queue", tags=["download_queue"])

View File

@ -6,15 +6,18 @@ from fastapi import BackgroundTasks, Body, HTTPException, Path, Query, Request,
from fastapi.responses import FileResponse
from fastapi.routing import APIRouter
from PIL import Image
from pydantic import BaseModel, Field, ValidationError
from pydantic import BaseModel, Field, JsonValue
from invokeai.app.invocations.fields import MetadataField, MetadataFieldValidator
from invokeai.app.services.image_records.image_records_common import ImageCategory, ImageRecordChanges, ResourceOrigin
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.invocations.fields import MetadataField
from invokeai.app.services.image_records.image_records_common import (
ImageCategory,
ImageRecordChanges,
ResourceOrigin,
)
from invokeai.app.services.images.images_common import ImageDTO, ImageUrlsDTO
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.workflow_records.workflow_records_common import WorkflowWithoutID, WorkflowWithoutIDValidator
from ..dependencies import ApiDependencies
from invokeai.app.services.shared.sqlite.sqlite_common import SQLiteDirection
images_router = APIRouter(prefix="/v1/images", tags=["images"])
@@ -42,13 +45,17 @@ async def upload_image(
board_id: Optional[str] = Query(default=None, description="The board to add this image to, if any"),
session_id: Optional[str] = Query(default=None, description="The session ID associated with this upload, if any"),
crop_visible: Optional[bool] = Query(default=False, description="Whether to crop the image"),
metadata: Optional[JsonValue] = Body(
default=None, description="The metadata to associate with the image", embed=True
),
) -> ImageDTO:
"""Uploads an image"""
if not file.content_type or not file.content_type.startswith("image"):
raise HTTPException(status_code=415, detail="Not an image")
metadata = None
workflow = None
_metadata = None
_workflow = None
_graph = None
contents = await file.read()
try:
@@ -62,22 +69,28 @@ async def upload_image(
# TODO: retain non-invokeai metadata on upload?
# attempt to parse metadata from image
metadata_raw = pil_image.info.get("invokeai_metadata", None)
if metadata_raw:
try:
metadata = MetadataFieldValidator.validate_json(metadata_raw)
except ValidationError:
ApiDependencies.invoker.services.logger.warn("Failed to parse metadata for uploaded image")
pass
metadata_raw = metadata if isinstance(metadata, str) else pil_image.info.get("invokeai_metadata", None)
if isinstance(metadata_raw, str):
_metadata = metadata_raw
else:
ApiDependencies.invoker.services.logger.debug("Failed to parse metadata for uploaded image")
pass
# attempt to parse workflow from image
workflow_raw = pil_image.info.get("invokeai_workflow", None)
if workflow_raw is not None:
try:
workflow = WorkflowWithoutIDValidator.validate_json(workflow_raw)
except ValidationError:
ApiDependencies.invoker.services.logger.warn("Failed to parse metadata for uploaded image")
pass
if isinstance(workflow_raw, str):
_workflow = workflow_raw
else:
ApiDependencies.invoker.services.logger.debug("Failed to parse workflow for uploaded image")
pass
# attempt to extract graph from image
graph_raw = pil_image.info.get("invokeai_graph", None)
if isinstance(graph_raw, str):
_graph = graph_raw
else:
ApiDependencies.invoker.services.logger.debug("Failed to parse graph for uploaded image")
pass
try:
image_dto = ApiDependencies.invoker.services.images.create(
@@ -86,8 +99,9 @@ async def upload_image(
image_category=image_category,
session_id=session_id,
board_id=board_id,
metadata=metadata,
workflow=workflow,
metadata=_metadata,
workflow=_workflow,
graph=_graph,
is_intermediate=is_intermediate,
)
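Two behavioral notes fall out of this hunk: a metadata value supplied with the request body now takes precedence over any invokeai_metadata embedded in the PNG, and metadata, workflow, and graph are all stored as raw JSON strings rather than validated models. A hypothetical client-side sketch; the /upload path, server address, and the multipart encoding of the embedded body field are assumptions:

import json
import requests

url = "http://localhost:9090/api/v1/images/upload"  # assumed endpoint
with open("image.png", "rb") as f:
    resp = requests.post(
        url,
        params={"image_category": "user", "is_intermediate": False},
        files={"file": ("image.png", f, "image/png")},
        # Assumed: the embedded `metadata` body field rides along as form data
        # and overrides any metadata baked into the PNG itself.
        data={"metadata": json.dumps({"positive_prompt": "a cat"})},
    )
print(resp.json()["image_name"])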
@@ -185,14 +199,21 @@ async def get_image_metadata(
raise HTTPException(status_code=404)
class WorkflowAndGraphResponse(BaseModel):
workflow: Optional[str] = Field(description="The workflow used to generate the image, as stringified JSON")
graph: Optional[str] = Field(description="The graph used to generate the image, as stringified JSON")
@images_router.get(
"/i/{image_name}/workflow", operation_id="get_image_workflow", response_model=Optional[WorkflowWithoutID]
"/i/{image_name}/workflow", operation_id="get_image_workflow", response_model=WorkflowAndGraphResponse
)
async def get_image_workflow(
image_name: str = Path(description="The name of image whose workflow to get"),
) -> Optional[WorkflowWithoutID]:
) -> WorkflowAndGraphResponse:
try:
return ApiDependencies.invoker.services.images.get_workflow(image_name)
workflow = ApiDependencies.invoker.services.images.get_workflow(image_name)
graph = ApiDependencies.invoker.services.images.get_graph(image_name)
return WorkflowAndGraphResponse(workflow=workflow, graph=graph)
except Exception:
raise HTTPException(status_code=404)
@@ -212,21 +233,14 @@ async def get_image_workflow(
)
async def get_image_full(
image_name: str = Path(description="The name of full-resolution image file to get"),
) -> FileResponse:
) -> Response:
"""Gets a full-resolution image file"""
try:
path = ApiDependencies.invoker.services.images.get_path(image_name)
if not ApiDependencies.invoker.services.images.validate_path(path):
raise HTTPException(status_code=404)
response = FileResponse(
path,
media_type="image/png",
filename=image_name,
content_disposition_type="inline",
)
with open(path, "rb") as f:
content = f.read()
response = Response(content, media_type="image/png")
response.headers["Cache-Control"] = f"max-age={IMAGE_MAX_AGE}"
return response
except Exception:
@@ -247,15 +261,14 @@ async def get_image_full(
)
async def get_image_thumbnail(
image_name: str = Path(description="The name of thumbnail image file to get"),
) -> FileResponse:
) -> Response:
"""Gets a thumbnail image file"""
try:
path = ApiDependencies.invoker.services.images.get_path(image_name, thumbnail=True)
if not ApiDependencies.invoker.services.images.validate_path(path):
raise HTTPException(status_code=404)
response = FileResponse(path, media_type="image/webp", content_disposition_type="inline")
with open(path, "rb") as f:
content = f.read()
response = Response(content, media_type="image/webp")
response.headers["Cache-Control"] = f"max-age={IMAGE_MAX_AGE}"
return response
except Exception:
@@ -299,16 +312,14 @@ async def list_image_dtos(
),
offset: int = Query(default=0, description="The page offset"),
limit: int = Query(default=10, description="The number of images per page"),
order_dir: SQLiteDirection = Query(default=SQLiteDirection.Descending, description="The order of sort"),
starred_first: bool = Query(default=True, description="Whether to sort by starred images first"),
search_term: Optional[str] = Query(default=None, description="The term to search for"),
) -> OffsetPaginatedResults[ImageDTO]:
"""Gets a list of image DTOs"""
image_dtos = ApiDependencies.invoker.services.images.get_many(
offset,
limit,
image_origin,
categories,
is_intermediate,
board_id,
offset, limit, starred_first, order_dir, image_origin, categories, is_intermediate, board_id, search_term
)
return image_dtos

View File

@@ -3,22 +3,23 @@
import io
import pathlib
import shutil
import traceback
from copy import deepcopy
from typing import Any, Dict, List, Optional
from tempfile import TemporaryDirectory
from typing import List, Optional, Type
from fastapi import Body, Path, Query, Response, UploadFile
from fastapi.responses import FileResponse
from fastapi.responses import FileResponse, HTMLResponse
from fastapi.routing import APIRouter
from PIL import Image
from pydantic import AnyHttpUrl, BaseModel, ConfigDict, Field
from starlette.exceptions import HTTPException
from typing_extensions import Annotated
from invokeai.app.services.model_install import ModelInstallJob
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.services.model_images.model_images_common import ModelImageFileNotFoundException
from invokeai.app.services.model_install.model_install_common import ModelInstallJob
from invokeai.app.services.model_records import (
DuplicateModelException,
InvalidModelException,
ModelRecordChanges,
UnknownModelException,
@@ -29,15 +30,12 @@ from invokeai.backend.model_manager.config import (
MainCheckpointConfig,
ModelFormat,
ModelType,
SubModelType,
)
from invokeai.backend.model_manager.metadata.fetch.huggingface import HuggingFaceMetadataFetch
from invokeai.backend.model_manager.metadata.metadata_base import ModelMetadataWithFiles, UnknownMetadataException
from invokeai.backend.model_manager.search import ModelSearch
from invokeai.backend.model_manager.starter_models import STARTER_MODELS, StarterModel, StarterModelWithoutDependencies
from ..dependencies import ApiDependencies
model_manager_router = APIRouter(prefix="/v2/models", tags=["model_manager"])
# images are immutable; set a high max-age
@@ -52,6 +50,13 @@ class ModelsList(BaseModel):
model_config = ConfigDict(use_enum_values=True)
def add_cover_image_to_model_config(config: AnyModelConfig, dependencies: Type[ApiDependencies]) -> AnyModelConfig:
"""Add a cover image URL to a model configuration."""
cover_image = dependencies.invoker.services.model_images.get_url(config.key)
config.cover_image = cover_image
return config
##############################################################################
# These are example inputs and outputs that are used in places where Swagger
# is unable to generate a correct example.
@@ -118,8 +123,7 @@ async def list_model_records(
record_store.search_by_attr(model_type=model_type, model_name=model_name, model_format=model_format)
)
for model in found_models:
cover_image = ApiDependencies.invoker.services.model_images.get_url(model.key)
model.cover_image = cover_image
model = add_cover_image_to_model_config(model, ApiDependencies)
return ModelsList(models=found_models)
@@ -160,28 +164,13 @@ async def get_model_record(
key: str = Path(description="Key of the model record to fetch."),
) -> AnyModelConfig:
"""Get a model record"""
record_store = ApiDependencies.invoker.services.model_manager.store
try:
config: AnyModelConfig = record_store.get_model(key)
cover_image = ApiDependencies.invoker.services.model_images.get_url(key)
config.cover_image = cover_image
return config
config = ApiDependencies.invoker.services.model_manager.store.get_model(key)
return add_cover_image_to_model_config(config, ApiDependencies)
except UnknownModelException as e:
raise HTTPException(status_code=404, detail=str(e))
# @model_manager_router.get("/summary", operation_id="list_model_summary")
# async def list_model_summary(
# page: int = Query(default=0, description="The page to get"),
# per_page: int = Query(default=10, description="The number of models per page"),
# order_by: ModelRecordOrderBy = Query(default=ModelRecordOrderBy.Default, description="The attribute to order by"),
# ) -> PaginatedResults[ModelSummary]:
# """Gets a page of model summary data."""
# record_store = ApiDependencies.invoker.services.model_manager.store
# results: PaginatedResults[ModelSummary] = record_store.list_models(page=page, per_page=per_page, order_by=order_by)
# return results
class FoundModel(BaseModel):
path: str = Field(description="Path to the model")
is_installed: bool = Field(description="Whether or not the model is already installed")
@@ -219,28 +208,13 @@ async def scan_for_models(
non_core_model_paths = [p for p in found_model_paths if not p.is_relative_to(core_models_path)]
installed_models = ApiDependencies.invoker.services.model_manager.store.search_by_attr()
resolved_installed_model_paths: list[str] = []
installed_model_sources: list[str] = []
# This call lists all installed models.
for model in installed_models:
path = pathlib.Path(model.path)
# If the model has a source, we need to add it to the list of installed sources.
if model.source:
installed_model_sources.append(model.source)
# If the path is not absolute, that means it is in the app models directory, and we need to join it with
# the models path before resolving.
if not path.is_absolute():
resolved_installed_model_paths.append(str(pathlib.Path(models_path, path).resolve()))
continue
resolved_installed_model_paths.append(str(path.resolve()))
scan_results: list[FoundModel] = []
# Check if the model is installed by comparing the resolved paths, appending to the scan result.
# Check if the model is installed by comparing paths, appending to the scan result.
for p in non_core_model_paths:
path = str(p)
is_installed = path in resolved_installed_model_paths or path in installed_model_sources
is_installed = any(str(models_path / m.path) == path for m in installed_models)
found_model = FoundModel(path=path, is_installed=is_installed)
scan_results.append(found_model)
except Exception as e:
@@ -309,14 +283,15 @@ async def update_model_record(
installer = ApiDependencies.invoker.services.model_manager.install
try:
record_store.update_model(key, changes=changes)
model_response: AnyModelConfig = installer.sync_model_path(key)
config = installer.sync_model_path(key)
config = add_cover_image_to_model_config(config, ApiDependencies)
logger.info(f"Updated model: {key}")
except UnknownModelException as e:
raise HTTPException(status_code=404, detail=str(e))
except ValueError as e:
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
return model_response
return config
@model_manager_router.get(
@@ -455,13 +430,11 @@ async def delete_model_image(
async def install_model(
source: str = Query(description="Model source to install, can be a local path, repo_id, or remote URL"),
inplace: Optional[bool] = Query(description="Whether or not to install a local model in place", default=False),
# TODO(MM2): Can we type this?
config: Optional[Dict[str, Any]] = Body(
description="Dict of fields that override auto-probed values in the model config record, such as name, description and prediction_type ",
default=None,
access_token: Optional[str] = Query(description="access token for the remote resource", default=None),
config: ModelRecordChanges = Body(
description="Object containing fields that override auto-probed values in the model config record, such as name, description and prediction_type ",
example={"name": "string", "description": "string"},
),
access_token: Optional[str] = None,
) -> ModelInstallJob:
"""Install a model using a string identifier.
@@ -476,8 +449,9 @@ async def install_model(
- model/name:fp16:path/to/model.safetensors
- model/name::path/to/model.safetensors
`config` is an optional dict containing model configuration values that will override
the ones that are probed automatically.
`config` is a ModelRecordChanges object. Fields in this object will override
the ones that are probed automatically. Pass an empty object to accept
all the defaults.
`access_token` is an optional access token for use with Urls that require
authentication.
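A minimal sketch of a client call under the new signature; the server address is assumed, the repo ID is hypothetical, and an empty config object accepts every probed default:

import requests

resp = requests.post(
    "http://localhost:9090/api/v2/models/install",  # assumed local server
    params={"source": "some-org/some-model", "inplace": False},  # hypothetical repo_id
    json={},  # a ModelRecordChanges object; pass {} to accept the defaults
)
job = resp.json()  # a ModelInstallJob record
print(job["id"], job["status"])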
@@ -512,6 +486,133 @@ async def install_model(
return result
@model_manager_router.get(
"/install/huggingface",
operation_id="install_hugging_face_model",
responses={
201: {"description": "The model is being installed"},
400: {"description": "Bad request"},
409: {"description": "There is already a model corresponding to this path or repo_id"},
},
status_code=201,
response_class=HTMLResponse,
)
async def install_hugging_face_model(
source: str = Query(description="HuggingFace repo_id to install"),
) -> HTMLResponse:
"""Install a Hugging Face model using a string identifier."""
def generate_html(title: str, heading: str, repo_id: str, is_error: bool, message: str | None = "") -> str:
if message:
message = f"<p>{message}</p>"
title_class = "error" if is_error else "success"
return f"""
<html>
<head>
<title>{title}</title>
<style>
body {{
text-align: center;
background-color: hsl(220 12% 10% / 1);
font-family: Helvetica, sans-serif;
color: hsl(220 12% 86% / 1);
}}
.repo-id {{
color: hsl(220 12% 68% / 1);
}}
.error {{
color: hsl(0 42% 68% / 1)
}}
.message-box {{
display: inline-block;
border-radius: 5px;
background-color: hsl(220 12% 20% / 1);
padding: 20px;
padding-inline-start: 30px;
padding-inline-end: 30px;
}}
.container {{
display: flex;
width: 100%;
height: 100%;
align-items: center;
justify-content: center;
}}
a {{
color: inherit
}}
a:visited {{
color: inherit
}}
a:active {{
color: inherit
}}
</style>
</head>
<body style="background-color: hsl(220 12% 10% / 1);">
<div class="container">
<div class="message-box">
<h2 class="{title_class}">{heading}</h2>
{message}
<p class="repo-id">Repo ID: {repo_id}</p>
</div>
</div>
</body>
</html>
"""
try:
metadata = HuggingFaceMetadataFetch().from_id(source)
assert isinstance(metadata, ModelMetadataWithFiles)
except UnknownMetadataException:
title = "Unable to Install Model"
heading = "No HuggingFace repository found with that repo ID."
message = "Ensure the repo ID is correct and try again."
return HTMLResponse(content=generate_html(title, heading, source, True, message), status_code=400)
logger = ApiDependencies.invoker.services.logger
try:
installer = ApiDependencies.invoker.services.model_manager.install
if metadata.is_diffusers:
installer.heuristic_import(
source=source,
inplace=False,
)
elif metadata.ckpt_urls is not None and len(metadata.ckpt_urls) == 1:
installer.heuristic_import(
source=str(metadata.ckpt_urls[0]),
inplace=False,
)
else:
title = "Unable to Install Model"
heading = "This HuggingFace repo has multiple models."
message = "Please use the Model Manager to install this model."
return HTMLResponse(content=generate_html(title, heading, source, True, message), status_code=200)
title = "Model Install Started"
heading = "Your HuggingFace model is installing now."
message = "You can close this tab and check the Model Manager for installation progress."
return HTMLResponse(content=generate_html(title, heading, source, False, message), status_code=201)
except Exception as e:
logger.error(str(e))
title = "Unable to Install Model"
heading = "There was an problem installing this model."
message = 'Please use the Model Manager directly to install this model. If the issue persists, ask for help on <a href="https://discord.gg/ZmtBAhwWhy">discord</a>.'
return HTMLResponse(content=generate_html(title, heading, source, True, message), status_code=500)
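Since the endpoint answers with HTML rather than JSON, it is designed to be opened directly in a browser, for example as the target of a one-click install link. A hypothetical URL against a local server (the repo ID is illustrative):

http://localhost:9090/api/v2/models/install/huggingface?source=some-org/some-model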
@model_manager_router.get(
"/install",
operation_id="list_model_installs",
@@ -629,48 +730,54 @@ async def convert_model(
logger.error(f"The model with key {key} is not a main checkpoint model.")
raise HTTPException(400, f"The model with key {key} is not a main checkpoint model.")
# loading the model will convert it into a cached diffusers file
with TemporaryDirectory(dir=ApiDependencies.invoker.services.configuration.models_path) as tmpdir:
convert_path = pathlib.Path(tmpdir) / pathlib.Path(model_config.path).stem
converted_model = loader.load_model(model_config)
# write the converted file to the convert path
raw_model = converted_model.model
assert hasattr(raw_model, "save_pretrained")
raw_model.save_pretrained(convert_path) # type: ignore
assert convert_path.exists()
# temporarily rename the original safetensors file so that there is no naming conflict
original_name = model_config.name
model_config.name = f"{original_name}.DELETE"
changes = ModelRecordChanges(name=model_config.name)
store.update_model(key, changes=changes)
# install the diffusers
try:
new_key = installer.install_path(
convert_path,
config=ModelRecordChanges(
name=original_name,
description=model_config.description,
hash=model_config.hash,
source=model_config.source,
),
)
except Exception as e:
logger.error(str(e))
store.update_model(key, changes=ModelRecordChanges(name=original_name))
raise HTTPException(status_code=409, detail=str(e))
# Update the model image if the model had one
try:
cc_size = loader.convert_cache.max_size
if cc_size == 0: # temporarily set the convert cache to a positive number so that the cached model is written
loader._convert_cache.max_size = 1.0
loader.load_model(model_config, submodel_type=SubModelType.Scheduler)
finally:
loader._convert_cache.max_size = cc_size
# Get the path of the converted model from the loader
cache_path = loader.convert_cache.cache_path(key)
assert cache_path.exists()
# temporarily rename the original safetensors file so that there is no naming conflict
original_name = model_config.name
model_config.name = f"{original_name}.DELETE"
changes = ModelRecordChanges(name=model_config.name)
store.update_model(key, changes=changes)
# install the diffusers
try:
new_key = installer.install_path(
cache_path,
config={
"name": original_name,
"description": model_config.description,
"hash": model_config.hash,
"source": model_config.source,
},
)
except DuplicateModelException as e:
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
model_image = ApiDependencies.invoker.services.model_images.get(key)
ApiDependencies.invoker.services.model_images.save(model_image, new_key)
ApiDependencies.invoker.services.model_images.delete(key)
except ModelImageFileNotFoundException:
pass
# delete the original safetensors file
installer.delete(key)
# delete the cached version
shutil.rmtree(cache_path)
# delete the temporary directory
# shutil.rmtree(cache_path)
# return the config record for the new diffusers directory
new_config: AnyModelConfig = store.get_model(new_key)
new_config = store.get_model(new_key)
new_config = add_cover_image_to_model_config(new_config, ApiDependencies)
return new_config

View File

@@ -4,6 +4,7 @@ from fastapi import Body, Path, Query
from fastapi.routing import APIRouter
from pydantic import BaseModel
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.services.session_processor.session_processor_common import SessionProcessorStatus
from invokeai.app.services.session_queue.session_queue_common import (
QUEUE_ITEM_STATUS,
@@ -19,8 +20,6 @@ from invokeai.app.services.session_queue.session_queue_common import (
)
from invokeai.app.services.shared.pagination import CursorPaginatedResults
from ..dependencies import ApiDependencies
session_queue_router = APIRouter(prefix="/v1/queue", tags=["queue"])
@@ -203,6 +202,7 @@ async def get_batch_status(
responses={
200: {"model": SessionQueueItem},
},
response_model_exclude_none=True,
)
async def get_queue_item(
queue_id: str = Path(description="The queue id to perform this operation on"),

View File

@@ -1,66 +1,125 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from typing import Any
from fastapi import FastAPI
from fastapi_events.handlers.local import local_handler
from fastapi_events.typing import Event
from pydantic import BaseModel
from socketio import ASGIApp, AsyncServer
from ..services.events.events_base import EventServiceBase
from invokeai.app.services.events.events_common import (
BatchEnqueuedEvent,
BulkDownloadCompleteEvent,
BulkDownloadErrorEvent,
BulkDownloadEventBase,
BulkDownloadStartedEvent,
DownloadCancelledEvent,
DownloadCompleteEvent,
DownloadErrorEvent,
DownloadEventBase,
DownloadProgressEvent,
DownloadStartedEvent,
FastAPIEvent,
InvocationCompleteEvent,
InvocationDenoiseProgressEvent,
InvocationErrorEvent,
InvocationStartedEvent,
ModelEventBase,
ModelInstallCancelledEvent,
ModelInstallCompleteEvent,
ModelInstallDownloadProgressEvent,
ModelInstallDownloadsCompleteEvent,
ModelInstallErrorEvent,
ModelInstallStartedEvent,
ModelLoadCompleteEvent,
ModelLoadStartedEvent,
QueueClearedEvent,
QueueEventBase,
QueueItemStatusChangedEvent,
register_events,
)
class QueueSubscriptionEvent(BaseModel):
"""Event data for subscribing to the socket.io queue room.
This is a pydantic model to ensure the data is in the correct format."""
queue_id: str
class BulkDownloadSubscriptionEvent(BaseModel):
"""Event data for subscribing to the socket.io bulk downloads room.
This is a pydantic model to ensure the data is in the correct format."""
bulk_download_id: str
QUEUE_EVENTS = {
InvocationStartedEvent,
InvocationDenoiseProgressEvent,
InvocationCompleteEvent,
InvocationErrorEvent,
QueueItemStatusChangedEvent,
BatchEnqueuedEvent,
QueueClearedEvent,
}
MODEL_EVENTS = {
DownloadCancelledEvent,
DownloadCompleteEvent,
DownloadErrorEvent,
DownloadProgressEvent,
DownloadStartedEvent,
ModelLoadStartedEvent,
ModelLoadCompleteEvent,
ModelInstallDownloadProgressEvent,
ModelInstallDownloadsCompleteEvent,
ModelInstallStartedEvent,
ModelInstallCompleteEvent,
ModelInstallCancelledEvent,
ModelInstallErrorEvent,
}
BULK_DOWNLOAD_EVENTS = {BulkDownloadStartedEvent, BulkDownloadCompleteEvent, BulkDownloadErrorEvent}
class SocketIO:
__sio: AsyncServer
__app: ASGIApp
_sub_queue = "subscribe_queue"
_unsub_queue = "unsubscribe_queue"
__sub_queue: str = "subscribe_queue"
__unsub_queue: str = "unsubscribe_queue"
__sub_bulk_download: str = "subscribe_bulk_download"
__unsub_bulk_download: str = "unsubscribe_bulk_download"
_sub_bulk_download = "subscribe_bulk_download"
_unsub_bulk_download = "unsubscribe_bulk_download"
def __init__(self, app: FastAPI):
self.__sio = AsyncServer(async_mode="asgi", cors_allowed_origins="*")
self.__app = ASGIApp(socketio_server=self.__sio, socketio_path="/ws/socket.io")
app.mount("/ws", self.__app)
self._sio = AsyncServer(async_mode="asgi", cors_allowed_origins="*")
self._app = ASGIApp(socketio_server=self._sio, socketio_path="/ws/socket.io")
app.mount("/ws", self._app)
self.__sio.on(self.__sub_queue, handler=self._handle_sub_queue)
self.__sio.on(self.__unsub_queue, handler=self._handle_unsub_queue)
local_handler.register(event_name=EventServiceBase.queue_event, _func=self._handle_queue_event)
local_handler.register(event_name=EventServiceBase.model_event, _func=self._handle_model_event)
self._sio.on(self._sub_queue, handler=self._handle_sub_queue)
self._sio.on(self._unsub_queue, handler=self._handle_unsub_queue)
self._sio.on(self._sub_bulk_download, handler=self._handle_sub_bulk_download)
self._sio.on(self._unsub_bulk_download, handler=self._handle_unsub_bulk_download)
self.__sio.on(self.__sub_bulk_download, handler=self._handle_sub_bulk_download)
self.__sio.on(self.__unsub_bulk_download, handler=self._handle_unsub_bulk_download)
local_handler.register(event_name=EventServiceBase.bulk_download_event, _func=self._handle_bulk_download_event)
register_events(QUEUE_EVENTS, self._handle_queue_event)
register_events(MODEL_EVENTS, self._handle_model_event)
register_events(BULK_DOWNLOAD_EVENTS, self._handle_bulk_image_download_event)
async def _handle_queue_event(self, event: Event):
await self.__sio.emit(
event=event[1]["event"],
data=event[1]["data"],
room=event[1]["data"]["queue_id"],
)
async def _handle_sub_queue(self, sid: str, data: Any) -> None:
await self._sio.enter_room(sid, QueueSubscriptionEvent(**data).queue_id)
async def _handle_sub_queue(self, sid, data, *args, **kwargs) -> None:
if "queue_id" in data:
await self.__sio.enter_room(sid, data["queue_id"])
async def _handle_unsub_queue(self, sid: str, data: Any) -> None:
await self._sio.leave_room(sid, QueueSubscriptionEvent(**data).queue_id)
async def _handle_unsub_queue(self, sid, data, *args, **kwargs) -> None:
if "queue_id" in data:
await self.__sio.leave_room(sid, data["queue_id"])
async def _handle_sub_bulk_download(self, sid: str, data: Any) -> None:
await self._sio.enter_room(sid, BulkDownloadSubscriptionEvent(**data).bulk_download_id)
async def _handle_model_event(self, event: Event) -> None:
await self.__sio.emit(event=event[1]["event"], data=event[1]["data"])
async def _handle_unsub_bulk_download(self, sid: str, data: Any) -> None:
await self._sio.leave_room(sid, BulkDownloadSubscriptionEvent(**data).bulk_download_id)
async def _handle_bulk_download_event(self, event: Event):
await self.__sio.emit(
event=event[1]["event"],
data=event[1]["data"],
room=event[1]["data"]["bulk_download_id"],
)
async def _handle_queue_event(self, event: FastAPIEvent[QueueEventBase]):
await self._sio.emit(event=event[0], data=event[1].model_dump(mode="json"), room=event[1].queue_id)
async def _handle_sub_bulk_download(self, sid, data, *args, **kwargs):
if "bulk_download_id" in data:
await self.__sio.enter_room(sid, data["bulk_download_id"])
async def _handle_model_event(self, event: FastAPIEvent[ModelEventBase | DownloadEventBase]) -> None:
await self._sio.emit(event=event[0], data=event[1].model_dump(mode="json"))
async def _handle_unsub_bulk_download(self, sid, data, *args, **kwargs):
if "bulk_download_id" in data:
await self.__sio.leave_room(sid, data["bulk_download_id"])
async def _handle_bulk_image_download_event(self, event: FastAPIEvent[BulkDownloadEventBase]) -> None:
await self._sio.emit(event=event[0], data=event[1].model_dump(mode="json"), room=event[1].bulk_download_id)

View File

@@ -3,9 +3,7 @@ import logging
import mimetypes
import socket
from contextlib import asynccontextmanager
from inspect import signature
from pathlib import Path
from typing import Any
import torch
import uvicorn
@@ -13,26 +11,18 @@ from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from fastapi.middleware.gzip import GZipMiddleware
from fastapi.openapi.docs import get_redoc_html, get_swagger_ui_html
from fastapi.openapi.utils import get_openapi
from fastapi.responses import HTMLResponse
from fastapi_events.handlers.local import local_handler
from fastapi_events.middleware import EventHandlerASGIMiddleware
from pydantic.json_schema import models_json_schema
from torch.backends.mps import is_available as is_mps_available
# for PyCharm:
# noinspection PyUnresolvedReferences
import invokeai.backend.util.hotfixes # noqa: F401 (monkeypatching on import)
import invokeai.frontend.web as web_dir
from invokeai.app.api.dependencies import ApiDependencies
from invokeai.app.api.no_cache_staticfiles import NoCacheStaticFiles
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.services.config.config_default import get_config
from invokeai.app.services.session_processor.session_processor_common import ProgressImage
from invokeai.backend.util.devices import get_torch_device_name
from ..backend.util.logging import InvokeAILogger
from .api.dependencies import ApiDependencies
from .api.routers import (
from invokeai.app.api.routers import (
app_info,
board_images,
boards,
@@ -43,12 +33,11 @@ from .api.routers import (
utilities,
workflows,
)
from .api.sockets import SocketIO
from .invocations.baseinvocation import (
BaseInvocation,
UIConfigBase,
)
from .invocations.fields import InputFieldJSONSchemaExtra, OutputFieldJSONSchemaExtra
from invokeai.app.api.sockets import SocketIO
from invokeai.app.services.config.config_default import get_config
from invokeai.app.util.custom_openapi import get_openapi_func
from invokeai.backend.util.devices import TorchDevice
from invokeai.backend.util.logging import InvokeAILogger
app_config = get_config()
@@ -63,7 +52,7 @@ logger = InvokeAILogger.get_logger(config=app_config)
mimetypes.add_type("application/javascript", ".js")
mimetypes.add_type("text/css", ".css")
torch_device_name = get_torch_device_name()
torch_device_name = TorchDevice.get_torch_device_name()
logger.info(f"Using torch device: {torch_device_name}")
@@ -118,85 +107,7 @@ app.include_router(app_info.app_router, prefix="/api")
app.include_router(session_queue.session_queue_router, prefix="/api")
app.include_router(workflows.workflows_router, prefix="/api")
# Build a custom OpenAPI to include all outputs
# TODO: can outputs be included on metadata of invocation schemas somehow?
def custom_openapi() -> dict[str, Any]:
if app.openapi_schema:
return app.openapi_schema
openapi_schema = get_openapi(
title=app.title,
description="An API for invoking AI image operations",
version="1.0.0",
routes=app.routes,
separate_input_output_schemas=False, # https://fastapi.tiangolo.com/how-to/separate-openapi-schemas/
)
# Add all outputs
all_invocations = BaseInvocation.get_invocations()
output_types = set()
output_type_titles = {}
for invoker in all_invocations:
output_type = signature(invoker.invoke).return_annotation
output_types.add(output_type)
output_schemas = models_json_schema(
models=[(o, "serialization") for o in output_types], ref_template="#/components/schemas/{model}"
)
for schema_key, output_schema in output_schemas[1]["$defs"].items():
# TODO: note that we assume the schema_key here is the TYPE.__name__
# This could break in some cases, figure out a better way to do it
output_type_titles[schema_key] = output_schema["title"]
openapi_schema["components"]["schemas"][schema_key] = output_schema
openapi_schema["components"]["schemas"][schema_key]["class"] = "output"
# Some models don't end up in the schemas as standalone definitions
additional_schemas = models_json_schema(
[
(UIConfigBase, "serialization"),
(InputFieldJSONSchemaExtra, "serialization"),
(OutputFieldJSONSchemaExtra, "serialization"),
(ModelIdentifierField, "serialization"),
(ProgressImage, "serialization"),
],
ref_template="#/components/schemas/{model}",
)
for schema_key, schema_json in additional_schemas[1]["$defs"].items():
openapi_schema["components"]["schemas"][schema_key] = schema_json
# Add a reference to the output type to additionalProperties of the invoker schema
for invoker in all_invocations:
invoker_name = invoker.__name__ # type: ignore [attr-defined] # this is a valid attribute
output_type = signature(obj=invoker.invoke).return_annotation
output_type_title = output_type_titles[output_type.__name__]
invoker_schema = openapi_schema["components"]["schemas"][f"{invoker_name}"]
outputs_ref = {"$ref": f"#/components/schemas/{output_type_title}"}
invoker_schema["output"] = outputs_ref
invoker_schema["class"] = "invocation"
# This code no longer seems to be necessary?
# Leave it here just in case
#
# from invokeai.backend.model_manager import get_model_config_formats
# formats = get_model_config_formats()
# for model_config_name, enum_set in formats.items():
# if model_config_name in openapi_schema["components"]["schemas"]:
# # print(f"Config with name {name} already defined")
# continue
# openapi_schema["components"]["schemas"][model_config_name] = {
# "title": model_config_name,
# "description": "An enumeration.",
# "type": "string",
# "enum": [v.value for v in enum_set],
# }
app.openapi_schema = openapi_schema
return app.openapi_schema
app.openapi = custom_openapi # type: ignore [method-assign] # this is a valid assignment
app.openapi = get_openapi_func(app)
@app.get("/docs", include_in_schema=False)
@@ -250,6 +161,7 @@ def invoke_api() -> None:
# Taken from https://waylonwalker.com/python-find-available-port/, thanks Waylon!
# https://github.com/WaylonWalker
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.settimeout(1)
if s.connect_ex(("localhost", port)) == 0:
return find_port(port=port + 1)
else:
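For context, connect_ex returns 0 only when something is already listening on the port, and the new one-second timeout keeps startup from hanging on filtered ports. A self-contained sketch of the full helper; the body past the else branch is a reconstruction, not quoted from this diff:

import socket

def find_port(port: int) -> int:
    # Probe upward until a port refuses the connection, i.e. nothing is
    # listening there yet, and hand that port back to the caller.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        if s.connect_ex(("localhost", port)) == 0:
            return find_port(port=port + 1)
        return port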

View File

@@ -40,7 +40,7 @@ from invokeai.app.util.misc import uuid_string
from invokeai.backend.util.logging import InvokeAILogger
if TYPE_CHECKING:
from ..services.invocation_services import InvocationServices
from invokeai.app.services.invocation_services import InvocationServices
logger = InvokeAILogger.get_logger()
@@ -98,11 +98,13 @@ class BaseInvocationOutput(BaseModel):
_output_classes: ClassVar[set[BaseInvocationOutput]] = set()
_typeadapter: ClassVar[Optional[TypeAdapter[Any]]] = None
_typeadapter_needs_update: ClassVar[bool] = False
@classmethod
def register_output(cls, output: BaseInvocationOutput) -> None:
"""Registers an invocation output."""
cls._output_classes.add(output)
cls._typeadapter_needs_update = True
@classmethod
def get_outputs(cls) -> Iterable[BaseInvocationOutput]:
@@ -112,11 +114,12 @@ class BaseInvocationOutput(BaseModel):
@classmethod
def get_typeadapter(cls) -> TypeAdapter[Any]:
"""Gets a pydantc TypeAdapter for the union of all invocation output types."""
if not cls._typeadapter:
InvocationOutputsUnion = TypeAliasType(
"InvocationOutputsUnion", Annotated[Union[tuple(cls._output_classes)], Field(discriminator="type")]
if not cls._typeadapter or cls._typeadapter_needs_update:
AnyInvocationOutput = TypeAliasType(
"AnyInvocationOutput", Annotated[Union[tuple(cls._output_classes)], Field(discriminator="type")]
)
cls._typeadapter = TypeAdapter(InvocationOutputsUnion)
cls._typeadapter = TypeAdapter(AnyInvocationOutput)
cls._typeadapter_needs_update = False
return cls._typeadapter
@classmethod
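The needs-update flag makes the discriminated union lazy: registering a new output (or invocation, below) marks the cached TypeAdapter stale, and the next get_typeadapter call rebuilds it. A hypothetical usage sketch; the input dict is illustrative and assumes the "range" invocation shown later in this changeset is registered:

from invokeai.app.invocations.baseinvocation import BaseInvocation

# The union covers every invocation registered so far; registering another
# one later flips _typeadapter_needs_update and the adapter is rebuilt.
adapter = BaseInvocation.get_typeadapter()
node = adapter.validate_python({"type": "range", "id": "node-1"})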
@@ -125,12 +128,13 @@ class BaseInvocationOutput(BaseModel):
return (i.get_type() for i in BaseInvocationOutput.get_outputs())
@staticmethod
def json_schema_extra(schema: dict[str, Any], model_class: Type[BaseModel]) -> None:
def json_schema_extra(schema: dict[str, Any], model_class: Type[BaseInvocationOutput]) -> None:
"""Adds various UI-facing attributes to the invocation output's OpenAPI schema."""
# Because we use a pydantic Literal field with default value for the invocation type,
# it will be typed as optional in the OpenAPI schema. Make it required manually.
if "required" not in schema or not isinstance(schema["required"], list):
schema["required"] = []
schema["class"] = "output"
schema["required"].extend(["type"])
@classmethod
@@ -167,6 +171,7 @@ class BaseInvocation(ABC, BaseModel):
_invocation_classes: ClassVar[set[BaseInvocation]] = set()
_typeadapter: ClassVar[Optional[TypeAdapter[Any]]] = None
_typeadapter_needs_update: ClassVar[bool] = False
@classmethod
def get_type(cls) -> str:
@@ -177,15 +182,17 @@ class BaseInvocation(ABC, BaseModel):
def register_invocation(cls, invocation: BaseInvocation) -> None:
"""Registers an invocation."""
cls._invocation_classes.add(invocation)
cls._typeadapter_needs_update = True
@classmethod
def get_typeadapter(cls) -> TypeAdapter[Any]:
"""Gets a pydantc TypeAdapter for the union of all invocation types."""
if not cls._typeadapter:
InvocationsUnion = TypeAliasType(
"InvocationsUnion", Annotated[Union[tuple(cls._invocation_classes)], Field(discriminator="type")]
if not cls._typeadapter or cls._typeadapter_needs_update:
AnyInvocation = TypeAliasType(
"AnyInvocation", Annotated[Union[tuple(cls._invocation_classes)], Field(discriminator="type")]
)
cls._typeadapter = TypeAdapter(InvocationsUnion)
cls._typeadapter = TypeAdapter(AnyInvocation)
cls._typeadapter_needs_update = False
return cls._typeadapter
@classmethod
@@ -221,7 +228,7 @@ class BaseInvocation(ABC, BaseModel):
return signature(cls.invoke).return_annotation
@staticmethod
def json_schema_extra(schema: dict[str, Any], model_class: Type[BaseModel], *args, **kwargs) -> None:
def json_schema_extra(schema: dict[str, Any], model_class: Type[BaseInvocation]) -> None:
"""Adds various UI-facing attributes to the invocation's OpenAPI schema."""
uiconfig = cast(UIConfigBase | None, getattr(model_class, "UIConfig", None))
if uiconfig is not None:
@@ -237,6 +244,7 @@ class BaseInvocation(ABC, BaseModel):
schema["version"] = uiconfig.version
if "required" not in schema or not isinstance(schema["required"], list):
schema["required"] = []
schema["class"] = "invocation"
schema["required"].extend(["type", "id"])
@abstractmethod
@@ -310,7 +318,7 @@ class BaseInvocation(ABC, BaseModel):
protected_namespaces=(),
validate_assignment=True,
json_schema_extra=json_schema_extra,
json_schema_serialization_defaults_required=True,
json_schema_serialization_defaults_required=False,
coerce_numbers_to_str=True,
)

View File

@@ -0,0 +1,98 @@
from typing import Any, Union
import numpy as np
import numpy.typing as npt
import torch
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, LatentsField
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.util.devices import TorchDevice
@invocation(
"lblend",
title="Blend Latents",
tags=["latents", "blend"],
category="latents",
version="1.0.3",
)
class BlendLatentsInvocation(BaseInvocation):
"""Blend two latents using a given alpha. Latents must have same size."""
latents_a: LatentsField = InputField(
description=FieldDescriptions.latents,
input=Input.Connection,
)
latents_b: LatentsField = InputField(
description=FieldDescriptions.latents,
input=Input.Connection,
)
alpha: float = InputField(default=0.5, description=FieldDescriptions.blend_alpha)
def invoke(self, context: InvocationContext) -> LatentsOutput:
latents_a = context.tensors.load(self.latents_a.latents_name)
latents_b = context.tensors.load(self.latents_b.latents_name)
if latents_a.shape != latents_b.shape:
raise Exception("Latents to blend must be the same size.")
device = TorchDevice.choose_torch_device()
def slerp(
t: Union[float, npt.NDArray[Any]], # FIXME: maybe use np.float32 here?
v0: Union[torch.Tensor, npt.NDArray[Any]],
v1: Union[torch.Tensor, npt.NDArray[Any]],
DOT_THRESHOLD: float = 0.9995,
) -> Union[torch.Tensor, npt.NDArray[Any]]:
"""
Spherical linear interpolation
Args:
t (float/np.ndarray): Float value between 0.0 and 1.0
v0 (np.ndarray): Starting vector
v1 (np.ndarray): Final vector
DOT_THRESHOLD (float): Threshold for considering the two vectors as
collinear. Not recommended to alter this.
Returns:
v2 (np.ndarray): Interpolation vector between v0 and v1
"""
inputs_are_torch = False
if not isinstance(v0, np.ndarray):
inputs_are_torch = True
v0 = v0.detach().cpu().numpy()
if not isinstance(v1, np.ndarray):
inputs_are_torch = True
v1 = v1.detach().cpu().numpy()
dot = np.sum(v0 * v1 / (np.linalg.norm(v0) * np.linalg.norm(v1)))
if np.abs(dot) > DOT_THRESHOLD:
v2 = (1 - t) * v0 + t * v1
else:
theta_0 = np.arccos(dot)
sin_theta_0 = np.sin(theta_0)
theta_t = theta_0 * t
sin_theta_t = np.sin(theta_t)
s0 = np.sin(theta_0 - theta_t) / sin_theta_0
s1 = sin_theta_t / sin_theta_0
v2 = s0 * v0 + s1 * v1
if inputs_are_torch:
v2_torch: torch.Tensor = torch.from_numpy(v2).to(device)
return v2_torch
else:
assert isinstance(v2, np.ndarray)
return v2
# blend
bl = slerp(self.alpha, latents_a, latents_b)
assert isinstance(bl, torch.Tensor)
blended_latents: torch.Tensor = bl # for type checking convenience
# https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
blended_latents = blended_latents.to("cpu")
TorchDevice.empty_cache()
name = context.tensors.save(tensor=blended_latents)
return LatentsOutput.build(latents_name=name, latents=blended_latents, seed=self.latents_a.seed)
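The nested slerp helper is standard spherical linear interpolation, and its endpoint behavior is easy to sanity-check in isolation. A hypothetical NumPy-only restatement of the same math:

import numpy as np

def slerp_np(t: float, v0: np.ndarray, v1: np.ndarray, DOT_THRESHOLD: float = 0.9995) -> np.ndarray:
    # Mirrors the helper above, restricted to NumPy inputs.
    dot = np.sum(v0 * v1 / (np.linalg.norm(v0) * np.linalg.norm(v1)))
    if np.abs(dot) > DOT_THRESHOLD:
        return (1 - t) * v0 + t * v1  # nearly collinear: plain lerp
    theta_0 = np.arccos(dot)
    theta_t = theta_0 * t
    s0 = np.sin(theta_0 - theta_t) / np.sin(theta_0)
    s1 = np.sin(theta_t) / np.sin(theta_0)
    return s0 * v0 + s1 * v1

v0, v1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
assert np.allclose(slerp_np(0.0, v0, v1), v0)
assert np.allclose(slerp_np(1.0, v0, v1), v1)
# Halfway between orthogonal unit vectors stays on the unit circle.
assert np.isclose(np.linalg.norm(slerp_np(0.5, v0, v1)), 1.0)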

View File

@@ -4,13 +4,12 @@
import numpy as np
from pydantic import ValidationInfo, field_validator
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import InputField
from invokeai.app.invocations.primitives import IntegerCollectionOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.misc import SEED_MAX
from .baseinvocation import BaseInvocation, invocation
from .fields import InputField
@invocation(
"range", title="Integer Range", tags=["collection", "integer", "range"], category="collections", version="1.0.0"

View File

@@ -5,7 +5,17 @@ from compel import Compel, ReturnedEmbeddingsType
from compel.prompt_parser import Blend, Conjunction, CrossAttentionControlSubstitute, FlattenedPrompt, Fragment
from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIComponent
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.fields import (
ConditioningField,
FieldDescriptions,
Input,
InputField,
OutputField,
TensorField,
UIComponent,
)
from invokeai.app.invocations.model import CLIPField
from invokeai.app.invocations.primitives import ConditioningOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.ti_utils import generate_ti_list
@@ -14,13 +24,9 @@ from invokeai.backend.model_patcher import ModelPatcher
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
BasicConditioningInfo,
ConditioningFieldData,
ExtraConditioningInfo,
SDXLConditioningInfo,
)
from invokeai.backend.util.devices import torch_dtype
from .baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from .model import CLIPField
from invokeai.backend.util.devices import TorchDevice
# unconditioned: Optional[torch.Tensor]
@@ -36,7 +42,7 @@ from .model import CLIPField
title="Prompt",
tags=["prompt", "compel"],
category="conditioning",
version="1.1.1",
version="1.2.0",
)
class CompelInvocation(BaseInvocation):
"""Parse prompt using compel package to conditioning."""
@@ -51,15 +57,14 @@ class CompelInvocation(BaseInvocation):
description=FieldDescriptions.clip,
input=Input.Connection,
)
mask: Optional[TensorField] = InputField(
default=None, description="A mask defining the region that this conditioning prompt applies to."
)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ConditioningOutput:
tokenizer_info = context.models.load(self.clip.tokenizer)
tokenizer_model = tokenizer_info.model
assert isinstance(tokenizer_model, CLIPTokenizer)
text_encoder_info = context.models.load(self.clip.text_encoder)
text_encoder_model = text_encoder_info.model
assert isinstance(text_encoder_model, CLIPTextModel)
def _lora_loader() -> Iterator[Tuple[LoRAModelRaw, float]]:
for lora in self.clip.loras:
@@ -74,51 +79,49 @@ class CompelInvocation(BaseInvocation):
ti_list = generate_ti_list(self.prompt, text_encoder_info.config.base, context)
with (
ModelPatcher.apply_ti(tokenizer_model, text_encoder_model, ti_list) as (
tokenizer,
# apply all patches while the model is on the target device
text_encoder_info.model_on_device() as (cached_weights, text_encoder),
tokenizer_info as tokenizer,
ModelPatcher.apply_lora_text_encoder(
text_encoder,
loras=_lora_loader(),
cached_weights=cached_weights,
),
# Apply CLIP Skip after LoRA to prevent LoRA application from failing on skipped layers.
ModelPatcher.apply_clip_skip(text_encoder, self.clip.skipped_layers),
ModelPatcher.apply_ti(tokenizer, text_encoder, ti_list) as (
patched_tokenizer,
ti_manager,
),
text_encoder_info as text_encoder,
# Apply the LoRA after text_encoder has been moved to its target device for faster patching.
ModelPatcher.apply_lora_text_encoder(text_encoder, _lora_loader()),
# Apply CLIP Skip after LoRA to prevent LoRA application from failing on skipped layers.
ModelPatcher.apply_clip_skip(text_encoder_model, self.clip.skipped_layers),
):
assert isinstance(text_encoder, CLIPTextModel)
assert isinstance(tokenizer, CLIPTokenizer)
compel = Compel(
tokenizer=tokenizer,
tokenizer=patched_tokenizer,
text_encoder=text_encoder,
textual_inversion_manager=ti_manager,
dtype_for_device_getter=torch_dtype,
dtype_for_device_getter=TorchDevice.choose_torch_dtype,
truncate_long_prompts=False,
)
conjunction = Compel.parse_prompt_string(self.prompt)
if context.config.get().log_tokenization:
log_tokenization_for_conjunction(conjunction, tokenizer)
log_tokenization_for_conjunction(conjunction, patched_tokenizer)
c, options = compel.build_conditioning_tensor_for_conjunction(conjunction)
ec = ExtraConditioningInfo(
tokens_count_including_eos_bos=get_max_token_count(tokenizer, conjunction),
cross_attention_control_args=options.get("cross_attention_control", None),
)
c, _options = compel.build_conditioning_tensor_for_conjunction(conjunction)
c = c.detach().to("cpu")
conditioning_data = ConditioningFieldData(
conditionings=[
BasicConditioningInfo(
embeds=c,
extra_conditioning=ec,
)
]
)
conditioning_data = ConditioningFieldData(conditionings=[BasicConditioningInfo(embeds=c)])
conditioning_name = context.conditioning.save(conditioning_data)
return ConditioningOutput.build(conditioning_name)
return ConditioningOutput(
conditioning=ConditioningField(
conditioning_name=conditioning_name,
mask=self.mask,
)
)
class SDXLPromptInvocationBase:
@@ -132,13 +135,9 @@ class SDXLPromptInvocationBase:
get_pooled: bool,
lora_prefix: str,
zero_on_empty: bool,
) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[ExtraConditioningInfo]]:
) -> Tuple[torch.Tensor, Optional[torch.Tensor]]:
tokenizer_info = context.models.load(clip_field.tokenizer)
tokenizer_model = tokenizer_info.model
assert isinstance(tokenizer_model, CLIPTokenizer)
text_encoder_info = context.models.load(clip_field.text_encoder)
text_encoder_model = text_encoder_info.model
assert isinstance(text_encoder_model, (CLIPTextModel, CLIPTextModelWithProjection))
# return zero on empty
if prompt == "" and zero_on_empty:
@@ -159,7 +158,7 @@ class SDXLPromptInvocationBase:
)
else:
c_pooled = None
return c, c_pooled, None
return c, c_pooled
def _lora_loader() -> Iterator[Tuple[LoRAModelRaw, float]]:
for lora in clip_field.loras:
@@ -175,23 +174,31 @@ class SDXLPromptInvocationBase:
ti_list = generate_ti_list(prompt, text_encoder_info.config.base, context)
with (
ModelPatcher.apply_ti(tokenizer_model, text_encoder_model, ti_list) as (
tokenizer,
# apply all patches while the model is on the target device
text_encoder_info.model_on_device() as (cached_weights, text_encoder),
tokenizer_info as tokenizer,
ModelPatcher.apply_lora(
text_encoder,
loras=_lora_loader(),
prefix=lora_prefix,
cached_weights=cached_weights,
),
# Apply CLIP Skip after LoRA to prevent LoRA application from failing on skipped layers.
ModelPatcher.apply_clip_skip(text_encoder, clip_field.skipped_layers),
ModelPatcher.apply_ti(tokenizer, text_encoder, ti_list) as (
patched_tokenizer,
ti_manager,
),
text_encoder_info as text_encoder,
# Apply the LoRA after text_encoder has been moved to its target device for faster patching.
ModelPatcher.apply_lora(text_encoder, _lora_loader(), lora_prefix),
# Apply CLIP Skip after LoRA to prevent LoRA application from failing on skipped layers.
ModelPatcher.apply_clip_skip(text_encoder_model, clip_field.skipped_layers),
):
assert isinstance(text_encoder, (CLIPTextModel, CLIPTextModelWithProjection))
assert isinstance(tokenizer, CLIPTokenizer)
text_encoder = cast(CLIPTextModel, text_encoder)
compel = Compel(
tokenizer=tokenizer,
tokenizer=patched_tokenizer,
text_encoder=text_encoder,
textual_inversion_manager=ti_manager,
dtype_for_device_getter=torch_dtype,
dtype_for_device_getter=TorchDevice.choose_torch_dtype,
truncate_long_prompts=False, # TODO:
returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NON_NORMALIZED, # TODO: clip skip
requires_pooled=get_pooled,
@@ -201,20 +208,15 @@ class SDXLPromptInvocationBase:
if context.config.get().log_tokenization:
# TODO: better logging for "and" syntax
log_tokenization_for_conjunction(conjunction, tokenizer)
log_tokenization_for_conjunction(conjunction, patched_tokenizer)
# TODO: ask for optimizations? to not run text_encoder twice
c, options = compel.build_conditioning_tensor_for_conjunction(conjunction)
c, _options = compel.build_conditioning_tensor_for_conjunction(conjunction)
if get_pooled:
c_pooled = compel.conditioning_provider.get_pooled_embeddings([prompt])
else:
c_pooled = None
ec = ExtraConditioningInfo(
tokens_count_including_eos_bos=get_max_token_count(tokenizer, conjunction),
cross_attention_control_args=options.get("cross_attention_control", None),
)
del tokenizer
del text_encoder
del tokenizer_info
@@ -224,7 +226,7 @@ class SDXLPromptInvocationBase:
if c_pooled is not None:
c_pooled = c_pooled.detach().to("cpu")
return c, c_pooled, ec
return c, c_pooled
@invocation(
@@ -232,7 +234,7 @@ class SDXLPromptInvocationBase:
title="SDXL Prompt",
tags=["sdxl", "compel", "prompt"],
category="conditioning",
version="1.1.1",
version="1.2.0",
)
class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
"""Parse prompt using compel package to conditioning."""
@@ -255,20 +257,19 @@ class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
target_height: int = InputField(default=1024, description="")
clip: CLIPField = InputField(description=FieldDescriptions.clip, input=Input.Connection, title="CLIP 1")
clip2: CLIPField = InputField(description=FieldDescriptions.clip, input=Input.Connection, title="CLIP 2")
mask: Optional[TensorField] = InputField(
default=None, description="A mask defining the region that this conditioning prompt applies to."
)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ConditioningOutput:
c1, c1_pooled, ec1 = self.run_clip_compel(
context, self.clip, self.prompt, False, "lora_te1_", zero_on_empty=True
)
c1, c1_pooled = self.run_clip_compel(context, self.clip, self.prompt, False, "lora_te1_", zero_on_empty=True)
if self.style.strip() == "":
c2, c2_pooled, ec2 = self.run_clip_compel(
c2, c2_pooled = self.run_clip_compel(
context, self.clip2, self.prompt, True, "lora_te2_", zero_on_empty=True
)
else:
c2, c2_pooled, ec2 = self.run_clip_compel(
context, self.clip2, self.style, True, "lora_te2_", zero_on_empty=True
)
c2, c2_pooled = self.run_clip_compel(context, self.clip2, self.style, True, "lora_te2_", zero_on_empty=True)
original_size = (self.original_height, self.original_width)
crop_coords = (self.crop_top, self.crop_left)
@@ -307,17 +308,19 @@ class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
conditioning_data = ConditioningFieldData(
conditionings=[
SDXLConditioningInfo(
embeds=torch.cat([c1, c2], dim=-1),
pooled_embeds=c2_pooled,
add_time_ids=add_time_ids,
extra_conditioning=ec1,
embeds=torch.cat([c1, c2], dim=-1), pooled_embeds=c2_pooled, add_time_ids=add_time_ids
)
]
)
conditioning_name = context.conditioning.save(conditioning_data)
return ConditioningOutput.build(conditioning_name)
return ConditioningOutput(
conditioning=ConditioningField(
conditioning_name=conditioning_name,
mask=self.mask,
)
)
@invocation(
@ -345,7 +348,7 @@ class SDXLRefinerCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ConditioningOutput:
# TODO: if there will appear lora for refiner - write proper prefix
c2, c2_pooled, ec2 = self.run_clip_compel(context, self.clip2, self.style, True, "<NONE>", zero_on_empty=False)
c2, c2_pooled = self.run_clip_compel(context, self.clip2, self.style, True, "<NONE>", zero_on_empty=False)
original_size = (self.original_height, self.original_width)
crop_coords = (self.crop_top, self.crop_left)
@@ -354,14 +357,7 @@ class SDXLRefinerCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase
assert c2_pooled is not None
conditioning_data = ConditioningFieldData(
conditionings=[
SDXLConditioningInfo(
embeds=c2,
pooled_embeds=c2_pooled,
add_time_ids=add_time_ids,
extra_conditioning=ec2, # or None
)
]
conditionings=[SDXLConditioningInfo(embeds=c2, pooled_embeds=c2_pooled, add_time_ids=add_time_ids)]
)
conditioning_name = context.conditioning.save(conditioning_data)

View File

@@ -1,6 +1,6 @@
from typing import Literal
from invokeai.backend.stable_diffusion.schedulers import SCHEDULER_MAP
from invokeai.backend.util.devices import TorchDevice
LATENT_SCALE_FACTOR = 8
"""
@@ -10,8 +10,7 @@ factor is hard-coded to a literal '8' rather than using this constant.
The ratio of image:latent dimensions is LATENT_SCALE_FACTOR:1, or 8:1.
"""
SCHEDULER_NAME_VALUES = Literal[tuple(SCHEDULER_MAP.keys())]
"""A literal type representing the valid scheduler names."""
IMAGE_MODES = Literal["L", "RGB", "RGBA", "CMYK", "YCbCr", "LAB", "HSV", "I", "F"]
"""A literal type for PIL image modes supported by Invoke"""
DEFAULT_PRECISION = TorchDevice.choose_torch_dtype()

View File

@@ -2,6 +2,7 @@
# initial implementation by Gregg Helt, 2023
# heavily leverages controlnet_aux package: https://github.com/patrickvonplaten/controlnet_aux
from builtins import bool, float
from pathlib import Path
from typing import Dict, List, Literal, Union
import cv2
@@ -20,11 +21,19 @@ from controlnet_aux import (
from controlnet_aux.util import HWC3, ade_palette
from PIL import Image
from pydantic import BaseModel, Field, field_validator, model_validator
from transformers import pipeline
from transformers.pipelines import DepthEstimationPipeline
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import (
FieldDescriptions,
ImageField,
Input,
InputField,
OutputField,
UIType,
@@ -35,22 +44,14 @@ from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.controlnet_utils import CONTROLNET_MODE_VALUES, CONTROLNET_RESIZE_VALUES, heuristic_resize
from invokeai.backend.image_util.canny import get_canny_edges
from invokeai.backend.image_util.depth_anything import DepthAnythingDetector
from invokeai.backend.image_util.dw_openpose import DWOpenposeDetector
from invokeai.backend.image_util.depth_anything.depth_anything_pipeline import DepthAnythingPipeline
from invokeai.backend.image_util.dw_openpose import DWPOSE_MODELS, DWOpenposeDetector
from invokeai.backend.image_util.hed import HEDProcessor
from invokeai.backend.image_util.lineart import LineartProcessor
from invokeai.backend.image_util.lineart_anime import LineartAnimeProcessor
from .baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
CONTROLNET_MODE_VALUES = Literal["balanced", "more_prompt", "more_control", "unbalanced"]
CONTROLNET_RESIZE_VALUES = Literal[
"just_resize",
"crop_resize",
"fill_resize",
"just_resize_simple",
]
from invokeai.backend.image_util.util import np_to_pil, pil_to_np
class ControlField(BaseModel):
@@ -86,13 +87,13 @@ class ControlOutput(BaseInvocationOutput):
control: ControlField = OutputField(description=FieldDescriptions.control)
@invocation("controlnet", title="ControlNet", tags=["controlnet"], category="controlnet", version="1.1.1")
@invocation("controlnet", title="ControlNet", tags=["controlnet"], category="controlnet", version="1.1.2")
class ControlNetInvocation(BaseInvocation):
"""Collects ControlNet info to pass to other nodes"""
image: ImageField = InputField(description="The control image")
control_model: ModelIdentifierField = InputField(
description=FieldDescriptions.controlnet_model, input=Input.Direct, ui_type=UIType.ControlNetModel
description=FieldDescriptions.controlnet_model, ui_type=UIType.ControlNetModel
)
control_weight: Union[float, List[float]] = InputField(
default=1.0, ge=-1, le=2, description="The weight given to the ControlNet"
@@ -146,6 +147,7 @@ class ImageProcessorInvocation(BaseInvocation, WithMetadata, WithBoard):
return context.images.get_pil(self.image.image_name, "RGB")
def invoke(self, context: InvocationContext) -> ImageOutput:
self._context = context
raw_image = self.load_image(context)
# image type should be PIL.PngImagePlugin.PngImageFile ?
processed_image = self.run_processor(raw_image)
@@ -171,13 +173,13 @@ class ImageProcessorInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Canny Processor",
tags=["controlnet", "canny"],
category="controlnet",
version="1.3.2",
version="1.3.3",
)
class CannyImageProcessorInvocation(ImageProcessorInvocation):
"""Canny edge detection for ControlNet"""
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
low_threshold: int = InputField(
default=100, ge=0, le=255, description="The low threshold of the Canny pixel gradient (0-255)"
)
@@ -205,13 +207,13 @@ class CannyImageProcessorInvocation(ImageProcessorInvocation):
title="HED (softedge) Processor",
tags=["controlnet", "hed", "softedge"],
category="controlnet",
version="1.2.2",
version="1.2.3",
)
class HedImageProcessorInvocation(ImageProcessorInvocation):
"""Applies HED edge detection to image"""
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
# safe not supported in controlnet_aux v0.0.3
# safe: bool = InputField(default=False, description=FieldDescriptions.safe_mode)
scribble: bool = InputField(default=False, description=FieldDescriptions.scribble_mode)
@@ -234,13 +236,13 @@ class HedImageProcessorInvocation(ImageProcessorInvocation):
title="Lineart Processor",
tags=["controlnet", "lineart"],
category="controlnet",
version="1.2.2",
version="1.2.3",
)
class LineartImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art processing to image"""
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
coarse: bool = InputField(default=False, description="Whether to use coarse mode")
def run_processor(self, image: Image.Image) -> Image.Image:
@@ -256,13 +258,13 @@ class LineartImageProcessorInvocation(ImageProcessorInvocation):
title="Lineart Anime Processor",
tags=["controlnet", "lineart", "anime"],
category="controlnet",
version="1.2.2",
version="1.2.3",
)
class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art anime processing to image"""
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
def run_processor(self, image: Image.Image) -> Image.Image:
processor = LineartAnimeProcessor()
@@ -279,19 +281,20 @@ class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
title="Midas Depth Processor",
tags=["controlnet", "midas"],
category="controlnet",
version="1.2.3",
version="1.2.4",
)
class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Midas depth processing to image"""
a_mult: float = InputField(default=2.0, ge=0, description="Midas parameter `a_mult` (a = a_mult * PI)")
bg_th: float = InputField(default=0.1, ge=0, description="Midas parameter `bg_th`")
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
# depth_and_normal not supported in controlnet_aux v0.0.3
# depth_and_normal: bool = InputField(default=False, description="whether to use depth and normal mode")
def run_processor(self, image):
def run_processor(self, image: Image.Image) -> Image.Image:
# TODO: replace from_pretrained() calls with context.models.download_and_cache() (or similar)
midas_processor = MidasDetector.from_pretrained("lllyasviel/Annotators")
processed_image = midas_processor(
image,
@@ -310,15 +313,15 @@ class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
title="Normal BAE Processor",
tags=["controlnet"],
category="controlnet",
version="1.2.2",
version="1.2.3",
)
class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies NormalBae processing to image"""
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
def run_processor(self, image):
def run_processor(self, image: Image.Image) -> Image.Image:
normalbae_processor = NormalBaeDetector.from_pretrained("lllyasviel/Annotators")
processed_image = normalbae_processor(
image, detect_resolution=self.detect_resolution, image_resolution=self.image_resolution
@@ -327,17 +330,17 @@ class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
@invocation(
"mlsd_image_processor", title="MLSD Processor", tags=["controlnet", "mlsd"], category="controlnet", version="1.2.2"
"mlsd_image_processor", title="MLSD Processor", tags=["controlnet", "mlsd"], category="controlnet", version="1.2.3"
)
class MlsdImageProcessorInvocation(ImageProcessorInvocation):
"""Applies MLSD processing to image"""
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
thr_v: float = InputField(default=0.1, ge=0, description="MLSD parameter `thr_v`")
thr_d: float = InputField(default=0.1, ge=0, description="MLSD parameter `thr_d`")
def run_processor(self, image):
def run_processor(self, image: Image.Image) -> Image.Image:
mlsd_processor = MLSDdetector.from_pretrained("lllyasviel/Annotators")
processed_image = mlsd_processor(
image,
@@ -350,17 +353,17 @@ class MlsdImageProcessorInvocation(ImageProcessorInvocation):
@invocation(
"pidi_image_processor", title="PIDI Processor", tags=["controlnet", "pidi"], category="controlnet", version="1.2.2"
"pidi_image_processor", title="PIDI Processor", tags=["controlnet", "pidi"], category="controlnet", version="1.2.3"
)
class PidiImageProcessorInvocation(ImageProcessorInvocation):
"""Applies PIDI processing to image"""
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
safe: bool = InputField(default=False, description=FieldDescriptions.safe_mode)
scribble: bool = InputField(default=False, description=FieldDescriptions.scribble_mode)
def run_processor(self, image):
def run_processor(self, image: Image.Image) -> Image.Image:
pidi_processor = PidiNetDetector.from_pretrained("lllyasviel/Annotators")
processed_image = pidi_processor(
image,
@@ -377,18 +380,18 @@ class PidiImageProcessorInvocation(ImageProcessorInvocation):
title="Content Shuffle Processor",
tags=["controlnet", "contentshuffle"],
category="controlnet",
version="1.2.2",
version="1.2.3",
)
class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
"""Applies content shuffle processing to image"""
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
h: int = InputField(default=512, ge=0, description="Content shuffle `h` parameter")
w: int = InputField(default=512, ge=0, description="Content shuffle `w` parameter")
f: int = InputField(default=256, ge=0, description="Content shuffle `f` parameter")
def run_processor(self, image):
def run_processor(self, image: Image.Image) -> Image.Image:
content_shuffle_processor = ContentShuffleDetector()
processed_image = content_shuffle_processor(
image,
@@ -407,12 +410,12 @@ class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
title="Zoe (Depth) Processor",
tags=["controlnet", "zoe", "depth"],
category="controlnet",
version="1.2.2",
version="1.2.3",
)
class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Zoe depth processing to image"""
def run_processor(self, image):
def run_processor(self, image: Image.Image) -> Image.Image:
zoe_depth_processor = ZoeDetector.from_pretrained("lllyasviel/Annotators")
processed_image = zoe_depth_processor(image)
return processed_image
@@ -423,17 +426,17 @@ class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
title="Mediapipe Face Processor",
tags=["controlnet", "mediapipe", "face"],
category="controlnet",
version="1.2.3",
version="1.2.4",
)
class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
"""Applies mediapipe face processing to image"""
max_faces: int = InputField(default=1, ge=1, description="Maximum number of faces to detect")
min_confidence: float = InputField(default=0.5, ge=0, le=1, description="Minimum confidence for face detection")
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
def run_processor(self, image):
def run_processor(self, image: Image.Image) -> Image.Image:
mediapipe_face_processor = MediapipeFaceDetector()
processed_image = mediapipe_face_processor(
image,
@@ -450,7 +453,7 @@ class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
title="Leres (Depth) Processor",
tags=["controlnet", "leres", "depth"],
category="controlnet",
version="1.2.2",
version="1.2.3",
)
class LeresImageProcessorInvocation(ImageProcessorInvocation):
"""Applies leres processing to image"""
@@ -458,10 +461,10 @@ class LeresImageProcessorInvocation(ImageProcessorInvocation):
thr_a: float = InputField(default=0, description="Leres parameter `thr_a`")
thr_b: float = InputField(default=0, description="Leres parameter `thr_b`")
boost: bool = InputField(default=False, description="Whether to use boost mode")
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
def run_processor(self, image):
def run_processor(self, image: Image.Image) -> Image.Image:
leres_processor = LeresDetector.from_pretrained("lllyasviel/Annotators")
processed_image = leres_processor(
image,
@@ -479,7 +482,7 @@ class LeresImageProcessorInvocation(ImageProcessorInvocation):
title="Tile Resample Processor",
tags=["controlnet", "tile"],
category="controlnet",
version="1.2.2",
version="1.2.3",
)
class TileResamplerProcessorInvocation(ImageProcessorInvocation):
"""Tile resampler processor"""
@@ -503,8 +506,8 @@ class TileResamplerProcessorInvocation(ImageProcessorInvocation):
np_img = cv2.resize(np_img, (W, H), interpolation=cv2.INTER_AREA)
return np_img
def run_processor(self, img):
np_img = np.array(img, dtype=np.uint8)
def run_processor(self, image: Image.Image) -> Image.Image:
np_img = np.array(image, dtype=np.uint8)
processed_np_image = self.tile_resample(
np_img,
# res=self.tile_size,
@@ -519,15 +522,15 @@ class TileResamplerProcessorInvocation(ImageProcessorInvocation):
title="Segment Anything Processor",
tags=["controlnet", "segmentanything"],
category="controlnet",
version="1.2.3",
version="1.2.4",
)
class SegmentAnythingProcessorInvocation(ImageProcessorInvocation):
"""Applies segment anything processing to image"""
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
detect_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
def run_processor(self, image):
def run_processor(self, image: Image.Image) -> Image.Image:
# segment_anything_processor = SamDetector.from_pretrained("ybelkada/segment-anything", subfolder="checkpoints")
segment_anything_processor = SamDetectorReproducibleColors.from_pretrained(
"ybelkada/segment-anything", subfolder="checkpoints"
@@ -566,14 +569,14 @@ class SamDetectorReproducibleColors(SamDetector):
title="Color Map Processor",
tags=["controlnet"],
category="controlnet",
version="1.2.2",
version="1.2.3",
)
class ColorMapImageProcessorInvocation(ImageProcessorInvocation):
"""Generates a color map from the provided image"""
color_map_tile_size: int = InputField(default=64, ge=0, description=FieldDescriptions.tile_size)
color_map_tile_size: int = InputField(default=64, ge=1, description=FieldDescriptions.tile_size)
def run_processor(self, image: Image.Image):
def run_processor(self, image: Image.Image) -> Image.Image:
np_image = np.array(image, dtype=np.uint8)
height, width = np_image.shape[:2]
@@ -590,7 +593,14 @@ class ColorMapImageProcessorInvocation(ImageProcessorInvocation):
return color_map
DEPTH_ANYTHING_MODEL_SIZES = Literal["large", "base", "small"]
DEPTH_ANYTHING_MODEL_SIZES = Literal["large", "base", "small", "small_v2"]
# The DepthAnything V2 Small model is licensed under Apache 2.0, but the base and large models are not.
DEPTH_ANYTHING_MODELS = {
"large": "LiheYoung/depth-anything-large-hf",
"base": "LiheYoung/depth-anything-base-hf",
"small": "LiheYoung/depth-anything-small-hf",
"small_v2": "depth-anything/Depth-Anything-V2-Small-hf",
}
@invocation(
@@ -598,22 +608,33 @@ DEPTH_ANYTHING_MODEL_SIZES = Literal["large", "base", "small"]
title="Depth Anything Processor",
tags=["controlnet", "depth", "depth anything"],
category="controlnet",
version="1.1.1",
version="1.1.3",
)
class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
"""Generates a depth map based on the Depth Anything algorithm"""
model_size: DEPTH_ANYTHING_MODEL_SIZES = InputField(
default="small", description="The size of the depth model to use"
default="small_v2", description="The size of the depth model to use"
)
resolution: int = InputField(default=512, ge=64, multiple_of=64, description=FieldDescriptions.image_res)
resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
def run_processor(self, image: Image.Image):
depth_anything_detector = DepthAnythingDetector()
depth_anything_detector.load_model(model_size=self.model_size)
def run_processor(self, image: Image.Image) -> Image.Image:
def load_depth_anything(model_path: Path):
depth_anything_pipeline = pipeline(model=str(model_path), task="depth-estimation", local_files_only=True)
assert isinstance(depth_anything_pipeline, DepthEstimationPipeline)
return DepthAnythingPipeline(depth_anything_pipeline)
processed_image = depth_anything_detector(image=image, resolution=self.resolution)
return processed_image
with self._context.models.load_remote_model(
source=DEPTH_ANYTHING_MODELS[self.model_size], loader=load_depth_anything
) as depth_anything_detector:
assert isinstance(depth_anything_detector, DepthAnythingPipeline)
depth_map = depth_anything_detector.generate_depth(image)
# Resizing to user target specified size
new_height = int(image.size[1] * (self.resolution / image.size[0]))
depth_map = depth_map.resize((self.resolution, new_height))
return depth_map
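The resize above keeps the aspect ratio by deriving the new height from the width scale factor. A worked example, assuming a 1024x768 input and resolution=512:

```python
image_width, image_height = 1024, 768  # assumed input size
resolution = 512                       # user-requested output width

# Height scales by the same factor as the width: 768 * (512 / 1024) = 384.
new_height = int(image_height * (resolution / image_width))
assert (resolution, new_height) == (512, 384)
```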
@invocation(
@@ -621,7 +642,7 @@ class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
title="DW Openpose Image Processor",
tags=["controlnet", "dwpose", "openpose"],
category="controlnet",
version="1.1.0",
version="1.1.1",
)
class DWOpenposeImageProcessorInvocation(ImageProcessorInvocation):
"""Generates an openpose pose from an image using DWPose"""
@@ -629,10 +650,13 @@ class DWOpenposeImageProcessorInvocation(ImageProcessorInvocation):
draw_body: bool = InputField(default=True)
draw_face: bool = InputField(default=False)
draw_hands: bool = InputField(default=False)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
image_resolution: int = InputField(default=512, ge=1, description=FieldDescriptions.image_res)
def run_processor(self, image: Image.Image):
dw_openpose = DWOpenposeDetector()
def run_processor(self, image: Image.Image) -> Image.Image:
onnx_det = self._context.models.download_and_cache_model(DWPOSE_MODELS["yolox_l.onnx"])
onnx_pose = self._context.models.download_and_cache_model(DWPOSE_MODELS["dw-ll_ucoco_384.onnx"])
dw_openpose = DWOpenposeDetector(onnx_det=onnx_det, onnx_pose=onnx_pose)
processed_image = dw_openpose(
image,
draw_face=self.draw_face,
@@ -641,3 +665,27 @@ class DWOpenposeImageProcessorInvocation(ImageProcessorInvocation):
resolution=self.image_resolution,
)
return processed_image
@invocation(
"heuristic_resize",
title="Heuristic Resize",
tags=["image, controlnet"],
category="image",
version="1.0.1",
classification=Classification.Prototype,
)
class HeuristicResizeInvocation(BaseInvocation):
"""Resize an image using a heuristic method. Preserves edge maps."""
image: ImageField = InputField(description="The image to resize")
width: int = InputField(default=512, ge=1, description="The width to resize to (px)")
height: int = InputField(default=512, ge=1, description="The height to resize to (px)")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.images.get_pil(self.image.image_name, "RGB")
np_img = pil_to_np(image)
np_resized = heuristic_resize(np_img, (self.width, self.height))
resized = np_to_pil(np_resized)
image_dto = context.images.save(image=resized)
return ImageOutput.build(image_dto)
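For context, `heuristic_resize` can also be driven directly with the numpy round-trip helpers imported above; a minimal sketch using hypothetical file names:

```python
from PIL import Image

from invokeai.app.util.controlnet_utils import heuristic_resize
from invokeai.backend.image_util.util import np_to_pil, pil_to_np

edge_map = Image.open("canny_edges.png").convert("RGB")  # hypothetical input file
np_resized = heuristic_resize(pil_to_np(edge_map), (768, 768))
np_to_pil(np_resized).save("canny_edges_768.png")
```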

View File

@@ -0,0 +1,80 @@
from typing import Optional
import torch
import torchvision.transforms as T
from PIL import Image
from torchvision.transforms.functional import resize as tv_resize
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.constants import DEFAULT_PRECISION
from invokeai.app.invocations.fields import FieldDescriptions, ImageField, Input, InputField
from invokeai.app.invocations.image_to_latents import ImageToLatentsInvocation
from invokeai.app.invocations.model import VAEField
from invokeai.app.invocations.primitives import DenoiseMaskOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.stable_diffusion.diffusers_pipeline import image_resized_to_grid_as_tensor
@invocation(
"create_denoise_mask",
title="Create Denoise Mask",
tags=["mask", "denoise"],
category="latents",
version="1.0.2",
)
class CreateDenoiseMaskInvocation(BaseInvocation):
"""Creates mask for denoising model run."""
vae: VAEField = InputField(description=FieldDescriptions.vae, input=Input.Connection, ui_order=0)
image: Optional[ImageField] = InputField(default=None, description="Image which will be masked", ui_order=1)
mask: ImageField = InputField(description="The mask to use when pasting", ui_order=2)
tiled: bool = InputField(default=False, description=FieldDescriptions.tiled, ui_order=3)
fp32: bool = InputField(
default=DEFAULT_PRECISION == torch.float32,
description=FieldDescriptions.fp32,
ui_order=4,
)
def prep_mask_tensor(self, mask_image: Image.Image) -> torch.Tensor:
if mask_image.mode != "L":
mask_image = mask_image.convert("L")
mask_tensor: torch.Tensor = image_resized_to_grid_as_tensor(mask_image, normalize=False)
if mask_tensor.dim() == 3:
mask_tensor = mask_tensor.unsqueeze(0)
# if shape is not None:
# mask_tensor = tv_resize(mask_tensor, shape, T.InterpolationMode.BILINEAR)
return mask_tensor
@torch.no_grad()
def invoke(self, context: InvocationContext) -> DenoiseMaskOutput:
if self.image is not None:
image = context.images.get_pil(self.image.image_name)
image_tensor = image_resized_to_grid_as_tensor(image.convert("RGB"))
if image_tensor.dim() == 3:
image_tensor = image_tensor.unsqueeze(0)
else:
image_tensor = None
mask = self.prep_mask_tensor(
context.images.get_pil(self.mask.image_name),
)
if image_tensor is not None:
vae_info = context.models.load(self.vae.vae)
img_mask = tv_resize(mask, image_tensor.shape[-2:], T.InterpolationMode.BILINEAR, antialias=False)
masked_image = image_tensor * torch.where(img_mask < 0.5, 0.0, 1.0)
# TODO:
masked_latents = ImageToLatentsInvocation.vae_encode(vae_info, self.fp32, self.tiled, masked_image.clone())
masked_latents_name = context.tensors.save(tensor=masked_latents)
else:
masked_latents_name = None
mask_name = context.tensors.save(tensor=mask)
return DenoiseMaskOutput.build(
mask_name=mask_name,
masked_latents_name=masked_latents_name,
gradient=False,
)
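The `torch.where(img_mask < 0.5, 0.0, 1.0)` step binarizes the resized mask before it multiplies the image tensor. A tiny sketch of that behavior:

```python
import torch

img_mask = torch.tensor([[0.1, 0.6], [0.4, 0.9]])
binary = torch.where(img_mask < 0.5, 0.0, 1.0)
# tensor([[0., 1.],
#         [0., 1.]])
```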

View File

@@ -0,0 +1,139 @@
from typing import Literal, Optional
import numpy as np
import torch
import torchvision.transforms as T
from PIL import Image, ImageFilter
from torchvision.transforms.functional import resize as tv_resize
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.constants import DEFAULT_PRECISION
from invokeai.app.invocations.fields import (
DenoiseMaskField,
FieldDescriptions,
ImageField,
Input,
InputField,
OutputField,
)
from invokeai.app.invocations.image_to_latents import ImageToLatentsInvocation
from invokeai.app.invocations.model import UNetField, VAEField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager import LoadedModel
from invokeai.backend.model_manager.config import MainConfigBase, ModelVariantType
from invokeai.backend.stable_diffusion.diffusers_pipeline import image_resized_to_grid_as_tensor
@invocation_output("gradient_mask_output")
class GradientMaskOutput(BaseInvocationOutput):
"""Outputs a denoise mask and an image representing the total gradient of the mask."""
denoise_mask: DenoiseMaskField = OutputField(description="Mask for denoise model run")
expanded_mask_area: ImageField = OutputField(
description="Image representing the total gradient area of the mask. For paste-back purposes."
)
@invocation(
"create_gradient_mask",
title="Create Gradient Mask",
tags=["mask", "denoise"],
category="latents",
version="1.2.0",
)
class CreateGradientMaskInvocation(BaseInvocation):
"""Creates mask for denoising model run."""
mask: ImageField = InputField(default=None, description="The mask image", ui_order=1)
edge_radius: int = InputField(
default=16, ge=0, description="How far to blur/expand the edges of the mask", ui_order=2
)
coherence_mode: Literal["Gaussian Blur", "Box Blur", "Staged"] = InputField(default="Gaussian Blur", ui_order=3)
minimum_denoise: float = InputField(
default=0.0, ge=0, le=1, description="Minimum denoise level for the coherence region", ui_order=4
)
image: Optional[ImageField] = InputField(
default=None,
description="OPTIONAL: Only connect for specialized Inpainting models, masked_latents will be generated from the image with the VAE",
title="[OPTIONAL] Image",
ui_order=6,
)
unet: Optional[UNetField] = InputField(
description="OPTIONAL: If the Unet is a specialized Inpainting model, masked_latents will be generated from the image with the VAE",
default=None,
input=Input.Connection,
title="[OPTIONAL] UNet",
ui_order=5,
)
vae: Optional[VAEField] = InputField(
default=None,
description="OPTIONAL: Only connect for specialized Inpainting models, masked_latents will be generated from the image with the VAE",
title="[OPTIONAL] VAE",
input=Input.Connection,
ui_order=7,
)
tiled: bool = InputField(default=False, description=FieldDescriptions.tiled, ui_order=8)
fp32: bool = InputField(
default=DEFAULT_PRECISION == torch.float32,
description=FieldDescriptions.fp32,
ui_order=9,
)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> GradientMaskOutput:
mask_image = context.images.get_pil(self.mask.image_name, mode="L")
if self.edge_radius > 0:
if self.coherence_mode == "Box Blur":
blur_mask = mask_image.filter(ImageFilter.BoxBlur(self.edge_radius))
else: # Gaussian Blur OR Staged
# Gaussian Blur uses standard deviation. 1/2 radius is a good approximation
blur_mask = mask_image.filter(ImageFilter.GaussianBlur(self.edge_radius / 2))
blur_tensor: torch.Tensor = image_resized_to_grid_as_tensor(blur_mask, normalize=False)
# redistribute blur so that the original edges are 0 and blur outwards to 1
blur_tensor = (blur_tensor - 0.5) * 2
blur_tensor[blur_tensor < 0] = 0.0
threshold = 1 - self.minimum_denoise
if self.coherence_mode == "Staged":
# wherever the blur_tensor is less than fully masked, convert it to threshold
blur_tensor = torch.where((blur_tensor < 1) & (blur_tensor > 0), threshold, blur_tensor)
else:
# wherever the blur_tensor is above threshold but less than 1, drop it to threshold
blur_tensor = torch.where((blur_tensor > threshold) & (blur_tensor < 1), threshold, blur_tensor)
else:
blur_tensor: torch.Tensor = image_resized_to_grid_as_tensor(mask_image, normalize=False)
mask_name = context.tensors.save(tensor=blur_tensor.unsqueeze(1))
# compute a [0, 1] mask from the blur_tensor
expanded_mask = torch.where((blur_tensor < 1), 0, 1)
expanded_mask_image = Image.fromarray((expanded_mask.squeeze(0).numpy() * 255).astype(np.uint8), mode="L")
expanded_image_dto = context.images.save(expanded_mask_image)
masked_latents_name = None
if self.unet is not None and self.vae is not None and self.image is not None:
# all three fields must be present at the same time
main_model_config = context.models.get_config(self.unet.unet.key)
assert isinstance(main_model_config, MainConfigBase)
if main_model_config.variant is ModelVariantType.Inpaint:
mask = blur_tensor
vae_info: LoadedModel = context.models.load(self.vae.vae)
image = context.images.get_pil(self.image.image_name)
image_tensor = image_resized_to_grid_as_tensor(image.convert("RGB"))
if image_tensor.dim() == 3:
image_tensor = image_tensor.unsqueeze(0)
img_mask = tv_resize(mask, image_tensor.shape[-2:], T.InterpolationMode.BILINEAR, antialias=False)
masked_image = image_tensor * torch.where(img_mask < 0.5, 0.0, 1.0)
masked_latents = ImageToLatentsInvocation.vae_encode(
vae_info, self.fp32, self.tiled, masked_image.clone()
)
masked_latents_name = context.tensors.save(tensor=masked_latents)
return GradientMaskOutput(
denoise_mask=DenoiseMaskField(mask_name=mask_name, masked_latents_name=masked_latents_name, gradient=True),
expanded_mask_area=ImageField(image_name=expanded_image_dto.image_name),
)
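To make the mask arithmetic concrete, here is a small numeric sketch of the redistribution and (non-Staged) threshold steps:

```python
import torch

blur = torch.tensor([0.25, 0.50, 0.95, 1.00])  # toy blurred mask values

# Redistribute: original edges (0.5) map to 0, blur extends outward toward 1.
blur = (blur - 0.5) * 2
blur[blur < 0] = 0.0  # -> [0.00, 0.00, 0.90, 1.00]

minimum_denoise = 0.2
threshold = 1 - minimum_denoise  # 0.8

# Values above the threshold but below 1 are clamped down to the threshold.
blur = torch.where((blur > threshold) & (blur < 1), threshold, blur)
# -> [0.00, 0.00, 0.80, 1.00]
```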

View File

@@ -0,0 +1,61 @@
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, LatentsField
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
# The Crop Latents node was copied from @skunkworxdark's implementation here:
# https://github.com/skunkworxdark/XYGrid_nodes/blob/74647fa9c1fa57d317a94bd43ca689af7f0aae5e/images_to_grids.py#L1117C1-L1167C80
@invocation(
"crop_latents",
title="Crop Latents",
tags=["latents", "crop"],
category="latents",
version="1.0.2",
)
# TODO(ryand): Named `CropLatentsCoreInvocation` to prevent a conflict with custom node `CropLatentsInvocation`.
# Currently, if the class names conflict then 'GET /openapi.json' fails.
class CropLatentsCoreInvocation(BaseInvocation):
"""Crops a latent-space tensor to a box specified in image-space. The box dimensions and coordinates must be
divisible by the latent scale factor of 8.
"""
latents: LatentsField = InputField(
description=FieldDescriptions.latents,
input=Input.Connection,
)
x: int = InputField(
ge=0,
multiple_of=LATENT_SCALE_FACTOR,
description="The left x coordinate (in px) of the crop rectangle in image space. This value will be converted to a dimension in latent space.",
)
y: int = InputField(
ge=0,
multiple_of=LATENT_SCALE_FACTOR,
description="The top y coordinate (in px) of the crop rectangle in image space. This value will be converted to a dimension in latent space.",
)
width: int = InputField(
ge=1,
multiple_of=LATENT_SCALE_FACTOR,
description="The width (in px) of the crop rectangle in image space. This value will be converted to a dimension in latent space.",
)
height: int = InputField(
ge=1,
multiple_of=LATENT_SCALE_FACTOR,
description="The height (in px) of the crop rectangle in image space. This value will be converted to a dimension in latent space.",
)
def invoke(self, context: InvocationContext) -> LatentsOutput:
latents = context.tensors.load(self.latents.latents_name)
x1 = self.x // LATENT_SCALE_FACTOR
y1 = self.y // LATENT_SCALE_FACTOR
x2 = x1 + (self.width // LATENT_SCALE_FACTOR)
y2 = y1 + (self.height // LATENT_SCALE_FACTOR)
cropped_latents = latents[..., y1:y2, x1:x2]
name = context.tensors.save(tensor=cropped_latents)
return LatentsOutput.build(latents_name=name, latents=cropped_latents)
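Worked example of the arithmetic above: cropping a 128x128 px box at (x=64, y=32) selects a 16x16 latent tile:

```python
LATENT_SCALE_FACTOR = 8

x, y, width, height = 64, 32, 128, 128
x1, y1 = x // LATENT_SCALE_FACTOR, y // LATENT_SCALE_FACTOR  # (8, 4)
x2 = x1 + width // LATENT_SCALE_FACTOR   # 24
y2 = y1 + height // LATENT_SCALE_FACTOR  # 20
# latents[..., y1:y2, x1:x2] is then a 16x16 latent tile (128 px / 8).
```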

View File

@@ -3,6 +3,7 @@ Invoke-managed custom node loader. See README.md for more information.
"""
import sys
import traceback
from importlib.util import module_from_spec, spec_from_file_location
from pathlib import Path
@@ -41,11 +42,15 @@ for d in Path(__file__).parent.iterdir():
logger.info(f"Loading node pack {module_name}")
module = module_from_spec(spec)
sys.modules[spec.name] = module
spec.loader.exec_module(module)
try:
module = module_from_spec(spec)
sys.modules[spec.name] = module
spec.loader.exec_module(module)
loaded_count += 1
loaded_count += 1
except Exception:
full_error = traceback.format_exc()
logger.error(f"Failed to load node pack {module_name}:\n{full_error}")
del init, module_name
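The loader follows the standard `importlib` recipe; a self-contained sketch of the same pattern, with a hypothetical pack path:

```python
import sys
from importlib.util import module_from_spec, spec_from_file_location
from pathlib import Path

init = Path("my_node_pack/__init__.py")  # hypothetical node pack
spec = spec_from_file_location(init.parent.name, init)
if spec is not None and spec.loader is not None:
    module = module_from_spec(spec)
    sys.modules[spec.name] = module
    spec.loader.exec_module(module)  # may raise; the loader above wraps this in try/except
```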

View File

@@ -5,13 +5,11 @@ import cv2 as cv
import numpy
from PIL import Image, ImageOps
from invokeai.app.invocations.fields import ImageField
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import ImageField, InputField, WithBoard, WithMetadata
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from .baseinvocation import BaseInvocation, invocation
from .fields import InputField, WithBoard, WithMetadata
@invocation("cv_inpaint", title="OpenCV Inpaint", tags=["opencv", "inpaint"], category="inpaint", version="1.3.1")
class CvInpaintInvocation(BaseInvocation, WithMetadata, WithBoard):

File diff suppressed because it is too large

View File

@@ -1,7 +1,7 @@
from enum import Enum
from typing import Any, Callable, Optional, Tuple
from pydantic import BaseModel, ConfigDict, Field, RootModel, TypeAdapter
from pydantic import BaseModel, ConfigDict, Field, RootModel, TypeAdapter, model_validator
from pydantic.fields import _Unset
from pydantic_core import PydanticUndefined
@@ -48,6 +48,7 @@ class UIType(str, Enum, metaclass=MetaEnum):
ControlNetModel = "ControlNetModelField"
IPAdapterModel = "IPAdapterModelField"
T2IAdapterModel = "T2IAdapterModelField"
SpandrelImageToImageModel = "SpandrelImageToImageModelField"
# endregion
# region Misc Field Types
@@ -134,6 +135,7 @@ class FieldDescriptions:
sdxl_main_model = "SDXL Main model (UNet, VAE, CLIP1, CLIP2) to load"
sdxl_refiner_model = "SDXL Refiner Main Model (UNet, VAE, CLIP2) to load"
onnx_main_model = "ONNX Main model (UNet, VAE, CLIP) to load"
spandrel_image_to_image_model = "Image-to-Image model"
lora_weight = "The weight at which the LoRA is applied to each model"
compel_prompt = "Prompt to be parsed by Compel to create a conditioning tensor"
raw_prompt = "Raw prompt text (no parsing)"
@@ -160,6 +162,7 @@ class FieldDescriptions:
fp32 = "Whether or not to use full float32 precision"
precision = "Precision to use"
tiled = "Processing using overlapping tiles (reduce memory consumption)"
vae_tile_size = "The tile size for VAE tiling in pixels (image space). If set to 0, the default tile size for the model will be used. Larger tile sizes generally produce better results at the cost of higher memory usage."
detect_res = "Pixel resolution for detection"
image_res = "Pixel resolution for output image"
safe_mode = "Whether or not to use safe mode"
@@ -203,6 +206,12 @@ class DenoiseMaskField(BaseModel):
gradient: bool = Field(default=False, description="Used for gradient inpainting")
class TensorField(BaseModel):
"""A tensor primitive field."""
tensor_name: str = Field(description="The name of a tensor.")
class LatentsField(BaseModel):
"""A latents tensor primitive field"""
@@ -226,7 +235,36 @@ class ConditioningField(BaseModel):
"""A conditioning tensor primitive value"""
conditioning_name: str = Field(description="The name of conditioning tensor")
# endregion
mask: Optional[TensorField] = Field(
default=None,
description="The mask associated with this conditioning tensor. Excluded regions should be set to False, "
"included regions should be set to True.",
)
class BoundingBoxField(BaseModel):
"""A bounding box primitive value."""
x_min: int = Field(ge=0, description="The minimum x-coordinate of the bounding box (inclusive).")
x_max: int = Field(ge=0, description="The maximum x-coordinate of the bounding box (exclusive).")
y_min: int = Field(ge=0, description="The minimum y-coordinate of the bounding box (inclusive).")
y_max: int = Field(ge=0, description="The maximum y-coordinate of the bounding box (exclusive).")
score: Optional[float] = Field(
default=None,
ge=0.0,
le=1.0,
description="The score associated with the bounding box. In the range [0, 1]. This value is typically set "
"when the bounding box was produced by a detector and has an associated confidence score.",
)
@model_validator(mode="after")
def check_coords(self):
if self.x_min > self.x_max:
raise ValueError(f"x_min ({self.x_min}) is greater than x_max ({self.x_max}).")
if self.y_min > self.y_max:
raise ValueError(f"y_min ({self.y_min}) is greater than y_max ({self.y_max}).")
return self
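A quick illustration of the validator's behavior, assuming `BoundingBoxField` is imported from the module above:

```python
from pydantic import ValidationError

ok = BoundingBoxField(x_min=10, x_max=50, y_min=20, y_max=60, score=0.9)

try:
    BoundingBoxField(x_min=50, x_max=10, y_min=0, y_max=10)
except ValidationError as e:
    print(e)  # "x_min (50) is greater than x_max (10)."
```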
class MetadataField(RootModel[dict[str, Any]]):

View File

@@ -0,0 +1,100 @@
from pathlib import Path
from typing import Literal
import torch
from PIL import Image
from transformers import pipeline
from transformers.pipelines import ZeroShotObjectDetectionPipeline
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import BoundingBoxField, ImageField, InputField
from invokeai.app.invocations.primitives import BoundingBoxCollectionOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.image_util.grounding_dino.detection_result import DetectionResult
from invokeai.backend.image_util.grounding_dino.grounding_dino_pipeline import GroundingDinoPipeline
GroundingDinoModelKey = Literal["grounding-dino-tiny", "grounding-dino-base"]
GROUNDING_DINO_MODEL_IDS: dict[GroundingDinoModelKey, str] = {
"grounding-dino-tiny": "IDEA-Research/grounding-dino-tiny",
"grounding-dino-base": "IDEA-Research/grounding-dino-base",
}
@invocation(
"grounding_dino",
title="Grounding DINO (Text Prompt Object Detection)",
tags=["prompt", "object detection"],
category="image",
version="1.0.0",
)
class GroundingDinoInvocation(BaseInvocation):
"""Runs a Grounding DINO model. Performs zero-shot bounding-box object detection from a text prompt."""
# Reference:
# - https://arxiv.org/pdf/2303.05499
# - https://huggingface.co/docs/transformers/v4.43.3/en/model_doc/grounding-dino#grounded-sam
# - https://github.com/NielsRogge/Transformers-Tutorials/blob/a39f33ac1557b02ebfb191ea7753e332b5ca933f/Grounding%20DINO/GroundingDINO_with_Segment_Anything.ipynb
model: GroundingDinoModelKey = InputField(description="The Grounding DINO model to use.")
prompt: str = InputField(description="The prompt describing the object to segment.")
image: ImageField = InputField(description="The image to segment.")
detection_threshold: float = InputField(
description="The detection threshold for the Grounding DINO model. All detected bounding boxes with scores above this threshold will be returned.",
ge=0.0,
le=1.0,
default=0.3,
)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> BoundingBoxCollectionOutput:
# The model expects a 3-channel RGB image.
image_pil = context.images.get_pil(self.image.image_name, mode="RGB")
detections = self._detect(
context=context, image=image_pil, labels=[self.prompt], threshold=self.detection_threshold
)
# Convert detections to BoundingBoxCollectionOutput.
bounding_boxes: list[BoundingBoxField] = []
for detection in detections:
bounding_boxes.append(
BoundingBoxField(
x_min=detection.box.xmin,
x_max=detection.box.xmax,
y_min=detection.box.ymin,
y_max=detection.box.ymax,
score=detection.score,
)
)
return BoundingBoxCollectionOutput(collection=bounding_boxes)
@staticmethod
def _load_grounding_dino(model_path: Path):
grounding_dino_pipeline = pipeline(
model=str(model_path),
task="zero-shot-object-detection",
local_files_only=True,
# TODO(ryand): Setting the torch_dtype here doesn't work. Investigate whether fp16 is supported by the
# model, and figure out how to make it work in the pipeline.
# torch_dtype=TorchDevice.choose_torch_dtype(),
)
assert isinstance(grounding_dino_pipeline, ZeroShotObjectDetectionPipeline)
return GroundingDinoPipeline(grounding_dino_pipeline)
def _detect(
self,
context: InvocationContext,
image: Image.Image,
labels: list[str],
threshold: float = 0.3,
) -> list[DetectionResult]:
"""Use Grounding DINO to detect bounding boxes for a set of labels in an image."""
# TODO(ryand): I copied this "."-handling logic from the transformers example code. Test it and see if it
# actually makes a difference.
labels = [label if label.endswith(".") else label + "." for label in labels]
with context.models.load_remote_model(
source=GROUNDING_DINO_MODEL_IDS[self.model], loader=GroundingDinoInvocation._load_grounding_dino
) as detector:
assert isinstance(detector, GroundingDinoPipeline)
return detector.detect(image=image, candidate_labels=labels, threshold=threshold)
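The label normalization in `_detect` only ensures each prompt ends with a period, mirroring the transformers example code:

```python
labels = ["a cat", "a dog."]
labels = [label if label.endswith(".") else label + "." for label in labels]
assert labels == ["a cat.", "a dog."]
```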

View File

@@ -0,0 +1,65 @@
import math
from typing import Tuple
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import FieldDescriptions, InputField, OutputField
from invokeai.app.invocations.model import UNetField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import BaseModelType
@invocation_output("ideal_size_output")
class IdealSizeOutput(BaseInvocationOutput):
"""Base class for invocations that output an image"""
width: int = OutputField(description="The ideal width of the image (in pixels)")
height: int = OutputField(description="The ideal height of the image (in pixels)")
@invocation(
"ideal_size",
title="Ideal Size",
tags=["latents", "math", "ideal_size"],
version="1.0.3",
)
class IdealSizeInvocation(BaseInvocation):
"""Calculates the ideal size for generation to avoid duplication"""
width: int = InputField(default=1024, description="Final image width")
height: int = InputField(default=576, description="Final image height")
unet: UNetField = InputField(default=None, description=FieldDescriptions.unet)
multiplier: float = InputField(
default=1.0,
description="Amount to multiply the model's dimensions by when calculating the ideal size (may result in "
"initial generation artifacts if too large)",
)
def trim_to_multiple_of(self, *args: int, multiple_of: int = LATENT_SCALE_FACTOR) -> Tuple[int, ...]:
return tuple((x - x % multiple_of) for x in args)
def invoke(self, context: InvocationContext) -> IdealSizeOutput:
unet_config = context.models.get_config(self.unet.unet.key)
aspect = self.width / self.height
dimension: float = 512
if unet_config.base == BaseModelType.StableDiffusion2:
dimension = 768
elif unet_config.base == BaseModelType.StableDiffusionXL:
dimension = 1024
dimension = dimension * self.multiplier
min_dimension = math.floor(dimension * 0.5)
model_area = dimension * dimension # hardcoded for now since all models are trained on square images
if aspect > 1.0:
init_height = max(min_dimension, math.sqrt(model_area / aspect))
init_width = init_height * aspect
else:
init_width = max(min_dimension, math.sqrt(model_area * aspect))
init_height = init_width / aspect
scaled_width, scaled_height = self.trim_to_multiple_of(
math.floor(init_width),
math.floor(init_height),
)
return IdealSizeOutput(width=scaled_width, height=scaled_height)
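Working the math for an SDXL model at a 1024x576 (16:9) target with multiplier 1.0:

```python
import math

aspect = 1024 / 576                # ~1.778
dimension = 1024.0                 # SDXL base dimension, multiplier 1.0
model_area = dimension * dimension

init_height = max(dimension * 0.5, math.sqrt(model_area / aspect))  # 768.0
init_width = init_height * aspect                                   # ~1365.3

# Trim both to a multiple of LATENT_SCALE_FACTOR (8):
width = math.floor(init_width) - math.floor(init_width) % 8     # 1360
height = math.floor(init_height) - math.floor(init_height) % 8  # 768
```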

View File

@@ -1,12 +1,12 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from pathlib import Path
from typing import Literal, Optional
import cv2
import numpy
from PIL import Image, ImageChops, ImageFilter, ImageOps
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.constants import IMAGE_MODES
from invokeai.app.invocations.fields import (
ColorField,
@@ -22,8 +22,6 @@ from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
from invokeai.backend.image_util.safety_checker import SafetyChecker
from .baseinvocation import BaseInvocation, Classification, invocation
@invocation("show_image", title="Show Image", tags=["image"], category="image", version="1.0.1")
class ShowImageInvocation(BaseInvocation):
@@ -504,7 +502,7 @@ class ImageInverseLerpInvocation(BaseInvocation, WithMetadata, WithBoard):
title="Blur NSFW Image",
tags=["image", "nsfw"],
category="image",
version="1.2.2",
version="1.2.3",
)
class ImageNSFWBlurInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Add blur to NSFW-flagged images"""
@@ -516,23 +514,12 @@ class ImageNSFWBlurInvocation(BaseInvocation, WithMetadata, WithBoard):
logger = context.logger
logger.debug("Running NSFW checker")
if SafetyChecker.has_nsfw_concept(image):
logger.info("A potentially NSFW image has been detected. Image will be blurred.")
blurry_image = image.filter(filter=ImageFilter.GaussianBlur(radius=32))
caution = self._get_caution_img()
blurry_image.paste(caution, (0, 0), caution)
image = blurry_image
image = SafetyChecker.blur_if_nsfw(image)
image_dto = context.images.save(image=image)
return ImageOutput.build(image_dto)
def _get_caution_img(self) -> Image.Image:
import invokeai.app.assets.images as image_assets
caution = Image.open(Path(image_assets.__path__[0]) / "caution.png")
return caution.resize((caution.width // 2, caution.height // 2))
@invocation(
"img_watermark",

View File

@ -0,0 +1,143 @@
from contextlib import nullcontext
from functools import singledispatchmethod
import einops
import torch
from diffusers.models.attention_processor import (
AttnProcessor2_0,
LoRAAttnProcessor2_0,
LoRAXFormersAttnProcessor,
XFormersAttnProcessor,
)
from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
from diffusers.models.autoencoders.autoencoder_tiny import AutoencoderTiny
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.constants import DEFAULT_PRECISION, LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import (
FieldDescriptions,
ImageField,
Input,
InputField,
)
from invokeai.app.invocations.model import VAEField
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager import LoadedModel
from invokeai.backend.stable_diffusion.diffusers_pipeline import image_resized_to_grid_as_tensor
from invokeai.backend.stable_diffusion.vae_tiling import patch_vae_tiling_params
@invocation(
"i2l",
title="Image to Latents",
tags=["latents", "image", "vae", "i2l"],
category="latents",
version="1.1.0",
)
class ImageToLatentsInvocation(BaseInvocation):
"""Encodes an image into latents."""
image: ImageField = InputField(
description="The image to encode",
)
vae: VAEField = InputField(
description=FieldDescriptions.vae,
input=Input.Connection,
)
tiled: bool = InputField(default=False, description=FieldDescriptions.tiled)
# NOTE: tile_size = 0 is a special value. We use this rather than `int | None`, because the workflow UI does not
# offer a way to directly set None values.
tile_size: int = InputField(default=0, multiple_of=8, description=FieldDescriptions.vae_tile_size)
fp32: bool = InputField(default=DEFAULT_PRECISION == torch.float32, description=FieldDescriptions.fp32)
@staticmethod
def vae_encode(
vae_info: LoadedModel, upcast: bool, tiled: bool, image_tensor: torch.Tensor, tile_size: int = 0
) -> torch.Tensor:
with vae_info as vae:
assert isinstance(vae, (AutoencoderKL, AutoencoderTiny))
orig_dtype = vae.dtype
if upcast:
vae.to(dtype=torch.float32)
use_torch_2_0_or_xformers = hasattr(vae.decoder, "mid_block") and isinstance(
vae.decoder.mid_block.attentions[0].processor,
(
AttnProcessor2_0,
XFormersAttnProcessor,
LoRAXFormersAttnProcessor,
LoRAAttnProcessor2_0,
),
)
# If xformers or torch 2.0 is used, the attention block does not need
# to be in float32, which can save lots of memory
if use_torch_2_0_or_xformers:
vae.post_quant_conv.to(orig_dtype)
vae.decoder.conv_in.to(orig_dtype)
vae.decoder.mid_block.to(orig_dtype)
# else:
# latents = latents.float()
else:
vae.to(dtype=torch.float16)
# latents = latents.half()
if tiled:
vae.enable_tiling()
else:
vae.disable_tiling()
tiling_context = nullcontext()
if tile_size > 0:
tiling_context = patch_vae_tiling_params(
vae,
tile_sample_min_size=tile_size,
tile_latent_min_size=tile_size // LATENT_SCALE_FACTOR,
tile_overlap_factor=0.25,
)
# non_noised_latents_from_image
image_tensor = image_tensor.to(device=vae.device, dtype=vae.dtype)
with torch.inference_mode(), tiling_context:
latents = ImageToLatentsInvocation._encode_to_tensor(vae, image_tensor)
latents = vae.config.scaling_factor * latents
latents = latents.to(dtype=orig_dtype)
return latents
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
image = context.images.get_pil(self.image.image_name)
vae_info = context.models.load(self.vae.vae)
image_tensor = image_resized_to_grid_as_tensor(image.convert("RGB"))
if image_tensor.dim() == 3:
image_tensor = einops.rearrange(image_tensor, "c h w -> 1 c h w")
latents = self.vae_encode(
vae_info=vae_info, upcast=self.fp32, tiled=self.tiled, image_tensor=image_tensor, tile_size=self.tile_size
)
latents = latents.to("cpu")
name = context.tensors.save(tensor=latents)
return LatentsOutput.build(latents_name=name, latents=latents, seed=None)
@singledispatchmethod
@staticmethod
def _encode_to_tensor(vae: AutoencoderKL, image_tensor: torch.FloatTensor) -> torch.FloatTensor:
assert isinstance(vae, torch.nn.Module)
image_tensor_dist = vae.encode(image_tensor).latent_dist
latents: torch.Tensor = image_tensor_dist.sample().to(
dtype=vae.dtype
) # FIXME: uses torch.randn. make reproducible!
return latents
@_encode_to_tensor.register
@staticmethod
def _(vae: AutoencoderTiny, image_tensor: torch.FloatTensor) -> torch.FloatTensor:
assert isinstance(vae, torch.nn.Module)
latents: torch.FloatTensor = vae.encode(image_tensor).latents
return latents
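`_encode_to_tensor` uses `functools.singledispatchmethod` so the encode path is chosen by the VAE's concrete class (AutoencoderKL vs. AutoencoderTiny). A minimal sketch of the dispatch mechanism, with stand-in types instead of the real VAE classes:

```python
from functools import singledispatchmethod

class Encoder:
    @singledispatchmethod
    def encode(self, vae) -> str:
        raise NotImplementedError(f"no encoder registered for {type(vae).__name__}")

    @encode.register
    def _(self, vae: int) -> str:  # stand-in for AutoencoderKL
        return "sample from the latent distribution"

    @encode.register
    def _(self, vae: str) -> str:  # stand-in for AutoencoderTiny
        return "read .latents directly"

print(Encoder().encode(0))   # AutoencoderKL-style path
print(Encoder().encode(""))  # AutoencoderTiny-style path
```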

View File

@@ -1,154 +1,90 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654) and the InvokeAI Team
from abc import abstractmethod
from typing import Literal, get_args
import math
from typing import Literal, Optional, get_args
from PIL import Image
import numpy as np
from PIL import Image, ImageOps
from invokeai.app.invocations.fields import ColorField, ImageField
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import ColorField, ImageField, InputField, WithBoard, WithMetadata
from invokeai.app.invocations.image import PIL_RESAMPLING_MAP, PIL_RESAMPLING_MODES
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.download_with_progress import download_with_progress_bar
from invokeai.app.util.misc import SEED_MAX
from invokeai.backend.image_util.cv2_inpaint import cv2_inpaint
from invokeai.backend.image_util.lama import LaMA
from invokeai.backend.image_util.patchmatch import PatchMatch
from invokeai.backend.image_util.infill_methods.cv2_inpaint import cv2_inpaint
from invokeai.backend.image_util.infill_methods.lama import LaMA
from invokeai.backend.image_util.infill_methods.mosaic import infill_mosaic
from invokeai.backend.image_util.infill_methods.patchmatch import PatchMatch, infill_patchmatch
from invokeai.backend.image_util.infill_methods.tile import infill_tile
from invokeai.backend.util.logging import InvokeAILogger
from .baseinvocation import BaseInvocation, invocation
from .fields import InputField, WithBoard, WithMetadata
from .image import PIL_RESAMPLING_MAP, PIL_RESAMPLING_MODES
logger = InvokeAILogger.get_logger()
def infill_methods() -> list[str]:
methods = ["tile", "solid", "lama", "cv2"]
def get_infill_methods():
methods = Literal["tile", "color", "lama", "cv2"] # TODO: add mosaic back
if PatchMatch.patchmatch_available():
methods.insert(0, "patchmatch")
methods = Literal["patchmatch", "tile", "color", "lama", "cv2"] # TODO: add mosaic back
return methods
INFILL_METHODS = Literal[tuple(infill_methods())]
INFILL_METHODS = get_infill_methods()
DEFAULT_INFILL_METHOD = "patchmatch" if "patchmatch" in get_args(INFILL_METHODS) else "tile"
def infill_lama(im: Image.Image) -> Image.Image:
lama = LaMA()
return lama(im)
class InfillImageProcessorInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Base class for invocations that preprocess images for Infilling"""
image: ImageField = InputField(description="The image to process")
def infill_patchmatch(im: Image.Image) -> Image.Image:
if im.mode != "RGBA":
return im
@abstractmethod
def infill(self, image: Image.Image) -> Image.Image:
"""Infill the image with the specified method"""
pass
# Skip patchmatch if patchmatch isn't available
if not PatchMatch.patchmatch_available():
return im
def load_image(self) -> tuple[Image.Image, bool]:
"""Process the image to have an alpha channel before being infilled"""
image = self._context.images.get_pil(self.image.image_name)
has_alpha = True if image.mode == "RGBA" else False
return image, has_alpha
# Patchmatch (note, we may want to expose patch_size? Increasing it significantly impacts performance though)
im_patched_np = PatchMatch.inpaint(im.convert("RGB"), ImageOps.invert(im.split()[-1]), patch_size=3)
im_patched = Image.fromarray(im_patched_np, mode="RGB")
return im_patched
def invoke(self, context: InvocationContext) -> ImageOutput:
self._context = context
# Retrieve and process image to be infilled
input_image, has_alpha = self.load_image()
# If the input image has no alpha channel, return it
if has_alpha is False:
return ImageOutput.build(context.images.get_dto(self.image.image_name))
def infill_cv2(im: Image.Image) -> Image.Image:
return cv2_inpaint(im)
# Perform Infill action
infilled_image = self.infill(input_image)
# Create ImageDTO for Infilled Image
infilled_image_dto = context.images.save(image=infilled_image)
def get_tile_images(image: np.ndarray, width=8, height=8):
_nrows, _ncols, depth = image.shape
_strides = image.strides
nrows, _m = divmod(_nrows, height)
ncols, _n = divmod(_ncols, width)
if _m != 0 or _n != 0:
return None
return np.lib.stride_tricks.as_strided(
np.ravel(image),
shape=(nrows, ncols, height, width, depth),
strides=(height * _strides[0], width * _strides[1], *_strides),
writeable=False,
)
def tile_fill_missing(im: Image.Image, tile_size: int = 16, seed: Optional[int] = None) -> Image.Image:
# Only fill if there's an alpha layer
if im.mode != "RGBA":
return im
a = np.asarray(im, dtype=np.uint8)
tile_size_tuple = (tile_size, tile_size)
# Get the image as tiles of a specified size
tiles = get_tile_images(a, *tile_size_tuple).copy()
# Get the mask as tiles
tiles_mask = tiles[:, :, :, :, 3]
# Find any mask tiles with any fully transparent pixels (we will be replacing these later)
tmask_shape = tiles_mask.shape
tiles_mask = tiles_mask.reshape(math.prod(tiles_mask.shape))
n, ny = (math.prod(tmask_shape[0:2])), math.prod(tmask_shape[2:])
tiles_mask = tiles_mask > 0
tiles_mask = tiles_mask.reshape((n, ny)).all(axis=1)
# Get RGB tiles in single array and filter by the mask
tshape = tiles.shape
tiles_all = tiles.reshape((math.prod(tiles.shape[0:2]), *tiles.shape[2:]))
filtered_tiles = tiles_all[tiles_mask]
if len(filtered_tiles) == 0:
return im
# Find all invalid tiles and replace with a random valid tile
replace_count = (tiles_mask == False).sum() # noqa: E712
rng = np.random.default_rng(seed=seed)
tiles_all[np.logical_not(tiles_mask)] = filtered_tiles[rng.choice(filtered_tiles.shape[0], replace_count), :, :, :]
# Convert back to an image
tiles_all = tiles_all.reshape(tshape)
tiles_all = tiles_all.swapaxes(1, 2)
st = tiles_all.reshape(
(
math.prod(tiles_all.shape[0:2]),
math.prod(tiles_all.shape[2:4]),
tiles_all.shape[4],
)
)
si = Image.fromarray(st, mode="RGBA")
return si
# Return Infilled Image
return ImageOutput.build(infilled_image_dto)
@invocation("infill_rgba", title="Solid Color Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2")
class InfillColorInvocation(BaseInvocation, WithMetadata, WithBoard):
class InfillColorInvocation(InfillImageProcessorInvocation):
"""Infills transparent areas of an image with a solid color"""
image: ImageField = InputField(description="The image to infill")
color: ColorField = InputField(
default=ColorField(r=127, g=127, b=127, a=255),
description="The color to use to infill",
)
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.images.get_pil(self.image.image_name)
def infill(self, image: Image.Image):
solid_bg = Image.new("RGBA", image.size, self.color.tuple())
infilled = Image.alpha_composite(solid_bg, image.convert("RGBA"))
infilled.paste(image, (0, 0), image.split()[-1])
image_dto = context.images.save(image=infilled)
return ImageOutput.build(image_dto)
return infilled
@invocation("infill_tile", title="Tile Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.3")
class InfillTileInvocation(BaseInvocation, WithMetadata, WithBoard):
class InfillTileInvocation(InfillImageProcessorInvocation):
"""Infills transparent areas of an image with tiles of the image"""
image: ImageField = InputField(description="The image to infill")
tile_size: int = InputField(default=32, ge=1, description="The tile size (px)")
seed: int = InputField(
default=0,
@@ -157,92 +93,78 @@ class InfillTileInvocation(BaseInvocation, WithMetadata, WithBoard):
description="The seed to use for tile generation (omit for random)",
)
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.images.get_pil(self.image.image_name)
infilled = tile_fill_missing(image.copy(), seed=self.seed, tile_size=self.tile_size)
infilled.paste(image, (0, 0), image.split()[-1])
image_dto = context.images.save(image=infilled)
return ImageOutput.build(image_dto)
def infill(self, image: Image.Image):
output = infill_tile(image, seed=self.seed, tile_size=self.tile_size)
return output.infilled
@invocation(
"infill_patchmatch", title="PatchMatch Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2"
)
class InfillPatchMatchInvocation(InfillImageProcessorInvocation):
    """Infills transparent areas of an image using the PatchMatch algorithm"""

    downscale: float = InputField(default=2.0, gt=0, description="Run patchmatch on downscaled image to speed up infill")
    resample_mode: PIL_RESAMPLING_MODES = InputField(default="bicubic", description="The resampling mode")

    def infill(self, image: Image.Image):
        resample_mode = PIL_RESAMPLING_MAP[self.resample_mode]

        # Downscale, infill, then restore the original resolution
        width = int(image.width / self.downscale)
        height = int(image.height / self.downscale)
        infilled = image.resize(
            (width, height),
            resample=resample_mode,
        )
        infilled = infill_patchmatch(infilled)
        infilled = infilled.resize(
            (image.width, image.height),
            resample=resample_mode,
        )
        # Paste the original pixels back over the infilled regions
        infilled.paste(image, (0, 0), mask=image.split()[-1])

        return infilled
@invocation("infill_lama", title="LaMa Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2")
class LaMaInfillInvocation(InfillImageProcessorInvocation):
    """Infills transparent areas of an image using the LaMa model"""

    def infill(self, image: Image.Image):
        with self._context.models.load_remote_model(
            source="https://github.com/Sanster/models/releases/download/add_big_lama/big-lama.pt",
            loader=LaMA.load_jit_model,
        ) as model:
            lama = LaMA(model)
            return lama(image)
@invocation("infill_cv2", title="CV2 Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2")
class CV2InfillInvocation(InfillImageProcessorInvocation):
"""Infills transparent areas of an image using OpenCV Inpainting"""
def infill(self, image: Image.Image):
return cv2_inpaint(image)
# @invocation(
#     "infill_mosaic", title="Mosaic Infill", tags=["image", "inpaint", "outpaint"], category="inpaint", version="1.0.0"
# )
class MosaicInfillInvocation(InfillImageProcessorInvocation):
    """Infills transparent areas of an image with a mosaic pattern drawing colors from the rest of the image"""

    image: ImageField = InputField(description="The image to infill")
    tile_width: int = InputField(default=64, description="Width of the tile")
    tile_height: int = InputField(default=64, description="Height of the tile")
    min_color: ColorField = InputField(
        default=ColorField(r=0, g=0, b=0, a=255),
        description="The min threshold for color",
    )
    max_color: ColorField = InputField(
        default=ColorField(r=255, g=255, b=255, a=255),
        description="The max threshold for color",
    )

    def infill(self, image: Image.Image):
        return infill_mosaic(image, (self.tile_width, self.tile_height), self.min_color.tuple(), self.max_color.tuple())

View File

@ -1,34 +1,41 @@
from builtins import float
from typing import List, Literal, Optional, Union

from pydantic import BaseModel, Field, field_validator, model_validator
from typing_extensions import Self

from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.fields import FieldDescriptions, InputField, OutputField, TensorField, UIType
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import ImageField
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import (
    AnyModelConfig,
    BaseModelType,
    IPAdapterCheckpointConfig,
    IPAdapterInvokeAIConfig,
    ModelType,
)
class IPAdapterField(BaseModel):
image: Union[ImageField, List[ImageField]] = Field(description="The IP-Adapter image prompt(s).")
ip_adapter_model: ModelIdentifierField = Field(description="The IP-Adapter model to use.")
image_encoder_model: ModelIdentifierField = Field(description="The name of the CLIP image encoder model.")
weight: Union[float, List[float]] = Field(default=1, description="The weight given to the ControlNet")
weight: Union[float, List[float]] = Field(default=1, description="The weight given to the IP-Adapter.")
target_blocks: List[str] = Field(default=[], description="The IP Adapter blocks to apply")
begin_step_percent: float = Field(
default=0, ge=0, le=1, description="When the IP-Adapter is first applied (% of total steps)"
)
end_step_percent: float = Field(
default=1, ge=0, le=1, description="When the IP-Adapter is last applied (% of total steps)"
)
mask: Optional[TensorField] = Field(
default=None,
description="The bool mask associated with this IP-Adapter. Excluded regions should be set to False, included "
"regions should be set to True.",
)
@field_validator("weight")
@classmethod
@ -48,29 +55,41 @@ class IPAdapterOutput(BaseInvocationOutput):
ip_adapter: IPAdapterField = OutputField(description=FieldDescriptions.ip_adapter, title="IP-Adapter")
@invocation("ip_adapter", title="IP-Adapter", tags=["ip_adapter", "control"], category="ip_adapter", version="1.2.2")
CLIP_VISION_MODEL_MAP = {"ViT-H": "ip_adapter_sd_image_encoder", "ViT-G": "ip_adapter_sdxl_image_encoder"}
@invocation("ip_adapter", title="IP-Adapter", tags=["ip_adapter", "control"], category="ip_adapter", version="1.4.1")
class IPAdapterInvocation(BaseInvocation):
"""Collects IP-Adapter info to pass to other nodes."""
# Inputs
    image: Union[ImageField, List[ImageField]] = InputField(description="The IP-Adapter image prompt(s).", ui_order=1)
ip_adapter_model: ModelIdentifierField = InputField(
description="The IP-Adapter model.",
title="IP-Adapter Model",
ui_order=-1,
ui_type=UIType.IPAdapterModel,
)
clip_vision_model: Literal["ViT-H", "ViT-G"] = InputField(
description="CLIP Vision model to use. Overrides model settings. Mandatory for checkpoint models.",
default="ViT-H",
ui_order=2,
)
weight: Union[float, List[float]] = InputField(
default=1, description="The weight given to the IP-Adapter", title="Weight"
)
method: Literal["full", "style", "composition"] = InputField(
default="full", description="The method to apply the IP-Adapter"
)
begin_step_percent: float = InputField(
default=0, ge=0, le=1, description="When the IP-Adapter is first applied (% of total steps)"
)
end_step_percent: float = InputField(
default=1, ge=0, le=1, description="When the IP-Adapter is last applied (% of total steps)"
)
mask: Optional[TensorField] = InputField(
default=None, description="A mask defining the region that this IP-Adapter applies to."
)
@field_validator("weight")
@classmethod
@ -86,35 +105,68 @@ class IPAdapterInvocation(BaseInvocation):
def invoke(self, context: InvocationContext) -> IPAdapterOutput:
# Lookup the CLIP Vision encoder that is intended to be used with the IP-Adapter model.
ip_adapter_info = context.models.get_config(self.ip_adapter_model.key)
assert isinstance(ip_adapter_info, (IPAdapterInvokeAIConfig, IPAdapterCheckpointConfig))
if isinstance(ip_adapter_info, IPAdapterInvokeAIConfig):
image_encoder_model_id = ip_adapter_info.image_encoder_model_id
image_encoder_model_name = image_encoder_model_id.split("/")[-1].strip()
else:
image_encoder_model_name = CLIP_VISION_MODEL_MAP[self.clip_vision_model]
image_encoder_model = self._get_image_encoder(context, image_encoder_model_name)
if self.method == "style":
if ip_adapter_info.base == "sd-1":
target_blocks = ["up_blocks.1"]
elif ip_adapter_info.base == "sdxl":
target_blocks = ["up_blocks.0.attentions.1"]
else:
raise ValueError(f"Unsupported IP-Adapter base type: '{ip_adapter_info.base}'.")
elif self.method == "composition":
if ip_adapter_info.base == "sd-1":
target_blocks = ["down_blocks.2", "mid_block"]
elif ip_adapter_info.base == "sdxl":
target_blocks = ["down_blocks.2.attentions.1"]
else:
raise ValueError(f"Unsupported IP-Adapter base type: '{ip_adapter_info.base}'.")
elif self.method == "full":
target_blocks = ["block"]
else:
raise ValueError(f"Unexpected IP-Adapter method: '{self.method}'.")
return IPAdapterOutput(
ip_adapter=IPAdapterField(
image=self.image,
ip_adapter_model=self.ip_adapter_model,
image_encoder_model=ModelIdentifierField.from_config(image_encoder_model),
weight=self.weight,
target_blocks=target_blocks,
begin_step_percent=self.begin_step_percent,
end_step_percent=self.end_step_percent,
mask=self.mask,
),
)
    def _get_image_encoder(self, context: InvocationContext, image_encoder_model_name: str) -> AnyModelConfig:
        found = False
        while not found:
            image_encoder_models = context.models.search_by_attrs(
                name=image_encoder_model_name, base=BaseModelType.Any, type=ModelType.CLIPVision
            )
            found = len(image_encoder_models) > 0
            if not found:
                context.logger.warning(
                    f"The image encoder required by this IP Adapter ({image_encoder_model_name}) is not installed."
                )
                context.logger.warning("Downloading and installing now. This may take a while.")
                installer = context._services.model_manager.install
                job = installer.heuristic_import(f"InvokeAI/{image_encoder_model_name}")
                installer.wait_for_job(job, timeout=600)  # wait up to 10 minutes - then raise a TimeoutException

        if len(image_encoder_models) == 0:
            context.logger.error("Error while fetching CLIP Vision Image Encoder")

        assert len(image_encoder_models) == 1
        return image_encoder_models[0]
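A side note on the invoke method above: the if/elif ladder that selects target_blocks could equally be written as a lookup table keyed on (method, base). A sketch of that alternative, not the project's actual code, with the values copied from the ladder:

# (method, base) -> attention blocks; values copied from the if/elif ladder above
TARGET_BLOCKS = {
    ("style", "sd-1"): ["up_blocks.1"],
    ("style", "sdxl"): ["up_blocks.0.attentions.1"],
    ("composition", "sd-1"): ["down_blocks.2", "mid_block"],
    ("composition", "sdxl"): ["down_blocks.2.attentions.1"],
}

def resolve_target_blocks(method: str, base: str) -> list[str]:
    if method == "full":
        return ["block"]  # "full" applies to every block regardless of base
    try:
        return TARGET_BLOCKS[(method, base)]
    except KeyError:
        raise ValueError(f"Unsupported IP-Adapter method/base: '{method}' / '{base}'.")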

File diff suppressed because it is too large

View File

@ -0,0 +1,121 @@
from contextlib import nullcontext
import torch
from diffusers.image_processor import VaeImageProcessor
from diffusers.models.attention_processor import (
AttnProcessor2_0,
LoRAAttnProcessor2_0,
LoRAXFormersAttnProcessor,
XFormersAttnProcessor,
)
from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
from diffusers.models.autoencoders.autoencoder_tiny import AutoencoderTiny
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.constants import DEFAULT_PRECISION, LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import (
FieldDescriptions,
Input,
InputField,
LatentsField,
WithBoard,
WithMetadata,
)
from invokeai.app.invocations.model import VAEField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.stable_diffusion.extensions.seamless import SeamlessExt
from invokeai.backend.stable_diffusion.vae_tiling import patch_vae_tiling_params
from invokeai.backend.util.devices import TorchDevice
@invocation(
"l2i",
title="Latents to Image",
tags=["latents", "image", "vae", "l2i"],
category="latents",
version="1.3.0",
)
class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Generates an image from latents."""
latents: LatentsField = InputField(
description=FieldDescriptions.latents,
input=Input.Connection,
)
vae: VAEField = InputField(
description=FieldDescriptions.vae,
input=Input.Connection,
)
tiled: bool = InputField(default=False, description=FieldDescriptions.tiled)
# NOTE: tile_size = 0 is a special value. We use this rather than `int | None`, because the workflow UI does not
# offer a way to directly set None values.
tile_size: int = InputField(default=0, multiple_of=8, description=FieldDescriptions.vae_tile_size)
fp32: bool = InputField(default=DEFAULT_PRECISION == torch.float32, description=FieldDescriptions.fp32)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ImageOutput:
latents = context.tensors.load(self.latents.latents_name)
vae_info = context.models.load(self.vae.vae)
assert isinstance(vae_info.model, (AutoencoderKL, AutoencoderTiny))
with SeamlessExt.static_patch_model(vae_info.model, self.vae.seamless_axes), vae_info as vae:
assert isinstance(vae, (AutoencoderKL, AutoencoderTiny))
latents = latents.to(vae.device)
if self.fp32:
vae.to(dtype=torch.float32)
use_torch_2_0_or_xformers = hasattr(vae.decoder, "mid_block") and isinstance(
vae.decoder.mid_block.attentions[0].processor,
(
AttnProcessor2_0,
XFormersAttnProcessor,
LoRAXFormersAttnProcessor,
LoRAAttnProcessor2_0,
),
)
# if xformers or torch_2_0 is used attention block does not need
# to be in float32 which can save lots of memory
if use_torch_2_0_or_xformers:
vae.post_quant_conv.to(latents.dtype)
vae.decoder.conv_in.to(latents.dtype)
vae.decoder.mid_block.to(latents.dtype)
else:
latents = latents.float()
else:
vae.to(dtype=torch.float16)
latents = latents.half()
if self.tiled or context.config.get().force_tiled_decode:
vae.enable_tiling()
else:
vae.disable_tiling()
tiling_context = nullcontext()
if self.tile_size > 0:
tiling_context = patch_vae_tiling_params(
vae,
tile_sample_min_size=self.tile_size,
tile_latent_min_size=self.tile_size // LATENT_SCALE_FACTOR,
tile_overlap_factor=0.25,
)
# clear memory as vae decode can request a lot
TorchDevice.empty_cache()
with torch.inference_mode(), tiling_context:
# copied from diffusers pipeline
latents = latents / vae.config.scaling_factor
image = vae.decode(latents, return_dict=False)[0]
image = (image / 2 + 0.5).clamp(0, 1) # denormalize
# we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
np_image = image.cpu().permute(0, 2, 3, 1).float().numpy()
image = VaeImageProcessor.numpy_to_pil(np_image)[0]
TorchDevice.empty_cache()
image_dto = context.images.save(image=image)
return ImageOutput.build(image_dto)
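One detail worth calling out in the decode path above is the nullcontext trick: tiling_context is a no-op unless tile_size > 0, which lets the with statement stay unconditional. A minimal standalone illustration of the idiom (not InvokeAI-specific):

from contextlib import contextmanager, nullcontext

@contextmanager
def patched_params():
    print("patching")
    yield
    print("restoring")

def decode(tile_size: int) -> None:
    # Only patch when a tile size override is requested; otherwise no-op.
    ctx = patched_params() if tile_size > 0 else nullcontext()
    with ctx:
        pass  # decode happens here under either context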

View File

@ -0,0 +1,145 @@
import numpy as np
import torch
from PIL import Image
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, InvocationContext, invocation
from invokeai.app.invocations.fields import ImageField, InputField, TensorField, WithBoard, WithMetadata
from invokeai.app.invocations.primitives import ImageOutput, MaskOutput
@invocation(
"rectangle_mask",
title="Create Rectangle Mask",
tags=["conditioning"],
category="conditioning",
version="1.0.1",
)
class RectangleMaskInvocation(BaseInvocation, WithMetadata):
"""Create a rectangular mask."""
width: int = InputField(description="The width of the entire mask.")
height: int = InputField(description="The height of the entire mask.")
x_left: int = InputField(description="The left x-coordinate of the rectangular masked region (inclusive).")
y_top: int = InputField(description="The top y-coordinate of the rectangular masked region (inclusive).")
rectangle_width: int = InputField(description="The width of the rectangular masked region.")
rectangle_height: int = InputField(description="The height of the rectangular masked region.")
def invoke(self, context: InvocationContext) -> MaskOutput:
mask = torch.zeros((1, self.height, self.width), dtype=torch.bool)
mask[:, self.y_top : self.y_top + self.rectangle_height, self.x_left : self.x_left + self.rectangle_width] = (
True
)
mask_tensor_name = context.tensors.save(mask)
return MaskOutput(
mask=TensorField(tensor_name=mask_tensor_name),
width=self.width,
height=self.height,
)
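The rectangle mask above is plain boolean-tensor slicing; a minimal standalone equivalent with hypothetical dimensions:

import torch

height, width = 512, 512
mask = torch.zeros((1, height, width), dtype=torch.bool)
# x_left=128, y_top=64, rectangle_width=256, rectangle_height=192
mask[:, 64 : 64 + 192, 128 : 128 + 256] = True
assert int(mask.sum()) == 192 * 256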
@invocation(
"alpha_mask_to_tensor",
title="Alpha Mask to Tensor",
tags=["conditioning"],
category="conditioning",
version="1.0.0",
classification=Classification.Beta,
)
class AlphaMaskToTensorInvocation(BaseInvocation):
"""Convert a mask image to a tensor. Opaque regions are 1 and transparent regions are 0."""
image: ImageField = InputField(description="The mask image to convert.")
invert: bool = InputField(default=False, description="Whether to invert the mask.")
def invoke(self, context: InvocationContext) -> MaskOutput:
image = context.images.get_pil(self.image.image_name)
mask = torch.zeros((1, image.height, image.width), dtype=torch.bool)
if self.invert:
mask[0] = torch.tensor(np.array(image)[:, :, 3] == 0, dtype=torch.bool)
else:
mask[0] = torch.tensor(np.array(image)[:, :, 3] > 0, dtype=torch.bool)
return MaskOutput(
mask=TensorField(tensor_name=context.tensors.save(mask)),
height=mask.shape[1],
width=mask.shape[2],
)
@invocation(
"invert_tensor_mask",
title="Invert Tensor Mask",
tags=["conditioning"],
category="conditioning",
version="1.0.0",
classification=Classification.Beta,
)
class InvertTensorMaskInvocation(BaseInvocation):
"""Inverts a tensor mask."""
mask: TensorField = InputField(description="The tensor mask to convert.")
def invoke(self, context: InvocationContext) -> MaskOutput:
mask = context.tensors.load(self.mask.tensor_name)
inverted = ~mask
return MaskOutput(
mask=TensorField(tensor_name=context.tensors.save(inverted)),
height=inverted.shape[1],
width=inverted.shape[2],
)
@invocation(
"image_mask_to_tensor",
title="Image Mask to Tensor",
tags=["conditioning"],
category="conditioning",
version="1.0.0",
)
class ImageMaskToTensorInvocation(BaseInvocation, WithMetadata):
"""Convert a mask image to a tensor. Converts the image to grayscale and uses thresholding at the specified value."""
image: ImageField = InputField(description="The mask image to convert.")
cutoff: int = InputField(ge=0, le=255, description="Cutoff (<)", default=128)
invert: bool = InputField(default=False, description="Whether to invert the mask.")
def invoke(self, context: InvocationContext) -> MaskOutput:
image = context.images.get_pil(self.image.image_name, mode="L")
mask = torch.zeros((1, image.height, image.width), dtype=torch.bool)
if self.invert:
mask[0] = torch.tensor(np.array(image)[:, :] >= self.cutoff, dtype=torch.bool)
else:
mask[0] = torch.tensor(np.array(image)[:, :] < self.cutoff, dtype=torch.bool)
return MaskOutput(
mask=TensorField(tensor_name=context.tensors.save(mask)),
height=mask.shape[1],
width=mask.shape[2],
)
@invocation(
"tensor_mask_to_image",
title="Tensor Mask to Image",
tags=["mask"],
category="mask",
version="1.0.0",
)
class MaskTensorToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Convert a mask tensor to an image."""
mask: TensorField = InputField(description="The mask tensor to convert.")
def invoke(self, context: InvocationContext) -> ImageOutput:
mask = context.tensors.load(self.mask.tensor_name)
# Ensure that the mask is binary.
if mask.dtype != torch.bool:
mask = mask > 0.5
mask_np = (mask.float() * 255).byte().cpu().numpy()
mask_pil = Image.fromarray(mask_np, mode="L")
image_dto = context.images.save(image=mask_pil)
return ImageOutput.build(image_dto)

View File

@ -5,12 +5,11 @@ from typing import Literal
import numpy as np
from pydantic import ValidationInfo, field_validator
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import FieldDescriptions, InputField
from invokeai.app.invocations.primitives import FloatOutput, IntegerOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
@invocation("add", title="Add Integers", tags=["math", "add"], category="math", version="1.0.1")
class AddInvocation(BaseInvocation):

View File

@ -2,16 +2,7 @@ from typing import Any, Literal, Optional, Union
from pydantic import BaseModel, ConfigDict, Field
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.fields import (
FieldDescriptions,
ImageField,
@ -22,8 +13,8 @@ from invokeai.app.invocations.fields import (
)
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.controlnet_utils import CONTROLNET_MODE_VALUES, CONTROLNET_RESIZE_VALUES
from invokeai.version.invokeai_version import __version__
class MetadataItemField(BaseModel):
@ -43,6 +34,8 @@ class IPAdapterMetadataField(BaseModel):
image: ImageField = Field(description="The IP-Adapter image prompt.")
ip_adapter_model: ModelIdentifierField = Field(description="The IP-Adapter model.")
clip_vision_model: Literal["ViT-H", "ViT-G"] = Field(description="The CLIP Vision model")
method: Literal["full", "style", "composition"] = Field(description="Method to apply IP Weights with")
weight: Union[float, list[float]] = Field(description="The weight given to the IP-Adapter")
begin_step_percent: float = Field(description="When the IP-Adapter is first applied (% of total steps)")
end_step_percent: float = Field(description="When the IP-Adapter is last applied (% of total steps)")

View File

@ -3,18 +3,18 @@ from typing import List, Optional
from pydantic import BaseModel, Field
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.shared.models import FreeUConfig
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelType, SubModelType
class ModelIdentifierField(BaseModel):
key: str = Field(description="The model's unique key")
@ -93,19 +93,46 @@ class ModelLoaderOutput(UNetOutput, CLIPOutput, VAEOutput):
pass
@invocation_output("model_identifier_output")
class ModelIdentifierOutput(BaseInvocationOutput):
"""Model identifier output"""
model: ModelIdentifierField = OutputField(description="Model identifier", title="Model")
@invocation(
"model_identifier",
title="Model identifier",
tags=["model"],
category="model",
version="1.0.0",
classification=Classification.Prototype,
)
class ModelIdentifierInvocation(BaseInvocation):
"""Selects any model, outputting it its identifier. Be careful with this one! The identifier will be accepted as
input for any model, even if the model types don't match. If you connect this to a mismatched input, you'll get an
error."""
model: ModelIdentifierField = InputField(description="The model to select", title="Model")
def invoke(self, context: InvocationContext) -> ModelIdentifierOutput:
if not context.models.exists(self.model.key):
raise Exception(f"Unknown model {self.model.key}")
return ModelIdentifierOutput(model=self.model)
@invocation(
"main_model_loader",
title="Main Model",
tags=["model"],
category="model",
version="1.0.2",
version="1.0.3",
)
class MainModelLoaderInvocation(BaseInvocation):
"""Loads a main model, outputting its submodels."""
    model: ModelIdentifierField = InputField(description=FieldDescriptions.main_model, ui_type=UIType.MainModel)
# TODO: precision?
def invoke(self, context: InvocationContext) -> ModelLoaderOutput:
@ -134,12 +161,12 @@ class LoRALoaderOutput(BaseInvocationOutput):
clip: Optional[CLIPField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP")
@invocation("lora_loader", title="LoRA", tags=["model"], category="model", version="1.0.2")
@invocation("lora_loader", title="LoRA", tags=["model"], category="model", version="1.0.3")
class LoRALoaderInvocation(BaseInvocation):
"""Apply selected lora to unet and text_encoder."""
lora: ModelIdentifierField = InputField(
        description=FieldDescriptions.lora_model, title="LoRA", ui_type=UIType.LoRAModel
)
weight: float = InputField(default=0.75, description=FieldDescriptions.lora_weight)
unet: Optional[UNetField] = InputField(
@ -190,6 +217,75 @@ class LoRALoaderInvocation(BaseInvocation):
return output
@invocation_output("lora_selector_output")
class LoRASelectorOutput(BaseInvocationOutput):
"""Model loader output"""
lora: LoRAField = OutputField(description="LoRA model and weight", title="LoRA")
@invocation("lora_selector", title="LoRA Selector", tags=["model"], category="model", version="1.0.1")
class LoRASelectorInvocation(BaseInvocation):
"""Selects a LoRA model and weight."""
lora: ModelIdentifierField = InputField(
description=FieldDescriptions.lora_model, title="LoRA", ui_type=UIType.LoRAModel
)
weight: float = InputField(default=0.75, description=FieldDescriptions.lora_weight)
def invoke(self, context: InvocationContext) -> LoRASelectorOutput:
return LoRASelectorOutput(lora=LoRAField(lora=self.lora, weight=self.weight))
@invocation("lora_collection_loader", title="LoRA Collection Loader", tags=["model"], category="model", version="1.0.0")
class LoRACollectionLoader(BaseInvocation):
"""Applies a collection of LoRAs to the provided UNet and CLIP models."""
loras: LoRAField | list[LoRAField] = InputField(
description="LoRA models and weights. May be a single LoRA or collection.", title="LoRAs"
)
unet: Optional[UNetField] = InputField(
default=None,
description=FieldDescriptions.unet,
input=Input.Connection,
title="UNet",
)
clip: Optional[CLIPField] = InputField(
default=None,
description=FieldDescriptions.clip,
input=Input.Connection,
title="CLIP",
)
def invoke(self, context: InvocationContext) -> LoRALoaderOutput:
output = LoRALoaderOutput()
loras = self.loras if isinstance(self.loras, list) else [self.loras]
added_loras: list[str] = []
for lora in loras:
if lora.lora.key in added_loras:
continue
if not context.models.exists(lora.lora.key):
raise Exception(f"Unknown lora: {lora.lora.key}!")
assert lora.lora.base in (BaseModelType.StableDiffusion1, BaseModelType.StableDiffusion2)
added_loras.append(lora.lora.key)
if self.unet is not None:
if output.unet is None:
output.unet = self.unet.model_copy(deep=True)
output.unet.loras.append(lora)
if self.clip is not None:
if output.clip is None:
output.clip = self.clip.model_copy(deep=True)
output.clip.loras.append(lora)
return output
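The collection loader above de-duplicates LoRAs by model key, so a LoRA that appears twice in the collection is only applied once. A minimal stand-in for that pattern, with hypothetical keys:

items = [("lora-a", 0.75), ("lora-a", 0.5), ("lora-b", 1.0)]
added_keys: list[str] = []
applied: list[tuple[str, float]] = []
for key, weight in items:
    if key in added_keys:
        continue  # first occurrence wins, as in LoRACollectionLoader
    added_keys.append(key)
    applied.append((key, weight))
assert applied == [("lora-a", 0.75), ("lora-b", 1.0)]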
@invocation_output("sdxl_lora_loader_output")
class SDXLLoRALoaderOutput(BaseInvocationOutput):
"""SDXL LoRA Loader Output"""
@ -204,13 +300,13 @@ class SDXLLoRALoaderOutput(BaseInvocationOutput):
title="SDXL LoRA",
tags=["lora", "model"],
category="model",
version="1.0.2",
version="1.0.3",
)
class SDXLLoRALoaderInvocation(BaseInvocation):
"""Apply selected lora to unet and text_encoder."""
lora: ModelIdentifierField = InputField(
        description=FieldDescriptions.lora_model, title="LoRA", ui_type=UIType.LoRAModel
)
weight: float = InputField(default=0.75, description=FieldDescriptions.lora_weight)
unet: Optional[UNetField] = InputField(
@ -279,12 +375,78 @@ class SDXLLoRALoaderInvocation(BaseInvocation):
return output
@invocation("vae_loader", title="VAE", tags=["vae", "model"], category="model", version="1.0.2")
@invocation(
"sdxl_lora_collection_loader",
title="SDXL LoRA Collection Loader",
tags=["model"],
category="model",
version="1.0.0",
)
class SDXLLoRACollectionLoader(BaseInvocation):
"""Applies a collection of SDXL LoRAs to the provided UNet and CLIP models."""
loras: LoRAField | list[LoRAField] = InputField(
description="LoRA models and weights. May be a single LoRA or collection.", title="LoRAs"
)
unet: Optional[UNetField] = InputField(
default=None,
description=FieldDescriptions.unet,
input=Input.Connection,
title="UNet",
)
clip: Optional[CLIPField] = InputField(
default=None,
description=FieldDescriptions.clip,
input=Input.Connection,
title="CLIP",
)
clip2: Optional[CLIPField] = InputField(
default=None,
description=FieldDescriptions.clip,
input=Input.Connection,
title="CLIP 2",
)
def invoke(self, context: InvocationContext) -> SDXLLoRALoaderOutput:
output = SDXLLoRALoaderOutput()
loras = self.loras if isinstance(self.loras, list) else [self.loras]
added_loras: list[str] = []
for lora in loras:
if lora.lora.key in added_loras:
continue
if not context.models.exists(lora.lora.key):
raise Exception(f"Unknown lora: {lora.lora.key}!")
assert lora.lora.base is BaseModelType.StableDiffusionXL
added_loras.append(lora.lora.key)
if self.unet is not None:
if output.unet is None:
output.unet = self.unet.model_copy(deep=True)
output.unet.loras.append(lora)
if self.clip is not None:
if output.clip is None:
output.clip = self.clip.model_copy(deep=True)
output.clip.loras.append(lora)
if self.clip2 is not None:
if output.clip2 is None:
output.clip2 = self.clip2.model_copy(deep=True)
output.clip2.loras.append(lora)
return output
@invocation("vae_loader", title="VAE", tags=["vae", "model"], category="model", version="1.0.3")
class VAELoaderInvocation(BaseInvocation):
"""Loads a VAE model, outputting a VaeLoaderOutput"""
vae_model: ModelIdentifierField = InputField(
        description=FieldDescriptions.vae_model, title="VAE", ui_type=UIType.VAEModel
)
def invoke(self, context: InvocationContext) -> VAEOutput:

View File

@ -4,18 +4,12 @@
import torch
from pydantic import field_validator
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import FieldDescriptions, InputField, LatentsField, OutputField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.misc import SEED_MAX
from invokeai.backend.util.devices import TorchDevice
"""
Utilities
@ -46,7 +40,7 @@ def get_noise(
height // downsampling_factor,
width // downsampling_factor,
],
        dtype=TorchDevice.choose_torch_dtype(device=device),
device=noise_device_type,
generator=generator,
).to("cpu")
@ -111,14 +105,14 @@ class NoiseInvocation(BaseInvocation):
@field_validator("seed", mode="before")
def modulo_seed(cls, v):
"""Returns the seed modulo (SEED_MAX + 1) to ensure it is within the valid range."""
"""Return the seed modulo (SEED_MAX + 1) to ensure it is within the valid range."""
return v % (SEED_MAX + 1)
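Assuming SEED_MAX is 2**32 - 1 (its likely value in invokeai.app.util.misc; stated as an assumption here), the validator wraps out-of-range seeds rather than rejecting them:

SEED_MAX = 2**32 - 1  # assumed value

assert (2**32 + 5) % (SEED_MAX + 1) == 5
assert (-1) % (SEED_MAX + 1) == SEED_MAX  # Python's modulo keeps the result non-negative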
def invoke(self, context: InvocationContext) -> NoiseOutput:
noise = get_noise(
width=self.width,
height=self.height,
            device=TorchDevice.choose_torch_device(),
seed=self.seed,
use_cpu=self.use_cpu,
)

View File

@ -39,12 +39,11 @@ from easing_functions import (
)
from matplotlib.ticker import MaxNLocator
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import InputField
from invokeai.app.invocations.primitives import FloatCollectionOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
@invocation(
"float_range",

View File

@ -4,8 +4,10 @@ from typing import Optional
import torch
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import (
BoundingBoxField,
ColorField,
ConditioningField,
DenoiseMaskField,
@ -15,18 +17,12 @@ from invokeai.app.invocations.fields import (
InputField,
LatentsField,
OutputField,
TensorField,
UIComponent,
)
from invokeai.app.services.images.images_common import ImageDTO
from invokeai.app.services.shared.invocation_context import InvocationContext
"""
Primitives: Boolean, Integer, Float, String, Image, Latents, Conditioning, Color
- primitive nodes
@ -405,9 +401,19 @@ class ColorInvocation(BaseInvocation):
# endregion
# region Conditioning
@invocation_output("mask_output")
class MaskOutput(BaseInvocationOutput):
"""A torch mask tensor."""
mask: TensorField = OutputField(description="The mask.")
width: int = OutputField(description="The width of the mask in pixels.")
height: int = OutputField(description="The height of the mask in pixels.")
@invocation_output("conditioning_output")
class ConditioningOutput(BaseInvocationOutput):
"""Base class for nodes that output a single conditioning tensor"""
@ -464,3 +470,42 @@ class ConditioningCollectionInvocation(BaseInvocation):
# endregion
# region BoundingBox
@invocation_output("bounding_box_output")
class BoundingBoxOutput(BaseInvocationOutput):
"""Base class for nodes that output a single bounding box"""
bounding_box: BoundingBoxField = OutputField(description="The output bounding box.")
@invocation_output("bounding_box_collection_output")
class BoundingBoxCollectionOutput(BaseInvocationOutput):
"""Base class for nodes that output a collection of bounding boxes"""
collection: list[BoundingBoxField] = OutputField(description="The output bounding boxes.", title="Bounding Boxes")
@invocation(
"bounding_box",
title="Bounding Box",
tags=["primitives", "segmentation", "collection", "bounding box"],
category="primitives",
version="1.0.0",
)
class BoundingBoxInvocation(BaseInvocation):
"""Create a bounding box manually by supplying box coordinates"""
x_min: int = InputField(default=0, description="x-coordinate of the bounding box's top left vertex")
y_min: int = InputField(default=0, description="y-coordinate of the bounding box's top left vertex")
x_max: int = InputField(default=0, description="x-coordinate of the bounding box's bottom right vertex")
y_max: int = InputField(default=0, description="y-coordinate of the bounding box's bottom right vertex")
def invoke(self, context: InvocationContext) -> BoundingBoxOutput:
bounding_box = BoundingBoxField(x_min=self.x_min, y_min=self.y_min, x_max=self.x_max, y_max=self.y_max)
return BoundingBoxOutput(bounding_box=bounding_box)
# endregion

View File

@ -5,12 +5,11 @@ import numpy as np
from dynamicprompts.generators import CombinatorialPromptGenerator, RandomPromptGenerator
from pydantic import field_validator
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import InputField, UIComponent
from invokeai.app.invocations.primitives import StringCollectionOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
@invocation(
"dynamic_prompt",

View File

@ -0,0 +1,103 @@
from typing import Literal
import torch
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import (
FieldDescriptions,
Input,
InputField,
LatentsField,
)
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.util.devices import TorchDevice
LATENTS_INTERPOLATION_MODE = Literal["nearest", "linear", "bilinear", "bicubic", "trilinear", "area", "nearest-exact"]
@invocation(
"lresize",
title="Resize Latents",
tags=["latents", "resize"],
category="latents",
version="1.0.2",
)
class ResizeLatentsInvocation(BaseInvocation):
"""Resizes latents to explicit width/height (in pixels). Provided dimensions are floor-divided by 8."""
latents: LatentsField = InputField(
description=FieldDescriptions.latents,
input=Input.Connection,
)
width: int = InputField(
ge=64,
multiple_of=LATENT_SCALE_FACTOR,
description=FieldDescriptions.width,
)
height: int = InputField(
ge=64,
multiple_of=LATENT_SCALE_FACTOR,
        description=FieldDescriptions.height,
)
mode: LATENTS_INTERPOLATION_MODE = InputField(default="bilinear", description=FieldDescriptions.interp_mode)
antialias: bool = InputField(default=False, description=FieldDescriptions.torch_antialias)
def invoke(self, context: InvocationContext) -> LatentsOutput:
latents = context.tensors.load(self.latents.latents_name)
device = TorchDevice.choose_torch_device()
resized_latents = torch.nn.functional.interpolate(
latents.to(device),
size=(self.height // LATENT_SCALE_FACTOR, self.width // LATENT_SCALE_FACTOR),
mode=self.mode,
antialias=self.antialias if self.mode in ["bilinear", "bicubic"] else False,
)
# https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
resized_latents = resized_latents.to("cpu")
TorchDevice.empty_cache()
name = context.tensors.save(tensor=resized_latents)
return LatentsOutput.build(latents_name=name, latents=resized_latents, seed=self.latents.seed)
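For a feel of the arithmetic: with LATENT_SCALE_FACTOR equal to 8 (its usual value for SD-family VAEs; an assumption here), a request for width=512, height=768 interpolates the latent tensor to a spatial size of (96, 64):

LATENT_SCALE_FACTOR = 8  # assumed value
width, height = 512, 768
size = (height // LATENT_SCALE_FACTOR, width // LATENT_SCALE_FACTOR)
assert size == (96, 64)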
@invocation(
"lscale",
title="Scale Latents",
tags=["latents", "resize"],
category="latents",
version="1.0.2",
)
class ScaleLatentsInvocation(BaseInvocation):
"""Scales latents by a given factor."""
latents: LatentsField = InputField(
description=FieldDescriptions.latents,
input=Input.Connection,
)
scale_factor: float = InputField(gt=0, description=FieldDescriptions.scale_factor)
mode: LATENTS_INTERPOLATION_MODE = InputField(default="bilinear", description=FieldDescriptions.interp_mode)
antialias: bool = InputField(default=False, description=FieldDescriptions.torch_antialias)
def invoke(self, context: InvocationContext) -> LatentsOutput:
latents = context.tensors.load(self.latents.latents_name)
device = TorchDevice.choose_torch_device()
# resizing
resized_latents = torch.nn.functional.interpolate(
latents.to(device),
scale_factor=self.scale_factor,
mode=self.mode,
antialias=self.antialias if self.mode in ["bilinear", "bicubic"] else False,
)
# https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
resized_latents = resized_latents.to("cpu")
TorchDevice.empty_cache()
name = context.tensors.save(tensor=resized_latents)
return LatentsOutput.build(latents_name=name, latents=resized_latents, seed=self.latents.seed)

View File

@ -0,0 +1,34 @@
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.fields import (
FieldDescriptions,
InputField,
OutputField,
UIType,
)
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.stable_diffusion.schedulers.schedulers import SCHEDULER_NAME_VALUES
@invocation_output("scheduler_output")
class SchedulerOutput(BaseInvocationOutput):
scheduler: SCHEDULER_NAME_VALUES = OutputField(description=FieldDescriptions.scheduler, ui_type=UIType.Scheduler)
@invocation(
"scheduler",
title="Scheduler",
tags=["scheduler"],
category="latents",
version="1.0.0",
)
class SchedulerInvocation(BaseInvocation):
"""Selects a scheduler."""
scheduler: SCHEDULER_NAME_VALUES = InputField(
default="euler",
description=FieldDescriptions.scheduler,
ui_type=UIType.Scheduler,
)
def invoke(self, context: InvocationContext) -> SchedulerOutput:
return SchedulerOutput(scheduler=self.scheduler)

View File

@ -1,15 +1,9 @@
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.fields import FieldDescriptions, InputField, OutputField, UIType
from invokeai.app.invocations.model import CLIPField, ModelIdentifierField, UNetField, VAEField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager import SubModelType
@invocation_output("sdxl_model_loader_output")
class SDXLModelLoaderOutput(BaseInvocationOutput):
@ -30,12 +24,12 @@ class SDXLRefinerModelLoaderOutput(BaseInvocationOutput):
vae: VAEField = OutputField(description=FieldDescriptions.vae, title="VAE")
@invocation("sdxl_model_loader", title="SDXL Main Model", tags=["model", "sdxl"], category="model", version="1.0.2")
@invocation("sdxl_model_loader", title="SDXL Main Model", tags=["model", "sdxl"], category="model", version="1.0.3")
class SDXLModelLoaderInvocation(BaseInvocation):
"""Loads an sdxl base model, outputting its submodels."""
model: ModelIdentifierField = InputField(
        description=FieldDescriptions.sdxl_main_model, ui_type=UIType.SDXLMainModel
)
# TODO: precision?
@ -67,13 +61,13 @@ class SDXLModelLoaderInvocation(BaseInvocation):
title="SDXL Refiner Model",
tags=["model", "sdxl", "refiner"],
category="model",
version="1.0.2",
version="1.0.3",
)
class SDXLRefinerModelLoaderInvocation(BaseInvocation):
"""Loads an sdxl refiner model, outputting its submodels."""
model: ModelIdentifierField = InputField(
        description=FieldDescriptions.sdxl_refiner_model, ui_type=UIType.SDXLRefinerModel
)
# TODO: precision?

View File

@ -0,0 +1,161 @@
from pathlib import Path
from typing import Literal
import numpy as np
import torch
from PIL import Image
from transformers import AutoModelForMaskGeneration, AutoProcessor
from transformers.models.sam import SamModel
from transformers.models.sam.processing_sam import SamProcessor
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import BoundingBoxField, ImageField, InputField, TensorField
from invokeai.app.invocations.primitives import MaskOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.image_util.segment_anything.mask_refinement import mask_to_polygon, polygon_to_mask
from invokeai.backend.image_util.segment_anything.segment_anything_pipeline import SegmentAnythingPipeline
SegmentAnythingModelKey = Literal["segment-anything-base", "segment-anything-large", "segment-anything-huge"]
SEGMENT_ANYTHING_MODEL_IDS: dict[SegmentAnythingModelKey, str] = {
"segment-anything-base": "facebook/sam-vit-base",
"segment-anything-large": "facebook/sam-vit-large",
"segment-anything-huge": "facebook/sam-vit-huge",
}
@invocation(
"segment_anything",
title="Segment Anything",
tags=["prompt", "segmentation"],
category="segmentation",
version="1.0.0",
)
class SegmentAnythingInvocation(BaseInvocation):
"""Runs a Segment Anything Model."""
# Reference:
# - https://arxiv.org/pdf/2304.02643
# - https://huggingface.co/docs/transformers/v4.43.3/en/model_doc/grounding-dino#grounded-sam
# - https://github.com/NielsRogge/Transformers-Tutorials/blob/a39f33ac1557b02ebfb191ea7753e332b5ca933f/Grounding%20DINO/GroundingDINO_with_Segment_Anything.ipynb
model: SegmentAnythingModelKey = InputField(description="The Segment Anything model to use.")
image: ImageField = InputField(description="The image to segment.")
bounding_boxes: list[BoundingBoxField] = InputField(description="The bounding boxes to prompt the SAM model with.")
apply_polygon_refinement: bool = InputField(
description="Whether to apply polygon refinement to the masks. This will smooth the edges of the masks slightly and ensure that each mask consists of a single closed polygon (before merging).",
default=True,
)
mask_filter: Literal["all", "largest", "highest_box_score"] = InputField(
description="The filtering to apply to the detected masks before merging them into a final output.",
default="all",
)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> MaskOutput:
# The models expect a 3-channel RGB image.
image_pil = context.images.get_pil(self.image.image_name, mode="RGB")
if len(self.bounding_boxes) == 0:
combined_mask = torch.zeros(image_pil.size[::-1], dtype=torch.bool)
else:
masks = self._segment(context=context, image=image_pil)
masks = self._filter_masks(masks=masks, bounding_boxes=self.bounding_boxes)
# masks contains bool values, so we merge them via max-reduce.
combined_mask, _ = torch.stack(masks).max(dim=0)
mask_tensor_name = context.tensors.save(combined_mask)
height, width = combined_mask.shape
return MaskOutput(mask=TensorField(tensor_name=mask_tensor_name), width=width, height=height)
@staticmethod
def _load_sam_model(model_path: Path):
sam_model = AutoModelForMaskGeneration.from_pretrained(
model_path,
local_files_only=True,
# TODO(ryand): Setting the torch_dtype here doesn't work. Investigate whether fp16 is supported by the
# model, and figure out how to make it work in the pipeline.
# torch_dtype=TorchDevice.choose_torch_dtype(),
)
assert isinstance(sam_model, SamModel)
sam_processor = AutoProcessor.from_pretrained(model_path, local_files_only=True)
assert isinstance(sam_processor, SamProcessor)
return SegmentAnythingPipeline(sam_model=sam_model, sam_processor=sam_processor)
def _segment(
self,
context: InvocationContext,
image: Image.Image,
) -> list[torch.Tensor]:
"""Use Segment Anything (SAM) to generate masks given an image + a set of bounding boxes."""
# Convert the bounding boxes to the SAM input format.
sam_bounding_boxes = [[bb.x_min, bb.y_min, bb.x_max, bb.y_max] for bb in self.bounding_boxes]
with (
context.models.load_remote_model(
source=SEGMENT_ANYTHING_MODEL_IDS[self.model], loader=SegmentAnythingInvocation._load_sam_model
) as sam_pipeline,
):
assert isinstance(sam_pipeline, SegmentAnythingPipeline)
masks = sam_pipeline.segment(image=image, bounding_boxes=sam_bounding_boxes)
masks = self._process_masks(masks)
if self.apply_polygon_refinement:
masks = self._apply_polygon_refinement(masks)
return masks
def _process_masks(self, masks: torch.Tensor) -> list[torch.Tensor]:
"""Convert the tensor output from the Segment Anything model from a tensor of shape
[num_masks, channels, height, width] to a list of tensors of shape [height, width].
"""
assert masks.dtype == torch.bool
# [num_masks, channels, height, width] -> [num_masks, height, width]
masks, _ = masks.max(dim=1)
# Split the first dimension into a list of masks.
return list(masks.cpu().unbind(dim=0))
def _apply_polygon_refinement(self, masks: list[torch.Tensor]) -> list[torch.Tensor]:
"""Apply polygon refinement to the masks.
Convert each mask to a polygon, then back to a mask. This has the following effect:
- Smooth the edges of the mask slightly.
- Ensure that each mask consists of a single closed polygon
- Removes small mask pieces.
- Removes holes from the mask.
"""
# Convert tensor masks to np masks.
np_masks = [mask.cpu().numpy().astype(np.uint8) for mask in masks]
# Apply polygon refinement.
for idx, mask in enumerate(np_masks):
shape = mask.shape
assert len(shape) == 2 # Assert length to satisfy type checker.
polygon = mask_to_polygon(mask)
mask = polygon_to_mask(polygon, shape)
np_masks[idx] = mask
# Convert np masks back to tensor masks.
masks = [torch.tensor(mask, dtype=torch.bool) for mask in np_masks]
return masks
def _filter_masks(self, masks: list[torch.Tensor], bounding_boxes: list[BoundingBoxField]) -> list[torch.Tensor]:
"""Filter the detected masks based on the specified mask filter."""
assert len(masks) == len(bounding_boxes)
if self.mask_filter == "all":
return masks
elif self.mask_filter == "largest":
# Find the largest mask.
return [max(masks, key=lambda x: float(x.sum()))]
elif self.mask_filter == "highest_box_score":
# Find the index of the bounding box with the highest score.
# Note that we fallback to -1.0 if the score is None. This is mainly to satisfy the type checker. In most
# cases the scores should all be non-None when using this filtering mode. That being said, -1.0 is a
# reasonable fallback since the expected score range is [0.0, 1.0].
max_score_idx = max(range(len(bounding_boxes)), key=lambda i: bounding_boxes[i].score or -1.0)
return [masks[max_score_idx]]
else:
raise ValueError(f"Invalid mask filter: {self.mask_filter}")

View File

@ -0,0 +1,253 @@
from typing import Callable
import numpy as np
import torch
from PIL import Image
from tqdm import tqdm
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import (
FieldDescriptions,
ImageField,
InputField,
UIType,
WithBoard,
WithMetadata,
)
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.session_processor.session_processor_common import CanceledException
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.spandrel_image_to_image_model import SpandrelImageToImageModel
from invokeai.backend.tiles.tiles import calc_tiles_min_overlap
from invokeai.backend.tiles.utils import TBLR, Tile
@invocation("spandrel_image_to_image", title="Image-to-Image", tags=["upscale"], category="upscale", version="1.3.0")
class SpandrelImageToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Run any spandrel image-to-image model (https://github.com/chaiNNer-org/spandrel)."""
image: ImageField = InputField(description="The input image")
image_to_image_model: ModelIdentifierField = InputField(
title="Image-to-Image Model",
description=FieldDescriptions.spandrel_image_to_image_model,
ui_type=UIType.SpandrelImageToImageModel,
)
tile_size: int = InputField(
default=512, description="The tile size for tiled image-to-image. Set to 0 to disable tiling."
)
@classmethod
def scale_tile(cls, tile: Tile, scale: int) -> Tile:
return Tile(
coords=TBLR(
top=tile.coords.top * scale,
bottom=tile.coords.bottom * scale,
left=tile.coords.left * scale,
right=tile.coords.right * scale,
),
overlap=TBLR(
top=tile.overlap.top * scale,
bottom=tile.overlap.bottom * scale,
left=tile.overlap.left * scale,
right=tile.overlap.right * scale,
),
)
@classmethod
def upscale_image(
cls,
image: Image.Image,
tile_size: int,
spandrel_model: SpandrelImageToImageModel,
is_canceled: Callable[[], bool],
) -> Image.Image:
# Compute the image tiles.
if tile_size > 0:
min_overlap = 20
tiles = calc_tiles_min_overlap(
image_height=image.height,
image_width=image.width,
tile_height=tile_size,
tile_width=tile_size,
min_overlap=min_overlap,
)
else:
# No tiling. Generate a single tile that covers the entire image.
min_overlap = 0
tiles = [
Tile(
coords=TBLR(top=0, bottom=image.height, left=0, right=image.width),
overlap=TBLR(top=0, bottom=0, left=0, right=0),
)
]
# Sort tiles first by left x coordinate, then by top y coordinate. During tile processing, we want to iterate
# over tiles left-to-right, top-to-bottom.
tiles = sorted(tiles, key=lambda x: x.coords.left)
tiles = sorted(tiles, key=lambda x: x.coords.top)
# Prepare input image for inference.
image_tensor = SpandrelImageToImageModel.pil_to_tensor(image)
# Scale the tiles for re-assembling the final image.
scale = spandrel_model.scale
scaled_tiles = [cls.scale_tile(tile, scale=scale) for tile in tiles]
# Prepare the output tensor.
_, channels, height, width = image_tensor.shape
output_tensor = torch.zeros(
(height * scale, width * scale, channels), dtype=torch.uint8, device=torch.device("cpu")
)
image_tensor = image_tensor.to(device=spandrel_model.device, dtype=spandrel_model.dtype)
# Run the model on each tile.
for tile, scaled_tile in tqdm(list(zip(tiles, scaled_tiles, strict=True)), desc="Upscaling Tiles"):
# Exit early if the invocation has been canceled.
if is_canceled():
raise CanceledException
# Extract the current tile from the input tensor.
input_tile = image_tensor[
:, :, tile.coords.top : tile.coords.bottom, tile.coords.left : tile.coords.right
].to(device=spandrel_model.device, dtype=spandrel_model.dtype)
# Run the model on the tile.
output_tile = spandrel_model.run(input_tile)
# Convert the output tile into the output tensor's format.
# (N, C, H, W) -> (C, H, W)
output_tile = output_tile.squeeze(0)
# (C, H, W) -> (H, W, C)
output_tile = output_tile.permute(1, 2, 0)
output_tile = output_tile.clamp(0, 1)
output_tile = (output_tile * 255).to(dtype=torch.uint8, device=torch.device("cpu"))
# Merge the output tile into the output tensor.
# We only keep half of the overlap on the top and left side of the tile. We do this in case there are
# edge artifacts. We don't bother with any 'blending' in the current implementation - for most upscalers
# it seems unnecessary, but we may find a need in the future.
top_overlap = scaled_tile.overlap.top // 2
left_overlap = scaled_tile.overlap.left // 2
output_tensor[
scaled_tile.coords.top + top_overlap : scaled_tile.coords.bottom,
scaled_tile.coords.left + left_overlap : scaled_tile.coords.right,
:,
] = output_tile[top_overlap:, left_overlap:, :]
# Convert the output tensor to a PIL image.
np_image = output_tensor.detach().numpy().astype(np.uint8)
pil_image = Image.fromarray(np_image)
return pil_image
@torch.inference_mode()
def invoke(self, context: InvocationContext) -> ImageOutput:
# Images are converted to RGB, because most models don't support an alpha channel. In the future, we may want to
# revisit this.
image = context.images.get_pil(self.image.image_name, mode="RGB")
# Load the model.
spandrel_model_info = context.models.load(self.image_to_image_model)
# Do the upscaling.
with spandrel_model_info as spandrel_model:
assert isinstance(spandrel_model, SpandrelImageToImageModel)
# Upscale the image
pil_image = self.upscale_image(image, self.tile_size, spandrel_model, context.util.is_canceled)
image_dto = context.images.save(image=pil_image)
return ImageOutput.build(image_dto)
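The tile bookkeeping in scale_tile and the half-overlap merge are plain integer arithmetic; a sketch with hypothetical numbers for a 2x model:

scale = 2                      # hypothetical model scale
top, bottom = 100, 612         # tile rows in the input image
overlap_top = 40               # overlap with the tile above, in input pixels

scaled_top, scaled_bottom = top * scale, bottom * scale
kept_from = (overlap_top * scale) // 2  # keep only half of the scaled overlap
assert (scaled_top, scaled_bottom, kept_from) == (200, 1224, 40)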
@invocation(
"spandrel_image_to_image_autoscale",
title="Image-to-Image (Autoscale)",
tags=["upscale"],
category="upscale",
version="1.0.0",
)
class SpandrelImageToImageAutoscaleInvocation(SpandrelImageToImageInvocation):
"""Run any spandrel image-to-image model (https://github.com/chaiNNer-org/spandrel) until the target scale is reached."""
scale: float = InputField(
default=4.0,
gt=0.0,
le=16.0,
description="The final scale of the output image. If the model does not upscale the image, this will be ignored.",
)
fit_to_multiple_of_8: bool = InputField(
default=False,
description="If true, the output image will be resized to the nearest multiple of 8 in both dimensions.",
)
@torch.inference_mode()
def invoke(self, context: InvocationContext) -> ImageOutput:
# Images are converted to RGB, because most models don't support an alpha channel. In the future, we may want to
# revisit this.
image = context.images.get_pil(self.image.image_name, mode="RGB")
# Load the model.
spandrel_model_info = context.models.load(self.image_to_image_model)
# The target size of the image, determined by the provided scale. We'll run the upscaler until we hit this size.
# Later, we may mutate this value if the model doesn't upscale the image or if the user requested a multiple of 8.
target_width = int(image.width * self.scale)
target_height = int(image.height * self.scale)
# Do the upscaling.
with spandrel_model_info as spandrel_model:
assert isinstance(spandrel_model, SpandrelImageToImageModel)
# First pass of upscaling. Note: `pil_image` is reassigned to a new, larger image on each subsequent pass.
pil_image = self.upscale_image(image, self.tile_size, spandrel_model, context.util.is_canceled)
# Some models don't upscale the image, but we have no way to know this in advance. We'll check if the model
# upscaled the image and run the loop below if it did. We'll require the model to upscale both dimensions
# to be considered an upscale model.
is_upscale_model = pil_image.width > image.width and pil_image.height > image.height
if is_upscale_model:
# This is an upscale model, so we should keep upscaling until we reach the target size.
iterations = 1
while pil_image.width < target_width or pil_image.height < target_height:
pil_image = self.upscale_image(pil_image, self.tile_size, spandrel_model, context.util.is_canceled)
iterations += 1
# Sanity check to prevent excessive or infinite loops. All known upscaling models are at least 2x.
# Our max scale is 16x, so with a 2x model, we should never exceed 16x == 2^4 -> 4 iterations.
# We'll allow one extra iteration "just in case" and bail at 5 upscaling iterations. In practice,
# we should never reach this limit.
if iterations >= 5:
context.logger.warning(
"Upscale loop reached maximum iteration count of 5, stopping upscaling early."
)
break
else:
# This model doesn't upscale the image, so we ignore the scale parameter and use the processed image's
# size as the output size.
target_width = pil_image.width
target_height = pil_image.height
# Warn the user if they requested a scale greater than 1.
if self.scale > 1:
context.logger.warning(
"Model does not increase the size of the image, but a greater scale than 1 was requested. Image will not be scaled."
)
# We may need to resize the image to a multiple of 8. Use floor division to ensure we don't scale the image up
# in the final resize.
if self.fit_to_multiple_of_8:
target_width = int(target_width // 8 * 8)
target_height = int(target_height // 8 * 8)
# Final resize. Per PIL documentation, Lanczos provides the best quality for both upscale and downscale.
# See: https://pillow.readthedocs.io/en/stable/handbook/concepts.html#filters-comparison-table
pil_image = pil_image.resize((target_width, target_height), resample=Image.Resampling.LANCZOS)
image_dto = context.images.save(image=pil_image)
return ImageOutput.build(image_dto)
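As a sanity check on the loop bound above, here is a small standalone sketch of the pass-count arithmetic and the final multiple-of-8 floor; `passes_needed` is a hypothetical helper, not part of Invoke:

```python
import math

def passes_needed(model_scale: float, requested_scale: float) -> int:
    # Each pass multiplies both dimensions by model_scale, so we need the
    # smallest n with model_scale ** n >= requested_scale.
    return max(1, math.ceil(math.log(requested_scale, model_scale)))

assert passes_needed(2.0, 16.0) == 4  # matches the 2^4 -> 4 iterations comment
assert passes_needed(4.0, 4.0) == 1

# The final Lanczos resize then brings the (possibly overshot) image down to
# the exact target, optionally floored to a multiple of 8:
target = 1023
print(target // 8 * 8)  # -> 1016
```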


@ -2,17 +2,11 @@
import re
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.fields import InputField, OutputField, UIComponent
from invokeai.app.invocations.primitives import StringOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
invocation,
invocation_output,
)
from .fields import InputField, OutputField, UIComponent
from .primitives import StringOutput
@invocation_output("string_pos_neg_output")
class StringPosNegOutput(BaseInvocationOutput):


@ -8,11 +8,11 @@ from invokeai.app.invocations.baseinvocation import (
invocation,
invocation_output,
)
from invokeai.app.invocations.controlnet_image_processors import CONTROLNET_RESIZE_VALUES
from invokeai.app.invocations.fields import FieldDescriptions, ImageField, Input, InputField, OutputField, UIType
from invokeai.app.invocations.fields import FieldDescriptions, ImageField, InputField, OutputField, UIType
from invokeai.app.invocations.model import ModelIdentifierField
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.controlnet_utils import CONTROLNET_RESIZE_VALUES
class T2IAdapterField(BaseModel):
@ -45,7 +45,7 @@ class T2IAdapterOutput(BaseInvocationOutput):
@invocation(
"t2i_adapter", title="T2I-Adapter", tags=["t2i_adapter", "control"], category="t2i_adapter", version="1.0.2"
"t2i_adapter", title="T2I-Adapter", tags=["t2i_adapter", "control"], category="t2i_adapter", version="1.0.3"
)
class T2IAdapterInvocation(BaseInvocation):
"""Collects T2I-Adapter info to pass to other nodes."""
@ -55,7 +55,6 @@ class T2IAdapterInvocation(BaseInvocation):
t2i_adapter_model: ModelIdentifierField = InputField(
description="The T2I-Adapter model.",
title="T2I-Adapter Model",
input=Input.Direct,
ui_order=-1,
ui_type=UIType.T2IAdapterModel,
)


@ -0,0 +1,287 @@
import copy
from contextlib import ExitStack
from typing import Iterator, Tuple
import torch
from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel
from diffusers.schedulers.scheduling_utils import SchedulerMixin
from pydantic import field_validator
from invokeai.app.invocations.baseinvocation import BaseInvocation, Classification, invocation
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.controlnet_image_processors import ControlField
from invokeai.app.invocations.denoise_latents import DenoiseLatentsInvocation, get_scheduler
from invokeai.app.invocations.fields import (
ConditioningField,
FieldDescriptions,
Input,
InputField,
LatentsField,
UIType,
)
from invokeai.app.invocations.model import UNetField
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.lora import LoRAModelRaw
from invokeai.backend.model_patcher import ModelPatcher
from invokeai.backend.stable_diffusion.diffusers_pipeline import ControlNetData, PipelineIntermediateState
from invokeai.backend.stable_diffusion.multi_diffusion_pipeline import (
MultiDiffusionPipeline,
MultiDiffusionRegionConditioning,
)
from invokeai.backend.stable_diffusion.schedulers.schedulers import SCHEDULER_NAME_VALUES
from invokeai.backend.tiles.tiles import (
calc_tiles_min_overlap,
)
from invokeai.backend.tiles.utils import TBLR
from invokeai.backend.util.devices import TorchDevice
def crop_controlnet_data(control_data: ControlNetData, latent_region: TBLR) -> ControlNetData:
"""Crop a ControlNetData object to a region."""
# Create a shallow copy of the control_data object.
control_data_copy = copy.copy(control_data)
# The ControlNet reference image is the only attribute that needs to be cropped.
control_data_copy.image_tensor = control_data.image_tensor[
:,
:,
latent_region.top * LATENT_SCALE_FACTOR : latent_region.bottom * LATENT_SCALE_FACTOR,
latent_region.left * LATENT_SCALE_FACTOR : latent_region.right * LATENT_SCALE_FACTOR,
]
return control_data_copy
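For orientation, `LATENT_SCALE_FACTOR` is 8 (the SD VAE downsamples each spatial dimension by 8), so the crop above maps latent coordinates to image space as in this minimal sketch with a made-up region:

```python
LATENT_SCALE_FACTOR = 8  # the SD VAE downsamples each spatial dimension by 8x

# A 128x128 latent-space tile at offset (64, 64)...
top, left, bottom, right = 64, 64, 192, 192

# ...selects the corresponding 1024x1024 crop of the image-space ControlNet tensor:
crop = tuple(v * LATENT_SCALE_FACTOR for v in (top, left, bottom, right))
print(crop)  # (512, 512, 1536, 1536)
```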
@invocation(
"tiled_multi_diffusion_denoise_latents",
title="Tiled Multi-Diffusion Denoise Latents",
tags=["upscale", "denoise"],
category="latents",
classification=Classification.Beta,
version="1.0.0",
)
class TiledMultiDiffusionDenoiseLatents(BaseInvocation):
"""Tiled Multi-Diffusion denoising.
This node automatically tiles the input image and is primarily intended for global refinement of images
in tiled upscaling workflows. Future Multi-Diffusion nodes should allow the user to specify custom regions with
different parameters for each region to harness the full power of Multi-Diffusion.
This node has a similar interface to the `DenoiseLatents` node, but it has a reduced feature set (no IP-Adapter,
T2I-Adapter, masking, etc.).
"""
positive_conditioning: ConditioningField = InputField(
description=FieldDescriptions.positive_cond, input=Input.Connection
)
negative_conditioning: ConditioningField = InputField(
description=FieldDescriptions.negative_cond, input=Input.Connection
)
noise: LatentsField | None = InputField(
default=None,
description=FieldDescriptions.noise,
input=Input.Connection,
)
latents: LatentsField | None = InputField(
default=None,
description=FieldDescriptions.latents,
input=Input.Connection,
)
tile_height: int = InputField(
default=1024, gt=0, multiple_of=LATENT_SCALE_FACTOR, description="Height of the tiles in image space."
)
tile_width: int = InputField(
default=1024, gt=0, multiple_of=LATENT_SCALE_FACTOR, description="Width of the tiles in image space."
)
tile_overlap: int = InputField(
default=32,
multiple_of=LATENT_SCALE_FACTOR,
gt=0,
description="The overlap between adjacent tiles in pixel space. (Of course, tile merging is applied in latent "
"space.) Tiles will be cropped during merging (if necessary) to ensure that they overlap by exactly this "
"amount.",
)
steps: int = InputField(default=18, gt=0, description=FieldDescriptions.steps)
cfg_scale: float | list[float] = InputField(default=6.0, description=FieldDescriptions.cfg_scale, title="CFG Scale")
denoising_start: float = InputField(
default=0.0,
ge=0,
le=1,
description=FieldDescriptions.denoising_start,
)
denoising_end: float = InputField(default=1.0, ge=0, le=1, description=FieldDescriptions.denoising_end)
scheduler: SCHEDULER_NAME_VALUES = InputField(
default="euler",
description=FieldDescriptions.scheduler,
ui_type=UIType.Scheduler,
)
unet: UNetField = InputField(
description=FieldDescriptions.unet,
input=Input.Connection,
title="UNet",
)
cfg_rescale_multiplier: float = InputField(
title="CFG Rescale Multiplier", default=0, ge=0, lt=1, description=FieldDescriptions.cfg_rescale_multiplier
)
control: ControlField | list[ControlField] | None = InputField(
default=None,
input=Input.Connection,
)
@field_validator("cfg_scale")
def ge_one(cls, v: list[float] | float) -> list[float] | float:
"""Validate that all cfg_scale values are >= 1"""
if isinstance(v, list):
for i in v:
if i < 1:
raise ValueError("cfg_scale must be greater than 1")
else:
if v < 1:
raise ValueError("cfg_scale must be greater than 1")
return v
@staticmethod
def create_pipeline(
unet: UNet2DConditionModel,
scheduler: SchedulerMixin,
) -> MultiDiffusionPipeline:
# TODO(ryand): Get rid of this FakeVae hack.
class FakeVae:
class FakeVaeConfig:
def __init__(self) -> None:
self.block_out_channels = [0]
def __init__(self) -> None:
self.config = FakeVae.FakeVaeConfig()
return MultiDiffusionPipeline(
vae=FakeVae(),
text_encoder=None,
tokenizer=None,
unet=unet,
scheduler=scheduler,
safety_checker=None,
feature_extractor=None,
requires_safety_checker=False,
)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
# Convert tile image-space dimensions to latent-space dimensions.
latent_tile_height = self.tile_height // LATENT_SCALE_FACTOR
latent_tile_width = self.tile_width // LATENT_SCALE_FACTOR
latent_tile_overlap = self.tile_overlap // LATENT_SCALE_FACTOR
seed, noise, latents = DenoiseLatentsInvocation.prepare_noise_and_latents(context, self.noise, self.latents)
_, _, latent_height, latent_width = latents.shape
# Calculate the tile locations to cover the latent-space image.
# TODO(ryand): In the future, we may want to revisit the tile overlap strategy. Things to consider:
# - How much overlap 'context' to provide for each denoising step.
# - How much overlap to use during merging/blending.
# - Should we 'jitter' the tile locations in each step so that the seams are in different places?
tiles = calc_tiles_min_overlap(
image_height=latent_height,
image_width=latent_width,
tile_height=latent_tile_height,
tile_width=latent_tile_width,
min_overlap=latent_tile_overlap,
)
# Get the unet's config so that we can pass the base to sd_step_callback().
unet_config = context.models.get_config(self.unet.unet.key)
def step_callback(state: PipelineIntermediateState) -> None:
context.util.sd_step_callback(state, unet_config.base)
# Prepare an iterator that yields the UNet's LoRA models and their weights.
def _lora_loader() -> Iterator[Tuple[LoRAModelRaw, float]]:
for lora in self.unet.loras:
lora_info = context.models.load(lora.lora)
assert isinstance(lora_info.model, LoRAModelRaw)
yield (lora_info.model, lora.weight)
del lora_info
# Load the UNet model.
unet_info = context.models.load(self.unet.unet)
with ExitStack() as exit_stack, unet_info as unet, ModelPatcher.apply_lora_unet(unet, _lora_loader()):
assert isinstance(unet, UNet2DConditionModel)
latents = latents.to(device=unet.device, dtype=unet.dtype)
if noise is not None:
noise = noise.to(device=unet.device, dtype=unet.dtype)
scheduler = get_scheduler(
context=context,
scheduler_info=self.unet.scheduler,
scheduler_name=self.scheduler,
seed=seed,
)
pipeline = self.create_pipeline(unet=unet, scheduler=scheduler)
# Prepare the prompt conditioning data. The same prompt conditioning is applied to all tiles.
conditioning_data = DenoiseLatentsInvocation.get_conditioning_data(
context=context,
positive_conditioning_field=self.positive_conditioning,
negative_conditioning_field=self.negative_conditioning,
device=unet.device,
dtype=unet.dtype,
latent_height=latent_tile_height,
latent_width=latent_tile_width,
cfg_scale=self.cfg_scale,
steps=self.steps,
cfg_rescale_multiplier=self.cfg_rescale_multiplier,
)
controlnet_data = DenoiseLatentsInvocation.prep_control_data(
context=context,
control_input=self.control,
latents_shape=list(latents.shape),
# do_classifier_free_guidance=(self.cfg_scale >= 1.0))
do_classifier_free_guidance=True,
exit_stack=exit_stack,
)
# Split the controlnet_data into tiles.
# controlnet_data_tiles[t][c] is the c'th control data for the t'th tile.
controlnet_data_tiles: list[list[ControlNetData]] = []
for tile in tiles:
tile_controlnet_data = [crop_controlnet_data(cn, tile.coords) for cn in controlnet_data or []]
controlnet_data_tiles.append(tile_controlnet_data)
# Prepare the MultiDiffusionRegionConditioning list.
multi_diffusion_conditioning: list[MultiDiffusionRegionConditioning] = []
for tile, tile_controlnet_data in zip(tiles, controlnet_data_tiles, strict=True):
multi_diffusion_conditioning.append(
MultiDiffusionRegionConditioning(
region=tile,
text_conditioning_data=conditioning_data,
control_data=tile_controlnet_data,
)
)
timesteps, init_timestep, scheduler_step_kwargs = DenoiseLatentsInvocation.init_scheduler(
scheduler,
device=unet.device,
steps=self.steps,
denoising_start=self.denoising_start,
denoising_end=self.denoising_end,
seed=seed,
)
# Run Multi-Diffusion denoising.
result_latents = pipeline.multi_diffusion_denoise(
multi_diffusion_conditioning=multi_diffusion_conditioning,
target_overlap=latent_tile_overlap,
latents=latents,
scheduler_step_kwargs=scheduler_step_kwargs,
noise=noise,
timesteps=timesteps,
init_timestep=init_timestep,
callback=step_callback,
)
result_latents = result_latents.to("cpu")
# TODO(ryand): I copied this from DenoiseLatentsInvocation. I'm not sure if it's actually important.
TorchDevice.empty_cache()
name = context.tensors.save(tensor=result_latents)
return LatentsOutput.build(latents_name=name, latents=result_latents, seed=None)
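For intuition about the tiling step above, here is a rough 1-D stand-in for `calc_tiles_min_overlap`; it assumes (without mirroring the actual implementation) that tiles are spread evenly so that neighbours overlap by at least the requested minimum:

```python
import math

def tile_starts_1d(image_len: int, tile_len: int, min_overlap: int) -> list[int]:
    # One tile suffices if it already covers the whole extent.
    if tile_len >= image_len:
        return [0]
    # Smallest tile count such that adjacent tiles overlap by >= min_overlap.
    stride = tile_len - min_overlap
    num_tiles = math.ceil((image_len - tile_len) / stride) + 1
    # Spread the tiles evenly; the last tile is pinned to the far edge.
    return [round(i * (image_len - tile_len) / (num_tiles - 1)) for i in range(num_tiles)]

# A 256-wide latent canvas with 128-wide tiles and a minimum overlap of 4:
print(tile_starts_1d(256, 128, 4))  # [0, 64, 128] -> actual overlaps of 64
```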


@ -1,23 +1,17 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654) & the InvokeAI Team
from pathlib import Path
from typing import Literal
import cv2
import numpy as np
import torch
from PIL import Image
from pydantic import ConfigDict
from invokeai.app.invocations.fields import ImageField
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import ImageField, InputField, WithBoard, WithMetadata
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.download_with_progress import download_with_progress_bar
from invokeai.backend.image_util.basicsr.rrdbnet_arch import RRDBNet
from invokeai.backend.image_util.realesrgan.realesrgan import RealESRGAN
from invokeai.backend.util.devices import choose_torch_device
from .baseinvocation import BaseInvocation, invocation
from .fields import InputField, WithBoard, WithMetadata
# TODO: Populate this from disk?
# TODO: Use model manager to load?
@ -35,9 +29,6 @@ ESRGAN_MODEL_URLS: dict[str, str] = {
"RealESRGAN_x2plus.pth": "https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.1/RealESRGAN_x2plus.pth",
}
if choose_torch_device() == torch.device("mps"):
from torch import mps
@invocation("esrgan", title="Upscale (RealESRGAN)", tags=["esrgan", "upscale"], category="esrgan", version="1.3.2")
class ESRGANInvocation(BaseInvocation, WithMetadata, WithBoard):
@ -56,7 +47,6 @@ class ESRGANInvocation(BaseInvocation, WithMetadata, WithBoard):
rrdbnet_model = None
netscale = None
esrgan_model_path = None
if self.model_name in [
"RealESRGAN_x4plus.pth",
@ -99,30 +89,25 @@ class ESRGANInvocation(BaseInvocation, WithMetadata, WithBoard):
context.logger.error(msg)
raise ValueError(msg)
esrgan_model_path = Path(context.config.get().models_path, f"core/upscaling/realesrgan/{self.model_name}")
# Downloads the ESRGAN model if it doesn't already exist
download_with_progress_bar(
name=self.model_name, url=ESRGAN_MODEL_URLS[self.model_name], dest_path=esrgan_model_path
loadnet = context.models.load_remote_model(
source=ESRGAN_MODEL_URLS[self.model_name],
)
upscaler = RealESRGAN(
scale=netscale,
model_path=esrgan_model_path,
model=rrdbnet_model,
half=False,
tile=self.tile_size,
)
with loadnet as loadnet_model:
upscaler = RealESRGAN(
scale=netscale,
loadnet=loadnet_model,
model=rrdbnet_model,
half=False,
tile=self.tile_size,
)
# prepare image - Real-ESRGAN uses cv2 internally, and cv2 uses BGR vs RGB for PIL
# TODO: This strips the alpha... is that okay?
cv2_image = cv2.cvtColor(np.array(image.convert("RGB")), cv2.COLOR_RGB2BGR)
upscaled_image = upscaler.upscale(cv2_image)
pil_image = Image.fromarray(cv2.cvtColor(upscaled_image, cv2.COLOR_BGR2RGB)).convert("RGBA")
# prepare image - Real-ESRGAN uses cv2 internally, and cv2 uses BGR vs RGB for PIL
# TODO: This strips the alpha... is that okay?
cv2_image = cv2.cvtColor(np.array(image.convert("RGB")), cv2.COLOR_RGB2BGR)
upscaled_image = upscaler.upscale(cv2_image)
torch.cuda.empty_cache()
if choose_torch_device() == torch.device("mps"):
mps.empty_cache()
pil_image = Image.fromarray(cv2.cvtColor(upscaled_image, cv2.COLOR_BGR2RGB)).convert("RGBA")
image_dto = context.images.save(image=pil_image)
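The BGR/RGB conversions in the hunk above are easy to invert by mistake; this is the round trip in isolation (PIL works in RGB, OpenCV in BGR), with the upscaler call elided:

```python
import cv2
import numpy as np
from PIL import Image

# Drop alpha before the model (Real-ESRGAN works on 3-channel BGR arrays)...
pil_in = Image.new("RGBA", (64, 64), (255, 0, 0, 255))
cv2_image = cv2.cvtColor(np.array(pil_in.convert("RGB")), cv2.COLOR_RGB2BGR)

# ... run the upscaler on cv2_image here ...

# ...then convert back to RGB and restore the alpha channel for PIL.
pil_out = Image.fromarray(cv2.cvtColor(cv2_image, cv2.COLOR_BGR2RGB)).convert("RGBA")
```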


@ -2,12 +2,11 @@ import sqlite3
import threading
from typing import Optional, cast
from invokeai.app.services.board_image_records.board_image_records_base import BoardImageRecordStorageBase
from invokeai.app.services.image_records.image_records_common import ImageRecord, deserialize_image_record
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from .board_image_records_base import BoardImageRecordStorageBase
class SqliteBoardImageRecordStorage(BoardImageRecordStorageBase):
_conn: sqlite3.Connection


@ -1,9 +1,8 @@
from typing import Optional
from invokeai.app.services.board_images.board_images_base import BoardImagesServiceABC
from invokeai.app.services.invoker import Invoker
from .board_images_base import BoardImagesServiceABC
class BoardImagesService(BoardImagesServiceABC):
__invoker: Invoker


@ -1,9 +1,8 @@
from abc import ABC, abstractmethod
from invokeai.app.services.board_records.board_records_common import BoardChanges, BoardRecord
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from .board_records_common import BoardChanges, BoardRecord
class BoardRecordStorageBase(ABC):
"""Low-level service responsible for interfacing with the board record store."""
@ -40,16 +39,12 @@ class BoardRecordStorageBase(ABC):
@abstractmethod
def get_many(
self,
offset: int = 0,
limit: int = 10,
self, offset: int = 0, limit: int = 10, include_archived: bool = False
) -> OffsetPaginatedResults[BoardRecord]:
"""Gets many board records."""
pass
@abstractmethod
def get_all(
self,
) -> list[BoardRecord]:
def get_all(self, include_archived: bool = False) -> list[BoardRecord]:
"""Gets all board records."""
pass


@ -22,6 +22,10 @@ class BoardRecord(BaseModelExcludeNull):
"""The updated timestamp of the image."""
cover_image_name: Optional[str] = Field(default=None, description="The name of the cover image of the board.")
"""The name of the cover image of the board."""
archived: bool = Field(description="Whether or not the board is archived.")
"""Whether or not the board is archived."""
is_private: Optional[bool] = Field(default=None, description="Whether the board is private.")
"""Whether the board is private."""
def deserialize_board_record(board_dict: dict) -> BoardRecord:
@ -35,6 +39,8 @@ def deserialize_board_record(board_dict: dict) -> BoardRecord:
created_at = board_dict.get("created_at", get_iso_timestamp())
updated_at = board_dict.get("updated_at", get_iso_timestamp())
deleted_at = board_dict.get("deleted_at", get_iso_timestamp())
archived = board_dict.get("archived", False)
is_private = board_dict.get("is_private", False)
return BoardRecord(
board_id=board_id,
@ -43,12 +49,15 @@ def deserialize_board_record(board_dict: dict) -> BoardRecord:
created_at=created_at,
updated_at=updated_at,
deleted_at=deleted_at,
archived=archived,
is_private=is_private,
)
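The `.get(...)` defaults above let rows written before the `archived` and `is_private` columns existed still deserialize cleanly. A standalone sketch of the same tolerant-read pattern, using a hypothetical model rather than `BoardRecord`:

```python
from typing import Any, Optional

from pydantic import BaseModel, Field

class Record(BaseModel):
    name: str
    archived: bool = Field(default=False)
    is_private: Optional[bool] = Field(default=None)

def deserialize(row: dict[str, Any]) -> Record:
    # Old rows may predate the new columns; .get() supplies safe defaults.
    return Record(
        name=row["name"],
        archived=row.get("archived", False),
        is_private=row.get("is_private", False),
    )

print(deserialize({"name": "my board"}))  # archived=False, is_private=False
```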
class BoardChanges(BaseModel, extra="forbid"):
board_name: Optional[str] = Field(default=None, description="The board's new name.")
cover_image_name: Optional[str] = Field(default=None, description="The name of the board's new cover image.")
archived: Optional[bool] = Field(default=None, description="Whether or not the board is archived")
class BoardRecordNotFoundException(Exception):


@ -2,12 +2,8 @@ import sqlite3
import threading
from typing import Union, cast
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.app.util.misc import uuid_string
from .board_records_base import BoardRecordStorageBase
from .board_records_common import (
from invokeai.app.services.board_records.board_records_base import BoardRecordStorageBase
from invokeai.app.services.board_records.board_records_common import (
BoardChanges,
BoardRecord,
BoardRecordDeleteException,
@ -15,6 +11,9 @@ from .board_records_common import (
BoardRecordSaveException,
deserialize_board_record,
)
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.app.util.misc import uuid_string
class SqliteBoardRecordStorage(BoardRecordStorageBase):
@ -125,6 +124,17 @@ class SqliteBoardRecordStorage(BoardRecordStorageBase):
(changes.cover_image_name, board_id),
)
# Change the archived status of a board
if changes.archived is not None:
self._cursor.execute(
"""--sql
UPDATE boards
SET archived = ?
WHERE board_id = ?;
""",
(changes.archived, board_id),
)
self._conn.commit()
except sqlite3.Error as e:
self._conn.rollback()
@ -134,35 +144,49 @@ class SqliteBoardRecordStorage(BoardRecordStorageBase):
return self.get(board_id)
def get_many(
self,
offset: int = 0,
limit: int = 10,
self, offset: int = 0, limit: int = 10, include_archived: bool = False
) -> OffsetPaginatedResults[BoardRecord]:
try:
self._lock.acquire()
# Get all the boards
self._cursor.execute(
"""--sql
# Build base query
base_query = """
SELECT *
FROM boards
{archived_filter}
ORDER BY created_at DESC
LIMIT ? OFFSET ?;
""",
(limit, offset),
)
"""
# Determine archived filter condition
if include_archived:
archived_filter = ""
else:
archived_filter = "WHERE archived = 0"
final_query = base_query.format(archived_filter=archived_filter)
# Execute query to fetch boards
self._cursor.execute(final_query, (limit, offset))
result = cast(list[sqlite3.Row], self._cursor.fetchall())
boards = [deserialize_board_record(dict(r)) for r in result]
# Get the total number of boards
self._cursor.execute(
"""--sql
SELECT COUNT(*)
FROM boards
WHERE 1=1;
# Determine count query
if include_archived:
count_query = """
SELECT COUNT(*)
FROM boards;
"""
)
else:
count_query = """
SELECT COUNT(*)
FROM boards
WHERE archived = 0;
"""
# Execute count query
self._cursor.execute(count_query)
count = cast(int, self._cursor.fetchone()[0])
@ -174,20 +198,25 @@ class SqliteBoardRecordStorage(BoardRecordStorageBase):
finally:
self._lock.release()
def get_all(
self,
) -> list[BoardRecord]:
def get_all(self, include_archived: bool = False) -> list[BoardRecord]:
try:
self._lock.acquire()
# Get all the boards
self._cursor.execute(
"""--sql
base_query = """
SELECT *
FROM boards
{archived_filter}
ORDER BY created_at DESC
"""
)
"""
if include_archived:
archived_filter = ""
else:
archived_filter = "WHERE archived = 0"
final_query = base_query.format(archived_filter=archived_filter)
self._cursor.execute(final_query)
result = cast(list[sqlite3.Row], self._cursor.fetchall())
boards = [deserialize_board_record(dict(r)) for r in result]
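Note that the `{archived_filter}` placeholder above is only ever filled with one of two hard-coded literals, so `str.format` does not open an injection path; user-supplied values still travel as bound parameters. A condensed standalone sketch of the pattern:

```python
import sqlite3

def fetch_boards(conn: sqlite3.Connection, include_archived: bool, limit: int, offset: int):
    # The filter is one of two fixed literals, never user input; limit and
    # offset are passed as bound parameters.
    archived_filter = "" if include_archived else "WHERE archived = 0"
    query = f"SELECT * FROM boards {archived_filter} ORDER BY created_at DESC LIMIT ? OFFSET ?;"
    return conn.execute(query, (limit, offset)).fetchall()
```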


@ -1,10 +1,9 @@
from abc import ABC, abstractmethod
from invokeai.app.services.board_records.board_records_common import BoardChanges
from invokeai.app.services.boards.boards_common import BoardDTO
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from .boards_common import BoardDTO
class BoardServiceABC(ABC):
"""High-level service for board management."""
@ -44,16 +43,12 @@ class BoardServiceABC(ABC):
@abstractmethod
def get_many(
self,
offset: int = 0,
limit: int = 10,
self, offset: int = 0, limit: int = 10, include_archived: bool = False
) -> OffsetPaginatedResults[BoardDTO]:
"""Gets many boards."""
pass
@abstractmethod
def get_all(
self,
) -> list[BoardDTO]:
def get_all(self, include_archived: bool = False) -> list[BoardDTO]:
"""Gets all boards."""
pass


@ -2,7 +2,7 @@ from typing import Optional
from pydantic import Field
from ..board_records.board_records_common import BoardRecord
from invokeai.app.services.board_records.board_records_common import BoardRecord
class BoardDTO(BoardRecord):


@ -1,11 +1,9 @@
from invokeai.app.services.board_records.board_records_common import BoardChanges
from invokeai.app.services.boards.boards_common import BoardDTO
from invokeai.app.services.boards.boards_base import BoardServiceABC
from invokeai.app.services.boards.boards_common import BoardDTO, board_record_to_dto
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
from .boards_base import BoardServiceABC
from .boards_common import board_record_to_dto
class BoardService(BoardServiceABC):
__invoker: Invoker
@ -48,8 +46,10 @@ class BoardService(BoardServiceABC):
def delete(self, board_id: str) -> None:
self.__invoker.services.board_records.delete(board_id)
def get_many(self, offset: int = 0, limit: int = 10) -> OffsetPaginatedResults[BoardDTO]:
board_records = self.__invoker.services.board_records.get_many(offset, limit)
def get_many(
self, offset: int = 0, limit: int = 10, include_archived: bool = False
) -> OffsetPaginatedResults[BoardDTO]:
board_records = self.__invoker.services.board_records.get_many(offset, limit, include_archived)
board_dtos = []
for r in board_records.items:
cover_image = self.__invoker.services.image_records.get_most_recent_image_for_board(r.board_id)
@ -63,8 +63,8 @@ class BoardService(BoardServiceABC):
return OffsetPaginatedResults[BoardDTO](items=board_dtos, offset=offset, limit=limit, total=len(board_dtos))
def get_all(self) -> list[BoardDTO]:
board_records = self.__invoker.services.board_records.get_all()
def get_all(self, include_archived: bool = False) -> list[BoardDTO]:
board_records = self.__invoker.services.board_records.get_all(include_archived)
board_dtos = []
for r in board_records:
cover_image = self.__invoker.services.image_records.get_most_recent_image_for_board(r.board_id)

View File

@ -4,6 +4,7 @@ from typing import Optional, Union
from zipfile import ZipFile
from invokeai.app.services.board_records.board_records_common import BoardRecordNotFoundException
from invokeai.app.services.bulk_download.bulk_download_base import BulkDownloadBase
from invokeai.app.services.bulk_download.bulk_download_common import (
DEFAULT_BULK_DOWNLOAD_ID,
BulkDownloadException,
@ -15,8 +16,6 @@ from invokeai.app.services.images.images_common import ImageDTO
from invokeai.app.services.invoker import Invoker
from invokeai.app.util.misc import uuid_string
from .bulk_download_base import BulkDownloadBase
class BulkDownloadService(BulkDownloadBase):
def start(self, invoker: Invoker) -> None:
@ -106,9 +105,7 @@ class BulkDownloadService(BulkDownloadBase):
if self._invoker:
assert bulk_download_id is not None
self._invoker.services.events.emit_bulk_download_started(
bulk_download_id=bulk_download_id,
bulk_download_item_id=bulk_download_item_id,
bulk_download_item_name=bulk_download_item_name,
bulk_download_id, bulk_download_item_id, bulk_download_item_name
)
def _signal_job_completed(
@ -118,10 +115,8 @@ class BulkDownloadService(BulkDownloadBase):
if self._invoker:
assert bulk_download_id is not None
assert bulk_download_item_name is not None
self._invoker.services.events.emit_bulk_download_completed(
bulk_download_id=bulk_download_id,
bulk_download_item_id=bulk_download_item_id,
bulk_download_item_name=bulk_download_item_name,
self._invoker.services.events.emit_bulk_download_complete(
bulk_download_id, bulk_download_item_id, bulk_download_item_name
)
def _signal_job_failed(
@ -131,11 +126,8 @@ class BulkDownloadService(BulkDownloadBase):
if self._invoker:
assert bulk_download_id is not None
assert exception is not None
self._invoker.services.events.emit_bulk_download_failed(
bulk_download_id=bulk_download_id,
bulk_download_item_id=bulk_download_item_id,
bulk_download_item_name=bulk_download_item_name,
error=str(exception),
self._invoker.services.events.emit_bulk_download_error(
bulk_download_id, bulk_download_item_id, bulk_download_item_name, str(exception)
)
def stop(self, *args, **kwargs):


@ -1,7 +1,6 @@
"""Init file for InvokeAI configure package."""
from invokeai.app.services.config.config_common import PagingArgumentParser
from .config_default import InvokeAIAppConfig, get_config
from invokeai.app.services.config.config_default import InvokeAIAppConfig, get_config
__all__ = ["InvokeAIAppConfig", "get_config", "PagingArgumentParser"]


@ -3,6 +3,8 @@
from __future__ import annotations
import copy
import locale
import os
import re
import shutil
@ -24,14 +26,13 @@ DB_FILE = Path("invokeai.db")
LEGACY_INIT_FILE = Path("invokeai.init")
DEFAULT_RAM_CACHE = 10.0
DEFAULT_VRAM_CACHE = 0.25
DEFAULT_CONVERT_CACHE = 20.0
DEVICE = Literal["auto", "cpu", "cuda", "cuda:1", "mps"]
PRECISION = Literal["auto", "float16", "bfloat16", "float32", "autocast"]
PRECISION = Literal["auto", "float16", "bfloat16", "float32"]
ATTENTION_TYPE = Literal["auto", "normal", "xformers", "sliced", "torch-sdp"]
ATTENTION_SLICE_SIZE = Literal["auto", "balanced", "max", 1, 2, 3, 4, 5, 6, 7, 8]
LOG_FORMAT = Literal["plain", "color", "syslog", "legacy"]
LOG_LEVEL = Literal["debug", "info", "warning", "error", "critical"]
CONFIG_SCHEMA_VERSION = "4.0.0"
CONFIG_SCHEMA_VERSION = "4.0.2"
def get_default_ram_cache_size() -> float:
@ -84,7 +85,8 @@ class InvokeAIAppConfig(BaseSettings):
log_tokenization: Enable logging of parsed prompt tokens.
patchmatch: Enable patchmatch inpaint code.
models_dir: Path to the models directory.
convert_cache_dir: Path to the converted models cache directory. When loading a non-diffusers model, it will be converted and stored on disk at this location.
convert_cache_dir: Path to the converted models cache directory (DEPRECATED, but do not delete because it is needed for migration from previous versions).
download_cache_dir: Path to the directory that contains dynamically downloaded models.
legacy_conf_dir: Path to directory of legacy checkpoint config files.
db_dir: Path to InvokeAI databases directory.
outputs_dir: Path to directory for outputs.
@ -100,17 +102,17 @@ class InvokeAIAppConfig(BaseSettings):
profiles_dir: Path to profiles output directory.
ram: Maximum memory amount used by memory model cache for rapid switching (GB).
vram: Amount of VRAM reserved for model storage (GB).
convert_cache: Maximum size of on-disk converted models cache (GB).
lazy_offload: Keep models in VRAM until their space is needed.
log_memory_usage: If True, a memory snapshot will be captured before and after every model cache operation, and the result will be logged (at debug level). There is a time cost to capturing the memory snapshots, so it is recommended to only enable this feature if you are actively inspecting the model cache's behaviour.
device: Preferred execution device. `auto` will choose the device depending on the hardware platform and the installed torch capabilities.<br>Valid values: `auto`, `cpu`, `cuda`, `cuda:1`, `mps`
precision: Floating point precision. `float16` will consume half the memory of `float32` but produce slightly lower-quality images. The `auto` setting will guess the proper precision based on your video card and operating system.<br>Valid values: `auto`, `float16`, `bfloat16`, `float32`, `autocast`
precision: Floating point precision. `float16` will consume half the memory of `float32` but produce slightly lower-quality images. The `auto` setting will guess the proper precision based on your video card and operating system.<br>Valid values: `auto`, `float16`, `bfloat16`, `float32`
sequential_guidance: Whether to calculate guidance in serial instead of in parallel, lowering memory requirements.
attention_type: Attention type.<br>Valid values: `auto`, `normal`, `xformers`, `sliced`, `torch-sdp`
attention_slice_size: Slice size, valid when attention_type=="sliced".<br>Valid values: `auto`, `balanced`, `max`, `1`, `2`, `3`, `4`, `5`, `6`, `7`, `8`
force_tiled_decode: Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty).
pil_compress_level: The compress_level setting of PIL.Image.save(), used for PNG encoding. All settings are lossless. 0 = no compression, 1 = fastest with slightly larger filesize, 9 = slowest with smallest filesize. 1 is typically the best setting.
max_queue_size: Maximum number of items in the session queue.
clear_queue_on_startup: Empties session queue on startup.
allow_nodes: List of nodes to allow. Omit to allow all.
deny_nodes: List of nodes to deny. Omit to deny none.
node_cache_size: How many cached nodes to keep in memory.
@ -145,7 +147,8 @@ class InvokeAIAppConfig(BaseSettings):
# PATHS
models_dir: Path = Field(default=Path("models"), description="Path to the models directory.")
convert_cache_dir: Path = Field(default=Path("models/.cache"), description="Path to the converted models cache directory. When loading a non-diffusers model, it will be converted and stored on disk at this location.")
convert_cache_dir: Path = Field(default=Path("models/.convert_cache"), description="Path to the converted models cache directory (DEPRECATED, but do not delete because it is needed for migration from previous versions).")
download_cache_dir: Path = Field(default=Path("models/.download_cache"), description="Path to the directory that contains dynamically downloaded models.")
legacy_conf_dir: Path = Field(default=Path("configs"), description="Path to directory of legacy checkpoint config files.")
db_dir: Path = Field(default=Path("databases"), description="Path to InvokeAI databases directory.")
outputs_dir: Path = Field(default=Path("outputs"), description="Path to directory for outputs.")
@ -166,9 +169,8 @@ class InvokeAIAppConfig(BaseSettings):
profiles_dir: Path = Field(default=Path("profiles"), description="Path to profiles output directory.")
# CACHE
ram: float = Field(default_factory=get_default_ram_cache_size, gt=0, description="Maximum memory amount used by memory model cache for rapid switching (GB).")
vram: float = Field(default=DEFAULT_VRAM_CACHE, ge=0, description="Amount of VRAM reserved for model storage (GB).")
convert_cache: float = Field(default=DEFAULT_CONVERT_CACHE, ge=0, description="Maximum size of on-disk converted models cache (GB).")
ram: float = Field(default_factory=get_default_ram_cache_size, gt=0, description="Maximum memory amount used by memory model cache for rapid switching (GB).")
vram: float = Field(default=DEFAULT_VRAM_CACHE, ge=0, description="Amount of VRAM reserved for model storage (GB).")
lazy_offload: bool = Field(default=True, description="Keep models in VRAM until their space is needed.")
log_memory_usage: bool = Field(default=False, description="If True, a memory snapshot will be captured before and after every model cache operation, and the result will be logged (at debug level). There is a time cost to capturing the memory snapshots, so it is recommended to only enable this feature if you are actively inspecting the model cache's behaviour.")
@ -183,6 +185,7 @@ class InvokeAIAppConfig(BaseSettings):
force_tiled_decode: bool = Field(default=False, description="Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty).")
pil_compress_level: int = Field(default=1, description="The compress_level setting of PIL.Image.save(), used for PNG encoding. All settings are lossless. 0 = no compression, 1 = fastest with slightly larger filesize, 9 = slowest with smallest filesize. 1 is typically the best setting.")
max_queue_size: int = Field(default=10000, gt=0, description="Maximum number of items in the session queue.")
clear_queue_on_startup: bool = Field(default=False, description="Empties session queue on startup.")
# NODES
allow_nodes: Optional[list[str]] = Field(default=None, description="List of nodes to allow. Omit to allow all.")
@ -302,6 +305,11 @@ class InvokeAIAppConfig(BaseSettings):
"""Path to the converted cache models directory, resolved to an absolute path.."""
return self._resolve(self.convert_cache_dir)
@property
def download_cache_path(self) -> Path:
"""Path to the downloaded models directory, resolved to an absolute path.."""
return self._resolve(self.download_cache_dir)
@property
def custom_nodes_path(self) -> Path:
"""Path to the custom nodes directory, resolved to an absolute path.."""
@ -317,11 +325,10 @@ class InvokeAIAppConfig(BaseSettings):
@staticmethod
def find_root() -> Path:
"""Choose the runtime root directory when not specified on command line or init file."""
venv = Path(os.environ.get("VIRTUAL_ENV") or ".")
if os.environ.get("INVOKEAI_ROOT"):
root = Path(os.environ["INVOKEAI_ROOT"])
elif any((venv.parent / x).exists() for x in [INIT_FILE, LEGACY_INIT_FILE]):
root = (venv.parent).resolve()
elif venv := os.environ.get("VIRTUAL_ENV", None):
root = Path(venv).parent.resolve()
else:
root = Path("~/invokeai").expanduser().resolve()
return root
@ -348,14 +355,14 @@ class DefaultInvokeAIAppConfig(InvokeAIAppConfig):
return (init_settings,)
def migrate_v3_config_dict(config_dict: dict[str, Any]) -> InvokeAIAppConfig:
"""Migrate a v3 config dictionary to a current config object.
def migrate_v3_config_dict(config_dict: dict[str, Any]) -> dict[str, Any]:
"""Migrate a v3 config dictionary to a v4.0.0.
Args:
config_dict: A dictionary of settings from a v3 config file.
Returns:
An instance of `InvokeAIAppConfig` with the migrated settings.
A config dict with the settings migrated to v4.0.0.
"""
parsed_config_dict: dict[str, Any] = {}
@ -370,23 +377,60 @@ def migrate_v3_config_dict(config_dict: dict[str, Any]) -> InvokeAIAppConfig:
# `max_vram_cache_size` was renamed to `vram` some time in v3, but both names were used
if k == "max_vram_cache_size" and "vram" not in category_dict:
parsed_config_dict["vram"] = v
# autocast was removed in v4.0.1
if k == "precision" and v == "autocast":
parsed_config_dict["precision"] = "auto"
if k == "conf_path":
parsed_config_dict["legacy_models_yaml_path"] = v
if k == "legacy_conf_dir":
# The old default for this was "configs/stable-diffusion". If the incoming config has that as the value, we won't set it.
# Else if the path ends in "stable-diffusion", we assume the parent is the new correct path.
# Else we do not attempt to migrate this setting
if v != "configs/stable-diffusion":
parsed_config_dict["legacy_conf_dir"] = v
# The old default for this was "configs/stable-diffusion" ("configs\stable-diffusion" on Windows).
if v == "configs/stable-diffusion" or v == "configs\\stable-diffusion":
# If the incoming config has the default value, skip
continue
elif Path(v).name == "stable-diffusion":
# Else if the path ends in "stable-diffusion", we assume the parent is the new correct path.
parsed_config_dict["legacy_conf_dir"] = str(Path(v).parent)
else:
# Else we do not attempt to migrate this setting
parsed_config_dict["legacy_conf_dir"] = v
elif k in InvokeAIAppConfig.model_fields:
# skip unknown fields
parsed_config_dict[k] = v
# When migrating the config file, we should not include currently-set environment variables.
config = DefaultInvokeAIAppConfig.model_validate(parsed_config_dict)
parsed_config_dict["schema_version"] = "4.0.0"
return parsed_config_dict
return config
def migrate_v4_0_0_to_4_0_1_config_dict(config_dict: dict[str, Any]) -> dict[str, Any]:
"""Migrate v4.0.0 config dictionary to a v4.0.1 config dictionary
Args:
config_dict: A dictionary of settings from a v4.0.0 config file.
Returns:
A config dict with the settings migrated to v4.0.1.
"""
parsed_config_dict: dict[str, Any] = copy.deepcopy(config_dict)
# precision "autocast" was replaced by "auto" in v4.0.1
if parsed_config_dict.get("precision") == "autocast":
parsed_config_dict["precision"] = "auto"
parsed_config_dict["schema_version"] = "4.0.1"
return parsed_config_dict
def migrate_v4_0_1_to_4_0_2_config_dict(config_dict: dict[str, Any]) -> dict[str, Any]:
"""Migrate v4.0.1 config dictionary to a v4.0.2 config dictionary.
Args:
config_dict: A dictionary of settings from a v4.0.1 config file.
Returns:
A config dict with the settings migrated to v4.0.2.
"""
parsed_config_dict: dict[str, Any] = copy.deepcopy(config_dict)
# convert_cache was removed in 4.0.2
parsed_config_dict.pop("convert_cache", None)
parsed_config_dict["schema_version"] = "4.0.2"
return parsed_config_dict
def load_and_migrate_config(config_path: Path) -> InvokeAIAppConfig:
@ -399,33 +443,41 @@ def load_and_migrate_config(config_path: Path) -> InvokeAIAppConfig:
An instance of `InvokeAIAppConfig` with the loaded and migrated settings.
"""
assert config_path.suffix == ".yaml"
with open(config_path) as file:
loaded_config_dict = yaml.safe_load(file)
with open(config_path, "rt", encoding=locale.getpreferredencoding()) as file:
loaded_config_dict: dict[str, Any] = yaml.safe_load(file)
assert isinstance(loaded_config_dict, dict)
migrated = False
if "InvokeAI" in loaded_config_dict:
# This is a v3 config file, attempt to migrate it
migrated = True
loaded_config_dict = migrate_v3_config_dict(loaded_config_dict) # pyright: ignore [reportUnknownArgumentType]
if loaded_config_dict["schema_version"] == "4.0.0":
migrated = True
loaded_config_dict = migrate_v4_0_0_to_4_0_1_config_dict(loaded_config_dict)
if loaded_config_dict["schema_version"] == "4.0.1":
migrated = True
loaded_config_dict = migrate_v4_0_1_to_4_0_2_config_dict(loaded_config_dict)
if migrated:
shutil.copy(config_path, config_path.with_suffix(".yaml.bak"))
try:
# loaded_config_dict could be the wrong shape, but we will catch all exceptions below
migrated_config = migrate_v3_config_dict(loaded_config_dict) # pyright: ignore [reportUnknownArgumentType]
# load and write without environment variables
migrated_config = DefaultInvokeAIAppConfig.model_validate(loaded_config_dict)
migrated_config.write_file(config_path)
except Exception as e:
shutil.copy(config_path.with_suffix(".yaml.bak"), config_path)
raise RuntimeError(f"Failed to load and migrate v3 config file {config_path}: {e}") from e
migrated_config.write_file(config_path)
return migrated_config
else:
# Attempt to load as a v4 config file
try:
# Meta is not included in the model fields, so we need to validate it separately
config = InvokeAIAppConfig.model_validate(loaded_config_dict)
assert (
config.schema_version == CONFIG_SCHEMA_VERSION
), f"Invalid schema version, expected {CONFIG_SCHEMA_VERSION}: {config.schema_version}"
return config
except Exception as e:
raise RuntimeError(f"Failed to load config file {config_path}: {e}") from e
try:
# Meta is not included in the model fields, so we need to validate it separately
config = InvokeAIAppConfig.model_validate(loaded_config_dict)
assert (
config.schema_version == CONFIG_SCHEMA_VERSION
), f"Invalid schema version, expected {CONFIG_SCHEMA_VERSION}: {config.schema_version}"
return config
except Exception as e:
raise RuntimeError(f"Failed to load config file {config_path}: {e}") from e
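The version checks above form a simple migration chain: each step is keyed on the schema version it migrates from and stamps the next version before returning. A generalized sketch with hypothetical no-op migrations, not the Invoke functions:

```python
from typing import Any, Callable

# Hypothetical registry: each entry migrates *from* the keyed version and
# stamps the dict with the next schema_version before returning it.
MIGRATIONS: dict[str, Callable[[dict[str, Any]], dict[str, Any]]] = {
    "4.0.0": lambda d: {**d, "schema_version": "4.0.1"},
    "4.0.1": lambda d: {**d, "schema_version": "4.0.2"},
}

def migrate_to_latest(config: dict[str, Any]) -> dict[str, Any]:
    # Apply the migration registered for the current version until none
    # matches, i.e. the dict has reached the newest schema.
    while (step := MIGRATIONS.get(config["schema_version"])) is not None:
        config = step(config)
    return config

print(migrate_to_latest({"schema_version": "4.0.0"}))  # -> schema_version 4.0.2
```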
@lru_cache(maxsize=1)


@ -1,10 +1,17 @@
"""Init file for download queue."""
from .download_base import DownloadJob, DownloadJobStatus, DownloadQueueServiceBase, UnknownJobIDException
from .download_default import DownloadQueueService, TqdmProgress
from invokeai.app.services.download.download_base import (
DownloadJob,
DownloadJobStatus,
DownloadQueueServiceBase,
MultiFileDownloadJob,
UnknownJobIDException,
)
from invokeai.app.services.download.download_default import DownloadQueueService, TqdmProgress
__all__ = [
"DownloadJob",
"MultiFileDownloadJob",
"DownloadQueueServiceBase",
"DownloadQueueService",
"TqdmProgress",

Some files were not shown because too many files have changed in this diff.