## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
feat(ui): add translation strings for clear intermediates
## Related Tickets & Documents
- Closes #4851
## [optional] Are there any post-deployment tasks we need to perform?
@Millu this can go into 3.3.0
* UI for bulk downloading boards or groups of images
* placeholder route for bulk downloads that does nothing (a sketch of the stub follows this list)
* lint
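A minimal sketch of the do-nothing placeholder route, assuming a FastAPI router like the rest of the web API; the path, names, and response shape here are illustrative, not the actual implementation:
```python
# Hypothetical sketch of a placeholder bulk-download route (FastAPI assumed);
# it accepts the request and does nothing yet.
from fastapi import APIRouter

bulk_download_router = APIRouter(prefix="/api/v1/images", tags=["images"])

@bulk_download_router.post("/download")
async def download_images() -> dict[str, str]:
    # Placeholder: no download is queued or performed yet.
    return {"status": "not implemented"}
```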
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Description
@Millu pointed out this safetensors PR a few weeks ago, which claimed to
offer a performance benefit:
https://github.com/huggingface/safetensors/pull/362 . It was superseded
by https://github.com/huggingface/safetensors/pull/363 and included in
the latest [safetensors 0.4.0
release](https://github.com/huggingface/safetensors/releases/tag/v0.4.0).
Here are the results from my local performance comparison:
```
Before (0.3.1) / After (0.4.0)
sdxl:main:tokenizer from disk to cpu in 0.46s / 0.46s
sdxl:main:text_encoder from disk to cpu in 2.12s / 2.32s
embroidered_style_v1_sdxl.safetensors:sdxl:lora from disk to cpu in 0.67s / 0.36s
VoxelXL_v1.safetensors:sdxl:lora from disk to cpu in 1.64s / 0.60s
ryan_db_sdxl_epoch640.safetensors:sdxl:lora from disk to cpu in 2.46s / 1.40s
sdxl:main:tokenizer_2 from disk to cpu in 0.37s / 0.39s
sdxl:main:text_encoder_2 from disk to cpu in 3.78s / 4.70s
sdxl:main:unet from disk to cpu in 4.66s / 3.08s
sdxl:main:scheduler from disk to cpu in 0.34s / 0.33s
sdxl:main:vae from disk to cpu in 0.66s / 0.51s
TOTAL GRAPH EXECUTION TIME: 56.489s / 53.416s
```
The benefit was marginal on my system (maybe even within measurement
error), but I figured we might as well pull it.
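The upgrade is transparent to callers. For anyone who wants to reproduce the comparison, timing a single checkpoint load through safetensors' public `load_file` API looks like this (the file path is just a placeholder):
```python
# Minimal timing sketch for a safetensors checkpoint load; load_file is
# the public API that benefits from the 0.4.0 speedups.
import time
from safetensors.torch import load_file

start = time.perf_counter()
state_dict = load_file("VoxelXL_v1.safetensors")  # any local .safetensors file
print(f"loaded {len(state_dict)} tensors in {time.perf_counter() - start:.2f}s")
```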
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
## Description
This PR reduces the time it takes to load models from disk.
In my local testing, SDXL text_encoder_2 models saw the greatest
improvement:
- Before change, load time (disk to cpu): 14 secs
- After change, load time (disk to cpu): 4 secs
See the in-code documentation for an explanation of how this speedup is
achieved.
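For a rough sense of the mechanism, assuming the speedup comes from skipping torch's default parameter initialization when the weights are about to be overwritten by a checkpoint anyway (as proposed in the transformers issue linked below), here is a sketch; the names are illustrative, and the actual context manager lives in the code:
```python
# Sketch of skipping torch's default (expensive) weight initialization
# during module construction; safe when a checkpoint will overwrite the
# weights immediately afterward. Illustrative only.
from contextlib import contextmanager
import torch

@contextmanager
def skip_torch_weight_init():
    targets = [torch.nn.Linear, torch.nn.Conv1d, torch.nn.Conv2d, torch.nn.Conv3d]
    saved = [m.reset_parameters for m in targets]
    try:
        # Replace the reset_parameters hooks with no-ops.
        for m in targets:
            m.reset_parameters = lambda self: None
        yield
    finally:
        # Restore the original initializers.
        for m, fn in zip(targets, saved):
            m.reset_parameters = fn
```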
## Related Tickets & Documents
This change was previously proposed on the HF transformers repo, but did
not get any traction:
https://github.com/huggingface/transformers/issues/18505#issue-1330728188
## QA Instructions, Screenshots, Recordings
I don't expect any adverse effects, but the new context manager is
applied while loading **all** models, so it would make sense to exercise
everything.
## Added/updated tests?
- [x] Yes
- [ ] No
The canvas needs to be set to staging mode as soon as a canvas-destined batch is enqueued. If the batch is fully canceled before an image is generated, we need to remove that batch from the canvas `batchIds` watchlist, else the canvas gets stuck in staging mode with no way to exit.
The changes here allow the batch status to be tracked, and once a batch has all its items completed, we can remove it from the `batchIds` watchlist. The `batchIds` watchlist now accurately represents *incomplete* canvas batches, fixing this cause of the soft lock.
The UI will always re-fetch queue and batch status on receiving this event, so we may as well just include that data in the event and save the extra network roundtrips.
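A language-neutral sketch of the watchlist bookkeeping described above (the real logic lives in the UI state; the names here are illustrative):
```python
# Incomplete canvas batches; a non-empty set keeps the canvas in staging mode.
batch_ids: set[str] = set()

def on_batch_enqueued(batch_id: str) -> None:
    batch_ids.add(batch_id)  # canvas enters staging mode

def on_batch_status_changed(batch_id: str, pending: int, in_progress: int) -> None:
    # Once a batch has no pending or in-progress items (all completed,
    # canceled, or failed), drop it so the canvas can leave staging mode.
    if pending == 0 and in_progress == 0:
        batch_ids.discard(batch_id)
```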
## What type of PR is this? (check all applicable)
- [X] Feature
## Have you discussed this change with the InvokeAI team?
- [X] No, because: Non-controversial
## Have you updated all relevant documentation?
- [ ] Yes
- [X] N/A
## Description
This adds a list of T2I adapters to the “starter models” offered by the
TUI installer. None of the models is selected by default; this can be
done easily if requested. The models offered to the user are:
```
TencentARC/t2iadapter_canny_sd15v2
TencentARC/t2iadapter_sketch_sd15v2
TencentARC/t2iadapter_depth_sd15v2
TencentARC/t2iadapter_zoedepth_sd15v1
TencentARC/t2i-adapter-canny-sdxl-1.0
TencentARC/t2i-adapter-depth-zoe-sdxl-1.0
TencentARC/t2i-adapter-lineart-sdxl-1.0
TencentARC/t2i-adapter-sketch-sdxl-1.0
```
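Each of these is an ordinary Hugging Face Hub repo; for reference, any one of them can be fetched with the generic `huggingface_hub` API (this is not the installer's own code):
```python
# Fetch one of the listed starter repos directly from the Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("TencentARC/t2i-adapter-canny-sdxl-1.0")
print(f"downloaded to {local_dir}")
```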
## Related Tickets & Documents
PR #4612
## QA Instructions, Screenshots, Recordings
The revised installer has a new T2I-ADAPTERS tab that looks like this:
![IMG_0255](https://github.com/invoke-ai/InvokeAI/assets/111189/0e01b1f6-7191-49a1-ac63-2c913826d299)
## Added/updated tests?
- [ ] Yes
- [X] No: It would be good to have a suite of model download tests, but that is not set up yet.
- Update backend metadata for t2i adapter
- Fix typo in `T2IAdapterInvocation`: `ip_adapter_model` -> `t2i_adapter_model`
- Update linear graphs to use t2i adapter
- Add client metadata recall for t2i adapter
- Fix bug with controlnet metadata recall - processor should be set to 'none' when recalling a control adapter
Control adapter logic/state/UI is now generalized to hold controlnet, ip_adapter, and t2i_adapter. In the future, other control adapter types can be added.
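As an illustration of the generalized shape (the real state lives in the UI's store; the fields here are illustrative):
```python
# Illustrative sketch of a generalized control adapter record with a
# discriminating "type" field, so new adapter types slot in later.
from dataclasses import dataclass
from typing import Literal

@dataclass
class ControlAdapter:
    type: Literal["controlnet", "ip_adapter", "t2i_adapter"]
    model: str
    weight: float = 1.0
    is_enabled: bool = True
```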
TODO:
- Limit IP adapter to 1
- Add T2I adapter to linear graphs
- Fix autoprocess
- T2I metadata saving & recall
- Improve the control adapters UI
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
## Description
This PR adds support for slow unit tests that depend on models. It
includes:
- Documentation explaining the handling of fast vs. slow unit tests.
- Utilities to assist with writing tests that depend on models.
- A sample test that loads and runs an IP-Adapter model. This is far
from complete test coverage of IP-Adapter; it's just intended as a
first example of how to write tests with models (a rough sketch of the shape follows this list).
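A rough sketch of the shape such a test can take, with an assumed `slow` marker and a placeholder model path; see docs/contributing/TESTS.md for the project's actual conventions:
```python
# Hedged sketch of a model-dependent test; the marker name and model
# path are assumptions, not the project's real conventions.
from pathlib import Path
import pytest

MODEL_PATH = Path("~/invokeai/models/ip_adapter.safetensors").expanduser()  # placeholder

@pytest.mark.slow  # assumes a "slow" marker that is deselected by default
def test_ip_adapter_loads():
    if not MODEL_PATH.exists():
        pytest.skip("model weights not available locally")
    # ...load the model and assert on its outputs...
```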
**Suggestion for reviewers**: Start with docs/contributing/TESTS.md
## QA Instructions, Screenshots, Recordings
I've tested it all, but it would make sense for others to try running
both the fast tests and the slow tests.
## Added/updated tests?
- [x] Yes
- [ ] No