- Update backend metadata for t2i adapter
- Fix typo in `T2IAdapterInvocation`: `ip_adapter_model` -> `t2i_adapter_model`
- Update linear graphs to use t2i adapter
- Add client metadata recall for t2i adapter
- Fix bug with controlnet metadata recall - processor should be set to 'none' when recalling a control adapter
Control adapter logic/state/UI is now generalized to handle controlnet, ip_adapter and t2i_adapter. Other control adapter types can be added in the future.
TODO:
- Limit IP adapter to 1
- Add T2I adapter to linear graphs
- Fix autoprocess
- T2I metadata saving & recall
- Improve the control adapters UI
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
## Description
This PR adds support for slow unit tests that depend on models. It
includes:
- Documentation explaining the handling of fast vs. slow unit tests.
- Utilities to assist with writing tests that depend on models.
- A sample test that loads and runs an IP-Adapter model. This is far
from complete test coverage of IP-Adapter - it's just intended as a
first example of how to write tests with models.
**Suggestion for reviewers**: Start with docs/contributing/TESTS.md
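As a rough illustration of the fast vs. slow split described in TESTS.md, model-dependent tests can be gated behind an opt-in marker. This is a minimal sketch; the `slow` marker name and the `--run-slow` flag are assumptions for illustration, not necessarily the exact names used in the repo:

```python
# conftest.py (sketch): skip tests marked "slow" unless --run-slow is passed.
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--run-slow",
        action="store_true",
        default=False,
        help="run slow tests that load models",
    )

def pytest_collection_modifyitems(config, items):
    if config.getoption("--run-slow"):
        return  # run everything, including slow tests
    skip_slow = pytest.mark.skip(reason="needs --run-slow")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)

# In a test module: this only runs with `pytest --run-slow`.
@pytest.mark.slow
def test_ip_adapter_smoke():
    ...  # load the IP-Adapter model and run a forward pass
```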
## QA Instructions, Screenshots, Recordings
I've tested it all, but it would make sense for others to try running
both the fast tests and the slow tests.
## Added/updated tests?
- [x] Yes
- [ ] No
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
## Description
This PR adds detailed debug logging to the model cache to give more
visibility into the model cache's memory utilization. **This PR does not
make any functional changes to the model cache.**
Every time a model is moved from disk to CPU, or between CPU/CUDA, a log
like this is emitted:
```bash
[2023-10-03 15:17:20,599]::[InvokeAI]::DEBUG --> Moved model '/home/ryan/invokeai/models/.cache/63742ed45b499e55620c402d6df26a20:sdxl:main:unet' from cpu to cuda in 1.23s.
Estimated model size: 4.782 GB.
Process RAM (-4.722): 6.987GB -> 2.265GB
libc mmap allocated (-4.722): 6.030GB -> 1.308GB
libc arena used (-0.061): 0.402GB -> 0.341GB
libc arena free (+0.061): 0.006GB -> 0.067GB
libc total allocated (-4.722): 6.439GB -> 1.717GB
libc total used (-4.783): 6.433GB -> 1.649GB
VRAM (+4.881): 1.538GB -> 6.418GB
```
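The log reports the process RAM delta, glibc allocator stats, and the VRAM delta for each model move. As a hedged sketch of how the RAM and VRAM numbers could be gathered (the libc arena/mmap figures come from glibc's `mallinfo2` and are not shown here; this is illustrative, not the PR's actual implementation):

```python
import psutil
import torch

GB = 1024**3

def snapshot() -> tuple[float, float]:
    """Return (process RAM, CUDA VRAM) in GB."""
    ram = psutil.Process().memory_info().rss / GB
    vram = torch.cuda.memory_allocated() / GB if torch.cuda.is_available() else 0.0
    return ram, vram

ram_before, vram_before = snapshot()
# ... move the model between devices here ...
ram_after, vram_after = snapshot()
print(f"Process RAM ({ram_after - ram_before:+.3f}): {ram_before:.3f}GB -> {ram_after:.3f}GB")
print(f"VRAM ({vram_after - vram_before:+.3f}): {vram_before:.3f}GB -> {vram_after:.3f}GB")
```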
## Related Tickets & Documents
https://github.com/invoke-ai/InvokeAI/pull/4694 contains related fixes
to some known memory issues.
## QA Instructions, Screenshots, Recordings
Make sure debug logs are enabled; you should then see the new logs.
We should test each of the following environments:
- [x] Linux
- [x] Mac OS + MPS
- [x] Windows
## Added/updated tests?
- [x] Yes
- [ ] No
Added unit tests for the new utilities. Test coverage is still low for
the `ModelCache`, but not worse than before.
* Bump diffusers to 0.21.2.
* Add T2IAdapterInvocation boilerplate.
* Add T2I-Adapter model to model-management.
* (minor) Tidy prepare_control_image(...).
* Add logic to run the T2I-Adapter models at the start of the DenoiseLatentsInvocation.
* Add logic for applying T2I-Adapter weights and accumulating.
* Add T2IAdapter to MODEL_CLASSES map.
* yarn typegen
* Add model probes for T2I-Adapter models.
* Add all of the frontend boilerplate required to use T2I-Adapter in the nodes editor.
* Add T2IAdapterModel.convert_if_required(...).
* Fix errors in T2I-Adapter input image sizing logic.
* Fix bug with handling of multiple T2I-Adapters.
* black / flake8
* Fix typo
* yarn build
* Add num_channels param to prepare_control_image(...).
* Link to upstream diffusers bugfix PR that currently requires a workaround.
* feat: Add Color Map Preprocessor
Needed for the color T2I Adapter
* feat: Add Color Map Preprocessor to Linear UI
* Revert "feat: Add Color Map Preprocessor"
This reverts commit a1119a00bf.
* Revert "feat: Add Color Map Preprocessor to Linear UI"
This reverts commit bd8a9b82d8.
* Fix T2I-Adapter field rendering in workflow editor.
* yarn build, yarn typegen
---------
Co-authored-by: blessedcoolant <54517381+blessedcoolant@users.noreply.github.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
When the processor has an error while it has a queue item in progress, mark that item as failed.
This addresses processor errors leaving queue items stuck `in_progress`, which soft-locks the processor: the user must cancel the `in_progress` item before anything else will process.
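In rough terms, the fix amounts to failing the current queue item in the processor's error path. This is a hedged sketch; `session_queue`, `set_failed()`, and the surrounding names are illustrative, not the actual service API:

```python
# Inside the processor loop (sketch): never leave an item in_progress on error.
try:
    invoker.invoke(queue_item.session)
except Exception as e:
    logger.error(f"Error processing queue item {queue_item.item_id}: {e}")
    # Previously the item stayed in_progress here, soft-locking the queue.
    # Now it is marked failed so the processor can pop the next item.
    session_queue.set_failed(queue_item.item_id, error=str(e))
```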
Makes graph validation logic more rigorous, validating graphs when they are created as part of a session or batch.
A `validate_self()` method is added to the `Graph` model. It does all the validation that `is_valid()` did, plus a few extras:
- unique `node.id` values across graph
- node ids match their key in `Graph.nodes`
- recursively validate subgraphs
- validate all edges
- validate graph is acyclical
The new method is required because `is_valid()` just returned a boolean. That behaviour is retained, but `validate_self()` now raises appropriate exceptions for validation errors. These are then surfaced to the client.
The function is named `validate_self()` because pydantic reserves `validate()`.
There are two main places where graphs are created - in batches and in sessions.
Field validators are added to the `graph` field of each, calling the new validation logic (see the sketch below).
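Sketched in pydantic v2 syntax (the model name here is an illustrative stand-in, and the repo may use v1's `@validator` instead):

```python
from pydantic import BaseModel, field_validator
# Assumes `Graph` is imported from the app's graph module.

class SessionQueueItem(BaseModel):  # illustrative stand-in for the real models
    graph: Graph

    @field_validator("graph")
    def validate_graph(cls, v: Graph) -> Graph:
        # Raises on duplicate node ids, invalid edges, cycles, etc.,
        # so invalid graphs are rejected at creation time.
        v.validate_self()
        return v
```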
**Closes #4744**
In this issue, a batch is enqueued with an invalid graph. The output field is typed as optional while the input field is required. The field types themselves are not relevant - this change addresses the case where an invalid graph was created.
The mismatched types problem is not noticed until we attempt to invoke the graph, because the graph was never *fully* validated. An error is raised during the call to `graph_execution_state.next()` in `invoker.py`. This function prepares the edges and validates them, raising an exception due to the mismatched types.
This exception is caught by the session processor, but it doesn't handle this situation well - the graph is not marked as having an error and the queue item status is never changed. The queue item is therefore forever `in_progress`, so no new queue items are popped - the app won't do anything until the queue item is canceled manually.
This commit addresses this by preventing invalid graphs from being created in the first place, addressing a substantial number of fail cases.
Exposes the `compress_level` setting of `PIL.Image.save()`, used for PNG encoding. All levels are lossless; 0 is fastest with the largest file size, and 9 is slowest with the smallest file size.
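For reference, `compress_level` is passed through to Pillow's PNG encoder:

```python
from PIL import Image

img = Image.open("example.png")
# All PNG compress levels are lossless; only speed and file size differ.
# 0 = no compression (fastest, largest), 9 = maximum (slowest, smallest).
img.save("example_out.png", compress_level=1)
```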
Closes #4786
This event is fired when a dragged image is moved over the 'none' board. We weren't defaulting the image's `board_id` to 'none', so it was possible to drag a 'none' image onto 'none'.
Selections were not being `uniqBy()`'d, or were `uniqBy()`'d without a proper iteratee, resulting in duplicate images in the selection in certain situations.
Add a correct `uniqBy()` call to the reducer to prevent this in the future.