## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
## Related Tickets & Documents
<!--
For pull requests that relate to or close an issue, please include them
below.
For example, including the text "closes #1234" would connect the current
pull request to issue 1234, and when we merge the pull request, GitHub
will automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post-deployment tasks we need to perform?
This hook was re-rendering any time anything changed. It has been moved into a logical component, with its useEffects inside that component. This confines the impact of the re-renders to that tiny, always-null component.
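A minimal sketch of the pattern, for reference; `useGlobalEffects` and `GlobalEffectsComponent` are hypothetical names standing in for the actual hook and component, which are not named here:

```tsx
import { memo } from 'react';
// Hypothetical hook name; stands in for the hook this change relocated.
import { useGlobalEffects } from './useGlobalEffects';

// The hook's useEffects now run inside this always-null component, so the
// re-renders its dependencies trigger are confined here rather than to the
// large parent tree that previously called the hook directly.
const GlobalEffectsComponent = () => {
  useGlobalEffects();
  return null;
};

export default memo(GlobalEffectsComponent);
```

The component is mounted once near the app root (e.g. `<GlobalEffectsComponent />`) and never produces any DOM output.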
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because:
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No
## Description
The IP-Adapter memory footprint was not being calculated correctly.
I think we could put checks in place to catch this type of error in the
future, but for now I'm just fixing the bug.
## QA Instructions, Screenshots, Recordings
I tested manually in a debugger. There are 3 pathways for calculating
the model size. All were tested:
- From file
- From state_dict
- From model weights
## Added/updated tests?
- [ ] Yes
- [x] No : This would require the ability to run tests that depend on
models. I'm working on this in another branch, but it's not quite ready yet.
* add control net to useRecallParams
* got recall controlnets working
* fix metadata viewer controlnet
* fix type errors
* fix controlnet metadata viewer
* set control image and use correct processor type and node
* clean up logs
* recall processor using substring
* feat(ui): enable controlNet when recalling one
---------
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
- Current image number & total are displayed
- Left/right wrap around instead of stopping on first/last image (see the sketch after this list)
- Disable the left/right/number buttons when showing base layer
- Improved translations
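A minimal sketch of the wrap-around navigation described above, using illustrative names (`currentIndex`, `count`) rather than the actual store fields:

```ts
// Advance with wrap-around: after the last image, go back to the first.
const nextIndex = (currentIndex: number, count: number): number =>
  (currentIndex + 1) % count;

// Step back with wrap-around: before the first image, go to the last.
const prevIndex = (currentIndex: number, count: number): number =>
  (currentIndex - 1 + count) % count;

// e.g. with count = 4: nextIndex(3, 4) === 0, prevIndex(0, 4) === 3
```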
- Drag the end of an edge away from its handle to disconnect it
- Drop in empty space to delete the edge
- Drop on valid handle to reconnect it
- Update connection logic slightly to allow edge updates (see the sketch below)
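A sketch of the edge-update handling, following the standard reactflow pattern for reconnecting a dragged edge or deleting it when dropped in empty space; this illustrates the approach under those assumptions, not the exact InvokeAI implementation:

```tsx
import { useCallback, useRef } from 'react';
import ReactFlow, {
  Connection,
  Edge,
  updateEdge,
  useEdgesState,
  useNodesState,
} from 'reactflow';
import 'reactflow/dist/style.css';

const Flow = () => {
  const [nodes, , onNodesChange] = useNodesState([]);
  const [edges, setEdges, onEdgesChange] = useEdgesState([]);
  // Tracks whether the dragged edge end was dropped on a valid handle.
  const edgeUpdateSuccessful = useRef(true);

  const onEdgeUpdateStart = useCallback(() => {
    edgeUpdateSuccessful.current = false;
  }, []);

  // Dropped on a valid handle: reconnect the edge to the new handle.
  const onEdgeUpdate = useCallback(
    (oldEdge: Edge, newConnection: Connection) => {
      edgeUpdateSuccessful.current = true;
      setEdges((eds) => updateEdge(oldEdge, newConnection, eds));
    },
    [setEdges]
  );

  // Dropped in empty space: no reconnect happened, so delete the edge.
  const onEdgeUpdateEnd = useCallback(
    (_: MouseEvent | TouchEvent, edge: Edge) => {
      if (!edgeUpdateSuccessful.current) {
        setEdges((eds) => eds.filter((e) => e.id !== edge.id));
      }
      edgeUpdateSuccessful.current = true;
    },
    [setEdges]
  );

  return (
    <ReactFlow
      nodes={nodes}
      edges={edges}
      onNodesChange={onNodesChange}
      onEdgesChange={onEdgesChange}
      onEdgeUpdateStart={onEdgeUpdateStart}
      onEdgeUpdate={onEdgeUpdate}
      onEdgeUpdateEnd={onEdgeUpdateEnd}
      fitView
    />
  );
};

export default Flow;
```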
* feat(ui): add error handling for enqueueBatch route, remove sessions
This re-implements the handling for the session create/invoke errors, but for batches (see the sketch after this commit list). It also removes all references to the old sessions routes in the UI.
* feat(ui): improve canvas image error UI
* make canvas error state gray instead of red
---------
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
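A minimal sketch of how such a rejection can be surfaced with Redux Toolkit's listener middleware; the `api` import path and the `enqueueBatch` endpoint name are assumptions, not the app's actual identifiers:

```ts
import { createListenerMiddleware } from '@reduxjs/toolkit';
// Assumed path and endpoint name for the RTK Query slice that exposes the
// enqueue-batch mutation; substitute the real ones.
import { api } from './services/api';

export const listenerMiddleware = createListenerMiddleware();

// When the enqueueBatch mutation is rejected, surface the error instead of
// dropping it silently (mirrors the old session create/invoke handling).
listenerMiddleware.startListening({
  matcher: api.endpoints.enqueueBatch.matchRejected,
  effect: (action) => {
    // `action.payload` carries the baseQuery error (status + response body).
    console.error('enqueueBatch failed:', action.payload);
    // The real UI would raise a toast / set error state here.
  },
});
```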
## What type of PR is this? (check all applicable)
- [X] Feature
## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [X] No - this should go into release notes.
## Description
During installation, the installer will now ask the user whether they
wish to perform a manual or automatic configuration of InvokeAI. If they
choose automatic (the default), the install is performed without
running the TUI of the `invokeai-configure` script. Otherwise, the
console-based interface is activated as usual.
The installer also bumps the default model RAM cache size up to 7.5 GB,
which improves performance with SDXL models.