## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [X] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because it's probably not needed
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
These are more minor dependency updates that I was able to test without any
regressions. This will ensure we are up to date again.
The fixes are very minor and probably not noticeable in InvokeAI (at least
for diffusers), but it's still good to have them.
This is also to make sure that the RC is released with the latest packages,
so they get extended testing.
Greetings
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
<!--
A merge plan describes how this PR should be handled after it is
approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"
A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?
## What type of PR is this? (check all applicable)
- [x] Community Node Submission
## Description
- Adds BriaAI's new RMBG 1.4 model for background removal. In my testing it gives
far superior results compared to any other background removal method so far:
https://github.com/blessedcoolant/invoke_bria_rmbg
The stats service was logging error messages when attempting to retrieve stats for a graph that it wasn't tracking. This was rather noisy.
Instead of logging these errors within the service, we now just raise the error and let the consumer of the service decide whether or not to log it. Our current usage of the service suppresses these errors - we don't want to log anything to the console.
Note: With the improvements in the previous two commits, we shouldn't get these errors moving forward, but I still think this change is correct.
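For illustration, here is a minimal sketch of the raise-and-let-the-consumer-decide pattern. The names (`StatsNotFoundError`, `StatsService`, `log_stats`, `on_graph_complete`) are hypothetical, not the actual service code:

```python
# Hypothetical sketch of the pattern; not the actual InvokeAI service code.
class StatsNotFoundError(Exception):
    """Raised when stats are requested for a graph the service isn't tracking."""


class StatsService:
    def __init__(self) -> None:
        self._stats: dict[str, object] = {}

    def log_stats(self, graph_id: str) -> None:
        # Raise instead of logging; the consumer decides whether to log it.
        if graph_id not in self._stats:
            raise StatsNotFoundError(f"No stats found for graph {graph_id}")
        print(self._stats[graph_id])


# Our consumer currently suppresses the error - no console noise.
def on_graph_complete(stats: StatsService, graph_id: str) -> None:
    try:
        stats.log_stats(graph_id)
    except StatsNotFoundError:
        pass  # deliberately swallowed
```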
When an invocation is canceled, we consider its graph canceled. Log the graph's stats before resetting them; there is no reason not to log these stats.
We should also stop the profiler at this point, because the graph is finished. If we don't stop it manually, it will stop itself and write the profile to disk when it is next started, but the resulting profile will include more than just its target graph.
Now we get both stats and profiles for canceled graphs.
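A rough sketch of the intended ordering on cancellation, with hypothetical names (`log_stats`, `reset_stats`, `profiler.stop`), not the actual handler:

```python
# Hypothetical handler; illustrates the ordering only.
def on_invocation_canceled(graph_id: str, stats, profiler=None) -> None:
    # The graph is finished, so record what we have before clearing it.
    stats.log_stats(graph_id)
    stats.reset_stats(graph_id)
    # Stop the profiler now; if we let it stop itself on the next start(),
    # the profile written to disk would span more than this graph.
    if profiler is not None:
        profiler.stop()
```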
When an invocation errors, we clear the stats for the whole graph. Later on, we check the graph for errors and see the failed invocation, and we consider the graph failed. We then attempt to log the stats for the failed graph.
Except now the failed graph has no stats, and the stats service raises an error.
The user sees, in the terminal:
- An invocation error
- A stats error (scary!)
- No stats for the failed graph (uninformative!)
What the user should see:
- An invocation error
- Graph stats
The fix is simple - don't reset the graph stats when an invocation has an error.
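As a before/after sketch, again with hypothetical handler names rather than the actual code:

```python
# Hypothetical handler names; before vs. after.
def on_invocation_error_before(graph_id: str, stats) -> None:
    stats.reset_stats(graph_id)  # cleared too early: nothing left to log later


def on_invocation_error_after(graph_id: str, stats) -> None:
    # Keep the stats; the graph-level handler below will log (and then reset)
    # them once the whole graph is considered failed.
    pass


def on_graph_failed(graph_id: str, stats) -> None:
    stats.log_stats(graph_id)   # the user now sees graph stats after an error
    stats.reset_stats(graph_id)
```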
Hardcode the options in the dropdown; don't rely on translators to fill this in.
Also, add a number of missing languages (Azerbaijani, Finnish, Hungarian, Swedish, Turkish).
Closes #5647
The alpha values in the UI are `0-1` but the backend wants `0-255`.
Previously, this was handled in `parseFieldValue` when building the graph. In a recent release, field types were refactored and this broke the alpha handling.
The logic for handling alpha values is moved into `ColorFieldInputComponent`, and `parseFieldValue` now just does no value transformations.
Though it would be a minor change, I'm leaving this function in because I don't want to change the rest of the logic except when necessary.
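For reference, here is a hypothetical Python helper showing the kind of conversion involved; the actual logic lives in the TypeScript `ColorFieldInputComponent`, so this is only an illustration:

```python
# Hypothetical helper; the real conversion is done in the TypeScript frontend.
def ui_alpha_to_backend(alpha: float) -> int:
    """Map a UI alpha in [0, 1] to the backend's expected [0, 255] range."""
    return round(max(0.0, min(1.0, alpha)) * 255)


assert ui_alpha_to_backend(0.0) == 0
assert ui_alpha_to_backend(1.0) == 255
```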
Closes #5616
Turns out the OpenAPI schema definition for a pydantic field with a `Literal` type annotation is different depending on the number of options.
When there is a single value (e.g. `Literal["foo"]`), this results in a `const` schema object. The schema parser didn't know how to handle this and displayed a warning in the JS console.
This situation is now handled. When a `const` schema object is encountered, we interpret that as an `EnumField` with a single option.
I think this makes sense - if you had a truly constant value, you wouldn't make it a field, so a `const` must mean a dynamically generated enum that ended up with only a single option.
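A minimal reproduction of the schema difference, assuming pydantic v2 and using a made-up model (not one of InvokeAI's actual invocations), could look like this:

```python
# Minimal reproduction, assuming pydantic v2; model/field names are made up.
from typing import Literal

from pydantic import BaseModel


class ExampleInvocation(BaseModel):
    # Several options -> serialized as an "enum" schema object
    mode: Literal["a", "b"] = "a"
    # A single option -> serialized as a "const" schema object instead
    type: Literal["example"] = "example"


props = ExampleInvocation.model_json_schema()["properties"]
print(props["mode"])  # something like {'enum': ['a', 'b'], ...}
print(props["type"])  # something like {'const': 'example', ...}
```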
Currently translated at 40.6% (582 of 1433 strings)
translationBot(ui): update translation (Turkish)
Currently translated at 38.8% (557 of 1433 strings)
Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
- `ItemStorageMemory.get` now throws an `ItemNotFoundError` when the requested `item_id` is not found.
- Update docstrings in ABC and tests.
The new memory-backed item storage implemented the `get` method incorrectly, returning `None` if the item didn't exist.
The ABC typed `get` as returning `T`, while the SQLite implementation typed `get` as returning `Optional[T]`. The SQLite implementation was referenced when writing the memory implementation.
This mismatched typing violates the Liskov substitution principle, because the signature of `get` in the memory implementation is wider than the abstract class's definition. Using `pyright` in strict mode catches this.
In `invocation_stats_default`, this introduced an error. The `_prune_stats` method calls `get`, expecting it to throw if the item is not found. If the graph is no longer stored in the bounded item storage, `get` returns `None` and we end up calling `is_complete()` on `None`, causing the error.
Note: This error condition never arose in the SQLite implementation, because it parsed the item with pydantic before returning it, which would throw if the item was not found. It implicitly threw, while the memory implementation did not.
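A simplified sketch of the corrected behavior; the names follow the description above, but the body is illustrative rather than the actual implementation:

```python
# Simplified sketch; names follow the description, bodies are illustrative.
from typing import Generic, TypeVar

T = TypeVar("T")


class ItemNotFoundError(Exception):
    def __init__(self, item_id: str) -> None:
        super().__init__(f"Item with id {item_id} not found")


class ItemStorageMemory(Generic[T]):
    def __init__(self) -> None:
        self._items: dict[str, T] = {}

    def get(self, item_id: str) -> T:
        # Match the ABC: return T, never Optional[T]. Callers such as
        # _prune_stats rely on this raising rather than handing back None.
        item = self._items.get(item_id)
        if item is None:
            raise ItemNotFoundError(item_id)
        return item
```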
The `getIntermediatesCount` query is set to `refetchOnMountOrArgsChange`. The intention was that when the settings modal opens (i.e. mounts), the `getIntermediatesCount` query would be refetched. But it doesn't work - modals only mount once; there is no lazy rendering for them.
So we have to refetch imperatively as we open the modal.
Closes #5639
* Port the command-line tools to use model_manager2
1. Reimplement the following:
- invokeai-model-install
- invokeai-merge
- invokeai-ti
To avoid breaking the original model manager, the updated tools
have been renamed invokeai-model-install2 and invokeai-merge2. The
textual inversion training script should continue to work with
existing installations. The "starter" models now live in
`invokeai/configs/INITIAL_MODELS2.yaml`.
When the full model manager 2 is in place and working, I'll rename
these files and commands.
2. Add the `merge` route to the web API. This will merge two or three models,
resulting in a new one.
- Note that because the model installer selectively installs the `fp16` variant
of models (rather than both 16- and 32-bit versions as previously),
the diffusers merge script will choke on any huggingface diffusers models
that were downloaded with the new installer. Previously-downloaded models
should continue to merge correctly. I have a PR
upstream https://github.com/huggingface/diffusers/pull/6670 to fix
this.
3. (more important!)
During implementation of the CLI tools, I found and fixed a number of small
runtime bugs in the model_manager2 implementation:
- During model database migration, if a registered models file was
not found on disk, the migration would be aborted. Now the
offending model is skipped with a log warning.
- Caught and fixed a condition in which the installer would download the
entire diffusers repo when the user provided a single `.safetensors`
file URL.
- Caught and fixed a condition in which the installer would raise an
exception and stop the app when a request for an unknown model's metadata
was passed to Civitai. Now an error is logged and the installer continues.
- Replaced the LoWRA starter LoRA with FlatColor. The former has been removed
from Civitai.
* fix ruff issue
---------
Co-authored-by: Lincoln Stein <lstein@gmail.com>
## What type of PR is this? (check all applicable)
- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission
## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No
## Description
It seems we elected to convert checkpoints into .bin files when we set this
up. This no longer seems to corrupt them.
## Related Tickets & Documents
<!--
For pull requests that relate or close an issue, please include them
below.
For example having the text: "closes #1234" would connect the current
pull
request to issue 1234. And when we merge the pull request, Github will
automatically close the issue.
-->
- Related Issue #
- Closes #
## QA Instructions, Screenshots, Recordings
<!--
Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
<!--
A merge plan describes how this PR should be handled after it is
approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"
A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
## Added/updated tests?
- [ ] Yes
- [ ] No : _please replace this line with details on why tests
have not been included_
## [optional] Are there any post deployment tasks we need to perform?