Compare commits

...

68 Commits

Author SHA1 Message Date
913ad69c34 fix typos 2023-12-20 17:02:54 +11:00
Sam a4f9bfc8f7 Update Dockerfile 2023-12-19 18:38:36 -05:00
Sam 9afdd0f4a8 Update Dockerfile 2023-12-19 18:38:36 -05:00
bee6ad1547 fix(pnpm): replace npm with pnpm in dockerfile 2023-12-19 18:38:36 -05:00
fa3f1b6e41 [Feat] reimport model config records after schema migration (#5281)
* add code to repopulate model config records after schema update

* reformat for ruff

* migrate model records using db cursor rather than the ModelRecordConfigService

* ruff fixes

* tweak exception reporting

* fix: build frontend in pypi-release workflow

This was missing, resulting in the 3.5.0rc1 having no frontend.

* fix: use node 18, set working directory

- Node 20 has a problem with `pnpm`; set it to Node 18
- Set the working directory for the frontend commands

* Don't copy extraneous paths into installer .zip

* feat(installer): delete frontend build after creating installer

This prevents an empty `dist/` from breaking the app on startup.

* feat: add python dist as release artifact, as input to enable publish to pypi

- The release workflow never runs automatically. It must be manually kicked off.
- The release workflow has an input. When running it from the GH actions UI, you will see a "Publish build on PyPi" prompt. If this value is "true", the workflow will upload the build to PyPI, releasing it. If this is anything else (e.g. "false", the default), the workflow will build but not upload to PyPI.
- The `dist/` folder (where the python package is built) is uploaded as a workflow artifact as a zip file. This can be downloaded and inspected. This allows "dry" runs of the workflow.
- The workflow job and some steps have been renamed to clarify what they do

* translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI

* freeze yaml migration logic at upgrade to 3.5

* moved migration code to migration_3

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
2023-12-19 17:01:47 -05:00
d0fa131010 (feat) updater installs from PyPi instead of GitHub releases (#5316)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [X] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No


## Description
The updater script now pulls from PyPI instead of GitHub releases (this is why
the RC packages have been having issues when updating through the launcher
script).
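
For context, a minimal sketch of the PyPI-side lookup (the real updater's `get_pypi_versions()` appears in the diff at the bottom of this comparison; error handling and version selection are omitted here):

```python
import requests

def get_invokeai_versions() -> list[str]:
    # Query the public PyPI JSON API rather than the GitHub releases API.
    data = requests.get("https://pypi.org/pypi/invokeai/json", timeout=10).json()
    # The "releases" mapping includes pre-releases such as "3.5.0rc1".
    return list(data["releases"])

print(get_invokeai_versions()[-3:])
```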

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [X] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-19 13:15:38 +11:00
2f438431bd (fix) update logic for installing specific version 2023-12-19 11:05:15 +11:00
bbeb5cb477 Merge branch 'main' into feat/updater_use_pypi 2023-12-19 10:09:03 +11:00
cd3111c324 fix ruff errors 2023-12-19 09:58:10 +11:00
16b7246412 (feat) updater installs from PyPi instead of GitHub releases 2023-12-19 09:30:40 +11:00
42be78d328 translationBot(ui): update translation (Italian)
Currently translated at 97.2% (1327 of 1365 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-12-19 07:20:14 +11:00
e469e24a58 Update model_probe to work with diffuser-format SD TI embeddings. (#5301)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

      
## Have you updated all relevant documentation?
- [x] Yes (N/A)
- [ ] No


## Description

This change enables the model probe to work with TI embeddings that have
the following state_dict structure:

```python
{
    "<any_key>": torch.Tensor(...), # where the tensor has shape (N, embedding_dim)
}
```
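
For illustration, a minimal sketch (not InvokeAI's actual probe code) of classifying such an embedding by the last dimension of its tensor, matching the `shape[-1]` change in the probe diff below:

```python
import torch

def guess_base_model(state_dict: dict) -> str:
    # A diffusers-format TI embedding stores one tensor of shape (N, embedding_dim)
    # under an arbitrary key; the embedding dim identifies the base model.
    token_dim = list(state_dict.values())[0].shape[-1]
    return {768: "sd-1", 1024: "sd-2"}.get(token_dim, "unknown")

embedding = {"emb_params": torch.zeros(2, 768)}  # hypothetical 2-token SD1 embedding
assert guess_base_model(embedding) == "sd-1"
```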

## QA Instructions, Screenshots, Recordings

I can't imagine an embedding format that would previously have passed
the model probe, and would now fail after this change. That being said,
I'll exercise a bunch of existing TIs before merging.

- [x] Exercise existing TI formats


## Added/updated tests?

- [ ] Yes
- [x] No : _We could really benefit from tests for all of the supported
TI formats... but I'm not taking on that project right now._
2023-12-18 10:01:04 -05:00
cb698ff1fb Update model_probe to work with diffuser-format SD TI embeddings. 2023-12-18 09:51:16 -05:00
0e738c4290 Tag model manager v2 api as unstable (#5311)
## What type of PR is this? (check all applicable)

- [X] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

As discussed with @psychedelicious , this PR changes the swagger label
on the model manager V2 routes to `model_manager_v2_unstable` in order
to warn community members that the API is liable to change.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-18 07:09:40 -05:00
09d1bc513d Merge branch 'main' into refactor/model-manager2/mark-api-experimental 2023-12-18 07:04:00 -05:00
aefa828237 Tiled upscaling - EvenSplit to use overlap in pixels instead of tile fraction (#5309)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
Change CalculateImageTilesEvenSplitInvocation to have an overlap in
pixels rather than as a percentage of the tile. This makes it easier to
have predictable blending of the seams as you have a known overlap size.
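
As a worked example of why a pixel overlap is more predictable, here is a small sketch of the even-split arithmetic (mirroring the `calc_tiles_even_split()` change diffed below; the real function also validates sizes and builds `Tile` objects):

```python
import math

LATENT_SCALE_FACTOR = 8

def tile_edges(image_width: int, num_tiles: int, overlap: int) -> list[tuple[int, int]]:
    # Pad the span by the shared overlap, then round the tile size down to a multiple of 8.
    tile_size = LATENT_SCALE_FACTOR * math.floor(
        ((image_width + overlap * (num_tiles - 1)) // num_tiles) / LATENT_SCALE_FACTOR
    )
    return [
        (i * (tile_size - overlap), min(i * (tile_size - overlap) + tile_size, image_width))
        for i in range(num_tiles)
    ]

print(tile_edges(1024, 2, 128))  # [(0, 576), (448, 1024)] -> a known 128 px seam
```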

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [x] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-17 21:13:45 -05:00
74ea592d02 tag model manager v2 api as unstable 2023-12-17 14:16:45 -05:00
457b0dfac0 Merge branch 'main' into tiled-upscaling-graph 2023-12-17 15:12:16 +00:00
96a717c4ba In CalculateImageTilesEvenSplitInvocation, overlap_fraction becomes just overlap, now specified in pixels rather than as a fraction of the tile size.
Update calc_tiles_even_split() with the same change, ensuring the overlap stays within the allowed size.

Update even_split tests
2023-12-17 15:10:50 +00:00
77b74264a8 Simplify docker compose setup (#5046)
## What type of PR is this? (check all applicable)

- [x] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes -
https://github.com/invoke-ai/InvokeAI/pull/5007#discussion_r1378792615
- [ ] No, because: 

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

Simplify Docker image creation and execution to a single script that
spins up the right service in the docker compose file.
## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Depends on #5007

## QA Instructions, Screenshots, Recordings
N/A
<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [x] No : same tests should work.

## [optional] Are there any post deployment tasks we need to perform?

Not to my knowledge.
2023-12-17 17:10:56 +11:00
351078e8aa Merge branch 'main' into simplify-docker-compose-setup 2023-12-17 17:07:55 +11:00
b8354bd1a4 Merge branch 'main' into tiled-upscaling-graph 2023-12-16 19:09:28 +00:00
3b944b8af6 fix: build frontend in pypi-release workflow (#5298)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

## Description

This was missing, resulting in the 3.5.0rc1 having no frontend.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Discord installer thread:
https://discord.com/channels/1020123559063990373/1149513695567810630/1185200427717898260
- Comments from here in the release chat:
https://discord.com/channels/1020123559063990373/1020123559831539744/1185004017521279007

## QA Instructions, Screenshots, Recordings

I've run this locally and it works (I commented out the final steps of
the workflow that do PyPI stuff to ensure I didn't accidentally deploy
something).

You can run the workflow locally with https://github.com/nektos/act.
Suggest using the `gh` CLI version; it's very easy to set up if you have
the GitHub CLI installed. Then you can run `gh act -W
.github/workflows/pypi-release.yml` to run the workflow locally in a
docker image.

I don't know whether this local action runner would actually release to PyPI -
as mentioned, I commented those steps out when testing - but it does
successfully do both frontend and backend builds.

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

This needs @lstein 's approval.

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## [optional] Are there any post deployment tasks we need to perform?

Cut an RC2
2023-12-16 10:40:36 -05:00
b811c037bd Merge branch 'main' into fix/pypi-release-frontend-build 2023-12-16 10:36:03 -05:00
5bf61382a4 feat: add python dist as release artifact, as input to enable publish to pypi
- The release workflow never runs automatically. It must be manually kicked off.
- The release workflow has an input. When running it from the GH actions UI, you will see a "Publish build on PyPi" prompt. If this value is "true", the workflow will upload the build to PyPI, releasing it. If this is anything else (e.g. "false", the default), the workflow will build but not upload to PyPI.
- The `dist/` folder (where the python package is built) is uploaded as a workflow artifact as a zip file. This can be downloaded and inspected. This allows "dry" runs of the workflow.
- The workflow job and some steps have been renamed to clarify what they do
2023-12-16 20:02:09 +11:00
0f1c5f382a feat(installer): delete frontend build after creating installer
This prevents an empty `dist/` from breaking the app on startup.
2023-12-16 19:39:29 +11:00
4af1695c60 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-12-16 13:10:47 +11:00
df9a903a50 fix(ui): do not cache VAE decode on linear
The VAE decode on linear graphs was getting cached. This caused some unexpected behaviour around image outputs.

For example, say you ran the exact same graph twice. The first time, you get an image written to disk and added to gallery. The second time, the VAE decode is cached and no image file is created. But, the UI still gets the graph complete event and selects the first image in the gallery. The second run does not add an image to the gallery.

There are probably edge cases related to this - the UI does not expect this to happen. I'm not sure how to handle it any better in the UI.

The solution is to not cache VAE decode on the linear graphs, ever. If you run a graph twice in linear, you expect two images.

This simple change disables the node cache for terminal VAE decode nodes in all linear graphs, ensuring you always get images. If the graph was fully cached, all images after the first will of course be created very quickly.
2023-12-16 12:37:49 +11:00
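
A hedged sketch of what the fix amounts to when a linear graph is assembled (node and field names are illustrative, not the exact frontend graph-builder output):

```python
# The terminal latents-to-image (VAE decode) node opts out of the node cache,
# so re-running an identical graph still writes a fresh image to disk.
graph = {
    "nodes": {
        "denoise": {"type": "denoise_latents", "use_cache": True},
        "l2i": {"type": "l2i", "is_intermediate": False, "use_cache": False},
    },
    "edges": [
        {
            "source": {"node_id": "denoise", "field": "latents"},
            "destination": {"node_id": "l2i", "field": "latents"},
        },
    ],
}
```
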
311be8f97d Merge branch 'main' into fix/pypi-release-frontend-build 2023-12-16 10:15:32 +11:00
3f970c8326 Don't copy extraneous paths into installer .zip 2023-12-15 11:27:21 -05:00
fc150acde5 [feat] Make model prober recognize yet another LoRA format (#5296)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

This adds a probe for the SDXL LoRA format found in the wild at
https://civitai.com/models/224641.
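
For reference, a sketch of the recognition rule this PR adds, distilled from the probe diff near the end of this comparison (a `lora_unet_*` key containing `time_emb_proj.lora_down` marks the format, and a token vector length of 1280 maps to SDXL):

```python
def is_civitai_224641_lora(state_dict: dict) -> bool:
    # The new probe branch keys off lora_unet_* entries containing
    # "time_emb_proj.lora_down", which the probe previously did not recognize.
    return any(
        key.startswith("lora_unet_") and "time_emb_proj.lora_down" in key
        for key in state_dict
    )
```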

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

See discord message at:
https://discord.com/channels/1020123559063990373/1149510134058471514/1184982133912113182

## QA Instructions, Screenshots, Recordings

Try installing the SDXL LoRA at the URL given above.
## Merge Plan

This can be merged when approved.
## Added/updated tests?

- [ ] Yes
- [X] No : we do not yet have a comprehensive suite of models to test
probing on.

## [optional] Are there any post deployment tasks we need to perform?
2023-12-15 09:49:51 -05:00
1615df3aa1 fix: use node 18, set working directory
- Node 20 has a problem with `pnpm`; set it to Node 18
- Set the working directory for the frontend commands
2023-12-16 00:32:31 +11:00
b2a8c45553 fix: build frontend in pypi-release workflow
This was missing, resulting in the 3.5.0rc1 having no frontend.
2023-12-15 23:56:31 +11:00
212dbaf9a2 fix comment 2023-12-15 00:25:27 -05:00
ac3cf48d7f make probe recognize lora format at https://civitai.com/models/224641 2023-12-15 00:25:27 -05:00
454f01e0c1 [feature] add ability to filter model listings by format (#5286)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

This minor change adds the ability to filter the model lists returned by
V2 of the model manager using the model file format (e.g. "checkpoint").
Just thought this would be a useful feature.
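
A hedged usage sketch of the new filter against the v2 routes diffed below (the host, port, and `/api` prefix are assumptions; `model_format` is matched exactly against the record's `format` column):

```python
import requests

resp = requests.get(
    "http://localhost:9090/api/v1/model/record/",  # assumed local InvokeAI instance
    params={"model_format": "checkpoint"},  # e.g. "checkpoint" or "diffusers"
)
for model in resp.json()["models"]:
    print(model["name"], model["format"])
```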

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

This can be merged when approved without any adverse effects.

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [X] No : minor feature - tested informally using the router API

## [optional] Are there any post deployment tasks we need to perform?
2023-12-15 00:03:01 -05:00
72dca55e44 Merge branch 'feat/model_manager/search-by-format' of github.com:invoke-ai/InvokeAI into feat/model_manager/search-by-format 2023-12-14 23:55:08 -05:00
264ea6d94d fix ruff errors 2023-12-14 23:54:59 -05:00
60e3e653fa Merge branch 'main' into feat/model_manager/search-by-format 2023-12-14 23:53:54 -05:00
082894c377 Adding Kapa Assistant to Docs (#5290)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description
This adds the Kapa assistant to our docs.
2023-12-15 09:47:40 +11:00
4b00f8fc82 Merge branch 'main' into Adding-Kapa-assistant-to-docs 2023-12-15 09:46:25 +11:00
6ea09ba0b6 feat(ui): workflow menu tweaks
- "Reset Workflow Editor" -> "New Workflow"
- "New Workflow" gets nodes icon & is no longer danger coloured
- When creating a new workflow, if the current workflow has unsaved changes, you get a dialog asking for confirmation. If the current workflow is saved, it immediately creates a new workflow.
- "Download Workflow" -> "Save to File"
- "Upload Workflow" -> "Load from File"
- Moved "Load from File" up 1 in the menu
2023-12-14 08:30:59 -05:00
296060db63 Add cpu and rocm profiles. Let invokeai-nvidia service be the default. 2023-12-13 23:23:43 -05:00
d1d8ee71fc Simplify docker compose setup 2023-12-13 23:23:43 -05:00
42c04db167 adding kapa widget to docs 2023-12-13 22:33:50 -05:00
b935768eeb Update mkdocs.yml 2023-12-13 22:28:47 -05:00
ea4ef042f3 Ruff fixes 2023-12-14 12:47:10 +11:00
18b2bcbbee Added Classification from baseinvocation 2023-12-14 12:47:10 +11:00
5ad88c7f86 Fixed classification 2023-12-14 12:47:10 +11:00
3b04fef31d Added classification 2023-12-14 12:47:10 +11:00
bec888923a Fix for ruff 2023-12-14 12:47:10 +11:00
c6235049c7 Add an unsharp mask node to core nodes
Unsharp mask is an image operation that, despite its name, sharpens an image. Like a Gaussian blur, it takes a radius and strength.
2023-12-14 12:47:10 +11:00
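
A standalone numpy/PIL sketch of the same arithmetic the node performs (the full `UnsharpMaskInvocation` appears in the diff below; this is only the core operation):

```python
import numpy
from PIL import Image, ImageFilter

def unsharp_mask(image: Image.Image, radius: float = 2.0, strength: float = 50.0) -> Image.Image:
    # Sharpen by adding back the difference between the image and its blurred copy.
    arr = numpy.asarray(image.convert("RGB"), dtype=numpy.float32) / 255
    blurred = numpy.asarray(
        image.convert("RGB").filter(ImageFilter.GaussianBlur(radius)), dtype=numpy.float32
    ) / 255
    sharpened = numpy.clip(arr + (arr - blurred) * (strength / 100.0), 0, 1)
    return Image.fromarray((sharpened * 255).astype("uint8"))
```
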
e10f6e8962 fix(nodes): mark CalculateImageTilesInvocation as beta
missed this when I added classification
2023-12-13 20:33:25 -05:00
77f04ff8d6 docs: add warning to developer install about database & main 2023-12-14 11:47:33 +11:00
461e474394 fix(nodes): fix embedded workflows with IDs
This model was a bit too strict and raised validation errors when workflows we expect to *not* have an ID (e.g. an embedded workflow) have one.

Now it strips unknown attributes, allowing those workflows to load.
2023-12-14 11:38:04 +11:00
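
A minimal sketch of why `extra="ignore"` fixes the load failure (illustrative fields; the real model is in the workflow records diff below):

```python
from pydantic import BaseModel, ConfigDict

class WorkflowWithoutID(BaseModel):
    name: str  # illustrative subset of the real model's fields
    model_config = ConfigDict(extra="ignore")  # was extra="forbid"

# With extra="forbid" this raised a ValidationError; now the stray id is stripped.
wf = WorkflowWithoutID.model_validate({"name": "my workflow", "id": "stray-id"})
print(wf.model_dump())  # {'name': 'my workflow'}
```
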
f0c70fe3f1 fix(db): add error handling for workflow migration
- Handle an image file not existing despite being in the database.
- Add a simple pydantic model that tests only for the existence of a workflow's version (sketched below).
- Check against this new model when migrating workflows, skipping if the workflow fails validation. If it succeeds, the frontend should be able to handle the workflow.
2023-12-14 10:16:56 +11:00
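
A sketch of that version-existence check (field names follow the `UnsafeWorkflowWithVersion` model added in the diff below):

```python
from pydantic import BaseModel, TypeAdapter, ValidationError

class WorkflowMeta(BaseModel):
    version: str

class UnsafeWorkflowWithVersion(BaseModel):
    meta: WorkflowMeta  # only the version is validated, not the whole workflow

validator = TypeAdapter(UnsafeWorkflowWithVersion)

def has_usable_version(workflow_json: str) -> bool:
    try:
        validator.validate_json(workflow_json)
        return True
    except ValidationError:
        return False

print(has_usable_version('{"meta": {"version": "1.0.0"}, "nodes": []}'))  # True
print(has_usable_version('{"nodes": []}'))  # False
```
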
442ac2b828 fix(ui): fix frontend workflow migration when node is missing version
This should default to "1.0.0" to match the behaviour of the backend.
2023-12-14 09:59:11 +11:00
bb986b97f3 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 99.8% (1363 of 1365 strings)

Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-12-13 17:11:45 -05:00
98655db57b translationBot(ui): update translation (Russian)
Currently translated at 98.1% (1340 of 1365 strings)

translationBot(ui): update translation (Russian)

Currently translated at 84.2% (1150 of 1365 strings)

translationBot(ui): update translation (Russian)

Currently translated at 83.1% (1135 of 1365 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2023-12-13 17:11:45 -05:00
8845894e83 translationBot(ui): update translation (Italian)
Currently translated at 97.0% (1325 of 1365 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-12-13 17:11:45 -05:00
937c7e957d add merge plan to PR template 2023-12-13 16:59:31 -05:00
569ae7c482 add ability to filter model listings by format 2023-12-13 15:59:21 -05:00
340957f920 Update torch to 2.1.1 and xformers to 0.0.23 2023-12-13 14:49:32 -05:00
076d9b05ea Update transformers to 4.36 and Accelerate to 0.25 2023-12-13 14:42:34 -05:00
612912a6c9 updated tests with a test for tile > image for calc_tiles_min_overlap() 2023-12-12 14:12:22 +00:00
bca2372280 updated comment 2023-12-12 14:02:28 +00:00
0b860582f0 remove unneeded if else 2023-12-12 14:00:06 +00:00
87ff380fe4 fix for calc_tiles_min_overlap when tile size is bigger than image size 2023-12-12 13:40:28 +00:00
52 changed files with 1612 additions and 297 deletions

View File

@@ -42,6 +42,21 @@ Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
<!--
A merge plan describes how this PR should be handled after it is approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is merged"
A merge plan is particularly important for large PRs or PRs that touch the
database in any way.
-->
## Added/updated tests?
- [ ] Yes

View File

@@ -21,16 +21,16 @@ jobs:
if: github.event.pull_request.draft == false
runs-on: ubuntu-22.04
steps:
- name: Setup Node 20
- name: Setup Node 18
uses: actions/setup-node@v4
with:
node-version: '20'
node-version: '18'
- name: Checkout
uses: actions/checkout@v4
- name: Setup pnpm
uses: pnpm/action-setup@v2
with:
version: 8
version: '8.12.1'
- name: Install dependencies
run: 'pnpm install --prefer-frozen-lockfile'
- name: Typescript

View File

@@ -1,13 +1,15 @@
name: PyPI Release
on:
push:
paths:
- 'invokeai/version/invokeai_version.py'
workflow_dispatch:
inputs:
publish_package:
description: 'Publish build on PyPi? [true/false]'
required: true
default: 'false'
jobs:
release:
build-and-release:
if: github.repository == 'invoke-ai/InvokeAI'
runs-on: ubuntu-22.04
env:
@@ -15,19 +17,43 @@ jobs:
TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
TWINE_NON_INTERACTIVE: 1
steps:
- name: checkout sources
uses: actions/checkout@v3
- name: Checkout
uses: actions/checkout@v4
- name: install deps
- name: Setup Node 18
uses: actions/setup-node@v4
with:
node-version: '18'
- name: Setup pnpm
uses: pnpm/action-setup@v2
with:
version: '8.12.1'
- name: Install frontend dependencies
run: pnpm install --prefer-frozen-lockfile
working-directory: invokeai/frontend/web
- name: Build frontend
run: pnpm run build
working-directory: invokeai/frontend/web
- name: Install python dependencies
run: pip install --upgrade build twine
- name: build package
- name: Build python package
run: python3 -m build
- name: check distribution
- name: Upload build as workflow artifact
uses: actions/upload-artifact@v4
with:
name: dist
path: dist
- name: Check distribution
run: twine check dist/*
- name: check PyPI versions
- name: Check PyPI versions
if: github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/heads/release/')
run: |
pip install --upgrade requests
@@ -36,6 +62,6 @@ jobs:
EXISTS=scripts.pypi_helper.local_on_pypi(); \
print(f'PACKAGE_EXISTS={EXISTS}')" >> $GITHUB_ENV
- name: upload package
if: env.PACKAGE_EXISTS == 'False' && env.TWINE_PASSWORD != ''
- name: Publish build on PyPi
if: env.PACKAGE_EXISTS == 'False' && env.TWINE_PASSWORD != '' && github.event.inputs.publish_package == 'true'
run: twine upload dist/*

View File

@@ -59,14 +59,16 @@ RUN --mount=type=cache,target=/root/.cache/pip \
# #### Build the Web UI ------------------------------------
FROM node:18 AS web-builder
FROM node:18-slim AS web-builder
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable
WORKDIR /build
COPY invokeai/frontend/web/ ./
RUN --mount=type=cache,target=/usr/lib/node_modules \
npm install --include dev
RUN --mount=type=cache,target=/usr/lib/node_modules \
yarn vite build
RUN --mount=type=cache,target=/pnpm/store \
pnpm install --frozen-lockfile
RUN pnpm run build
#### Runtime stage ---------------------------------------

View File

@@ -23,7 +23,7 @@ This is done via Docker Desktop preferences
1. Make a copy of `env.sample` and name it `.env` (`cp env.sample .env` (Mac/Linux) or `copy example.env .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
a. the desired location of the InvokeAI runtime directory, or
b. an existing, v3.0.0 compatible runtime directory.
1. `docker compose up`
1. Execute `run.sh`
The image will be built automatically if needed.
@@ -39,7 +39,7 @@ The Docker daemon on the system must be already set up to use the GPU. In case o
## Customize
Check the `.env.sample` file. It contains some environment variables for running in Docker. Copy it, name it `.env`, and fill it in with your own values. Next time you run `docker compose up`, your custom values will be used.
Check the `.env.sample` file. It contains some environment variables for running in Docker. Copy it, name it `.env`, and fill it in with your own values. Next time you run `run.sh`, your custom values will be used.
You can also set these values in `docker-compose.yml` directly, but `.env` will help avoid conflicts when code is updated.

View File

@@ -1,11 +0,0 @@
#!/usr/bin/env bash
set -e
build_args=""
[[ -f ".env" ]] && build_args=$(awk '$1 ~ /\=[^$]/ {print "--build-arg " $0 " "}' .env)
echo "docker compose build args:"
echo $build_args
docker compose build $build_args

View File

@@ -2,23 +2,8 @@
version: '3.8'
services:
invokeai:
x-invokeai: &invokeai
image: "local/invokeai:latest"
# edit below to run on a container runtime other than nvidia-container-runtime.
# not yet tested with rocm/AMD GPUs
# Comment out the "deploy" section to run on CPU only
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
# For AMD support, comment out the deploy section above and uncomment the devices section below:
#devices:
# - /dev/kfd:/dev/kfd
# - /dev/dri:/dev/dri
build:
context: ..
dockerfile: docker/Dockerfile
@@ -50,3 +35,27 @@ services:
# - |
# invokeai-model-install --yes --default-only --config_file ${INVOKEAI_ROOT}/config_custom.yaml
# invokeai-nodes-web --host 0.0.0.0
services:
invokeai-nvidia:
<<: *invokeai
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
invokeai-cpu:
<<: *invokeai
profiles:
- cpu
invokeai-rocm:
<<: *invokeai
devices:
- /dev/kfd:/dev/kfd
- /dev/dri:/dev/dri
profiles:
- rocm

View File

@@ -1,11 +1,28 @@
#!/usr/bin/env bash
set -e
# This script is provided for backwards compatibility with the old docker setup.
# it doesn't do much aside from wrapping the usual docker compose CLI.
run() {
local scriptdir=$(dirname "${BASH_SOURCE[0]}")
cd "$scriptdir" || exit 1
SCRIPTDIR=$(dirname "${BASH_SOURCE[0]}")
cd "$SCRIPTDIR" || exit 1
local build_args=""
local profile=""
docker compose up -d
docker compose logs -f
[[ -f ".env" ]] &&
build_args=$(awk '$1 ~ /=[^$]/ && $0 !~ /^#/ {print "--build-arg " $0 " "}' .env) &&
profile="$(awk -F '=' '/GPU_DRIVER/ {print $2}' .env)"
local service_name="invokeai-$profile"
printf "%s\n" "docker compose build args:"
printf "%s\n" "$build_args"
docker compose build $build_args
unset build_args
printf "%s\n" "starting service $service_name"
docker compose --profile "$profile" up -d "$service_name"
docker compose logs -f
}
run

View File

@@ -11,7 +11,7 @@ complex functionality.
InvokeAI Nodes can be found in the `invokeai/app/invocations` directory. These can be used as examples to create your own nodes.
New nodes should be added to a subfolder in `nodes` direction found at the root level of the InvokeAI installation location. Nodes added to this folder will be able to be used upon application startup.
New nodes should be added to a subfolder in the `nodes` directory found at the root level of the InvokeAI installation location. Nodes added to this folder will be imported upon application startup.
Example `nodes` subfolder structure:
```py

View File

@@ -293,6 +293,19 @@ manager, please follow these steps:
## Developer Install
!!! warning
InvokeAI uses a SQLite database. By running on `main`, you accept responsibility for your database. This
means making regular backups (especially before pulling) and/or fixing it yourself in the event that a
PR introduces a schema change.
If you don't need persistent backend storage, you can use an ephemeral in-memory database by setting
`use_memory_db: true` under `Path:` in your `invokeai.yaml` file.
If this is untenable, you should run the application via the official installer or a manual install of the
python package from pypi. These releases will not break your database.
If you have an interest in how InvokeAI works, or you would like to
add features or bugfixes, you are encouraged to install the source
code for InvokeAI. For this to work, you will need to install the
@@ -388,3 +401,5 @@ environment variable INVOKEAI_ROOT to point to the installation directory.
Note that if you run into problems with the Conda installation, the InvokeAI
staff will **not** be able to help you out. Caveat Emptor!
[dev-chat]: https://discord.com/channels/1020123559063990373/1049495067846524939

View File

@@ -0,0 +1,10 @@
document.addEventListener("DOMContentLoaded", function () {
var script = document.createElement("script");
script.src = "https://widget.kapa.ai/kapa-widget.bundle.js";
script.setAttribute("data-website-id", "b5973bb1-476b-451e-8cf4-98de86745a10");
script.setAttribute("data-project-name", "Invoke.AI");
script.setAttribute("data-project-color", "#11213C");
script.setAttribute("data-project-logo", "https://avatars.githubusercontent.com/u/113954515?s=280&v=4");
script.async = true;
document.head.appendChild(script);
});

View File

@@ -91,9 +91,11 @@ rm -rf InvokeAI-Installer
# copy content
mkdir InvokeAI-Installer
for f in templates lib *.txt *.reg; do
for f in templates *.txt *.reg; do
cp -r ${f} InvokeAI-Installer/
done
mkdir InvokeAI-Installer/lib
cp lib/*.py InvokeAI-Installer/lib
# Move the wheel
mv dist/*.whl InvokeAI-Installer/lib/
@@ -111,6 +113,6 @@ cp WinLongPathsEnabled.reg InvokeAI-Installer/
zip -r InvokeAI-installer-$VERSION.zip InvokeAI-Installer
# clean up
rm -rf InvokeAI-Installer tmp dist
rm -rf InvokeAI-Installer tmp dist ../invokeai/frontend/web/dist/
exit 0

View File

@@ -244,9 +244,9 @@ class InvokeAiInstance:
"numpy~=1.24.0", # choose versions that won't be uninstalled during phase 2
"urllib3~=1.26.0",
"requests~=2.28.0",
"torch==2.1.0",
"torch==2.1.1",
"torchmetrics==0.11.4",
"torchvision>=0.14.1",
"torchvision>=0.16.1",
"--force-reinstall",
"--find-links" if find_links is not None else None,
find_links,

View File

@@ -26,7 +26,7 @@ from invokeai.backend.model_manager.config import (
from ..dependencies import ApiDependencies
model_records_router = APIRouter(prefix="/v1/model/record", tags=["model_manager_v2"])
model_records_router = APIRouter(prefix="/v1/model/record", tags=["model_manager_v2_unstable"])
class ModelsList(BaseModel):
@@ -45,6 +45,9 @@ async def list_model_records(
base_models: Optional[List[BaseModelType]] = Query(default=None, description="Base models to include"),
model_type: Optional[ModelType] = Query(default=None, description="The type of model to get"),
model_name: Optional[str] = Query(default=None, description="Exact match on the name of the model"),
model_format: Optional[str] = Query(
default=None, description="Exact match on the format of the model (e.g. 'diffusers')"
),
) -> ModelsList:
"""Get a list of models."""
record_store = ApiDependencies.invoker.services.model_records
@@ -52,10 +55,14 @@ async def list_model_records(
if base_models:
for base_model in base_models:
found_models.extend(
record_store.search_by_attr(base_model=base_model, model_type=model_type, model_name=model_name)
record_store.search_by_attr(
base_model=base_model, model_type=model_type, model_name=model_name, model_format=model_format
)
)
else:
found_models.extend(record_store.search_by_attr(model_type=model_type, model_name=model_name))
found_models.extend(
record_store.search_by_attr(model_type=model_type, model_name=model_name, model_format=model_format)
)
return ModelsList(models=found_models)

View File

@@ -13,7 +13,15 @@ from invokeai.app.shared.fields import FieldDescriptions
from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
from invokeai.backend.image_util.safety_checker import SafetyChecker
from .baseinvocation import BaseInvocation, Input, InputField, InvocationContext, WithMetadata, invocation
from .baseinvocation import (
BaseInvocation,
Classification,
Input,
InputField,
InvocationContext,
WithMetadata,
invocation,
)
@invocation("show_image", title="Show Image", tags=["image"], category="image", version="1.0.0")
@@ -421,6 +429,64 @@ class ImageBlurInvocation(BaseInvocation, WithMetadata):
)
@invocation(
"unsharp_mask",
title="Unsharp Mask",
tags=["image", "unsharp_mask"],
category="image",
version="1.2.0",
classification=Classification.Beta,
)
class UnsharpMaskInvocation(BaseInvocation, WithMetadata):
"""Applies an unsharp mask filter to an image"""
image: ImageField = InputField(description="The image to use")
radius: float = InputField(gt=0, description="Unsharp mask radius", default=2)
strength: float = InputField(ge=0, description="Unsharp mask strength", default=50)
def pil_from_array(self, arr):
return Image.fromarray((arr * 255).astype("uint8"))
def array_from_pil(self, img):
return numpy.array(img) / 255
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
mode = image.mode
alpha_channel = image.getchannel("A") if mode == "RGBA" else None
image = image.convert("RGB")
image_blurred = self.array_from_pil(image.filter(ImageFilter.GaussianBlur(radius=self.radius)))
image = self.array_from_pil(image)
image += (image - image_blurred) * (self.strength / 100.0)
image = numpy.clip(image, 0, 1)
image = self.pil_from_array(image)
image = image.convert(mode)
# Make the image RGBA if we had a source alpha channel
if alpha_channel is not None:
image.putalpha(alpha_channel)
image_dto = context.services.images.create(
image=image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata,
workflow=context.workflow,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image.width,
height=image.height,
)
PIL_RESAMPLING_MODES = Literal[
"nearest",
"box",

View File

@@ -38,7 +38,14 @@ class CalculateImageTilesOutput(BaseInvocationOutput):
tiles: list[Tile] = OutputField(description="The tiles coordinates that cover a particular image shape.")
@invocation("calculate_image_tiles", title="Calculate Image Tiles", tags=["tiles"], category="tiles", version="1.0.0")
@invocation(
"calculate_image_tiles",
title="Calculate Image Tiles",
tags=["tiles"],
category="tiles",
version="1.0.0",
classification=Classification.Beta,
)
class CalculateImageTilesInvocation(BaseInvocation):
"""Calculate the coordinates and overlaps of tiles that cover a target image shape."""
@@ -70,7 +77,7 @@ class CalculateImageTilesInvocation(BaseInvocation):
title="Calculate Image Tiles Even Split",
tags=["tiles"],
category="tiles",
version="1.0.0",
version="1.1.0",
classification=Classification.Beta,
)
class CalculateImageTilesEvenSplitInvocation(BaseInvocation):
@@ -90,11 +97,11 @@ class CalculateImageTilesEvenSplitInvocation(BaseInvocation):
ge=1,
description="Number of tiles to divide image into on the y axis",
)
overlap_fraction: float = InputField(
default=0.25,
overlap: int = InputField(
default=128,
ge=0,
lt=1,
description="Overlap between adjacent tiles as a fraction of the tile's dimensions (0-1)",
multiple_of=8,
description="The overlap, in pixels, between adjacent tiles.",
)
def invoke(self, context: InvocationContext) -> CalculateImageTilesOutput:
@@ -103,7 +110,7 @@ class CalculateImageTilesEvenSplitInvocation(BaseInvocation):
image_width=self.image_width,
num_tiles_x=self.num_tiles_x,
num_tiles_y=self.num_tiles_y,
overlap_fraction=self.overlap_fraction,
overlap=self.overlap,
)
return CalculateImageTilesOutput(tiles=tiles)

View File

@@ -7,7 +7,7 @@ from abc import ABC, abstractmethod
from pathlib import Path
from typing import List, Optional, Union
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelType
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelFormat, ModelType
class DuplicateModelException(Exception):
@@ -106,6 +106,7 @@ class ModelRecordServiceBase(ABC):
model_name: Optional[str] = None,
base_model: Optional[BaseModelType] = None,
model_type: Optional[ModelType] = None,
model_format: Optional[ModelFormat] = None,
) -> List[AnyModelConfig]:
"""
Return models matching name, base and/or type.
@@ -113,6 +114,7 @@ class ModelRecordServiceBase(ABC):
:param model_name: Filter by name of model (optional)
:param base_model: Filter by base model (optional)
:param model_type: Filter by type of model (optional)
:param model_format: Filter by model format (e.g. "diffusers") (optional)
If none of the optional filters are passed, will return all
models in the database.

View File

@@ -49,6 +49,7 @@ from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
ModelConfigFactory,
ModelFormat,
ModelType,
)
@@ -225,6 +226,7 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
model_name: Optional[str] = None,
base_model: Optional[BaseModelType] = None,
model_type: Optional[ModelType] = None,
model_format: Optional[ModelFormat] = None,
) -> List[AnyModelConfig]:
"""
Return models matching name, base and/or type.
@@ -232,6 +234,7 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
:param model_name: Filter by name of model (optional)
:param base_model: Filter by base model (optional)
:param model_type: Filter by type of model (optional)
:param model_format: Filter by model format (e.g. "diffusers") (optional)
If none of the optional filters are passed, will return all
models in the database.
@@ -248,6 +251,9 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
if model_type:
where_clause.append("type=?")
bindings.append(model_type)
if model_format:
where_clause.append("format=?")
bindings.append(model_format)
where = f"WHERE {' AND '.join(where_clause)}" if where_clause else ""
with self._db.lock:
self._cursor.execute(

View File

@@ -5,6 +5,7 @@ from invokeai.app.services.image_files.image_files_base import ImageFileStorageB
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_1 import build_migration_1
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_2 import build_migration_2
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_3 import build_migration_3
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_impl import SqliteMigrator
@@ -27,6 +28,7 @@ def init_db(config: InvokeAIAppConfig, logger: Logger, image_files: ImageFileSto
migrator = SqliteMigrator(db=db)
migrator.register_migration(build_migration_1())
migrator.register_migration(build_migration_2(image_files=image_files, logger=logger))
migrator.register_migration(build_migration_3())
migrator.run_migrations()
return db

View File

@@ -1,10 +1,17 @@
import sqlite3
from logging import Logger
from pydantic import ValidationError
from tqdm import tqdm
from invokeai.app.services.image_files.image_files_base import ImageFileStorageBase
from invokeai.app.services.image_files.image_files_common import ImageFileNotFoundException
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration
from invokeai.app.services.workflow_records.workflow_records_common import (
UnsafeWorkflowWithVersionValidator,
)
from .util.migrate_yaml_config_1 import MigrateModelYamlToDb1
class Migration2Callback:
@@ -19,6 +26,7 @@ class Migration2Callback:
self._add_workflow_library(cursor)
self._drop_model_manager_metadata(cursor)
self._recreate_model_config(cursor)
self._migrate_model_config_records(cursor)
self._migrate_embedded_workflows(cursor)
def _add_images_has_workflow(self, cursor: sqlite3.Cursor) -> None:
@@ -126,6 +134,11 @@ class Migration2Callback:
"""
)
def _migrate_model_config_records(self, cursor: sqlite3.Cursor) -> None:
"""After updating the model config table, we repopulate it."""
model_record_migrator = MigrateModelYamlToDb1(cursor)
model_record_migrator.migrate()
def _migrate_embedded_workflows(self, cursor: sqlite3.Cursor) -> None:
"""
In the v3.5.0 release, InvokeAI changed how it handles embedded workflows. The `images` table in
@@ -134,7 +147,7 @@
This migrate callback checks each image for the presence of an embedded workflow, then updates its entry
in the database accordingly.
"""
# Get the total number of images and chunk it into pages
# Get all image names
cursor.execute("SELECT image_name FROM images")
image_names: list[str] = [image[0] for image in cursor.fetchall()]
total_image_names = len(image_names)
@@ -149,8 +162,17 @@
pbar = tqdm(image_names)
for idx, image_name in enumerate(pbar):
pbar.set_description(f"Checking image {idx + 1}/{total_image_names} for workflow")
pil_image = self._image_files.get(image_name)
try:
pil_image = self._image_files.get(image_name)
except ImageFileNotFoundException:
self._logger.warning(f"Image {image_name} not found, skipping")
continue
if "invokeai_workflow" in pil_image.info:
try:
UnsafeWorkflowWithVersionValidator.validate_json(pil_image.info.get("invokeai_workflow", ""))
except ValidationError:
self._logger.warning(f"Image {image_name} has invalid embedded workflow, skipping")
continue
to_migrate.append((True, image_name))
self._logger.info(f"Adding {len(to_migrate)} embedded workflows to database")

View File

@@ -0,0 +1,75 @@
import sqlite3
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration
from .util.migrate_yaml_config_1 import MigrateModelYamlToDb1
class Migration3Callback:
def __init__(self) -> None:
pass
def __call__(self, cursor: sqlite3.Cursor) -> None:
self._drop_model_manager_metadata(cursor)
self._recreate_model_config(cursor)
self._migrate_model_config_records(cursor)
def _drop_model_manager_metadata(self, cursor: sqlite3.Cursor) -> None:
"""Drops the `model_manager_metadata` table."""
cursor.execute("DROP TABLE IF EXISTS model_manager_metadata;")
def _recreate_model_config(self, cursor: sqlite3.Cursor) -> None:
"""
Drops the `model_config` table, recreating it.
In 3.4.0, this table used explicit columns but was changed to use json_extract in 3.5.0.
Because this table is not used in production, we are able to simply drop it and recreate it.
"""
cursor.execute("DROP TABLE IF EXISTS model_config;")
cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS model_config (
id TEXT NOT NULL PRIMARY KEY,
-- The next 3 fields are enums in python, unrestricted string here
base TEXT GENERATED ALWAYS as (json_extract(config, '$.base')) VIRTUAL NOT NULL,
type TEXT GENERATED ALWAYS as (json_extract(config, '$.type')) VIRTUAL NOT NULL,
name TEXT GENERATED ALWAYS as (json_extract(config, '$.name')) VIRTUAL NOT NULL,
path TEXT GENERATED ALWAYS as (json_extract(config, '$.path')) VIRTUAL NOT NULL,
format TEXT GENERATED ALWAYS as (json_extract(config, '$.format')) VIRTUAL NOT NULL,
original_hash TEXT, -- could be null
-- Serialized JSON representation of the whole config object,
-- which will contain additional fields from subclasses
config TEXT NOT NULL,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- unique constraint on combo of name, base and type
UNIQUE(name, base, type)
);
"""
)
def _migrate_model_config_records(self, cursor: sqlite3.Cursor) -> None:
"""After updating the model config table, we repopulate it."""
model_record_migrator = MigrateModelYamlToDb1(cursor)
model_record_migrator.migrate()
def build_migration_3() -> Migration:
"""
Build the migration from database version 2 to 3.
This migration does the following:
- Drops the `model_config` table, recreating it
- Migrates data from `models.yaml` into the `model_config` table
"""
migration_3 = Migration(
from_version=2,
to_version=3,
callback=Migration3Callback(),
)
return migration_3

View File

@@ -1,8 +1,12 @@
# Copyright (c) 2023 Lincoln D. Stein
"""Migrate from the InvokeAI v2 models.yaml format to the v3 sqlite format."""
import json
import sqlite3
from hashlib import sha1
from logging import Logger
from pathlib import Path
from typing import Optional
from omegaconf import DictConfig, OmegaConf
from pydantic import TypeAdapter
@@ -10,13 +14,12 @@ from pydantic import TypeAdapter
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.model_records import (
DuplicateModelException,
ModelRecordServiceSQL,
UnknownModelException,
)
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
ModelConfigFactory,
ModelType,
)
from invokeai.backend.model_manager.hash import FastModelHash
@@ -25,9 +28,9 @@ from invokeai.backend.util.logging import InvokeAILogger
ModelsValidator = TypeAdapter(AnyModelConfig)
class MigrateModelYamlToDb:
class MigrateModelYamlToDb1:
"""
Migrate the InvokeAI models.yaml format (VERSION 3.0.0) to SQL3 database format (VERSION 3.2.0)
Migrate the InvokeAI models.yaml format (VERSION 3.0.0) to SQL3 database format (VERSION 3.5.0).
The class has one externally useful method, migrate(), which scans the
current models.yaml file and imports all its entries into invokeai.db.
@@ -41,17 +44,13 @@
config: InvokeAIAppConfig
logger: Logger
cursor: sqlite3.Cursor
def __init__(self) -> None:
def __init__(self, cursor: sqlite3.Cursor = None) -> None:
self.config = InvokeAIAppConfig.get_config()
self.config.parse_args()
self.logger = InvokeAILogger.get_logger()
def get_db(self) -> ModelRecordServiceSQL:
"""Fetch the sqlite3 database for this installation."""
db_path = None if self.config.use_memory_db else self.config.db_path
db = SqliteDatabase(db_path=db_path, logger=self.logger, verbose=self.config.log_sql)
return ModelRecordServiceSQL(db)
self.cursor = cursor
def get_yaml(self) -> DictConfig:
"""Fetch the models.yaml DictConfig for this installation."""
@@ -62,8 +61,10 @@
def migrate(self) -> None:
"""Do the migration from models.yaml to invokeai.db."""
db = self.get_db()
yaml = self.get_yaml()
try:
yaml = self.get_yaml()
except OSError:
return
for model_key, stanza in yaml.items():
if model_key == "__metadata__":
@@ -86,22 +87,62 @@
new_config: AnyModelConfig = ModelsValidator.validate_python(stanza) # type: ignore # see https://github.com/pydantic/pydantic/discussions/7094
try:
if original_record := db.search_by_path(stanza.path):
key = original_record[0].key
if original_record := self._search_by_path(stanza.path):
key = original_record.key
self.logger.info(f"Updating model {model_name} with information from models.yaml using key {key}")
db.update_model(key, new_config)
self._update_model(key, new_config)
else:
self.logger.info(f"Adding model {model_name} with key {model_key}")
db.add_model(new_key, new_config)
self._add_model(new_key, new_config)
except DuplicateModelException:
self.logger.warning(f"Model {model_name} is already in the database")
except UnknownModelException:
self.logger.warning(f"Model at {stanza.path} could not be found in database")
def _search_by_path(self, path: Path) -> Optional[AnyModelConfig]:
self.cursor.execute(
"""--sql
SELECT config FROM model_config
WHERE path=?;
""",
(str(path),),
)
results = [ModelConfigFactory.make_config(json.loads(x[0])) for x in self.cursor.fetchall()]
return results[0] if results else None
def main():
MigrateModelYamlToDb().migrate()
def _update_model(self, key: str, config: AnyModelConfig) -> None:
record = ModelConfigFactory.make_config(config, key=key) # ensure it is a valid config object
json_serialized = record.model_dump_json() # and turn it into a json string.
self.cursor.execute(
"""--sql
UPDATE model_config
SET
config=?
WHERE id=?;
""",
(json_serialized, key),
)
if self.cursor.rowcount == 0:
raise UnknownModelException("model not found")
if __name__ == "__main__":
main()
def _add_model(self, key: str, config: AnyModelConfig) -> None:
record = ModelConfigFactory.make_config(config, key=key) # ensure it is a valid config object.
json_serialized = record.model_dump_json() # and turn it into a json string.
try:
self.cursor.execute(
"""--sql
INSERT INTO model_config (
id,
original_hash,
config
)
VALUES (?,?,?);
""",
(
key,
record.original_hash,
json_serialized,
),
)
except sqlite3.IntegrityError as exc:
raise DuplicateModelException(f"{record.name}: model is already in database") from exc

View File

@@ -65,12 +65,24 @@ class WorkflowWithoutID(BaseModel):
nodes: list[dict[str, JsonValue]] = Field(description="The nodes of the workflow.")
edges: list[dict[str, JsonValue]] = Field(description="The edges of the workflow.")
model_config = ConfigDict(extra="forbid")
model_config = ConfigDict(extra="ignore")
WorkflowWithoutIDValidator = TypeAdapter(WorkflowWithoutID)
class UnsafeWorkflowWithVersion(BaseModel):
"""
This utility model only requires a workflow to have a valid version string.
It is used to validate a workflow version without having to validate the entire workflow.
"""
meta: WorkflowMeta = Field(description="The meta of the workflow.")
UnsafeWorkflowWithVersionValidator = TypeAdapter(UnsafeWorkflowWithVersion)
class Workflow(WorkflowWithoutID):
id: str = Field(description="The id of the workflow.")

View File

@@ -389,7 +389,7 @@ class TextualInversionCheckpointProbe(CheckpointProbeBase):
elif "clip_g" in checkpoint:
token_dim = checkpoint["clip_g"].shape[-1]
else:
token_dim = list(checkpoint.values())[0].shape[0]
token_dim = list(checkpoint.values())[0].shape[-1]
if token_dim == 768:
return BaseModelType.StableDiffusion1
elif token_dim == 1024:

View File

@@ -9,7 +9,7 @@ def lora_token_vector_length(checkpoint: dict) -> int:
:param checkpoint: The checkpoint
"""
def _get_shape_1(key, tensor, checkpoint):
def _get_shape_1(key: str, tensor, checkpoint) -> int:
lora_token_vector_length = None
if "." not in key:
@@ -57,6 +57,10 @@ def lora_token_vector_length(checkpoint: dict) -> int:
for key, tensor in checkpoint.items():
if key.startswith("lora_unet_") and ("_attn2_to_k." in key or "_attn2_to_v." in key):
lora_token_vector_length = _get_shape_1(key, tensor, checkpoint)
elif key.startswith("lora_unet_") and (
"time_emb_proj.lora_down" in key
): # recognizes format at https://civitai.com/models/224641
lora_token_vector_length = _get_shape_1(key, tensor, checkpoint)
elif key.startswith("lora_te") and "_self_attn_" in key:
tmp_length = _get_shape_1(key, tensor, checkpoint)
if key.startswith("lora_te_"):

View File

@@ -400,6 +400,8 @@ class LoRACheckpointProbe(CheckpointProbeBase):
return BaseModelType.StableDiffusion1
elif token_vector_length == 1024:
return BaseModelType.StableDiffusion2
elif token_vector_length == 1280:
return BaseModelType.StableDiffusionXL # recognizes format at https://civitai.com/models/224641
elif token_vector_length == 2048:
return BaseModelType.StableDiffusionXL
else:

View File

@@ -102,7 +102,7 @@ def calc_tiles_with_overlap(
def calc_tiles_even_split(
image_height: int, image_width: int, num_tiles_x: int, num_tiles_y: int, overlap_fraction: float = 0
image_height: int, image_width: int, num_tiles_x: int, num_tiles_y: int, overlap: int = 0
) -> list[Tile]:
"""Calculate the tile coordinates for a given image shape with the number of tiles requested.
@@ -111,31 +111,35 @@
image_width (int): The image width in px.
num_x_tiles (int): The number of tile to split the image into on the X-axis.
num_y_tiles (int): The number of tile to split the image into on the Y-axis.
overlap_fraction (float, optional): The target overlap as fraction of the tiles size. Defaults to 0.
overlap (int, optional): The overlap between adjacent tiles in pixels. Defaults to 0.
Returns:
list[Tile]: A list of tiles that cover the image shape. Ordered from left-to-right, top-to-bottom.
"""
# Ensure tile size is divisible by 8
# Ensure the image is divisible by LATENT_SCALE_FACTOR
if image_width % LATENT_SCALE_FACTOR != 0 or image_height % LATENT_SCALE_FACTOR != 0:
raise ValueError(f"image size (({image_width}, {image_height})) must be divisible by {LATENT_SCALE_FACTOR}")
# Calculate the overlap size based on the percentage and adjust it to be divisible by 8 (rounding up)
overlap_x = LATENT_SCALE_FACTOR * math.ceil(
int((image_width / num_tiles_x) * overlap_fraction) / LATENT_SCALE_FACTOR
)
overlap_y = LATENT_SCALE_FACTOR * math.ceil(
int((image_height / num_tiles_y) * overlap_fraction) / LATENT_SCALE_FACTOR
)
# Calculate the tile size based on the number of tiles and overlap, and ensure it's divisible by 8 (rounding down)
tile_size_x = LATENT_SCALE_FACTOR * math.floor(
((image_width + overlap_x * (num_tiles_x - 1)) // num_tiles_x) / LATENT_SCALE_FACTOR
)
tile_size_y = LATENT_SCALE_FACTOR * math.floor(
((image_height + overlap_y * (num_tiles_y - 1)) // num_tiles_y) / LATENT_SCALE_FACTOR
)
if num_tiles_x > 1:
# Ensure the overlap does not exceed the maximum possible overlap (with only one tile, overlap is irrelevant)
assert overlap <= image_width - (LATENT_SCALE_FACTOR * (num_tiles_x - 1))
tile_size_x = LATENT_SCALE_FACTOR * math.floor(
((image_width + overlap * (num_tiles_x - 1)) // num_tiles_x) / LATENT_SCALE_FACTOR
)
assert overlap < tile_size_x
else:
tile_size_x = image_width
if num_tiles_y > 1:
# Ensure the overlap does not exceed the maximum possible overlap (with only one tile, overlap is irrelevant)
assert overlap <= image_height - (LATENT_SCALE_FACTOR * (num_tiles_y - 1))
tile_size_y = LATENT_SCALE_FACTOR * math.floor(
((image_height + overlap * (num_tiles_y - 1)) // num_tiles_y) / LATENT_SCALE_FACTOR
)
assert overlap < tile_size_y
else:
tile_size_y = image_height
# tiles[y * num_tiles_x + x] is the tile for the y'th row, x'th column.
tiles: list[Tile] = []
@ -143,7 +147,7 @@ def calc_tiles_even_split(
# Calculate tile coordinates. (Ignore overlap values for now.)
for tile_idx_y in range(num_tiles_y):
# Calculate the top and bottom of the row
top = tile_idx_y * (tile_size_y - overlap_y)
top = tile_idx_y * (tile_size_y - overlap)
bottom = min(top + tile_size_y, image_height)
# For the last row adjust bottom to be the height of the image
if tile_idx_y == num_tiles_y - 1:
@ -151,7 +155,7 @@ def calc_tiles_even_split(
for tile_idx_x in range(num_tiles_x):
# Calculate the left & right coordinate of each tile
left = tile_idx_x * (tile_size_x - overlap_x)
left = tile_idx_x * (tile_size_x - overlap)
right = min(left + tile_size_x, image_width)
# For the last tile in the row adjust right to be the width of the image
if tile_idx_x == num_tiles_x - 1:
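Worked through with the numbers from the updated tests below (1600 px wide, 3 tiles on X, 64 px overlap, and LATENT_SCALE_FACTOR assumed to be 8), the new arithmetic gives:

import math

LATENT_SCALE_FACTOR = 8  # assumed; matches the divisible-by-8 checks above

image_width, num_tiles_x, overlap = 1600, 3, 64
tile_size_x = LATENT_SCALE_FACTOR * math.floor(
    ((image_width + overlap * (num_tiles_x - 1)) // num_tiles_x) / LATENT_SCALE_FACTOR
)
assert tile_size_x == 576  # 8 * floor(((1600 + 128) // 3) / 8) = 8 * 72

for tile_idx_x in range(num_tiles_x):
    left = tile_idx_x * (tile_size_x - overlap)
    right = min(left + tile_size_x, image_width)
    print(left, right)  # prints 0 576, then 512 1088, then 1024 1600

Adjacent tiles overlap by exactly 64 px (576 - 512), matching the TBLR overlap values asserted in the updated tests.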

View File

@ -4,6 +4,7 @@ pip install <path_to_git_source>.
"""
import os
import platform
from distutils.version import LooseVersion
import pkg_resources
import psutil
@ -31,10 +32,6 @@ else:
console = Console(style=Style(color="grey74", bgcolor="grey19"))
def get_versions() -> dict:
return requests.get(url=INVOKE_AI_REL).json()
def invokeai_is_running() -> bool:
for p in psutil.process_iter():
try:
@ -50,6 +47,20 @@ def invokeai_is_running() -> bool:
return False
def get_pypi_versions():
url = "https://pypi.org/pypi/invokeai/json"
try:
data = requests.get(url).json()
except Exception:
raise Exception("Unable to fetch version information from PyPi")
versions = list(data["releases"].keys())
versions.sort(key=LooseVersion, reverse=True)
latest_version = [v for v in versions if "rc" not in v][0]
latest_release_candidate = [v for v in versions if "rc" in v][0]
return latest_version, latest_release_candidate, versions
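Since the list sorts newest-first under LooseVersion, the first entry without "rc" is the latest stable release and the first with "rc" is the latest release candidate. A small sketch with made-up version strings (distutils is deprecated, but the import matches the code above):

from distutils.version import LooseVersion

versions = ["3.4.0", "3.5.0rc2", "3.5.0", "3.5.1rc1"]
versions.sort(key=LooseVersion, reverse=True)
# -> ['3.5.1rc1', '3.5.0rc2', '3.5.0', '3.4.0']

latest_version = [v for v in versions if "rc" not in v][0]
latest_release_candidate = [v for v in versions if "rc" in v][0]
print(latest_version, latest_release_candidate)  # 3.5.0 3.5.1rc1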
def welcome(latest_release: str, latest_prerelease: str):
@group()
def text():
@ -63,8 +74,7 @@ def welcome(latest_release: str, latest_prerelease: str):
yield "[bold yellow]Options:"
yield f"""[1] Update to the latest [bold]official release[/bold] ([italic]{latest_release}[/italic])
[2] Update to the latest [bold]pre-release[/bold] (may be buggy; caveat emptor!) ([italic]{latest_prerelease}[/italic])
[3] Manually enter the [bold]tag name[/bold] for the version you wish to update to
[4] Manually enter the [bold]branch name[/bold] for the version you wish to update to"""
[3] Manually enter the [bold]version[/bold] you wish to update to"""
console.rule()
print(
@ -92,44 +102,35 @@ def get_extras():
def main():
versions = get_versions()
released_versions = [x for x in versions if not (x["draft"] or x["prerelease"])]
prerelease_versions = [x for x in versions if not x["draft"] and x["prerelease"]]
latest_release = released_versions[0]["tag_name"] if len(released_versions) else None
latest_prerelease = prerelease_versions[0]["tag_name"] if len(prerelease_versions) else None
if invokeai_is_running():
print(":exclamation: [bold red]Please terminate all running instances of InvokeAI before updating.[/red bold]")
input("Press any key to continue...")
return
latest_release, latest_prerelease, versions = get_pypi_versions()
welcome(latest_release, latest_prerelease)
tag = None
branch = None
release = None
choice = Prompt.ask("Choice:", choices=["1", "2", "3", "4"], default="1")
release = latest_release
choice = Prompt.ask("Choice:", choices=["1", "2", "3"], default="1")
if choice == "1":
release = latest_release
elif choice == "2":
release = latest_prerelease
elif choice == "3":
while not tag:
tag = Prompt.ask("Enter an InvokeAI tag name")
elif choice == "4":
while not branch:
branch = Prompt.ask("Enter an InvokeAI branch name")
while True:
release = Prompt.ask("Enter an InvokeAI version")
release = release.strip()
if release in versions:
break
print(f":exclamation: [bold red]'{release}' is not a recognized InvokeAI release.[/red bold]")
extras = get_extras()
print(f":crossed_fingers: Upgrading to [yellow]{tag or release or branch}[/yellow]")
if release:
cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_SRC}/{release}.zip" --use-pep517 --upgrade'
elif tag:
cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_TAG}/{tag}.zip" --use-pep517 --upgrade'
else:
cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_BRANCH}/{branch}.zip" --use-pep517 --upgrade'
print(f":crossed_fingers: Upgrading to [yellow]{release}[/yellow]")
cmd = f'pip install "invokeai{extras}=={release}" --use-pep517 --upgrade'
print("")
print("")
if os.system(cmd) == 0:

View File

@ -950,9 +950,9 @@
"problemSettingTitle": "Problem Setting Title",
"reloadNodeTemplates": "Reload Node Templates",
"removeLinearView": "Remove from Linear View",
"resetWorkflow": "Reset Workflow Editor",
"resetWorkflowDesc": "Are you sure you want to reset the Workflow Editor?",
"resetWorkflowDesc2": "Resetting the Workflow Editor will clear all nodes, edges and workflow details. Saved workflows will not be affected.",
"newWorkflow": "New Workflow",
"newWorkflowDesc": "Create a new workflow?",
"newWorkflowDesc2": "Your current workflow has unsaved changes.",
"scheduler": "Scheduler",
"schedulerDescription": "TODO",
"sDXLMainModelField": "SDXL Model",
@ -1634,10 +1634,10 @@
"userWorkflows": "My Workflows",
"defaultWorkflows": "Default Workflows",
"openWorkflow": "Open Workflow",
"uploadWorkflow": "Upload Workflow",
"uploadWorkflow": "Load from File",
"deleteWorkflow": "Delete Workflow",
"unnamedWorkflow": "Unnamed Workflow",
"downloadWorkflow": "Download Workflow",
"downloadWorkflow": "Save to File",
"saveWorkflow": "Save Workflow",
"saveWorkflowAs": "Save Workflow As",
"savingWorkflow": "Saving Workflow...",
@ -1652,7 +1652,7 @@
"searchWorkflows": "Search Workflows",
"clearWorkflowSearchFilter": "Clear Workflow Search Filter",
"workflowName": "Workflow Name",
"workflowEditorReset": "Workflow Editor Reset",
"newWorkflowCreated": "New Workflow Created",
"workflowEditorMenu": "Workflow Editor Menu",
"workflowIsOpen": "Workflow is Open"
},

View File

@ -727,9 +727,6 @@
"showMinimapnodes": "Mostrar el minimapa",
"reloadNodeTemplates": "Recargar las plantillas de nodos",
"loadWorkflow": "Cargar el flujo de trabajo",
"resetWorkflow": "Reiniciar e flujo de trabajo",
"resetWorkflowDesc": "¿Está seguro de que deseas restablecer este flujo de trabajo?",
"resetWorkflowDesc2": "Al reiniciar el flujo de trabajo se borrarán todos los nodos, aristas y detalles del flujo de trabajo.",
"downloadWorkflow": "Descargar el flujo de trabajo en un archivo JSON"
}
}

View File

@ -104,7 +104,16 @@
"copyError": "$t(gallery.copy) Errore",
"input": "Ingresso",
"notInstalled": "Non $t(common.installed)",
"unknownError": "Errore sconosciuto"
"unknownError": "Errore sconosciuto",
"updated": "Aggiornato",
"save": "Salva",
"created": "Creato",
"prevPage": "Pagina precedente",
"delete": "Elimina",
"orderBy": "Ordinato per",
"nextPage": "Pagina successiva",
"saveAs": "Salva come",
"unsaved": "Non salvato"
},
"gallery": {
"generations": "Generazioni",
@ -763,7 +772,10 @@
"setIPAdapterImage": "Imposta come immagine per l'Adattatore IP",
"problemSavingMaskDesc": "Impossibile salvare la maschera",
"setAsCanvasInitialImage": "Imposta come immagine iniziale della tela",
"invalidUpload": "Caricamento non valido"
"invalidUpload": "Caricamento non valido",
"problemDeletingWorkflow": "Problema durante l'eliminazione del flusso di lavoro",
"workflowDeleted": "Flusso di lavoro eliminato",
"problemRetrievingWorkflow": "Problema nel recupero del flusso di lavoro"
},
"tooltip": {
"feature": {
@ -886,11 +898,8 @@
"zoomInNodes": "Ingrandire",
"fitViewportNodes": "Adatta vista",
"showGraphNodes": "Mostra sovrapposizione grafico",
"resetWorkflowDesc2": "Reimpostare il flusso di lavoro cancellerà tutti i nodi, i bordi e i dettagli del flusso di lavoro.",
"reloadNodeTemplates": "Ricarica i modelli di nodo",
"loadWorkflow": "Importa flusso di lavoro JSON",
"resetWorkflow": "Reimposta flusso di lavoro",
"resetWorkflowDesc": "Sei sicuro di voler reimpostare questo flusso di lavoro?",
"downloadWorkflow": "Esporta flusso di lavoro JSON",
"scheduler": "Campionatore",
"addNode": "Aggiungi nodo",
@ -1080,25 +1089,30 @@
"collectionOrScalarFieldType": "{{name}} Raccolta|Scalare",
"nodeVersion": "Versione Nodo",
"inputFieldTypeParseError": "Impossibile analizzare il tipo di campo di input {{node}}.{{field}} ({{message}})",
"unsupportedArrayItemType": "tipo di elemento dell'array non supportato \"{{type}}\"",
"unsupportedArrayItemType": "Tipo di elemento dell'array non supportato \"{{type}}\"",
"targetNodeFieldDoesNotExist": "Connessione non valida: il campo di destinazione/input {{node}}.{{field}} non esiste",
"unsupportedMismatchedUnion": "tipo CollectionOrScalar non corrispondente con tipi di base {{firstType}} e {{secondType}}",
"allNodesUpdated": "Tutti i nodi sono aggiornati",
"sourceNodeDoesNotExist": "Connessione non valida: il nodo di origine/output {{node}} non esiste",
"unableToExtractEnumOptions": "impossibile estrarre le opzioni enum",
"unableToParseFieldType": "impossibile analizzare il tipo di campo",
"unableToExtractEnumOptions": "Impossibile estrarre le opzioni enum",
"unableToParseFieldType": "Impossibile analizzare il tipo di campo",
"unrecognizedWorkflowVersion": "Versione dello schema del flusso di lavoro non riconosciuta {{version}}",
"outputFieldTypeParseError": "Impossibile analizzare il tipo di campo di output {{node}}.{{field}} ({{message}})",
"sourceNodeFieldDoesNotExist": "Connessione non valida: il campo di origine/output {{node}}.{{field}} non esiste",
"unableToGetWorkflowVersion": "Impossibile ottenere la versione dello schema del flusso di lavoro",
"nodePack": "Pacchetto di nodi",
"unableToExtractSchemaNameFromRef": "impossibile estrarre il nome dello schema dal riferimento",
"unableToExtractSchemaNameFromRef": "Impossibile estrarre il nome dello schema dal riferimento",
"unknownOutput": "Output sconosciuto: {{name}}",
"unknownNodeType": "Tipo di nodo sconosciuto",
"targetNodeDoesNotExist": "Connessione non valida: il nodo di destinazione/input {{node}} non esiste",
"unknownFieldType": "$t(nodes.unknownField) tipo: {{type}}",
"deletedInvalidEdge": "Eliminata connessione non valida {{source}} -> {{target}}",
"unknownInput": "Input sconosciuto: {{name}}"
"unknownInput": "Input sconosciuto: {{name}}",
"prototypeDesc": "Questa invocazione è un prototipo. Potrebbe subire modifiche sostanziali durante gli aggiornamenti dell'app e potrebbe essere rimossa in qualsiasi momento.",
"betaDesc": "Questa invocazione è in versione beta. Fino a quando non sarà stabile, potrebbe subire modifiche importanti durante gli aggiornamenti dell'app. Abbiamo intenzione di supportare questa invocazione a lungo termine.",
"newWorkflow": "Nuovo flusso di lavoro",
"newWorkflowDesc": "Creare un nuovo flusso di lavoro?",
"newWorkflowDesc2": "Il flusso di lavoro attuale presenta modifiche non salvate."
},
"boards": {
"autoAddBoard": "Aggiungi automaticamente bacheca",
@ -1594,5 +1608,36 @@
"hrf": "Correzione Alta Risoluzione",
"hrfStrength": "Forza della Correzione Alta Risoluzione",
"strengthTooltip": "Valori più bassi comportano meno dettagli, il che può ridurre potenziali artefatti."
},
"workflows": {
"saveWorkflowAs": "Salva flusso di lavoro come",
"workflowEditorMenu": "Menu dell'editor del flusso di lavoro",
"noSystemWorkflows": "Nessun flusso di lavoro del sistema",
"workflowName": "Nome del flusso di lavoro",
"noUserWorkflows": "Nessun flusso di lavoro utente",
"defaultWorkflows": "Flussi di lavoro predefiniti",
"saveWorkflow": "Salva flusso di lavoro",
"openWorkflow": "Apri flusso di lavoro",
"clearWorkflowSearchFilter": "Cancella il filtro di ricerca del flusso di lavoro",
"workflowLibrary": "Libreria",
"noRecentWorkflows": "Nessun flusso di lavoro recente",
"workflowSaved": "Flusso di lavoro salvato",
"workflowIsOpen": "Il flusso di lavoro è aperto",
"unnamedWorkflow": "Flusso di lavoro senza nome",
"savingWorkflow": "Salvataggio del flusso di lavoro...",
"problemLoading": "Problema durante il caricamento dei flussi di lavoro",
"loading": "Caricamento dei flussi di lavoro",
"searchWorkflows": "Cerca flussi di lavoro",
"problemSavingWorkflow": "Problema durante il salvataggio del flusso di lavoro",
"deleteWorkflow": "Elimina flusso di lavoro",
"workflows": "Flussi di lavoro",
"noDescription": "Nessuna descrizione",
"userWorkflows": "I miei flussi di lavoro",
"newWorkflowCreated": "Nuovo flusso di lavoro creato",
"downloadWorkflow": "Salva su file",
"uploadWorkflow": "Carica da file"
},
"app": {
"storeNotInitialized": "Il negozio non è inizializzato"
}
}

View File

@ -844,9 +844,6 @@
"hideLegendNodes": "Typelegende veld verbergen",
"reloadNodeTemplates": "Herlaad knooppuntsjablonen",
"loadWorkflow": "Laad werkstroom",
"resetWorkflow": "Herstel werkstroom",
"resetWorkflowDesc": "Weet je zeker dat je deze werkstroom wilt herstellen?",
"resetWorkflowDesc2": "Herstel van een werkstroom haalt alle knooppunten, randen en werkstroomdetails weg.",
"downloadWorkflow": "Download JSON van werkstroom",
"booleanPolymorphicDescription": "Een verzameling Booleanse waarden.",
"scheduler": "Planner",

File diff suppressed because it is too large

View File

@ -110,7 +110,17 @@
"copyError": "$t(gallery.copy) 错误",
"input": "输入",
"notInstalled": "非 $t(common.installed)",
"delete": "删除"
"delete": "删除",
"updated": "已上传",
"save": "保存",
"created": "已创建",
"prevPage": "上一页",
"unknownError": "未知错误",
"direction": "指向",
"orderBy": "排序方式:",
"nextPage": "下一页",
"saveAs": "保存为",
"unsaved": "未保存"
},
"gallery": {
"generations": "生成的图像",
@ -146,7 +156,11 @@
"image": "图像",
"drop": "弃用",
"dropOrUpload": "$t(gallery.drop) 或上传",
"dropToUpload": "$t(gallery.drop) 以上传"
"dropToUpload": "$t(gallery.drop) 以上传",
"problemDeletingImagesDesc": "有一张或多张图像无法被删除",
"problemDeletingImages": "删除图像时出现问题",
"unstarImage": "取消收藏图像",
"starImage": "收藏图像"
},
"hotkeys": {
"keyboardShortcuts": "键盘快捷键",
@ -724,7 +738,7 @@
"nodesUnrecognizedTypes": "无法加载。节点图有无法识别的节点类型",
"nodesNotValidJSON": "无效的 JSON",
"nodesNotValidGraph": "无效的 InvokeAi 节点图",
"nodesLoadedFailed": "节点加载失败",
"nodesLoadedFailed": "节点加载失败",
"modelAddedSimple": "已添加模型",
"modelAdded": "已添加模型: {{modelName}}",
"imageSavingFailed": "图像保存失败",
@ -760,7 +774,10 @@
"problemImportingMask": "导入遮罩时出现问题",
"baseModelChangedCleared_other": "基础模型已更改, 已清除或禁用 {{count}} 个不兼容的子模型",
"setAsCanvasInitialImage": "设为画布初始图像",
"invalidUpload": "无效的上传"
"invalidUpload": "无效的上传",
"problemDeletingWorkflow": "删除工作流时出现问题",
"workflowDeleted": "已删除工作流",
"problemRetrievingWorkflow": "检索工作流时发生问题"
},
"unifiedCanvas": {
"layer": "图层",
@ -875,11 +892,8 @@
},
"nodes": {
"zoomInNodes": "放大",
"resetWorkflowDesc": "是否确定要清空节点图?",
"resetWorkflow": "清空节点图",
"loadWorkflow": "读取节点图",
"loadWorkflow": "加载工作流",
"zoomOutNodes": "缩小",
"resetWorkflowDesc2": "重置节点图将清除所有节点、边际和节点图详情.",
"reloadNodeTemplates": "重载节点模板",
"hideGraphNodes": "隐藏节点图信息",
"fitViewportNodes": "自适应视图",
@ -888,7 +902,7 @@
"showLegendNodes": "显示字段类型图例",
"hideLegendNodes": "隐藏字段类型图例",
"showGraphNodes": "显示节点图信息",
"downloadWorkflow": "下载节点图 JSON",
"downloadWorkflow": "下载工作流 JSON",
"workflowDescription": "简述",
"versionUnknown": " 未知版本",
"noNodeSelected": "无选中的节点",
@ -1103,7 +1117,9 @@
"collectionOrScalarFieldType": "{{name}} 合集 | 标量",
"nodeVersion": "节点版本",
"deletedInvalidEdge": "已删除无效的边缘 {{source}} -> {{target}}",
"unknownInput": "未知输入:{{name}}"
"unknownInput": "未知输入:{{name}}",
"prototypeDesc": "此调用是一个原型 (prototype)。它可能会在本项目更新期间发生破坏性更改,并且随时可能被删除。",
"betaDesc": "此调用尚处于测试阶段。在稳定之前,它可能会在项目更新期间发生破坏性更改。本项目计划长期支持这种调用。"
},
"controlnet": {
"resize": "直接缩放",
@ -1607,5 +1623,35 @@
"hrf": "高分辨率修复",
"hrfStrength": "高分辨率修复强度",
"strengthTooltip": "值越低细节越少,但可以减少部分潜在的伪影。"
},
"workflows": {
"saveWorkflowAs": "保存工作流为",
"workflowEditorMenu": "工作流编辑器菜单",
"noSystemWorkflows": "无系统工作流",
"workflowName": "工作流名称",
"noUserWorkflows": "无用户工作流",
"defaultWorkflows": "默认工作流",
"saveWorkflow": "保存工作流",
"openWorkflow": "打开工作流",
"clearWorkflowSearchFilter": "清除工作流检索过滤器",
"workflowLibrary": "工作流库",
"downloadWorkflow": "下载工作流",
"noRecentWorkflows": "无最近工作流",
"workflowSaved": "已保存工作流",
"workflowIsOpen": "工作流已打开",
"unnamedWorkflow": "未命名的工作流",
"savingWorkflow": "保存工作流中...",
"problemLoading": "加载工作流时出现问题",
"loading": "加载工作流中",
"searchWorkflows": "检索工作流",
"problemSavingWorkflow": "保存工作流时出现问题",
"deleteWorkflow": "删除工作流",
"workflows": "工作流",
"noDescription": "无描述",
"uploadWorkflow": "上传工作流",
"userWorkflows": "我的工作流"
},
"app": {
"storeNotInitialized": "商店尚未初始化"
}
}

View File

@ -144,6 +144,7 @@ export const buildCanvasImageToImageGraph = (
type: 'l2i',
id: CANVAS_OUTPUT,
is_intermediate,
use_cache: false,
},
},
edges: [
@ -255,6 +256,7 @@ export const buildCanvasImageToImageGraph = (
is_intermediate,
width: width,
height: height,
use_cache: false,
};
graph.edges.push(
@ -295,6 +297,7 @@ export const buildCanvasImageToImageGraph = (
id: CANVAS_OUTPUT,
is_intermediate,
fp32,
use_cache: false,
};
(graph.nodes[IMAGE_TO_LATENTS] as ImageToLatentsInvocation).image =

View File

@ -191,6 +191,7 @@ export const buildCanvasInpaintGraph = (
id: CANVAS_OUTPUT,
is_intermediate,
reference: canvasInitImage,
use_cache: false,
},
},
edges: [

View File

@ -199,6 +199,7 @@ export const buildCanvasOutpaintGraph = (
type: 'color_correct',
id: CANVAS_OUTPUT,
is_intermediate,
use_cache: false,
},
},
edges: [

View File

@ -266,6 +266,7 @@ export const buildCanvasSDXLImageToImageGraph = (
is_intermediate,
width: width,
height: height,
use_cache: false,
};
graph.edges.push(
@ -306,6 +307,7 @@ export const buildCanvasSDXLImageToImageGraph = (
id: CANVAS_OUTPUT,
is_intermediate,
fp32,
use_cache: false,
};
(graph.nodes[IMAGE_TO_LATENTS] as ImageToLatentsInvocation).image =

View File

@ -196,6 +196,7 @@ export const buildCanvasSDXLInpaintGraph = (
id: CANVAS_OUTPUT,
is_intermediate,
reference: canvasInitImage,
use_cache: false,
},
},
edges: [

View File

@ -204,6 +204,7 @@ export const buildCanvasSDXLOutpaintGraph = (
type: 'color_correct',
id: CANVAS_OUTPUT,
is_intermediate,
use_cache: false,
},
},
edges: [

View File

@ -258,6 +258,7 @@ export const buildCanvasSDXLTextToImageGraph = (
is_intermediate,
width: width,
height: height,
use_cache: false,
};
graph.edges.push(
@ -288,6 +289,7 @@ export const buildCanvasSDXLTextToImageGraph = (
id: CANVAS_OUTPUT,
is_intermediate,
fp32,
use_cache: false,
};
graph.edges.push({

View File

@ -246,6 +246,7 @@ export const buildCanvasTextToImageGraph = (
is_intermediate,
width: width,
height: height,
use_cache: false,
};
graph.edges.push(
@ -276,6 +277,7 @@ export const buildCanvasTextToImageGraph = (
id: CANVAS_OUTPUT,
is_intermediate,
fp32,
use_cache: false,
};
graph.edges.push({

View File

@ -143,6 +143,7 @@ export const buildLinearImageToImageGraph = (
// },
fp32,
is_intermediate,
use_cache: false,
},
},
edges: [

View File

@ -154,6 +154,7 @@ export const buildLinearSDXLImageToImageGraph = (
// },
fp32,
is_intermediate,
use_cache: false,
},
},
edges: [

View File

@ -127,6 +127,7 @@ export const buildLinearSDXLTextToImageGraph = (
id: LATENTS_TO_IMAGE,
fp32,
is_intermediate,
use_cache: false,
},
},
edges: [

View File

@ -146,6 +146,7 @@ export const buildLinearTextToImageGraph = (
id: LATENTS_TO_IMAGE,
fp32,
is_intermediate,
use_cache: false,
},
},
edges: [

View File

@ -64,7 +64,10 @@ const migrateV1toV2 = (workflowToMigrate: WorkflowV1): WorkflowV2 => {
const nodePack = invocationTemplate
? invocationTemplate.nodePack
: t('common.unknown');
(node.data as unknown as InvocationNodeData).nodePack = nodePack;
// Fallback to 1.0.0 if not specified - this matches the behavior of the backend
node.data.version ||= '1.0.0';
}
});
// Bump version

View File

@ -11,44 +11,48 @@ import {
Text,
useDisclosure,
} from '@chakra-ui/react';
import { useAppDispatch } from 'app/store/storeHooks';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { nodeEditorReset } from 'features/nodes/store/nodesSlice';
import { addToast } from 'features/system/store/systemSlice';
import { makeToast } from 'features/system/util/makeToast';
import { memo, useCallback, useRef } from 'react';
import { useTranslation } from 'react-i18next';
import { FaTrash } from 'react-icons/fa';
import { FaCircleNodes } from 'react-icons/fa6';
const ResetWorkflowEditorMenuItem = () => {
const NewWorkflowMenuItem = () => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const { isOpen, onOpen, onClose } = useDisclosure();
const cancelRef = useRef<HTMLButtonElement | null>(null);
const isTouched = useAppSelector((state) => state.workflow.isTouched);
const handleConfirmClear = useCallback(() => {
const handleNewWorkflow = useCallback(() => {
dispatch(nodeEditorReset());
dispatch(
addToast(
makeToast({
title: t('workflows.workflowEditorReset'),
title: t('workflows.newWorkflowCreated'),
status: 'success',
})
)
);
onClose();
}, [dispatch, t, onClose]);
}, [dispatch, onClose, t]);
const onClick = useCallback(() => {
if (!isTouched) {
handleNewWorkflow();
return;
}
onOpen();
}, [handleNewWorkflow, isTouched, onOpen]);
return (
<>
<MenuItem
as="button"
icon={<FaTrash />}
sx={{ color: 'error.600', _dark: { color: 'error.300' } }}
onClick={onOpen}
>
{t('nodes.resetWorkflow')}
<MenuItem as="button" icon={<FaCircleNodes />} onClick={onClick}>
{t('nodes.newWorkflow')}
</MenuItem>
<AlertDialog
@ -61,13 +65,13 @@ const ResetWorkflowEditorMenuItem = () => {
<AlertDialogContent>
<AlertDialogHeader fontSize="lg" fontWeight="bold">
{t('nodes.resetWorkflow')}
{t('nodes.newWorkflow')}
</AlertDialogHeader>
<AlertDialogBody py={4}>
<Flex flexDir="column" gap={2}>
<Text>{t('nodes.resetWorkflowDesc')}</Text>
<Text variant="subtext">{t('nodes.resetWorkflowDesc2')}</Text>
<Text>{t('nodes.newWorkflowDesc')}</Text>
<Text variant="subtext">{t('nodes.newWorkflowDesc2')}</Text>
</Flex>
</AlertDialogBody>
@ -75,7 +79,7 @@ const ResetWorkflowEditorMenuItem = () => {
<Button ref={cancelRef} onClick={onClose}>
{t('common.cancel')}
</Button>
<Button colorScheme="error" ml={3} onClick={handleConfirmClear}>
<Button colorScheme="error" ml={3} onClick={handleNewWorkflow}>
{t('common.accept')}
</Button>
</AlertDialogFooter>
@ -85,4 +89,4 @@ const ResetWorkflowEditorMenuItem = () => {
);
};
export default memo(ResetWorkflowEditorMenuItem);
export default memo(NewWorkflowMenuItem);

View File

@ -9,7 +9,7 @@ import IAIIconButton from 'common/components/IAIIconButton';
import { useGlobalMenuCloseTrigger } from 'common/hooks/useGlobalMenuCloseTrigger';
import { useFeatureStatus } from 'features/system/hooks/useFeatureStatus';
import DownloadWorkflowMenuItem from 'features/workflowLibrary/components/WorkflowLibraryMenu/DownloadWorkflowMenuItem';
import ResetWorkflowEditorMenuItem from 'features/workflowLibrary/components/WorkflowLibraryMenu/ResetWorkflowEditorMenuItem';
import NewWorkflowMenuItem from 'features/workflowLibrary/components/WorkflowLibraryMenu/NewWorkflowMenuItem';
import SaveWorkflowAsMenuItem from 'features/workflowLibrary/components/WorkflowLibraryMenu/SaveWorkflowAsMenuItem';
import SaveWorkflowMenuItem from 'features/workflowLibrary/components/WorkflowLibraryMenu/SaveWorkflowMenuItem';
import SettingsMenuItem from 'features/workflowLibrary/components/WorkflowLibraryMenu/SettingsMenuItem';
@ -39,7 +39,7 @@ const WorkflowLibraryMenu = () => {
{isWorkflowLibraryEnabled && <SaveWorkflowAsMenuItem />}
<DownloadWorkflowMenuItem />
<UploadWorkflowMenuItem />
<ResetWorkflowEditorMenuItem />
<NewWorkflowMenuItem />
<MenuDivider />
<SettingsMenuItem />
</MenuList>

View File

@ -101,6 +101,8 @@ plugins:
extra_javascript:
- https://unpkg.com/tablesort@5.3.0/dist/tablesort.min.js
- javascripts/tablesort.js
- https://widget.kapa.ai/kapa-widget.bundle.js
- javascript/init_kapa_widget.js
extra:
analytics:

View File

@ -32,7 +32,7 @@ classifiers = [
'Topic :: Scientific/Engineering :: Image Processing',
]
dependencies = [
"accelerate~=0.24.0",
"accelerate~=0.25.0",
"albumentations",
"basicsr",
"click",
@ -80,11 +80,11 @@ dependencies = [
"semver~=3.0.1",
"send2trash",
"test-tube~=0.7.5",
"torch==2.1.0",
"torchvision==0.16.0",
"torch==2.1.1",
"torchvision==0.16.1",
"torchmetrics~=0.11.0",
"torchsde~=0.2.5",
"transformers~=4.35.0",
"transformers~=4.36.0",
"uvicorn[standard]~=0.21.1",
"windows-curses; sys_platform=='win32'",
]
@ -107,7 +107,7 @@ dependencies = [
"pytest-datadir",
]
"xformers" = [
"xformers==0.0.22post7; sys_platform!='darwin'",
"xformers==0.0.23; sys_platform!='darwin'",
"triton; sys_platform=='linux'",
]
"onnx" = ["onnxruntime"]
@ -138,7 +138,6 @@ dependencies = [
"invokeai-node-web" = "invokeai.app.api_app:invoke_api"
"invokeai-import-images" = "invokeai.frontend.install.import_images:main"
"invokeai-db-maintenance" = "invokeai.backend.util.db_maintenance:main"
"invokeai-migrate-models-to-db" = "invokeai.backend.model_manager.migrate_to_db:main"
[project.urls]
"Homepage" = "https://invoke-ai.github.io/InvokeAI/"

View File

@ -305,9 +305,7 @@ def test_calc_tiles_min_overlap_input_validation(
def test_calc_tiles_even_split_single_tile():
"""Test calc_tiles_even_split() behavior when a single tile covers the image."""
tiles = calc_tiles_even_split(
image_height=512, image_width=1024, num_tiles_x=1, num_tiles_y=1, overlap_fraction=0.25
)
tiles = calc_tiles_even_split(image_height=512, image_width=1024, num_tiles_x=1, num_tiles_y=1, overlap=64)
expected_tiles = [
Tile(
@ -322,36 +320,34 @@ def test_calc_tiles_even_split_single_tile():
def test_calc_tiles_even_split_evenly_divisible():
"""Test calc_tiles_even_split() behavior when the image is evenly covered by multiple tiles."""
# Parameters are chosen to roughly mimic the output of the original version of this test
tiles = calc_tiles_even_split(
image_height=576, image_width=1600, num_tiles_x=3, num_tiles_y=2, overlap_fraction=0.25
)
tiles = calc_tiles_even_split(image_height=576, image_width=1600, num_tiles_x=3, num_tiles_y=2, overlap=64)
expected_tiles = [
# Row 0
Tile(
coords=TBLR(top=0, bottom=320, left=0, right=624),
overlap=TBLR(top=0, bottom=72, left=0, right=136),
coords=TBLR(top=0, bottom=320, left=0, right=576),
overlap=TBLR(top=0, bottom=64, left=0, right=64),
),
Tile(
coords=TBLR(top=0, bottom=320, left=488, right=1112),
overlap=TBLR(top=0, bottom=72, left=136, right=136),
coords=TBLR(top=0, bottom=320, left=512, right=1088),
overlap=TBLR(top=0, bottom=64, left=64, right=64),
),
Tile(
coords=TBLR(top=0, bottom=320, left=976, right=1600),
overlap=TBLR(top=0, bottom=72, left=136, right=0),
coords=TBLR(top=0, bottom=320, left=1024, right=1600),
overlap=TBLR(top=0, bottom=64, left=64, right=0),
),
# Row 1
Tile(
coords=TBLR(top=248, bottom=576, left=0, right=624),
overlap=TBLR(top=72, bottom=0, left=0, right=136),
coords=TBLR(top=256, bottom=576, left=0, right=576),
overlap=TBLR(top=64, bottom=0, left=0, right=64),
),
Tile(
coords=TBLR(top=248, bottom=576, left=488, right=1112),
overlap=TBLR(top=72, bottom=0, left=136, right=136),
coords=TBLR(top=256, bottom=576, left=512, right=1088),
overlap=TBLR(top=64, bottom=0, left=64, right=64),
),
Tile(
coords=TBLR(top=248, bottom=576, left=976, right=1600),
overlap=TBLR(top=72, bottom=0, left=136, right=0),
coords=TBLR(top=256, bottom=576, left=1024, right=1600),
overlap=TBLR(top=64, bottom=0, left=64, right=0),
),
]
assert tiles == expected_tiles
@ -360,36 +356,34 @@ def test_calc_tiles_even_split_evenly_divisible():
def test_calc_tiles_even_split_not_evenly_divisible():
"""Test calc_tiles_even_split() behavior when the image requires 'uneven' overlaps to achieve proper coverage."""
# Parameters are chosen to roughly mimic the output of the original version of this test
tiles = calc_tiles_even_split(
image_height=400, image_width=1200, num_tiles_x=3, num_tiles_y=2, overlap_fraction=0.25
)
tiles = calc_tiles_even_split(image_height=400, image_width=1200, num_tiles_x=3, num_tiles_y=2, overlap=64)
expected_tiles = [
# Row 0
Tile(
coords=TBLR(top=0, bottom=224, left=0, right=464),
overlap=TBLR(top=0, bottom=56, left=0, right=104),
coords=TBLR(top=0, bottom=232, left=0, right=440),
overlap=TBLR(top=0, bottom=64, left=0, right=64),
),
Tile(
coords=TBLR(top=0, bottom=224, left=360, right=824),
overlap=TBLR(top=0, bottom=56, left=104, right=104),
coords=TBLR(top=0, bottom=232, left=376, right=816),
overlap=TBLR(top=0, bottom=64, left=64, right=64),
),
Tile(
coords=TBLR(top=0, bottom=224, left=720, right=1200),
overlap=TBLR(top=0, bottom=56, left=104, right=0),
coords=TBLR(top=0, bottom=232, left=752, right=1200),
overlap=TBLR(top=0, bottom=64, left=64, right=0),
),
# Row 1
Tile(
coords=TBLR(top=168, bottom=400, left=0, right=464),
overlap=TBLR(top=56, bottom=0, left=0, right=104),
coords=TBLR(top=168, bottom=400, left=0, right=440),
overlap=TBLR(top=64, bottom=0, left=0, right=64),
),
Tile(
coords=TBLR(top=168, bottom=400, left=360, right=824),
overlap=TBLR(top=56, bottom=0, left=104, right=104),
coords=TBLR(top=168, bottom=400, left=376, right=816),
overlap=TBLR(top=64, bottom=0, left=64, right=64),
),
Tile(
coords=TBLR(top=168, bottom=400, left=720, right=1200),
overlap=TBLR(top=56, bottom=0, left=104, right=0),
coords=TBLR(top=168, bottom=400, left=752, right=1200),
overlap=TBLR(top=64, bottom=0, left=64, right=0),
),
]
@ -399,28 +393,26 @@ def test_calc_tiles_even_split_not_evenly_divisible():
def test_calc_tiles_even_split_difficult_size():
"""Test calc_tiles_even_split() behavior when the image is a difficult size to spilt evenly and keep div8."""
# Parameters are a difficult size for other tile gen routines to calculate
tiles = calc_tiles_even_split(
image_height=1000, image_width=1000, num_tiles_x=2, num_tiles_y=2, overlap_fraction=0.25
)
tiles = calc_tiles_even_split(image_height=1000, image_width=1000, num_tiles_x=2, num_tiles_y=2, overlap=64)
expected_tiles = [
# Row 0
Tile(
coords=TBLR(top=0, bottom=560, left=0, right=560),
overlap=TBLR(top=0, bottom=128, left=0, right=128),
coords=TBLR(top=0, bottom=528, left=0, right=528),
overlap=TBLR(top=0, bottom=64, left=0, right=64),
),
Tile(
coords=TBLR(top=0, bottom=560, left=432, right=1000),
overlap=TBLR(top=0, bottom=128, left=128, right=0),
coords=TBLR(top=0, bottom=528, left=464, right=1000),
overlap=TBLR(top=0, bottom=64, left=64, right=0),
),
# Row 1
Tile(
coords=TBLR(top=432, bottom=1000, left=0, right=560),
overlap=TBLR(top=128, bottom=0, left=0, right=128),
coords=TBLR(top=464, bottom=1000, left=0, right=528),
overlap=TBLR(top=64, bottom=0, left=0, right=64),
),
Tile(
coords=TBLR(top=432, bottom=1000, left=432, right=1000),
overlap=TBLR(top=128, bottom=0, left=128, right=0),
coords=TBLR(top=464, bottom=1000, left=464, right=1000),
overlap=TBLR(top=64, bottom=0, left=64, right=0),
),
]
@ -428,11 +420,13 @@ def test_calc_tiles_even_split_difficult_size():
@pytest.mark.parametrize(
["image_height", "image_width", "num_tiles_x", "num_tiles_y", "overlap_fraction", "raises"],
["image_height", "image_width", "num_tiles_x", "num_tiles_y", "overlap", "raises"],
[
(128, 128, 1, 1, 0.25, False), # OK
(128, 128, 1, 1, 127, False), # OK
(128, 128, 1, 1, 0, False), # OK
(128, 128, 2, 1, 0, False), # OK
(128, 128, 2, 2, 0, False), # OK
(128, 128, 2, 1, 120, True), # overlap equals tile_width.
(128, 128, 1, 2, 120, True), # overlap equals tile_height.
(127, 127, 1, 1, 0, True), # image size must be divisible by 8
],
)
@ -441,15 +435,15 @@ def test_calc_tiles_even_split_input_validation(
image_width: int,
num_tiles_x: int,
num_tiles_y: int,
overlap_fraction: float,
overlap: int,
raises: bool,
):
"""Test that calc_tiles_even_split() raises an exception if the inputs are invalid."""
if raises:
with pytest.raises(ValueError):
calc_tiles_even_split(image_height, image_width, num_tiles_x, num_tiles_y, overlap_fraction)
with pytest.raises((AssertionError, ValueError)):
calc_tiles_even_split(image_height, image_width, num_tiles_x, num_tiles_y, overlap)
else:
calc_tiles_even_split(image_height, image_width, num_tiles_x, num_tiles_y, overlap_fraction)
calc_tiles_even_split(image_height, image_width, num_tiles_x, num_tiles_y, overlap)
#############################################