Compare commits

...

41 Commits

Author SHA1 Message Date
913ad69c34 fix typos 2023-12-20 17:02:54 +11:00
Sam a4f9bfc8f7 Update Dockerfile 2023-12-19 18:38:36 -05:00
Sam 9afdd0f4a8 Update Dockerfile 2023-12-19 18:38:36 -05:00
bee6ad1547 fix(pnpm): replace npm with pnpm in dockerfile 2023-12-19 18:38:36 -05:00
fa3f1b6e41 [Feat] reimport model config records after schema migration (#5281)
* add code to repopulate model config records after schema update

* reformat for ruff

* migrate model records using db cursor rather than the ModelRecordConfigService

* ruff fixes

* tweak exception reporting

* fix: build frontend in  pypi-release workflow

This was missing, resulting in the 3.5.0rc1 having no frontend.

* fix: use node 18, set working directory

- Node 20 has a problem with `pnpm`; set it to Node 18
- Set the working directory for the frontend commands

* Don't copy extraneous paths into installer .zip

* feat(installer): delete frontend build after creating installer

This prevents an empty `dist/` from breaking the app on startup.

* feat: add python dist as release artifact, as input to enable publish to pypi

- The release workflow never runs automatically. It must be manually kicked off.
- The release workflow has an input. When running it from the GH actions UI, you will see a "Publish build on PyPi" prompt. If this value is "true", the workflow will upload the build to PyPi, releasing it. If this is anything else (e.g. "false", the default), the workflow will build but not upload to PyPi.
- The `dist/` folder (where the python package is built) is uploaded as a workflow artifact as a zip file. This can be downloaded and inspected. This allows "dry" runs of the workflow.
- The workflow job and some steps have been renamed to clarify what they do

* translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI

* freeze yaml migration logic at upgrade to 3.5

* moved migration code to migration_3

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
2023-12-19 17:01:47 -05:00
d0fa131010 (feat) updater installs from PyPi instead of GitHub releases (#5316)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [X] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No


## Description
Updater script pulls from PyPI instead of GitHub releases (pulling from
GitHub releases is why the RC packages are having issues when updating
through the launcher script)
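
In essence, the updater now asks PyPI's JSON API which versions exist, rather than walking GitHub releases. A minimal sketch (the full `get_pypi_versions()` is in the diff below; `list_invokeai_versions` is a hypothetical helper name):

```python
import requests

def list_invokeai_versions() -> list[str]:
    """Sketch: fetch every published invokeai version from PyPI's JSON API."""
    data = requests.get("https://pypi.org/pypi/invokeai/json", timeout=10).json()
    return list(data["releases"].keys())
```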

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Merge Plan


## Added/updated tests?

- [ ] Yes
- [X] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-19 13:15:38 +11:00
2f438431bd (fix) update logic for installing specific version 2023-12-19 11:05:15 +11:00
bbeb5cb477 Merge branch 'main' into feat/updater_use_pypi 2023-12-19 10:09:03 +11:00
cd3111c324 fix ruff errors 2023-12-19 09:58:10 +11:00
16b7246412 (feat) updater installs from PyPi instead of GitHub releases 2023-12-19 09:30:40 +11:00
42be78d328 translationBot(ui): update translation (Italian)
Currently translated at 97.2% (1327 of 1365 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-12-19 07:20:14 +11:00
e469e24a58 Update model_probe to work with diffuser-format SD TI embeddings. (#5301)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

      
## Have you updated all relevant documentation?
- [x] Yes (N/A)
- [ ] No


## Description

This change enables the model probe to work with TI embeddings that have
the following state_dict structure:

```python
{
    "<any_key>": torch.Tensor(...), # where the tensor has shape (N, embedding_dim)
}
```
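
As an illustration, base-model detection for this structure reduces to reading the tensor's last dimension; this hedged sketch mirrors the `model_probe` change in the diff below (`detect_base_model` is a hypothetical helper, and the 768/1024 mapping follows the probe):

```python
import torch

def detect_base_model(state_dict: dict[str, torch.Tensor]) -> str:
    """Sketch: infer the base model of a diffusers-format TI embedding.

    The embedding dim is the tensor's last dimension, so shape[-1] is used
    rather than shape[0] (the bug this PR fixes).
    """
    tensor = next(iter(state_dict.values()))
    token_dim = tensor.shape[-1]
    if token_dim == 768:
        return "StableDiffusion1"
    if token_dim == 1024:
        return "StableDiffusion2"
    raise ValueError(f"unrecognized token dimension: {token_dim}")
```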

## QA Instructions, Screenshots, Recordings

I can't imagine an embedding format that would previously have passed
the model probe, and would now fail after this change. That being said,
I'll exercise a bunch of existing TIs before merging.

- [x] Exercise existing TI formats


## Added/updated tests?

- [ ] Yes
- [x] No : _We could really benefit from tests for all of the supported
TI formats... but I'm not taking on that project right now._
2023-12-18 10:01:04 -05:00
cb698ff1fb Update model_probe to work with diffuser-format SD TI embeddings. 2023-12-18 09:51:16 -05:00
0e738c4290 Tag model manager v2 api as unstable (#5311)
## What type of PR is this? (check all applicable)

- [X] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

As discussed with @psychedelicious , this PR changes the swagger label
on the model manager V2 routes to `model_manager_v2_unstable` in order
to warn community members that the API is liable to change.
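
For reference, tags set on a FastAPI router become the grouping label in the generated Swagger docs; a minimal sketch of the mechanism (the one-line change itself is in the diff below):

```python
from fastapi import APIRouter, FastAPI

# Routes on this router are grouped under "model_manager_v2_unstable"
# in the OpenAPI/Swagger UI, signalling that the API is liable to change.
model_records_router = APIRouter(
    prefix="/v1/model/record", tags=["model_manager_v2_unstable"]
)

app = FastAPI()
app.include_router(model_records_router)
```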

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Merge Plan


## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-18 07:09:40 -05:00
09d1bc513d Merge branch 'main' into refactor/model-manager2/mark-api-experimental 2023-12-18 07:04:00 -05:00
aefa828237 Tiled upscaling - EvenSplit to use overlap in pixels instead tile fraction (#5309)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
Change CalculateImageTilesEvenSplitInvocation to take its overlap in
pixels rather than as a fraction of the tile size. This makes seam
blending more predictable, since the overlap size is known up front.
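
For intuition, the per-axis tile size follows directly from the pixel overlap; this is a sketch of the arithmetic in `calc_tiles_even_split()` (assuming `LATENT_SCALE_FACTOR` is 8, as in the diff below):

```python
import math

LATENT_SCALE_FACTOR = 8  # tile edges must stay divisible by 8

def tile_size(image_size: int, num_tiles: int, overlap: int) -> int:
    """Sketch: even-split tile size for one axis, with overlap in pixels."""
    if num_tiles == 1:
        return image_size
    # Each tile shares `overlap` px with its neighbour; round the size
    # down to a multiple of LATENT_SCALE_FACTOR.
    return LATENT_SCALE_FACTOR * math.floor(
        ((image_size + overlap * (num_tiles - 1)) // num_tiles) / LATENT_SCALE_FACTOR
    )

# e.g. a 1600 px axis split into 3 tiles with a 64 px overlap gives 576 px tiles,
# matching the expected coordinates in the updated tests below.
assert tile_size(1600, 3, 64) == 576
```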

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Merge Plan


## Added/updated tests?

- [x] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-17 21:13:45 -05:00
74ea592d02 tag model manager v2 api as unstable 2023-12-17 14:16:45 -05:00
457b0dfac0 Merge branch 'main' into tiled-upscaling-graph 2023-12-17 15:12:16 +00:00
96a717c4ba In CalculateImageTilesEvenSplitInvocation, overlap_fraction becomes just overlap. This is now in pixels rather than a fraction of the tile size.
Update calc_tiles_even_split() with the same change, ensuring the overlap stays within the allowed size.

Update even_split tests
2023-12-17 15:10:50 +00:00
77b74264a8 Simplify docker compose setup (#5046)
## What type of PR is this? (check all applicable)

- [x] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes -
https://github.com/invoke-ai/InvokeAI/pull/5007#discussion_r1378792615
- [ ] No, because: 

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

Simplify Docker image creation and execution to a single script that
spins up the right service in the docker compose file.
## Related Tickets & Documents


- Depends on #5007

## QA Instructions, Screenshots, Recordings
N/A

## Added/updated tests?

- [ ] Yes
- [x] No : same tests should work.

## [optional] Are there any post deployment tasks we need to perform?

Not to my knowledge.
2023-12-17 17:10:56 +11:00
351078e8aa Merge branch 'main' into simplify-docker-compose-setup 2023-12-17 17:07:55 +11:00
b8354bd1a4 Merge branch 'main' into tiled-upscaling-graph 2023-12-16 19:09:28 +00:00
3b944b8af6 fix: build frontend in pypi-release workflow (#5298)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

## Description

This was missing, resulting in the 3.5.0rc1 having no frontend.

## Related Tickets & Documents


- Discord installer thread:
https://discord.com/channels/1020123559063990373/1149513695567810630/1185200427717898260
- Comments from here in the release chat:
https://discord.com/channels/1020123559063990373/1020123559831539744/1185004017521279007

## QA Instructions, Screenshots, Recordings

I've run this locally and it works (I commented out the final steps of
the workflow that do PyPi stuff to ensure I didn't accidentally deploy
something).

You can run the workflow locally with https://github.com/nektos/act.
Suggest using the `gh` CLI version; it's very easy to set up if you have
the GitHub CLI installed. Then you can run `gh act -W
.github/workflows/pypi-release.yml` to run the workflow locally in a
Docker image.

I don't know if this local action runner would actually release to PyPi -
as mentioned, I commented those steps out when testing - but it does
successfully do both frontend and backend builds.


## Merge Plan

This needs @lstein 's approval.


## [optional] Are there any post deployment tasks we need to perform?

Cut an RC2
2023-12-16 10:40:36 -05:00
b811c037bd Merge branch 'main' into fix/pypi-release-frontend-build 2023-12-16 10:36:03 -05:00
5bf61382a4 feat: add python dist as release artifact, as input to enable publish to pypi
- The release workflow never runs automatically. It must be manually kicked off.
- The release workflow has an input. When running it from the GH actions UI, you will see a "Publish build on PyPi" prompt. If this value is "true", the workflow will upload the build to PyPi, releasing it. If this is anything else (e.g. "false", the default), the workflow will build but not upload to PyPi.
- The `dist/` folder (where the python package is built) is uploaded as a workflow artifact as a zip file. This can be downloaded and inspected. This allows "dry" runs of the workflow.
- The workflow job and some steps have been renamed to clarify what they do
2023-12-16 20:02:09 +11:00
0f1c5f382a feat(installer): delete frontend build after creating installer
This prevents an empty `dist/` from breaking the app on startup.
2023-12-16 19:39:29 +11:00
4af1695c60 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-12-16 13:10:47 +11:00
df9a903a50 fix(ui): do not cache VAE decode on linear
The VAE decode on linear graphs was getting cached. This caused some unexpected behaviour around image outputs.

For example, say you ran the exact same graph twice. The first time, you get an image written to disk and added to gallery. The second time, the VAE decode is cached and no image file is created. But, the UI still gets the graph complete event and selects the first image in the gallery. The second run does not add an image to the gallery.

There are probably edge cases related to this - the UI does not expect this to happen. I'm not sure how to handle it any better in the UI.

The solution is to not cache VAE decode on the linear graphs, ever. If you run a graph twice in linear, you expect two images.

This simple change disables the node cache for terminal VAE decode nodes in all linear graphs, ensuring you always get images. If the graph was fully cached, all images after the first will of course be created very quickly.
2023-12-16 12:37:49 +11:00
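A toy illustration of the caching behaviour described in that commit (a hypothetical cache, not InvokeAI's actual node-cache implementation):

```python
import itertools

_counter = itertools.count()
_cache: dict[str, str] = {}

def vae_decode(node_key: str, use_cache: bool = True) -> str:
    """Sketch: a cached terminal VAE decode writes no new image file."""
    if use_cache and node_key in _cache:
        return _cache[node_key]  # cache hit: no image is written to disk
    image_name = f"image-{next(_counter)}.png"  # miss: decode and save
    _cache[node_key] = image_name
    return image_name

vae_decode("same-graph")                   # first run: writes image-0.png
vae_decode("same-graph")                   # second run: cached, gallery unchanged
vae_decode("same-graph", use_cache=False)  # caching off: writes image-1.png
```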
311be8f97d Merge branch 'main' into fix/pypi-release-frontend-build 2023-12-16 10:15:32 +11:00
3f970c8326 Don't copy extraneous paths into installer .zip 2023-12-15 11:27:21 -05:00
fc150acde5 [feat] Make model prober recognize yet another LoRA format (#5296)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

This adds a probe for the SDXL LoRA format found in the wild at
https://civitai.com/models/224641.
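
For context, the probe keys on the UNet `time_emb_proj` LoRA-down weights characteristic of this format; a minimal sketch based on the diff below (`looks_like_time_emb_proj_lora` is a hypothetical helper):

```python
def looks_like_time_emb_proj_lora(state_dict: dict) -> bool:
    """Sketch: this SDXL LoRA format is recognizable by its UNet
    time_emb_proj LoRA-down keys; its token vector length (1280) then
    maps to BaseModelType.StableDiffusionXL in the probe."""
    return any(
        key.startswith("lora_unet_") and "time_emb_proj.lora_down" in key
        for key in state_dict
    )
```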

## Related Tickets & Documents


See discord message at:
https://discord.com/channels/1020123559063990373/1149510134058471514/1184982133912113182

## QA Instructions, Screenshots, Recordings

Try installing the SDXL LoRA at the URL given above.
## Merge Plan

This can be merged when approved.
## Added/updated tests?

- [ ] Yes
- [X] No : we do not yet have a comprehensive suite of models to test
probing on.

## [optional] Are there any post deployment tasks we need to perform?
2023-12-15 09:49:51 -05:00
1615df3aa1 fix: use node 18, set working directory
- Node 20 has a problem with `pnpm`; set it to Node 18
- Set the working directory for the frontend commands
2023-12-16 00:32:31 +11:00
b2a8c45553 fix: build frontend in pypi-release workflow
This was missing, resulting in the 3.5.0rc1 having no frontend.
2023-12-15 23:56:31 +11:00
212dbaf9a2 fix comment 2023-12-15 00:25:27 -05:00
ac3cf48d7f make probe recognize lora format at https://civitai.com/models/224641 2023-12-15 00:25:27 -05:00
296060db63 Add cpu and rocm profiles. Let invokeai-nvidia service be the default. 2023-12-13 23:23:43 -05:00
d1d8ee71fc Simplify docker compose setup 2023-12-13 23:23:43 -05:00
612912a6c9 updated tests with a test for tile > image for calc_tiles_min_overlap() 2023-12-12 14:12:22 +00:00
bca2372280 updated comment 2023-12-12 14:02:28 +00:00
0b860582f0 remove unneeded if else 2023-12-12 14:00:06 +00:00
87ff380fe4 fix for calc_tiles_min_overlap when tile size is bigger than image size 2023-12-12 13:40:28 +00:00
39 changed files with 390 additions and 210 deletions

View File

@ -21,16 +21,16 @@ jobs:
if: github.event.pull_request.draft == false
runs-on: ubuntu-22.04
steps:
- name: Setup Node 20
- name: Setup Node 18
uses: actions/setup-node@v4
with:
node-version: '20'
node-version: '18'
- name: Checkout
uses: actions/checkout@v4
- name: Setup pnpm
uses: pnpm/action-setup@v2
with:
version: 8
version: '8.12.1'
- name: Install dependencies
run: 'pnpm install --prefer-frozen-lockfile'
- name: Typescript

View File

@ -1,13 +1,15 @@
name: PyPI Release
on:
push:
paths:
- 'invokeai/version/invokeai_version.py'
workflow_dispatch:
inputs:
publish_package:
description: 'Publish build on PyPi? [true/false]'
required: true
default: 'false'
jobs:
release:
build-and-release:
if: github.repository == 'invoke-ai/InvokeAI'
runs-on: ubuntu-22.04
env:
@ -15,19 +17,43 @@ jobs:
TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
TWINE_NON_INTERACTIVE: 1
steps:
- name: checkout sources
uses: actions/checkout@v3
- name: Checkout
uses: actions/checkout@v4
- name: install deps
- name: Setup Node 18
uses: actions/setup-node@v4
with:
node-version: '18'
- name: Setup pnpm
uses: pnpm/action-setup@v2
with:
version: '8.12.1'
- name: Install frontend dependencies
run: pnpm install --prefer-frozen-lockfile
working-directory: invokeai/frontend/web
- name: Build frontend
run: pnpm run build
working-directory: invokeai/frontend/web
- name: Install python dependencies
run: pip install --upgrade build twine
- name: build package
- name: Build python package
run: python3 -m build
- name: check distribution
- name: Upload build as workflow artifact
uses: actions/upload-artifact@v4
with:
name: dist
path: dist
- name: Check distribution
run: twine check dist/*
- name: check PyPI versions
- name: Check PyPI versions
if: github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/heads/release/')
run: |
pip install --upgrade requests
@ -36,6 +62,6 @@ jobs:
EXISTS=scripts.pypi_helper.local_on_pypi(); \
print(f'PACKAGE_EXISTS={EXISTS}')" >> $GITHUB_ENV
- name: upload package
if: env.PACKAGE_EXISTS == 'False' && env.TWINE_PASSWORD != ''
- name: Publish build on PyPi
if: env.PACKAGE_EXISTS == 'False' && env.TWINE_PASSWORD != '' && github.event.inputs.publish_package == 'true'
run: twine upload dist/*

View File

@ -59,14 +59,16 @@ RUN --mount=type=cache,target=/root/.cache/pip \
# #### Build the Web UI ------------------------------------
FROM node:18 AS web-builder
FROM node:18-slim AS web-builder
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable
WORKDIR /build
COPY invokeai/frontend/web/ ./
RUN --mount=type=cache,target=/usr/lib/node_modules \
npm install --include dev
RUN --mount=type=cache,target=/usr/lib/node_modules \
yarn vite build
RUN --mount=type=cache,target=/pnpm/store \
pnpm install --frozen-lockfile
RUN pnpm run build
#### Runtime stage ---------------------------------------

View File

@ -23,7 +23,7 @@ This is done via Docker Desktop preferences
1. Make a copy of `env.sample` and name it `.env` (`cp env.sample .env` (Mac/Linux) or `copy example.env .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
a. the desired location of the InvokeAI runtime directory, or
b. an existing, v3.0.0 compatible runtime directory.
1. `docker compose up`
1. Execute `run.sh`
The image will be built automatically if needed.
@ -39,7 +39,7 @@ The Docker daemon on the system must be already set up to use the GPU. In case o
## Customize
Check the `.env.sample` file. It contains some environment variables for running in Docker. Copy it, name it `.env`, and fill it in with your own values. Next time you run `docker compose up`, your custom values will be used.
Check the `.env.sample` file. It contains some environment variables for running in Docker. Copy it, name it `.env`, and fill it in with your own values. Next time you run `run.sh`, your custom values will be used.
You can also set these values in `docker-compose.yml` directly, but `.env` will help avoid conflicts when code is updated.

View File

@ -1,11 +0,0 @@
#!/usr/bin/env bash
set -e
build_args=""
[[ -f ".env" ]] && build_args=$(awk '$1 ~ /\=[^$]/ {print "--build-arg " $0 " "}' .env)
echo "docker compose build args:"
echo $build_args
docker compose build $build_args

View File

@ -2,23 +2,8 @@
version: '3.8'
services:
invokeai:
x-invokeai: &invokeai
image: "local/invokeai:latest"
# edit below to run on a container runtime other than nvidia-container-runtime.
# not yet tested with rocm/AMD GPUs
# Comment out the "deploy" section to run on CPU only
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
# For AMD support, comment out the deploy section above and uncomment the devices section below:
#devices:
# - /dev/kfd:/dev/kfd
# - /dev/dri:/dev/dri
build:
context: ..
dockerfile: docker/Dockerfile
@ -50,3 +35,27 @@ services:
# - |
# invokeai-model-install --yes --default-only --config_file ${INVOKEAI_ROOT}/config_custom.yaml
# invokeai-nodes-web --host 0.0.0.0
services:
invokeai-nvidia:
<<: *invokeai
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
invokeai-cpu:
<<: *invokeai
profiles:
- cpu
invokeai-rocm:
<<: *invokeai
devices:
- /dev/kfd:/dev/kfd
- /dev/dri:/dev/dri
profiles:
- rocm

View File

@ -1,11 +1,28 @@
#!/usr/bin/env bash
set -e
# This script is provided for backwards compatibility with the old docker setup.
# It doesn't do much aside from wrapping the usual docker compose CLI.
run() {
local scriptdir=$(dirname "${BASH_SOURCE[0]}")
cd "$scriptdir" || exit 1
SCRIPTDIR=$(dirname "${BASH_SOURCE[0]}")
cd "$SCRIPTDIR" || exit 1
local build_args=""
local profile=""
docker compose up -d
docker compose logs -f
[[ -f ".env" ]] &&
build_args=$(awk '$1 ~ /=[^$]/ && $0 !~ /^#/ {print "--build-arg " $0 " "}' .env) &&
profile="$(awk -F '=' '/GPU_DRIVER/ {print $2}' .env)"
local service_name="invokeai-$profile"
printf "%s\n" "docker compose build args:"
printf "%s\n" "$build_args"
docker compose build $build_args
unset build_args
printf "%s\n" "starting service $service_name"
docker compose --profile "$profile" up -d "$service_name"
docker compose logs -f
}
run

View File

@ -11,7 +11,7 @@ complex functionality.
InvokeAI Nodes can be found in the `invokeai/app/invocations` directory. These can be used as examples to create your own nodes.
New nodes should be added to a subfolder in `nodes` direction found at the root level of the InvokeAI installation location. Nodes added to this folder will be able to be used upon application startup.
New nodes should be added to a subfolder in the `nodes` directory found at the root level of the InvokeAI installation location. Nodes added to this folder will be imported upon application startup.
Example `nodes` subfolder structure:
```py

View File

@ -91,9 +91,11 @@ rm -rf InvokeAI-Installer
# copy content
mkdir InvokeAI-Installer
for f in templates lib *.txt *.reg; do
for f in templates *.txt *.reg; do
cp -r ${f} InvokeAI-Installer/
done
mkdir InvokeAI-Installer/lib
cp lib/*.py InvokeAI-Installer/lib
# Move the wheel
mv dist/*.whl InvokeAI-Installer/lib/
@ -111,6 +113,6 @@ cp WinLongPathsEnabled.reg InvokeAI-Installer/
zip -r InvokeAI-installer-$VERSION.zip InvokeAI-Installer
# clean up
rm -rf InvokeAI-Installer tmp dist
rm -rf InvokeAI-Installer tmp dist ../invokeai/frontend/web/dist/
exit 0

View File

@ -26,7 +26,7 @@ from invokeai.backend.model_manager.config import (
from ..dependencies import ApiDependencies
model_records_router = APIRouter(prefix="/v1/model/record", tags=["model_manager_v2"])
model_records_router = APIRouter(prefix="/v1/model/record", tags=["model_manager_v2_unstable"])
class ModelsList(BaseModel):

View File

@ -77,7 +77,7 @@ class CalculateImageTilesInvocation(BaseInvocation):
title="Calculate Image Tiles Even Split",
tags=["tiles"],
category="tiles",
version="1.0.0",
version="1.1.0",
classification=Classification.Beta,
)
class CalculateImageTilesEvenSplitInvocation(BaseInvocation):
@ -97,11 +97,11 @@ class CalculateImageTilesEvenSplitInvocation(BaseInvocation):
ge=1,
description="Number of tiles to divide image into on the y axis",
)
overlap_fraction: float = InputField(
default=0.25,
overlap: int = InputField(
default=128,
ge=0,
lt=1,
description="Overlap between adjacent tiles as a fraction of the tile's dimensions (0-1)",
multiple_of=8,
description="The overlap, in pixels, between adjacent tiles.",
)
def invoke(self, context: InvocationContext) -> CalculateImageTilesOutput:
@ -110,7 +110,7 @@ class CalculateImageTilesEvenSplitInvocation(BaseInvocation):
image_width=self.image_width,
num_tiles_x=self.num_tiles_x,
num_tiles_y=self.num_tiles_y,
overlap_fraction=self.overlap_fraction,
overlap=self.overlap,
)
return CalculateImageTilesOutput(tiles=tiles)

View File

@ -5,6 +5,7 @@ from invokeai.app.services.image_files.image_files_base import ImageFileStorageB
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_1 import build_migration_1
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_2 import build_migration_2
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_3 import build_migration_3
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_impl import SqliteMigrator
@ -27,6 +28,7 @@ def init_db(config: InvokeAIAppConfig, logger: Logger, image_files: ImageFileSto
migrator = SqliteMigrator(db=db)
migrator.register_migration(build_migration_1())
migrator.register_migration(build_migration_2(image_files=image_files, logger=logger))
migrator.register_migration(build_migration_3())
migrator.run_migrations()
return db

View File

@ -11,6 +11,8 @@ from invokeai.app.services.workflow_records.workflow_records_common import (
UnsafeWorkflowWithVersionValidator,
)
from .util.migrate_yaml_config_1 import MigrateModelYamlToDb1
class Migration2Callback:
def __init__(self, image_files: ImageFileStorageBase, logger: Logger):
@ -24,6 +26,7 @@ class Migration2Callback:
self._add_workflow_library(cursor)
self._drop_model_manager_metadata(cursor)
self._recreate_model_config(cursor)
self._migrate_model_config_records(cursor)
self._migrate_embedded_workflows(cursor)
def _add_images_has_workflow(self, cursor: sqlite3.Cursor) -> None:
@ -131,6 +134,11 @@ class Migration2Callback:
"""
)
def _migrate_model_config_records(self, cursor: sqlite3.Cursor) -> None:
"""After updating the model config table, we repopulate it."""
model_record_migrator = MigrateModelYamlToDb1(cursor)
model_record_migrator.migrate()
def _migrate_embedded_workflows(self, cursor: sqlite3.Cursor) -> None:
"""
In the v3.5.0 release, InvokeAI changed how it handles embedded workflows. The `images` table in

View File

@ -0,0 +1,75 @@
import sqlite3
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration
from .util.migrate_yaml_config_1 import MigrateModelYamlToDb1
class Migration3Callback:
def __init__(self) -> None:
pass
def __call__(self, cursor: sqlite3.Cursor) -> None:
self._drop_model_manager_metadata(cursor)
self._recreate_model_config(cursor)
self._migrate_model_config_records(cursor)
def _drop_model_manager_metadata(self, cursor: sqlite3.Cursor) -> None:
"""Drops the `model_manager_metadata` table."""
cursor.execute("DROP TABLE IF EXISTS model_manager_metadata;")
def _recreate_model_config(self, cursor: sqlite3.Cursor) -> None:
"""
Drops the `model_config` table, recreating it.
In 3.4.0, this table used explicit columns but was changed to use json_extract in 3.5.0.
Because this table is not used in production, we are able to simply drop it and recreate it.
"""
cursor.execute("DROP TABLE IF EXISTS model_config;")
cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS model_config (
id TEXT NOT NULL PRIMARY KEY,
-- The next 3 fields are enums in python, unrestricted string here
base TEXT GENERATED ALWAYS as (json_extract(config, '$.base')) VIRTUAL NOT NULL,
type TEXT GENERATED ALWAYS as (json_extract(config, '$.type')) VIRTUAL NOT NULL,
name TEXT GENERATED ALWAYS as (json_extract(config, '$.name')) VIRTUAL NOT NULL,
path TEXT GENERATED ALWAYS as (json_extract(config, '$.path')) VIRTUAL NOT NULL,
format TEXT GENERATED ALWAYS as (json_extract(config, '$.format')) VIRTUAL NOT NULL,
original_hash TEXT, -- could be null
-- Serialized JSON representation of the whole config object,
-- which will contain additional fields from subclasses
config TEXT NOT NULL,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- unique constraint on combo of name, base and type
UNIQUE(name, base, type)
);
"""
)
def _migrate_model_config_records(self, cursor: sqlite3.Cursor) -> None:
"""After updating the model config table, we repopulate it."""
model_record_migrator = MigrateModelYamlToDb1(cursor)
model_record_migrator.migrate()
def build_migration_3() -> Migration:
"""
Build the migration from database version 2 to 3.
This migration does the following:
- Drops the `model_config` table, recreating it
- Migrates data from `models.yaml` into the `model_config` table
"""
migration_3 = Migration(
from_version=2,
to_version=3,
callback=Migration3Callback(),
)
return migration_3

View File

@ -1,8 +1,12 @@
# Copyright (c) 2023 Lincoln D. Stein
"""Migrate from the InvokeAI v2 models.yaml format to the v3 sqlite format."""
import json
import sqlite3
from hashlib import sha1
from logging import Logger
from pathlib import Path
from typing import Optional
from omegaconf import DictConfig, OmegaConf
from pydantic import TypeAdapter
@ -10,13 +14,12 @@ from pydantic import TypeAdapter
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.model_records import (
DuplicateModelException,
ModelRecordServiceSQL,
UnknownModelException,
)
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
ModelConfigFactory,
ModelType,
)
from invokeai.backend.model_manager.hash import FastModelHash
@ -25,9 +28,9 @@ from invokeai.backend.util.logging import InvokeAILogger
ModelsValidator = TypeAdapter(AnyModelConfig)
class MigrateModelYamlToDb:
class MigrateModelYamlToDb1:
"""
Migrate the InvokeAI models.yaml format (VERSION 3.0.0) to SQL3 database format (VERSION 3.2.0)
Migrate the InvokeAI models.yaml format (VERSION 3.0.0) to SQL3 database format (VERSION 3.5.0).
The class has one externally useful method, migrate(), which scans the
current models.yaml file and imports all its entries into invokeai.db.
@ -41,17 +44,13 @@ class MigrateModelYamlToDb:
config: InvokeAIAppConfig
logger: Logger
cursor: sqlite3.Cursor
def __init__(self) -> None:
def __init__(self, cursor: Optional[sqlite3.Cursor] = None) -> None:
self.config = InvokeAIAppConfig.get_config()
self.config.parse_args()
self.logger = InvokeAILogger.get_logger()
def get_db(self) -> ModelRecordServiceSQL:
"""Fetch the sqlite3 database for this installation."""
db_path = None if self.config.use_memory_db else self.config.db_path
db = SqliteDatabase(db_path=db_path, logger=self.logger, verbose=self.config.log_sql)
return ModelRecordServiceSQL(db)
self.cursor = cursor
def get_yaml(self) -> DictConfig:
"""Fetch the models.yaml DictConfig for this installation."""
@ -62,8 +61,10 @@ class MigrateModelYamlToDb:
def migrate(self) -> None:
"""Do the migration from models.yaml to invokeai.db."""
db = self.get_db()
yaml = self.get_yaml()
try:
yaml = self.get_yaml()
except OSError:
return
for model_key, stanza in yaml.items():
if model_key == "__metadata__":
@ -86,22 +87,62 @@ class MigrateModelYamlToDb:
new_config: AnyModelConfig = ModelsValidator.validate_python(stanza) # type: ignore # see https://github.com/pydantic/pydantic/discussions/7094
try:
if original_record := db.search_by_path(stanza.path):
key = original_record[0].key
if original_record := self._search_by_path(stanza.path):
key = original_record.key
self.logger.info(f"Updating model {model_name} with information from models.yaml using key {key}")
db.update_model(key, new_config)
self._update_model(key, new_config)
else:
self.logger.info(f"Adding model {model_name} with key {model_key}")
db.add_model(new_key, new_config)
self._add_model(new_key, new_config)
except DuplicateModelException:
self.logger.warning(f"Model {model_name} is already in the database")
except UnknownModelException:
self.logger.warning(f"Model at {stanza.path} could not be found in database")
def _search_by_path(self, path: Path) -> Optional[AnyModelConfig]:
self.cursor.execute(
"""--sql
SELECT config FROM model_config
WHERE path=?;
""",
(str(path),),
)
results = [ModelConfigFactory.make_config(json.loads(x[0])) for x in self.cursor.fetchall()]
return results[0] if results else None
def main():
MigrateModelYamlToDb().migrate()
def _update_model(self, key: str, config: AnyModelConfig) -> None:
record = ModelConfigFactory.make_config(config, key=key) # ensure it is a valid config object
json_serialized = record.model_dump_json() # and turn it into a json string.
self.cursor.execute(
"""--sql
UPDATE model_config
SET
config=?
WHERE id=?;
""",
(json_serialized, key),
)
if self.cursor.rowcount == 0:
raise UnknownModelException("model not found")
if __name__ == "__main__":
main()
def _add_model(self, key: str, config: AnyModelConfig) -> None:
record = ModelConfigFactory.make_config(config, key=key) # ensure it is a valid config object.
json_serialized = record.model_dump_json() # and turn it into a json string.
try:
self.cursor.execute(
"""--sql
INSERT INTO model_config (
id,
original_hash,
config
)
VALUES (?,?,?);
""",
(
key,
record.original_hash,
json_serialized,
),
)
except sqlite3.IntegrityError as exc:
raise DuplicateModelException(f"{record.name}: model is already in database") from exc

View File

@ -389,7 +389,7 @@ class TextualInversionCheckpointProbe(CheckpointProbeBase):
elif "clip_g" in checkpoint:
token_dim = checkpoint["clip_g"].shape[-1]
else:
token_dim = list(checkpoint.values())[0].shape[0]
token_dim = list(checkpoint.values())[0].shape[-1]
if token_dim == 768:
return BaseModelType.StableDiffusion1
elif token_dim == 1024:

View File

@ -9,7 +9,7 @@ def lora_token_vector_length(checkpoint: dict) -> int:
:param checkpoint: The checkpoint
"""
def _get_shape_1(key, tensor, checkpoint):
def _get_shape_1(key: str, tensor, checkpoint) -> int:
lora_token_vector_length = None
if "." not in key:
@ -57,6 +57,10 @@ def lora_token_vector_length(checkpoint: dict) -> int:
for key, tensor in checkpoint.items():
if key.startswith("lora_unet_") and ("_attn2_to_k." in key or "_attn2_to_v." in key):
lora_token_vector_length = _get_shape_1(key, tensor, checkpoint)
elif key.startswith("lora_unet_") and (
"time_emb_proj.lora_down" in key
): # recognizes format at https://civitai.com/models/224641
lora_token_vector_length = _get_shape_1(key, tensor, checkpoint)
elif key.startswith("lora_te") and "_self_attn_" in key:
tmp_length = _get_shape_1(key, tensor, checkpoint)
if key.startswith("lora_te_"):

View File

@ -400,6 +400,8 @@ class LoRACheckpointProbe(CheckpointProbeBase):
return BaseModelType.StableDiffusion1
elif token_vector_length == 1024:
return BaseModelType.StableDiffusion2
elif token_vector_length == 1280:
return BaseModelType.StableDiffusionXL # recognizes format at https://civitai.com/models/224641
elif token_vector_length == 2048:
return BaseModelType.StableDiffusionXL
else:

View File

@ -102,7 +102,7 @@ def calc_tiles_with_overlap(
def calc_tiles_even_split(
image_height: int, image_width: int, num_tiles_x: int, num_tiles_y: int, overlap_fraction: float = 0
image_height: int, image_width: int, num_tiles_x: int, num_tiles_y: int, overlap: int = 0
) -> list[Tile]:
"""Calculate the tile coordinates for a given image shape with the number of tiles requested.
@ -111,31 +111,35 @@ def calc_tiles_even_split(
image_width (int): The image width in px.
num_x_tiles (int): The number of tile to split the image into on the X-axis.
num_y_tiles (int): The number of tile to split the image into on the Y-axis.
overlap_fraction (float, optional): The target overlap as fraction of the tiles size. Defaults to 0.
overlap (int, optional): The overlap between adjacent tiles in pixels. Defaults to 0.
Returns:
list[Tile]: A list of tiles that cover the image shape. Ordered from left-to-right, top-to-bottom.
"""
# Ensure tile size is divisible by 8
# Ensure the image is divisible by LATENT_SCALE_FACTOR
if image_width % LATENT_SCALE_FACTOR != 0 or image_height % LATENT_SCALE_FACTOR != 0:
raise ValueError(f"image size (({image_width}, {image_height})) must be divisible by {LATENT_SCALE_FACTOR}")
# Calculate the overlap size based on the percentage and adjust it to be divisible by 8 (rounding up)
overlap_x = LATENT_SCALE_FACTOR * math.ceil(
int((image_width / num_tiles_x) * overlap_fraction) / LATENT_SCALE_FACTOR
)
overlap_y = LATENT_SCALE_FACTOR * math.ceil(
int((image_height / num_tiles_y) * overlap_fraction) / LATENT_SCALE_FACTOR
)
# Calculate the tile size based on the number of tiles and overlap, and ensure it's divisible by 8 (rounding down)
tile_size_x = LATENT_SCALE_FACTOR * math.floor(
((image_width + overlap_x * (num_tiles_x - 1)) // num_tiles_x) / LATENT_SCALE_FACTOR
)
tile_size_y = LATENT_SCALE_FACTOR * math.floor(
((image_height + overlap_y * (num_tiles_y - 1)) // num_tiles_y) / LATENT_SCALE_FACTOR
)
if num_tiles_x > 1:
# ensure the overlap does not exceed the maximum allowed; if we only have 1 tile we don't care about overlap
assert overlap <= image_width - (LATENT_SCALE_FACTOR * (num_tiles_x - 1))
tile_size_x = LATENT_SCALE_FACTOR * math.floor(
((image_width + overlap * (num_tiles_x - 1)) // num_tiles_x) / LATENT_SCALE_FACTOR
)
assert overlap < tile_size_x
else:
tile_size_x = image_width
if num_tiles_y > 1:
# ensure the overlap does not exceed the maximum allowed; if we only have 1 tile we don't care about overlap
assert overlap <= image_height - (LATENT_SCALE_FACTOR * (num_tiles_y - 1))
tile_size_y = LATENT_SCALE_FACTOR * math.floor(
((image_height + overlap * (num_tiles_y - 1)) // num_tiles_y) / LATENT_SCALE_FACTOR
)
assert overlap < tile_size_y
else:
tile_size_y = image_height
# tiles[y * num_tiles_x + x] is the tile for the y'th row, x'th column.
tiles: list[Tile] = []
@ -143,7 +147,7 @@ def calc_tiles_even_split(
# Calculate tile coordinates. (Ignore overlap values for now.)
for tile_idx_y in range(num_tiles_y):
# Calculate the top and bottom of the row
top = tile_idx_y * (tile_size_y - overlap_y)
top = tile_idx_y * (tile_size_y - overlap)
bottom = min(top + tile_size_y, image_height)
# For the last row adjust bottom to be the height of the image
if tile_idx_y == num_tiles_y - 1:
@ -151,7 +155,7 @@ def calc_tiles_even_split(
for tile_idx_x in range(num_tiles_x):
# Calculate the left & right coordinate of each tile
left = tile_idx_x * (tile_size_x - overlap_x)
left = tile_idx_x * (tile_size_x - overlap)
right = min(left + tile_size_x, image_width)
# For the last tile in the row adjust right to be the width of the image
if tile_idx_x == num_tiles_x - 1:

View File

@ -4,6 +4,7 @@ pip install <path_to_git_source>.
"""
import os
import platform
from distutils.version import LooseVersion
import pkg_resources
import psutil
@ -31,10 +32,6 @@ else:
console = Console(style=Style(color="grey74", bgcolor="grey19"))
def get_versions() -> dict:
return requests.get(url=INVOKE_AI_REL).json()
def invokeai_is_running() -> bool:
for p in psutil.process_iter():
try:
@ -50,6 +47,20 @@ def invokeai_is_running() -> bool:
return False
def get_pypi_versions():
url = "https://pypi.org/pypi/invokeai/json"
try:
data = requests.get(url).json()
except Exception:
raise Exception("Unable to fetch version information from PyPi")
versions = list(data["releases"].keys())
versions.sort(key=LooseVersion, reverse=True)
latest_version = [v for v in versions if "rc" not in v][0]
latest_release_candidate = [v for v in versions if "rc" in v][0]
return latest_version, latest_release_candidate, versions
def welcome(latest_release: str, latest_prerelease: str):
@group()
def text():
@ -63,8 +74,7 @@ def welcome(latest_release: str, latest_prerelease: str):
yield "[bold yellow]Options:"
yield f"""[1] Update to the latest [bold]official release[/bold] ([italic]{latest_release}[/italic])
[2] Update to the latest [bold]pre-release[/bold] (may be buggy; caveat emptor!) ([italic]{latest_prerelease}[/italic])
[3] Manually enter the [bold]tag name[/bold] for the version you wish to update to
[4] Manually enter the [bold]branch name[/bold] for the version you wish to update to"""
[3] Manually enter the [bold]version[/bold] you wish to update to"""
console.rule()
print(
@ -92,44 +102,35 @@ def get_extras():
def main():
versions = get_versions()
released_versions = [x for x in versions if not (x["draft"] or x["prerelease"])]
prerelease_versions = [x for x in versions if not x["draft"] and x["prerelease"]]
latest_release = released_versions[0]["tag_name"] if len(released_versions) else None
latest_prerelease = prerelease_versions[0]["tag_name"] if len(prerelease_versions) else None
if invokeai_is_running():
print(":exclamation: [bold red]Please terminate all running instances of InvokeAI before updating.[/red bold]")
input("Press any key to continue...")
return
latest_release, latest_prerelease, versions = get_pypi_versions()
welcome(latest_release, latest_prerelease)
tag = None
branch = None
release = None
choice = Prompt.ask("Choice:", choices=["1", "2", "3", "4"], default="1")
release = latest_release
choice = Prompt.ask("Choice:", choices=["1", "2", "3"], default="1")
if choice == "1":
release = latest_release
elif choice == "2":
release = latest_prerelease
elif choice == "3":
while not tag:
tag = Prompt.ask("Enter an InvokeAI tag name")
elif choice == "4":
while not branch:
branch = Prompt.ask("Enter an InvokeAI branch name")
while True:
release = Prompt.ask("Enter an InvokeAI version")
release = release.strip()
if release in versions:
break
print(f":exclamation: [bold red]'{release}' is not a recognized InvokeAI release.[/red bold]")
extras = get_extras()
print(f":crossed_fingers: Upgrading to [yellow]{tag or release or branch}[/yellow]")
if release:
cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_SRC}/{release}.zip" --use-pep517 --upgrade'
elif tag:
cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_TAG}/{tag}.zip" --use-pep517 --upgrade'
else:
cmd = f'pip install "invokeai{extras} @ {INVOKE_AI_BRANCH}/{branch}.zip" --use-pep517 --upgrade'
print(f":crossed_fingers: Upgrading to [yellow]{release}[/yellow]")
cmd = f'pip install "invokeai{extras}=={release}" --use-pep517 --upgrade'
print("")
print("")
if os.system(cmd) == 0:

View File

@ -727,9 +727,6 @@
"showMinimapnodes": "Mostrar el minimapa",
"reloadNodeTemplates": "Recargar las plantillas de nodos",
"loadWorkflow": "Cargar el flujo de trabajo",
"resetWorkflow": "Reiniciar e flujo de trabajo",
"resetWorkflowDesc": "¿Está seguro de que deseas restablecer este flujo de trabajo?",
"resetWorkflowDesc2": "Al reiniciar el flujo de trabajo se borrarán todos los nodos, aristas y detalles del flujo de trabajo.",
"downloadWorkflow": "Descargar el flujo de trabajo en un archivo JSON"
}
}

View File

@ -898,11 +898,8 @@
"zoomInNodes": "Ingrandire",
"fitViewportNodes": "Adatta vista",
"showGraphNodes": "Mostra sovrapposizione grafico",
"resetWorkflowDesc2": "Il ripristino dell'editor del flusso di lavoro cancellerà tutti i nodi, le connessioni e i dettagli del flusso di lavoro. I flussi di lavoro salvati non saranno interessati.",
"reloadNodeTemplates": "Ricarica i modelli di nodo",
"loadWorkflow": "Importa flusso di lavoro JSON",
"resetWorkflow": "Reimposta l'editor del flusso di lavoro",
"resetWorkflowDesc": "Sei sicuro di voler reimpostare l'editor del flusso di lavoro?",
"downloadWorkflow": "Esporta flusso di lavoro JSON",
"scheduler": "Campionatore",
"addNode": "Aggiungi nodo",
@ -1112,7 +1109,10 @@
"deletedInvalidEdge": "Eliminata connessione non valida {{source}} -> {{target}}",
"unknownInput": "Input sconosciuto: {{name}}",
"prototypeDesc": "Questa invocazione è un prototipo. Potrebbe subire modifiche sostanziali durante gli aggiornamenti dell'app e potrebbe essere rimossa in qualsiasi momento.",
"betaDesc": "Questa invocazione è in versione beta. Fino a quando non sarà stabile, potrebbe subire modifiche importanti durante gli aggiornamenti dell'app. Abbiamo intenzione di supportare questa invocazione a lungo termine."
"betaDesc": "Questa invocazione è in versione beta. Fino a quando non sarà stabile, potrebbe subire modifiche importanti durante gli aggiornamenti dell'app. Abbiamo intenzione di supportare questa invocazione a lungo termine.",
"newWorkflow": "Nuovo flusso di lavoro",
"newWorkflowDesc": "Creare un nuovo flusso di lavoro?",
"newWorkflowDesc2": "Il flusso di lavoro attuale presenta modifiche non salvate."
},
"boards": {
"autoAddBoard": "Aggiungi automaticamente bacheca",
@ -1619,7 +1619,6 @@
"saveWorkflow": "Salva flusso di lavoro",
"openWorkflow": "Apri flusso di lavoro",
"clearWorkflowSearchFilter": "Cancella il filtro di ricerca del flusso di lavoro",
"workflowEditorReset": "Reimpostazione dell'editor del flusso di lavoro",
"workflowLibrary": "Libreria",
"noRecentWorkflows": "Nessun flusso di lavoro recente",
"workflowSaved": "Flusso di lavoro salvato",
@ -1633,7 +1632,10 @@
"deleteWorkflow": "Elimina flusso di lavoro",
"workflows": "Flussi di lavoro",
"noDescription": "Nessuna descrizione",
"userWorkflows": "I miei flussi di lavoro"
"userWorkflows": "I miei flussi di lavoro",
"newWorkflowCreated": "Nuovo flusso di lavoro creato",
"downloadWorkflow": "Salva su file",
"uploadWorkflow": "Carica da file"
},
"app": {
"storeNotInitialized": "Il negozio non è inizializzato"

View File

@ -844,9 +844,6 @@
"hideLegendNodes": "Typelegende veld verbergen",
"reloadNodeTemplates": "Herlaad knooppuntsjablonen",
"loadWorkflow": "Laad werkstroom",
"resetWorkflow": "Herstel werkstroom",
"resetWorkflowDesc": "Weet je zeker dat je deze werkstroom wilt herstellen?",
"resetWorkflowDesc2": "Herstel van een werkstroom haalt alle knooppunten, randen en werkstroomdetails weg.",
"downloadWorkflow": "Download JSON van werkstroom",
"booleanPolymorphicDescription": "Een verzameling Booleanse waarden.",
"scheduler": "Planner",

View File

@ -909,9 +909,6 @@
"hideLegendNodes": "Скрыть тип поля",
"showMinimapnodes": "Показать миникарту",
"loadWorkflow": "Загрузить рабочий процесс",
"resetWorkflowDesc2": "Сброс рабочего процесса очистит все узлы, ребра и детали рабочего процесса.",
"resetWorkflow": "Сбросить рабочий процесс",
"resetWorkflowDesc": "Вы уверены, что хотите сбросить этот рабочий процесс?",
"reloadNodeTemplates": "Перезагрузить шаблоны узлов",
"downloadWorkflow": "Скачать JSON рабочего процесса",
"booleanPolymorphicDescription": "Коллекция логических значений.",
@ -1599,7 +1596,6 @@
"saveWorkflow": "Сохранить рабочий процесс",
"openWorkflow": "Открытый рабочий процесс",
"clearWorkflowSearchFilter": "Очистить фильтр поиска рабочих процессов",
"workflowEditorReset": "Сброс редактора рабочих процессов",
"workflowLibrary": "Библиотека",
"downloadWorkflow": "Скачать рабочий процесс",
"noRecentWorkflows": "Нет недавних рабочих процессов",

View File

@ -892,11 +892,8 @@
},
"nodes": {
"zoomInNodes": "放大",
"resetWorkflowDesc": "是否确定要重置工作流编辑器?",
"resetWorkflow": "重置工作流编辑器",
"loadWorkflow": "加载工作流",
"zoomOutNodes": "缩小",
"resetWorkflowDesc2": "重置工作流编辑器将清除所有节点、边际和节点图详情。不影响已保存的工作流。",
"reloadNodeTemplates": "重载节点模板",
"hideGraphNodes": "隐藏节点图信息",
"fitViewportNodes": "自适应视图",
@ -1637,7 +1634,6 @@
"saveWorkflow": "保存工作流",
"openWorkflow": "打开工作流",
"clearWorkflowSearchFilter": "清除工作流检索过滤器",
"workflowEditorReset": "工作流编辑器重置",
"workflowLibrary": "工作流库",
"downloadWorkflow": "下载工作流",
"noRecentWorkflows": "无最近工作流",

View File

@ -144,6 +144,7 @@ export const buildCanvasImageToImageGraph = (
type: 'l2i',
id: CANVAS_OUTPUT,
is_intermediate,
use_cache: false,
},
},
edges: [
@ -255,6 +256,7 @@ export const buildCanvasImageToImageGraph = (
is_intermediate,
width: width,
height: height,
use_cache: false,
};
graph.edges.push(
@ -295,6 +297,7 @@ export const buildCanvasImageToImageGraph = (
id: CANVAS_OUTPUT,
is_intermediate,
fp32,
use_cache: false,
};
(graph.nodes[IMAGE_TO_LATENTS] as ImageToLatentsInvocation).image =

View File

@ -191,6 +191,7 @@ export const buildCanvasInpaintGraph = (
id: CANVAS_OUTPUT,
is_intermediate,
reference: canvasInitImage,
use_cache: false,
},
},
edges: [

View File

@ -199,6 +199,7 @@ export const buildCanvasOutpaintGraph = (
type: 'color_correct',
id: CANVAS_OUTPUT,
is_intermediate,
use_cache: false,
},
},
edges: [

View File

@ -266,6 +266,7 @@ export const buildCanvasSDXLImageToImageGraph = (
is_intermediate,
width: width,
height: height,
use_cache: false,
};
graph.edges.push(
@ -306,6 +307,7 @@ export const buildCanvasSDXLImageToImageGraph = (
id: CANVAS_OUTPUT,
is_intermediate,
fp32,
use_cache: false,
};
(graph.nodes[IMAGE_TO_LATENTS] as ImageToLatentsInvocation).image =

View File

@ -196,6 +196,7 @@ export const buildCanvasSDXLInpaintGraph = (
id: CANVAS_OUTPUT,
is_intermediate,
reference: canvasInitImage,
use_cache: false,
},
},
edges: [

View File

@ -204,6 +204,7 @@ export const buildCanvasSDXLOutpaintGraph = (
type: 'color_correct',
id: CANVAS_OUTPUT,
is_intermediate,
use_cache: false,
},
},
edges: [

View File

@ -258,6 +258,7 @@ export const buildCanvasSDXLTextToImageGraph = (
is_intermediate,
width: width,
height: height,
use_cache: false,
};
graph.edges.push(
@ -288,6 +289,7 @@ export const buildCanvasSDXLTextToImageGraph = (
id: CANVAS_OUTPUT,
is_intermediate,
fp32,
use_cache: false,
};
graph.edges.push({

View File

@ -246,6 +246,7 @@ export const buildCanvasTextToImageGraph = (
is_intermediate,
width: width,
height: height,
use_cache: false,
};
graph.edges.push(
@ -276,6 +277,7 @@ export const buildCanvasTextToImageGraph = (
id: CANVAS_OUTPUT,
is_intermediate,
fp32,
use_cache: false,
};
graph.edges.push({

View File

@ -143,6 +143,7 @@ export const buildLinearImageToImageGraph = (
// },
fp32,
is_intermediate,
use_cache: false,
},
},
edges: [

View File

@ -154,6 +154,7 @@ export const buildLinearSDXLImageToImageGraph = (
// },
fp32,
is_intermediate,
use_cache: false,
},
},
edges: [

View File

@ -127,6 +127,7 @@ export const buildLinearSDXLTextToImageGraph = (
id: LATENTS_TO_IMAGE,
fp32,
is_intermediate,
use_cache: false,
},
},
edges: [

View File

@ -146,6 +146,7 @@ export const buildLinearTextToImageGraph = (
id: LATENTS_TO_IMAGE,
fp32,
is_intermediate,
use_cache: false,
},
},
edges: [

View File

@ -138,7 +138,6 @@ dependencies = [
"invokeai-node-web" = "invokeai.app.api_app:invoke_api"
"invokeai-import-images" = "invokeai.frontend.install.import_images:main"
"invokeai-db-maintenance" = "invokeai.backend.util.db_maintenance:main"
"invokeai-migrate-models-to-db" = "invokeai.backend.model_manager.migrate_to_db:main"
[project.urls]
"Homepage" = "https://invoke-ai.github.io/InvokeAI/"

View File

@ -305,9 +305,7 @@ def test_calc_tiles_min_overlap_input_validation(
def test_calc_tiles_even_split_single_tile():
"""Test calc_tiles_even_split() behavior when a single tile covers the image."""
tiles = calc_tiles_even_split(
image_height=512, image_width=1024, num_tiles_x=1, num_tiles_y=1, overlap_fraction=0.25
)
tiles = calc_tiles_even_split(image_height=512, image_width=1024, num_tiles_x=1, num_tiles_y=1, overlap=64)
expected_tiles = [
Tile(
@ -322,36 +320,34 @@ def test_calc_tiles_even_split_single_tile():
def test_calc_tiles_even_split_evenly_divisible():
"""Test calc_tiles_even_split() behavior when the image is evenly covered by multiple tiles."""
# Parameters mimic roughly the same output as the original tile generations of the same test name
tiles = calc_tiles_even_split(
image_height=576, image_width=1600, num_tiles_x=3, num_tiles_y=2, overlap_fraction=0.25
)
tiles = calc_tiles_even_split(image_height=576, image_width=1600, num_tiles_x=3, num_tiles_y=2, overlap=64)
expected_tiles = [
# Row 0
Tile(
coords=TBLR(top=0, bottom=320, left=0, right=624),
overlap=TBLR(top=0, bottom=72, left=0, right=136),
coords=TBLR(top=0, bottom=320, left=0, right=576),
overlap=TBLR(top=0, bottom=64, left=0, right=64),
),
Tile(
coords=TBLR(top=0, bottom=320, left=488, right=1112),
overlap=TBLR(top=0, bottom=72, left=136, right=136),
coords=TBLR(top=0, bottom=320, left=512, right=1088),
overlap=TBLR(top=0, bottom=64, left=64, right=64),
),
Tile(
coords=TBLR(top=0, bottom=320, left=976, right=1600),
overlap=TBLR(top=0, bottom=72, left=136, right=0),
coords=TBLR(top=0, bottom=320, left=1024, right=1600),
overlap=TBLR(top=0, bottom=64, left=64, right=0),
),
# Row 1
Tile(
coords=TBLR(top=248, bottom=576, left=0, right=624),
overlap=TBLR(top=72, bottom=0, left=0, right=136),
coords=TBLR(top=256, bottom=576, left=0, right=576),
overlap=TBLR(top=64, bottom=0, left=0, right=64),
),
Tile(
coords=TBLR(top=248, bottom=576, left=488, right=1112),
overlap=TBLR(top=72, bottom=0, left=136, right=136),
coords=TBLR(top=256, bottom=576, left=512, right=1088),
overlap=TBLR(top=64, bottom=0, left=64, right=64),
),
Tile(
coords=TBLR(top=248, bottom=576, left=976, right=1600),
overlap=TBLR(top=72, bottom=0, left=136, right=0),
coords=TBLR(top=256, bottom=576, left=1024, right=1600),
overlap=TBLR(top=64, bottom=0, left=64, right=0),
),
]
assert tiles == expected_tiles
@ -360,36 +356,34 @@ def test_calc_tiles_even_split_evenly_divisible():
def test_calc_tiles_even_split_not_evenly_divisible():
"""Test calc_tiles_even_split() behavior when the image requires 'uneven' overlaps to achieve proper coverage."""
# Parameters mimic roughly the same output as the original tile generations of the same test name
tiles = calc_tiles_even_split(
image_height=400, image_width=1200, num_tiles_x=3, num_tiles_y=2, overlap_fraction=0.25
)
tiles = calc_tiles_even_split(image_height=400, image_width=1200, num_tiles_x=3, num_tiles_y=2, overlap=64)
expected_tiles = [
# Row 0
Tile(
coords=TBLR(top=0, bottom=224, left=0, right=464),
overlap=TBLR(top=0, bottom=56, left=0, right=104),
coords=TBLR(top=0, bottom=232, left=0, right=440),
overlap=TBLR(top=0, bottom=64, left=0, right=64),
),
Tile(
coords=TBLR(top=0, bottom=224, left=360, right=824),
overlap=TBLR(top=0, bottom=56, left=104, right=104),
coords=TBLR(top=0, bottom=232, left=376, right=816),
overlap=TBLR(top=0, bottom=64, left=64, right=64),
),
Tile(
coords=TBLR(top=0, bottom=224, left=720, right=1200),
overlap=TBLR(top=0, bottom=56, left=104, right=0),
coords=TBLR(top=0, bottom=232, left=752, right=1200),
overlap=TBLR(top=0, bottom=64, left=64, right=0),
),
# Row 1
Tile(
coords=TBLR(top=168, bottom=400, left=0, right=464),
overlap=TBLR(top=56, bottom=0, left=0, right=104),
coords=TBLR(top=168, bottom=400, left=0, right=440),
overlap=TBLR(top=64, bottom=0, left=0, right=64),
),
Tile(
coords=TBLR(top=168, bottom=400, left=360, right=824),
overlap=TBLR(top=56, bottom=0, left=104, right=104),
coords=TBLR(top=168, bottom=400, left=376, right=816),
overlap=TBLR(top=64, bottom=0, left=64, right=64),
),
Tile(
coords=TBLR(top=168, bottom=400, left=720, right=1200),
overlap=TBLR(top=56, bottom=0, left=104, right=0),
coords=TBLR(top=168, bottom=400, left=752, right=1200),
overlap=TBLR(top=64, bottom=0, left=64, right=0),
),
]
@ -399,28 +393,26 @@ def test_calc_tiles_even_split_not_evenly_divisible():
def test_calc_tiles_even_split_difficult_size():
"""Test calc_tiles_even_split() behavior when the image is a difficult size to spilt evenly and keep div8."""
# Parameters are a difficult size for other tile gen routines to calculate
tiles = calc_tiles_even_split(
image_height=1000, image_width=1000, num_tiles_x=2, num_tiles_y=2, overlap_fraction=0.25
)
tiles = calc_tiles_even_split(image_height=1000, image_width=1000, num_tiles_x=2, num_tiles_y=2, overlap=64)
expected_tiles = [
# Row 0
Tile(
coords=TBLR(top=0, bottom=560, left=0, right=560),
overlap=TBLR(top=0, bottom=128, left=0, right=128),
coords=TBLR(top=0, bottom=528, left=0, right=528),
overlap=TBLR(top=0, bottom=64, left=0, right=64),
),
Tile(
coords=TBLR(top=0, bottom=560, left=432, right=1000),
overlap=TBLR(top=0, bottom=128, left=128, right=0),
coords=TBLR(top=0, bottom=528, left=464, right=1000),
overlap=TBLR(top=0, bottom=64, left=64, right=0),
),
# Row 1
Tile(
coords=TBLR(top=432, bottom=1000, left=0, right=560),
overlap=TBLR(top=128, bottom=0, left=0, right=128),
coords=TBLR(top=464, bottom=1000, left=0, right=528),
overlap=TBLR(top=64, bottom=0, left=0, right=64),
),
Tile(
coords=TBLR(top=432, bottom=1000, left=432, right=1000),
overlap=TBLR(top=128, bottom=0, left=128, right=0),
coords=TBLR(top=464, bottom=1000, left=464, right=1000),
overlap=TBLR(top=64, bottom=0, left=64, right=0),
),
]
@ -428,11 +420,13 @@ def test_calc_tiles_even_split_difficult_size():
@pytest.mark.parametrize(
["image_height", "image_width", "num_tiles_x", "num_tiles_y", "overlap_fraction", "raises"],
["image_height", "image_width", "num_tiles_x", "num_tiles_y", "overlap", "raises"],
[
(128, 128, 1, 1, 0.25, False), # OK
(128, 128, 1, 1, 127, False), # OK
(128, 128, 1, 1, 0, False), # OK
(128, 128, 2, 1, 0, False), # OK
(128, 128, 2, 2, 0, False), # OK
(128, 128, 2, 1, 120, True), # overlap equals tile_width.
(128, 128, 1, 2, 120, True), # overlap equals tile_height.
(127, 127, 1, 1, 0, True), # image size must be divisible by 8
],
)
@ -441,15 +435,15 @@ def test_calc_tiles_even_split_input_validation(
image_width: int,
num_tiles_x: int,
num_tiles_y: int,
overlap_fraction: float,
overlap: int,
raises: bool,
):
"""Test that calc_tiles_even_split() raises an exception if the inputs are invalid."""
if raises:
with pytest.raises(ValueError):
calc_tiles_even_split(image_height, image_width, num_tiles_x, num_tiles_y, overlap_fraction)
with pytest.raises((AssertionError, ValueError)):
calc_tiles_even_split(image_height, image_width, num_tiles_x, num_tiles_y, overlap)
else:
calc_tiles_even_split(image_height, image_width, num_tiles_x, num_tiles_y, overlap_fraction)
calc_tiles_even_split(image_height, image_width, num_tiles_x, num_tiles_y, overlap)
#############################################