Compare commits

...

190 Commits

Author SHA1 Message Date
8c6a8d072d remove tab character 2023-12-15 09:35:06 -05:00
ec52f15f4b add frontend build steps to pypi workflow 2023-12-15 09:30:37 -05:00
454f01e0c1 [feature] add ability to filter model listings by format (#5286)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

This minor change adds the ability to filter the model lists returned by
V2 of the model manager using the model file format (e.g. "checkpoint").
Just thought this would be a useful feature.
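For illustration, a hypothetical call against the router API (the exact query parameter and response field names are assumptions, not confirmed by this PR):

```
import requests

# Ask the v2 model manager for checkpoint-format models only.
# "model_format" and the response shape are illustrative guesses.
response = requests.get(
    "http://localhost:9090/api/v2/models/",
    params={"model_format": "checkpoint"},
)
for model in response.json()["models"]:
    print(model["name"], model["format"])
```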

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull request to issue 1234. And when we merge the pull request, Github
will automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

This can be merged when approved without any adverse effects.

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [X] No : minor feature - tested informally using the router API

## [optional] Are there any post deployment tasks we need to perform?
2023-12-15 00:03:01 -05:00
72dca55e44 Merge branch 'feat/model_manager/search-by-format' of github.com:invoke-ai/InvokeAI into feat/model_manager/search-by-format 2023-12-14 23:55:08 -05:00
264ea6d94d fix ruff errors 2023-12-14 23:54:59 -05:00
60e3e653fa Merge branch 'main' into feat/model_manager/search-by-format 2023-12-14 23:53:54 -05:00
082894c377 Adding Kapa Assistant to Docs (#5290)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description
This adds the Kapa assistant to our docs.
2023-12-15 09:47:40 +11:00
4b00f8fc82 Merge branch 'main' into Adding-Kapa-assistant-to-docs 2023-12-15 09:46:25 +11:00
6ea09ba0b6 feat(ui): workflow menu tweaks
- "Reset Workflow Editor" -> "New Workflow"
- "New Workflow" gets nodes icon & is no longer danger coloured
- When creating a new workflow, if the current workflow has unsaved changes, you get a dialog asking for confirmation. If the current workflow is saved, it immediately creates a new workflow.
- "Download Workflow" -> "Save to File"
- "Upload Workflow" -> "Load from File"
- Moved "Load from File" up 1 in the menu
2023-12-14 08:30:59 -05:00
42c04db167 adding kapa widget to docs 2023-12-13 22:33:50 -05:00
b935768eeb Update mkdocs.yml 2023-12-13 22:28:47 -05:00
ea4ef042f3 Ruff fixes 2023-12-14 12:47:10 +11:00
18b2bcbbee Added Classification from baseinvocation 2023-12-14 12:47:10 +11:00
5ad88c7f86 Fixed classification 2023-12-14 12:47:10 +11:00
3b04fef31d Added classification 2023-12-14 12:47:10 +11:00
bec888923a Fix for ruff 2023-12-14 12:47:10 +11:00
c6235049c7 Add an unsharp mask node to core nodes
Unsharp mask is an image operation that, despite its name, sharpens an image. Like a Gaussian blur, it takes a radius and strength.
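For reference, the same operation is easy to try with Pillow; this is a stand-in illustration, not this node's implementation:

```
from PIL import Image, ImageFilter

image = Image.open("input.png")
# Unsharp mask subtracts a Gaussian-blurred copy from the original;
# radius controls the blur, percent the strength of the sharpening.
sharpened = image.filter(ImageFilter.UnsharpMask(radius=2, percent=150))
sharpened.save("output.png")
```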
2023-12-14 12:47:10 +11:00
e10f6e8962 fix(nodes): mark CalculateImageTilesInvocation as beta
missed this when I added classification
2023-12-13 20:33:25 -05:00
77f04ff8d6 docs: add warning to developer install about database & main 2023-12-14 11:47:33 +11:00
461e474394 fix(nodes): fix embedded workflows with IDs
This model was a bit too strict, and raised validation errors when workflows we expect to *not* have an ID (eg, an embedded workflow) have one.

Now it strips unknown attributes, allowing those workflows to load.
2023-12-14 11:38:04 +11:00
f0c70fe3f1 fix(db): add error handling for workflow migration
- Handle an image file not existing despite being in the database.
- Add a simple pydantic model that tests only for the existence of a workflow's version (sketched below).
- Check against this new model when migrating workflows, skipping if the workflow fails validation. If it succeeds, the frontend should be able to handle the workflow.
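A minimal sketch of such a version-only model (the class name and flat field layout are assumptions):

```
from pydantic import BaseModel, ConfigDict

class WorkflowVersionValidator(BaseModel):
    """Passes if the workflow carries a version; ignores everything else."""
    model_config = ConfigDict(extra="ignore")
    version: str
```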
2023-12-14 10:16:56 +11:00
442ac2b828 fix(ui): fix frontend workflow migration when node is missing version
This should default to "1.0.0" to match the behaviour of the backend.
2023-12-14 09:59:11 +11:00
bb986b97f3 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 99.8% (1363 of 1365 strings)

Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-12-13 17:11:45 -05:00
98655db57b translationBot(ui): update translation (Russian)
Currently translated at 98.1% (1340 of 1365 strings)

translationBot(ui): update translation (Russian)

Currently translated at 84.2% (1150 of 1365 strings)

translationBot(ui): update translation (Russian)

Currently translated at 83.1% (1135 of 1365 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2023-12-13 17:11:45 -05:00
8845894e83 translationBot(ui): update translation (Italian)
Currently translated at 97.0% (1325 of 1365 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-12-13 17:11:45 -05:00
937c7e957d add merge plan to PR template 2023-12-13 16:59:31 -05:00
569ae7c482 add ability to filter model listings by format 2023-12-13 15:59:21 -05:00
340957f920 Update torch to 2.1.1 and xformers to 0.0.23 2023-12-13 14:49:32 -05:00
076d9b05ea Update transformers to 4.36 and Accelerate to 0.25 2023-12-13 14:42:34 -05:00
2b54e240d4 Bump Diffusers Dependency (#5243)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [X] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because: dependency bump

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
Updating diffusers to 0.24, which fixes a few issues. Needs to be tested
to ensure things like our IP Adapter implementation don't break.
2023-12-13 20:31:00 +05:30
5127e9df2d Fix error caused by bump to diffusers 0.24. We pass kwargs now instead of positional args so that we are more robust to future changes. 2023-12-13 09:17:30 -05:00
42329a1849 Updating HF Hub dependency 2023-12-13 09:17:30 -05:00
42bc6ef154 Bump Diffusers Dependency 2023-12-13 09:17:30 -05:00
6c6c45c3da feat(db): add SQLiteMigrator to perform db migrations (#5227)
## What type of PR is this? (check all applicable)

- [x] Refactor
- [x] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Have you discussed this change with the InvokeAI team?

- [x] Yes
- [ ] No, because:

## Description

This PR enhances our SQLite database with migration logic.

### `SQLiteMigrator` class

The new `SQLiteMigrator` class handles safely running database
migrations. It is initialized in the `SqliteDatabase` class's init, and
immediately runs all database migrations.

### `Migration` class

Migrations are represented by a `Migration` class, which has 3
attributes:

- `db_version: int`: The database version this migration results in.
- `app_version: str`: The semver app version this migration is run for.
- `migrate: Callable[[sqlite3.Cursor], None]`: A function that performs
the migration. It receives a cursor _only_, but can do anything it wants
to do. A convention is established for these functions.

All schema-creating SQL now lives in a `migrate` function. We haven't
needed to make any data migrations yet, but when we do, this will also
be handled within one of these callbacks.
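A minimal sketch of this shape (the table and callback are illustrative; the real class carries more validation):

```
import sqlite3
from dataclasses import dataclass
from typing import Callable

@dataclass
class Migration:
    db_version: int    # database version after this migration runs
    app_version: str   # app version this migration ships with
    migrate: Callable[[sqlite3.Cursor], None]

def _migrate_v1(cursor: sqlite3.Cursor) -> None:
    # Idempotent schema creation, per the convention described below.
    cursor.execute("CREATE TABLE IF NOT EXISTS workflows (workflow TEXT NOT NULL);")

migration_1 = Migration(db_version=1, app_version="3.4.0post2", migrate=_migrate_v1)
```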

### Migration Flow

First, migrations are registered with `SQLiteMigrator` via its
`register_migration` method. This performs some basic checks of the
migration version.

After registering all migrations, they are run with the `run_migrations`
method. This does a few things (sketched below):

- Creates a `version` table in the DB, if it doesn't already exist. This
table has `db_version INTEGER`, `app_version TEXT` and `migrated_at
DATETIME` columns.
- Sorts the migrations by their `db_version`.
- Checks whether any migrations need to be run at all.
- Backs up the database (if it's a file database). The migration bails
out if this fails.
- Runs each migration. If there is a problem, restores from backup.
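Sketched, the flow looks like this (construction is simplified; in the app the migrator is created inside `SqliteDatabase`):

```
migrator = SQLiteMigrator(db)               # db: the SqliteDatabase
migrator.register_migration(migration_1)    # basic version checks here
migrator.register_migration(migration_2)
migrator.run_migrations()                   # version table, sort, backup, run
```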

### Included Migrations

Migrations are in `invokeai/app/services/shared/sqlite/migrations`.

#### `migration_1.py`

All\* schema SQL up to 3.4.0post2 is in `migration_1.py`. Running only
this migration should result in a database that is identical to the one
you get from starting up 3.4.0post2.

SQL in this migration is **idempotent** (same as it was when the SQL was
spread across the various services).

#### `migration_2.py`

Schema changes through 3.5.0 (the upcoming release) are in
`migration_2.py`.

SQL in this migration is **not idempotent**. Future migrations need not
be idempotent, as the migration logic ensures each will only be run
once.

### \*Caveat - ItemStorage

This class provides a generic document-db-like interface for storing
objects. Our `graph_executions` and `graphs` tables are created and
managed by this service. This PR does not touch this class and therefore
does not touch either of those two tables.

We can decide how to handle those tables in the future as the need
arises.

### Change to Model Manager Metadata table

I noticed that there is a `model_manager_metadata` table which included
the app version, and whose `version` property wasn't accessed outside
the service.

I believe the new `version` table fulfills the purpose of this table,
and have removed it.

@lstein Please let me know if this is not right.

## QA Instructions, Screenshots, Recordings

1.  Case 1 - Upgrade

    - Back up your 3.4.0post2 database
    - Run this PR
    - It should upgrade your database and everything should work exactly
      like it did before

2.  Case 2 - New Install

    - Move your database out of the invoke root so that when the app
      starts, it creates a new one
    - Run this PR
    - It should work just like a new install

3.  Case 3 - With an In-Memory Database

    - Enable the in-memory database (set `use_memory_db` under `Paths`
      in `invokeai.yaml` to `true`)
    - Run this PR
    - It should work just like a new install


## Added/updated tests?

- [x] Yes: Fairly comprehensive tests are added for the
`SQLiteMigrator`.
- [ ] No : _please replace this line with details on why tests
      have not been included_
2023-12-13 09:04:51 -05:00
f76b04a3b8 fix(db): rename "SQLiteMigrator" -> "SqliteMigrator" 2023-12-13 11:31:15 +11:00
821e0326c9 fix(db): formatting 2023-12-13 11:25:57 +11:00
cc18d86f29 Merge branch 'main' into feat/db/migrations 2023-12-13 11:24:55 +11:00
ed1583383e fix(db): remove stale comment in tests 2023-12-13 11:24:27 +11:00
c50a49719b fix(db): raise a MigrationVersionError when invalid versions are used
This inherits from `ValueError`, so pydantic understands it when doing validation.
2023-12-13 11:21:16 +11:00
ebf5f5d418 feat(db): address feedback, cleanup
- use simpler pattern for migration dependencies
- move SqliteDatabase & migration to utility method `init_db`, use this in both the app and tests, ensuring the same db schema is used in both
2023-12-13 11:19:59 +11:00
386b656530 feat(db): remove unnecessary fixture declaration
Also revert the change to `conftest.py` in which the file was flagged for pytest to crawl for fixtures.
2023-12-13 10:13:03 +11:00
d7cede6c28 chore/fix: bump fastapi to 0.105.0
This fixes a problem with `Annotated` which prevented us from using pydantic's `Field` to specify a discriminator for a union. We had to use FastAPI's `Body` as a workaround.
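A generic illustration of the pattern this unblocks (the models are examples, not InvokeAI types):

```
from typing import Annotated, Literal, Union
from pydantic import BaseModel, Field, TypeAdapter

class Dog(BaseModel):
    kind: Literal["dog"] = "dog"
    name: str

class Cat(BaseModel):
    kind: Literal["cat"] = "cat"
    name: str

# Field(discriminator=...) selects the union member from "kind";
# with FastAPI >= 0.105 this form also works in request bodies.
Pet = Annotated[Union[Dog, Cat], Field(discriminator="kind")]

pet = TypeAdapter(Pet).validate_python({"kind": "dog", "name": "Rex"})
```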
2023-12-13 09:48:34 +11:00
15de7c21d9 updated tests with a test for tile > image for calc_tiles_min_overlap() 2023-12-12 10:24:00 -05:00
9620f9336c updated comment 2023-12-12 10:24:00 -05:00
a64ced7b29 remove unneeded if else 2023-12-12 10:24:00 -05:00
dd7deff1a3 fix for calc_tiles_min_overlap when tile size is bigger than image size 2023-12-12 10:24:00 -05:00
2cdda1fda2 Merge remote-tracking branch 'origin/main' into feat/db/migrations 2023-12-12 17:22:52 +11:00
6caa70123d translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 96.4% (1314 of 1363 strings)

Co-authored-by: junzi <nomal.si2621.vip@qq.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-12-12 17:15:54 +11:00
7e831c8a96 Selected in View within Gallery (#5240)
* selector added

* ref and useeffect added

* scrolling done using useeffect

* fixed scroll and changed the ref name

* fixed scroll again

* created hook for scroll logic

* feat(ui): debounce metadata fetch by 300ms

This vastly reduces the network requests when using the arrow keys to quickly skim through images.

* feat(ui): extract logic to determine virtuoso scrollToIndex align

This needs to be used in `useNextPrevImage()` to ensure the scrolling puts the image at the top or bottom appropriately

* feat(ui): add debounce to image workflow hook

This was spamming network requests like the metadata query

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-12-12 17:14:28 +11:00
3d64bc886d feat(nodes): flag all tiled upscaling nodes as beta 2023-12-12 16:43:05 +11:00
1a136d6167 feat(nodes): fix classification docstrings 2023-12-12 16:43:05 +11:00
43f2837117 feat(nodes): add invocation classifications
Invocations now have a classification:
- Stable: LTS
- Beta: LTS planned, API may change
- Prototype: No LTS planned, API may change, may be removed entirely

The `@invocation` decorator has a new arg `classification`, and an enum `Classification` is added to `baseinvocation.py`.

The default is Stable; this is a non-breaking change.

The classification is presented in the node header as a hammer icon (Beta) or flask icon (prototype).

The icon has a tooltip briefly describing the classification.
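A sketch of the new argument in use (the node itself is illustrative):

```
from invokeai.app.invocations.baseinvocation import (
    BaseInvocation,
    Classification,
    invocation,
)

@invocation("my_experimental_node", title="My Experimental Node",
            classification=Classification.Beta)
class MyExperimentalInvocation(BaseInvocation):
    ...  # fields and invoke() omitted in this sketch
```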
2023-12-12 16:43:05 +11:00
5f77ef7e99 feat(db): improve docstrings in migrator 2023-12-12 16:30:57 +11:00
22ccaa4e9a [Feature] Allow the model record migrate script to update existing model records (#5264)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No


## Description

1. The new model manager sqlite3-based configuration record storage
system is automatically populated with probed values from existing
models found in the models path when `invokeai-web` starts up for the
first time. However, the user's customization of these models in
`invokeai.yaml`, including such things as the prediction type and model
description, are not automatically copied over. This PR enhances the
`invokeai-migrate-models-to-db` script so that any customized
configuration data from `invokeai.yaml` replaces the original probed
values. This script only needs to be run once, but it does not hurt to
run it additional times. In the near future, I'm going to register this
module with psychedelicious's sqlite migration system so that the update
happens automatically during database migration.

2. The SQL-based model config record system stores a JSON version of the
config, as well as several fields that are broken out into individual
columns for search/indexing purposes. This PR keeps the JSON and the
broken-out fields in sync using the `json_extract()` sqlite3 function to
populate the broken out `base`, `type`, `name`, `path` and `format`
fields in the `model_config` table (see the sketch after this list).

3. Finally, this PR fixes the annoying `invokeai-web` shutdown message:
`TypeError: ModelInstallService.stop() takes 1 positional argument but 2
were given`
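For point 2, a sketch of the sync described above, assuming a `config` JSON column on the `model_config` table:

```
import sqlite3

conn = sqlite3.connect("invokeai.db")
with conn:
    conn.execute(
        """
        UPDATE model_config
        SET base   = json_extract(config, '$.base'),
            type   = json_extract(config, '$.type'),
            name   = json_extract(config, '$.name'),
            path   = json_extract(config, '$.path'),
            format = json_extract(config, '$.format');
        """
    )
```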

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

If you've run `invokeai-web` at any time since PR #5039, your
`invokeai.db` will have a `model_config` table containing probe
information from all models in the invokeai models directory as well as
those in `autoimport` (if applicable). However, any models present in
`models.yaml` whose paths are outside these directories will not be
present. To add them, and to update the description and other values
from `models.yaml`, run the command `invokeai-migrate-models-to-db`. You
should see the missing models added to the database table with the
correct information.

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [X] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-12 00:25:05 -05:00
d277bd3c38 Merge branch 'main' into feat/enhance-model-db-migrate-script 2023-12-12 00:24:43 -05:00
fd4e041e7c feat: serve HTTPS when configured with ssl_certfile 2023-12-12 16:01:43 +11:00
15a3e8076f Merge branch 'main' into feat/enhance-model-db-migrate-script 2023-12-11 23:10:04 -05:00
2fbe3a3104 fix ruff error 2023-12-11 23:04:18 -05:00
b0cfa58526 allow the model record migrate script to update existing model records 2023-12-11 22:47:19 -05:00
285ed26edd Add commands to Makefile for convenient release preparation (#5263)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

This PR does three things:

1) It separates out the script that creates the installer zipfile
(`create_installer.sh`) from the script that tags the repository with
the current release version (now called `tag_release.sh`)

2) It adds new targets to Makefile for running the installer script and
tagging.

3) It adds a `help` target that lists the Makefile targets:

```
$ make help
Developer commands:

ruff           Run ruff, fixing any safely-fixable errors and formatting
ruff-unsafe    Run ruff, fixing all fixable errors and formatting
mypy           Run mypy using the config in pyproject.toml to identify type mismatches and other coding errors
mypy-all       Run mypy ignoring the config in pyproject.toml but still ignoring missing imports
frontend-build Build the frontend in order to run on localhost:9090
frontend-dev   Run the frontend in developer mode on localhost:5173
installer-zip  Build the installer .zip file for the current version
tag-release    Tag the GitHub repository with the current version (use at release time only!)
```
`help` is also the default target so that the help message will print
out when only `make` is issued.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull request to issue 1234. And when we merge the pull request, Github
will automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [X] No: not needed

## [optional] Are there any post deployment tasks we need to perform?
2023-12-11 22:33:45 -05:00
02565b9a00 Merge branch 'main' into install/release-tools 2023-12-11 22:32:28 -05:00
78a6024d6c Tiled upscaling graph - new nodes (#5234)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
Additional tile generation nodes:
- CalculateImageTilesEvenSplitInvocation
- CalculateImageTilesMinimumOverlapInvocation

Additional blending method: merge_tiles_with_seam_blending
Updated MergeTilesToImageInvocation with seam blending

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull request to issue 1234. And when we merge the pull request, Github
will automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-11 22:14:15 -05:00
95198da645 fix(db): fix sqlite migrator tests on windows 2023-12-12 13:54:47 +11:00
ee1f1f3363 Merge remote-tracking branch 'origin/main' into feat/db/migrations 2023-12-12 13:39:47 +11:00
18ba7feca1 feat(db): update docstrings 2023-12-12 13:35:46 +11:00
55b0c7cdc9 feat(db): tidy migration_2 2023-12-12 13:30:29 +11:00
713a83e7da Merge branch 'main' into install/release-tools 2023-12-11 21:20:51 -05:00
f3a97e06ec add the tag_release.sh script 2023-12-11 21:11:37 -05:00
50815d36c6 feat(db): add tests for migration dependencies 2023-12-12 13:09:24 +11:00
a69f518c76 feat(db): tidy dependencies for migrations 2023-12-12 13:09:09 +11:00
18093c4f1d split installer zipfile script from tagging script; add make commands 2023-12-11 21:08:03 -05:00
0cf7fe43af feat(db): refactor migrate callbacks to use dependencies, remove pre/post callbacks 2023-12-12 12:35:42 +11:00
6063760ce2 feat(db): tweak docstring 2023-12-12 11:13:40 +11:00
c5ba4f2ea5 feat(db): remove file backups
Instead of mucking about with the filesystem, we rely on SQLite transactions to handle failed migrations.
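A minimal sketch of the idea, using sqlite3's connection context manager:

```
import sqlite3

def run_one(conn: sqlite3.Connection, migration) -> None:
    # The connection context manager commits on success and rolls the
    # transaction back if migrate() raises - no file backup needed.
    with conn:
        migration.migrate(conn.cursor())
```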
2023-12-12 11:12:46 +11:00
3414437eea feat(db): instantiate SqliteMigrator with a SqliteDatabase
Simplifies a couple things:
- Init is more straightforward
- It's clear in the migrator that the connection we are working with is related to the SqliteDatabase
2023-12-12 10:46:08 +11:00
417db71471 feat(db): decouple SqliteDatabase from config object
- Simplify init args to path (None means use memory), logger, and verbose
- Add docstrings to SqliteDatabase (it had almost none)
- Update all usages of the class
2023-12-12 10:30:37 +11:00
afe4e55bf9 feat(db): simplify migration registration validation
With the previous change to assert that the to_version == from_version + 1, this validation can be simpler.
2023-12-12 09:52:03 +11:00
55acc16b2d feat(db): require migration versions to be consecutive 2023-12-12 09:43:09 +11:00
535ce10e99 Merge branch 'main' into tiled-upscaling-graph 2023-12-11 14:40:55 -05:00
Sam
11f4a48144 Add container GID 2023-12-11 14:30:40 -05:00
Sam
67ed4a0245 Respect CONTAINER_UID in Dockerfile chown
CONTAINER_UID is used for the user ID within the container, however I noticed the UID was hard coded to 1000 in the Dockerfile chown -R command.

This leaves the default as 1000, but allows it to be overridden by setting CONTAINER_UID.
2023-12-11 14:30:40 -05:00
fbbc1037cd missed a rename of overlap to overlap_fraction in test for even_split 2023-12-11 17:23:28 +00:00
0852fd4e88 Updated tests for even_split overlap renamed to overlap_fraction 2023-12-11 17:17:29 +00:00
c84526fae5 Fixed tests that were using round_to_8 and removed redundant tests 2023-12-11 17:05:45 +00:00
f762940335 Merge branch 'main' into tiled-upscaling-graph 2023-12-11 16:57:36 +00:00
fefb78795f - even_split overlap renamed to overlap_fraction
- min_overlap removed * restrictions and round_to_8
- min_overlap handles tile size > image size by clipping the num tiles to 1 (see the sketch below).
- Updated assert test on min_overlap.
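A sketch of that clipping in one dimension (names are illustrative):

```
import math

def num_tiles(image_dim: int, tile_dim: int, min_overlap: int) -> int:
    # A tile at least as large as the image needs no tiling at all.
    if tile_dim >= image_dim:
        return 1
    return math.ceil((image_dim - min_overlap) / (tile_dim - min_overlap))
```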
2023-12-11 16:55:27 +00:00
ef8284f009 fix(db): fix tests 2023-12-11 16:41:47 +11:00
290851016e feat(db): move sqlite_migrator into its own module 2023-12-11 16:41:30 +11:00
fa7d002175 fix(tests): fix typing issues 2023-12-11 16:22:29 +11:00
f1b6f78319 fix(db): fix windows db migrator tests
- Ensure db files are closed before manipulating them
- Use contextlib.closing() so that sqlite connections are closed on exiting the context
2023-12-11 16:14:25 +11:00
26ab917021 fix(tests): add sqlite migrator to test fixtures 2023-12-11 16:14:25 +11:00
4f3c32a2ee fix(db): remove errant print stmts 2023-12-11 16:14:25 +11:00
77065b1ce1 feat(db): update test for migration chain for missing from 0 2023-12-11 16:14:25 +11:00
41db92b9e8 feat(db): add check for missing migration from 0 2023-12-11 16:14:25 +11:00
c823f5667b feat(db): update sqlite migrator tests 2023-12-11 16:14:25 +11:00
3227b30430 feat(db): extract non-stateful logic to class methods 2023-12-11 16:14:25 +11:00
567f107a81 feat(db): return backup_db_path, move log stmt to run_migrations 2023-12-11 16:14:25 +11:00
b3d5955bc7 fix(db): rename Migrator._migrations -> _migration_set 2023-12-11 16:14:25 +11:00
8726b203d4 fix(db): fix migration chain validation 2023-12-11 16:14:25 +11:00
b3f92e0547 fix(db): fix docstring 2023-12-11 16:14:25 +11:00
72c9a7663f fix(db): add docstring 2023-12-11 16:14:25 +11:00
fcb9e89bd7 feat(db): tidy db naming utils 2023-12-11 16:14:25 +11:00
56966d6d05 feat(db): only reinit db if migrations occurred 2023-12-11 16:14:25 +11:00
e46dc9b34e fix(db): close db conn before reinitializing 2023-12-11 16:14:25 +11:00
e461f9925e feat(db): invert backup/restore logic
Do the migration on a temp copy of the db, then back up the original and move the temp into its file.
2023-12-11 16:14:25 +11:00
abeb1bd3b3 feat(db): reduce power MigrateCallback, only gets cursor
use partial to provide extra dependencies for the image workflow migration function
2023-12-11 16:14:25 +11:00
83e820d721 feat(db): decouple from SqliteDatabase 2023-12-11 16:14:25 +11:00
f8e4b93a74 feat(db): add migration lock file 2023-12-11 16:14:25 +11:00
0710ec30cf feat(db): incorporate feedback 2023-12-11 16:14:25 +11:00
c382329e8c feat(db): move migrator out of SqliteDatabase 2023-12-11 16:14:25 +11:00
a2dc780188 feat: add script to migrate image workflows 2023-12-11 16:14:25 +11:00
abc9dc4d17 fix(tests): fix sqlite migrator backup and restore test
On Windows, we must ensure the connection to the database is closed before exiting the tempfile context.

Also, rejiggered the thing to use the file directly.
2023-12-11 16:14:25 +11:00
3c692018cd fix(db): make idempotency test actually test something 2023-12-11 16:14:25 +11:00
3ba3c1918c fix(db): remove duplicated test case 2023-12-11 16:14:25 +11:00
f2c6819d68 feat(db): add SQLiteMigrator to perform db migrations 2023-12-11 16:14:25 +11:00
ef807cf63a Refactor model manager: model installer component (#5171)
## What type of PR is this? (check all applicable)

- [X] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

This is the next phase of the model manager refactor, as discussed with
@psychedelicious and @RyanJDick. This implements the model installer,
which is responsible for managing model weights on disk and installing
new models.

Currently only installation of local files and directories is supported.
Remote installation will be implemented after the queued download
manager is reviewed and approved.

Please see the documentation located at
[docs/contributing/MODEL_MANAGER.md](8695ad6f59/docs/contributing/MODEL_MANAGER.md (model-installation))
for an explanation of how this module works.

Things that have changed relative to the current implementation:

1. Model importation runs in a background thread. Access to the
installation status is through a ModelInstallJob object returned by the
`import_model()` call. In addition, the installation process generates a
series of `model_install` events on the event bus.
2. `model_install_progress` events are documented, but not currently
issued. These will be issued when background downloading is implemented.
3. The model installer currently runs in parallel to the current model
manager. The frontend continues to use `configs/models.yaml` and ignores
what is in the `model_config` table of `invokeai.db`.
4. When the installer is initialized at app startup time, it
synchronizes its database to the contents of the InvokeAI `models`
directory. The current model manager does this as well, so you will see
two log messages indicating that this directory is being scanned.


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull request to issue 1234. And when we merge the pull request, Github
will automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

You can test using the FastAPI swagger pages at
http://localhost:9090/docs. Use the calls listed under
`model_manager_v2`. Be aware that only installation of local models
(indicated by their file or directory path) is currently supported.

## Added/updated tests?

- [X] Yes -- see
`tests/app/services/model_install/test_model_install.py`
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-10 23:16:39 -05:00
bbcd58e681 Merge branch 'refactor/model-manager-3' of github.com:invoke-ai/InvokeAI into refactor/model-manager-3 2023-12-10 21:34:14 -05:00
36043bf38b fixed docstring in probe module 2023-12-10 21:33:54 -05:00
fd68c47920 Merge branch 'main' into refactor/model-manager-3 2023-12-10 21:26:44 -05:00
de2879f602 port new code for detecting sdxl-based embeddings 2023-12-10 15:48:02 -05:00
3b1ff4a7f4 resolve test failure caused by renamed sqlite_database module 2023-12-10 12:59:00 -05:00
d7f7fbc8c2 Merge branch 'main' into refactor/model-manager-3 2023-12-10 12:55:28 -05:00
e2567a7e31 Merge branch 'refactor/model-manager-3' of github.com:invoke-ai/InvokeAI into refactor/model-manager-3 2023-12-10 12:55:24 -05:00
2f3457c02a rename installer __del__() to stop(). Improve probe error messages 2023-12-10 12:55:01 -05:00
aab6369ffe Update invokeai/backend/model_manager/search.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2023-12-10 12:24:50 -05:00
4c97b619fb Update tiles.py
merge with main
2023-12-09 22:05:23 +00:00
abdd840fb9 Merge branch 'main' into tiled-upscaling-graph 2023-12-09 22:03:18 +00:00
e656768eb2 more fixes from code review 2023-12-09 21:56:31 +00:00
494c2a9b05 Updates based on code review by @RyanJDick 2023-12-09 18:38:07 +00:00
5f37176938 ruff formatting 2023-12-08 19:40:10 +00:00
375a91db32 further updated tests 2023-12-08 19:38:16 +00:00
b7ba426249 Fixed some params on tile gen tests on tests 2023-12-08 18:53:28 +00:00
d3ad356c6a Ruff Formatting
Fix pyTest issues
2023-12-08 18:31:33 +00:00
fdb97c1d02 Merge branch 'main' into tiled-upscaling-graph 2023-12-08 18:22:05 +00:00
8cda42ab0a ruff formatting 2023-12-08 18:17:40 +00:00
fed2bdafeb Added Defaults to calc_tiles_min_overlap for overlap and round
Added tests for min_overlap and even_split tile gen
2023-12-08 18:16:13 +00:00
913c68982a Merge branch 'refactor/model-manager-3' of github.com:invoke-ai/InvokeAI into refactor/model-manager-3 2023-12-06 22:23:49 -05:00
6e1e67aa72 remove source filtering from list_models() 2023-12-06 22:23:08 -05:00
ee6fbabbfb Merge branch 'main' into refactor/model-manager-3 2023-12-06 22:20:06 -05:00
cd15d8b7a9 ruff formatting
reformatted due to ruff errors
2023-12-06 08:10:22 +00:00
3b4b4ba40a Merge branch 'main' into tiled-upscaling-graph 2023-12-06 08:00:31 +00:00
674d9796d0 First check-in of new tile nodes
- calc_tiles_even_split
- calc_tiles_min_overlap
- merge_tiles_with_seam_blending
Update MergeTilesToImageInvocation with seam blending
2023-12-05 21:03:16 +00:00
5816320645 Merge branch 'main' into tiled-upscaling-graph 2023-12-05 15:32:49 +00:00
14254e8be8 First check-in of new tile nodes
- calc_tiles_even_split
- calc_tiles_min_overlap
- merge_tiles_with_seam_blending
Update MergeTilesToImageInvocation with seam blending
2023-12-05 12:29:55 +00:00
3bfaee9c57 Merge branch 'main' into refactor/model-manager-3 2023-12-04 22:51:45 -05:00
3b06cc6782 reformatted using newer version of ruff 2023-12-04 21:15:56 -05:00
7c9f48b84d fix ruff check 2023-12-04 21:14:02 -05:00
fed2bf6dab Merge branch 'refactor/model-manager-3' of github.com:invoke-ai/InvokeAI into refactor/model-manager-3 2023-12-04 21:12:40 -05:00
2b583ffcdf implement review suggestions from @RyanjDick 2023-12-04 21:12:10 -05:00
6f46d15c05 Update invokeai/app/services/model_install/model_install_base.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2023-12-04 20:09:41 -05:00
018ccebd6f make ModelLocalSource comparisons work across platforms 2023-12-04 19:07:25 -05:00
620b2d477a implement suggestions from first review by @psychedelicious 2023-12-04 17:08:33 -05:00
f73b678aae Merge branch 'main' into refactor/model-manager-3 2023-12-04 17:06:36 -05:00
e46ac45741 port probing changes from main model_probe.py to refactored probe.py 2023-12-01 09:19:24 -05:00
75089b7a9d merge in changes from main 2023-12-01 09:18:07 -05:00
778fd55f0d Merge branch 'main' into refactor/model-manager-3 2023-12-01 09:15:18 -05:00
bb87c988cb Change input field ordering of CropLatentsCoreInvocation to match ImageCropInvocation. 2023-11-29 10:23:55 -05:00
049b0239da Re-organize merge_tiles_with_linear_blending(...) to merge rows horizontally first and then vertically. This change achieves slightly more natural blending on the corners where 4 tiles overlap. 2023-11-29 09:48:56 -05:00
932de08fc0 Infer a tight-fitting output image size from the passed tiles in MergeTilesToImageInvocation. 2023-11-29 09:48:56 -05:00
303791d5c6 Add width and height fields to TileToPropertiesInvocation output to avoid having to calculate them with math nodes. 2023-11-29 09:48:56 -05:00
7e4a689370 Update tiling nodes to use width-before-height field ordering convention. 2023-11-29 09:48:56 -05:00
04e0fefdee Rename CropLatentsInvocation -> CropLatentsCoreInvocation to prevent conflict with custom node. And other minor tidying. 2023-11-29 09:48:56 -05:00
9b4e6da226 Improve documentation of CropLatentsInvocation. 2023-11-29 09:48:56 -05:00
e1c53a2465 Use LATENT_SCALE_FACTOR = 8 constant in CropLatentsInvocation. 2023-11-29 09:48:55 -05:00
121b930abf Copy CropLatentsInvocation from 74647fa9c1/images_to_grids.py (L1117C1-L1167C80). 2023-11-29 09:48:55 -05:00
436560da39 (minor) Add 'Invocation' suffix to all tiling node classes. 2023-11-29 09:48:55 -05:00
3980f79ed5 Tidy up tiles invocations, add documentation. 2023-11-29 09:48:55 -05:00
1d0dc7eeab Add unit tests for merge_tiles_with_linear_blending(...). 2023-11-29 09:48:55 -05:00
1f63fa8236 Add unit tests for calc_tiles_with_overlap(...) and fix a bug in its implementation. 2023-11-29 09:48:55 -05:00
caf47dee09 Add unit tests for tile paste(...) util function. 2023-11-29 09:48:55 -05:00
d742479810 Add nodes for tile splitting and merging. The main motivation for these nodes is for use in tiled upscaling workflows. 2023-11-29 09:48:55 -05:00
ecd3dcd5df Merge branch 'main' into refactor/model-manager-3 2023-11-27 22:15:51 -05:00
a79e814c8d Merge branch 'main' into refactor/model-manager-3 2023-11-27 16:06:42 -05:00
3fe1bef5cd Merge branch 'main' into refactor/model-manager-3 2023-11-27 08:08:01 -05:00
dbd0151c0e make test file path comparison work on windows systems (another fix) 2023-11-26 18:52:25 -05:00
6da508f147 make test file path comparison work on windows systems 2023-11-26 18:40:22 -05:00
8ef596eac7 further changes for ruff 2023-11-26 17:13:31 -05:00
8f4f4d48d5 fix import unsorted import block issues in the tests 2023-11-26 13:37:47 -05:00
60eae7443a Merge branch 'main' into refactor/model-manager-3 2023-11-26 13:33:41 -05:00
8695ad6f59 all features implemented, docs updated, ready for review 2023-11-26 13:18:21 -05:00
dc5c452ef9 rename test/nodes to test/aa_nodes to ensure these tests run first 2023-11-26 09:38:30 -05:00
8aefe2cefe import_model and list_install_jobs router APIs written 2023-11-25 21:45:59 -05:00
ec510d34b5 fix model probing for controlnet checkpoint legacy config files 2023-11-25 15:53:22 -05:00
19baea1883 all backend features in place; config scanning is failing on controlnet 2023-11-24 19:37:46 -05:00
80bc9be3ab make install_path and register_path work; refactor model probing 2023-11-23 23:15:32 -05:00
8c7a7bc897 Merge branch 'main' into refactor/model-manager-3 2023-11-22 22:29:23 -05:00
4aab728590 move name/description logic into model_probe.py 2023-11-22 22:29:02 -05:00
9cf060115d Merge branch 'main' into refactor/model-manager-3 2023-11-22 22:28:31 -05:00
9ea3126118 start implementation of installer 2023-11-20 23:02:30 -05:00
6c56233edc define install abstract base class 2023-11-20 21:57:10 -05:00
96 changed files with 6984 additions and 1266 deletions

View File

@@ -42,6 +42,21 @@ Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
<!--
A merge plan describes how this PR should be handled after it is approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is merged"
A merge plan is particularly important for large PRs or PRs that touch the
database in any way.
-->
## Added/updated tests?
- [ ] Yes

View File

@@ -15,19 +15,37 @@ jobs:
      TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
      TWINE_NON_INTERACTIVE: 1
    steps:
      - name: checkout sources
        uses: actions/checkout@v3
      - name: Checkout sources
        uses: actions/checkout@v4
      - name: install deps
      - name: Setup Node 20
        uses: actions/setup-node@v4
        with:
          node-version: '20'
      - name: Setup pnpm
        uses: pnpm/action-setup@v2
        with:
          version: 8
      - name: Install pnpm dependencies
        working-directory: invokeai/frontend/web
        run: 'pnpm install --prefer-frozen-lockfile'
      - name: Build frontend
        working-directory: invokeai/frontend/web
        run: 'pnpm build'
      - name: Install python deps
        run: pip install --upgrade build twine
      - name: build package
      - name: Build wheel package
        run: python3 -m build
      - name: check distribution
      - name: Check distribution
        run: twine check dist/*
      - name: check PyPI versions
      - name: Check PyPI versions
        if: github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/heads/release/')
        run: |
          pip install --upgrade requests
@@ -36,6 +54,6 @@ jobs:
          EXISTS=scripts.pypi_helper.local_on_pypi(); \
          print(f'PACKAGE_EXISTS={EXISTS}')" >> $GITHUB_ENV
      - name: upload package
      - name: Upload package
        if: env.PACKAGE_EXISTS == 'False' && env.TWINE_PASSWORD != ''
        run: twine upload dist/*

View File

@@ -1,6 +1,20 @@
# simple Makefile with scripts that are otherwise hard to remember
# to use, run from the repo root `make <command>`

default: help

help:
	@echo Developer commands:
	@echo
	@echo "ruff           Run ruff, fixing any safely-fixable errors and formatting"
	@echo "ruff-unsafe    Run ruff, fixing all fixable errors and formatting"
	@echo "mypy           Run mypy using the config in pyproject.toml to identify type mismatches and other coding errors"
	@echo "mypy-all       Run mypy ignoring the config in pyproject.toml but still ignoring missing imports"
	@echo "frontend-build Build the frontend in order to run on localhost:9090"
	@echo "frontend-dev   Run the frontend in developer mode on localhost:5173"
	@echo "installer-zip  Build the installer .zip file for the current version"
	@echo "tag-release    Tag the GitHub repository with the current version (use at release time only!)"

# Runs ruff, fixing any safely-fixable errors and formatting
ruff:
	ruff check . --fix
@@ -18,4 +32,21 @@ mypy:
# Runs mypy, ignoring the config in pyproject.toml but still ignoring missing (untyped) imports
# (many files are ignored by the config, so this is useful for checking all files)
mypy-all:
	mypy scripts/invokeai-web.py --config-file= --ignore-missing-imports
	mypy scripts/invokeai-web.py --config-file= --ignore-missing-imports

# Build the frontend
frontend-build:
	cd invokeai/frontend/web && pnpm build

# Run the frontend in dev mode
frontend-dev:
	cd invokeai/frontend/web && pnpm dev

# Installer zip file
installer-zip:
	cd installer && ./create_installer.sh

# Tag the release
tag-release:
	cd installer && ./tag_release.sh

View File

@@ -100,6 +100,8 @@ ENV INVOKEAI_SRC=/opt/invokeai
ENV VIRTUAL_ENV=/opt/venv/invokeai
ENV INVOKEAI_ROOT=/invokeai
ENV PATH="$VIRTUAL_ENV/bin:$INVOKEAI_SRC:$PATH"
ENV CONTAINER_UID=${CONTAINER_UID:-1000}
ENV CONTAINER_GID=${CONTAINER_GID:-1000}
# --link requires buildkit w/ dockerfile syntax 1.4
COPY --link --from=builder ${INVOKEAI_SRC} ${INVOKEAI_SRC}
@@ -117,7 +119,7 @@ WORKDIR ${INVOKEAI_SRC}
RUN cd /usr/lib/$(uname -p)-linux-gnu/pkgconfig/ && ln -sf opencv4.pc opencv.pc
RUN python3 -c "from patchmatch import patch_match"
RUN mkdir -p ${INVOKEAI_ROOT} && chown -R 1000:1000 ${INVOKEAI_ROOT}
RUN mkdir -p ${INVOKEAI_ROOT} && chown -R ${CONTAINER_UID}:${CONTAINER_GID} ${INVOKEAI_ROOT}
COPY docker/docker-entrypoint.sh ./
ENTRYPOINT ["/opt/invokeai/docker-entrypoint.sh"]

View File

@@ -10,40 +10,36 @@ model. These are the:
tracks the type of the model, its provenance, and where it can be
found on disk.
* _ModelLoadServiceBase_ Responsible for loading a model from disk
into RAM and VRAM and getting it ready for inference.
* _DownloadQueueServiceBase_ A multithreaded downloader responsible
for downloading models from a remote source to disk. The download
queue has special methods for downloading repo_id folders from
Hugging Face, as well as discriminating among model versions in
Civitai, but can be used for arbitrary content.
* _ModelInstallServiceBase_ A service for installing models to
disk. It uses `DownloadQueueServiceBase` to download models and
their metadata, and `ModelRecordServiceBase` to store that
information. It is also responsible for managing the InvokeAI
`models` directory and its contents.
* _DownloadQueueServiceBase_ (**CURRENTLY UNDER DEVELOPMENT - NOT IMPLEMENTED**)
A multithreaded downloader responsible
for downloading models from a remote source to disk. The download
queue has special methods for downloading repo_id folders from
Hugging Face, as well as discriminating among model versions in
Civitai, but can be used for arbitrary content.
* _ModelLoadServiceBase_ (**CURRENTLY UNDER DEVELOPMENT - NOT IMPLEMENTED**)
Responsible for loading a model from disk
into RAM and VRAM and getting it ready for inference.
## Location of the Code
All four of these services can be found in
`invokeai/app/services` in the following directories:
* `invokeai/app/services/model_records/`
* `invokeai/app/services/downloads/`
* `invokeai/app/services/model_loader/`
* `invokeai/app/services/model_install/`
With the exception of the install service, each of these is a thin
shell around a corresponding implementation located in
`invokeai/backend/model_manager`. The main difference between the
modules found in app services and those in the backend folder is that
the former add support for event reporting and are more tied to the
needs of the InvokeAI API.
* `invokeai/app/services/model_loader/` (**under development**)
* `invokeai/app/services/downloads/`(**under development**)
Code related to the FastAPI web API can be found in
`invokeai/app/api/routers/models.py`.
`invokeai/app/api/routers/model_records.py`.
***
@@ -165,10 +161,6 @@ of the fields, including `name`, `model_type` and `base_model`, are
shared between `ModelConfigBase` and `ModelBase`, and this is a
potential source of confusion.
** TO DO: ** The `ModelBase` code needs to be revised to reduce the
duplication of similar classes and to support using the `key` as the
primary model identifier.
## Reading and Writing Model Configuration Records
The `ModelRecordService` provides the ability to retrieve model
@@ -362,7 +354,7 @@ model and pass its key to `get_model()`.
Several methods allow you to create and update stored model config
records.
#### add_model(key, config) -> ModelConfigBase:
#### add_model(key, config) -> AnyModelConfig:
Given a key and a configuration, this will add the model's
configuration record to the database. `config` can either be a subclass of
@@ -386,27 +378,356 @@ fields to be updated. This will return an `AnyModelConfig` on success,
or raise `InvalidModelConfigException` or `UnknownModelException`
exceptions on failure.
***TO DO:*** Investigate why `update_model()` returns an
`AnyModelConfig` while `add_model()` returns a `ModelConfigBase`.
### rename_model(key, new_name) -> ModelConfigBase:
This is a special case of `update_model()` for the use case of
changing the model's name. It is broken out because there are cases in
which the InvokeAI application wants to synchronize the model's name
with its path in the `models` directory after changing the name, type
or base. However, when using the ModelRecordService directly, the call
is equivalent to:
```
store.update_model(key, {'name': 'new_name'})
```
***TO DO:*** Investigate why `rename_model()` is returning a
`ModelConfigBase` while `update_model()` returns an `AnyModelConfig`.
***
## Model installation
The `ModelInstallService` class implements the
`ModelInstallServiceBase` abstract base class, and provides a one-stop
shop for all your model install needs. It provides the following
functionality:
- Registering a model config record for a model already located on the
local filesystem, without moving it or changing its path.
- Installing a model already located on the local filesystem, by
moving it into the InvokeAI root directory under the
`models` folder (or wherever config parameter `models_dir`
specifies).
- Probing of models to determine their type, base type and other key
information.
- Interface with the InvokeAI event bus to provide status updates on
the download, installation and registration process.
- Downloading a model from an arbitrary URL and installing it in
`models_dir` (_implementation pending_).
- Special handling for Civitai model URLs which allow the user to
paste in a model page's URL or download link (_implementation pending_).
- Special handling for HuggingFace repo_ids to recursively download
the contents of the repository, paying attention to alternative
variants such as fp16. (_implementation pending_)
### Initializing the installer
A default installer is created at InvokeAI api startup time and stored
in `ApiDependencies.invoker.services.model_install` and can
also be retrieved from an invocation's `context` argument with
`context.services.model_install`.
In the event you wish to create a new installer, you may use the
following initialization pattern:
```
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.model_records import ModelRecordServiceSQL
from invokeai.app.services.model_install import ModelInstallService
from invokeai.app.services.shared.sqlite import SqliteDatabase
from invokeai.backend.util.logging import InvokeAILogger
config = InvokeAIAppConfig.get_config()
config.parse_args()
logger = InvokeAILogger.get_logger(config=config)
db = SqliteDatabase(config, logger)
store = ModelRecordServiceSQL(db)
installer = ModelInstallService(config, store)
```
The full form of `ModelInstallService()` takes the following
required parameters:
| **Argument** | **Type** | **Description** |
|------------------|------------------------------|------------------------------|
| `config` | InvokeAIAppConfig | InvokeAI app configuration object |
| `record_store` | ModelRecordServiceBase | Config record storage database |
| `event_bus` | EventServiceBase | Optional event bus to send download/install progress events to |
Once initialized, the installer will provide the following methods:
#### install_job = installer.import_model()
The `import_model()` method is the core of the installer. The
following illustrates basic usage:
```
from invokeai.app.services.model_install import (
LocalModelSource,
HFModelSource,
URLModelSource,
)
source1 = LocalModelSource(path='/opt/models/sushi.safetensors') # a local safetensors file
source2 = LocalModelSource(path='/opt/models/sushi_diffusers') # a local diffusers folder
source3 = HFModelSource(repo_id='runwayml/stable-diffusion-v1-5') # a repo_id
source4 = HFModelSource(repo_id='runwayml/stable-diffusion-v1-5', subfolder='vae') # a subfolder within a repo_id
source5 = HFModelSource(repo_id='runwayml/stable-diffusion-v1-5', variant='fp16') # a named variant of a HF model
source6 = URLModelSource(url='https://civitai.com/api/download/models/63006') # model located at a URL
source7 = URLModelSource(url='https://civitai.com/api/download/models/63006', access_token='letmein') # with an access token
sources = [source1, source2, source3, source4, source5, source6, source7]

for source in sources:
    install_job = installer.import_model(source)

source2job = installer.wait_for_installs()
for source in sources:
    job = source2job[source]
    if job.status == "completed":
        model_config = job.config_out
        model_key = model_config.key
        print(f"{source} installed as {model_key}")
    elif job.status == "error":
        print(f"{source}: {job.error_type}.\nStack trace:\n{job.error}")
```
As shown here, the `import_model()` method accepts a variety of
sources, including local safetensors files, local diffusers folders,
HuggingFace repo_ids with and without a subfolder designation,
Civitai model URLs and arbitrary URLs that point to checkpoint files
(but not to folders).
Each call to `import_model()` returns a `ModelInstallJob`,
an object which tracks the progress of the install.
If a remote model is requested, the model's files are downloaded in
parallel across multiple threads using the download
queue. During the download process, the `ModelInstallJob` is updated
to provide status and progress information. After the files (if any)
are downloaded, the remainder of the installation runs in a single
serialized background thread. These are the model probing, file
copying, and config record database update steps.
Multiple install jobs can be queued up. You may block until all
install jobs are completed (or errored) by calling the
`wait_for_installs()` method as shown in the code
example. `wait_for_installs()` will return a `dict` that maps the
requested source to its job. This object can be interrogated
to determine its status. If the job errored out, then the error type
and details can be recovered from `job.error_type` and `job.error`.
The full list of arguments to `import_model()` is as follows:
| **Argument** | **Type** | **Default** | **Description** |
|------------------|------------------------------|-------------|-------------------------------------------|
| `source` | Union[str, Path, AnyHttpUrl] | | The source of the model, Path, URL or repo_id |
| `inplace` | bool | True | Leave a local model in its current location |
| `variant` | str | None | Desired variant, such as 'fp16' or 'onnx' (HuggingFace only) |
| `subfolder` | str | None | Repository subfolder (HuggingFace only) |
| `config` | Dict[str, Any] | None | Override all or a portion of model's probed attributes |
| `access_token` | str | None | Provide authorization information needed to download |
The `inplace` field controls how local model Paths are handled. If
True (the default), then the model is simply registered in its current
location by the installer's `ModelConfigRecordService`. Otherwise, a
copy of the model is put into the location specified by the `models_dir`
application configuration parameter.
The `variant` field is used for HuggingFace repo_ids only. If
provided, the repo_id download handler will look for and download
tensors files that follow the convention for the selected variant:
- "fp16" will select files named "*model.fp16.{safetensors,bin}"
- "onnx" will select files ending with the suffix ".onnx"
- "openvino" will select files beginning with "openvino_model"
In the special case of the "fp16" variant, the installer will select
the 32-bit version of the files if the 16-bit version is unavailable.
`subfolder` is used for HuggingFace repo_ids only. If provided, the
model will be downloaded from the designated subfolder rather than the
top-level repository folder. If a subfolder is attached to the repo_id
using the format `repo_owner/repo_name:subfolder`, then the subfolder
specified by the repo_id will override the subfolder argument.
`config` can be used to override all or a portion of the configuration
attributes returned by the model prober. See the section below for
details.
`access_token` is passed to the download queue and used to access
repositories that require it.
#### Monitoring the install job process
When you create an install job with `import_model()`, it launches the
download and installation process in the background and returns a
`ModelInstallJob` object for monitoring the process.
The `ModelInstallJob` class has the following structure:
| **Attribute** | **Type** | **Description** |
|----------------|-----------------|------------------|
| `status` | `InstallStatus` | An enum of ["waiting", "running", "completed", "error"] |
| `config_in` | `dict` | Overriding configuration values provided by the caller |
| `config_out` | `AnyModelConfig`| After successful completion, contains the configuration record written to the database |
| `inplace` | `boolean` | True if the caller asked to install the model in place using its local path |
| `source` | `ModelSource` | The local path, remote URL or repo_id of the model to be installed |
| `local_path` | `Path` | If a remote model, holds the path of the model after it is downloaded; if a local model, same as `source` |
| `error_type` | `str` | Name of the exception that led to an error status |
| `error` | `str` | Traceback of the error |
If the `event_bus` argument was provided, events will also be
broadcast to the InvokeAI event bus. The events will appear on the bus
as an event of type `EventServiceBase.model_event`, carrying a timestamp
and one of the following event names:
- `model_install_started`
The payload will contain the keys `timestamp` and `source`. The latter
indicates the requested model source for installation.
- `model_install_progress`
Emitted at regular intervals when downloading a remote model, the
payload will contain the keys `timestamp`, `source`, `current_bytes`
and `total_bytes`. These events are _not_ emitted when a local model
already on the filesystem is imported.
- `model_install_completed`
Issued once at the end of a successful installation. The payload will
contain the keys `timestamp`, `source` and `key`, where `key` is the
ID under which the model has been registered.
- `model_install_error`
Emitted if the installation process fails for some reason. The payload
will contain the keys `timestamp`, `source`, `error_type` and
`error`. `error_type` is a short message indicating the nature of the
error, and `error` is the long traceback to help debug the problem.
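A hypothetical consumer of these events, using only the documented event names and payload keys (the subscription mechanism is not shown):

```
def on_model_event(event_name: str, payload: dict) -> None:
    if event_name == "model_install_progress":
        pct = 100 * payload["current_bytes"] / payload["total_bytes"]
        print(f"{payload['source']}: {pct:.0f}%")
    elif event_name == "model_install_completed":
        print(f"{payload['source']} registered as {payload['key']}")
    elif event_name == "model_install_error":
        print(f"{payload['source']} failed: {payload['error_type']}")
```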
#### Model configuration and probing
The install service uses the `invokeai.backend.model_manager.probe`
module during import to determine the model's type, base type, and
other configuration parameters. Among other things, it assigns a
default name and description for the model based on probed
fields.
When downloading remote models is implemented, additional
configuration information, such as list of trigger terms, will be
retrieved from the HuggingFace and Civitai model repositories.
The probed values can be overridden by providing a dictionary in the
optional `config` argument passed to `import_model()`. You may provide
overriding values for any of the model's configuration
attributes. This is typically used to set the model's name and
description, but can also be used to overcome cases in which automatic
probing is unable to (correctly) determine one of the model's
attributes. The most common case is the `prediction_type` field for
sd-2 (and rare sd-1) models. Here is an example of setting the
`prediction_type` and `name` for an sd-2 model:
```
install_job = installer.import_model(
    source='stabilityai/stable-diffusion-2-1',
    variant='fp16',
    config=dict(
        prediction_type=SchedulerPredictionType('v_prediction'),
        name='stable diffusion 2 base model',
    )
)
```
### Other installer methods
This section describes additional methods provided by the installer class.
#### jobs = installer.wait_for_installs()
Block until all pending installs are completed or errored, then
return a list of the completed jobs.
#### jobs = installer.list_jobs([source])
Return a list of all active and complete `ModelInstallJobs`. An
optional `source` argument allows you to filter the returned list by a
model source string pattern using a partial string match.
#### jobs = installer.get_job(source)
Return a list of `ModelInstallJob` objects corresponding to the
indicated model source.
#### installer.prune_jobs()
Remove non-pending jobs (completed or errored) from the job list
returned by `list_jobs()` and `get_job()`.
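Putting these together, a hypothetical cleanup pass over the job list might look like this:
```
jobs = installer.wait_for_installs()  # block until all pending jobs settle
for job in installer.list_jobs():
    if job.status == 'error':  # InstallStatus value, assumed string-valued
        print(f'{job.source} failed: {job.error_type}')
installer.prune_jobs()  # drop completed and errored jobs from the list
```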
#### installer.app_config, installer.record_store, installer.event_bus
Properties that provide access to the installer's `InvokeAIAppConfig`,
`ModelRecordServiceBase` and `EventServiceBase` objects.
#### key = installer.register_path(model_path, config), key = installer.install_path(model_path, config)
These methods bypass the download queue and directly register or
install the model at the indicated path, returning the unique ID for
the installed model.
Both methods accept a Path object corresponding to a checkpoint or
diffusers folder, and an optional dict of config attributes to use to
override the values derived from model probing.
The difference between `register_path()` and `install_path()` is that
the former creates a model configuration record without changing the
location of the model in the filesystem. The latter makes a copy of
the model inside the InvokeAI models directory before registering
it.
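For example (the path and the override fields here are hypothetical):
```
from pathlib import Path

# Register the model without moving it:
key1 = installer.register_path(Path('/opt/models/sushi.safetensors'), {'name': 'sushi'})

# Copy the model into the InvokeAI models directory, then register it:
key2 = installer.install_path(Path('/opt/models/sushi.safetensors'), {'name': 'sushi-managed'})
```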
#### installer.unregister(key)
This will remove the model config record for the model at key, and is
equivalent to `installer.record_store.del_model(key)`.
#### installer.delete(key)
This is similar to `unregister()` but has the additional effect of
conditionally deleting the underlying model file(s) if they reside
within the InvokeAI models directory.
#### installer.unconditionally_delete(key)
This method is similar to `unregister()`, but also unconditionally
deletes the corresponding model weights file(s), regardless of whether
they are inside or outside the InvokeAI models hierarchy.
#### List[str] = installer.scan_directory(scan_dir: Path, install: bool)
This method will recursively scan the directory indicated in
`scan_dir` for new models and either install them in the models
directory or register them in place, depending on the setting of
`install` (default False).
The return value is the list of keys of the new installed/registered
models.
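A short sketch, using a hypothetical directory:
```
from pathlib import Path

# Register found models in place; pass install=True to copy them into models_dir.
new_keys = installer.scan_directory(Path('/opt/models'), install=False)
print(f'{len(new_keys)} new models registered')
```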
#### installer.sync_to_config()
This method synchronizes models in the models directory and autoimport
directory to those in the `ModelConfigRecordService` database. New
models are registered and orphan models are unregistered.
#### installer.start(invoker)
The `start` method is called by the API initialization routines when
the API starts up. Its effect is to call `sync_to_config()` to
synchronize the model record store database with what's currently on
disk.
# The remainder of this documentation is provisional, pending implementation of the Download and Load services
## Let's get loaded, the lowdown on ModelLoadService
The `ModelLoadService` is responsible for loading a named model into
@ -863,351 +1184,3 @@ other resources that it might have been using.
This will start/pause/cancel all jobs that have been submitted to the
queue and have not yet reached a terminal state.
## Model installation
The `ModelInstallService` class implements the
`ModelInstallServiceBase` abstract base class, and provides a one-stop
shop for all your model install needs. It provides the following
functionality:
- Registering a model config record for a model already located on the
local filesystem, without moving it or changing its path.
- Installing a model already located on the local filesystem, by
moving it into the InvokeAI root directory under the
`models` folder (or wherever config parameter `models_dir`
specifies).
- Downloading a model from an arbitrary URL and installing it in
`models_dir`.
- Special handling for Civitai model URLs which allow the user to
paste in a model page's URL or download link. Any metadata provided
by Civitai, such as trigger terms, is captured and placed in the
model config record.
- Special handling for HuggingFace repo_ids to recursively download
the contents of the repository, paying attention to alternative
variants such as fp16.
- Probing of models to determine their type, base type and other key
information.
- Interface with the InvokeAI event bus to provide status updates on
the download, installation and registration process.
### Initializing the installer
A default installer is created at InvokeAI API startup time and stored
in `ApiDependencies.invoker.services.model_install_service` and can
also be retrieved from an invocation's `context` argument with
`context.services.model_install_service`.
In the event you wish to create a new installer, you may use the
following initialization pattern:
```
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.download_manager import DownloadQueueService
from invokeai.app.services.model_install_service import ModelInstallService  # module path as given elsewhere in this document
from invokeai.app.services.model_record_service import ModelRecordServiceBase

config = InvokeAIAppConfig.get_config()
queue = DownloadQueueService()
store = ModelRecordServiceBase.open(config)
installer = ModelInstallService(config=config, queue=queue, store=store)
```
The full form of `ModelInstallService()` takes the following
parameters. Each parameter will default to a reasonable value, but it
is recommended that you set them explicitly as shown in the above example.
| **Argument** | **Type** | **Default** | **Description** |
|------------------|------------------------------|-------------|-------------------------------------------|
| `config` | InvokeAIAppConfig | Use system-wide config | InvokeAI app configuration object |
| `queue` | DownloadQueueServiceBase | Create a new download queue for internal use | Download queue |
| `store` | ModelRecordServiceBase | Use config to select the database to open | Config storage database |
| `event_bus` | EventServiceBase | None | An event bus to send download/install progress events to |
| `event_handlers` | List[DownloadEventHandler] | None | Event handlers for the download queue |
Note that if `store` is not provided, then the class will use
`ModelRecordServiceBase.open(config)` to select the database to use.
Once initialized, the installer will provide the following methods:
#### install_job = installer.install_model()
The `install_model()` method is the core of the installer. The
following illustrates basic usage:
```
sources = [
    Path('/opt/models/sushi.safetensors'),                  # a local safetensors file
    Path('/opt/models/sushi_diffusers/'),                   # a local diffusers folder
    'runwayml/stable-diffusion-v1-5',                       # a repo_id
    'runwayml/stable-diffusion-v1-5:vae',                   # a subfolder within a repo_id
    'https://civitai.com/api/download/models/63006',        # a civitai direct download link
    'https://civitai.com/models/8765?modelVersionId=10638', # civitai model page
    'https://s3.amazon.com/fjacks/sd-3.safetensors',        # arbitrary URL
]

for source in sources:
    install_job = installer.install_model(source)

source2key = installer.wait_for_installs()
for source in sources:
    model_key = source2key[source]
    print(f"{source} installed as {model_key}")
```
As shown here, the `install_model()` method accepts a variety of
sources, including local safetensors files, local diffusers folders,
HuggingFace repo_ids with and without a subfolder designation,
Civitai model URLs and arbitrary URLs that point to checkpoint files
(but not to folders).
Each call to `install_model()` will return a `ModelInstallJob` job, a
subclass of `DownloadJobBase`. The install job has additional
install-specific fields described in the next section.
Each install job will run in a series of background threads using
the object's download queue. You may block until all install jobs are
completed (or errored) by calling the `wait_for_installs()` method as
shown in the code example. `wait_for_installs()` will return a `dict`
that maps the requested source to the key of the installed model. In
the case that a model fails to download or install, its value in the
dict will be None. The actual cause of the error will be reported in
the corresponding job's `error` field.
Alternatively you may install event handlers and/or listen for events
on the InvokeAI event bus in order to monitor the progress of the
requested installs.
The full list of arguments to `install_model()` is as follows:
| **Argument** | **Type** | **Default** | **Description** |
|------------------|------------------------------|-------------|-------------------------------------------|
| `source` | Union[str, Path, AnyHttpUrl] | | The source of the model, Path, URL or repo_id |
| `inplace` | bool | True | Leave a local model in its current location |
| `variant` | str | None | Desired variant, such as 'fp16' or 'onnx' (HuggingFace only) |
| `subfolder` | str | None | Repository subfolder (HuggingFace only) |
| `probe_override` | Dict[str, Any] | None | Override all or a portion of model's probed attributes |
| `metadata` | ModelSourceMetadata | None | Provide metadata that will be added to model's config |
| `access_token` | str | None | Provide authorization information needed to download |
| `priority` | int | 10 | Download queue priority for the job |
The `inplace` field controls how local model Paths are handled. If
True (the default), then the model is simply registered in its current
location by the installer's `ModelConfigRecordService`. Otherwise, the
model will be moved into the location specified by the `models_dir`
application configuration parameter.
The `variant` field is used for HuggingFace repo_ids only. If
provided, the repo_id download handler will look for and download
tensors files that follow the convention for the selected variant:
- "fp16" will select files named "*model.fp16.{safetensors,bin}"
- "onnx" will select files ending with the suffix ".onnx"
- "openvino" will select files beginning with "openvino_model"
In the special case of the "fp16" variant, the installer will select
the 32-bit version of the files if the 16-bit version is unavailable.
`subfolder` is used for HuggingFace repo_ids only. If provided, the
model will be downloaded from the designated subfolder rather than the
top-level repository folder. If a subfolder is attached to the repo_id
using the format `repo_owner/repo_name:subfolder`, then the subfolder
specified by the repo_id will override the subfolder argument.
`probe_override` can be used to override all or a portion of the
attributes returned by the model prober. This can be used to overcome
cases in which automatic probing is unable to (correctly) determine
one of the model's attributes. The most common situation is the
`prediction_type` field for sd-2 (and rare sd-1) models. Here is an
example of how it works:
```
install_job = installer.install_model(
    source='stabilityai/stable-diffusion-2-1',
    variant='fp16',
    probe_override=dict(
        prediction_type=SchedulerPredictionType('v_prediction'),
    )
)
```
`metadata` allows you to attach custom metadata to the installed
model. See the next section for details.
`priority` and `access_token` are passed to the download queue and
have the same effect as they do for the DownloadQueueServiceBase.
#### Monitoring the install job process
When you create an install job with `install_model()`, events will be
passed to the list of `DownloadEventHandlers` provided at installer
initialization time. Event handlers can also be added to individual
model install jobs by calling their `add_handler()` method as
described earlier for the `DownloadQueueService`.
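A sketch of attaching a per-job handler; the handler signature shown is an assumption, based on `DownloadEventHandler`s receiving the job object:
```
def log_status(job):  # assumed DownloadEventHandler signature
    print(f'{job.source}: {job.status}')

install_job = installer.install_model('runwayml/stable-diffusion-v1-5')
install_job.add_handler(log_status)
```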
If the `event_bus` argument was provided, events will also be
broadcast to the InvokeAI event bus. The events will appear on the bus
as a singular event type named `model_event` with a payload of
`job`. You can then retrieve the job and check its status.
**TO DO:** consider breaking `model_event` into
`model_install_started`, `model_install_completed`, etc. The event bus
features have not yet been tested with FastAPI/websockets, and it may
turn out that the job object is not serializable.
#### Model metadata and probing
The install service has special handling for HuggingFace and Civitai
URLs that capture metadata from the source and include it in the model
configuration record. For example, fetching the Civitai model 8765
will produce a config record similar to this (using YAML
representation):
```
5abc3ef8600b6c1cc058480eaae3091e:
  path: sd-1/lora/to8contrast-1-5.safetensors
  name: to8contrast-1-5
  base_model: sd-1
  model_type: lora
  model_format: lycoris
  key: 5abc3ef8600b6c1cc058480eaae3091e
  hash: 5abc3ef8600b6c1cc058480eaae3091e
  description: 'Trigger terms: to8contrast style'
  author: theovercomer8
  license: allowCommercialUse=Sell; allowDerivatives=True; allowNoCredit=True
  source: https://civitai.com/models/8765?modelVersionId=10638
  thumbnail_url: null
  tags:
    - model
    - style
    - portraits
```
For sources that do not provide model metadata, you can attach custom
fields by providing a `metadata` argument to `install_model()` using
an initialized `ModelSourceMetadata` object (available for import from
`model_install_service.py`):
```
from invokeai.app.services.model_install_service import ModelSourceMetadata

meta = ModelSourceMetadata(
    name="my model",
    author="Sushi Chef",
    description="Highly customized model; trigger with 'sushi'",
    license="mit",
    thumbnail_url="http://s3.amazon.com/ljack/pics/sushi.png",
    tags=["sfw", "food"],
)
install_job = installer.install_model(
    source='sushi_chef/model3',
    variant='fp16',
    metadata=meta,
)
```
It is not currently recommended to provide custom metadata when
installing from a Civitai or HuggingFace source, as the metadata
provided by the source will overwrite the fields you provide. Instead,
after the model is installed you can use
`ModelRecordService.update_model()` to change the desired fields.
**TO DO:** Change the logic so that the caller's metadata fields take
precedence over those provided by the source.
#### Other installer methods
This section describes additional, less-frequently-used attributes and
methods provided by the installer class.
##### installer.wait_for_installs()
This is equivalent to the `DownloadQueue` `join()` method. It will
block until all the active jobs in the install queue have reached a
terminal state (completed, errored or cancelled).
##### installer.queue, installer.store, installer.config
These attributes provide access to the `DownloadQueueServiceBase`,
`ModelConfigRecordServiceBase`, and `InvokeAIAppConfig` objects that
the installer uses.
For example, to temporarily pause all pending installations, you can
do this:
```
installer.queue.pause_all_jobs()
```
##### key = installer.register_path(model_path, overrides), key = installer.install_path(model_path, overrides)
These methods bypass the download queue and directly register or
install the model at the indicated path, returning the unique ID for
the installed model.
Both methods accept a Path object corresponding to a checkpoint or
diffusers folder, and an optional dict of attributes to use to
override the values derived from model probing.
The difference between `register_path()` and `install_path()` is that
the former will not move the model from its current position, while
the latter will move it into the `models_dir` hierarchy.
##### installer.unregister(key)
This will remove the model config record for the model at key, and is
equivalent to `installer.store.unregister(key)`.
##### installer.delete(key)
This is similar to `unregister()` but has the additional effect of
deleting the underlying model file(s) -- even if they were outside the
`models_dir` directory!
##### installer.conditionally_delete(key)
This method will call `unregister()` if the model identified by `key`
is outside the `models_dir` hierarchy, and call `delete()` if the
model is inside.
##### List[str] = installer.scan_directory(scan_dir: Path, install: bool)
This method will recursively scan the directory indicated in
`scan_dir` for new models and either install them in the models
directory or register them in place, depending on the setting of
`install` (default False).
The return value is the list of keys of the new installed/registered
models.
##### installer.scan_models_directory()
This method scans the models directory for new models and registers
them in place. Models that are present in the
`ModelConfigRecordService` database whose paths are not found will be
unregistered.
##### installer.sync_to_config()
This method synchronizes models in the models directory and autoimport
directory to those in the `ModelConfigRecordService` database. New
models are registered and orphan models are unregistered.
##### hash = installer.hash(model_path)
This method calls the fasthash algorithm on a model's Path
(either a file or a folder) to generate a unique ID based on the
contents of the model.
##### installer.start(invoker)
The `start` method is called by the API initialization routines when
the API starts up. Its effect is to call `sync_to_config()` to
synchronize the model record store database with what's currently on
disk.
This method should not ordinarily be called manually.


@ -154,14 +154,16 @@ groups in `invokeai.yaml`:
### Web Server
| Setting | Default Value | Description |
|---------------------|---------------|----------------------------------------------------------------------------------------------------------------------------|
| `host` | `localhost` | Name or IP address of the network interface that the web server will listen on |
| `port` | `9090` | Network port number that the web server will listen on |
| `allow_origins` | `[]` | A list of host names or IP addresses that are allowed to connect to the InvokeAI API in the format `['host1','host2',...]` |
| `allow_credentials` | `true` | Require credentials for a foreign host to access the InvokeAI API (don't change this) |
| `allow_methods` | `*` | List of HTTP methods ("GET", "POST") that the web server is allowed to use when accessing the API |
| `allow_headers` | `*` | List of HTTP headers that the web server will accept when accessing the API |
| `ssl_certfile` | null | Path to an SSL certificate file, used to enable HTTPS. |
| `ssl_keyfile` | null | Path to an SSL keyfile, if the key is not included in the certificate file. |
The documentation for InvokeAI's API can be accessed by browsing to the following URL: <http://localhost:9090/docs>.
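For example, to serve the API over HTTPS you might add the following to `invokeai.yaml` (the file paths are hypothetical, and the group layout is assumed to match the other Web Server settings):
```
InvokeAI:
  Web Server:
    ssl_certfile: /etc/ssl/certs/invokeai.pem
    ssl_keyfile: /etc/ssl/private/invokeai.key
```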


@ -293,6 +293,19 @@ manager, please follow these steps:
## Developer Install
!!! warning
InvokeAI uses a SQLite database. By running on `main`, you accept responsibility for your database. This
means making regular backups (especially before pulling) and/or fixing it yourself in the event that a
PR introduces a schema change.
If you don't need persistent backend storage, you can use an ephemeral in-memory database by setting
`use_memory_db: true` under `Path:` in your `invokeai.yaml` file.
If this is untenable, you should run the application via the official installer or a manual install of the
python package from pypi. These releases will not break your database.
If you have an interest in how InvokeAI works, or you would like to
add features or bugfixes, you are encouraged to install the source
code for InvokeAI. For this to work, you will need to install the
@ -388,3 +401,5 @@ environment variable INVOKEAI_ROOT to point to the installation directory.
Note that if you run into problems with the Conda installation, the InvokeAI
staff will **not** be able to help you out. Caveat Emptor!
[dev-chat]: https://discord.com/channels/1020123559063990373/1049495067846524939


@ -0,0 +1,10 @@
document.addEventListener("DOMContentLoaded", function () {
  var script = document.createElement("script");
  script.src = "https://widget.kapa.ai/kapa-widget.bundle.js";
  script.setAttribute("data-website-id", "b5973bb1-476b-451e-8cf4-98de86745a10");
  script.setAttribute("data-project-name", "Invoke.AI");
  script.setAttribute("data-project-color", "#11213C");
  script.setAttribute("data-project-logo", "https://avatars.githubusercontent.com/u/113954515?s=280&v=4");
  script.async = true;
  document.head.appendChild(script);
});


@ -13,14 +13,6 @@ function is_bin_in_path {
    builtin type -P "$1" &>/dev/null
}

function does_tag_exist {
    git rev-parse --quiet --verify "refs/tags/$1" >/dev/null
}

function git_show_ref {
    git show-ref --dereference $1 --abbrev 7
}

function git_show {
    git show -s --format='%h %s' $1
}
@ -53,50 +45,11 @@ VERSION=$(
)
PATCH=""
VERSION="v${VERSION}${PATCH}"
LATEST_TAG="v3-latest"
echo "Building installer for version $VERSION..."
echo
if does_tag_exist $VERSION; then
    echo -e "${BCYAN}${VERSION}${RESET} already exists:"
    git_show_ref tags/$VERSION
    echo
fi

if does_tag_exist $LATEST_TAG; then
    echo -e "${BCYAN}${LATEST_TAG}${RESET} already exists:"
    git_show_ref tags/$LATEST_TAG
    echo
fi

echo -e "${BGREEN}HEAD${RESET}:"
git_show
echo

echo -e -n "Create tags ${BCYAN}${VERSION}${RESET} and ${BCYAN}${LATEST_TAG}${RESET} @ ${BGREEN}HEAD${RESET}, ${RED}deleting existing tags on remote${RESET}? "
read -e -p 'y/n [n]: ' input
RESPONSE=${input:='n'}
if [ "$RESPONSE" == 'y' ]; then
    echo
    echo -e "Deleting ${BCYAN}${VERSION}${RESET} tag on remote..."
    git push origin :refs/tags/$VERSION
    echo -e "Tagging ${BGREEN}HEAD${RESET} with ${BCYAN}${VERSION}${RESET} locally..."
    if ! git tag -fa $VERSION; then
        echo "Existing/invalid tag"
        exit -1
    fi
    echo -e "Deleting ${BCYAN}${LATEST_TAG}${RESET} tag on remote..."
    git push origin :refs/tags/$LATEST_TAG
    echo -e "Tagging ${BGREEN}HEAD${RESET} with ${BCYAN}${LATEST_TAG}${RESET} locally..."
    git tag -fa $LATEST_TAG
    echo
    echo -e "${BYELLOW}Remember to 'git push origin --tags'!${RESET}"
fi
# ---------------------- FRONTEND ----------------------
pushd ../invokeai/frontend/web >/dev/null


@ -244,9 +244,9 @@ class InvokeAiInstance:
"numpy~=1.24.0", # choose versions that won't be uninstalled during phase 2
"urllib3~=1.26.0",
"requests~=2.28.0",
"torch==2.1.0",
"torch==2.1.1",
"torchmetrics==0.11.4",
"torchvision>=0.14.1",
"torchvision>=0.16.1",
"--force-reinstall",
"--find-links" if find_links is not None else None,
find_links,

installer/tag_release.sh (new executable file)

@ -0,0 +1,71 @@
#!/bin/bash
set -e

BCYAN="\e[1;36m"
BYELLOW="\e[1;33m"
BGREEN="\e[1;32m"
BRED="\e[1;31m"
RED="\e[31m"
RESET="\e[0m"

function does_tag_exist {
    git rev-parse --quiet --verify "refs/tags/$1" >/dev/null
}

function git_show_ref {
    git show-ref --dereference $1 --abbrev 7
}

function git_show {
    git show -s --format='%h %s' $1
}

VERSION=$(
    cd ..
    python -c "from invokeai.version import __version__ as version; print(version)"
)
PATCH=""
MAJOR_VERSION=$(echo $VERSION | sed 's/\..*$//')
VERSION="v${VERSION}${PATCH}"
LATEST_TAG="v${MAJOR_VERSION}-latest"

if does_tag_exist $VERSION; then
    echo -e "${BCYAN}${VERSION}${RESET} already exists:"
    git_show_ref tags/$VERSION
    echo
fi

if does_tag_exist $LATEST_TAG; then
    echo -e "${BCYAN}${LATEST_TAG}${RESET} already exists:"
    git_show_ref tags/$LATEST_TAG
    echo
fi

echo -e "${BGREEN}HEAD${RESET}:"
git_show
echo

echo -e -n "Create tags ${BCYAN}${VERSION}${RESET} and ${BCYAN}${LATEST_TAG}${RESET} @ ${BGREEN}HEAD${RESET}, ${RED}deleting existing tags on remote${RESET}? "
read -e -p 'y/n [n]: ' input
RESPONSE=${input:='n'}
if [ "$RESPONSE" == 'y' ]; then
    echo
    echo -e "Deleting ${BCYAN}${VERSION}${RESET} tag on remote..."
    git push --delete origin $VERSION
    echo -e "Tagging ${BGREEN}HEAD${RESET} with ${BCYAN}${VERSION}${RESET} locally..."
    if ! git tag -fa $VERSION; then
        echo "Existing/invalid tag"
        exit -1
    fi
    echo -e "Deleting ${BCYAN}${LATEST_TAG}${RESET} tag on remote..."
    git push --delete origin $LATEST_TAG
    echo -e "Tagging ${BGREEN}HEAD${RESET} with ${BCYAN}${LATEST_TAG}${RESET} locally..."
    git tag -fa $LATEST_TAG
    echo -e "Pushing updated tags to remote..."
    git push origin --tags
fi
exit 0


@ -2,6 +2,7 @@
from logging import Logger
from invokeai.app.services.shared.sqlite.sqlite_util import init_db
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.version.invokeai_version import __version__
@ -22,6 +23,7 @@ from ..services.invoker import Invoker
from ..services.item_storage.item_storage_sqlite import SqliteItemStorage
from ..services.latents_storage.latents_storage_disk import DiskLatentsStorage
from ..services.latents_storage.latents_storage_forward_cache import ForwardCacheLatentsStorage
from ..services.model_install import ModelInstallService
from ..services.model_manager.model_manager_default import ModelManagerService
from ..services.model_records import ModelRecordServiceSQL
from ..services.names.names_default import SimpleNameService
@ -29,7 +31,6 @@ from ..services.session_processor.session_processor_default import DefaultSessio
from ..services.session_queue.session_queue_sqlite import SqliteSessionQueue
from ..services.shared.default_graphs import create_system_graphs
from ..services.shared.graph import GraphExecutionState, LibraryGraph
from ..services.shared.sqlite.sqlite_database import SqliteDatabase
from ..services.urls.urls_default import LocalUrlService
from ..services.workflow_records.workflow_records_sqlite import SqliteWorkflowRecordsStorage
from .events import FastAPIEventService
@ -66,8 +67,9 @@ class ApiDependencies:
logger.debug(f"Internet connectivity is {config.internet_available}")
output_folder = config.output_path
image_files = DiskImageFileStorage(f"{output_folder}/images")
db = SqliteDatabase(config, logger)
db = init_db(config=config, logger=logger, image_files=image_files)
configuration = config
logger = logger
@ -79,13 +81,15 @@ class ApiDependencies:
events = FastAPIEventService(event_handler_id)
graph_execution_manager = SqliteItemStorage[GraphExecutionState](db=db, table_name="graph_executions")
graph_library = SqliteItemStorage[LibraryGraph](db=db, table_name="graphs")
image_files = DiskImageFileStorage(f"{output_folder}/images")
image_records = SqliteImageRecordStorage(db=db)
images = ImageService()
invocation_cache = MemoryInvocationCache(max_cache_size=config.node_cache_size)
latents = ForwardCacheLatentsStorage(DiskLatentsStorage(f"{output_folder}/latents"))
model_manager = ModelManagerService(config, logger)
model_record_service = ModelRecordServiceSQL(db=db)
model_install_service = ModelInstallService(
app_config=config, record_store=model_record_service, event_bus=events
)
names = SimpleNameService()
performance_statistics = InvocationStatsService()
processor = DefaultInvocationProcessor()
@ -112,6 +116,7 @@ class ApiDependencies:
logger=logger,
model_manager=model_manager,
model_records=model_record_service,
model_install=model_install_service,
names=names,
performance_statistics=performance_statistics,
processor=processor,


@ -4,7 +4,7 @@
from hashlib import sha1
from random import randbytes
from typing import List, Optional
from typing import Any, Dict, List, Optional
from fastapi import Body, Path, Query, Response
from fastapi.routing import APIRouter
@ -12,6 +12,7 @@ from pydantic import BaseModel, ConfigDict
from starlette.exceptions import HTTPException
from typing_extensions import Annotated
from invokeai.app.services.model_install import ModelInstallJob, ModelSource
from invokeai.app.services.model_records import (
DuplicateModelException,
InvalidModelException,
@ -25,7 +26,7 @@ from invokeai.backend.model_manager.config import (
from ..dependencies import ApiDependencies
model_records_router = APIRouter(prefix="/v1/model/record", tags=["models"])
model_records_router = APIRouter(prefix="/v1/model/record", tags=["model_manager_v2"])
class ModelsList(BaseModel):
@ -43,15 +44,25 @@ class ModelsList(BaseModel):
async def list_model_records(
base_models: Optional[List[BaseModelType]] = Query(default=None, description="Base models to include"),
model_type: Optional[ModelType] = Query(default=None, description="The type of model to get"),
model_name: Optional[str] = Query(default=None, description="Exact match on the name of the model"),
model_format: Optional[str] = Query(
default=None, description="Exact match on the format of the model (e.g. 'diffusers')"
),
) -> ModelsList:
"""Get a list of models."""
record_store = ApiDependencies.invoker.services.model_records
found_models: list[AnyModelConfig] = []
if base_models:
for base_model in base_models:
found_models.extend(record_store.search_by_attr(base_model=base_model, model_type=model_type))
found_models.extend(
record_store.search_by_attr(
base_model=base_model, model_type=model_type, model_name=model_name, model_format=model_format
)
)
else:
found_models.extend(record_store.search_by_attr(model_type=model_type))
found_models.extend(
record_store.search_by_attr(model_type=model_type, model_name=model_name, model_format=model_format)
)
return ModelsList(models=found_models)
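Assuming the router is mounted under the usual `/api` prefix, the new `model_format` filter can be exercised like this (values are illustrative):
```
import requests

resp = requests.get(
    'http://localhost:9090/api/v1/model/record/',
    params={'model_type': 'main', 'model_format': 'checkpoint'},
)
print(resp.json()['models'])
```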
@ -117,12 +128,17 @@ async def update_model_record(
async def del_model_record(
key: str = Path(description="Unique key of model to remove from model registry."),
) -> Response:
"""Delete Model"""
"""
Delete model record from database.
The configuration record will be removed. The corresponding weights files will be
deleted as well if they reside within the InvokeAI "models" directory.
"""
logger = ApiDependencies.invoker.services.logger
try:
record_store = ApiDependencies.invoker.services.model_records
record_store.del_model(key)
installer = ApiDependencies.invoker.services.model_install
installer.delete(key)
logger.info(f"Deleted model: {key}")
return Response(status_code=204)
except UnknownModelException as e:
@ -162,3 +178,145 @@ async def add_model_record(
# now fetch it out
return record_store.get_model(config.key)
@model_records_router.post(
"/import",
operation_id="import_model_record",
responses={
201: {"description": "The model imported successfully"},
415: {"description": "Unrecognized file/folder format"},
424: {"description": "The model appeared to import successfully, but could not be found in the model manager"},
409: {"description": "There is already a model corresponding to this path or repo_id"},
},
status_code=201,
)
async def import_model(
source: ModelSource,
config: Optional[Dict[str, Any]] = Body(
description="Dict of fields that override auto-probed values in the model config record, such as name, description and prediction_type ",
default=None,
),
) -> ModelInstallJob:
"""Add a model using its local path, repo_id, or remote URL.
Models will be downloaded, probed, configured and installed in a
series of background threads. The return object has `status` attribute
that can be used to monitor progress.
The source object is a discriminated Union of LocalModelSource,
HFModelSource and URLModelSource. Set the "type" field to the
appropriate value:
* To install a local path using LocalModelSource, pass a source of form:
`{
"type": "local",
"path": "/path/to/model",
"inplace": false
}`
The "inplace" flag, if true, will register the model in place in its
current filesystem location. Otherwise, the model will be copied
into the InvokeAI models directory.
* To install a HuggingFace repo_id using HFModelSource, pass a source of form:
`{
"type": "hf",
"repo_id": "stabilityai/stable-diffusion-2.0",
"variant": "fp16",
"subfolder": "vae",
"access_token": "f5820a918aaf01"
}`
The `variant`, `subfolder` and `access_token` fields are optional.
* To install a remote model using an arbitrary URL, pass:
`{
"type": "url",
"url": "http://www.civitai.com/models/123456",
"access_token": "f5820a918aaf01"
}`
The `access_token` field is optional.
The model's configuration record will be probed and filled in
automatically. To override the default guesses, pass "metadata"
with a Dict containing the attributes you wish to override.
Installation occurs in the background. Either use list_model_install_jobs()
to poll for completion, or listen on the event bus for the following events:
"model_install_started"
"model_install_completed"
"model_install_error"
On successful completion, the event's payload will contain the field "key"
containing the installed ID of the model. On an error, the event's payload
will contain the fields "error_type" and "error" describing the nature of the
error and its traceback, respectively.
"""
logger = ApiDependencies.invoker.services.logger
try:
installer = ApiDependencies.invoker.services.model_install
result: ModelInstallJob = installer.import_model(
source=source,
config=config,
)
logger.info(f"Started installation of {source}")
except UnknownModelException as e:
logger.error(str(e))
raise HTTPException(status_code=424, detail=str(e))
except InvalidModelException as e:
logger.error(str(e))
raise HTTPException(status_code=415)
except ValueError as e:
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
return result
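A hypothetical client call against this endpoint, again assuming the `/api` mount prefix; the two body parameters are embedded under their own keys, as FastAPI does for multiple `Body` fields:
```
import requests

resp = requests.post(
    'http://localhost:9090/api/v1/model/record/import',
    json={
        'source': {'type': 'url', 'url': 'https://civitai.com/api/download/models/63006'},
        'config': {'name': 'my model'},
    },
)
print(resp.json()['status'])  # ModelInstallJob field, e.g. "running"
```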
@model_records_router.get(
"/import",
operation_id="list_model_install_jobs",
)
async def list_model_install_jobs() -> List[ModelInstallJob]:
"""
Return list of model install jobs.
If the optional 'source' argument is provided, then the list will be filtered
for partial string matches against the install source.
"""
jobs: List[ModelInstallJob] = ApiDependencies.invoker.services.model_install.list_jobs()
return jobs
@model_records_router.patch(
"/import",
operation_id="prune_model_install_jobs",
responses={
204: {"description": "All completed and errored jobs have been pruned"},
400: {"description": "Bad request"},
},
)
async def prune_model_install_jobs() -> Response:
"""
Prune all completed and errored jobs from the install job list.
"""
ApiDependencies.invoker.services.model_install.prune_jobs()
return Response(status_code=204)
@model_records_router.patch(
"/sync",
operation_id="sync_models_to_config",
responses={
204: {"description": "Model config record database resynced with files on disk"},
400: {"description": "Bad request"},
},
)
async def sync_models_to_config() -> Response:
"""
Traverse the models and autoimport directories. Model files without a corresponding
record in the database are added. Orphan records without a models file are deleted.
"""
ApiDependencies.invoker.services.model_install.sync_to_config()
return Response(status_code=204)


@ -20,6 +20,7 @@ class SocketIO:
self.__sio.on("subscribe_queue", handler=self._handle_sub_queue)
self.__sio.on("unsubscribe_queue", handler=self._handle_unsub_queue)
local_handler.register(event_name=EventServiceBase.queue_event, _func=self._handle_queue_event)
local_handler.register(event_name=EventServiceBase.model_event, _func=self._handle_model_event)
async def _handle_queue_event(self, event: Event):
await self.__sio.emit(
@ -28,10 +29,13 @@ class SocketIO:
room=event[1]["data"]["queue_id"],
)
async def _handle_sub_queue(self, sid, data, *args, **kwargs):
async def _handle_sub_queue(self, sid, data, *args, **kwargs) -> None:
if "queue_id" in data:
await self.__sio.enter_room(sid, data["queue_id"])
async def _handle_unsub_queue(self, sid, data, *args, **kwargs):
async def _handle_unsub_queue(self, sid, data, *args, **kwargs) -> None:
if "queue_id" in data:
await self.__sio.leave_room(sid, data["queue_id"])
async def _handle_model_event(self, event: Event) -> None:
await self.__sio.emit(event=event[1]["event"], data=event[1]["data"])


@ -272,6 +272,8 @@ def invoke_api() -> None:
port=port,
loop="asyncio",
log_level=app_config.log_level,
ssl_certfile=app_config.ssl_certfile,
ssl_keyfile=app_config.ssl_keyfile,
)
server = uvicorn.Server(config)


@ -39,6 +39,19 @@ class InvalidFieldError(TypeError):
pass
class Classification(str, Enum, metaclass=MetaEnum):
"""
The classification of an Invocation.
- `Stable`: The invocation, including its inputs/outputs and internal logic, is stable. You may build workflows with it, having confidence that they will not break because of a change in this invocation.
- `Beta`: The invocation is not yet stable, but is planned to be stable in the future. Workflows built around this invocation may break, but we are committed to supporting this invocation long-term.
- `Prototype`: The invocation is not yet stable and may be removed from the application at any time. Workflows built around this invocation may break, and we are *not* committed to supporting this invocation.
"""
Stable = "stable"
Beta = "beta"
Prototype = "prototype"
class Input(str, Enum, metaclass=MetaEnum):
"""
The type of input a field accepts.
@ -439,6 +452,7 @@ class UIConfigBase(BaseModel):
description='The node\'s version. Should be a valid semver string e.g. "1.0.0" or "3.8.13".',
)
node_pack: Optional[str] = Field(default=None, description="Whether or not this is a custom node")
classification: Classification = Field(default=Classification.Stable, description="The node's classification")
model_config = ConfigDict(
validate_assignment=True,
@ -607,6 +621,7 @@ class BaseInvocation(ABC, BaseModel):
schema["category"] = uiconfig.category
if uiconfig.node_pack is not None:
schema["node_pack"] = uiconfig.node_pack
schema["classification"] = uiconfig.classification
schema["version"] = uiconfig.version
if "required" not in schema or not isinstance(schema["required"], list):
schema["required"] = []
@ -782,6 +797,7 @@ def invocation(
category: Optional[str] = None,
version: Optional[str] = None,
use_cache: Optional[bool] = True,
classification: Classification = Classification.Stable,
) -> Callable[[Type[TBaseInvocation]], Type[TBaseInvocation]]:
"""
Registers an invocation.
@ -792,6 +808,7 @@ def invocation(
:param Optional[str] category: Adds a category to the invocation. Used to group the invocations in the UI. Defaults to None.
:param Optional[str] version: Adds a version to the invocation. Must be a valid semver string. Defaults to None.
:param Optional[bool] use_cache: Whether or not to use the invocation cache. Defaults to True. The user may override this in the workflow editor.
:param Classification classification: The classification of the invocation. Defaults to Classification.Stable. Use Beta or Prototype if the invocation is unstable.
"""
def wrapper(cls: Type[TBaseInvocation]) -> Type[TBaseInvocation]:
@ -812,6 +829,7 @@ def invocation(
cls.UIConfig.title = title
cls.UIConfig.tags = tags
cls.UIConfig.category = category
cls.UIConfig.classification = classification
# Grab the node pack's name from the module name, if it's a custom node
is_custom_node = cls.__module__.rsplit(".", 1)[0] == "invokeai.app.invocations"


@ -13,7 +13,15 @@ from invokeai.app.shared.fields import FieldDescriptions
from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
from invokeai.backend.image_util.safety_checker import SafetyChecker
from .baseinvocation import BaseInvocation, Input, InputField, InvocationContext, WithMetadata, invocation
from .baseinvocation import (
BaseInvocation,
Classification,
Input,
InputField,
InvocationContext,
WithMetadata,
invocation,
)
@invocation("show_image", title="Show Image", tags=["image"], category="image", version="1.0.0")
@ -421,6 +429,64 @@ class ImageBlurInvocation(BaseInvocation, WithMetadata):
)
@invocation(
"unsharp_mask",
title="Unsharp Mask",
tags=["image", "unsharp_mask"],
category="image",
version="1.2.0",
classification=Classification.Beta,
)
class UnsharpMaskInvocation(BaseInvocation, WithMetadata):
"""Applies an unsharp mask filter to an image"""
image: ImageField = InputField(description="The image to use")
radius: float = InputField(gt=0, description="Unsharp mask radius", default=2)
strength: float = InputField(ge=0, description="Unsharp mask strength", default=50)
def pil_from_array(self, arr):
return Image.fromarray((arr * 255).astype("uint8"))
def array_from_pil(self, img):
return numpy.array(img) / 255
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
mode = image.mode
alpha_channel = image.getchannel("A") if mode == "RGBA" else None
image = image.convert("RGB")
image_blurred = self.array_from_pil(image.filter(ImageFilter.GaussianBlur(radius=self.radius)))
image = self.array_from_pil(image)
image += (image - image_blurred) * (self.strength / 100.0)
image = numpy.clip(image, 0, 1)
image = self.pil_from_array(image)
image = image.convert(mode)
# Make the image RGBA if we had a source alpha channel
if alpha_channel is not None:
image.putalpha(alpha_channel)
image_dto = context.services.images.create(
image=image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata,
workflow=context.workflow,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image.width,
height=image.height,
)
PIL_RESAMPLING_MODES = Literal[
"nearest",
"box",


@ -1,3 +1,5 @@
from typing import Literal
import numpy as np
from PIL import Image
from pydantic import BaseModel
@ -5,6 +7,8 @@ from pydantic import BaseModel
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
Input,
InputField,
InvocationContext,
OutputField,
@ -14,7 +18,13 @@ from invokeai.app.invocations.baseinvocation import (
)
from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.backend.tiles.tiles import calc_tiles_with_overlap, merge_tiles_with_linear_blending
from invokeai.backend.tiles.tiles import (
calc_tiles_even_split,
calc_tiles_min_overlap,
calc_tiles_with_overlap,
merge_tiles_with_linear_blending,
merge_tiles_with_seam_blending,
)
from invokeai.backend.tiles.utils import Tile
@ -28,7 +38,14 @@ class CalculateImageTilesOutput(BaseInvocationOutput):
tiles: list[Tile] = OutputField(description="The tiles coordinates that cover a particular image shape.")
@invocation("calculate_image_tiles", title="Calculate Image Tiles", tags=["tiles"], category="tiles", version="1.0.0")
@invocation(
"calculate_image_tiles",
title="Calculate Image Tiles",
tags=["tiles"],
category="tiles",
version="1.0.0",
classification=Classification.Beta,
)
class CalculateImageTilesInvocation(BaseInvocation):
"""Calculate the coordinates and overlaps of tiles that cover a target image shape."""
@ -55,6 +72,79 @@ class CalculateImageTilesInvocation(BaseInvocation):
return CalculateImageTilesOutput(tiles=tiles)
@invocation(
"calculate_image_tiles_even_split",
title="Calculate Image Tiles Even Split",
tags=["tiles"],
category="tiles",
version="1.0.0",
classification=Classification.Beta,
)
class CalculateImageTilesEvenSplitInvocation(BaseInvocation):
"""Calculate the coordinates and overlaps of tiles that cover a target image shape."""
image_width: int = InputField(ge=1, default=1024, description="The image width, in pixels, to calculate tiles for.")
image_height: int = InputField(
ge=1, default=1024, description="The image height, in pixels, to calculate tiles for."
)
num_tiles_x: int = InputField(
default=2,
ge=1,
description="Number of tiles to divide image into on the x axis",
)
num_tiles_y: int = InputField(
default=2,
ge=1,
description="Number of tiles to divide image into on the y axis",
)
overlap_fraction: float = InputField(
default=0.25,
ge=0,
lt=1,
description="Overlap between adjacent tiles as a fraction of the tile's dimensions (0-1)",
)
def invoke(self, context: InvocationContext) -> CalculateImageTilesOutput:
tiles = calc_tiles_even_split(
image_height=self.image_height,
image_width=self.image_width,
num_tiles_x=self.num_tiles_x,
num_tiles_y=self.num_tiles_y,
overlap_fraction=self.overlap_fraction,
)
return CalculateImageTilesOutput(tiles=tiles)
@invocation(
"calculate_image_tiles_min_overlap",
title="Calculate Image Tiles Minimum Overlap",
tags=["tiles"],
category="tiles",
version="1.0.0",
classification=Classification.Beta,
)
class CalculateImageTilesMinimumOverlapInvocation(BaseInvocation):
"""Calculate the coordinates and overlaps of tiles that cover a target image shape."""
image_width: int = InputField(ge=1, default=1024, description="The image width, in pixels, to calculate tiles for.")
image_height: int = InputField(
ge=1, default=1024, description="The image height, in pixels, to calculate tiles for."
)
tile_width: int = InputField(ge=1, default=576, description="The tile width, in pixels.")
tile_height: int = InputField(ge=1, default=576, description="The tile height, in pixels.")
min_overlap: int = InputField(default=128, ge=0, description="Minimum overlap between adjacent tiles, in pixels.")
def invoke(self, context: InvocationContext) -> CalculateImageTilesOutput:
tiles = calc_tiles_min_overlap(
image_height=self.image_height,
image_width=self.image_width,
tile_height=self.tile_height,
tile_width=self.tile_width,
min_overlap=self.min_overlap,
)
return CalculateImageTilesOutput(tiles=tiles)
@invocation_output("tile_to_properties_output")
class TileToPropertiesOutput(BaseInvocationOutput):
coords_left: int = OutputField(description="Left coordinate of the tile relative to its parent image.")
@ -76,7 +166,14 @@ class TileToPropertiesOutput(BaseInvocationOutput):
overlap_right: int = OutputField(description="Overlap between this tile and its right neighbor.")
@invocation("tile_to_properties", title="Tile to Properties", tags=["tiles"], category="tiles", version="1.0.0")
@invocation(
"tile_to_properties",
title="Tile to Properties",
tags=["tiles"],
category="tiles",
version="1.0.0",
classification=Classification.Beta,
)
class TileToPropertiesInvocation(BaseInvocation):
"""Split a Tile into its individual properties."""
@ -102,7 +199,14 @@ class PairTileImageOutput(BaseInvocationOutput):
tile_with_image: TileWithImage = OutputField(description="A tile description with its corresponding image.")
@invocation("pair_tile_image", title="Pair Tile with Image", tags=["tiles"], category="tiles", version="1.0.0")
@invocation(
"pair_tile_image",
title="Pair Tile with Image",
tags=["tiles"],
category="tiles",
version="1.0.0",
classification=Classification.Beta,
)
class PairTileImageInvocation(BaseInvocation):
"""Pair an image with its tile properties."""
@ -121,13 +225,29 @@ class PairTileImageInvocation(BaseInvocation):
)
@invocation("merge_tiles_to_image", title="Merge Tiles to Image", tags=["tiles"], category="tiles", version="1.1.0")
BLEND_MODES = Literal["Linear", "Seam"]
@invocation(
"merge_tiles_to_image",
title="Merge Tiles to Image",
tags=["tiles"],
category="tiles",
version="1.1.0",
classification=Classification.Beta,
)
class MergeTilesToImageInvocation(BaseInvocation, WithMetadata):
"""Merge multiple tile images into a single image."""
# Inputs
tiles_with_images: list[TileWithImage] = InputField(description="A list of tile images with tile properties.")
blend_mode: BLEND_MODES = InputField(
default="Seam",
description="blending type Linear or Seam",
input=Input.Direct,
)
blend_amount: int = InputField(
default=32,
ge=0,
description="The amount to blend adjacent tiles in pixels. Must be <= the amount of overlap between adjacent tiles.",
)
@ -157,10 +277,18 @@ class MergeTilesToImageInvocation(BaseInvocation, WithMetadata):
channels = tile_np_images[0].shape[-1]
dtype = tile_np_images[0].dtype
np_image = np.zeros(shape=(height, width, channels), dtype=dtype)
if self.blend_mode == "Linear":
merge_tiles_with_linear_blending(
dst_image=np_image, tiles=tiles, tile_images=tile_np_images, blend_amount=self.blend_amount
)
elif self.blend_mode == "Seam":
merge_tiles_with_seam_blending(
dst_image=np_image, tiles=tiles, tile_images=tile_np_images, blend_amount=self.blend_amount
)
else:
raise ValueError(f"Unsupported blend mode: '{self.blend_mode}'.")
# Convert into a PIL image and save
pil_image = Image.fromarray(np_image)
image_dto = context.services.images.create(


@ -20,63 +20,6 @@ class SqliteBoardImageRecordStorage(BoardImageRecordStorageBase):
self._conn = db.conn
self._cursor = self._conn.cursor()
try:
self._lock.acquire()
self._create_tables()
self._conn.commit()
finally:
self._lock.release()
def _create_tables(self) -> None:
"""Creates the `board_images` junction table."""
# Create the `board_images` junction table.
self._cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS board_images (
board_id TEXT NOT NULL,
image_name TEXT NOT NULL,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Soft delete, currently unused
deleted_at DATETIME,
-- enforce one-to-many relationship between boards and images using PK
-- (we can extend this to many-to-many later)
PRIMARY KEY (image_name),
FOREIGN KEY (board_id) REFERENCES boards (board_id) ON DELETE CASCADE,
FOREIGN KEY (image_name) REFERENCES images (image_name) ON DELETE CASCADE
);
"""
)
# Add index for board id
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_board_images_board_id ON board_images (board_id);
"""
)
# Add index for board id, sorted by created_at
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_board_images_board_id_created_at ON board_images (board_id, created_at);
"""
)
# Add trigger for `updated_at`.
self._cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_board_images_updated_at
AFTER UPDATE
ON board_images FOR EACH ROW
BEGIN
UPDATE board_images SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE board_id = old.board_id AND image_name = old.image_name;
END;
"""
)
def add_image_to_board(
self,
board_id: str,


@ -28,52 +28,6 @@ class SqliteBoardRecordStorage(BoardRecordStorageBase):
self._conn = db.conn
self._cursor = self._conn.cursor()
try:
self._lock.acquire()
self._create_tables()
self._conn.commit()
finally:
self._lock.release()
def _create_tables(self) -> None:
"""Creates the `boards` table and `board_images` junction table."""
# Create the `boards` table.
self._cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS boards (
board_id TEXT NOT NULL PRIMARY KEY,
board_name TEXT NOT NULL,
cover_image_name TEXT,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Soft delete, currently unused
deleted_at DATETIME,
FOREIGN KEY (cover_image_name) REFERENCES images (image_name) ON DELETE SET NULL
);
"""
)
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_boards_created_at ON boards (created_at);
"""
)
# Add trigger for `updated_at`.
self._cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_boards_updated_at
AFTER UPDATE
ON boards FOR EACH ROW
BEGIN
UPDATE boards SET updated_at = current_timestamp
WHERE board_id = old.board_id;
END;
"""
)
def delete(self, board_id: str) -> None:
try:
self._lock.acquire()


@ -1,6 +1,5 @@
"""
Init file for InvokeAI configure package
"""
"""Init file for InvokeAI configure package."""
from .config_base import PagingArgumentParser # noqa F401
from .config_default import InvokeAIAppConfig, get_invokeai_config # noqa F401
from .config_default import InvokeAIAppConfig, get_invokeai_config
__all__ = ["InvokeAIAppConfig", "get_invokeai_config"]


@ -173,7 +173,7 @@ from __future__ import annotations
import os
from pathlib import Path
from typing import ClassVar, Dict, List, Literal, Optional, Union, get_type_hints
from typing import Any, ClassVar, Dict, List, Literal, Optional, Union, get_type_hints
from omegaconf import DictConfig, OmegaConf
from pydantic import Field, TypeAdapter
@ -221,6 +221,9 @@ class InvokeAIAppConfig(InvokeAISettings):
allow_credentials : bool = Field(default=True, description="Allow CORS credentials", json_schema_extra=Categories.WebServer)
allow_methods : List[str] = Field(default=["*"], description="Methods allowed for CORS", json_schema_extra=Categories.WebServer)
allow_headers : List[str] = Field(default=["*"], description="Headers allowed for CORS", json_schema_extra=Categories.WebServer)
# SSL options correspond to https://www.uvicorn.org/settings/#https
ssl_certfile : Optional[Path] = Field(default=None, description="SSL certificate file (for HTTPS)", json_schema_extra=Categories.WebServer)
ssl_keyfile : Optional[Path] = Field(default=None, description="SSL key file", json_schema_extra=Categories.WebServer)
# FEATURES
esrgan : bool = Field(default=True, description="Enable/disable upscaling code", json_schema_extra=Categories.Features)
@ -334,7 +337,7 @@ class InvokeAIAppConfig(InvokeAISettings):
)
@classmethod
def get_config(cls, **kwargs) -> InvokeAIAppConfig:
def get_config(cls, **kwargs: Dict[str, Any]) -> InvokeAIAppConfig:
"""Return a singleton InvokeAIAppConfig configuration object."""
if (
cls.singleton_config is None
@ -383,17 +386,17 @@ class InvokeAIAppConfig(InvokeAISettings):
return db_dir / DB_FILE
@property
def model_conf_path(self) -> Optional[Path]:
def model_conf_path(self) -> Path:
"""Path to models configuration file."""
return self._resolve(self.conf_path)
@property
def legacy_conf_path(self) -> Optional[Path]:
def legacy_conf_path(self) -> Path:
"""Path to directory of legacy configuration files (e.g. v1-inference.yaml)."""
return self._resolve(self.legacy_conf_dir)
@property
def models_path(self) -> Optional[Path]:
def models_path(self) -> Path:
"""Path to the models directory."""
return self._resolve(self.models_dir)


@ -0,0 +1 @@
from .events_base import EventServiceBase # noqa F401


@ -1,5 +1,6 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from typing import Any, Optional
from invokeai.app.services.invocation_processor.invocation_processor_common import ProgressImage
@ -16,6 +17,7 @@ from invokeai.backend.model_management.models.base import BaseModelType, ModelTy
class EventServiceBase:
queue_event: str = "queue_event"
model_event: str = "model_event"
"""Basic event bus, to have an empty stand-in when not needed"""
@ -30,6 +32,13 @@ class EventServiceBase:
payload={"event": event_name, "data": payload},
)
def __emit_model_event(self, event_name: str, payload: dict) -> None:
payload["timestamp"] = get_timestamp()
self.dispatch(
event_name=EventServiceBase.model_event,
payload={"event": event_name, "data": payload},
)
# Define events here for every event in the system.
# This will make them easier to integrate until we find a schema generator.
def emit_generator_progress(
@ -313,3 +322,73 @@ class EventServiceBase:
event_name="queue_cleared",
payload={"queue_id": queue_id},
)
def emit_model_install_started(self, source: str) -> None:
"""
Emitted when an install job is started.
:param source: Source of the model; local path, repo_id or url
"""
self.__emit_model_event(
event_name="model_install_started",
payload={"source": source},
)
def emit_model_install_completed(self, source: str, key: str) -> None:
"""
Emitted when an install job is completed successfully.
:param source: Source of the model; local path, repo_id or url
:param key: Model config record key
"""
self.__emit_model_event(
event_name="model_install_completed",
payload={
"source": source,
"key": key,
},
)
def emit_model_install_progress(
self,
source: str,
current_bytes: int,
total_bytes: int,
) -> None:
"""
Emitted while the install job is in progress.
(Downloaded models only)
:param source: Source of the model
:param current_bytes: Number of bytes downloaded so far
:param total_bytes: Total bytes to download
"""
self.__emit_model_event(
event_name="model_install_progress",
payload={
"source": source,
"current_bytes": current_bytes,
"total_bytes": total_bytes,
},
)
def emit_model_install_error(
self,
source: str,
error_type: str,
error: str,
) -> None:
"""
Emitted when an install job encounters an exception.
:param source: Source of the model
:param error_type: Class name of the exception that raised the error
:param error: The error message or traceback
"""
self.__emit_model_event(
event_name="model_install_error",
payload={
"source": source,
"error_type": error_type,
"error": error,
},
)
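A minimal sketch of consuming these events, assuming dispatch(event_name, payload) is the single override point (as the emitters above suggest); the subclass name is hypothetical:

from invokeai.app.services.events import EventServiceBase

class ConsoleEventService(EventServiceBase):
    def dispatch(self, event_name: str, payload: dict) -> None:
        # model install events arrive on the "model_event" channel;
        # the inner event name and data are nested in the payload
        if event_name == EventServiceBase.model_event:
            print(f"[{payload['event']}] {payload['data']}")

bus = ConsoleEventService()
bus.emit_model_install_started("https://example.com/model.safetensors")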

View File

@ -32,101 +32,6 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
self._conn = db.conn
self._cursor = self._conn.cursor()
try:
self._lock.acquire()
self._create_tables()
self._conn.commit()
finally:
self._lock.release()
def _create_tables(self) -> None:
"""Creates the `images` table."""
# Create the `images` table.
self._cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS images (
image_name TEXT NOT NULL PRIMARY KEY,
-- This is an enum in python, unrestricted string here for flexibility
image_origin TEXT NOT NULL,
-- This is an enum in python, unrestricted string here for flexibility
image_category TEXT NOT NULL,
width INTEGER NOT NULL,
height INTEGER NOT NULL,
session_id TEXT,
node_id TEXT,
metadata TEXT,
is_intermediate BOOLEAN DEFAULT FALSE,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Soft delete, currently unused
deleted_at DATETIME
);
"""
)
self._cursor.execute("PRAGMA table_info(images)")
columns = [column[1] for column in self._cursor.fetchall()]
if "starred" not in columns:
self._cursor.execute(
"""--sql
ALTER TABLE images ADD COLUMN starred BOOLEAN DEFAULT FALSE;
"""
)
# Create the `images` table indices.
self._cursor.execute(
"""--sql
CREATE UNIQUE INDEX IF NOT EXISTS idx_images_image_name ON images(image_name);
"""
)
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_images_image_origin ON images(image_origin);
"""
)
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_images_image_category ON images(image_category);
"""
)
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_images_created_at ON images(created_at);
"""
)
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_images_starred ON images(starred);
"""
)
# Add trigger for `updated_at`.
self._cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_images_updated_at
AFTER UPDATE
ON images FOR EACH ROW
BEGIN
UPDATE images SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE image_name = old.image_name;
END;
"""
)
self._cursor.execute("PRAGMA table_info(images)")
columns = [column[1] for column in self._cursor.fetchall()]
if "has_workflow" not in columns:
self._cursor.execute(
"""--sql
ALTER TABLE images
ADD COLUMN has_workflow BOOLEAN DEFAULT FALSE;
"""
)
def get(self, image_name: str) -> ImageRecord:
try:
self._lock.acquire()

View File

@ -21,6 +21,7 @@ if TYPE_CHECKING:
from .invocation_stats.invocation_stats_base import InvocationStatsServiceBase
from .item_storage.item_storage_base import ItemStorageABC
from .latents_storage.latents_storage_base import LatentsStorageBase
from .model_install import ModelInstallServiceBase
from .model_manager.model_manager_base import ModelManagerServiceBase
from .model_records import ModelRecordServiceBase
from .names.names_base import NameServiceBase
@ -50,6 +51,7 @@ class InvocationServices:
logger: "Logger"
model_manager: "ModelManagerServiceBase"
model_records: "ModelRecordServiceBase"
model_install: "ModelInstallServiceBase"
processor: "InvocationProcessorABC"
performance_statistics: "InvocationStatsServiceBase"
queue: "InvocationQueueABC"
@ -77,6 +79,7 @@ class InvocationServices:
logger: "Logger",
model_manager: "ModelManagerServiceBase",
model_records: "ModelRecordServiceBase",
model_install: "ModelInstallServiceBase",
processor: "InvocationProcessorABC",
performance_statistics: "InvocationStatsServiceBase",
queue: "InvocationQueueABC",
@ -102,6 +105,7 @@ class InvocationServices:
self.logger = logger
self.model_manager = model_manager
self.model_records = model_records
self.model_install = model_install
self.processor = processor
self.performance_statistics = performance_statistics
self.queue = queue

View File

@ -0,0 +1,25 @@
"""Initialization file for model install service package."""
from .model_install_base import (
HFModelSource,
InstallStatus,
LocalModelSource,
ModelInstallJob,
ModelInstallServiceBase,
ModelSource,
UnknownInstallJobException,
URLModelSource,
)
from .model_install_default import ModelInstallService
__all__ = [
"ModelInstallServiceBase",
"ModelInstallService",
"InstallStatus",
"ModelInstallJob",
"UnknownInstallJobException",
"ModelSource",
"LocalModelSource",
"HFModelSource",
"URLModelSource",
]

View File

@ -0,0 +1,306 @@
import re
import traceback
from abc import ABC, abstractmethod
from enum import Enum
from pathlib import Path
from typing import Any, Dict, List, Literal, Optional, Union
from pydantic import BaseModel, Field, field_validator
from pydantic.networks import AnyHttpUrl
from typing_extensions import Annotated
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.events import EventServiceBase
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.model_records import ModelRecordServiceBase
from invokeai.backend.model_manager import AnyModelConfig
class InstallStatus(str, Enum):
"""State of an install job running in the background."""
WAITING = "waiting" # waiting to be dequeued
RUNNING = "running" # being processed
COMPLETED = "completed" # finished running
ERROR = "error" # terminated with an error message
class UnknownInstallJobException(Exception):
"""Raised when the status of an unknown job is requested."""
class StringLikeSource(BaseModel):
"""
Base class for model sources; implements functions that let the source be sorted and indexed.
These shenanigans let this stuff work:
source1 = LocalModelSource(path='C:/users/mort/foo.safetensors')
mydict = {source1: 'model 1'}
assert mydict['C:/users/mort/foo.safetensors'] == 'model 1'
assert mydict[LocalModelSource(path='C:/users/mort/foo.safetensors')] == 'model 1'
source2 = LocalModelSource(path=Path('C:/users/mort/foo.safetensors'))
assert source1 == source2
assert source1 == 'C:/users/mort/foo.safetensors'
"""
def __hash__(self) -> int:
"""Return hash of the path field, for indexing."""
return hash(str(self))
def __lt__(self, other: object) -> bool:
"""Return comparison of the stringified version, for sorting."""
return str(self) < str(other)
def __eq__(self, other: object) -> bool:
"""Return equality on the stringified version."""
if isinstance(other, Path):
return str(self) == other.as_posix()
else:
return str(self) == str(other)
class LocalModelSource(StringLikeSource):
"""A local file or directory path."""
path: str | Path
inplace: Optional[bool] = False
type: Literal["local"] = "local"
# these methods allow the source to be used in a string-like way,
# for example as an index into a dict
def __str__(self) -> str:
"""Return string version of path when string rep needed."""
return Path(self.path).as_posix()
class HFModelSource(StringLikeSource):
"""A HuggingFace repo_id, with optional variant and sub-folder."""
repo_id: str
variant: Optional[str] = None
subfolder: Optional[str | Path] = None
access_token: Optional[str] = None
type: Literal["hf"] = "hf"
@field_validator("repo_id")
@classmethod
def proper_repo_id(cls, v: str) -> str: # noqa D102
if not re.match(r"^([.\w-]+/[.\w-]+)$", v):
raise ValueError(f"{v}: invalid repo_id format")
return v
def __str__(self) -> str:
"""Return string version of repoid when string rep needed."""
base: str = self.repo_id
base += f":{self.subfolder}" if self.subfolder else ""
base += f" ({self.variant})" if self.variant else ""
return base
class URLModelSource(StringLikeSource):
"""A generic URL point to a checkpoint file."""
url: AnyHttpUrl
access_token: Optional[str] = None
type: Literal["generic_url"] = "generic_url"
def __str__(self) -> str:
"""Return string version of the url when string rep needed."""
return str(self.url)
ModelSource = Annotated[Union[LocalModelSource, HFModelSource, URLModelSource], Field(discriminator="type")]
class ModelInstallJob(BaseModel):
"""Object that tracks the current status of an install request."""
status: InstallStatus = Field(default=InstallStatus.WAITING, description="Current status of install process")
config_in: Dict[str, Any] = Field(
default_factory=dict, description="Configuration information (e.g. 'description') to apply to model."
)
config_out: Optional[AnyModelConfig] = Field(
default=None, description="After successful installation, this will hold the configuration object."
)
inplace: bool = Field(
default=False, description="Leave model in its current location; otherwise install under models directory"
)
source: ModelSource = Field(description="Source (URL, repo_id, or local path) of model")
local_path: Path = Field(description="Path to locally-downloaded model; may be the same as the source")
error_type: Optional[str] = Field(default=None, description="Class name of the exception that led to status==ERROR")
error: Optional[str] = Field(default=None, description="Error traceback") # noqa #501
def set_error(self, e: Exception) -> None:
"""Record the error and traceback from an exception."""
self.error_type = e.__class__.__name__
self.error = "".join(traceback.format_exception(e))
self.status = InstallStatus.ERROR
class ModelInstallServiceBase(ABC):
"""Abstract base class for InvokeAI model installation."""
@abstractmethod
def __init__(
self,
app_config: InvokeAIAppConfig,
record_store: ModelRecordServiceBase,
event_bus: Optional["EventServiceBase"] = None,
):
"""
Create ModelInstallService object.
:param app_config: Systemwide InvokeAIAppConfig.
:param record_store: Systemwide ModelRecordServiceBase store.
:param event_bus: InvokeAI event bus for reporting events to.
"""
def start(self, invoker: Invoker) -> None:
"""Call at InvokeAI startup time."""
self.sync_to_config()
@abstractmethod
def stop(self) -> None:
"""Stop the model install service. After this the objection can be safely deleted."""
@property
@abstractmethod
def app_config(self) -> InvokeAIAppConfig:
"""Return the appConfig object associated with the installer."""
@property
@abstractmethod
def record_store(self) -> ModelRecordServiceBase:
"""Return the ModelRecoreService object associated with the installer."""
@property
@abstractmethod
def event_bus(self) -> Optional[EventServiceBase]:
"""Return the event service base object associated with the installer."""
@abstractmethod
def register_path(
self,
model_path: Union[Path, str],
config: Optional[Dict[str, Any]] = None,
) -> str:
"""
Probe and register the model at model_path.
This keeps the model in its current location.
:param model_path: Filesystem Path to the model.
:param config: Dict of attributes that will override autoassigned values.
:returns id: The string ID of the registered model.
"""
@abstractmethod
def unregister(self, key: str) -> None:
"""Remove model with indicated key from the database."""
@abstractmethod
def delete(self, key: str) -> None:
"""Remove model with indicated key from the database. Delete its files only if they are within our models directory."""
@abstractmethod
def unconditionally_delete(self, key: str) -> None:
"""Remove model with indicated key from the database and unconditionally delete weight files from disk."""
@abstractmethod
def install_path(
self,
model_path: Union[Path, str],
config: Optional[Dict[str, Any]] = None,
) -> str:
"""
Probe, register and install the model in the models directory.
This moves the model from its current location into
the models directory handled by InvokeAI.
:param model_path: Filesystem Path to the model.
:param config: Dict of attributes that will override autoassigned values.
:returns id: The string ID of the registered model.
"""
@abstractmethod
def import_model(
self,
source: ModelSource,
config: Optional[Dict[str, Any]] = None,
) -> ModelInstallJob:
"""Install the indicated model.
:param source: ModelSource object
:param config: Optional dict. Any fields in this dict
will override corresponding autoassigned probe fields in the
model's config record. Use it to override
`name`, `description`, `base_type`, `model_type`, `format`,
`prediction_type`, `image_size`, and/or `ztsnr_training`.
This will download the model located at `source`,
probe it, and install it into the models directory.
This call is executed asynchronously in a separate
thread and will issue the following events on the event bus:
- model_install_started
- model_install_error
- model_install_completed
The `inplace` flag does not affect the behavior of downloaded
models, which are always moved into the `models` directory.
The call returns a ModelInstallJob object which can be
polled to learn the current status and/or error message.
Variants recognized by HuggingFace currently are:
1. onnx
2. openvino
3. fp16
4. None (usually returns fp32 model)
"""
@abstractmethod
def get_job(self, source: ModelSource) -> List[ModelInstallJob]:
"""Return the ModelInstallJob(s) corresponding to the provided source."""
@abstractmethod
def list_jobs(self) -> List[ModelInstallJob]: # noqa D102
"""
List active and complete install jobs.
"""
@abstractmethod
def prune_jobs(self) -> None:
"""Prune all completed and errored jobs."""
@abstractmethod
def wait_for_installs(self) -> List[ModelInstallJob]:
"""
Wait for all pending installs to complete.
This will block until all pending installs have
completed, been cancelled, or errored out. It will
block indefinitely if one or more jobs are in the
paused state.
It will return the current list of jobs.
"""
@abstractmethod
def scan_directory(self, scan_dir: Path, install: bool = False) -> List[str]:
"""
Recursively scan directory for new models and register or install them.
:param scan_dir: Path to the directory to scan.
:param install: Install if True, otherwise register in place.
:returns list of IDs: IDs of the models registered or installed
"""
@abstractmethod
def sync_to_config(self) -> None:
"""Synchronize models on disk to those in the model record database."""

View File

@ -0,0 +1,395 @@
"""Model installation class."""
import threading
from hashlib import sha256
from logging import Logger
from pathlib import Path
from queue import Queue
from random import randbytes
from shutil import copyfile, copytree, move, rmtree
from typing import Any, Dict, List, Optional, Set, Union
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.events import EventServiceBase
from invokeai.app.services.model_records import DuplicateModelException, ModelRecordServiceBase, UnknownModelException
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
InvalidModelConfigException,
ModelType,
)
from invokeai.backend.model_manager.hash import FastModelHash
from invokeai.backend.model_manager.probe import ModelProbe
from invokeai.backend.model_manager.search import ModelSearch
from invokeai.backend.util import Chdir, InvokeAILogger
from .model_install_base import (
InstallStatus,
LocalModelSource,
ModelInstallJob,
ModelInstallServiceBase,
ModelSource,
)
# marker that the queue is done and that thread should exit
STOP_JOB = ModelInstallJob(
source=LocalModelSource(path="stop"),
local_path=Path("/dev/null"),
)
class ModelInstallService(ModelInstallServiceBase):
"""class for InvokeAI model installation."""
_app_config: InvokeAIAppConfig
_record_store: ModelRecordServiceBase
_event_bus: Optional[EventServiceBase] = None
_install_queue: Queue[ModelInstallJob]
_install_jobs: List[ModelInstallJob]
_logger: Logger
_cached_model_paths: Set[Path]
_models_installed: Set[str]
def __init__(
self,
app_config: InvokeAIAppConfig,
record_store: ModelRecordServiceBase,
event_bus: Optional[EventServiceBase] = None,
):
"""
Initialize the installer object.
:param app_config: InvokeAIAppConfig object
:param record_store: Previously-opened ModelRecordService database
:param event_bus: Optional EventService object
"""
self._app_config = app_config
self._record_store = record_store
self._event_bus = event_bus
self._logger = InvokeAILogger.get_logger(name=self.__class__.__name__)
self._install_jobs = []
self._install_queue = Queue()
self._cached_model_paths = set()
self._models_installed = set()
self._start_installer_thread()
@property
def app_config(self) -> InvokeAIAppConfig: # noqa D102
return self._app_config
@property
def record_store(self) -> ModelRecordServiceBase: # noqa D102
return self._record_store
@property
def event_bus(self) -> Optional[EventServiceBase]: # noqa D102
return self._event_bus
def stop(self, *args, **kwargs) -> None:
"""Stop the install thread; after this the object can be deleted and garbage collected."""
self._install_queue.put(STOP_JOB)
def _start_installer_thread(self) -> None:
threading.Thread(target=self._install_next_item, daemon=True).start()
def _install_next_item(self) -> None:
done = False
while not done:
job = self._install_queue.get()
if job == STOP_JOB:
done = True
continue
assert job.local_path is not None
try:
self._signal_job_running(job)
if job.inplace:
key = self.register_path(job.local_path, job.config_in)
else:
key = self.install_path(job.local_path, job.config_in)
job.config_out = self.record_store.get_model(key)
self._signal_job_completed(job)
except (OSError, DuplicateModelException, InvalidModelConfigException) as excp:
self._signal_job_errored(job, excp)
finally:
self._install_queue.task_done()
self._logger.info("Install thread exiting")
def _signal_job_running(self, job: ModelInstallJob) -> None:
job.status = InstallStatus.RUNNING
self._logger.info(f"{job.source}: model installation started")
if self._event_bus:
self._event_bus.emit_model_install_started(str(job.source))
def _signal_job_completed(self, job: ModelInstallJob) -> None:
job.status = InstallStatus.COMPLETED
assert job.config_out
self._logger.info(
f"{job.source}: model installation completed. {job.local_path} registered key {job.config_out.key}"
)
if self._event_bus:
assert job.local_path is not None
assert job.config_out is not None
key = job.config_out.key
self._event_bus.emit_model_install_completed(str(job.source), key)
def _signal_job_errored(self, job: ModelInstallJob, excp: Exception) -> None:
job.set_error(excp)
self._logger.info(f"{job.source}: model installation encountered an exception: {job.error_type}")
if self._event_bus:
error_type = job.error_type
error = job.error
assert error_type is not None
assert error is not None
self._event_bus.emit_model_install_error(str(job.source), error_type, error)
def register_path(
self,
model_path: Union[Path, str],
config: Optional[Dict[str, Any]] = None,
) -> str: # noqa D102
model_path = Path(model_path)
config = config or {}
if config.get("source") is None:
config["source"] = model_path.resolve().as_posix()
return self._register(model_path, config)
def install_path(
self,
model_path: Union[Path, str],
config: Optional[Dict[str, Any]] = None,
) -> str: # noqa D102
model_path = Path(model_path)
config = config or {}
if config.get("source") is None:
config["source"] = model_path.resolve().as_posix()
info: AnyModelConfig = self._probe_model(Path(model_path), config)
old_hash = info.original_hash
dest_path = self.app_config.models_path / info.base.value / info.type.value / model_path.name
new_path = self._copy_model(model_path, dest_path)
new_hash = FastModelHash.hash(new_path)
assert new_hash == old_hash, f"{model_path}: Model hash changed during installation, possibly corrupted."
return self._register(
new_path,
config,
info,
)
def import_model(
self,
source: ModelSource,
config: Optional[Dict[str, Any]] = None,
) -> ModelInstallJob: # noqa D102
if not config:
config = {}
# Installing a local path
if isinstance(source, LocalModelSource) and Path(source.path).exists(): # a path that is already on disk
job = ModelInstallJob(
source=source,
config_in=config,
local_path=Path(source.path),
)
self._install_jobs.append(job)
self._install_queue.put(job)
return job
else: # here is where we'd download a URL or repo_id. Implementation pending download queue.
raise UnknownModelException("File or directory not found")
def list_jobs(self) -> List[ModelInstallJob]: # noqa D102
return self._install_jobs
def get_job(self, source: ModelSource) -> List[ModelInstallJob]: # noqa D102
return [x for x in self._install_jobs if x.source == source]
def wait_for_installs(self) -> List[ModelInstallJob]: # noqa D102
self._install_queue.join()
return self._install_jobs
def prune_jobs(self) -> None:
"""Prune all completed and errored jobs."""
unfinished_jobs = [
x for x in self._install_jobs if x.status not in [InstallStatus.COMPLETED, InstallStatus.ERROR]
]
self._install_jobs = unfinished_jobs
def sync_to_config(self) -> None:
"""Synchronize models on disk to those in the config record store database."""
self._scan_models_directory()
if autoimport := self._app_config.autoimport_dir:
self._logger.info("Scanning autoimport directory for new models")
installed = self.scan_directory(self._app_config.root_path / autoimport)
self._logger.info(f"{len(installed)} new models registered")
self._logger.info("Model installer (re)initialized")
def scan_directory(self, scan_dir: Path, install: bool = False) -> List[str]: # noqa D102
self._cached_model_paths = {Path(x.path) for x in self.record_store.all_models()}
callback = self._scan_install if install else self._scan_register
search = ModelSearch(on_model_found=callback)
self._models_installed: Set[str] = set()
search.search(scan_dir)
return list(self._models_installed)
def _scan_models_directory(self) -> None:
"""
Scan the models directory for new and missing models.
New models will be added to the storage backend. Missing models
will be deleted.
"""
defunct_models = set()
installed = set()
with Chdir(self._app_config.models_path):
self._logger.info("Checking for models that have been moved or deleted from disk")
for model_config in self.record_store.all_models():
path = Path(model_config.path)
if not path.exists():
self._logger.info(f"{model_config.name}: path {path.as_posix()} no longer exists. Unregistering")
defunct_models.add(model_config.key)
for key in defunct_models:
self.unregister(key)
self._logger.info(f"Scanning {self._app_config.models_path} for new and orphaned models")
for cur_base_model in BaseModelType:
for cur_model_type in ModelType:
models_dir = Path(cur_base_model.value, cur_model_type.value)
installed.update(self.scan_directory(models_dir))
self._logger.info(f"{len(installed)} new models registered; {len(defunct_models)} unregistered")
def _sync_model_path(self, key: str, ignore_hash_change: bool = False) -> AnyModelConfig:
"""
Move model into the location indicated by its base type, type and name.
Call this after updating a model's attributes in order to move
the model's path into the location indicated by its base type, type and
name. Applies only to models whose paths are within the root `models_dir`
directory.
May raise an UnknownModelException.
"""
model = self.record_store.get_model(key)
old_path = Path(model.path)
models_dir = self.app_config.models_path
if not old_path.is_relative_to(models_dir):
return model
new_path = models_dir / model.base.value / model.type.value / model.name
self._logger.info(f"Moving {model.name} to {new_path}.")
new_path = self._move_model(old_path, new_path)
new_hash = FastModelHash.hash(new_path)
model.path = new_path.relative_to(models_dir).as_posix()
if model.current_hash != new_hash:
assert (
ignore_hash_change
), f"{model.name}: Model hash changed during installation, model is possibly corrupted"
model.current_hash = new_hash
self._logger.info(f"Model has new hash {model.current_hash}, but will continue to be identified by {key}")
self.record_store.update_model(key, model)
return model
def _scan_register(self, model: Path) -> bool:
if model in self._cached_model_paths:
return True
try:
id = self.register_path(model)
self._sync_model_path(id) # possibly move it to right place in `models`
self._logger.info(f"Registered {model.name} with id {id}")
self._models_installed.add(id)
except DuplicateModelException:
pass
return True
def _scan_install(self, model: Path) -> bool:
if model in self._cached_model_paths:
return True
try:
id = self.install_path(model)
self._logger.info(f"Installed {model} with id {id}")
self._models_installed.add(id)
except DuplicateModelException:
pass
return True
def unregister(self, key: str) -> None: # noqa D102
self.record_store.del_model(key)
def delete(self, key: str) -> None: # noqa D102
"""Unregister the model. Delete its files only if they are within our models directory."""
model = self.record_store.get_model(key)
models_dir = self.app_config.models_path
model_path = models_dir / model.path
if model_path.is_relative_to(models_dir):
self.unconditionally_delete(key)
else:
self.unregister(key)
def unconditionally_delete(self, key: str) -> None: # noqa D102
model = self.record_store.get_model(key)
path = self.app_config.models_path / model.path
if path.is_dir():
rmtree(path)
else:
path.unlink()
self.unregister(key)
def _copy_model(self, old_path: Path, new_path: Path) -> Path:
if old_path == new_path:
return old_path
new_path.parent.mkdir(parents=True, exist_ok=True)
if old_path.is_dir():
copytree(old_path, new_path)
else:
copyfile(old_path, new_path)
return new_path
def _move_model(self, old_path: Path, new_path: Path) -> Path:
if old_path == new_path:
return old_path
new_path.parent.mkdir(parents=True, exist_ok=True)
# if path already exists then we jigger the name to make it unique
counter: int = 1
while new_path.exists():
path = new_path.with_stem(new_path.stem + f"_{counter:02d}")
if not path.exists():
new_path = path
counter += 1
move(old_path, new_path)
return new_path
def _probe_model(self, model_path: Path, config: Optional[Dict[str, Any]] = None) -> AnyModelConfig:
info: AnyModelConfig = ModelProbe.probe(Path(model_path))
if config: # used to override probe fields
for key, value in config.items():
setattr(info, key, value)
return info
def _create_key(self) -> str:
return sha256(randbytes(100)).hexdigest()[0:32]
def _register(
self, model_path: Path, config: Optional[Dict[str, Any]] = None, info: Optional[AnyModelConfig] = None
) -> str:
info = info or ModelProbe.probe(model_path, config)
key = self._create_key()
model_path = model_path.absolute()
if model_path.is_relative_to(self.app_config.models_path):
model_path = model_path.relative_to(self.app_config.models_path)
info.path = model_path.as_posix()
# add 'main' specific fields
if hasattr(info, "config"):
# make config relative to our root
legacy_conf = (self.app_config.root_dir / self.app_config.legacy_conf_dir / info.config).resolve()
info.config = legacy_conf.relative_to(self.app_config.root_dir).as_posix()
self.record_store.add_model(key, info)
return key
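Wiring the concrete service up looks roughly like the sketch below; db is assumed to be an already-initialized SqliteDatabase, and the model path is illustrative:

from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.model_records import ModelRecordServiceSQL

config = InvokeAIAppConfig.get_config()
record_store = ModelRecordServiceSQL(db)       # assumes an open SqliteDatabase
installer = ModelInstallService(app_config=config, record_store=record_store)
installer.sync_to_config()                     # reconcile disk with the database
key = installer.register_path("/data/models/foo.safetensors")  # leave file in place
installer.stop()                               # enqueue STOP_JOB; worker thread exits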

View File

@ -6,3 +6,11 @@ from .model_records_base import ( # noqa F401
UnknownModelException,
)
from .model_records_sql import ModelRecordServiceSQL # noqa F401
__all__ = [
"ModelRecordServiceBase",
"ModelRecordServiceSQL",
"DuplicateModelException",
"InvalidModelException",
"UnknownModelException",
]

View File

@ -7,10 +7,7 @@ from abc import ABC, abstractmethod
from pathlib import Path
from typing import List, Optional, Union
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelType
# should match the InvokeAI version when this is first released.
CONFIG_FILE_VERSION = "3.2.0"
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelFormat, ModelType
class DuplicateModelException(Exception):
@ -32,12 +29,6 @@ class ConfigFileVersionMismatchException(Exception):
class ModelRecordServiceBase(ABC):
"""Abstract base class for storage and retrieval of model configs."""
@property
@abstractmethod
def version(self) -> str:
"""Return the config file/database schema version."""
pass
@abstractmethod
def add_model(self, key: str, config: Union[dict, AnyModelConfig]) -> AnyModelConfig:
"""
@ -115,6 +106,7 @@ class ModelRecordServiceBase(ABC):
model_name: Optional[str] = None,
base_model: Optional[BaseModelType] = None,
model_type: Optional[ModelType] = None,
model_format: Optional[ModelFormat] = None,
) -> List[AnyModelConfig]:
"""
Return models matching name, base, type and/or format.
@ -122,6 +114,7 @@ class ModelRecordServiceBase(ABC):
:param model_name: Filter by name of model (optional)
:param base_model: Filter by base model (optional)
:param model_type: Filter by type of model (optional)
:param model_format: Filter by model format (e.g. "diffusers") (optional)
If none of the optional filters are passed, will return all
models in the database.
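A hypothetical call against the new filter (the method name is not visible in this hunk; search_by_attr is assumed, matching the signature shown above, and store stands in for an open record store):

from invokeai.backend.model_manager.config import BaseModelType, ModelFormat

checkpoints = store.search_by_attr(model_format=ModelFormat("checkpoint"))
sdxl_diffusers = store.search_by_attr(
    base_model=BaseModelType("sdxl"),
    model_format=ModelFormat("diffusers"),
)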

View File

@ -49,12 +49,12 @@ from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
ModelConfigFactory,
ModelFormat,
ModelType,
)
from ..shared.sqlite.sqlite_database import SqliteDatabase
from .model_records_base import (
CONFIG_FILE_VERSION,
DuplicateModelException,
ModelRecordServiceBase,
UnknownModelException,
@ -78,85 +78,6 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
self._db = db
self._cursor = self._db.conn.cursor()
with self._db.lock:
# Enable foreign keys
self._db.conn.execute("PRAGMA foreign_keys = ON;")
self._create_tables()
self._db.conn.commit()
assert (
str(self.version) == CONFIG_FILE_VERSION
), f"Model config version {self.version} does not match expected version {CONFIG_FILE_VERSION}"
def _create_tables(self) -> None:
"""Create sqlite3 tables."""
# model_config table breaks out the fields that are common to all config objects
# and puts class-specific ones in a serialized json object
self._cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS model_config (
id TEXT NOT NULL PRIMARY KEY,
-- The next 3 fields are enums in python, unrestricted string here
base TEXT NOT NULL,
type TEXT NOT NULL,
name TEXT NOT NULL,
path TEXT NOT NULL,
original_hash TEXT, -- could be null
-- Serialized JSON representation of the whole config object,
-- which will contain additional fields from subclasses
config TEXT NOT NULL,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- unique constraint on combo of name, base and type
UNIQUE(name, base, type)
);
"""
)
# metadata table
self._cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS model_manager_metadata (
metadata_key TEXT NOT NULL PRIMARY KEY,
metadata_value TEXT NOT NULL
);
"""
)
# Add trigger for `updated_at`.
self._cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS model_config_updated_at
AFTER UPDATE
ON model_config FOR EACH ROW
BEGIN
UPDATE model_config SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE id = old.id;
END;
"""
)
# Add indexes for searchable fields
for stmt in [
"CREATE INDEX IF NOT EXISTS base_index ON model_config(base);",
"CREATE INDEX IF NOT EXISTS type_index ON model_config(type);",
"CREATE INDEX IF NOT EXISTS name_index ON model_config(name);",
"CREATE UNIQUE INDEX IF NOT EXISTS path_index ON model_config(path);",
]:
self._cursor.execute(stmt)
# Add our version to the metadata table
self._cursor.execute(
"""--sql
INSERT OR IGNORE into model_manager_metadata (
metadata_key,
metadata_value
)
VALUES (?,?);
""",
("version", CONFIG_FILE_VERSION),
)
def add_model(self, key: str, config: Union[dict, AnyModelConfig]) -> AnyModelConfig:
"""
Add a model to the database.
@ -175,21 +96,13 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
"""--sql
INSERT INTO model_config (
id,
base,
type,
name,
path,
original_hash,
config
)
VALUES (?,?,?,?,?,?,?);
VALUES (?,?,?);
""",
(
key,
record.base,
record.type,
record.name,
record.path,
record.original_hash,
json_serialized,
),
@ -214,22 +127,6 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
return self.get_model(key)
@property
def version(self) -> str:
"""Return the version of the database schema."""
with self._db.lock:
self._cursor.execute(
"""--sql
SELECT metadata_value FROM model_manager_metadata
WHERE metadata_key=?;
""",
("version",),
)
rows = self._cursor.fetchone()
if not rows:
raise KeyError("Models database does not have metadata key 'version'")
return rows[0]
def del_model(self, key: str) -> None:
"""
Delete a model.
@ -269,14 +166,11 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
self._cursor.execute(
"""--sql
UPDATE model_config
SET base=?,
type=?,
name=?,
path=?,
SET
config=?
WHERE id=?;
""",
(record.base, record.type, record.name, record.path, json_serialized, key),
(json_serialized, key),
)
if self._cursor.rowcount == 0:
raise UnknownModelException("model not found")
@ -332,6 +226,7 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
model_name: Optional[str] = None,
base_model: Optional[BaseModelType] = None,
model_type: Optional[ModelType] = None,
model_format: Optional[ModelFormat] = None,
) -> List[AnyModelConfig]:
"""
Return models matching name, base, type and/or format.
@ -339,6 +234,7 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
:param model_name: Filter by name of model (optional)
:param base_model: Filter by base model (optional)
:param model_type: Filter by type of model (optional)
:param model_format: Filter by model format (e.g. "diffusers") (optional)
If none of the optional filters are passed, will return all
models in the database.
@ -355,6 +251,9 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
if model_type:
where_clause.append("type=?")
bindings.append(model_type)
if model_format:
where_clause.append("format=?")
bindings.append(model_format)
where = f"WHERE {' AND '.join(where_clause)}" if where_clause else ""
with self._db.lock:
self._cursor.execute(
@ -374,7 +273,7 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
self._cursor.execute(
"""--sql
SELECT config FROM model_config
WHERE model_path=?;
WHERE path=?;
""",
(str(path),),
)

View File

@ -50,7 +50,6 @@ class SqliteSessionQueue(SessionQueueBase):
self.__lock = db.lock
self.__conn = db.conn
self.__cursor = self.__conn.cursor()
self._create_tables()
def _match_event_name(self, event: FastAPIEvent, match_in: list[str]) -> bool:
return event[1]["event"] in match_in
@ -98,123 +97,6 @@ class SqliteSessionQueue(SessionQueueBase):
except SessionQueueItemNotFoundError:
return
def _create_tables(self) -> None:
"""Creates the session queue tables, indicies, and triggers"""
try:
self.__lock.acquire()
self.__cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS session_queue (
item_id INTEGER PRIMARY KEY AUTOINCREMENT, -- used for ordering, cursor pagination
batch_id TEXT NOT NULL, -- identifier of the batch this queue item belongs to
queue_id TEXT NOT NULL, -- identifier of the queue this queue item belongs to
session_id TEXT NOT NULL UNIQUE, -- duplicated data from the session column, for ease of access
field_values TEXT, -- NULL if no values are associated with this queue item
session TEXT NOT NULL, -- the session to be executed
status TEXT NOT NULL DEFAULT 'pending', -- the status of the queue item, one of 'pending', 'in_progress', 'completed', 'failed', 'canceled'
priority INTEGER NOT NULL DEFAULT 0, -- the priority, higher is more important
error TEXT, -- any errors associated with this queue item
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')), -- updated via trigger
started_at DATETIME, -- updated via trigger
completed_at DATETIME -- updated via trigger, completed items are cleaned up on application startup
-- Ideally this is a FK, but graph_executions uses INSERT OR REPLACE, and REPLACE triggers the ON DELETE CASCADE...
-- FOREIGN KEY (session_id) REFERENCES graph_executions (id) ON DELETE CASCADE
);
"""
)
self.__cursor.execute(
"""--sql
CREATE UNIQUE INDEX IF NOT EXISTS idx_session_queue_item_id ON session_queue(item_id);
"""
)
self.__cursor.execute(
"""--sql
CREATE UNIQUE INDEX IF NOT EXISTS idx_session_queue_session_id ON session_queue(session_id);
"""
)
self.__cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_session_queue_batch_id ON session_queue(batch_id);
"""
)
self.__cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_session_queue_created_priority ON session_queue(priority);
"""
)
self.__cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_session_queue_created_status ON session_queue(status);
"""
)
self.__cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_session_queue_completed_at
AFTER UPDATE OF status ON session_queue
FOR EACH ROW
WHEN
NEW.status = 'completed'
OR NEW.status = 'failed'
OR NEW.status = 'canceled'
BEGIN
UPDATE session_queue
SET completed_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE item_id = NEW.item_id;
END;
"""
)
self.__cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_session_queue_started_at
AFTER UPDATE OF status ON session_queue
FOR EACH ROW
WHEN
NEW.status = 'in_progress'
BEGIN
UPDATE session_queue
SET started_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE item_id = NEW.item_id;
END;
"""
)
self.__cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_session_queue_updated_at
AFTER UPDATE
ON session_queue FOR EACH ROW
BEGIN
UPDATE session_queue
SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE item_id = old.item_id;
END;
"""
)
self.__cursor.execute("PRAGMA table_info(session_queue)")
columns = [column[1] for column in self.__cursor.fetchall()]
if "workflow" not in columns:
self.__cursor.execute(
"""--sql
ALTER TABLE session_queue ADD COLUMN workflow TEXT;
"""
)
self.__conn.commit()
except Exception:
self.__conn.rollback()
raise
finally:
self.__lock.release()
def _set_in_progress_to_canceled(self) -> None:
"""
Sets all in_progress queue items to canceled. Run on app startup, not associated with any queue.

View File

@ -3,45 +3,65 @@ import threading
from logging import Logger
from pathlib import Path
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.shared.sqlite.sqlite_common import sqlite_memory
class SqliteDatabase:
def __init__(self, config: InvokeAIAppConfig, logger: Logger):
self._logger = logger
self._config = config
"""
Manages a connection to an SQLite database.
if self._config.use_memory_db:
self.db_path = sqlite_memory
logger.info("Using in-memory database")
:param db_path: Path to the database file. If None, an in-memory database is used.
:param logger: Logger to use for logging.
:param verbose: Whether to log SQL statements. Provides `logger.debug` as the SQLite trace callback.
This is a light wrapper around the `sqlite3` module, providing a few conveniences:
- The database file is written to disk if it does not exist.
- Foreign key constraints are enabled by default.
- The connection is configured to use the `sqlite3.Row` row factory.
In addition to the constructor args, the instance provides the following attributes and methods:
- `conn`: A `sqlite3.Connection` object. Note that the connection must never be closed if the database is in-memory.
- `lock`: A shared re-entrant lock, used to approximate thread safety.
- `clean()`: Runs the SQL `VACUUM;` command and reports on the freed space.
"""
def __init__(self, db_path: Path | None, logger: Logger, verbose: bool = False) -> None:
"""Initializes the database. This is used internally by the class constructor."""
self.logger = logger
self.db_path = db_path
self.verbose = verbose
if not self.db_path:
logger.info("Initializing in-memory database")
else:
db_path = self._config.db_path
db_path.parent.mkdir(parents=True, exist_ok=True)
self.db_path = str(db_path)
self._logger.info(f"Using database at {self.db_path}")
self.db_path.parent.mkdir(parents=True, exist_ok=True)
self.logger.info(f"Initializing database at {self.db_path}")
self.conn = sqlite3.connect(self.db_path, check_same_thread=False)
self.conn = sqlite3.connect(database=self.db_path or sqlite_memory, check_same_thread=False)
self.lock = threading.RLock()
self.conn.row_factory = sqlite3.Row
if self._config.log_sql:
self.conn.set_trace_callback(self._logger.debug)
if self.verbose:
self.conn.set_trace_callback(self.logger.debug)
self.conn.execute("PRAGMA foreign_keys = ON;")
def clean(self) -> None:
"""
Cleans the database by running the VACUUM command, reporting on the freed space.
"""
# No need to clean in-memory database
if not self.db_path:
return
with self.lock:
try:
if self.db_path == sqlite_memory:
return
initial_db_size = Path(self.db_path).stat().st_size
self.conn.execute("VACUUM;")
self.conn.commit()
final_db_size = Path(self.db_path).stat().st_size
freed_space_in_mb = round((initial_db_size - final_db_size) / 1024 / 1024, 2)
if freed_space_in_mb > 0:
self._logger.info(f"Cleaned database (freed {freed_space_in_mb}MB)")
self.logger.info(f"Cleaned database (freed {freed_space_in_mb}MB)")
except Exception as e:
self._logger.error(f"Error cleaning database: {e}")
self.logger.error(f"Error cleaning database: {e}")
raise
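Constructing the reworked class, per the new signature (paths are illustrative):

import logging
from pathlib import Path

logger = logging.getLogger("db")
mem_db = SqliteDatabase(db_path=None, logger=logger)    # in-memory
disk_db = SqliteDatabase(db_path=Path("/tmp/invokeai.db"), logger=logger, verbose=True)
with disk_db.lock:                                      # shared re-entrant lock
    disk_db.conn.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER);")
    disk_db.conn.commit()
disk_db.clean()                                         # VACUUM and report freed space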

View File

@ -0,0 +1,32 @@
from logging import Logger
from invokeai.app.services.config.config_default import InvokeAIAppConfig
from invokeai.app.services.image_files.image_files_base import ImageFileStorageBase
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_1 import build_migration_1
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_2 import build_migration_2
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_impl import SqliteMigrator
def init_db(config: InvokeAIAppConfig, logger: Logger, image_files: ImageFileStorageBase) -> SqliteDatabase:
"""
Initializes the SQLite database.
:param config: The app config
:param logger: The logger
:param image_files: The image files service (used by migration 2)
This function:
- Instantiates a :class:`SqliteDatabase`
- Instantiates a :class:`SqliteMigrator` and registers all migrations
- Runs all migrations
"""
db_path = None if config.use_memory_db else config.db_path
db = SqliteDatabase(db_path=db_path, logger=logger, verbose=config.log_sql)
migrator = SqliteMigrator(db=db)
migrator.register_migration(build_migration_1())
migrator.register_migration(build_migration_2(image_files=image_files, logger=logger))
migrator.run_migrations()
return db
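At startup this is called roughly as follows (a sketch; image_files stands in for the app's ImageFileStorageBase instance, and the logger setup is an assumption):

import logging

config = InvokeAIAppConfig.get_config()
logger = logging.getLogger("invokeai")
db = init_db(config=config, logger=logger, image_files=image_files)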

View File

@ -0,0 +1,372 @@
import sqlite3
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration
class Migration1Callback:
def __call__(self, cursor: sqlite3.Cursor) -> None:
"""Migration callback for database version 1."""
self._create_board_images(cursor)
self._create_boards(cursor)
self._create_images(cursor)
self._create_model_config(cursor)
self._create_session_queue(cursor)
self._create_workflow_images(cursor)
self._create_workflows(cursor)
def _create_board_images(self, cursor: sqlite3.Cursor) -> None:
"""Creates the `board_images` table, indices and triggers."""
tables = [
"""--sql
CREATE TABLE IF NOT EXISTS board_images (
board_id TEXT NOT NULL,
image_name TEXT NOT NULL,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Soft delete, currently unused
deleted_at DATETIME,
-- enforce one-to-many relationship between boards and images using PK
-- (we can extend this to many-to-many later)
PRIMARY KEY (image_name),
FOREIGN KEY (board_id) REFERENCES boards (board_id) ON DELETE CASCADE,
FOREIGN KEY (image_name) REFERENCES images (image_name) ON DELETE CASCADE
);
"""
]
indices = [
"CREATE INDEX IF NOT EXISTS idx_board_images_board_id ON board_images (board_id);",
"CREATE INDEX IF NOT EXISTS idx_board_images_board_id_created_at ON board_images (board_id, created_at);",
]
triggers = [
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_board_images_updated_at
AFTER UPDATE
ON board_images FOR EACH ROW
BEGIN
UPDATE board_images SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE board_id = old.board_id AND image_name = old.image_name;
END;
"""
]
for stmt in tables + indices + triggers:
cursor.execute(stmt)
def _create_boards(self, cursor: sqlite3.Cursor) -> None:
"""Creates the `boards` table, indices and triggers."""
tables = [
"""--sql
CREATE TABLE IF NOT EXISTS boards (
board_id TEXT NOT NULL PRIMARY KEY,
board_name TEXT NOT NULL,
cover_image_name TEXT,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Soft delete, currently unused
deleted_at DATETIME,
FOREIGN KEY (cover_image_name) REFERENCES images (image_name) ON DELETE SET NULL
);
"""
]
indices = ["CREATE INDEX IF NOT EXISTS idx_boards_created_at ON boards (created_at);"]
triggers = [
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_boards_updated_at
AFTER UPDATE
ON boards FOR EACH ROW
BEGIN
UPDATE boards SET updated_at = current_timestamp
WHERE board_id = old.board_id;
END;
"""
]
for stmt in tables + indices + triggers:
cursor.execute(stmt)
def _create_images(self, cursor: sqlite3.Cursor) -> None:
"""Creates the `images` table, indices and triggers. Adds the `starred` column."""
tables = [
"""--sql
CREATE TABLE IF NOT EXISTS images (
image_name TEXT NOT NULL PRIMARY KEY,
-- This is an enum in python, unrestricted string here for flexibility
image_origin TEXT NOT NULL,
-- This is an enum in python, unrestricted string here for flexibility
image_category TEXT NOT NULL,
width INTEGER NOT NULL,
height INTEGER NOT NULL,
session_id TEXT,
node_id TEXT,
metadata TEXT,
is_intermediate BOOLEAN DEFAULT FALSE,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Soft delete, currently unused
deleted_at DATETIME
);
"""
]
indices = [
"CREATE UNIQUE INDEX IF NOT EXISTS idx_images_image_name ON images(image_name);",
"CREATE INDEX IF NOT EXISTS idx_images_image_origin ON images(image_origin);",
"CREATE INDEX IF NOT EXISTS idx_images_image_category ON images(image_category);",
"CREATE INDEX IF NOT EXISTS idx_images_created_at ON images(created_at);",
]
triggers = [
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_images_updated_at
AFTER UPDATE
ON images FOR EACH ROW
BEGIN
UPDATE images SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE image_name = old.image_name;
END;
"""
]
# Add the 'starred' column to `images` if it doesn't exist
cursor.execute("PRAGMA table_info(images)")
columns = [column[1] for column in cursor.fetchall()]
if "starred" not in columns:
tables.append("ALTER TABLE images ADD COLUMN starred BOOLEAN DEFAULT FALSE;")
indices.append("CREATE INDEX IF NOT EXISTS idx_images_starred ON images(starred);")
for stmt in tables + indices + triggers:
cursor.execute(stmt)
def _create_model_config(self, cursor: sqlite3.Cursor) -> None:
"""Creates the `model_config` table, `model_manager_metadata` table, indices and triggers."""
tables = [
"""--sql
CREATE TABLE IF NOT EXISTS model_config (
id TEXT NOT NULL PRIMARY KEY,
-- The next 3 fields are enums in python, unrestricted string here
base TEXT NOT NULL,
type TEXT NOT NULL,
name TEXT NOT NULL,
path TEXT NOT NULL,
original_hash TEXT, -- could be null
-- Serialized JSON representation of the whole config object,
-- which will contain additional fields from subclasses
config TEXT NOT NULL,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- unique constraint on combo of name, base and type
UNIQUE(name, base, type)
);
""",
"""--sql
CREATE TABLE IF NOT EXISTS model_manager_metadata (
metadata_key TEXT NOT NULL PRIMARY KEY,
metadata_value TEXT NOT NULL
);
""",
]
# Add trigger for `updated_at`.
triggers = [
"""--sql
CREATE TRIGGER IF NOT EXISTS model_config_updated_at
AFTER UPDATE
ON model_config FOR EACH ROW
BEGIN
UPDATE model_config SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE id = old.id;
END;
"""
]
# Add indexes for searchable fields
indices = [
"CREATE INDEX IF NOT EXISTS base_index ON model_config(base);",
"CREATE INDEX IF NOT EXISTS type_index ON model_config(type);",
"CREATE INDEX IF NOT EXISTS name_index ON model_config(name);",
"CREATE UNIQUE INDEX IF NOT EXISTS path_index ON model_config(path);",
]
for stmt in tables + indices + triggers:
cursor.execute(stmt)
def _create_session_queue(self, cursor: sqlite3.Cursor) -> None:
tables = [
"""--sql
CREATE TABLE IF NOT EXISTS session_queue (
item_id INTEGER PRIMARY KEY AUTOINCREMENT, -- used for ordering, cursor pagination
batch_id TEXT NOT NULL, -- identifier of the batch this queue item belongs to
queue_id TEXT NOT NULL, -- identifier of the queue this queue item belongs to
session_id TEXT NOT NULL UNIQUE, -- duplicated data from the session column, for ease of access
field_values TEXT, -- NULL if no values are associated with this queue item
session TEXT NOT NULL, -- the session to be executed
status TEXT NOT NULL DEFAULT 'pending', -- the status of the queue item, one of 'pending', 'in_progress', 'completed', 'failed', 'canceled'
priority INTEGER NOT NULL DEFAULT 0, -- the priority, higher is more important
error TEXT, -- any errors associated with this queue item
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')), -- updated via trigger
started_at DATETIME, -- updated via trigger
completed_at DATETIME -- updated via trigger, completed items are cleaned up on application startup
-- Ideally this is a FK, but graph_executions uses INSERT OR REPLACE, and REPLACE triggers the ON DELETE CASCADE...
-- FOREIGN KEY (session_id) REFERENCES graph_executions (id) ON DELETE CASCADE
);
"""
]
indices = [
"CREATE UNIQUE INDEX IF NOT EXISTS idx_session_queue_item_id ON session_queue(item_id);",
"CREATE UNIQUE INDEX IF NOT EXISTS idx_session_queue_session_id ON session_queue(session_id);",
"CREATE INDEX IF NOT EXISTS idx_session_queue_batch_id ON session_queue(batch_id);",
"CREATE INDEX IF NOT EXISTS idx_session_queue_created_priority ON session_queue(priority);",
"CREATE INDEX IF NOT EXISTS idx_session_queue_created_status ON session_queue(status);",
]
triggers = [
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_session_queue_completed_at
AFTER UPDATE OF status ON session_queue
FOR EACH ROW
WHEN
NEW.status = 'completed'
OR NEW.status = 'failed'
OR NEW.status = 'canceled'
BEGIN
UPDATE session_queue
SET completed_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE item_id = NEW.item_id;
END;
""",
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_session_queue_started_at
AFTER UPDATE OF status ON session_queue
FOR EACH ROW
WHEN
NEW.status = 'in_progress'
BEGIN
UPDATE session_queue
SET started_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE item_id = NEW.item_id;
END;
""",
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_session_queue_updated_at
AFTER UPDATE
ON session_queue FOR EACH ROW
BEGIN
UPDATE session_queue
SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE item_id = old.item_id;
END;
""",
]
for stmt in tables + indices + triggers:
cursor.execute(stmt)
def _create_workflow_images(self, cursor: sqlite3.Cursor) -> None:
tables = [
"""--sql
CREATE TABLE IF NOT EXISTS workflow_images (
workflow_id TEXT NOT NULL,
image_name TEXT NOT NULL,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Soft delete, currently unused
deleted_at DATETIME,
-- enforce one-to-many relationship between workflows and images using PK
-- (we can extend this to many-to-many later)
PRIMARY KEY (image_name),
FOREIGN KEY (workflow_id) REFERENCES workflows (workflow_id) ON DELETE CASCADE,
FOREIGN KEY (image_name) REFERENCES images (image_name) ON DELETE CASCADE
);
"""
]
indices = [
"CREATE INDEX IF NOT EXISTS idx_workflow_images_workflow_id ON workflow_images (workflow_id);",
"CREATE INDEX IF NOT EXISTS idx_workflow_images_workflow_id_created_at ON workflow_images (workflow_id, created_at);",
]
triggers = [
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_workflow_images_updated_at
AFTER UPDATE
ON workflow_images FOR EACH ROW
BEGIN
UPDATE workflow_images SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE workflow_id = old.workflow_id AND image_name = old.image_name;
END;
"""
]
for stmt in tables + indices + triggers:
cursor.execute(stmt)
def _create_workflows(self, cursor: sqlite3.Cursor) -> None:
tables = [
"""--sql
CREATE TABLE IF NOT EXISTS workflows (
workflow TEXT NOT NULL,
workflow_id TEXT GENERATED ALWAYS AS (json_extract(workflow, '$.id')) VIRTUAL NOT NULL UNIQUE, -- gets implicit index
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')) -- updated via trigger
);
"""
]
triggers = [
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_workflows_updated_at
AFTER UPDATE
ON workflows FOR EACH ROW
BEGIN
UPDATE workflows
SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE workflow_id = old.workflow_id;
END;
"""
]
for stmt in tables + triggers:
cursor.execute(stmt)
def build_migration_1() -> Migration:
"""
Builds the migration from database version 0 (init) to 1.
This migration represents the state of the database circa InvokeAI v3.4.0, which was the last
version to not use migrations to manage the database.
As such, this migration does include some ALTER statements, and the SQL statements are written
to be idempotent.
- Create `board_images` junction table
- Create `boards` table
- Create `images` table, add `starred` column
- Create `model_config` table
- Create `session_queue` table
- Create `workflow_images` junction table
- Create `workflows` table
"""
migration_1 = Migration(
from_version=0,
to_version=1,
callback=Migration1Callback(),
)
return migration_1
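Follow-on migrations can reuse this pattern; a hypothetical sketch (the table and column are invented for illustration, and the Migration signature mirrors the one used above):

import sqlite3

class Migration3Callback:  # hypothetical
    def __call__(self, cursor: sqlite3.Cursor) -> None:
        # purely illustrative schema change
        cursor.execute("ALTER TABLE boards ADD COLUMN archived BOOLEAN DEFAULT FALSE;")

def build_migration_3() -> Migration:
    return Migration(from_version=2, to_version=3, callback=Migration3Callback())

# registered alongside the others in init_db:
#   migrator.register_migration(build_migration_3())
#   migrator.run_migrations()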

View File

@ -0,0 +1,198 @@
import sqlite3
from logging import Logger
from pydantic import ValidationError
from tqdm import tqdm
from invokeai.app.services.image_files.image_files_base import ImageFileStorageBase
from invokeai.app.services.image_files.image_files_common import ImageFileNotFoundException
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration
from invokeai.app.services.workflow_records.workflow_records_common import (
UnsafeWorkflowWithVersionValidator,
)
class Migration2Callback:
def __init__(self, image_files: ImageFileStorageBase, logger: Logger):
self._image_files = image_files
self._logger = logger
def __call__(self, cursor: sqlite3.Cursor):
self._add_images_has_workflow(cursor)
self._add_session_queue_workflow(cursor)
self._drop_old_workflow_tables(cursor)
self._add_workflow_library(cursor)
self._drop_model_manager_metadata(cursor)
self._recreate_model_config(cursor)
self._migrate_embedded_workflows(cursor)
def _add_images_has_workflow(self, cursor: sqlite3.Cursor) -> None:
"""Add the `has_workflow` column to `images` table."""
cursor.execute("PRAGMA table_info(images)")
columns = [column[1] for column in cursor.fetchall()]
if "has_workflow" not in columns:
cursor.execute("ALTER TABLE images ADD COLUMN has_workflow BOOLEAN DEFAULT FALSE;")
def _add_session_queue_workflow(self, cursor: sqlite3.Cursor) -> None:
"""Add the `workflow` column to `session_queue` table."""
cursor.execute("PRAGMA table_info(session_queue)")
columns = [column[1] for column in cursor.fetchall()]
if "workflow" not in columns:
cursor.execute("ALTER TABLE session_queue ADD COLUMN workflow TEXT;")
def _drop_old_workflow_tables(self, cursor: sqlite3.Cursor) -> None:
"""Drops the `workflows` and `workflow_images` tables."""
cursor.execute("DROP TABLE IF EXISTS workflow_images;")
cursor.execute("DROP TABLE IF EXISTS workflows;")
def _add_workflow_library(self, cursor: sqlite3.Cursor) -> None:
"""Adds the `workflow_library` table and drops the `workflows` and `workflow_images` tables."""
tables = [
"""--sql
CREATE TABLE IF NOT EXISTS workflow_library (
workflow_id TEXT NOT NULL PRIMARY KEY,
workflow TEXT NOT NULL,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- updated manually when retrieving workflow
opened_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Generated columns, needed for indexing and searching
category TEXT GENERATED ALWAYS as (json_extract(workflow, '$.meta.category')) VIRTUAL NOT NULL,
name TEXT GENERATED ALWAYS as (json_extract(workflow, '$.name')) VIRTUAL NOT NULL,
description TEXT GENERATED ALWAYS as (json_extract(workflow, '$.description')) VIRTUAL NOT NULL
);
""",
]
indices = [
"CREATE INDEX IF NOT EXISTS idx_workflow_library_created_at ON workflow_library(created_at);",
"CREATE INDEX IF NOT EXISTS idx_workflow_library_updated_at ON workflow_library(updated_at);",
"CREATE INDEX IF NOT EXISTS idx_workflow_library_opened_at ON workflow_library(opened_at);",
"CREATE INDEX IF NOT EXISTS idx_workflow_library_category ON workflow_library(category);",
"CREATE INDEX IF NOT EXISTS idx_workflow_library_name ON workflow_library(name);",
"CREATE INDEX IF NOT EXISTS idx_workflow_library_description ON workflow_library(description);",
]
triggers = [
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_workflow_library_updated_at
AFTER UPDATE
ON workflow_library FOR EACH ROW
BEGIN
UPDATE workflow_library
SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE workflow_id = old.workflow_id;
END;
"""
]
for stmt in tables + indices + triggers:
cursor.execute(stmt)
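# Because category, name and description are generated, indexed columns derived from
# the workflow JSON, the library can be filtered and sorted in SQL without
# deserializing each row in Python. A hedged query sketch (table contents hypothetical):
cursor.execute(
    """--sql
    SELECT workflow_id, name
    FROM workflow_library
    WHERE category = 'user'
    ORDER BY opened_at DESC;
    """
)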
def _drop_model_manager_metadata(self, cursor: sqlite3.Cursor) -> None:
"""Drops the `model_manager_metadata` table."""
cursor.execute("DROP TABLE IF EXISTS model_manager_metadata;")
def _recreate_model_config(self, cursor: sqlite3.Cursor) -> None:
"""
Drops the `model_config` table, recreating it.
In 3.4.0, this table used explicit columns; in 3.5.0, it was changed to use json_extract.
Because this table is not used in production, we are able to simply drop it and recreate it.
"""
cursor.execute("DROP TABLE IF EXISTS model_config;")
cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS model_config (
id TEXT NOT NULL PRIMARY KEY,
-- The next 3 fields are enums in python, unrestricted string here
base TEXT GENERATED ALWAYS as (json_extract(config, '$.base')) VIRTUAL NOT NULL,
type TEXT GENERATED ALWAYS as (json_extract(config, '$.type')) VIRTUAL NOT NULL,
name TEXT GENERATED ALWAYS as (json_extract(config, '$.name')) VIRTUAL NOT NULL,
path TEXT GENERATED ALWAYS as (json_extract(config, '$.path')) VIRTUAL NOT NULL,
format TEXT GENERATED ALWAYS as (json_extract(config, '$.format')) VIRTUAL NOT NULL,
original_hash TEXT, -- could be null
-- Serialized JSON representation of the whole config object,
-- which will contain additional fields from subclasses
config TEXT NOT NULL,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- unique constraint on combo of name, base and type
UNIQUE(name, base, type)
);
"""
)
def _migrate_embedded_workflows(self, cursor: sqlite3.Cursor) -> None:
"""
In the v3.5.0 release, InvokeAI changed how it handles embedded workflows. The `images` table in
the database now has a `has_workflow` column, indicating if an image has a workflow embedded.
This migrate callback checks each image for the presence of an embedded workflow, then updates its entry
in the database accordingly.
"""
# Get all image names
cursor.execute("SELECT image_name FROM images")
image_names: list[str] = [image[0] for image in cursor.fetchall()]
total_image_names = len(image_names)
if not total_image_names:
return
self._logger.info(f"Migrating workflows for {total_image_names} images")
# Migrate the images
to_migrate: list[tuple[bool, str]] = []
pbar = tqdm(image_names)
for idx, image_name in enumerate(pbar):
pbar.set_description(f"Checking image {idx + 1}/{total_image_names} for workflow")
try:
pil_image = self._image_files.get(image_name)
except ImageFileNotFoundException:
self._logger.warning(f"Image {image_name} not found, skipping")
continue
if "invokeai_workflow" in pil_image.info:
try:
UnsafeWorkflowWithVersionValidator.validate_json(pil_image.info.get("invokeai_workflow", ""))
except ValidationError:
self._logger.warning(f"Image {image_name} has invalid embedded workflow, skipping")
continue
to_migrate.append((True, image_name))
self._logger.info(f"Adding {len(to_migrate)} embedded workflows to database")
cursor.executemany("UPDATE images SET has_workflow = ? WHERE image_name = ?", to_migrate)
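# For context, the embedded workflow lives in the PNG metadata under the
# "invokeai_workflow" key. A minimal sketch of the check outside the migration
# (file path hypothetical):
from PIL import Image

with Image.open("outputs/some_image.png") as im:
    workflow_json = im.info.get("invokeai_workflow")  # None when no workflow is embedded
    print(workflow_json is not None)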
def build_migration_2(image_files: ImageFileStorageBase, logger: Logger) -> Migration:
"""
Builds the migration from database version 1 to 2.
Introduced in v3.5.0 for the new workflow library.
:param image_files: The image files service, used to check for embedded workflows
:param logger: The logger, used to log progress during embedded workflows handling
This migration does the following:
- Add `has_workflow` column to `images` table
- Add `workflow` column to `session_queue` table
- Drop `workflows` and `workflow_images` tables
- Add `workflow_library` table
- Drops the `model_manager_metadata` table
- Drops the `model_config` table, recreating it (at this point, there is no user data in this table)
- Populates the `has_workflow` column in the `images` table (requires `image_files` & `logger` dependencies)
"""
migration_2 = Migration(
from_version=1,
to_version=2,
callback=Migration2Callback(image_files=image_files, logger=logger),
)
return migration_2

View File

@ -0,0 +1,164 @@
import sqlite3
from typing import Optional, Protocol, runtime_checkable
from pydantic import BaseModel, ConfigDict, Field, model_validator
@runtime_checkable
class MigrateCallback(Protocol):
"""
A callback that performs a migration.
Migrate callbacks are provided an open cursor to the database. They should not commit their
transaction; this is handled by the migrator.
If the callback needs access to additional dependencies, they should be provided to the callback when it is constructed.
See :class:`Migration` for an example.
"""
def __call__(self, cursor: sqlite3.Cursor) -> None:
...
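# Because MigrateCallback is a runtime_checkable protocol, any callable with a matching
# signature satisfies it; a plain function works. A minimal sketch:
def noop_migration(cursor: sqlite3.Cursor) -> None:
    """A do-nothing callback with the required signature."""
    pass

assert isinstance(noop_migration, MigrateCallback)  # True: functions expose __call__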
class MigrationError(RuntimeError):
"""Raised when a migration fails."""
class MigrationVersionError(ValueError):
"""Raised when a migration version is invalid."""
class Migration(BaseModel):
"""
Represents a migration for a SQLite database.
:param from_version: The database version on which this migration may be run
:param to_version: The database version that results from this migration
:param callback: The callback to run to perform the migration
Migration callbacks will be provided an open cursor to the database. They should not commit their
transaction; this is handled by the migrator.
It is suggested to use a class to define the migration callback and a builder function to create
the :class:`Migration`. This allows the callback to be provided with additional dependencies and
keeps things tidy, as all migration logic is self-contained.
Example:
```py
# Define the migration callback class
class Migration1Callback:
# This migration needs the image files service, so we define a class that accepts it in its constructor.
def __init__(self, image_files: ImageFileStorageBase) -> None:
self._image_files = image_files
# This dunder method allows the instance of the class to be called like a function.
def __call__(self, cursor: sqlite3.Cursor) -> None:
self._add_with_banana_column(cursor)
self._do_something_with_images(cursor)
def _add_with_banana_column(self, cursor: sqlite3.Cursor) -> None:
\"""Adds the with_banana column to the sushi table.\"""
# Execute SQL using the cursor, taking care to *not commit* a transaction
cursor.execute('ALTER TABLE sushi ADD COLUMN with_banana BOOLEAN DEFAULT TRUE;')
def _do_something_with_images(self, cursor: sqlite3.Cursor) -> None:
\"""Does something with the image files service.\"""
self._image_files.get(...)
# Define the migration builder function. This function creates an instance of the migration callback
# class and returns a Migration.
def build_migration_1(image_files: ImageFileStorageBase) -> Migration:
\"""Builds the migration from database version 0 to 1.
Requires the image files service to...
\"""
migration_1 = Migration(
from_version=0,
to_version=1,
callback=Migration1Callback(image_files=image_files),
)
return migration_1
# Register the migration after all dependencies have been initialized
db = SqliteDatabase(db_path, logger)
migrator = SqliteMigrator(db)
migrator.register_migration(build_migration_1(image_files))
migrator.run_migrations()
```
"""
from_version: int = Field(ge=0, strict=True, description="The database version on which this migration may be run")
to_version: int = Field(ge=1, strict=True, description="The database version that results from this migration")
callback: MigrateCallback = Field(description="The callback to run to perform the migration")
@model_validator(mode="after")
def validate_to_version(self) -> "Migration":
"""Validates that to_version is one greater than from_version."""
if self.to_version != self.from_version + 1:
raise MigrationVersionError("to_version must be one greater than from_version")
return self
def __hash__(self) -> int:
# Callables are not hashable, so we need to implement our own __hash__ function to use this class in a set.
return hash((self.from_version, self.to_version))
model_config = ConfigDict(arbitrary_types_allowed=True)
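# Note that pydantic wraps the validator's MigrationVersionError in a ValidationError.
# A hedged sketch of a rejected construction (the callback is a stand-in lambda):
from pydantic import ValidationError

try:
    Migration(from_version=0, to_version=2, callback=lambda cursor: None)
except ValidationError:
    print("rejected: to_version must be exactly from_version + 1")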
class MigrationSet:
"""
A set of Migrations. Performs validation during migration registration and provides utility methods.
Migrations should be registered with `register()`. Once all are registered, `validate_migration_chain()`
should be called to ensure that the migrations form a single chain of migrations from version 0 to the latest version.
"""
def __init__(self) -> None:
self._migrations: set[Migration] = set()
def register(self, migration: Migration) -> None:
"""Registers a migration."""
migration_from_already_registered = any(m.from_version == migration.from_version for m in self._migrations)
migration_to_already_registered = any(m.to_version == migration.to_version for m in self._migrations)
if migration_from_already_registered or migration_to_already_registered:
raise MigrationVersionError("Migration with from_version or to_version already registered")
self._migrations.add(migration)
def get(self, from_version: int) -> Optional[Migration]:
"""Gets the migration that may be run on the given database version."""
# register() ensures that there is only one migration with a given from_version, so this is safe.
return next((m for m in self._migrations if m.from_version == from_version), None)
def validate_migration_chain(self) -> None:
"""
Validates that the migrations form a single chain of migrations from version 0 to the latest version.
Raises a MigrationError if there is a problem.
"""
if self.count == 0:
return
if self.latest_version == 0:
return
next_migration = self.get(from_version=0)
if next_migration is None:
raise MigrationError("Migration chain is fragmented")
touched_count = 1
while next_migration is not None:
next_migration = self.get(next_migration.to_version)
if next_migration is not None:
touched_count += 1
if touched_count != self.count:
raise MigrationError("Migration chain is fragmented")
@property
def count(self) -> int:
"""The count of registered migrations."""
return len(self._migrations)
@property
def latest_version(self) -> int:
"""Gets latest to_version among registered migrations. Returns 0 if there are no migrations registered."""
if self.count == 0:
return 0
return sorted(self._migrations, key=lambda m: m.to_version)[-1].to_version
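# A short sketch of registering a valid chain (callbacks are stand-in lambdas):
migrations = MigrationSet()
migrations.register(Migration(from_version=0, to_version=1, callback=lambda cursor: None))
migrations.register(Migration(from_version=1, to_version=2, callback=lambda cursor: None))
migrations.validate_migration_chain()  # 0 -> 1 -> 2 is unbroken, so no exception
assert migrations.latest_version == 2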

View File

@ -0,0 +1,130 @@
import sqlite3
from pathlib import Path
from typing import Optional
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration, MigrationError, MigrationSet
class SqliteMigrator:
"""
Manages migrations for a SQLite database.
:param db: The instance of :class:`SqliteDatabase` to migrate.
Migrations should be registered with :meth:`register_migration`.
Each migration is run in a transaction. If a migration fails, the transaction is rolled back.
Example Usage:
```py
db = SqliteDatabase(db_path="my_db.db", logger=logger)
migrator = SqliteMigrator(db=db)
migrator.register_migration(build_migration_1())
migrator.register_migration(build_migration_2())
migrator.run_migrations()
```
"""
backup_path: Optional[Path] = None
def __init__(self, db: SqliteDatabase) -> None:
self._db = db
self._logger = db.logger
self._migration_set = MigrationSet()
def register_migration(self, migration: Migration) -> None:
"""Registers a migration."""
self._migration_set.register(migration)
self._logger.debug(f"Registered migration {migration.from_version} -> {migration.to_version}")
def run_migrations(self) -> bool:
"""Migrates the database to the latest version."""
with self._db.lock:
# This throws if there is a problem.
self._migration_set.validate_migration_chain()
cursor = self._db.conn.cursor()
self._create_migrations_table(cursor=cursor)
if self._migration_set.count == 0:
self._logger.debug("No migrations registered")
return False
if self._get_current_version(cursor=cursor) == self._migration_set.latest_version:
self._logger.debug("Database is up to date, no migrations to run")
return False
self._logger.info("Database update needed")
next_migration = self._migration_set.get(from_version=self._get_current_version(cursor))
while next_migration is not None:
self._run_migration(next_migration)
next_migration = self._migration_set.get(self._get_current_version(cursor))
self._logger.info("Database updated successfully")
return True
def _run_migration(self, migration: Migration) -> None:
"""Runs a single migration."""
try:
# Using sqlite3.Connection as a context manager commits the transaction on exit, or rolls it back if an
# exception is raised.
with self._db.lock, self._db.conn as conn:
cursor = conn.cursor()
if self._get_current_version(cursor) != migration.from_version:
raise MigrationError(
f"Database is at version {self._get_current_version(cursor)}, expected {migration.from_version}"
)
self._logger.debug(f"Running migration from {migration.from_version} to {migration.to_version}")
# Run the actual migration
migration.callback(cursor)
# Update the version
cursor.execute("INSERT INTO migrations (version) VALUES (?);", (migration.to_version,))
self._logger.debug(
f"Successfully migrated database from {migration.from_version} to {migration.to_version}"
)
# We want to catch *any* error, mirroring the behaviour of the sqlite3 module.
except Exception as e:
# The connection context manager has already rolled back the migration, so we don't need to do anything.
msg = f"Error migrating database from {migration.from_version} to {migration.to_version}: {e}"
self._logger.error(msg)
raise MigrationError(msg) from e
def _create_migrations_table(self, cursor: sqlite3.Cursor) -> None:
"""Creates the migrations table for the database, if one does not already exist."""
with self._db.lock:
try:
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='migrations';")
if cursor.fetchone() is not None:
return
cursor.execute(
"""--sql
CREATE TABLE migrations (
version INTEGER PRIMARY KEY,
migrated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW'))
);
"""
)
cursor.execute("INSERT INTO migrations (version) VALUES (0);")
cursor.connection.commit()
self._logger.debug("Created migrations table")
except sqlite3.Error as e:
msg = f"Problem creating migrations table: {e}"
self._logger.error(msg)
cursor.connection.rollback()
raise MigrationError(msg) from e
@classmethod
def _get_current_version(cls, cursor: sqlite3.Cursor) -> int:
"""Gets the current version of the database, or 0 if the migrations table does not exist."""
try:
cursor.execute("SELECT MAX(version) FROM migrations;")
version: int = cursor.fetchone()[0]
if version is None:
return 0
return version
except sqlite3.OperationalError as e:
if "no such table" in str(e):
return 0
raise
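# The version bookkeeping is plain rows in the migrations table, so the current version
# can also be inspected directly. A sketch (database path hypothetical):
import sqlite3

conn = sqlite3.connect("invokeai.db")
version = conn.execute("SELECT MAX(version) FROM migrations;").fetchone()[0]
print(version)  # e.g. 2 once both migrations have run
conn.close()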

View File

@ -65,12 +65,24 @@ class WorkflowWithoutID(BaseModel):
nodes: list[dict[str, JsonValue]] = Field(description="The nodes of the workflow.")
edges: list[dict[str, JsonValue]] = Field(description="The edges of the workflow.")
model_config = ConfigDict(extra="forbid")
model_config = ConfigDict(extra="ignore")
WorkflowWithoutIDValidator = TypeAdapter(WorkflowWithoutID)
class UnsafeWorkflowWithVersion(BaseModel):
"""
This utility model only requires a workflow to have a valid version string.
It is used to validate a workflow version without having to validate the entire workflow.
"""
meta: WorkflowMeta = Field(description="The meta of the workflow.")
UnsafeWorkflowWithVersionValidator = TypeAdapter(UnsafeWorkflowWithVersion)
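# A hedged usage sketch, mirroring how the v3.5.0 migration uses this validator. The
# payload is hypothetical; the exact fields required under "meta" are defined by WorkflowMeta.
from pydantic import ValidationError

embedded_json = '{"meta": {"version": "1.0.0"}}'
try:
    UnsafeWorkflowWithVersionValidator.validate_json(embedded_json)
except ValidationError:
    print("embedded workflow has no usable version; skip it")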
class Workflow(WorkflowWithoutID):
id: str = Field(description="The id of the workflow.")

View File

@ -26,7 +26,6 @@ class SqliteWorkflowRecordsStorage(WorkflowRecordsStorageBase):
self._lock = db.lock
self._conn = db.conn
self._cursor = self._conn.cursor()
self._create_tables()
def start(self, invoker: Invoker) -> None:
self._invoker = invoker
@ -233,87 +232,3 @@ class SqliteWorkflowRecordsStorage(WorkflowRecordsStorageBase):
raise
finally:
self._lock.release()
def _create_tables(self) -> None:
try:
self._lock.acquire()
self._cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS workflow_library (
workflow_id TEXT NOT NULL PRIMARY KEY,
workflow TEXT NOT NULL,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- updated manually when retrieving workflow
opened_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Generated columns, needed for indexing and searching
category TEXT GENERATED ALWAYS as (json_extract(workflow, '$.meta.category')) VIRTUAL NOT NULL,
name TEXT GENERATED ALWAYS as (json_extract(workflow, '$.name')) VIRTUAL NOT NULL,
description TEXT GENERATED ALWAYS as (json_extract(workflow, '$.description')) VIRTUAL NOT NULL
);
"""
)
self._cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_workflow_library_updated_at
AFTER UPDATE
ON workflow_library FOR EACH ROW
BEGIN
UPDATE workflow_library
SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE workflow_id = old.workflow_id;
END;
"""
)
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_workflow_library_created_at ON workflow_library(created_at);
"""
)
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_workflow_library_updated_at ON workflow_library(updated_at);
"""
)
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_workflow_library_opened_at ON workflow_library(opened_at);
"""
)
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_workflow_library_category ON workflow_library(category);
"""
)
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_workflow_library_name ON workflow_library(name);
"""
)
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_workflow_library_description ON workflow_library(description);
"""
)
# We do not need the original `workflows` table or `workflow_images` junction table.
self._cursor.execute(
"""--sql
DROP TABLE IF EXISTS workflow_images;
"""
)
self._cursor.execute(
"""--sql
DROP TABLE IF EXISTS workflows;
"""
)
self._conn.commit()
except Exception:
self._conn.rollback()
raise
finally:
self._lock.release()

View File

@ -32,6 +32,8 @@ class ModelProbeInfo(object):
upcast_attention: bool
format: Literal["diffusers", "checkpoint", "lycoris", "olive", "onnx"]
image_size: int
name: Optional[str] = None
description: Optional[str] = None
class ProbeBase(object):
@ -113,12 +115,16 @@ class ModelProbe(object):
base_type = probe.get_base_type()
variant_type = probe.get_variant_type()
prediction_type = probe.get_scheduler_prediction_type()
name = cls.get_model_name(model_path)
description = f"{base_type.value} {model_type.value} model {name}"
format = probe.get_format()
model_info = ModelProbeInfo(
model_type=model_type,
base_type=base_type,
variant_type=variant_type,
prediction_type=prediction_type,
name=name,
description=description,
upcast_attention=(
base_type == BaseModelType.StableDiffusion2
and prediction_type == SchedulerPredictionType.VPrediction
@ -142,6 +148,13 @@ class ModelProbe(object):
return model_info
@classmethod
def get_model_name(cls, model_path: Path) -> str:
if model_path.suffix in {".safetensors", ".bin", ".pt", ".ckpt"}:
return model_path.stem
else:
return model_path.name
@classmethod
def get_model_type_from_checkpoint(cls, model_path: Path, checkpoint: dict) -> ModelType:
if model_path.suffix not in (".bin", ".pt", ".ckpt", ".safetensors", ".pth"):

View File

@ -0,0 +1,29 @@
"""Re-export frequently-used symbols from the Model Manager backend."""
from .config import (
AnyModelConfig,
BaseModelType,
InvalidModelConfigException,
ModelConfigFactory,
ModelFormat,
ModelType,
ModelVariantType,
SchedulerPredictionType,
SubModelType,
)
from .probe import ModelProbe
from .search import ModelSearch
__all__ = [
"ModelProbe",
"ModelSearch",
"InvalidModelConfigException",
"ModelConfigFactory",
"BaseModelType",
"ModelType",
"SubModelType",
"ModelVariantType",
"ModelFormat",
"SchedulerPredictionType",
"AnyModelConfig",
]

View File

@ -23,7 +23,7 @@ from enum import Enum
from typing import Literal, Optional, Type, Union
from pydantic import BaseModel, ConfigDict, Field, TypeAdapter
from typing_extensions import Annotated
from typing_extensions import Annotated, Any, Dict
class InvalidModelConfigException(Exception):
@ -122,7 +122,7 @@ class ModelConfigBase(BaseModel):
validate_assignment=True,
)
def update(self, attributes: dict):
def update(self, attributes: Dict[str, Any]) -> None:
"""Update the object with fields in dict."""
for key, value in attributes.items():
setattr(self, key, value) # may raise a validation error
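# Because the model config sets validate_assignment=True, each setattr in update() is
# re-validated. A hedged sketch, where `config` is an instance of some ModelConfigBase
# subclass (hypothetical):
config.update({"name": "my-model"})    # ok: value is validated on assignment
config.update({"base": "not-a-base"})  # raises a pydantic ValidationError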
@ -195,8 +195,6 @@ class MainCheckpointConfig(_CheckpointConfig, _MainConfig):
"""Model config for main checkpoint models."""
type: Literal[ModelType.Main] = ModelType.Main
# Note that we do not need prediction_type or upcast_attention here
# because they are provided in the checkpoint's own config file.
class MainDiffusersConfig(_DiffusersConfig, _MainConfig):

View File

@ -2,6 +2,7 @@
"""Migrate from the InvokeAI v2 models.yaml format to the v3 sqlite format."""
from hashlib import sha1
from logging import Logger
from omegaconf import DictConfig, OmegaConf
from pydantic import TypeAdapter
@ -10,6 +11,7 @@ from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.model_records import (
DuplicateModelException,
ModelRecordServiceSQL,
UnknownModelException,
)
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.backend.model_manager.config import (
@ -38,24 +40,27 @@ class MigrateModelYamlToDb:
"""
config: InvokeAIAppConfig
logger: InvokeAILogger
logger: Logger
def __init__(self):
def __init__(self) -> None:
self.config = InvokeAIAppConfig.get_config()
self.config.parse_args()
self.logger = InvokeAILogger.get_logger()
def get_db(self) -> ModelRecordServiceSQL:
"""Fetch the sqlite3 database for this installation."""
db = SqliteDatabase(self.config, self.logger)
db_path = None if self.config.use_memory_db else self.config.db_path
db = SqliteDatabase(db_path=db_path, logger=self.logger, verbose=self.config.log_sql)
return ModelRecordServiceSQL(db)
def get_yaml(self) -> DictConfig:
"""Fetch the models.yaml DictConfig for this installation."""
yaml_path = self.config.model_conf_path
return OmegaConf.load(yaml_path)
omegaconf = OmegaConf.load(yaml_path)
assert isinstance(omegaconf, DictConfig)
return omegaconf
def migrate(self):
def migrate(self) -> None:
"""Do the migration from models.yaml to invokeai.db."""
db = self.get_db()
yaml = self.get_yaml()
@ -69,6 +74,7 @@ class MigrateModelYamlToDb:
base_type, model_type, model_name = str(model_key).split("/")
hash = FastModelHash.hash(self.config.models_path / stanza.path)
assert isinstance(model_key, str)
new_key = sha1(model_key.encode("utf-8")).hexdigest()
stanza["base"] = BaseModelType(base_type)
@ -77,12 +83,20 @@ class MigrateModelYamlToDb:
stanza["original_hash"] = hash
stanza["current_hash"] = hash
new_config = ModelsValidator.validate_python(stanza)
self.logger.info(f"Adding model {model_name} with key {model_key}")
new_config: AnyModelConfig = ModelsValidator.validate_python(stanza) # type: ignore # see https://github.com/pydantic/pydantic/discussions/7094
try:
db.add_model(new_key, new_config)
if original_record := db.search_by_path(stanza.path):
key = original_record[0].key
self.logger.info(f"Updating model {model_name} with information from models.yaml using key {key}")
db.update_model(key, new_config)
else:
self.logger.info(f"Adding model {model_name} with key {model_key}")
db.add_model(new_key, new_config)
except DuplicateModelException:
self.logger.warning(f"Model {model_name} is already in the database")
except UnknownModelException:
self.logger.warning(f"Model at {stanza.path} could not be found in database")
def main():

View File

@ -0,0 +1,684 @@
import json
import re
from pathlib import Path
from typing import Any, Dict, Literal, Optional, Union
import safetensors.torch
import torch
from picklescan.scanner import scan_file_path
from invokeai.backend.model_management.models.base import read_checkpoint_meta
from invokeai.backend.model_management.models.ip_adapter import IPAdapterModelFormat
from invokeai.backend.model_management.util import lora_token_vector_length
from invokeai.backend.util.util import SilenceWarnings
from .config import (
AnyModelConfig,
BaseModelType,
InvalidModelConfigException,
ModelConfigFactory,
ModelFormat,
ModelType,
ModelVariantType,
SchedulerPredictionType,
)
from .hash import FastModelHash
CkptType = Dict[str, Any]
LEGACY_CONFIGS: Dict[BaseModelType, Dict[ModelVariantType, Union[str, Dict[SchedulerPredictionType, str]]]] = {
BaseModelType.StableDiffusion1: {
ModelVariantType.Normal: "v1-inference.yaml",
ModelVariantType.Inpaint: "v1-inpainting-inference.yaml",
},
BaseModelType.StableDiffusion2: {
ModelVariantType.Normal: {
SchedulerPredictionType.Epsilon: "v2-inference.yaml",
SchedulerPredictionType.VPrediction: "v2-inference-v.yaml",
},
ModelVariantType.Inpaint: {
SchedulerPredictionType.Epsilon: "v2-inpainting-inference.yaml",
SchedulerPredictionType.VPrediction: "v2-inpainting-inference-v.yaml",
},
},
BaseModelType.StableDiffusionXL: {
ModelVariantType.Normal: "sd_xl_base.yaml",
},
BaseModelType.StableDiffusionXLRefiner: {
ModelVariantType.Normal: "sd_xl_refiner.yaml",
},
}
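# Lookup sketch: SD-2 entries add a prediction-type tier, so resolving a legacy config
# takes one or two steps depending on the base model (values hypothetical):
config_file = LEGACY_CONFIGS[BaseModelType.StableDiffusion2][ModelVariantType.Normal]
if isinstance(config_file, dict):  # need another tier for sd-2.x models
    config_file = config_file[SchedulerPredictionType.VPrediction]
assert config_file == "v2-inference-v.yaml"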
class ProbeBase(object):
"""Base class for probes."""
def __init__(self, model_path: Path):
self.model_path = model_path
def get_base_type(self) -> BaseModelType:
"""Get model base type."""
raise NotImplementedError
def get_format(self) -> ModelFormat:
"""Get model file format."""
raise NotImplementedError
def get_variant_type(self) -> Optional[ModelVariantType]:
"""Get model variant type."""
return None
def get_scheduler_prediction_type(self) -> Optional[SchedulerPredictionType]:
"""Get model scheduler prediction type."""
return None
class ModelProbe(object):
PROBES: Dict[str, Dict[ModelType, type[ProbeBase]]] = {
"diffusers": {},
"checkpoint": {},
"onnx": {},
}
CLASS2TYPE = {
"StableDiffusionPipeline": ModelType.Main,
"StableDiffusionInpaintPipeline": ModelType.Main,
"StableDiffusionXLPipeline": ModelType.Main,
"StableDiffusionXLImg2ImgPipeline": ModelType.Main,
"StableDiffusionXLInpaintPipeline": ModelType.Main,
"LatentConsistencyModelPipeline": ModelType.Main,
"AutoencoderKL": ModelType.Vae,
"AutoencoderTiny": ModelType.Vae,
"ControlNetModel": ModelType.ControlNet,
"CLIPVisionModelWithProjection": ModelType.CLIPVision,
"T2IAdapter": ModelType.T2IAdapter,
}
@classmethod
def register_probe(
cls, format: Literal["diffusers", "checkpoint", "onnx"], model_type: ModelType, probe_class: type[ProbeBase]
) -> None:
cls.PROBES[format][model_type] = probe_class
@classmethod
def heuristic_probe(
cls,
model_path: Path,
fields: Optional[Dict[str, Any]] = None,
) -> AnyModelConfig:
return cls.probe(model_path, fields)
@classmethod
def probe(
cls,
model_path: Path,
fields: Optional[Dict[str, Any]] = None,
) -> AnyModelConfig:
"""
Probe the model at model_path and return its configuration record.
:param model_path: Path to the model file (checkpoint) or directory (diffusers).
:param fields: An optional dictionary that can be used to override probed
fields. Typically used for fields that don't probe well, such as prediction_type.
Returns: The appropriate model configuration derived from ModelConfigBase.
"""
if fields is None:
fields = {}
format_type = ModelFormat.Diffusers if model_path.is_dir() else ModelFormat.Checkpoint
model_info = None
model_type = None
if format_type == "diffusers":
model_type = cls.get_model_type_from_folder(model_path)
else:
model_type = cls.get_model_type_from_checkpoint(model_path)
format_type = ModelFormat.Onnx if model_type == ModelType.ONNX else format_type
probe_class = cls.PROBES[format_type].get(model_type)
if not probe_class:
raise InvalidModelConfigException(f"Unhandled combination of {format_type} and {model_type}")
hash = FastModelHash.hash(model_path)
probe = probe_class(model_path)
fields["path"] = model_path.as_posix()
fields["type"] = fields.get("type") or model_type
fields["base"] = fields.get("base") or probe.get_base_type()
fields["variant"] = fields.get("variant") or probe.get_variant_type()
fields["prediction_type"] = fields.get("prediction_type") or probe.get_scheduler_prediction_type()
fields["name"] = fields.get("name") or cls.get_model_name(model_path)
fields["description"] = (
fields.get("description") or f"{fields['base'].value} {fields['type'].value} model {fields['name']}"
)
fields["format"] = fields.get("format") or probe.get_format()
fields["original_hash"] = fields.get("original_hash") or hash
fields["current_hash"] = fields.get("current_hash") or hash
# additional fields needed for main and controlnet models
if fields["type"] in [ModelType.Main, ModelType.ControlNet] and fields["format"] == ModelFormat.Checkpoint:
fields["config"] = cls._get_checkpoint_config_path(
model_path,
model_type=fields["type"],
base_type=fields["base"],
variant_type=fields["variant"],
prediction_type=fields["prediction_type"],
).as_posix()
# additional fields needed for main non-checkpoint models
elif fields["type"] == ModelType.Main and fields["format"] in [
ModelFormat.Onnx,
ModelFormat.Olive,
ModelFormat.Diffusers,
]:
fields["upcast_attention"] = fields.get("upcast_attention") or (
fields["base"] == BaseModelType.StableDiffusion2
and fields["prediction_type"] == SchedulerPredictionType.VPrediction
)
model_info = ModelConfigFactory.make_config(fields)
return model_info
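# A hedged usage sketch (path and override are hypothetical): probe a checkpoint while
# overriding a field that does not probe reliably:
config = ModelProbe.probe(
    Path("/models/sd-2/main/my-model.safetensors"),
    fields={"prediction_type": SchedulerPredictionType.VPrediction},
)
print(config.name, config.base, config.format)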
@classmethod
def get_model_name(cls, model_path: Path) -> str:
if model_path.suffix in {".safetensors", ".bin", ".pt", ".ckpt"}:
return model_path.stem
else:
return model_path.name
@classmethod
def get_model_type_from_checkpoint(cls, model_path: Path, checkpoint: Optional[CkptType] = None) -> ModelType:
if model_path.suffix not in (".bin", ".pt", ".ckpt", ".safetensors", ".pth"):
raise InvalidModelConfigException(f"{model_path}: unrecognized suffix")
if model_path.name == "learned_embeds.bin":
return ModelType.TextualInversion
ckpt = checkpoint if checkpoint else read_checkpoint_meta(model_path, scan=True)
ckpt = ckpt.get("state_dict", ckpt)
for key in ckpt.keys():
if any(key.startswith(v) for v in {"cond_stage_model.", "first_stage_model.", "model.diffusion_model."}):
return ModelType.Main
elif any(key.startswith(v) for v in {"encoder.conv_in", "decoder.conv_in"}):
return ModelType.Vae
elif any(key.startswith(v) for v in {"lora_te_", "lora_unet_"}):
return ModelType.Lora
elif any(key.endswith(v) for v in {"to_k_lora.up.weight", "to_q_lora.down.weight"}):
return ModelType.Lora
elif any(key.startswith(v) for v in {"control_model", "input_blocks"}):
return ModelType.ControlNet
elif key in {"emb_params", "string_to_param"}:
return ModelType.TextualInversion
else:
# diffusers-ti
if len(ckpt) < 10 and all(isinstance(v, torch.Tensor) for v in ckpt.values()):
return ModelType.TextualInversion
raise InvalidModelConfigException(f"Unable to determine model type for {model_path}")
@classmethod
def get_model_type_from_folder(cls, folder_path: Path) -> ModelType:
"""Get the model type of a hugging-face style folder."""
class_name = None
error_hint = None
for suffix in ["bin", "safetensors"]:
if (folder_path / f"learned_embeds.{suffix}").exists():
return ModelType.TextualInversion
if (folder_path / f"pytorch_lora_weights.{suffix}").exists():
return ModelType.Lora
if (folder_path / "unet/model.onnx").exists():
return ModelType.ONNX
if (folder_path / "image_encoder.txt").exists():
return ModelType.IPAdapter
i = folder_path / "model_index.json"
c = folder_path / "config.json"
config_path = i if i.exists() else c if c.exists() else None
if config_path:
with open(config_path, "r") as file:
conf = json.load(file)
if "_class_name" in conf:
class_name = conf["_class_name"]
elif "architectures" in conf:
class_name = conf["architectures"][0]
else:
class_name = None
else:
error_hint = f"No model_index.json or config.json found in {folder_path}."
if class_name and (type := cls.CLASS2TYPE.get(class_name)):
return type
else:
error_hint = f"class {class_name} is not one of the supported classes [{', '.join(cls.CLASS2TYPE.keys())}]"
# give up
raise InvalidModelConfigException(
f"Unable to determine model type for {folder_path}" + (f"; {error_hint}" if error_hint else "")
)
@classmethod
def _get_checkpoint_config_path(
cls,
model_path: Path,
model_type: ModelType,
base_type: BaseModelType,
variant_type: ModelVariantType,
prediction_type: SchedulerPredictionType,
) -> Path:
# look for a YAML file adjacent to the model file first
possible_conf = model_path.with_suffix(".yaml")
if possible_conf.exists():
return possible_conf.absolute()
if model_type == ModelType.Main:
config_file = LEGACY_CONFIGS[base_type][variant_type]
if isinstance(config_file, dict): # need another tier for sd-2.x models
config_file = config_file[prediction_type]
elif model_type == ModelType.ControlNet:
config_file = (
"../controlnet/cldm_v15.yaml" if base_type == BaseModelType("sd-1") else "../controlnet/cldm_v21.yaml"
)
else:
raise InvalidModelConfigException(
f"{model_path}: Unrecognized combination of model_type={model_type}, base_type={base_type}"
)
assert isinstance(config_file, str)
return Path(config_file)
@classmethod
def _scan_and_load_checkpoint(cls, model_path: Path) -> CkptType:
with SilenceWarnings():
if model_path.suffix.endswith((".ckpt", ".pt", ".bin")):
cls._scan_model(model_path.name, model_path)
model = torch.load(model_path)
assert isinstance(model, dict)
return model
else:
return safetensors.torch.load_file(model_path)
@classmethod
def _scan_model(cls, model_name: str, checkpoint: Path) -> None:
"""
Apply picklescanner to the indicated checkpoint and issue a warning
and option to exit if an infected file is identified.
"""
# scan model
scan_result = scan_file_path(checkpoint)
if scan_result.infected_files != 0:
raise Exception("The model {model_name} is potentially infected by malware. Aborting import.")
# ###################################################
# Checkpoint probing
# ###################################################
class CheckpointProbeBase(ProbeBase):
def __init__(self, model_path: Path):
super().__init__(model_path)
self.checkpoint = ModelProbe._scan_and_load_checkpoint(model_path)
def get_format(self) -> ModelFormat:
return ModelFormat("checkpoint")
def get_variant_type(self) -> ModelVariantType:
model_type = ModelProbe.get_model_type_from_checkpoint(self.model_path, self.checkpoint)
if model_type != ModelType.Main:
return ModelVariantType.Normal
state_dict = self.checkpoint.get("state_dict") or self.checkpoint
in_channels = state_dict["model.diffusion_model.input_blocks.0.0.weight"].shape[1]
if in_channels == 9:
return ModelVariantType.Inpaint
elif in_channels == 5:
return ModelVariantType.Depth
elif in_channels == 4:
return ModelVariantType.Normal
else:
raise InvalidModelConfigException(
f"Cannot determine variant type (in_channels={in_channels}) at {self.model_path}"
)
class PipelineCheckpointProbe(CheckpointProbeBase):
def get_base_type(self) -> BaseModelType:
checkpoint = self.checkpoint
state_dict = self.checkpoint.get("state_dict") or checkpoint
key_name = "model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight"
if key_name in state_dict and state_dict[key_name].shape[-1] == 768:
return BaseModelType.StableDiffusion1
if key_name in state_dict and state_dict[key_name].shape[-1] == 1024:
return BaseModelType.StableDiffusion2
key_name = "model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight"
if key_name in state_dict and state_dict[key_name].shape[-1] == 2048:
return BaseModelType.StableDiffusionXL
elif key_name in state_dict and state_dict[key_name].shape[-1] == 1280:
return BaseModelType.StableDiffusionXLRefiner
else:
raise InvalidModelConfigException("Cannot determine base type")
def get_scheduler_prediction_type(self) -> SchedulerPredictionType:
"""Return model prediction type."""
type = self.get_base_type()
if type == BaseModelType.StableDiffusion2:
checkpoint = self.checkpoint
state_dict = self.checkpoint.get("state_dict") or checkpoint
key_name = "model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight"
if key_name in state_dict and state_dict[key_name].shape[-1] == 1024:
if "global_step" in checkpoint:
if checkpoint["global_step"] == 220000:
return SchedulerPredictionType.Epsilon
elif checkpoint["global_step"] == 110000:
return SchedulerPredictionType.VPrediction
return SchedulerPredictionType.VPrediction # a guess for sd2 ckpts
elif type == BaseModelType.StableDiffusion1:
return SchedulerPredictionType.Epsilon # a reasonable guess for sd1 ckpts
else:
return SchedulerPredictionType.Epsilon
class VaeCheckpointProbe(CheckpointProbeBase):
def get_base_type(self) -> BaseModelType:
# I can't find any standalone 2.X VAEs to test with!
return BaseModelType.StableDiffusion1
class LoRACheckpointProbe(CheckpointProbeBase):
"""Class for LoRA checkpoints."""
def get_format(self) -> ModelFormat:
return ModelFormat("lycoris")
def get_base_type(self) -> BaseModelType:
checkpoint = self.checkpoint
token_vector_length = lora_token_vector_length(checkpoint)
if token_vector_length == 768:
return BaseModelType.StableDiffusion1
elif token_vector_length == 1024:
return BaseModelType.StableDiffusion2
elif token_vector_length == 2048:
return BaseModelType.StableDiffusionXL
else:
raise InvalidModelConfigException(f"Unknown LoRA type: {self.model_path}")
class TextualInversionCheckpointProbe(CheckpointProbeBase):
"""Class for probing embeddings."""
def get_format(self) -> ModelFormat:
return ModelFormat.EmbeddingFile
def get_base_type(self) -> BaseModelType:
checkpoint = self.checkpoint
if "string_to_token" in checkpoint:
token_dim = list(checkpoint["string_to_param"].values())[0].shape[-1]
elif "emb_params" in checkpoint:
token_dim = checkpoint["emb_params"].shape[-1]
elif "clip_g" in checkpoint:
token_dim = checkpoint["clip_g"].shape[-1]
else:
token_dim = list(checkpoint.values())[0].shape[0]
if token_dim == 768:
return BaseModelType.StableDiffusion1
elif token_dim == 1024:
return BaseModelType.StableDiffusion2
elif token_dim == 1280:
return BaseModelType.StableDiffusionXL
else:
raise InvalidModelConfigException(f"{self.model_path}: Could not determine base type")
class ControlNetCheckpointProbe(CheckpointProbeBase):
"""Class for probing controlnets."""
def get_base_type(self) -> BaseModelType:
checkpoint = self.checkpoint
for key_name in (
"control_model.input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight",
"input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight",
):
if key_name not in checkpoint:
continue
if checkpoint[key_name].shape[-1] == 768:
return BaseModelType.StableDiffusion1
elif checkpoint[key_name].shape[-1] == 1024:
return BaseModelType.StableDiffusion2
raise InvalidModelConfigException("{self.model_path}: Unable to determine base type")
class IPAdapterCheckpointProbe(CheckpointProbeBase):
def get_base_type(self) -> BaseModelType:
raise NotImplementedError()
class CLIPVisionCheckpointProbe(CheckpointProbeBase):
def get_base_type(self) -> BaseModelType:
raise NotImplementedError()
class T2IAdapterCheckpointProbe(CheckpointProbeBase):
def get_base_type(self) -> BaseModelType:
raise NotImplementedError()
########################################################
# classes for probing folders
#######################################################
class FolderProbeBase(ProbeBase):
def get_variant_type(self) -> ModelVariantType:
return ModelVariantType.Normal
def get_format(self) -> ModelFormat:
return ModelFormat("diffusers")
class PipelineFolderProbe(FolderProbeBase):
def get_base_type(self) -> BaseModelType:
with open(self.model_path / "unet" / "config.json", "r") as file:
unet_conf = json.load(file)
if unet_conf["cross_attention_dim"] == 768:
return BaseModelType.StableDiffusion1
elif unet_conf["cross_attention_dim"] == 1024:
return BaseModelType.StableDiffusion2
elif unet_conf["cross_attention_dim"] == 1280:
return BaseModelType.StableDiffusionXLRefiner
elif unet_conf["cross_attention_dim"] == 2048:
return BaseModelType.StableDiffusionXL
else:
raise InvalidModelConfigException(f"Unknown base model for {self.model_path}")
def get_scheduler_prediction_type(self) -> SchedulerPredictionType:
with open(self.model_path / "scheduler" / "scheduler_config.json", "r") as file:
scheduler_conf = json.load(file)
if scheduler_conf["prediction_type"] == "v_prediction":
return SchedulerPredictionType.VPrediction
elif scheduler_conf["prediction_type"] == "epsilon":
return SchedulerPredictionType.Epsilon
else:
raise InvalidModelConfigException("Unknown scheduler prediction type: {scheduler_conf['prediction_type']}")
def get_variant_type(self) -> ModelVariantType:
# This only works for pipelines! Any kind of
# exception results in our returning the
# "normal" variant type
try:
config_file = self.model_path / "unet" / "config.json"
with open(config_file, "r") as file:
conf = json.load(file)
in_channels = conf["in_channels"]
if in_channels == 9:
return ModelVariantType.Inpaint
elif in_channels == 5:
return ModelVariantType.Depth
elif in_channels == 4:
return ModelVariantType.Normal
except Exception:
pass
return ModelVariantType.Normal
class VaeFolderProbe(FolderProbeBase):
def get_base_type(self) -> BaseModelType:
if self._config_looks_like_sdxl():
return BaseModelType.StableDiffusionXL
elif self._name_looks_like_sdxl():
# SD and SDXL VAEs have the same shape (3-channel RGB in, 4-channel float latent scaled down
# by a factor of 8), so we can't necessarily tell them apart by config hyperparameters.
return BaseModelType.StableDiffusionXL
else:
return BaseModelType.StableDiffusion1
def _config_looks_like_sdxl(self) -> bool:
# config values that distinguish Stability's SD 1.x VAE from their SDXL VAE.
config_file = self.model_path / "config.json"
if not config_file.exists():
raise InvalidModelConfigException(f"Cannot determine base type for {self.model_path}")
with open(config_file, "r") as file:
config = json.load(file)
return config.get("scaling_factor", 0) == 0.13025 and config.get("sample_size") in [512, 1024]
def _name_looks_like_sdxl(self) -> bool:
return bool(re.search(r"xl\b", self._guess_name(), re.IGNORECASE))
def _guess_name(self) -> str:
name = self.model_path.name
if name == "vae":
name = self.model_path.parent.name
return name
class TextualInversionFolderProbe(FolderProbeBase):
def get_format(self) -> ModelFormat:
return ModelFormat.EmbeddingFolder
def get_base_type(self) -> BaseModelType:
path = self.model_path / "learned_embeds.bin"
if not path.exists():
raise InvalidModelConfigException(
f"{self.model_path.as_posix()} does not contain expected 'learned_embeds.bin' file"
)
return TextualInversionCheckpointProbe(path).get_base_type()
class ONNXFolderProbe(FolderProbeBase):
def get_format(self) -> ModelFormat:
return ModelFormat("onnx")
def get_base_type(self) -> BaseModelType:
return BaseModelType.StableDiffusion1
def get_variant_type(self) -> ModelVariantType:
return ModelVariantType.Normal
class ControlNetFolderProbe(FolderProbeBase):
def get_base_type(self) -> BaseModelType:
config_file = self.model_path / "config.json"
if not config_file.exists():
raise InvalidModelConfigException(f"Cannot determine base type for {self.model_path}")
with open(config_file, "r") as file:
config = json.load(file)
# no obvious way to distinguish between sd2-base and sd2-768
dimension = config["cross_attention_dim"]
base_model = (
BaseModelType.StableDiffusion1
if dimension == 768
else (
BaseModelType.StableDiffusion2
if dimension == 1024
else BaseModelType.StableDiffusionXL
if dimension == 2048
else None
)
)
if not base_model:
raise InvalidModelConfigException(f"Unable to determine model base for {self.model_path}")
return base_model
class LoRAFolderProbe(FolderProbeBase):
def get_base_type(self) -> BaseModelType:
model_file = None
for suffix in ["safetensors", "bin"]:
base_file = self.model_path / f"pytorch_lora_weights.{suffix}"
if base_file.exists():
model_file = base_file
break
if not model_file:
raise InvalidModelConfigException("Unknown LoRA format encountered")
return LoRACheckpointProbe(model_file).get_base_type()
class IPAdapterFolderProbe(FolderProbeBase):
def get_format(self) -> IPAdapterModelFormat:
return IPAdapterModelFormat.InvokeAI.value
def get_base_type(self) -> BaseModelType:
model_file = self.model_path / "ip_adapter.bin"
if not model_file.exists():
raise InvalidModelConfigException("Unknown IP-Adapter model format.")
state_dict = torch.load(model_file, map_location="cpu")
cross_attention_dim = state_dict["ip_adapter"]["1.to_k_ip.weight"].shape[-1]
if cross_attention_dim == 768:
return BaseModelType.StableDiffusion1
elif cross_attention_dim == 1024:
return BaseModelType.StableDiffusion2
elif cross_attention_dim == 2048:
return BaseModelType.StableDiffusionXL
else:
raise InvalidModelConfigException(
f"IP-Adapter had unexpected cross-attention dimension: {cross_attention_dim}."
)
class CLIPVisionFolderProbe(FolderProbeBase):
def get_base_type(self) -> BaseModelType:
return BaseModelType.Any
class T2IAdapterFolderProbe(FolderProbeBase):
def get_base_type(self) -> BaseModelType:
config_file = self.model_path / "config.json"
if not config_file.exists():
raise InvalidModelConfigException(f"Cannot determine base type for {self.model_path}")
with open(config_file, "r") as file:
config = json.load(file)
adapter_type = config.get("adapter_type", None)
if adapter_type == "full_adapter_xl":
return BaseModelType.StableDiffusionXL
elif adapter_type == "full_adapter" or "light_adapter":
# I haven't seen any T2I adapter models for SD2, so assume that this is an SD1 adapter.
return BaseModelType.StableDiffusion1
else:
raise InvalidModelConfigException(
f"Unable to determine base model for '{self.model_path}' (adapter_type = {adapter_type})."
)
############## register probe classes ######
ModelProbe.register_probe("diffusers", ModelType.Main, PipelineFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.Vae, VaeFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.Lora, LoRAFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.TextualInversion, TextualInversionFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.ControlNet, ControlNetFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.IPAdapter, IPAdapterFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.CLIPVision, CLIPVisionFolderProbe)
ModelProbe.register_probe("diffusers", ModelType.T2IAdapter, T2IAdapterFolderProbe)
ModelProbe.register_probe("checkpoint", ModelType.Main, PipelineCheckpointProbe)
ModelProbe.register_probe("checkpoint", ModelType.Vae, VaeCheckpointProbe)
ModelProbe.register_probe("checkpoint", ModelType.Lora, LoRACheckpointProbe)
ModelProbe.register_probe("checkpoint", ModelType.TextualInversion, TextualInversionCheckpointProbe)
ModelProbe.register_probe("checkpoint", ModelType.ControlNet, ControlNetCheckpointProbe)
ModelProbe.register_probe("checkpoint", ModelType.IPAdapter, IPAdapterCheckpointProbe)
ModelProbe.register_probe("checkpoint", ModelType.CLIPVision, CLIPVisionCheckpointProbe)
ModelProbe.register_probe("checkpoint", ModelType.T2IAdapter, T2IAdapterCheckpointProbe)
ModelProbe.register_probe("onnx", ModelType.ONNX, ONNXFolderProbe)

View File

@ -0,0 +1,190 @@
# Copyright 2023, Lincoln D. Stein and the InvokeAI Team
"""
Abstract base class and implementation for recursive directory search for models.
Example usage:
```
from invokeai.backend.model_manager import ModelSearch, ModelProbe
def find_main_models(model: Path) -> bool:
info = ModelProbe.probe(model)
if info.model_type == 'main' and info.base_type == 'sd-1':
return True
else:
return False
search = ModelSearch(on_model_found=find_main_models)
found = search.search('/tmp/models')
print(found) # list of matching model paths
print(search.stats) # search stats
```
"""
import os
from abc import ABC, abstractmethod
from pathlib import Path
from typing import Callable, Optional, Set, Union
from pydantic import BaseModel, Field
from invokeai.backend.util.logging import InvokeAILogger
default_logger = InvokeAILogger.get_logger()
class SearchStats(BaseModel):
items_scanned: int = 0
models_found: int = 0
models_filtered: int = 0
class ModelSearchBase(ABC, BaseModel):
"""
Abstract directory traversal model search class
Usage:
search = ModelSearchBase(
on_search_started = search_started_callback,
on_search_completed = search_completed_callback,
on_model_found = model_found_callback,
)
models_found = search.search('/path/to/directory')
"""
# fmt: off
on_search_started : Optional[Callable[[Path], None]] = Field(default=None, description="Called just before the search starts.") # noqa E221
on_model_found : Optional[Callable[[Path], bool]] = Field(default=None, description="Called when a model is found.") # noqa E221
on_search_completed : Optional[Callable[[Set[Path]], None]] = Field(default=None, description="Called when search is complete.") # noqa E221
stats : SearchStats = Field(default_factory=SearchStats, description="Summary statistics after search") # noqa E221
logger : InvokeAILogger = Field(default=default_logger, description="Logger instance.") # noqa E221
# fmt: on
class Config:
arbitrary_types_allowed = True
@abstractmethod
def search_started(self) -> None:
"""
Called before the scan starts.
Passes the root search directory to the Callable `on_search_started`.
"""
pass
@abstractmethod
def model_found(self, model: Path) -> None:
"""
Called when a model is found during search.
:param model: Model to process - could be a directory or checkpoint.
Passes the model's Path to the Callable `on_model_found`.
This Callable receives the path to the model and returns a boolean
to indicate whether the model should be returned in the search
results.
"""
pass
@abstractmethod
def search_completed(self) -> None:
"""
Called when the search completes.
Passes the Set of found model Paths to the Callable `on_search_completed`.
"""
pass
@abstractmethod
def search(self, directory: Union[Path, str]) -> Set[Path]:
"""
Recursively search for models in `directory` and return a set of model paths.
If provided, the `on_search_started`, `on_model_found` and `on_search_completed`
Callables will be invoked during the search.
"""
pass
class ModelSearch(ModelSearchBase):
"""
Implementation of ModelSearch with callbacks.
Usage:
search = ModelSearch()
search.on_model_found = lambda path : 'anime' in path.as_posix()
found = search.search('/tmp/models1')
# returns all models that have 'anime' in the path
"""
models_found: Set[Path] = Field(default=None)
scanned_dirs: Set[Path] = Field(default=None)
pruned_paths: Set[Path] = Field(default=None)
def search_started(self) -> None:
self.models_found = set()
self.scanned_dirs = set()
self.pruned_paths = set()
if self.on_search_started:
self.on_search_started(self._directory)
def model_found(self, model: Path) -> None:
self.stats.models_found += 1
if not self.on_model_found or self.on_model_found(model):
self.stats.models_filtered += 1
self.models_found.add(model)
def search_completed(self) -> None:
if self.on_search_completed:
self.on_search_completed(self.models_found)
def search(self, directory: Union[Path, str]) -> Set[Path]:
self._directory = Path(directory)
self.stats = SearchStats() # zero out
self.search_started() # This will initialize models_found to empty
self._walk_directory(directory)
self.search_completed()
return self.models_found
def _walk_directory(self, path: Union[Path, str]) -> None:
for root, dirs, files in os.walk(path, followlinks=True):
# don't descend into directories that start with a "."
# to avoid the Mac .DS_STORE issue.
if str(Path(root).name).startswith("."):
self.pruned_paths.add(Path(root))
if any(Path(root).is_relative_to(x) for x in self.pruned_paths):
continue
self.stats.items_scanned += len(dirs) + len(files)
for d in dirs:
path = Path(root) / d
if path.parent in self.scanned_dirs:
self.scanned_dirs.add(path)
continue
if any(
(path / x).exists()
for x in [
"config.json",
"model_index.json",
"learned_embeds.bin",
"pytorch_lora_weights.bin",
"image_encoder.txt",
]
):
self.scanned_dirs.add(path)
try:
self.model_found(path)
except KeyboardInterrupt:
raise
except Exception as e:
self.logger.warning(str(e))
for f in files:
path = Path(root) / f
if path.parent in self.scanned_dirs:
continue
if path.suffix in {".ckpt", ".bin", ".pth", ".safetensors", ".pt"}:
try:
self.model_found(path)
except KeyboardInterrupt:
raise
except Exception as e:
self.logger.warning(str(e))
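# A hedged usage sketch (directory hypothetical): keep only .safetensors files and
# report the traversal statistics afterwards.
search = ModelSearch(on_model_found=lambda path: path.suffix == ".safetensors")
found = search.search("/tmp/models")
print(len(found), search.stats)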

View File

@ -242,17 +242,6 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
control_model: ControlNetModel = None,
):
super().__init__(
vae,
text_encoder,
tokenizer,
unet,
scheduler,
safety_checker,
feature_extractor,
requires_safety_checker,
)
self.register_modules(
vae=vae,
text_encoder=text_encoder,
tokenizer=tokenizer,
@ -260,9 +249,9 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
scheduler=scheduler,
safety_checker=safety_checker,
feature_extractor=feature_extractor,
# FIXME: can't currently register control module
# control_model=control_model,
requires_safety_checker=requires_safety_checker,
)
self.invokeai_diffuser = InvokeAIDiffuserComponent(self.unet, self._unet_forward)
self.control_model = control_model
self.use_ip_adapter = False

View File

@ -3,7 +3,42 @@ from typing import Union
import numpy as np
from invokeai.backend.tiles.utils import TBLR, Tile, paste
import math
from invokeai.app.invocations.latent import LATENT_SCALE_FACTOR
from invokeai.backend.tiles.utils import TBLR, Tile, paste, seam_blend
def calc_overlap(tiles: list[Tile], num_tiles_x: int, num_tiles_y: int) -> list[Tile]:
"""Calculate and update the overlap of a list of tiles.
Args:
tiles (list[Tile]): The list of tiles whose overlap values will be calculated and updated in place.
num_tiles_x: the number of tiles on the x axis.
num_tiles_y: the number of tiles on the y axis.
Returns:
list[Tile]: The same list of tiles, with overlap values updated.
"""
def get_tile_or_none(idx_y: int, idx_x: int) -> Union[Tile, None]:
if idx_y < 0 or idx_y >= num_tiles_y or idx_x < 0 or idx_x >= num_tiles_x:
return None
return tiles[idx_y * num_tiles_x + idx_x]
for tile_idx_y in range(num_tiles_y):
for tile_idx_x in range(num_tiles_x):
cur_tile = get_tile_or_none(tile_idx_y, tile_idx_x)
top_neighbor_tile = get_tile_or_none(tile_idx_y - 1, tile_idx_x)
left_neighbor_tile = get_tile_or_none(tile_idx_y, tile_idx_x - 1)
assert cur_tile is not None
# Update cur_tile top-overlap and corresponding top-neighbor bottom-overlap.
if top_neighbor_tile is not None:
cur_tile.overlap.top = max(0, top_neighbor_tile.coords.bottom - cur_tile.coords.top)
top_neighbor_tile.overlap.bottom = cur_tile.overlap.top
# Update cur_tile left-overlap and corresponding left-neighbor right-overlap.
if left_neighbor_tile is not None:
cur_tile.overlap.left = max(0, left_neighbor_tile.coords.right - cur_tile.coords.left)
left_neighbor_tile.overlap.right = cur_tile.overlap.left
return tiles
def calc_tiles_with_overlap(
@ -63,31 +98,129 @@ def calc_tiles_with_overlap(
tiles.append(tile)
return calc_overlap(tiles, num_tiles_x, num_tiles_y)
def calc_tiles_even_split(
image_height: int, image_width: int, num_tiles_x: int, num_tiles_y: int, overlap_fraction: float = 0
) -> list[Tile]:
"""Calculate the tile coordinates for a given image shape with the number of tiles requested.
Args:
image_height (int): The image height in px.
image_width (int): The image width in px.
num_tiles_x (int): The number of tiles to split the image into on the X-axis.
num_tiles_y (int): The number of tiles to split the image into on the Y-axis.
overlap_fraction (float, optional): The target overlap as fraction of the tiles size. Defaults to 0.
Returns:
list[Tile]: A list of tiles that cover the image shape. Ordered from left-to-right, top-to-bottom.
"""
# Ensure the image size is divisible by LATENT_SCALE_FACTOR (8)
if image_width % LATENT_SCALE_FACTOR != 0 or image_height % LATENT_SCALE_FACTOR != 0:
raise ValueError(f"image size ({image_width}, {image_height}) must be divisible by {LATENT_SCALE_FACTOR}")
# Calculate the overlap size based on the percentage and adjust it to be divisible by 8 (rounding up)
overlap_x = LATENT_SCALE_FACTOR * math.ceil(
int((image_width / num_tiles_x) * overlap_fraction) / LATENT_SCALE_FACTOR
)
overlap_y = LATENT_SCALE_FACTOR * math.ceil(
int((image_height / num_tiles_y) * overlap_fraction) / LATENT_SCALE_FACTOR
)
# Calculate the tile size based on the number of tiles and overlap, and ensure it's divisible by 8 (rounding down)
tile_size_x = LATENT_SCALE_FACTOR * math.floor(
((image_width + overlap_x * (num_tiles_x - 1)) // num_tiles_x) / LATENT_SCALE_FACTOR
)
tile_size_y = LATENT_SCALE_FACTOR * math.floor(
((image_height + overlap_y * (num_tiles_y - 1)) // num_tiles_y) / LATENT_SCALE_FACTOR
)
# tiles[y * num_tiles_x + x] is the tile for the y'th row, x'th column.
tiles: list[Tile] = []
# Calculate tile coordinates. (Ignore overlap values for now.)
for tile_idx_y in range(num_tiles_y):
# Calculate the top and bottom of the row
top = tile_idx_y * (tile_size_y - overlap_y)
bottom = min(top + tile_size_y, image_height)
# For the last row, adjust bottom to be the height of the image
if tile_idx_y == num_tiles_y - 1:
bottom = image_height
for tile_idx_x in range(num_tiles_x):
cur_tile = get_tile_or_none(tile_idx_y, tile_idx_x)
top_neighbor_tile = get_tile_or_none(tile_idx_y - 1, tile_idx_x)
left_neighbor_tile = get_tile_or_none(tile_idx_y, tile_idx_x - 1)
# Calculate the left & right coordinate of each tile
left = tile_idx_x * (tile_size_x - overlap_x)
right = min(left + tile_size_x, image_width)
# For the last tile in the row, adjust right to be the width of the image
if tile_idx_x == num_tiles_x - 1:
right = image_width
assert cur_tile is not None
tile = Tile(
coords=TBLR(top=top, bottom=bottom, left=left, right=right),
overlap=TBLR(top=0, bottom=0, left=0, right=0),
)
# Update cur_tile top-overlap and corresponding top-neighbor bottom-overlap.
if top_neighbor_tile is not None:
cur_tile.overlap.top = max(0, top_neighbor_tile.coords.bottom - cur_tile.coords.top)
top_neighbor_tile.overlap.bottom = cur_tile.overlap.top
tiles.append(tile)
# Update cur_tile left-overlap and corresponding left-neighbor right-overlap.
if left_neighbor_tile is not None:
cur_tile.overlap.left = max(0, left_neighbor_tile.coords.right - cur_tile.coords.left)
left_neighbor_tile.overlap.right = cur_tile.overlap.left
return calc_overlap(tiles, num_tiles_x, num_tiles_y)
return tiles
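As a worked check of the rounding arithmetic above, assuming LATENT_SCALE_FACTOR is 8 and picking an illustrative 1024 px wide image split into 2 tiles with a 0.25 overlap fraction:
import math
LATENT_SCALE_FACTOR = 8  # assumed value of the constant imported above
image_width, num_tiles_x, overlap_fraction = 1024, 2, 0.25
overlap_x = LATENT_SCALE_FACTOR * math.ceil(int((image_width / num_tiles_x) * overlap_fraction) / LATENT_SCALE_FACTOR)
tile_size_x = LATENT_SCALE_FACTOR * math.floor(((image_width + overlap_x * (num_tiles_x - 1)) // num_tiles_x) / LATENT_SCALE_FACTOR)
assert (overlap_x, tile_size_x) == (128, 576)
# Two 576 px tiles overlapping by 128 px cover exactly 2 * 576 - 128 = 1024 px.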
def calc_tiles_min_overlap(
image_height: int,
image_width: int,
tile_height: int,
tile_width: int,
min_overlap: int = 0,
) -> list[Tile]:
"""Calculate the tile coordinates for a given image shape under a simple tiling scheme with overlaps.
Args:
image_height (int): The image height in px.
image_width (int): The image width in px.
tile_height (int): The tile height in px. All tiles will have this height.
tile_width (int): The tile width in px. All tiles will have this width.
min_overlap (int): The target minimum overlap between adjacent tiles. If the tiles do not evenly cover the image
shape, then the overlap will be spread between the tiles.
Returns:
list[Tile]: A list of tiles that cover the image shape. Ordered from left-to-right, top-to-bottom.
"""
assert min_overlap < tile_height
assert min_overlap < tile_width
# Catch the case where the tile size is larger than the image size and shrink the tile to fit
if image_width < tile_width:
tile_width = image_width
if image_height < tile_height:
tile_height = image_height
num_tiles_x = math.ceil((image_width - min_overlap) / (tile_width - min_overlap))
num_tiles_y = math.ceil((image_height - min_overlap) / (tile_height - min_overlap))
# tiles[y * num_tiles_x + x] is the tile for the y'th row, x'th column.
tiles: list[Tile] = []
# Calculate tile coordinates. (Ignore overlap values for now.)
for tile_idx_y in range(num_tiles_y):
top = (tile_idx_y * (image_height - tile_height)) // (num_tiles_y - 1) if num_tiles_y > 1 else 0
bottom = top + tile_height
for tile_idx_x in range(num_tiles_x):
left = (tile_idx_x * (image_width - tile_width)) // (num_tiles_x - 1) if num_tiles_x > 1 else 0
right = left + tile_width
tile = Tile(
coords=TBLR(top=top, bottom=bottom, left=left, right=right),
overlap=TBLR(top=0, bottom=0, left=0, right=0),
)
tiles.append(tile)
return calc_overlap(tiles, num_tiles_x, num_tiles_y)
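A small worked example of how this scheme spreads the tiles across the image; the pixel values are illustrative:
import math
image_width, tile_width, min_overlap = 1000, 512, 64
num_tiles_x = math.ceil((image_width - min_overlap) / (tile_width - min_overlap))
assert num_tiles_x == 3
# Tile lefts are distributed evenly across the slack (image_width - tile_width):
lefts = [(i * (image_width - tile_width)) // (num_tiles_x - 1) for i in range(num_tiles_x)]
assert lefts == [0, 244, 488]
# Adjacent tiles overlap by 512 - 244 = 268 px, comfortably above min_overlap.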
def merge_tiles_with_linear_blending(
@ -199,3 +332,91 @@ def merge_tiles_with_linear_blending(
),
mask=mask,
)
def merge_tiles_with_seam_blending(
dst_image: np.ndarray, tiles: list[Tile], tile_images: list[np.ndarray], blend_amount: int
):
"""Merge a set of image tiles into `dst_image` with seam blending between the tiles.
We expect every tile edge to either:
1) have an overlap of 0, because it is aligned with the image edge, or
2) have an overlap >= blend_amount.
If neither of these conditions is satisfied, we raise an exception.
The seam blending is centered on a minimum-energy seam through the overlap region between adjacent tiles.
Args:
dst_image (np.ndarray): The destination image. Shape: (H, W, C).
tiles (list[Tile]): The list of tiles describing the locations of the respective `tile_images`.
tile_images (list[np.ndarray]): The tile images to merge into `dst_image`.
blend_amount (int): The amount of blending (in px) between adjacent overlapping tiles.
"""
# Sort tiles and images first by left x coordinate, then by top y coordinate. During tile processing, we want to
# iterate over tiles left-to-right, top-to-bottom.
tiles_and_images = list(zip(tiles, tile_images, strict=True))
tiles_and_images = sorted(tiles_and_images, key=lambda x: x[0].coords.left)
tiles_and_images = sorted(tiles_and_images, key=lambda x: x[0].coords.top)
# Organize tiles into rows.
tile_and_image_rows: list[list[tuple[Tile, np.ndarray]]] = []
cur_tile_and_image_row: list[tuple[Tile, np.ndarray]] = []
first_tile_in_cur_row, _ = tiles_and_images[0]
for tile_and_image in tiles_and_images:
tile, _ = tile_and_image
if not (
tile.coords.top == first_tile_in_cur_row.coords.top
and tile.coords.bottom == first_tile_in_cur_row.coords.bottom
):
# Store the previous row, and start a new one.
tile_and_image_rows.append(cur_tile_and_image_row)
cur_tile_and_image_row = []
first_tile_in_cur_row, _ = tile_and_image
cur_tile_and_image_row.append(tile_and_image)
tile_and_image_rows.append(cur_tile_and_image_row)
for tile_and_image_row in tile_and_image_rows:
first_tile_in_row, _ = tile_and_image_row[0]
row_height = first_tile_in_row.coords.bottom - first_tile_in_row.coords.top
row_image = np.zeros((row_height, dst_image.shape[1], dst_image.shape[2]), dtype=dst_image.dtype)
# Blend the tiles in the row horizontally.
for tile, tile_image in tile_and_image_row:
# We expect the tiles to be ordered left-to-right.
# For each tile:
# - extract the overlap region and pass it to seam_blend()
# - apply the blended region to the row_image
# - apply the un-blended region to the row_image
tile_height, tile_width, _ = tile_image.shape
overlap_size = tile.overlap.left
# Left blending:
if overlap_size > 0:
assert overlap_size >= blend_amount
overlap_coord_right = tile.coords.left + overlap_size
src_overlap = row_image[:, tile.coords.left : overlap_coord_right]
dst_overlap = tile_image[:, :overlap_size]
blended_overlap = seam_blend(src_overlap, dst_overlap, blend_amount, x_seam=False)
row_image[:, tile.coords.left : overlap_coord_right] = blended_overlap
row_image[:, overlap_coord_right : tile.coords.right] = tile_image[:, overlap_size:]
else:
# No overlap; just paste the tile
row_image[:, tile.coords.left : tile.coords.right] = tile_image
# Blend the row into the dst_image
# We assume that the entire row has the same vertical overlaps as the first_tile_in_row.
# Rows are processed in the same way as tiles (extract overlap, blend, apply)
row_overlap_size = first_tile_in_row.overlap.top
if row_overlap_size > 0:
assert row_overlap_size >= blend_amount
overlap_coords_bottom = first_tile_in_row.coords.top + row_overlap_size
src_overlap = dst_image[first_tile_in_row.coords.top : overlap_coords_bottom, :]
dst_overlap = row_image[:row_overlap_size, :]
blended_overlap = seam_blend(src_overlap, dst_overlap, blend_amount, x_seam=True)
dst_image[first_tile_in_row.coords.top : overlap_coords_bottom, :] = blended_overlap
dst_image[overlap_coords_bottom : first_tile_in_row.coords.bottom, :] = row_image[row_overlap_size:, :]
else:
# No overlap; just paste the row
dst_image[first_tile_in_row.coords.top : first_tile_in_row.coords.bottom, :] = row_image
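A hedged usage sketch, assuming the tiling helpers in this diff are importable; the 100 px canvas and 64 px tiles are illustrative and give every interior edge a 28 px overlap, so any blend_amount <= 28 satisfies the conditions above:
import numpy as np
tiles = calc_tiles_min_overlap(image_height=100, image_width=100, tile_height=64, tile_width=64, min_overlap=16)
tile_images = [np.random.rand(64, 64, 3).astype(np.float32) * 255.0 for _ in tiles]
dst_image = np.zeros((100, 100, 3), dtype=np.float32)
merge_tiles_with_seam_blending(dst_image, tiles, tile_images, blend_amount=8)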

View File

@ -1,5 +1,7 @@
import math
from typing import Optional
import cv2
import numpy as np
from pydantic import BaseModel, Field
@ -31,10 +33,10 @@ def paste(dst_image: np.ndarray, src_image: np.ndarray, box: TBLR, mask: Optiona
"""Paste a source image into a destination image.
Args:
dst_image (torch.Tensor): The destination image to paste into. Shape: (H, W, C).
src_image (torch.Tensor): The source image to paste. Shape: (H, W, C). H and W must be compatible with 'box'.
dst_image (np.array): The destination image to paste into. Shape: (H, W, C).
src_image (np.array): The source image to paste. Shape: (H, W, C). H and W must be compatible with 'box'.
box (TBLR): Box defining the region in the 'dst_image' where 'src_image' will be pasted.
mask (Optional[torch.Tensor]): A mask that defines the blending between 'src_image' and 'dst_image'.
mask (Optional[np.array]): A mask that defines the blending between 'src_image' and 'dst_image'.
Range: [0.0, 1.0], Shape: (H, W). The output is calculated per-pixel according to
`src * mask + dst * (1 - mask)`.
"""
@ -45,3 +47,106 @@ def paste(dst_image: np.ndarray, src_image: np.ndarray, box: TBLR, mask: Optiona
mask = np.expand_dims(mask, -1)
dst_image_box = dst_image[box.top : box.bottom, box.left : box.right]
dst_image[box.top : box.bottom, box.left : box.right] = src_image * mask + dst_image_box * (1.0 - mask)
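A tiny sketch of the blend formula with a constant half-strength mask; shapes and values are illustrative:
import numpy as np
dst = np.zeros((4, 4, 3), dtype=np.float32)
src = np.ones((2, 2, 3), dtype=np.float32)
half_mask = np.full((2, 2), 0.5, dtype=np.float32)
paste(dst, src, TBLR(top=1, bottom=3, left=1, right=3), mask=half_mask)
# Inside the box each pixel is src * 0.5 + dst * 0.5 = 0.5; outside it stays 0.
assert float(dst[1, 1, 0]) == 0.5
assert float(dst[0, 0, 0]) == 0.0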
def seam_blend(ia1: np.ndarray, ia2: np.ndarray, blend_amount: int, x_seam: bool) -> np.ndarray:
"""Blend two overlapping tile sections using a seams to find a path.
It is assumed that input images will be RGB np arrays and are the same size.
Args:
ia1 (np.array): Image array 1 Shape: (H, W, C).
ia2 (np.array): Image array 2 Shape: (H, W, C).
x_seam (bool): If the images should be blended on the x axis or not.
blend_amount (int): The size of the blur to use on the seam. Half of this value will be used to avoid the edges of the image.
"""
assert ia1.shape == ia2.shape
assert ia1.size == ia2.size
def shift(arr, num, fill_value=255.0):
result = np.full_like(arr, fill_value)
if num > 0:
result[num:] = arr[:-num]
elif num < 0:
result[:num] = arr[-num:]
else:
result[:] = arr
return result
# Assume RGB and convert to grey.
# Other luminance conversions could be offered here
# (BT.709 [0.2126, 0.7152, 0.0722], BT.2020 [0.2627, 0.6780, 0.0593]),
# though the choice likely has little impact given the blur applied over the seam.
iag1 = np.dot(ia1, [0.2989, 0.5870, 0.1140]) # BT.601 perceived brightness
iag2 = np.dot(ia2, [0.2989, 0.5870, 0.1140])
# Calc Difference between the images
ia = iag2 - iag1
# If the seam is on the X-axis rotate the array so we can treat it like a vertical seam
if x_seam:
ia = np.rot90(ia, 1)
# Calc max and min X & Y limits
# gutter is used to avoid the blur hitting the edge of the image
gutter = math.ceil(blend_amount / 2) if blend_amount > 0 else 0
max_y, max_x = ia.shape
max_x -= gutter
min_x = gutter
# Calc the energy in the difference
# Could offer different energy calculations e.g. Sobel or Scharr
energy = np.abs(np.gradient(ia, axis=0)) + np.abs(np.gradient(ia, axis=1))
# Find the starting position of the seam
res = np.copy(energy)
for y in range(1, max_y):
row = res[y, :]
rowl = shift(row, -1)
rowr = shift(row, 1)
res[y, :] = res[y - 1, :] + np.min([row, rowl, rowr], axis=0)
# Create an array max_y long to hold the seam's x position at each y
lowest_energy_line = np.empty([max_y], dtype="uint16")
lowest_energy_line[max_y - 1] = np.argmin(res[max_y - 1, min_x : max_x - 1]) + min_x  # argmin is relative to the slice; re-add min_x
# Calc the path of the seam
# The search could be widened beyond 1 pixel by adjusting lpos and rpos
for ypos in range(max_y - 2, -1, -1):
lowest_pos = lowest_energy_line[ypos + 1]
lpos = lowest_pos - 1
rpos = lowest_pos + 1
lpos = np.clip(lpos, min_x, max_x - 1)
rpos = np.clip(rpos, min_x, max_x - 1)
lowest_energy_line[ypos] = np.argmin(energy[ypos, lpos : rpos + 1]) + lpos
# Draw the mask
mask = np.zeros_like(ia)
for ypos in range(0, max_y):
to_fill = lowest_energy_line[ypos]
mask[ypos, :to_fill] = 1
# If the seam is on the X-axis rotate the array back
if x_seam:
mask = np.rot90(mask, 3)
# blur the seam mask if required
if blend_amount > 0:
mask = cv2.blur(mask, (blend_amount, blend_amount))
# for visual debugging
# from PIL import Image
# m_image = Image.fromarray((mask * 255.0).astype("uint8"))
# copy ia2 over ia1 while applying the seam mask
mask = np.expand_dims(mask, -1)
blended_image = ia1 * mask + ia2 * (1.0 - mask)
# for visual debugging
# i1 = Image.fromarray(ia1.astype("uint8"))
# i2 = Image.fromarray(ia2.astype("uint8"))
# b_image = Image.fromarray(blended_image.astype("uint8"))
# print(f"{ia1.shape}, {ia2.shape}, {mask.shape}, {blended_image.shape}")
# print(f"{i1.size}, {i2.size}, {m_image.size}, {b_image.size}")
return blended_image
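The cumulative-energy loop is a seam-carving-style dynamic program: each row accumulates the cheapest of the three pixels the path could step through, and the second loop then traces the cheapest path back up one row at a time. A minimal usage sketch on synthetic overlap strips (random data, illustrative sizes):
import numpy as np
ia1 = np.random.rand(64, 32, 3) * 255.0  # e.g. what is already in the row image under the overlap
ia2 = np.random.rand(64, 32, 3) * 255.0  # the incoming tile's overlapping strip
blended = seam_blend(ia1, ia2, blend_amount=8, x_seam=False)
assert blended.shape == ia1.shape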

View File

@ -11,4 +11,7 @@ from .devices import ( # noqa: F401
normalize_device,
torch_dtype,
)
from .logging import InvokeAILogger
from .util import Chdir, ask_user, download_with_resume, instantiate_from_config, url_attachment_name # noqa: F401
__all__ = ["Chdir", "InvokeAILogger", "choose_precision", "choose_torch_device"]

View File

@ -950,9 +950,9 @@
"problemSettingTitle": "Problem Setting Title",
"reloadNodeTemplates": "Reload Node Templates",
"removeLinearView": "Remove from Linear View",
"resetWorkflow": "Reset Workflow Editor",
"resetWorkflowDesc": "Are you sure you want to reset the Workflow Editor?",
"resetWorkflowDesc2": "Resetting the Workflow Editor will clear all nodes, edges and workflow details. Saved workflows will not be affected.",
"newWorkflow": "New Workflow",
"newWorkflowDesc": "Create a new workflow?",
"newWorkflowDesc2": "Your current workflow has unsaved changes.",
"scheduler": "Scheduler",
"schedulerDescription": "TODO",
"sDXLMainModelField": "SDXL Model",
@ -1032,7 +1032,9 @@
"workflowValidation": "Workflow Validation Error",
"workflowVersion": "Version",
"zoomInNodes": "Zoom In",
"zoomOutNodes": "Zoom Out"
"zoomOutNodes": "Zoom Out",
"betaDesc": "This invocation is in beta. Until it is stable, it may have breaking changes during app updates. We plan to support this invocation long-term.",
"prototypeDesc": "This invocation is a prototype. It may have breaking changes during app updates and may be removed at any time."
},
"parameters": {
"aspectRatio": "Aspect Ratio",
@ -1632,10 +1634,10 @@
"userWorkflows": "My Workflows",
"defaultWorkflows": "Default Workflows",
"openWorkflow": "Open Workflow",
"uploadWorkflow": "Upload Workflow",
"uploadWorkflow": "Load from File",
"deleteWorkflow": "Delete Workflow",
"unnamedWorkflow": "Unnamed Workflow",
"downloadWorkflow": "Download Workflow",
"downloadWorkflow": "Save to File",
"saveWorkflow": "Save Workflow",
"saveWorkflowAs": "Save Workflow As",
"savingWorkflow": "Saving Workflow...",
@ -1650,7 +1652,7 @@
"searchWorkflows": "Search Workflows",
"clearWorkflowSearchFilter": "Clear Workflow Search Filter",
"workflowName": "Workflow Name",
"workflowEditorReset": "Workflow Editor Reset",
"newWorkflowCreated": "New Workflow Created",
"workflowEditorMenu": "Workflow Editor Menu",
"workflowIsOpen": "Workflow is Open"
},

View File

@ -104,7 +104,16 @@
"copyError": "$t(gallery.copy) Errore",
"input": "Ingresso",
"notInstalled": "Non $t(common.installed)",
"unknownError": "Errore sconosciuto"
"unknownError": "Errore sconosciuto",
"updated": "Aggiornato",
"save": "Salva",
"created": "Creato",
"prevPage": "Pagina precedente",
"delete": "Elimina",
"orderBy": "Ordinato per",
"nextPage": "Pagina successiva",
"saveAs": "Salva come",
"unsaved": "Non salvato"
},
"gallery": {
"generations": "Generazioni",
@ -763,7 +772,10 @@
"setIPAdapterImage": "Imposta come immagine per l'Adattatore IP",
"problemSavingMaskDesc": "Impossibile salvare la maschera",
"setAsCanvasInitialImage": "Imposta come immagine iniziale della tela",
"invalidUpload": "Caricamento non valido"
"invalidUpload": "Caricamento non valido",
"problemDeletingWorkflow": "Problema durante l'eliminazione del flusso di lavoro",
"workflowDeleted": "Flusso di lavoro eliminato",
"problemRetrievingWorkflow": "Problema nel recupero del flusso di lavoro"
},
"tooltip": {
"feature": {
@ -886,11 +898,11 @@
"zoomInNodes": "Ingrandire",
"fitViewportNodes": "Adatta vista",
"showGraphNodes": "Mostra sovrapposizione grafico",
"resetWorkflowDesc2": "Reimpostare il flusso di lavoro cancellerà tutti i nodi, i bordi e i dettagli del flusso di lavoro.",
"resetWorkflowDesc2": "Il ripristino dell'editor del flusso di lavoro cancellerà tutti i nodi, le connessioni e i dettagli del flusso di lavoro. I flussi di lavoro salvati non saranno interessati.",
"reloadNodeTemplates": "Ricarica i modelli di nodo",
"loadWorkflow": "Importa flusso di lavoro JSON",
"resetWorkflow": "Reimposta flusso di lavoro",
"resetWorkflowDesc": "Sei sicuro di voler reimpostare questo flusso di lavoro?",
"resetWorkflow": "Reimposta l'editor del flusso di lavoro",
"resetWorkflowDesc": "Sei sicuro di voler reimpostare l'editor del flusso di lavoro?",
"downloadWorkflow": "Esporta flusso di lavoro JSON",
"scheduler": "Campionatore",
"addNode": "Aggiungi nodo",
@ -1080,25 +1092,27 @@
"collectionOrScalarFieldType": "{{name}} Raccolta|Scalare",
"nodeVersion": "Versione Nodo",
"inputFieldTypeParseError": "Impossibile analizzare il tipo di campo di input {{node}}.{{field}} ({{message}})",
"unsupportedArrayItemType": "tipo di elemento dell'array non supportato \"{{type}}\"",
"unsupportedArrayItemType": "Tipo di elemento dell'array non supportato \"{{type}}\"",
"targetNodeFieldDoesNotExist": "Connessione non valida: il campo di destinazione/input {{node}}.{{field}} non esiste",
"unsupportedMismatchedUnion": "tipo CollectionOrScalar non corrispondente con tipi di base {{firstType}} e {{secondType}}",
"allNodesUpdated": "Tutti i nodi sono aggiornati",
"sourceNodeDoesNotExist": "Connessione non valida: il nodo di origine/output {{node}} non esiste",
"unableToExtractEnumOptions": "impossibile estrarre le opzioni enum",
"unableToParseFieldType": "impossibile analizzare il tipo di campo",
"unableToExtractEnumOptions": "Impossibile estrarre le opzioni enum",
"unableToParseFieldType": "Impossibile analizzare il tipo di campo",
"unrecognizedWorkflowVersion": "Versione dello schema del flusso di lavoro non riconosciuta {{version}}",
"outputFieldTypeParseError": "Impossibile analizzare il tipo di campo di output {{node}}.{{field}} ({{message}})",
"sourceNodeFieldDoesNotExist": "Connessione non valida: il campo di origine/output {{node}}.{{field}} non esiste",
"unableToGetWorkflowVersion": "Impossibile ottenere la versione dello schema del flusso di lavoro",
"nodePack": "Pacchetto di nodi",
"unableToExtractSchemaNameFromRef": "impossibile estrarre il nome dello schema dal riferimento",
"unableToExtractSchemaNameFromRef": "Impossibile estrarre il nome dello schema dal riferimento",
"unknownOutput": "Output sconosciuto: {{name}}",
"unknownNodeType": "Tipo di nodo sconosciuto",
"targetNodeDoesNotExist": "Connessione non valida: il nodo di destinazione/input {{node}} non esiste",
"unknownFieldType": "$t(nodes.unknownField) tipo: {{type}}",
"deletedInvalidEdge": "Eliminata connessione non valida {{source}} -> {{target}}",
"unknownInput": "Input sconosciuto: {{name}}"
"unknownInput": "Input sconosciuto: {{name}}",
"prototypeDesc": "Questa invocazione è un prototipo. Potrebbe subire modifiche sostanziali durante gli aggiornamenti dell'app e potrebbe essere rimossa in qualsiasi momento.",
"betaDesc": "Questa invocazione è in versione beta. Fino a quando non sarà stabile, potrebbe subire modifiche importanti durante gli aggiornamenti dell'app. Abbiamo intenzione di supportare questa invocazione a lungo termine."
},
"boards": {
"autoAddBoard": "Aggiungi automaticamente bacheca",
@ -1594,5 +1608,34 @@
"hrf": "Correzione Alta Risoluzione",
"hrfStrength": "Forza della Correzione Alta Risoluzione",
"strengthTooltip": "Valori più bassi comportano meno dettagli, il che può ridurre potenziali artefatti."
},
"workflows": {
"saveWorkflowAs": "Salva flusso di lavoro come",
"workflowEditorMenu": "Menu dell'editor del flusso di lavoro",
"noSystemWorkflows": "Nessun flusso di lavoro del sistema",
"workflowName": "Nome del flusso di lavoro",
"noUserWorkflows": "Nessun flusso di lavoro utente",
"defaultWorkflows": "Flussi di lavoro predefiniti",
"saveWorkflow": "Salva flusso di lavoro",
"openWorkflow": "Apri flusso di lavoro",
"clearWorkflowSearchFilter": "Cancella il filtro di ricerca del flusso di lavoro",
"workflowEditorReset": "Reimpostazione dell'editor del flusso di lavoro",
"workflowLibrary": "Libreria",
"noRecentWorkflows": "Nessun flusso di lavoro recente",
"workflowSaved": "Flusso di lavoro salvato",
"workflowIsOpen": "Il flusso di lavoro è aperto",
"unnamedWorkflow": "Flusso di lavoro senza nome",
"savingWorkflow": "Salvataggio del flusso di lavoro...",
"problemLoading": "Problema durante il caricamento dei flussi di lavoro",
"loading": "Caricamento dei flussi di lavoro",
"searchWorkflows": "Cerca flussi di lavoro",
"problemSavingWorkflow": "Problema durante il salvataggio del flusso di lavoro",
"deleteWorkflow": "Elimina flusso di lavoro",
"workflows": "Flussi di lavoro",
"noDescription": "Nessuna descrizione",
"userWorkflows": "I miei flussi di lavoro"
},
"app": {
"storeNotInitialized": "Il negozio non è inizializzato"
}
}

File diff suppressed because it is too large

View File

@ -109,7 +109,18 @@
"somethingWentWrong": "出了点问题",
"copyError": "$t(gallery.copy) 错误",
"input": "输入",
"notInstalled": "非 $t(common.installed)"
"notInstalled": "非 $t(common.installed)",
"delete": "删除",
"updated": "已上传",
"save": "保存",
"created": "已创建",
"prevPage": "上一页",
"unknownError": "未知错误",
"direction": "指向",
"orderBy": "排序方式:",
"nextPage": "下一页",
"saveAs": "保存为",
"unsaved": "未保存"
},
"gallery": {
"generations": "生成的图像",
@ -145,7 +156,11 @@
"image": "图像",
"drop": "弃用",
"dropOrUpload": "$t(gallery.drop) 或上传",
"dropToUpload": "$t(gallery.drop) 以上传"
"dropToUpload": "$t(gallery.drop) 以上传",
"problemDeletingImagesDesc": "有一张或多张图像无法被删除",
"problemDeletingImages": "删除图像时出现问题",
"unstarImage": "取消收藏图像",
"starImage": "收藏图像"
},
"hotkeys": {
"keyboardShortcuts": "键盘快捷键",
@ -723,7 +738,7 @@
"nodesUnrecognizedTypes": "无法加载。节点图有无法识别的节点类型",
"nodesNotValidJSON": "无效的 JSON",
"nodesNotValidGraph": "无效的 InvokeAi 节点图",
"nodesLoadedFailed": "节点加载失败",
"nodesLoadedFailed": "节点加载失败",
"modelAddedSimple": "已添加模型",
"modelAdded": "已添加模型: {{modelName}}",
"imageSavingFailed": "图像保存失败",
@ -759,7 +774,10 @@
"problemImportingMask": "导入遮罩时出现问题",
"baseModelChangedCleared_other": "基础模型已更改, 已清除或禁用 {{count}} 个不兼容的子模型",
"setAsCanvasInitialImage": "设为画布初始图像",
"invalidUpload": "无效的上传"
"invalidUpload": "无效的上传",
"problemDeletingWorkflow": "删除工作流时出现问题",
"workflowDeleted": "已删除工作流",
"problemRetrievingWorkflow": "检索工作流时发生问题"
},
"unifiedCanvas": {
"layer": "图层",
@ -874,11 +892,11 @@
},
"nodes": {
"zoomInNodes": "放大",
"resetWorkflowDesc": "是否确定要清空节点图?",
"resetWorkflow": "清空节点图",
"loadWorkflow": "读取节点图",
"resetWorkflowDesc": "是否确定要重置工作流编辑器?",
"resetWorkflow": "重置工作流编辑器",
"loadWorkflow": "加载工作流",
"zoomOutNodes": "缩小",
"resetWorkflowDesc2": "重置节点图将清除所有节点、边际和节点图详情.",
"resetWorkflowDesc2": "重置工作流编辑器将清除所有节点、边际和节点图详情。不影响已保存的工作流。",
"reloadNodeTemplates": "重载节点模板",
"hideGraphNodes": "隐藏节点图信息",
"fitViewportNodes": "自适应视图",
@ -887,7 +905,7 @@
"showLegendNodes": "显示字段类型图例",
"hideLegendNodes": "隐藏字段类型图例",
"showGraphNodes": "显示节点图信息",
"downloadWorkflow": "下载节点图 JSON",
"downloadWorkflow": "下载工作流 JSON",
"workflowDescription": "简述",
"versionUnknown": " 未知版本",
"noNodeSelected": "无选中的节点",
@ -1102,7 +1120,9 @@
"collectionOrScalarFieldType": "{{name}} 合集 | 标量",
"nodeVersion": "节点版本",
"deletedInvalidEdge": "已删除无效的边缘 {{source}} -> {{target}}",
"unknownInput": "未知输入:{{name}}"
"unknownInput": "未知输入:{{name}}",
"prototypeDesc": "此调用是一个原型 (prototype)。它可能会在本项目更新期间发生破坏性更改,并且随时可能被删除。",
"betaDesc": "此调用尚处于测试阶段。在稳定之前,它可能会在项目更新期间发生破坏性更改。本项目计划长期支持这种调用。"
},
"controlnet": {
"resize": "直接缩放",
@ -1606,5 +1626,36 @@
"hrf": "高分辨率修复",
"hrfStrength": "高分辨率修复强度",
"strengthTooltip": "值越低细节越少,但可以减少部分潜在的伪影。"
},
"workflows": {
"saveWorkflowAs": "保存工作流为",
"workflowEditorMenu": "工作流编辑器菜单",
"noSystemWorkflows": "无系统工作流",
"workflowName": "工作流名称",
"noUserWorkflows": "无用户工作流",
"defaultWorkflows": "默认工作流",
"saveWorkflow": "保存工作流",
"openWorkflow": "打开工作流",
"clearWorkflowSearchFilter": "清除工作流检索过滤器",
"workflowEditorReset": "工作流编辑器重置",
"workflowLibrary": "工作流库",
"downloadWorkflow": "下载工作流",
"noRecentWorkflows": "无最近工作流",
"workflowSaved": "已保存工作流",
"workflowIsOpen": "工作流已打开",
"unnamedWorkflow": "未命名的工作流",
"savingWorkflow": "保存工作流中...",
"problemLoading": "加载工作流时出现问题",
"loading": "加载工作流中",
"searchWorkflows": "检索工作流",
"problemSavingWorkflow": "保存工作流时出现问题",
"deleteWorkflow": "删除工作流",
"workflows": "工作流",
"noDescription": "无描述",
"uploadWorkflow": "上传工作流",
"userWorkflows": "我的工作流"
},
"app": {
"storeNotInitialized": "商店尚未初始化"
}
}

View File

@ -3,6 +3,7 @@ import { useStore } from '@nanostores/react';
import { $customStarUI } from 'app/store/nanostores/customStarUI';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import IAIDndImage from 'common/components/IAIDndImage';
import IAIDndImageIcon from 'common/components/IAIDndImageIcon';
import IAIFillSkeleton from 'common/components/IAIFillSkeleton';
import { imagesToDeleteSelected } from 'features/deleteImageModal/store/slice';
import {
@ -10,7 +11,9 @@ import {
ImageDraggableData,
TypesafeDraggableData,
} from 'features/dnd/types';
import { VirtuosoGalleryContext } from 'features/gallery/components/ImageGrid/types';
import { useMultiselect } from 'features/gallery/hooks/useMultiselect';
import { useScrollToVisible } from 'features/gallery/hooks/useScrollToVisible';
import { MouseEvent, memo, useCallback, useMemo, useState } from 'react';
import { useTranslation } from 'react-i18next';
import { FaTrash } from 'react-icons/fa';
@ -20,15 +23,16 @@ import {
useStarImagesMutation,
useUnstarImagesMutation,
} from 'services/api/endpoints/images';
import IAIDndImageIcon from 'common/components/IAIDndImageIcon';
interface HoverableImageProps {
imageName: string;
index: number;
virtuosoContext: VirtuosoGalleryContext;
}
const GalleryImage = (props: HoverableImageProps) => {
const dispatch = useAppDispatch();
const { imageName } = props;
const { imageName, virtuosoContext } = props;
const { currentData: imageDTO } = useGetImageDTOQuery(imageName);
const shift = useAppSelector((state) => state.hotkeys.shift);
const { t } = useTranslation();
@ -38,6 +42,13 @@ const GalleryImage = (props: HoverableImageProps) => {
const customStarUi = useStore($customStarUI);
const imageContainerRef = useScrollToVisible(
isSelected,
props.index,
selectionCount,
virtuosoContext
);
const handleDelete = useCallback(
(e: MouseEvent<HTMLButtonElement>) => {
e.stopPropagation();
@ -122,6 +133,7 @@ const GalleryImage = (props: HoverableImageProps) => {
data-testid={`image-${imageDTO.image_name}`}
>
<Flex
ref={imageContainerRef}
userSelect="none"
sx={{
position: 'relative',

View File

@ -1,7 +1,10 @@
import { Box, Flex } from '@chakra-ui/react';
import { EntityId } from '@reduxjs/toolkit';
import { useAppSelector } from 'app/store/storeHooks';
import IAIButton from 'common/components/IAIButton';
import { IAINoContentFallback } from 'common/components/IAIImageFallback';
import { VirtuosoGalleryContext } from 'features/gallery/components/ImageGrid/types';
import { $useNextPrevImageState } from 'features/gallery/hooks/useNextPrevImage';
import { selectListImagesBaseQueryArgs } from 'features/gallery/store/gallerySelectors';
import { IMAGE_LIMIT } from 'features/gallery/store/types';
import {
@ -11,7 +14,12 @@ import {
import { memo, useCallback, useEffect, useMemo, useRef, useState } from 'react';
import { useTranslation } from 'react-i18next';
import { FaExclamationCircle, FaImage } from 'react-icons/fa';
import { VirtuosoGrid } from 'react-virtuoso';
import {
ItemContent,
ListRange,
VirtuosoGrid,
VirtuosoGridHandle,
} from 'react-virtuoso';
import {
useLazyListImagesQuery,
useListImagesQuery,
@ -20,7 +28,6 @@ import { useBoardTotal } from 'services/api/hooks/useBoardTotal';
import GalleryImage from './GalleryImage';
import ImageGridItemContainer from './ImageGridItemContainer';
import ImageGridListContainer from './ImageGridListContainer';
import { EntityId } from '@reduxjs/toolkit';
const overlayScrollbarsConfig: UseOverlayScrollbarsParams = {
defer: true,
@ -48,6 +55,10 @@ const GalleryImageGrid = () => {
const { currentViewTotal } = useBoardTotal(selectedBoardId);
const queryArgs = useAppSelector(selectListImagesBaseQueryArgs);
const virtuosoRangeRef = useRef<ListRange | null>(null);
const virtuosoRef = useRef<VirtuosoGridHandle>(null);
const { currentData, isFetching, isSuccess, isError } =
useListImagesQuery(queryArgs);
@ -72,12 +83,26 @@ const GalleryImageGrid = () => {
});
}, [areMoreAvailable, listImages, queryArgs, currentData?.ids.length]);
const itemContentFunc = useCallback(
(index: number, imageName: EntityId) => (
<GalleryImage key={imageName} imageName={imageName as string} />
),
[]
);
const virtuosoContext = useMemo<VirtuosoGalleryContext>(() => {
return {
virtuosoRef,
rootRef,
virtuosoRangeRef,
};
}, []);
const itemContentFunc: ItemContent<EntityId, VirtuosoGalleryContext> =
useCallback(
(index, imageName, virtuosoContext) => (
<GalleryImage
key={imageName}
index={index}
imageName={imageName as string}
virtuosoContext={virtuosoContext}
/>
),
[]
);
useEffect(() => {
// Initialize the gallery's custom scrollbar
@ -93,6 +118,15 @@ const GalleryImageGrid = () => {
return () => osInstance()?.destroy();
}, [scroller, initialize, osInstance]);
const onRangeChanged = useCallback((range: ListRange) => {
virtuosoRangeRef.current = range;
}, []);
useEffect(() => {
$useNextPrevImageState.setKey('virtuosoRef', virtuosoRef);
$useNextPrevImageState.setKey('virtuosoRangeRef', virtuosoRangeRef);
}, []);
if (!currentData) {
return (
<Flex
@ -140,6 +174,10 @@ const GalleryImageGrid = () => {
}}
scrollerRef={setScroller}
itemContent={itemContentFunc}
ref={virtuosoRef}
rangeChanged={onRangeChanged}
context={virtuosoContext}
overscan={10}
/>
</Box>
<IAIButton

View File

@ -0,0 +1,8 @@
import { RefObject } from 'react';
import { ListRange, VirtuosoGridHandle } from 'react-virtuoso';
export type VirtuosoGalleryContext = {
virtuosoRef: RefObject<VirtuosoGridHandle>;
rootRef: RefObject<HTMLDivElement>;
virtuosoRangeRef: RefObject<ListRange>;
};

View File

@ -1,7 +1,7 @@
import { IAINoContentFallback } from 'common/components/IAIImageFallback';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { useGetImageWorkflowQuery } from 'services/api/endpoints/images';
import { useDebouncedImageWorkflow } from 'services/api/hooks/useDebouncedImageWorkflow';
import { ImageDTO } from 'services/api/types';
import DataViewer from './DataViewer';
@ -11,7 +11,7 @@ type Props = {
const ImageMetadataWorkflowTabContent = ({ image }: Props) => {
const { t } = useTranslation();
const { currentData: workflow } = useGetImageWorkflowQuery(image.image_name);
const { workflow } = useDebouncedImageWorkflow(image);
if (!workflow) {
return <IAINoContentFallback label={t('nodes.noWorkflow')} />;

View File

@ -4,8 +4,11 @@ import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { selectListImagesBaseQueryArgs } from 'features/gallery/store/gallerySelectors';
import { imageSelected } from 'features/gallery/store/gallerySlice';
import { IMAGE_LIMIT } from 'features/gallery/store/types';
import { getScrollToIndexAlign } from 'features/gallery/util/getScrollToIndexAlign';
import { clamp } from 'lodash-es';
import { useCallback } from 'react';
import { map } from 'nanostores';
import { RefObject, useCallback } from 'react';
import { ListRange, VirtuosoGridHandle } from 'react-virtuoso';
import { boardsApi } from 'services/api/endpoints/boards';
import {
imagesApi,
@ -14,6 +17,16 @@ import {
import { ListImagesArgs } from 'services/api/types';
import { imagesAdapter } from 'services/api/util';
export type UseNextPrevImageState = {
virtuosoRef: RefObject<VirtuosoGridHandle> | undefined;
virtuosoRangeRef: RefObject<ListRange> | undefined;
};
export const $useNextPrevImageState = map<UseNextPrevImageState>({
virtuosoRef: undefined,
virtuosoRangeRef: undefined,
});
export const nextPrevImageButtonsSelector = createMemoizedSelector(
[stateSelector, selectListImagesBaseQueryArgs],
(state, baseQueryArgs) => {
@ -78,6 +91,8 @@ export const nextPrevImageButtonsSelector = createMemoizedSelector(
isFetching: status === 'pending',
nextImage,
prevImage,
nextImageIndex,
prevImageIndex,
queryArgs,
};
}
@ -88,7 +103,9 @@ export const useNextPrevImage = () => {
const {
nextImage,
nextImageIndex,
prevImage,
prevImageIndex,
areMoreImagesAvailable,
isFetching,
queryArgs,
@ -98,11 +115,43 @@ export const useNextPrevImage = () => {
const handlePrevImage = useCallback(() => {
prevImage && dispatch(imageSelected(prevImage));
}, [dispatch, prevImage]);
const range = $useNextPrevImageState.get().virtuosoRangeRef?.current;
const virtuoso = $useNextPrevImageState.get().virtuosoRef?.current;
if (!range || !virtuoso) {
return;
}
if (
prevImageIndex !== undefined &&
(prevImageIndex < range.startIndex || prevImageIndex > range.endIndex)
) {
virtuoso.scrollToIndex({
index: prevImageIndex,
behavior: 'smooth',
align: getScrollToIndexAlign(prevImageIndex, range),
});
}
}, [dispatch, prevImage, prevImageIndex]);
const handleNextImage = useCallback(() => {
nextImage && dispatch(imageSelected(nextImage));
}, [dispatch, nextImage]);
const range = $useNextPrevImageState.get().virtuosoRangeRef?.current;
const virtuoso = $useNextPrevImageState.get().virtuosoRef?.current;
if (!range || !virtuoso) {
return;
}
if (
nextImageIndex !== undefined &&
(nextImageIndex < range.startIndex || nextImageIndex > range.endIndex)
) {
virtuoso.scrollToIndex({
index: nextImageIndex,
behavior: 'smooth',
align: getScrollToIndexAlign(nextImageIndex, range),
});
}
}, [dispatch, nextImage, nextImageIndex]);
const [listImages] = useLazyListImagesQuery();

View File

@ -0,0 +1,46 @@
import { VirtuosoGalleryContext } from 'features/gallery/components/ImageGrid/types';
import { getScrollToIndexAlign } from 'features/gallery/util/getScrollToIndexAlign';
import { useEffect, useRef } from 'react';
export const useScrollToVisible = (
isSelected: boolean,
index: number,
selectionCount: number,
virtuosoContext: VirtuosoGalleryContext
) => {
const imageContainerRef = useRef<HTMLDivElement>(null);
useEffect(() => {
if (
!isSelected ||
selectionCount !== 1 ||
!virtuosoContext.rootRef.current ||
!virtuosoContext.virtuosoRef.current ||
!virtuosoContext.virtuosoRangeRef.current ||
!imageContainerRef.current
) {
return;
}
const itemRect = imageContainerRef.current.getBoundingClientRect();
const rootRect = virtuosoContext.rootRef.current.getBoundingClientRect();
const itemIsVisible =
itemRect.top >= rootRect.top &&
itemRect.bottom <= rootRect.bottom &&
itemRect.left >= rootRect.left &&
itemRect.right <= rootRect.right;
if (!itemIsVisible) {
virtuosoContext.virtuosoRef.current.scrollToIndex({
index,
behavior: 'smooth',
align: getScrollToIndexAlign(
index,
virtuosoContext.virtuosoRangeRef.current
),
});
}
}, [isSelected, index, selectionCount, virtuosoContext]);
return imageContainerRef;
};

View File

@ -0,0 +1,17 @@
import { ListRange } from 'react-virtuoso';
/**
* Gets the alignment for react-virtuoso's scrollToIndex function.
* @param index The index of the item.
* @param range The range of items currently visible.
* @returns 'end' if the index is past the midpoint of the visible range, otherwise 'start'.
*/
export const getScrollToIndexAlign = (
index: number,
range: ListRange
): 'start' | 'end' => {
if (index > (range.endIndex - range.startIndex) / 2 + range.startIndex) {
return 'end';
}
return 'start';
};
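The alignment heuristic is a simple midpoint test: indices past the midpoint of the visible range align to 'end', earlier indices to 'start'. A Python restatement, purely for illustration:
def scroll_align(index: int, start_index: int, end_index: int) -> str:
    # Mirror of getScrollToIndexAlign: past the midpoint -> 'end', else 'start'.
    midpoint = (end_index - start_index) / 2 + start_index
    return "end" if index > midpoint else "start"

assert scroll_align(9, 0, 10) == "end"
assert scroll_align(3, 0, 10) == "start"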

View File

@ -0,0 +1,68 @@
import { Icon, Tooltip } from '@chakra-ui/react';
import { memo } from 'react';
import { useTranslation } from 'react-i18next';
import { FaFlask } from 'react-icons/fa';
import { useNodeClassification } from 'features/nodes/hooks/useNodeClassification';
import { Classification } from 'features/nodes/types/common';
import { FaHammer } from 'react-icons/fa6';
interface Props {
nodeId: string;
}
const InvocationNodeClassificationIcon = ({ nodeId }: Props) => {
const classification = useNodeClassification(nodeId);
if (!classification || classification === 'stable') {
return null;
}
return (
<Tooltip
label={<ClassificationTooltipContent classification={classification} />}
placement="top"
shouldWrapChildren
>
<Icon
as={getIcon(classification)}
sx={{
display: 'block',
boxSize: 4,
color: 'base.400',
}}
/>
</Tooltip>
);
};
export default memo(InvocationNodeClassificationIcon);
const ClassificationTooltipContent = memo(
({ classification }: { classification: Classification }) => {
const { t } = useTranslation();
if (classification === 'beta') {
return t('nodes.betaDesc');
}
if (classification === 'prototype') {
return t('nodes.prototypeDesc');
}
return null;
}
);
ClassificationTooltipContent.displayName = 'ClassificationTooltipContent';
const getIcon = (classification: Classification) => {
if (classification === 'beta') {
return FaHammer;
}
if (classification === 'prototype') {
return FaFlask;
}
return undefined;
};

View File

@ -5,6 +5,7 @@ import NodeTitle from 'features/nodes/components/flow/nodes/common/NodeTitle';
import InvocationNodeCollapsedHandles from './InvocationNodeCollapsedHandles';
import InvocationNodeInfoIcon from './InvocationNodeInfoIcon';
import InvocationNodeStatusIndicator from './InvocationNodeStatusIndicator';
import InvocationNodeClassificationIcon from 'features/nodes/components/flow/nodes/Invocation/InvocationNodeClassificationIcon';
type Props = {
nodeId: string;
@ -31,6 +32,7 @@ const InvocationNodeHeader = ({ nodeId, isOpen }: Props) => {
}}
>
<NodeCollapseButton nodeId={nodeId} isOpen={isOpen} />
<InvocationNodeClassificationIcon nodeId={nodeId} />
<NodeTitle nodeId={nodeId} />
<Flex alignItems="center">
<InvocationNodeStatusIndicator nodeId={nodeId} />

View File

@ -0,0 +1,23 @@
import { createMemoizedSelector } from 'app/store/createMemoizedSelector';
import { stateSelector } from 'app/store/store';
import { useAppSelector } from 'app/store/storeHooks';
import { isInvocationNode } from 'features/nodes/types/invocation';
import { useMemo } from 'react';
export const useNodeClassification = (nodeId: string) => {
const selector = useMemo(
() =>
createMemoizedSelector(stateSelector, ({ nodes }) => {
const node = nodes.nodes.find((node) => node.id === nodeId);
if (!isInvocationNode(node)) {
return false;
}
const nodeTemplate = nodes.nodeTemplates[node?.data.type ?? ''];
return nodeTemplate?.classification;
}),
[nodeId]
);
const classification = useAppSelector(selector);
return classification;
};

View File

@ -19,6 +19,9 @@ export const zColorField = z.object({
});
export type ColorField = z.infer<typeof zColorField>;
export const zClassification = z.enum(['stable', 'beta', 'prototype']);
export type Classification = z.infer<typeof zClassification>;
export const zSchedulerField = z.enum([
'euler',
'deis',

View File

@ -1,6 +1,6 @@
import { Edge, Node } from 'reactflow';
import { z } from 'zod';
import { zProgressImage } from './common';
import { zClassification, zProgressImage } from './common';
import {
zFieldInputInstance,
zFieldInputTemplate,
@ -21,6 +21,7 @@ export const zInvocationTemplate = z.object({
version: zSemVer,
useCache: z.boolean(),
nodePack: z.string().min(1).nullish(),
classification: zClassification,
});
export type InvocationTemplate = z.infer<typeof zInvocationTemplate>;
// #endregion

View File

@ -83,6 +83,7 @@ export const parseSchema = (
const description = schema.description ?? '';
const version = schema.version;
const nodePack = schema.node_pack;
const classification = schema.classification;
const inputs = reduce(
schema.properties,
@ -245,6 +246,7 @@ export const parseSchema = (
outputs,
useCache,
nodePack,
classification,
};
Object.assign(invocationsAccumulator, { [type]: invocation });

View File

@ -64,7 +64,10 @@ const migrateV1toV2 = (workflowToMigrate: WorkflowV1): WorkflowV2 => {
const nodePack = invocationTemplate
? invocationTemplate.nodePack
: t('common.unknown');
(node.data as unknown as InvocationNodeData).nodePack = nodePack;
// Fallback to 1.0.0 if not specified - this matches the behavior of the backend
node.data.version ||= '1.0.0';
}
});
// Bump version

View File

@ -11,44 +11,48 @@ import {
Text,
useDisclosure,
} from '@chakra-ui/react';
import { useAppDispatch } from 'app/store/storeHooks';
import { useAppDispatch, useAppSelector } from 'app/store/storeHooks';
import { nodeEditorReset } from 'features/nodes/store/nodesSlice';
import { addToast } from 'features/system/store/systemSlice';
import { makeToast } from 'features/system/util/makeToast';
import { memo, useCallback, useRef } from 'react';
import { useTranslation } from 'react-i18next';
import { FaTrash } from 'react-icons/fa';
import { FaCircleNodes } from 'react-icons/fa6';
const ResetWorkflowEditorMenuItem = () => {
const NewWorkflowMenuItem = () => {
const { t } = useTranslation();
const dispatch = useAppDispatch();
const { isOpen, onOpen, onClose } = useDisclosure();
const cancelRef = useRef<HTMLButtonElement | null>(null);
const isTouched = useAppSelector((state) => state.workflow.isTouched);
const handleConfirmClear = useCallback(() => {
const handleNewWorkflow = useCallback(() => {
dispatch(nodeEditorReset());
dispatch(
addToast(
makeToast({
title: t('workflows.workflowEditorReset'),
title: t('workflows.newWorkflowCreated'),
status: 'success',
})
)
);
onClose();
}, [dispatch, t, onClose]);
}, [dispatch, onClose, t]);
const onClick = useCallback(() => {
if (!isTouched) {
handleNewWorkflow();
return;
}
onOpen();
}, [handleNewWorkflow, isTouched, onOpen]);
return (
<>
<MenuItem
as="button"
icon={<FaTrash />}
sx={{ color: 'error.600', _dark: { color: 'error.300' } }}
onClick={onOpen}
>
{t('nodes.resetWorkflow')}
<MenuItem as="button" icon={<FaCircleNodes />} onClick={onClick}>
{t('nodes.newWorkflow')}
</MenuItem>
<AlertDialog
@ -61,13 +65,13 @@ const ResetWorkflowEditorMenuItem = () => {
<AlertDialogContent>
<AlertDialogHeader fontSize="lg" fontWeight="bold">
{t('nodes.resetWorkflow')}
{t('nodes.newWorkflow')}
</AlertDialogHeader>
<AlertDialogBody py={4}>
<Flex flexDir="column" gap={2}>
<Text>{t('nodes.resetWorkflowDesc')}</Text>
<Text variant="subtext">{t('nodes.resetWorkflowDesc2')}</Text>
<Text>{t('nodes.newWorkflowDesc')}</Text>
<Text variant="subtext">{t('nodes.newWorkflowDesc2')}</Text>
</Flex>
</AlertDialogBody>
@ -75,7 +79,7 @@ const ResetWorkflowEditorMenuItem = () => {
<Button ref={cancelRef} onClick={onClose}>
{t('common.cancel')}
</Button>
<Button colorScheme="error" ml={3} onClick={handleConfirmClear}>
<Button colorScheme="error" ml={3} onClick={handleNewWorkflow}>
{t('common.accept')}
</Button>
</AlertDialogFooter>
@ -85,4 +89,4 @@ const ResetWorkflowEditorMenuItem = () => {
);
};
export default memo(ResetWorkflowEditorMenuItem);
export default memo(NewWorkflowMenuItem);

View File

@ -9,7 +9,7 @@ import IAIIconButton from 'common/components/IAIIconButton';
import { useGlobalMenuCloseTrigger } from 'common/hooks/useGlobalMenuCloseTrigger';
import { useFeatureStatus } from 'features/system/hooks/useFeatureStatus';
import DownloadWorkflowMenuItem from 'features/workflowLibrary/components/WorkflowLibraryMenu/DownloadWorkflowMenuItem';
import ResetWorkflowEditorMenuItem from 'features/workflowLibrary/components/WorkflowLibraryMenu/ResetWorkflowEditorMenuItem';
import NewWorkflowMenuItem from 'features/workflowLibrary/components/WorkflowLibraryMenu/NewWorkflowMenuItem';
import SaveWorkflowAsMenuItem from 'features/workflowLibrary/components/WorkflowLibraryMenu/SaveWorkflowAsMenuItem';
import SaveWorkflowMenuItem from 'features/workflowLibrary/components/WorkflowLibraryMenu/SaveWorkflowMenuItem';
import SettingsMenuItem from 'features/workflowLibrary/components/WorkflowLibraryMenu/SettingsMenuItem';
@ -39,7 +39,7 @@ const WorkflowLibraryMenu = () => {
{isWorkflowLibraryEnabled && <SaveWorkflowAsMenuItem />}
<DownloadWorkflowMenuItem />
<UploadWorkflowMenuItem />
<ResetWorkflowEditorMenuItem />
<NewWorkflowMenuItem />
<MenuDivider />
<SettingsMenuItem />
</MenuList>

View File

@ -0,0 +1,22 @@
import { skipToken } from '@reduxjs/toolkit/query';
import { useAppSelector } from 'app/store/storeHooks';
import { useGetImageWorkflowQuery } from 'services/api/endpoints/images';
import { ImageDTO } from 'services/api/types';
import { useDebounce } from 'use-debounce';
export const useDebouncedImageWorkflow = (imageDTO?: ImageDTO | null) => {
const workflowFetchDebounce = useAppSelector(
(state) => state.config.workflowFetchDebounce ?? 300
);
const [debouncedImageName] = useDebounce(
imageDTO?.has_workflow ? imageDTO.image_name : null,
workflowFetchDebounce
);
const { data: workflow, isLoading } = useGetImageWorkflowQuery(
debouncedImageName ?? skipToken
);
return { workflow, isLoading };
};

View File

@ -1,17 +1,14 @@
import { skipToken } from '@reduxjs/toolkit/query';
import { useDebounce } from 'use-debounce';
import { useGetImageMetadataQuery } from 'services/api/endpoints/images';
import { useAppSelector } from 'app/store/storeHooks';
import { useGetImageMetadataQuery } from 'services/api/endpoints/images';
import { useDebounce } from 'use-debounce';
export const useDebouncedMetadata = (imageName?: string | null) => {
const metadataFetchDebounce = useAppSelector(
(state) => state.config.metadataFetchDebounce
(state) => state.config.metadataFetchDebounce ?? 300
);
const [debouncedImageName] = useDebounce(
imageName,
metadataFetchDebounce ?? 0
);
const [debouncedImageName] = useDebounce(imageName, metadataFetchDebounce);
const { data: metadata, isLoading } = useGetImageMetadataQuery(
debouncedImageName ?? skipToken

File diff suppressed because one or more lines are too long

View File

@ -101,6 +101,8 @@ plugins:
extra_javascript:
- https://unpkg.com/tablesort@5.3.0/dist/tablesort.min.js
- javascripts/tablesort.js
- https://widget.kapa.ai/kapa-widget.bundle.js
- javascript/init_kapa_widget.js
extra:
analytics:
@ -164,6 +166,7 @@ nav:
- Overview: 'contributing/contribution_guides/development.md'
- New Contributors: 'contributing/contribution_guides/newContributorChecklist.md'
- InvokeAI Architecture: 'contributing/ARCHITECTURE.md'
- Model Manager v2: 'contributing/MODEL_MANAGER.md'
- Frontend Documentation: 'contributing/contribution_guides/contributingToFrontend.md'
- Local Development: 'contributing/LOCAL_DEVELOPMENT.md'
- Adding Tests: 'contributing/TESTS.md'

View File

@ -32,7 +32,7 @@ classifiers = [
'Topic :: Scientific/Engineering :: Image Processing',
]
dependencies = [
"accelerate~=0.24.0",
"accelerate~=0.25.0",
"albumentations",
"basicsr",
"click",
@ -41,15 +41,15 @@ dependencies = [
"controlnet-aux>=0.0.6",
"timm==0.6.13", # needed to override timm latest in controlnet_aux, see https://github.com/isl-org/ZoeDepth/issues/26
"datasets",
"diffusers[torch]~=0.23.0",
"diffusers[torch]~=0.24.0",
"dnspython~=2.4.0",
"dynamicprompts",
"easing-functions",
"einops",
"facexlib",
"fastapi~=0.104.1",
"fastapi~=0.105.0",
"fastapi-events~=0.9.1",
"huggingface-hub~=0.16.4",
"huggingface-hub~=0.19.4",
"imohash",
"invisible-watermark~=0.2.0", # needed to install SDXL base and refiner using their repo_ids
"matplotlib", # needed for plotting of Penner easing functions
@ -80,11 +80,11 @@ dependencies = [
"semver~=3.0.1",
"send2trash",
"test-tube~=0.7.5",
"torch==2.1.0",
"torchvision==0.16.0",
"torch==2.1.1",
"torchvision==0.16.1",
"torchmetrics~=0.11.0",
"torchsde~=0.2.5",
"transformers~=4.35.0",
"transformers~=4.36.0",
"uvicorn[standard]~=0.21.1",
"windows-curses; sys_platform=='win32'",
]
@ -107,7 +107,7 @@ dependencies = [
"pytest-datadir",
]
"xformers" = [
"xformers==0.0.22post7; sys_platform!='darwin'",
"xformers==0.0.23; sys_platform!='darwin'",
"triton; sys_platform=='linux'",
]
"onnx" = ["onnxruntime"]
@ -221,6 +221,8 @@ exclude = [
# global mypy config
[tool.mypy]
ignore_missing_imports = true # ignores missing types in third-party libraries
strict = true
exclude = ["tests/*"]
# overrides for specific modules
[[tool.mypy.overrides]]

View File

@ -1,9 +1,11 @@
#!/usr/bin/env python
"""Little command-line utility for probing a model on disk."""
import argparse
from pathlib import Path
from invokeai.backend.model_management.model_probe import ModelProbe
from invokeai.backend.model_manager import InvalidModelConfigException, ModelProbe
parser = argparse.ArgumentParser(description="Probe model type")
parser.add_argument(
@ -14,5 +16,8 @@ parser.add_argument(
args = parser.parse_args()
for path in args.model_path:
info = ModelProbe().probe(path)
print(f"{path}: {info}")
try:
info = ModelProbe.probe(path)
print(f"{path}:{info.model_dump_json(indent=4)}")
except InvalidModelConfigException as exc:
print(exc)

View File

View File

@ -28,8 +28,8 @@ from invokeai.app.services.shared.graph import (
IterateInvocation,
LibraryGraph,
)
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.backend.util.logging import InvokeAILogger
from tests.fixtures.sqlite_database import create_mock_sqlite_database
from .test_invoker import create_edge
@ -49,7 +49,8 @@ def simple_graph():
@pytest.fixture
def mock_services() -> InvocationServices:
configuration = InvokeAIAppConfig(use_memory_db=True, node_cache_size=0)
db = SqliteDatabase(configuration, InvokeAILogger.get_logger())
logger = InvokeAILogger.get_logger()
db = create_mock_sqlite_database(configuration, logger)
# NOTE: none of these are actually called by the test invocations
graph_execution_manager = SqliteItemStorage[GraphExecutionState](db=db, table_name="graph_executions")
return InvocationServices(
@ -69,6 +70,7 @@ def mock_services() -> InvocationServices:
logger=logging, # type: ignore
model_manager=None, # type: ignore
model_records=None, # type: ignore
model_install=None, # type: ignore
names=None, # type: ignore
performance_statistics=InvocationStatsService(),
processor=DefaultInvocationProcessor(),

View File

@ -4,6 +4,7 @@ import pytest
from invokeai.app.services.config.config_default import InvokeAIAppConfig
from invokeai.backend.util.logging import InvokeAILogger
from tests.fixtures.sqlite_database import create_mock_sqlite_database
# This import must happen before other invoke imports or tests in other files(!!) break
from .test_nodes import ( # isort: split
@ -24,7 +25,6 @@ from invokeai.app.services.invoker import Invoker
from invokeai.app.services.item_storage.item_storage_sqlite import SqliteItemStorage
from invokeai.app.services.session_queue.session_queue_common import DEFAULT_QUEUE_ID
from invokeai.app.services.shared.graph import Graph, GraphExecutionState, GraphInvocation, LibraryGraph
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
@pytest.fixture
@ -52,8 +52,9 @@ def graph_with_subgraph():
# the test invocations.
@pytest.fixture
def mock_services() -> InvocationServices:
db = SqliteDatabase(InvokeAIAppConfig(use_memory_db=True), InvokeAILogger.get_logger())
configuration = InvokeAIAppConfig(use_memory_db=True, node_cache_size=0)
logger = InvokeAILogger.get_logger()
db = create_mock_sqlite_database(configuration, logger)
# NOTE: none of these are actually called by the test invocations
graph_execution_manager = SqliteItemStorage[GraphExecutionState](db=db, table_name="graph_executions")
@ -74,6 +75,7 @@ def mock_services() -> InvocationServices:
logger=logging, # type: ignore
model_manager=None, # type: ignore
model_records=None, # type: ignore
model_install=None, # type: ignore
names=None, # type: ignore
performance_statistics=InvocationStatsService(),
processor=DefaultInvocationProcessor(),

View File

@ -12,7 +12,7 @@ from invokeai.app.services.session_queue.session_queue_common import (
prepare_values_to_insert,
)
from invokeai.app.services.shared.graph import Graph, GraphExecutionState, GraphInvocation
from tests.nodes.test_nodes import PromptTestInvocation
from tests.aa_nodes.test_nodes import PromptTestInvocation
@pytest.fixture

View File

@ -15,8 +15,11 @@ class TestModel(BaseModel):
@pytest.fixture
def db() -> SqliteItemStorage[TestModel]:
sqlite_db = SqliteDatabase(InvokeAIAppConfig(use_memory_db=True), InvokeAILogger.get_logger())
sqlite_item_storage = SqliteItemStorage[TestModel](db=sqlite_db, table_name="test", id_field="id")
config = InvokeAIAppConfig(use_memory_db=True)
logger = InvokeAILogger.get_logger()
db_path = None if config.use_memory_db else config.db_path
db = SqliteDatabase(db_path=db_path, logger=logger, verbose=config.log_sql)
sqlite_item_storage = SqliteItemStorage[TestModel](db=db, table_name="test", id_field="id")
return sqlite_item_storage

View File

@ -0,0 +1,198 @@
"""
Test the model installer
"""
from pathlib import Path
from typing import Any, Dict, List
import pytest
from pydantic import BaseModel, ValidationError
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.services.model_install import (
InstallStatus,
LocalModelSource,
ModelInstallJob,
ModelInstallService,
ModelInstallServiceBase,
)
from invokeai.app.services.model_records import ModelRecordServiceBase, ModelRecordServiceSQL, UnknownModelException
from invokeai.backend.model_manager.config import BaseModelType, ModelType
from invokeai.backend.util.logging import InvokeAILogger
from tests.fixtures.sqlite_database import create_mock_sqlite_database
@pytest.fixture
def test_file(datadir: Path) -> Path:
return datadir / "test_embedding.safetensors"
@pytest.fixture
def app_config(datadir: Path) -> InvokeAIAppConfig:
return InvokeAIAppConfig(
root=datadir / "root",
models_dir=datadir / "root/models",
)
@pytest.fixture
def store(
app_config: InvokeAIAppConfig,
) -> ModelRecordServiceBase:
logger = InvokeAILogger.get_logger(config=app_config)
db = create_mock_sqlite_database(app_config, logger)
store: ModelRecordServiceBase = ModelRecordServiceSQL(db)
return store
@pytest.fixture
def installer(app_config: InvokeAIAppConfig, store: ModelRecordServiceBase) -> ModelInstallServiceBase:
return ModelInstallService(
app_config=app_config,
record_store=store,
event_bus=DummyEventService(),
)
class DummyEvent(BaseModel):
"""Dummy Event to use with Dummy Event service."""
event_name: str
payload: Dict[str, Any]
class DummyEventService(EventServiceBase):
"""Dummy event service for testing."""
events: List[DummyEvent]
def __init__(self) -> None:
super().__init__()
self.events = []
def dispatch(self, event_name: str, payload: Any) -> None:
"""Dispatch an event by appending it to self.events."""
self.events.append(DummyEvent(event_name=payload["event"], payload=payload["data"]))
def test_registration(installer: ModelInstallServiceBase, test_file: Path) -> None:
store = installer.record_store
matches = store.search_by_attr(model_name="test_embedding")
assert len(matches) == 0
key = installer.register_path(test_file)
assert key is not None
assert len(key) == 32
def test_registration_meta(installer: ModelInstallServiceBase, test_file: Path) -> None:
store = installer.record_store
key = installer.register_path(test_file)
model_record = store.get_model(key)
assert model_record is not None
assert model_record.name == "test_embedding"
assert model_record.type == ModelType.TextualInversion
assert Path(model_record.path) == test_file
assert model_record.base == BaseModelType("sd-1")
assert model_record.description is not None
assert model_record.source is not None
assert Path(model_record.source) == test_file
def test_registration_meta_override_fail(installer: ModelInstallServiceBase, test_file: Path) -> None:
key = None
with pytest.raises(ValidationError):
key = installer.register_path(test_file, {"name": "banana_sushi", "type": ModelType("lora")})
assert key is None
def test_registration_meta_override_succeed(installer: ModelInstallServiceBase, test_file: Path) -> None:
store = installer.record_store
key = installer.register_path(
test_file, {"name": "banana_sushi", "source": "fake/repo_id", "current_hash": "New Hash"}
)
model_record = store.get_model(key)
assert model_record.name == "banana_sushi"
assert model_record.source == "fake/repo_id"
assert model_record.current_hash == "New Hash"
def test_install(installer: ModelInstallServiceBase, test_file: Path, app_config: InvokeAIAppConfig) -> None:
store = installer.record_store
key = installer.install_path(test_file)
model_record = store.get_model(key)
assert model_record.path == "sd-1/embedding/test_embedding.safetensors"
assert model_record.source == test_file.as_posix()
def test_background_install(installer: ModelInstallServiceBase, test_file: Path, app_config: InvokeAIAppConfig) -> None:
"""Note: may want to break this down into several smaller unit tests."""
path = test_file
description = "Test of metadata assignment"
source = LocalModelSource(path=path, inplace=False)
job = installer.import_model(source, config={"description": description})
assert job is not None
assert isinstance(job, ModelInstallJob)
# See if job is registered properly
assert job in installer.get_job(source)
# test that the job object tracked installation correctly
jobs = installer.wait_for_installs()
assert len(jobs) > 0
my_job = [x for x in jobs if x.source == source]
assert len(my_job) == 1
assert my_job[0].status == InstallStatus.COMPLETED
# test that the expected events were issued
bus = installer.event_bus
assert bus is not None # sigh - ruff is a stickler for type checking
assert isinstance(bus, DummyEventService)
assert len(bus.events) == 2
event_names = [x.event_name for x in bus.events]
assert "model_install_started" in event_names
assert "model_install_completed" in event_names
assert Path(bus.events[0].payload["source"]) == source
assert Path(bus.events[1].payload["source"]) == source
key = bus.events[1].payload["key"]
assert key is not None
# see if the thing actually got installed at the expected location
model_record = installer.record_store.get_model(key)
assert model_record is not None
assert model_record.path == "sd-1/embedding/test_embedding.safetensors"
assert Path(app_config.models_dir / model_record.path).exists()
# see if metadata was properly passed through
assert model_record.description == description
# see if prune works properly
installer.prune_jobs()
assert not installer.get_job(source)
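Taken together, these assertions pin down the background-install lifecycle. Condensed into a usage sketch (the API names are exactly those exercised above; the file path is a made-up placeholder):

source = LocalModelSource(path=Path("path/to/some_model.safetensors"), inplace=False)  # placeholder path
job = installer.import_model(source, config={"description": "my model"})
jobs = installer.wait_for_installs()  # blocks until every queued install finishes
mine = [j for j in jobs if j.source == source]
assert mine[0].status == InstallStatus.COMPLETED
installer.prune_jobs()  # completed/errored jobs are dropped from the queue
assert not installer.get_job(source)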
def test_delete_install(installer: ModelInstallServiceBase, test_file: Path, app_config: InvokeAIAppConfig):
store = installer.record_store
key = installer.install_path(test_file)
model_record = store.get_model(key)
assert Path(app_config.models_dir / model_record.path).exists()
assert test_file.exists() # original should still be there after installation
installer.delete(key)
assert not Path(
app_config.models_dir / model_record.path
).exists() # after deletion, installed copy should not exist
assert test_file.exists() # but original should still be there
with pytest.raises(UnknownModelException):
store.get_model(key)
def test_delete_register(installer: ModelInstallServiceBase, test_file: Path, app_config: InvokeAIAppConfig):
store = installer.record_store
key = installer.register_path(test_file)
model_record = store.get_model(key)
assert Path(app_config.models_dir / model_record.path).exists()
assert test_file.exists() # original should still be there after installation
installer.delete(key)
assert Path(app_config.models_dir / model_record.path).exists()
with pytest.raises(UnknownModelException):
store.get_model(key)
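Note the asymmetry the two deletion tests establish: install_path() copies the model into models_dir, so delete() removes the managed copy while leaving the original file untouched, whereas register_path() records the model in place, so delete() drops only the database record and the file on disk survives. In both cases a subsequent get_model(key) raises UnknownModelException.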


@@ -0,0 +1 @@
This directory is used by pytest-datadir.


@@ -0,0 +1,79 @@
model:
base_learning_rate: 1.0e-04
target: invokeai.backend.models.diffusion.ddpm.LatentDiffusion
params:
linear_start: 0.00085
linear_end: 0.0120
num_timesteps_cond: 1
log_every_t: 200
timesteps: 1000
first_stage_key: "jpg"
cond_stage_key: "txt"
image_size: 64
channels: 4
cond_stage_trainable: false # Note: different from the one we trained before
conditioning_key: crossattn
monitor: val/loss_simple_ema
scale_factor: 0.18215
use_ema: False
scheduler_config: # 10000 warmup steps
target: invokeai.backend.stable_diffusion.lr_scheduler.LambdaLinearScheduler
params:
warm_up_steps: [ 10000 ]
cycle_lengths: [ 10000000000000 ] # incredibly large number to prevent corner cases
f_start: [ 1.e-6 ]
f_max: [ 1. ]
f_min: [ 1. ]
personalization_config:
target: invokeai.backend.stable_diffusion.embedding_manager.EmbeddingManager
params:
placeholder_strings: ["*"]
initializer_words: ['sculpture']
per_image_tokens: false
num_vectors_per_token: 1
progressive_words: False
unet_config:
target: invokeai.backend.stable_diffusion.diffusionmodules.openaimodel.UNetModel
params:
image_size: 32 # unused
in_channels: 4
out_channels: 4
model_channels: 320
attention_resolutions: [ 4, 2, 1 ]
num_res_blocks: 2
channel_mult: [ 1, 2, 4, 4 ]
num_heads: 8
use_spatial_transformer: True
transformer_depth: 1
context_dim: 768
use_checkpoint: True
legacy: False
first_stage_config:
target: invokeai.backend.stable_diffusion.autoencoder.AutoencoderKL
params:
embed_dim: 4
monitor: val/rec_loss
ddconfig:
double_z: true
z_channels: 4
resolution: 256
in_channels: 3
out_ch: 3
ch: 128
ch_mult:
- 1
- 2
- 4
- 4
num_res_blocks: 2
attn_resolutions: []
dropout: 0.0
lossconfig:
target: torch.nn.Identity
cond_stage_config:
target: invokeai.backend.stable_diffusion.encoders.modules.WeightedFrozenCLIPEmbedder
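This fixture is a standard Stable Diffusion 1.x LatentDiffusion config vendored as test data. If you want to inspect it interactively, it loads with OmegaConf; a quick sketch (assumes the omegaconf package is installed, and the filename here is hypothetical):

from omegaconf import OmegaConf

cfg = OmegaConf.load("v1-inference.yaml")  # hypothetical filename for this fixture
print(cfg.model.params.unet_config.params.context_dim)  # 768, the SD-1 CLIP embedding width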


@@ -0,0 +1 @@
Dummy file to establish git path.


@@ -3,6 +3,7 @@ Test the refactored model config classes.
"""
from hashlib import sha256
from typing import Any
import pytest
@@ -13,7 +14,6 @@ from invokeai.app.services.model_records import (
ModelRecordServiceSQL,
UnknownModelException,
)
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.backend.model_manager.config import (
BaseModelType,
MainCheckpointConfig,
@@ -23,13 +23,16 @@ from invokeai.backend.model_manager.config import (
VaeDiffusersConfig,
)
from invokeai.backend.util.logging import InvokeAILogger
from tests.fixtures.sqlite_database import create_mock_sqlite_database
@pytest.fixture
def store(datadir) -> ModelRecordServiceBase:
def store(
datadir: Any,
) -> ModelRecordServiceBase:
config = InvokeAIAppConfig(root=datadir)
logger = InvokeAILogger.get_logger(config=config)
db = SqliteDatabase(config, logger)
db = create_mock_sqlite_database(config, logger)
return ModelRecordServiceSQL(db)
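The change here is small but load-bearing: instead of constructing SqliteDatabase directly, the store fixture now goes through the shared create_mock_sqlite_database() helper (added in tests/fixtures/sqlite_database.py below), so the record-store tests get a database initialized through the same init_db() path the application uses, with image files mocked out.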


@@ -1,7 +1,12 @@
import numpy as np
import pytest
from invokeai.backend.tiles.tiles import calc_tiles_with_overlap, merge_tiles_with_linear_blending
from invokeai.backend.tiles.tiles import (
calc_tiles_even_split,
calc_tiles_min_overlap,
calc_tiles_with_overlap,
merge_tiles_with_linear_blending,
)
from invokeai.backend.tiles.utils import TBLR, Tile
####################################
@@ -14,7 +19,10 @@ def test_calc_tiles_with_overlap_single_tile():
tiles = calc_tiles_with_overlap(image_height=512, image_width=1024, tile_height=512, tile_width=1024, overlap=64)
expected_tiles = [
Tile(coords=TBLR(top=0, bottom=512, left=0, right=1024), overlap=TBLR(top=0, bottom=0, left=0, right=0))
Tile(
coords=TBLR(top=0, bottom=512, left=0, right=1024),
overlap=TBLR(top=0, bottom=0, left=0, right=0),
)
]
assert tiles == expected_tiles
@@ -27,13 +35,31 @@ def test_calc_tiles_with_overlap_evenly_divisible():
expected_tiles = [
# Row 0
Tile(coords=TBLR(top=0, bottom=320, left=0, right=576), overlap=TBLR(top=0, bottom=64, left=0, right=64)),
Tile(coords=TBLR(top=0, bottom=320, left=512, right=1088), overlap=TBLR(top=0, bottom=64, left=64, right=64)),
Tile(coords=TBLR(top=0, bottom=320, left=1024, right=1600), overlap=TBLR(top=0, bottom=64, left=64, right=0)),
Tile(
coords=TBLR(top=0, bottom=320, left=0, right=576),
overlap=TBLR(top=0, bottom=64, left=0, right=64),
),
Tile(
coords=TBLR(top=0, bottom=320, left=512, right=1088),
overlap=TBLR(top=0, bottom=64, left=64, right=64),
),
Tile(
coords=TBLR(top=0, bottom=320, left=1024, right=1600),
overlap=TBLR(top=0, bottom=64, left=64, right=0),
),
# Row 1
Tile(coords=TBLR(top=256, bottom=576, left=0, right=576), overlap=TBLR(top=64, bottom=0, left=0, right=64)),
Tile(coords=TBLR(top=256, bottom=576, left=512, right=1088), overlap=TBLR(top=64, bottom=0, left=64, right=64)),
Tile(coords=TBLR(top=256, bottom=576, left=1024, right=1600), overlap=TBLR(top=64, bottom=0, left=64, right=0)),
Tile(
coords=TBLR(top=256, bottom=576, left=0, right=576),
overlap=TBLR(top=64, bottom=0, left=0, right=64),
),
Tile(
coords=TBLR(top=256, bottom=576, left=512, right=1088),
overlap=TBLR(top=64, bottom=0, left=64, right=64),
),
Tile(
coords=TBLR(top=256, bottom=576, left=1024, right=1600),
overlap=TBLR(top=64, bottom=0, left=64, right=0),
),
]
assert tiles == expected_tiles
@@ -46,16 +72,30 @@ def test_calc_tiles_with_overlap_not_evenly_divisible():
    expected_tiles = [
        # Row 0
        Tile(coords=TBLR(top=0, bottom=256, left=0, right=512), overlap=TBLR(top=0, bottom=112, left=0, right=64)),
        Tile(coords=TBLR(top=0, bottom=256, left=448, right=960), overlap=TBLR(top=0, bottom=112, left=64, right=272)),
        Tile(coords=TBLR(top=0, bottom=256, left=688, right=1200), overlap=TBLR(top=0, bottom=112, left=272, right=0)),
        Tile(
            coords=TBLR(top=0, bottom=256, left=0, right=512),
            overlap=TBLR(top=0, bottom=112, left=0, right=64),
        ),
        Tile(
            coords=TBLR(top=0, bottom=256, left=448, right=960),
            overlap=TBLR(top=0, bottom=112, left=64, right=272),
        ),
        Tile(
            coords=TBLR(top=0, bottom=256, left=688, right=1200),
            overlap=TBLR(top=0, bottom=112, left=272, right=0),
        ),
        # Row 1
        Tile(coords=TBLR(top=144, bottom=400, left=0, right=512), overlap=TBLR(top=112, bottom=0, left=0, right=64)),
        Tile(
            coords=TBLR(top=144, bottom=400, left=448, right=960), overlap=TBLR(top=112, bottom=0, left=64, right=272)
        ),
        Tile(
            coords=TBLR(top=144, bottom=400, left=688, right=1200), overlap=TBLR(top=112, bottom=0, left=272, right=0)
        ),
        Tile(
            coords=TBLR(top=144, bottom=400, left=0, right=512),
            overlap=TBLR(top=112, bottom=0, left=0, right=64),
        ),
        Tile(
            coords=TBLR(top=144, bottom=400, left=448, right=960),
            overlap=TBLR(top=112, bottom=0, left=64, right=272),
        ),
        Tile(
            coords=TBLR(top=144, bottom=400, left=688, right=1200),
            overlap=TBLR(top=112, bottom=0, left=272, right=0),
        ),
    ]
@@ -75,7 +115,12 @@
],
)
def test_calc_tiles_with_overlap_input_validation(
image_height: int, image_width: int, tile_height: int, tile_width: int, overlap: int, raises: bool
image_height: int,
image_width: int,
tile_height: int,
tile_width: int,
overlap: int,
raises: bool,
):
"""Test that calc_tiles_with_overlap() raises an exception if the inputs are invalid."""
if raises:
@@ -85,6 +130,328 @@
calc_tiles_with_overlap(image_height, image_width, tile_height, tile_width, overlap)
####################################
# Test calc_tiles_min_overlap(...)
####################################
def test_calc_tiles_min_overlap_single_tile():
"""Test calc_tiles_min_overlap() behavior when a single tile covers the image."""
tiles = calc_tiles_min_overlap(
image_height=512,
image_width=1024,
tile_height=512,
tile_width=1024,
min_overlap=64,
)
expected_tiles = [
Tile(
coords=TBLR(top=0, bottom=512, left=0, right=1024),
overlap=TBLR(top=0, bottom=0, left=0, right=0),
)
]
assert tiles == expected_tiles
def test_calc_tiles_min_overlap_evenly_divisible():
"""Test calc_tiles_min_overlap() behavior when the image is evenly covered by multiple tiles."""
    # Parameters are chosen to roughly mimic the output of the original tile-generation test of the same name
tiles = calc_tiles_min_overlap(
image_height=576,
image_width=1600,
tile_height=320,
tile_width=576,
min_overlap=64,
)
expected_tiles = [
# Row 0
Tile(
coords=TBLR(top=0, bottom=320, left=0, right=576),
overlap=TBLR(top=0, bottom=64, left=0, right=64),
),
Tile(
coords=TBLR(top=0, bottom=320, left=512, right=1088),
overlap=TBLR(top=0, bottom=64, left=64, right=64),
),
Tile(
coords=TBLR(top=0, bottom=320, left=1024, right=1600),
overlap=TBLR(top=0, bottom=64, left=64, right=0),
),
# Row 1
Tile(
coords=TBLR(top=256, bottom=576, left=0, right=576),
overlap=TBLR(top=64, bottom=0, left=0, right=64),
),
Tile(
coords=TBLR(top=256, bottom=576, left=512, right=1088),
overlap=TBLR(top=64, bottom=0, left=64, right=64),
),
Tile(
coords=TBLR(top=256, bottom=576, left=1024, right=1600),
overlap=TBLR(top=64, bottom=0, left=64, right=0),
),
]
assert tiles == expected_tiles
def test_calc_tiles_min_overlap_not_evenly_divisible():
"""Test calc_tiles_min_overlap() behavior when the image requires 'uneven' overlaps to achieve proper coverage."""
    # Parameters are chosen to roughly mimic the output of the original tile-generation test of the same name
tiles = calc_tiles_min_overlap(
image_height=400,
image_width=1200,
tile_height=256,
tile_width=512,
min_overlap=64,
)
expected_tiles = [
# Row 0
Tile(
coords=TBLR(top=0, bottom=256, left=0, right=512),
overlap=TBLR(top=0, bottom=112, left=0, right=168),
),
Tile(
coords=TBLR(top=0, bottom=256, left=344, right=856),
overlap=TBLR(top=0, bottom=112, left=168, right=168),
),
Tile(
coords=TBLR(top=0, bottom=256, left=688, right=1200),
overlap=TBLR(top=0, bottom=112, left=168, right=0),
),
# Row 1
Tile(
coords=TBLR(top=144, bottom=400, left=0, right=512),
overlap=TBLR(top=112, bottom=0, left=0, right=168),
),
Tile(
coords=TBLR(top=144, bottom=400, left=344, right=856),
overlap=TBLR(top=112, bottom=0, left=168, right=168),
),
Tile(
coords=TBLR(top=144, bottom=400, left=688, right=1200),
overlap=TBLR(top=112, bottom=0, left=168, right=0),
),
]
assert tiles == expected_tiles
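The expected coordinates in the two tests above follow a simple pattern: pick the smallest tile count whose neighbours overlap by at least min_overlap, then spread the tile origins evenly across the image. A rough one-dimensional re-derivation, consistent with the expected values but not claimed to be the library's actual implementation:

import math

def min_overlap_starts(image_len: int, tile_len: int, min_overlap: int) -> list[int]:
    if tile_len >= image_len:
        return [0]  # a single tile covers (or exceeds) the image
    # smallest tile count for which every adjacent overlap is >= min_overlap
    num = math.ceil((image_len - min_overlap) / (tile_len - min_overlap))
    # spread the origins evenly between 0 and image_len - tile_len
    return [round((image_len - tile_len) * i / (num - 1)) for i in range(num)]

# min_overlap_starts(1200, 512, 64) -> [0, 344, 688], matching Row 0 above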
def test_calc_tiles_min_overlap_tile_bigger_than_image():
"""Test calc_tiles_min_overlap() behavior when the tile is nigger than the image"""
# Parameters mimic roughly the same output as the original tile generations of the same test name
tiles = calc_tiles_min_overlap(
image_height=1024,
image_width=1024,
tile_height=1408,
tile_width=1408,
min_overlap=128,
)
expected_tiles = [
# single tile
Tile(
coords=TBLR(top=0, bottom=1024, left=0, right=1024),
overlap=TBLR(top=0, bottom=0, left=0, right=0),
),
]
assert tiles == expected_tiles
@pytest.mark.parametrize(
[
"image_height",
"image_width",
"tile_height",
"tile_width",
"min_overlap",
"raises",
],
[
(128, 128, 128, 128, 127, False), # OK
(128, 128, 128, 128, 0, False), # OK
(128, 128, 64, 64, 0, False), # OK
        (128, 128, 129, 128, 0, False),  # tile_height exceeds image_height; falls back to a single tile.
        (128, 128, 128, 129, 0, False),  # tile_width exceeds image_width; falls back to a single tile.
(128, 128, 64, 128, 64, True), # overlap equals tile_height.
(128, 128, 128, 64, 64, True), # overlap equals tile_width.
],
)
def test_calc_tiles_min_overlap_input_validation(
image_height: int,
image_width: int,
tile_height: int,
tile_width: int,
min_overlap: int,
raises: bool,
):
"""Test that calc_tiles_min_overlap() raises an exception if the inputs are invalid."""
if raises:
with pytest.raises(AssertionError):
calc_tiles_min_overlap(image_height, image_width, tile_height, tile_width, min_overlap)
else:
calc_tiles_min_overlap(image_height, image_width, tile_height, tile_width, min_overlap)
####################################
# Test calc_tiles_even_split(...)
####################################
def test_calc_tiles_even_split_single_tile():
"""Test calc_tiles_even_split() behavior when a single tile covers the image."""
tiles = calc_tiles_even_split(
image_height=512, image_width=1024, num_tiles_x=1, num_tiles_y=1, overlap_fraction=0.25
)
expected_tiles = [
Tile(
coords=TBLR(top=0, bottom=512, left=0, right=1024),
overlap=TBLR(top=0, bottom=0, left=0, right=0),
)
]
assert tiles == expected_tiles
def test_calc_tiles_even_split_evenly_divisible():
"""Test calc_tiles_even_split() behavior when the image is evenly covered by multiple tiles."""
    # Parameters are chosen to roughly mimic the output of the original tile-generation test of the same name
tiles = calc_tiles_even_split(
image_height=576, image_width=1600, num_tiles_x=3, num_tiles_y=2, overlap_fraction=0.25
)
expected_tiles = [
# Row 0
Tile(
coords=TBLR(top=0, bottom=320, left=0, right=624),
overlap=TBLR(top=0, bottom=72, left=0, right=136),
),
Tile(
coords=TBLR(top=0, bottom=320, left=488, right=1112),
overlap=TBLR(top=0, bottom=72, left=136, right=136),
),
Tile(
coords=TBLR(top=0, bottom=320, left=976, right=1600),
overlap=TBLR(top=0, bottom=72, left=136, right=0),
),
# Row 1
Tile(
coords=TBLR(top=248, bottom=576, left=0, right=624),
overlap=TBLR(top=72, bottom=0, left=0, right=136),
),
Tile(
coords=TBLR(top=248, bottom=576, left=488, right=1112),
overlap=TBLR(top=72, bottom=0, left=136, right=136),
),
Tile(
coords=TBLR(top=248, bottom=576, left=976, right=1600),
overlap=TBLR(top=72, bottom=0, left=136, right=0),
),
]
assert tiles == expected_tiles
def test_calc_tiles_even_split_not_evenly_divisible():
"""Test calc_tiles_even_split() behavior when the image requires 'uneven' overlaps to achieve proper coverage."""
    # Parameters are chosen to roughly mimic the output of the original tile-generation test of the same name
tiles = calc_tiles_even_split(
image_height=400, image_width=1200, num_tiles_x=3, num_tiles_y=2, overlap_fraction=0.25
)
expected_tiles = [
# Row 0
Tile(
coords=TBLR(top=0, bottom=224, left=0, right=464),
overlap=TBLR(top=0, bottom=56, left=0, right=104),
),
Tile(
coords=TBLR(top=0, bottom=224, left=360, right=824),
overlap=TBLR(top=0, bottom=56, left=104, right=104),
),
Tile(
coords=TBLR(top=0, bottom=224, left=720, right=1200),
overlap=TBLR(top=0, bottom=56, left=104, right=0),
),
# Row 1
Tile(
coords=TBLR(top=168, bottom=400, left=0, right=464),
overlap=TBLR(top=56, bottom=0, left=0, right=104),
),
Tile(
coords=TBLR(top=168, bottom=400, left=360, right=824),
overlap=TBLR(top=56, bottom=0, left=104, right=104),
),
Tile(
coords=TBLR(top=168, bottom=400, left=720, right=1200),
overlap=TBLR(top=56, bottom=0, left=104, right=0),
),
]
assert tiles == expected_tiles
def test_calc_tiles_even_split_difficult_size():
"""Test calc_tiles_even_split() behavior when the image is a difficult size to spilt evenly and keep div8."""
# Parameters are a difficult size for other tile gen routines to calculate
tiles = calc_tiles_even_split(
image_height=1000, image_width=1000, num_tiles_x=2, num_tiles_y=2, overlap_fraction=0.25
)
expected_tiles = [
# Row 0
Tile(
coords=TBLR(top=0, bottom=560, left=0, right=560),
overlap=TBLR(top=0, bottom=128, left=0, right=128),
),
Tile(
coords=TBLR(top=0, bottom=560, left=432, right=1000),
overlap=TBLR(top=0, bottom=128, left=128, right=0),
),
# Row 1
Tile(
coords=TBLR(top=432, bottom=1000, left=0, right=560),
overlap=TBLR(top=128, bottom=0, left=0, right=128),
),
Tile(
coords=TBLR(top=432, bottom=1000, left=432, right=1000),
overlap=TBLR(top=128, bottom=0, left=128, right=0),
),
]
assert tiles == expected_tiles
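All of the even-split expectations above reduce to a little arithmetic per axis: round the overlap up to a multiple of 8, round the tile size down to a multiple of 8, and snap the last tile so it ends exactly at the image edge. A sketch that reproduces the numbers in these tests (again, an inference from the expected values, not necessarily the library's exact code):

import math

def even_split_1d(image_len: int, num_tiles: int, overlap_fraction: float) -> tuple[int, list[int]]:
    # overlap: a fraction of the naive tile size, rounded up to a multiple of 8
    overlap = math.ceil(image_len / num_tiles * overlap_fraction / 8) * 8
    # tile size: even split including the shared overlap, rounded down to a multiple of 8
    tile_len = (image_len + (num_tiles - 1) * overlap) // num_tiles // 8 * 8
    starts = [i * (tile_len - overlap) for i in range(num_tiles)]
    return tile_len, starts  # the caller snaps the last tile to end at image_len

# even_split_1d(1000, 2, 0.25) -> (560, [0, 432]), matching the difficult-size test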
@pytest.mark.parametrize(
["image_height", "image_width", "num_tiles_x", "num_tiles_y", "overlap_fraction", "raises"],
[
(128, 128, 1, 1, 0.25, False), # OK
(128, 128, 1, 1, 0, False), # OK
(128, 128, 2, 1, 0, False), # OK
        (127, 127, 1, 1, 0, True),  # image size must be divisible by 8
],
)
def test_calc_tiles_even_split_input_validation(
image_height: int,
image_width: int,
num_tiles_x: int,
num_tiles_y: int,
overlap_fraction: float,
raises: bool,
):
"""Test that calc_tiles_even_split() raises an exception if the inputs are invalid."""
if raises:
with pytest.raises(ValueError):
calc_tiles_even_split(image_height, image_width, num_tiles_x, num_tiles_y, overlap_fraction)
else:
calc_tiles_even_split(image_height, image_width, num_tiles_x, num_tiles_y, overlap_fraction)
#############################################
# Test merge_tiles_with_linear_blending(...)
#############################################
@@ -95,8 +462,14 @@ def test_merge_tiles_with_linear_blending_horizontal(blend_amount: int):
"""Test merge_tiles_with_linear_blending(...) behavior when merging horizontally."""
# Initialize 2 tiles side-by-side.
tiles = [
Tile(coords=TBLR(top=0, bottom=512, left=0, right=512), overlap=TBLR(top=0, bottom=0, left=0, right=64)),
Tile(coords=TBLR(top=0, bottom=512, left=448, right=960), overlap=TBLR(top=0, bottom=0, left=64, right=0)),
Tile(
coords=TBLR(top=0, bottom=512, left=0, right=512),
overlap=TBLR(top=0, bottom=0, left=0, right=64),
),
Tile(
coords=TBLR(top=0, bottom=512, left=448, right=960),
overlap=TBLR(top=0, bottom=0, left=64, right=0),
),
]
dst_image = np.zeros((512, 960, 3), dtype=np.uint8)
@@ -116,7 +489,10 @@ def test_merge_tiles_with_linear_blending_horizontal(blend_amount: int):
expected_output[:, 480 + (blend_amount // 2) :, :] = 128
merge_tiles_with_linear_blending(
dst_image=dst_image, tiles=tiles, tile_images=tile_images, blend_amount=blend_amount
dst_image=dst_image,
tiles=tiles,
tile_images=tile_images,
blend_amount=blend_amount,
)
np.testing.assert_array_equal(dst_image, expected_output, strict=True)
@@ -127,8 +503,14 @@ def test_merge_tiles_with_linear_blending_vertical(blend_amount: int):
"""Test merge_tiles_with_linear_blending(...) behavior when merging vertically."""
# Initialize 2 tiles stacked vertically.
tiles = [
Tile(coords=TBLR(top=0, bottom=512, left=0, right=512), overlap=TBLR(top=0, bottom=64, left=0, right=0)),
Tile(coords=TBLR(top=448, bottom=960, left=0, right=512), overlap=TBLR(top=64, bottom=0, left=0, right=0)),
Tile(
coords=TBLR(top=0, bottom=512, left=0, right=512),
overlap=TBLR(top=0, bottom=64, left=0, right=0),
),
Tile(
coords=TBLR(top=448, bottom=960, left=0, right=512),
overlap=TBLR(top=64, bottom=0, left=0, right=0),
),
]
dst_image = np.zeros((960, 512, 3), dtype=np.uint8)
@@ -148,7 +530,10 @@ def test_merge_tiles_with_linear_blending_vertical(blend_amount: int):
expected_output[480 + (blend_amount // 2) :, :, :] = 128
merge_tiles_with_linear_blending(
dst_image=dst_image, tiles=tiles, tile_images=tile_images, blend_amount=blend_amount
dst_image=dst_image,
tiles=tiles,
tile_images=tile_images,
blend_amount=blend_amount,
)
np.testing.assert_array_equal(dst_image, expected_output, strict=True)
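Both merge tests construct expected_output the same way: outside a band of blend_amount pixels centred on the seam the destination takes each tile's value unchanged, and inside the band the two tiles are blended linearly. A one-dimensional sketch of such a ramp (an illustration of the idea, not the function under test):

import numpy as np

def linear_seam_blend(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Blend two equal-length 1-D strips, weighting `a` from 1 down to 0."""
    w = np.linspace(1.0, 0.0, a.shape[0])
    return (a * w + b * (1.0 - w)).astype(a.dtype)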
@@ -160,8 +545,14 @@ def test_merge_tiles_with_linear_blending_blend_amount_exceeds_vertical_overlap(
"""
# Initialize 2 tiles stacked vertically.
tiles = [
Tile(coords=TBLR(top=0, bottom=512, left=0, right=512), overlap=TBLR(top=0, bottom=64, left=0, right=0)),
Tile(coords=TBLR(top=448, bottom=960, left=0, right=512), overlap=TBLR(top=64, bottom=0, left=0, right=0)),
Tile(
coords=TBLR(top=0, bottom=512, left=0, right=512),
overlap=TBLR(top=0, bottom=64, left=0, right=0),
),
Tile(
coords=TBLR(top=448, bottom=960, left=0, right=512),
overlap=TBLR(top=64, bottom=0, left=0, right=0),
),
]
dst_image = np.zeros((960, 512, 3), dtype=np.uint8)
@@ -180,8 +571,14 @@ def test_merge_tiles_with_linear_blending_blend_amount_exceeds_horizontal_overla
"""
# Initialize 2 tiles side-by-side.
tiles = [
Tile(coords=TBLR(top=0, bottom=512, left=0, right=512), overlap=TBLR(top=0, bottom=0, left=0, right=64)),
Tile(coords=TBLR(top=0, bottom=512, left=448, right=960), overlap=TBLR(top=0, bottom=0, left=64, right=0)),
Tile(
coords=TBLR(top=0, bottom=512, left=0, right=512),
overlap=TBLR(top=0, bottom=0, left=0, right=64),
),
Tile(
coords=TBLR(top=0, bottom=512, left=448, right=960),
overlap=TBLR(top=0, bottom=0, left=64, right=0),
),
]
dst_image = np.zeros((512, 960, 3), dtype=np.uint8)
@@ -198,7 +595,12 @@ def test_merge_tiles_with_linear_blending_tiles_overflow_dst_image():
"""Test that merge_tiles_with_linear_blending(...) raises an exception if any of the tiles overflows the
dst_image.
"""
tiles = [Tile(coords=TBLR(top=0, bottom=512, left=0, right=512), overlap=TBLR(top=0, bottom=0, left=0, right=0))]
tiles = [
Tile(
coords=TBLR(top=0, bottom=512, left=0, right=512),
overlap=TBLR(top=0, bottom=0, left=0, right=0),
)
]
dst_image = np.zeros((256, 512, 3), dtype=np.uint8)
@@ -213,7 +615,12 @@ def test_merge_tiles_with_linear_blending_mismatched_list_lengths():
"""Test that merge_tiles_with_linear_blending(...) raises an exception if the lengths of 'tiles' and 'tile_images'
do not match.
"""
tiles = [Tile(coords=TBLR(top=0, bottom=512, left=0, right=512), overlap=TBLR(top=0, bottom=0, left=0, right=0))]
tiles = [
Tile(
coords=TBLR(top=0, bottom=512, left=0, right=512),
overlap=TBLR(top=0, bottom=0, left=0, right=0),
)
]
dst_image = np.zeros((256, 512, 3), dtype=np.uint8)

tests/fixtures/__init__.py (new, empty file)

tests/fixtures/sqlite_database.py (new file, 13 lines)

@@ -0,0 +1,13 @@
from logging import Logger
from unittest import mock
from invokeai.app.services.config.config_default import InvokeAIAppConfig
from invokeai.app.services.image_files.image_files_base import ImageFileStorageBase
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.app.services.shared.sqlite.sqlite_util import init_db
def create_mock_sqlite_database(config: InvokeAIAppConfig, logger: Logger) -> SqliteDatabase:
image_files = mock.Mock(spec=ImageFileStorageBase)
db = init_db(config=config, logger=logger, image_files=image_files)
return db
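In other words, the mock database is created through the production init_db() path with only the image-file storage mocked out, so the schema setup (including any registered migrations) that init_db() performs also runs under test.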


@@ -0,0 +1,272 @@
import sqlite3
from contextlib import closing
from logging import Logger
from pathlib import Path
from tempfile import TemporaryDirectory
import pytest
from pydantic import ValidationError
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import (
MigrateCallback,
Migration,
MigrationError,
MigrationSet,
MigrationVersionError,
)
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_impl import (
SqliteMigrator,
)
@pytest.fixture
def logger() -> Logger:
return Logger("test_sqlite_migrator")
@pytest.fixture
def memory_db_conn() -> sqlite3.Connection:
return sqlite3.connect(":memory:")
@pytest.fixture
def memory_db_cursor(memory_db_conn: sqlite3.Connection) -> sqlite3.Cursor:
return memory_db_conn.cursor()
@pytest.fixture
def migrator(logger: Logger) -> SqliteMigrator:
db = SqliteDatabase(db_path=None, logger=logger, verbose=False)
return SqliteMigrator(db=db)
@pytest.fixture
def no_op_migrate_callback() -> MigrateCallback:
def no_op_migrate(cursor: sqlite3.Cursor, **kwargs) -> None:
pass
return no_op_migrate
@pytest.fixture
def migration_no_op(no_op_migrate_callback: MigrateCallback) -> Migration:
return Migration(from_version=0, to_version=1, callback=no_op_migrate_callback)
@pytest.fixture
def migrate_callback_create_table_of_name() -> MigrateCallback:
def migrate(cursor: sqlite3.Cursor, **kwargs) -> None:
table_name = kwargs["table_name"]
cursor.execute(f"CREATE TABLE {table_name} (id INTEGER PRIMARY KEY);")
return migrate
@pytest.fixture
def migrate_callback_create_test_table() -> MigrateCallback:
def migrate(cursor: sqlite3.Cursor, **kwargs) -> None:
cursor.execute("CREATE TABLE test (id INTEGER PRIMARY KEY);")
return migrate
@pytest.fixture
def migration_create_test_table(migrate_callback_create_test_table: MigrateCallback) -> Migration:
return Migration(from_version=0, to_version=1, callback=migrate_callback_create_test_table)
@pytest.fixture
def failing_migration() -> Migration:
def failing_migration(cursor: sqlite3.Cursor, **kwargs) -> None:
raise Exception("Bad migration")
return Migration(from_version=0, to_version=1, callback=failing_migration)
@pytest.fixture
def failing_migrate_callback() -> MigrateCallback:
def failing_migrate(cursor: sqlite3.Cursor, **kwargs) -> None:
raise Exception("Bad migration")
return failing_migrate
def create_migrate(i: int) -> MigrateCallback:
def migrate(cursor: sqlite3.Cursor, **kwargs) -> None:
cursor.execute(f"CREATE TABLE test{i} (id INTEGER PRIMARY KEY);")
return migrate
def test_migration_to_version_is_one_gt_from_version(no_op_migrate_callback: MigrateCallback) -> None:
with pytest.raises(ValidationError, match="to_version must be one greater than from_version"):
Migration(from_version=0, to_version=2, callback=no_op_migrate_callback)
# not raising is sufficient
Migration(from_version=1, to_version=2, callback=no_op_migrate_callback)
def test_migration_hash(no_op_migrate_callback: MigrateCallback) -> None:
migration = Migration(from_version=0, to_version=1, callback=no_op_migrate_callback)
assert hash(migration) == hash((0, 1))
def test_migration_set_add_migration(migrator: SqliteMigrator, migration_no_op: Migration) -> None:
migration = migration_no_op
migrator._migration_set.register(migration)
assert migration in migrator._migration_set._migrations
def test_migration_set_may_not_register_dupes(
migrator: SqliteMigrator, no_op_migrate_callback: MigrateCallback
) -> None:
migrate_0_to_1_a = Migration(from_version=0, to_version=1, callback=no_op_migrate_callback)
migrate_0_to_1_b = Migration(from_version=0, to_version=1, callback=no_op_migrate_callback)
migrator._migration_set.register(migrate_0_to_1_a)
with pytest.raises(MigrationVersionError, match=r"Migration with from_version or to_version already registered"):
migrator._migration_set.register(migrate_0_to_1_b)
migrate_1_to_2_a = Migration(from_version=1, to_version=2, callback=no_op_migrate_callback)
migrate_1_to_2_b = Migration(from_version=1, to_version=2, callback=no_op_migrate_callback)
migrator._migration_set.register(migrate_1_to_2_a)
with pytest.raises(MigrationVersionError, match=r"Migration with from_version or to_version already registered"):
migrator._migration_set.register(migrate_1_to_2_b)
def test_migration_set_gets_migration(migration_no_op: Migration) -> None:
migration_set = MigrationSet()
migration_set.register(migration_no_op)
assert migration_set.get(0) == migration_no_op
assert migration_set.get(1) is None
def test_migration_set_validates_migration_chain(no_op_migrate_callback: MigrateCallback) -> None:
migration_set = MigrationSet()
migration_set.register(Migration(from_version=1, to_version=2, callback=no_op_migrate_callback))
with pytest.raises(MigrationError, match="Migration chain is fragmented"):
# no migration from 0 to 1
migration_set.validate_migration_chain()
migration_set.register(Migration(from_version=0, to_version=1, callback=no_op_migrate_callback))
migration_set.validate_migration_chain()
migration_set.register(Migration(from_version=2, to_version=3, callback=no_op_migrate_callback))
migration_set.validate_migration_chain()
migration_set.register(Migration(from_version=4, to_version=5, callback=no_op_migrate_callback))
with pytest.raises(MigrationError, match="Migration chain is fragmented"):
# no migration from 3 to 4
migration_set.validate_migration_chain()
def test_migration_set_counts_migrations(no_op_migrate_callback: MigrateCallback) -> None:
migration_set = MigrationSet()
assert migration_set.count == 0
migration_set.register(Migration(from_version=0, to_version=1, callback=no_op_migrate_callback))
assert migration_set.count == 1
migration_set.register(Migration(from_version=1, to_version=2, callback=no_op_migrate_callback))
assert migration_set.count == 2
def test_migration_set_gets_latest_version(no_op_migrate_callback: MigrateCallback) -> None:
migration_set = MigrationSet()
assert migration_set.latest_version == 0
migration_set.register(Migration(from_version=1, to_version=2, callback=no_op_migrate_callback))
assert migration_set.latest_version == 2
migration_set.register(Migration(from_version=0, to_version=1, callback=no_op_migrate_callback))
assert migration_set.latest_version == 2
def test_migration_runs(memory_db_cursor: sqlite3.Cursor, migrate_callback_create_test_table: MigrateCallback) -> None:
migration = Migration(
from_version=0,
to_version=1,
callback=migrate_callback_create_test_table,
)
migration.callback(memory_db_cursor)
memory_db_cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='test';")
assert memory_db_cursor.fetchone() is not None
def test_migrator_registers_migration(migrator: SqliteMigrator, migration_no_op: Migration) -> None:
migration = migration_no_op
migrator.register_migration(migration)
assert migration in migrator._migration_set._migrations
def test_migrator_creates_migrations_table(migrator: SqliteMigrator) -> None:
cursor = migrator._db.conn.cursor()
migrator._create_migrations_table(cursor)
cursor.execute("SELECT * FROM sqlite_master WHERE type='table' AND name='migrations';")
assert cursor.fetchone() is not None
def test_migrator_migration_sets_version(migrator: SqliteMigrator, migration_no_op: Migration) -> None:
cursor = migrator._db.conn.cursor()
migrator._create_migrations_table(cursor)
migrator.register_migration(migration_no_op)
migrator.run_migrations()
cursor.execute("SELECT MAX(version) FROM migrations;")
assert cursor.fetchone()[0] == 1
def test_migrator_gets_current_version(migrator: SqliteMigrator, migration_no_op: Migration) -> None:
cursor = migrator._db.conn.cursor()
assert migrator._get_current_version(cursor) == 0
migrator._create_migrations_table(cursor)
assert migrator._get_current_version(cursor) == 0
migrator.register_migration(migration_no_op)
migrator.run_migrations()
assert migrator._get_current_version(cursor) == 1
def test_migrator_runs_single_migration(migrator: SqliteMigrator, migration_create_test_table: Migration) -> None:
cursor = migrator._db.conn.cursor()
migrator._create_migrations_table(cursor)
migrator._run_migration(migration_create_test_table)
assert migrator._get_current_version(cursor) == 1
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='test';")
assert cursor.fetchone() is not None
def test_migrator_runs_all_migrations_in_memory(migrator: SqliteMigrator) -> None:
cursor = migrator._db.conn.cursor()
migrations = [Migration(from_version=i, to_version=i + 1, callback=create_migrate(i)) for i in range(0, 3)]
for migration in migrations:
migrator.register_migration(migration)
migrator.run_migrations()
assert migrator._get_current_version(cursor) == 3
def test_migrator_runs_all_migrations_file(logger: Logger) -> None:
with TemporaryDirectory() as tempdir:
original_db_path = Path(tempdir) / "invokeai.db"
db = SqliteDatabase(db_path=original_db_path, logger=logger, verbose=False)
migrator = SqliteMigrator(db=db)
migrations = [Migration(from_version=i, to_version=i + 1, callback=create_migrate(i)) for i in range(0, 3)]
for migration in migrations:
migrator.register_migration(migration)
migrator.run_migrations()
with closing(sqlite3.connect(original_db_path)) as original_db_conn:
original_db_cursor = original_db_conn.cursor()
assert SqliteMigrator._get_current_version(original_db_cursor) == 3
# Must manually close else we get an error on Windows
db.conn.close()
def test_migrator_makes_no_changes_on_failed_migration(
migrator: SqliteMigrator, migration_no_op: Migration, failing_migrate_callback: MigrateCallback
) -> None:
cursor = migrator._db.conn.cursor()
migrator.register_migration(migration_no_op)
migrator.run_migrations()
assert migrator._get_current_version(cursor) == 1
migrator.register_migration(Migration(from_version=1, to_version=2, callback=failing_migrate_callback))
with pytest.raises(MigrationError, match="Bad migration"):
migrator.run_migrations()
assert migrator._get_current_version(cursor) == 1
def test_idempotent_migrations(migrator: SqliteMigrator, migration_create_test_table: Migration) -> None:
cursor = migrator._db.conn.cursor()
migrator.register_migration(migration_create_test_table)
migrator.run_migrations()
# not throwing is sufficient
migrator.run_migrations()
assert migrator._get_current_version(cursor) == 1
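For orientation, the contract these migrator tests collectively assert can be condensed into a few lines. This is a sketch of the behaviour, not the real SqliteMigrator internals; the migration objects are assumed to expose from_version, to_version and callback, as above:

import sqlite3

def run_migrations_sketch(conn: sqlite3.Connection, migrations) -> None:
    cursor = conn.cursor()
    cursor.execute("CREATE TABLE IF NOT EXISTS migrations (version INTEGER PRIMARY KEY);")
    cursor.execute("INSERT OR IGNORE INTO migrations (version) VALUES (0);")
    while True:
        cursor.execute("SELECT MAX(version) FROM migrations;")
        current = cursor.fetchone()[0]
        migration = next((m for m in migrations if m.from_version == current), None)
        if migration is None:
            break  # no migration from the current version: idempotent on re-run
        try:
            migration.callback(cursor)
            cursor.execute("INSERT INTO migrations (version) VALUES (?);", (migration.to_version,))
            conn.commit()
        except Exception:
            conn.rollback()  # a failed migration leaves the recorded version unchanged
            raise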