Compare commits

703 Commits

86c857b9c2 {release} v3.6.1 (#5564)
## What type of PR is this? (check all applicable)

Invoke 3.6.1 release

## QA Instructions, Screenshots, Recordings

[InvokeAI-installer-v3.6.1.zip](https://github.com/invoke-ai/InvokeAI/files/14041411/InvokeAI-installer-v3.6.1.zip)

## Merge Plan

This PR can be merged when approved

## [optional] Are there any post deployment tasks we need to perform?
PyPI Release & GitHub Release
2024-01-24 12:31:10 -05:00
0a13d7d2c7 {release} v3.6.1 2024-01-24 11:27:36 -05:00
68da5c6d22 feat: Add Depth Anything PreProcessor (#5548)
## What type of PR is this? (check all applicable)

- [x] Feature

## Have you discussed this change with the InvokeAI team?
- [x] Yes

## Have you updated all relevant documentation?
- [x] No


## Description

- This adds the newly released Depth Anything to InvokeAI. A new node
`Depth Anything Processor` has been added to generate depth maps using
this new technique. https://depth-anything.github.io

- All related checkpoints will be downloaded automatically on first
boot. The `DinoV2` models will be loaded to your torch cache dir and the
checkpoints pertaining to Depth Anything will be downloaded to
`any/annotators/depth_anything`.

- Alternatively you can find the checkpoints here and download them to
that folder:
https://huggingface.co/spaces/LiheYoung/Depth-Anything/tree/main/checkpoints

- This depth map can be used with any depth ControlNet model out there,
but the folks at DepthAnything have also released a custom fine-tuned
ControlNet model. From my limited testing, I still prefer the original
depth model because this one seems to produce weird artifacts. Not sure
if that is a problem specific to Invoke or just the model itself; I'll
test more later. Place these in your ControlNet folder like your other
ControlNets. You can get them here:
https://huggingface.co/spaces/LiheYoung/Depth-Anything/tree/main/checkpoints_controlnet

- Also available in the Linear UI.

- DepthAnything has three models, `large`, `base` and `small` -- I've
defaulted the processor to `small`, but a user can change to the large
model if they wish. Small is much faster but at somewhat lesser quality.
(A sketch of the resulting processor settings follows this list.)

- DepthAnything is now the default processor for depth ControlNet
models.
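
As a rough sketch of the processor settings this adds (field and type names here are hypothetical, not the actual node schema):

```ts
// Hypothetical names for illustration only - not InvokeAI's actual schema.
type DepthAnythingModelSize = 'large' | 'base' | 'small';

interface DepthAnythingProcessorConfig {
  type: 'depth_anything_image_processor';
  model_size: DepthAnythingModelSize; // defaults to 'small', per the notes above
  resolution: number; // resolution of the generated depth map
}

const defaultDepthAnythingConfig: DepthAnythingProcessorConfig = {
  type: 'depth_anything_image_processor',
  model_size: 'small',
  resolution: 512,
};
```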

## Screenshots


![opera_o3jHnWxVRi](https://github.com/invoke-ai/InvokeAI/assets/54517381/573c66f3-1492-45b0-b6df-25756f5e1d1a)

## Merge Plan

DO NOT MERGE YET. Test it first, and I'm sure the model caching can be
done better; I don't think I've really addressed that at all. I would
appreciate it if @brandonrising or @lstein or anyone else could take a
look at that part of it.
2024-01-24 19:14:34 +05:30
f82744b95e fix: linting issues 2024-01-24 18:45:54 +05:30
5a67bc68a1 Merge branch 'main' into depth-anything 2024-01-23 22:31:19 -06:00
61cf4d4c70 feat: "Remix Image" option on images (#5553)
* feat:  "Remix Image" option on images

Adds a new "remix image" option where applicable, recalls all metadata except the seed

* refactor: 🚨 lint code

* feat:  add new remix hotkey to hotkeys modal

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-24 00:45:30 +00:00
9d20a2d5a3 Update huggingface deps to their latest versions 2024-01-24 11:14:21 +11:00
8b0ac451e3 translationBot(ui): update translation (Italian)
Currently translated at 97.3% (1378 of 1415 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-01-24 11:05:17 +11:00
470dbe75a2 translationBot(ui): update translation (German)
Currently translated at 60.0% (850 of 1415 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-01-24 11:05:17 +11:00
b7d19b8130 add project as category to back-end 2024-01-24 10:59:04 +11:00
3dc13221d8 add project as a workflow category in the front-end 2024-01-24 10:59:04 +11:00
35184dbd86 fix: incorrect local file path 2024-01-24 03:37:16 +05:30
0868fc2558 Merge branch 'main' into depth-anything 2024-01-24 03:36:25 +05:30
92fb09c4df fix: Move the models to any folder to avoid boot warnings 2024-01-24 03:35:37 +05:30
b4cf5496b6 fix(ui): handle model names with spaces
Remove `trim()` from model identifier schema, which prevented parsed model identifiers from matching.

The root issue here is that model names are used as identifiers. This will be resolved in the model manager refactor.

Closes #5556
2024-01-23 15:48:10 -06:00
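
A minimal sketch of the fix, assuming a zod-based identifier schema (schema and field names are illustrative):

```ts
import { z } from 'zod';

// Previously the name field used z.string().trim(), which stripped
// leading/trailing whitespace - so an identifier parsed back from metadata
// never matched a stored name containing such spaces. Accept it verbatim.
const zModelIdentifier = z.object({
  model_name: z.string().min(1),
  base_model: z.enum(['sd-1', 'sd-2', 'sdxl', 'sdxl-refiner']),
});
```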
a0e68705dd feat(ui): improved dynamic prompts behaviour
- Bump `@invoke-ai/ui` for updated styles
- Update regex to parse prompts with newlines
- Update styling of overlay button when prompt has an error
- Fix bug where loading and error state sometimes weren't cleared
2024-01-23 15:26:12 -06:00
7cb49e65bd feat: Add Resolution to DepthAnything 2024-01-23 14:13:50 -06:00
39fedb090b feat: Make depth anything the default processor for depth controlnet 2024-01-23 14:13:50 -06:00
f36a691219 feat: Make the depth anything small model the default 2024-01-23 14:13:50 -06:00
6a2eb1d2e4 fix: Change the path of the annotator folder to annotators
Just making this change in case there are other models added to the folder in the future
2024-01-23 14:13:50 -06:00
13123daa3f feat: Add DepthAnything to Linear UI 2024-01-23 14:13:50 -06:00
c859eb865e fix: lint & other minor issues 2024-01-23 14:13:50 -06:00
8f5e2cbcc7 feat: Add Depth Anything PreProcessor 2024-01-23 14:13:50 -06:00
2aed6e2dba fix(ui): duplicate "base model" in merge UI
closes #5505
2024-01-23 14:13:18 -06:00
52b51a6088 fix(ui): recall/use size sets aspect ratio correctly
Added a new action that resets the aspect ratio when dispatched.

Closes #5456
2024-01-23 14:13:18 -06:00
52b24e01e2 feat(ui): remove chakra as direct dependency
Moved a number of things to `@invoke-ai/ui` to support this.

Unfortunately, the bundle size has increased a bit. I will work on that later.
2024-01-23 14:13:18 -06:00
1178fd8bd3 fix(ui): fix styling of some form elements 2024-01-23 14:13:18 -06:00
a0187cc9df fix(ui): remove unused import in storybook 2024-01-23 14:13:18 -06:00
2f656cc357 fix(ui): fix field context menu jank
Closes #5551
2024-01-23 14:13:18 -06:00
71f9ac9985 feat(ui): scrollable areas support x-axis scrolling
Closes #5490
2024-01-23 14:13:18 -06:00
8bbdfc45fa fix(ui): increase size of control adapters advanced toggle button 2024-01-23 14:13:18 -06:00
3cbb1a7671 chore(ui): bump @invoke-ai/ui
This includes a minor enhancement, increasing the contrast on tabs.
2024-01-23 14:13:18 -06:00
b74e0de74a tidy(ui): remove unused state from uiSlice 2024-01-23 14:13:18 -06:00
e7e7793896 feat(ui): remember open/closed state of accordions/expanders 2024-01-23 14:13:18 -06:00
504bdac14a tidy(ui): remove unused state from uiSlice 2024-01-23 14:13:18 -06:00
b76d2cd716 fix(ui): handle base model compat when recalling parameters
We had a one-behind issue with recalling metadata items that had a model.

For example, when recalling LoRAs, we check against the current main model to decide whether or not the requested LoRA is compatible and may be recalled.

When recalling all params, we are often also recalling the main model, but the compat logic didn't compare against this new main model.

The logic is updated to check against the new main model, if one is being set.

Closes #5512
2024-01-23 14:13:18 -06:00
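
In sketch form, the corrected check looks something like this (shapes and names are illustrative, not the actual InvokeAI types):

```ts
type BaseModelType = 'sd-1' | 'sd-2' | 'sdxl';
interface ModelRef {
  base_model: BaseModelType;
}

const isLoraRecallable = (
  lora: ModelRef,
  currentMainModel: ModelRef | null,
  newMainModel?: ModelRef // the main model being recalled alongside, if any
): boolean => {
  // The one-behind bug: compatibility was always checked against
  // currentMainModel. Fix: prefer the incoming main model when one is set.
  const reference = newMainModel ?? currentMainModel;
  return reference !== null && lora.base_model === reference.base_model;
};
```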
022b32c724 fix(ui): reset clip skip to 0 if new model is sdxl
Clip skip wasn't actually used in SDXL graphs so enabling it didn't do anything, just a UI quirk.

Closes #5508
2024-01-23 14:13:18 -06:00
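
A minimal sketch of the reset (not the actual reducer):

```ts
// SDXL graphs never consume CLIP skip, so zero it when an SDXL main model
// is selected rather than leaving a misleading nonzero value in the UI.
const onModelSelected = (state: { clipSkip: number }, newModel: { base_model: string }): void => {
  if (newModel.base_model === 'sdxl') {
    state.clipSkip = 0;
  }
};
```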
653b820da1 tidy(ui): clearer variable names in modelSelected listener 2024-01-23 14:13:18 -06:00
4ba0bf4dcf docs(ui): update README (#5473)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission

## Description

Update UI README


## Merge Plan

This PR can be merged when approved

2024-01-23 18:47:24 +05:30
5e4daf4bc6 docs: remove separate frontend docs and link to repo
The frontend docs should just be in the frontend. This is a standard practice for monorepos with developer information for specific packages within the monorepo.
2024-01-23 18:04:41 +11:00
7e0713c869 docs(ui): fix typo 2024-01-23 18:04:41 +11:00
099d516ac0 docs(ui): update README 2024-01-23 18:04:41 +11:00
6a2856e46f Updated field descriptions 2024-01-23 02:26:30 +11:00
4dedd63b74 Update defaultNodes.md
Added ideal size node
2024-01-23 02:26:30 +11:00
db74837eb1 Update communityNodes.md
Removed ideal size node
2024-01-23 02:26:30 +11:00
892fe62264 Add Ideal Size node to core nodes
The Ideal Size node is useful for High-Res Optimization as it gives the optimum size for creating an initial generation with minimal artifacts (duplication and other strangeness) from today's models.

After inclusion, front end graph generation can be simplified by offloading calculations for HRO initial generation to this node.
2024-01-23 02:26:30 +11:00
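
The calculation such a node performs is roughly: scale the requested dimensions down to the model's trained pixel area while preserving aspect ratio, snapped to latent-friendly multiples of 8. A hedged sketch - the real node's heuristics may differ:

```ts
// SD 1.x models were trained around 512x512 (262,144 px); SDXL around
// 1024x1024 (1,048,576 px). Solve w * h = trainedArea with w / h = aspect.
const idealSize = (targetWidth: number, targetHeight: number, trainedArea: number) => {
  const aspect = targetWidth / targetHeight;
  const height = Math.sqrt(trainedArea / aspect);
  const width = height * aspect;
  const snap = (v: number) => Math.max(8, Math.floor(v / 8) * 8); // latent sizes are multiples of 8
  return { width: snap(width), height: snap(height) };
};

// e.g. idealSize(1536, 1024, 512 * 512) -> { width: 624, height: 416 }
```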
3c79476785 requested changes made 2024-01-22 21:06:40 +11:00
dad364da17 rebased and made more changes 2024-01-22 21:06:40 +11:00
37bc4f78d0 last 2024-01-22 21:06:40 +11:00
de0b43c81d more strings and translations added 2024-01-22 21:06:40 +11:00
ea1d2d6a4c added translation where needed 2024-01-22 21:06:40 +11:00
fafe8ccc59 fix(api): typo in no_cache_staticfiles.py 2024-01-22 16:10:25 +11:00
4b88cfac19 fix(api): typo in no_cache_staticfiles.py 2024-01-22 16:10:25 +11:00
5fa13fba36 chore: ruff 2024-01-22 16:10:25 +11:00
f28f761436 fix(api): add NoCacheStaticFiles to prevent *all* caching
The previous method wasn't totally foolproof, and locales/assets were cached.

To solve this once and for all (famous last words, I know), we can subclass `StaticFiles` and use maximally strict no-caching headers to disable caching on all static files.
2024-01-22 16:10:25 +11:00
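
The shipped fix subclasses Starlette's `StaticFiles` in Python; purely to illustrate the header set involved, here is the same idea as an Express-style middleware (a sketch, not the actual code):

```ts
import express from 'express';

const app = express();

// Maximally strict no-caching headers, applied to every static file.
app.use('/assets', (_req, res, next) => {
  res.set({
    'Cache-Control': 'no-store, no-cache, must-revalidate, max-age=0',
    Pragma: 'no-cache',
    Expires: '0',
  });
  next();
});

app.use('/assets', express.static('static'));
```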
27d7889780 tidy(ui): remove commented line 2024-01-22 09:37:26 +11:00
a1cf153097 fix(ui): generation accordion defaults to open 2024-01-22 09:37:26 +11:00
d121eefa12 tidy(ui): organise accordions files 2024-01-22 09:37:26 +11:00
c92e25a6a7 fix(ui): add tooltip to model select 2024-01-22 09:37:26 +11:00
8be03dead5 chore(ui): bump @invoke-ai/ui 2024-01-22 09:37:26 +11:00
1197133d06 fix(ui): fix lora name wrap 2024-01-22 09:37:26 +11:00
850458a554 chore(ui): lint 2024-01-22 09:37:26 +11:00
e96ad41729 feat(ui): use @invoke-ai/ui hooks for modifiers, global menu state 2024-01-22 09:37:26 +11:00
53cf518390 chore(ui): bump @invoke-ai/ui 2024-01-22 09:37:26 +11:00
b00ace852d fix(ui): settings modal switch widths 2024-01-22 09:37:26 +11:00
be72765d02 fix(ui): bump @invoke-ai/ui, fix TS issues 2024-01-22 09:37:26 +11:00
580d29257c feat(ui): add missing translations 2024-01-22 09:37:26 +11:00
5d068c1da1 feat(ui): migrate to @invoke-ai/ui 2024-01-22 09:37:26 +11:00
8e2ccab1f0 feat(ui): add @invoke-ai/ui 2024-01-22 09:37:26 +11:00
6f478eef62 chore(ui): bump deps 2024-01-22 09:37:26 +11:00
1ff1c370df translationBot(ui): update translation (Italian)
Currently translated at 97.3% (1365 of 1402 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-01-19 15:11:32 +11:00
5ef87ef2a6 fix(ui): tidy component/props naming in ClearQueueIconButton.tsx 2024-01-19 15:03:16 +11:00
d0709d4f4e fix(ui): use trash icon for clear all queue 2024-01-19 15:03:16 +11:00
2a081b0a27 fix(ui): remove unnecessary fragments 2024-01-19 15:03:16 +11:00
d902533387 feat(ui): use global modifier state for clear queue button mode switch 2024-01-19 15:03:16 +11:00
1174713223 fix(ui): use cancel strings for cancel button 2024-01-19 15:03:16 +11:00
4b1740ad19 chore(ui): format 2024-01-19 15:03:16 +11:00
e03c88ce32 feat: 🚸 shift key queue cancellations 2024-01-19 15:03:16 +11:00
b917ffecbe chore(ui): format 2024-01-19 14:42:31 +11:00
2967a78c5a feat: 💄 update lots of icons 2024-01-19 14:42:31 +11:00
aa25ea62a5 fix(backend): installed models being redownloaded (#5526)
* fix

* fix ruff errors

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-01-18 16:53:53 -05:00
1ab0e86085 feat(ui): add about modal w/ app deps (#5462)
* resolved conflicts

* changed logo and some design changes

* feedback changes

* resolved conflicts

* changed logo and some design changes

* feedback changes

* lint fixed

* added translations

* some requested changes done

* all feedback changes done and replace links in settingsmenu comp

* fixed the gap between deps versions & changed heights

* feat(ui): minor about modal styling

* feat(ui): tag app endpoints with FetchOnReconnect

* fix(ui): remove unused translation string

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
2024-01-18 13:30:56 +00:00
c9ddbb4241 lint 2024-01-18 23:29:47 +11:00
415a1c7a4f add ids 2024-01-18 23:29:47 +11:00
84a4836ab7 change actions to be for any InvExpander or InvSingleAccordion that has id passed in 2024-01-18 23:29:47 +11:00
dbd6c9c6ed remove actions we can get from mutation data 2024-01-18 23:29:47 +11:00
4f95c077d4 lint fix 2024-01-18 23:29:47 +11:00
0a4cbc4e16 undo 2024-01-18 23:29:47 +11:00
d45b76fab4 undo 2024-01-18 23:29:47 +11:00
9722135cda add various actions for commercial purposes 2024-01-18 23:29:47 +11:00
7366913a31 chore(deps): bump tj-actions/changed-files in /.github/workflows
Bumps [tj-actions/changed-files](https://github.com/tj-actions/changed-files) from 37 to 41.
- [Release notes](https://github.com/tj-actions/changed-files/releases)
- [Changelog](https://github.com/tj-actions/changed-files/blob/main/HISTORY.md)
- [Commits](https://github.com/tj-actions/changed-files/compare/v37...v41)

---
updated-dependencies:
- dependency-name: tj-actions/changed-files
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-17 00:25:21 -05:00
bd31b5606c Retain suffix (.safetensors, .bin) when renaming a checkpoint file or LoRA
- closes #5518
2024-01-16 17:27:50 -05:00
2953dea4a0 set width and height based on defaultModel if passed in 2024-01-17 07:08:11 +11:00
f3fed0b10f fix(ui): use native langs for language select
Use each language's own name for its option in the language select. This falls back to the English translation if the language name isn't translated.
2024-01-17 06:50:16 +11:00
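
A sketch of that fallback, assuming an i18next-style `t` with a per-call language override (the key name is hypothetical):

```ts
import i18n from 'i18next';

// Render each option in its own language; if that translation is missing,
// fall back to the English name.
const languageLabel = (code: string): string =>
  i18n.t('common.languageName', {
    lng: code,
    defaultValue: i18n.t('common.languageName', { lng: 'en' }),
  });
```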
db57d426d9 fix(ui): force dark mode 2024-01-17 06:48:04 +11:00
4536e4a8b6 Model Manager Refactor: Install remote models and store their tags and other metadata (#5361)
* add basic functionality for model metadata fetching from hf and civitai

* add storage

* start unit tests

* add unit tests and documentation

* add missing dependency for pytests

* remove redundant fetch; add modified/published dates; updated docs

* add code to select diffusers files based on the variant type

* implement Civitai installs

* make huggingface parallel downloading work

* add unit tests for model installation manager

- Fixed race condition on selection of download destination path
- Add fixtures common to several model_manager_2 unit tests
- Added dummy model files for testing diffusers and safetensors downloading/probing
- Refactored code for selecting proper variant from list of huggingface repo files
- Regrouped ordering of methods in model_install_default.py

* improve Civitai model downloading

- Provide a better error message when Civitai requires an access token (doesn't give a 403 forbidden, but redirects
  to the HTML of an authorization page -- arrgh)
- Handle case of Civitai providing a primary download link plus additional links for VAEs, config files, etc

* add routes for retrieving metadata and tags

* code tidying and documentation

* fix ruff errors

* add file needed to maintain test root directory in repo for unit tests

* fix self->cls in classmethod

* add pydantic plugin for mypy

* use TestSession instead of requests.Session to prevent any internet activity

improve logging

fix error message formatting

fix logging again

fix forward vs reverse slash issue in Windows install tests

* Several fixes of problems detected during PR review:

- Implement cancel_model_install_job and get_model_install_job routes
  to allow for better control of model download and install.
- Fix thread deadlock that occurred after cancelling an install.
- Remove unneeded pytest_plugins section from tests/conftest.py
- Remove unused _in_terminal_state() from model_install_default.
- Remove outdated documentation from several spots.
- Add workaround for Civitai API results which don't return correct
  URL for the default model.

* fix docs and tests to match get_job_by_source() rather than get_job()

* Update invokeai/backend/model_manager/metadata/fetch/huggingface.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* Call CivitaiMetadata.model_validate_json() directly

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* Second round of revisions suggested by @ryanjdick:

- Fix type mismatch in `list_all_metadata()` route.
- Do not have a default value for the model install job id
- Remove static class variable declarations from non Pydantic classes
- Change `id` field to `model_id` for the sqlite3 `model_tags` table.
- Changed AFTER DELETE triggers to ON DELETE CASCADE for the metadata and tags tables.
- Made the `id` field of the `model_metadata` table into a primary key to achieve uniqueness.

* Code cleanup suggested in PR review:

- Narrowed the declaration of the `parts` attribute of the download progress event
- Removed auto-conversion of str to Url in Url-containing sources
- Fixed handling of `InvalidModelConfigException`
- Made unknown sources raise `NotImplementedError` rather than `Exception`
- Improved status reporting on cached HuggingFace access tokens

* Multiple fixes:

- `job.total_size` returns a valid size for locally installed models
- new route `list_models` returns a paged summary of model, name,
  description, tags and other essential info
- fix a few type errors

* consolidated all invokeai root pytest fixtures into a single location

* Update invokeai/backend/model_manager/metadata/metadata_store.py

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>

* Small tweaks in response to review comments:

- Remove flake8 configuration from pyproject.toml
- Use `id` rather than `modelId` for huggingface `ModelInfo` object
- Use `last_modified` rather than `LastModified` for huggingface `ModelInfo` object
- Add `sha256` field to file metadata downloaded from huggingface
- Add `Invoker` argument to the model installer `start()` and `stop()` routines
  (but made it optional in order to facilitate use of the service outside the API)
- Removed redundant `PRAGMA foreign_keys` from metadata store initialization code.

* Additional tweaks and minor bug fixes

- Fix calculation of aggregate diffusers model size to only count the
  size of files, not files + directories (which gives different unit test
  results on different filesystems).
- Refactor _get_metadata() and _get_download_urls() to have distinct code paths
  for Civitai, HuggingFace and URL sources.
- Forward the `inplace` flag from the source to the job and added unit test for this.
- Attach cached model metadata to the job rather than to the model install service.

* fix unit test that was breaking on windows due to CR/LF changing size of test json files

* fix ruff formatting

* a few last minor fixes before merging:

- Turn job `error` and `error_type` into properties derived from the exception.
- Add TODO comment about the reason for handling temporary directory destruction
  manually rather than using tempfile.tmpdir().

* add unit tests for reporting HTTP download errors

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-14 19:54:53 +00:00
426a7b900f feat(ui): resize options/gallery panels to min on window resize
Per user feedback, this is preferable to letting them expand when the window grows.

Also bumps `react-resizable-panels` now that one of my PRs is merged to fix an issue.
2024-01-14 11:33:44 +11:00
cc571d9ab2 Quick Fix for right gallery button 2024-01-14 09:49:52 +11:00
296c861e7d Handle bad id in log_stats(...). 2024-01-13 15:19:57 -05:00
aa45d21fd2 Reduce the number of graph_execution_manager.get(...) calls from the InvocationStatsService. 2024-01-13 15:19:57 -05:00
ac42513da9 Remove unused reset_all_stats(...). 2024-01-13 15:19:57 -05:00
e2387546fe Rename GIG -> GB. And move it to where it's being used. 2024-01-13 15:19:57 -05:00
c8929b35f0 Refactor the invocation stats service for better readability and to support reporting the execution wall time. 2024-01-13 15:19:57 -05:00
c000e270a0 Release/v3.6.0 (#5485)
## What type of PR is this? (check all applicable)

Release v3.6.0

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Invoke v3.6.0

## QA Instructions, Screenshots, Recordings



[InvokeAI-installer-v3.6.0.zip](https://github.com/invoke-ai/InvokeAI/files/13923761/InvokeAI-installer-v3.6.0.zip)

## [optional] Are there any post deployment tasks we need to perform?
1. Release on PyPi
2. Release on GitHub
3. Announce in #releases
2024-01-12 15:26:28 -05:00
8ff28da3b4 {release} v3.6.0 2024-01-12 15:00:49 -05:00
b7b376103c Update default workflows 2024-01-12 14:59:44 -05:00
08d379bb29 Update default workflows 2024-01-12 14:58:21 -05:00
74e644c4ba Allow bfloat16 to be configurable in invoke.yaml (#5469)
* feat: allow bfloat16 to be configurable in invoke.yaml

* fix: `torch_dtype()` util

- Use `choose_precision` to get the precision string
- Do not reference deprecated `config.full_precision` flag (why does this still exist?); if a user had this enabled it would override their actual precision setting and potentially cause a lot of confusion.

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-12 18:40:37 +00:00
d4c36da3ee translationBot(ui): update translation (Italian)
Currently translated at 97.3% (1365 of 1402 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-01-12 22:52:38 +11:00
dfe0b73890 fix(ui): fix usages of panel helpers
Upstream breaking change.
2024-01-12 09:31:07 +11:00
c0c8fa9a89 fix(ui): use nodrag on invinput in workflow editor
Closes #5476
2024-01-12 09:31:07 +11:00
ad7139829c fix(ui): fix canvas space hotkey
Need to do some checks to ensure we aren't taking over input elements, and are focused on the canvas.

Closes #5478
2024-01-12 09:31:07 +11:00
a24e63d440 fix(ui): do not focus board search on load 2024-01-12 09:31:07 +11:00
59437a02c3 feat(ui): restore resizable prompt boxes
The autosize proved to be unpopular. Changed back to resizable.
2024-01-12 09:31:07 +11:00
98a44d7fa1 feat(ui): update assets
- Add various brand images, organise images
- Create favicon for docs pages (light blue version of key logo)
- Rename app title to `Invoke - Community Edition`
2024-01-12 08:02:59 +11:00
07416753be feat(ui): more context in storage errors 2024-01-12 07:54:18 +11:00
630854ce26 3.6 Docs updates (#5412)
* Update UNIFIED_CANVAS.md

* Update index.md

* Update structure

* Docs updates
2024-01-11 16:52:22 +00:00
b55c2b99a7 feat(ui): workflow library styling 2024-01-11 09:42:12 -05:00
f81d36c95f fix(ui): do not string workflow id on rehydrate 2024-01-11 09:42:12 -05:00
26b7aadd32 fix(db): fix workflows pagination math 2024-01-11 09:42:12 -05:00
8e7e3c2b4a Initial Styling Commit 2024-01-11 09:42:12 -05:00
f2e8b66be4 Fix "Cannot import name 'PagingArgumentParser' error when starting textual inversion
- Closes #5395
2024-01-11 13:57:06 +11:00
ff09fd30dc feat(ui): if in dev mode, reset API on reconnect
This retains the current good developer experience when working on the server - the UI should fully reset when you restart the server.
2024-01-11 12:51:15 +11:00
9fcc30c3d6 feat(ui): optimize reconnect queries
Add `FetchOnReconnect` tag, tagging relevant queries with it. This tag is invalidated in the socketConnected listener, when it is determined that the queue changed.
2024-01-11 12:51:15 +11:00
b29a6522ef feat(ui): always check for change to queue status when reconnecting 2024-01-11 12:51:15 +11:00
936d19cd60 feat(ui): improve comments on socketConnected listener 2024-01-11 12:51:15 +11:00
f25b6ee5d1 chore(ui): lint 2024-01-11 12:51:15 +11:00
7dea079220 fix(ui): reduce reconnect requests
- Add checks to the "recovery" logic for socket connect events to reduce the number of network requests.
- Remove the `isInitialized` state from `systemSlice` and make it a nanostore local to the socketConnected listener (see the sketch below). It didn't need to be global state, and it's now more clearly named `isFirstConnection`.
- Export the queue status selector (minor improvement, memoizes it correctly).
2024-01-11 12:51:15 +11:00
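
A sketch of the nanostore change described above (names illustrative):

```ts
import { atom } from 'nanostores';

// Module-level nanostore, local to the socketConnected listener's file -
// this is ephemeral connection state and never needed to live in redux.
const $isFirstConnection = atom(true);

const onSocketConnected = (): void => {
  if ($isFirstConnection.get()) {
    // First connection since app load: do the full initial fetches.
  } else {
    // Reconnection: only check whether the queue actually changed.
  }
  $isFirstConnection.set(false);
};
```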
7fc08962fb translationBot(ui): update translation (Russian)
Currently translated at 97.0% (1361 of 1402 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-01-11 12:48:23 +11:00
71155d9e72 translationBot(ui): update translation (Hungarian)
Currently translated at 1.9% (28 of 1402 strings)

translationBot(ui): added translation (Hungarian)

Co-authored-by: ItzAttila <attila.gm.studio@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/hu/
Translation: InvokeAI/Web UI
2024-01-11 12:48:23 +11:00
6ccd72349d {release} v3.6.0rc6 (#5467)
## What type of PR is this? (check all applicable)

Release v3.6.0rc6

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Release candidate 6

## QA Instructions, Screenshots, Recordings

[InvokeAI-installer-v3.6.0rc6.zip](https://github.com/invoke-ai/InvokeAI/files/13890206/InvokeAI-installer-v3.6.0rc6.zip)



## Merge Plan
Merge when approved

## [optional] Are there any post deployment tasks we need to perform?
Release on PyPi & Github
2024-01-10 11:19:39 -05:00
30e12376d3 {release} v3.6.0rc6 2024-01-10 10:45:33 -05:00
23c8a893e1 fix(ui): fix gallery display bug, major lag
- Fixed a bug where after you load more, changing boards doesn't work. The offset and limit for the list image query had some wonky logic, now resolved.
- Addressed major lag in gallery when selecting an image.

Both issues were related to the useMultiselect and useGalleryImages hooks, which caused every image in the gallery to re-render whenever the selection changed. There's no way to memoize this away - we need to know when the selection changes. This is a longstanding issue.

The selection is only used in a callback, though - the onClick handler for an image to select it (or add it to the existing selection). We don't really need the reactivity for a callback, so we don't need to listen for changes to the selection.

The logic to handle multiple selection is moved to a new `galleryImageClicked` listener, which does all the selection right when it is needed.

The result is that gallery images no longer need to do heavy re-renders on any selection change.

Besides the multiselect click handler, there was also inefficient use of DND payloads. Previously, the `IMAGE_DTOS` type had a payload of image DTO objects. This was only used to drag gallery selection into a board. There is no need to hold onto image DTOs when we have the selection state already in redux. We were recalculating this payload for every image, on every tick.

This payload is now just the board id (the only piece of information we need for this particular DND event).

- I also removed some unused DND types while making this change.
2024-01-10 08:22:46 -05:00
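
A sketch of the listener pattern described above, using RTK's listener middleware (shapes and names are illustrative, not the actual InvokeAI code):

```ts
import { createAction, createListenerMiddleware } from '@reduxjs/toolkit';

interface ImageDTO {
  image_name: string;
}
const galleryImageClicked = createAction<{ imageDTO: ImageDTO; shiftKey: boolean; ctrlKey: boolean }>(
  'gallery/imageClicked'
);
const selectionChanged = createAction<ImageDTO[]>('gallery/selectionChanged');

const listenerMiddleware = createListenerMiddleware();

listenerMiddleware.startListening({
  actionCreator: galleryImageClicked,
  effect: (action, { dispatch, getState }) => {
    // Selection state is read once per click, here, instead of every gallery
    // image subscribing to it and re-rendering on each selection change.
    const { imageDTO, shiftKey, ctrlKey } = action.payload;
    const selection: ImageDTO[] = (getState() as any).gallery.selection;

    if (ctrlKey) {
      dispatch(selectionChanged([...selection, imageDTO])); // add to selection
    } else if (shiftKey) {
      // ...compute the range between the last-selected image and imageDTO...
    } else {
      dispatch(selectionChanged([imageDTO])); // single select
    }
  },
});
```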
7d93329401 feat(ui): de-jank context menu
There was a lot of convoluted, janky logic related to trying to not mount the context menu's portal until it's needed. This was in the library where the component was originally copied from.

I've removed that and resolved the jank, at the cost of there being an extra portal for each instance of the context menu. Don't think this is going to be an issue. If it is, the whole context menu could be refactored to be a singleton.
2024-01-10 08:22:46 -05:00
968fb655a4 Report ci disk space + minor docker fixes (#5461)
* ci: add docker build timout; log free space on runner before and after build

* docker: bump frontend builder to node=20.x; skip linting on build

* chore: gitignore .pnpm-store

* update code owners for docker and CI

---------

Co-authored-by: Millun Atluri <Millu@users.noreply.github.com>
2024-01-10 05:20:26 +00:00
80ec9f4131 chore(ui): lint 2024-01-10 00:11:05 -05:00
f19def5f7b feat(ui): replace aspect ratio icon
closes #5448
2024-01-10 00:11:05 -05:00
9e1dd8ac9c fix(ui): reset canvas coords/dims on reset 2024-01-10 00:11:05 -05:00
ebd68b7a6c feat(ui): support reset canvas view when no image on canvas 2024-01-10 00:11:05 -05:00
68a231afea feat(ui): move canvas stage and base layer to nanostores 2024-01-10 00:11:05 -05:00
21ab650ac0 feat(ui): move canvas tool to nanostores
I was troubleshooting a hotkeys issue on canvas and thought I had broken the tool logic in a past change, so I redid it, moving it to nanostores. In the end, the issue was an upstream bug with the hotkeys library, but I like having the tool in nanostores so I'm leaving it.

It's ephemeral interaction state anyways, doesn't need to be in redux.
2024-01-10 00:11:05 -05:00
b501bd709f fix(ui): canvas bbox number input wonky
It was rounding dimensions when it shouldn't.

Closes #5453
2024-01-10 00:11:05 -05:00
4082f25062 feat(ui): do not optimize size when changing between models with same base model
There's a challenge to accomplish this due to our slice structure - the model is stored in `generationSlice`, but `canvasSlice` also needs to have awareness of it. For example, when the model changes, the canvas slice doesn't know what the previous model was, so it doesn't know whether or not to optimize the size.

This means we need to lift the "should we optimize size" information up. To do this, the `modelChanged` action creator accepts the previous model as an optional second arg.

Now the canvas has access to both the previous model and new model selection, and can decide whether or not it should optimize its size setting in the same way that the generation slice does.

Closes  #5452
2024-01-10 00:11:05 -05:00
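
A sketch of that action creator using an RTK `prepare` callback (the model shape is illustrative):

```ts
import { createAction } from '@reduxjs/toolkit';

interface MainModel {
  model_name: string;
  base_model: string;
}

// The outgoing model rides along as an optional second argument.
const modelChanged = createAction(
  'generation/modelChanged',
  (model: MainModel | null, previousModel?: MainModel | null) => ({
    payload: { model, previousModel },
  })
);

// canvasSlice can now compare both and only optimize its size setting when
// the base model actually changed:
// if (model?.base_model !== previousModel?.base_model) { /* optimize bbox size */ }
```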
63d74b4ba6 feat(ui): remove unnecessary tabChanged listener
This was needed when we didn't support SDXL on canvas.
2024-01-10 00:11:05 -05:00
da5907613b fix(ui): fix typing of usGalleryImages
For some reason `ReturnType<typeof useListImagesQuery>` isn't working correctly, and destructuring `queryResult` results in `any` when the hook is used.

I've removed the explicit return typing so that consumers of the hook get correct types.
2024-01-10 00:11:05 -05:00
3a9201bd31 feat: pin deps
Organise deps into ~3 categories:
- Core generation dependencies, pinned for reproducible builds.
- Core application dependencies, pinned for reproducible builds.
- Auxiliary dependencies, pinned only if necessary.

I pinned / bumped these to latest:
- `controlnet_aux`
- `fastapi`
- `fastapi-events`
- `huggingface-hub`
- `numpy`
- `python-socketio`
- `torchmetrics`
- `transformers`
- `uvicorn`

I checked the release notes for these and didn't see any breaking changes that would affect us. There is a `fastapi` breaking change in v108 related to background tasks but it doesn't affect us.

I tested on a fresh venv. The app still works and I can generate on macOS.

Hopefully, enforcing explicit pinned versions will reduce the issues where people get CPU torch.

It also means we should periodically bump versions up to ensure we don't get too far behind on our dependencies and have to do painful upgrades.
2024-01-10 00:03:29 -05:00
d6e2cb7cef fix(ui): use memoized selector for workflow watcher
Minor perf improvement.
2024-01-10 15:32:16 +11:00
0809e832d4 fix(ui): use less brutally strict workflow validation
Workflow building would fail when a current image node was in the workflow due to the strict validation.

So we need to use the other workflow builder util first, which strips out extraneous data.

This bug was introduced during an attempt to optimize the workflow building logic, which was causing slowdowns on the workflow editor.
2024-01-10 14:31:14 +11:00
7269c9f02e Enable correct probing of LoRA latent-consistency/lcm-lora-sdxl (#5449)
- Closes #5435

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-01-08 17:18:26 -05:00
d86d7e5c33 do not show toast if 403 is triggered by forbidden image (#5447)
* do not show toast if 403 is triggered by lack of image access

* remove log

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-08 12:15:46 -05:00
5d87578746 {release} v3.5.0rc5 (#5446)
## What type of PR is this? (check all applicable)

Release - InvokeAI v3.5.0rc5


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Release - InvokeAI v3.5.0rc5


## QA Instructions, Screenshots, Recordings

[InvokeAI-installer-v3.6.0rc5.zip](https://github.com/invoke-ai/InvokeAI/files/13863661/InvokeAI-installer-v3.6.0rc5.zip)


## [optional] Are there any post deployment tasks we need to perform?
Release on PyPI & GitHub
2024-01-09 03:40:34 +11:00
04aef021fc {release} v3.5.0rc5 2024-01-08 10:42:16 -05:00
0fc08bb384 ui: redesign followups 8 (#5445)
* feat(ui): get rid of convoluted socket vs appSocket redux actions

There's no need to have `socket...` and `appSocket...` actions.

I did this initially due to a misunderstanding about the sequence of handling from middleware to reducers.

* feat(ui): bump deps

Mainly bumping to get latest `redux-remember`.

A change to socket.io required a change to the types in `useSocketIO`.

* chore(ui): format

* feat(ui): add error handling to redux persistence layer

- Add an error handler to `redux-remember` config using our logger
- Add custom errors representing storage set and get failures
- Update storage driver to raise these accordingly
- wrap method to clear idbkeyval storage and tidy its logic up

* feat(ui): add debuggingLoggerMiddleware

This simply logs every action and a diff of the state change.

Due to the noise this creates, it's not added by default at all. Add it to the middlewares if you want to use it.

* feat(ui): add $socket to window if in dev mode

* fix(ui): do not enable cancel hotkeys on inputs

* fix(ui): use JSON.stringify for ROARR logger serializer

A recent change to ROARR introduced limits to the size of data that will be logged. This ends up making our logs far less useful. Change the serializer back to what it was previously.

* feat(ui): change diff util, update debuggerLoggerMiddleware

The previous diff library would present deleted things as `undefined`. Unfortunately, a JSON.stringify cycle will strip those values out. The ROARR logger does this and so the diffs end up being a lot less useful, not showing removed keys.

The new diff library uses a different format for the delta that serializes nicely.

* feat(ui): add migrations to redux persistence layer

- All persisted slices must now have a slice config, consisting of their initial state and a migrate callback (see the sketch after this list). The migrate callback is very simple for now, with no type safety. It adds missing properties to the state. A future enhancement might be to model each slice's state with e.g. zod and have proper validation and types.
- Persisted slices now have a `_version` property
- The migrate callback is called inside `redux-remember`'s `unserialize` handler. I couldn't figure out a good way to put this into the reducer and do logging (reducers should have no side effects). Also I ran into a weird race condition that I couldn't figure out. And finally, the typings are tricky. This works for now.
- `generationSlice` and `canvasSlice` both need migrations for the new aspect ratio setup, this has been added
- Stuff related to persistence has been moved into `store.ts` for simplicity
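
A sketch of such a slice config and migrate callback (the state shape is hypothetical):

```ts
// Hypothetical state shape for illustration.
const initialCanvasState = { _version: 1, aspectRatio: { id: '1:1', value: 1 } };
type CanvasState = typeof initialCanvasState;

interface SliceConfig<T> {
  initialState: T;
  migrate: (state: unknown) => T;
}

// The migrate callback just backfills properties missing from older persisted
// state - no type safety yet (zod modeling is the future enhancement noted above).
const canvasSliceConfig: SliceConfig<CanvasState> = {
  initialState: initialCanvasState,
  migrate: (state) => ({
    ...initialCanvasState,
    ...(state as Partial<CanvasState>),
    _version: 1,
  }),
};
```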

* feat(ui): clean up StorageError class

* fix(ui): scale method default is now 'auto'

* feat(ui): when changing controlnet model, enable autoconfig

* fix(ui): make embedding popover immediately accessible

Prevents hotkeys from being captured when embeddings are still loading.
2024-01-08 09:11:45 -05:00
5779542084 Updated icons + Minor UI Tweaks (#5427)
* feat: 💄 updated icons + minor ui tweaks

* revert: 💄 removes ui tweaks

* revert: 💄 removed more ui tweaks

removed more ui tweaks and a commented-out icon import

* style: 🚨 satisfy the linter

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-07 14:14:44 +11:00
ebda81e96e fix(ui): fix add node autoconnect (#5434)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Description

The new select component appears to close itself before calling the
onchange handler. This short-circuits the autoconnect logic. Tweaked so
the ordering is correct.

## Related Tickets & Documents

- Closes #5425

## QA Instructions, Screenshots, Recordings

bug should be fixed

## Merge Plan

This PR can be merged when approved

2024-01-07 08:32:52 +05:30
3fe332e85f fix(ui): fix add node autoconnect
The new select component appears to close itself before calling the onchange handler. This short-circuits the autoconnect logic. Tweaked so the ordering is correct.
2024-01-07 14:00:22 +11:00
3428ea1b3c feat(ui): use config for all numerical params
Centralize the initial/min/max/etc values for all numerical params. We used this for some but at some point stopped updating it.

All numerical params now use their respective configs. Far fewer hardcoded values throughout the app now.

Also updated the config types a bit to better accommodate slider vs number input constraints.
2024-01-07 13:49:29 +11:00
6024fc7baf Update diffusers to the latest version 2024-01-06 21:47:51 -05:00
75c1c4ce5a fix(ui): fix gallery nav math
- Use the virtuoso grid item container and list containers to calculate imagesPerRow, skipping manual compensation for padding of images
- Round the imagesPerRow instead of flooring - we often end up with values like 4.99999 due to floating point precision (see the sketch below)
- Update `getDownImage` comments & logic to be clearer
- Use variables for the ids in query selectors, preventing future typos
- Only scroll if the new selected image is different from the prev one
2024-01-06 20:52:09 -05:00
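
The rounding fix, in sketch form:

```ts
// Measure virtuoso's own list/item containers rather than compensating for
// image padding manually, then round - floating point layout often yields
// widths like 4.99999, which flooring would turn into 4 instead of 5.
const calcImagesPerRow = (listContainerWidth: number, itemContainerWidth: number): number =>
  Math.round(listContainerWidth / itemContainerWidth);
```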
ffa05a0bb3 Only replace vae when it is the broken SDXL 1.0 version 2024-01-06 14:06:47 -05:00
a20e17330b blackify 2024-01-06 14:06:47 -05:00
4e83644433 if sdxl-vae-fp16-fix model is available then bake it in when converting ckpts 2024-01-06 14:06:47 -05:00
604f0083f2 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 100.0% (1402 of 1402 strings)

Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2024-01-07 01:21:04 +11:00
2a8a158823 translationBot(ui): update translation (Russian)
Currently translated at 96.2% (1349 of 1402 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-01-07 01:21:04 +11:00
f8c3db72e9 feat(ui): improved arrow key navigation in gallery
- Fix preexisting bug where gallery network requests were duplicated when triggering infinite scroll
- Refactor `useNextPrevImage` to not use `state => state` as an input selector - logic split up into different hooks
- Use instant scroll for arrow key navigation - smooth scroll is janky when you hold the arrow down and it fires rapidly
- Move gallery nav hotkeys to GalleryImageGrid component, so they work whenever the gallery is open (previously didn't work on canvas or workflow editor tabs)
- Use nanostores for gallery grid refs instead of passing context with virtuoso's context feature, making it much simpler to do the imperative gallery nav
- General gallery hook/component cleanup
2024-01-07 01:19:32 +11:00
60815807f9 fix(ui): fix merge issue w/ selectors 2024-01-07 01:19:32 +11:00
196fb0e014 added support for bottom key 2024-01-07 01:19:32 +11:00
eba668956d up button support in gallery navigation 2024-01-07 01:19:32 +11:00
ee5ec023f4 {release} v3.6.0rc4 (#5424)
## What type of PR is this? (check all applicable)

Release v3.6.0rc4


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Release for v3.6.0rc4

## QA Instructions, Screenshots, Recordings
[Uploading InvokeAI-installer-v3.6.0rc4.zip…](Installer Zip)

## Merge Plan
- This PR can be merged when approved

## Added/updated tests?

- [ ] Yes
- [ ] No

## [optional] Are there any post deployment tasks we need to perform?
Release on PyPi & GitHub
2024-01-06 12:02:57 +11:00
d59661e0af {release} v3.6.0rc4 2024-01-06 11:08:00 +11:00
f51e8eeae1 fix(ui): better node footer spacing 2024-01-06 09:09:38 +11:00
6e06935e75 fix(ui): fix favicon
It wasn't in the right place to be bundled into `assets/` by vite.

Also replaced uncategorized board's fallback image with new logo.
2024-01-06 09:09:38 +11:00
f7f697849c Skip weight initialization when resizing text encoder token embeddings to accommodate new TI embeddings. This saves time. 2024-01-05 15:16:00 -05:00
8e17e29a5c fix text color for lora card (#5417)
* use label

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-05 09:57:16 -05:00
12e9f17f7a only GET intermediates if that setting is an option (#5416)
* only GET intermediates if that setting is an option

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-05 09:40:34 -05:00
cb7e56a9a3 chore(ui): lint 2024-01-05 08:34:46 -05:00
1a710a4c12 fix(ui): restore prev colors for workflow editor
Brand colors are now prefixed with "invoke".
2024-01-05 08:34:46 -05:00
d8d266d3be fix(ui): fix field title spacing
Closes #5405
2024-01-05 08:34:46 -05:00
4716632c23 fix(ui): tsc 2024-01-06 00:03:07 +11:00
3c4150d153 fix(ui): update most other selectors
Just a few stragglers left. Good enough for now.
2024-01-06 00:03:07 +11:00
b71b14d582 fix(ui): update workflow selectors 2024-01-06 00:03:07 +11:00
73481d4aec feat(ui): clean up canvas selectors
Do not memoize unless absolutely necessary. Minor perf improvement
2024-01-06 00:03:07 +11:00
2c049a3b94 feat(ui): clean up a few selectors that do not need to be memoized 2024-01-06 00:03:07 +11:00
367de44a8b fix(ui): tidy remaining selectors
These were just using overly verbose syntax - like explicitly typing `state: RootState`, which is unnecessary.
2024-01-06 00:03:07 +11:00
f5f378d04b fix(ui): revert back to lrumemoize 2024-01-06 00:03:07 +11:00
823edbfdef fix(ui): fix more state => state selectors 2024-01-06 00:03:07 +11:00
29bbb27289 fix(ui): re-add reselect patch
Accidentally removed it last commit.
2024-01-06 00:03:07 +11:00
a23502f7ff fix(ui): do not use state => state as an input selector
This is a no-no, whoops!
2024-01-06 00:03:07 +11:00
ce64dbefce chore(ui): lint 2024-01-06 00:03:07 +11:00
b47afdc3b5 feat(ui): patch reselect to use lruMemoize only
Pending resolution of https://github.com/reduxjs/reselect/issues/635, we can patch `reselect` to use `lruMemoize` exclusively.

Pin RTK and react-redux versions too just to be safe.

This reduces the major GC events that were causing lag/stutters in the app, particularly in canvas and workflow editor.
2024-01-06 00:03:07 +11:00
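
A sketch of the patched selector creator (the wrapper name is hypothetical). Reselect v5 defaults to `weakMapMemoize`, which is what the linked issue concerns:

```ts
import { createSelectorCreator, lruMemoize } from 'reselect';

// App-wide selector creator pinned to lruMemoize for both the result function
// and the arguments, avoiding the GC pressure described above.
export const createAppSelector = createSelectorCreator({
  memoize: lruMemoize,
  argsMemoize: lruMemoize,
});

// Usage is identical to createSelector:
// const selectFoo = createAppSelector([(s: RootState) => s.foo], (foo) => foo.bar);
```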
cde9c3090f fix(ui): use useAppSelector instead of useSelector 2024-01-06 00:03:07 +11:00
6924b04d7c feat(ui): use lruMemoize for all entity adapter selectors 2024-01-06 00:03:07 +11:00
83fbd4bdf2 ui: slightly reposition floating bars. 2024-01-05 23:59:08 +11:00
6460dcc7e0 use torch.bfloat16 on cuda systems 2024-01-04 23:25:52 -05:00
59aa009c93 {release} v3.6.0rc3 (#5408)
## What type of PR is this? (check all applicable)

Release v3.6.0rc3


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [] Yes
- [X] No


## Description
Next release candidate

## Related Tickets & Documents
N/A

## QA Instructions, Screenshots, Recordings
[Uploading InvokeAI-installer-v3.6.0rc3.zip…](Installer zip)

## Merge Plan
This PR can be merged when approved

## Added/updated tests?

- [ ] Yes
- [ ] No

## [optional] Are there any post deployment tasks we need to perform?
Release on PyPI & Github
2024-01-05 10:39:20 +11:00
59d2a012cd {release} v3.6.0rc3 2024-01-05 09:40:21 +11:00
7e3b620830 Update README.md 2024-01-04 15:56:44 -05:00
e16b55816f fix(ui): clarify comparison in usePanel 2024-01-05 07:09:37 +11:00
895cb8637e fix(ui): fix panel resize bug
A bug that caused panels to be collapsed on a fresh indexedDb was fixed in dd32c632cd, but that fix re-introduced a different bug that caused the panels to expand on window resize if they were already collapsed.

Revert the previous change and instead add one imperative resize outside the observer, so that on startup, we set both panels to their minimum sizes.
2024-01-05 07:09:37 +11:00
fe5bceb1ed translationBot(ui): update translation (Italian)
Currently translated at 97.3% (1363 of 1400 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-01-05 07:04:04 +11:00
5d475a40f5 lint 2024-01-04 12:51:36 -05:00
bca7ea1674 option to override logo component 2024-01-04 12:51:36 -05:00
f27bb402fb render one or the other 2024-01-04 11:11:10 -05:00
dd32c632cd fix default panel width (#5403)
* fix default panel width

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-04 11:04:21 -05:00
9e2e740033 custom components for nav, gallery header, and app info (#5400)
* replace custom header with custom nav component to go below settings

* add option for custom gallery header

* add option for custom app info text on logo hover

* add data-testid for tabs

* remove descriptions

* lint

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-04 15:30:27 +00:00
d6362ce0bd fix(ui): fix invalid nesting of button in button 2024-01-04 09:36:59 -05:00
2347a00a70 fix(ui): do not show loading state on floating invoke button if disabled 2024-01-04 09:36:59 -05:00
0b7dc721cf chore(ui): lint 2024-01-04 09:36:59 -05:00
ac04a834ef feat(ui): tidy hotkeysmodal state 2024-01-04 09:36:59 -05:00
bbca053b48 feat(ui): style settings modal 2024-01-04 09:36:59 -05:00
fcf2006502 feat(ui): increase brightnesst of accordion title 2024-01-04 09:36:59 -05:00
ac0d0019bd chore(ui): lint 2024-01-04 09:36:59 -05:00
2d922a0a65 feat(ui): restore floating options and gallery buttons 2024-01-04 09:36:59 -05:00
8db14911d7 feat(ui): give tooltips padding from screen edge
We can pass a popperjs modifier to the tooltip to give it this padding.
2024-01-04 09:36:59 -05:00
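
Popper v2's `preventOverflow` modifier accepts a `padding` option for exactly this; a sketch of the modifier config (how it is threaded into the tooltip component is assumed here):

```ts
// Keep the tooltip at least 10px away from the viewport edge.
const tooltipModifiers = [{ name: 'preventOverflow', options: { padding: 10 } }];
```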
01bab58b20 fix(ui): do not resize panel when window resizes if panel is collapsed 2024-01-04 09:36:59 -05:00
7a57bc99cf feat(ui): statusindicator changes
We are now using the lefthand vertical strip for the settings menu button. This is a good place for the status indicator.

Really, we only need to display something *if there is a problem*. If the app is processing, the progress bar indicates that.

For the case where the panels are collapsed, I'll add the floating buttons back in some form, and we'll indicate via those if the app is processing something.
2024-01-04 09:36:59 -05:00
d3b6d86e74 feat(ui): tweak badge styles 2024-01-04 09:36:59 -05:00
360b6cb286 fix(ui): fix logo version tooltip 2024-01-04 09:36:59 -05:00
8f9e9e639e fix(ui): fix hotkey key & untranslated string 2024-01-04 09:36:59 -05:00
6930d8ba41 feat(ui): do not wrap expander content in box 2024-01-04 09:36:59 -05:00
7ad74e680d feat(ui): make invexpander button styles less complex
Just make it like a normal button - normal and hover states, with no difference when it's expanded. The icon clearly indicates the state, and you can see the extra components.
2024-01-04 09:36:59 -05:00
c56a6a4ddd feat(ui): make expander divider button, add hover, remove color
On one hand I like the color, but on the other it makes this divider a focus point, which doesn't really make sense to me. I tried several shades but think it adds a bit too much distraction for your eyes.
2024-01-04 09:36:59 -05:00
afad764a00 feat(ui): make badges a bit paler
too stabby in the eye region
2024-01-04 09:36:59 -05:00
49a72bd714 feat(ui): use wrench icon in settings menu for settings 2024-01-04 09:36:59 -05:00
8cf14287b6 feat(ui): simplify App.tsx layout
There was an extra div, needed for the fullscreen file upload dropzone, that made styling the main app containers a bit awkward.

Refactor the uploader a bit to simplify this - no longer need so many app-level wrappers. Much cleaner.
2024-01-04 09:36:59 -05:00
0db47dd5e7 ui: Bolden text & add activation color for expanded state 2024-01-04 09:36:59 -05:00
71f6f77ae8 ui: Change background and padding of advanced settings 2024-01-04 09:36:59 -05:00
6f16229c41 fix: tone down the base color saturation by one step 2024-01-04 09:36:59 -05:00
0cc0d794d1 fix: Minor alignment issues with the queue badge 2024-01-04 09:36:59 -05:00
535639cb95 feat: Update status and progress colors to match new theme 2024-01-04 09:36:59 -05:00
2250bca8d9 feat: Remove Header
Remove header and incorporate everything else into the side bar and other areas
2024-01-04 09:36:59 -05:00
4ce39a5974 fix(ui): remove unused icons 2024-01-04 13:59:25 +11:00
644e9287f0 chore(ui): control adapter docstrings 2024-01-04 13:59:25 +11:00
6a5e0be022 fix(ui): reduce minStepsBetweenThumbs for ca begin/end 2024-01-04 13:59:25 +11:00
707f0f7091 feat(ui): update favicon to new logo 2024-01-04 13:59:25 +11:00
8e709fe05a fix(ui): remove shift+enter to cancel
Whoops!
2024-01-04 13:59:25 +11:00
154da609cb translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-01-04 01:19:31 +11:00
21975d6268 chore(ui): lint 2024-01-03 09:09:50 -05:00
31035b3e63 fix(ui): fix scaled bbox sliders
Removed logic related to aspect ratio from the components.

When the main bbox changes, if the scale method is auto, the reducers will handle the scaled bbox size appropriately.

Somehow linking up the manual mode to the aspect ratio is tricky, and instead of adding complexity for a rarely-used mode, I'm leaving manual mode as fully manual.
2024-01-03 09:09:50 -05:00
6c05818887 fix(ui): workaround canvas weirdness with locked aspect ratio
Cannot figure out how to allow the bbox to be transformed from all handles when the aspect ratio is locked. Only the bottom right handle works as expected.

As a workaround, when the aspect ratio is locked, you can only resize the bbox from the bottom right handle.
2024-01-03 09:09:50 -05:00
77c5b051f0 fix(ui): clean up actionsDenylist 2024-01-03 09:09:50 -05:00
4fdc4c15f9 feat(ui): add optimal size handling 2024-01-03 09:09:50 -05:00
1a4be78013 fix(ui): make aspect ratio preview pixel-perfect size 2024-01-03 09:09:50 -05:00
eb16ad3d6f fix(ui): remove old esc hotkey 2024-01-03 09:09:50 -05:00
1fee08639d feat(ui): tweak board search UI 2024-01-03 09:09:50 -05:00
7caaf40835 feat(ui): reworked hotkeys modal
- Displays all as list
- Uses chakra `Kbd` component for keys
- Provides search box
2024-01-03 09:09:50 -05:00
6bfe994622 feat(ui): bump fontSize in fallback component 2024-01-03 09:09:50 -05:00
8a6f03cd46 feat(ui): improved panel interactions 2024-01-03 09:09:50 -05:00
4ce9f9dc36 fix(ui): scaled bounding box uses canvas aspect ratio 2024-01-03 09:09:50 -05:00
00297716d6 Release: v3.6.0rc2 (#5386)
## What type of PR is this? (check all applicable)

Release

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
v3.6.0rc2 release

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings
Test latest main & [Uploading
InvokeAI-installer-v3.6.0rc2.zip…](Installer zip)

## Merge Plan
PR can be merged immediately
<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [X] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
Publish release on PyPI and GitHub
2024-01-03 14:58:45 +11:00
50c0dc71eb {release} v3.6.0rc2 2024-01-03 14:21:10 +11:00
29ccc6a3d8 chore(ui): lint 2024-01-03 13:18:50 +11:00
f92a5cbabc fix(ui): fix up hotkeys
- Add Shift+X back (this has been missing for a long time)
- Add secondary toggle options hotkey
2024-01-03 13:18:50 +11:00
acbf10f7ba feat(ui): improve error indicator for model fields 2024-01-03 13:18:50 +11:00
46d830b9fa feat(ui): new logo! 2024-01-03 13:18:50 +11:00
db17ec7a4b feat(ui): use dropzone noKeyboard opt instead of manual listener to disable on spacebar 2024-01-03 13:18:50 +11:00
6320d18846 fix(ui): upscale dropdown activates when clicking image actions
Weird issue with `react-select`... Made the popover lazy as a workaround.

Also updated styling of the popover.
2024-01-03 13:18:50 +11:00
37c8b9d06a fix(ui): fix sdxl style prompts
- Do not _merge_ prompt and style prompt when concat is enabled - either use the prompt as style, or use the style directly.
- Set style prompt metadata correctly.
- Add metadata recall for style prompt.
2024-01-03 13:18:50 +11:00
7ba2108eb0 fix(ui): increase contrast between disabled and enabled inputs 2024-01-03 13:18:50 +11:00
8aeeee4752 fix(ui): fix erroneous vae model display
`react-select` has some weird behaviour where if the value is `undefined`, it shows the last-selected value instead of nothing. Must fall back to `null`
2024-01-03 13:18:50 +11:00
930de51910 feat(ui): add badges for advanced settings 2024-01-03 13:18:50 +11:00
b1b5c0d3b2 fix(ui): fix workflow editor model selector, excise ONNX
Ensure workflow editor model selector component gets a value

This introduced some funky type issues related to ONNX models. ONNX doesn't work anyways (unmaintained). Instead of fixing the types to work with a non-working feature, ONNX is now removed entirely from the UI.

- Remove all refs to ONNX (and Olives)
- Fix some type issues
- Add ONNX nodes to the nodes denylist (so they are not visible in UI)
- Update VAE graph helper, which still had some ONNX logic. It's a very simple change and doesn't change any logic. Just removes some conditions that were for ONNX. I tested it and nothing broke.
- Regenerate types
- Fix prettier and eslint ignores for generated types
- Lint
2024-01-03 13:18:50 +11:00
ebe717099e feat(ui): add $store to window in dev mode
Helpful for troubleshooting.
2024-01-03 13:18:50 +11:00
06245bc761 feat(ui): add support for default values for sliders 2024-01-03 13:18:50 +11:00
b4c0dafdc8 feat(ui): remove unused iterations component 2024-01-03 13:18:50 +11:00
0cefacb3a2 feat(ui): add support for default values for numberinputs 2024-01-03 13:18:50 +11:00
baa5f75976 fix(ui): fix node styles
Got borked when adjusting control adapter styling. Should revisit this later.
2024-01-03 13:18:50 +11:00
989aaedc7f feat(nodes): add title for cfg rescale mult on denoise_latents 2024-01-03 13:18:50 +11:00
93e08df849 fix(ui): min fallback on nodes number fields -> -NUMPY_RAND_MAX 2024-01-03 13:18:50 +11:00
4a43e1c1b8 fix(ui): restore global hotkeys 2024-01-03 13:18:50 +11:00
2bbab9d94e Update recommends db backup when installing RC (#5381)
* Updater suggests db backup when installing RC

* Update invokeai_update.py to be more specific

* Update invokeai_update.py

* Update invokeai_update.py

* Update invokeai_update.py

* Update invokeai_update.py
2024-01-02 22:44:45 +00:00
a456f6e6f0 lint 2024-01-02 10:02:33 -05:00
a408f562d6 option to use new brand for loader 2024-01-02 10:02:33 -05:00
cefdf9ed00 define text color for tooltips 2024-01-02 10:02:33 -05:00
5413bf07e2 Sisco/docker allow relative paths for invokeai data (#5344)
* Update docker-compose.yml to bind local data path

* Update LOCAL_DATA_PATH in .env.sample

* Add fallback to INVOKEAI_ROOT envar if LOCAL_DATA_PATH not present.

* rename LOCAL_DATA_PATH to INVOKAI_LOCAL_ROOT

* Whoops, didn't mean to include this

* Update docker/docker-compose.yml

Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>

* [chore] rename envar

* Apply suggestions from code review

---------

Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2024-01-02 13:17:57 +00:00
4cffe282bd feat(ui): disable scan models tab
not working yet WIP
2024-01-02 07:28:53 -05:00
ae8ffe9d51 chore(ui): lint 2024-01-02 07:28:53 -05:00
870cc5b733 feat(ui): dynamic prompts loading ux
- Prompt must have an open curly brace followed by a close curly brace to enable dynamic prompts processing
- If the given prompt already has a dynamic prompt cached, do not re-process
- If processing is not needed, user may invoke immediately
- Invoke button shows loading state when dynamic prompts are processing, tooltip says generating
- Dynamic prompts preview icon in prompt box shows loading state when processing, tooltip says generating
2024-01-02 07:28:53 -05:00
0b4eb888c5 feat(ui): canvas bbox interaction tweaks
Making the math match the previous implementation
2024-01-02 07:28:53 -05:00
11f1cb5391 fix(ui): fix canvas bbox style when cursor leaves canvas 2024-01-02 07:28:53 -05:00
1e2e26cfc2 feat(ui): add open queue to queue action menu 2024-01-02 07:28:53 -05:00
e9bce6e1c3 fix(ui): fix cut off badge on queue actions menu 2024-01-02 07:28:53 -05:00
799ef0e7c1 fix(ui): control adapter models select disable if incompatible 2024-01-02 07:28:53 -05:00
61c10a7ca8 fix(ui): fix canvas bbox interactions 2024-01-02 07:28:53 -05:00
93880223e6 feat(ui): move strength up one 2024-01-02 07:28:53 -05:00
271456b745 fix(ui): fix badges for image settings canvas 2024-01-02 07:28:53 -05:00
cecee33bc0 feat(ui): support grid size of 8 on canvas
- Support grid size of 8 on canvas
- Internal canvas math works on 8
- Update gridlines rendering to show 64 spaced lines and 32/16/8 when zoomed in
- Bbox manipulation defaults to grid of 64 - hold shift to get grid of 8

Besides being something we support internally, supporting 8 on canvas avoids a lot of hacky logic needed to work well with aspect ratios.
2024-01-02 07:28:53 -05:00
4f43eda09b feat(ui): modularize imagesize components
Canvas and non-canvas have separate width and height and need their own separate aspect ratios. In order to not duplicate a lot of aspect ratio logic, the components relating to image size have been modularized.
2024-01-02 07:28:53 -05:00
011757c497 fix(ui): add numberinput to control adapter weight
Required some rejiggering of the InvControl and InvSlider styles.
2024-01-02 07:28:53 -05:00
2700d0e769 fix(nodes): fix constraints/validation for controlnet
- Fix `weight` and `begin_step_percent`, the constraints were mixed up
- Add a model validator to ensure `begin_step_percent < end_step_percent` (a sketch follows this entry)
- Bump version
2024-01-02 07:28:53 -05:00
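A minimal sketch of the kind of model validator described in the entry above, assuming pydantic v2; the class name, fields, defaults, and ranges are illustrative stand-ins, not the actual invocation definition.

```python
from pydantic import BaseModel, Field, model_validator


class ControlNetSettings(BaseModel):
    # Illustrative fields only; the real invocation has more than these.
    weight: float = Field(default=1.0, ge=-1.0, le=2.0)
    begin_step_percent: float = Field(default=0.0, ge=0.0, le=1.0)
    end_step_percent: float = Field(default=1.0, ge=0.0, le=1.0)

    @model_validator(mode="after")
    def check_step_range(self) -> "ControlNetSettings":
        # Enforce begin_step_percent < end_step_percent across the whole model.
        if self.begin_step_percent >= self.end_step_percent:
            raise ValueError("begin_step_percent must be less than end_step_percent")
        return self
```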
d256d93a2a feat(ui): use larger chevrons for number input steppers 2024-01-02 07:28:53 -05:00
f3c8e986a5 feat(ui): bump badge fontsize to 10px 2024-01-02 07:28:53 -05:00
48f5e4f313 fix(ui): missing denoise strength
accidentally hid it from everywhere
2024-01-02 07:28:53 -05:00
5950ffe064 Release/v3.6.0rc1 (#5372)
## What type of PR is this? (check all applicable)

InvokeAI 3.6.0rc1 Release 


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Update version & frontend build for Invoke v3.6.0rc1

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?

Upload release to PyPI & create release on GitHub
2024-01-02 11:21:37 +11:00
49ca949cd6 {release} Update version to 3.6.0rc1 2024-01-02 10:23:55 +11:00
5d69f1cbf5 Remove frontend build from repo permanently 2024-01-02 10:18:11 +11:00
9169006171 chore(ui): lint 2024-01-01 08:13:23 -05:00
28b74523d0 fix(ui): fix dynamic prompts with single prompt
Closes #5292

The special handling for single prompt is totally extraneous and caused a bug.
2024-01-01 08:13:23 -05:00
9359c03c3c feat(ui): use zod-less workflow builder when appropriate 2024-01-01 08:13:23 -05:00
598241e0f2 fix(ui): InvContextMenu.placement = 'auto-end'
This ensures the context menus don't get cut off when the window size is very small.
2024-01-01 08:13:23 -05:00
e698a8006c feat(ui): use lruMemoize for argsMemoize on selectors
This provides a small performance improvement, on the order of a few ms per interaction.
2024-01-01 08:13:23 -05:00
34e7b5a7fb chore(ui): lint 2024-01-01 08:13:23 -05:00
5c3dd62ae0 feat(ui): update useGlobalModifiers to store each key independently
This reduces rerenders when the user presses a modifier key.
2024-01-01 08:13:23 -05:00
7e2eeec1f3 feat(ui): optimized workflow building
- Store workflow in nanostore as singleton instead of building for each consumer
- Debounce the build (already was indirectly debounced)
- When the workflow is needed, imperatively grab it from the nanostores, instead of letting react handle it via reactivity
2024-01-01 08:13:23 -05:00
7eb79266c4 feat(ui): split dnd overlay to separate component
This reduces top-level rerenders when zooming in and out on workflow editor
2024-01-01 08:13:23 -05:00
5d4610d981 feat(ui): store node templates in separate slice
Flattens the `nodes` slice. May offer minor perf improvements in addition to just being cleaner.
2024-01-01 08:13:23 -05:00
7c548c5bf3 feat(ui): move canvas interaction state to nanostores
This drastically reduces the computation needed when moving the cursor. It also correctly separates ephemeral interaction state from redux, where it is not needed.

Also removed some unused canvas state.
2024-01-01 08:13:23 -05:00
2a38606342 fix(ui): show denoising strength on canvas 2024-01-01 08:13:23 -05:00
793cf39964 feat(ui): bump react-resizable-panels & improve usePanel hook 2024-01-01 08:13:23 -05:00
ab3e689ee0 fix(ui): fix workflow library new workflow/settings closing
Need to make the menu not lazy. A better solution is to refactor how the settings works, rendering it in a different part of the component tree
2024-01-01 08:13:23 -05:00
20f497054f feat(ui): optimized useMouseOverNode
Manually hook into pubsub to eliminate extraneous rerenders on hook change
2024-01-01 08:13:23 -05:00
6209fef63d fix(ui): focus add node popover on open
Need an extra ref to pass to the InvSelect component.
2024-01-01 08:13:23 -05:00
5168415999 feat(ui): use nanostores for useMouseOverNode
This greatly reduces the weight of the event handlers.
2024-01-01 08:13:23 -05:00
b490c8ae27 chore(ui): bump deps
Includes vite v5 - only change needed is to set .mts for vite config files.
2024-01-01 08:13:23 -05:00
6f354f16ba feat(ui): canvas perf improvements 2024-01-01 08:13:23 -05:00
e108a2302e fix(ui): fix uninteractable canvas bbox 2024-01-01 08:13:23 -05:00
2ffecef792 feat(ui): bump react-resizable-panels, improve panel resize logic 2024-01-01 08:13:23 -05:00
2663a07e94 feat(ui): misc canvas perf improvements
- disable listening when not needed
- use useMemo for gridlines
2024-01-01 08:13:23 -05:00
8d2ef5afc3 feat(ui): disable onlyRenderVisibleElements on Flow
This can cause stuttering when nodes are being moved in and out of the viewport. I think it's better to improve rendering/perf in other ways.
2024-01-01 08:13:23 -05:00
539887b215 feat(ui): misc perf/rerender improvements
More efficient selectors, memoized/stable references to objects, lazy popover/menu rendering.
2024-01-01 08:13:23 -05:00
2ba505cce9 feat(ui): use pubsub for globalcontextmenuclose
Far more efficient than the crude redux incrementor thing.
2024-01-01 08:13:23 -05:00
bd92a31d15 feat(ui): add createLruSelector
This uses the previous implementation of the memoization function in reselect. It's possible for the new weakmap-based memoization to cause memory leaks in certain scenarios, so we will avoid it for now.
2024-01-01 08:13:23 -05:00
ee2529f3fd lru 2024-01-01 08:13:23 -05:00
89b7082bc0 fix(ui): remove debug stmts 2024-01-01 08:13:23 -05:00
55dfabb892 feat(ui): make label widths grow
Fixes issue where translations overflowed due to hardcoded widths.
2024-01-01 08:13:23 -05:00
2a41fd0b29 fix(ui): fix field title styling 2024-01-01 08:13:23 -05:00
966919ea4a translationBot(ui): update translation (Russian)
Currently translated at 98.1% (1335 of 1360 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-01-01 11:38:27 +11:00
d3acdcf12f translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-01-01 11:38:27 +11:00
52f9749bf5 feat(ui): partial rebuild of model manager internal logic 2023-12-29 08:26:14 -05:00
2a661450c3 feat(ui): increase size of clear icon on selects 2023-12-29 08:26:14 -05:00
2d96c62fdb feat(ui): more memoization 2023-12-29 08:26:14 -05:00
3e6173ee8c feat(ui): only show refiner models on refiner model select 2023-12-29 08:26:14 -05:00
4e9841c924 feat(ui): add refiner cfg scale & steps defaults & marks 2023-12-29 08:26:14 -05:00
f4ea495d23 feat(ui): InvSwitch and InvSliderThumb are round 2023-12-29 08:26:14 -05:00
43a4b815e8 fix(ui): fix InvSlider vertical thumb styling 2023-12-29 08:26:14 -05:00
4134f18319 fix(ui): InvEditable, linear field view styling 2023-12-29 08:26:14 -05:00
cd292f6c1c fix(ui): remove errant console.log 2023-12-29 08:26:14 -05:00
3ce8f3d6fe feat(ui): more memoization 2023-12-29 08:26:14 -05:00
10fd4f6a61 feat(ui): update panel lib, move gallery to percentages 2023-12-29 08:26:14 -05:00
47b1fd4bce chore(ui): bump deps 2023-12-29 08:26:14 -05:00
300805a25a fix(ui): fix typing issues 2023-12-29 08:26:14 -05:00
56527da73e feat(ui): memoize all components 2023-12-29 08:26:14 -05:00
ca4b8e65c1 feat(ui): use stable objects for animation/native element styles 2023-12-29 08:26:14 -05:00
f5194f9e2d feat(ui): generation accordion badges 2023-12-29 08:26:14 -05:00
ccbbb417f9 feat(ui): fix control adapters styling 2023-12-29 08:26:14 -05:00
37786a26a5 feat(ui): move scaling up to image settings -> advanced 2023-12-29 08:26:14 -05:00
4f2930412e feat(ui): use primitive style props or memoized sx objects 2023-12-29 08:26:14 -05:00
83049a3a5b fix(ui): typo in canvas model handler 2023-12-29 08:26:14 -05:00
38256f97b3 fix(ui): fix word break on LoRACard 2023-12-29 08:26:14 -05:00
77f2aabda4 feat(ui): sort model select options with compatible base model first 2023-12-29 08:26:14 -05:00
e32eb2a649 fix(ui): restore labels in model manager selects 2023-12-29 08:26:14 -05:00
f4cdfa3b9c fix(ui): canvas layer select cut off 2023-12-29 08:26:14 -05:00
e99b715e9e fix(ui): board collapse button styling 2023-12-29 08:26:14 -05:00
ed96c40239 feat(ui): change queue icon 2023-12-29 08:26:14 -05:00
1b3bb932b9 feat(ui): reduce button fontweight to semibold 2023-12-29 08:26:14 -05:00
f0b102d830 feat(ui): ux improvements & redesign
This is a squash merge of a bajillion messy small commits created while iterating on the UI component library and redesign.
2023-12-29 08:26:14 -05:00
a47d91f0e7 feat(api): add max_prompts constraints 2023-12-29 08:26:14 -05:00
358c1f5791 Release/v3.5.1 (#5363)
## What type of PR is this? (check all applicable)

Release v3.5.1

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
InvokeAI v3.5.1 release

## [optional] Are there any post deployment tasks we need to perform?
1. Release on PyPi
2. Create GH release
3. Announce on Discord
2023-12-29 15:20:24 +11:00
faec320d48 {release} v3.5.1 2023-12-29 13:33:47 +11:00
fd074abdc4 Add frontend build 2023-12-29 13:16:23 +11:00
d8eb58cd58 Add frontend build 2023-12-29 13:15:37 +11:00
8937d66412 Add Tiled Upscaling to default workflows (#5362)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No


## Description
Add Tiled Upscaling to default workflows

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-29 12:43:47 +11:00
a6935ae7fb Add Tiled Upscaling to default workflows 2023-12-29 12:26:50 +11:00
69968eb67b add nightmare promptgen to communityNodes.md (#5360)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [x] Community Node Submission

## Description
Adds nightmare promptgen to the community nodes list.
2023-12-29 08:06:44 +11:00
e57f5f129c add nightmare promptgen to communityNodes.md 2023-12-28 13:15:52 -05:00
1b8651fa26 fix(ui): do no create extraneous pos var 2023-12-28 20:44:02 +11:00
f6664960ca Update useBuildNode.ts
Add the rect's top-left coordinates to get equivalent behavior.
2023-12-28 20:44:02 +11:00
84a001720c Added back bounds check 2023-12-28 20:44:02 +11:00
c9951cd86b Eliminate constant console deprecation warnings
React Flow 11.10 eliminates the need to use project() and issues a deprecation warning to the console every time onMouseMove is called (see https://reactflow.dev/whats-new/2023-11-10#rename-usereactflowproject-to-usereactflowscreentoflowposition). This code change eliminates that warning.
2023-12-28 20:44:02 +11:00
83a9e26cd8 Respect torch-sdp in config.yaml (#5353)
If the user specifies `torch-sdp` as the attention type in `config.yaml`, we can go ahead and use it (if available) rather than always throwing an exception.
2023-12-28 05:46:28 +00:00
80812cf7cd Update FE .gitignore and remove FE build (#5357)
## What type of PR is this? (check all applicable)

- [X] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
To release 3.5.0 successfully, a front end build needed to be in the
repo so that it would be included in the invokeai package distributed on
PyPi.

This PR remove the frontend build and updates the frontend gitignore to
not include the build.


## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [X] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
N/A
2023-12-28 16:08:13 +11:00
2a6c940047 Merge branch 'main' into fix/remove_fe_build 2023-12-28 16:04:16 +11:00
78fe9b642d fix bug when there are two multi vector TI in a prompt (#5356)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because: it's a simple fix

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
If there are two multi-vector TIs in a prompt, e.g. `<ti-1> <ti-2>`, where
ti-1 has vector size 16 and ti-2 has vector size 8, then the second one
reuses the first's ti_embedding.shape[0] and you get errors like
"<ti-2-!pad-8> is not found", because ti-2 only has vector size 8 but the
code takes the wrong ti_embedding.shape[0]. A sketch of per-TI handling follows this entry.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-28 15:59:27 +11:00
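A rough sketch of the per-TI handling the fix above implies, not the actual code: each textual inversion's pad tokens are sized from its own embedding rather than reusing the first TI's shape. The function name and pad-token format are assumptions based on the error message quoted in the PR description.

```python
import torch


def expand_ti_pad_tokens(prompt_tokens: list[str], ti_embeddings: dict[str, torch.Tensor]) -> list[str]:
    """For each trigger like <ti-1>, emit pad tokens sized by *that* TI's vector count."""
    expanded: list[str] = []
    for token in prompt_tokens:
        expanded.append(token)
        if token in ti_embeddings:
            vector_count = ti_embeddings[token].shape[0]  # e.g. 16 for ti-1, 8 for ti-2
            name = token.strip("<>")
            # Pad tokens must match this TI's own size, not the first TI's.
            expanded.extend(f"<{name}-!pad-{i}>" for i in range(1, vector_count))
    return expanded
```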
53b835945f Updated with ruff formatting 2023-12-28 11:05:19 +11:00
acba51c888 remove fe build 2023-12-28 09:44:08 +11:00
daa9d50d95 Update FE .gitignore 2023-12-28 08:45:23 +11:00
e38d0e39b7 fix bug when there are two multi vector TI in a prompt 2023-12-27 22:14:14 +01:00
2c632a811b Release/v3.5.0 (#5352)
## What type of PR is this? (check all applicable)

InvokeAI v3.5.0


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
3.5.0 release

## QA Instructions, Screenshots, Recordings

Test Installer: 

[InvokeAI-installer-v3.5.0.zip](https://github.com/invoke-ai/InvokeAI/files/13776161/InvokeAI-installer-v3.5.0.zip)


## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
* Update front end .gitignore & remove the fe build
2023-12-28 08:14:10 +11:00
6afeb37ce5 Update frontend build 2023-12-27 16:41:47 +11:00
85726c164b {release} update version to 3.5.0 2023-12-27 16:07:33 +11:00
17e1ef0140 Update git ignore to include FE build 2023-12-27 16:07:18 +11:00
cdfc01d938 Fix model names to match defaults in workflows & update example workflows in docs (#5351)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-27 10:10:15 +05:30
dc632a787a Updated example workflows 2023-12-27 15:26:05 +11:00
4e04ea0c0d fix model names to match defaults in workflows 2023-12-27 15:18:12 +11:00
f51bb00b5e Update torch xformers (#5343)
* Update torch to 2.1.2 and xformers to 0.0.23post1

* fix type
2023-12-26 06:48:32 +00:00
12f2357e70 feat(db): handle PIL errors opening images gracefully (#5314)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

## Description

For example, if PIL tries to open a *really* big image, it will raise an
exception to prevent reading a huge object into memory.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

-
https://discord.com/channels/1020123559063990373/1149513695567810630/1186200089149046804

## QA Instructions, Screenshots, Recordings

This should fix the error in the discord thread

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

Can be merged when @Millu confirms it fixes the issue he ran into

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
2023-12-26 15:38:55 +11:00
60629cba3c Merge branch 'main' into feat/db/graceful-migrate-workflows 2023-12-26 15:27:18 +11:00
5196e4bc38 translationBot(ui): update translation (Korean)
Currently translated at 57.2% (781 of 1365 strings)

Co-authored-by: 이승석 <vidicwb@ajou.ac.kr>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ko/
Translation: InvokeAI/Web UI
2023-12-24 08:23:10 +11:00
89e7848079 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 100.0% (1365 of 1365 strings)

Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-12-24 08:23:10 +11:00
5b38b5ea7f translationBot(ui): update translation (Italian)
Currently translated at 97.3% (1329 of 1365 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-12-24 08:23:10 +11:00
88c1af969f update docker setup, improve docs, fix variable value
Fixes #5336
2023-12-23 08:53:19 -05:00
fbede84405 [feature] Download Queue (#5225)
* add base definition of download manager

* basic functionality working

* add unit tests for download queue

* add documentation and FastAPI route

* fix docs

* add missing test dependency; fix import ordering

* fix file path length checking on windows

* fix ruff check error

* move release() into the __del__ method

* disable testing of stderr messages due to issues with pytest capsys fixture

* fix unsorted imports

* harmonized implementation of start() and stop() calls in download and & install modules

* Update invokeai/app/services/download/download_base.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* replace test datadir fixture with tmp_path

* replace DownloadJobBase->DownloadJob in download manager documentation

* make source and dest arguments to download_queue.download() an AnyHttpURL and Path respectively

* fix pydantic typecheck errors in the download unit test

* ruff formatting

* add "job cancelled" as an event rather than an exception

* fix ruff errors

* Update invokeai/app/services/download/download_default.py

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>

* use threading.Event to stop service worker threads; handle unfinished job edge cases

* remove dangling STOP job definition

* fix ruff complaint

* fix ruff check again

* avoid race condition when start() and stop() are called simultaneously from different threads

* avoid race condition in stop() when a job becomes active while shutting down

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
2023-12-22 12:35:57 -05:00
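The squashed PR above mentions using threading.Event to stop service worker threads. Below is a generic sketch of that shutdown pattern, not the actual download queue implementation; the class and method names are made up for illustration.

```python
import queue
import threading


class DownloadWorker(threading.Thread):
    """Generic worker pattern: drain a job queue until a stop event is set."""

    def __init__(self, jobs: "queue.Queue[str]") -> None:
        super().__init__(daemon=True)
        self._jobs = jobs
        self._stop_event = threading.Event()

    def run(self) -> None:
        while not self._stop_event.is_set():
            try:
                # Wake up periodically so a stop request is noticed even when idle.
                url = self._jobs.get(timeout=0.25)
            except queue.Empty:
                continue
            try:
                print(f"downloading {url}")  # placeholder for the real download logic
            finally:
                self._jobs.task_done()

    def stop(self) -> None:
        self._stop_event.set()
```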
756cb9c27e fix(tests): remove graph library from test fixtures 2023-12-23 00:04:48 +11:00
78b29db458 feat(backend): disable graph library
The graph library occasionally causes issues when the default graph changes substantially between versions and pydantic validation fails. See #5289 for an example.

We are not currently using the graph library, so we can disable it until we are ready to use it. It's possible that the workflow library will supersede it anyways.
2023-12-23 00:04:48 +11:00
1225c3fb47 addresses #5224 (#5332)
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2023-12-22 12:30:51 +00:00
4957a360ff close #5209 2023-12-21 23:02:57 -05:00
32ad742f3e Ti trigger from prompt util (#5294)
* Pull logic for extracting TI triggers into a util function

* Remove duplicate regex for ti triggers

* Fix linting for ruff

* Remove unused imports
2023-12-22 03:04:44 +00:00
41cd40541a Merge branch 'main' into feat/db/graceful-migrate-workflows 2023-12-22 12:21:52 +11:00
2d11d97dad remove MacOS Sonoma check in devices.py (#5312)
* remove MacOS Sonoma check in devices.py

As of pytorch 2.1.0, float16 works with our MPS fixes on Sonoma, so the check is no longer needed.

* remove unused platform import
2023-12-22 00:42:47 +00:00
64858b2523 Update contributingToFrontend.md (#5329)
The project is no longer using yarn as a package manager and has moved
to pnpm, so I wanted to update the documentation on the contribution page.

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [] No, because:
I spoke with user: imic in the #dev-chat on discord.
      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Merge Plan
- "This PR can be merged when approved"
2023-12-22 08:38:34 +11:00
d5134325f6 Merge branch 'main' into patch-1 2023-12-22 08:37:15 +11:00
702d0f68af remove (Unsaved) if workflow library is disabled 2023-12-22 07:39:17 +11:00
a0d0e9f474 Update contributingToFrontend.md
The project is no longer using yarn as a package manager and has moved to pnpm, so I wanted to update the documentation on the contribution page.
2023-12-21 14:51:17 -05:00
475823835f Update communityNodes.md
Addition of my Adapters-Linked and Metadata-linked nodes
2023-12-21 13:51:59 -05:00
b95d547ccc Add more default workflows (#5325)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Added more default workflows to the workflow library

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-21 14:40:19 +11:00
9b4758f02f Merge branch 'main' into feat/default_workflows 2023-12-21 10:35:02 +11:00
8d2952695d translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 99.8% (1363 of 1365 strings)

Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-12-21 09:56:06 +11:00
8dd55cc45e t2i with LoRA 2023-12-21 09:54:12 +11:00
562fb1f3a1 add authToastMiddleware back and fix parsing 2023-12-20 14:59:33 -05:00
21ed2d42cd Merge branch 'main' into feat/db/graceful-migrate-workflows 2023-12-20 21:54:21 +11:00
79cf3ec9a5 Add facedetailer workflow 2023-12-20 18:53:49 +11:00
37b76caccf Added default workflows 2023-12-20 17:42:14 +11:00
Sam
a4f9bfc8f7 Update Dockerfile 2023-12-19 18:38:36 -05:00
Sam
9afdd0f4a8 Update Dockerfile 2023-12-19 18:38:36 -05:00
bee6ad1547 fix(pnpm): replace npm with pnpm in dockerfile 2023-12-19 18:38:36 -05:00
fa3f1b6e41 [Feat] reimport model config records after schema migration (#5281)
* add code to repopulate model config records after schema update

* reformat for ruff

* migrate model records using db cursor rather than the ModelRecordConfigService

* ruff fixes

* tweak exception reporting

* fix: build frontend in  pypi-release workflow

This was missing, resulting in the 3.5.0rc1 having no frontend.

* fix: use node 18, set working directory

- Node 20 has  a problem with `pnpm`; set it to Node 18
- Set the working directory for the frontend commands

* Don't copy extraneous paths into installer .zip

* feat(installer): delete frontend build after creating installer

This prevents an empty `dist/` from breaking the app on startup.

* feat: add python dist as release artifact, as input to enable publish to pypi

- The release workflow never runs automatically. It must be manually kicked off.
- The release workflow has an input. When running it from the GH actions UI, you will see a "Publish build on PyPi" prompt. If this value is "true", the workflow will upload the build to PyPi, releasing it. If this is anything else (e.g. "false", the default), the workflow will build but not upload to PyPi.
- The `dist/` folder (where the python package is built) is uploaded as a workflow artifact as a zip file. This can be downloaded and inspected. This allows "dry" runs of the workflow.
- The workflow job and some steps have been renamed to clarify what they do

* translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI

* freeze yaml migration logic at upgrade to 3.5

* moved migration code to migration_3

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
2023-12-19 17:01:47 -05:00
d0fa131010 (feat) updater installs from PyPi instead of GitHub releases (#5316)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [X] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No


## Description
Updater script pulls from PyPI instead of GitHub releases (this is why
the RC packages are having issues when updating through the launcher
script)

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [X] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-19 13:15:38 +11:00
2f438431bd (fix) update logic for installing specific version 2023-12-19 11:05:15 +11:00
bbeb5cb477 Merge branch 'main' into feat/updater_use_pypi 2023-12-19 10:09:03 +11:00
cd3111c324 fix ruff errors 2023-12-19 09:58:10 +11:00
16b7246412 (feat) updater installs from PyPi instead of GitHub releases 2023-12-19 09:30:40 +11:00
42be78d328 translationBot(ui): update translation (Italian)
Currently translated at 97.2% (1327 of 1365 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-12-19 07:20:14 +11:00
e469e24a58 Update model_probe to work with diffuser-format SD TI embeddings. (#5301)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

      
## Have you updated all relevant documentation?
- [x] Yes (N/A)
- [ ] No


## Description

This change enables the model probe to work with TI embeddings that have
the follow state_dict structure:

```python
{
    "<any_key>": torch.Tensor(...), # where the tensor has shape (N, embedding_dim)
}
```

## QA Instructions, Screenshots, Recordings

I can't imagine an embedding format that would previously have passed
the model probe, and would now fail after this change. That being said,
I'll exercise a bunch of existing TIs before merging.

- [x] Exercise existing TI formats


## Added/updated tests?

- [ ] Yes
- [x] No : _We could really benefit from tests for all of the supported
TI formats... but I'm not taking on that project right now._
2023-12-18 10:01:04 -05:00
cb698ff1fb Update model_probe to work with diffuser-format SD TI embeddings. 2023-12-18 09:51:16 -05:00
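A simplified sketch of the kind of check the probe change implies, based on the state_dict structure shown in the PR above; this is illustrative, not the actual model_probe code.

```python
import torch


def looks_like_diffusers_ti(state_dict: dict) -> bool:
    """Accept a TI embedding whose state_dict is {<any_key>: Tensor of shape (N, embedding_dim)}."""
    if len(state_dict) != 1:
        return False
    tensor = next(iter(state_dict.values()))
    return isinstance(tensor, torch.Tensor) and tensor.ndim == 2
```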
45470a3ac8 Merge branch 'main' into feat/db/graceful-migrate-workflows 2023-12-18 23:32:28 +11:00
0e738c4290 Tag model manager v2 api as unstable (#5311)
## What type of PR is this? (check all applicable)

- [X] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

As discussed with @psychedelicious , this PR changes the swagger label
on the model manager V2 routes to `model_manager_v2_unstable` in order
to warn community members that the API is liable to change.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-18 07:09:40 -05:00
09d1bc513d Merge branch 'main' into refactor/model-manager2/mark-api-experimental 2023-12-18 07:04:00 -05:00
b6ed4ba559 feat(db): handle PIL errors opening images gracefully
For example, if PIL tries to open a *really* big image, it will raise an exception to prevent reading a huge object into memory.
2023-12-18 18:02:31 +11:00
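A generic sketch of treating the PIL failure mode described above (e.g. a decompression-bomb error on a very large image) as a soft failure; this is illustrative, not the actual migration code.

```python
from typing import Optional

from PIL import Image, UnidentifiedImageError


def try_open_image(path: str) -> Optional[Image.Image]:
    """Open an image, but treat PIL's safety and parse errors as a skip rather than a crash."""
    try:
        with Image.open(path) as im:
            im.load()  # force decoding while the file handle is open
            return im
    except (Image.DecompressionBombError, UnidentifiedImageError, OSError) as e:
        print(f"Skipping {path}: {e}")
        return None
```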
aefa828237 Tiled upscaling - EvenSplit to use overlap in pixels instead tile fraction (#5309)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
Change CalculateImageTilesEvenSplitInvocation to have an overlap in
pixels rather than as a percentage of the tile. This makes it easier to
have predictable blending of the seams as you have a known overlap size.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [x] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-17 21:13:45 -05:00
74ea592d02 tag model manager v2 api as unstable 2023-12-17 14:16:45 -05:00
457b0dfac0 Merge branch 'main' into tiled-upscaling-graph 2023-12-17 15:12:16 +00:00
96a717c4ba In CalculateImageTilesEvenSplitInvocation, overlap_fraction becomes just overlap, which is now in pixels rather than a fraction of the tile size.
Update calc_tiles_even_split() with the same change, ensuring the overlap stays within the allowed size. An illustrative sketch of the pixel-overlap arithmetic follows this entry.

Update even_split tests
2023-12-17 15:10:50 +00:00
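An illustrative take on the pixel-overlap arithmetic described above (tile start positions along one axis); the real calc_tiles_even_split() works on full 2D tile definitions and may differ in its details.

```python
import math


def even_split_starts(image_dim: int, num_tiles: int, overlap: int) -> list[int]:
    """Start positions for `num_tiles` tiles covering `image_dim` pixels with a fixed pixel overlap."""
    if num_tiles == 1:
        return [0]
    # Tile size large enough that tiles overlapping by `overlap` pixels still span the image.
    tile_size = math.ceil((image_dim + (num_tiles - 1) * overlap) / num_tiles)
    stride = tile_size - overlap
    return [min(i * stride, image_dim - tile_size) for i in range(num_tiles)]
```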
77b74264a8 Simplify docker compose setup (#5046)
## What type of PR is this? (check all applicable)

- [x] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes -
https://github.com/invoke-ai/InvokeAI/pull/5007#discussion_r1378792615
- [ ] No, because: 

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

Simplify Docker image creation and execution to a single script that
spins up the right service in the docker compose file.
## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Depends on #5007

## QA Instructions, Screenshots, Recordings
N/A
<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [x] No : same tests should work.

## [optional] Are there any post deployment tasks we need to perform?

Not to my knowledge.
2023-12-17 17:10:56 +11:00
351078e8aa Merge branch 'main' into simplify-docker-compose-setup 2023-12-17 17:07:55 +11:00
b8354bd1a4 Merge branch 'main' into tiled-upscaling-graph 2023-12-16 19:09:28 +00:00
3b944b8af6 fix: build frontend in pypi-release workflow (#5298)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

## Description

This was missing, resulting in the 3.5.0rc1 having no frontend.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Discord installer thread:
https://discord.com/channels/1020123559063990373/1149513695567810630/1185200427717898260
- Comments from here in the release chat:
https://discord.com/channels/1020123559063990373/1020123559831539744/1185004017521279007

## QA Instructions, Screenshots, Recordings

I've run this locally and it works (I commented out the final steps of
the workflow that do PyPi stuff to ensure I didn't accidentally deploy
something).

You can run the workflow locally with https://github.com/nektos/act.
Suggest using the `gh` CLI version, its very easy to set up if you have
the github CLI installed. Then you can run `gh act -W
.github/workflows/pypi-release.yml` to run the workflow locally in a
docker image.

I don't know this local action runner would actually release to PyPi -
as mentioned, I commented those steps out when testing - but it does
successfully do both frontend and backend builds.

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

This needs @lstein 's approval.

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## [optional] Are there any post deployment tasks we need to perform?

Cut an RC2
2023-12-16 10:40:36 -05:00
b811c037bd Merge branch 'main' into fix/pypi-release-frontend-build 2023-12-16 10:36:03 -05:00
5bf61382a4 feat: add python dist as release artifact, as input to enable publish to pypi
- The release workflow never runs automatically. It must be manually kicked off.
- The release workflow has an input. When running it from the GH actions UI, you will see a "Publish build on PyPi" prompt. If this value is "true", the workflow will upload the build to PyPi, releasing it. If this is anything else (e.g. "false", the default), the workflow will build but not upload to PyPi.
- The `dist/` folder (where the python package is built) is uploaded as a workflow artifact as a zip file. This can be downloaded and inspected. This allows "dry" runs of the workflow.
- The workflow job and some steps have been renamed to clarify what they do
2023-12-16 20:02:09 +11:00
0f1c5f382a feat(installer): delete frontend build after creating installer
This prevents an empty `dist/` from breaking the app on startup.
2023-12-16 19:39:29 +11:00
4af1695c60 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-12-16 13:10:47 +11:00
df9a903a50 fix(ui): do not cache VAE decode on linear
The VAE decode on linear graphs was getting cached. This caused some unexpected behaviour around image outputs.

For example, say you ran the exact same graph twice. The first time, you get an image written to disk and added to gallery. The second time, the VAE decode is cached and no image file is created. But, the UI still gets the graph complete event and selects the first image in the gallery. The second run does not add an image to the gallery.

There are probably edge cases related to this - the UI does not expect this to happen. I'm not sure how to handle it any better in the UI.

The solution is to not cache VAE decode on the linear graphs, ever. If you run a graph twice in linear, you expect two images.

This simple change disables the node cache for terminal VAE decode nodes in all linear graphs, ensuring you always get images. If the graph was fully cached, all images after the first will be created very quickly of course.
2023-12-16 12:37:49 +11:00
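A hypothetical sketch of what the change amounts to on the graph side: the terminal latents-to-image (VAE decode) node in a linear graph opts out of the node cache. The exact node type and field names here are assumptions for illustration, not the actual graph builder output.

```python
# Hypothetical shape of a linear graph's terminal VAE decode node after the change:
# it always produces a fresh image instead of hitting the node cache.
l2i_node = {
    "id": "latents_to_image",
    "type": "l2i",              # latents -> image (VAE decode)
    "is_intermediate": False,   # terminal node: its output becomes the gallery image
    "use_cache": False,         # never cache, so a repeated graph still writes an image
}
```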
311be8f97d Merge branch 'main' into fix/pypi-release-frontend-build 2023-12-16 10:15:32 +11:00
3f970c8326 Don't copy extraneous paths into installer .zip 2023-12-15 11:27:21 -05:00
fc150acde5 [feat] Make model prober recognize yet another LoRA format (#5296)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

This adds a probe for the SDXL LoRA format found in the wild at
https://civitai.com/models/224641.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

See discord message at:
https://discord.com/channels/1020123559063990373/1149510134058471514/1184982133912113182

## QA Instructions, Screenshots, Recordings

Try installing the SDXL LoRA at the URL given above.
## Merge Plan

This can be merged when approved.
## Added/updated tests?

- [ ] Yes
- [X] No : we do not yet have a comprehensive suite of models to test
probing on.

## [optional] Are there any post deployment tasks we need to perform?
2023-12-15 09:49:51 -05:00
1615df3aa1 fix: use node 18, set working directory
- Node 20 has  a problem with `pnpm`; set it to Node 18
- Set the working directory for the frontend commands
2023-12-16 00:32:31 +11:00
b2a8c45553 fix: build frontend in pypi-release workflow
This was missing, resulting in the 3.5.0rc1 having no frontend.
2023-12-15 23:56:31 +11:00
212dbaf9a2 fix comment 2023-12-15 00:25:27 -05:00
ac3cf48d7f make probe recognize lora format at https://civitai.com/models/224641 2023-12-15 00:25:27 -05:00
454f01e0c1 [feature] add ability to filter model listings by format (#5286)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

This minor change adds the ability to filter the model lists returned by
V2 of the model manager using the model file format (e.g. "checkpoint").
Just thought this would be a useful feature.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

This can be merged when approved without any adverse effects.

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [X] No : minor feature - tested informally using the router API

## [optional] Are there any post deployment tasks we need to perform?
2023-12-15 00:03:01 -05:00
72dca55e44 Merge branch 'feat/model_manager/search-by-format' of github.com:invoke-ai/InvokeAI into feat/model_manager/search-by-format 2023-12-14 23:55:08 -05:00
264ea6d94d fix ruff errors 2023-12-14 23:54:59 -05:00
60e3e653fa Merge branch 'main' into feat/model_manager/search-by-format 2023-12-14 23:53:54 -05:00
082894c377 Adding Kapa Assistant to Docs (#5290)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description
This adds the Kapa assistant to our docs.
2023-12-15 09:47:40 +11:00
4b00f8fc82 Merge branch 'main' into Adding-Kapa-assistant-to-docs 2023-12-15 09:46:25 +11:00
6ea09ba0b6 feat(ui): workflow menu tweaks
- "Reset Workflow Editor" -> "New Workflow"
- "New Workflow" gets nodes icon & is no longer danger coloured
- When creating a new workflow, if the current workflow has unsaved changes, you get a dialog asking for confirmation. If the current workflow is saved, it immediately creates a new workflow.
- "Download Workflow" -> "Save to File"
- "Upload Workflow" -> "Load from File"
- Moved "Load from File" up 1 in the menu
2023-12-14 08:30:59 -05:00
296060db63 Add cpu and rocm profiles. Let invokeai-nvidia service be the default. 2023-12-13 23:23:43 -05:00
d1d8ee71fc Simplify docker compose setup 2023-12-13 23:23:43 -05:00
42c04db167 adding kapa widget to docs 2023-12-13 22:33:50 -05:00
b935768eeb Update mkdocs.yml 2023-12-13 22:28:47 -05:00
ea4ef042f3 Ruff fixes 2023-12-14 12:47:10 +11:00
18b2bcbbee Added Classification from baseinvocation 2023-12-14 12:47:10 +11:00
5ad88c7f86 Fixed classification 2023-12-14 12:47:10 +11:00
3b04fef31d Added classification 2023-12-14 12:47:10 +11:00
bec888923a Fix for ruff 2023-12-14 12:47:10 +11:00
c6235049c7 Add an unsharp mask node to core nodes
Unsharp mask is an image operation that, despite its name, sharpens an image. Like a Gaussian blur, it takes a radius and strength.
2023-12-14 12:47:10 +11:00
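
A minimal sketch of that operation with Pillow and NumPy (the parameter handling is an assumption, not the node's actual code):

```
import numpy as np
from PIL import Image, ImageFilter

def unsharp_mask(image: Image.Image, radius: float = 2.0, strength: float = 0.5) -> Image.Image:
    # Sharpen by adding back the difference between the image and its blur.
    blurred = image.filter(ImageFilter.GaussianBlur(radius))
    img = np.asarray(image).astype(np.float32)
    blur = np.asarray(blurred).astype(np.float32)
    sharpened = np.clip(img + strength * (img - blur), 0, 255).astype(np.uint8)
    return Image.fromarray(sharpened)
```
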
e10f6e8962 fix(nodes): mark CalculateImageTilesInvocation as beta
missed this when I added classification
2023-12-13 20:33:25 -05:00
77f04ff8d6 docs: add warning to developer install about database & main 2023-12-14 11:47:33 +11:00
461e474394 fix(nodes): fix embedded workflows with IDs
This model was a bit too strict, and raised validation errors when workflows we expect to *not* have an ID (e.g. an embedded workflow) have one.

Now it strips unknown attributes, allowing those workflows to load.
2023-12-14 11:38:04 +11:00
f0c70fe3f1 fix(db): add error handling for workflow migration
- Handle an image file not existing despite being in the database.
- Add a simple pydantic model that tests only for the existence of a workflow's version.
- Check against this new model when migrating workflows, skipping if the workflow fails validation. If it succeeds, the frontend should be able to handle the workflow.
2023-12-14 10:16:56 +11:00
442ac2b828 fix(ui): fix frontend workflow migration when node is missing version
This should default to "1.0.0" to match the behaviour of the backend.
2023-12-14 09:59:11 +11:00
bb986b97f3 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 99.8% (1363 of 1365 strings)

Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-12-13 17:11:45 -05:00
98655db57b translationBot(ui): update translation (Russian)
Currently translated at 98.1% (1340 of 1365 strings)

translationBot(ui): update translation (Russian)

Currently translated at 84.2% (1150 of 1365 strings)

translationBot(ui): update translation (Russian)

Currently translated at 83.1% (1135 of 1365 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2023-12-13 17:11:45 -05:00
8845894e83 translationBot(ui): update translation (Italian)
Currently translated at 97.0% (1325 of 1365 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-12-13 17:11:45 -05:00
937c7e957d add merge plan to PR template 2023-12-13 16:59:31 -05:00
569ae7c482 add ability to filter model listings by format 2023-12-13 15:59:21 -05:00
340957f920 Update torch to 2.1.1 and xformers to 0.0.23 2023-12-13 14:49:32 -05:00
076d9b05ea Update transformers to 4.36 and Accelerate to 0.25 2023-12-13 14:42:34 -05:00
2b54e240d4 Bump Diffusers Dependency (#5243)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [X] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because: dependency bump

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
Updating diffusers to 0.24 - fixes a few issues. Needs to be tested to
ensure things like our IP Adapter implementation don't break.
2023-12-13 20:31:00 +05:30
5127e9df2d Fix error caused by bump to diffusers 0.24. We pass kwargs now instead of positional args so that we are more robust to future changes. 2023-12-13 09:17:30 -05:00
42329a1849 Updating HF Hub dependency 2023-12-13 09:17:30 -05:00
42bc6ef154 Bump Diffusers Dependency 2023-12-13 09:17:30 -05:00
6c6c45c3da feat(db): add SQLiteMigrator to perform db migrations (#5227)
## What type of PR is this? (check all applicable)

- [x] Refactor
- [x] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Have you discussed this change with the InvokeAI team?

- [x] Yes
- [ ] No, because:

## Description

This PR enhances our SQLite database with migration logic.

### `SQLiteMigrator` class

The new `SQLiteMigrator` class handles safely running database
migrations. It is initialized in the `SqliteDatabase` class's init, and
immediately runs all database migrations.

### `Migration` class

Migrations are represented by a `Migration` class, which has 3
attributes:

- `db_version: int`: The database version this migration results in.
- `app_version: str`: The semver app version this migration is run for.
- `migrate: Callable[[sqlite3.Cursor], None]`: A function that performs
the migration. It receives a cursor _only_, but can do anything it wants
to do. A convention is established for these functions.

All schema-creating SQL now lives in a `migrate` function. We haven't
needed to make any data migrations yet, but when we do, this will also
be handled within one of these callbacks.
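
Read literally, that gives the class roughly this shape (a minimal sketch; the example body and anything beyond the three attributes are assumptions):

```
import sqlite3
from dataclasses import dataclass
from typing import Callable

MigrateCallback = Callable[[sqlite3.Cursor], None]

@dataclass
class Migration:
    db_version: int           # the database version this migration results in
    app_version: str          # the semver app version this migration is run for
    migrate: MigrateCallback  # receives a cursor only and performs the work

def _migrate_1(cursor: sqlite3.Cursor) -> None:
    # Hypothetical body: all schema-creating SQL lives in functions like this.
    cursor.execute("CREATE TABLE IF NOT EXISTS images (image_name TEXT PRIMARY KEY)")

migration_1 = Migration(db_version=1, app_version="3.4.0post2", migrate=_migrate_1)
```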

### Migration Flow

First, migrations are registered with `SQLiteMigrator` via its
`register_migration` method. This performs some basic checks of the
migration version.

After registering all migrations, they are run with the `run_migrations`
method. This does a few things (a sketch follows the list below):

- Creates a `version` table in the DB, if it doesn't already exist. This
table has `db_version INTEGER`, `app_version TEXT` and `migrated_at
DATETIME` columns.
- Sorts the migrations by their `db_version`.
- Does some checks to see if we need a migration.
- Backs up the database (if it's a file database). The migration bails
out if this fails.
- Runs each migration. If there is a problem, restores from backup.
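
Continuing the `Migration` sketch above, the flow might look like this (backup/restore and most error handling elided):

```
def run_migrations(conn: sqlite3.Connection, migrations: list[Migration]) -> None:
    cursor = conn.cursor()
    cursor.execute(
        "CREATE TABLE IF NOT EXISTS version "
        "(db_version INTEGER, app_version TEXT, migrated_at DATETIME)"
    )
    row = cursor.execute("SELECT MAX(db_version) FROM version").fetchone()
    current = row[0] or 0
    for migration in sorted(migrations, key=lambda m: m.db_version):
        if migration.db_version <= current:
            continue  # already applied
        migration.migrate(cursor)
        cursor.execute(
            "INSERT INTO version VALUES (?, ?, CURRENT_TIMESTAMP)",
            (migration.db_version, migration.app_version),
        )
        conn.commit()
```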

### Included Migrations

Migrations are in `invokeai/app/services/shared/sqlite/migrations`.

#### `migration_1.py`

All\* schema SQL up to 3.4.0post2 is in `migration_1.py`. Running only
this migration should result in a database that is identical to the one
you get from starting up 3.4.0post2.

SQL in this migration is **idempotent** (same as it was when the SQL was
spread across the various services).
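
"Idempotent" here means the statements can safely run more than once, e.g.:

```
import sqlite3

cursor = sqlite3.connect(":memory:").cursor()
# Idempotent: safe to run any number of times.
cursor.execute("CREATE TABLE IF NOT EXISTS images (image_name TEXT PRIMARY KEY)")
cursor.execute("CREATE TABLE IF NOT EXISTS images (image_name TEXT PRIMARY KEY)")
# Not idempotent: a second run fails with "duplicate column name".
cursor.execute("ALTER TABLE images ADD COLUMN has_workflow BOOLEAN DEFAULT FALSE")
```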

#### `migration_2.py`

Schema changes through 3.5.0 (the upcoming release) are in
`migration_2.py`.

SQL in this migration is **not idempotent**. Future migrations need not
be idempotent, as the migration logic ensures each will only be run
once.

### \*Caveat - ItemStorage

This class provides a generic document-db-like interface for storing
objects. Our `graph_executions` and `graphs` tables are created and
managed by this service. This PR does not touch this class and therefore
does not touch either of those two tables.

We can decide how to handle those tables in the future as the need
arises.

### Change to Model Manager Metadata table

I noticed that there is a `model_manager_metadata` table which included
the app version, and whose `version` property wasn't accessed outside
the service.

I believe the new `version` table fulfills the purpose of this table,
and have removed it.

@lstein Please let me know if this is not right.

## QA Instructions, Screenshots, Recordings

1.  Case 1 - Upgrade

    - Back up your 3.4.0post2 database
    - Run this PR
- It should upgrade your database and everything should work exactly
like it did before

2.  Case 2 - New Install

- Move your database out of the invoke root so that when the app starts,
it creates a new one
    - Run this PR
    - It should work just like a new install

3.  Case 3 - With an In-Memory Database

- Enable the in-memory database (set `use_memory_db` under
`Paths` in `invokeai.yaml` to `true`)
    - Run this PR
    - It should work just like a new install


## Added/updated tests?

- [x] Yes: Fairly comprehensive tests are added for the
`SQLiteMigrator`.
- [ ] No : _please replace this line with details on why tests
      have not been included_
2023-12-13 09:04:51 -05:00
f76b04a3b8 fix(db): rename "SQLiteMigrator" -> "SqliteMigrator" 2023-12-13 11:31:15 +11:00
821e0326c9 fix(db): formatting 2023-12-13 11:25:57 +11:00
cc18d86f29 Merge branch 'main' into feat/db/migrations 2023-12-13 11:24:55 +11:00
ed1583383e fix(db): remove stale comment in tests 2023-12-13 11:24:27 +11:00
c50a49719b fix(db): raise a MigrationVersionError when invalid versions are used
This inherits from `ValueError`, so pydantic understands it when doing validation.
2023-12-13 11:21:16 +11:00
ebf5f5d418 feat(db): address feedback, cleanup
- use simpler pattern for migration dependencies
- move SqliteDatabase & migration to utility method `init_db`, use this in both the app and tests, ensuring the same db schema is used in both
2023-12-13 11:19:59 +11:00
386b656530 feat(db): remove unnecessary fixture declaration
Also revert the change to `conftest.py` in which the file was flagged for pytest to crawl for fixtures.
2023-12-13 10:13:03 +11:00
d7cede6c28 chore/fix: bump fastapi to 0.105.0
This fixes a problem with `Annotated` which prevented us from using pydantic's `Field` to specify a discriminator for a union. We had to use FastAPI's `Body` as a workaround.
2023-12-13 09:48:34 +11:00
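
The pattern this unblocks looks roughly like the following sketch (the models are illustrative, not InvokeAI's):

```
from typing import Annotated, Literal, Union
from pydantic import BaseModel, Field, TypeAdapter

class LocalSource(BaseModel):
    type: Literal["local"] = "local"
    path: str

class UrlSource(BaseModel):
    type: Literal["url"] = "url"
    url: str

# Before FastAPI 0.105, a request-body union like this needed FastAPI's
# Body() to carry the discriminator; now Field() works inside Annotated.
AnySource = Annotated[Union[LocalSource, UrlSource], Field(discriminator="type")]

print(TypeAdapter(AnySource).validate_python({"type": "url", "url": "https://example.com/m.safetensors"}))
```
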
15de7c21d9 updated tests with a test for tile > image for calc_tiles_min_overlap() 2023-12-12 10:24:00 -05:00
9620f9336c updated comment 2023-12-12 10:24:00 -05:00
a64ced7b29 remove unneeded if else 2023-12-12 10:24:00 -05:00
dd7deff1a3 fix for calc_tiles_min_overlap when tile size is bigger than image size 2023-12-12 10:24:00 -05:00
612912a6c9 updated tests with a test for tile > image for calc_tiles_min_overlap() 2023-12-12 14:12:22 +00:00
bca2372280 updated comment 2023-12-12 14:02:28 +00:00
0b860582f0 remove unneeded if else 2023-12-12 14:00:06 +00:00
87ff380fe4 fix for calc_tiles_min_overlap when tile size is bigger than image size 2023-12-12 13:40:28 +00:00
2cdda1fda2 Merge remote-tracking branch 'origin/main' into feat/db/migrations 2023-12-12 17:22:52 +11:00
6caa70123d translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 96.4% (1314 of 1363 strings)

Co-authored-by: junzi <nomal.si2621.vip@qq.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-12-12 17:15:54 +11:00
7e831c8a96 Selected in View within Gallery (#5240)
* selector added

* ref and useEffect added

* scrolling done using useEffect

* fixed scroll and changed the ref name

* fixed scroll again

* created hook for scroll logic

* feat(ui): debounce metadata fetch by 300ms

This vastly reduces the network requests when using the arrow keys to quickly skim through images.

* feat(ui): extract logic to determine virtuoso scrollToIndex align

This needs to be used in `useNextPrevImage()` to ensure the scrolling puts the image at the top or bottom appropriately

* feat(ui): add debounce to image workflow hook

This was spamming network requests like the metadata query

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-12-12 17:14:28 +11:00
3d64bc886d feat(nodes): flag all tiled upscaling nodes as beta 2023-12-12 16:43:05 +11:00
1a136d6167 feat(nodes): fix classification docstrings 2023-12-12 16:43:05 +11:00
43f2837117 feat(nodes): add invocation classifications
Invocations now have a classification:
- Stable: LTS
- Beta: LTS planned, API may change
- Prototype: No LTS planned, API may change, may be removed entirely

The `@invocation` decorator has a new arg `classification`, and an enum `Classification` is added to `baseinvocation.py`.

The default is Stable; this is a non-breaking change.

The classification is presented in the node header as a hammer icon (Beta) or flask icon (prototype).

The icon has a tooltip briefly describing the classification.
2023-12-12 16:43:05 +11:00
5f77ef7e99 feat(db): improve docstrings in migrator 2023-12-12 16:30:57 +11:00
22ccaa4e9a [Feature] Allow the model record migrate script to update existing model records (#5264)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No


## Description

1. The new model manager sqlite3-based configuration record storage
system is automatically populated with probed values from existing
models found in the models path when `invokeai-web` starts up for the
first time. However, the user's customization of these models in
`invokeai.yaml`, including such things as the prediction type and model
description, are not automatically copied over. This PR enhances the
`invokeai-migrate-models-to-db` script so that any customized
configuration data from `invokeai.yaml` replaces the original probed
values. This script only needs to be run once, but it does not hurt to
run it additional times. In the near future, I'm going to register this
module with psychedelicious's sqlite migration system so that the update
happens automatically during database migration.

2. The SQL-based model config record system stores a JSON version of the
config, as well as several fields that are broken out into individual
columns for search/indexing purposes. This PR keeps the JSON and the
broken-out fields in sync, using the `json_extract()` sqlite3 function to
populate the `base`, `type`, `name`, `path` and `format` columns in the
`model_config` table (a sketch of the pattern follows below).

3. Finally, this PR fixes the annoying `invokeai-web` shutdown message:
`TypeError: ModelInstallService.stop() takes 1 positional argument but 2
were given`
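
A sketch of the `json_extract()` sync pattern from item 2 (the name of the JSON column is an assumption):

```
import sqlite3

conn = sqlite3.connect("invokeai.db")  # path is illustrative
cursor = conn.cursor()
# The JSON blob is treated as the source of truth and the indexed
# columns are re-derived from it, so the two can never drift apart.
cursor.execute(
    """
    UPDATE model_config SET
        base   = json_extract(config, '$.base'),
        type   = json_extract(config, '$.type'),
        name   = json_extract(config, '$.name'),
        path   = json_extract(config, '$.path'),
        format = json_extract(config, '$.format')
    """
)
conn.commit()
```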

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

If you've run `invokeai-web` at any time since PR #5039, your
`invokeai.db` will have a `model_config` table containing probe
information from all models in the invokeai models directory as well as
those in `autoimport` (if applicable). However, any models present in
`models.yaml` whose paths are outside these directories will not be
present. To add them, and to update the description and other values
from `models.yaml`, run the command `invokeai-migrate-models-to-db`. You
should see the missing models added to the database table with the
correct information.

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [X] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-12 00:25:05 -05:00
d277bd3c38 Merge branch 'main' into feat/enhance-model-db-migrate-script 2023-12-12 00:24:43 -05:00
fd4e041e7c feat: serve HTTPS when configured with ssl_certfile 2023-12-12 16:01:43 +11:00
15a3e8076f Merge branch 'main' into feat/enhance-model-db-migrate-script 2023-12-11 23:10:04 -05:00
2fbe3a3104 fix ruff error 2023-12-11 23:04:18 -05:00
b0cfa58526 allow the model record migrate script to update existing model records 2023-12-11 22:47:19 -05:00
285ed26edd Add commands to Makefile for convenient release preparation (#5263)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

This PR does three things:

1) It separates out the script that creates the installer zipfile
(`create_installer.sh`) from the script that tags the repository with
the current release version (now called `tag_release.sh`)

2) It adds new targets to Makefile for running the installer script and
tagging.

3) It adds a `help` target that lists the Makefile targets:

```
$ make help
Developer commands:

ruff           Run ruff, fixing any safely-fixable errors and formatting
ruff-unsafe    Run ruff, fixing all fixable errors and formatting
mypy           Run mypy using the config in pyproject.toml to identify type mismatches and other coding errors
mypy-all       Run mypy ignoring the config in pyproject.toml but still ignoring missing imports
frontend-build Build the frontend in order to run on localhost:9090
frontend-dev   Run the frontend in developer mode on localhost:5173
installer-zip  Build the installer .zip file for the current version
tag-release    Tag the GitHub repository with the current version (use at release time only!)
```
`help` is also the default target so that the help message will print
out when only `make` is issued.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [X] No: not needed

## [optional] Are there any post deployment tasks we need to perform?
2023-12-11 22:33:45 -05:00
02565b9a00 Merge branch 'main' into install/release-tools 2023-12-11 22:32:28 -05:00
78a6024d6c Tiled upscaling graph - new nodes (#5234)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
- Additional tile generation nodes: CalculateImageTilesEvenSplitInvocation &
CalculateImageTilesMinimumOverlapInvocation
- Additional blending method: merge_tiles_with_seam_blending
- Updated node MergeTilesToImageInvocation with seam blending

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-11 22:14:15 -05:00
95198da645 fix(db): fix sqlite migrator tests on windows 2023-12-12 13:54:47 +11:00
ee1f1f3363 Merge remote-tracking branch 'origin/main' into feat/db/migrations 2023-12-12 13:39:47 +11:00
18ba7feca1 feat(db): update docstrings 2023-12-12 13:35:46 +11:00
55b0c7cdc9 feat(db): tidy migration_2 2023-12-12 13:30:29 +11:00
713a83e7da Merge branch 'main' into install/release-tools 2023-12-11 21:20:51 -05:00
f3a97e06ec add the tag_release.sh script 2023-12-11 21:11:37 -05:00
50815d36c6 feat(db): add tests for migration dependencies 2023-12-12 13:09:24 +11:00
a69f518c76 feat(db): tidy dependencies for migrations 2023-12-12 13:09:09 +11:00
18093c4f1d split installer zipfile script from tagging script; add make commands 2023-12-11 21:08:03 -05:00
0cf7fe43af feat(db): refactor migrate callbacks to use dependencies, remove pre/post callbacks 2023-12-12 12:35:42 +11:00
6063760ce2 feat(db): tweak docstring 2023-12-12 11:13:40 +11:00
c5ba4f2ea5 feat(db): remove file backups
Instead of mucking about with the filesystem, we rely on SQLite transactions to handle failed migrations.
2023-12-12 11:12:46 +11:00
3414437eea feat(db): instantiate SqliteMigrator with a SqliteDatabase
Simplifies a couple things:
- Init is more straightforward
- It's clear in the migrator that the connection we are working with is related to the SqliteDatabase
2023-12-12 10:46:08 +11:00
417db71471 feat(db): decouple SqliteDatabase from config object
- Simplify init args to path (None means use memory), logger, and verbose
- Add docstrings to SqliteDatabase (it had almost none)
- Update all usages of the class
2023-12-12 10:30:37 +11:00
afe4e55bf9 feat(db): simplify migration registration validation
With the previous change to assert that the to_version == from_version + 1, this validation can be simpler.
2023-12-12 09:52:03 +11:00
55acc16b2d feat(db): require migration versions to be consecutive 2023-12-12 09:43:09 +11:00
535ce10e99 Merge branch 'main' into tiled-upscaling-graph 2023-12-11 14:40:55 -05:00
Sam
11f4a48144 Add container GID 2023-12-11 14:30:40 -05:00
Sam
67ed4a0245 Respect CONTAINER_UID in Dockerfile chown
CONTAINER_UID is used for the user ID within the container; however, I noticed the UID was hard-coded to 1000 in the Dockerfile `chown -R` command.

This leaves the default as 1000, but allows it to be overridden by setting CONTAINER_UID.
2023-12-11 14:30:40 -05:00
fbbc1037cd missed a rename of overlap to overlap_fraction in test for even_split 2023-12-11 17:23:28 +00:00
0852fd4e88 Updated tests for even_split overlap renamed to overlap_fraction 2023-12-11 17:17:29 +00:00
c84526fae5 Fixed tests that were using round_to_8 and removed redundant tests 2023-12-11 17:05:45 +00:00
f762940335 Merge branch 'main' into tiled-upscaling-graph 2023-12-11 16:57:36 +00:00
fefb78795f - Even_split overlap renamed to overlap_fraction
- min_overlap removed * restrictions and round_to_8
- min_overlap handles tile size > image size by clipping the num tiles to 1.
- Updated assert test on min_overlap.
2023-12-11 16:55:27 +00:00
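
The clipping rule being described can be sketched as follows (the exact tile-count formula is an assumption):

```
import math

def num_tiles(image_dim: int, tile_dim: int, min_overlap: int) -> int:
    # A tile at least as large as the image clips to a single tile
    # instead of raising.
    if tile_dim >= image_dim:
        return 1
    return math.ceil((image_dim - min_overlap) / (tile_dim - min_overlap))

assert num_tiles(image_dim=512, tile_dim=1024, min_overlap=64) == 1  # tile > image
assert num_tiles(image_dim=1024, tile_dim=512, min_overlap=64) == 3
```
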
ef8284f009 fix(db): fix tests 2023-12-11 16:41:47 +11:00
290851016e feat(db): move sqlite_migrator into its own module 2023-12-11 16:41:30 +11:00
fa7d002175 fix(tests): fix typing issues 2023-12-11 16:22:29 +11:00
f1b6f78319 fix(db): fix windows db migrator tests
- Ensure db files are closed before manipulating them
- Use contextlib.closing() so that sqlite connections are closed on exiting the context
2023-12-11 16:14:25 +11:00
26ab917021 fix(tests): add sqlite migrator to test fixtures 2023-12-11 16:14:25 +11:00
4f3c32a2ee fix(db): remove errant print stmts 2023-12-11 16:14:25 +11:00
77065b1ce1 feat(db): update test for migration chain for missing from 0 2023-12-11 16:14:25 +11:00
41db92b9e8 feat(db): add check for missing migration from 0 2023-12-11 16:14:25 +11:00
c823f5667b feat(db): update sqlite migrator tests 2023-12-11 16:14:25 +11:00
3227b30430 feat(db): extract non-stateful logic to class methods 2023-12-11 16:14:25 +11:00
567f107a81 feat(db): return backup_db_path, move log stmt to run_migrations 2023-12-11 16:14:25 +11:00
b3d5955bc7 fix(db): rename Migrator._migrations -> _migration_set 2023-12-11 16:14:25 +11:00
8726b203d4 fix(db): fix migration chain validation 2023-12-11 16:14:25 +11:00
b3f92e0547 fix(db): fix docstring 2023-12-11 16:14:25 +11:00
72c9a7663f fix(db): add docstring 2023-12-11 16:14:25 +11:00
fcb9e89bd7 feat(db): tidy db naming utils 2023-12-11 16:14:25 +11:00
56966d6d05 feat(db): only reinit db if migrations occurred 2023-12-11 16:14:25 +11:00
e46dc9b34e fix(db): close db conn before reinitializing 2023-12-11 16:14:25 +11:00
e461f9925e feat(db): invert backup/restore logic
Do the migration on a temp copy of the db, then back up the original and move the temp into its file.
2023-12-11 16:14:25 +11:00
abeb1bd3b3 feat(db): reduce power MigrateCallback, only gets cursor
use partial to provide extra dependencies for the image workflow migration function
2023-12-11 16:14:25 +11:00
83e820d721 feat(db): decouple from SqliteDatabase 2023-12-11 16:14:25 +11:00
f8e4b93a74 feat(db): add migration lock file 2023-12-11 16:14:25 +11:00
0710ec30cf feat(db): incorporate feedback 2023-12-11 16:14:25 +11:00
c382329e8c feat(db): move migrator out of SqliteDatabase 2023-12-11 16:14:25 +11:00
a2dc780188 feat: add script to migrate image workflows 2023-12-11 16:14:25 +11:00
abc9dc4d17 fix(tests): fix sqlite migrator backup and restore test
On Windows, we must ensure the connection to the database is closed before exiting the tempfile context.

Also, rejiggered the thing to use the file directly.
2023-12-11 16:14:25 +11:00
3c692018cd fix(db): make idempotency test actually test something 2023-12-11 16:14:25 +11:00
3ba3c1918c fix(db): remove duplicated test case 2023-12-11 16:14:25 +11:00
f2c6819d68 feat(db): add SQLiteMigrator to perform db migrations 2023-12-11 16:14:25 +11:00
ef807cf63a Refactor model manager: model installer component (#5171)
## What type of PR is this? (check all applicable)

- [X] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

This is the next phase of the model manager refactor, as discussed with
@psychedelicious and @RyanJDick. This implements the model installer,
which is responsible for managing model weights on disk and installing
new models.

Currently only installation of local files and directories is supported.
Remote installation will be implemented after the queued download
manager is reviewed and approved.

Please see the documentation located at
[docs/contributing/MODEL_MANAGER.md](8695ad6f59/docs/contributing/MODEL_MANAGER.md (model-installation))
for an explanation of how this module works.

Things that have changed relative to the current implementation.

1. Model importation runs in a background thread. Access to the
installation status is through a ModelInstallJob object returned by the
`import_model()` call. In addition, the installation process generates a
series of `model_install` events on the event bus.
2. `model_install_progress` events are documented, but not currently
issued. These will be issued when background downloading is implemented.
3. The model installer currently runs in parallel to the current model
manager. The frontend continues to use `configs/models.yaml` and ignores
what is in the `model_config` table of `invokeai.db`.
4. When the installer is initialized at app startup time, it
synchronizes its database to the contents of the InvokeAI `models`
directory. The current model manager does this as well, so you will see
two log messages indicating that this directory is being scanned.


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

You can test using the FastAPI swagger pages at
http://localhost:9090/docs. Use the calls listed under
`model_manager_v2`. Be aware that only installation of local models
(indicated by their file or directory path) is currently supported.

## Added/updated tests?

- [X] Yes -- see
`tests/app/services/model_install/test_model_install.py`
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-10 23:16:39 -05:00
bbcd58e681 Merge branch 'refactor/model-manager-3' of github.com:invoke-ai/InvokeAI into refactor/model-manager-3 2023-12-10 21:34:14 -05:00
36043bf38b fixed docstring in probe module 2023-12-10 21:33:54 -05:00
fd68c47920 Merge branch 'main' into refactor/model-manager-3 2023-12-10 21:26:44 -05:00
c5c975c7a9 fix(installer): fix exit on new version 2023-12-11 12:30:13 +11:00
41ad13c282 feat(installer): do not print when aliasing python
Potentially confusing and not useful
2023-12-11 12:30:13 +11:00
e9d7e6bdd5 feat(installer): make active venv error red instead of yellow 2023-12-11 12:30:13 +11:00
49b74d189e feat(installer): improve messages, simplify script
- Color outputs
- Clarify messages
- Do not offer to use existing frontend build (insurance - prevents accidentally using old build)
2023-12-11 12:30:13 +11:00
179bc64490 feat(create_installer): remove extraneous conditional
Using `-f` is functionally equivalent to first checking if the dir exists before removing it. We just want to ensure the build dir doesn't exist.
2023-12-11 12:30:13 +11:00
1feab3da37 fix(installer): update msg in create_installer
More accurate/clearer messages
2023-12-11 12:30:13 +11:00
0a15f3fc35 fix(tests): remove test for frontend build 2023-12-11 12:30:13 +11:00
daf00efa4d fix(api): only attempt to serve UI build if it exists 2023-12-11 12:30:13 +11:00
55cfb879d0 feat: no frontend build in repo
In other words, build frontend when creating installer.

Changes to `create_installer.sh`

- If `python` is not in `PATH` but `python3` is, alias them (well, via function). This is needed on some machines to run the installer without symlinking to `python3`.
- Make the messages about pushing tags clearer. The script force-pushes, so it's possible to accidentally take destructive action. I'm not sure how to otherwise prevent damage, so I just added a warning.
- Print out `pwd` when prompting about being in the `installer` dir.
- Rebuild the frontend - if there is already a frontend build, first checks if the user wants to rebuild it.
- Checks for existence of `../build` dir before deleting - if the dir doesn't exist, the script errors and exits at this point.
- Format and spell check.

Other changes:

- Ignore `dist/` folder.
- Delete `dist/`.

**Note: you may need to use `git rm --cached invokeai/app/frontend/web/dist/` if git still wants to track `dist/`.**
2023-12-11 12:30:13 +11:00
de2879f602 port new code for detecting sdxl-based embeddings 2023-12-10 15:48:02 -05:00
3b1ff4a7f4 resolve test failure caused by renamed sqlite_database module 2023-12-10 12:59:00 -05:00
d7f7fbc8c2 Merge branch 'main' into refactor/model-manager-3 2023-12-10 12:55:28 -05:00
e2567a7e31 Merge branch 'refactor/model-manager-3' of github.com:invoke-ai/InvokeAI into refactor/model-manager-3 2023-12-10 12:55:24 -05:00
2f3457c02a rename installer __del__() to stop(). Improve probe error messages 2023-12-10 12:55:01 -05:00
aab6369ffe Update invokeai/backend/model_manager/search.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2023-12-10 12:24:50 -05:00
4c97b619fb Update tiles.py
merge with main
2023-12-09 22:05:23 +00:00
abdd840fb9 Merge branch 'main' into tiled-upscaling-graph 2023-12-09 22:03:18 +00:00
e656768eb2 more fixes from code review 2023-12-09 21:56:31 +00:00
494c2a9b05 Updates based on code review by @RyanJDick 2023-12-09 18:38:07 +00:00
40d4c7c8e1 fix(ui): add validation to field value reducers (#5256)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

Insurance against invalid inputs. Closes #5250
2023-12-09 11:42:32 +05:30
076284c26f fix(ui): add validation to field value reducers
Insurance against invalid inputs. Closes #5250
2023-12-09 17:09:02 +11:00
1af4260ab6 fix(ui): fix workflow saving
'id' should not be omitted when building a workflow; omitting it makes workflows always save as a copy
2023-12-09 16:35:44 +11:00
08ef71a74e fix(tests): mark non-test-case classes as such
Because their names start with "test", we need to use `__test__ = False` to tell pytest to not treat them as test cases.
2023-12-09 16:31:41 +11:00
8f6e2c0c85 fix(tests): add versions to test nodes
Fixes a warning about missing version.
2023-12-09 16:31:41 +11:00
0ac33f36ef fix(tests): fix pydantic warning about deprecated fields
Calling `inspect.getmembers()` on a pydantic field results in `getattr` being called on all members of the field. Pydantic has some attrs that are marked deprecated.

In our test suite, we do not filter deprecation warnings, so this is surfaced.

Use a context manager to ignore deprecation warnings when calling the function.
2023-12-09 16:31:41 +11:00
9661fa5f76 feat(ui): add eslint unused-imports plugin
Provides autofix for unused imports
2023-12-09 16:12:00 +11:00
ca07449fb4 fix(ui): add typeguard for action.payload
In the latest redux, unknown actions are typed as `unknown`. This forces type-safety upon us, requiring us to be more careful about the shape of actions.

In this case, we don't know if the rejection has a payload or what shape it may be in, so we need to do runtime checks. This is implemented with a simple zod schema, but probably the right way to handle this is to have consistent types in our RTK-Query error logic.
2023-12-09 16:09:26 +11:00
fb39f621c6 feat(ui): bump redux-remember 2023-12-09 16:09:26 +11:00
977d309692 fix(ui): fix memoized selectors
Some had the memoize options twice.
2023-12-09 16:09:26 +11:00
72cb8b83fe feat(ui): upgrade redux and RTK
There are a few breaking changes, which I've addressed.

The vast majority of changes are related to new handling of `reselect`'s `createSelector` options.

For better or worse, we memoize just about all our selectors using lodash `isEqual` for `resultEqualityCheck`. The upgrade requires we explicitly set the `memoize` option to `lruMemoize` to continue using lodash here.

Doing that required changing our `defaultSelectorOptions`.

Instead of changing that and finding dozens of instances where we weren't using that and instead were defining selector options manually, I've created a pre-configured selector: `createMemoizedSelector`.

This is now used everywhere instead of `createSelector`.
2023-12-09 16:09:26 +11:00
99f14b1dfe fix(ui): remove .ladle from tsconfig
was testing this out and forgot to remove
2023-12-09 16:03:09 +11:00
95a3c89a56 chore(ui): lint 2023-12-09 16:03:09 +11:00
b271474812 feat(ui): bump deps 2023-12-09 16:03:09 +11:00
2272925607 feat(ui): disable storybook telemetry 2023-12-09 16:03:09 +11:00
5902a52e40 feat(ui): add storybook 2023-12-09 16:03:09 +11:00
5140056b59 fix(actions): fix lint-frontend 2023-12-09 16:00:37 +11:00
f17b3d0068 feat(ui): migrate to pnpm
- update all scripts
- update the frontend GH action
- remove yarn-related files
- update ignores

Yarn classic + storybook has some weird module resolution issue due to how it hoists dependencies.

See https://github.com/storybookjs/storybook/issues/22431#issuecomment-1630086092

When I did the `package.json` solution in this thread, it broke vite. Next option is to upgrade to yarn 3 or pnpm. I chose pnpm.
2023-12-09 16:00:37 +11:00
5b9d25f57e translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-12-09 13:47:40 +11:00
73dbb8792e translationBot(ui): update translation (Italian)
Currently translated at 97.2% (1287 of 1324 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-12-09 13:47:40 +11:00
fc6cebb975 fix(ui): fix extra attrs added to workflow payload 2023-12-09 11:10:16 +11:00
06104f3851 fix(ui): disallow loading/deleting workflow if already open 2023-12-09 11:10:16 +11:00
6e028d691a fix(ui): use translation for unnamed workflows 2023-12-09 11:10:16 +11:00
6d176601cc feat(ui): track & indicate workflow saved status 2023-12-09 11:10:16 +11:00
4627a7c75f tidy(ui): remove unused components 2023-12-09 11:10:16 +11:00
d9a0efb20b chore(ui): typegen 2023-12-09 11:10:16 +11:00
7436aa8e3a feat(workflow_records): do not use default_factory for workflow id
Using default_factory to autogenerate UUIDs doesn't make sense here, and results in awkward TypeScript types.

Remove the default factory and instead manually create a UUID for workflow id. There are only two places where this needs to happen so it's not a big change.
2023-12-09 11:10:16 +11:00
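
A sketch of the idea (hypothetical model shape):

```
import uuid
from pydantic import BaseModel

class Workflow(BaseModel):
    # A plain required id (no default_factory) keeps the generated OpenAPI
    # schema, and thus the TypeScript types, simple.
    id: str
    name: str

# The id is minted explicitly at the few creation sites instead:
workflow = Workflow(id=uuid.uuid4().hex, name="My Workflow")
```
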
d75d3885c3 fix(ui): fix typo in uiPersistDenylist 2023-12-09 11:10:16 +11:00
db4763a742 feat(ui): use templates for edge validation of workflows
This addresses an edge case where:
1. the workflow references fields that are present on the workflow's nodes, but not on the invocation templates for those nodes and
2. The invocation template for that type does exist

This should be a fairly obscure edge case, but could happen if a user fiddled around with the workflow manually.

I ran into it as a result of two nodes having accidentally mixed up their invocation types, a problem introduced with a wonky merge commit.
2023-12-09 11:10:16 +11:00
13c9f8ffb7 fix(nodes): fix mismatched invocation decorator
This got messed up during a merge commit
2023-12-09 11:10:16 +11:00
e4f67628c0 feat(ui): revise workflow editor buttons
- Add menu to top-right of editor, save/saveas/download/upload/reset/settings moved in here
- Add workflow name to top-center
2023-12-09 11:10:16 +11:00
283bb73418 feat(ui): improve save/as workflow hook
Use a persistent updating toast to indicate saving progress.
2023-12-09 11:10:16 +11:00
5b5a71d40c fix(ui): do not append "(copy)" to workflow name when saving 2023-12-09 11:10:16 +11:00
61060f032a feat(ui): abstract out the global menu close trigger
This logic is moved into a hook.

This is needed for our context menus to close when the user clicks something in reactflow. It needed to be extended to support menus also.
2023-12-09 11:10:16 +11:00
3423b5848f fix(ui): do not disable the metadata and workflow tabs in viewer
Disabling these introduces an issue where, if you were on an image with a workflow/metadata, then switch to one without, you can end up on a disabled tab. This could potentially cause a runtime error.
2023-12-09 11:10:16 +11:00
fd8d1e13a0 feat(ui): clarify workflow building node filter 2023-12-09 11:10:16 +11:00
c42d692ea6 feat: workflow library (#5148)
* chore: bump pydantic to 2.5.2

This release fixes pydantic/pydantic#8175 and allows us to use `JsonValue`

* fix(ui): exclude public/en.json from prettier config

* fix(workflow_records): fix SQLite workflow insertion to ignore duplicates

* feat(backend): update workflows handling

Update workflows handling for Workflow Library.

**Updated Workflow Storage**

"Embedded Workflows" are workflows associated with images, and are now only stored in the image files. "Library Workflows" are not associated with images, and are stored only in DB.

This works out nicely. We have always saved workflows to files, but recently began saving them to the DB in addition to in image files. When that happened, we stopped reading workflows from files, so all the workflows that only existed in images were inaccessible. With this change, access to those workflows is restored, and no workflows are lost.

**Updated Workflow Handling in Nodes**

Prior to this change, workflows were embedded in images by passing the whole workflow JSON to a special workflow field on a node. In the node's `invoke()` function, the node was able to access this workflow and save it with the image. This (inaccurately) models workflows as a property of an image and is rather awkward technically.

A workflow is now a property of a batch/session queue item. It is available in the InvocationContext and therefore available to all nodes during `invoke()`.

**Database Migrations**

Added a `SQLiteMigrator` class to handle database migrations. Migrations were needed to accommodate the DB-related changes in this PR. See the code for details.

The `images`, `workflows` and `session_queue` tables required migrations for this PR, and are using the new migrator. Other tables/services are still creating tables themselves. A followup PR will adapt them to use the migrator.

**Other/Support Changes**

- Add a `has_workflow` column to `images` table to indicate that the image has an embedded workflow.
- Add handling for retrieving the workflow from an image in python. The image file must be fetched, the workflow extracted, and then sent to client, avoiding needing the browser to parse the image file. With the `has_workflow` column, the UI knows if there is a workflow to be fetched, and only fetches when the user requests to load the workflow.
- Add route to get the workflow from an image
- Add CRUD service/routes for the library workflows
- `workflow_images` table and services removed (no longer needed now that embedded workflows are not in the DB)

* feat(ui): updated workflow handling (WIP)

Clientside updates for the backend workflow changes.

Includes roughed-out workflow library UI.

* feat: revert SQLiteMigrator class

Will pursue this in a separate PR.

* feat(nodes): do not overwrite custom node module names

Use a different, simpler method to detect if a node is custom.

* feat(nodes): restore WithWorkflow as no-op class

This class is deprecated and no longer needed. Set its workflow attr value to None (meaning it is now a no-op), and issue a warning when an invocation subclasses it.

* fix(nodes): fix get_workflow from queue item dict func

* feat(backend): add WorkflowRecordListItemDTO

These are the id, name, description, created-at and updated-at workflow columns/attrs, used to display lists of workflows.

* chore(ui): typegen

* feat(ui): add workflow loading, deleting to workflow library UI

* feat(ui): workflow library pagination button styles

* wip

* feat: workflow library WIP

- Save to library
- Duplicate
- Filter/sort
- UI/queries

* feat: workflow library - system graphs - wip

* feat(backend): sync system workflows to db

* fix: merge conflicts

* feat: simplify default workflows

- Rename "system" -> "default"
- Simplify syncing logic
- Update UI to match

* feat(workflows): update default workflows

- Update TextToImage_SD15
- Add TextToImage_SDXL
- Add README

* feat(ui): refine workflow list UI

* fix(workflow_records): typo

* fix(tests): fix tests

* feat(ui): clean up workflow library hooks

* fix(db): fix mis-ordered db cleanup step

It was happening before pruning queue items - should happen afterwards, else you have to restart the app again to free disk space made available by the pruning.

* feat(ui): tweak reset workflow editor translations

* feat(ui): split out workflow redux state

The `nodes` slice is a rather complicated slice. Removing `workflow` makes it a bit more reasonable.

Also helps to flatten state out a bit.

* docs: update default workflows README

* fix: tidy up unused files, unrelated changes

* fix(backend): revert unrelated service organisational changes

* feat(backend): workflow_records.get_many arg "filter_text" -> "query"

* feat(ui): use custom hook in current image buttons

Already in use elsewhere, forgot to use it here.

* fix(ui): remove commented out property

* fix(ui): fix workflow loading

- Different handling for loading from library vs external
- Fix bug where only nodes and edges loaded

* fix(ui): fix save/save-as workflow naming

* fix(ui): fix circular dependency

* fix(db): fix bug with releasing without lock in db.clean()

* fix(db): remove extraneous lock

* chore: bump ruff

* fix(workflow_records): default `category` to `WorkflowCategory.User`

This allows old workflows to validate when reading them from the db or image files.

* hide workflow library buttons if feature is disabled

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-12-09 09:48:38 +11:00
5f37176938 ruff formatting 2023-12-08 19:40:10 +00:00
375a91db32 further updated tests 2023-12-08 19:38:16 +00:00
b7ba426249 Fixed some params on tile gen tests on tests 2023-12-08 18:53:28 +00:00
d3ad356c6a Ruff Formatting
Fix pyTest issues
2023-12-08 18:31:33 +00:00
fdb97c1d02 Merge branch 'main' into tiled-upscaling-graph 2023-12-08 18:22:05 +00:00
8cda42ab0a ruff formatting 2023-12-08 18:17:40 +00:00
fed2bdafeb Added Defaults to calc_tiles_min_overlap for overlap and round
Added tests for min_overlap and even_split tile gen
2023-12-08 18:16:13 +00:00
9ba5752770 fix link to xpuct/deliberate 2023-12-08 06:46:58 -08:00
8648c2c42e Update communityNodes.md with VeyDlin's nodes 2023-12-08 05:34:19 -08:00
b519b6e1e0 add middleware to handle 403 errors (#5245)
* add middleware to handle 403 errors

* remove log

* add logic to warn the user if not all requested images could be deleted

* lint

* fix copy

* feat(ui): simplify batchEnqueuedListener error toast logic

* feat(ui): use translations for error messages

* chore(ui): lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-12-07 19:26:15 -05:00
913c68982a Merge branch 'refactor/model-manager-3' of github.com:invoke-ai/InvokeAI into refactor/model-manager-3 2023-12-06 22:23:49 -05:00
6e1e67aa72 remove source filtering from list_models() 2023-12-06 22:23:08 -05:00
ee6fbabbfb Merge branch 'main' into refactor/model-manager-3 2023-12-06 22:20:06 -05:00
db58efbe65 translationBot(ui): update translation (German)
Currently translated at 62.9% (830 of 1319 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-12-07 00:09:57 +11:00
cd15d8b7a9 ruff formatting
reformatted due to ruff errors
2023-12-06 08:10:22 +00:00
3b4b4ba40a Merge branch 'main' into tiled-upscaling-graph 2023-12-06 08:00:31 +00:00
eecee472b1 chore(deps-dev): bump vite from 4.5.0 to 4.5.1 in /invokeai/frontend/web
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 4.5.0 to 4.5.1.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v4.5.1/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v4.5.1/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-06 16:57:35 +11:00
7b314116be feat(ui): remove husky (#5235)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

You can only have one pre-commit setup on a repo. Removing husky so it
doesn't interfere with the python pre-commit.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue
https://discord.com/channels/1020123559063990373/1149513625321603162/1181752622684831884
2023-12-06 09:03:43 +05:30
bc6d4111a2 feat(ui): remove husky
You can only have one pre-commit setup on a repo. Removing husky so it doesn't interfere with the python pre-commit.
2023-12-06 14:05:50 +11:00
674d9796d0 First check-in of new tile nodes
- calc_tiles_even_split
- calc_tiles_min_overlap
- merge_tiles_with_seam_blending
Update MergeTilesToImageInvocation with seam blending
2023-12-05 21:03:16 +00:00
5816320645 Merge branch 'main' into tiled-upscaling-graph 2023-12-05 15:32:49 +00:00
14254e8be8 First check-in of new tile nodes
- calc_tiles_even_split
- calc_tiles_min_overlap
- merge_tiles_with_seam_blending
Update MergeTilesToImageInvocation with seam blending
2023-12-05 12:29:55 +00:00
e990235d32 translationBot(ui): update translation (Korean)
Currently translated at 5.2% (70 of 1321 strings)

Co-authored-by: 이승석 <vidicwb@ajou.ac.kr>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ko/
Translation: InvokeAI/Web UI
2023-12-05 16:00:03 +11:00
5f122186bd translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 99.8% (1317 of 1319 strings)

Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-12-05 16:00:03 +11:00
3bfaee9c57 Merge branch 'main' into refactor/model-manager-3 2023-12-04 22:51:45 -05:00
1ca0901cbe Ensure that fetching a logger doesn't reset its loglevel to default (#5222)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because: minor bug

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

While writing regression tests for the queued downloader I discovered
that when using `InvokeAILogger.get_logger()` to fetch a
previously-created logger it resets that logger's log level to the
default specified in the global config. In other words, this didn't work
as expected:

```
import logging
from invokeai.backend.util.logging import InvokeAILogger
logger1 = InvokeAILogger.get_logger('TestLogger')
logger1.setLevel(logging.DEBUG)
logger2 = InvokeAILogger.get_logger('TestLogger')
assert logger1.level == logging.DEBUG
assert logger2.level == logging.DEBUG
```

This PR fixes the problem and adds a corresponding pytest.
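
The guard is of this general shape (a sketch, not InvokeAI's actual implementation):

```
import logging

class PatchedLogger:
    _configured: set[str] = set()

    @classmethod
    def get_logger(cls, name: str = "InvokeAI", default_level: int = logging.INFO) -> logging.Logger:
        logger = logging.getLogger(name)
        if name not in cls._configured:
            logger.setLevel(default_level)  # apply the configured default once
            cls._configured.add(name)
        return logger  # later fetches leave any caller-set level intact
```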

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [X] Yes
- [ ] No

## [optional] Are there any post deployment tasks we need to perform?
2023-12-04 22:50:59 -05:00
2d7555b7b8 Merge branch 'bugfix/log-levels' of github.com:invoke-ai/InvokeAI into bugfix/log-levels 2023-12-04 22:42:06 -05:00
3c7d1fcd32 clean up get_logger() call 2023-12-04 22:41:59 -05:00
c7fa2db556 Merge branch 'main' into bugfix/log-levels 2023-12-04 22:01:42 -05:00
3b06cc6782 reformatted using newer version of ruff 2023-12-04 21:15:56 -05:00
7c9f48b84d fix ruff check 2023-12-04 21:14:02 -05:00
fed2bf6dab Merge branch 'refactor/model-manager-3' of github.com:invoke-ai/InvokeAI into refactor/model-manager-3 2023-12-04 21:12:40 -05:00
2b583ffcdf implement review suggestions from @RyanjDick 2023-12-04 21:12:10 -05:00
6f46d15c05 Update invokeai/app/services/model_install/model_install_base.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2023-12-04 20:09:41 -05:00
018ccebd6f make ModelLocalSource comparisons work across platforms 2023-12-04 19:07:25 -05:00
620b2d477a implement suggestions from first review by @psychedelicious 2023-12-04 17:08:33 -05:00
f73b678aae Merge branch 'main' into refactor/model-manager-3 2023-12-04 17:06:36 -05:00
bdb0d13a2d fix import order 2023-12-02 11:56:41 -05:00
2d2ef5d72c ensure that setting loglevel on one logger doesn't change others 2023-12-02 11:48:51 -05:00
e46ac45741 port probing changes from main model_probe.py to refactored probe.py 2023-12-01 09:19:24 -05:00
75089b7a9d merge in changes from main 2023-12-01 09:18:07 -05:00
778fd55f0d Merge branch 'main' into refactor/model-manager-3 2023-12-01 09:15:18 -05:00
bb87c988cb Change input field ordering of CropLatentsCoreInvocation to match ImageCropInvocation. 2023-11-29 10:23:55 -05:00
049b0239da Re-organize merge_tiles_with_linear_blending(...) to merge rows horizontally first and then vertically. This change achieves slightly more natural blending on the corners where 4 tiles overlap. 2023-11-29 09:48:56 -05:00
932de08fc0 Infer a tight-fitting output image size from the passed tiles in MergeTilesToImageInvocation. 2023-11-29 09:48:56 -05:00
303791d5c6 Add width and height fields to TileToPropertiesInvocation output to avoid having to calculate them with math nodes. 2023-11-29 09:48:56 -05:00
7e4a689370 Update tiling nodes to use width-before-height field ordering convention. 2023-11-29 09:48:56 -05:00
04e0fefdee Rename CropLatentsInvocation -> CropLatentsCoreInvocation to prevent conflict with custom node. And other minor tidying. 2023-11-29 09:48:56 -05:00
9b4e6da226 Improve documentation of CropLatentsInvocation. 2023-11-29 09:48:56 -05:00
e1c53a2465 Use LATENT_SCALE_FACTOR = 8 constant in CropLatentsInvocation. 2023-11-29 09:48:55 -05:00
121b930abf Copy CropLatentsInvocation from 74647fa9c1/images_to_grids.py (L1117C1-L1167C80). 2023-11-29 09:48:55 -05:00
436560da39 (minor) Add 'Invocation' suffix to all tiling node classes. 2023-11-29 09:48:55 -05:00
3980f79ed5 Tidy up tiles invocations, add documentation. 2023-11-29 09:48:55 -05:00
1d0dc7eeab Add unit tests for merge_tiles_with_linear_blending(...). 2023-11-29 09:48:55 -05:00
1f63fa8236 Add unit tests for calc_tiles_with_overlap(...) and fix a bug in its implementation. 2023-11-29 09:48:55 -05:00
caf47dee09 Add unit tests for tile paste(...) util function. 2023-11-29 09:48:55 -05:00
d742479810 Add nodes for tile splitting and merging. The main motivation for these nodes is for use in tiled upscaling workflows. 2023-11-29 09:48:55 -05:00
ecd3dcd5df Merge branch 'main' into refactor/model-manager-3 2023-11-27 22:15:51 -05:00
a79e814c8d Merge branch 'main' into refactor/model-manager-3 2023-11-27 16:06:42 -05:00
3fe1bef5cd Merge branch 'main' into refactor/model-manager-3 2023-11-27 08:08:01 -05:00
dbd0151c0e make test file path comparison work on windows systems (another fix) 2023-11-26 18:52:25 -05:00
6da508f147 make test file path comparison work on windows systems 2023-11-26 18:40:22 -05:00
8ef596eac7 further changes for ruff 2023-11-26 17:13:31 -05:00
8f4f4d48d5 fix import unsorted import block issues in the tests 2023-11-26 13:37:47 -05:00
60eae7443a Merge branch 'main' into refactor/model-manager-3 2023-11-26 13:33:41 -05:00
8695ad6f59 all features implemented, docs updated, ready for review 2023-11-26 13:18:21 -05:00
dc5c452ef9 rename test/nodes to test/aa_nodes to ensure these tests run first 2023-11-26 09:38:30 -05:00
8aefe2cefe import_model and list_install_jobs router APIs written 2023-11-25 21:45:59 -05:00
ec510d34b5 fix model probing for controlnet checkpoint legacy config files 2023-11-25 15:53:22 -05:00
19baea1883 all backend features in place; config scanning is failing on controlnet 2023-11-24 19:37:46 -05:00
80bc9be3ab make install_path and register_path work; refactor model probing 2023-11-23 23:15:32 -05:00
8c7a7bc897 Merge branch 'main' into refactor/model-manager-3 2023-11-22 22:29:23 -05:00
4aab728590 move name/description logic into model_probe.py 2023-11-22 22:29:02 -05:00
9cf060115d Merge branch 'main' into refactor/model-manager-3 2023-11-22 22:28:31 -05:00
9ea3126118 start implementation of installer 2023-11-20 23:02:30 -05:00
6c56233edc define install abstract base class 2023-11-20 21:57:10 -05:00
1202 changed files with 68541 additions and 245207 deletions

.github/CODEOWNERS

@ -1,5 +1,5 @@
# continuous integration
/.github/workflows/ @lstein @blessedcoolant @hipsterusername
/.github/workflows/ @lstein @blessedcoolant @hipsterusername @ebr
# documentation
/docs/ @lstein @blessedcoolant @hipsterusername @Millu
@ -10,7 +10,7 @@
# installation and configuration
/pyproject.toml @lstein @blessedcoolant @hipsterusername
/docker/ @lstein @blessedcoolant @hipsterusername
/docker/ @lstein @blessedcoolant @hipsterusername @ebr
/scripts/ @ebr @lstein @hipsterusername
/installer/ @lstein @ebr @hipsterusername
/invokeai/assets @lstein @ebr @hipsterusername
@ -26,9 +26,7 @@
# front ends
/invokeai/frontend/CLI @lstein @hipsterusername
/invokeai/frontend/install @lstein @ebr @hipsterusername
/invokeai/frontend/install @lstein @ebr @hipsterusername
/invokeai/frontend/merge @lstein @blessedcoolant @hipsterusername
/invokeai/frontend/training @lstein @blessedcoolant @hipsterusername
/invokeai/frontend/web @psychedelicious @blessedcoolant @maryhipp @hipsterusername


@ -42,6 +42,21 @@ Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
<!--
A merge plan describes how this PR should be handled after it is approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is merged"
A merge plan is particularly important for large PRs or PRs that touch the
database in any way.
-->
## Added/updated tests?
- [ ] Yes


@ -40,10 +40,14 @@ jobs:
- name: Free up more disk space on the runner
# https://github.com/actions/runner-images/issues/2840#issuecomment-1284059930
run: |
echo "----- Free space before cleanup"
df -h
sudo rm -rf /usr/share/dotnet
sudo rm -rf "$AGENT_TOOLSDIRECTORY"
sudo swapoff /mnt/swapfile
sudo rm -rf /mnt/swapfile
echo "----- Free space after cleanup"
df -h
- name: Checkout
uses: actions/checkout@v3
@ -91,6 +95,7 @@ jobs:
# password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build container
timeout-minutes: 40
id: docker_build
uses: docker/build-push-action@v4
with:


@ -22,12 +22,22 @@ jobs:
runs-on: ubuntu-22.04
steps:
- name: Setup Node 18
uses: actions/setup-node@v3
uses: actions/setup-node@v4
with:
node-version: '18'
- uses: actions/checkout@v3
- run: 'yarn install --frozen-lockfile'
- run: 'yarn run lint:tsc'
- run: 'yarn run lint:madge'
- run: 'yarn run lint:eslint'
- run: 'yarn run lint:prettier'
- name: Checkout
uses: actions/checkout@v4
- name: Setup pnpm
uses: pnpm/action-setup@v2
with:
version: '8.12.1'
- name: Install dependencies
run: 'pnpm install --prefer-frozen-lockfile'
- name: Typescript
run: 'pnpm run lint:tsc'
- name: Madge
run: 'pnpm run lint:madge'
- name: ESLint
run: 'pnpm run lint:eslint'
- name: Prettier
run: 'pnpm run lint:prettier'


@ -1,13 +1,15 @@
name: PyPI Release
on:
push:
paths:
- 'invokeai/version/invokeai_version.py'
workflow_dispatch:
inputs:
publish_package:
description: 'Publish build on PyPi? [true/false]'
required: true
default: 'false'
jobs:
release:
build-and-release:
if: github.repository == 'invoke-ai/InvokeAI'
runs-on: ubuntu-22.04
env:
@ -15,19 +17,43 @@ jobs:
TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
TWINE_NON_INTERACTIVE: 1
steps:
- name: checkout sources
uses: actions/checkout@v3
- name: Checkout
uses: actions/checkout@v4
- name: install deps
- name: Setup Node 18
uses: actions/setup-node@v4
with:
node-version: '18'
- name: Setup pnpm
uses: pnpm/action-setup@v2
with:
version: '8.12.1'
- name: Install frontend dependencies
run: pnpm install --prefer-frozen-lockfile
working-directory: invokeai/frontend/web
- name: Build frontend
run: pnpm run build
working-directory: invokeai/frontend/web
- name: Install python dependencies
run: pip install --upgrade build twine
- name: build package
- name: Build python package
run: python3 -m build
- name: check distribution
- name: Upload build as workflow artifact
uses: actions/upload-artifact@v4
with:
name: dist
path: dist
- name: Check distribution
run: twine check dist/*
- name: check PyPI versions
- name: Check PyPI versions
if: github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/heads/release/')
run: |
pip install --upgrade requests
@ -36,6 +62,6 @@ jobs:
EXISTS=scripts.pypi_helper.local_on_pypi(); \
print(f'PACKAGE_EXISTS={EXISTS}')" >> $GITHUB_ENV
- name: upload package
if: env.PACKAGE_EXISTS == 'False' && env.TWINE_PASSWORD != ''
- name: Publish build on PyPi
if: env.PACKAGE_EXISTS == 'False' && env.TWINE_PASSWORD != '' && github.event.inputs.publish_package == 'true'
run: twine upload dist/*


@ -58,7 +58,7 @@ jobs:
- name: Check for changed python files
id: changed-files
uses: tj-actions/changed-files@v37
uses: tj-actions/changed-files@v41
with:
files_yaml: |
python:

.gitignore

@ -16,7 +16,7 @@ __pycache__/
.Python
build/
develop-eggs/
# dist/
dist/
downloads/
eggs/
.eggs/
@ -187,3 +187,4 @@ installer/install.bat
installer/install.sh
installer/update.bat
installer/update.sh
installer/InvokeAI-Installer/


@ -1,6 +1,20 @@
# simple Makefile with scripts that are otherwise hard to remember
# to use, run from the repo root `make <command>`
default: help
help:
@echo Developer commands:
@echo
@echo "ruff Run ruff, fixing any safely-fixable errors and formatting"
@echo "ruff-unsafe Run ruff, fixing all fixable errors and formatting"
@echo "mypy Run mypy using the config in pyproject.toml to identify type mismatches and other coding errors"
@echo "mypy-all Run mypy ignoring the config in pyproject.tom but still ignoring missing imports"
@echo "frontend-build Build the frontend in order to run on localhost:9090"
@echo "frontend-dev Run the frontend in developer mode on localhost:5173"
@echo "installer-zip Build the installer .zip file for the current version"
@echo "tag-release Tag the GitHub repository with the current version (use at release time only!)"
# Runs ruff, fixing any safely-fixable errors and formatting
ruff:
ruff check . --fix
@ -18,4 +32,21 @@ mypy:
# Runs mypy, ignoring the config in pyproject.toml but still ignoring missing (untyped) imports
# (many files are ignored by the config, so this is useful for checking all files)
mypy-all:
mypy scripts/invokeai-web.py --config-file= --ignore-missing-imports
mypy scripts/invokeai-web.py --config-file= --ignore-missing-imports
# Build the frontend
frontend-build:
cd invokeai/frontend/web && pnpm build
# Run the frontend in dev mode
frontend-dev:
cd invokeai/frontend/web && pnpm dev
# Installer zip file
installer-zip:
cd installer && ./create_installer.sh
# Tag the release
tag-release:
cd installer && ./tag_release.sh


@ -1,10 +1,10 @@
<div align="center">
![project hero](https://github.com/invoke-ai/InvokeAI/assets/31807370/1a917d94-e099-4fa1-a70f-7dd8d0691018)
![project hero](https://github.com/invoke-ai/InvokeAI/assets/31807370/6e3728c7-e90e-4711-905c-3b55844ff5be)
# Invoke AI - Generative AI for Professional Creatives
## Professional Creative Tools for Stable Diffusion, Custom-Trained Models, and more.
To learn more about Invoke AI, get started instantly, or implement our Business solutions, visit [invoke.ai](https://invoke.ai)
# Invoke - Professional Creative AI Tools for Visual Media
## To learn more about Invoke, or implement our Business solutions, visit [invoke.com](https://www.invoke.com/about)
[![discord badge]][discord link]
@ -56,7 +56,9 @@ the foundation for multiple commercial products.
<div align="center">
![canvas preview](https://github.com/invoke-ai/InvokeAI/raw/main/docs/assets/canvas_preview.png)
![Highlighted Features - Canvas and Workflows](https://github.com/invoke-ai/InvokeAI/assets/31807370/708f7a82-084f-4860-bfbe-e2588c53548d)
</div>
@ -125,8 +127,8 @@ and go to http://localhost:9090.
You must have Python 3.10 through 3.11 installed on your machine. Earlier or
later versions are not supported.
Node.js also needs to be installed along with yarn (can be installed with
the command `npm install -g yarn` if needed)
Node.js also needs to be installed along with `pnpm` (can be installed with
the command `npm install -g pnpm` if needed)
1. Open a command-line window on your machine. PowerShell is recommended for Windows.
2. Create a directory to install InvokeAI into. You'll need at least 15 GB of free space:
@ -270,7 +272,7 @@ upgrade script.** See the next section for a Windows recipe.
3. Select option [1] to upgrade to the latest release.
4. Once the upgrade is finished you will be returned to the launcher
menu. Select option [7] "Re-run the configure script to fix a broken
menu. Select option [6] "Re-run the configure script to fix a broken
install or to complete a major upgrade".
This will run the configure script against the v2.3 directory and


@ -2,14 +2,17 @@
## Any environment variables supported by InvokeAI can be specified here,
## in addition to the examples below.
# INVOKEAI_ROOT is the path to a path on the local filesystem where InvokeAI will store data.
# HOST_INVOKEAI_ROOT is the path on the docker host's filesystem where InvokeAI will store data.
# Outputs will also be stored here by default.
# This **must** be an absolute path.
INVOKEAI_ROOT=
# If relative, it will be relative to the docker directory in which the docker-compose.yml file is located
#HOST_INVOKEAI_ROOT=../../invokeai-data
# INVOKEAI_ROOT is the path to the root of the InvokeAI repository within the container.
# INVOKEAI_ROOT=~/invokeai
# Get this value from your HuggingFace account settings page.
# HUGGING_FACE_HUB_TOKEN=
## optional variables specific to the docker setup.
# GPU_DRIVER=cuda # or rocm
# GPU_DRIVER=nvidia #| rocm
# CONTAINER_UID=1000


@ -59,14 +59,16 @@ RUN --mount=type=cache,target=/root/.cache/pip \
# #### Build the Web UI ------------------------------------
FROM node:18 AS web-builder
FROM node:20-slim AS web-builder
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable
WORKDIR /build
COPY invokeai/frontend/web/ ./
RUN --mount=type=cache,target=/usr/lib/node_modules \
npm install --include dev
RUN --mount=type=cache,target=/usr/lib/node_modules \
yarn vite build
RUN --mount=type=cache,target=/pnpm/store \
pnpm install --frozen-lockfile
RUN npx vite build
#### Runtime stage ---------------------------------------
@ -100,6 +102,8 @@ ENV INVOKEAI_SRC=/opt/invokeai
ENV VIRTUAL_ENV=/opt/venv/invokeai
ENV INVOKEAI_ROOT=/invokeai
ENV PATH="$VIRTUAL_ENV/bin:$INVOKEAI_SRC:$PATH"
ENV CONTAINER_UID=${CONTAINER_UID:-1000}
ENV CONTAINER_GID=${CONTAINER_GID:-1000}
# --link requires buildkit w/ dockerfile syntax 1.4
COPY --link --from=builder ${INVOKEAI_SRC} ${INVOKEAI_SRC}
@ -117,7 +121,7 @@ WORKDIR ${INVOKEAI_SRC}
RUN cd /usr/lib/$(uname -p)-linux-gnu/pkgconfig/ && ln -sf opencv4.pc opencv.pc
RUN python3 -c "from patchmatch import patch_match"
RUN mkdir -p ${INVOKEAI_ROOT} && chown -R 1000:1000 ${INVOKEAI_ROOT}
RUN mkdir -p ${INVOKEAI_ROOT} && chown -R ${CONTAINER_UID}:${CONTAINER_GID} ${INVOKEAI_ROOT}
COPY docker/docker-entrypoint.sh ./
ENTRYPOINT ["/opt/invokeai/docker-entrypoint.sh"]


@ -1,6 +1,14 @@
# InvokeAI Containerized
All commands are to be run from the `docker` directory: `cd docker`
All commands should be run within the `docker` directory: `cd docker`
## Quickstart :rocket:
On a known working Linux+Docker+CUDA (Nvidia) system, execute `./run.sh` in this directory. It will take a few minutes - depending on your internet speed - to install the core models. Once the application starts up, open `http://localhost:9090` in your browser to Invoke!
For more configuration options (using an AMD GPU, custom root directory location, etc): read on.
## Detailed setup
#### Linux
@ -18,12 +26,12 @@ All commands are to be run from the `docker` directory: `cd docker`
This is done via Docker Desktop preferences
## Quickstart
### Configure Invoke environment
1. Make a copy of `env.sample` and name it `.env` (`cp env.sample .env` (Mac/Linux) or `copy env.sample .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
a. the desired location of the InvokeAI runtime directory, or
b. an existing, v3.0.0 compatible runtime directory.
1. `docker compose up`
1. Execute `run.sh`
The image will be built automatically if needed.
@ -37,19 +45,21 @@ The runtime directory (holding models and outputs) will be created in the locati
The Docker daemon on the system must be already set up to use the GPU. In case of Linux, this involves installing `nvidia-docker-runtime` and configuring the `nvidia` runtime as default. Steps will be different for AMD. Please see Docker documentation for the most up-to-date instructions for using your GPU with Docker.
To use an AMD GPU, set `GPU_DRIVER=rocm` in your `.env` file.
## Customize
Check the `.env.sample` file. It contains some environment variables for running in Docker. Copy it, name it `.env`, and fill it in with your own values. Next time you run `docker compose up`, your custom values will be used.
Check the `.env.sample` file. It contains some environment variables for running in Docker. Copy it, name it `.env`, and fill it in with your own values. Next time you run `run.sh`, your custom values will be used.
You can also set these values in `docker-compose.yml` directly, but `.env` will help avoid conflicts when code is updated.
Example (values are optional, but setting `INVOKEAI_ROOT` is highly recommended):
Values are optional, but setting `INVOKEAI_ROOT` is highly recommended. The default is `~/invokeai`. Example:
```bash
INVOKEAI_ROOT=/Volumes/WorkDrive/invokeai
HUGGINGFACE_TOKEN=the_actual_token
CONTAINER_UID=1000
GPU_DRIVER=cuda
GPU_DRIVER=nvidia
```
Any environment variables supported by InvokeAI can be set here - please see the [Configuration docs](https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/) for further detail.


@ -1,11 +0,0 @@
#!/usr/bin/env bash
set -e
build_args=""
[[ -f ".env" ]] && build_args=$(awk '$1 ~ /\=[^$]/ {print "--build-arg " $0 " "}' .env)
echo "docker compose build args:"
echo $build_args
docker compose build $build_args


@ -2,23 +2,8 @@
version: '3.8'
services:
invokeai:
x-invokeai: &invokeai
image: "local/invokeai:latest"
# edit below to run on a container runtime other than nvidia-container-runtime.
# not yet tested with rocm/AMD GPUs
# Comment out the "deploy" section to run on CPU only
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
# For AMD support, comment out the deploy section above and uncomment the devices section below:
#devices:
# - /dev/kfd:/dev/kfd
# - /dev/dri:/dev/dri
build:
context: ..
dockerfile: docker/Dockerfile
@ -36,7 +21,9 @@ services:
ports:
- "${INVOKEAI_PORT:-9090}:9090"
volumes:
- ${INVOKEAI_ROOT:-~/invokeai}:${INVOKEAI_ROOT:-/invokeai}
- type: bind
source: ${HOST_INVOKEAI_ROOT:-${INVOKEAI_ROOT:-~/invokeai}}
target: ${INVOKEAI_ROOT:-/invokeai}
- ${HF_HOME:-~/.cache/huggingface}:${HF_HOME:-/invokeai/.cache/huggingface}
# - ${INVOKEAI_MODELS_DIR:-${INVOKEAI_ROOT:-/invokeai/models}}
# - ${INVOKEAI_MODELS_CONFIG_PATH:-${INVOKEAI_ROOT:-/invokeai/configs/models.yaml}}
@ -50,3 +37,27 @@ services:
# - |
# invokeai-model-install --yes --default-only --config_file ${INVOKEAI_ROOT}/config_custom.yaml
# invokeai-nodes-web --host 0.0.0.0
services:
invokeai-nvidia:
<<: *invokeai
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
invokeai-cpu:
<<: *invokeai
profiles:
- cpu
invokeai-rocm:
<<: *invokeai
devices:
- /dev/kfd:/dev/kfd
- /dev/dri:/dev/dri
profiles:
- rocm


@ -1,11 +1,32 @@
#!/usr/bin/env bash
set -e
set -e -o pipefail
# This script is provided for backwards compatibility with the old docker setup.
# It doesn't do much aside from wrapping the usual docker compose CLI.
run() {
local scriptdir=$(dirname "${BASH_SOURCE[0]}")
cd "$scriptdir" || exit 1
SCRIPTDIR=$(dirname "${BASH_SOURCE[0]}")
cd "$SCRIPTDIR" || exit 1
local build_args=""
local profile=""
docker compose up -d
docker compose logs -f
touch .env
build_args=$(awk '$1 ~ /=[^$]/ && $0 !~ /^#/ {print "--build-arg " $0 " "}' .env) &&
profile="$(awk -F '=' '/GPU_DRIVER/ {print $2}' .env)"
[[ -z "$profile" ]] && profile="nvidia"
local service_name="invokeai-$profile"
if [[ ! -z "$build_args" ]]; then
printf "%s\n" "docker compose build args:"
printf "%s\n" "$build_args"
fi
docker compose build $build_args
unset build_args
printf "%s\n" "starting service $service_name"
docker compose --profile "$profile" up -d "$service_name"
docker compose logs -f
}
run

(Binary image assets changed; previews not shown.)


@ -0,0 +1,277 @@
# The InvokeAI Download Queue
The DownloadQueueService provides a multithreaded parallel download
queue for arbitrary URLs, with queue prioritization, event handling,
and restart capabilities.
## Simple Example
```
from invokeai.app.services.download import DownloadQueueService, TqdmProgress

download_queue = DownloadQueueService()
for url in ['https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/assets/a-painting-of-a-fire.png?raw=true',
            'https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/assets/birdhouse.png?raw=true',
            'https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/assets/missing.png',
            'https://civitai.com/api/download/models/152309?type=Model&format=SafeTensor',
            ]:
    # urls start downloading as soon as download() is called
    download_queue.download(source=url,
                            dest='/tmp/downloads',
                            on_progress=TqdmProgress().update
                            )

download_queue.join()  # wait for all downloads to finish
for job in download_queue.list_jobs():
    print(job.model_dump_json(exclude_none=True, indent=4), "\n")
```
Output:
```
{
"source": "https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/assets/a-painting-of-a-fire.png?raw=true",
"dest": "/tmp/downloads",
"id": 0,
"priority": 10,
"status": "completed",
"download_path": "/tmp/downloads/a-painting-of-a-fire.png",
"job_started": "2023-12-04T05:34:41.742174",
"job_ended": "2023-12-04T05:34:42.592035",
"bytes": 666734,
"total_bytes": 666734
}
{
"source": "https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/assets/birdhouse.png?raw=true",
"dest": "/tmp/downloads",
"id": 1,
"priority": 10,
"status": "completed",
"download_path": "/tmp/downloads/birdhouse.png",
"job_started": "2023-12-04T05:34:41.741975",
"job_ended": "2023-12-04T05:34:42.652841",
"bytes": 774949,
"total_bytes": 774949
}
{
"source": "https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/assets/missing.png",
"dest": "/tmp/downloads",
"id": 2,
"priority": 10,
"status": "error",
"job_started": "2023-12-04T05:34:41.742079",
"job_ended": "2023-12-04T05:34:42.147625",
"bytes": 0,
"total_bytes": 0,
"error_type": "HTTPError(Not Found)",
"error": "Traceback (most recent call last):\n File \"/home/lstein/Projects/InvokeAI/invokeai/app/services/download/download_default.py\", line 182, in _download_next_item\n self._do_download(job)\n File \"/home/lstein/Projects/InvokeAI/invokeai/app/services/download/download_default.py\", line 206, in _do_download\n raise HTTPError(resp.reason)\nrequests.exceptions.HTTPError: Not Found\n"
}
{
"source": "https://civitai.com/api/download/models/152309?type=Model&format=SafeTensor",
"dest": "/tmp/downloads",
"id": 3,
"priority": 10,
"status": "completed",
"download_path": "/tmp/downloads/xl_more_art-full_v1.safetensors",
"job_started": "2023-12-04T05:34:42.147645",
"job_ended": "2023-12-04T05:34:43.735990",
"bytes": 719020768,
"total_bytes": 719020768
}
```
## The API
The default download queue is `DownloadQueueService`, an
implementation of ABC `DownloadQueueServiceBase`. It juggles multiple
background download requests and provides facilities for interrogating
and cancelling the requests. Access to a current or past download task
is mediated via `DownloadJob` objects which report the current status
of a job request.
### The Queue Object
A default download queue is located in
`ApiDependencies.invoker.services.download_queue`. However, you can
create additional instances if you need to isolate your queue from the
main one.
```
queue = DownloadQueueService(event_bus=events)
```
`DownloadQueueService()` takes three optional arguments:
| **Argument** | **Type** | **Default** | **Description** |
|----------------|-----------------|---------------|-----------------|
| `max_parallel_dl` | int | 5 | Maximum number of simultaneous downloads allowed |
| `event_bus` | EventServiceBase | None | System-wide FastAPI event bus for reporting download events |
| `requests_session` | requests.sessions.Session | None | An alternative requests Session object to use for the download |
`max_parallel_dl` specifies how many download jobs are allowed to run
simultaneously. Each will run in a different thread of execution.
`event_bus` is an EventServiceBase, typically the one created at
InvokeAI startup. If present, download events are periodically emitted
on this bus to allow clients to follow download progress.
`requests_session` is a `requests` library `Session` object. It is
used mainly for testing.
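For example, a minimal sketch of a queue configured for testing, assuming only the constructor arguments listed in the table above (the stub session and the `max_parallel_dl` value are illustrative):
```
import requests

from invokeai.app.services.download import DownloadQueueService

# Illustrative: a Session you control (e.g. one with mocked transport in
# unit tests) can be handed to the queue in place of its default session.
test_session = requests.Session()

queue = DownloadQueueService(
    max_parallel_dl=2,              # allow at most two simultaneous downloads
    requests_session=test_session,  # all HTTP traffic goes through this session
)
```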
### The Job object
The queue operates on a series of download job objects. These objects
specify the source and destination of the download, and keep track of
the progress of the download.
The only job type currently implemented is `DownloadJob`, a pydantic object with the
following fields:
| **Field** | **Type** | **Default** | **Description** |
|----------------|-----------------|---------------|-----------------|
| _Fields passed in at job creation time_ |
| `source` | AnyHttpUrl | | Where to download from |
| `dest` | Path | | Where to download to |
| `access_token` | str | | [optional] string containing authentication token for access |
| `on_start` | Callable | | [optional] callback when the download starts |
| `on_progress` | Callable | | [optional] callback called at intervals during download progress |
| `on_complete` | Callable | | [optional] callback called after successful download completion |
| `on_error` | Callable | | [optional] callback called after an error occurs |
| `id` | int | auto assigned | Job ID, an integer >= 0 |
| `priority` | int | 10 | Job priority. Lower priorities run before higher priorities |
| |
| _Fields updated over the course of the download task_
| `status` | DownloadJobStatus| | Status code |
| `download_path` | Path | | Path to the location of the downloaded file |
| `job_started` | float | | Timestamp for when the job started running |
| `job_ended` | float | | Timestamp for when the job completed or errored out |
| `job_sequence` | int | | A counter that is incremented each time a job is dequeued |
| `bytes` | int | 0 | Bytes downloaded so far |
| `total_bytes` | int | 0 | Total size of the file at the remote site |
| `error_type` | str | | String version of the exception that caused an error during download |
| `error` | str | | String version of the traceback associated with an error |
| `cancelled` | bool | False | Set to true if the job was cancelled by the caller|
When you create a job, you can assign it a `priority`. If multiple
jobs are queued, the job with the lowest priority runs first.
Every job has a `source` and a `dest`. `source` is a pydantic.networks AnyHttpUrl object.
The `dest` is a path on the local filesystem that specifies the
destination for the downloaded object. Its semantics are
described below.
When the job is submitted, it is assigned a numeric `id`. The id can
then be used to fetch the job object from the queue.
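For example, a minimal sketch that round-trips a job through its `id` (the URL is a placeholder):
```
job = download_queue.download(source='https://example.com/some_file.png',
                              dest='/tmp/downloads')
print(job.id, job.priority, job.status)

# The numeric id can be used later to fetch the same job from the queue.
same_job = download_queue.id_to_job(job.id)
assert same_job.id == job.id
```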
The `status` field is updated by the queue to indicate where the job
is in its lifecycle. Values are defined in the string enum
`DownloadJobStatus`, a symbol available from
`invokeai.app.services.download_manager`. Possible values are:
| **Value** | **String Value** | **Description** |
|--------------|---------------------|-------------------|
| `WAITING` | waiting | Job is on the queue but not yet running|
| `RUNNING` | running | The download has started |
| `COMPLETED` | completed | Job has finished its work without an error |
| `ERROR` | error | Job encountered an error and will not run again|
`job_started` and `job_ended` indicate when the job
was started (using a python timestamp) and when it completed.
In case of an error, the job's status will be set to `DownloadJobStatus.ERROR`, the text of the
Exception that caused the error will be placed in the `error_type`
field and the traceback that led to the error will be in `error`.
A cancelled job will have status `DownloadJobStatus.ERROR` and an
`error_type` field of "DownloadJobCancelledException". In addition,
the job's `cancelled` property will be set to True.
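A short sketch of inspecting these fields once the queue has drained (status values as listed in the table above; `DownloadJobStatus` is a string enum, so comparison against the string value is shown here):
```
download_queue.join()
for job in download_queue.list_jobs():
    if job.status == "error":
        if job.cancelled:
            print(f"{job.source}: cancelled by caller")
        else:
            print(f"{job.source}: {job.error_type}\n{job.error}")
```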
### Callbacks
Download jobs can be associated with a series of callbacks, each with
the signature `Callable[["DownloadJob"], None]`. The callbacks are assigned
using optional arguments `on_start`, `on_progress`, `on_complete` and
`on_error`. When the corresponding event occurs, the callback will be
invoked and passed the job. The callback will be run in a `try:`
context in the same thread as the download job. Any exceptions that
occur during execution of the callback will be caught and converted
into a log error message, thereby allowing the download to continue.
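A minimal sketch of wiring up completion and error callbacks (the URL is a placeholder):
```
def notify_complete(job):  # signature: Callable[["DownloadJob"], None]
    print(f"done: {job.source} -> {job.download_path}")

def notify_error(job):
    print(f"failed: {job.source} ({job.error_type})")

download_queue.download(source='https://example.com/model.safetensors',
                        dest='/tmp/downloads',
                        on_complete=notify_complete,
                        on_error=notify_error,
                        )
download_queue.join()
```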
#### `TqdmProgress`
The `invokeai.app.services.download.download_default` module defines a
class named `TqdmProgress` which can be used as an `on_progress`
handler to display a completion bar in the console. Use as follows:
```
from invokeai.app.services.download import TqdmProgress
download_queue.download(source='http://some.server.somewhere/some_file',
                        dest='/tmp/downloads',
                        on_progress=TqdmProgress().update
                        )
```
### Events
If the queue was initialized with the InvokeAI event bus (the case
when using `ApiDependencies.invoker.services.download_queue`), then
download events will also be issued on the bus. The events are:
* `download_started` -- This is issued when a job is taken off the
queue and a request is made to the remote server for the URL headers, but before any data
has been downloaded. The event payload will contain the keys `source`
and `download_path`. The latter contains the path that the URL will be
downloaded to.
* `download_progress` -- This is issued periodically as the download
runs. The payload contains the keys `source`, `download_path`,
`current_bytes` and `total_bytes`. The latter two fields can be
used to display the percent complete.
* `download_complete` -- This is issued when the download completes
successfully. The payload contains the keys `source`, `download_path`
and `total_bytes`.
* `download_error` -- This is issued when the download stops because
of an error condition. The payload contains the fields `error_type`
and `error`. The former is the text representation of the exception,
and the latter is a traceback showing where the error occurred.
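As an illustration, a hypothetical client-side handler for the `download_progress` payload, using only the keys listed above:
```
def on_download_progress(payload: dict) -> None:
    total = payload["total_bytes"] or 1  # guard against a zero total
    percent = 100 * payload["current_bytes"] / total
    print(f"{payload['source']} -> {payload['download_path']}: {percent:.1f}%")
```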
### Job control
To create a job call the queue's `download()` method. You can list all
jobs using `list_jobs()`, fetch a single job by its id with
`id_to_job()`, cancel a running job with `cancel_job()`, cancel all
running jobs with `cancel_all_jobs()`, and wait for all jobs to finish
with `join()`.
#### job = queue.download(source, dest, priority, access_token)
Create a new download job and put it on the queue, returning the
DownloadJob object.
#### jobs = queue.list_jobs()
Return a list of all active and inactive `DownloadJob`s.
#### job = queue.id_to_job(id)
Return the job corresponding to the given ID.
#### queue.prune_jobs()
Remove inactive (complete or errored) jobs from the listing returned
by `list_jobs()`.
#### queue.join()
Block until all pending jobs have run to completion or errored out.
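Putting these together, a hedged sketch of a typical control flow (`should_abort()` is a hypothetical application-level check, and the exact argument accepted by `cancel_job()` is an assumption):
```
job = download_queue.download(source='https://example.com/big_file.bin',
                              dest='/tmp/downloads',
                              priority=5)  # runs before default-priority (10) jobs

if should_abort():                   # hypothetical condition
    download_queue.cancel_job(job)   # assumed to take the job object

download_queue.join()        # block until everything completes or errors out
download_queue.prune_jobs()  # drop inactive jobs from list_jobs()
```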

File diff suppressed because it is too large.


@ -1,75 +0,0 @@
# Contributing to the Frontend
# InvokeAI Web UI
- [InvokeAI Web UI](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#invokeai-web-ui)
- [Stack](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#stack)
- [Contributing](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#contributing)
- [Dev Environment](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#dev-environment)
- [Production builds](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#production-builds)
The UI is a fairly straightforward Typescript React app, with the Unified Canvas being more complex.
Code is located in `invokeai/frontend/web/` for review.
## Stack
State management is Redux via [Redux Toolkit](https://github.com/reduxjs/redux-toolkit). We lean heavily on RTK:
- `createAsyncThunk` for HTTP requests
- `createEntityAdapter` for fetching images and models
- `createListenerMiddleware` for workflows
The API client and associated types are generated from the OpenAPI schema. See API_CLIENT.md.
Communication with server is a mix of HTTP and [socket.io](https://github.com/socketio/socket.io-client) (with a simple socket.io redux middleware to help).
[Chakra-UI](https://github.com/chakra-ui/chakra-ui) & [Mantine](https://github.com/mantinedev/mantine) for components and styling.
[Konva](https://github.com/konvajs/react-konva) for the canvas, but we are pushing the limits of what is feasible with it (and HTML canvas in general). We plan to rebuild it with [PixiJS](https://github.com/pixijs/pixijs) to take advantage of WebGL's improved raster handling.
[Vite](https://vitejs.dev/) for bundling.
Localisation is via [i18next](https://github.com/i18next/react-i18next), but translation happens on our [Weblate](https://hosted.weblate.org/engage/invokeai/) project. Only the English source strings should be changed on this repo.
## Contributing
Thanks for your interest in contributing to the InvokeAI Web UI!
We encourage you to ping @psychedelicious and @blessedcoolant on [Discord](https://discord.gg/ZmtBAhwWhy) if you want to contribute, just to touch base and ensure your work doesn't conflict with anything else going on. The project is very active.
### Dev Environment
**Setup**
1. Install [node](https://nodejs.org/en/download/). You can confirm node is installed with:
```bash
node --version
```
2. Install [yarn classic](https://classic.yarnpkg.com/lang/en/) and confirm it is installed by running this:
```bash
npm install --global yarn
yarn --version
```
From `invokeai/frontend/web/` run `yarn install` to get everything set up.
Start everything in dev mode:
1. Ensure your virtual environment is running
2. Start the dev server: `yarn dev`
3. Start the InvokeAI Nodes backend: `python scripts/invokeai-web.py # run from the repo root`
4. Point your browser to the dev server address e.g. [http://localhost:5173/](http://localhost:5173/)
### VSCode Remote Dev
We've noticed an intermittent issue with the VSCode Remote Dev port forwarding. If you use this feature of VSCode, you may intermittently click the Invoke button and then get nothing until the request times out. We suggest disabling the IDE's port forwarding feature and doing it manually via SSH:
`ssh -L 9090:localhost:9090 -L 5173:localhost:5173 user@host`
### Production builds
For a number of technical and logistical reasons, we need to commit UI build artefacts to the repo.
If you submit a PR, there is a good chance we will ask you to include a separate commit with a build of the app.
To build for production, run `yarn build`.


@ -12,7 +12,7 @@ To get started, take a look at our [new contributors checklist](newContributorCh
Once you're setup, for more information, you can review the documentation specific to your area of interest:
* #### [InvokeAI Architecure](../ARCHITECTURE.md)
* #### [Frontend Documentation](./contributingToFrontend.md)
* #### [Frontend Documentation](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web)
* #### [Node Documentation](../INVOCATIONS.md)
* #### [Local Development](../LOCAL_DEVELOPMENT.md)

docs/deprecated/2to3.md

@ -0,0 +1,53 @@
## :octicons-log-16: Important Changes Since Version 2.3
### Nodes
Behind the scenes, InvokeAI has been completely rewritten to support
"nodes," small unitary operations that can be combined into graphs to
form arbitrary workflows. For example, there is a prompt node that
processes the prompt string and feeds it to a text2latent node that
generates a latent image. The latents are then fed to a latent2image
node that translates the latent image into a PNG.
The WebGUI has a node editor that allows you to graphically design and
execute custom node graphs. The ability to save and load graphs is
still a work in progress, but coming soon.
### Command-Line Interface Retired
All "invokeai" command-line interfaces have been retired as of version
3.4.
To launch the Web GUI from the command-line, use the command
`invokeai-web` rather than the traditional `invokeai --web`.
### ControlNet
This version of InvokeAI features ControlNet, a system that allows you
to achieve exact poses for human and animal figures by providing a
model to follow. Full details are found in [ControlNet](features/CONTROLNET.md)
### New Schedulers
The list of schedulers has been completely revamped and brought up to date:
| **Short Name** | **Scheduler** | **Notes** |
|----------------|---------------------------------|-----------------------------|
| **ddim** | DDIMScheduler | |
| **ddpm** | DDPMScheduler | |
| **deis** | DEISMultistepScheduler | |
| **lms** | LMSDiscreteScheduler | |
| **pndm** | PNDMScheduler | |
| **heun** | HeunDiscreteScheduler | original noise schedule |
| **heun_k** | HeunDiscreteScheduler | using karras noise schedule |
| **euler** | EulerDiscreteScheduler | original noise schedule |
| **euler_k** | EulerDiscreteScheduler | using karras noise schedule |
| **kdpm_2** | KDPM2DiscreteScheduler | |
| **kdpm_2_a** | KDPM2AncestralDiscreteScheduler | |
| **dpmpp_2s** | DPMSolverSinglestepScheduler | |
| **dpmpp_2m** | DPMSolverMultistepScheduler | original noise schedule |
| **dpmpp_2m_k** | DPMSolverMultistepScheduler | using karras noise schedule |
| **unipc** | UniPCMultistepScheduler | CPU only |
| **lcm** | LCMScheduler | |
Please see [3.0.0 Release Notes](https://github.com/invoke-ai/InvokeAI/releases/tag/v3.0.0) for further details.


@ -154,14 +154,16 @@ groups in `invokeai.yaml`:
### Web Server
| Setting | Default Value | Description |
|----------|----------------|--------------|
| `host` | `localhost` | Name or IP address of the network interface that the web server will listen on |
| `port` | `9090` | Network port number that the web server will listen on |
| `allow_origins` | `[]` | A list of host names or IP addresses that are allowed to connect to the InvokeAI API in the format `['host1','host2',...]` |
| `allow_credentials` | `true` | Require credentials for a foreign host to access the InvokeAI API (don't change this) |
| `allow_methods` | `*` | List of HTTP methods ("GET", "POST") that the web server is allowed to use when accessing the API |
| `allow_headers` | `*` | List of HTTP headers that the web server will accept when accessing the API |
| Setting | Default Value | Description |
|---------------------|---------------|----------------------------------------------------------------------------------------------------------------------------|
| `host` | `localhost` | Name or IP address of the network interface that the web server will listen on |
| `port` | `9090` | Network port number that the web server will listen on |
| `allow_origins` | `[]` | A list of host names or IP addresses that are allowed to connect to the InvokeAI API in the format `['host1','host2',...]` |
| `allow_credentials` | `true` | Require credentials for a foreign host to access the InvokeAI API (don't change this) |
| `allow_methods` | `*` | List of HTTP methods ("GET", "POST") that the web server is allowed to use when accessing the API |
| `allow_headers` | `*` | List of HTTP headers that the web server will accept when accessing the API |
| `ssl_certfile` | null | Path to an SSL certificate file, used to enable HTTPS. |
| `ssl_keyfile` | null | Path to an SSL keyfile, if the key is not included in the certificate file. |
The documentation for InvokeAI's API can be accessed by browsing to the following URL: [http://localhost:9090/docs].


@ -229,29 +229,28 @@ clarity on the intent and common use cases we expect for utilizing them.
currently being rendered by your browser into a merged copy of the image. This
lowers the resource requirements and should improve performance.
### Seam Correction
### Compositing / Seam Correction
When doing Inpainting or Outpainting, Invoke needs to merge the pixels generated
by Stable Diffusion into your existing image. To do this, the area around the
`seam` at the boundary between your image and the new generation is
by Stable Diffusion into your existing image. This is achieved through compositing - the area around the boundary between your image and the new generation is
automatically blended to produce a seamless output. In a fully automatic
process, a mask is generated to cover the seam, and then the area of the seam is
process, a mask is generated to cover the boundary, and then the area of the boundary is
Inpainted.
Although the default options should work well most of the time, sometimes it can
help to alter the parameters that control the seam Inpainting. A wider seam and
a blur setting of about 1/3 of the seam have been noted as producing
consistently strong results (e.g. 96 wide and 16 blur - adds up to 32 blur with
both sides). Seam strength of 0.7 is best for reducing hard seams.
help to alter the parameters that control the Compositing. A larger blur has
been noted as producing consistently strong results, and a Strength of 0.7 is
best for reducing hard seams.
- **Mode** - What part of the image will have the Compositing applied to it.
- **Mask edge** will apply Compositing to the edge of the masked area
- **Mask** will apply Compositing to the entire masked area
- **Unmasked** will apply Compositing to the entire image
- **Steps** - Number of generation steps that will occur during the Coherence Pass, similar to Denoising Steps. Higher step counts will generally have better results.
- **Strength** - How much noise is added for the Coherence Pass, similar to Denoising Strength. A strength of 0 will result in an unchanged image, while a strength of 1 will result in an image with a completely new area as defined by the Mode setting.
- **Blur** - Adjusts the pixel radius of the mask. A larger blur radius will cause the mask to extend past the visibly masked area, while too small of a blur radius will result in a mask that is smaller than the visibly masked area.
- **Blur Method** - The method of blur applied to the masked area.
- **Seam Size** - The size of the seam masked area. Set higher to make a larger
mask around the seam.
- **Seam Blur** - The size of the blur that is applied on _each_ side of the
masked area.
- **Seam Strength** - The Image To Image Strength parameter used for the
Inpainting generation that is applied to the seam area.
- **Seam Steps** - The number of generation steps that should be used to Inpaint
the seam.
### Infill & Scaling


@ -18,7 +18,7 @@ title: Home
width: 100%;
max-width: 100%;
height: 50px;
background-color: #448AFF;
background-color: #35A4DB;
color: #fff;
font-size: 16px;
border: none;
@ -43,7 +43,7 @@ title: Home
<div align="center" markdown>
[![project logo](assets/invoke_ai_banner.png)](https://github.com/invoke-ai/InvokeAI)
[![project logo](https://github.com/invoke-ai/InvokeAI/assets/31807370/6e3728c7-e90e-4711-905c-3b55844ff5be)](https://github.com/invoke-ai/InvokeAI)
[![discord badge]][discord link]
@ -145,60 +145,6 @@ Mac and Linux machines, and runs on GPU cards with as little as 4 GB of RAM.
- [Guide to InvokeAI Runtime Settings](features/CONFIGURATION.md)
- [Database Maintenance and other Command Line Utilities](features/UTILITIES.md)
## :octicons-log-16: Important Changes Since Version 2.3
### Nodes
Behind the scenes, InvokeAI has been completely rewritten to support
"nodes," small unitary operations that can be combined into graphs to
form arbitrary workflows. For example, there is a prompt node that
processes the prompt string and feeds it to a text2latent node that
generates a latent image. The latents are then fed to a latent2image
node that translates the latent image into a PNG.
The WebGUI has a node editor that allows you to graphically design and
execute custom node graphs. The ability to save and load graphs is
still a work in progress, but coming soon.
### Command-Line Interface Retired
All "invokeai" command-line interfaces have been retired as of version
3.4.
To launch the Web GUI from the command-line, use the command
`invokeai-web` rather than the traditional `invokeai --web`.
### ControlNet
This version of InvokeAI features ControlNet, a system that allows you
to achieve exact poses for human and animal figures by providing a
model to follow. Full details are found in [ControlNet](features/CONTROLNET.md)
### New Schedulers
The list of schedulers has been completely revamped and brought up to date:
| **Short Name** | **Scheduler** | **Notes** |
|----------------|---------------------------------|-----------------------------|
| **ddim** | DDIMScheduler | |
| **ddpm** | DDPMScheduler | |
| **deis** | DEISMultistepScheduler | |
| **lms** | LMSDiscreteScheduler | |
| **pndm** | PNDMScheduler | |
| **heun** | HeunDiscreteScheduler | original noise schedule |
| **heun_k** | HeunDiscreteScheduler | using karras noise schedule |
| **euler** | EulerDiscreteScheduler | original noise schedule |
| **euler_k** | EulerDiscreteScheduler | using karras noise schedule |
| **kdpm_2** | KDPM2DiscreteScheduler | |
| **kdpm_2_a** | KDPM2AncestralDiscreteScheduler | |
| **dpmpp_2s** | DPMSolverSinglestepScheduler | |
| **dpmpp_2m** | DPMSolverMultistepScheduler | original noise schedule |
| **dpmpp_2m_k** | DPMSolverMultistepScheduler | using karras noise schedule |
| **unipc** | UniPCMultistepScheduler | CPU only |
| **lcm** | LCMScheduler | |
Please see [3.0.0 Release Notes](https://github.com/invoke-ai/InvokeAI/releases/tag/v3.0.0) for further details.
## :material-target: Troubleshooting
Please check out our **[:material-frequently-asked-questions:


@ -293,6 +293,19 @@ manager, please follow these steps:
## Developer Install
!!! warning
InvokeAI uses a SQLite database. By running on `main`, you accept responsibility for your database. This
means making regular backups (especially before pulling) and/or fixing it yourself in the event that a
PR introduces a schema change.
If you don't need persistent backend storage, you can use an ephemeral in-memory database by setting
`use_memory_db: true` under `Path:` in your `invokeai.yaml` file.
If this is untenable, you should run the application via the official installer or a manual install of the
python package from pypi. These releases will not break your database.
If you have an interest in how InvokeAI works, or you would like to
add features or bugfixes, you are encouraged to install the source
code for InvokeAI. For this to work, you will need to install the
@ -388,3 +401,5 @@ environment variable INVOKEAI_ROOT to point to the installation directory.
Note that if you run into problems with the Conda installation, the InvokeAI
staff will **not** be able to help you out. Caveat Emptor!
[dev-chat]: https://discord.com/channels/1020123559063990373/1049495067846524939


@ -0,0 +1,10 @@
document.addEventListener("DOMContentLoaded", function () {
  var script = document.createElement("script");
  script.src = "https://widget.kapa.ai/kapa-widget.bundle.js";
  script.setAttribute("data-website-id", "b5973bb1-476b-451e-8cf4-98de86745a10");
  script.setAttribute("data-project-name", "Invoke.AI");
  script.setAttribute("data-project-color", "#11213C");
  script.setAttribute("data-project-logo", "https://avatars.githubusercontent.com/u/113954515?s=280&v=4");
  script.async = true;
  document.head.appendChild(script);
});


@ -6,10 +6,17 @@ If you're not familiar with Diffusion, take a look at our [Diffusion Overview.](
## Features
### Workflow Library
The Workflow Library enables you to save workflows to the Invoke database, allowing you to easily create, modify, and share workflows as needed.
A curated set of workflows is provided by default - these are designed to help explain important nodes' usage in the Workflow Editor.
![workflow_library](../assets/nodes/workflow_library.png)
### Linear View
The Workflow Editor allows you to create a UI for your workflow, to make it easier to iterate on your generations.
To add an input to the Linear UI, right click on the input label and select "Add to Linear View".
To add an input to the Linear UI, right click on the **input label** and select "Add to Linear View".
The Linear UI View will also be part of the saved workflow, allowing you to share workflows and enable others to use them, regardless of complexity.
@ -30,7 +37,7 @@ Any node or input field can be renamed in the workflow editor. If the input fiel
Nodes have a "Use Cache" option in their footer. This allows for performance improvements by using the previously cached values during the workflow processing.
## Important Concepts
## Important Nodes & Concepts
There are several node grouping concepts that can be examined with a narrow focus. These (and other) groupings can be pieced together to make up functional graph setups, and are important to understanding how groups of nodes work together as part of a whole. Note that the screenshots below aren't examples of complete functioning node graphs (see Examples).
@ -56,7 +63,7 @@ The ImageToLatents node takes in a pixel image and a VAE and outputs a latents.
It is common to want to use both the same seed (for continuity) and random seeds (for variety). To define a seed, simply enter it into the 'Seed' field on a noise node. Conversely, the RandomInt node generates a random integer between 'Low' and 'High', and can be used as input to the 'Seed' edge point on a noise node to randomize your seed.
![groupsrandseed](../assets/nodes/groupsrandseed.png)
![groupsrandseed](../assets/nodes/groupsnoise.png)
### ControlNet


@ -13,25 +13,37 @@ If you'd prefer, you can also just download the whole node folder from the linke
To use a community workflow, download the `.json` node graph file and load it into Invoke AI via the **Load Workflow** button in the Workflow Editor.
- Community Nodes
+ [Adapters-Linked](#adapters-linked-nodes)
+ [Average Images](#average-images)
+ [Clean Image Artifacts After Cut](#clean-image-artifacts-after-cut)
+ [Close Color Mask](#close-color-mask)
+ [Clothing Mask](#clothing-mask)
+ [Contrast Limited Adaptive Histogram Equalization](#contrast-limited-adaptive-histogram-equalization)
+ [Depth Map from Wavefront OBJ](#depth-map-from-wavefront-obj)
+ [Film Grain](#film-grain)
+ [Generative Grammar-Based Prompt Nodes](#generative-grammar-based-prompt-nodes)
+ [GPT2RandomPromptMaker](#gpt2randompromptmaker)
+ [Grid to Gif](#grid-to-gif)
+ [Halftone](#halftone)
+ [Ideal Size](#ideal-size)
+ [Image and Mask Composition Pack](#image-and-mask-composition-pack)
+ [Image Dominant Color](#image-dominant-color)
+ [Image to Character Art Image Nodes](#image-to-character-art-image-nodes)
+ [Image Picker](#image-picker)
+ [Image Resize Plus](#image-resize-plus)
+ [Load Video Frame](#load-video-frame)
+ [Make 3D](#make-3d)
+ [Mask Operations](#mask-operations)
+ [Match Histogram](#match-histogram)
+ [Metadata-Linked](#metadata-linked-nodes)
+ [Negative Image](#negative-image)
+ [Nightmare Promptgen](#nightmare-promptgen)
+ [Oobabooga](#oobabooga)
+ [Prompt Tools](#prompt-tools)
+ [Remote Image](#remote-image)
+ [Remove Background](#remove-background)
+ [Retroize](#retroize)
+ [Size Stepper Nodes](#size-stepper-nodes)
+ [Simple Skin Detection](#simple-skin-detection)
+ [Text font to Image](#text-font-to-image)
+ [Thresholding](#thresholding)
+ [Unsharp Mask](#unsharp-mask)
@ -41,6 +53,19 @@ To use a community workflow, download the the `.json` node graph file and load i
- [Help](#help)
--------------------------------
### Adapters Linked Nodes
**Description:** A set of nodes for linked adapters (ControlNet, IP-Adapter & T2I-Adapter). This allows multiple adapters to be chained together without using a `collect` node, which means it can be used inside an `iterate` node without the issue of collecting on every iteration.
- `ControlNet-Linked` - Collects ControlNet info to pass to other nodes.
- `IP-Adapter-Linked` - Collects IP-Adapter info to pass to other nodes.
- `T2I-Adapter-Linked` - Collects T2I-Adapter info to pass to other nodes.
Note: These are inherited from the core nodes so any update to the core nodes should be reflected in these.
**Node Link:** https://github.com/skunkworxdark/adapters-linked-nodes
--------------------------------
### Average Images
@ -48,6 +73,46 @@ To use a community workflow, download the the `.json` node graph file and load i
**Node Link:** https://github.com/JPPhoto/average-images-node
--------------------------------
### Clean Image Artifacts After Cut
**Description:** Removes residual artifacts after an image is separated from its background.
**Node Link:** https://github.com/VeyDlin/clean-artifact-after-cut-node
**View:**
</br><img src="https://raw.githubusercontent.com/VeyDlin/clean-artifact-after-cut-node/master/.readme/node.png" width="500" />
--------------------------------
### Close Color Mask
**Description:** Generates a mask for images based on a closely matching color, useful for color-based selections.
**Node Link:** https://github.com/VeyDlin/close-color-mask-node
**View:**
</br><img src="https://raw.githubusercontent.com/VeyDlin/close-color-mask-node/master/.readme/node.png" width="500" />
--------------------------------
### Clothing Mask
**Description:** Employs a U2NET neural network trained for the segmentation of clothing items in images.
**Node Link:** https://github.com/VeyDlin/clothing-mask-node
**View:**
</br><img src="https://raw.githubusercontent.com/VeyDlin/clothing-mask-node/master/.readme/node.png" width="500" />
--------------------------------
### Contrast Limited Adaptive Histogram Equalization
**Description:** Enhances local image contrast using adaptive histogram equalization with contrast limiting.
**Node Link:** https://github.com/VeyDlin/clahe-node
**View:**
</br><img src="https://raw.githubusercontent.com/VeyDlin/clahe-node/master/.readme/node.png" width="500" />
--------------------------------
### Depth Map from Wavefront OBJ
@ -130,13 +195,6 @@ CMYK Halftone Output:
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/c59c578f-db8e-4d66-8c66-2851752d75ea" width="300" />
--------------------------------
### Ideal Size
**Description:** This node calculates an ideal image size for a first pass of a multi-pass upscaling. The aim is to avoid duplication that results from choosing a size larger than the model is capable of.
**Node Link:** https://github.com/JPPhoto/ideal-size-node
--------------------------------
### Image and Mask Composition Pack
@ -164,6 +222,16 @@ This includes 15 Nodes:
</br><img src="https://raw.githubusercontent.com/dwringer/composition-nodes/main/composition_pack_overview.jpg" width="500" />
--------------------------------
### Image Dominant Color
**Description:** Identifies and extracts the dominant color from an image using k-means clustering.
**Node Link:** https://github.com/VeyDlin/image-dominant-color-node
**View:**
</br><img src="https://raw.githubusercontent.com/VeyDlin/image-dominant-color-node/master/.readme/node.png" width="500" />
--------------------------------
### Image to Character Art Image Nodes
@ -185,6 +253,17 @@ This includes 15 Nodes:
**Node Link:** https://github.com/JPPhoto/image-picker-node
--------------------------------
### Image Resize Plus
**Description:** Provides various image resizing options such as fill, stretch, fit, center, and crop.
**Node Link:** https://github.com/VeyDlin/image-resize-plus-node
**View:**
</br><img src="https://raw.githubusercontent.com/VeyDlin/image-resize-plus-node/master/.readme/node.png" width="500" />
--------------------------------
### Load Video Frame
@ -209,6 +288,16 @@ This includes 15 Nodes:
<img src="https://gitlab.com/srcrr/shift3d/-/raw/main/example-1.png" width="300" />
<img src="https://gitlab.com/srcrr/shift3d/-/raw/main/example-2.png" width="300" />
--------------------------------
### Mask Operations
**Description:** Offers logical operations (OR, SUB, AND) for combining and manipulating image masks.
**Node Link:** https://github.com/VeyDlin/mask-operations-node
**View:**
</br><img src="https://raw.githubusercontent.com/VeyDlin/mask-operations-node/master/.readme/node.png" width="500" />
--------------------------------
### Match Histogram
@ -226,6 +315,37 @@ See full docs here: https://github.com/skunkworxdark/Prompt-tools-nodes/edit/mai
<img src="https://github.com/skunkworxdark/match_histogram/assets/21961335/ed12f329-a0ef-444a-9bae-129ed60d6097" width="300" />
--------------------------------
### Metadata Linked Nodes
**Description:** A set of nodes for Metadata. Collect Metadata from within an `iterate` node & extract metadata from an image.
- `Metadata Item Linked` - Allows collecting of metadata while within an iterate node with no need for a collect node or conversion to metadata node.
- `Metadata From Image` - Provides Metadata from an image.
- `Metadata To String` - Extracts a String value of a label from metadata.
- `Metadata To Integer` - Extracts an Integer value of a label from metadata.
- `Metadata To Float` - Extracts a Float value of a label from metadata.
- `Metadata To Scheduler` - Extracts a Scheduler value of a label from metadata.
**Node Link:** https://github.com/skunkworxdark/metadata-linked-nodes
--------------------------------
### Negative Image
**Description:** Creates a negative version of an image, effective for visual effects and mask inversion.
**Node Link:** https://github.com/VeyDlin/negative-image-node
**View:**
</br><img src="https://raw.githubusercontent.com/VeyDlin/negative-image-node/master/.readme/node.png" width="500" />
--------------------------------
### Nightmare Promptgen
**Description:** Nightmare Prompt Generator - Uses a local text generation model to create unique, imaginative (but usually nightmarish) prompts for InvokeAI. By default, it allows you to choose from some gpt-neo models I finetuned on over 2500 of my own InvokeAI prompts in Compel format, but you're able to add your own, as well. Offers support for replacing any troublesome words with a random choice from a list you can also define.
**Node Link:** [https://github.com/gogurtenjoyer/nightmare-promptgen](https://github.com/gogurtenjoyer/nightmare-promptgen)
--------------------------------
### Oobabooga
@ -289,6 +409,15 @@ See full docs here: https://github.com/skunkworxdark/Prompt-tools-nodes/edit/mai
**Node Link:** https://github.com/fieldOfView/InvokeAI-remote_image
--------------------------------
### Remove Background
**Description:** An integration of the rembg package to remove backgrounds from images using multiple U2NET models.
**Node Link:** https://github.com/VeyDlin/remove-background-node
**View:**
</br><img src="https://raw.githubusercontent.com/VeyDlin/remove-background-node/master/.readme/node.png" width="500" />
--------------------------------
### Retroize
@ -301,6 +430,17 @@ See full docs here: https://github.com/skunkworxdark/Prompt-tools-nodes/edit/mai
<img src="https://github.com/Ar7ific1al/InvokeAI_nodes_retroize/assets/2306586/de8b4fa6-324c-4c2d-b36c-297600c73974" width="500" />
--------------------------------
### Simple Skin Detection
**Description:** Detects skin in images based on predefined color thresholds.
**Node Link:** https://github.com/VeyDlin/simple-skin-detection-node
**View:**
</br><img src="https://raw.githubusercontent.com/VeyDlin/simple-skin-detection-node/master/.readme/node.png" width="500" />
--------------------------------
### Size Stepper Nodes
@ -386,6 +526,7 @@ See full docs here: https://github.com/skunkworxdark/XYGrid_nodes/edit/main/READ
<img src="https://github.com/skunkworxdark/XYGrid_nodes/blob/main/images/collage.png" width="300" />
--------------------------------
### Example Node Template

View File

@ -36,6 +36,7 @@ their descriptions.
| Integer Math | Perform basic math operations on two integers |
| Convert Image Mode | Converts an image to a different mode. |
| Crop Image | Crops an image to a specified box. The box can be outside of the image. |
| Ideal Size | Calculates an ideal image size for latents for a first pass of a multi-pass upscaling to avoid duplication and other artifacts |
| Image Hue Adjustment | Adjusts the Hue of an image. |
| Inverse Lerp Image | Inverse linear interpolation of all pixels of an image |
| Image Primitive | An image primitive value |

View File

@ -1,6 +1,6 @@
# Example Workflows
We've curated some example workflows for you to get started with Workflows in InvokeAI
We've curated some example workflows for you to get started with Workflows in InvokeAI! These can also be found in the Workflow Library, located in the Workflow Editor of Invoke.
To use them, right click on your desired workflow, follow the link to GitHub and click the "⬇" button to download the raw file. You can then use the "Load Workflow" functionality in InvokeAI to load the workflow and start generating images!

View File

@ -215,6 +215,7 @@ We thank them for all of their time and hard work.
- Robert Bolender
- Robin Rombach
- Rohan Barar
- rohinish404
- rpagliuca
- rromb
- Rupesh Sreeraman

View File

@ -0,0 +1,5 @@
:root {
--md-primary-fg-color: #35A4DB;
--md-primary-fg-color--light: #35A4DB;
--md-primary-fg-color--dark: #35A4DB;
}

Five file diffs suppressed because they are too large.

View File

@ -1,8 +1,8 @@
{
"name": "Text to Image",
"name": "Text to Image - SD1.5",
"author": "InvokeAI",
"description": "Sample text to image workflow for Stable Diffusion 1.5/2",
"version": "1.0.1",
"version": "1.1.0",
"contact": "invoke@invoke.ai",
"tags": "text2image, SD1.5, SD2, default",
"notes": "",
@ -18,10 +18,19 @@
{
"nodeId": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"fieldName": "prompt"
},
{
"nodeId": "55705012-79b9-4aac-9f26-c0b10309785b",
"fieldName": "width"
},
{
"nodeId": "55705012-79b9-4aac-9f26-c0b10309785b",
"fieldName": "height"
}
],
"meta": {
"version": "1.0.0"
"category": "default",
"version": "2.0.0"
},
"nodes": [
{
@ -30,44 +39,56 @@
"data": {
"id": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"type": "compel",
"label": "Negative Compel Prompt",
"isOpen": true,
"notes": "",
"isIntermediate": true,
"useCache": true,
"version": "1.0.0",
"nodePack": "invokeai",
"inputs": {
"prompt": {
"id": "7739aff6-26cb-4016-8897-5a1fb2305e4e",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "Negative Prompt",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "StringField"
},
"value": ""
},
"clip": {
"id": "48d23dce-a6ae-472a-9f8c-22a714ea5ce0",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "ClipField"
}
}
},
"outputs": {
"conditioning": {
"id": "37cf3a9d-f6b7-4b64-8ff6-2558c5ecc447",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "ConditioningField"
}
}
},
"label": "Negative Compel Prompt",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
}
},
"width": 320,
"height": 261,
"height": 259,
"position": {
"x": 995.7263915923627,
"y": 239.67783573351227
"x": 1000,
"y": 350
}
},
{
@ -76,37 +97,60 @@
"data": {
"id": "55705012-79b9-4aac-9f26-c0b10309785b",
"type": "noise",
"label": "",
"isOpen": true,
"notes": "",
"isIntermediate": true,
"useCache": true,
"version": "1.0.1",
"nodePack": "invokeai",
"inputs": {
"seed": {
"id": "6431737c-918a-425d-a3b4-5d57e2f35d4d",
"name": "seed",
"type": "integer",
"fieldKind": "input",
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
},
"value": 0
},
"width": {
"id": "38fc5b66-fe6e-47c8-bba9-daf58e454ed7",
"name": "width",
"type": "integer",
"fieldKind": "input",
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
},
"value": 512
},
"height": {
"id": "16298330-e2bf-4872-a514-d6923df53cbb",
"name": "height",
"type": "integer",
"fieldKind": "input",
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
},
"value": 512
},
"use_cpu": {
"id": "c7c436d3-7a7a-4e76-91e4-c6deb271623c",
"name": "use_cpu",
"type": "boolean",
"fieldKind": "input",
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "BooleanField"
},
"value": true
}
},
@ -114,35 +158,40 @@
"noise": {
"id": "50f650dc-0184-4e23-a927-0497a96fe954",
"name": "noise",
"type": "LatentsField",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "LatentsField"
}
},
"width": {
"id": "bb8a452b-133d-42d1-ae4a-3843d7e4109a",
"name": "width",
"type": "integer",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
}
},
"height": {
"id": "35cfaa12-3b8b-4b7a-a884-327ff3abddd9",
"name": "height",
"type": "integer",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
}
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
}
},
"width": 320,
"height": 389,
"height": 388,
"position": {
"x": 993.4442117555518,
"y": 605.6757415334787
"x": 600,
"y": 325
}
},
{
@ -151,13 +200,24 @@
"data": {
"id": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"type": "main_model_loader",
"label": "",
"isOpen": true,
"notes": "",
"isIntermediate": true,
"useCache": true,
"version": "1.0.0",
"nodePack": "invokeai",
"inputs": {
"model": {
"id": "993eabd2-40fd-44fe-bce7-5d0c7075ddab",
"name": "model",
"type": "MainModelField",
"fieldKind": "input",
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "MainModelField"
},
"value": {
"model_name": "stable-diffusion-v1-5",
"base_model": "sd-1",
@ -169,35 +229,40 @@
"unet": {
"id": "5c18c9db-328d-46d0-8cb9-143391c410be",
"name": "unet",
"type": "UNetField",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "UNetField"
}
},
"clip": {
"id": "6effcac0-ec2f-4bf5-a49e-a2c29cf921f4",
"name": "clip",
"type": "ClipField",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "ClipField"
}
},
"vae": {
"id": "57683ba3-f5f5-4f58-b9a2-4b83dacad4a1",
"name": "vae",
"type": "VaeField",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "VaeField"
}
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
}
},
"width": 320,
"height": 226,
"position": {
"x": 163.04436745878343,
"y": 254.63156870373479
"x": 600,
"y": 25
}
},
{
@ -206,44 +271,56 @@
"data": {
"id": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"type": "compel",
"label": "Positive Compel Prompt",
"isOpen": true,
"notes": "",
"isIntermediate": true,
"useCache": true,
"version": "1.0.0",
"nodePack": "invokeai",
"inputs": {
"prompt": {
"id": "7739aff6-26cb-4016-8897-5a1fb2305e4e",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "Positive Prompt",
"value": ""
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "StringField"
},
"value": "Super cute tiger cub, national geographic award-winning photograph"
},
"clip": {
"id": "48d23dce-a6ae-472a-9f8c-22a714ea5ce0",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "ClipField"
}
}
},
"outputs": {
"conditioning": {
"id": "37cf3a9d-f6b7-4b64-8ff6-2558c5ecc447",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "ConditioningField"
}
}
},
"label": "Positive Compel Prompt",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
}
},
"width": 320,
"height": 261,
"height": 259,
"position": {
"x": 595.7263915923627,
"y": 239.67783573351227
"x": 1000,
"y": 25
}
},
{
@ -252,21 +329,36 @@
"data": {
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"type": "rand_int",
"label": "Random Seed",
"isOpen": false,
"notes": "",
"isIntermediate": true,
"useCache": false,
"version": "1.0.0",
"nodePack": "invokeai",
"inputs": {
"low": {
"id": "3ec65a37-60ba-4b6c-a0b2-553dd7a84b84",
"name": "low",
"type": "integer",
"fieldKind": "input",
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
},
"value": 0
},
"high": {
"id": "085f853a-1a5f-494d-8bec-e4ba29a3f2d1",
"name": "high",
"type": "integer",
"fieldKind": "input",
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
},
"value": 2147483647
}
},
@ -274,23 +366,20 @@
"value": {
"id": "812ade4d-7699-4261-b9fc-a6c9d2ab55ee",
"name": "value",
"type": "integer",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
}
}
},
"label": "Random Seed",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": false,
"version": "1.0.0"
}
},
"width": 320,
"height": 218,
"height": 32,
"position": {
"x": 541.094822888628,
"y": 694.5704476446829
"x": 600,
"y": 275
}
},
{
@ -299,144 +388,224 @@
"data": {
"id": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"type": "denoise_latents",
"label": "",
"isOpen": true,
"notes": "",
"isIntermediate": true,
"useCache": true,
"version": "1.5.0",
"nodePack": "invokeai",
"inputs": {
"positive_conditioning": {
"id": "90b7f4f8-ada7-4028-8100-d2e54f192052",
"name": "positive_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "ConditioningField"
}
},
"negative_conditioning": {
"id": "9393779e-796c-4f64-b740-902a1177bf53",
"name": "negative_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "ConditioningField"
}
},
"noise": {
"id": "8e17f1e5-4f98-40b1-b7f4-86aeeb4554c1",
"name": "noise",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "LatentsField"
}
},
"steps": {
"id": "9b63302d-6bd2-42c9-ac13-9b1afb51af88",
"name": "steps",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 10
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
},
"value": 50
},
"cfg_scale": {
"id": "87dd04d3-870e-49e1-98bf-af003a810109",
"name": "cfg_scale",
"type": "FloatPolymorphic",
"fieldKind": "input",
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": true,
"name": "FloatField"
},
"value": 7.5
},
"denoising_start": {
"id": "f369d80f-4931-4740-9bcd-9f0620719fab",
"name": "denoising_start",
"type": "float",
"fieldKind": "input",
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "FloatField"
},
"value": 0
},
"denoising_end": {
"id": "747d10e5-6f02-445c-994c-0604d814de8c",
"name": "denoising_end",
"type": "float",
"fieldKind": "input",
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "FloatField"
},
"value": 1
},
"scheduler": {
"id": "1de84a4e-3a24-4ec8-862b-16ce49633b9b",
"name": "scheduler",
"type": "Scheduler",
"fieldKind": "input",
"label": "",
"value": "euler"
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "SchedulerField"
},
"value": "unipc"
},
"unet": {
"id": "ffa6fef4-3ce2-4bdb-9296-9a834849489b",
"name": "unet",
"type": "UNetField",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "UNetField"
}
},
"control": {
"id": "077b64cb-34be-4fcc-83f2-e399807a02bd",
"name": "control",
"type": "ControlPolymorphic",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": true,
"name": "ControlField"
}
},
"ip_adapter": {
"id": "1d6948f7-3a65-4a65-a20c-768b287251aa",
"name": "ip_adapter",
"type": "IPAdapterPolymorphic",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": true,
"name": "IPAdapterField"
}
},
"t2i_adapter": {
"id": "75e67b09-952f-4083-aaf4-6b804d690412",
"name": "t2i_adapter",
"type": "T2IAdapterPolymorphic",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": true,
"name": "T2IAdapterField"
}
},
"cfg_rescale_multiplier": {
"id": "9101f0a6-5fe0-4826-b7b3-47e5d506826c",
"name": "cfg_rescale_multiplier",
"fieldKind": "input",
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "FloatField"
},
"value": 0
},
"latents": {
"id": "334d4ba3-5a99-4195-82c5-86fb3f4f7d43",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "LatentsField"
}
},
"denoise_mask": {
"id": "0d3dbdbf-b014-4e95-8b18-ff2ff9cb0bfa",
"name": "denoise_mask",
"type": "DenoiseMaskField",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "DenoiseMaskField"
}
}
},
"outputs": {
"latents": {
"id": "70fa5bbc-0c38-41bb-861a-74d6d78d2f38",
"name": "latents",
"type": "LatentsField",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "LatentsField"
}
},
"width": {
"id": "98ee0e6c-82aa-4e8f-8be5-dc5f00ee47f0",
"name": "width",
"type": "integer",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
}
},
"height": {
"id": "e8cb184a-5e1a-47c8-9695-4b8979564f5d",
"name": "height",
"type": "integer",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
}
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.4.0"
}
},
"width": 320,
"height": 646,
"height": 703,
"position": {
"x": 1476.5794704734735,
"y": 256.80174342731783
"x": 1400,
"y": 25
}
},
{
@ -445,153 +614,185 @@
"data": {
"id": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"type": "l2i",
"label": "",
"isOpen": true,
"notes": "",
"isIntermediate": false,
"useCache": true,
"version": "1.2.0",
"nodePack": "invokeai",
"inputs": {
"metadata": {
"id": "ab375f12-0042-4410-9182-29e30db82c85",
"name": "metadata",
"type": "MetadataField",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "MetadataField"
}
},
"latents": {
"id": "3a7e7efd-bff5-47d7-9d48-615127afee78",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "LatentsField"
}
},
"vae": {
"id": "a1f5f7a1-0795-4d58-b036-7820c0b0ef2b",
"name": "vae",
"type": "VaeField",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "VaeField"
}
},
"tiled": {
"id": "da52059a-0cee-4668-942f-519aa794d739",
"name": "tiled",
"type": "boolean",
"fieldKind": "input",
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "BooleanField"
},
"value": false
},
"fp32": {
"id": "c4841df3-b24e-4140-be3b-ccd454c2522c",
"name": "fp32",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "BooleanField"
},
"value": true
}
},
"outputs": {
"image": {
"id": "72d667d0-cf85-459d-abf2-28bd8b823fe7",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "ImageField"
}
},
"width": {
"id": "c8c907d8-1066-49d1-b9a6-83bdcd53addc",
"name": "width",
"type": "integer",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
}
},
"height": {
"id": "230f359c-b4ea-436c-b372-332d7dcdca85",
"name": "height",
"type": "integer",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
}
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": false,
"useCache": true,
"version": "1.0.0"
}
},
"width": 320,
"height": 267,
"height": 266,
"position": {
"x": 2037.9648469717395,
"y": 426.10844427600136
"x": 1800,
"y": 25
}
}
],
"edges": [
{
"source": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"sourceHandle": "value",
"target": "55705012-79b9-4aac-9f26-c0b10309785b",
"targetHandle": "seed",
"id": "reactflow__edge-ea94bc37-d995-4a83-aa99-4af42479f2f2value-55705012-79b9-4aac-9f26-c0b10309785bseed",
"type": "default"
"source": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"target": "55705012-79b9-4aac-9f26-c0b10309785b",
"type": "default",
"sourceHandle": "value",
"targetHandle": "seed"
},
{
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"sourceHandle": "clip",
"target": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"targetHandle": "clip",
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8clip-7d8bf987-284f-413a-b2fd-d825445a5d6cclip",
"type": "default"
},
{
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"target": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"type": "default",
"sourceHandle": "clip",
"target": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"targetHandle": "clip",
"targetHandle": "clip"
},
{
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8clip-93dc02a4-d05b-48ed-b99c-c9b616af3402clip",
"type": "default"
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"target": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"type": "default",
"sourceHandle": "clip",
"targetHandle": "clip"
},
{
"source": "55705012-79b9-4aac-9f26-c0b10309785b",
"sourceHandle": "noise",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"targetHandle": "noise",
"id": "reactflow__edge-55705012-79b9-4aac-9f26-c0b10309785bnoise-eea2702a-19fb-45b5-9d75-56b4211ec03cnoise",
"type": "default"
"source": "55705012-79b9-4aac-9f26-c0b10309785b",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"type": "default",
"sourceHandle": "noise",
"targetHandle": "noise"
},
{
"source": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"sourceHandle": "conditioning",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"targetHandle": "positive_conditioning",
"id": "reactflow__edge-7d8bf987-284f-413a-b2fd-d825445a5d6cconditioning-eea2702a-19fb-45b5-9d75-56b4211ec03cpositive_conditioning",
"type": "default"
},
{
"source": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"source": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"type": "default",
"sourceHandle": "conditioning",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"targetHandle": "negative_conditioning",
"targetHandle": "positive_conditioning"
},
{
"id": "reactflow__edge-93dc02a4-d05b-48ed-b99c-c9b616af3402conditioning-eea2702a-19fb-45b5-9d75-56b4211ec03cnegative_conditioning",
"type": "default"
},
{
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"sourceHandle": "unet",
"source": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"targetHandle": "unet",
"type": "default",
"sourceHandle": "conditioning",
"targetHandle": "negative_conditioning"
},
{
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8unet-eea2702a-19fb-45b5-9d75-56b4211ec03cunet",
"type": "default"
},
{
"source": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"sourceHandle": "latents",
"target": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"targetHandle": "latents",
"id": "reactflow__edge-eea2702a-19fb-45b5-9d75-56b4211ec03clatents-58c957f5-0d01-41fc-a803-b2bbf0413d4flatents",
"type": "default"
},
{
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"sourceHandle": "vae",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"type": "default",
"sourceHandle": "unet",
"targetHandle": "unet"
},
{
"id": "reactflow__edge-eea2702a-19fb-45b5-9d75-56b4211ec03clatents-58c957f5-0d01-41fc-a803-b2bbf0413d4flatents",
"source": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"target": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"targetHandle": "vae",
"type": "default",
"sourceHandle": "latents",
"targetHandle": "latents"
},
{
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8vae-58c957f5-0d01-41fc-a803-b2bbf0413d4fvae",
"type": "default"
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"target": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"type": "default",
"sourceHandle": "vae",
"targetHandle": "vae"
}
]
}
}
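
The diff above migrates the default Text to Image workflow to the newer workflow format: `meta.version` becomes `2.0.0`, a `category` is added, and each input/output field carries a structured `type` object (`isCollection`, `isCollectionOrScalar`, `name`) instead of a plain type string. A minimal sketch of reading that new shape, assuming the workflow JSON has been downloaded locally (the function and file names are hypothetical):

```python
# A minimal sketch (not InvokeAI's actual loader) for reading the new workflow
# format shown above: each field now carries a structured "type" object.
import json

def summarize_workflow(path: str) -> None:
    with open(path, encoding="utf-8") as f:
        wf = json.load(f)

    print(f"{wf['name']} (meta version {wf['meta']['version']}, category {wf['meta'].get('category')})")
    for node in wf["nodes"]:
        data = node["data"]
        for field_name, field in data.get("inputs", {}).items():
            t = field["type"]  # e.g. {"isCollection": False, "isCollectionOrScalar": False, "name": "IntegerField"}
            prefix = "collection of " if t["isCollection"] else ""
            print(f"  {data['type']}.{field_name}: {prefix}{t['name']}")

# Example (file name hypothetical):
# summarize_workflow("Text_to_Image_SD1.5.json")
```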

View File

@ -2,43 +2,72 @@
set -e
BCYAN="\e[1;36m"
BYELLOW="\e[1;33m"
BGREEN="\e[1;32m"
BRED="\e[1;31m"
RED="\e[31m"
RESET="\e[0m"
function is_bin_in_path {
builtin type -P "$1" &>/dev/null
}
function git_show {
git show -s --format='%h %s' $1
}
cd "$(dirname "$0")"
echo -e "${BYELLOW}This script must be run from the installer directory!${RESET}"
echo "The current working directory is $(pwd)"
read -p "If that looks right, press any key to proceed, or CTRL-C to exit..."
echo
# Some machines only have `python3` in PATH, others have `python` - make an alias.
# We can use a function to approximate an alias within a non-interactive shell.
if ! is_bin_in_path python && is_bin_in_path python3; then
function python {
python3 "$@"
}
fi
if [[ -v "VIRTUAL_ENV" ]]; then
# we can't just call 'deactivate' because this function is not exported
# to the environment of this script from the bash process that runs the script
echo "A virtual environment is activated. Please deactivate it before proceeding".
echo -e "${BRED}A virtual environment is activated. Please deactivate it before proceeding.${RESET}"
exit -1
fi
VERSION=$(cd ..; python -c "from invokeai.version import __version__ as version; print(version)")
VERSION=$(
cd ..
python -c "from invokeai.version import __version__ as version; print(version)"
)
PATCH=""
VERSION="v${VERSION}${PATCH}"
LATEST_TAG="v3-latest"
echo Building installer for version $VERSION
echo "Be certain that you're in the 'installer' directory before continuing."
read -p "Press any key to continue, or CTRL-C to exit..."
echo -e "${BGREEN}HEAD${RESET}:"
git_show
echo
read -e -p "Tag this repo with '${VERSION}' and '${LATEST_TAG}'? [n]: " input
RESPONSE=${input:='n'}
if [ "$RESPONSE" == 'y' ]; then
# ---------------------- FRONTEND ----------------------
git push origin :refs/tags/$VERSION
if ! git tag -fa $VERSION ; then
echo "Existing/invalid tag"
exit -1
fi
pushd ../invokeai/frontend/web >/dev/null
echo
echo "Installing frontend dependencies..."
echo
pnpm i --frozen-lockfile
echo
echo "Building frontend..."
echo
pnpm build
popd
git push origin :refs/tags/$LATEST_TAG
git tag -fa $LATEST_TAG
# ---------------------- BACKEND ----------------------
echo "remember to push --tags!"
fi
# ----------------------
echo Building the wheel
echo
echo "Building wheel..."
echo
# install the 'build' package in the user site packages, if needed
# could be improved by using a temporary venv, but it's tiny and harmless
@ -46,12 +75,15 @@ if [[ $(python -c 'from importlib.util import find_spec; print(find_spec("build"
pip install --user build
fi
rm -r ../build
rm -rf ../build
python -m build --wheel --outdir dist/ ../.
# ----------------------
echo Building installer zip fles for InvokeAI $VERSION
echo
echo "Building installer zip files for InvokeAI ${VERSION}..."
echo
# get rid of any old ones
rm -f *.zip
@ -59,9 +91,11 @@ rm -rf InvokeAI-Installer
# copy content
mkdir InvokeAI-Installer
for f in templates lib *.txt *.reg; do
for f in templates *.txt *.reg; do
cp -r ${f} InvokeAI-Installer/
done
mkdir InvokeAI-Installer/lib
cp lib/*.py InvokeAI-Installer/lib
# Move the wheel
mv dist/*.whl InvokeAI-Installer/lib/
@ -72,13 +106,13 @@ cp install.sh.in InvokeAI-Installer/install.sh
chmod a+x InvokeAI-Installer/install.sh
# Windows
perl -p -e "s/^set INVOKEAI_VERSION=.*/set INVOKEAI_VERSION=$VERSION/" install.bat.in > InvokeAI-Installer/install.bat
perl -p -e "s/^set INVOKEAI_VERSION=.*/set INVOKEAI_VERSION=$VERSION/" install.bat.in >InvokeAI-Installer/install.bat
cp WinLongPathsEnabled.reg InvokeAI-Installer/
# Zip everything up
zip -r InvokeAI-installer-$VERSION.zip InvokeAI-Installer
# clean up
rm -rf InvokeAI-Installer tmp dist
rm -rf InvokeAI-Installer tmp dist ../invokeai/frontend/web/dist/
exit 0

View File

@ -241,12 +241,12 @@ class InvokeAiInstance:
pip[
"install",
"--require-virtualenv",
"numpy~=1.24.0", # choose versions that won't be uninstalled during phase 2
"numpy==1.26.3", # choose versions that won't be uninstalled during phase 2
"urllib3~=1.26.0",
"requests~=2.28.0",
"torch==2.1.0",
"torch==2.1.2",
"torchmetrics==0.11.4",
"torchvision>=0.14.1",
"torchvision==0.16.2",
"--force-reinstall",
"--find-links" if find_links is not None else None,
find_links,

installer/tag_release.sh (new executable file, 71 lines)
View File

@ -0,0 +1,71 @@
#!/bin/bash
set -e
BCYAN="\e[1;36m"
BYELLOW="\e[1;33m"
BGREEN="\e[1;32m"
BRED="\e[1;31m"
RED="\e[31m"
RESET="\e[0m"
function does_tag_exist {
git rev-parse --quiet --verify "refs/tags/$1" >/dev/null
}
function git_show_ref {
git show-ref --dereference $1 --abbrev 7
}
function git_show {
git show -s --format='%h %s' $1
}
VERSION=$(
cd ..
python -c "from invokeai.version import __version__ as version; print(version)"
)
PATCH=""
MAJOR_VERSION=$(echo $VERSION | sed 's/\..*$//')
VERSION="v${VERSION}${PATCH}"
LATEST_TAG="v${MAJOR_VERSION}-latest"
if does_tag_exist $VERSION; then
echo -e "${BCYAN}${VERSION}${RESET} already exists:"
git_show_ref tags/$VERSION
echo
fi
if does_tag_exist $LATEST_TAG; then
echo -e "${BCYAN}${LATEST_TAG}${RESET} already exists:"
git_show_ref tags/$LATEST_TAG
echo
fi
echo -e "${BGREEN}HEAD${RESET}:"
git_show
echo
echo -e -n "Create tags ${BCYAN}${VERSION}${RESET} and ${BCYAN}${LATEST_TAG}${RESET} @ ${BGREEN}HEAD${RESET}, ${RED}deleting existing tags on remote${RESET}? "
read -e -p 'y/n [n]: ' input
RESPONSE=${input:='n'}
if [ "$RESPONSE" == 'y' ]; then
echo
echo -e "Deleting ${BCYAN}${VERSION}${RESET} tag on remote..."
git push --delete origin $VERSION
echo -e "Tagging ${BGREEN}HEAD${RESET} with ${BCYAN}${VERSION}${RESET} locally..."
if ! git tag -fa $VERSION; then
echo "Existing/invalid tag"
exit -1
fi
echo -e "Deleting ${BCYAN}${LATEST_TAG}${RESET} tag on remote..."
git push --delete origin $LATEST_TAG
echo -e "Tagging ${BGREEN}HEAD${RESET} with ${BCYAN}${LATEST_TAG}${RESET} locally..."
git tag -fa $LATEST_TAG
echo -e "Pushing updated tags to remote..."
git push origin --tags
fi
exit 0

View File

@ -2,6 +2,8 @@
from logging import Logger
from invokeai.app.services.shared.sqlite.sqlite_util import init_db
from invokeai.backend.model_manager.metadata import ModelMetadataStore
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.version.invokeai_version import __version__
@ -10,6 +12,7 @@ from ..services.board_images.board_images_default import BoardImagesService
from ..services.board_records.board_records_sqlite import SqliteBoardRecordStorage
from ..services.boards.boards_default import BoardService
from ..services.config import InvokeAIAppConfig
from ..services.download import DownloadQueueService
from ..services.image_files.image_files_disk import DiskImageFileStorage
from ..services.image_records.image_records_sqlite import SqliteImageRecordStorage
from ..services.images.images_default import ImageService
@ -22,14 +25,13 @@ from ..services.invoker import Invoker
from ..services.item_storage.item_storage_sqlite import SqliteItemStorage
from ..services.latents_storage.latents_storage_disk import DiskLatentsStorage
from ..services.latents_storage.latents_storage_forward_cache import ForwardCacheLatentsStorage
from ..services.model_install import ModelInstallService
from ..services.model_manager.model_manager_default import ModelManagerService
from ..services.model_records import ModelRecordServiceSQL
from ..services.names.names_default import SimpleNameService
from ..services.session_processor.session_processor_default import DefaultSessionProcessor
from ..services.session_queue.session_queue_sqlite import SqliteSessionQueue
from ..services.shared.default_graphs import create_system_graphs
from ..services.shared.graph import GraphExecutionState, LibraryGraph
from ..services.shared.sqlite.sqlite_database import SqliteDatabase
from ..services.shared.graph import GraphExecutionState
from ..services.urls.urls_default import LocalUrlService
from ..services.workflow_records.workflow_records_sqlite import SqliteWorkflowRecordsStorage
from .events import FastAPIEventService
@ -60,14 +62,15 @@ class ApiDependencies:
invoker: Invoker
@staticmethod
def initialize(config: InvokeAIAppConfig, event_handler_id: int, logger: Logger = logger):
def initialize(config: InvokeAIAppConfig, event_handler_id: int, logger: Logger = logger) -> None:
logger.info(f"InvokeAI version {__version__}")
logger.info(f"Root directory = {str(config.root_path)}")
logger.debug(f"Internet connectivity is {config.internet_available}")
output_folder = config.output_path
image_files = DiskImageFileStorage(f"{output_folder}/images")
db = SqliteDatabase(config, logger)
db = init_db(config=config, logger=logger, image_files=image_files)
configuration = config
logger = logger
@ -78,14 +81,21 @@ class ApiDependencies:
boards = BoardService()
events = FastAPIEventService(event_handler_id)
graph_execution_manager = SqliteItemStorage[GraphExecutionState](db=db, table_name="graph_executions")
graph_library = SqliteItemStorage[LibraryGraph](db=db, table_name="graphs")
image_files = DiskImageFileStorage(f"{output_folder}/images")
image_records = SqliteImageRecordStorage(db=db)
images = ImageService()
invocation_cache = MemoryInvocationCache(max_cache_size=config.node_cache_size)
latents = ForwardCacheLatentsStorage(DiskLatentsStorage(f"{output_folder}/latents"))
model_manager = ModelManagerService(config, logger)
model_record_service = ModelRecordServiceSQL(db=db)
download_queue_service = DownloadQueueService(event_bus=events)
metadata_store = ModelMetadataStore(db=db)
model_install_service = ModelInstallService(
app_config=config,
record_store=model_record_service,
download_queue=download_queue_service,
metadata_store=metadata_store,
event_bus=events,
)
names = SimpleNameService()
performance_statistics = InvocationStatsService()
processor = DefaultInvocationProcessor()
@ -103,7 +113,6 @@ class ApiDependencies:
configuration=configuration,
events=events,
graph_execution_manager=graph_execution_manager,
graph_library=graph_library,
image_files=image_files,
image_records=image_records,
images=images,
@ -112,6 +121,8 @@ class ApiDependencies:
logger=logger,
model_manager=model_manager,
model_records=model_record_service,
download_queue=download_queue_service,
model_install=model_install_service,
names=names,
performance_statistics=performance_statistics,
processor=processor,
@ -122,12 +133,10 @@ class ApiDependencies:
workflow_records=workflow_records,
)
create_system_graphs(services.graph_library)
ApiDependencies.invoker = Invoker(services)
db.clean()
@staticmethod
def shutdown():
def shutdown() -> None:
if ApiDependencies.invoker:
ApiDependencies.invoker.stop()

View File

@ -0,0 +1,28 @@
from typing import Any
from starlette.responses import Response
from starlette.staticfiles import StaticFiles
class NoCacheStaticFiles(StaticFiles):
"""
This class is used to override the default caching behavior of starlette for static files,
ensuring we *never* cache static files. It modifies the file response headers to strictly
never cache the files.
Static files include the javascript bundles, fonts, locales, and some images. Generated
images are not included, as they are served by a router.
"""
def __init__(self, *args: Any, **kwargs: Any):
self.cachecontrol = "max-age=0, no-cache, no-store, must-revalidate"
self.pragma = "no-cache"
self.expires = "0"
super().__init__(*args, **kwargs)
def file_response(self, *args: Any, **kwargs: Any) -> Response:
resp = super().file_response(*args, **kwargs)
resp.headers.setdefault("Cache-Control", self.cachecontrol)
resp.headers.setdefault("Pragma", self.pragma)
resp.headers.setdefault("Expires", self.expires)
return resp
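
The `NoCacheStaticFiles` class above subclasses Starlette's `StaticFiles` and pins no-cache headers on every file response so a new release never serves a stale UI bundle. A hedged sketch of how it can be mounted in a FastAPI app (paths are illustrative; the actual mount calls appear in the `api_app.py` diff further down):

```python
# Hedged sketch of mounting NoCacheStaticFiles; paths are illustrative and the
# real mount calls appear in the api_app.py diff further down this page.
from pathlib import Path

from fastapi import FastAPI
from invokeai.app.api.no_cache_staticfiles import NoCacheStaticFiles

app = FastAPI()
web_root_path = Path("/path/to/invokeai/frontend/web")  # illustrative

# Serve static assets and the built UI with caching disabled, so returning
# browsers never keep a stale index.html or JS bundle after an upgrade.
app.mount("/static", NoCacheStaticFiles(directory=web_root_path / "static"), name="static")
app.mount("/", NoCacheStaticFiles(directory=web_root_path / "dist", html=True), name="ui")
```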

View File

@ -0,0 +1,111 @@
# Copyright (c) 2023 Lincoln D. Stein
"""FastAPI route for the download queue."""
from typing import List, Optional
from fastapi import Body, Path, Response
from fastapi.routing import APIRouter
from pydantic.networks import AnyHttpUrl
from starlette.exceptions import HTTPException
from invokeai.app.services.download import (
DownloadJob,
UnknownJobIDException,
)
from ..dependencies import ApiDependencies
download_queue_router = APIRouter(prefix="/v1/download_queue", tags=["download_queue"])
@download_queue_router.get(
"/",
operation_id="list_downloads",
)
async def list_downloads() -> List[DownloadJob]:
"""Get a list of active and inactive jobs."""
queue = ApiDependencies.invoker.services.download_queue
return queue.list_jobs()
@download_queue_router.patch(
"/",
operation_id="prune_downloads",
responses={
204: {"description": "All completed jobs have been pruned"},
400: {"description": "Bad request"},
},
)
async def prune_downloads():
"""Prune completed and errored jobs."""
queue = ApiDependencies.invoker.services.download_queue
queue.prune_jobs()
return Response(status_code=204)
@download_queue_router.post(
"/i/",
operation_id="download",
)
async def download(
source: AnyHttpUrl = Body(description="download source"),
dest: str = Body(description="download destination"),
priority: int = Body(default=10, description="queue priority"),
access_token: Optional[str] = Body(default=None, description="token for authorization to download"),
) -> DownloadJob:
"""Download the source URL to the file or directory indicted in dest."""
queue = ApiDependencies.invoker.services.download_queue
return queue.download(source, dest, priority, access_token)
@download_queue_router.get(
"/i/{id}",
operation_id="get_download_job",
responses={
200: {"description": "Success"},
404: {"description": "The requested download JobID could not be found"},
},
)
async def get_download_job(
id: int = Path(description="ID of the download job to fetch."),
) -> DownloadJob:
"""Get a download job using its ID."""
try:
job = ApiDependencies.invoker.services.download_queue.id_to_job(id)
return job
except UnknownJobIDException as e:
raise HTTPException(status_code=404, detail=str(e))
@download_queue_router.delete(
"/i/{id}",
operation_id="cancel_download_job",
responses={
204: {"description": "Job has been cancelled"},
404: {"description": "The requested download JobID could not be found"},
},
)
async def cancel_download_job(
id: int = Path(description="ID of the download job to cancel."),
):
"""Cancel a download job using its ID."""
try:
queue = ApiDependencies.invoker.services.download_queue
job = queue.id_to_job(id)
queue.cancel_job(job)
return Response(status_code=204)
except UnknownJobIDException as e:
raise HTTPException(status_code=404, detail=str(e))
@download_queue_router.delete(
"/i",
operation_id="cancel_all_download_jobs",
responses={
204: {"description": "Download jobs have been cancelled"},
},
)
async def cancel_all_download_jobs():
"""Cancel all download jobs."""
ApiDependencies.invoker.services.download_queue.cancel_all_jobs()
return Response(status_code=204)

View File

@ -4,7 +4,7 @@
from hashlib import sha1
from random import randbytes
from typing import List, Optional
from typing import Any, Dict, List, Optional, Set
from fastapi import Body, Path, Query, Response
from fastapi.routing import APIRouter
@ -12,30 +12,45 @@ from pydantic import BaseModel, ConfigDict
from starlette.exceptions import HTTPException
from typing_extensions import Annotated
from invokeai.app.services.model_install import ModelInstallJob, ModelSource
from invokeai.app.services.model_records import (
DuplicateModelException,
InvalidModelException,
ModelRecordOrderBy,
ModelSummary,
UnknownModelException,
)
from invokeai.app.services.shared.pagination import PaginatedResults
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
ModelFormat,
ModelType,
)
from invokeai.backend.model_manager.metadata import AnyModelRepoMetadata
from ..dependencies import ApiDependencies
model_records_router = APIRouter(prefix="/v1/model/record", tags=["models"])
model_records_router = APIRouter(prefix="/v1/model/record", tags=["model_manager_v2_unstable"])
class ModelsList(BaseModel):
"""Return list of configs."""
models: list[AnyModelConfig]
models: List[AnyModelConfig]
model_config = ConfigDict(use_enum_values=True)
class ModelTagSet(BaseModel):
"""Return tags for a set of models."""
key: str
name: str
author: str
tags: Set[str]
@model_records_router.get(
"/",
operation_id="list_model_records",
@ -43,15 +58,25 @@ class ModelsList(BaseModel):
async def list_model_records(
base_models: Optional[List[BaseModelType]] = Query(default=None, description="Base models to include"),
model_type: Optional[ModelType] = Query(default=None, description="The type of model to get"),
model_name: Optional[str] = Query(default=None, description="Exact match on the name of the model"),
model_format: Optional[ModelFormat] = Query(
default=None, description="Exact match on the format of the model (e.g. 'diffusers')"
),
) -> ModelsList:
"""Get a list of models."""
record_store = ApiDependencies.invoker.services.model_records
found_models: list[AnyModelConfig] = []
if base_models:
for base_model in base_models:
found_models.extend(record_store.search_by_attr(base_model=base_model, model_type=model_type))
found_models.extend(
record_store.search_by_attr(
base_model=base_model, model_type=model_type, model_name=model_name, model_format=model_format
)
)
else:
found_models.extend(record_store.search_by_attr(model_type=model_type))
found_models.extend(
record_store.search_by_attr(model_type=model_type, model_name=model_name, model_format=model_format)
)
return ModelsList(models=found_models)
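
With the new `model_name` and `model_format` query parameters, a client can narrow the listing. A hedged example (base URL assumes a default local install; the printed field names are assumptions about the config schema):

```python
# Hedged example: filtering the model record listing with the new query params.
# Host/port assume a default local install; printed fields are assumptions.
import requests

resp = requests.get(
    "http://127.0.0.1:9090/api/v1/model/record/",
    params={"model_type": "main", "model_format": "diffusers"},
    timeout=30,
)
resp.raise_for_status()
for model in resp.json()["models"]:
    print(model.get("name"), model.get("type"), model.get("format"))
```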
@ -75,6 +100,59 @@ async def get_model_record(
raise HTTPException(status_code=404, detail=str(e))
@model_records_router.get("/meta", operation_id="list_model_summary")
async def list_model_summary(
page: int = Query(default=0, description="The page to get"),
per_page: int = Query(default=10, description="The number of models per page"),
order_by: ModelRecordOrderBy = Query(default=ModelRecordOrderBy.Default, description="The attribute to order by"),
) -> PaginatedResults[ModelSummary]:
"""Gets a page of model summary data."""
return ApiDependencies.invoker.services.model_records.list_models(page=page, per_page=per_page, order_by=order_by)
@model_records_router.get(
"/meta/i/{key}",
operation_id="get_model_metadata",
responses={
200: {"description": "Success"},
400: {"description": "Bad request"},
404: {"description": "No metadata available"},
},
)
async def get_model_metadata(
key: str = Path(description="Key of the model repo metadata to fetch."),
) -> Optional[AnyModelRepoMetadata]:
"""Get a model metadata object."""
record_store = ApiDependencies.invoker.services.model_records
result = record_store.get_metadata(key)
if not result:
raise HTTPException(status_code=404, detail="No metadata for a model with this key")
return result
@model_records_router.get(
"/tags",
operation_id="list_tags",
)
async def list_tags() -> Set[str]:
"""Get a unique set of all the model tags."""
record_store = ApiDependencies.invoker.services.model_records
return record_store.list_tags()
@model_records_router.get(
"/tags/search",
operation_id="search_by_metadata_tags",
)
async def search_by_metadata_tags(
tags: Set[str] = Query(default=None, description="Tags to search for"),
) -> ModelsList:
"""Get a list of models."""
record_store = ApiDependencies.invoker.services.model_records
results = record_store.search_by_metadata_tag(tags)
return ModelsList(models=results)
@model_records_router.patch(
"/i/{key}",
operation_id="update_model_record",
@ -117,12 +195,17 @@ async def update_model_record(
async def del_model_record(
key: str = Path(description="Unique key of model to remove from model registry."),
) -> Response:
"""Delete Model"""
"""
Delete model record from database.
The configuration record will be removed. The corresponding weights files will be
deleted as well if they reside within the InvokeAI "models" directory.
"""
logger = ApiDependencies.invoker.services.logger
try:
record_store = ApiDependencies.invoker.services.model_records
record_store.del_model(key)
installer = ApiDependencies.invoker.services.model_install
installer.delete(key)
logger.info(f"Deleted model: {key}")
return Response(status_code=204)
except UnknownModelException as e:
@ -143,9 +226,7 @@ async def del_model_record(
async def add_model_record(
config: Annotated[AnyModelConfig, Body(description="Model config", discriminator="type")],
) -> AnyModelConfig:
"""
Add a model using the configuration information appropriate for its type.
"""
"""Add a model using the configuration information appropriate for its type."""
logger = ApiDependencies.invoker.services.logger
record_store = ApiDependencies.invoker.services.model_records
if config.key == "<NOKEY>":
@ -162,3 +243,175 @@ async def add_model_record(
# now fetch it out
return record_store.get_model(config.key)
@model_records_router.post(
"/import",
operation_id="import_model_record",
responses={
201: {"description": "The model imported successfully"},
415: {"description": "Unrecognized file/folder format"},
424: {"description": "The model appeared to import successfully, but could not be found in the model manager"},
409: {"description": "There is already a model corresponding to this path or repo_id"},
},
status_code=201,
)
async def import_model(
source: ModelSource,
config: Optional[Dict[str, Any]] = Body(
description="Dict of fields that override auto-probed values in the model config record, such as name, description and prediction_type ",
default=None,
),
) -> ModelInstallJob:
"""Add a model using its local path, repo_id, or remote URL.
Models will be downloaded, probed, configured and installed in a
series of background threads. The return object has `status` attribute
that can be used to monitor progress.
The source object is a discriminated Union of LocalModelSource,
HFModelSource and URLModelSource. Set the "type" field to the
appropriate value:
* To install a local path using LocalModelSource, pass a source of form:
`{
"type": "local",
"path": "/path/to/model",
"inplace": false
}`
The "inplace" flag, if true, will register the model in place in its
current filesystem location. Otherwise, the model will be copied
into the InvokeAI models directory.
* To install a HuggingFace repo_id using HFModelSource, pass a source of form:
`{
"type": "hf",
"repo_id": "stabilityai/stable-diffusion-2.0",
"variant": "fp16",
"subfolder": "vae",
"access_token": "f5820a918aaf01"
}`
The `variant`, `subfolder` and `access_token` fields are optional.
* To install a remote model using an arbitrary URL, pass:
`{
"type": "url",
"url": "http://www.civitai.com/models/123456",
"access_token": "f5820a918aaf01"
}`
The `access_token` field is optional.
The model's configuration record will be probed and filled in
automatically. To override the default guesses, pass "metadata"
with a Dict containing the attributes you wish to override.
Installation occurs in the background. Either use list_model_install_jobs()
to poll for completion, or listen on the event bus for the following events:
"model_install_running"
"model_install_completed"
"model_install_error"
On successful completion, the event's payload will contain the field "key"
containing the installed ID of the model. On an error, the event's payload
will contain the fields "error_type" and "error" describing the nature of the
error and its traceback, respectively.
"""
logger = ApiDependencies.invoker.services.logger
try:
installer = ApiDependencies.invoker.services.model_install
result: ModelInstallJob = installer.import_model(
source=source,
config=config,
)
logger.info(f"Started installation of {source}")
except UnknownModelException as e:
logger.error(str(e))
raise HTTPException(status_code=424, detail=str(e))
except InvalidModelException as e:
logger.error(str(e))
raise HTTPException(status_code=415)
except ValueError as e:
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
return result
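
Based on the route and docstring above, a hedged client sketch that starts a URL import and polls the install job list; the base URL assumes a default local install, and the source URL, job fields, and status strings are illustrative assumptions:

```python
# Hedged example of the model-import flow described in the docstring above.
# Base URL assumes a default local install; the source URL is illustrative.
import time
import requests

BASE = "http://127.0.0.1:9090/api/v1/model/record"

# Kick off a background install from a remote URL.
job = requests.post(
    f"{BASE}/import",
    json={"source": {"type": "url", "url": "https://example.com/models/123456"}},
    timeout=30,
).json()

# Poll the install job list until our job is no longer pending/running.
# (The "id" field and status strings are assumptions; the events listed in the
# docstring above offer an event-driven alternative to polling.)
while True:
    jobs = requests.get(f"{BASE}/import", timeout=30).json()
    mine = next((j for j in jobs if j.get("id") == job.get("id")), None)
    if mine is None or mine.get("status") in ("completed", "error", "cancelled"):
        break
    time.sleep(2)
```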
@model_records_router.get(
"/import",
operation_id="list_model_install_jobs",
)
async def list_model_install_jobs() -> List[ModelInstallJob]:
"""Return list of model install jobs."""
jobs: List[ModelInstallJob] = ApiDependencies.invoker.services.model_install.list_jobs()
return jobs
@model_records_router.get(
"/import/{id}",
operation_id="get_model_install_job",
responses={
200: {"description": "Success"},
404: {"description": "No such job"},
},
)
async def get_model_install_job(id: int = Path(description="Model install id")) -> ModelInstallJob:
"""Return model install job corresponding to the given source."""
try:
return ApiDependencies.invoker.services.model_install.get_job_by_id(id)
except ValueError as e:
raise HTTPException(status_code=404, detail=str(e))
@model_records_router.delete(
"/import/{id}",
operation_id="cancel_model_install_job",
responses={
201: {"description": "The job was cancelled successfully"},
415: {"description": "No such job"},
},
status_code=201,
)
async def cancel_model_install_job(id: int = Path(description="Model install job ID")) -> None:
"""Cancel the model install job(s) corresponding to the given job ID."""
installer = ApiDependencies.invoker.services.model_install
try:
job = installer.get_job_by_id(id)
except ValueError as e:
raise HTTPException(status_code=415, detail=str(e))
installer.cancel_job(job)
@model_records_router.patch(
"/import",
operation_id="prune_model_install_jobs",
responses={
204: {"description": "All completed and errored jobs have been pruned"},
400: {"description": "Bad request"},
},
)
async def prune_model_install_jobs() -> Response:
"""Prune all completed and errored jobs from the install job list."""
ApiDependencies.invoker.services.model_install.prune_jobs()
return Response(status_code=204)
@model_records_router.patch(
"/sync",
operation_id="sync_models_to_config",
responses={
204: {"description": "Model config record database resynced with files on disk"},
400: {"description": "Bad request"},
},
)
async def sync_models_to_config() -> Response:
"""
Traverse the models and autoimport directories.
Model files without a corresponding
record in the database are added. Orphan records without a corresponding model file are deleted.
"""
ApiDependencies.invoker.services.model_install.sync_to_config()
return Response(status_code=204)

View File

@ -23,10 +23,11 @@ class DynamicPromptsResponse(BaseModel):
)
async def parse_dynamicprompts(
prompt: str = Body(description="The prompt to parse with dynamicprompts"),
max_prompts: int = Body(default=1000, description="The max number of prompts to generate"),
max_prompts: int = Body(ge=1, le=10000, default=1000, description="The max number of prompts to generate"),
combinatorial: bool = Body(default=True, description="Whether to use the combinatorial generator"),
) -> DynamicPromptsResponse:
"""Creates a batch process"""
max_prompts = min(max_prompts, 10000)
generator: Union[RandomPromptGenerator, CombinatorialPromptGenerator]
try:
error: Optional[str] = None

View File

@ -20,6 +20,7 @@ class SocketIO:
self.__sio.on("subscribe_queue", handler=self._handle_sub_queue)
self.__sio.on("unsubscribe_queue", handler=self._handle_unsub_queue)
local_handler.register(event_name=EventServiceBase.queue_event, _func=self._handle_queue_event)
local_handler.register(event_name=EventServiceBase.model_event, _func=self._handle_model_event)
async def _handle_queue_event(self, event: Event):
await self.__sio.emit(
@ -28,10 +29,13 @@ class SocketIO:
room=event[1]["data"]["queue_id"],
)
async def _handle_sub_queue(self, sid, data, *args, **kwargs):
async def _handle_sub_queue(self, sid, data, *args, **kwargs) -> None:
if "queue_id" in data:
await self.__sio.enter_room(sid, data["queue_id"])
async def _handle_unsub_queue(self, sid, data, *args, **kwargs):
async def _handle_unsub_queue(self, sid, data, *args, **kwargs) -> None:
if "queue_id" in data:
await self.__sio.leave_room(sid, data["queue_id"])
async def _handle_model_event(self, event: Event) -> None:
await self.__sio.emit(event=event[1]["event"], data=event[1]["data"])
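
The new `_handle_model_event` handler re-broadcasts model events to connected Socket.IO clients. A hedged listener sketch with `python-socketio`; the URL and `socketio_path` are assumptions for a default local install, and the event names are taken from the model-install docstring earlier in this diff:

```python
# Hedged sketch: listening for the re-broadcast model events with python-socketio.
# URL and socketio_path are assumptions for a default local install; event names
# come from the model-install docstring earlier in this diff.
import asyncio
import socketio

sio = socketio.AsyncClient()

@sio.on("model_install_running")
async def on_running(data):
    print("model install running:", data)

@sio.on("model_install_completed")
async def on_completed(data):
    print("model install completed:", data)

@sio.on("model_install_error")
async def on_error(data):
    print("model install error:", data)

async def main():
    await sio.connect("http://127.0.0.1:9090", socketio_path="/ws/socket.io")
    await sio.wait()

asyncio.run(main())
```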

View File

@ -3,6 +3,7 @@
# values from the command line or config file.
import sys
from invokeai.app.api.no_cache_staticfiles import NoCacheStaticFiles
from invokeai.version.invokeai_version import __version__
from .services.config import InvokeAIAppConfig
@ -27,8 +28,7 @@ if True: # hack to make flake8 happy with imports coming after setting up the c
from fastapi.middleware.gzip import GZipMiddleware
from fastapi.openapi.docs import get_redoc_html, get_swagger_ui_html
from fastapi.openapi.utils import get_openapi
from fastapi.responses import FileResponse, HTMLResponse
from fastapi.staticfiles import StaticFiles
from fastapi.responses import HTMLResponse
from fastapi_events.handlers.local import local_handler
from fastapi_events.middleware import EventHandlerASGIMiddleware
from pydantic.json_schema import models_json_schema
@ -45,6 +45,7 @@ if True: # hack to make flake8 happy with imports coming after setting up the c
app_info,
board_images,
boards,
download_queue,
images,
model_records,
models,
@ -75,7 +76,7 @@ mimetypes.add_type("text/css", ".css")
# Create the app
# TODO: create this all in a method so configuration/etc. can be passed in?
app = FastAPI(title="Invoke AI", docs_url=None, redoc_url=None, separate_input_output_schemas=False)
app = FastAPI(title="Invoke - Community Edition", docs_url=None, redoc_url=None, separate_input_output_schemas=False)
# Add event handler
event_handler_id: int = id(app)
@ -116,6 +117,7 @@ app.include_router(sessions.session_router, prefix="/api")
app.include_router(utilities.utilities_router, prefix="/api")
app.include_router(models.models_router, prefix="/api")
app.include_router(model_records.model_records_router, prefix="/api")
app.include_router(download_queue.download_queue_router, prefix="/api")
app.include_router(images.images_router, prefix="/api")
app.include_router(boards.boards_router, prefix="/api")
app.include_router(board_images.board_images_router, prefix="/api")
@ -203,8 +205,8 @@ app.openapi = custom_openapi # type: ignore [method-assign] # this is a valid a
def overridden_swagger() -> HTMLResponse:
return get_swagger_ui_html(
openapi_url=app.openapi_url, # type: ignore [arg-type] # this is always a string
title=app.title,
swagger_favicon_url="/static/docs/favicon.ico",
title=f"{app.title} - Swagger UI",
swagger_favicon_url="static/docs/invoke-favicon-docs.svg",
)
@ -212,25 +214,20 @@ def overridden_swagger() -> HTMLResponse:
def overridden_redoc() -> HTMLResponse:
return get_redoc_html(
openapi_url=app.openapi_url, # type: ignore [arg-type] # this is always a string
title=app.title,
redoc_favicon_url="/static/docs/favicon.ico",
title=f"{app.title} - Redoc",
redoc_favicon_url="static/docs/invoke-favicon-docs.svg",
)
web_root_path = Path(list(web_dir.__path__)[0])
# Cannot add headers to StaticFiles, so we must serve index.html with a custom route
# Add cache-control: no-store header to prevent caching of index.html, which leads to broken UIs at release
@app.get("/", include_in_schema=False, name="ui_root")
def get_index() -> FileResponse:
return FileResponse(Path(web_root_path, "dist/index.html"), headers={"Cache-Control": "no-store"})
# # Must mount *after* the other routes else it borks em
app.mount("/static", StaticFiles(directory=Path(web_root_path, "static/")), name="static") # docs favicon is in here
app.mount("/assets", StaticFiles(directory=Path(web_root_path, "dist/assets/")), name="assets")
app.mount("/locales", StaticFiles(directory=Path(web_root_path, "dist/locales/")), name="locales")
try:
app.mount("/", NoCacheStaticFiles(directory=Path(web_root_path, "dist"), html=True), name="ui")
except RuntimeError:
logger.warn(f"No UI found at {web_root_path}/dist, skipping UI mount")
app.mount(
"/static", NoCacheStaticFiles(directory=Path(web_root_path, "static/")), name="static"
) # docs favicon is in here
def invoke_api() -> None:
@ -271,6 +268,8 @@ def invoke_api() -> None:
port=port,
loop="asyncio",
log_level=app_config.log_level,
ssl_certfile=app_config.ssl_certfile,
ssl_keyfile=app_config.ssl_keyfile,
)
server = uvicorn.Server(config)

View File

@ -4,6 +4,7 @@ from __future__ import annotations
import inspect
import re
import warnings
from abc import ABC, abstractmethod
from enum import Enum
from inspect import signature
@ -38,6 +39,19 @@ class InvalidFieldError(TypeError):
pass
class Classification(str, Enum, metaclass=MetaEnum):
"""
The classification of an Invocation.
- `Stable`: The invocation, including its inputs/outputs and internal logic, is stable. You may build workflows with it, having confidence that they will not break because of a change in this invocation.
- `Beta`: The invocation is not yet stable, but is planned to be stable in the future. Workflows built around this invocation may break, but we are committed to supporting this invocation long-term.
- `Prototype`: The invocation is not yet stable and may be removed from the application at any time. Workflows built around this invocation may break, and we are *not* committed to supporting this invocation.
"""
Stable = "stable"
Beta = "beta"
Prototype = "prototype"
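
The classification is surfaced through the `@invocation` decorator (see the additions to it further down in this file's diff), so a node author can label an experimental node explicitly. A hedged sketch with a hypothetical node:

```python
# Hedged sketch: marking a hypothetical custom node as a Prototype via the
# classification parameter added to the @invocation decorator in this diff.
@invocation(
    "my_experimental_blur",  # hypothetical invocation type name
    title="My Experimental Blur",
    tags=["image", "experimental"],
    category="image",
    version="0.1.0",
    classification=Classification.Prototype,  # may be removed at any time
)
class MyExperimentalBlurInvocation(BaseInvocation):
    """Placeholder node used only to illustrate the classification parameter."""

    def invoke(self, context):  # signature simplified for this sketch
        raise NotImplementedError
```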
class Input(str, Enum, metaclass=MetaEnum):
"""
The type of input a field accepts.
@ -438,6 +452,7 @@ class UIConfigBase(BaseModel):
description='The node\'s version. Should be a valid semver string e.g. "1.0.0" or "3.8.13".',
)
node_pack: Optional[str] = Field(default=None, description="Whether or not this is a custom node")
classification: Classification = Field(default=Classification.Stable, description="The node's classification")
model_config = ConfigDict(
validate_assignment=True,
@ -606,6 +621,7 @@ class BaseInvocation(ABC, BaseModel):
schema["category"] = uiconfig.category
if uiconfig.node_pack is not None:
schema["node_pack"] = uiconfig.node_pack
schema["classification"] = uiconfig.classification
schema["version"] = uiconfig.version
if "required" not in schema or not isinstance(schema["required"], list):
schema["required"] = []
@ -709,8 +725,10 @@ class _Model(BaseModel):
pass
# Get all pydantic model attrs, methods, etc
RESERVED_PYDANTIC_FIELD_NAMES = {m[0] for m in inspect.getmembers(_Model())}
with warnings.catch_warnings():
warnings.simplefilter("ignore", category=DeprecationWarning)
# Get all pydantic model attrs, methods, etc
RESERVED_PYDANTIC_FIELD_NAMES = {m[0] for m in inspect.getmembers(_Model())}
def validate_fields(model_fields: dict[str, FieldInfo], model_type: str) -> None:
@ -779,6 +797,7 @@ def invocation(
category: Optional[str] = None,
version: Optional[str] = None,
use_cache: Optional[bool] = True,
classification: Classification = Classification.Stable,
) -> Callable[[Type[TBaseInvocation]], Type[TBaseInvocation]]:
"""
Registers an invocation.
@ -789,6 +808,7 @@ def invocation(
:param Optional[str] category: Adds a category to the invocation. Used to group the invocations in the UI. Defaults to None.
:param Optional[str] version: Adds a version to the invocation. Must be a valid semver string. Defaults to None.
:param Optional[bool] use_cache: Whether or not to use the invocation cache. Defaults to True. The user may override this in the workflow editor.
:param Classification classification: The classification of the invocation. Defaults to Classification.Stable. Use Beta or Prototype if the invocation is unstable.
"""
def wrapper(cls: Type[TBaseInvocation]) -> Type[TBaseInvocation]:
@ -809,6 +829,7 @@ def invocation(
cls.UIConfig.title = title
cls.UIConfig.tags = tags
cls.UIConfig.category = category
cls.UIConfig.classification = classification
# Grab the node pack's name from the module name, if it's a custom node
is_custom_node = cls.__module__.rsplit(".", 1)[0] == "invokeai.app.invocations"
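As a hypothetical illustration of the decorator parameters documented above, a custom node might be registered like this (the node name, fields and the StringOutput from the primitives module are illustrative assumptions, not part of this changeset):

@invocation(
    "prompt_joiner",
    title="Prompt Joiner",
    tags=["prompt", "custom"],
    category="prompt",
    version="1.0.0",
    use_cache=True,
    classification=Classification.Beta,  # not yet stable; workflows built on it may break
)
class PromptJoinerInvocation(BaseInvocation):
    """Joins two prompt fragments with a separator."""

    prompt_a: str = InputField(default="", description="First prompt fragment")
    prompt_b: str = InputField(default="", description="Second prompt fragment")
    separator: str = InputField(default=", ", description="Separator placed between the fragments")

    def invoke(self, context: InvocationContext) -> StringOutput:
        return StringOutput(value=f"{self.prompt_a}{self.separator}{self.prompt_b}")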

View File

@ -1,4 +1,3 @@
import re
from dataclasses import dataclass
from typing import List, Optional, Union
@ -17,6 +16,7 @@ from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
from ...backend.model_management.lora import ModelPatcher
from ...backend.model_management.models import ModelNotFoundException, ModelType
from ...backend.util.devices import torch_dtype
from ..util.ti_utils import extract_ti_triggers_from_prompt
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
@ -87,7 +87,7 @@ class CompelInvocation(BaseInvocation):
# loras = [(context.services.model_manager.get_model(**lora.dict(exclude={"weight"})).context.model, lora.weight) for lora in self.clip.loras]
ti_list = []
for trigger in re.findall(r"<[a-zA-Z0-9., _-]+>", self.prompt):
for trigger in extract_ti_triggers_from_prompt(self.prompt):
name = trigger[1:-1]
try:
ti_list.append(
@ -210,7 +210,7 @@ class SDXLPromptInvocationBase:
# loras = [(context.services.model_manager.get_model(**lora.dict(exclude={"weight"})).context.model, lora.weight) for lora in self.clip.loras]
ti_list = []
for trigger in re.findall(r"<[a-zA-Z0-9., _-]+>", prompt):
for trigger in extract_ti_triggers_from_prompt(prompt):
name = trigger[1:-1]
try:
ti_list.append(
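Based on the regex it replaces in the hunks above, the new helper presumably looks roughly like the sketch below (the real implementation lives in invokeai/app/util/ti_utils.py and may differ):

import re
from typing import List


def extract_ti_triggers_from_prompt(prompt: str) -> List[str]:
    """Return textual inversion triggers (e.g. "<my-embedding>") found in the prompt."""
    return re.findall(r"<[a-zA-Z0-9., _-]+>", prompt)


# extract_ti_triggers_from_prompt("a photo of <my-embedding> at dusk") -> ["<my-embedding>"]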

View File

@ -24,11 +24,13 @@ from controlnet_aux import (
)
from controlnet_aux.util import HWC3, ade_palette
from PIL import Image
from pydantic import BaseModel, ConfigDict, Field, field_validator
from pydantic import BaseModel, ConfigDict, Field, field_validator, model_validator
from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.app.shared.fields import FieldDescriptions
from invokeai.backend.image_util.depth_anything import DepthAnythingDetector
from ...backend.model_management import BaseModelType
from .baseinvocation import (
@ -75,17 +77,16 @@ class ControlField(BaseModel):
resize_mode: CONTROLNET_RESIZE_VALUES = Field(default="just_resize", description="The resize mode to use")
@field_validator("control_weight")
@classmethod
def validate_control_weight(cls, v):
"""Validate that all control weights in the valid range"""
if isinstance(v, list):
for i in v:
if i < -1 or i > 2:
raise ValueError("Control weights must be within -1 to 2 range")
else:
if v < -1 or v > 2:
raise ValueError("Control weights must be within -1 to 2 range")
validate_weights(v)
return v
@model_validator(mode="after")
def validate_begin_end_step_percent(self):
validate_begin_end_step(self.begin_step_percent, self.end_step_percent)
return self
@invocation_output("control_output")
class ControlOutput(BaseInvocationOutput):
@ -95,17 +96,17 @@ class ControlOutput(BaseInvocationOutput):
control: ControlField = OutputField(description=FieldDescriptions.control)
@invocation("controlnet", title="ControlNet", tags=["controlnet"], category="controlnet", version="1.1.0")
@invocation("controlnet", title="ControlNet", tags=["controlnet"], category="controlnet", version="1.1.1")
class ControlNetInvocation(BaseInvocation):
"""Collects ControlNet info to pass to other nodes"""
image: ImageField = InputField(description="The control image")
control_model: ControlNetModelField = InputField(description=FieldDescriptions.controlnet_model, input=Input.Direct)
control_weight: Union[float, List[float]] = InputField(
default=1.0, description="The weight given to the ControlNet"
default=1.0, ge=-1, le=2, description="The weight given to the ControlNet"
)
begin_step_percent: float = InputField(
default=0, ge=-1, le=2, description="When the ControlNet is first applied (% of total steps)"
default=0, ge=0, le=1, description="When the ControlNet is first applied (% of total steps)"
)
end_step_percent: float = InputField(
default=1, ge=0, le=1, description="When the ControlNet is last applied (% of total steps)"
@ -113,6 +114,17 @@ class ControlNetInvocation(BaseInvocation):
control_mode: CONTROLNET_MODE_VALUES = InputField(default="balanced", description="The control mode used")
resize_mode: CONTROLNET_RESIZE_VALUES = InputField(default="just_resize", description="The resize mode used")
@field_validator("control_weight")
@classmethod
def validate_control_weight(cls, v):
validate_weights(v)
return v
@model_validator(mode="after")
def validate_begin_end_step_percent(self) -> "ControlNetInvocation":
validate_begin_end_step(self.begin_step_percent, self.end_step_percent)
return self
def invoke(self, context: InvocationContext) -> ControlOutput:
return ControlOutput(
control=ControlField(
@ -591,3 +603,33 @@ class ColorMapImageProcessorInvocation(ImageProcessorInvocation):
color_map = cv2.resize(color_map, (width, height), interpolation=cv2.INTER_NEAREST)
color_map = Image.fromarray(color_map)
return color_map
DEPTH_ANYTHING_MODEL_SIZES = Literal["large", "base", "small"]
@invocation(
"depth_anything_image_processor",
title="Depth Anything Processor",
tags=["controlnet", "depth", "depth anything"],
category="controlnet",
version="1.0.0",
)
class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
"""Generates a depth map based on the Depth Anything algorithm"""
model_size: DEPTH_ANYTHING_MODEL_SIZES = InputField(
default="small", description="The size of the depth model to use"
)
resolution: int = InputField(default=512, ge=64, multiple_of=64, description=FieldDescriptions.image_res)
offload: bool = InputField(default=False)
def run_processor(self, image):
depth_anything_detector = DepthAnythingDetector()
depth_anything_detector.load_model(model_size=self.model_size)
if image.mode == "RGBA":
image = image.convert("RGB")
processed_image = depth_anything_detector(image=image, resolution=self.resolution, offload=self.offload)
return processed_image
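A hedged usage sketch of the detector outside of a node, mirroring the calls in run_processor above (the method names are taken from that code and may not match the backend API exactly):

from PIL import Image

from invokeai.backend.image_util.depth_anything import DepthAnythingDetector

detector = DepthAnythingDetector()
detector.load_model(model_size="small")  # "large", "base" or "small"

image = Image.open("photo.png").convert("RGB")  # the node converts RGBA to RGB first
depth_map = detector(image=image, resolution=512, offload=False)
depth_map.save("depth.png")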

View File

@ -13,7 +13,15 @@ from invokeai.app.shared.fields import FieldDescriptions
from invokeai.backend.image_util.invisible_watermark import InvisibleWatermark
from invokeai.backend.image_util.safety_checker import SafetyChecker
from .baseinvocation import BaseInvocation, Input, InputField, InvocationContext, WithMetadata, invocation
from .baseinvocation import (
BaseInvocation,
Classification,
Input,
InputField,
InvocationContext,
WithMetadata,
invocation,
)
@invocation("show_image", title="Show Image", tags=["image"], category="image", version="1.0.0")
@ -113,11 +121,11 @@ class ImageCropInvocation(BaseInvocation, WithMetadata):
@invocation(
"img_paste",
title="Paste Image",
tags=["image", "paste"],
invocation_type="img_pad_crop",
title="Center Pad or Crop Image",
category="image",
version="1.2.0",
tags=["image", "pad", "crop"],
version="1.0.0",
)
class CenterPadCropInvocation(BaseInvocation):
"""Pad or crop an image's sides from the center by specified pixels. Positive values are outside of the image."""
@ -168,11 +176,11 @@ class CenterPadCropInvocation(BaseInvocation):
@invocation(
invocation_type="img_pad_crop",
title="Center Pad or Crop Image",
"img_paste",
title="Paste Image",
tags=["image", "paste"],
category="image",
tags=["image", "pad", "crop"],
version="1.0.0",
version="1.2.0",
)
class ImagePasteInvocation(BaseInvocation, WithMetadata):
"""Pastes an image into another image."""
@ -421,6 +429,64 @@ class ImageBlurInvocation(BaseInvocation, WithMetadata):
)
@invocation(
"unsharp_mask",
title="Unsharp Mask",
tags=["image", "unsharp_mask"],
category="image",
version="1.2.0",
classification=Classification.Beta,
)
class UnsharpMaskInvocation(BaseInvocation, WithMetadata):
"""Applies an unsharp mask filter to an image"""
image: ImageField = InputField(description="The image to use")
radius: float = InputField(gt=0, description="Unsharp mask radius", default=2)
strength: float = InputField(ge=0, description="Unsharp mask strength", default=50)
def pil_from_array(self, arr):
return Image.fromarray((arr * 255).astype("uint8"))
def array_from_pil(self, img):
return numpy.array(img) / 255
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
mode = image.mode
alpha_channel = image.getchannel("A") if mode == "RGBA" else None
image = image.convert("RGB")
image_blurred = self.array_from_pil(image.filter(ImageFilter.GaussianBlur(radius=self.radius)))
image = self.array_from_pil(image)
image += (image - image_blurred) * (self.strength / 100.0)
image = numpy.clip(image, 0, 1)
image = self.pil_from_array(image)
image = image.convert(mode)
# Make the image RGBA if we had a source alpha channel
if alpha_channel is not None:
image.putalpha(alpha_channel)
image_dto = context.services.images.create(
image=image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata,
workflow=context.workflow,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image.width,
height=image.height,
)
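For comparison, PIL ships a built-in unsharp mask filter that performs a similar sharpen; the node above implements the blend manually, presumably to control the strength scaling and to preserve the alpha channel:

from PIL import Image, ImageFilter

img = Image.open("photo.png").convert("RGB")
# radius is the Gaussian blur radius, percent the sharpening strength,
# threshold the minimum brightness change that will be sharpened
sharpened = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))
sharpened.save("sharpened.png")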
PIL_RESAMPLING_MODES = Literal[
"nearest",
"box",

View File

@ -2,7 +2,7 @@ import os
from builtins import float
from typing import List, Union
from pydantic import BaseModel, ConfigDict, Field
from pydantic import BaseModel, ConfigDict, Field, field_validator, model_validator
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
@ -15,6 +15,7 @@ from invokeai.app.invocations.baseinvocation import (
invocation_output,
)
from invokeai.app.invocations.primitives import ImageField
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.shared.fields import FieldDescriptions
from invokeai.backend.model_management.models.base import BaseModelType, ModelType
from invokeai.backend.model_management.models.ip_adapter import get_ip_adapter_image_encoder_model_id
@ -39,7 +40,6 @@ class IPAdapterField(BaseModel):
ip_adapter_model: IPAdapterModelField = Field(description="The IP-Adapter model to use.")
image_encoder_model: CLIPVisionModelField = Field(description="The name of the CLIP image encoder model.")
weight: Union[float, List[float]] = Field(default=1, description="The weight given to the IP-Adapter")
# weight: float = Field(default=1.0, ge=0, description="The weight of the IP-Adapter.")
begin_step_percent: float = Field(
default=0, ge=0, le=1, description="When the IP-Adapter is first applied (% of total steps)"
)
@ -47,6 +47,17 @@ class IPAdapterField(BaseModel):
default=1, ge=0, le=1, description="When the IP-Adapter is last applied (% of total steps)"
)
@field_validator("weight")
@classmethod
def validate_ip_adapter_weight(cls, v):
validate_weights(v)
return v
@model_validator(mode="after")
def validate_begin_end_step_percent(self):
validate_begin_end_step(self.begin_step_percent, self.end_step_percent)
return self
@invocation_output("ip_adapter_output")
class IPAdapterOutput(BaseInvocationOutput):
@ -54,7 +65,7 @@ class IPAdapterOutput(BaseInvocationOutput):
ip_adapter: IPAdapterField = OutputField(description=FieldDescriptions.ip_adapter, title="IP-Adapter")
@invocation("ip_adapter", title="IP-Adapter", tags=["ip_adapter", "control"], category="ip_adapter", version="1.1.0")
@invocation("ip_adapter", title="IP-Adapter", tags=["ip_adapter", "control"], category="ip_adapter", version="1.1.1")
class IPAdapterInvocation(BaseInvocation):
"""Collects IP-Adapter info to pass to other nodes."""
@ -64,18 +75,27 @@ class IPAdapterInvocation(BaseInvocation):
description="The IP-Adapter model.", title="IP-Adapter Model", input=Input.Direct, ui_order=-1
)
# weight: float = InputField(default=1.0, description="The weight of the IP-Adapter.", ui_type=UIType.Float)
weight: Union[float, List[float]] = InputField(
default=1, ge=-1, description="The weight given to the IP-Adapter", title="Weight"
default=1, description="The weight given to the IP-Adapter", title="Weight"
)
begin_step_percent: float = InputField(
default=0, ge=-1, le=2, description="When the IP-Adapter is first applied (% of total steps)"
default=0, ge=0, le=1, description="When the IP-Adapter is first applied (% of total steps)"
)
end_step_percent: float = InputField(
default=1, ge=0, le=1, description="When the IP-Adapter is last applied (% of total steps)"
)
@field_validator("weight")
@classmethod
def validate_ip_adapter_weight(cls, v):
validate_weights(v)
return v
@model_validator(mode="after")
def validate_begin_end_step_percent(self):
validate_begin_end_step(self.begin_step_percent, self.end_step_percent)
return self
def invoke(self, context: InvocationContext) -> IPAdapterOutput:
# Lookup the CLIP Vision encoder that is intended to be used with the IP-Adapter model.
ip_adapter_info = context.services.model_manager.model_info(

View File

@ -1,5 +1,6 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654)
import math
from contextlib import ExitStack
from functools import singledispatchmethod
from typing import List, Literal, Optional, Union
@ -220,7 +221,7 @@ def get_scheduler(
title="Denoise Latents",
tags=["latents", "denoise", "txt2img", "t2i", "t2l", "img2img", "i2i", "l2l"],
category="latents",
version="1.5.0",
version="1.5.1",
)
class DenoiseLatentsInvocation(BaseInvocation):
"""Denoises noisy latents to decodable images"""
@ -279,7 +280,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
ui_order=7,
)
cfg_rescale_multiplier: float = InputField(
default=0, ge=0, lt=1, description=FieldDescriptions.cfg_rescale_multiplier
title="CFG Rescale Multiplier", default=0, ge=0, lt=1, description=FieldDescriptions.cfg_rescale_multiplier
)
latents: Optional[LatentsField] = InputField(
default=None,
@ -1228,3 +1229,57 @@ class CropLatentsCoreInvocation(BaseInvocation):
context.services.latents.save(name, cropped_latents)
return build_latents_output(latents_name=name, latents=cropped_latents)
@invocation_output("ideal_size_output")
class IdealSizeOutput(BaseInvocationOutput):
"""Base class for invocations that output an image"""
width: int = OutputField(description="The ideal width of the image (in pixels)")
height: int = OutputField(description="The ideal height of the image (in pixels)")
@invocation(
"ideal_size",
title="Ideal Size",
tags=["latents", "math", "ideal_size"],
version="1.0.2",
)
class IdealSizeInvocation(BaseInvocation):
"""Calculates the ideal size for generation to avoid duplication"""
width: int = InputField(default=1024, description="Final image width")
height: int = InputField(default=576, description="Final image height")
unet: UNetField = InputField(default=None, description=FieldDescriptions.unet)
multiplier: float = InputField(
default=1.0,
description="Amount to multiply the model's dimensions by when calculating the ideal size (may result in initial generation artifacts if too large)",
)
def trim_to_multiple_of(self, *args, multiple_of=LATENT_SCALE_FACTOR):
return tuple((x - x % multiple_of) for x in args)
def invoke(self, context: InvocationContext) -> IdealSizeOutput:
aspect = self.width / self.height
dimension = 512
if self.unet.unet.base_model == BaseModelType.StableDiffusion2:
dimension = 768
elif self.unet.unet.base_model == BaseModelType.StableDiffusionXL:
dimension = 1024
dimension = dimension * self.multiplier
min_dimension = math.floor(dimension * 0.5)
model_area = dimension * dimension # hardcoded for now since all models are trained on square images
if aspect > 1.0:
init_height = max(min_dimension, math.sqrt(model_area / aspect))
init_width = init_height * aspect
else:
init_width = max(min_dimension, math.sqrt(model_area * aspect))
init_height = init_width / aspect
scaled_width, scaled_height = self.trim_to_multiple_of(
math.floor(init_width),
math.floor(init_height),
)
return IdealSizeOutput(width=scaled_width, height=scaled_height)
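A worked example of the calculation above for an SDXL model (dimension = 1024), multiplier 1.0 and a 1536x864 target, assuming LATENT_SCALE_FACTOR is 8:

import math

aspect = 1536 / 864                                              # ~1.78, wider than tall
model_area = 1024 * 1024                                         # square training resolution
init_height = max(512, math.sqrt(model_area / aspect))           # 512 is min_dimension (0.5 * 1024); result 768.0
init_width = init_height * aspect                                 # ~1365.3
width = math.floor(init_width) - math.floor(init_width) % 8       # 1360
height = math.floor(init_height) - math.floor(init_height) % 8    # 768
# -> generate at 1360x768, then upscale or tile up to the 1536x864 target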

View File

@ -1,7 +1,6 @@
# Copyright (c) 2023 Borisov Sergey (https://github.com/StAlKeR7779)
import inspect
import re
# from contextlib import ExitStack
from typing import List, Literal, Union
@ -21,6 +20,7 @@ from invokeai.backend import BaseModelType, ModelType, SubModelType
from ...backend.model_management import ONNXModelPatcher
from ...backend.stable_diffusion import PipelineIntermediateState
from ...backend.util import choose_torch_device
from ..util.ti_utils import extract_ti_triggers_from_prompt
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
@ -78,7 +78,7 @@ class ONNXPromptInvocation(BaseInvocation):
]
ti_list = []
for trigger in re.findall(r"<[a-zA-Z0-9., _-]+>", self.prompt):
for trigger in extract_ti_triggers_from_prompt(self.prompt):
name = trigger[1:-1]
try:
ti_list.append(

View File

@ -1,6 +1,6 @@
from typing import Union
from pydantic import BaseModel, ConfigDict, Field
from pydantic import BaseModel, ConfigDict, Field, field_validator, model_validator
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
@ -14,6 +14,7 @@ from invokeai.app.invocations.baseinvocation import (
)
from invokeai.app.invocations.controlnet_image_processors import CONTROLNET_RESIZE_VALUES
from invokeai.app.invocations.primitives import ImageField
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.shared.fields import FieldDescriptions
from invokeai.backend.model_management.models.base import BaseModelType
@ -37,6 +38,17 @@ class T2IAdapterField(BaseModel):
)
resize_mode: CONTROLNET_RESIZE_VALUES = Field(default="just_resize", description="The resize mode to use")
@field_validator("weight")
@classmethod
def validate_ip_adapter_weight(cls, v):
validate_weights(v)
return v
@model_validator(mode="after")
def validate_begin_end_step_percent(self):
validate_begin_end_step(self.begin_step_percent, self.end_step_percent)
return self
@invocation_output("t2i_adapter_output")
class T2IAdapterOutput(BaseInvocationOutput):
@ -44,7 +56,7 @@ class T2IAdapterOutput(BaseInvocationOutput):
@invocation(
"t2i_adapter", title="T2I-Adapter", tags=["t2i_adapter", "control"], category="t2i_adapter", version="1.0.0"
"t2i_adapter", title="T2I-Adapter", tags=["t2i_adapter", "control"], category="t2i_adapter", version="1.0.1"
)
class T2IAdapterInvocation(BaseInvocation):
"""Collects T2I-Adapter info to pass to other nodes."""
@ -61,7 +73,7 @@ class T2IAdapterInvocation(BaseInvocation):
default=1, ge=0, description="The weight given to the T2I-Adapter", title="Weight"
)
begin_step_percent: float = InputField(
default=0, ge=-1, le=2, description="When the T2I-Adapter is first applied (% of total steps)"
default=0, ge=0, le=1, description="When the T2I-Adapter is first applied (% of total steps)"
)
end_step_percent: float = InputField(
default=1, ge=0, le=1, description="When the T2I-Adapter is last applied (% of total steps)"
@ -71,6 +83,17 @@ class T2IAdapterInvocation(BaseInvocation):
description="The resize mode applied to the T2I-Adapter input image so that it matches the target output size.",
)
@field_validator("weight")
@classmethod
def validate_ip_adapter_weight(cls, v):
validate_weights(v)
return v
@model_validator(mode="after")
def validate_begin_end_step_percent(self):
validate_begin_end_step(self.begin_step_percent, self.end_step_percent)
return self
def invoke(self, context: InvocationContext) -> T2IAdapterOutput:
return T2IAdapterOutput(
t2i_adapter=T2IAdapterField(

View File

@ -1,3 +1,5 @@
from typing import Literal
import numpy as np
from PIL import Image
from pydantic import BaseModel
@ -5,6 +7,8 @@ from pydantic import BaseModel
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Classification,
Input,
InputField,
InvocationContext,
OutputField,
@ -14,7 +18,13 @@ from invokeai.app.invocations.baseinvocation import (
)
from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.backend.tiles.tiles import calc_tiles_with_overlap, merge_tiles_with_linear_blending
from invokeai.backend.tiles.tiles import (
calc_tiles_even_split,
calc_tiles_min_overlap,
calc_tiles_with_overlap,
merge_tiles_with_linear_blending,
merge_tiles_with_seam_blending,
)
from invokeai.backend.tiles.utils import Tile
@ -28,7 +38,14 @@ class CalculateImageTilesOutput(BaseInvocationOutput):
tiles: list[Tile] = OutputField(description="The tiles coordinates that cover a particular image shape.")
@invocation("calculate_image_tiles", title="Calculate Image Tiles", tags=["tiles"], category="tiles", version="1.0.0")
@invocation(
"calculate_image_tiles",
title="Calculate Image Tiles",
tags=["tiles"],
category="tiles",
version="1.0.0",
classification=Classification.Beta,
)
class CalculateImageTilesInvocation(BaseInvocation):
"""Calculate the coordinates and overlaps of tiles that cover a target image shape."""
@ -55,6 +72,79 @@ class CalculateImageTilesInvocation(BaseInvocation):
return CalculateImageTilesOutput(tiles=tiles)
@invocation(
"calculate_image_tiles_even_split",
title="Calculate Image Tiles Even Split",
tags=["tiles"],
category="tiles",
version="1.1.0",
classification=Classification.Beta,
)
class CalculateImageTilesEvenSplitInvocation(BaseInvocation):
"""Calculate the coordinates and overlaps of tiles that cover a target image shape."""
image_width: int = InputField(ge=1, default=1024, description="The image width, in pixels, to calculate tiles for.")
image_height: int = InputField(
ge=1, default=1024, description="The image height, in pixels, to calculate tiles for."
)
num_tiles_x: int = InputField(
default=2,
ge=1,
description="Number of tiles to divide image into on the x axis",
)
num_tiles_y: int = InputField(
default=2,
ge=1,
description="Number of tiles to divide image into on the y axis",
)
overlap: int = InputField(
default=128,
ge=0,
multiple_of=8,
description="The overlap, in pixels, between adjacent tiles.",
)
def invoke(self, context: InvocationContext) -> CalculateImageTilesOutput:
tiles = calc_tiles_even_split(
image_height=self.image_height,
image_width=self.image_width,
num_tiles_x=self.num_tiles_x,
num_tiles_y=self.num_tiles_y,
overlap=self.overlap,
)
return CalculateImageTilesOutput(tiles=tiles)
@invocation(
"calculate_image_tiles_min_overlap",
title="Calculate Image Tiles Minimum Overlap",
tags=["tiles"],
category="tiles",
version="1.0.0",
classification=Classification.Beta,
)
class CalculateImageTilesMinimumOverlapInvocation(BaseInvocation):
"""Calculate the coordinates and overlaps of tiles that cover a target image shape."""
image_width: int = InputField(ge=1, default=1024, description="The image width, in pixels, to calculate tiles for.")
image_height: int = InputField(
ge=1, default=1024, description="The image height, in pixels, to calculate tiles for."
)
tile_width: int = InputField(ge=1, default=576, description="The tile width, in pixels.")
tile_height: int = InputField(ge=1, default=576, description="The tile height, in pixels.")
min_overlap: int = InputField(default=128, ge=0, description="Minimum overlap between adjacent tiles, in pixels.")
def invoke(self, context: InvocationContext) -> CalculateImageTilesOutput:
tiles = calc_tiles_min_overlap(
image_height=self.image_height,
image_width=self.image_width,
tile_height=self.tile_height,
tile_width=self.tile_width,
min_overlap=self.min_overlap,
)
return CalculateImageTilesOutput(tiles=tiles)
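A hedged usage sketch of the tile calculators introduced above, using the same keyword arguments as the invocations:

from invokeai.backend.tiles.tiles import calc_tiles_even_split, calc_tiles_min_overlap

# 2x2 grid with 128 px of overlap between adjacent tiles
even_tiles = calc_tiles_even_split(
    image_height=1024, image_width=1024, num_tiles_x=2, num_tiles_y=2, overlap=128
)

# fixed 576x576 tiles with at least 128 px of overlap between neighbors
min_overlap_tiles = calc_tiles_min_overlap(
    image_height=1024, image_width=1024, tile_height=576, tile_width=576, min_overlap=128
)

print(len(even_tiles), len(min_overlap_tiles))  # each call returns a list of Tile objects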
@invocation_output("tile_to_properties_output")
class TileToPropertiesOutput(BaseInvocationOutput):
coords_left: int = OutputField(description="Left coordinate of the tile relative to its parent image.")
@ -76,7 +166,14 @@ class TileToPropertiesOutput(BaseInvocationOutput):
overlap_right: int = OutputField(description="Overlap between this tile and its right neighbor.")
@invocation("tile_to_properties", title="Tile to Properties", tags=["tiles"], category="tiles", version="1.0.0")
@invocation(
"tile_to_properties",
title="Tile to Properties",
tags=["tiles"],
category="tiles",
version="1.0.0",
classification=Classification.Beta,
)
class TileToPropertiesInvocation(BaseInvocation):
"""Split a Tile into its individual properties."""
@ -102,7 +199,14 @@ class PairTileImageOutput(BaseInvocationOutput):
tile_with_image: TileWithImage = OutputField(description="A tile description with its corresponding image.")
@invocation("pair_tile_image", title="Pair Tile with Image", tags=["tiles"], category="tiles", version="1.0.0")
@invocation(
"pair_tile_image",
title="Pair Tile with Image",
tags=["tiles"],
category="tiles",
version="1.0.0",
classification=Classification.Beta,
)
class PairTileImageInvocation(BaseInvocation):
"""Pair an image with its tile properties."""
@ -121,13 +225,29 @@ class PairTileImageInvocation(BaseInvocation):
)
@invocation("merge_tiles_to_image", title="Merge Tiles to Image", tags=["tiles"], category="tiles", version="1.1.0")
BLEND_MODES = Literal["Linear", "Seam"]
@invocation(
"merge_tiles_to_image",
title="Merge Tiles to Image",
tags=["tiles"],
category="tiles",
version="1.1.0",
classification=Classification.Beta,
)
class MergeTilesToImageInvocation(BaseInvocation, WithMetadata):
"""Merge multiple tile images into a single image."""
# Inputs
tiles_with_images: list[TileWithImage] = InputField(description="A list of tile images with tile properties.")
blend_mode: BLEND_MODES = InputField(
default="Seam",
description="blending type Linear or Seam",
input=Input.Direct,
)
blend_amount: int = InputField(
default=32,
ge=0,
description="The amount to blend adjacent tiles in pixels. Must be <= the amount of overlap between adjacent tiles.",
)
@ -157,10 +277,18 @@ class MergeTilesToImageInvocation(BaseInvocation, WithMetadata):
channels = tile_np_images[0].shape[-1]
dtype = tile_np_images[0].dtype
np_image = np.zeros(shape=(height, width, channels), dtype=dtype)
if self.blend_mode == "Linear":
merge_tiles_with_linear_blending(
dst_image=np_image, tiles=tiles, tile_images=tile_np_images, blend_amount=self.blend_amount
)
elif self.blend_mode == "Seam":
merge_tiles_with_seam_blending(
dst_image=np_image, tiles=tiles, tile_images=tile_np_images, blend_amount=self.blend_amount
)
else:
raise ValueError(f"Unsupported blend mode: '{self.blend_mode}'.")
merge_tiles_with_linear_blending(
dst_image=np_image, tiles=tiles, tile_images=tile_np_images, blend_amount=self.blend_amount
)
# Convert into a PIL image and save
pil_image = Image.fromarray(np_image)
image_dto = context.services.images.create(

View File

@ -0,0 +1,14 @@
from typing import Union
def validate_weights(weights: Union[float, list[float]]) -> None:
"""Validate that all control weights in the valid range"""
to_validate = weights if isinstance(weights, list) else [weights]
if any(i < -1 or i > 2 for i in to_validate):
raise ValueError("Control weights must be within -1 to 2 range")
def validate_begin_end_step(begin_step_percent: float, end_step_percent: float) -> None:
"""Validate that begin_step_percent is less than end_step_percent"""
if begin_step_percent >= end_step_percent:
raise ValueError("Begin step percent must be less than or equal to end step percent")

View File

@ -20,63 +20,6 @@ class SqliteBoardImageRecordStorage(BoardImageRecordStorageBase):
self._conn = db.conn
self._cursor = self._conn.cursor()
try:
self._lock.acquire()
self._create_tables()
self._conn.commit()
finally:
self._lock.release()
def _create_tables(self) -> None:
"""Creates the `board_images` junction table."""
# Create the `board_images` junction table.
self._cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS board_images (
board_id TEXT NOT NULL,
image_name TEXT NOT NULL,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Soft delete, currently unused
deleted_at DATETIME,
-- enforce one-to-many relationship between boards and images using PK
-- (we can extend this to many-to-many later)
PRIMARY KEY (image_name),
FOREIGN KEY (board_id) REFERENCES boards (board_id) ON DELETE CASCADE,
FOREIGN KEY (image_name) REFERENCES images (image_name) ON DELETE CASCADE
);
"""
)
# Add index for board id
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_board_images_board_id ON board_images (board_id);
"""
)
# Add index for board id, sorted by created_at
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_board_images_board_id_created_at ON board_images (board_id, created_at);
"""
)
# Add trigger for `updated_at`.
self._cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_board_images_updated_at
AFTER UPDATE
ON board_images FOR EACH ROW
BEGIN
UPDATE board_images SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE board_id = old.board_id AND image_name = old.image_name;
END;
"""
)
def add_image_to_board(
self,
board_id: str,

View File

@ -28,52 +28,6 @@ class SqliteBoardRecordStorage(BoardRecordStorageBase):
self._conn = db.conn
self._cursor = self._conn.cursor()
try:
self._lock.acquire()
self._create_tables()
self._conn.commit()
finally:
self._lock.release()
def _create_tables(self) -> None:
"""Creates the `boards` table and `board_images` junction table."""
# Create the `boards` table.
self._cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS boards (
board_id TEXT NOT NULL PRIMARY KEY,
board_name TEXT NOT NULL,
cover_image_name TEXT,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Soft delete, currently unused
deleted_at DATETIME,
FOREIGN KEY (cover_image_name) REFERENCES images (image_name) ON DELETE SET NULL
);
"""
)
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_boards_created_at ON boards (created_at);
"""
)
# Add trigger for `updated_at`.
self._cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_boards_updated_at
AFTER UPDATE
ON boards FOR EACH ROW
BEGIN
UPDATE boards SET updated_at = current_timestamp
WHERE board_id = old.board_id;
END;
"""
)
def delete(self, board_id: str) -> None:
try:
self._lock.acquire()

View File

@ -1,6 +1,7 @@
"""
Init file for InvokeAI configure package
"""
"""Init file for InvokeAI configure package."""
from .config_base import PagingArgumentParser # noqa F401
from .config_default import InvokeAIAppConfig, get_invokeai_config # noqa F401
from invokeai.app.services.config.config_common import PagingArgumentParser
from .config_default import InvokeAIAppConfig, get_invokeai_config
__all__ = ["InvokeAIAppConfig", "get_invokeai_config", "PagingArgumentParser"]

View File

@ -173,7 +173,7 @@ from __future__ import annotations
import os
from pathlib import Path
from typing import ClassVar, Dict, List, Literal, Optional, Union, get_type_hints
from typing import Any, ClassVar, Dict, List, Literal, Optional, Union, get_type_hints
from omegaconf import DictConfig, OmegaConf
from pydantic import Field, TypeAdapter
@ -209,7 +209,7 @@ class InvokeAIAppConfig(InvokeAISettings):
"""Configuration object for InvokeAI App."""
singleton_config: ClassVar[Optional[InvokeAIAppConfig]] = None
singleton_init: ClassVar[Optional[Dict]] = None
singleton_init: ClassVar[Optional[Dict[str, Any]]] = None
# fmt: off
type: Literal["InvokeAI"] = "InvokeAI"
@ -221,6 +221,9 @@ class InvokeAIAppConfig(InvokeAISettings):
allow_credentials : bool = Field(default=True, description="Allow CORS credentials", json_schema_extra=Categories.WebServer)
allow_methods : List[str] = Field(default=["*"], description="Methods allowed for CORS", json_schema_extra=Categories.WebServer)
allow_headers : List[str] = Field(default=["*"], description="Headers allowed for CORS", json_schema_extra=Categories.WebServer)
# SSL options correspond to https://www.uvicorn.org/settings/#https
ssl_certfile : Optional[Path] = Field(default=None, description="SSL certificate file (for HTTPS)", json_schema_extra=Categories.WebServer)
ssl_keyfile : Optional[Path] = Field(default=None, description="SSL key file", json_schema_extra=Categories.WebServer)
# FEATURES
esrgan : bool = Field(default=True, description="Enable/disable upscaling code", json_schema_extra=Categories.Features)
@ -260,7 +263,7 @@ class InvokeAIAppConfig(InvokeAISettings):
# DEVICE
device : Literal["auto", "cpu", "cuda", "cuda:1", "mps"] = Field(default="auto", description="Generation device", json_schema_extra=Categories.Device)
precision : Literal["auto", "float16", "float32", "autocast"] = Field(default="auto", description="Floating point precision", json_schema_extra=Categories.Device)
precision : Literal["auto", "float16", "bfloat16", "float32", "autocast"] = Field(default="auto", description="Floating point precision", json_schema_extra=Categories.Device)
# GENERATION
sequential_guidance : bool = Field(default=False, description="Whether to calculate guidance in serial instead of in parallel, lowering memory requirements", json_schema_extra=Categories.Generation)
@ -298,8 +301,8 @@ class InvokeAIAppConfig(InvokeAISettings):
self,
argv: Optional[list[str]] = None,
conf: Optional[DictConfig] = None,
clobber=False,
):
clobber: Optional[bool] = False,
) -> None:
"""
Update settings with contents of init file, environment, and command-line settings.
@ -334,7 +337,7 @@ class InvokeAIAppConfig(InvokeAISettings):
)
@classmethod
def get_config(cls, **kwargs) -> InvokeAIAppConfig:
def get_config(cls, **kwargs: Any) -> InvokeAIAppConfig:
"""Return a singleton InvokeAIAppConfig configuration object."""
if (
cls.singleton_config is None
@ -353,7 +356,7 @@ class InvokeAIAppConfig(InvokeAISettings):
else:
root = self.find_root().expanduser().absolute()
self.root = root # insulate ourselves from relative paths that may change
return root
return root.resolve()
@property
def root_dir(self) -> Path:
@ -383,17 +386,17 @@ class InvokeAIAppConfig(InvokeAISettings):
return db_dir / DB_FILE
@property
def model_conf_path(self) -> Optional[Path]:
def model_conf_path(self) -> Path:
"""Path to models configuration file."""
return self._resolve(self.conf_path)
@property
def legacy_conf_path(self) -> Optional[Path]:
def legacy_conf_path(self) -> Path:
"""Path to directory of legacy configuration files (e.g. v1-inference.yaml)."""
return self._resolve(self.legacy_conf_dir)
@property
def models_path(self) -> Optional[Path]:
def models_path(self) -> Path:
"""Path to the models directory."""
return self._resolve(self.models_dir)
@ -452,7 +455,7 @@ class InvokeAIAppConfig(InvokeAISettings):
return _find_root()
def get_invokeai_config(**kwargs) -> InvokeAIAppConfig:
def get_invokeai_config(**kwargs: Any) -> InvokeAIAppConfig:
"""Legacy function which returns InvokeAIAppConfig.get_config()."""
return InvokeAIAppConfig.get_config(**kwargs)

View File

@ -0,0 +1,12 @@
"""Init file for download queue."""
from .download_base import DownloadJob, DownloadJobStatus, DownloadQueueServiceBase, UnknownJobIDException
from .download_default import DownloadQueueService, TqdmProgress
__all__ = [
"DownloadJob",
"DownloadQueueServiceBase",
"DownloadQueueService",
"TqdmProgress",
"DownloadJobStatus",
"UnknownJobIDException",
]

View File

@ -0,0 +1,262 @@
# Copyright (c) 2023 Lincoln D. Stein and the InvokeAI Development Team
"""Model download service."""
from abc import ABC, abstractmethod
from enum import Enum
from functools import total_ordering
from pathlib import Path
from typing import Any, Callable, List, Optional
from pydantic import BaseModel, Field, PrivateAttr
from pydantic.networks import AnyHttpUrl
class DownloadJobStatus(str, Enum):
"""State of a download job."""
WAITING = "waiting" # not enqueued, will not run
RUNNING = "running" # actively downloading
COMPLETED = "completed" # finished running
CANCELLED = "cancelled" # user cancelled
ERROR = "error" # terminated with an error message
class DownloadJobCancelledException(Exception):
"""This exception is raised when a download job is cancelled."""
class UnknownJobIDException(Exception):
"""This exception is raised when an invalid job id is referened."""
class ServiceInactiveException(Exception):
"""This exception is raised when user attempts to initiate a download before the service is started."""
DownloadEventHandler = Callable[["DownloadJob"], None]
DownloadExceptionHandler = Callable[["DownloadJob", Optional[Exception]], None]
@total_ordering
class DownloadJob(BaseModel):
"""Class to monitor and control a model download request."""
# required variables to be passed in on creation
source: AnyHttpUrl = Field(description="Where to download from. Specific types specified in child classes.")
dest: Path = Field(description="Destination of downloaded model on local disk; a directory or file path")
access_token: Optional[str] = Field(default=None, description="authorization token for protected resources")
# automatically assigned on creation
id: int = Field(description="Numeric ID of this job", default=-1) # default id is a sentinel
priority: int = Field(default=10, description="Queue priority; lower values are higher priority")
# set internally during download process
status: DownloadJobStatus = Field(default=DownloadJobStatus.WAITING, description="Status of the download")
download_path: Optional[Path] = Field(default=None, description="Final location of downloaded file")
job_started: Optional[str] = Field(default=None, description="Timestamp for when the download job started")
job_ended: Optional[str] = Field(
default=None, description="Timestamp for when the download job ende1d (completed or errored)"
)
content_type: Optional[str] = Field(default=None, description="Content type of downloaded file")
bytes: int = Field(default=0, description="Bytes downloaded so far")
total_bytes: int = Field(default=0, description="Total file size (bytes)")
# set when an error occurs
error_type: Optional[str] = Field(default=None, description="Name of exception that caused an error")
error: Optional[str] = Field(default=None, description="Traceback of the exception that caused an error")
# internal flag
_cancelled: bool = PrivateAttr(default=False)
# optional event handlers passed in on creation
_on_start: Optional[DownloadEventHandler] = PrivateAttr(default=None)
_on_progress: Optional[DownloadEventHandler] = PrivateAttr(default=None)
_on_complete: Optional[DownloadEventHandler] = PrivateAttr(default=None)
_on_cancelled: Optional[DownloadEventHandler] = PrivateAttr(default=None)
_on_error: Optional[DownloadExceptionHandler] = PrivateAttr(default=None)
def __hash__(self) -> int:
"""Return hash of the string representation of this object, for indexing."""
return hash(str(self))
def __le__(self, other: "DownloadJob") -> bool:
"""Return True if this job's priority is less than another's."""
return self.priority <= other.priority
def cancel(self) -> None:
"""Call to cancel the job."""
self._cancelled = True
# cancelled and the callbacks are private attributes in order to prevent
# them from being serialized and/or used in the Json Schema
@property
def cancelled(self) -> bool:
"""Call to cancel the job."""
return self._cancelled
@property
def complete(self) -> bool:
"""Return true if job completed without errors."""
return self.status == DownloadJobStatus.COMPLETED
@property
def running(self) -> bool:
"""Return true if the job is running."""
return self.status == DownloadJobStatus.RUNNING
@property
def errored(self) -> bool:
"""Return true if the job is errored."""
return self.status == DownloadJobStatus.ERROR
@property
def in_terminal_state(self) -> bool:
"""Return true if job has finished, one way or another."""
return self.status not in [DownloadJobStatus.WAITING, DownloadJobStatus.RUNNING]
@property
def on_start(self) -> Optional[DownloadEventHandler]:
"""Return the on_start event handler."""
return self._on_start
@property
def on_progress(self) -> Optional[DownloadEventHandler]:
"""Return the on_progress event handler."""
return self._on_progress
@property
def on_complete(self) -> Optional[DownloadEventHandler]:
"""Return the on_complete event handler."""
return self._on_complete
@property
def on_error(self) -> Optional[DownloadExceptionHandler]:
"""Return the on_error event handler."""
return self._on_error
@property
def on_cancelled(self) -> Optional[DownloadEventHandler]:
"""Return the on_cancelled event handler."""
return self._on_cancelled
def set_callbacks(
self,
on_start: Optional[DownloadEventHandler] = None,
on_progress: Optional[DownloadEventHandler] = None,
on_complete: Optional[DownloadEventHandler] = None,
on_cancelled: Optional[DownloadEventHandler] = None,
on_error: Optional[DownloadExceptionHandler] = None,
) -> None:
"""Set the callbacks for download events."""
self._on_start = on_start
self._on_progress = on_progress
self._on_complete = on_complete
self._on_error = on_error
self._on_cancelled = on_cancelled
class DownloadQueueServiceBase(ABC):
"""Multithreaded queue for downloading models via URL."""
@abstractmethod
def start(self, *args: Any, **kwargs: Any) -> None:
"""Start the download worker threads."""
@abstractmethod
def stop(self, *args: Any, **kwargs: Any) -> None:
"""Stop the download worker threads."""
@abstractmethod
def download(
self,
source: AnyHttpUrl,
dest: Path,
priority: int = 10,
access_token: Optional[str] = None,
on_start: Optional[DownloadEventHandler] = None,
on_progress: Optional[DownloadEventHandler] = None,
on_complete: Optional[DownloadEventHandler] = None,
on_cancelled: Optional[DownloadEventHandler] = None,
on_error: Optional[DownloadExceptionHandler] = None,
) -> DownloadJob:
"""
Create and enqueue download job.
:param source: Source of the download as a URL.
:param dest: Path to download to. See below.
:param on_start, on_progress, on_complete, on_error: Callbacks for the indicated
events.
:returns: A DownloadJob object for monitoring the state of the download.
The `dest` argument is a Path object. Its behavior is:
1. If the path exists and is a directory, then the URL contents will be downloaded
into that directory using the filename indicated in the response's `Content-Disposition` field.
If no content-disposition is present, then the last component of the URL will be used (similar to
wget's behavior).
2. If the path does not exist, then it is taken as the name of a new file to create with the downloaded
content.
3. If the path exists and is an existing file, then the downloader will try to resume the download from
the end of the existing file.
"""
pass
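A hypothetical usage sketch of the concrete queue (DownloadQueueService, defined further below), downloading into an existing directory so that the filename is taken from the response's Content-Disposition header or the URL:

from pathlib import Path

queue = DownloadQueueService()
queue.start()

job = queue.download(
    source="https://example.com/models/some_model.safetensors",  # illustrative URL
    dest=Path("/tmp/downloads"),  # existing directory -> filename chosen from the response
    on_complete=lambda j: print(f"saved to {j.download_path}"),
)

queue.join()   # block until all queued jobs have finished
queue.stop()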
@abstractmethod
def submit_download_job(
self,
job: DownloadJob,
on_start: Optional[DownloadEventHandler] = None,
on_progress: Optional[DownloadEventHandler] = None,
on_complete: Optional[DownloadEventHandler] = None,
on_cancelled: Optional[DownloadEventHandler] = None,
on_error: Optional[DownloadExceptionHandler] = None,
) -> None:
"""
Enqueue a download job.
:param job: The DownloadJob
:param on_start, on_progress, on_complete, on_error: Callbacks for the indicated
events.
"""
pass
@abstractmethod
def list_jobs(self) -> List[DownloadJob]:
"""
List active download jobs.
:returns List[DownloadJob]: List of download jobs whose state is not "completed."
"""
pass
@abstractmethod
def id_to_job(self, id: int) -> DownloadJob:
"""
Return the DownloadJob corresponding to the integer ID.
:param id: ID of the DownloadJob.
Exceptions:
* UnknownJobIDException
"""
pass
@abstractmethod
def cancel_all_jobs(self) -> None:
"""Cancel all active and enquedjobs."""
pass
@abstractmethod
def prune_jobs(self) -> None:
"""Prune completed and errored queue items from the job list."""
pass
@abstractmethod
def cancel_job(self, job: DownloadJob) -> None:
"""Cancel the job, clearing partial downloads and putting it into ERROR state."""
pass
@abstractmethod
def join(self) -> None:
"""Wait until all jobs are off the queue."""
pass

View File

@ -0,0 +1,437 @@
# Copyright (c) 2023, Lincoln D. Stein
"""Implementation of multithreaded download queue for invokeai."""
import os
import re
import threading
import traceback
from pathlib import Path
from queue import Empty, PriorityQueue
from typing import Any, Dict, List, Optional
import requests
from pydantic.networks import AnyHttpUrl
from requests import HTTPError
from tqdm import tqdm
from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.util.misc import get_iso_timestamp
from invokeai.backend.util.logging import InvokeAILogger
from .download_base import (
DownloadEventHandler,
DownloadExceptionHandler,
DownloadJob,
DownloadJobCancelledException,
DownloadJobStatus,
DownloadQueueServiceBase,
ServiceInactiveException,
UnknownJobIDException,
)
# Maximum number of bytes to download during each call to requests.iter_content()
DOWNLOAD_CHUNK_SIZE = 100000
class DownloadQueueService(DownloadQueueServiceBase):
"""Class for queued download of models."""
def __init__(
self,
max_parallel_dl: int = 5,
event_bus: Optional[EventServiceBase] = None,
requests_session: Optional[requests.sessions.Session] = None,
):
"""
Initialize DownloadQueue.
:param max_parallel_dl: Number of simultaneous downloads allowed [5].
:param requests_session: Optional requests.sessions.Session object, for unit tests.
"""
self._jobs = {}
self._next_job_id = 0
self._queue = PriorityQueue()
self._stop_event = threading.Event()
self._worker_pool = set()
self._lock = threading.Lock()
self._logger = InvokeAILogger.get_logger("DownloadQueueService")
self._event_bus = event_bus
self._requests = requests_session or requests.Session()
self._accept_download_requests = False
self._max_parallel_dl = max_parallel_dl
def start(self, *args: Any, **kwargs: Any) -> None:
"""Start the download worker threads."""
with self._lock:
if self._worker_pool:
raise Exception("Attempt to start the download service twice")
self._stop_event.clear()
self._start_workers(self._max_parallel_dl)
self._accept_download_requests = True
def stop(self, *args: Any, **kwargs: Any) -> None:
"""Stop the download worker threads."""
with self._lock:
if not self._worker_pool:
raise Exception("Attempt to stop the download service before it was started")
self._accept_download_requests = False # reject attempts to add new jobs to queue
queued_jobs = [x for x in self.list_jobs() if x.status == DownloadJobStatus.WAITING]
active_jobs = [x for x in self.list_jobs() if x.status == DownloadJobStatus.RUNNING]
if queued_jobs:
self._logger.warning(f"Cancelling {len(queued_jobs)} queued downloads")
if active_jobs:
self._logger.info(f"Waiting for {len(active_jobs)} active download jobs to complete")
with self._queue.mutex:
self._queue.queue.clear()
self.join() # wait for all active jobs to finish
self._stop_event.set()
self._worker_pool.clear()
def submit_download_job(
self,
job: DownloadJob,
on_start: Optional[DownloadEventHandler] = None,
on_progress: Optional[DownloadEventHandler] = None,
on_complete: Optional[DownloadEventHandler] = None,
on_cancelled: Optional[DownloadEventHandler] = None,
on_error: Optional[DownloadExceptionHandler] = None,
) -> None:
"""Enqueue a download job."""
if not self._accept_download_requests:
raise ServiceInactiveException(
"The download service is not currently accepting requests. Please call start() to initialize the service."
)
with self._lock:
job.id = self._next_job_id
self._next_job_id += 1
job.set_callbacks(
on_start=on_start,
on_progress=on_progress,
on_complete=on_complete,
on_cancelled=on_cancelled,
on_error=on_error,
)
self._jobs[job.id] = job
self._queue.put(job)
def download(
self,
source: AnyHttpUrl,
dest: Path,
priority: int = 10,
access_token: Optional[str] = None,
on_start: Optional[DownloadEventHandler] = None,
on_progress: Optional[DownloadEventHandler] = None,
on_complete: Optional[DownloadEventHandler] = None,
on_cancelled: Optional[DownloadEventHandler] = None,
on_error: Optional[DownloadExceptionHandler] = None,
) -> DownloadJob:
"""Create and enqueue a download job and return it."""
if not self._accept_download_requests:
raise ServiceInactiveException(
"The download service is not currently accepting requests. Please call start() to initialize the service."
)
job = DownloadJob(
source=source,
dest=dest,
priority=priority,
access_token=access_token,
)
self.submit_download_job(
job,
on_start=on_start,
on_progress=on_progress,
on_complete=on_complete,
on_cancelled=on_cancelled,
on_error=on_error,
)
return job
def join(self) -> None:
"""Wait for all jobs to complete."""
self._queue.join()
def list_jobs(self) -> List[DownloadJob]:
"""List all the jobs."""
return list(self._jobs.values())
def prune_jobs(self) -> None:
"""Prune completed and errored queue items from the job list."""
with self._lock:
to_delete = set()
for job_id, job in self._jobs.items():
if job.in_terminal_state:
to_delete.add(job_id)
for job_id in to_delete:
del self._jobs[job_id]
def id_to_job(self, id: int) -> DownloadJob:
"""Translate a job ID into a DownloadJob object."""
try:
return self._jobs[id]
except KeyError as excp:
raise UnknownJobIDException("Unrecognized job") from excp
def cancel_job(self, job: DownloadJob) -> None:
"""
Cancel the indicated job.
If it is running it will be stopped.
job.status will be set to DownloadJobStatus.CANCELLED
"""
with self._lock:
job.cancel()
def cancel_all_jobs(self) -> None:
"""Cancel all jobs (those not in enqueued, running or paused state)."""
for job in self._jobs.values():
if not job.in_terminal_state:
self.cancel_job(job)
def _start_workers(self, max_workers: int) -> None:
"""Start the requested number of worker threads."""
self._stop_event.clear()
for i in range(0, max_workers): # noqa B007
worker = threading.Thread(target=self._download_next_item, daemon=True)
self._logger.debug(f"Download queue worker thread {worker.name} starting.")
worker.start()
self._worker_pool.add(worker)
def _download_next_item(self) -> None:
"""Worker thread gets next job on priority queue."""
done = False
while not done:
if self._stop_event.is_set():
done = True
continue
try:
job = self._queue.get(timeout=1)
except Empty:
continue
try:
job.job_started = get_iso_timestamp()
self._do_download(job)
self._signal_job_complete(job)
except (OSError, HTTPError) as excp:
job.error_type = excp.__class__.__name__ + f"({str(excp)})"
job.error = traceback.format_exc()
self._signal_job_error(job, excp)
except DownloadJobCancelledException:
self._signal_job_cancelled(job)
self._cleanup_cancelled_job(job)
finally:
job.job_ended = get_iso_timestamp()
self._queue.task_done()
self._logger.debug(f"Download queue worker thread {threading.current_thread().name} exiting.")
def _do_download(self, job: DownloadJob) -> None:
"""Do the actual download."""
url = job.source
header = {"Authorization": f"Bearer {job.access_token}"} if job.access_token else {}
open_mode = "wb"
# Make a streaming request. This will retrieve headers including
# content-length and content-disposition, but not fetch any content itself
resp = self._requests.get(str(url), headers=header, stream=True)
if not resp.ok:
raise HTTPError(resp.reason)
job.content_type = resp.headers.get("Content-Type")
content_length = int(resp.headers.get("content-length", 0))
job.total_bytes = content_length
if job.dest.is_dir():
file_name = os.path.basename(str(url.path)) # default is to use the last bit of the URL
if match := re.search('filename="(.+)"', resp.headers.get("Content-Disposition", "")):
remote_name = match.group(1)
if self._validate_filename(job.dest.as_posix(), remote_name):
file_name = remote_name
job.download_path = job.dest / file_name
else:
job.dest.parent.mkdir(parents=True, exist_ok=True)
job.download_path = job.dest
assert job.download_path
# Don't clobber an existing file. See commit 82c2c85202f88c6d24ff84710f297cfc6ae174af
# for code that instead resumes an interrupted download.
if job.download_path.exists():
raise OSError(f"[Errno 17] File {job.download_path} exists")
# append ".downloading" to the path
in_progress_path = self._in_progress_path(job.download_path)
# signal caller that the download is starting. At this point, key fields such as
# download_path and total_bytes will be populated. We call it here because the might
# discover that the local file is already complete and generate a COMPLETED status.
self._signal_job_started(job)
# "range not satisfiable" - local file is at least as large as the remote file
if resp.status_code == 416 or (content_length > 0 and job.bytes >= content_length):
self._logger.warning(f"{job.download_path}: complete file found. Skipping.")
return
# "partial content" - local file is smaller than remote file
elif resp.status_code == 206 or job.bytes > 0:
self._logger.warning(f"{job.download_path}: partial file found. Resuming")
# some other error
elif resp.status_code != 200:
raise HTTPError(resp.reason)
self._logger.debug(f"{job.source}: Downloading {job.download_path}")
report_delta = job.total_bytes / 100 # report every 1% change
last_report_bytes = 0
# DOWNLOAD LOOP
with open(in_progress_path, open_mode) as file:
for data in resp.iter_content(chunk_size=DOWNLOAD_CHUNK_SIZE):
if job.cancelled:
raise DownloadJobCancelledException("Job was cancelled at caller's request")
job.bytes += file.write(data)
if (job.bytes - last_report_bytes >= report_delta) or (job.bytes >= job.total_bytes):
last_report_bytes = job.bytes
self._signal_job_progress(job)
# if we get here we are done and can rename the file to the original dest
self._logger.debug(f"{job.source}: saved to {job.download_path} (bytes={job.bytes})")
in_progress_path.rename(job.download_path)
def _validate_filename(self, directory: str, filename: str) -> bool:
pc_name_max = os.pathconf(directory, "PC_NAME_MAX") if hasattr(os, "pathconf") else 260 # hardcoded for windows
pc_path_max = (
os.pathconf(directory, "PC_PATH_MAX") if hasattr(os, "pathconf") else 32767
) # hardcoded for windows with long names enabled
if "/" in filename:
return False
if filename.startswith(".."):
return False
if len(filename) > pc_name_max:
return False
if len(os.path.join(directory, filename)) > pc_path_max:
return False
return True
def _in_progress_path(self, path: Path) -> Path:
return path.with_name(path.name + ".downloading")
def _signal_job_started(self, job: DownloadJob) -> None:
job.status = DownloadJobStatus.RUNNING
if job.on_start:
try:
job.on_start(job)
except Exception as e:
self._logger.error(
f"An error occurred while processing the on_start callback: {traceback.format_exception(e)}"
)
if self._event_bus:
assert job.download_path
self._event_bus.emit_download_started(str(job.source), job.download_path.as_posix())
def _signal_job_progress(self, job: DownloadJob) -> None:
if job.on_progress:
try:
job.on_progress(job)
except Exception as e:
self._logger.error(
f"An error occurred while processing the on_progress callback: {traceback.format_exception(e)}"
)
if self._event_bus:
assert job.download_path
self._event_bus.emit_download_progress(
str(job.source),
download_path=job.download_path.as_posix(),
current_bytes=job.bytes,
total_bytes=job.total_bytes,
)
def _signal_job_complete(self, job: DownloadJob) -> None:
job.status = DownloadJobStatus.COMPLETED
if job.on_complete:
try:
job.on_complete(job)
except Exception as e:
self._logger.error(
f"An error occurred while processing the on_complete callback: {traceback.format_exception(e)}"
)
if self._event_bus:
assert job.download_path
self._event_bus.emit_download_complete(
str(job.source), download_path=job.download_path.as_posix(), total_bytes=job.total_bytes
)
def _signal_job_cancelled(self, job: DownloadJob) -> None:
if job.status not in [DownloadJobStatus.RUNNING, DownloadJobStatus.WAITING]:
return
job.status = DownloadJobStatus.CANCELLED
if job.on_cancelled:
try:
job.on_cancelled(job)
except Exception as e:
self._logger.error(
f"An error occurred while processing the on_cancelled callback: {traceback.format_exception(e)}"
)
if self._event_bus:
self._event_bus.emit_download_cancelled(str(job.source))
def _signal_job_error(self, job: DownloadJob, excp: Optional[Exception] = None) -> None:
job.status = DownloadJobStatus.ERROR
self._logger.error(f"{str(job.source)}: {traceback.format_exception(excp)}")
if job.on_error:
try:
job.on_error(job, excp)
except Exception as e:
self._logger.error(
f"An error occurred while processing the on_error callback: {traceback.format_exception(e)}"
)
if self._event_bus:
assert job.error_type
assert job.error
self._event_bus.emit_download_error(str(job.source), error_type=job.error_type, error=job.error)
def _cleanup_cancelled_job(self, job: DownloadJob) -> None:
self._logger.debug(f"Cleaning up leftover files from cancelled download job {job.download_path}")
try:
if job.download_path:
partial_file = self._in_progress_path(job.download_path)
partial_file.unlink()
except OSError as excp:
self._logger.warning(excp)
# Example on_progress event handler to display a TQDM status bar
# Activate with:
# download_service.download('http://foo.bar/baz', '/tmp', on_progress=TqdmProgress().update)
class TqdmProgress(object):
"""TQDM-based progress bar object to use in on_progress handlers."""
_bars: Dict[int, tqdm] # job id -> tqdm bar
_last: Dict[int, int] # job id -> last bytes downloaded
def __init__(self) -> None: # noqa D107
self._bars = {}
self._last = {}
def update(self, job: DownloadJob) -> None: # noqa D102
job_id = job.id
# new job
if job_id not in self._bars:
assert job.download_path
dest = Path(job.download_path).name
self._bars[job_id] = tqdm(
desc=dest,
initial=0,
total=job.total_bytes,
unit="iB",
unit_scale=True,
)
self._last[job_id] = 0
self._bars[job_id].update(job.bytes - self._last[job_id])
self._last[job_id] = job.bytes
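# A minimal usage sketch, assuming a download service that exposes the
# `download()` call shown in the activation comment above (the URL and
# destination are illustrative):
def example_tqdm_download(download_service) -> None:
    progress = TqdmProgress()
    download_service.download(
        "http://foo.bar/baz",  # remote source
        "/tmp",  # destination directory
        on_progress=progress.update,  # one tqdm bar per job id
    )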

View File

@ -0,0 +1 @@
from .events_base import EventServiceBase # noqa F401

View File

@ -1,6 +1,7 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from typing import Any, Optional
from typing import Any, Dict, List, Optional, Union
from invokeai.app.services.invocation_processor.invocation_processor_common import ProgressImage
from invokeai.app.services.session_queue.session_queue_common import (
@ -16,6 +17,8 @@ from invokeai.backend.model_management.models.base import BaseModelType, ModelTy
class EventServiceBase:
queue_event: str = "queue_event"
download_event: str = "download_event"
model_event: str = "model_event"
"""Basic event bus, to have an empty stand-in when not needed"""
@ -30,6 +33,20 @@ class EventServiceBase:
payload={"event": event_name, "data": payload},
)
def __emit_download_event(self, event_name: str, payload: dict) -> None:
payload["timestamp"] = get_timestamp()
self.dispatch(
event_name=EventServiceBase.download_event,
payload={"event": event_name, "data": payload},
)
def __emit_model_event(self, event_name: str, payload: dict) -> None:
payload["timestamp"] = get_timestamp()
self.dispatch(
event_name=EventServiceBase.model_event,
payload={"event": event_name, "data": payload},
)
# Define events here for every event in the system.
# This will make them easier to integrate until we find a schema generator.
def emit_generator_progress(
@ -313,3 +330,166 @@ class EventServiceBase:
event_name="queue_cleared",
payload={"queue_id": queue_id},
)
def emit_download_started(self, source: str, download_path: str) -> None:
"""
Emit when a download job is started.
:param source: The source URL of the download
:param download_path: The local path the file is being downloaded to
"""
self.__emit_download_event(
event_name="download_started",
payload={"source": source, "download_path": download_path},
)
def emit_download_progress(self, source: str, download_path: str, current_bytes: int, total_bytes: int) -> None:
"""
Emit "download_progress" events at regular intervals during a download job.
:param source: The downloaded source
:param download_path: The local downloaded file
:param current_bytes: Number of bytes downloaded so far
:param total_bytes: The size of the file being downloaded (if known)
"""
self.__emit_download_event(
event_name="download_progress",
payload={
"source": source,
"download_path": download_path,
"current_bytes": current_bytes,
"total_bytes": total_bytes,
},
)
def emit_download_complete(self, source: str, download_path: str, total_bytes: int) -> None:
"""
Emit a "download_complete" event at the end of a successful download.
:param source: Source URL
:param download_path: Path to the locally downloaded file
:param total_bytes: The size of the downloaded file
"""
self.__emit_download_event(
event_name="download_complete",
payload={
"source": source,
"download_path": download_path,
"total_bytes": total_bytes,
},
)
def emit_download_cancelled(self, source: str) -> None:
"""Emit a "download_cancelled" event in the event that the download was cancelled by user."""
self.__emit_download_event(
event_name="download_cancelled",
payload={
"source": source,
},
)
def emit_download_error(self, source: str, error_type: str, error: str) -> None:
"""
Emit a "download_error" event when an download job encounters an exception.
:param source: Source URL
:param error_type: The name of the exception that raised the error
:param error: The traceback from this error
"""
self.__emit_download_event(
event_name="download_error",
payload={
"source": source,
"error_type": error_type,
"error": error,
},
)
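# A representative "download_progress" payload as dispatched on the bus, after
# __emit_download_event adds the timestamp (all values illustrative):
#   {"event": "download_progress",
#    "data": {"source": "https://example.com/foo.bin",
#             "download_path": "/tmp/foo.bin",
#             "current_bytes": 1048576,
#             "total_bytes": 4194304,
#             "timestamp": 1706000000}}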
def emit_model_install_downloading(
self,
source: str,
local_path: str,
bytes: int,
total_bytes: int,
parts: List[Dict[str, Union[str, int]]],
) -> None:
"""
Emit at intervals while the install job is in progress (remote models only).
:param source: Source of the model
:param local_path: Where model is downloading to
:param parts: Progress of downloading URLs that comprise the model, if any.
:param bytes: Number of bytes downloaded so far.
:param total_bytes: Total size of download, including all files.
This emits a Dict with keys "source", "local_path", "bytes", "total_bytes" and "parts".
"""
self.__emit_model_event(
event_name="model_install_downloading",
payload={
"source": source,
"local_path": local_path,
"bytes": bytes,
"total_bytes": total_bytes,
"parts": parts,
},
)
def emit_model_install_running(self, source: str) -> None:
"""
Emit once when an install job becomes active.
:param source: Source of the model; local path, repo_id or url
"""
self.__emit_model_event(
event_name="model_install_running",
payload={"source": source},
)
def emit_model_install_completed(self, source: str, key: str, total_bytes: Optional[int] = None) -> None:
"""
Emit when an install job is completed successfully.
:param source: Source of the model; local path, repo_id or url
:param key: Model config record key
:param total_bytes: Size of the model (may be None for installation of a local path)
"""
self.__emit_model_event(
event_name="model_install_completed",
payload={
"source": source,
"total_bytes": total_bytes,
"key": key,
},
)
def emit_model_install_cancelled(self, source: str) -> None:
"""
Emit when an install job is cancelled.
:param source: Source of the model; local path, repo_id or url
"""
self.__emit_model_event(
event_name="model_install_cancelled",
payload={"source": source},
)
def emit_model_install_error(
self,
source: str,
error_type: str,
error: str,
) -> None:
"""
Emit when an install job encounters an exception.
:param source: Source of the model
:param error_type: The name of the exception
:param error: A text description of the exception
"""
self.__emit_model_event(
event_name="model_install_error",
payload={
"source": source,
"error_type": error_type,
"error": error,
},
)
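# A minimal sketch of a concrete bus: the emitters above only require a
# dispatch() implementation (this print-based dispatcher is illustrative,
# not the real FastAPI-backed service):
class PrintingEventService(EventServiceBase):
    def dispatch(self, event_name: str, payload: dict) -> None:
        # Every download/model event funnels through here as
        # {"event": <specific event name>, "data": <payload dict>}.
        print(f"[{event_name}] {payload['event']}: {payload['data']}")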

View File

@ -32,101 +32,6 @@ class SqliteImageRecordStorage(ImageRecordStorageBase):
self._conn = db.conn
self._cursor = self._conn.cursor()
try:
self._lock.acquire()
self._create_tables()
self._conn.commit()
finally:
self._lock.release()
def _create_tables(self) -> None:
"""Creates the `images` table."""
# Create the `images` table.
self._cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS images (
image_name TEXT NOT NULL PRIMARY KEY,
-- This is an enum in python, unrestricted string here for flexibility
image_origin TEXT NOT NULL,
-- This is an enum in python, unrestricted string here for flexibility
image_category TEXT NOT NULL,
width INTEGER NOT NULL,
height INTEGER NOT NULL,
session_id TEXT,
node_id TEXT,
metadata TEXT,
is_intermediate BOOLEAN DEFAULT FALSE,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Soft delete, currently unused
deleted_at DATETIME
);
"""
)
self._cursor.execute("PRAGMA table_info(images)")
columns = [column[1] for column in self._cursor.fetchall()]
if "starred" not in columns:
self._cursor.execute(
"""--sql
ALTER TABLE images ADD COLUMN starred BOOLEAN DEFAULT FALSE;
"""
)
# Create the `images` table indices.
self._cursor.execute(
"""--sql
CREATE UNIQUE INDEX IF NOT EXISTS idx_images_image_name ON images(image_name);
"""
)
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_images_image_origin ON images(image_origin);
"""
)
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_images_image_category ON images(image_category);
"""
)
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_images_created_at ON images(created_at);
"""
)
self._cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_images_starred ON images(starred);
"""
)
# Add trigger for `updated_at`.
self._cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_images_updated_at
AFTER UPDATE
ON images FOR EACH ROW
BEGIN
UPDATE images SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE image_name = old.image_name;
END;
"""
)
self._cursor.execute("PRAGMA table_info(images)")
columns = [column[1] for column in self._cursor.fetchall()]
if "has_workflow" not in columns:
self._cursor.execute(
"""--sql
ALTER TABLE images
ADD COLUMN has_workflow BOOLEAN DEFAULT FALSE;
"""
)
def get(self, image_name: str) -> ImageRecord:
try:
self._lock.acquire()

View File

@ -132,7 +132,6 @@ class DefaultInvocationProcessor(InvocationProcessorABC):
source_node_id=source_node_id,
result=outputs.model_dump(),
)
self.__invoker.services.performance_statistics.log_stats()
except KeyboardInterrupt:
pass
@ -195,6 +194,7 @@ class DefaultInvocationProcessor(InvocationProcessorABC):
error=traceback.format_exc(),
)
elif is_complete:
self.__invoker.services.performance_statistics.log_stats(graph_execution_state.id)
self.__invoker.services.events.emit_graph_execution_complete(
queue_batch_id=queue_item.session_queue_batch_id,
queue_item_id=queue_item.session_queue_item_id,

View File

@ -11,6 +11,7 @@ if TYPE_CHECKING:
from .board_records.board_records_base import BoardRecordStorageBase
from .boards.boards_base import BoardServiceABC
from .config import InvokeAIAppConfig
from .download import DownloadQueueServiceBase
from .events.events_base import EventServiceBase
from .image_files.image_files_base import ImageFileStorageBase
from .image_records.image_records_base import ImageRecordStorageBase
@ -21,12 +22,13 @@ if TYPE_CHECKING:
from .invocation_stats.invocation_stats_base import InvocationStatsServiceBase
from .item_storage.item_storage_base import ItemStorageABC
from .latents_storage.latents_storage_base import LatentsStorageBase
from .model_install import ModelInstallServiceBase
from .model_manager.model_manager_base import ModelManagerServiceBase
from .model_records import ModelRecordServiceBase
from .names.names_base import NameServiceBase
from .session_processor.session_processor_base import SessionProcessorBase
from .session_queue.session_queue_base import SessionQueueBase
from .shared.graph import GraphExecutionState, LibraryGraph
from .shared.graph import GraphExecutionState
from .urls.urls_base import UrlServiceBase
from .workflow_records.workflow_records_base import WorkflowRecordsStorageBase
@ -42,7 +44,6 @@ class InvocationServices:
configuration: "InvokeAIAppConfig"
events: "EventServiceBase"
graph_execution_manager: "ItemStorageABC[GraphExecutionState]"
graph_library: "ItemStorageABC[LibraryGraph]"
images: "ImageServiceABC"
image_records: "ImageRecordStorageBase"
image_files: "ImageFileStorageBase"
@ -50,6 +51,8 @@ class InvocationServices:
logger: "Logger"
model_manager: "ModelManagerServiceBase"
model_records: "ModelRecordServiceBase"
download_queue: "DownloadQueueServiceBase"
model_install: "ModelInstallServiceBase"
processor: "InvocationProcessorABC"
performance_statistics: "InvocationStatsServiceBase"
queue: "InvocationQueueABC"
@ -69,7 +72,6 @@ class InvocationServices:
configuration: "InvokeAIAppConfig",
events: "EventServiceBase",
graph_execution_manager: "ItemStorageABC[GraphExecutionState]",
graph_library: "ItemStorageABC[LibraryGraph]",
images: "ImageServiceABC",
image_files: "ImageFileStorageBase",
image_records: "ImageRecordStorageBase",
@ -77,6 +79,8 @@ class InvocationServices:
logger: "Logger",
model_manager: "ModelManagerServiceBase",
model_records: "ModelRecordServiceBase",
download_queue: "DownloadQueueServiceBase",
model_install: "ModelInstallServiceBase",
processor: "InvocationProcessorABC",
performance_statistics: "InvocationStatsServiceBase",
queue: "InvocationQueueABC",
@ -94,7 +98,6 @@ class InvocationServices:
self.configuration = configuration
self.events = events
self.graph_execution_manager = graph_execution_manager
self.graph_library = graph_library
self.images = images
self.image_files = image_files
self.image_records = image_records
@ -102,6 +105,8 @@ class InvocationServices:
self.logger = logger
self.model_manager = model_manager
self.model_records = model_records
self.download_queue = download_queue
self.model_install = model_install
self.processor = processor
self.performance_statistics = performance_statistics
self.queue = queue

View File

@ -30,23 +30,13 @@ writes to the system log is stored in InvocationServices.performance_statistics.
from abc import ABC, abstractmethod
from contextlib import AbstractContextManager
from typing import Dict
from invokeai.app.invocations.baseinvocation import BaseInvocation
from invokeai.backend.model_management.model_cache import CacheStats
from .invocation_stats_common import NodeLog
class InvocationStatsServiceBase(ABC):
"Abstract base class for recording node memory/time performance statistics"
# {graph_id => NodeLog}
_stats: Dict[str, NodeLog]
_cache_stats: Dict[str, CacheStats]
ram_used: float
ram_changed: float
@abstractmethod
def __init__(self):
"""
@ -77,45 +67,8 @@ class InvocationStatsServiceBase(ABC):
pass
@abstractmethod
def reset_all_stats(self):
"""Zero all statistics"""
pass
@abstractmethod
def update_invocation_stats(
self,
graph_id: str,
invocation_type: str,
time_used: float,
vram_used: float,
):
"""
Add timing information on execution of a node. Usually
used internally.
:param graph_id: ID of the graph that is currently executing
:param invocation_type: String literal type of the node
:param time_used: Time used by node's execution (sec)
:param vram_used: Maximum VRAM used during execution (GB)
"""
pass
@abstractmethod
def log_stats(self):
def log_stats(self, graph_execution_state_id: str):
"""
Write out the accumulated statistics for the indicated graph execution to the log.
"""
pass
@abstractmethod
def update_mem_stats(
self,
ram_used: float,
ram_changed: float,
):
"""
Update the collector with RAM memory usage info.
:param ram_used: How much RAM is currently in use.
:param ram_changed: How much RAM changed since last generation.
"""
pass

View File

@ -1,25 +1,84 @@
from dataclasses import dataclass, field
from typing import Dict
# size of GIG in bytes
GIG = 1073741824
from collections import defaultdict
from dataclasses import dataclass
@dataclass
class NodeStats:
"""Class for tracking execution stats of an invocation node"""
class NodeExecutionStats:
"""Class for tracking execution stats of an invocation node."""
calls: int = 0
time_used: float = 0.0 # seconds
max_vram: float = 0.0 # GB
cache_hits: int = 0
cache_misses: int = 0
cache_high_watermark: int = 0
invocation_type: str
start_time: float # Seconds since the epoch.
end_time: float # Seconds since the epoch.
start_ram_gb: float # GB
end_ram_gb: float # GB
peak_vram_gb: float # GB
def total_time(self) -> float:
return self.end_time - self.start_time
@dataclass
class NodeLog:
"""Class for tracking node usage"""
class GraphExecutionStats:
"""Class for tracking execution stats of a graph."""
# {node_type => NodeStats}
nodes: Dict[str, NodeStats] = field(default_factory=dict)
def __init__(self):
self._node_stats_list: list[NodeExecutionStats] = []
def add_node_execution_stats(self, node_stats: NodeExecutionStats):
self._node_stats_list.append(node_stats)
def get_total_run_time(self) -> float:
"""Get the total time spent executing nodes in the graph."""
total = 0.0
for node_stats in self._node_stats_list:
total += node_stats.total_time()
return total
def get_first_node_stats(self) -> NodeExecutionStats | None:
"""Get the stats of the first node in the graph (by start_time)."""
first_node = None
for node_stats in self._node_stats_list:
if first_node is None or node_stats.start_time < first_node.start_time:
first_node = node_stats
return first_node
def get_last_node_stats(self) -> NodeExecutionStats | None:
"""Get the stats of the last node in the graph (by end_time)."""
last_node = None
for node_stats in self._node_stats_list:
if last_node is None or node_stats.end_time > last_node.end_time:
last_node = node_stats
return last_node
def get_pretty_log(self, graph_execution_state_id: str) -> str:
log = f"Graph stats: {graph_execution_state_id}\n"
log += f"{'Node':>30} {'Calls':>7}{'Seconds':>9} {'VRAM Used':>10}\n"
# Log stats aggregated by node type.
node_stats_by_type: dict[str, list[NodeExecutionStats]] = defaultdict(list)
for node_stats in self._node_stats_list:
node_stats_by_type[node_stats.invocation_type].append(node_stats)
for node_type, node_type_stats_list in node_stats_by_type.items():
num_calls = len(node_type_stats_list)
time_used = sum([n.total_time() for n in node_type_stats_list])
peak_vram = max([n.peak_vram_gb for n in node_type_stats_list])
log += f"{node_type:>30} {num_calls:>4} {time_used:7.3f}s {peak_vram:4.3f}G\n"
# Log stats for the entire graph.
log += f"TOTAL GRAPH EXECUTION TIME: {self.get_total_run_time():7.3f}s\n"
first_node = self.get_first_node_stats()
last_node = self.get_last_node_stats()
if first_node is not None and last_node is not None:
total_wall_time = last_node.end_time - first_node.start_time
ram_change = last_node.end_ram_gb - first_node.start_ram_gb
log += f"TOTAL GRAPH WALL TIME: {total_wall_time:7.3f}s\n"
log += f"RAM used by InvokeAI process: {last_node.end_ram_gb:4.2f}G ({ram_change:+5.3f}G)\n"
return log
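# A small self-contained sketch (fabricated numbers) showing how per-node
# stats aggregate into the pretty log:
def example_graph_stats() -> str:
    stats = GraphExecutionStats()
    stats.add_node_execution_stats(
        NodeExecutionStats(
            invocation_type="compel",
            start_time=0.0,
            end_time=0.5,
            start_ram_gb=4.0,
            end_ram_gb=4.1,
            peak_vram_gb=1.2,
        )
    )
    return stats.get_pretty_log("example-graph-id")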

View File

@ -1,5 +1,5 @@
import time
from typing import Dict
from contextlib import contextmanager
import psutil
import torch
@ -7,161 +7,119 @@ import torch
import invokeai.backend.util.logging as logger
from invokeai.app.invocations.baseinvocation import BaseInvocation
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.model_manager.model_manager_base import ModelManagerServiceBase
from invokeai.backend.model_management.model_cache import CacheStats
from .invocation_stats_base import InvocationStatsServiceBase
from .invocation_stats_common import GIG, NodeLog, NodeStats
from .invocation_stats_common import GraphExecutionStats, NodeExecutionStats
# Size of 1GB in bytes.
GB = 2**30
class InvocationStatsService(InvocationStatsServiceBase):
"""Accumulate performance information about a running graph. Collects time spent in each node,
as well as the maximum and current VRAM utilisation for CUDA systems"""
_invoker: Invoker
def __init__(self):
# {graph_id => NodeLog}
self._stats: Dict[str, NodeLog] = {}
self._cache_stats: Dict[str, CacheStats] = {}
self.ram_used: float = 0.0
self.ram_changed: float = 0.0
# Maps graph_execution_state_id to GraphExecutionStats.
self._stats: dict[str, GraphExecutionStats] = {}
# Maps graph_execution_state_id to model manager CacheStats.
self._cache_stats: dict[str, CacheStats] = {}
def start(self, invoker: Invoker) -> None:
self._invoker = invoker
class StatsContext:
"""Context manager for collecting statistics."""
invocation: BaseInvocation
collector: "InvocationStatsServiceBase"
graph_id: str
start_time: float
ram_used: int
model_manager: ModelManagerServiceBase
def __init__(
self,
invocation: BaseInvocation,
graph_id: str,
model_manager: ModelManagerServiceBase,
collector: "InvocationStatsServiceBase",
):
"""Initialize statistics for this run."""
self.invocation = invocation
self.collector = collector
self.graph_id = graph_id
self.start_time = 0.0
self.ram_used = 0
self.model_manager = model_manager
def __enter__(self):
self.start_time = time.time()
if torch.cuda.is_available():
torch.cuda.reset_peak_memory_stats()
self.ram_used = psutil.Process().memory_info().rss
if self.model_manager:
self.model_manager.collect_cache_stats(self.collector._cache_stats[self.graph_id])
def __exit__(self, *args):
"""Called on exit from the context."""
ram_used = psutil.Process().memory_info().rss
self.collector.update_mem_stats(
ram_used=ram_used / GIG,
ram_changed=(ram_used - self.ram_used) / GIG,
)
self.collector.update_invocation_stats(
graph_id=self.graph_id,
invocation_type=self.invocation.type, # type: ignore # `type` is not on the `BaseInvocation` model, but *is* on all invocations
time_used=time.time() - self.start_time,
vram_used=torch.cuda.max_memory_allocated() / GIG if torch.cuda.is_available() else 0.0,
)
def collect_stats(
self,
invocation: BaseInvocation,
graph_execution_state_id: str,
) -> StatsContext:
if not self._stats.get(graph_execution_state_id): # first time we're seeing this
self._stats[graph_execution_state_id] = NodeLog()
@contextmanager
def collect_stats(self, invocation: BaseInvocation, graph_execution_state_id: str):
if not self._stats.get(graph_execution_state_id):
# First time we're seeing this graph_execution_state_id.
self._stats[graph_execution_state_id] = GraphExecutionStats()
self._cache_stats[graph_execution_state_id] = CacheStats()
return self.StatsContext(invocation, graph_execution_state_id, self._invoker.services.model_manager, self)
def reset_all_stats(self):
"""Zero all statistics"""
self._stats = {}
# Prune stale stats. There should be none since we're starting a new graph, but just in case.
self._prune_stale_stats()
# Record state before the invocation.
start_time = time.time()
start_ram = psutil.Process().memory_info().rss
if torch.cuda.is_available():
torch.cuda.reset_peak_memory_stats()
if self._invoker.services.model_manager:
self._invoker.services.model_manager.collect_cache_stats(self._cache_stats[graph_execution_state_id])
def reset_stats(self, graph_execution_id: str):
try:
self._stats.pop(graph_execution_id)
except KeyError:
logger.warning(f"Attempted to clear statistics for unknown graph {graph_execution_id}")
# Let the invocation run.
yield None
finally:
# Record state after the invocation.
node_stats = NodeExecutionStats(
invocation_type=invocation.type,
start_time=start_time,
end_time=time.time(),
start_ram_gb=start_ram / GB,
end_ram_gb=psutil.Process().memory_info().rss / GB,
peak_vram_gb=torch.cuda.max_memory_allocated() / GB if torch.cuda.is_available() else 0.0,
)
self._stats[graph_execution_state_id].add_node_execution_stats(node_stats)
def update_mem_stats(
self,
ram_used: float,
ram_changed: float,
):
self.ram_used = ram_used
self.ram_changed = ram_changed
def _prune_stale_stats(self):
"""Check all graphs being tracked and prune any that have completed/errored.
def update_invocation_stats(
self,
graph_id: str,
invocation_type: str,
time_used: float,
vram_used: float,
):
if not self._stats[graph_id].nodes.get(invocation_type):
self._stats[graph_id].nodes[invocation_type] = NodeStats()
stats = self._stats[graph_id].nodes[invocation_type]
stats.calls += 1
stats.time_used += time_used
stats.max_vram = max(stats.max_vram, vram_used)
def log_stats(self):
completed = set()
errored = set()
for graph_id, _node_log in self._stats.items():
This shouldn't be necessary, but we don't have totally robust upstream handling of graph completions/errors, so
for now we call this function periodically to prevent them from accumulating.
"""
to_prune = []
for graph_execution_state_id in self._stats:
try:
current_graph_state = self._invoker.services.graph_execution_manager.get(graph_id)
graph_execution_state = self._invoker.services.graph_execution_manager.get(graph_execution_state_id)
except Exception:
errored.add(graph_id)
# TODO(ryand): What would cause this? Should this exception just be allowed to propagate?
logger.warning(f"Failed to get graph state for {graph_execution_state_id}.")
continue
if not current_graph_state.is_complete():
if not graph_execution_state.is_complete():
# The graph is still running, don't prune it.
continue
total_time = 0
logger.info(f"Graph stats: {graph_id}")
logger.info(f"{'Node':>30} {'Calls':>7}{'Seconds':>9} {'VRAM Used':>10}")
for node_type, stats in self._stats[graph_id].nodes.items():
logger.info(f"{node_type:>30} {stats.calls:>4} {stats.time_used:7.3f}s {stats.max_vram:4.3f}G")
total_time += stats.time_used
to_prune.append(graph_execution_state_id)
cache_stats = self._cache_stats[graph_id]
hwm = cache_stats.high_watermark / GIG
tot = cache_stats.cache_size / GIG
loaded = sum(list(cache_stats.loaded_model_sizes.values())) / GIG
for graph_execution_state_id in to_prune:
del self._stats[graph_execution_state_id]
del self._cache_stats[graph_execution_state_id]
logger.info(f"TOTAL GRAPH EXECUTION TIME: {total_time:7.3f}s")
logger.info("RAM used by InvokeAI process: " + "%4.2fG" % self.ram_used + f" ({self.ram_changed:+5.3f}G)")
logger.info(f"RAM used to load models: {loaded:4.2f}G")
if torch.cuda.is_available():
logger.info("VRAM in use: " + "%4.3fG" % (torch.cuda.memory_allocated() / GIG))
logger.info("RAM cache statistics:")
logger.info(f" Model cache hits: {cache_stats.hits}")
logger.info(f" Model cache misses: {cache_stats.misses}")
logger.info(f" Models cached: {cache_stats.in_cache}")
logger.info(f" Models cleared from cache: {cache_stats.cleared}")
logger.info(f" Cache high water mark: {hwm:4.2f}/{tot:4.2f}G")
if len(to_prune) > 0:
logger.info(f"Pruned stale graph stats for {to_prune}.")
completed.add(graph_id)
def reset_stats(self, graph_execution_state_id: str):
try:
del self._stats[graph_execution_state_id]
del self._cache_stats[graph_execution_state_id]
except KeyError as e:
logger.warning(f"Attempted to clear statistics for unknown graph {graph_execution_state_id}: {e}.")
for graph_id in completed:
del self._stats[graph_id]
del self._cache_stats[graph_id]
def log_stats(self, graph_execution_state_id: str):
try:
graph_stats = self._stats[graph_execution_state_id]
cache_stats = self._cache_stats[graph_execution_state_id]
except KeyError as e:
logger.warning(f"Attempted to log statistics for unknown graph {graph_execution_state_id}: {e}.")
return
for graph_id in errored:
del self._stats[graph_id]
del self._cache_stats[graph_id]
log = graph_stats.get_pretty_log(graph_execution_state_id)
hwm = cache_stats.high_watermark / GB
tot = cache_stats.cache_size / GB
loaded = sum(list(cache_stats.loaded_model_sizes.values())) / GB
log += f"RAM used to load models: {loaded:4.2f}G\n"
if torch.cuda.is_available():
log += f"VRAM in use: {(torch.cuda.memory_allocated() / GB):4.3f}G\n"
log += "RAM cache statistics:\n"
log += f" Model cache hits: {cache_stats.hits}\n"
log += f" Model cache misses: {cache_stats.misses}\n"
log += f" Models cached: {cache_stats.in_cache}\n"
log += f" Models cleared from cache: {cache_stats.cleared}\n"
log += f" Cache high water mark: {hwm:4.2f}/{tot:4.2f}G\n"
logger.info(log)
del self._stats[graph_execution_state_id]
del self._cache_stats[graph_execution_state_id]
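# A minimal sketch of the intended call pattern (hypothetical stats_service,
# invocation, and graph id; mirrors how the invocation processor drives it):
def example_collect_and_log(stats_service, invocation, graph_id: str) -> None:
    with stats_service.collect_stats(invocation, graph_id):
        pass  # the node would execute here
    # After the graph completes, write the report and free the tracked entry.
    stats_service.log_stats(graph_id)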

View File

@ -0,0 +1,27 @@
"""Initialization file for model install service package."""
from .model_install_base import (
CivitaiModelSource,
HFModelSource,
InstallStatus,
LocalModelSource,
ModelInstallJob,
ModelInstallServiceBase,
ModelSource,
UnknownInstallJobException,
URLModelSource,
)
from .model_install_default import ModelInstallService
__all__ = [
"ModelInstallServiceBase",
"ModelInstallService",
"InstallStatus",
"ModelInstallJob",
"UnknownInstallJobException",
"ModelSource",
"LocalModelSource",
"HFModelSource",
"URLModelSource",
"CivitaiModelSource",
]

View File

@ -0,0 +1,412 @@
# Copyright 2023 Lincoln D. Stein and the InvokeAI development team
"""Baseclass definitions for the model installer."""
import re
import traceback
from abc import ABC, abstractmethod
from enum import Enum
from pathlib import Path
from typing import Any, Dict, List, Literal, Optional, Set, Union
from pydantic import BaseModel, Field, PrivateAttr, field_validator
from pydantic.networks import AnyHttpUrl
from typing_extensions import Annotated
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.download import DownloadJob, DownloadQueueServiceBase
from invokeai.app.services.events import EventServiceBase
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.model_records import ModelRecordServiceBase
from invokeai.backend.model_manager import AnyModelConfig, ModelRepoVariant
from invokeai.backend.model_manager.metadata import AnyModelRepoMetadata, ModelMetadataStore
class InstallStatus(str, Enum):
"""State of an install job running in the background."""
WAITING = "waiting" # waiting to be dequeued
DOWNLOADING = "downloading" # downloading of model files in process
RUNNING = "running" # being processed
COMPLETED = "completed" # finished running
ERROR = "error" # terminated with an error message
CANCELLED = "cancelled" # terminated with an error message
class ModelInstallPart(BaseModel):
url: AnyHttpUrl
path: Path
bytes: int = 0
total_bytes: int = 0
class UnknownInstallJobException(Exception):
"""Raised when the status of an unknown job is requested."""
class StringLikeSource(BaseModel):
"""
Base class for model sources; implements functions that let the source be sorted and indexed.
These shenanigans let the following work:
source1 = LocalModelSource(path='C:/users/mort/foo.safetensors')
mydict = {source1: 'model 1'}
assert mydict['C:/users/mort/foo.safetensors'] == 'model 1'
assert mydict[LocalModelSource(path='C:/users/mort/foo.safetensors')] == 'model 1'
source2 = LocalModelSource(path=Path('C:/users/mort/foo.safetensors'))
assert source1 == source2
assert source1 == 'C:/users/mort/foo.safetensors'
"""
def __hash__(self) -> int:
"""Return hash of the path field, for indexing."""
return hash(str(self))
def __lt__(self, other: object) -> bool:
"""Return comparison of the stringified version, for sorting."""
return str(self) < str(other)
def __eq__(self, other: object) -> bool:
"""Return equality on the stringified version."""
if isinstance(other, Path):
return str(self) == other.as_posix()
else:
return str(self) == str(other)
class LocalModelSource(StringLikeSource):
"""A local file or directory path."""
path: str | Path
inplace: Optional[bool] = False
type: Literal["local"] = "local"
# these methods allow the source to be used in a string-like way,
# for example as an index into a dict
def __str__(self) -> str:
"""Return string version of path when string rep needed."""
return Path(self.path).as_posix()
class CivitaiModelSource(StringLikeSource):
"""A Civitai version id, with optional variant and access token."""
version_id: int
variant: Optional[ModelRepoVariant] = None
access_token: Optional[str] = None
type: Literal["civitai"] = "civitai"
def __str__(self) -> str:
"""Return string version of repoid when string rep needed."""
base: str = str(self.version_id)
base += f" ({self.variant})" if self.variant else ""
return base
class HFModelSource(StringLikeSource):
"""
A HuggingFace repo_id with optional variant, sub-folder and access token.
Note that the variant option, if not provided to the constructor, will default to fp16, which is
what people (almost) always want.
"""
repo_id: str
variant: Optional[ModelRepoVariant] = ModelRepoVariant.FP16
subfolder: Optional[Path] = None
access_token: Optional[str] = None
type: Literal["hf"] = "hf"
@field_validator("repo_id")
@classmethod
def proper_repo_id(cls, v: str) -> str: # noqa D102
if not re.match(r"^([.\w-]+/[.\w-]+)$", v):
raise ValueError(f"{v}: invalid repo_id format")
return v
def __str__(self) -> str:
"""Return string version of repoid when string rep needed."""
base: str = self.repo_id
base += f":{self.subfolder}" if self.subfolder else ""
base += f" ({self.variant})" if self.variant else ""
return base
class URLModelSource(StringLikeSource):
"""A generic URL point to a checkpoint file."""
url: AnyHttpUrl
access_token: Optional[str] = None
type: Literal["url"] = "url"
def __str__(self) -> str:
"""Return string version of the url when string rep needed."""
return str(self.url)
ModelSource = Annotated[
Union[LocalModelSource, HFModelSource, CivitaiModelSource, URLModelSource], Field(discriminator="type")
]
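# A minimal sketch of how the discriminated union resolves on the "type"
# field (pydantic v2 TypeAdapter; the dict payload is illustrative):
def example_parse_model_source() -> None:
    from pydantic import TypeAdapter

    adapter = TypeAdapter(ModelSource)
    source = adapter.validate_python({"type": "hf", "repo_id": "stabilityai/sdxl-turbo"})
    assert isinstance(source, HFModelSource)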
class ModelInstallJob(BaseModel):
"""Object that tracks the current status of an install request."""
id: int = Field(description="Unique ID for this job")
status: InstallStatus = Field(default=InstallStatus.WAITING, description="Current status of install process")
config_in: Dict[str, Any] = Field(
default_factory=dict, description="Configuration information (e.g. 'description') to apply to model."
)
config_out: Optional[AnyModelConfig] = Field(
default=None, description="After successful installation, this will hold the configuration object."
)
inplace: bool = Field(
default=False, description="Leave model in its current location; otherwise install under models directory"
)
source: ModelSource = Field(description="Source (URL, repo_id, or local path) of model")
local_path: Path = Field(description="Path to locally-downloaded model; may be the same as the source")
bytes: Optional[int] = Field(
default=None, description="For a remote model, the number of bytes downloaded so far (may not be available)"
)
total_bytes: int = Field(default=0, description="Total size of the model to be installed")
source_metadata: Optional[AnyModelRepoMetadata] = Field(
default=None, description="Metadata provided by the model source"
)
download_parts: Set[DownloadJob] = Field(
default_factory=set, description="Download jobs contributing to this install"
)
# internal flags and transitory settings
_install_tmpdir: Optional[Path] = PrivateAttr(default=None)
_exception: Optional[Exception] = PrivateAttr(default=None)
def set_error(self, e: Exception) -> None:
"""Record the error and traceback from an exception."""
self._exception = e
self.status = InstallStatus.ERROR
def cancel(self) -> None:
"""Call to cancel the job."""
self.status = InstallStatus.CANCELLED
@property
def error_type(self) -> Optional[str]:
"""Class name of the exception that led to status==ERROR."""
return self._exception.__class__.__name__ if self._exception else None
@property
def error(self) -> Optional[str]:
"""Error traceback."""
return "".join(traceback.format_exception(self._exception)) if self._exception else None
@property
def cancelled(self) -> bool:
"""Set status to CANCELLED."""
return self.status == InstallStatus.CANCELLED
@property
def errored(self) -> bool:
"""Return true if job has errored."""
return self.status == InstallStatus.ERROR
@property
def waiting(self) -> bool:
"""Return true if job is waiting to run."""
return self.status == InstallStatus.WAITING
@property
def downloading(self) -> bool:
"""Return true if job is downloading."""
return self.status == InstallStatus.DOWNLOADING
@property
def running(self) -> bool:
"""Return true if job is running."""
return self.status == InstallStatus.RUNNING
@property
def complete(self) -> bool:
"""Return true if job completed without errors."""
return self.status == InstallStatus.COMPLETED
@property
def in_terminal_state(self) -> bool:
"""Return true if job is in a terminal state."""
return self.status in [InstallStatus.COMPLETED, InstallStatus.ERROR, InstallStatus.CANCELLED]
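# An illustrative sketch of the job life cycle flags (the source path is
# fabricated):
def example_job_states() -> None:
    job = ModelInstallJob(
        id=1,
        source=LocalModelSource(path="/tmp/foo.safetensors"),
        local_path=Path("/tmp/foo.safetensors"),
    )
    assert job.waiting and not job.in_terminal_state
    job.cancel()
    assert job.cancelled and job.in_terminal_state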
class ModelInstallServiceBase(ABC):
"""Abstract base class for InvokeAI model installation."""
@abstractmethod
def __init__(
self,
app_config: InvokeAIAppConfig,
record_store: ModelRecordServiceBase,
download_queue: DownloadQueueServiceBase,
metadata_store: ModelMetadataStore,
event_bus: Optional["EventServiceBase"] = None,
):
"""
Create ModelInstallService object.
:param config: Systemwide InvokeAIAppConfig.
:param store: Systemwide ModelConfigStore
:param event_bus: InvokeAI event bus for reporting events to.
"""
# The invoker is optional because it isn't needed here, and requiring it
# would make the installer harder to use outside the web app.
@abstractmethod
def start(self, invoker: Optional[Invoker] = None) -> None:
"""Start the installer service."""
@abstractmethod
def stop(self, invoker: Optional[Invoker] = None) -> None:
"""Stop the model install service. After this the objection can be safely deleted."""
@property
@abstractmethod
def app_config(self) -> InvokeAIAppConfig:
"""Return the appConfig object associated with the installer."""
@property
@abstractmethod
def record_store(self) -> ModelRecordServiceBase:
"""Return the ModelRecoreService object associated with the installer."""
@property
@abstractmethod
def event_bus(self) -> Optional[EventServiceBase]:
"""Return the event service base object associated with the installer."""
@abstractmethod
def register_path(
self,
model_path: Union[Path, str],
config: Optional[Dict[str, Any]] = None,
) -> str:
"""
Probe and register the model at model_path.
This keeps the model in its current location.
:param model_path: Filesystem Path to the model.
:param config: Dict of attributes that will override autoassigned values.
:returns id: The string ID of the registered model.
"""
@abstractmethod
def unregister(self, key: str) -> None:
"""Remove model with indicated key from the database."""
@abstractmethod
def delete(self, key: str) -> None:
"""Remove model with indicated key from the database. Delete its files only if they are within our models directory."""
@abstractmethod
def unconditionally_delete(self, key: str) -> None:
"""Remove model with indicated key from the database and unconditionally delete weight files from disk."""
@abstractmethod
def install_path(
self,
model_path: Union[Path, str],
config: Optional[Dict[str, Any]] = None,
) -> str:
"""
Probe, register and install the model in the models directory.
This moves the model from its current location into
the models directory handled by InvokeAI.
:param model_path: Filesystem Path to the model.
:param config: Dict of attributes that will override autoassigned values.
:returns id: The string ID of the registered model.
"""
@abstractmethod
def import_model(
self,
source: ModelSource,
config: Optional[Dict[str, Any]] = None,
) -> ModelInstallJob:
"""Install the indicated model.
:param source: ModelSource object
:param config: Optional dict. Any fields in this dict
will override corresponding autoassigned probe fields in the
model's config record. Use it to override
`name`, `description`, `base_type`, `model_type`, `format`,
`prediction_type`, `image_size`, and/or `ztsnr_training`.
This will download the model located at `source`,
probe it, and install it into the models directory.
This call is executed asynchronously in a separate
thread and will issue the following events on the event bus:
- model_install_running
- model_install_downloading (remote models only)
- model_install_completed
- model_install_cancelled
- model_install_error
The `inplace` flag does not affect the behavior of downloaded
models, which are always moved into the `models` directory.
The call returns a ModelInstallJob object which can be
polled to learn the current status and/or error message.
Variants recognized by HuggingFace currently are:
1. onnx
2. openvino
3. fp16
4. None (usually returns fp32 model)
"""
@abstractmethod
def get_job_by_source(self, source: ModelSource) -> List[ModelInstallJob]:
"""Return the ModelInstallJob(s) corresponding to the provided source."""
@abstractmethod
def get_job_by_id(self, id: int) -> ModelInstallJob:
"""Return the ModelInstallJob corresponding to the provided id. Raises ValueError if no job has that ID."""
@abstractmethod
def list_jobs(self) -> List[ModelInstallJob]: # noqa D102
"""
List active and complete install jobs.
"""
@abstractmethod
def prune_jobs(self) -> None:
"""Prune all completed and errored jobs."""
@abstractmethod
def cancel_job(self, job: ModelInstallJob) -> None:
"""Cancel the indicated job."""
@abstractmethod
def wait_for_installs(self, timeout: int = 0) -> List[ModelInstallJob]:
"""
Wait for all pending installs to complete.
This will block until all pending installs have
completed, been cancelled, or errored out.
:param timeout: Wait up to the indicated number of seconds. Raises an Exception if
installs do not complete within the indicated time.
"""
@abstractmethod
def scan_directory(self, scan_dir: Path, install: bool = False) -> List[str]:
"""
Recursively scan directory for new models and register or install them.
:param scan_dir: Path to the directory to scan.
:param install: Install if True, otherwise register in place.
:returns list of IDs: Returns list of IDs of models registered/installed
"""
@abstractmethod
def sync_to_config(self) -> None:
"""Synchronize models on disk to those in the model record database."""

View File

@ -0,0 +1,759 @@
"""Model installation class."""
import os
import re
import threading
import time
from hashlib import sha256
from pathlib import Path
from queue import Empty, Queue
from random import randbytes
from shutil import copyfile, copytree, move, rmtree
from tempfile import mkdtemp
from typing import Any, Dict, List, Optional, Set, Union
from huggingface_hub import HfFolder
from pydantic.networks import AnyHttpUrl
from requests import Session
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.download import DownloadJob, DownloadQueueServiceBase
from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.model_records import DuplicateModelException, ModelRecordServiceBase, ModelRecordServiceSQL
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
InvalidModelConfigException,
ModelRepoVariant,
ModelType,
)
from invokeai.backend.model_manager.hash import FastModelHash
from invokeai.backend.model_manager.metadata import (
AnyModelRepoMetadata,
CivitaiMetadataFetch,
HuggingFaceMetadataFetch,
ModelMetadataStore,
ModelMetadataWithFiles,
RemoteModelFile,
)
from invokeai.backend.model_manager.probe import ModelProbe
from invokeai.backend.model_manager.search import ModelSearch
from invokeai.backend.util import Chdir, InvokeAILogger
from invokeai.backend.util.devices import choose_precision, choose_torch_device
from .model_install_base import (
CivitaiModelSource,
HFModelSource,
InstallStatus,
LocalModelSource,
ModelInstallJob,
ModelInstallServiceBase,
ModelSource,
URLModelSource,
)
TMPDIR_PREFIX = "tmpinstall_"
class ModelInstallService(ModelInstallServiceBase):
"""class for InvokeAI model installation."""
def __init__(
self,
app_config: InvokeAIAppConfig,
record_store: ModelRecordServiceBase,
download_queue: DownloadQueueServiceBase,
metadata_store: Optional[ModelMetadataStore] = None,
event_bus: Optional[EventServiceBase] = None,
session: Optional[Session] = None,
):
"""
Initialize the installer object.
:param app_config: InvokeAIAppConfig object
:param record_store: Previously-opened ModelRecordService database
:param event_bus: Optional EventService object
"""
self._app_config = app_config
self._record_store = record_store
self._event_bus = event_bus
self._logger = InvokeAILogger.get_logger(name=self.__class__.__name__)
self._install_jobs: List[ModelInstallJob] = []
self._install_queue: Queue[ModelInstallJob] = Queue()
self._cached_model_paths: Set[Path] = set()
self._models_installed: Set[str] = set()
self._lock = threading.Lock()
self._stop_event = threading.Event()
self._downloads_changed_event = threading.Event()
self._download_queue = download_queue
self._download_cache: Dict[AnyHttpUrl, ModelInstallJob] = {}
self._running = False
self._session = session
self._next_job_id = 0
# There may not necessarily be a metadata store initialized
# so we create one and initialize it with the same sql database
# used by the record store service.
if metadata_store:
self._metadata_store = metadata_store
else:
assert isinstance(record_store, ModelRecordServiceSQL)
self._metadata_store = ModelMetadataStore(record_store.db)
@property
def app_config(self) -> InvokeAIAppConfig: # noqa D102
return self._app_config
@property
def record_store(self) -> ModelRecordServiceBase: # noqa D102
return self._record_store
@property
def event_bus(self) -> Optional[EventServiceBase]: # noqa D102
return self._event_bus
# The invoker is optional because it isn't needed here, and requiring it
# would make the installer harder to use outside the web app.
def start(self, invoker: Optional[Invoker] = None) -> None:
"""Start the installer thread."""
with self._lock:
if self._running:
raise Exception("Attempt to start the installer service twice")
self._start_installer_thread()
self._remove_dangling_install_dirs()
self.sync_to_config()
def stop(self, invoker: Optional[Invoker] = None) -> None:
"""Stop the installer thread; after this the object can be deleted and garbage collected."""
with self._lock:
if not self._running:
raise Exception("Attempt to stop the install service before it was started")
self._stop_event.set()
with self._install_queue.mutex:
self._install_queue.queue.clear() # get rid of pending jobs
active_jobs = [x for x in self.list_jobs() if x.running]
if active_jobs:
self._logger.warning("Waiting for active install job to complete")
self.wait_for_installs()
self._download_cache.clear()
self._running = False
def register_path(
self,
model_path: Union[Path, str],
config: Optional[Dict[str, Any]] = None,
) -> str: # noqa D102
model_path = Path(model_path)
config = config or {}
if config.get("source") is None:
config["source"] = model_path.resolve().as_posix()
return self._register(model_path, config)
def install_path(
self,
model_path: Union[Path, str],
config: Optional[Dict[str, Any]] = None,
) -> str: # noqa D102
model_path = Path(model_path)
config = config or {}
if config.get("source") is None:
config["source"] = model_path.resolve().as_posix()
info: AnyModelConfig = self._probe_model(Path(model_path), config)
old_hash = info.original_hash
dest_path = self.app_config.models_path / info.base.value / info.type.value / model_path.name
try:
new_path = self._copy_model(model_path, dest_path)
except FileExistsError as excp:
raise DuplicateModelException(
f"A model named {model_path.name} is already installed at {dest_path.as_posix()}"
) from excp
new_hash = FastModelHash.hash(new_path)
assert new_hash == old_hash, f"{model_path}: Model hash changed during installation, possibly corrupted."
return self._register(
new_path,
config,
info,
)
def import_model(self, source: ModelSource, config: Optional[Dict[str, Any]] = None) -> ModelInstallJob: # noqa D102
if isinstance(source, LocalModelSource):
install_job = self._import_local_model(source, config)
self._install_queue.put(install_job) # synchronously install
elif isinstance(source, CivitaiModelSource):
install_job = self._import_from_civitai(source, config)
elif isinstance(source, HFModelSource):
install_job = self._import_from_hf(source, config)
elif isinstance(source, URLModelSource):
install_job = self._import_from_url(source, config)
else:
raise ValueError(f"Unsupported model source: '{type(source)}'")
self._install_jobs.append(install_job)
return install_job
def list_jobs(self) -> List[ModelInstallJob]: # noqa D102
return self._install_jobs
def get_job_by_source(self, source: ModelSource) -> List[ModelInstallJob]: # noqa D102
return [x for x in self._install_jobs if x.source == source]
def get_job_by_id(self, id: int) -> ModelInstallJob: # noqa D102
jobs = [x for x in self._install_jobs if x.id == id]
if not jobs:
raise ValueError(f"No job with id {id} known")
assert len(jobs) == 1
assert isinstance(jobs[0], ModelInstallJob)
return jobs[0]
def wait_for_installs(self, timeout: int = 0) -> List[ModelInstallJob]: # noqa D102
"""Block until all installation jobs are done."""
start = time.time()
while len(self._download_cache) > 0:
if self._downloads_changed_event.wait(timeout=5): # in case we miss an event
self._downloads_changed_event.clear()
if timeout > 0 and time.time() - start > timeout:
raise Exception("Timeout exceeded")
self._install_queue.join()
return self._install_jobs
def cancel_job(self, job: ModelInstallJob) -> None:
"""Cancel the indicated job."""
job.cancel()
with self._lock:
self._cancel_download_parts(job)
def prune_jobs(self) -> None:
"""Prune all completed and errored jobs."""
unfinished_jobs = [x for x in self._install_jobs if not x.in_terminal_state]
self._install_jobs = unfinished_jobs
def sync_to_config(self) -> None:
"""Synchronize models on disk to those in the config record store database."""
self._scan_models_directory()
if autoimport := self._app_config.autoimport_dir:
self._logger.info("Scanning autoimport directory for new models")
installed = self.scan_directory(self._app_config.root_path / autoimport)
self._logger.info(f"{len(installed)} new models registered")
self._logger.info("Model installer (re)initialized")
def scan_directory(self, scan_dir: Path, install: bool = False) -> List[str]: # noqa D102
self._cached_model_paths = {Path(x.path) for x in self.record_store.all_models()}
callback = self._scan_install if install else self._scan_register
search = ModelSearch(on_model_found=callback)
self._models_installed.clear()
search.search(scan_dir)
return list(self._models_installed)
def unregister(self, key: str) -> None: # noqa D102
self.record_store.del_model(key)
def delete(self, key: str) -> None: # noqa D102
"""Unregister the model. Delete its files only if they are within our models directory."""
model = self.record_store.get_model(key)
models_dir = self.app_config.models_path
model_path = models_dir / model.path
if model_path.is_relative_to(models_dir):
self.unconditionally_delete(key)
else:
self.unregister(key)
def unconditionally_delete(self, key: str) -> None: # noqa D102
model = self.record_store.get_model(key)
path = self.app_config.models_path / model.path
if path.is_dir():
rmtree(path)
else:
path.unlink()
self.unregister(key)
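# Illustrative contrast (hypothetical key): delete("abc123") removes weight
# files only when the model lives under models_path, while
# unconditionally_delete("abc123") always removes both the files and the record.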
# --------------------------------------------------------------------------------------------
# Internal functions that manage the installer threads
# --------------------------------------------------------------------------------------------
def _start_installer_thread(self) -> None:
threading.Thread(target=self._install_next_item, daemon=True).start()
self._running = True
def _install_next_item(self) -> None:
done = False
while not done:
if self._stop_event.is_set():
done = True
continue
try:
job = self._install_queue.get(timeout=1)
except Empty:
continue
assert job.local_path is not None
try:
if job.cancelled:
self._signal_job_cancelled(job)
elif job.errored:
self._signal_job_errored(job)
elif (
job.waiting or job.downloading
): # local jobs will be in the waiting state; remote jobs will be in the downloading state
job.total_bytes = self._stat_size(job.local_path)
job.bytes = job.total_bytes
self._signal_job_running(job)
if job.inplace:
key = self.register_path(job.local_path, job.config_in)
else:
key = self.install_path(job.local_path, job.config_in)
job.config_out = self.record_store.get_model(key)
# enter the metadata, if there is any
if job.source_metadata:
self._metadata_store.add_metadata(key, job.source_metadata)
self._signal_job_completed(job)
except InvalidModelConfigException as excp:
if any(x.content_type is not None and "text/html" in x.content_type for x in job.download_parts):
job.set_error(
InvalidModelConfigException(
f"At least one file in {job.local_path} is an HTML page, not a model. This can happen when an access token is required to download."
)
)
else:
job.set_error(excp)
self._signal_job_errored(job)
except (OSError, DuplicateModelException) as excp:
job.set_error(excp)
self._signal_job_errored(job)
finally:
# if this is an install of a remote file, then clean up the temporary directory
if job._install_tmpdir is not None:
rmtree(job._install_tmpdir)
self._install_queue.task_done()
self._logger.info("Install thread exiting")
# --------------------------------------------------------------------------------------------
# Internal functions that manage the models directory
# --------------------------------------------------------------------------------------------
def _remove_dangling_install_dirs(self) -> None:
"""Remove leftover tmpdirs from aborted installs."""
path = self._app_config.models_path
for tmpdir in path.glob(f"{TMPDIR_PREFIX}*"):
self._logger.info(f"Removing dangling temporary directory {tmpdir}")
rmtree(tmpdir)
def _scan_models_directory(self) -> None:
"""
Scan the models directory for new and missing models.
New models will be added to the storage backend. Missing models
will be deleted.
"""
defunct_models = set()
installed = set()
with Chdir(self._app_config.models_path):
self._logger.info("Checking for models that have been moved or deleted from disk")
for model_config in self.record_store.all_models():
path = Path(model_config.path)
if not path.exists():
self._logger.info(f"{model_config.name}: path {path.as_posix()} no longer exists. Unregistering")
defunct_models.add(model_config.key)
for key in defunct_models:
self.unregister(key)
self._logger.info(f"Scanning {self._app_config.models_path} for new and orphaned models")
for cur_base_model in BaseModelType:
for cur_model_type in ModelType:
models_dir = Path(cur_base_model.value, cur_model_type.value)
installed.update(self.scan_directory(models_dir))
self._logger.info(f"{len(installed)} new models registered; {len(defunct_models)} unregistered")
def _sync_model_path(self, key: str, ignore_hash_change: bool = False) -> AnyModelConfig:
"""
Move model into the location indicated by its basetype, type and name.
Call this after updating a model's attributes in order to move
the model's path into the location indicated by its basetype, type and
name. Applies only to models whose paths are within the root `models_dir`
directory.
May raise an UnknownModelException.
"""
model = self.record_store.get_model(key)
old_path = Path(model.path)
models_dir = self.app_config.models_path
if not old_path.is_relative_to(models_dir):
return model
new_path = models_dir / model.base.value / model.type.value / model.name
self._logger.info(f"Moving {model.name} to {new_path}.")
new_path = self._move_model(old_path, new_path)
new_hash = FastModelHash.hash(new_path)
model.path = new_path.relative_to(models_dir).as_posix()
if model.current_hash != new_hash:
assert (
ignore_hash_change
), f"{model.name}: Model hash changed during installation, model is possibly corrupted"
model.current_hash = new_hash
self._logger.info(f"Model has new hash {model.current_hash}, but will continue to be identified by {key}")
self.record_store.update_model(key, model)
return model
def _scan_register(self, model: Path) -> bool:
if model in self._cached_model_paths:
return True
try:
id = self.register_path(model)
self._sync_model_path(id) # possibly move it to right place in `models`
self._logger.info(f"Registered {model.name} with id {id}")
self._models_installed.add(id)
except DuplicateModelException:
pass
return True
def _scan_install(self, model: Path) -> bool:
if model in self._cached_model_paths:
return True
try:
id = self.install_path(model)
self._logger.info(f"Installed {model} with id {id}")
self._models_installed.add(id)
except DuplicateModelException:
pass
return True
def _copy_model(self, old_path: Path, new_path: Path) -> Path:
if old_path == new_path:
return old_path
new_path.parent.mkdir(parents=True, exist_ok=True)
if old_path.is_dir():
copytree(old_path, new_path)
else:
copyfile(old_path, new_path)
return new_path
def _move_model(self, old_path: Path, new_path: Path) -> Path:
if old_path == new_path:
return old_path
new_path.parent.mkdir(parents=True, exist_ok=True)
# if the path already exists then we jigger the name to make it unique
counter: int = 1
while new_path.exists():
path = new_path.with_stem(new_path.stem + f"_{counter:02d}")
if not path.exists():
new_path = path
counter += 1
move(old_path, new_path)
return new_path
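# Illustrative renaming: "foo.safetensors" -> "foo_01.safetensors" ->
# "foo_02.safetensors" ... until an unused name is found.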
def _probe_model(self, model_path: Path, config: Optional[Dict[str, Any]] = None) -> AnyModelConfig:
info: AnyModelConfig = ModelProbe.probe(Path(model_path))
if config: # used to override probe fields
for key, value in config.items():
setattr(info, key, value)
return info
def _create_key(self) -> str:
return sha256(randbytes(100)).hexdigest()[0:32]
def _register(
self, model_path: Path, config: Optional[Dict[str, Any]] = None, info: Optional[AnyModelConfig] = None
) -> str:
info = info or ModelProbe.probe(model_path, config)
key = self._create_key()
model_path = model_path.absolute()
if model_path.is_relative_to(self.app_config.models_path):
model_path = model_path.relative_to(self.app_config.models_path)
info.path = model_path.as_posix()
# add 'main' specific fields
if hasattr(info, "config"):
# make config relative to our root
legacy_conf = (self.app_config.root_dir / self.app_config.legacy_conf_dir / info.config).resolve()
info.config = legacy_conf.relative_to(self.app_config.root_dir).as_posix()
self.record_store.add_model(key, info)
return key
def _next_id(self) -> int:
with self._lock:
id = self._next_job_id
self._next_job_id += 1
return id
@staticmethod
def _guess_variant() -> ModelRepoVariant:
"""Guess the best HuggingFace variant type to download."""
precision = choose_precision(choose_torch_device())
return ModelRepoVariant.FP16 if precision == "float16" else ModelRepoVariant.DEFAULT
def _import_local_model(self, source: LocalModelSource, config: Optional[Dict[str, Any]]) -> ModelInstallJob:
return ModelInstallJob(
id=self._next_id(),
source=source,
config_in=config or {},
local_path=Path(source.path),
inplace=source.inplace,
)
def _import_from_civitai(self, source: CivitaiModelSource, config: Optional[Dict[str, Any]]) -> ModelInstallJob:
if not source.access_token:
self._logger.info("No Civitai access token provided; some models may not be downloadable.")
metadata = CivitaiMetadataFetch(self._session).from_id(str(source.version_id))
assert isinstance(metadata, ModelMetadataWithFiles)
remote_files = metadata.download_urls(session=self._session)
return self._import_remote_model(source=source, config=config, metadata=metadata, remote_files=remote_files)
def _import_from_hf(self, source: HFModelSource, config: Optional[Dict[str, Any]]) -> ModelInstallJob:
# Add user's cached access token to HuggingFace requests
source.access_token = source.access_token or HfFolder.get_token()
if not source.access_token:
self._logger.info("No HuggingFace access token present; some models may not be downloadable.")
metadata = HuggingFaceMetadataFetch(self._session).from_id(source.repo_id)
assert isinstance(metadata, ModelMetadataWithFiles)
remote_files = metadata.download_urls(
variant=source.variant or self._guess_variant(),
subfolder=source.subfolder,
session=self._session,
)
return self._import_remote_model(
source=source,
config=config,
remote_files=remote_files,
metadata=metadata,
)
def _import_from_url(self, source: URLModelSource, config: Optional[Dict[str, Any]]) -> ModelInstallJob:
# URLs from Civitai or HuggingFace will be handled specially
url_patterns = {
r"https?://civitai.com/": CivitaiMetadataFetch,
r"https?://huggingface.co/": HuggingFaceMetadataFetch,
}
metadata = None
for pattern, fetcher in url_patterns.items():
if re.match(pattern, str(source.url), re.IGNORECASE):
metadata = fetcher(self._session).from_url(source.url)
break
if metadata and isinstance(metadata, ModelMetadataWithFiles):
remote_files = metadata.download_urls(session=self._session)
else:
remote_files = [RemoteModelFile(url=source.url, path=Path("."), size=0)]
return self._import_remote_model(
source=source,
config=config,
metadata=metadata,
remote_files=remote_files,
)
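A standalone sketch of the prefix-dispatch technique `_import_from_url` uses; the fetcher classes are replaced by plain strings for illustration.
import re
URL_PATTERNS = {
    r"https?://civitai.com/": "civitai",
    r"https?://huggingface.co/": "huggingface",
}
def classify(url: str) -> str | None:
    # First matching prefix wins, as in the loop above.
    for pattern, name in URL_PATTERNS.items():
        if re.match(pattern, url, re.IGNORECASE):
            return name
    return None
print(classify("https://huggingface.co/some/repo"))  # huggingface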
def _import_remote_model(
self,
source: ModelSource,
remote_files: List[RemoteModelFile],
metadata: Optional[AnyModelRepoMetadata],
config: Optional[Dict[str, Any]],
) -> ModelInstallJob:
# TODO: Replace with tempfile.TemporaryDirectory() when multithreading is cleaned up.
# Currently the tmpdir isn't automatically removed at exit because it is
# being held in a daemon thread.
tmpdir = Path(
mkdtemp(
dir=self._app_config.models_path,
prefix=TMPDIR_PREFIX,
)
)
install_job = ModelInstallJob(
id=self._next_id(),
source=source,
config_in=config or {},
source_metadata=metadata,
local_path=tmpdir, # local path may change once the download has started due to content-disposition handling
bytes=0,
total_bytes=0,
)
# we remember the path up to the top of the tmpdir so that it may be
# removed safely at the end of the install process.
install_job._install_tmpdir = tmpdir
assert install_job.total_bytes is not None # to avoid type checking complaints in the loop below
self._logger.info(f"Queuing {source} for downloading")
for model_file in remote_files:
url = model_file.url
path = model_file.path
self._logger.info(f"Downloading {url} => {path}")
install_job.total_bytes += model_file.size
assert hasattr(source, "access_token")
dest = tmpdir / path.parent
dest.mkdir(parents=True, exist_ok=True)
download_job = DownloadJob(
source=url,
dest=dest,
access_token=source.access_token,
)
self._download_cache[download_job.source] = install_job # matches a download job to an install job
install_job.download_parts.add(download_job)
self._download_queue.submit_download_job(
download_job,
on_start=self._download_started_callback,
on_progress=self._download_progress_callback,
on_complete=self._download_complete_callback,
on_error=self._download_error_callback,
on_cancelled=self._download_cancelled_callback,
)
return install_job
def _stat_size(self, path: Path) -> int:
size = 0
if path.is_file():
size = path.stat().st_size
elif path.is_dir():
for root, _, files in os.walk(path):
size += sum(self._stat_size(Path(root, x)) for x in files)
return size
# ------------------------------------------------------------------
# Callbacks are executed by the download queue in a separate thread
# ------------------------------------------------------------------
def _download_started_callback(self, download_job: DownloadJob) -> None:
self._logger.info(f"{download_job.source}: model download started")
with self._lock:
install_job = self._download_cache[download_job.source]
install_job.status = InstallStatus.DOWNLOADING
assert download_job.download_path
if install_job.local_path == install_job._install_tmpdir:
partial_path = download_job.download_path.relative_to(install_job._install_tmpdir)
dest_name = partial_path.parts[0]
install_job.local_path = install_job._install_tmpdir / dest_name
# Update the total bytes count for remote sources.
if not install_job.total_bytes:
install_job.total_bytes = sum(x.total_bytes for x in install_job.download_parts)
def _download_progress_callback(self, download_job: DownloadJob) -> None:
with self._lock:
install_job = self._download_cache[download_job.source]
if install_job.cancelled: # This catches the case in which the caller directly calls job.cancel()
self._cancel_download_parts(install_job)
else:
# update sizes
install_job.bytes = sum(x.bytes for x in install_job.download_parts)
self._signal_job_downloading(install_job)
def _download_complete_callback(self, download_job: DownloadJob) -> None:
with self._lock:
install_job = self._download_cache[download_job.source]
self._download_cache.pop(download_job.source, None)
# are there any more active jobs left in this task?
if all(x.complete for x in install_job.download_parts):
# now enqueue job for actual installation into the models directory
self._install_queue.put(install_job)
# Let other threads know that the number of downloads has changed
self._downloads_changed_event.set()
def _download_error_callback(self, download_job: DownloadJob, excp: Optional[Exception] = None) -> None:
with self._lock:
install_job = self._download_cache.pop(download_job.source, None)
assert install_job is not None
assert excp is not None
install_job.set_error(excp)
self._logger.error(
f"Cancelling {install_job.source} due to an error while downloading {download_job.source}: {str(excp)}"
)
self._cancel_download_parts(install_job)
# Let other threads know that the number of downloads has changed
self._downloads_changed_event.set()
def _download_cancelled_callback(self, download_job: DownloadJob) -> None:
with self._lock:
install_job = self._download_cache.pop(download_job.source, None)
if not install_job:
return
self._downloads_changed_event.set()
self._logger.warning(f"Download {download_job.source} cancelled.")
# if install job has already registered an error, then do not replace its status with cancelled
if not install_job.errored:
install_job.cancel()
self._cancel_download_parts(install_job)
# Let other threads know that the number of downloads has changed
self._downloads_changed_event.set()
def _cancel_download_parts(self, install_job: ModelInstallJob) -> None:
# on multipart downloads, this method will get called repeatedly from the download callbacks
# do not lock here because it gets called within an already-locked context
for s in install_job.download_parts:
self._download_queue.cancel_job(s)
if all(x.in_terminal_state for x in install_job.download_parts):
# When all parts have reached their terminal state, we finalize the job to clean up the temporary directory and other resources
self._install_queue.put(install_job)
# ------------------------------------------------------------------------------------------------
# Internal methods that put events on the event bus
# ------------------------------------------------------------------------------------------------
def _signal_job_running(self, job: ModelInstallJob) -> None:
job.status = InstallStatus.RUNNING
self._logger.info(f"{job.source}: model installation started")
if self._event_bus:
self._event_bus.emit_model_install_running(str(job.source))
def _signal_job_downloading(self, job: ModelInstallJob) -> None:
if self._event_bus:
parts: List[Dict[str, str | int]] = [
{
"url": str(x.source),
"local_path": str(x.download_path),
"bytes": x.bytes,
"total_bytes": x.total_bytes,
}
for x in job.download_parts
]
assert job.bytes is not None
assert job.total_bytes is not None
self._event_bus.emit_model_install_downloading(
str(job.source),
local_path=job.local_path.as_posix(),
parts=parts,
bytes=job.bytes,
total_bytes=job.total_bytes,
)
def _signal_job_completed(self, job: ModelInstallJob) -> None:
job.status = InstallStatus.COMPLETED
assert job.config_out
self._logger.info(
f"{job.source}: model installation completed. {job.local_path} registered key {job.config_out.key}"
)
if self._event_bus:
assert job.local_path is not None
assert job.config_out is not None
key = job.config_out.key
self._event_bus.emit_model_install_completed(str(job.source), key)
def _signal_job_errored(self, job: ModelInstallJob) -> None:
self._logger.info(f"{job.source}: model installation encountered an exception: {job.error_type}\n{job.error}")
if self._event_bus:
error_type = job.error_type
error = job.error
assert error_type is not None
assert error is not None
self._event_bus.emit_model_install_error(str(job.source), error_type, error)
def _signal_job_cancelled(self, job: ModelInstallJob) -> None:
self._logger.info(f"{job.source}: model installation was cancelled")
if self._event_bus:
self._event_bus.emit_model_install_cancelled(str(job.source))

@ -4,5 +4,17 @@ from .model_records_base import ( # noqa F401
InvalidModelException,
ModelRecordServiceBase,
UnknownModelException,
ModelSummary,
ModelRecordOrderBy,
)
from .model_records_sql import ModelRecordServiceSQL # noqa F401
__all__ = [
"ModelRecordServiceBase",
"ModelRecordServiceSQL",
"DuplicateModelException",
"InvalidModelException",
"UnknownModelException",
"ModelSummary",
"ModelRecordOrderBy",
]

@ -4,13 +4,15 @@ Abstract base class for storing and retrieving model configuration records.
"""
from abc import ABC, abstractmethod
from enum import Enum
from pathlib import Path
from typing import List, Optional, Union
from typing import Any, Dict, List, Optional, Set, Tuple, Union
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelType
from pydantic import BaseModel, Field
# should match the InvokeAI version when this is first released.
CONFIG_FILE_VERSION = "3.2.0"
from invokeai.app.services.shared.pagination import PaginatedResults
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelFormat, ModelType
from invokeai.backend.model_manager.metadata import AnyModelRepoMetadata, ModelMetadataStore
class DuplicateModelException(Exception):
@ -29,17 +31,33 @@ class ConfigFileVersionMismatchException(Exception):
"""Raised on an attempt to open a config with an incompatible version."""
class ModelRecordOrderBy(str, Enum):
"""The order in which to return model summaries."""
Default = "default" # order by type, base, format and name
Type = "type"
Base = "base"
Name = "name"
Format = "format"
class ModelSummary(BaseModel):
"""A short summary of models for UI listing purposes."""
key: str = Field(description="model key")
type: ModelType = Field(description="model type")
base: BaseModelType = Field(description="base model")
format: ModelFormat = Field(description="model format")
name: str = Field(description="model name")
description: str = Field(description="short description of model")
tags: Set[str] = Field(description="tags associated with model")
class ModelRecordServiceBase(ABC):
"""Abstract base class for storage and retrieval of model configs."""
@property
@abstractmethod
def version(self) -> str:
"""Return the config file/database schema version."""
pass
@abstractmethod
def add_model(self, key: str, config: Union[dict, AnyModelConfig]) -> AnyModelConfig:
def add_model(self, key: str, config: Union[Dict[str, Any], AnyModelConfig]) -> AnyModelConfig:
"""
Add a model to the database.
@ -63,7 +81,7 @@ class ModelRecordServiceBase(ABC):
pass
@abstractmethod
def update_model(self, key: str, config: Union[dict, AnyModelConfig]) -> AnyModelConfig:
def update_model(self, key: str, config: Union[Dict[str, Any], AnyModelConfig]) -> AnyModelConfig:
"""
Update the model, returning the updated version.
@ -84,6 +102,47 @@ class ModelRecordServiceBase(ABC):
"""
pass
@property
@abstractmethod
def metadata_store(self) -> ModelMetadataStore:
"""Return a ModelMetadataStore initialized on the same database."""
pass
@abstractmethod
def get_metadata(self, key: str) -> Optional[AnyModelRepoMetadata]:
"""
Retrieve metadata (if any) from when model was downloaded from a repo.
:param key: Model key
"""
pass
@abstractmethod
def list_all_metadata(self) -> List[Tuple[str, AnyModelRepoMetadata]]:
"""List metadata for all models that have it."""
pass
@abstractmethod
def search_by_metadata_tag(self, tags: Set[str]) -> List[AnyModelConfig]:
"""
Search model metadata for ones with all listed tags and return their corresponding configs.
:param tags: Set of tags to search for. All tags must be present.
"""
pass
@abstractmethod
def list_tags(self) -> Set[str]:
"""Return a unique set of all the model tags in the metadata database."""
pass
@abstractmethod
def list_models(
self, page: int = 0, per_page: int = 10, order_by: ModelRecordOrderBy = ModelRecordOrderBy.Default
) -> PaginatedResults[ModelSummary]:
"""Return a paginated summary listing of each model in the database."""
pass
@abstractmethod
def exists(self, key: str) -> bool:
"""
@ -115,6 +174,7 @@ class ModelRecordServiceBase(ABC):
model_name: Optional[str] = None,
base_model: Optional[BaseModelType] = None,
model_type: Optional[ModelType] = None,
model_format: Optional[ModelFormat] = None,
) -> List[AnyModelConfig]:
"""
Return models matching name, base and/or type.
@ -122,6 +182,7 @@ class ModelRecordServiceBase(ABC):
:param model_name: Filter by name of model (optional)
:param base_model: Filter by base model (optional)
:param model_type: Filter by type of model (optional)
:param model_format: Filter by model format (e.g. "diffusers") (optional)
If none of the optional filters are passed, will return all
models in the database.

@ -42,21 +42,26 @@ Typical usage:
import json
import sqlite3
from math import ceil
from pathlib import Path
from typing import List, Optional, Union
from typing import Any, Dict, List, Optional, Set, Tuple, Union
from invokeai.app.services.shared.pagination import PaginatedResults
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
ModelConfigFactory,
ModelFormat,
ModelType,
)
from invokeai.backend.model_manager.metadata import AnyModelRepoMetadata, ModelMetadataStore, UnknownMetadataException
from ..shared.sqlite.sqlite_database import SqliteDatabase
from .model_records_base import (
CONFIG_FILE_VERSION,
DuplicateModelException,
ModelRecordOrderBy,
ModelRecordServiceBase,
ModelSummary,
UnknownModelException,
)
@ -64,9 +69,6 @@ from .model_records_base import (
class ModelRecordServiceSQL(ModelRecordServiceBase):
"""Implementation of the ModelConfigStore ABC using a SQL database."""
_db: SqliteDatabase
_cursor: sqlite3.Cursor
def __init__(self, db: SqliteDatabase):
"""
Initialize a new object from preexisting sqlite3 connection and threading lock objects.
@ -78,86 +80,12 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
self._db = db
self._cursor = self._db.conn.cursor()
with self._db.lock:
# Enable foreign keys
self._db.conn.execute("PRAGMA foreign_keys = ON;")
self._create_tables()
self._db.conn.commit()
assert (
str(self.version) == CONFIG_FILE_VERSION
), f"Model config version {self.version} does not match expected version {CONFIG_FILE_VERSION}"
@property
def db(self) -> SqliteDatabase:
"""Return the underlying database."""
return self._db
def _create_tables(self) -> None:
"""Create sqlite3 tables."""
# model_config table breaks out the fields that are common to all config objects
# and puts class-specific ones in a serialized json object
self._cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS model_config (
id TEXT NOT NULL PRIMARY KEY,
-- The next 3 fields are enums in python, unrestricted string here
base TEXT NOT NULL,
type TEXT NOT NULL,
name TEXT NOT NULL,
path TEXT NOT NULL,
original_hash TEXT, -- could be null
-- Serialized JSON representation of the whole config object,
-- which will contain additional fields from subclasses
config TEXT NOT NULL,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- unique constraint on combo of name, base and type
UNIQUE(name, base, type)
);
"""
)
# metadata table
self._cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS model_manager_metadata (
metadata_key TEXT NOT NULL PRIMARY KEY,
metadata_value TEXT NOT NULL
);
"""
)
# Add trigger for `updated_at`.
self._cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS model_config_updated_at
AFTER UPDATE
ON model_config FOR EACH ROW
BEGIN
UPDATE model_config SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE id = old.id;
END;
"""
)
# Add indexes for searchable fields
for stmt in [
"CREATE INDEX IF NOT EXISTS base_index ON model_config(base);",
"CREATE INDEX IF NOT EXISTS type_index ON model_config(type);",
"CREATE INDEX IF NOT EXISTS name_index ON model_config(name);",
"CREATE UNIQUE INDEX IF NOT EXISTS path_index ON model_config(path);",
]:
self._cursor.execute(stmt)
# Add our version to the metadata table
self._cursor.execute(
"""--sql
INSERT OR IGNORE into model_manager_metadata (
metadata_key,
metadata_value
)
VALUES (?,?);
""",
("version", CONFIG_FILE_VERSION),
)
def add_model(self, key: str, config: Union[dict, AnyModelConfig]) -> AnyModelConfig:
def add_model(self, key: str, config: Union[Dict[str, Any], AnyModelConfig]) -> AnyModelConfig:
"""
Add a model to the database.
@ -175,21 +103,13 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
"""--sql
INSERT INTO model_config (
id,
base,
type,
name,
path,
original_hash,
config
)
VALUES (?,?,?,?,?,?,?);
VALUES (?,?,?);
""",
(
key,
record.base,
record.type,
record.name,
record.path,
record.original_hash,
json_serialized,
),
@ -214,22 +134,6 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
return self.get_model(key)
@property
def version(self) -> str:
"""Return the version of the database schema."""
with self._db.lock:
self._cursor.execute(
"""--sql
SELECT metadata_value FROM model_manager_metadata
WHERE metadata_key=?;
""",
("version",),
)
rows = self._cursor.fetchone()
if not rows:
raise KeyError("Models database does not have metadata key 'version'")
return rows[0]
def del_model(self, key: str) -> None:
"""
Delete a model.
@ -269,14 +173,11 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
self._cursor.execute(
"""--sql
UPDATE model_config
SET base=?,
type=?,
name=?,
path=?,
SET
config=?
WHERE id=?;
""",
(record.base, record.type, record.name, record.path, json_serialized, key),
(json_serialized, key),
)
if self._cursor.rowcount == 0:
raise UnknownModelException("model not found")
@ -332,6 +233,7 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
model_name: Optional[str] = None,
base_model: Optional[BaseModelType] = None,
model_type: Optional[ModelType] = None,
model_format: Optional[ModelFormat] = None,
) -> List[AnyModelConfig]:
"""
Return models matching name, base and/or type.
@ -339,6 +241,7 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
:param model_name: Filter by name of model (optional)
:param base_model: Filter by base model (optional)
:param model_type: Filter by type of model (optional)
:param model_format: Filter by model format (e.g. "diffusers") (optional)
If none of the optional filters are passed, will return all
models in the database.
@ -355,6 +258,9 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
if model_type:
where_clause.append("type=?")
bindings.append(model_type)
if model_format:
where_clause.append("format=?")
bindings.append(model_format)
where = f"WHERE {' AND '.join(where_clause)}" if where_clause else ""
with self._db.lock:
self._cursor.execute(
@ -374,7 +280,7 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
self._cursor.execute(
"""--sql
SELECT config FROM model_config
WHERE model_path=?;
WHERE path=?;
""",
(str(path),),
)
@ -394,3 +300,95 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
)
results = [ModelConfigFactory.make_config(json.loads(x[0])) for x in self._cursor.fetchall()]
return results
@property
def metadata_store(self) -> ModelMetadataStore:
"""Return a ModelMetadataStore initialized on the same database."""
return ModelMetadataStore(self._db)
def get_metadata(self, key: str) -> Optional[AnyModelRepoMetadata]:
"""
Retrieve metadata (if any) from when model was downloaded from a repo.
:param key: Model key
"""
store = self.metadata_store
try:
metadata = store.get_metadata(key)
return metadata
except UnknownMetadataException:
return None
def search_by_metadata_tag(self, tags: Set[str]) -> List[AnyModelConfig]:
"""
Search model metadata for ones with all listed tags and return their corresponding configs.
:param tags: Set of tags to search for. All tags must be present.
"""
store = ModelMetadataStore(self._db)
keys = store.search_by_tag(tags)
return [self.get_model(x) for x in keys]
def list_tags(self) -> Set[str]:
"""Return a unique set of all the model tags in the metadata database."""
store = ModelMetadataStore(self._db)
return store.list_tags()
def list_all_metadata(self) -> List[Tuple[str, AnyModelRepoMetadata]]:
"""List metadata for all models that have it."""
store = ModelMetadataStore(self._db)
return store.list_all_metadata()
def list_models(
self, page: int = 0, per_page: int = 10, order_by: ModelRecordOrderBy = ModelRecordOrderBy.Default
) -> PaginatedResults[ModelSummary]:
"""Return a paginated summary listing of each model in the database."""
ordering = {
ModelRecordOrderBy.Default: "a.type, a.base, a.format, a.name",
ModelRecordOrderBy.Type: "a.type",
ModelRecordOrderBy.Base: "a.base",
ModelRecordOrderBy.Name: "a.name",
ModelRecordOrderBy.Format: "a.format",
}
def _fixup(summary: Dict[str, str]) -> Dict[str, Union[str, int, Set[str]]]:
"""Fix up results so that there are no null values."""
result: Dict[str, Union[str, int, Set[str]]] = {}
for key, item in summary.items():
result[key] = item or ""
result["tags"] = set(json.loads(summary["tags"] or "[]"))
return result
# Lock so that the database isn't updated while we're doing the two queries.
with self._db.lock:
# query1: get the total number of model configs
self._cursor.execute(
"""--sql
select count(*) from model_config;
""",
(),
)
total = int(self._cursor.fetchone()[0])
# query2: fetch key fields from the join of model_config and model_metadata
self._cursor.execute(
f"""--sql
SELECT a.id as key, a.type, a.base, a.format, a.name,
json_extract(a.config, '$.description') as description,
json_extract(b.metadata, '$.tags') as tags
FROM model_config AS a
LEFT JOIN model_metadata AS b on a.id=b.id
ORDER BY {ordering[order_by]} -- bound "?" parameters are treated as values, not identifiers, so the column list is interpolated
LIMIT ?
OFFSET ?;
""",
(
per_page,
page * per_page,
),
)
rows = self._cursor.fetchall()
items = [ModelSummary.model_validate(_fixup(dict(x))) for x in rows]
return PaginatedResults(
page=page, pages=ceil(total / per_page), per_page=per_page, total=total, items=items
)
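For context, a hedged usage sketch of the pagination API implemented above; `db` is assumed to be an initialized SqliteDatabase.
store = ModelRecordServiceSQL(db)
page = store.list_models(page=0, per_page=10, order_by=ModelRecordOrderBy.Name)
for summary in page.items:
    print(summary.key, summary.name, sorted(summary.tags))
print(f"page {page.page + 1} of {page.pages} ({page.total} models total)")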

@ -50,7 +50,6 @@ class SqliteSessionQueue(SessionQueueBase):
self.__lock = db.lock
self.__conn = db.conn
self.__cursor = self.__conn.cursor()
self._create_tables()
def _match_event_name(self, event: FastAPIEvent, match_in: list[str]) -> bool:
return event[1]["event"] in match_in
@ -98,123 +97,6 @@ class SqliteSessionQueue(SessionQueueBase):
except SessionQueueItemNotFoundError:
return
def _create_tables(self) -> None:
"""Creates the session queue tables, indicies, and triggers"""
try:
self.__lock.acquire()
self.__cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS session_queue (
item_id INTEGER PRIMARY KEY AUTOINCREMENT, -- used for ordering, cursor pagination
batch_id TEXT NOT NULL, -- identifier of the batch this queue item belongs to
queue_id TEXT NOT NULL, -- identifier of the queue this queue item belongs to
session_id TEXT NOT NULL UNIQUE, -- duplicated data from the session column, for ease of access
field_values TEXT, -- NULL if no values are associated with this queue item
session TEXT NOT NULL, -- the session to be executed
status TEXT NOT NULL DEFAULT 'pending', -- the status of the queue item, one of 'pending', 'in_progress', 'completed', 'failed', 'canceled'
priority INTEGER NOT NULL DEFAULT 0, -- the priority, higher is more important
error TEXT, -- any errors associated with this queue item
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')), -- updated via trigger
started_at DATETIME, -- updated via trigger
completed_at DATETIME -- updated via trigger, completed items are cleaned up on application startup
-- Ideally this is a FK, but graph_executions uses INSERT OR REPLACE, and REPLACE triggers the ON DELETE CASCADE...
-- FOREIGN KEY (session_id) REFERENCES graph_executions (id) ON DELETE CASCADE
);
"""
)
self.__cursor.execute(
"""--sql
CREATE UNIQUE INDEX IF NOT EXISTS idx_session_queue_item_id ON session_queue(item_id);
"""
)
self.__cursor.execute(
"""--sql
CREATE UNIQUE INDEX IF NOT EXISTS idx_session_queue_session_id ON session_queue(session_id);
"""
)
self.__cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_session_queue_batch_id ON session_queue(batch_id);
"""
)
self.__cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_session_queue_created_priority ON session_queue(priority);
"""
)
self.__cursor.execute(
"""--sql
CREATE INDEX IF NOT EXISTS idx_session_queue_created_status ON session_queue(status);
"""
)
self.__cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_session_queue_completed_at
AFTER UPDATE OF status ON session_queue
FOR EACH ROW
WHEN
NEW.status = 'completed'
OR NEW.status = 'failed'
OR NEW.status = 'canceled'
BEGIN
UPDATE session_queue
SET completed_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE item_id = NEW.item_id;
END;
"""
)
self.__cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_session_queue_started_at
AFTER UPDATE OF status ON session_queue
FOR EACH ROW
WHEN
NEW.status = 'in_progress'
BEGIN
UPDATE session_queue
SET started_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE item_id = NEW.item_id;
END;
"""
)
self.__cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_session_queue_updated_at
AFTER UPDATE
ON session_queue FOR EACH ROW
BEGIN
UPDATE session_queue
SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE item_id = old.item_id;
END;
"""
)
self.__cursor.execute("PRAGMA table_info(session_queue)")
columns = [column[1] for column in self.__cursor.fetchall()]
if "workflow" not in columns:
self.__cursor.execute(
"""--sql
ALTER TABLE session_queue ADD COLUMN workflow TEXT;
"""
)
self.__conn.commit()
except Exception:
self.__conn.rollback()
raise
finally:
self.__lock.release()
def _set_in_progress_to_canceled(self) -> None:
"""
Sets all in_progress queue items to canceled. Run on app startup, not associated with any queue.

@ -3,45 +3,65 @@ import threading
from logging import Logger
from pathlib import Path
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.shared.sqlite.sqlite_common import sqlite_memory
class SqliteDatabase:
def __init__(self, config: InvokeAIAppConfig, logger: Logger):
self._logger = logger
self._config = config
"""
Manages a connection to an SQLite database.
if self._config.use_memory_db:
self.db_path = sqlite_memory
logger.info("Using in-memory database")
:param db_path: Path to the database file. If None, an in-memory database is used.
:param logger: Logger to use for logging.
:param verbose: Whether to log SQL statements. Provides `logger.debug` as the SQLite trace callback.
This is a light wrapper around the `sqlite3` module, providing a few conveniences:
- The database file is written to disk if it does not exist.
- Foreign key constraints are enabled by default.
- The connection is configured to use the `sqlite3.Row` row factory.
In addition to the constructor args, the instance provides the following attributes and methods:
- `conn`: A `sqlite3.Connection` object. Note that the connection must never be closed if the database is in-memory.
- `lock`: A shared re-entrant lock, used to approximate thread safety.
- `clean()`: Runs the SQL `VACUUM;` command and reports on the freed space.
"""
def __init__(self, db_path: Path | None, logger: Logger, verbose: bool = False) -> None:
"""Initializes the database. This is used internally by the class constructor."""
self.logger = logger
self.db_path = db_path
self.verbose = verbose
if not self.db_path:
logger.info("Initializing in-memory database")
else:
db_path = self._config.db_path
db_path.parent.mkdir(parents=True, exist_ok=True)
self.db_path = str(db_path)
self._logger.info(f"Using database at {self.db_path}")
self.db_path.parent.mkdir(parents=True, exist_ok=True)
self.logger.info(f"Initializing database at {self.db_path}")
self.conn = sqlite3.connect(self.db_path, check_same_thread=False)
self.conn = sqlite3.connect(database=self.db_path or sqlite_memory, check_same_thread=False)
self.lock = threading.RLock()
self.conn.row_factory = sqlite3.Row
if self._config.log_sql:
self.conn.set_trace_callback(self._logger.debug)
if self.verbose:
self.conn.set_trace_callback(self.logger.debug)
self.conn.execute("PRAGMA foreign_keys = ON;")
def clean(self) -> None:
"""
Cleans the database by running the VACUUM command, reporting on the freed space.
"""
# No need to clean in-memory database
if not self.db_path:
return
with self.lock:
try:
if self.db_path == sqlite_memory:
return
initial_db_size = Path(self.db_path).stat().st_size
self.conn.execute("VACUUM;")
self.conn.commit()
final_db_size = Path(self.db_path).stat().st_size
freed_space_in_mb = round((initial_db_size - final_db_size) / 1024 / 1024, 2)
if freed_space_in_mb > 0:
self._logger.info(f"Cleaned database (freed {freed_space_in_mb}MB)")
self.logger.info(f"Cleaned database (freed {freed_space_in_mb}MB)")
except Exception as e:
self._logger.error(f"Error cleaning database: {e}")
self.logger.error(f"Error cleaning database: {e}")
raise
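A minimal usage sketch of the wrapper above, assuming the constructor signature shown: passing db_path=None selects the in-memory database.
import logging
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
logger = logging.getLogger("sqlite-demo")
db = SqliteDatabase(db_path=None, logger=logger, verbose=False)
with db.lock:
    db.conn.execute("CREATE TABLE IF NOT EXISTS demo (id INTEGER PRIMARY KEY);")
    db.conn.commit()
db.clean()  # returns immediately for in-memory databases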

@ -0,0 +1,36 @@
from logging import Logger
from invokeai.app.services.config.config_default import InvokeAIAppConfig
from invokeai.app.services.image_files.image_files_base import ImageFileStorageBase
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_1 import build_migration_1
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_2 import build_migration_2
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_3 import build_migration_3
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_4 import build_migration_4
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_impl import SqliteMigrator
def init_db(config: InvokeAIAppConfig, logger: Logger, image_files: ImageFileStorageBase) -> SqliteDatabase:
"""
Initializes the SQLite database.
:param config: The app config
:param logger: The logger
:param image_files: The image files service (used by migration 2)
This function:
- Instantiates a :class:`SqliteDatabase`
- Instantiates a :class:`SqliteMigrator` and registers all migrations
- Runs all migrations
"""
db_path = None if config.use_memory_db else config.db_path
db = SqliteDatabase(db_path=db_path, logger=logger, verbose=config.log_sql)
migrator = SqliteMigrator(db=db)
migrator.register_migration(build_migration_1())
migrator.register_migration(build_migration_2(image_files=image_files, logger=logger))
migrator.register_migration(build_migration_3(app_config=config, logger=logger))
migrator.register_migration(build_migration_4())
migrator.run_migrations()
return db
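The registration pattern above can be exercised with a trivial follow-on migration. This is a hedged sketch: the version numbers are hypothetical and it assumes Migration accepts any callable as its callback, as the class-based callbacks elsewhere in this diff suggest.
import sqlite3
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_impl import SqliteMigrator
def _noop(cursor: sqlite3.Cursor) -> None:
    pass  # a real callback would issue CREATE/ALTER statements here
migrator = SqliteMigrator(db=db)  # `db` assumed: an initialized SqliteDatabase
migrator.register_migration(Migration(from_version=4, to_version=5, callback=_noop))
migrator.run_migrations()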

@ -0,0 +1,372 @@
import sqlite3
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration
class Migration1Callback:
def __call__(self, cursor: sqlite3.Cursor) -> None:
"""Migration callback for database version 1."""
self._create_board_images(cursor)
self._create_boards(cursor)
self._create_images(cursor)
self._create_model_config(cursor)
self._create_session_queue(cursor)
self._create_workflow_images(cursor)
self._create_workflows(cursor)
def _create_board_images(self, cursor: sqlite3.Cursor) -> None:
"""Creates the `board_images` table, indices and triggers."""
tables = [
"""--sql
CREATE TABLE IF NOT EXISTS board_images (
board_id TEXT NOT NULL,
image_name TEXT NOT NULL,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Soft delete, currently unused
deleted_at DATETIME,
-- enforce one-to-many relationship between boards and images using PK
-- (we can extend this to many-to-many later)
PRIMARY KEY (image_name),
FOREIGN KEY (board_id) REFERENCES boards (board_id) ON DELETE CASCADE,
FOREIGN KEY (image_name) REFERENCES images (image_name) ON DELETE CASCADE
);
"""
]
indices = [
"CREATE INDEX IF NOT EXISTS idx_board_images_board_id ON board_images (board_id);",
"CREATE INDEX IF NOT EXISTS idx_board_images_board_id_created_at ON board_images (board_id, created_at);",
]
triggers = [
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_board_images_updated_at
AFTER UPDATE
ON board_images FOR EACH ROW
BEGIN
UPDATE board_images SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE board_id = old.board_id AND image_name = old.image_name;
END;
"""
]
for stmt in tables + indices + triggers:
cursor.execute(stmt)
def _create_boards(self, cursor: sqlite3.Cursor) -> None:
"""Creates the `boards` table, indices and triggers."""
tables = [
"""--sql
CREATE TABLE IF NOT EXISTS boards (
board_id TEXT NOT NULL PRIMARY KEY,
board_name TEXT NOT NULL,
cover_image_name TEXT,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Soft delete, currently unused
deleted_at DATETIME,
FOREIGN KEY (cover_image_name) REFERENCES images (image_name) ON DELETE SET NULL
);
"""
]
indices = ["CREATE INDEX IF NOT EXISTS idx_boards_created_at ON boards (created_at);"]
triggers = [
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_boards_updated_at
AFTER UPDATE
ON boards FOR EACH ROW
BEGIN
UPDATE boards SET updated_at = current_timestamp
WHERE board_id = old.board_id;
END;
"""
]
for stmt in tables + indices + triggers:
cursor.execute(stmt)
def _create_images(self, cursor: sqlite3.Cursor) -> None:
"""Creates the `images` table, indices and triggers. Adds the `starred` column."""
tables = [
"""--sql
CREATE TABLE IF NOT EXISTS images (
image_name TEXT NOT NULL PRIMARY KEY,
-- This is an enum in python, unrestricted string here for flexibility
image_origin TEXT NOT NULL,
-- This is an enum in python, unrestricted string here for flexibility
image_category TEXT NOT NULL,
width INTEGER NOT NULL,
height INTEGER NOT NULL,
session_id TEXT,
node_id TEXT,
metadata TEXT,
is_intermediate BOOLEAN DEFAULT FALSE,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Soft delete, currently unused
deleted_at DATETIME
);
"""
]
indices = [
"CREATE UNIQUE INDEX IF NOT EXISTS idx_images_image_name ON images(image_name);",
"CREATE INDEX IF NOT EXISTS idx_images_image_origin ON images(image_origin);",
"CREATE INDEX IF NOT EXISTS idx_images_image_category ON images(image_category);",
"CREATE INDEX IF NOT EXISTS idx_images_created_at ON images(created_at);",
]
triggers = [
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_images_updated_at
AFTER UPDATE
ON images FOR EACH ROW
BEGIN
UPDATE images SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE image_name = old.image_name;
END;
"""
]
# Add the 'starred' column to `images` if it doesn't exist
cursor.execute("PRAGMA table_info(images)")
columns = [column[1] for column in cursor.fetchall()]
if "starred" not in columns:
tables.append("ALTER TABLE images ADD COLUMN starred BOOLEAN DEFAULT FALSE;")
indices.append("CREATE INDEX IF NOT EXISTS idx_images_starred ON images(starred);")
for stmt in tables + indices + triggers:
cursor.execute(stmt)
def _create_model_config(self, cursor: sqlite3.Cursor) -> None:
"""Creates the `model_config` table, `model_manager_metadata` table, indices and triggers."""
tables = [
"""--sql
CREATE TABLE IF NOT EXISTS model_config (
id TEXT NOT NULL PRIMARY KEY,
-- The next 3 fields are enums in python, unrestricted string here
base TEXT NOT NULL,
type TEXT NOT NULL,
name TEXT NOT NULL,
path TEXT NOT NULL,
original_hash TEXT, -- could be null
-- Serialized JSON representation of the whole config object,
-- which will contain additional fields from subclasses
config TEXT NOT NULL,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- unique constraint on combo of name, base and type
UNIQUE(name, base, type)
);
""",
"""--sql
CREATE TABLE IF NOT EXISTS model_manager_metadata (
metadata_key TEXT NOT NULL PRIMARY KEY,
metadata_value TEXT NOT NULL
);
""",
]
# Add trigger for `updated_at`.
triggers = [
"""--sql
CREATE TRIGGER IF NOT EXISTS model_config_updated_at
AFTER UPDATE
ON model_config FOR EACH ROW
BEGIN
UPDATE model_config SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE id = old.id;
END;
"""
]
# Add indexes for searchable fields
indices = [
"CREATE INDEX IF NOT EXISTS base_index ON model_config(base);",
"CREATE INDEX IF NOT EXISTS type_index ON model_config(type);",
"CREATE INDEX IF NOT EXISTS name_index ON model_config(name);",
"CREATE UNIQUE INDEX IF NOT EXISTS path_index ON model_config(path);",
]
for stmt in tables + indices + triggers:
cursor.execute(stmt)
def _create_session_queue(self, cursor: sqlite3.Cursor) -> None:
tables = [
"""--sql
CREATE TABLE IF NOT EXISTS session_queue (
item_id INTEGER PRIMARY KEY AUTOINCREMENT, -- used for ordering, cursor pagination
batch_id TEXT NOT NULL, -- identifier of the batch this queue item belongs to
queue_id TEXT NOT NULL, -- identifier of the queue this queue item belongs to
session_id TEXT NOT NULL UNIQUE, -- duplicated data from the session column, for ease of access
field_values TEXT, -- NULL if no values are associated with this queue item
session TEXT NOT NULL, -- the session to be executed
status TEXT NOT NULL DEFAULT 'pending', -- the status of the queue item, one of 'pending', 'in_progress', 'completed', 'failed', 'canceled'
priority INTEGER NOT NULL DEFAULT 0, -- the priority, higher is more important
error TEXT, -- any errors associated with this queue item
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')), -- updated via trigger
started_at DATETIME, -- updated via trigger
completed_at DATETIME -- updated via trigger, completed items are cleaned up on application startup
-- Ideally this is a FK, but graph_executions uses INSERT OR REPLACE, and REPLACE triggers the ON DELETE CASCADE...
-- FOREIGN KEY (session_id) REFERENCES graph_executions (id) ON DELETE CASCADE
);
"""
]
indices = [
"CREATE UNIQUE INDEX IF NOT EXISTS idx_session_queue_item_id ON session_queue(item_id);",
"CREATE UNIQUE INDEX IF NOT EXISTS idx_session_queue_session_id ON session_queue(session_id);",
"CREATE INDEX IF NOT EXISTS idx_session_queue_batch_id ON session_queue(batch_id);",
"CREATE INDEX IF NOT EXISTS idx_session_queue_created_priority ON session_queue(priority);",
"CREATE INDEX IF NOT EXISTS idx_session_queue_created_status ON session_queue(status);",
]
triggers = [
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_session_queue_completed_at
AFTER UPDATE OF status ON session_queue
FOR EACH ROW
WHEN
NEW.status = 'completed'
OR NEW.status = 'failed'
OR NEW.status = 'canceled'
BEGIN
UPDATE session_queue
SET completed_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE item_id = NEW.item_id;
END;
""",
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_session_queue_started_at
AFTER UPDATE OF status ON session_queue
FOR EACH ROW
WHEN
NEW.status = 'in_progress'
BEGIN
UPDATE session_queue
SET started_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE item_id = NEW.item_id;
END;
""",
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_session_queue_updated_at
AFTER UPDATE
ON session_queue FOR EACH ROW
BEGIN
UPDATE session_queue
SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE item_id = old.item_id;
END;
""",
]
for stmt in tables + indices + triggers:
cursor.execute(stmt)
def _create_workflow_images(self, cursor: sqlite3.Cursor) -> None:
tables = [
"""--sql
CREATE TABLE IF NOT EXISTS workflow_images (
workflow_id TEXT NOT NULL,
image_name TEXT NOT NULL,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Soft delete, currently unused
deleted_at DATETIME,
-- enforce one-to-many relationship between workflows and images using PK
-- (we can extend this to many-to-many later)
PRIMARY KEY (image_name),
FOREIGN KEY (workflow_id) REFERENCES workflows (workflow_id) ON DELETE CASCADE,
FOREIGN KEY (image_name) REFERENCES images (image_name) ON DELETE CASCADE
);
"""
]
indices = [
"CREATE INDEX IF NOT EXISTS idx_workflow_images_workflow_id ON workflow_images (workflow_id);",
"CREATE INDEX IF NOT EXISTS idx_workflow_images_workflow_id_created_at ON workflow_images (workflow_id, created_at);",
]
triggers = [
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_workflow_images_updated_at
AFTER UPDATE
ON workflow_images FOR EACH ROW
BEGIN
UPDATE workflow_images SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE workflow_id = old.workflow_id AND image_name = old.image_name;
END;
"""
]
for stmt in tables + indices + triggers:
cursor.execute(stmt)
def _create_workflows(self, cursor: sqlite3.Cursor) -> None:
tables = [
"""--sql
CREATE TABLE IF NOT EXISTS workflows (
workflow TEXT NOT NULL,
workflow_id TEXT GENERATED ALWAYS AS (json_extract(workflow, '$.id')) VIRTUAL NOT NULL UNIQUE, -- gets implicit index
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')) -- updated via trigger
);
"""
]
triggers = [
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_workflows_updated_at
AFTER UPDATE
ON workflows FOR EACH ROW
BEGIN
UPDATE workflows
SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE workflow_id = old.workflow_id;
END;
"""
]
for stmt in tables + triggers:
cursor.execute(stmt)
def build_migration_1() -> Migration:
"""
Builds the migration from database version 0 (init) to 1.
This migration represents the state of the database circa InvokeAI v3.4.0, which was the last
version to not use migrations to manage the database.
As such, this migration does include some ALTER statements, and the SQL statements are written
to be idempotent.
- Create `board_images` junction table
- Create `boards` table
- Create `images` table, add `starred` column
- Create `model_config` table
- Create `session_queue` table
- Create `workflow_images` junction table
- Create `workflows` table
"""
migration_1 = Migration(
from_version=0,
to_version=1,
callback=Migration1Callback(),
)
return migration_1

@ -0,0 +1,166 @@
import sqlite3
from logging import Logger
from pydantic import ValidationError
from tqdm import tqdm
from invokeai.app.services.image_files.image_files_base import ImageFileStorageBase
from invokeai.app.services.image_files.image_files_common import ImageFileNotFoundException
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration
from invokeai.app.services.workflow_records.workflow_records_common import (
UnsafeWorkflowWithVersionValidator,
)
class Migration2Callback:
def __init__(self, image_files: ImageFileStorageBase, logger: Logger):
self._image_files = image_files
self._logger = logger
def __call__(self, cursor: sqlite3.Cursor):
self._add_images_has_workflow(cursor)
self._add_session_queue_workflow(cursor)
self._drop_old_workflow_tables(cursor)
self._add_workflow_library(cursor)
self._drop_model_manager_metadata(cursor)
self._migrate_embedded_workflows(cursor)
def _add_images_has_workflow(self, cursor: sqlite3.Cursor) -> None:
"""Add the `has_workflow` column to `images` table."""
cursor.execute("PRAGMA table_info(images)")
columns = [column[1] for column in cursor.fetchall()]
if "has_workflow" not in columns:
cursor.execute("ALTER TABLE images ADD COLUMN has_workflow BOOLEAN DEFAULT FALSE;")
def _add_session_queue_workflow(self, cursor: sqlite3.Cursor) -> None:
"""Add the `workflow` column to `session_queue` table."""
cursor.execute("PRAGMA table_info(session_queue)")
columns = [column[1] for column in cursor.fetchall()]
if "workflow" not in columns:
cursor.execute("ALTER TABLE session_queue ADD COLUMN workflow TEXT;")
def _drop_old_workflow_tables(self, cursor: sqlite3.Cursor) -> None:
"""Drops the `workflows` and `workflow_images` tables."""
cursor.execute("DROP TABLE IF EXISTS workflow_images;")
cursor.execute("DROP TABLE IF EXISTS workflows;")
def _add_workflow_library(self, cursor: sqlite3.Cursor) -> None:
"""Adds the `workflow_library` table and drops the `workflows` and `workflow_images` tables."""
tables = [
"""--sql
CREATE TABLE IF NOT EXISTS workflow_library (
workflow_id TEXT NOT NULL PRIMARY KEY,
workflow TEXT NOT NULL,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- updated manually when retrieving workflow
opened_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Generated columns, needed for indexing and searching
category TEXT GENERATED ALWAYS as (json_extract(workflow, '$.meta.category')) VIRTUAL NOT NULL,
name TEXT GENERATED ALWAYS as (json_extract(workflow, '$.name')) VIRTUAL NOT NULL,
description TEXT GENERATED ALWAYS as (json_extract(workflow, '$.description')) VIRTUAL NOT NULL
);
""",
]
indices = [
"CREATE INDEX IF NOT EXISTS idx_workflow_library_created_at ON workflow_library(created_at);",
"CREATE INDEX IF NOT EXISTS idx_workflow_library_updated_at ON workflow_library(updated_at);",
"CREATE INDEX IF NOT EXISTS idx_workflow_library_opened_at ON workflow_library(opened_at);",
"CREATE INDEX IF NOT EXISTS idx_workflow_library_category ON workflow_library(category);",
"CREATE INDEX IF NOT EXISTS idx_workflow_library_name ON workflow_library(name);",
"CREATE INDEX IF NOT EXISTS idx_workflow_library_description ON workflow_library(description);",
]
triggers = [
"""--sql
CREATE TRIGGER IF NOT EXISTS tg_workflow_library_updated_at
AFTER UPDATE
ON workflow_library FOR EACH ROW
BEGIN
UPDATE workflow_library
SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE workflow_id = old.workflow_id;
END;
"""
]
for stmt in tables + indices + triggers:
cursor.execute(stmt)
def _drop_model_manager_metadata(self, cursor: sqlite3.Cursor) -> None:
"""Drops the `model_manager_metadata` table."""
cursor.execute("DROP TABLE IF EXISTS model_manager_metadata;")
def _migrate_embedded_workflows(self, cursor: sqlite3.Cursor) -> None:
"""
In the v3.5.0 release, InvokeAI changed how it handles embedded workflows. The `images` table in
the database now has a `has_workflow` column, indicating if an image has a workflow embedded.
This migration callback checks each image for the presence of an embedded workflow, then updates its entry
in the database accordingly.
"""
# Get all image names
cursor.execute("SELECT image_name FROM images")
image_names: list[str] = [image[0] for image in cursor.fetchall()]
total_image_names = len(image_names)
if not total_image_names:
return
self._logger.info(f"Migrating workflows for {total_image_names} images")
# Migrate the images
to_migrate: list[tuple[bool, str]] = []
pbar = tqdm(image_names)
for idx, image_name in enumerate(pbar):
pbar.set_description(f"Checking image {idx + 1}/{total_image_names} for workflow")
try:
pil_image = self._image_files.get(image_name)
except ImageFileNotFoundException:
self._logger.warning(f"Image {image_name} not found, skipping")
continue
except Exception as e:
self._logger.warning(f"Error while checking image {image_name}, skipping: {e}")
continue
if "invokeai_workflow" in pil_image.info:
try:
UnsafeWorkflowWithVersionValidator.validate_json(pil_image.info.get("invokeai_workflow", ""))
except ValidationError:
self._logger.warning(f"Image {image_name} has invalid embedded workflow, skipping")
continue
to_migrate.append((True, image_name))
self._logger.info(f"Adding {len(to_migrate)} embedded workflows to database")
cursor.executemany("UPDATE images SET has_workflow = ? WHERE image_name = ?", to_migrate)
def build_migration_2(image_files: ImageFileStorageBase, logger: Logger) -> Migration:
"""
Builds the migration from database version 1 to 2.
Introduced in v3.5.0 for the new workflow library.
:param image_files: The image files service, used to check for embedded workflows
:param logger: The logger, used to log progress during embedded workflows handling
This migration does the following:
- Add `has_workflow` column to `images` table
- Add `workflow` column to `session_queue` table
- Drop `workflows` and `workflow_images` tables
- Add `workflow_library` table
- Drops the `model_manager_metadata` table
- Populates the `has_workflow` column in the `images` table (requires `image_files` & `logger` dependencies)
"""
migration_2 = Migration(
from_version=1,
to_version=2,
callback=Migration2Callback(image_files=image_files, logger=logger),
)
return migration_2
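The per-image check driving `_migrate_embedded_workflows` reduces to this sketch; the file path is hypothetical and Pillow is assumed available.
from PIL import Image
# An image "has a workflow" when its PNG metadata carries the
# invokeai_workflow key, as checked in the migration above.
pil_image = Image.open("example.png")  # hypothetical path
print("invokeai_workflow" in pil_image.info)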

@ -0,0 +1,79 @@
import sqlite3
from logging import Logger
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration
from .util.migrate_yaml_config_1 import MigrateModelYamlToDb1
class Migration3Callback:
def __init__(self, app_config: InvokeAIAppConfig, logger: Logger) -> None:
self._app_config = app_config
self._logger = logger
def __call__(self, cursor: sqlite3.Cursor) -> None:
self._drop_model_manager_metadata(cursor)
self._recreate_model_config(cursor)
self._migrate_model_config_records(cursor)
def _drop_model_manager_metadata(self, cursor: sqlite3.Cursor) -> None:
"""Drops the `model_manager_metadata` table."""
cursor.execute("DROP TABLE IF EXISTS model_manager_metadata;")
def _recreate_model_config(self, cursor: sqlite3.Cursor) -> None:
"""
Drops the `model_config` table, recreating it.
In 3.4.0 this table used explicit columns; in 3.5.0 it was changed to derive those fields with json_extract.
Because this table is not used in production, we are able to simply drop it and recreate it.
"""
cursor.execute("DROP TABLE IF EXISTS model_config;")
cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS model_config (
id TEXT NOT NULL PRIMARY KEY,
-- The next 3 fields are enums in python, unrestricted string here
base TEXT GENERATED ALWAYS as (json_extract(config, '$.base')) VIRTUAL NOT NULL,
type TEXT GENERATED ALWAYS as (json_extract(config, '$.type')) VIRTUAL NOT NULL,
name TEXT GENERATED ALWAYS as (json_extract(config, '$.name')) VIRTUAL NOT NULL,
path TEXT GENERATED ALWAYS as (json_extract(config, '$.path')) VIRTUAL NOT NULL,
format TEXT GENERATED ALWAYS as (json_extract(config, '$.format')) VIRTUAL NOT NULL,
original_hash TEXT, -- could be null
-- Serialized JSON representation of the whole config object,
-- which will contain additional fields from subclasses
config TEXT NOT NULL,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- unique constraint on combo of name, base and type
UNIQUE(name, base, type)
);
"""
)
def _migrate_model_config_records(self, cursor: sqlite3.Cursor) -> None:
"""After updating the model config table, we repopulate it."""
self._logger.info("Migrating model config records from models.yaml to database")
model_record_migrator = MigrateModelYamlToDb1(self._app_config, self._logger, cursor)
model_record_migrator.migrate()
def build_migration_3(app_config: InvokeAIAppConfig, logger: Logger) -> Migration:
"""
Build the migration from database version 2 to 3.
This migration does the following:
- Drops the `model_config` table, recreating it
- Migrates data from `models.yaml` into the `model_config` table
"""
migration_3 = Migration(
from_version=2,
to_version=3,
callback=Migration3Callback(app_config=app_config, logger=logger),
)
return migration_3
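The generated-column technique this migration introduces can be demonstrated standalone; this sketch assumes an SQLite build with generated-column support (3.31+).
import sqlite3
conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE demo (
        id TEXT PRIMARY KEY,
        name TEXT GENERATED ALWAYS AS (json_extract(config, '$.name')) VIRTUAL NOT NULL,
        config TEXT NOT NULL
    );
    """
)
conn.execute("INSERT INTO demo (id, config) VALUES (?, ?);", ("k1", '{"name": "my-model"}'))
print(conn.execute("SELECT name FROM demo;").fetchone()[0])  # my-model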

@ -0,0 +1,83 @@
import sqlite3
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration
class Migration4Callback:
"""Callback to do step 4 of migration."""
def __call__(self, cursor: sqlite3.Cursor) -> None: # noqa D102
self._create_model_metadata(cursor)
self._create_model_tags(cursor)
self._create_tags(cursor)
self._create_triggers(cursor)
def _create_model_metadata(self, cursor: sqlite3.Cursor) -> None:
"""Create the table used to store model metadata downloaded from remote sources."""
cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS model_metadata (
id TEXT NOT NULL PRIMARY KEY,
name TEXT GENERATED ALWAYS AS (json_extract(metadata, '$.name')) VIRTUAL NOT NULL,
author TEXT GENERATED ALWAYS AS (json_extract(metadata, '$.author')) VIRTUAL NOT NULL,
-- Serialized JSON representation of the whole metadata object,
-- which will contain additional fields from subclasses
metadata TEXT NOT NULL,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
FOREIGN KEY(id) REFERENCES model_config(id) ON DELETE CASCADE
);
"""
)
def _create_model_tags(self, cursor: sqlite3.Cursor) -> None:
cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS model_tags (
model_id TEXT NOT NULL,
tag_id INTEGER NOT NULL,
FOREIGN KEY(model_id) REFERENCES model_config(id) ON DELETE CASCADE,
FOREIGN KEY(tag_id) REFERENCES tags(tag_id) ON DELETE CASCADE,
UNIQUE(model_id,tag_id)
);
"""
)
def _create_tags(self, cursor: sqlite3.Cursor) -> None:
cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS tags (
tag_id INTEGER NOT NULL PRIMARY KEY,
tag_text TEXT NOT NULL UNIQUE
);
"""
)
def _create_triggers(self, cursor: sqlite3.Cursor) -> None:
cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS model_metadata_updated_at
AFTER UPDATE
ON model_metadata FOR EACH ROW
BEGIN
UPDATE model_metadata SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE id = old.id;
END;
"""
)
def build_migration_4() -> Migration:
"""
Build the migration from database version 3 to 4.
Adds the tables needed to store model metadata and tags.
"""
migration_4 = Migration(
from_version=3,
to_version=4,
callback=Migration4Callback(),
)
return migration_4
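For illustration, a hedged sketch of querying the tag tables created above; the join is written for this example and is not taken from the diff.
import sqlite3
conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE tags (tag_id INTEGER NOT NULL PRIMARY KEY, tag_text TEXT NOT NULL UNIQUE);
    CREATE TABLE model_tags (model_id TEXT NOT NULL, tag_id INTEGER NOT NULL, UNIQUE(model_id, tag_id));
    INSERT INTO tags (tag_text) VALUES ('anime');
    INSERT INTO model_tags VALUES ('model-1', 1);
    """
)
rows = conn.execute(
    "SELECT mt.model_id FROM model_tags mt JOIN tags t ON t.tag_id = mt.tag_id WHERE t.tag_text = ?;",
    ("anime",),
).fetchall()
print([r[0] for r in rows])  # ['model-1']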
