Compare commits


1279 Commits

SHA1 Message Date
b8ef9407d1 Update model_sym_links to indicate it requires admin/developer mode in windows 2024-02-28 09:58:15 -05:00
d1e7a2f094 Fix category for model import configs 2024-02-28 09:58:15 -05:00
6b595ecd8a Fix target_is_directory param 2024-02-28 09:58:15 -05:00
e391bf7c25 Allow symlinks instead of copy during model installation 2024-02-28 09:58:15 -05:00
8c6860a2c5 Ruff format 2024-02-28 09:49:56 -05:00
fa8263e6f0 Ruff check 2024-02-28 09:49:56 -05:00
e4b8cb1d34 Extract TI loading logic into util, disallow it from ever failing a generation 2024-02-28 09:49:56 -05:00
408a800593 Fix one last reference to the uncasted model 2024-02-28 09:49:56 -05:00
9e5e3f1019 Allow TIs to be either a key or a name in the prompt during our transition to using keys 2024-02-28 09:49:56 -05:00
98a13aa7dc handle change to Civitai metadata schema for commercial usage 2024-02-28 16:15:29 +11:00
4418c118db added add all button to scan models (#5811)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example, having the text "closes #1234" would connect the current pull request to issue 1234. And when we merge the pull request, GitHub will automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch the database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2024-02-27 09:56:23 -05:00
110b0bc8fe updated to use new import model mutation 2024-02-27 09:48:41 -05:00
175cfe41a4 Merge branch 'next' into chainchompa/add-all-scan 2024-02-27 09:44:35 -05:00
a12d54afb9 added add all button to scan models 2024-02-27 09:43:02 -05:00
18af5348a2 fix(ui): merge conflict 2024-02-27 08:38:37 -05:00
b18c8e1c96 chore(ui): bump deps
The only major version is `query-string`. The breaking change for it is dropping support for old versions of node. Not a problem for us.
2024-02-27 08:38:37 -05:00
ea1e647174 ci: change frontend check to dpdm 2024-02-27 08:38:37 -05:00
af059f2cff feat(ui): move from madge to dpdm for circular dependencies 2024-02-27 08:38:37 -05:00
d8e21091e7 tidy(ui): fix circular dependencies in listeners 2024-02-27 08:38:37 -05:00
344041fd3a tidy: remove some traces of ONNX 2024-02-27 08:38:37 -05:00
588a220dd4 chore(ui): typegen, update knip config
Knip should never touch the autogenerated types
2024-02-27 08:38:37 -05:00
770d4092b9 chore(ui): update pnpm-lock.yaml
Forgot to run `pnpm i` earlier after removing packages.
2024-02-27 08:38:37 -05:00
33fe02bdff ci: add knip to ui check workflow 2024-02-27 08:38:37 -05:00
8a353bc1e3 feat(ui): configure knip 2024-02-27 08:38:37 -05:00
240f4801db tidy(ui): clean up unused code 6
unused files
2024-02-27 08:38:37 -05:00
da50507b2d tidy(ui): clean up unused code 5
variables, types and schemas
2024-02-27 08:38:37 -05:00
67d150ab66 tidy(ui): clean up unused code 4
variables, types and schemas
2024-02-27 08:38:37 -05:00
40d70add76 tidy(ui): clean up unused code 3
variables, types and schemas
2024-02-27 08:38:37 -05:00
7bd9bf3ba5 tidy(ui): clean up unused code 2
types and schemas
2024-02-27 08:38:37 -05:00
c94d607089 feat(mm): add log stmt for download complete event 2024-02-27 08:38:37 -05:00
ad801e54d4 fix(ui): model install progress sets total bytes correctly 2024-02-27 08:38:37 -05:00
fb4db83911 chore(ui): lint 2024-02-27 08:38:37 -05:00
cc229c3ea0 fix(ui): fix remaining TS issues 2024-02-27 08:38:37 -05:00
ca00fabd79 fix(ui): fix up MM queries & types (wip) 2024-02-27 08:38:37 -05:00
b361fabf81 tidy(api): remove non-heuristic install route 2024-02-27 08:38:37 -05:00
00669200c7 tidy(mm): remove ONNX from AnyModelConfig 2024-02-27 08:38:37 -05:00
fa07e82d2a tidy(ui): clean up unused code 1
- Only export when necessary
- Remove totally unused functions, variables, state, etc.
- Remove unused packages
2024-02-27 08:38:37 -05:00
3632c5cd57 feat(ui): add knip + minimal config
https://knip.dev/

Replaces `unimported`
2024-02-27 08:38:37 -05:00
daef68d3c1 fix(ui): fix missing component import 2024-02-27 08:38:37 -05:00
ba29376fba ui: split the canvas mask blur and edge size setting 2024-02-27 07:32:13 -05:00
3efd9465eb feat(ui): create metadata types for control adapters
These are the same as the existing control adapter types, but the model field is non-nullable, simplifying handling of these objects.
2024-02-26 14:49:38 -05:00
a3b11c04cb fix(ui): model metadata handlers use model identifiers, not configs
Model metadata includes the main model, VAE and refiner model.

These used full model configs, as returned by the server, as their metadata type.

LoRA and control adapter metadata only use the metadata identifier.

This created a difference in handling. After parsing a model/vae/refiner, we have its name and can display it. But for LoRAs and control adapters, we only have the model key and must query for the full model config to get the name.

This change makes main model/vae/refiner metadata only have the model key, like LoRAs and control adapters.

The render function is now async so fetching can occur within it. All metadata fields with models now only contain the identifier, and fetch the model name to render their values.
2024-02-26 14:49:38 -05:00
8f9e3ac795 fix(ui): CanvasPasteBack types 2024-02-26 14:49:38 -05:00
2367f53367 tidy(ui): remove unused metadata schemas 2024-02-26 14:49:38 -05:00
8b9f0a9551 feat(nodes): update LoRAMetadataItem model
LoRA model is now under `model`, not `lora`.
2024-02-26 14:49:38 -05:00
ab57976e42 tidy(ui): tidy model identifier logic
- Move some files around
- Use util to extract key and base from model config
2024-02-26 14:49:38 -05:00
3c103c89f3 feat(ui): optimize model query caching
When we retrieve a list of models, upsert that data into the `getModelConfig` and `getModelConfigByAttrs` query caches.

With this change, calls to those two queries are almost always going to be free, because their caches will already have all models in them. The exception is queries for models that no longer exist.
2024-02-26 14:49:38 -05:00
0f19176944 fix(ui): fix lora metadata item type 2024-02-26 14:49:38 -05:00
fc09a954b5 fix(ui): fix node type 2024-02-26 14:49:38 -05:00
e7eee29825 feat(ui): add transformation to width/height parameter schemas to round to multiple of 8
This allows image dimensions that are not multiples of 8 to still be recalled with best effort.
2024-02-26 14:49:38 -05:00
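The rounding arithmetic behind that recall behavior, as a minimal sketch (the actual transformation lives in the UI's zod parameter schemas; the Python below is purely illustrative):

```python
LATENT_SCALE_FACTOR = 8  # image dimensions must be multiples of 8 for the latent space

def round_to_multiple(value: int, multiple: int = LATENT_SCALE_FACTOR) -> int:
    """Round to the nearest multiple, so off-size dimensions can still be recalled."""
    return round(value / multiple) * multiple

assert round_to_multiple(509) == 512
assert round_to_multiple(1023) == 1024
```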
2c1ba23f61 fix(ui): fix lora metadata rendering 2024-02-26 14:49:38 -05:00
58ef6dc6ce fix(ui): fix type issues related to change in LoRA type 2024-02-26 14:49:38 -05:00
8faefa89fe feat(ui): migrate all metadata recall logic to new system 2024-02-26 14:49:38 -05:00
02f59a3831 fix(ui): use id for component key in control adapter components 2024-02-26 14:49:38 -05:00
2555be3058 feat(ui): no JSX in metadata handlers 2024-02-26 14:49:38 -05:00
e174ce038f feat(ui): refactor metadata handling (again)
Add concepts for metadata handlers. Handlers include parsers, recallers and validators for different metadata types:
- Parsers parse a raw metadata object of any shape to a structured object.
- Recallers load the parsed metadata into state. Recallers are optional, as some metadata types don't need to be loaded into state.
- Validators provide an additional layer of validation before recalling the metadata. This is needed because a metadata object may be valid, but not able to be recalled due to some other requirement, like base model compatibility. Validators are optional.

Sometimes metadata is not a single object but a list of items - like LoRAs. Metadata handlers may implement an optional set of "item" handlers which operate on individual items in the list.

Parsers and validators are async to allow fetching additional data, like a model config. Recallers are synchronous.

These handlers are composed into a public API, exported as a `handlers` object. Besides the handler functions, a metadata handler set includes:
- A function to get the label of the metadata type.
- An optional function to render the value of the metadata type.
- An optional function to render the _item_ value of the metadata type.
2024-02-26 14:49:38 -05:00
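A language-agnostic sketch of the parser/validator/recaller composition described above (the actual handlers are TypeScript in the UI; all names below are illustrative only):

```python
from dataclasses import dataclass
from typing import Any, Awaitable, Callable, Optional

@dataclass
class MetadataHandler:
    parse: Callable[[Any], Awaitable[Any]]                      # async: may fetch model configs
    validate: Optional[Callable[[Any], Awaitable[Any]]] = None  # async, optional extra checks
    recall: Optional[Callable[[Any], None]] = None              # sync, optional: load into state

async def recall_metadata(handler: MetadataHandler, raw: Any) -> None:
    parsed = await handler.parse(raw)
    if handler.validate is not None:
        parsed = await handler.validate(parsed)  # e.g. base-model compatibility
    if handler.recall is not None:
        handler.recall(parsed)
```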
0f10faf0d4 build(ui): do not fail build on eslint error in dev mode 2024-02-26 14:49:38 -05:00
393e32f8a7 chore(ui): typegen 2024-02-26 14:49:38 -05:00
70412464c8 feat(api): add MM get_by_attrs route
Gets the first model that matches the given name, base and type. Raises 404 if there isn't one.

This will be used for backwards compatibility with old metadata.
2024-02-26 14:49:38 -05:00
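A hedged sketch of what such a route can look like (FastAPI-style; the route path, parameter names, and the `installed_models` stand-in are assumptions, not the actual implementation):

```python
from fastapi import APIRouter, HTTPException

router = APIRouter(prefix="/api/v2/models")

def installed_models() -> list[dict]:
    return []  # stand-in for the model record store

@router.get("/get_by_attrs")
async def get_model_by_attrs(name: str, base: str, type: str) -> dict:
    matches = [m for m in installed_models()
               if (m["name"], m["base"], m["type"]) == (name, base, type)]
    if not matches:
        raise HTTPException(status_code=404, detail="No matching model found")
    return matches[0]  # "first model that matches", per the commit message
```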
30fdb9dbfd undo 2024-02-26 14:44:37 -05:00
66f6013436 fix literal strings in MM UI 2024-02-26 14:44:37 -05:00
49b04f7db8 fix TI appearing as key in prompt 2024-02-26 14:20:28 -05:00
253dc5d43d fix base model grouping in combobox 2024-02-26 14:20:28 -05:00
3ccb4e6ff9 fix(mm): fix ModelCacheBase method name 2024-02-26 17:38:31 +11:00
200a9d1801 chore: ruff 2024-02-26 17:38:31 +11:00
b09a76ea0d recover gracefuly from GPU out of memory errors (next version) 2024-02-26 17:38:31 +11:00
8a2030e78a clear out VRAM when an OOM occurs 2024-02-26 17:38:31 +11:00
dfa5505ed8 feat(ui): bulk download click to download 2024-02-25 22:23:15 -05:00
f8b731b900 fix(ui): fix node types for canvas graphs 2024-02-24 19:38:16 +11:00
fd9ab0fb7d chore(ui): typegen 2024-02-24 19:38:16 +11:00
f504a5c96e tidy(nodes): rename canvas paste back 2024-02-24 19:38:16 +11:00
afe6639b9c fix: outpaint result not getting pasted back correctly 2024-02-24 19:38:16 +11:00
1f1bf15099 fix: lint errors 2024-02-24 19:38:16 +11:00
8fa238f100 canvas: improve paste back (or try to) 2024-02-24 19:38:16 +11:00
30b6a0ee23 wip(ui): Replace 2 Layer Coherence pass with Gradient Mask 2024-02-24 19:38:16 +11:00
784878c300 chore: ruff 2024-02-24 19:04:52 +11:00
b51b163400 fix(ui): fix merge issue 2024-02-24 19:04:52 +11:00
7e13224ec8 fix(ui): use new scan_folder response instead of hook to determine if models are installed already 2024-02-24 19:04:52 +11:00
7bc454209c chore(ui): typegen 2024-02-24 19:04:52 +11:00
cc7f6c7048 feat(mm): add logic to scan_folder route to check if a model is already installed
This was done in the frontend before but it's something the backend should handle.

The logic compares the found model paths to the path and source of all installed models. It excludes core models.
2024-02-24 19:04:52 +11:00
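A minimal sketch of that comparison logic, assuming illustrative field names (`path`, `source`, `type`) on the installed-model records:

```python
from pathlib import Path

def mark_installed(found: list[Path], installed: list[dict]) -> list[dict]:
    # Paths and sources already claimed by installed models; core models excluded.
    claimed = {
        str(m[key])
        for m in installed
        if m.get("type") != "core"
        for key in ("path", "source")
        if m.get(key)
    }
    return [{"path": str(p), "is_installed": str(p) in claimed} for p in found]
```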
8b8d950137 chore(ui): lint 2024-02-24 19:04:52 +11:00
24fd7f41ff build(ui): restore i18n eslint rule 2024-02-24 19:04:52 +11:00
7c5e458372 chore: ruff 2024-02-24 19:04:52 +11:00
a5dba4b0d9 fix(ui): fix metadata route 2024-02-24 19:04:52 +11:00
72fb1cefff chore(ui): typegen 2024-02-24 19:04:52 +11:00
a64f1c0b20 feat(api): mm metadata route "meta" -> "metadata" 2024-02-24 19:04:52 +11:00
974658107d lint fix 2024-02-24 19:04:52 +11:00
07fb5d5c19 updated translations 2024-02-24 19:04:52 +11:00
20c75e7a7e fix convert endpoint logic 2024-02-24 19:04:52 +11:00
cfcb68696c clean up old model manager components and endpoints 2024-02-24 19:04:52 +11:00
7b1b6d3235 add model convert to checkpoint main models 2024-02-24 19:04:52 +11:00
aefba52a0a fix logic to see if scanned models are already installed, style tweaks 2024-02-24 19:04:52 +11:00
6af46f9c5f add error_reason to ModelInstallJob 2024-02-24 19:04:52 +11:00
190702d011 add error_reason to UI if import fails 2024-02-24 19:04:52 +11:00
7785e8ff79 fix types for ImportQueue, add QuickAdd for scan models 2024-02-24 19:04:52 +11:00
b3beaefa04 refactored and fixed issues with advanced import form 2024-02-24 19:04:52 +11:00
98be81354a fix(ui): misc MM cleanup 2024-02-24 19:04:52 +11:00
2a2a5eb775 chore(ui): temp disable eslint i18 rule 2024-02-24 19:04:52 +11:00
4a42b15b42 fix(ui): fix ImportMainModelResponse type 2024-02-24 19:04:52 +11:00
f24d5e5e31 fix(ui): simplify model install event listeners 2024-02-24 19:04:52 +11:00
4b106bc903 fix(ui): fix model install event types 2024-02-24 19:04:52 +11:00
135ef9066f added advanced import forms, not fully working yet 2024-02-24 19:04:52 +11:00
0567f98e4a get positioning/scrolling working for scan results list 2024-02-24 19:04:52 +11:00
5b66baa3ec basic scan working and renders results 2024-02-24 19:04:52 +11:00
a022aaf258 add scan model endpoint, break add model into tabs 2024-02-24 19:04:52 +11:00
94065b090a update metadata endpoint 2024-02-24 19:04:52 +11:00
091bf9220b allow metadata-less models to be used for GET metadata endpoint 2024-02-24 19:04:52 +11:00
8d243b1fca added status to import queue model 2024-02-24 19:04:52 +11:00
23c412e011 delete model imports and prune all finished, update state with socket messages 2024-02-24 19:04:52 +11:00
66692f02aa fix sync model endpoint 2024-02-24 19:04:52 +11:00
38af1c3a81 form error handling 2024-02-24 19:04:52 +11:00
7b4b7e3781 finish model update 2024-02-24 19:04:52 +11:00
02a3472505 added socket listeners, added more info to ui 2024-02-24 19:04:52 +11:00
909d354a38 edit view for model, depending on type and valid values 2024-02-24 19:04:52 +11:00
7801b8c42f hook up Add Model button 2024-02-24 19:04:52 +11:00
4fd259bb89 single model view 2024-02-24 19:04:52 +11:00
b8b3ef9725 added import model form and importqueue 2024-02-24 19:04:52 +11:00
3a8d5dc349 model list, filtering, searching 2024-02-24 19:04:52 +11:00
358cac9674 workspace for mary and jenn 2024-02-24 19:04:52 +11:00
bdc2b8069b get old UI working somewhat with new endpoints 2024-02-24 19:04:52 +11:00
09295ae43b Allow passing in key on register 2024-02-23 14:47:14 -05:00
80ad14d89f Remove passing keys in on register 2024-02-23 14:33:49 -05:00
c674eb3168 Run ruff 2024-02-23 14:33:49 -05:00
63138640a7 Allow users to run model manager without cuda 2024-02-23 14:33:49 -05:00
d103ff0d6e fix(ui): roll back utility-types
Its `Required` util does not distribute over unions as expected. Also, we already have `ts-toolbelt` for some utils.
2024-02-23 07:53:45 +11:00
94931e8ac0 feat(ui): refactor metadata handling
Refactor of metadata recall handling. This is in preparation for a backwards compatibility layer for models.

- Create helpers to fetch a model outside react (e.g. not in a hook)
- Created helpers to parse model metadata
- Renamed a lot of types that were confusing and/or had naming collisions
2024-02-23 07:53:45 +11:00
b409f3aaf9 chore(ui): typegen 2024-02-23 07:53:45 +11:00
f96b7f2e11 fix(nodes): make fields on ModelConfigBase required
The setup of `ModelConfigBase` means autogenerated types have critical fields flagged as nullable (like `key` and `base`). Need to manually flag them as required.
2024-02-23 07:53:45 +11:00
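One way to do that in pydantic v2 is a `json_schema_extra` hook that rewrites the schema's `required` list; a sketch of the idea (not necessarily the exact fix):

```python
from typing import Any, Optional
from pydantic import BaseModel, ConfigDict

def _force_required(schema: dict[str, Any]) -> None:
    # Flag key/base as required so autogenerated client types are non-nullable.
    schema["required"] = sorted(set(schema.get("required", [])) | {"key", "base"})

class ModelConfigBase(BaseModel):
    model_config = ConfigDict(json_schema_extra=_force_required)
    key: Optional[str] = None   # nullable at runtime, but required in the JSON schema
    base: Optional[str] = None
```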
de3be4bd30 feat(ui): replace type-fest with utility-types
- The new package has more useful types
- Only used `JsonObject` from `type-fest`; added an implementation of that type
2024-02-23 07:53:45 +11:00
cc12f57a5a several small model install enhancements
- Support extended HF repoid syntax in TUI. This allows
  installation of subfolders and safetensors files, as in
  `XpucT/Deliberate::Deliberate_v5.safetensors`

- Add `error` and `error_traceback` properties to the install
  job objects.

- Rename the `heuristic_import` route to `heuristic_install`.

- Fix the example `config` input in the `heuristic_install` route.
2024-02-23 07:48:23 +11:00
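A sketch of parsing that extended syntax, assuming `::` simply separates the repo id from an optional subfolder or file path:

```python
def parse_repo_id(spec: str) -> tuple[str, str | None]:
    """Split "owner/repo::subpath" into (repo_id, subpath-or-None)."""
    repo_id, sep, subpath = spec.partition("::")
    return repo_id, (subpath if sep else None)

assert parse_repo_id("XpucT/Deliberate::Deliberate_v5.safetensors") == (
    "XpucT/Deliberate", "Deliberate_v5.safetensors")
assert parse_repo_id("stabilityai/sdxl-vae") == ("stabilityai/sdxl-vae", None)
```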
613f11a3ac use official Deliberate download repo 2024-02-23 07:48:04 +11:00
a6e2d2c5e0 fix repo-id for the Deliberate v5 model
prevent lora and embedding file suffixes from being stripped during installation

apply psychedelicious patch to get compel to load proper TI embedding
2024-02-23 07:48:04 +11:00
ae14df97d6 remove startup dependency on legacy models.yaml file 2024-02-23 07:47:39 +11:00
a6e1ac6096 chore: typing 2024-02-22 10:04:33 -05:00
8530635540 chore: typing fix 2024-02-22 10:04:33 -05:00
b2b7aed030 feat(nodes): added gradient mask node 2024-02-22 10:04:33 -05:00
970d45f691 Run ruff 2024-02-22 09:50:02 -05:00
19b9a22d93 rename endpoint for scanning 2024-02-22 09:50:02 -05:00
c0d9990344 Create /search endpoint, update model object structure in scan model page 2024-02-22 09:50:02 -05:00
4ac5e307c4 chore(ui): bump deps
Notable updates:
- Minor version of RTK includes customizable selectors for RTK Query, so we can remove the patch that was added to ensure only the LRU memoize function was used for perf reasons. Updated to use the LRU memoize function.
- Major version of react-resizable-panels. No breaking changes, works great, and you can now resize all panels when dragging at the intersection point of panels. Cool!
- Minor (?) version of nanostores. `action` API is removed, we were using it in one spot. Fixed.
- @invoke-ai/eslint-config-react has all deps bumped and now has its dependent plugins/configs listed as normal dependencies (as opposed to peer deps). This means we can remove those packages from explicit dev deps.
2024-02-22 07:27:28 +11:00
2815f737fe tidy(ui): remove debugging stmt 2024-02-22 07:26:47 +11:00
63e96fd1ea fix(ui): handle new model format for metadata 2024-02-22 07:26:47 +11:00
66ab56246a fix(ui): use model names in badges 2024-02-22 07:26:47 +11:00
20a56bc757 fix(nodes): fix TI loading 2024-02-22 07:26:47 +11:00
82925e1539 fix(ui): fix package build 2024-02-21 08:31:55 -05:00
0137a0db7b feat(ui): do not subscribe to bulk download sio room if baseUrl is set 2024-02-21 00:00:25 +11:00
b410793684 feat(ui): revise bulk download listeners
- Use a single listener for all of them to keep them in one spot
- Use the bulk download item name as a toast id so we can update the existing toasts
- Update handling to work with other environments
- Move all bulk download handling from components to listener
2024-02-21 00:00:25 +11:00
894e9f127b chore(ui): typegen 2024-02-21 00:00:25 +11:00
dd9b1c8eec feat(bulk_download): update response model, messages 2024-02-21 00:00:25 +11:00
8d9c566656 implementing download for bulk_download events 2024-02-21 00:00:25 +11:00
9db7e073a3 setting up event listeners for bulk download socket 2024-02-21 00:00:25 +11:00
5f64ed5bd5 test: clean up & fix tests
- Deduplicate the mock invocation services. This is possible now that the import order issue is resolved.
- Merge `DummyEventService` into `TestEventService` and update all tests to use `TestEventService`.
2024-02-20 23:39:30 +11:00
7f75f6226b tidy(bulk_download): don't store events service separately
Using the invoker object directly leaves no ambiguity as to what `_events_bus` actually is.
2024-02-20 23:39:30 +11:00
6dc819fd47 tidy(bulk_download): do not rely on pagination API to get all images for board
We can get all images for the board as a list of image names, then pass that to `_image_handler` to get the DTOs, decoupling from the pagination API.
2024-02-20 23:39:30 +11:00
0cc81e5d63 tidy(bulk_download): nit - use `or` as a coalescing operator
Just a bit cleaner.
2024-02-20 23:39:30 +11:00
daecc54153 tidy(bulk_download): use single underscore for private attrs
Double underscores are used in the app but it doesn't actually do or convey anything that single underscores don't already do. Considered unpythonic except for actual dunder/magic methods.
2024-02-20 23:39:30 +11:00
4c31c7f9f1 tidy(bulk_download): remove class-level attr annotations
These can be misleading as they shadow actual assigned class attributes. This pattern is in the rest of the app but it shouldn't be.
2024-02-20 23:39:30 +11:00
d709c5519f tidy(bulk_download): remove extraneous abstract methods
`start`, `stop` and `__init__` are not required in implementations of an ABC or service.
2024-02-20 23:39:30 +11:00
5d84ecef49 tidy(bulk_download): clean up comments 2024-02-20 23:39:30 +11:00
641d246213 adding bulk_download_item_name to socket events 2024-02-20 23:39:30 +11:00
2e53aa48c9 refactoring handlers to do null check 2024-02-20 23:39:30 +11:00
ef12631450 removing dependency on an output folder, embrace python temp folder for bulk download 2024-02-20 23:39:30 +11:00
d9eb626b62 relocating event_service fixture due to import ordering 2024-02-20 23:39:30 +11:00
8033589629 moving the responsibility of cleaning up board names to the service not the route 2024-02-20 23:39:30 +11:00
124075ae7a updating imports to satisfy ruff 2024-02-20 23:39:30 +11:00
0bde933c89 using temp directory for downloads 2024-02-20 23:39:30 +11:00
fc5c5b6bdd returning the bulk_download_item_name on response for possible polling 2024-02-20 23:39:30 +11:00
ff53563152 narrowing bulk_download stop service scope 2024-02-20 23:39:30 +11:00
12b0d735e7 adding test coverage for new bulk download routes 2024-02-20 23:39:30 +11:00
d06ee94fd3 cleaning up bulk download zip after the response is complete 2024-02-20 23:39:30 +11:00
9dbdb6cf7c replacing import removed during rebase 2024-02-20 23:39:30 +11:00
7c091570fe 97% test coverage on bulk_download 2024-02-20 23:39:30 +11:00
e99f3482cc refactoring bulk_download to be better managed 2024-02-20 23:39:30 +11:00
d999c9ffd6 refactoring dummy event service, DRY principal; adding bulk_download_event to existing invoker tests 2024-02-20 23:39:30 +11:00
888db8ac46 refactoring bulkdownload to consider image category 2024-02-20 23:39:30 +11:00
7deef2cb27 fixing issue where default board did not return images 2024-02-20 23:39:30 +11:00
ada807af0c using the board name to download boards 2024-02-20 23:39:30 +11:00
aa132fb9e3 reworking some of the logic to use a default room, adding endpoint to download file on complete 2024-02-20 23:39:30 +11:00
98a01368b8 linted and styling 2024-02-20 23:39:30 +11:00
fc9a62dbf5 implementation of bulkdownload background task 2024-02-20 23:39:30 +11:00
4d8bec1605 adding socket events for bulk download 2024-02-20 23:39:30 +11:00
cf9dad83bc groundwork for the bulk_download_service 2024-02-20 23:39:30 +11:00
0d0a2a5c91 fix(ui): get workflow editor model selects working 2024-02-20 13:33:31 +11:00
0cab636ab0 fix(ui): get refiner model select working 2024-02-20 13:33:31 +11:00
de097ec58a fix(ui): get vae model select working 2024-02-20 13:33:31 +11:00
bb6f426162 fix(ui): get embedding select working 2024-02-20 13:33:31 +11:00
663f135b3c fix(ui): get lora select working 2024-02-20 13:33:31 +11:00
2f2097662a chore(ui): bump @invoke-ai/ui-library 2024-02-20 13:33:31 +11:00
458c29cfa5 fix(ui): fix low-hanging fruit types 2024-02-20 13:33:31 +11:00
4bec01d6f2 Add a few convenience targets to Makefile
- "test" to run pytests
- "frontend-install" to reinstall pnpm's node modeuls
2024-02-20 10:02:46 +11:00
9d79ee8dc4 chore(nodes): update TODO comment 2024-02-20 09:54:01 +11:00
78dd460348 tidy(nodes): clean up profiler/stats in processor, better comments 2024-02-20 09:54:01 +11:00
9d27d354cf fix(nodes): fix typing on stats service context manager 2024-02-20 09:54:01 +11:00
e8725a1099 fix(nodes): fix model load events
was accessing incorrect properties in event data
2024-02-20 09:54:01 +11:00
479d65b6e1 feat(nodes): making invocation class var in processor 2024-02-20 09:54:01 +11:00
5d4b388dfd feat(nodes): improved error messages in processor 2024-02-20 09:54:01 +11:00
4956fa282b feat(nodes): make processor thread limit and polling interval configurable 2024-02-20 09:54:01 +11:00
51133522b7 tests(nodes): fix tests following removal of services 2024-02-20 09:54:01 +11:00
6d5cc8b1ff chore(nodes): better comments for invocation context 2024-02-20 09:54:01 +11:00
08a5bb90e2 chore(nodes): "context_data" -> "data"
Changed within InvocationContext, for brevity.
2024-02-20 09:54:01 +11:00
39bdf5c4e9 refactor(nodes): move is_canceled to context.util 2024-02-20 09:54:01 +11:00
f31e4205aa feat(nodes): add whole queue_item to InvocationContextData
No reason to not have the whole thing in there.
2024-02-20 09:54:01 +11:00
4d05c4ff66 tidy(nodes): remove extraneous comments 2024-02-20 09:54:01 +11:00
7e88d2a7f1 feat(nodes): better invocation error messages 2024-02-20 09:54:01 +11:00
556f6aa174 chore(nodes): add comments for cancel state 2024-02-20 09:54:01 +11:00
6a74048af8 feat(nodes): promote is_canceled to public node API 2024-02-20 09:54:01 +11:00
2cb51bff11 refactor(nodes): merge processors
Consolidate graph processing logic into session processor.

With graphs as the unit of work, and the session queue distributing graphs, we no longer need the invocation queue or processor.

Instead, the session processor dequeues the next session and processes it in a simple loop, greatly simplifying the app.

- Remove `graph_execution_manager` service.
- Remove `queue` (invocation queue) service.
- Remove `processor` (invocation processor) service.
- Remove queue-related logic from `Invoker`. It now only starts and stops the services, providing them with access to other services.
- Remove unused `invocation_retrieval_error` and `session_retrieval_error` events, these are no longer needed.
- Clean up stats service now that it is less coupled to the rest of the app.
- Refactor cancellation logic - cancellations now originate from session queue (i.e. HTTP cancel endpoint) and are emitted as events. Processor gets the events and sets the canceled event. Access to this event is provided to the invocation context for e.g. the step callback.
- Remove `sessions` router; it provided access to `graph_executions` but that no longer exists.
2024-02-20 09:54:01 +11:00
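The shape of that simple loop, sketched with sessions modeled as plain callables (the real processor also handles events, profiling, and error reporting):

```python
import queue
import threading
from typing import Callable, Optional

def session_processor_loop(
    q: "queue.Queue[Optional[Callable[[], None]]]",
    stop: threading.Event,
    poll_interval: float = 1.0,
) -> None:
    """Dequeue the next session and process it; no separate invocation queue needed."""
    while not stop.is_set():
        try:
            session = q.get(timeout=poll_interval)  # next unit of work from the session queue
        except queue.Empty:
            continue
        if session is None:  # sentinel for graceful shutdown
            break
        session()  # process the whole graph for this session
```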
851e835e0e tidy(nodes): remove commented tests 2024-02-20 09:48:14 +11:00
fe04f28841 chore(ui): typegen 2024-02-20 09:48:14 +11:00
258fc006ec tidy(nodes): remove no-op model_config
Because we now customize the JSON Schema creation for GraphExecutionState, the model_config did nothing.
2024-02-20 09:48:14 +11:00
dcb4ee47d5 tidy(nodes): remove LibraryGraphs
The workflow library supersedes this unused feature.
2024-02-20 09:48:14 +11:00
1a56f5aaf9 tidy(nodes): move node tests to parent dir
Thanks to the resolution of the import vs union issue, we can put tests anywhere.
2024-02-20 09:48:14 +11:00
5fc745653a tidy(nodes): remove GraphInvocation
`GraphInvocation` is a node that can contain a whole graph. It is removed for a number of reasons:

1. This feature was unused (the UI doesn't support it) and there is no plan for it to be used.

The use-case it served is known in other node execution engines as "node groups" or "blocks" - a self-contained group of nodes, which has group inputs and outputs. This is a planned feature that will be handled client-side.

2. It adds substantial complexity to the graph processing logic. It's probably not enough to have a measurable performance impact but it does make it harder to work in the graph logic.

3. It allows for graphs to be recursive, and the improved invocations union handling does not play well with it. Actually, it works fine within `graph.py` but not in the tests for some reason. I do not understand why. There's probably a workaround, but I took this as encouragement to remove `GraphInvocation` from the app since we don't use it.
2024-02-20 09:48:14 +11:00
47b5a90177 fix(nodes): fix OpenAPI schema generation
The change to `Graph.nodes` and `GraphExecutionState.results` validation requires some finagling to get the OpenAPI schema generation to work. See the new comments for details.
2024-02-20 09:48:14 +11:00
81518ee1af feat(nodes): JIT graph nodes validation
We use pydantic to validate a union of valid invocations when instantiating a graph.

Previously, we constructed the union while creating the `Graph` class. This introduces a dependency on the order of imports.

For example, consider a setup where we have 3 invocations in the app:

- Python executes the module where `FirstInvocation` is defined, registering `FirstInvocation`.
- Python executes the module where `SecondInvocation` is defined, registering `SecondInvocation`.
- Python executes the module where `Graph` is defined. A union of invocations is created and used to define the `Graph.nodes` field. The union contains `FirstInvocation` and `SecondInvocation`.
- Python executes the module where `ThirdInvocation` is defined, registering `ThirdInvocation`.
- A graph is created that includes `ThirdInvocation`. Pydantic validates the graph using the union, which does not know about `ThirdInvocation`, raising a `ValidationError` about an unknown invocation type.

This scenario has been particularly problematic in tests, where we may create invocations dynamically. The test files have to be structured in such a way that the imports happen in the right order. It's a major pain.

This PR refactors the validation of graph nodes to resolve this issue:

- `BaseInvocation` gets a new method `get_typeadapter`. This builds a pydantic `TypeAdapter` for the union of all registered invocations, caching it after the first call.
- `Graph.nodes`'s type is widened to `dict[str, BaseInvocation]`. This actually is a nice bonus, because we get better type hints whenever we reference `some_graph.nodes`.
- A "plain" field validator takes over the validation logic for `Graph.nodes`. "Plain" validators totally override pydantic's own validation logic. The validator grabs the `TypeAdapter` from `BaseInvocation`, then validates each node with it. The validation is identical to the previous implementation - we get the same errors.

`BaseInvocationOutput` gets the same treatment.
2024-02-20 09:48:14 +11:00
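A minimal sketch of the JIT pattern (registration via `__init_subclass__` here is a simplification; the app registers nodes with a decorator, and the real union is discriminated on each node's `type` field):

```python
from typing import Any, Optional, Union
from pydantic import BaseModel, TypeAdapter, field_validator

_registry: list[type["BaseInvocation"]] = []  # all registered invocation classes
_adapter: Optional[TypeAdapter] = None        # cached union adapter

class BaseInvocation(BaseModel):
    def __init_subclass__(cls, **kwargs: Any) -> None:
        super().__init_subclass__(**kwargs)
        global _adapter
        _registry.append(cls)
        _adapter = None  # a new registration invalidates the cached union

    @classmethod
    def get_typeadapter(cls) -> TypeAdapter:
        global _adapter
        if _adapter is None:  # built just-in-time, after all imports have run
            _adapter = TypeAdapter(Union[tuple(_registry)])
        return _adapter

class Graph(BaseModel):
    nodes: dict[str, BaseInvocation] = {}

    @field_validator("nodes", mode="plain")  # "plain" takes over validation entirely
    @classmethod
    def _validate_nodes(cls, v: dict[str, Any]) -> dict[str, BaseInvocation]:
        adapter = BaseInvocation.get_typeadapter()
        return {k: adapter.validate_python(node) for k, node in v.items()}
```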
b06d63fb34 remove errant def that was crashing invokeai-configure 2024-02-19 17:31:53 +11:00
5278a64301 one more redundant RGB convert removed 2024-02-19 17:31:08 +11:00
4de4473c0f chore: ruff formatting 2024-02-19 17:31:08 +11:00
2c28a850ca chore(invocations): remove redundant RGB conversions 2024-02-19 17:31:08 +11:00
6dada3326d chore(invocations): use IMAGE_MODES constant literal 2024-02-19 17:31:08 +11:00
2dfdc02ec8 fix: removed custom module 2024-02-19 17:31:08 +11:00
1f19db4c6a fix(nodes): canny preprocessor uses RGBA again 2024-02-19 17:31:08 +11:00
7c150c27f2 feat(nodes): format option for get_image method
Also default CNet preprocessors to "RGB"
2024-02-19 17:31:08 +11:00
248916c190 fix: Alpha channel causing issue with DW Processor 2024-02-19 08:17:56 +11:00
be8b99eed5 final tidying before marking PR as ready for review
- Replace AnyModelLoader with ModelLoaderRegistry
- Fix type check errors in multiple files
- Remove apparently unneeded `get_model_config_enum()` method from model manager
- Remove last vestiges of old model manager
- Updated tests and documentation

resolve conflict with seamless.py
2024-02-19 08:16:56 +11:00
2ad0752582 Tidy names and locations of modules
- Rename old "model_management" directory to "model_management_OLD" in order to catch
  dangling references to original model manager.
- Caught and fixed most dangling references (still checking)
- Rename lora, textual_inversion and model_patcher modules
- Introduce a RawModel base class to simplify the Union returned by the
  model loaders.
- Tidy up the model manager 2-related tests. Add useful fixtures, and
  a finalizer to the queue and installer fixtures that will stop the
  services and release threads.
2024-02-19 08:16:56 +11:00
ba1f8878dd Fix issues identified during PR review by RyanjDick and brandonrising
- ModelMetadataStoreService is now injected into ModelRecordStoreService
  (these two services are really joined at the hip, and should someday be merged)
- ModelRecordStoreService is now injected into ModelManagerService
- Reduced timeout value for the various installer and download wait*() methods
- Introduced a Mock modelmanager for testing
- Replaced bare print() statement with _logger in the install helper backend.
- Removed unused code from model loader init file
- Made `locker` a private variable in the `LoadedModel` object.
- Fixed up model merge frontend (will be deprecated anyway!)
2024-02-19 08:16:56 +11:00
bc524026f9 feat(ui): update model identifiers to use key (#5730)
## What type of PR is this? (check all applicable)

- [x] Refactor

## Description

- Update zod schemas & types to use key instead of name/base/type
- Use new `CustomSelect` component instead of `ComboBox` for main model
select and control adapter model selects (less jank, will switch to
ComboBox based on CustomSelect for v4 so you can search the select)

## QA Instructions, Screenshots, Recordings

If you hold your breath, you should be able to generate with a control
adapter.

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

This PR can be merged when approved. Frontend tests not passing.

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch the database in any way.
-->
2024-02-16 11:17:35 -05:00
ad7c571983 fix(nodes): fix t2i adapter model loading (#5731)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

Fixes t2i adapter loading

## Merge Plan

This PR can be merged when approved

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch the database in any way.
-->
2024-02-16 11:17:21 -05:00
8559c6a392 fix(nodes): fix t2i adapter model loading 2024-02-16 22:51:47 +11:00
c7904a32f4 chore(ui): lint 2024-02-16 22:42:15 +11:00
17f5484f5b feat(ui): fix main model & control adapter model selects 2024-02-16 22:41:09 +11:00
86a372b02f refactor(ui): url builders for each router
The MM2 router is at `api/v2/models`. URL builder utils make this a bit easier to manage.
2024-02-16 21:57:30 +11:00
2e9aa9391d feat(ui): update model identifier to be key (wip)
- Update most model identifiers to be `{key: string}` instead of name/base/type. Doesn't change the model select components yet.
- Update model _parameters_, stored in redux, to be `{key: string, base: BaseModel}` - we need to store the base model to be able to check model compatibility. May want to store the whole config? Not sure...
2024-02-16 18:56:02 +11:00
0c8112cf28 fix(ui): update model types 2024-02-15 22:17:16 +11:00
019898c7be tests(ui): add type tests 2024-02-15 22:16:55 +11:00
2b1ff8d196 tests(ui): enable vitest type testing
This is useful for the zod schemas and types we have created to match the backend.
2024-02-15 22:16:11 +11:00
79fb691b4d chore(ui): typegen 2024-02-15 22:15:21 +11:00
560ae17e21 feat(ui): export components type 2024-02-15 21:16:25 +11:00
2bd1ab2f1c fix(ui): fix type issues 2024-02-15 20:53:41 +11:00
ed43472582 chore: lint 2024-02-15 20:52:44 +11:00
6e5e9176c0 chore: ruff 2024-02-15 20:50:47 +11:00
4c6bcdbc18 feat(nodes): update invocation context for mm2, update nodes model usage 2024-02-15 20:43:41 +11:00
20e6d4fa3c Raise InvalidModelConfigException when unable to detect load class in ModelLoader 2024-02-15 18:00:16 +11:00
8e51392910 Update _get_hf_load_class to support clipvision models 2024-02-15 18:00:16 +11:00
0b1c2acd61 References to context.services.model_manager.store.get_model can only accept keys, remove invalid assertion 2024-02-15 18:00:16 +11:00
86ac55ab5f Remove references to model_records service, change submodel property on ModelInfo to submodel_type to support new params in model manager 2024-02-15 18:00:16 +11:00
3e82f63c7e improve swagger documentation 2024-02-15 18:00:08 +11:00
631f6cae19 fix a number of typechecking errors 2024-02-15 18:00:08 +11:00
0845a0ed84 add route for model conversion from safetensors to diffusers
- Begin to add SwaggerUI documentation for AnyModelConfig and other
  discriminated Unions.
2024-02-15 18:00:08 +11:00
46c8ce9fed add a JIT download_and_cache() call to the model installer 2024-02-15 18:00:08 +11:00
13a9ea35b5 add back the heuristic_import() method and extend repo_ids to arbitrary file paths 2024-02-15 18:00:08 +11:00
94e8d1b6d5 make model manager v2 ready for PR review
- Replace legacy model manager service with the v2 manager.

- Update invocations to use new load interface.

- Fixed many but not all type checking errors in the invocations. Most
  were unrelated to model manager

- Updated routes. All the new routes live under the route tag
  `model_manager_v2`. To avoid confusion with the old routes,
  they have the URL prefix `/api/v2/models`. The old routes
  have been de-registered.

- Added a pytest for the loader.

- Updated documentation in contributing/MODEL_MANAGER.md
2024-02-15 18:00:08 +11:00
2b1dc74080 consolidate model manager parts into a single class 2024-02-15 17:57:14 +11:00
f7e558d165 probe for required encoder for IPAdapters and add to config 2024-02-15 17:56:01 +11:00
d959276217 fix invokeai_configure script to work with new mm; rename CLIs 2024-02-15 17:56:01 +11:00
dfcf38be91 BREAKING CHANGES: invocations now require model key, not base/type/name
- Implement new model loader and modify invocations and embeddings

- Finish implementing loaders for all models currently supported by
  InvokeAI.

- Move lora, textual_inversion, and model patching support into
  backend/embeddings.

- Restore support for model cache statistics collection (a little ugly,
  needs work).

- Fixed up invocations that load and patch models.

- Move seamless and silencewarnings utils into better location
2024-02-15 17:56:01 +11:00
fbded1c0f2 Multiple refinements on loaders:
- Cache stat collection enabled.
- Implemented ONNX loading.
- Add ability to specify the repo version variant in installer CLI.
- If caller asks for a repo version that doesn't exist, will fall back
  to empty version rather than raising an error.
2024-02-15 17:51:07 +11:00
ad2926a24c added textual inversion and lora loaders 2024-02-15 17:51:07 +11:00
34d5cad4c9 loaders for main, controlnet, ip-adapter, clipvision and t2i 2024-02-15 17:51:07 +11:00
60aa3d4893 model loading and conversion implemented for vaes 2024-02-15 17:50:51 +11:00
5c2884569e add ram cache module and support files 2024-02-15 17:50:31 +11:00
a1307b9f2e add concept of repo variant 2024-02-15 17:50:31 +11:00
f505ec64ba tests(ui): add parseFieldType.test.ts 2024-02-15 17:32:38 +11:00
f22eb368a3 feat(ui): add more types of FieldParseError
Unfortunately you cannot test for both a specific type of error and match its message. Splitting the error classes makes it easier to test expected error conditions.
2024-02-15 17:32:38 +11:00
96ae22c7e0 feat(ui): add vitest
- Add vitest.
- Consolidate vite configs into single file (easier to config everything based on env for testing)
2024-02-15 17:32:38 +11:00
f5447cdc23 feat(ui): workflow schema v3 (WIP)
The changes aim to deduplicate data between workflows and node templates, decoupling workflows from internal implementation details. A good amount of data that was needlessly duplicated from the node template to the workflow is removed.

These changes substantially reduce the file size of workflows (and therefore the images with embedded workflows):

- Default T2I SD1.5 workflow JSON is reduced from 23.7kb (798 lines) to 10.9kb (407 lines).
- Default tiled upscale workflow JSON is reduced from 102.7kb (3341 lines) to 51.9kb (1774 lines).

The trade-off is that we need to reference node templates to get things like the field type and other things. In practice, this is a non-issue, because we need a node template to do anything with a node anyways.

- Field types are not included in the workflow. They are always pulled from the node templates.

The field type is now properly an internal implementation detail and we can change it as needed. Previously this would require a migration for the workflow itself. With the v3 schema, the structure of a field type is an internal implementation detail that we are free to change as we see fit.

- Workflow nodes no longer have an `outputs` property and there is no longer such a thing as a `FieldOutputInstance`. These are only on the templates.

These were never referenced at a time when we didn't also have the templates available, and there'd be no reason to do so.

- Node width and height are no longer stored in the node.

These weren't used. Also, per https://reactflow.dev/api-reference/types/node, we shouldn't be programmatically changing these properties. A future enhancement can properly add node resizing.

- `nodeTemplates` slice is merged back into `nodesSlice` as `nodes.templates`. Turns out it's just a hassle having these separate in separate slices.

- Workflow migration logic updated to support the new schema. V1 workflows migrate all the way to v3 now.

- Changes throughout the nodes code to accommodate the above changes.
2024-02-15 17:32:38 +11:00
c76a6bd65f chore(ui): regen types 2024-02-15 17:30:03 +11:00
6c4eeaa569 feat(nodes): add more missing exports to invocation_api
Crawled through a few custom nodes to figure out what I had missed.
2024-02-15 17:30:03 +11:00
1bbd13ead7 chore(nodes): "SAMPLER_NAME_VALUES" -> "SCHEDULER_NAME_VALUES"
This was named inaccurately.
2024-02-15 17:30:03 +11:00
321b939d0e chore(nodes): remove deprecation logic for nodes API 2024-02-15 17:30:03 +11:00
8fb77e431e chore(nodes): export model-related objects from invocation_api 2024-02-15 17:30:03 +11:00
083a4f3faa chore(backend): rename ModelInfo -> LoadedModelInfo
We have two different classes named `ModelInfo` which might need to be used by API consumers. We need to export both but have to deal with this naming collision.

The `ModelInfo` I've renamed here is the one that is returned when a model is loaded. It's the object least likely to be used by API consumers.
2024-02-15 17:30:03 +11:00
2005411f7e feat(nodes): use LATENT_SCALE_FACTOR in primitives.py, noise.py
- LatentsOutput.build
- NoiseOutput.build
- Noise.width, Noise.height multiple_of
2024-02-15 17:30:03 +11:00
ba7b1b2665 feat(nodes): extract LATENT_SCALE_FACTOR to constants.py 2024-02-15 17:30:03 +11:00
b7ffd36cc6 feat(nodes): use TemporaryDirectory to handle ephemeral storage in ObjectSerializerDisk
Replace `delete_on_startup: bool` & associated logic with `ephemeral: bool` and `TemporaryDirectory`.

The temp dir is created inside of `output_dir`. For example, if `output_dir` is `invokeai/outputs/tensors/`, then the temp dir might be `invokeai/outputs/tensors/tmpvj35ht7b/`.

The temp dir is cleaned up when the service is stopped, or when it is GC'd if not properly stopped.

In the event of a catastrophic crash where the temp files are not cleaned up, the user can delete the tempdir themselves.

This situation may not occur in normal use, but if you kill the process, python cannot clean up the temp dir itself. This includes running the app in a debugger and killing the debugger process - something I do relatively often.

Tests updated.
2024-02-15 17:30:03 +11:00
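A small sketch of the pattern (class and method names below are assumed; the real class is `ObjectSerializerDisk`):

```python
import tempfile
from pathlib import Path

class EphemeralDiskStorage:
    """Sketch: keep ephemeral files in a TemporaryDirectory created inside output_dir."""

    def __init__(self, output_dir: Path, ephemeral: bool = True):
        output_dir.mkdir(parents=True, exist_ok=True)
        self._tempdir = tempfile.TemporaryDirectory(dir=output_dir) if ephemeral else None
        # e.g. "invokeai/outputs/tensors/" -> "invokeai/outputs/tensors/tmpvj35ht7b/"
        self.dir = Path(self._tempdir.name) if self._tempdir else output_dir

    def stop(self) -> None:
        if self._tempdir is not None:
            self._tempdir.cleanup()  # also runs via GC finalizer if stop() is never called
```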
199ddd6623 tests: test ObjectSerializerDisk class name extraction 2024-02-15 17:30:03 +11:00
a7207ed8cf chore(nodes): update ObjectSerializerForwardCache docstring 2024-02-15 17:30:03 +11:00
6bb2dda3f1 chore(nodes): fix pyright ignore 2024-02-15 17:30:03 +11:00
c1e5cd5893 tidy(nodes): "latents" -> "obj" 2024-02-15 17:30:03 +11:00
ff249a2315 tidy(nodes): do not unnecessarily store invoker 2024-02-15 17:30:03 +11:00
c58f8c3269 feat(nodes): make delete on startup configurable for obj serializer
- The default is to not delete on startup - feels safer.
- The two services using this class _do_ delete on startup.
- The class has "ephemeral" removed from its name.
- Tests & app updated for this change.
2024-02-15 17:30:03 +11:00
ed772a7107 fix(nodes): use metadata/board_id if provided by user, overriding WithMetadata/WithBoard-provided values 2024-02-15 17:30:03 +11:00
cb0b389b4b tidy(nodes): clarify comment 2024-02-15 17:30:03 +11:00
8892df1d97 Revert "feat(nodes): use LATENT_SCALE_FACTOR const in tensor output builders"
This reverts commit ef18fc546560277302f3886e456da9a47e8edce0.
2024-02-15 17:30:03 +11:00
bc5f356390 feat(nodes): use LATENT_SCALE_FACTOR const in tensor output builders 2024-02-15 17:30:03 +11:00
bcb85e100d tests: fix broken tests 2024-02-15 17:30:03 +11:00
1f27ddc07d tidy(nodes): minor spelling correction 2024-02-15 17:30:03 +11:00
7a2b606001 tests: add object serializer tests
These test both object serializer and its forward cache implementation.
2024-02-15 17:30:03 +11:00
83ddcc5f3a feat(nodes): allow _delete_all in obj serializer to be called at any time
`_delete_all` logged how many items it deleted, and had to be called _after_ service start because it needed access to the logger.

Move the logger call to the startup method and return the deletion stats from `_delete_all`. This lets `_delete_all` be called at any time.
2024-02-15 17:30:03 +11:00
55fa785561 tidy(nodes): remove object serializer on_saved
It's unused.
2024-02-15 17:30:03 +11:00
06429028c8 revert(nodes): revert making tensors/conditioning use item storage
Turns out they are just different enough in purpose that the implementations would be rather unintuitive. I've made a separate ObjectSerializer service to handle tensors and conditioning.

Refined the class a bit too.
2024-02-15 17:30:03 +11:00
8b6e322697 feat(nodes): support custom exception in ephemeral disk storage 2024-02-15 17:30:03 +11:00
54a67459bf feat(nodes): support custom save and load functions in ItemStorageEphemeralDisk 2024-02-15 17:30:03 +11:00
7fe5283e74 feat(nodes): create helper function to generate the item ID 2024-02-15 17:30:03 +11:00
fe0391c86b feat(nodes): use ItemStorageABC for tensors and conditioning
Turns out `ItemStorageABC` was almost identical to `PickleStorageBase`. Instead of maintaining separate classes, we can use `ItemStorageABC` for both.

There's only one change needed - the `ItemStorageABC.set` method must return the newly stored item's ID. This allows us to let the service handle the responsibility of naming the item, but still create the requisite output objects during node execution.

The naming implementation is improved here. It extracts the name of the generic and appends a UUID to that string when saving items.
2024-02-15 17:30:03 +11:00
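A sketch of the `set`-returns-ID contract and the name-plus-UUID scheme (the in-memory implementation below is illustrative; extraction of the generic's name is simplified to an explicit argument):

```python
import uuid
from abc import ABC, abstractmethod
from typing import Generic, TypeVar

T = TypeVar("T")

class ItemStorageABC(ABC, Generic[T]):
    @abstractmethod
    def set(self, item: T) -> str:
        """Store the item and return the service-generated item ID."""

class InMemoryItemStorage(ItemStorageABC[T]):
    def __init__(self, type_name: str):
        self._type_name = type_name  # e.g. "Tensor" or "ConditioningFieldData"
        self._items: dict[str, T] = {}

    def set(self, item: T) -> str:
        item_id = f"{self._type_name}-{uuid.uuid4()}"  # name of the stored type + UUID
        self._items[item_id] = item
        return item_id  # caller uses this to build output objects during node execution
```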
25386a76ef tidy(nodes): do not refer to files as latents in PickleStorageTorch (again) 2024-02-15 17:30:03 +11:00
fd30cb4d90 feat(nodes): ItemStorageABC typevar no longer bound to pydantic.BaseModel
This bound is totally unnecessary. There's no requirement for any implementation of `ItemStorageABC` to work only on pydantic models.
2024-02-15 17:30:03 +11:00
0266946d3d fix(nodes): add super init to PickleStorageTorch 2024-02-15 17:30:03 +11:00
a7f91b3e01 tidy(nodes): do not refer to files as latents in PickleStorageTorch 2024-02-15 17:30:03 +11:00
de0b72528c feat(nodes): replace latents service with tensors and conditioning services
- New generic class `PickleStorageBase`, implements the same API as `LatentsStorageBase`, use for storing non-serializable data via pickling
- Implementation `PickleStorageTorch` uses `torch.save` and `torch.load`, same as `LatentsStorageDisk`
- Add `tensors: PickleStorageBase[torch.Tensor]` to `InvocationServices`
- Add `conditioning: PickleStorageBase[ConditioningFieldData]` to `InvocationServices`
- Remove `latents` service and all `LatentsStorage` classes
- Update `InvocationContext` and all usage of old `latents` service to use the new services/context wrapper methods
2024-02-15 17:30:03 +11:00
2932652787 tidy(nodes): delete onnx.py
It doesn't work, and keeping it updated just so it wouldn't prevent the app from starting was getting tedious. Deleted.
2024-02-15 17:30:03 +11:00
db6bc7305a fix(nodes): rearrange fields.py to avoid needing forward refs 2024-02-15 17:30:02 +11:00
a5db204629 tidy(nodes): remove unnecessary, shadowing class attr declarations 2024-02-15 17:30:02 +11:00
8e2b61e19f feat(ui): revise graphs to not use LinearUIOutputInvocation
See this comment for context: https://github.com/invoke-ai/InvokeAI/pull/5491#discussion_r1480760629

- Remove this now-unnecessary node from all graphs
- Update graphs' terminal image-outputting nodes' `is_intermediate` and `board` fields appropriately
- Add util function to prepare the `board` field, tidy the utils
- Update `socketInvocationComplete` listener to work correctly with this change

I've manually tested all graph permutations that were changed (I think this is all...) to ensure images go to the gallery as expected:
- ad-hoc upscaling
- t2i w/ sd1.5
- t2i w/ sd1.5 & hrf
- t2i w/ sdxl
- t2i w/ sdxl + refiner
- i2i w/ sd1.5
- i2i w/ sdxl
- i2i w/ sdxl + refiner
- canvas t2i w/ sd1.5
- canvas t2i w/ sdxl
- canvas t2i w/ sdxl + refiner
- canvas i2i w/ sd1.5
- canvas i2i w/ sdxl
- canvas i2i w/ sdxl + refiner
- canvas inpaint w/ sd1.5
- canvas inpaint w/ sdxl
- canvas inpaint w/ sdxl + refiner
- canvas outpaint w/ sd1.5
- canvas outpaint w/ sdxl
- canvas outpaint w/ sdxl + refiner
2024-02-15 17:30:02 +11:00
a3faa3792a chore(ui): regen types 2024-02-15 17:30:02 +11:00
c16eba78ab feat(nodes): add WithBoard field helper class
This class works the same way as `WithMetadata` - it simply adds a `board` field to the node. The context wrapper function is able to pull the board id from this. This allows image-outputting nodes to get a board field "for free", and have their outputs automatically saved to it.

This is a breaking change for node authors who may have a field called `board`, because it makes `board` a reserved field name. I'll look into how to avoid this - maybe by naming this invoke-managed field `_board` to avoid collisions?

Supporting changes:
- `WithBoard` is added to all image-outputting nodes, giving them the ability to save to board.
- Unused, duplicate `WithMetadata` and `WithWorkflow` classes are deleted from `baseinvocation.py`. The "real" versions are in `fields.py`.
- Remove `LinearUIOutputInvocation`. Now that all nodes that output images also have a `board` field by default, this node is no longer necessary. See comment here for context: https://github.com/invoke-ai/InvokeAI/pull/5491#discussion_r1480760629
- Without `LinearUIOutputInvocation`, the `ImagesInferface.update` method is no longer needed, and removed.

Note: This commit does not bump all node versions. I will ensure that is done correctly before merging the PR of which this commit is a part.

Note: A followup commit will implement the frontend changes to support this change.
2024-02-15 17:30:02 +11:00
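A hedged sketch of the mixin pattern (the `board` field's type is simplified to a string here; the real field and node base classes differ):

```python
from typing import Optional
from pydantic import BaseModel, Field

class WithBoard(BaseModel):
    """Mixin that gives an image-outputting node a `board` field 'for free'."""
    board: Optional[str] = Field(default=None, description="The board to save the image to")

class SaveImageInvocation(WithBoard):  # real nodes also inherit the invocation base class
    image_name: str

node = SaveImageInvocation(image_name="foo.png", board="my-board-id")
# The context wrapper can now pull node.board when saving the output image.
```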
1a191c4655 remove unused configdict import 2024-02-15 17:30:02 +11:00
e36d925bce fix(ui): remove original l2i node in HRF graph 2024-02-15 17:30:02 +11:00
b1ba18b3d1 fix(nodes): do not freeze or cache config in context wrapper
- The config is already cached by the config class's `get_config()` method.
- The config mutates itself in its `root_path` property getter. Freezing the class makes any attempt to grab a path from the config raise an error. Unfortunately this means we cannot easily freeze the class without fiddling with the inner workings of `InvokeAIAppConfig`, which is outside the scope here.
2024-02-15 17:30:02 +11:00
aff46759f9 feat(nodes): context.data -> context._data 2024-02-15 17:30:02 +11:00
d7b7dcc7fe feat(nodes): context.__services -> context._services 2024-02-15 17:30:02 +11:00
889a26c5b6 feat(nodes): cache invocation interface config 2024-02-15 17:30:02 +11:00
b4c774896a feat(nodes): do not hide services in invocation context interfaces 2024-02-15 17:30:02 +11:00
afbe889d35 fix(nodes): restore missing context type annotations 2024-02-15 17:30:02 +11:00
9c1e52b1ef tests(nodes): fix mock InvocationContext 2024-02-15 17:30:02 +11:00
3f5ab02da9 chore(nodes): add comments for ConfigInterface 2024-02-15 17:30:02 +11:00
bf48e8a03a feat(nodes): export more things from `invocation_api` 2024-02-15 17:30:02 +11:00
e52434cb99 feat(nodes): add boards interface to invocation context 2024-02-15 17:30:02 +11:00
483bdbcb9f fix(nodes): restore type annotations for InvocationContext 2024-02-15 17:30:02 +11:00
ae421fb4ab feat(nodes): do not freeze InvocationContextData, freezing prevents it from being subclassed 2024-02-15 17:30:02 +11:00
cc295a9f0a feat: tweak pyright config 2024-02-15 17:30:02 +11:00
a7e23af9c6 feat(nodes): create invocation_api.py
This is the public API for invocations.

Everything a custom node might need should be re-exported from this file.
2024-02-15 17:30:02 +11:00
3de4390711 feat(nodes): move ConditioningFieldData to conditioning_data.py 2024-02-15 17:30:02 +11:00
3ceee2b2b2 tests: fix missing arg for InvocationContext 2024-02-15 17:30:02 +11:00
5c7ed24aab feat(nodes): restore previous invocation context methods with deprecation warnings 2024-02-15 17:30:02 +11:00
183c9c4799 chore: ruff 2024-02-15 17:30:02 +11:00
8baf3f78a2 feat(nodes): tidy invocation_context.py, improve comments 2024-02-15 17:30:02 +11:00
ac2eb16a65 tests: fix tests for new invocation context 2024-02-15 17:30:02 +11:00
4aa7bee4b9 docs: update INVOCATIONS.md 2024-02-15 17:30:02 +11:00
7e5ba2795e feat(nodes): update all invocations to use new invocation context
Update all invocations to use the new context. The changes are all fairly simple, but there are a lot of them.

Supporting minor changes:
- Patch bump for all nodes that use the context
- Update invocation processor to provide new context
- Minor change to `EventServiceBase` to accept a node's ID instead of the dict version of a node
- Minor change to `ModelManagerService` to support the new wrapped context
- Fanagling of imports to avoid circular dependencies
2024-02-15 17:30:02 +11:00
97a6c6eea7 feat: add pyright config
I was having issues with mypy both over- and under-reporting certain problems. I've added a pyright config.
2024-02-15 17:30:02 +11:00
f0e60a4ba2 feat(nodes): restricts invocation context power
Creates a low-power `InvocationContext` with simplified methods and data.

See `invocation_context.py` for detailed comments.
2024-02-15 17:30:02 +11:00
aa089e8108 tidy(nodes): move all field things to fields.py
Unfortunately, this is necessary to prevent circular imports at runtime.
2024-02-15 17:30:02 +11:00
c5aeb36230 fix: repair Dockerfile for ROCm
With these changes, the Docker image can be built and executed
successfully on hosts with AMD devices with ROCm acceleration.
Previously, a ROCm-enabled version of torch would be installed, but
later removed during installation of InvokeAI itself. This was caused by
InvokeAI needing a newer torch version than was previously installed.

The fix consists of multiple components:
* Update the hardcoded versions of torch and torchvision to the versions
  currently used in pyproject.toml, so that a new version need not be
  installed during installation of InvokeAI.
* Specify --extra-index-url on installation of InvokeAI so that even if
  a version mismatch occurs, the correct torch version should still be
  installed. This also necessitates changing --index-url to
  --extra-index-url for the Torch repo. Otherwise non-torch dependencies
  would not be found.
* In run.sh, build the image for the selected service.
2024-02-14 22:25:40 -05:00
5e77f0d93b Reorder exposed fields in workflow tab (#5711)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example, having the text "closes #1234" would connect the current pull request to issue 1234. And when we merge the pull request, GitHub will automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2024-02-14 18:32:19 -05:00
d3acb81743 Merge branch 'main' into chainchompa/reorder-exposed-fields 2024-02-14 18:26:35 -05:00
e0f2404c00 added reset to default back in, removed unneeded activation constraints 2024-02-14 18:07:15 -05:00
5ed7972e5f merge conflict 2024-02-14 17:28:59 -05:00
792131be01 added drag icon, added vertical strategy for smoother scrolling 2024-02-14 17:27:21 -05:00
fc278c5cb1 fix(images_default): correct get_metadata error message
The error was misleading, indicating an issue with getting the image DTO, when it was actually an issue with getting metadata.
2024-02-14 16:21:39 -05:00
d7f6af1f07 possible fix: seamless not being seamless with baked 2024-02-14 16:13:11 -05:00
ff9bd040cc possible fix: Seamless not working with Custom VAEs 2024-02-14 16:13:11 -05:00
17d5f7bebd Critical Space Removal 2024-02-14 16:13:11 -05:00
30dae0f5aa adding back skipped layer 2024-02-14 16:13:11 -05:00
161000cde6 Merge branch 'main' into chainchompa/reorder-exposed-fields 2024-02-14 15:00:54 -05:00
de832f6862 formatting 2024-02-14 15:00:18 -05:00
21ba3c63de cleanup 2024-02-14 14:52:48 -05:00
a948bd1310 refactored dndsortable to be its own component 2024-02-14 14:47:28 -05:00
2071972a8c refactored to just use a new dnd context, got reordering working and fixed flicker 2024-02-14 14:20:08 -05:00
5ed2f6e6c1 bump 2024-02-14 10:15:50 -05:00
b77f6bd0ad Update accelerate 0.26.1 -> 0.27.0 2024-02-14 10:15:50 -05:00
34cc26a4ed revert to using fetch, add token if needed (#5720)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-02-14 10:04:12 -05:00
9d6e4ff1fb workflow tab (#5680)
* new workflow tab UI - still using shared state with workflow editor tab

* polish workflow details

* remove workflow tab, add edit/view mode to workflow slice and get that working to switch between within editor tab

* UI updates for view/edit mode

* cleanup

* add warning to view mode

* lint

* start with isTouched false

* working on styling mode toggle

* more UX iteration

* lint

* cleanup

* save original field values to state, add indicator if they have been changed and give user choice to reset

* lint

* fix import and commit translation

* dont switch to view mode when loading a workflow

* warns before clearing editor

* use folder icon

* fix(ui): track do not erase value when resetting field value

- When adding an exposed field, we need to add it to originalExposedFieldValues
- When removing an exposed field, we need to remove it from originalExposedFieldValues
- add `useFieldValue` and `useOriginalFieldValue` hooks to encapsulate related logic

* feat(ui): use IconButton for workflow view/edit button

* feat(ui): change icon for new workflow

It was the same as the workflow tab icon, confusing bc you think it's going to somehow take you to the tab.

* feat(ui): use render props for NewWorkflowConfirmationAlertDialog

There was a lot of potentially sensitive logic shared between the new workflow button and menu items. Also, two instances of ConfirmationAlertDialog.

Using a render prop deduplicates the logic & components

* fix(ui): do not mark workflow touched when loading workflow

This was occurring because the `nodesChanged` action is called by reactflow when loading a workflow. Specifically, it calculates and sets the node dimensions as it loads.

The existing logic set `isTouched` whenever this action was called.

The changes reactflow emits have types, and we can use the change types and data to determine if a change should result in the workflow being marked as touched.

* chore(ui): lint

* chore(ui): lint

* delete empty file

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-02-14 09:02:07 -05:00
85bbf65967 only refetch intermediates on modal open if it is enabled 2024-02-14 09:47:15 +11:00
3726293258 feat(nodes): improve types in graph.py
Methods `get_node` and `complete` were typed as returning the dynamically created unions `InvocationsUnion` and `InvocationOutputsUnion`, respectively.

Static type analysers cannot work with dynamic objects, so these methods end up as effectively un-annotated, returning `Unknown`.

They now return `BaseInvocation` and `BaseInvocationOutput`, respectively, which are the superclasses of all members of each union. This gives us the best type annotation that is possible.

Note: the return types of these methods are never introspected, so it doesn't really matter what they are at runtime.
2024-02-14 07:56:10 +11:00
8bd65be8c8 Quick Seamless Fixes (#5685)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: It's small

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
This pulls out some of the updates from the WIP Seamless branch that has
yet to be completed, and hardcodes values that are exposed in that
branch. Given that seamless currently does not generate seamless
textures, and this fix results in seamless outputs, it's an improvement
even if it doesn't resolve this in a "perfect" way that exposes all
variables to the end user.

Better over perfect.


![f07b7e49-80c2-4659-bb36-d50ec80b1f8b](https://github.com/invoke-ai/InvokeAI/assets/31807370/36a40bd9-8fc4-41d5-bd1e-209fc828987e)
2024-02-13 11:08:07 -07:00
783442c40d Merge branch 'main' into SeamlessFixes 2024-02-13 10:38:55 -07:00
8a147bd6e6 added sortable to linear view, not saving yet 2024-02-13 11:53:49 -05:00
273994b742 chore: bump diffusers 0.26.2 -> 0.26.3
https://github.com/huggingface/diffusers/releases/tag/v0.26.3

This fixes an issue with `DPMSolverSinglestepScheduler` with even numbers of steps.
2024-02-13 08:40:42 -05:00
3339ad4df8 feat(nodes): seamless.py minor cleanup 2024-02-13 13:34:48 +11:00
c3b2a8cb27 Quick Seamless Fixes 2024-02-13 13:34:48 +11:00
daa780940b translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-02-13 13:20:30 +11:00
2289680ae1 translationBot(ui): update translation (Italian)
Currently translated at 97.2% (1377 of 1416 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-02-13 13:20:30 +11:00
cda85a0637 translationBot(ui): update translation (German)
Currently translated at 79.4% (1128 of 1419 strings)

translationBot(ui): update translation (German)

Currently translated at 78.1% (1107 of 1416 strings)

Co-authored-by: B N <berndnieschalk@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-02-13 13:20:30 +11:00
1d9801e7be fix(ui): add input el for workflow upload button
Need this to select the file
2024-02-13 13:18:31 +11:00
3ecb1e580f update bc button is only ever used in modal context 2024-02-13 13:18:31 +11:00
6301e58a2e move upload button into workflow library modal 2024-02-13 13:18:31 +11:00
5dd552effa Update 020_INSTALL_MANUAL.md (#5700)
updated the commands for running InvokeAI locally and as a web server

Co-authored-by: Millun Atluri <Millu@users.noreply.github.com>
2024-02-13 00:36:00 +00:00
25ce505628 exposed field loading state (#5704)
* remove thunk for receivedOpenApiSchema and use RTK query instead. add loading state for exposed fields

* clean up

* ignore any

* fix(ui): do not log on canceled openapi.json queries

- Rely on RTK Query for the `loadSchema` query by providing a custom `jsonReplacer` in our `dynamicBaseQuery`, so we don't need to manage error state.
- Detect when the query was canceled and do not log the error message in those situations.

* feat(ui): `utilitiesApi.endpoints.loadSchema` -> `appInfoApi.endpoints.getOpenAPISchema`

- Utilities is for server actions, move this to `appInfo` bc it fits better there.
- Rename to match convention for HTTP GET queries.
- Fix inverted logic in the `matchRejected` listener (typo'd this)

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-02-12 18:48:32 -05:00
1dd07fb1eb Updated docs on OpenPose 2024-02-12 11:12:45 -05:00
e82c21b5ba chore: rename DWPose to DW Openpose 2024-02-12 11:12:45 -05:00
50b93992cf cleanup: Remove Openpose Image Processor 2024-02-12 11:12:45 -05:00
f8e566d62a cleanup: unused util functions 2024-02-12 11:12:45 -05:00
f588b95c7f cleanup: remove unused code from the DWPose implementation 2024-02-12 11:12:45 -05:00
67daf1751c fix: lint errors 2024-02-12 11:12:45 -05:00
7d80261d47 chore: Add code attribution for the DWPoseDetector 2024-02-12 11:12:45 -05:00
67cbfeb33d feat: Add output image resizing for DWPose 2024-02-12 11:12:45 -05:00
f7998b4be0 feat: Add DWPose to Linear UI 2024-02-12 11:12:45 -05:00
675c73c94f fix: ruff lint errors 2024-02-12 11:12:45 -05:00
0a27b0379f feat: Initial implementation of DWPoseDetector 2024-02-12 11:12:45 -05:00
0ef18b6477 fix(ui): enable lora when recalling
Closes #5698
2024-02-12 16:47:46 +11:00
6539ef7c9f {release} v3.6.3 (#5696)
## What type of PR is this? (check all applicable)
Release Invoke 3.6.3


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Invoke 3.6.3 Release



## QA Instructions, Screenshots, Recordings
Test the installer:
[InvokeAI-installer-v3.6.3.zip](https://github.com/invoke-ai/InvokeAI/files/14233359/InvokeAI-installer-v3.6.3.zip)

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan
Merge once approved
<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
## [optional] Are there any post deployment tasks we need to perform?
1. Release on PyPi & GitHub
2. Announce on Discord
2024-02-11 16:02:30 -05:00
14c9a1e4f3 Merge branch 'main' into release/3.6.3 2024-02-11 15:36:05 -05:00
64b0feca31 Update ruff 2024-02-11 15:24:28 -05:00
0be9a2d906 Update string formatting 2024-02-11 15:24:28 -05:00
d925f721b9 fix references to .env.sample (#5695)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: it is text only, simple, and (hopefully) self-evident

      
## Have you updated all relevant documentation?
- [x] Yes - as far as I can grep.
- [ ] No


## Description

`.env.sample` was misspelled as `env.sample` in a few places.

This changes documentation only. You may need to re-build/deploy docs,
I'm not sure.
2024-02-11 13:43:14 -05:00
4e5be1891a {release} v3.6.3 2024-02-11 10:34:47 -07:00
156d4ec3b2 fix references to .env.sample 2024-02-10 21:11:22 -08:00
c45a43519a chore: bump deps
- ruff 0.1.11 -> 0.2.1
- update config format
2024-02-11 08:50:49 +11:00
763816ca0c chore: bump deps
- pydantic 2.5.3 -> 2.6.1
- uvicorn 0.25.0 -> 0.27.1
2024-02-11 08:50:49 +11:00
83a7c9059f translationBot(ui): update translation (German)
Currently translated at 78.1% (1107 of 1416 strings)

Co-authored-by: B N <berndnieschalk@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-02-11 08:40:55 +11:00
c5f069a255 feat(backend): remove dependency on basicsr
`basicsr` has a hard dependency on torchvision <= 0.16 and is unmaintained. Extract the code we need from it and remove the dep.

Closes #5108
2024-02-11 08:34:54 +11:00
cd169ee082 fix(nodes): deep copy graph inputs (#5686)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

The change to memory session storage brings a subtle behaviour change.

Previously, we serialized and deserialized everything (e.g. field state,
invocation outputs, etc) constantly. This meant we were effectively
working with deep-copied objects at all times. We could mutate objects
freely without worrying about other references to the object.

With memory storage, objects are now passed around by reference, and we
cannot handle them in the same way.

This is problematic for nodes that mutate their own inputs. There are
two ways this causes a problem:

- An output is used as input for multiple nodes. If the first node
mutates the output object while `invoke`ing, the next node will get the
mutated object.
- The invocation cache stores live python objects. When a node mutates
an output pulled from the cache, the next node that uses the cached
object will get the mutated object.

The solution is to deep-copy a node's inputs as they are set,
effectively reproducing the same behaviour as we had with the SQLite
session storage. Nodes can safely mutate their inputs and those changes
never leave the node's scope.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Closes  #5665

The root issue affects CLIP Skip because that node mutates its input
`ClipField`. Specifically, it increments `self.clip.skipped_layers` and
passes `self.clip` as its output. I don't know if there are any other
nodes that do this.

## QA Instructions, Screenshots, Recordings

Two issues to reproduce. 

First is the caching issue:


![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/7a251e48-bc70-4b8e-8816-84aac41ce4d3)

Note the cache is enabled. Run this simple graph a couple times, and
check the outputs of the CLIP Skip node. You'll see the `skipped_layers`
value increasing each time.

Second is the nodes-sharing-inputs issue:


![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/ecdaefab-2beb-4950-b4bf-2a5738ce6832)

Note the cache is _disabled_. Run the graph a couple times and check the
outputs of the two CLIP Skip nodes. You'll see that one has the expected
value for `skipped_layers` and the other has double that.

Now update to the PR and try again. You should see `skipped_layers` is
the right value in all cases.

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

This PR can be merged when approved. It needs a real review with
braintime.

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
2024-02-09 13:24:10 -05:00
66b106f107 Merge branch 'main' into fix/nodes/deep-copy-inputs 2024-02-09 11:49:16 -05:00
b10d745dae fix(ui): when using control image dimensions, round to 8
The control image dimensions were set directly without rounding them to 8, causing an error during generation if they weren't a multiple of 8.
2024-02-09 08:44:11 -05:00
d20f98fb4f fix(nodes): deep copy graph inputs
The change to memory session storage brings a subtle behaviour change.

Previously, we serialized and deserialized everything (e.g. field state, invocation outputs, etc) constantly. This meant we were effectively working with deep-copied objects at all times. We could mutate objects freely without worrying about other references to the object.

With memory storage, objects are now passed around by reference, and we cannot handle them in the same way.

This is problematic for nodes that mutate their own inputs. There are two ways this causes a problem:

- An output is used as input for multiple nodes. If the first node mutates the output object while `invoke`ing, the next node will get the mutated object.
- The invocation cache stores live python objects. When a node mutates an output pulled from the cache, the next node that uses the cached object will get the mutated object.

The solution is to deep-copy a node's inputs as they are set, effectively reproducing the same behaviour as we had with the SQLite session storage. Nodes can safely mutate their inputs and those changes never leave the node's scope.

Closes  #5665
2024-02-09 21:17:32 +11:00
c9c150f850 feat(ui): use cfgRescaleMultiplier on canvas graphs 2024-02-09 18:53:08 +11:00
a60e2b7c77 fix existing graphs with cfg_RescaleMultiplier not used 2024-02-09 18:53:08 +11:00
da6e5b2ba1 fix(ui): fix lora count badge when none enabled 2024-02-08 19:22:28 -05:00
c65d497cbc fix(ui): filter disabled LoRAs on sdxl 2024-02-08 19:22:28 -05:00
a68d8fe203 translationBot(ui): update translation (German)
Currently translated at 74.4% (1054 of 1416 strings)

translationBot(ui): update translation (German)

Currently translated at 69.6% (986 of 1416 strings)

translationBot(ui): update translation (German)

Currently translated at 68.6% (972 of 1416 strings)

Co-authored-by: B N <berndnieschalk@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-02-09 10:10:50 +11:00
5de2288cfa addressed feedback 2024-02-09 10:09:27 +11:00
2ce70b4457 added button on hover for exposing fields to linear workflow ui 2024-02-09 10:09:27 +11:00
6c5f743e2b Upgrade version of fastapi and socketio 2024-02-09 09:04:01 +11:00
bb242c4e1e Print correct version when a non-default version is selected for install (#5675)

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description

Small bugfix: the installer would always print the latest stable version
as the one to be installed, even if a different one was selected. The
selected version would still be installed correctly. This PR fixes the
message.

## QA Instructions, Screenshots, Recordings

Select a pre-release version on install and observe the correct version
being printed. Compare to current behaviour to ascertain the fix.

## Merge Plan

- "This PR can be merged when approved"

## Added/updated tests?

- [ ] Yes
- [x] No
2024-02-08 11:07:14 -05:00
c9e246ed1b fix(installer): print correct version when a non-default version is selected 2024-02-08 09:56:56 -05:00
2175fe3823 translationBot(ui): update translation (German)
Currently translated at 66.2% (938 of 1416 strings)

Co-authored-by: B N <berndnieschalk@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-02-08 07:57:55 +11:00
f64fc2c8b7 feat(installer): add a deprecation message to the in-launcher updater 2024-02-07 14:31:26 -05:00
3d1b5c57ea fix(installer): more reliably upgrade pip 2024-02-07 14:31:26 -05:00
31b9538976 feat(installer): improve directory selection experience 2024-02-07 14:31:26 -05:00
97c1545cca feat(installer): show latest versions in the welcome panel 2024-02-07 14:31:26 -05:00
6a8a3b50bc feat(installer): add an interactive version chooser 2024-02-07 14:31:26 -05:00
5a816818dc feat(installer): get list of (pre-)releases from github api 2024-02-07 14:31:26 -05:00
1cb866d1fc fix(installer): small formatting fix in welcome banner 2024-02-07 14:31:26 -05:00
29bcc4b595 fix(installer): slightly better typing for GPU selection 2024-02-07 14:31:26 -05:00
ca2bb6f0cc fix(installer): bubble up exceptions during install 2024-02-07 14:31:26 -05:00
1c8fc908b2 fix(installer): minor logic fixes 2024-02-07 14:31:26 -05:00
d397beaa47 fix(installer): upgrade the temporary pip before installation 2024-02-07 14:31:26 -05:00
60eea09629 feat(installer): *always* force-reinstall
This has repeatedly shown itself useful in fixing install issues,
especially regarding pytorch CPU/GPU version, so there is little
downside to making this the default.

Performance impact of this should be negligible. Packages will
be reinstalled from pip cache if possible, and downloaded only if
necessary. Impact may be felt on slower disks.
2024-02-07 14:31:26 -05:00
5b7b1122cb tidy(installer): clean up unused code 2024-02-07 14:31:26 -05:00
dfc8d1bb10 tidy(installer): remove unused argument / env var 2024-02-07 14:31:26 -05:00
f9fa62164e tidy(installer): remove .whl publishing and bundling - we now install from pypi 2024-02-07 14:31:26 -05:00
d47905d2fb chore(installer): reorder messages in util script
fail fast if there's a virtualenv activated
2024-02-07 14:31:26 -05:00
03b1cde97d tidy(installer): remove unused update scripts and references thereto 2024-02-07 14:31:26 -05:00
7162ff04df tidy(installer): do not preinstall torch separately 2024-02-07 14:31:26 -05:00
32b1e974ca feat(installer): install from PyPi instead of using prepackaged wheel 2024-02-07 14:31:26 -05:00
82c3c7fc38 tidy(installer): remove unused experimental venv location 2024-02-07 14:31:26 -05:00
3dcbb79ef7 chore(installer): typing pass 2024-02-07 14:31:26 -05:00
3b41104427 Minor dep updates for diffusers and numpy (#5673)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [X] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because probably not needed

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description

These are more minor dep updates that I was able to test without any
regressions. This will ensure we are up to date again.
The fixes are very minor, probably not noticeable in InvokeAI (at least
for diffusers), but it's still good to have them.

This is also to make sure that the RC is releasing with the latest
packages to ensure extended testing.

Greetings

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2024-02-07 13:29:29 -05:00
35bf7ee66d Minor dep updates 2024-02-07 17:58:28 +01:00
430e17a5d2 community node: BriaAI RMBG 1.4 (#5671)
## What type of PR is this? (check all applicable)

- [x] Community Node Submission


## Description

- Adds BriaAI's new 1.4 model for background removal. Far superior
results from what I've tested compared to any other BG removal so far:
https://github.com/blessedcoolant/invoke_bria_rmbg
2024-02-07 11:06:31 -05:00
400d66fa5d community node: BriaAI RMBG 1.4 2024-02-07 19:55:04 +05:30
800c481515 add actions for workflow library (#5669)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-02-07 14:14:54 +00:00
79ae9c4e64 feat(nodes): move profiler/stats cleanup logic to function
Harder to miss something going forward.
2024-02-07 11:26:15 +11:00
0dc6cb0535 feat(nodes): do not log stats errors
The stats service was logging error messages when attempting to retrieve stats for a graph that it wasn't tracking. This was rather noisy.

Instead of logging these errors within the service, we now will just raise the error and let the consumer of the service decide whether or not to log. Our usage of the service at this time is to suppress errors - we don't want to log anything to the console.

Note: With the improvements in the previous two commits, we shouldn't get these errors moving forward, but I still think this change is correct.
2024-02-07 11:26:15 +11:00
810fc19e43 feat(nodes): log stats for canceled graphs
When an invocation is canceled, we consider the graph canceled. Log its graph's stats before resetting its graph's stats. No reason to not log these stats.

We also should stop the profiler at this point, because this graph is finished. If we don't stop it manually, it will stop itself and write the profile to disk when it is next started, but the resultant profile will include more than just its target graph.

Now we get both stats and profiles for canceled graphs.
2024-02-07 11:26:15 +11:00
e0e106367d fix(nodes): do not clear invocation stats on invoke error
When an invocation errored, we cleared the stats for the whole graph. Later on, we check the graph for errors, see the failed invocation, and consider the graph failed. We then attempt to log the stats for the failed graph.

Except now the failed graph has no stats, and the stats service raises an error.

The user sees, in the terminal:
- An invocation error
- A stats error (scary!)
- No stats for the failed graph (uninformative!)

What the user should see:
- An invocation error
- Graph stats

The fix is simple - don't reset the graph stats when an invocation has an error.
2024-02-07 11:26:15 +11:00
14472dc09d translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-02-05 11:16:38 +11:00
e8095b73ae feat(ui): improve types for language picker
Makes it impossible to miss a language or typo.
2024-02-05 10:47:36 +11:00
c979cf5ecc tidy(ui): remove language translation strings
There's no need to do things like translate Arabic into Finnish. We never use those strings. Remove these translations entirely.
2024-02-05 10:47:36 +11:00
1b4dbd283e fix(ui): hardcode language picker languages
Hardcode the options in the dropdown, don't rely on translators to fill this in.

Also, add a number of missing languages (Azerbaijani, Finnish, Hungarian, Swedish, Turkish).
2024-02-05 10:47:36 +11:00
fb50a221f8 fix(ui): fix color input field alpha
Closes #5647

The alpha values in the UI are `0-1` but the backend wants `0-255`.

Previously, this was handled in `parseFieldValue` when building the graph. In a recent release, field types were refactored and broke the alpha handling.

The logic for handling alpha values is moved into `ColorFieldInputComponent`, and `parseFieldValue` now just does no value transformations.

Though it would be a minor change, I'm leaving this function in because I don't want to change the rest of the logic except when necessary.
2024-02-05 09:28:20 +11:00
52e07db06b Update communityNodes.md
added Autostereogram nodes
2024-02-05 09:26:41 +11:00
6643b5cec4 feat(ui): log trace when skipping reserved input field type 2024-02-05 09:24:46 +11:00
e8bf9ea058 fix(ui): do not swallow errors during schema parsing
Unknown errors were swallowed during schema parsing. Now they log a warning.
2024-02-05 09:24:46 +11:00
ce3d37e829 fix(ui): handle fields with single option literal
Closes #5616

Turns out the OpenAPI schema definition for a pydantic field with a `Literal` type annotation is different depending on the number of options.

When there is a single value (e.g. `Literal["foo"]`), this results in a `const` schema object. The schema parser didn't know how to handle this, and displayed a warning in the JS console.

This situation is now handled. When a `const` schema object is encountered, we interpret that as an `EnumField` with a single option.

I think this makes sense - if you had a truly constant value, you wouldn't make it a field, so a `const` must mean a dynamically generated enum that ended up with only a single option.
2024-02-05 09:15:09 +11:00
8a61063e84 translationBot(ui): update translation (Turkish)
Currently translated at 57.5% (825 of 1433 strings)

Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2024-02-05 08:31:44 +11:00
87ff96553a translationBot(ui): update translation files
Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-02-05 08:31:44 +11:00
209bf105bc translationBot(ui): update translation (Turkish)
Currently translated at 57.3% (822 of 1433 strings)

Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2024-02-05 08:31:44 +11:00
804dbeba34 translationBot(ui): update translation files
Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-02-05 08:31:44 +11:00
067cd4dc2e translationBot(ui): update translation (Turkish)
Currently translated at 40.6% (582 of 1433 strings)

translationBot(ui): update translation (Turkish)

Currently translated at 38.8% (557 of 1433 strings)

Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2024-02-05 08:31:44 +11:00
feb4a3f242 translationBot(ui): update translation (Azerbaijani)
Currently translated at 0.1% (1 of 1433 strings)

translationBot(ui): added translation (Azerbaijani)

Co-authored-by: Mehrab Poladov <thepoladov@protonmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/az/
Translation: InvokeAI/Web UI
2024-02-05 08:31:44 +11:00
4a886c0a4a Minor dep updates 2024-02-04 13:04:36 -05:00
8e500283b6 Fix broken import in checkpoint_convert (#5635)
* Fix broken import in checkpoint_convert

* simplify the fix

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-02-04 12:56:51 +00:00
3205371654 feat(ui): better error handling for persist serialize function 2024-02-03 07:39:19 -05:00
d713620d9e refactor(ui): refactor reducer list
Instead of manually naming reducers, use each slice's `name` property. Makes typos impossible.
2024-02-03 07:39:19 -05:00
c1300fa8b1 refactor(ui): refactor persist config
Add more structure around persist configs to avoid bugs from typos and misplaced persist denylists.
2024-02-03 07:39:19 -05:00
0976ddba23 chore(invocation-stats): improve types in _prune_stale_stats 2024-02-03 07:34:06 -05:00
3ebb806410 fix(invocation-stats): use appropriate method to get the type of an invocation 2024-02-03 07:34:06 -05:00
9f274c79dc chore(item-storage): improve types
Provide type args to the generics.
2024-02-03 07:34:06 -05:00
88c08bbfc7 fix(item-storage-memory): throw when requested item does not exist
- `ItemStorageMemory.get` now throws an `ItemNotFoundError` when the requested `item_id` is not found.
- Update docstrings in ABC and tests.

The new memory item storage implemented the `get` method incorrectly, returning `None` if the item didn't exist.

The ABC typed `get` as returning `T`, while the SQLite implementation typed `get` as returning `Optional[T]`. The SQLite implementation was referenced when writing the memory implementation.

This mismatched typing is a violation of the Liskov substitution principle, because the signature of `get` in the implementation is wider than the abstract class's definition. Using `pyright` in strict mode catches this.

In `invocation_stats_default`, this introduced an error. The `_prune_stats` method calls `get`, expecting the method to throw if the item is not found. If the graph is no longer stored in the bounded item storage, we will call `is_complete()` on `None`, causing the error.

Note: This error condition never arose in the SQLite implementation because it parsed the item with pydantic before returning it, which would throw if the item was not found. It implicitly threw, while the memory implementation did not.
2024-02-03 07:34:06 -05:00
c2af124622 fix(ui): refetch intermediates count when settings modal open
The `getIntermediatesCount` query is set to `refetchOnMountOrArgChange`. The intention was that when the settings modal opens (i.e. mounts), the `getIntermediatesCount` query would be refetched. But it doesn't work - modals only mount once; there is no lazy rendering for them.

So we have to refetch imperatively as we open the modal.

Closes #5639
2024-02-03 12:14:37 +11:00
f972fe9836 perf: annotate 2024-02-03 10:18:26 +11:00
dcfc883ab3 perf: remove TypeAdapter 2024-02-03 10:18:26 +11:00
1d2bd6b8f7 perf: TypeAdapter instantiated once 2024-02-03 10:18:26 +11:00
f2777f5096 Port the command-line tools to use model_manager2 (#5546)
* Port the command-line tools to use model_manager2

1.Reimplement the following:

  - invokeai-model-install
  - invokeai-merge
  - invokeai-ti

  To avoid breaking the original model manager, the updated tools
  have been renamed invokeai-model-install2 and invokeai-merge2. The
  textual inversion training script should continue to work with
  existing installations. The "starter" models now live in
  `invokeai/configs/INITIAL_MODELS2.yaml`.

  When the full model manager 2 is in place and working, I'll rename
  these files and commands.

2. Add the `merge` route to the web API. This will merge two or three models,
   resulting in a new one.

   - Note that because the model installer selectively installs the `fp16` variant
     of models (rather than both 16- and 32-bit versions as previously),
     the diffusers merge script will choke on any huggingface diffusers models
     that were downloaded with the new installer. Previously-downloaded models
     should continue to merge correctly. I have a PR
     upstream https://github.com/huggingface/diffusers/pull/6670 to fix
     this.

3. (more important!)
  During implementation of the CLI tools, found and fixed a number of small
  runtime bugs in the model_manager2 implementation:

  - During model database migration, if a registered models file was
    not found on disk, the migration would be aborted. Now the
    offending model is skipped with a log warning.

  - Caught and fixed a condition in which the installer would download the
    entire diffusers repo when the user provided a single `.safetensors`
    file URL.

  - Caught and fixed a condition in which the installer would raise an
    exception and stop the app when a request for an unknown model's metadata
    was passed to Civitai. Now an error is logged and the installer continues.

  - Replaced the LoWRA starter LoRA with FlatColor. The former has been removed
    from Civitai.

* fix ruff issue

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-02-02 17:18:47 +00:00
d3320dc4ee convert checkpoints to safetensors (#5620)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description

It seems we elected to convert checkpoints into .bin files when we set
this up. Converting to .safetensors doesn't seem to corrupt them anymore.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2024-02-02 10:27:24 -05:00
72db2ee352 Merge branch 'main' into sdxl-convert-safetensors 2024-02-02 10:10:49 -05:00
60c3a4ad5e chore: add Hand Refiner to communityNodes.md 2024-02-02 08:12:32 -05:00
cf7a7928af Update mkdocs.yml 2024-02-01 20:43:49 -05:00
1057314508 Fix ruff? 2024-02-01 20:40:28 -05:00
73a077956b Why did my IDE change the comment? 2024-02-01 20:40:28 -05:00
5e1e50bd47 Fix hopefully last import 2024-02-01 20:40:28 -05:00
413fe566b8 Fix imports 2024-02-01 20:40:28 -05:00
c9b5f06c42 Update diffusers + hotfix 2024-02-01 20:40:28 -05:00
b53e432b0f translationBot(ui): update translation (German)
Currently translated at 60.8% (871 of 1432 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-02-02 11:16:45 +11:00
88164447e9 fix(ui): hide HRF if SDXL model selected 2024-02-02 11:10:54 +11:00
1ac85fd049 tidy(migrator): remove logic to check if graph_executions exists in migration 5
Initially I wanted to show how many sessions were being deleted. In hindsight, this is not great:
- It requires extra logic in the migrator, which should be as simple as possible.
- It may be alarming to see "Clearing 224591 old sessions".

The app still reports on freed space during the DB startup logic.
2024-02-02 09:20:41 +11:00
ee6fc4ab1d chore(item_storage): excise SqliteItemStorage 2024-02-02 09:20:41 +11:00
9f793bdae8 feat(item_storage): implement item_storage_memory with LRU eviction strategy
Implemented with OrderedDict.
2024-02-02 09:20:41 +11:00
a0eecaecd0 feat(item_storage): implement item_storage_memory max_size
Implemented with unordered dict and set.
2024-02-02 09:20:41 +11:00
d532073f5b fix(db): check for graph_executions table before dropping
This is needed to not fail tests; see comment in code.
2024-02-02 09:20:41 +11:00
198e8c9d55 feat(db): add migration 5 to drop graph_executions table 2024-02-02 09:20:41 +11:00
30367deeca feat(nodes): use memory item storage 2024-02-02 09:20:41 +11:00
e73298aea2 tidy(item_storage): remove extraneous class attribute declarations 2024-02-02 09:20:41 +11:00
59279851a3 tidy(item_storage): remove unused list and search methods 2024-02-02 09:20:41 +11:00
2965357d99 feat(nodes): add ItemStorageMemory
The sqlite item storage class can be swapped for this to eliminate costly network calls.
2024-02-02 09:20:41 +11:00
8bd32ee142 feat(nodes): add delete method to ItemStorageABC 2024-02-02 09:20:41 +11:00
a4f892dcfb tidy(nodes): remove unused get_raw method on ItemStorageABC 2024-02-02 09:20:41 +11:00
e675983e20 fix(ui): download image opens in new tab (#5625)
* fix(ui): download image opens in new tab

In some environments, a simple `a` element cannot trigger a download of an image. Fetching the image directly can get around this and provide more reliable download functionality.

* use hook for imageUrlToBlob so token gets sent if needed

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-02-01 20:25:01 +00:00
e9558f97c4 perf(config): change default png_compress_level to 1
This substantially reduces the time spent encoding PNGs. In workflows with many image outputs, this is a drastic improvement.

For a tiled upscaling workflow going from 512x512 to a scale factor of 4, this can provide a speed increase of over 15%.
2024-02-02 00:32:00 +11:00
a1a611f8cb chore(ui): lint 2024-02-02 00:20:28 +11:00
182dc859a0 chore(ui): update eslint rules
- Add `i18next/no-literal-string` (was removed from upstream config)
- Restore `path/no-relative-imports`, this was lost in the shuffle a while ago
2024-02-02 00:20:28 +11:00
c0240a8568 chore(ui): bump @invoke-ai/eslint-config-react 2024-02-02 00:20:28 +11:00
02bcff29e8 feat: update ROCm to 5.6 everywhere 2024-02-01 00:07:16 -05:00
d4ed64df7d feat: add force-reinstall option to the updater 2024-02-01 00:07:16 -05:00
701f14c1e3 fix: add PyTorch extra-index-url to the updater command 2024-02-01 00:07:16 -05:00
45bf2c7da6 chore(updater): address deprecation of pkg_resources
as per module docstring:
This module is deprecated. Users are directed to importlib.resources,
importlib.metadata and packaging instead.
2024-02-01 00:07:16 -05:00
67ada70a26 docs: update link to frontend README 2024-01-31 22:34:59 -05:00
06bcc07f65 Merge branch 'main' into sdxl-convert-safetensors 2024-01-31 17:00:19 -05:00
4410ecf62c fix(stats): log errors at error level
They were erroneously at warning before.
2024-02-01 08:50:56 +11:00
9f6b9d4d23 fix(stats): preserve stack when raising GESStatsNotFoundError 2024-02-01 08:50:56 +11:00
b24e8dd829 feat(stats): refactor InvocationStatsService to output stats as dataclasses
This allows the stats to be written to disk as JSON and analyzed.

- Add dataclasses to hold stats.
- Move stats pretty-print logic to `__str__` of the new `InvocationStatsSummary` class.
- Add `get_stats` and `dump_stats` methods to `InvocationStatsServiceBase`.
- `InvocationStatsService` now throws if stats are requested for a session it doesn't know about. This avoids needing to do a lot of messy null checks.
- Update `DefaultInvocationProcessor` to use the new stats methods and suppresses the new errors.
2024-02-01 08:50:56 +11:00
25291a2e01 select first image if no selectedImageName 2024-01-31 11:52:47 -05:00
332f3930a5 Allow civit ai API Key on Imports (#5608)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description
Small PR to allow users to pass in a Civitai API key via config options

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2024-01-31 10:51:33 -05:00
ed466a99ec Merge branch 'main' into fix-civit-model-imports 2024-01-31 10:12:44 -05:00
f68f8898c0 Workflow navigation & save-as (#5607)
* redo top panel of workflow editor

* add checkbox option to save to project, integrate save-as flow into first time saving workflow

* remove log

* remove workflowLibrary as a feature that can be disabled

* lint

* feat(ui): make SaveWorkflowAsDialog a singleton

Fixes an issue where the workflow name would erroneously be an empty string (when it should show the current workflow name).

Also makes it easier to interact with this component.

- Extract the dialog state to a hook
- Render the dialog once in `<NodeEditor />`
- Use the hook in the various buttons that should open the dialog
- Fix a few wonkily named components (pre-existing issue)

* fix(ui): when saving a never-before-saved workflow, do not append " (copy)" to the name

* fix(ui): do not obscure workflow library button with add node popover

This component is kinda janky :/ the popover content somehow renders invisibly over the button. I think it's related to the `<PopoverAnchor />`.

Need to redo this in the future, but for now, making the popover render lazily fixes this.

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-31 13:32:31 +00:00
a0996b1c0a Fix ruff styling 2024-01-31 07:16:14 -06:00
522ff4a042 civit -> civitai 2024-01-31 07:16:14 -06:00
a769f93be0 Remove unnecessary change 2024-01-31 07:16:14 -06:00
2c5ef92979 Move location of config property, comment for explanation of use 2024-01-31 07:16:14 -06:00
5d773dc94c Remove debug line 2024-01-31 07:16:14 -06:00
088e3420e6 Allow passing of civit api key via config 2024-01-31 07:16:14 -06:00
14efc95707 Allow passing of a civit api key 2024-01-31 07:16:14 -06:00
f48a2c5fd2 fix(ui): workflow settings styling
Got borked in the redesign.
2024-01-31 07:16:01 -06:00
74ae4d7774 translationBot(ui): update translation files
Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-01-31 23:05:11 +11:00
191203ea0c translationBot(ui): update translation (Turkish)
Currently translated at 36.1% (516 of 1427 strings)

Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2024-01-31 23:05:11 +11:00
6aceae5c22 translationBot(ui): update translation (Italian)
Currently translated at 97.2% (1388 of 1427 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-01-31 23:05:11 +11:00
8c6b3efd39 fix(ui): remove hard reset of cursor on canvas during state reset
Remove resetting cursor when resetting state letting event handlers to take care of presentation
2024-01-31 23:03:14 +11:00
4602efd598 feat: add profiler util (#5601)
* feat(config): add profiling config settings

- `profile_graphs` enables graph profiling with cProfile
- `profiles_dir` sets the output for profiles

* feat(nodes): add Profiler util

Simple wrapper around cProfile.

* feat(nodes): use Profiler in invocation processor

* scripts: add generate_profile_graphs.sh script

Helper to generate graphs for profiles.

* pkg: add snakeviz and gprof2dot to dev deps

These are useful for profiling.

* tests: add tests for profiler util

* fix(profiler): handle previous profile not stopped cleanly

* feat(profiler): add profile_prefix config setting

The prefix is used when writing profile output files. Useful to organise profiles into sessions.

* tidy(profiler): add `_` to private API

* feat(profiler): simplify API

* feat(profiler): use child logger for profiler logs

* chore(profiler): update docstrings

* feat(profiler): stop() returns output path

* chore(profiler): fix docstring

* tests(profiler): update tests

* chore: ruff
2024-01-31 10:51:57 +00:00
f70c0936ca feat: disable/enable LoRAs with a switch (#5591)
* feat: disable/enable LoRAs with a switch

* feat:  visually display previous weight when disabled

* style: 🚨 linting

* feat:  lora badge count reflects active loras

* style: 🚨 linting

* feat:  track disabled lora on state instead of weight

* style: 🚨 linting

* feat:  it all works now

tracking isEnabled on lora state, disabled slider when disabled, removed disabled loras from graph, updated badge counting and renamed lora add function

* style: 🚨 linting

* fix: 🐛 enabledLoRAs filter nullish coalescing

* refactor: 🎨 minor changes

renamed lora toggle action, removed errant comment, removed extraneous type annotation

* style: 🚨 linting
2024-01-31 05:50:03 +00:00
0d4de4cc63 changed hotkeys (#5542)
Adds ctrl/meta + scroll to change brush size on canvas.

* changed hotkeys

* new hotkey as an additional

* lint fixed

* added ctrl scroll and removed hotkey

* using

* added fix

* feedbck_changes

* brush size change logic

* feat(ui): also check for meta key when modifying brush size

* feat(ui): add comment linking to where brush size algo was determined

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-31 15:57:16 +11:00
1e855f8290 Update safetensors and transformers to their latest versions (#5562)
* Update Safetensors to the lastest version

* Update Transformers while at it

* Update transformers again
2024-01-31 04:54:56 +00:00
bb2787584d chore(deps-dev): bump vite in /invokeai/frontend/web
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 5.0.11 to 5.0.12.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v5.0.12/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v5.0.12/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-31 15:47:13 +11:00
a04981b418 This seems to work now 2024-01-30 21:32:08 -05:00
d7f16b7c87 fix(ui): the bottom button on floating side panel clears all queue items 2024-01-31 01:04:24 +11:00
4477e04d59 fix(ui): filter out interactive targets when pressing space on canvas tab
Improve input filtering for better accessibility
2024-01-30 09:56:21 +11:00
30e11b4b42 feat(ui): save the current staging image with shift+s 2024-01-30 09:56:21 +11:00
b93695b78f feat(ui): discard all staging images in canvas on escape 2024-01-30 09:56:21 +11:00
b01311813b fix(ui): activate move tool on pressing space
The canvas element is not guaranteed to be in focus (e.g. after accepting a new staging image), so we check the active tab name instead
2024-01-30 09:56:21 +11:00
5ae80fab87 fix(ui): accept staging image hotkey callback 2024-01-30 09:56:21 +11:00
c4291f2136 fix(ui): block gallery navigation when staging images on canvas 2024-01-30 09:56:21 +11:00
287d3c2b04 add UI library to rollup config (#5598)
* try rolling up ui library

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-29 13:13:09 -05:00
7fde19730e translationBot(ui): update translation (Turkish)
Currently translated at 22.8% (326 of 1426 strings)

Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2024-01-29 14:15:29 +11:00
13575642d8 chore: update issue template
- Improve spelling and grammar
- Add browser, GPU model, python deps fields
- Revise other fields
2024-01-29 14:11:00 +11:00
3f5370b284 feat(ui): add a copy button to the about modal
This copies the dependencies as JSON.
2024-01-28 20:50:08 -06:00
d048eb5b20 docs(ui): add STATE_MGMT.md
Supersedes the mini nanostores doc.
2024-01-29 07:28:20 +11:00
dd7031a472 docs(ui): update README.md
Also moved it to the frontend package root
2024-01-29 07:28:20 +11:00
4160d5ef26 update contributors list to bring into sync with discord roles (#5586)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

This brings `docs/other/CONTRIBUTORS.md` into sync with collaborator
roles in Discord as of January 27, 2024.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

## QA Instructions, Screenshots, Recordings

N/A

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

Merge when approved.

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2024-01-28 11:28:22 -05:00
51bdf2fd19 Merge branch 'main' into docs/update-contributors 2024-01-28 11:26:35 -05:00
6a44697911 translationBot(ui): update translation (Turkish)
Currently translated at 10.5% (151 of 1426 strings)

translationBot(ui): update translation (Turkish)

Currently translated at 8.1% (116 of 1426 strings)

translationBot(ui): update translation (Turkish)

Currently translated at 6.6% (95 of 1426 strings)

Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2024-01-28 22:27:25 +11:00
7a1d0ec228 translationBot(ui): update translation files
Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-01-28 22:27:25 +11:00
b5928fd411 translationBot(ui): update translation (Italian)
Currently translated at 97.2% (1387 of 1426 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-01-28 22:27:25 +11:00
2f345d1976 chore(ui): lint 2024-01-28 19:57:53 +11:00
f5d0721fa8 chore(ui): bump @invoke-ai/eslint-config-react 2024-01-28 19:57:53 +11:00
c3b36cb61d chore(ui): remove chakra CLI
It doesn't work now that the theme is external. I'm not sure how to fix it and not sure if it really did much (I don't think I ever got autocomplete...). Maybe it can be implemented in `@invoke-ai/ui-library`.
2024-01-28 19:57:53 +11:00
189c430e46 chore(ui): format
Lots changed because the line length is now 120. May as well do it now.
2024-01-28 19:57:53 +11:00
b922ee566a chore(ui): use new prettier config 2024-01-28 19:57:53 +11:00
89da69f647 fix(ui): correct import in ReduxInit 2024-01-28 19:57:53 +11:00
138caa34de chore(ui): lint 2024-01-28 19:57:53 +11:00
26c3378ede chore(ui): use new eslint config, add some overrides 2024-01-28 19:57:53 +11:00
aa134a2db8 chore(ui): remove postinstall script 2024-01-28 19:57:53 +11:00
d0391cb430 chore(ui): bump @invoke-ai/ui-library, add @invoke-ai/eslint-config-react & @invoke-ai/prettier-config-react 2024-01-28 19:57:53 +11:00
c955ea9de0 Update CONTRIBUTORS.md 2024-01-27 17:04:32 -05:00
fc29a5d439 update contributors list to bring into sync with discord roles 2024-01-27 16:59:56 -05:00
7e9942dbab {fix} install docs housekeeping (#5583)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
- Update docs to make the link to the automated installer easier to find
- Fixed issue in SDXL + refiner example workflow 

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings
Read over docs changes
<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan
Merge when approved
<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## [optional] Are there any post deployment tasks we need to perform?
Deploy new docs
2024-01-27 12:10:47 -05:00
c003967eaa Merge branch 'main' into feat/install_docs_update 2024-01-27 11:55:19 -05:00
b28fcc6be5 lint 2024-01-27 21:36:42 +11:00
418cdbabb7 add option for workflowCategories 2024-01-27 21:36:42 +11:00
18e61e92d9 {fix} install docs housekeeping 2024-01-26 21:19:48 -06:00
de20711637 add nanostore for OpenAPI schema 2024-01-27 12:43:47 +11:00
55e91b97be dep 2024-01-27 12:43:47 +11:00
f79bbd2d6e account for baseUrl 2024-01-27 12:43:47 +11:00
e1c2c3905d Github action for ensuring PRs are labeled in a way that makes it easy to distinguish what's being changed (#5543)

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [x] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2024-01-25 20:37:39 -05:00
03ac93bfc7 Merge branch 'main' into pr-labeler 2024-01-25 20:36:12 -05:00
89da976949 workflow library updates (#5568)
* dont show duplicate toasts if workflow actions fail due to auth

* dynamic order by options based on projectId

* add endpointName to auth toast to make it unique per endpoint

* lint

* update toast logic to check based on endpoint name w type safety

* fix save-as endpoint name

* lint

* fix type

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-25 11:43:47 -05:00
57dafd294d {release} v3.6.2
## What type of PR is this? (check all applicable)

Invoke v3.6.2 release


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Invoke v3.6.2

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

[InvokeAI-installer-v3.6.2.zip](https://github.com/invoke-ai/InvokeAI/files/14046191/InvokeAI-installer-v3.6.2.zip)
2024-01-24 22:05:10 -05:00
e611baa4b4 {release} v3.6.2 2024-01-24 21:40:03 -05:00
fc448d5b6d feat(ui): handle proxy configs rewriting paths
We can't assume that the base URL is `host:port/` - it could be `host:port/some/path/`. Make the path handling dynamic to account for this.
2024-01-25 13:29:56 +11:00
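A minimal sketch of the dynamic path handling described above, assuming the base is derived from the page URL (`buildBaseUrl` is an illustrative name, not the actual InvokeAI code):

```ts
// Sketch only: derive the API base from wherever the app is served,
// e.g. https://host:port/some/path/ behind a reverse proxy.
const buildBaseUrl = (): string => {
  const path = window.location.pathname.replace(/\/+$/, ''); // strip trailing slashes
  return `${window.location.origin}${path}`;
};

// Requests are made relative to this base rather than to `${origin}/`.
const imagesUrl = `${buildBaseUrl()}/api/v1/images/`;
```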
e59954f956 fix workflow updating (#5567)
* retain id through workflow state so that we correctly update or save new

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-24 16:10:19 -05:00
e160cbb1e9 Merge branch 'main' into pr-labeler 2024-01-24 15:44:35 -05:00
86c857b9c2 {release} v3.6.1 (#5564)
## What type of PR is this? (check all applicable)

Invoke 3.6.1 release

## QA Instructions, Screenshots, Recordings

[InvokeAI-installer-v3.6.1.zip](https://github.com/invoke-ai/InvokeAI/files/14041411/InvokeAI-installer-v3.6.1.zip)

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

This PR can be merged when approved

## [optional] Are there any post deployment tasks we need to perform?
PyPI Release & GitHub Release
2024-01-24 12:31:10 -05:00
0a13d7d2c7 {release} v3.6.1 2024-01-24 11:27:36 -05:00
68da5c6d22 feat: Add Depth Anything PreProcessor (#5548)
## What type of PR is this? (check all applicable)

- [x] Feature

## Have you discussed this change with the InvokeAI team?
- [x] Yes

## Have you updated all relevant documentation?
- [x] No


## Description

- This adds the newly released Depth Anything to InvokeAI. A new node
`Depth Anything Processor` has been added to generate depth maps using
this new technique. https://depth-anything.github.io

- All related checkpoints will be downloaded automatically on first
boot. The `DinoV2` models will be loaded to your torch cache dir and the
checkpoints pertaining to Depth Anything will be downloaded to
`any/annotators/depth_anything`.

- Alternatively you can find the checkpoints here and download them to
that folder:
https://huggingface.co/spaces/LiheYoung/Depth-Anything/tree/main/checkpoints

- This depth map can be used with any depth ControlNet model out there
but the folks at DepthAnything have also released a custom fine-tuned
ControlNet model. From my limited testing, I still prefer the original
depth model because this one seems to produce weird artifacts. Not sure
if that is a problem specific to Invoke or just the model itself. I'll
test more later. Place these in your controlnet folder like your
other ControlNets. You can get that here:
https://huggingface.co/spaces/LiheYoung/Depth-Anything/tree/main/checkpoints_controlnet

- Also available in the LinearUI

- DepthAnything has three models, `large`, `base` and `small` -- I've
defaulted the processor to `small`, but a user can change to the `large`
model if they wish. Small is much faster, but of somewhat lower quality.

- DepthAnything is now the default processor for depth controlnet
models.

## Screenshots


![opera_o3jHnWxVRi](https://github.com/invoke-ai/InvokeAI/assets/54517381/573c66f3-1492-45b0-b6df-25756f5e1d1a)

## Merge Plan

DO NOT MERGE YET. Test it first; I'm sure the model caching can be done
better, because I don't think I've addressed that at all. I would
appreciate it if @brandonrising or @lstein or anyone can take a look at
that part of it.
2024-01-24 19:14:34 +05:30
f82744b95e fix: linting issues 2024-01-24 18:45:54 +05:30
5a67bc68a1 Merge branch 'main' into depth-anything 2024-01-23 22:31:19 -06:00
61cf4d4c70 feat: "Remix Image" option on images (#5553)
* feat:  "Remix Image" option on images

Adds a new "remix image" option where applicable, recalls all metadata except the seed

* refactor: 🚨 lint code

* feat:  "Remix Image" option on images

Adds a new "remix image" option where applicable, recalls all metadata except the seed

* refactor: 🚨 lint code

* feat:  add new remix hotkey to hotkeys modal

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-24 00:45:30 +00:00
9d20a2d5a3 Update huggingface deps to their latest versions 2024-01-24 11:14:21 +11:00
8b0ac451e3 translationBot(ui): update translation (Italian)
Currently translated at 97.3% (1378 of 1415 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-01-24 11:05:17 +11:00
470dbe75a2 translationBot(ui): update translation (German)
Currently translated at 60.0% (850 of 1415 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-01-24 11:05:17 +11:00
b7d19b8130 add project as category to back-end 2024-01-24 10:59:04 +11:00
3dc13221d8 add project as a workflow category in the front-end 2024-01-24 10:59:04 +11:00
35184dbd86 fix: incorrect local file path 2024-01-24 03:37:16 +05:30
0868fc2558 Merge branch 'main' into depth-anything 2024-01-24 03:36:25 +05:30
92fb09c4df fix: Move the models to the `any` folder to avoid boot warnings 2024-01-24 03:35:37 +05:30
b4cf5496b6 fix(ui): handle model names with spaces
Remove `trim()` from model identifier schema, which prevented parsed model identifiers from matching.

The root issue here is that model names are identifiers. This will be resolved in the model manager refactor.

Closes #5556
2024-01-23 15:48:10 -06:00
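A sketch of the schema change, assuming a zod schema of roughly this shape (the real schema has more fields):

```ts
import { z } from 'zod';

// Before (hypothetical shape): trimming meant a parsed identifier like
// "my model " no longer matched the stored, untrimmed name.
const zModelNameBefore = z.string().trim().min(1);

// After: accept the name verbatim - model names are identifiers here.
const zModelName = z.string().min(1);

zModelNameBefore.parse('my model '); // -> 'my model'  (no longer matches)
zModelName.parse('my model ');       // -> 'my model ' (exact match)
```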
a0e68705dd feat(ui): improved dynamic prompts behaviour
- Bump `@invoke-ai/ui` for updated styles
- Update regex to parse prompts with newlines
- Update styling of overlay button when prompt has an error
- Fix bug where loading and error state sometimes weren't cleared
2024-01-23 15:26:12 -06:00
7cb49e65bd feat: Add Resolution to DepthAnything 2024-01-23 14:13:50 -06:00
39fedb090b feat: Make depth anything the default processor for depth controlnet 2024-01-23 14:13:50 -06:00
f36a691219 feat: Make the depth anything small model the default 2024-01-23 14:13:50 -06:00
6a2eb1d2e4 fix: Change the path of the annotator folder to annotators
Just making this change in case there are other models added to the folder in the future
2024-01-23 14:13:50 -06:00
13123daa3f feat: Add DepthAnything to Linear UI 2024-01-23 14:13:50 -06:00
c859eb865e fix: lint & other minor issues 2024-01-23 14:13:50 -06:00
8f5e2cbcc7 feat: Add Depth Anything PreProcessor 2024-01-23 14:13:50 -06:00
2aed6e2dba fix(ui): duplicate "base model" in merge UI
closes #5505
2024-01-23 14:13:18 -06:00
52b51a6088 fix(ui): recall/use size sets aspect ratio correctly
Added a new action that resets the aspect ratio when dispatched.

Closes #5456
2024-01-23 14:13:18 -06:00
52b24e01e2 feat(ui): remove chakra as direct dependency
Moved a number of things to `@invoke-ai/ui` to support this.

Unfortunately, the bundle size has increased a bit. I will work on that later.
2024-01-23 14:13:18 -06:00
1178fd8bd3 fix(ui): fix styling of some form elements 2024-01-23 14:13:18 -06:00
a0187cc9df fix(ui): remove unused import in storybook 2024-01-23 14:13:18 -06:00
2f656cc357 fix(ui): fix field context menu jank
Closes #5551
2024-01-23 14:13:18 -06:00
71f9ac9985 feat(ui): scrollable areas support x-axis scrolling
Closes #5490
2024-01-23 14:13:18 -06:00
8bbdfc45fa fix(ui): increase size of control adapters advanced toggle button 2024-01-23 14:13:18 -06:00
3cbb1a7671 chore(ui): bump @invoke-ai/ui
This includes a minor enhancement, increasing the contrast on tabs.
2024-01-23 14:13:18 -06:00
b74e0de74a tidy(ui): remove unused state from uiSlice 2024-01-23 14:13:18 -06:00
e7e7793896 feat(ui): remember open/closed state of accordions/expanders 2024-01-23 14:13:18 -06:00
504bdac14a tidy(ui): remove unused state from uiSlice 2024-01-23 14:13:18 -06:00
b76d2cd716 fix(ui): handle base model compat when recalling parameters
We had a one-behind issue with recalling metadata items that had a model.

For example, when recalling LoRAs, we check against the current main model to decide whether or not the requested LoRA is compatible and may be recalled.

When recalling all params, we are often also recalling the main model, but the compat logic didn't compare against this new main model.

The logic is updated to check against the new main model, if one is being set.

Closes #5512
2024-01-23 14:13:18 -06:00
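A sketch of the corrected compat check (types and names are illustrative, not the actual code):

```ts
type BaseModel = 'sd-1' | 'sd-2' | 'sdxl';
interface MainModel { base_model: BaseModel }
interface Lora { base_model: BaseModel }

// Compare against the incoming main model if one is also being recalled,
// falling back to the currently selected model otherwise.
const filterRecallableLoras = (
  loras: Lora[],
  currentModel: MainModel | null,
  newModel: MainModel | null
): Lora[] => {
  const reference = newModel ?? currentModel;
  if (!reference) {
    return [];
  }
  return loras.filter((lora) => lora.base_model === reference.base_model);
};
```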
022b32c724 fix(ui): reset clip skip to 0 if new model is sdxl
Clip skip wasn't actually used in SDXL graphs so enabling it didn't do anything, just a UI quirk.

Closes #5508
2024-01-23 14:13:18 -06:00
653b820da1 tidy(ui): clearer variable names in modelSelected listener 2024-01-23 14:13:18 -06:00
68232e642f Merge branch 'main' into pr-labeler 2024-01-23 09:20:43 -05:00
4ba0bf4dcf docs(ui): update README (#5473)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission

## Description

Update UI README


## Merge Plan

This PR can be merged when approved

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
2024-01-23 18:47:24 +05:30
5e4daf4bc6 docs: remove separate frontend docs and link to repo
The frontend docs should just be in the frontend. This is a standard practice for monorepos with developer information for specific packages within the monorepo.
2024-01-23 18:04:41 +11:00
7e0713c869 docs(ui): fix typo 2024-01-23 18:04:41 +11:00
099d516ac0 docs(ui): update README 2024-01-23 18:04:41 +11:00
b94f6a4a29 Fix python label, add test label 2024-01-22 15:14:02 -05:00
4caf63d53d Added a few more labels 2024-01-22 15:08:11 -05:00
6057229ceb Github action for ensuring PRs are labeled in a way that makes it easy to distinguish what's being changed 2024-01-22 11:22:33 -05:00
6a2856e46f Updated field descriptions 2024-01-23 02:26:30 +11:00
4dedd63b74 Update defaultNodes.md
Added ideal size node
2024-01-23 02:26:30 +11:00
db74837eb1 Update communityNodes.md
Removed ideal size node
2024-01-23 02:26:30 +11:00
892fe62264 Add Ideal Size node to core nodes
The Ideal Size node is useful for High-Res Optimization as it gives the optimum size for creating an initial generation with minimal artifacts (duplication and other strangeness) from today's models.

After inclusion, front end graph generation can be simplified by offloading calculations for HRO initial generation to this node.
2024-01-23 02:26:30 +11:00
3c79476785 requested changes made 2024-01-22 21:06:40 +11:00
dad364da17 rebased and made more changes 2024-01-22 21:06:40 +11:00
37bc4f78d0 last 2024-01-22 21:06:40 +11:00
de0b43c81d more strings and translations added 2024-01-22 21:06:40 +11:00
ea1d2d6a4c added translation where needed 2024-01-22 21:06:40 +11:00
fafe8ccc59 fix(api): typo in no_cache_staticfiles.py 2024-01-22 16:10:25 +11:00
4b88cfac19 fix(api): type in no_cache_staticfiles.py 2024-01-22 16:10:25 +11:00
5fa13fba36 chore: ruff 2024-01-22 16:10:25 +11:00
f28f761436 fix(api): add NoCacheStaticFiles to prevent *all* caching
The previous method wasn't totally foolproof, and locales/assets were cached.

To solve this once and for all (famous last words, I know), we can subclass `StaticFiles` and use maximally strict no-caching headers to disable caching on all static files.
2024-01-22 16:10:25 +11:00
27d7889780 tidy(ui): remove commented line 2024-01-22 09:37:26 +11:00
a1cf153097 fix(ui): generation accordion defaults to open 2024-01-22 09:37:26 +11:00
d121eefa12 tidy(ui): organise accordions files 2024-01-22 09:37:26 +11:00
c92e25a6a7 fix(ui): add tooltip to model select 2024-01-22 09:37:26 +11:00
8be03dead5 chore(ui): bump @invoke-ai/ui 2024-01-22 09:37:26 +11:00
1197133d06 fix(ui): fix lora name wrap 2024-01-22 09:37:26 +11:00
850458a554 chore(ui): lint 2024-01-22 09:37:26 +11:00
e96ad41729 feat(ui): use @invoke-ai/ui hooks for modifiers, global menu state 2024-01-22 09:37:26 +11:00
53cf518390 chore(ui): bump @invoke-ai/ui 2024-01-22 09:37:26 +11:00
b00ace852d fix(ui): settings modal switch widths 2024-01-22 09:37:26 +11:00
be72765d02 fix(ui): bump @invoke-ai/ui, fix TS issues 2024-01-22 09:37:26 +11:00
580d29257c feat(ui): add missing translations 2024-01-22 09:37:26 +11:00
5d068c1da1 feat(ui): migrate to @invoke-ai/ui 2024-01-22 09:37:26 +11:00
8e2ccab1f0 feat(ui): add @invoke-ai/ui 2024-01-22 09:37:26 +11:00
6f478eef62 chore(ui): bump deps 2024-01-22 09:37:26 +11:00
1ff1c370df translationBot(ui): update translation (Italian)
Currently translated at 97.3% (1365 of 1402 strings)

translationBot(ui): update translation (Italian)

Currently translated at 97.3% (1365 of 1402 strings)

translationBot(ui): update translation (Italian)

Currently translated at 97.3% (1365 of 1402 strings)

translationBot(ui): update translation (Italian)

Currently translated at 97.3% (1365 of 1402 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-01-19 15:11:32 +11:00
5ef87ef2a6 fix(ui): tidy component/props naming in ClearQueueIconButton.tsx 2024-01-19 15:03:16 +11:00
d0709d4f4e fix(ui): use trash icon for clear all queue 2024-01-19 15:03:16 +11:00
2a081b0a27 fix(ui): remove unnecessary fragments 2024-01-19 15:03:16 +11:00
d902533387 feat(ui): use global modifier state for clear queue button mode switch 2024-01-19 15:03:16 +11:00
1174713223 fix(ui): use cancel strings for cancel button 2024-01-19 15:03:16 +11:00
4b1740ad19 chore(ui): format 2024-01-19 15:03:16 +11:00
e03c88ce32 feat: 🚸 shift key queue cancellations 2024-01-19 15:03:16 +11:00
b917ffecbe chore(ui): format 2024-01-19 14:42:31 +11:00
2967a78c5a feat: 💄 update lots of icons 2024-01-19 14:42:31 +11:00
aa25ea62a5 fix(backend) installed models being redownloaded (#5526)
* fix

* fix ruff errors

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-01-18 16:53:53 -05:00
1ab0e86085 feat(ui): add about modal w/ app deps (#5462)
* resolved conflicts

* changed logo and some design changes

* feedback changes

* resolved conflicts

* changed logo and some design changes

* feedback changes

* lint fixed

* added translations

* some requested changes done

* all feedback changes done and replace links in settingsmenu comp

* fixed the gap between deps versions & changed heights

* feat(ui): minor about modal styling

* feat(ui): tag app endpoints with FetchOnReconnect

* fix(ui): remove unused translation string

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
2024-01-18 13:30:56 +00:00
c9ddbb4241 lint 2024-01-18 23:29:47 +11:00
415a1c7a4f add ids 2024-01-18 23:29:47 +11:00
84a4836ab7 change actions to be for any InvExpander or InvSingleAccordion that has id passed in 2024-01-18 23:29:47 +11:00
dbd6c9c6ed remove actions we can get from mutation data 2024-01-18 23:29:47 +11:00
4f95c077d4 lint fix 2024-01-18 23:29:47 +11:00
0a4cbc4e16 undo 2024-01-18 23:29:47 +11:00
d45b76fab4 undo 2024-01-18 23:29:47 +11:00
9722135cda add various actions for commercial purposes 2024-01-18 23:29:47 +11:00
7366913a31 chore(deps): bump tj-actions/changed-files in /.github/workflows
Bumps [tj-actions/changed-files](https://github.com/tj-actions/changed-files) from 37 to 41.
- [Release notes](https://github.com/tj-actions/changed-files/releases)
- [Changelog](https://github.com/tj-actions/changed-files/blob/main/HISTORY.md)
- [Commits](https://github.com/tj-actions/changed-files/compare/v37...v41)

---
updated-dependencies:
- dependency-name: tj-actions/changed-files
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-17 00:25:21 -05:00
bd31b5606c Retain suffix (.safetensors, .bin) when renaming a checkpoint file or LoRA
- closes #5518
2024-01-16 17:27:50 -05:00
2953dea4a0 set width and height based on defaultModel if passed in 2024-01-17 07:08:11 +11:00
f3fed0b10f fix(ui): use native langs for language select
Use each language's own name for its option in the language select. This falls back to the English translation if the language name isn't translated.
2024-01-17 06:50:16 +11:00
db57d426d9 fix(ui): force dark mode 2024-01-17 06:48:04 +11:00
4536e4a8b6 Model Manager Refactor: Install remote models and store their tags and other metadata (#5361)
* add basic functionality for model metadata fetching from hf and civitai

* add storage

* start unit tests

* add unit tests and documentation

* add missing dependency for pytests

* remove redundant fetch; add modified/published dates; updated docs

* add code to select diffusers files based on the variant type

* implement Civitai installs

* make huggingface parallel downloading work

* add unit tests for model installation manager

- Fixed race condition on selection of download destination path
- Add fixtures common to several model_manager_2 unit tests
- Added dummy model files for testing diffusers and safetensors downloading/probing
- Refactored code for selecting proper variant from list of huggingface repo files
- Regrouped ordering of methods in model_install_default.py

* improve Civitai model downloading

- Provide a better error message when Civitai requires an access token (doesn't give a 403 forbidden, but redirects
  to the HTML of an authorization page -- arrgh)
- Handle case of Civitai providing a primary download link plus additional links for VAEs, config files, etc

* add routes for retrieving metadata and tags

* code tidying and documentation

* fix ruff errors

* add file needed to maintain test root directory in repo for unit tests

* fix self->cls in classmethod

* add pydantic plugin for mypy

* use TestSession instead of requests.Session to prevent any internet activity

improve logging

fix error message formatting

fix logging again

fix forward vs reverse slash issue in Windows install tests

* Several fixes of problems detected during PR review:

- Implement cancel_model_install_job and get_model_install_job routes
  to allow for better control of model download and install.
- Fix thread deadlock that occurred after cancelling an install.
- Remove unneeded pytest_plugins section from tests/conftest.py
- Remove unused _in_terminal_state() from model_install_default.
- Remove outdated documentation from several spots.
- Add workaround for Civitai API results which don't return correct
  URL for the default model.

* fix docs and tests to match get_job_by_source() rather than get_job()

* Update invokeai/backend/model_manager/metadata/fetch/huggingface.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* Call CivitaiMetadata.model_validate_json() directly

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* Second round of revisions suggested by @ryanjdick:

- Fix type mismatch in `list_all_metadata()` route.
- Do not have a default value for the model install job id
- Remove static class variable declarations from non Pydantic classes
- Change `id` field to `model_id` for the sqlite3 `model_tags` table.
- Changed AFTER DELETE triggers to ON DELETE CASCADE for the metadata and tags tables.
- Made the `id` field of the `model_metadata` table into a primary key to achieve uniqueness.

* Code cleanup suggested in PR review:

- Narrowed the declaration of the `parts` attribute of the download progress event
- Removed auto-conversion of str to Url in Url-containing sources
- Fixed handling of `InvalidModelConfigException`
- Made unknown sources raise `NotImplementedError` rather than `Exception`
- Improved status reporting on cached HuggingFace access tokens

* Multiple fixes:

- `job.total_size` returns a valid size for locally installed models
- new route `list_models` returns a paged summary of model, name,
  description, tags and other essential info
- fix a few type errors

* consolidated all invokeai root pytest fixtures into a single location

* Update invokeai/backend/model_manager/metadata/metadata_store.py

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>

* Small tweaks in response to review comments:

- Remove flake8 configuration from pyproject.toml
- Use `id` rather than `modelId` for huggingface `ModelInfo` object
- Use `last_modified` rather than `LastModified` for huggingface `ModelInfo` object
- Add `sha256` field to file metadata downloaded from huggingface
- Add `Invoker` argument to the model installer `start()` and `stop()` routines
  (but made it optional in order to facilitate use of the service outside the API)
- Removed redundant `PRAGMA foreign_keys` from metadata store initialization code.

* Additional tweaks and minor bug fixes

- Fix calculation of aggregate diffusers model size to only count the
  size of files, not files + directories (which gives different unit test
  results on different filesystems).
- Refactor _get_metadata() and _get_download_urls() to have distinct code paths
  for Civitai, HuggingFace and URL sources.
- Forward the `inplace` flag from the source to the job and added unit test for this.
- Attach cached model metadata to the job rather than to the model install service.

* fix unit test that was breaking on windows due to CR/LF changing size of test json files

* fix ruff formatting

* a few last minor fixes before merging:

- Turn job `error` and `error_type` into properties derived from the exception.
- Add TODO comment about the reason for handling temporary directory destruction
  manually rather than using tempfile.tmpdir().

* add unit tests for reporting HTTP download errors

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-14 19:54:53 +00:00
426a7b900f feat(ui): resize options/gallery panels to min on window resize
Per user feedback, this is preferable to letting them expand when the window grows.

Also bumps `react-resizable-panels` now that one of my PRs is merged to fix an issue.
2024-01-14 11:33:44 +11:00
cc571d9ab2 Quick Fix for right gallery button 2024-01-14 09:49:52 +11:00
296c861e7d Handle bad id in log_stats(...). 2024-01-13 15:19:57 -05:00
aa45d21fd2 Reduce the number of graph_execution_manager.get(...) calls from the InvocationStatsService. 2024-01-13 15:19:57 -05:00
ac42513da9 Remove unused reset_all_stats(...). 2024-01-13 15:19:57 -05:00
e2387546fe Rename GIG -> GB. And move it to where it's being used. 2024-01-13 15:19:57 -05:00
c8929b35f0 Refactor the invocation stats service for better readability and to support reporting the execution wall time. 2024-01-13 15:19:57 -05:00
c000e270a0 Release/v3.6.0 (#5485)
## What type of PR is this? (check all applicable)

Release v3.6.0

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Invoke v3.6.0

## QA Instructions, Screenshots, Recordings



[InvokeAI-installer-v3.6.0.zip](https://github.com/invoke-ai/InvokeAI/files/13923761/InvokeAI-installer-v3.6.0.zip)

## [optional] Are there any post deployment tasks we need to perform?
1. Release on PyPI
2. Release on GitHub
3. Announce in #releases
2024-01-12 15:26:28 -05:00
8ff28da3b4 {release} v3.6.0 2024-01-12 15:00:49 -05:00
b7b376103c Update default workflows 2024-01-12 14:59:44 -05:00
08d379bb29 Update default workflows 2024-01-12 14:58:21 -05:00
74e644c4ba Allow bfloat16 to be configurable in invoke.yaml (#5469)
* feat: allow bfloat16 to be configurable in invoke.yaml

* fix: `torch_dtype()` util

- Use `choose_precision` to get the precision string
- Do not reference deprecated `config.full_precision` flag (why does this still exist?); if a user had this enabled it would override their actual precision setting and potentially cause a lot of confusion.

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-12 18:40:37 +00:00
d4c36da3ee translationBot(ui): update translation (Italian)
Currently translated at 97.3% (1365 of 1402 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-01-12 22:52:38 +11:00
dfe0b73890 fix(ui): fix usages of panel helpers
Upstream breaking change.
2024-01-12 09:31:07 +11:00
c0c8fa9a89 fix(ui): use nodrag on invinput in workflow editor
Closes #5476
2024-01-12 09:31:07 +11:00
ad7139829c fix(ui): fix canvas space hotkey
Need to do some checks to ensure we aren't taking over input elements, and are focused on the canvas.

Closes #5478
2024-01-12 09:31:07 +11:00
a24e63d440 fix(ui): do not focus board search on load 2024-01-12 09:31:07 +11:00
59437a02c3 feat(ui): restore resizable prompt boxes
The autosize proved to be unpopular. Changed back to resizable.
2024-01-12 09:31:07 +11:00
98a44d7fa1 feat(ui): update assets
- Add various brand images, organise images
- Create favicon for docs pages (light blue version of key logo)
- Rename app title to `Invoke - Community Edition`
2024-01-12 08:02:59 +11:00
07416753be feat(ui): more context in storage errors 2024-01-12 07:54:18 +11:00
630854ce26 3.6 Docs updates (#5412)
* Update UNIFIED_CANVAS.md

* Update index.md

* Update structure

* Docs updates
2024-01-11 16:52:22 +00:00
b55c2b99a7 feat(ui): workflow library styling 2024-01-11 09:42:12 -05:00
f81d36c95f fix(ui): do not strip workflow id on rehydrate 2024-01-11 09:42:12 -05:00
26b7aadd32 fix(db): fix workflows pagination math 2024-01-11 09:42:12 -05:00
8e7e3c2b4a Initial Styling Commit 2024-01-11 09:42:12 -05:00
f2e8b66be4 Fix "Cannot import name 'PagingArgumentParser'" error when starting textual inversion
- Closes #5395
2024-01-11 13:57:06 +11:00
ff09fd30dc feat(ui): if in dev mode, reset API on reconnect
This retains the current good developer experience when working on the server - the UI should fully reset when you restart the server.
2024-01-11 12:51:15 +11:00
9fcc30c3d6 feat(ui): optimize reconnect queries
Add `FetchOnReconnect` tag, tagging relevant queries with it. This tag is invalidated in the socketConnected listener, when it is determined that the queue changed.
2024-01-11 12:51:15 +11:00
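A sketch of the tagging scheme, assuming RTK Query (the endpoint name and URL are illustrative):

```ts
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';

export const api = createApi({
  baseQuery: fetchBaseQuery({ baseUrl: '/api/v1/' }),
  tagTypes: ['FetchOnReconnect'],
  endpoints: (build) => ({
    getQueueStatus: build.query<{ pending: number }, void>({
      query: () => 'queue/status',
      // Queries that must refresh after a reconnect provide this tag.
      providesTags: ['FetchOnReconnect'],
    }),
  }),
});

// In the socketConnected listener, once a queue change is detected:
// dispatch(api.util.invalidateTags(['FetchOnReconnect']));
```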
b29a6522ef feat(ui): always check for change to queue status when reconnecting 2024-01-11 12:51:15 +11:00
936d19cd60 feat(ui): improve comments on socketConnected listener 2024-01-11 12:51:15 +11:00
f25b6ee5d1 chore(ui): lint 2024-01-11 12:51:15 +11:00
7dea079220 fix(ui): reduce reconnect requests
- Add checks to the "recovery" logic for socket connect events to reduce the number of network requests.
- Remove the `isInitialized` state from `systemSlice` and make it a nanostore local to the socketConnected listener. It didn't need to be global state. It's also now more clearly named `isFirstConnection`.
- Export the queue status selector (minor improvement, memoizes it correctly).
2024-01-11 12:51:15 +11:00
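A sketch of the listener-local flag (names are illustrative), using a nanostores atom instead of global redux state:

```ts
import { atom } from 'nanostores';

// Module-local to the socketConnected listener - not global app state.
const $isFirstConnection = atom(true);

export const handleSocketConnected = () => {
  if ($isFirstConnection.get()) {
    // First connection after app load: nothing to recover.
    $isFirstConnection.set(false);
    return;
  }
  // Reconnect: check whether the queue changed before refetching anything.
};
```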
7fc08962fb translationBot(ui): update translation (Russian)
Currently translated at 97.0% (1361 of 1402 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-01-11 12:48:23 +11:00
71155d9e72 translationBot(ui): update translation (Hungarian)
Currently translated at 1.9% (28 of 1402 strings)

translationBot(ui): added translation (Hungarian)

Co-authored-by: ItzAttila <attila.gm.studio@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/hu/
Translation: InvokeAI/Web UI
2024-01-11 12:48:23 +11:00
6ccd72349d {release} v3.6.0rc6 (#5467)
## What type of PR is this? (check all applicable)

Release v3.6.0rc6

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Release candidate #6

## QA Instructions, Screenshots, Recordings

[InvokeAI-installer-v3.6.0rc6.zip](https://github.com/invoke-ai/InvokeAI/files/13890206/InvokeAI-installer-v3.6.0rc6.zip)



## Merge Plan
Merge when approved

## [optional] Are there any post deployment tasks we need to perform?
Release on PyPI & GitHub
2024-01-10 11:19:39 -05:00
30e12376d3 {release} v3.6.0rc6 2024-01-10 10:45:33 -05:00
23c8a893e1 fix(ui): fix gallery display bug, major lag
- Fixed a bug where after you load more, changing boards doesn't work. The offset and limit for the list image query had some wonky logic, now resolved.
- Addressed major lag in gallery when selecting an image.

Both issues were related to the useMultiselect and useGalleryImages hooks, which caused every image in the gallery to re-render whenever the selection changed. There's no way to memoize this away - we need to know when the selection changes. This is a longstanding issue.

The selection is only used in a callback, though - the onClick handler for an image to select it (or add it to the existing selection). We don't really need the reactivity for a callback, so we don't need to listen for changes to the selection.

The logic to handle multiple selection is moved to a new `galleryImageClicked` listener, which does all the selection right when it is needed.

The result is that gallery images no longer need to do heavy re-renders on any selection change.

Besides the multiselect click handler, there was also inefficient use of DND payloads. Previously, the `IMAGE_DTOS` type had a payload of image DTO objects. This was only used to drag gallery selection into a board. There is no need to hold onto image DTOs when we have the selection state already in redux. We were recalculating this payload for every image, on every tick.

This payload is now just the board id (the only piece of information we need for this particular DND event).

- I also removed some unused DND types while making this change.
2024-01-10 08:22:46 -05:00
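A sketch of the listener-based approach described above (the action shape and names are illustrative): the image's click handler only dispatches, and the selection logic runs once, inside the listener, instead of in every gallery image's render.

```ts
import { createAction } from '@reduxjs/toolkit';

export const galleryImageClicked = createAction<{
  imageName: string;
  shiftKey: boolean;
  ctrlKey: boolean;
}>('gallery/imageClicked');

// Listener effect (sketch): read the current selection from state *here*,
// compute the new selection, and dispatch it - no component subscribes to
// the selection just to build its onClick handler.
// startAppListening({
//   actionCreator: galleryImageClicked,
//   effect: (action, { dispatch, getState }) => { /* ... */ },
// });
```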
7d93329401 feat(ui): de-jank context menu
There was a lot of convoluted, janky logic related to trying to not mount the context menu's portal until its needed. This was in the library where the component was originally copied from.

I've removed that and resolved the jank, at the cost of there being an extra portal for each instance of the context menu. Don't think this is going to be an issue. If it is, the whole context menu could be refactored to be a singleton.
2024-01-10 08:22:46 -05:00
968fb655a4 Report ci disk space + minor docker fixes (#5461)
* ci: add docker build timeout; log free space on runner before and after build

* docker: bump frontend builder to node=20.x; skip linting on build

* chore: gitignore .pnpm-store

* update code owners for docker and CI

---------

Co-authored-by: Millun Atluri <Millu@users.noreply.github.com>
2024-01-10 05:20:26 +00:00
80ec9f4131 chore(ui): lint 2024-01-10 00:11:05 -05:00
f19def5f7b feat(ui): replace aspect ratio icon
closes #5448
2024-01-10 00:11:05 -05:00
9e1dd8ac9c fix(ui): reset canvas coords/dims on reset 2024-01-10 00:11:05 -05:00
ebd68b7a6c feat(ui): support reset canvas view when no image on canvas 2024-01-10 00:11:05 -05:00
68a231afea feat(ui): move canvas stage and base layer to nanostores 2024-01-10 00:11:05 -05:00
21ab650ac0 feat(ui): move canvas tool to nanostores
I was troubleshooting a hotkeys issue on canvas and thought I had broken the tool logic in a past change, so I redid it, moving it to nanostores. In the end, the issue was an upstream bug with the hotkeys library, but I like having the tool in nanostores so I'm leaving it.

It's ephemeral interaction state anyways, doesn't need to be in redux.
2024-01-10 00:11:05 -05:00
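A sketch of the nanostores version (the `Tool` union is illustrative):

```ts
import { atom } from 'nanostores';

type Tool = 'brush' | 'eraser' | 'move' | 'colorPicker';

// Ephemeral interaction state - read and written directly, no redux round-trip.
export const $tool = atom<Tool>('brush');

$tool.set('move');
console.log($tool.get()); // 'move'
```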
b501bd709f fix(ui): canvas bbox number input wonky
It was rounding dimensions when it shouldn't.

Closes #5453
2024-01-10 00:11:05 -05:00
4082f25062 feat(ui): do not optimize size when changing between models with same base model
There's a challenge to accomplish this due to our slice structure - the model is stored in `generationSlice`, but `canvasSlice` also needs to have awareness of it. For example, when the model changes, the canvas slice doesn't know what the previous model was, so it doesn't know whether or not to optimize the size.

This means we need to lift the "should we optimize size" information up. To do this, the `modelChanged` action creator accepts the previous model as an optional second arg.

Now the canvas has access to both the previous model and new model selection, and can decide whether or not it should optimize its size setting in the same way that the generation slice does.

Closes  #5452
2024-01-10 00:11:05 -05:00
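A sketch of the two-argument action creator (types are illustrative), using redux-toolkit's prepare callback:

```ts
import { createAction } from '@reduxjs/toolkit';

interface MainModel { name: string; base_model: string }

export const modelChanged = createAction(
  'generation/modelChanged',
  (model: MainModel | null, previousModel?: MainModel | null) => ({
    payload: { model, previousModel },
  })
);

// canvasSlice can now compare payload.model against payload.previousModel
// and skip the size optimization when the base model is unchanged.
```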
63d74b4ba6 feat(ui): remove unnecessary tabChanged listener
This was needed when we didn't support SDXL on canvas.
2024-01-10 00:11:05 -05:00
da5907613b fix(ui): fix typing of useGalleryImages
For some reason `ReturnType<typeof useListImagesQuery>` isn't working correctly, and destructuring `queryResult` results in `any` when the hook is used.

I've removed the explicit return typing so that consumers of the hook get correct types.
2024-01-10 00:11:05 -05:00
3a9201bd31 feat: pin deps
Organise deps into ~3 categories:
- Core generation dependencies, pinned for reproducible builds.
- Core application dependencies, pinned for reproducible builds.
- Auxiliary dependencies, pinned only if necessary.

I pinned / bumped these to latest:
- `controlnet_aux`
- `fastapi`
- `fastapi-events`
- `huggingface-hub`
- `numpy`
- `python-socketio`
- `torchmetrics`
- `transformers`
- `uvicorn`

I checked the release notes for these and didn't see any breaking changes that would affect us. There is a `fastapi` breaking change in v108 related to background tasks but it doesn't affect us.

I tested on a fresh venv. The app still works and I can generate on macOS.

Hopefully, enforcing explicit pinned versions will reduce the issues where people get CPU torch.

It also means we should periodically bump versions up to ensure we don't get too far behind on our dependencies and have to do painful upgrades.
2024-01-10 00:03:29 -05:00
d6e2cb7cef fix(ui): use memoized selector for workflow watcher
Minor perf improvement.
2024-01-10 15:32:16 +11:00
0809e832d4 fix(ui): use less brutally strict workflow validation
Workflow building would fail when a current image node was in the workflow due to the strict validation.

So we need to use the other workflow builder util first, which strips out extraneous data.

This bug was introduced during an attempt to optimize the workflow building logic, which was causing slowdowns on the workflow editor.
2024-01-10 14:31:14 +11:00
7269c9f02e Enable correct probing of LoRA latent-consistency/lcm-lora-sdxl (#5449)
- Closes #5435

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-01-08 17:18:26 -05:00
d86d7e5c33 do not show toast if 403 is triggered by forbidden image (#5447)
* do not show toast if 403 is triggered by lack of image access

* remove log

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-08 12:15:46 -05:00
5d87578746 {release} v3.6.0rc5 (#5446)
## What type of PR is this? (check all applicable)

Release - InvokeAI v3.6.0rc5


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Release - InvokeAI v3.6.0rc5


## QA Instructions, Screenshots, Recordings

[InvokeAI-installer-v3.6.0rc5.zip](https://github.com/invoke-ai/InvokeAI/files/13863661/InvokeAI-installer-v3.6.0rc5.zip)


## [optional] Are there any post deployment tasks we need to perform?
Release on PyPI & GitHub
2024-01-09 03:40:34 +11:00
04aef021fc {release} v3.6.0rc5 2024-01-08 10:42:16 -05:00
0fc08bb384 ui: redesign followups 8 (#5445)
* feat(ui): get rid of convoluted socket vs appSocket redux actions

There's no need to have `socket...` and `appSocket...` actions.

I did this initially due to a misunderstanding about the sequence of handling from middleware to reducers.

* feat(ui): bump deps

Mainly bumping to get latest `redux-remember`.

A change to socket.io required a change to the types in `useSocketIO`.

* chore(ui): format

* feat(ui): add error handling to redux persistence layer

- Add an error handler to `redux-remember` config using our logger
- Add custom errors representing storage set and get failures
- Update storage driver to raise these accordingly
- wrap method to clear idbkeyval storage and tidy its logic up

* feat(ui): add debuggingLoggerMiddleware

This simply logs every action and a diff of the state change.

Due to the noise this creates, it's not added by default at all. Add it to the middlewares if you want to use it.

* feat(ui): add $socket to window if in dev mode

* fix(ui): do not enable cancel hotkeys on inputs

* fix(ui): use JSON.stringify for ROARR logger serializer

A recent change to ROARR introduced limits to the size of data that will be logged. This ends up making our logs far less useful. Change the serializer back to what it was previously.

* feat(ui): change diff util, update debuggerLoggerMiddleware

The previous diff library would present deleted things as `undefined`. Unfortunately, a JSON.stringify cycle will strip those values out. The ROARR logger does this and so the diffs end up being a lot less useful, not showing removed keys.

The new diff library uses a different format for the delta that serializes nicely.

* feat(ui): add migrations to redux persistence layer

- All persisted slices must now have a slice config, consisting of their initial state and a migrate callback. The migrate callback is very simple for now, with no type safety. It adds missing properties to the state. A future enhancement might be to model each slice's state with e.g. zod and have proper validation and types. (See the sketch after this entry.)
- Persisted slices now have a `_version` property
- The migrate callback is called inside `redux-remember`'s `unserialize` handler. I couldn't figure out a good way to put this into the reducer and do logging (reducers should have no side effects). Also I ran into a weird race condition that I couldn't figure out. And finally, the typings are tricky. This works for now.
- `generationSlice` and `canvasSlice` both need migrations for the new aspect ratio setup, this has been added
- Stuff related to persistence has been moved in to `store.ts` for simplicity

* feat(ui): clean up StorageError class

* fix(ui): scale method default is now 'auto'

* feat(ui): when changing controlnet model, enable autoconfig

* fix(ui): make embedding popover immediately accessible

Prevents hotkeys from being captured when embeddings are still loading.
2024-01-08 09:11:45 -05:00
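A sketch of the persisted-slice config described in the migrations commit above (the state shape is illustrative):

```ts
interface PersistConfig<T extends { _version: number }> {
  initialState: T;
  migrate: (state: unknown) => T;
}

interface GenerationState {
  _version: number;
  aspectRatio: string;
}

const initialGenerationState: GenerationState = { _version: 2, aspectRatio: '1:1' };

export const generationPersistConfig: PersistConfig<GenerationState> = {
  initialState: initialGenerationState,
  // Naive migration, as described above: fill in any missing properties
  // from the initial state. No type safety yet.
  migrate: (state) => ({ ...initialGenerationState, ...(state as object) }),
};
```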
5779542084 Updated icons + Minor UI Tweaks (#5427)
* feat: 💄 updated icons + minor ui tweaks

* revert: 💄 removes ui tweaks

* revert: 💄 removed more ui tweaks

removed more ui tweaks and a commented-out icon import

* style: 🚨 satisfy the linter

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-07 14:14:44 +11:00
ebda81e96e fix(ui): fix add node autoconnect (#5434)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Description

The new select component appears to close itself before calling the
onchange handler. This short-circuits the autoconnect logic. Tweaked so
the ordering is correct.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Closes #5425

## QA Instructions, Screenshots, Recordings

bug should be fixed

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

This PR can be merged when approved

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
2024-01-07 08:32:52 +05:30
3fe332e85f fix(ui): fix add node autoconnect
The new select component appears to close itself before calling the onchange handler. This short-circuits the autoconnect logic. Tweaked so the ordering is correct.
2024-01-07 14:00:22 +11:00
3428ea1b3c feat(ui): use config for all numerical params
Centralize the initial/min/max/etc values for all numerical params. We used this for some but at some point stopped updating it.

All numerical params now use their respective configs. Far fewer hardcoded values throughout the app now.

Also updated the config types a bit to better accommodate slider vs number input constraints.
2024-01-07 13:49:29 +11:00
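A sketch of the centralized config shape (field names and values are illustrative):

```ts
interface NumericalParameterConfig {
  initial: number;
  sliderMin: number;
  sliderMax: number;      // sliders keep a sane range...
  numberInputMin: number;
  numberInputMax: number; // ...while number inputs allow extremes
  fineStep: number;
  coarseStep: number;
}

const cfgScaleConfig: NumericalParameterConfig = {
  initial: 7,
  sliderMin: 1,
  sliderMax: 20,
  numberInputMin: 1,
  numberInputMax: 200,
  fineStep: 0.1,
  coarseStep: 0.5,
};
```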
6024fc7baf Update diffusers to the latest version 2024-01-06 21:47:51 -05:00
75c1c4ce5a fix(ui): fix gallery nav math
- Use the virtuoso grid item container and list containers to calculate imagesPerRow, skipping manual compensation for padding of images
- Round the imagesPerRow instead of flooring - we often will end up with values like 4.99999 due to floating point precision
- Update `getDownImage` comments & logic to be clearer
- Use variables for the ids in query selectors, preventing future typos
- Only scroll if the new selected image is different from the prev one
2024-01-06 20:52:09 -05:00
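A sketch of the corrected per-row math (names are illustrative):

```ts
// Measure the virtuoso list and item containers directly - no manual
// compensation for padding - and round instead of flooring, since the
// division often lands at values like 4.99999 due to floating point.
const getImagesPerRow = (listEl: HTMLElement, itemEl: HTMLElement): number => {
  const listWidth = listEl.getBoundingClientRect().width;
  const itemWidth = itemEl.getBoundingClientRect().width;
  return Math.round(listWidth / itemWidth);
};
```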
ffa05a0bb3 Only replace vae when it is the broken SDXL 1.0 version 2024-01-06 14:06:47 -05:00
a20e17330b blackify 2024-01-06 14:06:47 -05:00
4e83644433 if sdxl-vae-fp16-fix model is available then bake it in when converting ckpts 2024-01-06 14:06:47 -05:00
604f0083f2 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 100.0% (1402 of 1402 strings)

Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2024-01-07 01:21:04 +11:00
2a8a158823 translationBot(ui): update translation (Russian)
Currently translated at 96.2% (1349 of 1402 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-01-07 01:21:04 +11:00
f8c3db72e9 feat(ui): improved arrow key navigation in gallery
- Fix preexisting bug where gallery network requests were duplicated when triggering infinite scroll
- Refactor `useNextPrevImage` to not use `state => state` as an input selector - logic split up into different hooks
- Remove use of instant scroll for arrow key navigation - smooth scroll is janky when you hold the arrow down and it fires rapidly
- Move gallery nav hotkeys to GalleryImageGrid component, so they work whenever the gallery is open (previously didn't work on canvas or workflow editor tabs)
- Use nanostores for gallery grid refs instead of passing context with virtuoso's context feature, making it much simpler to do the imperative gallery nav
- General gallery hook/component cleanup
2024-01-07 01:19:32 +11:00
60815807f9 fix(ui): fix merge issue w/ selectors 2024-01-07 01:19:32 +11:00
196fb0e014 added support for bottom key 2024-01-07 01:19:32 +11:00
eba668956d up button support in gallery navigation 2024-01-07 01:19:32 +11:00
ee5ec023f4 {release} v3.6.0rc4 (#5424)
## What type of PR is this? (check all applicable)

Release v3.6.0rc4


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Release for v3.6.0rc4

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings
[Uploading InvokeAI-installer-v3.6.0rc4.zip…](Installer Zip)

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan
- This PR can be merged when approved
<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
Release on PyPI & GitHub
2024-01-06 12:02:57 +11:00
d59661e0af {release} v3.6.0rc4 2024-01-06 11:08:00 +11:00
f51e8eeae1 fix(ui): better node footer spacing 2024-01-06 09:09:38 +11:00
6e06935e75 fix(ui): fix favicon
It wasn't in the right place to be bundled into `assets/` by vite.

Also replaced uncategorized board's fallback image with new logo.
2024-01-06 09:09:38 +11:00
f7f697849c Skip weight initialization when resizing text encoder token embeddings to accommodate new TI embeddings. This saves time. 2024-01-05 15:16:00 -05:00
8e17e29a5c fix text color for lora card (#5417)
* use label

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-05 09:57:16 -05:00
12e9f17f7a only GET intermediates if that setting is an option (#5416)
* only GET intermediates if that setting is an option

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-05 09:40:34 -05:00
cb7e56a9a3 chore(ui): lint 2024-01-05 08:34:46 -05:00
1a710a4c12 fix(ui): restore prev colors for workflow editor
Brand colors are now prefixed with "invoke".
2024-01-05 08:34:46 -05:00
d8d266d3be fix(ui): fix field title spacing
Closes #5405
2024-01-05 08:34:46 -05:00
4716632c23 fix(ui): tsc 2024-01-06 00:03:07 +11:00
3c4150d153 fix(ui): update most other selectors
Just a few stragglers left. Good enough for now.
2024-01-06 00:03:07 +11:00
b71b14d582 fix(ui): update workflow selectors 2024-01-06 00:03:07 +11:00
73481d4aec feat(ui): clean up canvas selectors
Do not memoize unless absolutely necessary. Minor perf improvement
2024-01-06 00:03:07 +11:00
2c049a3b94 feat(ui): clean up a few selectors that do not need to be memoized 2024-01-06 00:03:07 +11:00
367de44a8b fix(ui): tidy remaining selectors
These were just using overly verbose syntax - like explicitly typing `state: RootState`, which is unnecessary.
2024-01-06 00:03:07 +11:00
f5f378d04b fix(ui): revert back to lrumemoize 2024-01-06 00:03:07 +11:00
823edbfdef fix(ui): fix more state => state selectors 2024-01-06 00:03:07 +11:00
29bbb27289 fix(ui): re-add reselect patch
Accidentally removed it last commit.
2024-01-06 00:03:07 +11:00
a23502f7ff fix(ui): do not use state => state as an input selector
This is a no-no, whoops!
2024-01-06 00:03:07 +11:00
ce64dbefce chore(ui): lint 2024-01-06 00:03:07 +11:00
b47afdc3b5 feat(ui): patch reselect to use lruMemoize only
Pending resolution of https://github.com/reduxjs/reselect/issues/635, we can patch `reselect` to use `lruMemoize` exclusively.

Pin RTK and react-redux versions too just to be safe.

This reduces the major GC events that were causing lag/stutters in the app, particularly in canvas and workflow editor.
2024-01-06 00:03:07 +11:00
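A sketch of the workaround, assuming reselect v5's options-object API:

```ts
import { createSelectorCreator, lruMemoize } from 'reselect';

// App-wide selector creator that opts out of the new default
// weakMapMemoize in favour of lruMemoize (see reselect#635).
export const createAppSelector = createSelectorCreator({
  memoize: lruMemoize,
  argsMemoize: lruMemoize,
});
```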
cde9c3090f fix(ui): use useAppSelector instead of useSelector 2024-01-06 00:03:07 +11:00
6924b04d7c feat(ui): use lruMemoize for all entity adapter selectors 2024-01-06 00:03:07 +11:00
83fbd4bdf2 ui: slightly reposition floating bars. 2024-01-05 23:59:08 +11:00
6460dcc7e0 use torch.bfloat16 on cuda systems 2024-01-04 23:25:52 -05:00
59aa009c93 {release} v3.6.0rc3 (#5408)
## What type of PR is this? (check all applicable)

Release v3.6.0rc3


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No


## Description
Next release candidate

## Related Tickets & Documents
N/A
<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings
[Uploading InvokeAI-installer-v3.6.0rc3.zip…](Installer zip)

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan
This PR can be merged when approved
<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
Release on PyPI & GitHub
2024-01-05 10:39:20 +11:00
59d2a012cd {release} v3.6.0rc3 2024-01-05 09:40:21 +11:00
7e3b620830 Update README.md 2024-01-04 15:56:44 -05:00
e16b55816f fix(ui): clarify comparison in usePanel 2024-01-05 07:09:37 +11:00
895cb8637e fix(ui): fix panel resize bug
A bug that caused panels to be collapsed on a fresh indexedDb was fixed in dd32c632cd, but this re-introduced a different bug that caused the panels to expand on window resize if they were already collapsed.

Revert the previous change and instead add one imperative resize outside the observer, so that on startup, we set both panels to their minimum sizes.
2024-01-05 07:09:37 +11:00
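A sketch of the startup resize (assuming react-resizable-panels' imperative handle; the hook name is illustrative):

```ts
import { useEffect, useRef } from 'react';
import type { ImperativePanelHandle } from 'react-resizable-panels';

// One imperative resize on mount, outside the resize observer, so a fresh
// startup lands the panel on its minimum size without the observer
// re-expanding collapsed panels on window resize.
export const usePanelStartupSize = (minSizePct: number) => {
  const panelRef = useRef<ImperativePanelHandle>(null);

  useEffect(() => {
    panelRef.current?.resize(minSizePct);
  }, [minSizePct]);

  return panelRef;
};
```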
fe5bceb1ed translationBot(ui): update translation (Italian)
Currently translated at 97.3% (1363 of 1400 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-01-05 07:04:04 +11:00
5d475a40f5 lint 2024-01-04 12:51:36 -05:00
bca7ea1674 option to override logo component 2024-01-04 12:51:36 -05:00
f27bb402fb render one or the other 2024-01-04 11:11:10 -05:00
dd32c632cd fix default panel width (#5403)
* fix default panel width

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-04 11:04:21 -05:00
9e2e740033 custom components for nav, gallery header, and app info (#5400)
* replace custom header with custom nav component to go below settings

* add option for custom gallery header

* add option for custom app info text on logo hover

* add data-testid for tabs

* remove descriptions

* lint

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-04 15:30:27 +00:00
d6362ce0bd fix(ui): fix invalid nesting of button in button 2024-01-04 09:36:59 -05:00
2347a00a70 fix(ui): do not show loading state on floating invoke button if disabled 2024-01-04 09:36:59 -05:00
0b7dc721cf chore(ui): lint 2024-01-04 09:36:59 -05:00
ac04a834ef feat(ui): tidy hotkeysmodal state 2024-01-04 09:36:59 -05:00
bbca053b48 feat(ui): style settings modal 2024-01-04 09:36:59 -05:00
fcf2006502 feat(ui): increase brightness of accordion title 2024-01-04 09:36:59 -05:00
ac0d0019bd chore(ui): lint 2024-01-04 09:36:59 -05:00
2d922a0a65 feat(ui): restore floating options and gallery buttons 2024-01-04 09:36:59 -05:00
8db14911d7 feat(ui): give tooltips padding from screen edge
We can pass a popperjs modifier to the tooltip to give it this padding.
2024-01-04 09:36:59 -05:00
01bab58b20 fix(ui): do not resize panel when window resizes if panel is collapsed 2024-01-04 09:36:59 -05:00
7a57bc99cf feat(ui): statusindicator changes
We are now using the lefthand vertical strip for the settings menu button. This is a good place for the status indicator.

Really, we only need to display something *if there is a problem*. If the app is processing, the progress bar indicates that.

For the case where the panels are collapsed, I'll add the floating buttons back in some form, and we'll indicate via those if the app is processing something.
2024-01-04 09:36:59 -05:00
d3b6d86e74 feat(ui): tweak badge styles 2024-01-04 09:36:59 -05:00
360b6cb286 fix(ui): fix logo version tooltip 2024-01-04 09:36:59 -05:00
8f9e9e639e fix(ui): fix hotkey key & untranslated string 2024-01-04 09:36:59 -05:00
6930d8ba41 feat(ui): do not wrap expander content in box 2024-01-04 09:36:59 -05:00
7ad74e680d feat(ui): make invexpander button styles less complex
Just make it like a normal button - normal and hover states, with no difference when it's expanded. The icon clearly indicates the state, and you can see the extra components.
2024-01-04 09:36:59 -05:00
c56a6a4ddd feat(ui): make expander divider button, add hover, remove color
On one hand I like the color, but on the other it makes this divider a focus point, which doesn't really make sense to me. I tried several shades, but I think it adds a bit too much distraction for the eyes.
2024-01-04 09:36:59 -05:00
afad764a00 feat(ui): make badges a bit paler
too stabby in the eye region
2024-01-04 09:36:59 -05:00
49a72bd714 feat(ui): use wrench icon in settings menu for settings 2024-01-04 09:36:59 -05:00
8cf14287b6 feat(ui): simplify App.tsx layout
There was an extra div, needed for the fullscreen file upload dropzone, that made styling the main app containers a bit awkward.

Refactor the uploader a bit to simplify this - no longer need so many app-level wrappers. Much cleaner.
2024-01-04 09:36:59 -05:00
0db47dd5e7 ui: Bolden text & add activation color for expanded state 2024-01-04 09:36:59 -05:00
71f6f77ae8 ui: Change background and padding of advanced settings 2024-01-04 09:36:59 -05:00
6f16229c41 fix: tone down the base color saturation by one step 2024-01-04 09:36:59 -05:00
0cc0d794d1 fix: Minor alignment issues with the queue badge 2024-01-04 09:36:59 -05:00
535639cb95 feat: Update status and progress colors to match new theme 2024-01-04 09:36:59 -05:00
2250bca8d9 feat: Remove Header
Remove header and incorporate everything else into the side bar and other areas
2024-01-04 09:36:59 -05:00
4ce39a5974 fix(ui): remove unused icons 2024-01-04 13:59:25 +11:00
644e9287f0 chore(ui): control adapter docstrings 2024-01-04 13:59:25 +11:00
6a5e0be022 fix(ui): reduce minStepsBetweenThumbs for ca begin/end 2024-01-04 13:59:25 +11:00
707f0f7091 feat(ui): update favicon to new logo 2024-01-04 13:59:25 +11:00
8e709fe05a fix(ui): remove shift+enter to cancel
Whoops!
2024-01-04 13:59:25 +11:00
154da609cb translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-01-04 01:19:31 +11:00
21975d6268 chore(ui): lint 2024-01-03 09:09:50 -05:00
31035b3e63 fix(ui): fix scaled bbox sliders
Removed logic related to aspect ratio from the components.

When the main bbox changes, if the scale method is auto, the reducers will handle the scaled bbox size appropriately.

Somehow linking up the manual mode to the aspect ratio is tricky, and instead of adding complexity for a rarely-used mode, I'm leaving manual mode as fully manual.
2024-01-03 09:09:50 -05:00
6c05818887 fix(ui): workaround canvas weirdness with locked aspect ratio
Cannot figure out how to allow the bbox to be transformed from all handles when the aspect ratio is locked. Only the bottom right handle works as expected.

As a workaround, when the aspect ratio is locked, you can only resize the bbox from the bottom right handle.
2024-01-03 09:09:50 -05:00
77c5b051f0 fix(ui): clean up actionsDenylist 2024-01-03 09:09:50 -05:00
4fdc4c15f9 feat(ui): add optimal size handling 2024-01-03 09:09:50 -05:00
1a4be78013 fix(ui): make aspect ratio preview pixel-perfect size 2024-01-03 09:09:50 -05:00
eb16ad3d6f fix(ui): remove old esc hotkey 2024-01-03 09:09:50 -05:00
1fee08639d feat(ui): tweak board search UI 2024-01-03 09:09:50 -05:00
7caaf40835 feat(ui): reworked hotkeys modal
- Displays all as list
- Uses chakra `Kbd` component for keys
- Provides search box
2024-01-03 09:09:50 -05:00
6bfe994622 feat(ui): bump fontSize in fallback component 2024-01-03 09:09:50 -05:00
8a6f03cd46 feat(ui): improved panel interactions 2024-01-03 09:09:50 -05:00
4ce9f9dc36 fix(ui): scaled bounding box uses canvas aspect ratio 2024-01-03 09:09:50 -05:00
00297716d6 Release: v3.6.0rc2 (#5386)
## What type of PR is this? (check all applicable)

Release

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
v3.6.0rc2 release

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings
Test latest main & [Uploading
InvokeAI-installer-v3.6.0rc2.zip…](Installer zip)

## Merge Plan
PR can be merged immediately
<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [X] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
Publish release on PyPI and GitHub
2024-01-03 14:58:45 +11:00
50c0dc71eb {release} v3.6.0rc2 2024-01-03 14:21:10 +11:00
29ccc6a3d8 chore(ui): lint 2024-01-03 13:18:50 +11:00
f92a5cbabc fix(ui): fix up hotkeys
- Add Shift+X back (this has been missing for a long time)
- Add secondary toggle options hotkey
2024-01-03 13:18:50 +11:00
acbf10f7ba feat(ui): improve error indicator for model fields 2024-01-03 13:18:50 +11:00
46d830b9fa feat(ui): new logo! 2024-01-03 13:18:50 +11:00
db17ec7a4b feat(ui): use dropzone noKeyboard opt instead of manual listener to disable on spacebar 2024-01-03 13:18:50 +11:00
6320d18846 fix(ui): upscale dropdown activates when clicking image actions
Weird issue with `react-select`... Made the popover lazy as a workaround.

Also updated styling of the popover.
2024-01-03 13:18:50 +11:00
37c8b9d06a fix(ui): fix sdxl style prompts
- Do not _merge_ prompt and style prompt when concat is enabled - either use the prompt as style, or use the style directly.
- Set style prompt metadata correctly.
- Add metadata recall for style prompt.
2024-01-03 13:18:50 +11:00
7ba2108eb0 fix(ui): increase contrast between disabled and enabled inputs 2024-01-03 13:18:50 +11:00
8aeeee4752 fix(ui): fix erroneous vae model display
`react-select` has some weird behaviour where if the value is `undefined`, it shows the last-selected value instead of nothing. Must fall back to `null`
2024-01-03 13:18:50 +11:00
930de51910 feat(ui): add badges for advanced settings 2024-01-03 13:18:50 +11:00
b1b5c0d3b2 fix(ui): fix workflow editor model selector, excise ONNX
Ensure workflow editor model selector component gets a value

This introduced some funky type issues related to ONNX models. ONNX doesn't work anyways (unmaintained). Instead of fixing the types to work with a non-working feature, ONNX is now removed entirely from the UI.

- Remove all refs to ONNX (and Olives)
- Fix some type issues
- Add ONNX nodes to the nodes denylist (so they are not visible in UI)
- Update VAE graph helper, which still had some ONNX logic. It's a very simple change and doesn't change any logic. Just removes some conditions that were for ONNX. I tested it and nothing broke.
- Regenerate types
- Fix prettier and eslint ignores for generated types
- Lint
2024-01-03 13:18:50 +11:00
ebe717099e feat(ui): add $store to window in dev mode
Helpful for troubleshooting.
2024-01-03 13:18:50 +11:00
06245bc761 feat(ui): add support for default values for sliders 2024-01-03 13:18:50 +11:00
b4c0dafdc8 feat(ui): remove unused iterations component 2024-01-03 13:18:50 +11:00
0cefacb3a2 feat(ui): add support for default values for numberinputs 2024-01-03 13:18:50 +11:00
baa5f75976 fix(ui): fix node styles
Got borked when adjusting control adapter styling. Should revisit this later.
2024-01-03 13:18:50 +11:00
989aaedc7f feat(nodes): add title for cfg rescale mult on denoise_latents 2024-01-03 13:18:50 +11:00
93e08df849 fix(ui): min fallback on nodes number fields -> -NUMPY_RAND_MAX 2024-01-03 13:18:50 +11:00
4a43e1c1b8 fix(ui): restore global hotkeys 2024-01-03 13:18:50 +11:00
2bbab9d94e Updater recommends db backup when installing RC (#5381)
* Updater suggests db backup when installing RC

* Update invokeai_update.py to be more specific

* Update invokeai_update.py

* Update invokeai_update.py

* Update invokeai_update.py

* Update invokeai_update.py
2024-01-02 22:44:45 +00:00
a456f6e6f0 lint 2024-01-02 10:02:33 -05:00
a408f562d6 option to use new brand for loader 2024-01-02 10:02:33 -05:00
cefdf9ed00 define text color for tooltips 2024-01-02 10:02:33 -05:00
5413bf07e2 Sisco/docker allow relative paths for invokeai data (#5344)
* Update docker-compose.yml to bind local data path

* Update LOCAL_DATA_PATH in .env.sample

* Add fallback to INVOKEAI_ROOT envar if LOCAL_DATA_PATH not present.

* rename LOCAL_DATA_PATH to INVOKAI_LOCAL_ROOT

* Whoops, didn't mean to include this

* Update docker/docker-compose.yml

Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>

* [chore] rename envar

* Apply suggestions from code review

---------

Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2024-01-02 13:17:57 +00:00
4cffe282bd feat(ui): disable scan models tab
not working yet WIP
2024-01-02 07:28:53 -05:00
ae8ffe9d51 chore(ui): lint 2024-01-02 07:28:53 -05:00
870cc5b733 feat(ui): dynamic prompts loading ux
- Prompt must have an open curly brace followed by a close curly brace to enable dynamic prompts processing
- If the given prompt already has a dynamic prompt cached, do not re-process
- If processing is not needed, user may invoke immediately
- Invoke button shows loading state when dynamic prompts are processing, tooltip says generating
- Dynamic prompts preview icon in prompt box shows loading state when processing, tooltip says generating
2024-01-02 07:28:53 -05:00
0b4eb888c5 feat(ui): canvas bbox interaction tweaks
Making the math match the previous implementation
2024-01-02 07:28:53 -05:00
11f1cb5391 fix(ui): fix canvas bbox style when cursor leaves canvas 2024-01-02 07:28:53 -05:00
1e2e26cfc2 feat(ui): add open queue to queue action menu 2024-01-02 07:28:53 -05:00
e9bce6e1c3 fix(ui): fix cut off badge on queue actions menu 2024-01-02 07:28:53 -05:00
799ef0e7c1 fix(ui): control adapter models select disable if incompatible 2024-01-02 07:28:53 -05:00
61c10a7ca8 fix(ui): fix canvas bbox interactions 2024-01-02 07:28:53 -05:00
93880223e6 feat(ui): move strength up one 2024-01-02 07:28:53 -05:00
271456b745 fix(ui): fix badges for image settings canvas 2024-01-02 07:28:53 -05:00
cecee33bc0 feat(ui): support grid size of 8 on canvas
- Support grid size of 8 on canvas
- Internal canvas math works on 8
- Update gridlines rendering to show 64 spaced lines and 32/16/8 when zoomed in
- Bbox manipulation defaults to grid of 64 - hold shift to get grid of 8

Besides being something we support internally, supporting 8 on canvas avoids a lot of hacky logic needed to work well with aspect ratios.
2024-01-02 07:28:53 -05:00
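For illustration, a minimal sketch of the snapping rule described above (the canvas code itself lives in the TypeScript frontend; the helper name and rounding choice here are assumptions):

```python
GRID_FINE = 8
GRID_COARSE = 64

def snap_to_grid(value: float, fine: bool = False) -> int:
    # Snap a canvas dimension to the active grid: coarse 64 px grid by
    # default, fine 8 px grid while shift is held. Internally everything
    # stays a multiple of 8, so no extra aspect-ratio hacks are needed.
    grid = GRID_FINE if fine else GRID_COARSE
    return max(GRID_FINE, round(value / grid) * grid)
```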
4f43eda09b feat(ui): modularize imagesize components
Canvas and non-canvas have separate width and height and need their own separate aspect ratios. In order to not duplicate a lot of aspect ratio logic, the components relating to image size have been modularized.
2024-01-02 07:28:53 -05:00
011757c497 fix(ui): add numberinput to control adapter weight
Required some rejiggering of the InvControl and InvSlider styles.
2024-01-02 07:28:53 -05:00
2700d0e769 fix(nodes): fix constraints/validation for controlnet
- Fix `weight` and `begin_step_percent`, the constraints were mixed up
- Add model validator to ensure `begin_step_percent < end_step_percent`
- Bump version
2024-01-02 07:28:53 -05:00
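A minimal sketch of what such a pydantic validator can look like (the field names come from the commit message; the class name, defaults, and error text are assumptions):

```python
from pydantic import BaseModel, model_validator

class ControlNetSettings(BaseModel):
    weight: float = 1.0
    begin_step_percent: float = 0.0
    end_step_percent: float = 1.0

    @model_validator(mode="after")
    def check_step_window(self):
        # Reject an empty or inverted denoising window.
        if self.begin_step_percent >= self.end_step_percent:
            raise ValueError("begin_step_percent must be < end_step_percent")
        return self
```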
d256d93a2a feat(ui): use larger chevrons for number input steppers 2024-01-02 07:28:53 -05:00
f3c8e986a5 feat(ui): bump badge fontsize to 10px 2024-01-02 07:28:53 -05:00
48f5e4f313 fix(ui): missing denoise strength
accidentally hid it from everywhere
2024-01-02 07:28:53 -05:00
5950ffe064 Release/v3.6.0rc1 (#5372)
## What type of PR is this? (check all applicable)

InvokeAI 3.6.0rc1 Release 


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Update version & frontend build for Invoke v3.6.0rc1

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?

Upload release to PyPI & create release on GitHub
2024-01-02 11:21:37 +11:00
49ca949cd6 {release} Update version to 3.6.0rc1 2024-01-02 10:23:55 +11:00
5d69f1cbf5 Remove frontend build from repo permanently 2024-01-02 10:18:11 +11:00
9169006171 chore(ui): lint 2024-01-01 08:13:23 -05:00
28b74523d0 fix(ui): fix dynamic prompts with single prompt
Closes #5292

The special handling for single prompt is totally extraneous and caused a bug.
2024-01-01 08:13:23 -05:00
9359c03c3c feat(ui): use zod-less workflow builder when appropriate 2024-01-01 08:13:23 -05:00
598241e0f2 fix(ui): InvContextMenu.placement = 'auto-end'
This ensures the context menus don't get cut off when the window size is very small.
2024-01-01 08:13:23 -05:00
e698a8006c feat(ui): use lruMemoize for argsMemoize on selectors
This provides a small performance improvement, on the order of a few ms per interaction.
2024-01-01 08:13:23 -05:00
34e7b5a7fb chore(ui): lint 2024-01-01 08:13:23 -05:00
5c3dd62ae0 feat(ui): update useGlobalModifiers to store each key independently
This reduces rerenders when the user presses a modifier key.
2024-01-01 08:13:23 -05:00
7e2eeec1f3 feat(ui): optimized workflow building
- Store workflow in nanostore as singleton instead of building for each consumer
- Debounce the build (already was indirectly debounced)
- When the workflow is needed, imperatively grab it from the nanostores, instead of letting react handle it via reactivity
2024-01-01 08:13:23 -05:00
7eb79266c4 feat(ui): split dnd overlay to separate component
This reduces top-level rerenders when zooming in and out on workflow editor
2024-01-01 08:13:23 -05:00
5d4610d981 feat(ui): store node templates in separate slice
Flattens the `nodes` slice. May offer minor perf improvements in addition to just being cleaner.
2024-01-01 08:13:23 -05:00
7c548c5bf3 feat(ui): move canvas interaction state to nanostores
This drastically reduces the computation needed when moving the cursor. It also correctly separates ephemeral interaction state from redux, where it is not needed.

Also removed some unused canvas state.
2024-01-01 08:13:23 -05:00
2a38606342 fix(ui): show denoising strength on canvas 2024-01-01 08:13:23 -05:00
793cf39964 feat(ui): bump react-resizable-panels & improve usePanel hook 2024-01-01 08:13:23 -05:00
ab3e689ee0 fix(ui): fix workflow library new workflow/settings closing
Need to make the menu not lazy. A better solution is to refactor how the settings work, rendering them in a different part of the component tree
2024-01-01 08:13:23 -05:00
20f497054f feat(ui): optimized useMouseOverNode
Manually hook into pubsub to eliminate extraneous rerenders on hook change
2024-01-01 08:13:23 -05:00
6209fef63d fix(ui): focus add node popover on open
Need an extra ref to pass to the InvSelect component.
2024-01-01 08:13:23 -05:00
5168415999 feat(ui): use nanostores for useMouseOverNode
This greatly reduces the weight of the event handlers.
2024-01-01 08:13:23 -05:00
b490c8ae27 chore(ui): bump deps
Includes vite v5 - only change needed is to set .mts for vite config files.
2024-01-01 08:13:23 -05:00
6f354f16ba feat(ui): canvas perf improvements 2024-01-01 08:13:23 -05:00
e108a2302e fix(ui): fix uninteractable canvas bbox 2024-01-01 08:13:23 -05:00
2ffecef792 feat(ui): bump react-resizable-panels, improve panel resize logic 2024-01-01 08:13:23 -05:00
2663a07e94 feat(ui): misc canvas perf improvements
- disable listening when not needed
- use useMemo for gridlines
2024-01-01 08:13:23 -05:00
8d2ef5afc3 feat(ui): disable onlyRenderVisibleElements on Flow
This can cause stuttering when nodes are being moved in and out of the viewport. I think it's better to improve rendering/perf in other ways.
2024-01-01 08:13:23 -05:00
539887b215 feat(ui): misc perf/rerender improvements
More efficient selectors, memoized/stable references to objects, lazy popover/menu rendering.
2024-01-01 08:13:23 -05:00
2ba505cce9 feat(ui): use pubsub for globalcontextmenuclose 2024-01-01 08:13:23 -05:00
Far more efficient than the crude redux incrementor thing.
2024-01-01 08:13:23 -05:00
bd92a31d15 feat(ui): add createLruSelector
This uses the previous implementation of the memoization function in reselect. It's possible for the new weakmap-based memoization to cause memory leaks in certain scenarios, so we will avoid it for now.
2024-01-01 08:13:23 -05:00
ee2529f3fd lru 2024-01-01 08:13:23 -05:00
89b7082bc0 fix(ui): remove debug stmts 2024-01-01 08:13:23 -05:00
55dfabb892 feat(ui): make label widths grow 2024-01-01 08:13:23 -05:00
Fixes issue where translations overflowed due to hardcoded widths.
2024-01-01 08:13:23 -05:00
2a41fd0b29 fix(ui): fix field title styling 2024-01-01 08:13:23 -05:00
966919ea4a translationBot(ui): update translation (Russian)
Currently translated at 98.1% (1335 of 1360 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-01-01 11:38:27 +11:00
d3acdcf12f translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-01-01 11:38:27 +11:00
52f9749bf5 feat(ui): partial rebuild of model manager internal logic 2023-12-29 08:26:14 -05:00
2a661450c3 feat(ui): increase size of clear icon on selects 2023-12-29 08:26:14 -05:00
2d96c62fdb feat(ui): more memoization 2023-12-29 08:26:14 -05:00
3e6173ee8c feat(ui): only show refiner models on refiner model select 2023-12-29 08:26:14 -05:00
4e9841c924 feat(ui): add refiner cfg scale & steps defaults & marks 2023-12-29 08:26:14 -05:00
f4ea495d23 feat(ui): InvSwitch and InvSliderThumb are round 2023-12-29 08:26:14 -05:00
43a4b815e8 fix(ui): fix InvSlider vertical thumb styling 2023-12-29 08:26:14 -05:00
4134f18319 fix(ui): InvEditable, linear field view styling 2023-12-29 08:26:14 -05:00
cd292f6c1c fix(ui): remove errant console.log 2023-12-29 08:26:14 -05:00
3ce8f3d6fe feat(ui): more memoization 2023-12-29 08:26:14 -05:00
10fd4f6a61 feat(ui): update panel lib, move gallery to percentages 2023-12-29 08:26:14 -05:00
47b1fd4bce chore(ui): bump deps 2023-12-29 08:26:14 -05:00
300805a25a fix(ui): fix typing issues 2023-12-29 08:26:14 -05:00
56527da73e feat(ui): memoize all components 2023-12-29 08:26:14 -05:00
ca4b8e65c1 feat(ui): use stable objects for animation/native element styles 2023-12-29 08:26:14 -05:00
f5194f9e2d feat(ui): generation accordion badges 2023-12-29 08:26:14 -05:00
ccbbb417f9 feat(ui): fix control adapters styling 2023-12-29 08:26:14 -05:00
37786a26a5 feat(ui): move scaling up to image settings -> advanced 2023-12-29 08:26:14 -05:00
4f2930412e feat(ui): use primitive style props or memoized sx objects 2023-12-29 08:26:14 -05:00
83049a3a5b fix(ui): typo in canvas model handler 2023-12-29 08:26:14 -05:00
38256f97b3 fix(ui): fix word break on LoRACard 2023-12-29 08:26:14 -05:00
77f2aabda4 feat(ui): sort model select options with compatible base model first 2023-12-29 08:26:14 -05:00
e32eb2a649 fix(ui): restore labels in model manager selects 2023-12-29 08:26:14 -05:00
f4cdfa3b9c fix(ui): canvas layer select cut off 2023-12-29 08:26:14 -05:00
e99b715e9e fix(ui): board collapse button styling 2023-12-29 08:26:14 -05:00
ed96c40239 feat(ui): change queue icon 2023-12-29 08:26:14 -05:00
1b3bb932b9 feat(ui): reduce button fontweight to semibold 2023-12-29 08:26:14 -05:00
f0b102d830 feat(ui): ux improvements & redesign
This is a squash merge of a bajillion messy small commits created while iterating on the UI component library and redesign.
2023-12-29 08:26:14 -05:00
a47d91f0e7 feat(api): add max_prompts constraints 2023-12-29 08:26:14 -05:00
358c1f5791 Release/v3.5.1 (#5363)
## What type of PR is this? (check all applicable)

Release v3.5.1

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
InvokeAI v3.5.1 release

## [optional] Are there any post deployment tasks we need to perform?
1. Release on PyPi
2. Create GH release
3. Annonce on Discord
2023-12-29 15:20:24 +11:00
faec320d48 {release} v3.5.1 2023-12-29 13:33:47 +11:00
fd074abdc4 Add frontend build 2023-12-29 13:16:23 +11:00
d8eb58cd58 Add frontend build 2023-12-29 13:15:37 +11:00
8937d66412 Add Tiled Upscaling to default workflows (#5362)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No


## Description
Add Tiled Upscaling to default workflows

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-29 12:43:47 +11:00
a6935ae7fb Add Tiled Upscaling to default workflows 2023-12-29 12:26:50 +11:00
69968eb67b add nightmare promptgen to communityNodes.md (#5360)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [x] Community Node Submission

## Description
Adds nightmare promptgen to the community nodes list.
2023-12-29 08:06:44 +11:00
e57f5f129c add nightmare promptgen to communityNodes.md 2023-12-28 13:15:52 -05:00
1b8651fa26 fix(ui): do no create extraneous pos var 2023-12-28 20:44:02 +11:00
f6664960ca Update useBuildNode.ts
Add the rect's top-left coordinates to get equivalent behavior.
2023-12-28 20:44:02 +11:00
84a001720c Added back bounds check 2023-12-28 20:44:02 +11:00
c9951cd86b Eliminate constant console deprecation warnings
React Flow 11.10 eliminates the need to use project() and issues a deprecation warning to the console every time that onMouseMove is called (see https://reactflow.dev/whats-new/2023-11-10#rename-usereactflowproject-to-usereactflowscreentoflowposition). This code change eliminates that warning.
2023-12-28 20:44:02 +11:00
83a9e26cd8 Respect torch-sdp in config.yaml (#5353)
If the user specifies `torch-sdp` as the attention type in `config.yaml`, we can go ahead and use it (if available) rather than always throwing an exception.
2023-12-28 05:46:28 +00:00
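A hedged sketch of the idea (the helper name is an assumption; the capability check relies on torch's real scaled_dot_product_attention attribute):

```python
import torch.nn.functional as F

def resolve_attention_type(configured: str) -> str:
    # Honor a user-configured "torch-sdp" attention type when the running
    # torch build actually provides scaled_dot_product_attention, instead
    # of unconditionally raising.
    if configured == "torch-sdp" and not hasattr(F, "scaled_dot_product_attention"):
        raise ValueError("torch-sdp attention is not available in this torch build")
    return configured
```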
80812cf7cd Update FE .gitignore and remove FE build (#5357)
## What type of PR is this? (check all applicable)

- [X] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
To release 3.5.0 successfully, a front end build needed to be in the
repo so that it would be included in the invokeai package distributed on
PyPi.

This PR remove the frontend build and updates the frontend gitignore to
not include the build.


## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [X] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
N/A
2023-12-28 16:08:13 +11:00
2a6c940047 Merge branch 'main' into fix/remove_fe_build 2023-12-28 16:04:16 +11:00
78fe9b642d fix bug when there are two multi vector TI in a prompt (#5356)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because: it's a simple fix

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
If there are two multi vector TIs in a prompt, e.g. `<ti-1> <ti-2>` where
ti-1 has vector size 16 and ti-2 has vector size 8, then the second one
uses the first TI's ti_embedding.shape[0]. You get errors like
"<ti-2-!pad-8> is not found" because ti-2 only has vector size 8, but the
code is taking the wrong ti_embedding.shape[0] (see the sketch after this PR entry).

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-28 15:59:27 +11:00
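As referenced in the PR description above, a sketch of the corrected padding logic (names are illustrative, not the actual code):

```python
import torch

def ti_pad_tokens(trigger: str, ti_embedding: torch.Tensor) -> list[str]:
    # Pad tokens must be derived from *this* embedding's own vector count.
    # The bug reused the first TI's count: with ti-1 at 16 vectors and ti-2
    # at 8, ti-2 was padded out to "<ti-2-!pad-15>", and lookups such as
    # "<ti-2-!pad-8>" failed because ti-2 only has 8 vectors.
    count = ti_embedding.shape[0]
    base = trigger.strip("<>")
    return [trigger] + [f"<{base}-!pad-{i}>" for i in range(1, count)]
```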
53b835945f Updated with ruff formatting 2023-12-28 11:05:19 +11:00
acba51c888 remove fe build 2023-12-28 09:44:08 +11:00
daa9d50d95 Update FE .gitignore 2023-12-28 08:45:23 +11:00
e38d0e39b7 fix bug when there are two multi vector TI in a prompt 2023-12-27 22:14:14 +01:00
2c632a811b Release/v3.5.0 (#5352)
## What type of PR is this? (check all applicable)

InvokeAI v3.5.0


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
3.5.0 release

## QA Instructions, Screenshots, Recordings

Test Installer: 

[InvokeAI-installer-v3.5.0.zip](https://github.com/invoke-ai/InvokeAI/files/13776161/InvokeAI-installer-v3.5.0.zip)


## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
* Update front end .gitignore & remove the fe build
2023-12-28 08:14:10 +11:00
6afeb37ce5 Update frontend build 2023-12-27 16:41:47 +11:00
85726c164b {release} update version to 3.5.0 2023-12-27 16:07:33 +11:00
17e1ef0140 Update git ignore to include FE build 2023-12-27 16:07:18 +11:00
cdfc01d938 Fix model names to match defaults in workflows & update example workflows in docs (#5351)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [X] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-27 10:10:15 +05:30
dc632a787a Updated example workflows 2023-12-27 15:26:05 +11:00
4e04ea0c0d fix model names to match defaults in workflows 2023-12-27 15:18:12 +11:00
f51bb00b5e Update torch xformers (#5343)
* Update torch to 2.1.2 and xformers to 0.0.23post1

* fix type
2023-12-26 06:48:32 +00:00
12f2357e70 feat(db): handle PIL errors opening images gracefully (#5314)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

## Description

For example, if PIL tries to open a *really* big image, it will raise an
exception to prevent reading a huge object into memory.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

-
https://discord.com/channels/1020123559063990373/1149513695567810630/1186200089149046804

## QA Instructions, Screenshots, Recordings

This should fix the error in the discord thread

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

Can be merged when @Millu confirms it fixes the issue he ran into

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
2023-12-26 15:38:55 +11:00
60629cba3c Merge branch 'main' into feat/db/graceful-migrate-workflows 2023-12-26 15:27:18 +11:00
5196e4bc38 translationBot(ui): update translation (Korean)
Currently translated at 57.2% (781 of 1365 strings)

Co-authored-by: 이승석 <vidicwb@ajou.ac.kr>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ko/
Translation: InvokeAI/Web UI
2023-12-24 08:23:10 +11:00
89e7848079 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 100.0% (1365 of 1365 strings)

Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-12-24 08:23:10 +11:00
5b38b5ea7f translationBot(ui): update translation (Italian)
Currently translated at 97.3% (1329 of 1365 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-12-24 08:23:10 +11:00
88c1af969f update docker setup, improve docs, fix variable value
Fixes #5336
2023-12-23 08:53:19 -05:00
fbede84405 [feature] Download Queue (#5225)
* add base definition of download manager

* basic functionality working

* add unit tests for download queue

* add documentation and FastAPI route

* fix docs

* add missing test dependency; fix import ordering

* fix file path length checking on windows

* fix ruff check error

* move release() into the __del__ method

* disable testing of stderr messages due to issues with pytest capsys fixture

* fix unsorted imports

* harmonized implementation of start() and stop() calls in download & install modules

* Update invokeai/app/services/download/download_base.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* replace test datadir fixture with tmp_path

* replace DownloadJobBase->DownloadJob in download manager documentation

* make source and dest arguments to download_queue.download() an AnyHttpURL and Path respectively

* fix pydantic typecheck errors in the download unit test

* ruff formatting

* add "job cancelled" as an event rather than an exception

* fix ruff errors

* Update invokeai/app/services/download/download_default.py

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>

* use threading.Event to stop service worker threads; handle unfinished job edge cases

* remove dangling STOP job definition

* fix ruff complaint

* fix ruff check again

* avoid race condition when start() and stop() are called simultaneously from different threads

* avoid race condition in stop() when a job becomes active while shutting down

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
2023-12-22 12:35:57 -05:00
756cb9c27e fix(tests): remove graph library from test fixtures 2023-12-23 00:04:48 +11:00
78b29db458 feat(backend): disable graph library
The graph library occasionally causes issues when the default graph changes substantially between versions and pydantic validation fails. See #5289 for an example.

We are not currently using the graph library, so we can disable it until we are ready to use it. It's possible that the workflow library will supersede it anyways.
2023-12-23 00:04:48 +11:00
1225c3fb47 addresses #5224 (#5332)
Co-authored-by: Lincoln Stein <lstein@gmail.com>
2023-12-22 12:30:51 +00:00
4957a360ff close #5209 2023-12-21 23:02:57 -05:00
32ad742f3e Ti trigger from prompt util (#5294)
* Pull logic for extracting TI triggers into a util function

* Remove duplicate regex for ti triggers

* Fix linting for ruff

* Remove unused imports
2023-12-22 03:04:44 +00:00
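A minimal sketch of such a util (the regex and function name are assumptions based on the PR summary):

```python
import re

def extract_ti_triggers_from_prompt(prompt: str) -> list[str]:
    # Collect every <trigger> token in the prompt, deduplicated while
    # preserving first-seen order, so callers can share one implementation
    # instead of maintaining duplicate regexes.
    triggers: list[str] = []
    for match in re.finditer(r"<[\w\.\+\-]+>", prompt):
        if match.group(0) not in triggers:
            triggers.append(match.group(0))
    return triggers
```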
41cd40541a Merge branch 'main' into feat/db/graceful-migrate-workflows 2023-12-22 12:21:52 +11:00
2d11d97dad remove MacOS Sonoma check in devices.py (#5312)
* remove MacOS Sonoma check in devices.py

As of pytorch 2.1.0, float16 works with our MPS fixes on Sonoma, so the check is no longer needed.

* remove unused platform import
2023-12-22 00:42:47 +00:00
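A sketch of the simplified precision choice after this change (the function name and return values are assumptions):

```python
import torch

def choose_precision(device: torch.device) -> str:
    # With torch >= 2.1.0, float16 works on MPS even on macOS Sonoma,
    # so there is no longer a platform-version branch for mps devices.
    if device.type in ("cuda", "mps"):
        return "float16"
    return "float32"
```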
64858b2523 Update contributingToFrontend.md (#5329)
The project is no longer using yarn as a package manager and has moved
to pnpm, so I wanted to update the documentation on the contribution
page.

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:
I spoke with user: imic in the #dev-chat on discord.
      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Merge Plan
- "This PR can be merged when approved"
2023-12-22 08:38:34 +11:00
d5134325f6 Merge branch 'main' into patch-1 2023-12-22 08:37:15 +11:00
702d0f68af remove (Unsaved) if workflow library is disabled 2023-12-22 07:39:17 +11:00
a0d0e9f474 Update contributingToFrontend.md
The project is no longer using yarn as a package manager and has moved to pnpm, so I wanted to update the documentation on the contribution page.
2023-12-21 14:51:17 -05:00
475823835f Update communityNodes.md
Addition of my Adapters-Linked and Metadata-linked nodes
2023-12-21 13:51:59 -05:00
b95d547ccc Add more default workflows (#5325)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Added more default workflows to the workflow library

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-21 14:40:19 +11:00
9b4758f02f Merge branch 'main' into feat/default_workflows 2023-12-21 10:35:02 +11:00
8d2952695d translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 99.8% (1363 of 1365 strings)

Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-12-21 09:56:06 +11:00
8dd55cc45e t2i with LoRA 2023-12-21 09:54:12 +11:00
562fb1f3a1 add authToastMiddleware back and fix parsing 2023-12-20 14:59:33 -05:00
21ed2d42cd Merge branch 'main' into feat/db/graceful-migrate-workflows 2023-12-20 21:54:21 +11:00
79cf3ec9a5 Add facedetailer workflow 2023-12-20 18:53:49 +11:00
37b76caccf Added default workflows 2023-12-20 17:42:14 +11:00
a4f9bfc8f7 Update Dockerfile 2023-12-19 18:38:36 -05:00
9afdd0f4a8 Update Dockerfile 2023-12-19 18:38:36 -05:00
bee6ad1547 fix(pnpm): replace npm with pnpm in dockerfile 2023-12-19 18:38:36 -05:00
fa3f1b6e41 [Feat] reimport model config records after schema migration (#5281)
* add code to repopulate model config records after schema update

* reformat for ruff

* migrate model records using db cursor rather than the ModelRecordConfigService

* ruff fixes

* tweak exception reporting

* fix: build frontend in  pypi-release workflow

This was missing, resulting in the 3.5.0rc1 having no frontend.

* fix: use node 18, set working directory

- Node 20 has a problem with `pnpm`; set it to Node 18
- Set the working directory for the frontend commands

* Don't copy extraneous paths into installer .zip

* feat(installer): delete frontend build after creating installer

This prevents an empty `dist/` from breaking the app on startup.

* feat: add python dist as release artifact, as input to enable publish to pypi

- The release workflow never runs automatically. It must be manually kicked off.
- The release workflow has an input. When running it from the GH actions UI, you will see a "Publish build on PyPi" prompt. If this value is "true", the workflow will upload the build to PyPi, releasing it. If this is anything else (e.g. "false", the default), the workflow will build but not upload to PyPi.
- The `dist/` folder (where the python package is built) is uploaded as a workflow artifact as a zip file. This can be downloaded and inspected. This allows "dry" runs of the workflow.
- The workflow job and some steps have been renamed to clarify what they do

* translationBot(ui): update translation files

Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI

* freeze yaml migration logic at upgrade to 3.5

* moved migration code to migration_3

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: Hosted Weblate <hosted@weblate.org>
2023-12-19 17:01:47 -05:00
d0fa131010 (feat) updater installs from PyPi instead of GitHub releases (#5316)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [X] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No


## Description
Updater script now pulls from PyPI instead of GitHub releases (pulling
from GitHub releases is why the RC packages were having issues when
updating through the launcher script)

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [X] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-19 13:15:38 +11:00
2f438431bd (fix) update logic for installing specific version 2023-12-19 11:05:15 +11:00
bbeb5cb477 Merge branch 'main' into feat/updater_use_pypi 2023-12-19 10:09:03 +11:00
cd3111c324 fix ruff errors 2023-12-19 09:58:10 +11:00
16b7246412 (feat) updater installs from PyPi instead of GitHub releases 2023-12-19 09:30:40 +11:00
42be78d328 translationBot(ui): update translation (Italian)
Currently translated at 97.2% (1327 of 1365 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-12-19 07:20:14 +11:00
e469e24a58 Update model_probe to work with diffuser-format SD TI embeddings. (#5301)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

      
## Have you updated all relevant documentation?
- [x] Yes (N/A)
- [ ] No


## Description

This change enables the model probe to work with TI embeddings that have
the follow state_dict structure:

```python
{
    "<any_key>": torch.Tensor(...), # where the tensor has shape (N, embedding_dim)
}
```
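
For illustration, a minimal detection sketch (an assumption, not the actual probe code):

```python
import torch

def looks_like_diffusers_ti(state_dict: dict) -> bool:
    # A diffusers-format SD TI embedding is a single-key dict whose value
    # is a 2D tensor of shape (N, embedding_dim).
    if len(state_dict) != 1:
        return False
    tensor = next(iter(state_dict.values()))
    return isinstance(tensor, torch.Tensor) and tensor.ndim == 2
```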

## QA Instructions, Screenshots, Recordings

I can't imagine an embedding format that would previously have passed
the model probe, and would now fail after this change. That being said,
I'll exercise a bunch of existing TIs before merging.

- [x] Exercise existing TI formats


## Added/updated tests?

- [ ] Yes
- [x] No : _We could really benefit from tests for all of the supported
TI formats... but I'm not taking on that project right now._
2023-12-18 10:01:04 -05:00
cb698ff1fb Update model_probe to work with diffuser-format SD TI embeddings. 2023-12-18 09:51:16 -05:00
45470a3ac8 Merge branch 'main' into feat/db/graceful-migrate-workflows 2023-12-18 23:32:28 +11:00
0e738c4290 Tag model manager v2 api as unstable (#5311)
## What type of PR is this? (check all applicable)

- [X] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

As discussed with @psychedelicious , this PR changes the swagger label
on the model manager V2 routes to `model_manager_v2_unstable` in order
to warn community members that the API is liable to change.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-18 07:09:40 -05:00
09d1bc513d Merge branch 'main' into refactor/model-manager2/mark-api-experimental 2023-12-18 07:04:00 -05:00
b6ed4ba559 feat(db): handle PIL errors opening images gracefully
For example, if PIL tries to open a *really* big image, it will raise an exception to prevent reading a huge object into memory.
2023-12-18 18:02:31 +11:00
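A hedged sketch of the graceful handling (the helper name is an assumption; the exception types are real PIL exceptions):

```python
from PIL import Image, UnidentifiedImageError

def open_image_or_none(path: str):
    # PIL raises DecompressionBombError for images over its pixel limit,
    # to avoid reading a huge object into memory; treat that (and any
    # unreadable file) as "no image" rather than failing outright.
    try:
        return Image.open(path)
    except (Image.DecompressionBombError, UnidentifiedImageError, OSError):
        return None
```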
aefa828237 Tiled upscaling - EvenSplit to use overlap in pixels instead tile fraction (#5309)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
Change CalculateImageTilesEvenSplitInvocation to have an overlap in
pixels rather than as a percentage of the tile. This makes it easier to
have predictable blending of the seams as you have a known overlap size.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [x] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-17 21:13:45 -05:00
74ea592d02 tag model manager v2 api as unstable 2023-12-17 14:16:45 -05:00
457b0dfac0 Merge branch 'main' into tiled-upscaling-graph 2023-12-17 15:12:16 +00:00
96a717c4ba In CalculateImageTilesEvenSplitInvocation, overlap_fraction becomes just overlap, which is now in pixels rather than a fraction of the tile size.
Update calc_tiles_even_split() with the same change, ensuring the overlap is within the allowed size (see the sketch below).

Update even_split tests
2023-12-17 15:10:50 +00:00
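As referenced above, a sketch of the pixel-overlap even split (the signature and box format are assumptions):

```python
import math

def calc_tiles_even_split(image_h: int, image_w: int,
                          num_tiles_x: int, num_tiles_y: int,
                          overlap: int) -> list[tuple[int, int, int, int]]:
    # `overlap` is a pixel count shared by adjacent tiles, not a fraction
    # of the tile size, so seam blending has a known, predictable width.
    tile_w = math.ceil((image_w + (num_tiles_x - 1) * overlap) / num_tiles_x)
    tile_h = math.ceil((image_h + (num_tiles_y - 1) * overlap) / num_tiles_y)
    tiles = []
    for ty in range(num_tiles_y):
        top = ty * (tile_h - overlap)
        for tx in range(num_tiles_x):
            left = tx * (tile_w - overlap)
            # (top, bottom, left, right), clamped to the image bounds
            tiles.append((top, min(top + tile_h, image_h),
                          left, min(left + tile_w, image_w)))
    return tiles
```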
77b74264a8 Simplify docker compose setup (#5046)
## What type of PR is this? (check all applicable)

- [x] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes -
https://github.com/invoke-ai/InvokeAI/pull/5007#discussion_r1378792615
- [ ] No, because: 

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description

Simplify Docker image creation and execution to a single script that
spins up the right service in the docker compose file.
## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Depends on #5007

## QA Instructions, Screenshots, Recordings
N/A
<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [ ] Yes
- [x] No : same tests should work.

## [optional] Are there any post deployment tasks we need to perform?

Not to my knowledge.
2023-12-17 17:10:56 +11:00
351078e8aa Merge branch 'main' into simplify-docker-compose-setup 2023-12-17 17:07:55 +11:00
b8354bd1a4 Merge branch 'main' into tiled-upscaling-graph 2023-12-16 19:09:28 +00:00
3b944b8af6 fix: build frontend in pypi-release workflow (#5298)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

## Description

This was missing, resulting in the 3.5.0rc1 having no frontend.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Discord installer thread:
https://discord.com/channels/1020123559063990373/1149513695567810630/1185200427717898260
- Comments from here in the release chat:
https://discord.com/channels/1020123559063990373/1020123559831539744/1185004017521279007

## QA Instructions, Screenshots, Recordings

I've run this locally and it works (I commented out the final steps of
the workflow that do PyPi stuff to ensure I didn't accidentally deploy
something).

You can run the workflow locally with https://github.com/nektos/act.
Suggest using the `gh` CLI version, its very easy to set up if you have
the github CLI installed. Then you can run `gh act -W
.github/workflows/pypi-release.yml` to run the workflow locally in a
docker image.

I don't know if this local action runner would actually release to PyPi -
as mentioned, I commented those steps out when testing - but it does
successfully do both frontend and backend builds.


## Merge Plan

This needs @lstein 's approval.


## [optional] Are there any post deployment tasks we need to perform?

Cut an RC2
2023-12-16 10:40:36 -05:00
b811c037bd Merge branch 'main' into fix/pypi-release-frontend-build 2023-12-16 10:36:03 -05:00
5bf61382a4 feat: add python dist as release artifact, as input to enable publish to pypi
- The release workflow never runs automatically. It must be manually kicked off.
- The release workflow has an input. When running it from the GH Actions UI, you will see a "Publish build on PyPi" prompt. If this value is "true", the workflow will upload the build to PyPI, releasing it. If it is anything else (e.g. "false", the default), the workflow will build but not upload to PyPI.
- The `dist/` folder (where the python package is built) is uploaded as a workflow artifact as a zip file. This can be downloaded and inspected. This allows "dry" runs of the workflow.
- The workflow job and some steps have been renamed to clarify what they do
2023-12-16 20:02:09 +11:00
0f1c5f382a feat(installer): delete frontend build after creating installer
This prevents an empty `dist/` from breaking the app on startup.
2023-12-16 19:39:29 +11:00
4af1695c60 translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-12-16 13:10:47 +11:00
df9a903a50 fix(ui): do not cache VAE decode on linear
The VAE decode on linear graphs was getting cached. This caused some unexpected behaviour around image outputs.

For example, say you ran the exact same graph twice. The first time, you get an image written to disk and added to gallery. The second time, the VAE decode is cached and no image file is created. But, the UI still gets the graph complete event and selects the first image in the gallery. The second run does not add an image to the gallery.

There are probably edge cases related to this - the UI does not expect this to happen. I'm not sure how to handle it any better in the UI.

The solution is to not cache VAE decode on the linear graphs, ever. If you run a graph twice in linear, you expect two images.

This simple change disables the node cache for terminal VAE decode nodes in all linear graphs, ensuring you always get images. If the graph was fully cached, all images after the first will of course be created very quickly.
2023-12-16 12:37:49 +11:00
311be8f97d Merge branch 'main' into fix/pypi-release-frontend-build 2023-12-16 10:15:32 +11:00
3f970c8326 Don't copy extraneous paths into installer .zip 2023-12-15 11:27:21 -05:00
fc150acde5 [feat] Make model prober recognize yet another LoRA format (#5296)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

This adds a probe for the SDXL LoRA format found in the wild at
https://civitai.com/models/224641.

## Related Tickets & Documents


See discord message at:
https://discord.com/channels/1020123559063990373/1149510134058471514/1184982133912113182

## QA Instructions, Screenshots, Recordings

Try installing the SDXL LoRA at the URL given above.
## Merge Plan

This can be merged when approved.
## Added/updated tests?

- [ ] Yes
- [X] No : we do not yet have a comprehensive suite of models to test
probing on.

## [optional] Are there any post deployment tasks we need to perform?
2023-12-15 09:49:51 -05:00
1615df3aa1 fix: use node 18, set working directory
- Node 20 has a problem with `pnpm`; set it to Node 18
- Set the working directory for the frontend commands
2023-12-16 00:32:31 +11:00
b2a8c45553 fix: build frontend in pypi-release workflow
This was missing, resulting in the 3.5.0rc1 having no frontend.
2023-12-15 23:56:31 +11:00
212dbaf9a2 fix comment 2023-12-15 00:25:27 -05:00
ac3cf48d7f make probe recognize lora format at https://civitai.com/models/224641 2023-12-15 00:25:27 -05:00
454f01e0c1 [feature] add ability to filter model listings by format (#5286)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

This minor change adds the ability to filter the model lists returned by
V2 of the model manager using the model file format (e.g. "checkpoint").
Just thought this would be a useful feature.

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Merge Plan

This can be merged when approved without any adverse effects.


## Added/updated tests?

- [ ] Yes
- [X] No : minor feature - tested informally using the router API

## [optional] Are there any post deployment tasks we need to perform?
2023-12-15 00:03:01 -05:00
72dca55e44 Merge branch 'feat/model_manager/search-by-format' of github.com:invoke-ai/InvokeAI into feat/model_manager/search-by-format 2023-12-14 23:55:08 -05:00
264ea6d94d fix ruff errors 2023-12-14 23:54:59 -05:00
60e3e653fa Merge branch 'main' into feat/model_manager/search-by-format 2023-12-14 23:53:54 -05:00
082894c377 Adding Kapa Assistant to Docs (#5290)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description
This adds the Kapa assistant to our docs.
2023-12-15 09:47:40 +11:00
4b00f8fc82 Merge branch 'main' into Adding-Kapa-assistant-to-docs 2023-12-15 09:46:25 +11:00
6ea09ba0b6 feat(ui): workflow menu tweaks
- "Reset Workflow Editor" -> "New Workflow"
- "New Workflow" gets nodes icon & is no longer danger coloured
- When creating a new workflow, if the current workflow has unsaved changes, you get a dialog asking for confirmation. If the current workflow is saved, it immediately creates a new workflow.
- "Download Workflow" -> "Save to File"
- "Upload Workflow" -> "Load from File"
- Moved "Load from File" up 1 in the menu
2023-12-14 08:30:59 -05:00
296060db63 Add cpu and rocm profiles. Let invokeai-nvidia service be the default. 2023-12-13 23:23:43 -05:00
d1d8ee71fc Simplify docker compose setup 2023-12-13 23:23:43 -05:00
42c04db167 adding kapa widget to docs 2023-12-13 22:33:50 -05:00
b935768eeb Update mkdocs.yml 2023-12-13 22:28:47 -05:00
ea4ef042f3 Ruff fixes 2023-12-14 12:47:10 +11:00
18b2bcbbee Added Classification from baseinvocation 2023-12-14 12:47:10 +11:00
5ad88c7f86 Fixed classification 2023-12-14 12:47:10 +11:00
3b04fef31d Added classification 2023-12-14 12:47:10 +11:00
bec888923a Fix for ruff 2023-12-14 12:47:10 +11:00
c6235049c7 Add an unsharp mask node to core nodes
Unsharp mask is an image operation that, despite its name, sharpens an image. Like a Gaussian blur, it takes a radius and strength.
2023-12-14 12:47:10 +11:00
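As a rough illustration of the operation (not the node's actual code), an unsharp mask computes original + strength × (original − blurred):

```
import numpy as np
from PIL import Image, ImageFilter

def unsharp_mask(image: Image.Image, radius: float = 2.0, strength: float = 0.5) -> Image.Image:
    """Sharpen by adding back the difference between the image and a blurred copy."""
    blurred = image.filter(ImageFilter.GaussianBlur(radius=radius))
    orig = np.asarray(image, dtype=np.float32)
    blur = np.asarray(blurred, dtype=np.float32)
    sharpened = np.clip(orig + strength * (orig - blur), 0, 255).astype(np.uint8)
    return Image.fromarray(sharpened)
```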
e10f6e8962 fix(nodes): mark CalculateImageTilesInvocation as beta
missed this when I added classification
2023-12-13 20:33:25 -05:00
77f04ff8d6 docs: add warning to developer install about database & main 2023-12-14 11:47:33 +11:00
461e474394 fix(nodes): fix embedded workflows with IDs
This model was a bit too strict, and raised validation errors when workflows we expect to *not* have an ID (e.g., an embedded workflow) have one.

Now it strips unknown attributes, allowing those workflows to load.
2023-12-14 11:38:04 +11:00
f0c70fe3f1 fix(db): add error handling for workflow migration
- Handle an image file not existing despite being in the database.
- Add a simple pydantic model that tests only for the existence of a workflow's version.
- Check against this new model when migrating workflows, skipping if the workflow fails validation. If it succeeds, the frontend should be able to handle the workflow.
2023-12-14 10:16:56 +11:00
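The version-only model described above might look like this sketch (the class and function names are hypothetical):

```
from pydantic import BaseModel, ValidationError

class WorkflowVersionOnly(BaseModel):
    """Checks only that a workflow declares a version; all other fields are ignored."""
    version: str

def can_migrate(workflow_json: str) -> bool:
    try:
        WorkflowVersionOnly.model_validate_json(workflow_json)
        return True
    except ValidationError:
        return False  # skip this workflow rather than failing the whole migration
```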
442ac2b828 fix(ui): fix frontend workflow migration when node is missing version
This should default to "1.0.0" to match the behaviour of the backend.
2023-12-14 09:59:11 +11:00
bb986b97f3 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 99.8% (1363 of 1365 strings)

Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-12-13 17:11:45 -05:00
98655db57b translationBot(ui): update translation (Russian)
Currently translated at 98.1% (1340 of 1365 strings)

translationBot(ui): update translation (Russian)

Currently translated at 84.2% (1150 of 1365 strings)

translationBot(ui): update translation (Russian)

Currently translated at 83.1% (1135 of 1365 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2023-12-13 17:11:45 -05:00
8845894e83 translationBot(ui): update translation (Italian)
Currently translated at 97.0% (1325 of 1365 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-12-13 17:11:45 -05:00
937c7e957d add merge plan to PR template 2023-12-13 16:59:31 -05:00
569ae7c482 add ability to filter model listings by format 2023-12-13 15:59:21 -05:00
340957f920 Update torch to 2.1.1 and xformers to 0.0.23 2023-12-13 14:49:32 -05:00
076d9b05ea Update transformers to 4.36 and Accelerate to 0.25 2023-12-13 14:42:34 -05:00
2b54e240d4 Bump Diffusers Dependency (#5243)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: dependency bump

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
Updating diffusers to 0.24 - fixes a few issues. Needs to be tested to
ensure things like our IP Adapter implementation don't break
2023-12-13 20:31:00 +05:30
5127e9df2d Fix error caused by bump to diffusers 0.24. We pass kwargs now instead of positional args so that we are more robust to future changes. 2023-12-13 09:17:30 -05:00
42329a1849 Updating HF Hub dependency 2023-12-13 09:17:30 -05:00
42bc6ef154 Bump Diffusers Dependency 2023-12-13 09:17:30 -05:00
6c6c45c3da feat(db): add SQLiteMigrator to perform db migrations (#5227)
## What type of PR is this? (check all applicable)

- [x] Refactor
- [x] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Have you discussed this change with the InvokeAI team?

- [x] Yes
- [ ] No, because:

## Description

This PR enhances our SQLite database with migration logic.

### `SQLiteMigrator` class

The new `SQLiteMigrator` class handles safely running database
migrations. It is initialized in the `SqliteDatabase` class's init, and
immediately runs all database migrations.

### `Migration` class

Migrations are represented by a `Migration` class, which has 3
attributes:

- `db_version: int`: The database version this migration results in.
- `app_version: str`: The semver app version this migration is run for.
- `migrate: Callable[[sqlite3.Cursor], None]`: A function that performs
the migration. It receives a cursor _only_, but can do anything it wants
to do. A convention is established for these functions.

All schema-creating SQL now lives in a `migrate` function. We haven't
needed to make any data migrations yet, but when we do, this will also
be handled within one of these callbacks.

### Migration Flow

First, migrations are registered with `SQLiteMigrator` via its
`register_migration` method. This performs some basic checks of the
migration version.

After registering all migrations, they are run with the `run_migrations`
method. This does a few things:

- Creates a `version` table in the DB, if it doesn't already exist. This
table has `db_version INTEGER`, `app_version TEXT` and `migrated_at
DATETIME` columns.
- Sorts the migrations by their `db_version`.
- Does some checks to see if we need a migration.
- Backs up the database (if it's a file database). The migration bails
out if this fails.
- Runs each migration. If there is a problem, restores from backup.

### Included Migrations

Migrations are in `invokeai/app/services/shared/sqlite/migrations`.

#### `migration_1.py`

All\* schema SQL up to 3.4.0post2 is in `migration_1.py`. Running only
this migration should result in a database that is identical to the one
you get from starting up 3.4.0post2.

SQL in this migration is **idempotent** (same as it was when the SQL was
spread across the various services).

#### `migration_2.py`

Schema changes through 3.5.0 (the upcoming release) are in
`migration_2.py`.

SQL in this migration is **not idempotent**. Future migrations need not
be idempotent, as the migration logic ensures each will only be run
once.

### \*Caveat - ItemStorage

This class provides a generic document-db-like interface for storing
objects. Our `graph_executions` and `graphs` tables are created and
managed by this service. This PR does not touch this class and therefore
does not touch either of those two tables.

We can decide how to handle those tables in the future as the need
arises.

### Change to Model Manager Metadata table

I noticed that there is a `model_manager_metadata` table which included
the app version, and whose `version` property wasn't accessed outside
the service.

I believe the new `version` table fulfills the purpose of this table,
and have removed it.

@lstein Please let me know if this is not right.

## QA Instructions, Screenshots, Recordings

1.  Case 1 - Upgrade

    - Back up your 3.4.0post2 database
    - Run this PR
    - It should upgrade your database and everything should work exactly like it did before

2.  Case 2 - New Install

    - Move your database out of the invoke root so that when the app starts, it creates a new one
    - Run this PR
    - It should work just like a new install

3.  Case 3 - With an In-Memory Database

    - Enable the in-memory database (set `use_memory_db` under `Paths` in `invokeai.yaml` to `true`)
    - Run this PR
    - It should work just like a new install


## Added/updated tests?

- [x] Yes: Fairly comprehensive tests are added for the
`SQLiteMigrator`.
- [ ] No : _please replace this line with details on why tests
      have not been included_
2023-12-13 09:04:51 -05:00
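For readers skimming the description above, a condensed sketch of the migrator pattern it describes (simplified; the real class's registration checks, backup handling, and error types differ):

```
import sqlite3
from dataclasses import dataclass
from typing import Callable

@dataclass
class Migration:
    db_version: int    # version the DB is at after this migration runs
    app_version: str   # semver app version this migration ships with
    migrate: Callable[[sqlite3.Cursor], None]

class SQLiteMigrator:
    def __init__(self, conn: sqlite3.Connection) -> None:
        self._conn = conn
        self._migrations: list[Migration] = []

    def register_migration(self, migration: Migration) -> None:
        self._migrations.append(migration)

    def run_migrations(self) -> None:
        cursor = self._conn.cursor()
        cursor.execute(
            "CREATE TABLE IF NOT EXISTS version "
            "(db_version INTEGER, app_version TEXT, migrated_at DATETIME)"
        )
        cursor.execute("SELECT MAX(db_version) FROM version")
        current = cursor.fetchone()[0] or 0
        for m in sorted(self._migrations, key=lambda m: m.db_version):
            if m.db_version <= current:
                continue  # already applied
            try:
                m.migrate(cursor)  # each callback receives a cursor only
                cursor.execute(
                    "INSERT INTO version VALUES (?, ?, datetime('now'))",
                    (m.db_version, m.app_version),
                )
                self._conn.commit()
            except Exception:
                self._conn.rollback()  # the real class restores from backup here
                raise
```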
f76b04a3b8 fix(db): rename "SQLiteMigrator" -> "SqliteMigrator" 2023-12-13 11:31:15 +11:00
821e0326c9 fix(db): formatting 2023-12-13 11:25:57 +11:00
cc18d86f29 Merge branch 'main' into feat/db/migrations 2023-12-13 11:24:55 +11:00
ed1583383e fix(db): remove stale comment in tests 2023-12-13 11:24:27 +11:00
c50a49719b fix(db): raise a MigrationVersionError when invalid versions are used
This inherits from `ValueError`, so pydantic understands it when doing validation.
2023-12-13 11:21:16 +11:00
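That is, roughly:

```
class MigrationVersionError(ValueError):
    """Invalid migration version.

    Subclassing ValueError lets pydantic treat this as a validation
    failure rather than an unexpected error.
    """
```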
ebf5f5d418 feat(db): address feedback, cleanup
- use simpler pattern for migration dependencies
- move SqliteDatabase & migration to utility method `init_db`, use this in both the app and tests, ensuring the same db schema is used in both
2023-12-13 11:19:59 +11:00
386b656530 feat(db): remove unnecessary fixture declaration
Also revert the change to `conftest.py` in which the file was flagged for pytest to crawl for fixtures.
2023-12-13 10:13:03 +11:00
d7cede6c28 chore/fix: bump fastapi to 0.105.0
This fixes a problem with `Annotated` which prevented us from using pydantic's `Field` to specify a discriminator for a union. We had to use FastAPI's `Body` as a workaround.
2023-12-13 09:48:34 +11:00
15de7c21d9 updated tests with a test for tile > image for calc_tiles_min_overlap() 2023-12-12 10:24:00 -05:00
9620f9336c updated comment 2023-12-12 10:24:00 -05:00
a64ced7b29 remove unneeded if else 2023-12-12 10:24:00 -05:00
dd7deff1a3 fix for calc_tiles_min_overlap when tile size is bigger than image size 2023-12-12 10:24:00 -05:00
612912a6c9 updated tests with a test for tile > image for calc_tiles_min_overlap() 2023-12-12 14:12:22 +00:00
bca2372280 updated comment 2023-12-12 14:02:28 +00:00
0b860582f0 remove unneeded if else 2023-12-12 14:00:06 +00:00
87ff380fe4 fix for calc_tiles_min_overlap when tile size is bigger than image size 2023-12-12 13:40:28 +00:00
2cdda1fda2 Merge remote-tracking branch 'origin/main' into feat/db/migrations 2023-12-12 17:22:52 +11:00
6caa70123d translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 96.4% (1314 of 1363 strings)

Co-authored-by: junzi <nomal.si2621.vip@qq.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-12-12 17:15:54 +11:00
7e831c8a96 Selected in View within Gallery (#5240)
* selector added

* ref and useeffect added

* scrolling done using useeffect

* fixed scroll and changed the ref name

* fixed scroll again

* created hook for scroll logic

* feat(ui): debounce metadata fetch by 300ms

This vastly reduces the network requests when using the arrow keys to quickly skim through images.

* feat(ui): extract logic to determine virtuoso scrollToIndex align

This needs to be used in `useNextPrevImage()` to ensure the scrolling puts the image at the top or bottom appropriately

* feat(ui): add debounce to image workflow hook

This was spamming network requests like the metadata query

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-12-12 17:14:28 +11:00
3d64bc886d feat(nodes): flag all tiled upscaling nodes as beta 2023-12-12 16:43:05 +11:00
1a136d6167 feat(nodes): fix classification docstrings 2023-12-12 16:43:05 +11:00
43f2837117 feat(nodes): add invocation classifications
Invocations now have a classification:
- Stable: LTS
- Beta: LTS planned, API may change
- Prototype: No LTS planned, API may change, may be removed entirely

The `@invocation` decorator has a new arg `classification`, and an enum `Classification` is added to `baseinvocation.py`.

The default is Stable; this is a non-breaking change.

The classification is presented in the node header as a hammer icon (Beta) or flask icon (prototype).

The icon has a tooltip briefly describing the classification.
2023-12-12 16:43:05 +11:00
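The classification piece reduces to a small enum plus a decorator argument, roughly as follows (the string values and the usage snippet are assumptions based on the description):

```
from enum import Enum

class Classification(str, Enum):
    Stable = "stable"        # LTS
    Beta = "beta"            # LTS planned, API may change
    Prototype = "prototype"  # no LTS planned, may be removed entirely

# Illustrative usage on a node (the real decorator takes more args):
# @invocation("my_node", classification=Classification.Beta)
# class MyNodeInvocation(BaseInvocation): ...
```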
5f77ef7e99 feat(db): improve docstrings in migrator 2023-12-12 16:30:57 +11:00
22ccaa4e9a [Feature] Allow the model record migrate script to update existing model records (#5264)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No


## Description

1. The new model manager sqlite3-based configuration record storage
system is automatically populated with probed values from existing
models found in the models path when `invokeai-web` starts up for the
first time. However, the user's customizations of these models in
`invokeai.yaml`, including such things as the prediction type and model
description, are not automatically copied over. This PR enhances the
`invokeai-migrate-models-to-db` script so that any customized
configuration data from `invokeai.yaml` replaces the original probed
values. This script only needs to be run once, but it does not hurt to
run it additional times. In the near future, I'm going to register this
module with psychedelicious's sqlite migration system so that the update
happens automatically during database migration.

2. The SQL-based model config record system stores a JSON version of the
config, as well as several fields that are broken out into individual
columns for search/indexing purposes. This PR keeps the JSON and the
broken-out fields in sync using the `json_extract()` sqlite3 function to
populate the broken out `base`, `type`, `name`, `path` and `format`
fields in the `model_config` table.

3. Finally, this PR fixes the annoying `invokeai-web` shutdown message:
`TypeError: ModelInstallService.stop() takes 1 positional argument but 2
were given`

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

If you've run `invokeai-web` at any time since PR #5039, your
`invokeai.db` will have a `model_config` table containing probe
information from all models in the invokeai models directory as well as
those in `autoimport` (if applicable). However, any models present in
`models.yaml` whose paths are outside these directories will not be
present. To add them, and to update the description and other values
from `models.yaml`, run the command `invokeai-migrate-models-to-db`. You
should see the missing models added to the database table with the
correct information.


## Added/updated tests?

- [X] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-12 00:25:05 -05:00
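For illustration, keeping the broken-out columns in sync with the JSON config via `json_extract()` amounts to something like this (the JSON column name `config` and the key paths are assumptions based on the description):

```
import sqlite3

conn = sqlite3.connect("invokeai.db")
# Re-derive the indexed columns from the authoritative JSON blob.
conn.execute(
    """
    UPDATE model_config SET
        base   = json_extract(config, '$.base'),
        type   = json_extract(config, '$.type'),
        name   = json_extract(config, '$.name'),
        path   = json_extract(config, '$.path'),
        format = json_extract(config, '$.format')
    """
)
conn.commit()
```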
d277bd3c38 Merge branch 'main' into feat/enhance-model-db-migrate-script 2023-12-12 00:24:43 -05:00
fd4e041e7c feat: serve HTTPS when configured with ssl_certfile 2023-12-12 16:01:43 +11:00
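Uvicorn, which serves the InvokeAI API, accepts TLS options directly, so the wiring is presumably along these lines (a sketch; the import string and paths are placeholders, not the actual startup code):

```
import uvicorn

# InvokeAI reads the real ssl_certfile/ssl_keyfile values from invokeai.yaml.
uvicorn.run(
    "invokeai.app.api_app:app",
    host="0.0.0.0",
    port=9090,
    ssl_certfile="/path/to/cert.pem",
    ssl_keyfile="/path/to/key.pem",
)
```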
15a3e8076f Merge branch 'main' into feat/enhance-model-db-migrate-script 2023-12-11 23:10:04 -05:00
2fbe3a3104 fix ruff error 2023-12-11 23:04:18 -05:00
b0cfa58526 allow the model record migrate script to update existing model records 2023-12-11 22:47:19 -05:00
285ed26edd Add commands to Makefile for convenient release preparation (#5263)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

This PR does three things:

1) It separates out the script that creates the installer zipfile
(`create_installer.sh`) from the script that tags the repository with
the current release version (now called `tag_release.sh`)

2) It adds new targets to Makefile for running the installer script and
tagging.

3) It adds a `help` target that lists the Makefile targets:

```
$ make help
Developer commands:

ruff           Run ruff, fixing any safely-fixable errors and formatting
ruff-unsafe    Run ruff, fixing all fixable errors and formatting
mypy           Run mypy using the config in pyproject.toml to identify type mismatches and other coding errors
mypy-all       Run mypy ignoring the config in pyproject.toml but still ignoring missing imports
frontend-build Build the frontend in order to run on localhost:9090
frontend-dev   Run the frontend in developer mode on localhost:5173
installer-zip  Build the installer .zip file for the current version
tag-release    Tag the GitHub repository with the current version (use at release time only!)
```
`help` is also the default target so that the help message will print
out when only `make` is issued.

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Added/updated tests?

- [ ] Yes
- [X] No: not needed

## [optional] Are there any post deployment tasks we need to perform?
2023-12-11 22:33:45 -05:00
02565b9a00 Merge branch 'main' into install/release-tools 2023-12-11 22:32:28 -05:00
78a6024d6c Tiled upscaling graph - new nodes (#5234)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [x] No


## Description
Additional tile generation nodes:
CalculateImageTilesEvenSplitInvocation &
CalculateImageTilesMinimumOverlapInvocation.
Additional blending method: merge_tiles_with_seam_blending.
Updated the MergeTilesToImageInvocation node with seam blending.

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-11 22:14:15 -05:00
95198da645 fix(db): fix sqlite migrator tests on windows 2023-12-12 13:54:47 +11:00
ee1f1f3363 Merge remote-tracking branch 'origin/main' into feat/db/migrations 2023-12-12 13:39:47 +11:00
18ba7feca1 feat(db): update docstrings 2023-12-12 13:35:46 +11:00
55b0c7cdc9 feat(db): tidy migration_2 2023-12-12 13:30:29 +11:00
713a83e7da Merge branch 'main' into install/release-tools 2023-12-11 21:20:51 -05:00
f3a97e06ec add the tag_release.sh script 2023-12-11 21:11:37 -05:00
50815d36c6 feat(db): add tests for migration dependencies 2023-12-12 13:09:24 +11:00
a69f518c76 feat(db): tidy dependencies for migrations 2023-12-12 13:09:09 +11:00
18093c4f1d split installer zipfile script from tagging script; add make commands 2023-12-11 21:08:03 -05:00
0cf7fe43af feat(db): refactor migrate callbacks to use dependencies, remote pre/post callbacks 2023-12-12 12:35:42 +11:00
6063760ce2 feat(db): tweak docstring 2023-12-12 11:13:40 +11:00
c5ba4f2ea5 feat(db): remove file backups
Instead of mucking about with the filesystem, we rely on SQLite transactions to handle failed migrations.
2023-12-12 11:12:46 +11:00
3414437eea feat(db): instantiate SqliteMigrator with a SqliteDatabase
Simplifies a couple things:
- Init is more straightforward
- It's clear in the migrator that the connection we are working with is related to the SqliteDatabase
2023-12-12 10:46:08 +11:00
417db71471 feat(db): decouple SqliteDatabase from config object
- Simplify init args to path (None means use memory), logger, and verbose
- Add docstrings to SqliteDatabase (it had almost none)
- Update all usages of the class
2023-12-12 10:30:37 +11:00
afe4e55bf9 feat(db): simplify migration registration validation
With the previous change to assert that the to_version == from_version + 1, this validation can be simpler.
2023-12-12 09:52:03 +11:00
55acc16b2d feat(db): require migration versions to be consecutive 2023-12-12 09:43:09 +11:00
535ce10e99 Merge branch 'main' into tiled-upscaling-graph 2023-12-11 14:40:55 -05:00
Sam
11f4a48144 Add container GID 2023-12-11 14:30:40 -05:00
Sam
67ed4a0245 Respect CONTAINER_UID in Dockerfile chown
CONTAINER_UID is used for the user ID within the container; however, I noticed the UID was hard-coded to 1000 in the Dockerfile `chown -R` command.

This leaves the default as 1000, but allows it to be overridden by setting CONTAINER_UID.
2023-12-11 14:30:40 -05:00
fbbc1037cd missed a rename of overlap to overlap_fraction in test for even_split 2023-12-11 17:23:28 +00:00
0852fd4e88 Updated tests for even_split overlap renamed to overlap_fraction 2023-12-11 17:17:29 +00:00
c84526fae5 Fixed tests that were using round_to_8 and removed redundant tests 2023-12-11 17:05:45 +00:00
f762940335 Merge branch 'main' into tiled-upscaling-graph 2023-12-11 16:57:36 +00:00
fefb78795f - even_split overlap renamed to overlap_fraction
- min_overlap removed the * restrictions and round_to_8
- min_overlap handles tile size > image size by clipping the number of tiles to 1.
- Updated assert test on min_overlap.
2023-12-11 16:55:27 +00:00
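The clipping mentioned above is just a floor on the tile count, e.g.:

```
import math

def num_tiles(image_dim: int, tile_dim: int, min_overlap: int) -> int:
    if tile_dim >= image_dim:
        return 1  # a single tile already covers the whole image
    return math.ceil((image_dim - min_overlap) / (tile_dim - min_overlap))
```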
ef8284f009 fix(db): fix tests 2023-12-11 16:41:47 +11:00
290851016e feat(db): move sqlite_migrator into its own module 2023-12-11 16:41:30 +11:00
fa7d002175 fix(tests): fix typing issues 2023-12-11 16:22:29 +11:00
f1b6f78319 fix(db): fix windows db migrator tests
- Ensure db files are closed before manipulating them
- Use contextlib.closing() so that sqlite connections are closed on exiting the context
2023-12-11 16:14:25 +11:00
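`contextlib.closing()` turns any object with a `close()` method into a context manager, e.g.:

```
import sqlite3
from contextlib import closing

# The connection is closed when the block exits, even on error -
# important on Windows, where an open handle blocks deleting the file.
with closing(sqlite3.connect("test.db")) as conn:
    conn.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")
```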
26ab917021 fix(tests): add sqlite migrator to test fixtures 2023-12-11 16:14:25 +11:00
4f3c32a2ee fix(db): remove errant print stmts 2023-12-11 16:14:25 +11:00
77065b1ce1 feat(db): update test for migration chain for missing from 0 2023-12-11 16:14:25 +11:00
41db92b9e8 feat(db): add check for missing migration from 0 2023-12-11 16:14:25 +11:00
c823f5667b feat(db): update sqlite migrator tests 2023-12-11 16:14:25 +11:00
3227b30430 feat(db): extract non-stateful logic to class methods 2023-12-11 16:14:25 +11:00
567f107a81 feat(db): return backup_db_path, move log stmt to run_migrations 2023-12-11 16:14:25 +11:00
b3d5955bc7 fix(db): rename Migrator._migrations -> _migration_set 2023-12-11 16:14:25 +11:00
8726b203d4 fix(db): fix migration chain validation 2023-12-11 16:14:25 +11:00
b3f92e0547 fix(db): fix docstring 2023-12-11 16:14:25 +11:00
72c9a7663f fix(db): add docstring 2023-12-11 16:14:25 +11:00
fcb9e89bd7 feat(db): tidy db naming utils 2023-12-11 16:14:25 +11:00
56966d6d05 feat(db): only reinit db if migrations occurred 2023-12-11 16:14:25 +11:00
e46dc9b34e fix(db): close db conn before reinitializing 2023-12-11 16:14:25 +11:00
e461f9925e feat(db): invert backup/restore logic
Do the migration on a temp copy of the db, then back up the original and move the temp into its file.
2023-12-11 16:14:25 +11:00
abeb1bd3b3 feat(db): reduce power MigrateCallback, only gets cursor
use partial to provide extra dependencies for the image workflow migration function
2023-12-11 16:14:25 +11:00
83e820d721 feat(db): decouple from SqliteDatabase 2023-12-11 16:14:25 +11:00
f8e4b93a74 feat(db): add migration lock file 2023-12-11 16:14:25 +11:00
0710ec30cf feat(db): incorporate feedback 2023-12-11 16:14:25 +11:00
c382329e8c feat(db): move migrator out of SqliteDatabase 2023-12-11 16:14:25 +11:00
a2dc780188 feat: add script to migrate image workflows 2023-12-11 16:14:25 +11:00
abc9dc4d17 fix(tests): fix sqlite migrator backup and restore test
On Windows, we must ensure the connection to the database is closed before exiting the tempfile context.

Also, rejiggered the thing to use the file directly.
2023-12-11 16:14:25 +11:00
3c692018cd fix(db): make idempotency test actually test something 2023-12-11 16:14:25 +11:00
3ba3c1918c fix(db): remove duplicated test case 2023-12-11 16:14:25 +11:00
f2c6819d68 feat(db): add SQLiteMigrator to perform db migrations 2023-12-11 16:14:25 +11:00
ef807cf63a Refactor model manager: model installer component (#5171)
## What type of PR is this? (check all applicable)

- [X] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

This is the next phase of the model manager refactor, as discussed with
@psychedelicious and @RyanJDick. This implements the model installer,
which is responsible for managing model weights on disk and installing
new models.

Currently only installation of local files and directories is supported.
Remote installation will be implemented after the queued download
manager is reviewed and approved.

Please see the documentation located at
[docs/contributing/MODEL_MANAGER.md](8695ad6f59/docs/contributing/MODEL_MANAGER.md (model-installation))
for an explanation of how this module works.

Things that have changed relative to the current implementation.

1. Model importation runs in a background thread. Access to the
installation status is through a ModelInstallJob object returned by the
`import_model()` call. In addition, the installation process generates a
series of `model_install` events on the event bus.
2. `model_install_progress` events are documented, but not currently
issued. These will be issued when background downloading is implemented.
3. The model installer currently runs in parallel to the current model
manager. The frontend continues to use `configs/models.yaml` and ignores
what is in the `model_config` table of `invokeai.db`.
4. When the installer is initialized at app startup time, it
synchronizes its database to the contents of the InvokeAI `models`
directory. The current model manager does this as well, so you will see
two log messages indicating that this directory is being scanned.


## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

You can test using the FastAPI swagger pages at
http://localhost:9090/docs. Use the calls listed under
`model_manager_v2`. Be aware that only installation of local models
(indicated by their file or directory path) are currently supported.

## Added/updated tests?

- [X] Yes -- see
`tests/app/services/model_install/test_model_install.py`
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-10 23:16:39 -05:00
bbcd58e681 Merge branch 'refactor/model-manager-3' of github.com:invoke-ai/InvokeAI into refactor/model-manager-3 2023-12-10 21:34:14 -05:00
36043bf38b fixed docstring in probe module 2023-12-10 21:33:54 -05:00
fd68c47920 Merge branch 'main' into refactor/model-manager-3 2023-12-10 21:26:44 -05:00
c5c975c7a9 fix(installer): fix exit on new version 2023-12-11 12:30:13 +11:00
41ad13c282 feat(installer): do not print when aliasing python
Potentially confusing and not useful
2023-12-11 12:30:13 +11:00
e9d7e6bdd5 feat(installer): make active venv error red instead of yellow 2023-12-11 12:30:13 +11:00
49b74d189e feat(installer): improve messages, simplify script
- Color outputs
- Clarify messages
- Do not offer to use existing frontend build (insurance - prevents accidentally using old build)
2023-12-11 12:30:13 +11:00
179bc64490 feat(create_installer): remove extraneous conditional
Using `-f` is functionally equivalent to first checking if the dir exists before removing it. We just want to ensure the build dir doesn't exist.
2023-12-11 12:30:13 +11:00
1feab3da37 fix(installer): update msg in create_installer
More accurate/clearer messages
2023-12-11 12:30:13 +11:00
0a15f3fc35 fix(tests): remove test for frontend build 2023-12-11 12:30:13 +11:00
daf00efa4d fix(api): only attempt to serve UI build if it exists 2023-12-11 12:30:13 +11:00
55cfb879d0 feat: no frontend build in repo
In other words, build frontend when creating installer.

Changes to `create_installer.sh`

- If `python` is not in `PATH` but `python3` is, alias them (well, via function). This is needed on some machines to run the installer without symlinking to `python3`.
- Make the messages about pushing tags clearer. The script force-pushes, so it's possible to accidentally take destructive action. I'm not sure how to otherwise prevent damage, so I just added a warning.
- Print out `pwd` when prompting about being in the `installer` dir.
- Rebuild the frontend - if there is already a frontend build, first checks if the user wants to rebuild it.
- Checks for existence of `../build` dir before deleting - if the dir doesn't exist, the script errors and exits at this point.
- Format and spell check.

Other changes:

- Ignore `dist/` folder.
- Delete `dist/`.

**Note: you may need to use `git rm --cached invokeai/app/frontend/web/dist/` if git still wants to track `dist/`.**
2023-12-11 12:30:13 +11:00
de2879f602 port new code for detecting sdxl-based embeddings 2023-12-10 15:48:02 -05:00
3b1ff4a7f4 resolve test failure caused by renamed sqlite_database module 2023-12-10 12:59:00 -05:00
d7f7fbc8c2 Merge branch 'main' into refactor/model-manager-3 2023-12-10 12:55:28 -05:00
e2567a7e31 Merge branch 'refactor/model-manager-3' of github.com:invoke-ai/InvokeAI into refactor/model-manager-3 2023-12-10 12:55:24 -05:00
2f3457c02a rename installer __del__() to stop(). Improve probe error messages 2023-12-10 12:55:01 -05:00
aab6369ffe Update invokeai/backend/model_manager/search.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2023-12-10 12:24:50 -05:00
4c97b619fb Update tiles.py
merge with main
2023-12-09 22:05:23 +00:00
abdd840fb9 Merge branch 'main' into tiled-upscaling-graph 2023-12-09 22:03:18 +00:00
e656768eb2 more fixes from code review 2023-12-09 21:56:31 +00:00
494c2a9b05 Updates based on code review by @RyanJDick 2023-12-09 18:38:07 +00:00
40d4c7c8e1 fix(ui): add validation to field value reducers (#5256)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

Insurance against invalid inputs. Closes #5250
2023-12-09 11:42:32 +05:30
076284c26f fix(ui): add validation to field value reducers
Insurance against invalid inputs. Closes #5250
2023-12-09 17:09:02 +11:00
1af4260ab6 fix(ui): fix workflow saving
'id' should not be omitted when building a workflow, it makes workflows always save as a copy
2023-12-09 16:35:44 +11:00
08ef71a74e fix(tests): mark non-test-case classes as such
Because their names start with "test", we need to use `__test__ = False` to tell pytest to not treat them as test cases.
2023-12-09 16:31:41 +11:00
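That is (class name illustrative):

```
class TestNodeHelper:  # pytest would collect this because of the "Test" prefix...
    __test__ = False   # ...so we opt out explicitly
```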
8f6e2c0c85 fix(tests): add versions to test nodes
Fixes a warning about missing version.
2023-12-09 16:31:41 +11:00
0ac33f36ef fix(tests): fix pydantic warning about deprecated fields
Calling `inspect.getmembers()` on a pydantic field results in `getattr` being called on all members of the field. Pydantic has some attrs that are marked deprecated.

In our test suite, we do not filter deprecation warnings, so this is surfaced.

Use a context manager to ignore deprecation warnings when calling the function.
2023-12-09 16:31:41 +11:00
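The pattern is roughly (function name hypothetical):

```
import inspect
import warnings

def members_without_deprecation_noise(obj):
    # getmembers() calls getattr on every attr, including pydantic's
    # deprecated ones, which would otherwise emit DeprecationWarnings.
    with warnings.catch_warnings():
        warnings.simplefilter("ignore", DeprecationWarning)
        return inspect.getmembers(obj)
```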
9661fa5f76 feat(ui): add eslint unused-imports plugin
Provides autofix for unused imports
2023-12-09 16:12:00 +11:00
ca07449fb4 fix(ui): add typeguard for action.payload
In the latest redux, unknown actions are typed as `unknown`. This forces type-safety upon us, requiring us to be more careful about the shape of actions.

In this case, we don't know if the rejection has a payload or what shape it may be in, so we need to do runtime checks. This is implemented with a simple zod schema, but probably the right way to handle this is to have consistent types in our RTK-Query error logic.
2023-12-09 16:09:26 +11:00
fb39f621c6 feat(ui): bump redux-remember 2023-12-09 16:09:26 +11:00
977d309692 fix(ui): fix memoized selectors
Some had the memoize options twice.
2023-12-09 16:09:26 +11:00
72cb8b83fe feat(ui): upgrade redux and RTK
There are a few breaking changes, which I've addressed.

The vast majority of changes are related to new handling of `reselect`'s `createSelector` options.

For better or worse, we memoize just about all our selectors using lodash `isEqual` for `resultEqualityCheck`. The upgrade requires we explicitly set the `memoize` option to `lruMemoize` to continue using lodash here.

Doing that required changing our `defaultSelectorOptions`.

Instead of changing that and finding dozens of instances where we weren't using that and instead were defining selector options manually, I've created a pre-configured selector: `createMemoizedSelector`.

This is now used everywhere instead of `createSelector`.
2023-12-09 16:09:26 +11:00
99f14b1dfe fix(ui): remove .ladle from tsconfig
was testing this out and forgot to remove
2023-12-09 16:03:09 +11:00
95a3c89a56 chore(ui): lint 2023-12-09 16:03:09 +11:00
b271474812 feat(ui): bump deps 2023-12-09 16:03:09 +11:00
2272925607 feat(ui): disable storybook telemetry 2023-12-09 16:03:09 +11:00
5902a52e40 feat(ui): add storybook 2023-12-09 16:03:09 +11:00
5140056b59 fix(actions): fix lint-frontend 2023-12-09 16:00:37 +11:00
f17b3d0068 feat(ui): migrate to pnpm
- update all scripts
- update the frontend GH action
- remove yarn-related files
- update ignores

Yarn classic + storybook has some weird module resolution issue due to how it hoists dependencies.

See https://github.com/storybookjs/storybook/issues/22431#issuecomment-1630086092

When I did the `package.json` solution in this thread, it broke vite. Next option is to upgrade to yarn 3 or pnpm. I chose pnpm.
2023-12-09 16:00:37 +11:00
5b9d25f57e translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2023-12-09 13:47:40 +11:00
73dbb8792e translationBot(ui): update translation (Italian)
Currently translated at 97.2% (1287 of 1324 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-12-09 13:47:40 +11:00
fc6cebb975 fix(ui): fix extra attrs added to workflow payload 2023-12-09 11:10:16 +11:00
06104f3851 fix(ui): disallow loading/deleting workflow if already open 2023-12-09 11:10:16 +11:00
6e028d691a fix(ui): use translation for unnamed workflows 2023-12-09 11:10:16 +11:00
6d176601cc feat(ui): track & indicate workflow saved status 2023-12-09 11:10:16 +11:00
4627a7c75f tidy(ui): remove unused components 2023-12-09 11:10:16 +11:00
d9a0efb20b chore(ui): typegen 2023-12-09 11:10:16 +11:00
7436aa8e3a feat(workflow_records): do not use default_factory for workflow id
Using default_factory to autogenerate UUIDs doesn't make sense here, and results in awkward TypeScript types.

Remove the default factory and instead manually create a UUID for workflow id. There are only two places where this needs to happen so it's not a big change.
2023-12-09 11:10:16 +11:00
d75d3885c3 fix(ui): fix typo in uiPersistDenylist 2023-12-09 11:10:16 +11:00
db4763a742 feat(ui): use templates for edge validation of workflows
This addresses an edge case where:
1. the workflow references fields that are present on the workflow's nodes, but not on the invocation templates for those nodes and
2. The invocation template for that type does exist

This should be a fairly obscure edge case, but could happen if a user fiddled around with the workflow manually.

I ran into it as a result of two nodes having accidentally mixed up their invocation types, a problem introduced with a wonky merge commit.
2023-12-09 11:10:16 +11:00
13c9f8ffb7 fix(nodes): fix mismatched invocation decorator
This got messed up during a merge commit
2023-12-09 11:10:16 +11:00
e4f67628c0 feat(ui): revise workflow editor buttons
- Add menu to top-right of editor, save/saveas/download/upload/reset/settings moved in here
- Add workflow name to top-center
2023-12-09 11:10:16 +11:00
283bb73418 feat(ui): improve save/as workflow hook
Use a persistent updating toast to indicate saving progress.
2023-12-09 11:10:16 +11:00
5b5a71d40c fix(ui): do not append "(copy)" to workflow name when saving 2023-12-09 11:10:16 +11:00
61060f032a feat(ui): abstract out the global menu close trigger
This logic is moved into a hook.

This is needed for our context menus to close when the user clicks something in reactflow. It needed to be extended to support menus also.
2023-12-09 11:10:16 +11:00
3423b5848f fix(ui): do not disable the metadata and workflow tabs in viewer
Disabling these introduces an issue where, if you were on an image with a workflow/metadata, then switch to one without, you can end up on a disabled tab. This could potentially cause a runtime error.
2023-12-09 11:10:16 +11:00
fd8d1e13a0 feat(ui): clarify workflow building node filter 2023-12-09 11:10:16 +11:00
c42d692ea6 feat: workflow library (#5148)
* chore: bump pydantic to 2.5.2

This release fixes pydantic/pydantic#8175 and allows us to use `JsonValue`

* fix(ui): exclude public/en.json from prettier config

* fix(workflow_records): fix SQLite workflow insertion to ignore duplicates

* feat(backend): update workflows handling

Update workflows handling for Workflow Library.

**Updated Workflow Storage**

"Embedded Workflows" are workflows associated with images, and are now only stored in the image files. "Library Workflows" are not associated with images, and are stored only in DB.

This works out nicely. We have always saved workflows to files, but recently began saving them to the DB in addition to in image files. When that happened, we stopped reading workflows from files, so all the workflows that only existed in images were inaccessible. With this change, access to those workflows is restored, and no workflows are lost.

**Updated Workflow Handling in Nodes**

Prior to this change, workflows were embedded in images by passing the whole workflow JSON to a special workflow field on a node. In the node's `invoke()` function, the node was able to access this workflow and save it with the image. This (inaccurately) models workflows as a property of an image and is rather awkward technically.

A workflow is now a property of a batch/session queue item. It is available in the InvocationContext and therefore available to all nodes during `invoke()`.

**Database Migrations**

Added a `SQLiteMigrator` class to handle database migrations. Migrations were needed to accommodate the DB-related changes in this PR. See the code for details.

The `images`, `workflows` and `session_queue` tables required migrations for this PR, and are using the new migrator. Other tables/services are still creating tables themselves. A followup PR will adapt them to use the migrator.

**Other/Support Changes**

- Add a `has_workflow` column to `images` table to indicate that the image has an embedded workflow.
- Add handling for retrieving the workflow from an image in python. The image file must be fetched, the workflow extracted, and then sent to the client, avoiding the need for the browser to parse the image file. With the `has_workflow` column, the UI knows if there is a workflow to be fetched, and only fetches when the user requests to load the workflow.
- Add route to get the workflow from an image
- Add CRUD service/routes for the library workflows
- `workflow_images` table and services removed (no longer needed now that embedded workflows are not in the DB)

* feat(ui): updated workflow handling (WIP)

Clientside updates for the backend workflow changes.

Includes roughed-out workflow library UI.

* feat: revert SQLiteMigrator class

Will pursue this in a separate PR.

* feat(nodes): do not overwrite custom node module names

Use a different, simpler method to detect if a node is custom.

* feat(nodes): restore WithWorkflow as no-op class

This class is deprecated and no longer needed. Set its workflow attr value to None (meaning it is now a no-op), and issue a warning when an invocation subclasses it.

* fix(nodes): fix get_workflow from queue item dict func

* feat(backend): add WorkflowRecordListItemDTO

This is the id, name, description, created at and updated at workflow columns/attrs. Used to display lists of workflows.

* chore(ui): typegen

* feat(ui): add workflow loading, deleting to workflow library UI

* feat(ui): workflow library pagination button styles

* wip

* feat: workflow library WIP

- Save to library
- Duplicate
- Filter/sort
- UI/queries

* feat: workflow library - system graphs - wip

* feat(backend): sync system workflows to db

* fix: merge conflicts

* feat: simplify default workflows

- Rename "system" -> "default"
- Simplify syncing logic
- Update UI to match

* feat(workflows): update default workflows

- Update TextToImage_SD15
- Add TextToImage_SDXL
- Add README

* feat(ui): refine workflow list UI

* fix(workflow_records): typo

* fix(tests): fix tests

* feat(ui): clean up workflow library hooks

* fix(db): fix mis-ordered db cleanup step

It was happening before pruning queue items - should happen afterwards, else you have to restart the app again to free disk space made available by the pruning.

* feat(ui): tweak reset workflow editor translations

* feat(ui): split out workflow redux state

The `nodes` slice is a rather complicated slice. Removing `workflow` makes it a bit more reasonable.

Also helps to flatten state out a bit.

* docs: update default workflows README

* fix: tidy up unused files, unrelated changes

* fix(backend): revert unrelated service organisational changes

* feat(backend): workflow_records.get_many arg "filter_text" -> "query"

* feat(ui): use custom hook in current image buttons

Already in use elsewhere, forgot to use it here.

* fix(ui): remove commented out property

* fix(ui): fix workflow loading

- Different handling for loading from library vs external
- Fix bug where only nodes and edges loaded

* fix(ui): fix save/save-as workflow naming

* fix(ui): fix circular dependency

* fix(db): fix bug with releasing without lock in db.clean()

* fix(db): remove extraneous lock

* chore: bump ruff

* fix(workflow_records): default `category` to `WorkflowCategory.User`

This allows old workflows to validate when reading them from the db or image files.

* hide workflow library buttons if feature is disabled

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-12-09 09:48:38 +11:00
5f37176938 ruff formatting 2023-12-08 19:40:10 +00:00
375a91db32 further updated tests 2023-12-08 19:38:16 +00:00
b7ba426249 Fixed some params on tile gen tests on tests 2023-12-08 18:53:28 +00:00
d3ad356c6a Ruff Formatting
Fix pyTest issues
2023-12-08 18:31:33 +00:00
fdb97c1d02 Merge branch 'main' into tiled-upscaling-graph 2023-12-08 18:22:05 +00:00
8cda42ab0a ruff formatting 2023-12-08 18:17:40 +00:00
fed2bdafeb Added Defaults to calc_tiles_min_overlap for overlap and round
Added tests for min_overlap and even_split tile gen
2023-12-08 18:16:13 +00:00
9ba5752770 fix link to xpuct/deliberate 2023-12-08 06:46:58 -08:00
8648c2c42e Update communityNodes.md with VeyDlin's nodes 2023-12-08 05:34:19 -08:00
b519b6e1e0 add middleware to handle 403 errors (#5245)
* add middleware to handle 403 errors

* remove log

* add logic to warn the user if not all requested images could be deleted

* lint

* fix copy

* feat(ui): simplify batchEnqueuedListener error toast logic

* feat(ui): use translations for error messages

* chore(ui): lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2023-12-07 19:26:15 -05:00
913c68982a Merge branch 'refactor/model-manager-3' of github.com:invoke-ai/InvokeAI into refactor/model-manager-3 2023-12-06 22:23:49 -05:00
6e1e67aa72 remove source filtering from list_models() 2023-12-06 22:23:08 -05:00
ee6fbabbfb Merge branch 'main' into refactor/model-manager-3 2023-12-06 22:20:06 -05:00
db58efbe65 translationBot(ui): update translation (German)
Currently translated at 62.9% (830 of 1319 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-12-07 00:09:57 +11:00
cd15d8b7a9 ruff formatting
reformatted due to ruff errors
2023-12-06 08:10:22 +00:00
3b4b4ba40a Merge branch 'main' into tiled-upscaling-graph 2023-12-06 08:00:31 +00:00
eecee472b1 chore(deps-dev): bump vite from 4.5.0 to 4.5.1 in /invokeai/frontend/web
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 4.5.0 to 4.5.1.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v4.5.1/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v4.5.1/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-12-06 16:57:35 +11:00
7b314116be feat(ui): remove husky (#5235)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

You can only have one pre-commit setup on a repo. Removing husky so it
doesn't interfere with the python pre-commit.

## Related Tickets & Documents


- Related Issue
https://discord.com/channels/1020123559063990373/1149513625321603162/1181752622684831884
2023-12-06 09:03:43 +05:30
bc6d4111a2 feat(ui): remove husky
You can only have one pre-commit setup on a repo. Removing husky so it doesn't interfere with the python pre-commit.
2023-12-06 14:05:50 +11:00
674d9796d0 First check-in of new tile nodes
- calc_tiles_even_split
- calc_tiles_min_overlap
- merge_tiles_with_seam_blending
Update MergeTilesToImageInvocation with seam blending
2023-12-05 21:03:16 +00:00
5816320645 Merge branch 'main' into tiled-upscaling-graph 2023-12-05 15:32:49 +00:00
14254e8be8 First check-in of new tile nodes
- calc_tiles_even_split
- calc_tiles_min_overlap
- merge_tiles_with_seam_blending
Update MergeTilesToImageInvocation with seam blending
2023-12-05 12:29:55 +00:00
e990235d32 translationBot(ui): update translation (Korean)
Currently translated at 5.2% (70 of 1321 strings)

Co-authored-by: 이승석 <vidicwb@ajou.ac.kr>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ko/
Translation: InvokeAI/Web UI
2023-12-05 16:00:03 +11:00
5f122186bd translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 99.8% (1317 of 1319 strings)

Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-12-05 16:00:03 +11:00
3bfaee9c57 Merge branch 'main' into refactor/model-manager-3 2023-12-04 22:51:45 -05:00
1ca0901cbe Ensure that fetching a logger doesn't reset its loglevel to default (#5222)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because: minor bug

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

While writing regression tests for the queued downloader I discovered
that when using `InvokeAILogger.get_logger()` to fetch a
previously-created logger, it resets that logger's log level to the
default specified in the global config. In other words, this didn't work
as expected:

```
import logging
from invokeai.backend.util.logging import InvokeAILogger
logger1 = InvokeAILogger.get_logger('TestLogger')
logger1.setLevel(logging.DEBUG)
logger2 = InvokeAILogger.get_logger('TestLogger')
assert logger1.level == logging.DEBUG
assert logger2.level == logging.DEBUG
```

This PR fixes the problem and adds a corresponding pytest.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Added/updated tests?

- [X] Yes
- [ ] No

## [optional] Are there any post deployment tasks we need to perform?
2023-12-04 22:50:59 -05:00
2d7555b7b8 Merge branch 'bugfix/log-levels' of github.com:invoke-ai/InvokeAI into bugfix/log-levels 2023-12-04 22:42:06 -05:00
3c7d1fcd32 clean up get_logger() call 2023-12-04 22:41:59 -05:00
c7fa2db556 Merge branch 'main' into bugfix/log-levels 2023-12-04 22:01:42 -05:00
3b06cc6782 reformatted using newer version of ruff 2023-12-04 21:15:56 -05:00
7c9f48b84d fix ruff check 2023-12-04 21:14:02 -05:00
fed2bf6dab Merge branch 'refactor/model-manager-3' of github.com:invoke-ai/InvokeAI into refactor/model-manager-3 2023-12-04 21:12:40 -05:00
2b583ffcdf implement review suggestions from @RyanjDick 2023-12-04 21:12:10 -05:00
6f46d15c05 Update invokeai/app/services/model_install/model_install_base.py
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
2023-12-04 20:09:41 -05:00
018ccebd6f make ModelLocalSource comparisons work across platforms 2023-12-04 19:07:25 -05:00
620b2d477a implement suggestions from first review by @psychedelicious 2023-12-04 17:08:33 -05:00
f73b678aae Merge branch 'main' into refactor/model-manager-3 2023-12-04 17:06:36 -05:00
bdb0d13a2d fix import order 2023-12-02 11:56:41 -05:00
2d2ef5d72c ensure that setting loglevel on one logger doesn't change others 2023-12-02 11:48:51 -05:00
e46ac45741 port probing changes from main model_probe.py to refactored probe.py 2023-12-01 09:19:24 -05:00
75089b7a9d merge in changes from main 2023-12-01 09:18:07 -05:00
778fd55f0d Merge branch 'main' into refactor/model-manager-3 2023-12-01 09:15:18 -05:00
bb87c988cb Change input field ordering of CropLatentsCoreInvocation to match ImageCropInvocation. 2023-11-29 10:23:55 -05:00
049b0239da Re-organize merge_tiles_with_linear_blending(...) to merge rows horizontally first and then vertically. This change achieves slightly more natural blending on the corners where 4 tiles overlap. 2023-11-29 09:48:56 -05:00
932de08fc0 Infer a tight-fitting output image size from the passed tiles in MergeTilesToImageInvocation. 2023-11-29 09:48:56 -05:00
303791d5c6 Add width and height fields to TileToPropertiesInvocation output to avoid having to calculate them with math nodes. 2023-11-29 09:48:56 -05:00
7e4a689370 Update tiling nodes to use width-before-height field ordering convention. 2023-11-29 09:48:56 -05:00
04e0fefdee Rename CropLatentsInvocation -> CropLatentsCoreInvocation to prevent conflict with custom node. And other minor tidying. 2023-11-29 09:48:56 -05:00
9b4e6da226 Improve documentation of CropLatentsInvocation. 2023-11-29 09:48:56 -05:00
e1c53a2465 Use LATENT_SCALE_FACTOR = 8 constant in CropLatentsInvocation. 2023-11-29 09:48:55 -05:00
121b930abf Copy CropLatentsInvocation from 74647fa9c1/images_to_grids.py (L1117C1-L1167C80). 2023-11-29 09:48:55 -05:00
436560da39 (minor) Add 'Invocation' suffix to all tiling node classes. 2023-11-29 09:48:55 -05:00
3980f79ed5 Tidy up tiles invocations, add documentation. 2023-11-29 09:48:55 -05:00
1d0dc7eeab Add unit tests for merge_tiles_with_linear_blending(...). 2023-11-29 09:48:55 -05:00
1f63fa8236 Add unit tests for calc_tiles_with_overlap(...) and fix a bug in its implementation. 2023-11-29 09:48:55 -05:00
caf47dee09 Add unit tests for tile paste(...) util function. 2023-11-29 09:48:55 -05:00
d742479810 Add nodes for tile splitting and merging. The main motivation for these nodes is for use in tiled upscaling workflows. 2023-11-29 09:48:55 -05:00
ecd3dcd5df Merge branch 'main' into refactor/model-manager-3 2023-11-27 22:15:51 -05:00
a79e814c8d Merge branch 'main' into refactor/model-manager-3 2023-11-27 16:06:42 -05:00
3fe1bef5cd Merge branch 'main' into refactor/model-manager-3 2023-11-27 08:08:01 -05:00
dbd0151c0e make test file path comparison work on windows systems (another fix) 2023-11-26 18:52:25 -05:00
6da508f147 make test file path comparison work on windows systems 2023-11-26 18:40:22 -05:00
8ef596eac7 further changes for ruff 2023-11-26 17:13:31 -05:00
8f4f4d48d5 fix import unsorted import block issues in the tests 2023-11-26 13:37:47 -05:00
60eae7443a Merge branch 'main' into refactor/model-manager-3 2023-11-26 13:33:41 -05:00
8695ad6f59 all features implemented, docs updated, ready for review 2023-11-26 13:18:21 -05:00
dc5c452ef9 rename test/nodes to test/aa_nodes to ensure these tests run first 2023-11-26 09:38:30 -05:00
8aefe2cefe import_model and list_install_jobs router APIs written 2023-11-25 21:45:59 -05:00
ec510d34b5 fix model probing for controlnet checkpoint legacy config files 2023-11-25 15:53:22 -05:00
19baea1883 all backend features in place; config scanning is failing on controlnet 2023-11-24 19:37:46 -05:00
80bc9be3ab make install_path and register_path work; refactor model probing 2023-11-23 23:15:32 -05:00
8c7a7bc897 Merge branch 'main' into refactor/model-manager-3 2023-11-22 22:29:23 -05:00
4aab728590 move name/description logic into model_probe.py 2023-11-22 22:29:02 -05:00
9cf060115d Merge branch 'main' into refactor/model-manager-3 2023-11-22 22:28:31 -05:00
9ea3126118 start implementation of installer 2023-11-20 23:02:30 -05:00
6c56233edc define install abstract base class 2023-11-20 21:57:10 -05:00
1531 changed files with 107432 additions and 292682 deletions

8
.github/CODEOWNERS vendored
View File

@ -1,5 +1,5 @@
# continuous integration
/.github/workflows/ @lstein @blessedcoolant @hipsterusername
/.github/workflows/ @lstein @blessedcoolant @hipsterusername @ebr
# documentation
/docs/ @lstein @blessedcoolant @hipsterusername @Millu
@ -10,7 +10,7 @@
# installation and configuration
/pyproject.toml @lstein @blessedcoolant @hipsterusername
/docker/ @lstein @blessedcoolant @hipsterusername
/docker/ @lstein @blessedcoolant @hipsterusername @ebr
/scripts/ @ebr @lstein @hipsterusername
/installer/ @lstein @ebr @hipsterusername
/invokeai/assets @lstein @ebr @hipsterusername
@ -26,9 +26,7 @@
# front ends
/invokeai/frontend/CLI @lstein @hipsterusername
/invokeai/frontend/install @lstein @ebr @hipsterusername
/invokeai/frontend/install @lstein @ebr @hipsterusername
/invokeai/frontend/merge @lstein @blessedcoolant @hipsterusername
/invokeai/frontend/training @lstein @blessedcoolant @hipsterusername
/invokeai/frontend/web @psychedelicious @blessedcoolant @maryhipp @hipsterusername

View File

@ -6,10 +6,6 @@ title: '[bug]: '
labels: ['bug']
# assignees:
# - moderator_bot
# - lstein
body:
- type: markdown
attributes:
@ -18,10 +14,9 @@ body:
- type: checkboxes
attributes:
label: Is there an existing issue for this?
label: Is there an existing issue for this problem?
description: |
Please use the [search function](https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen+label%3Abug)
first to see if an issue already exists for the bug you encountered.
Please [search](https://github.com/invoke-ai/InvokeAI/issues) first to see if an issue already exists for the problem.
options:
- label: I have searched the existing issues
required: true
@ -33,80 +28,119 @@ body:
- type: dropdown
id: os_dropdown
attributes:
label: OS
description: Which operating System did you use when the bug occured
label: Operating system
description: Your computer's operating system.
multiple: false
options:
- 'Linux'
- 'Windows'
- 'macOS'
- 'other'
validations:
required: true
- type: dropdown
id: gpu_dropdown
attributes:
label: GPU
description: Which kind of Graphic-Adapter is your System using
label: GPU vendor
description: Your GPU's vendor.
multiple: false
options:
- 'cuda'
- 'amd'
- 'mps'
- 'cpu'
- 'Nvidia (CUDA)'
- 'AMD (ROCm)'
- 'Apple Silicon (MPS)'
- 'None (CPU)'
validations:
required: true
- type: input
id: gpu_model
attributes:
label: GPU model
description: Your GPU's model. If on Apple Silicon, this is your Mac's chip. Leave blank if on CPU.
placeholder: ex. RTX 2080 Ti, Mac M1 Pro
validations:
required: false
- type: input
id: vram
attributes:
label: VRAM
description: Size of the VRAM if known
label: GPU VRAM
description: Your GPU's VRAM. If on Apple Silicon, this is your Mac's unified memory. Leave blank if on CPU.
placeholder: 8GB
validations:
required: false
- type: input
id: version-number
attributes:
label: What version did you experience this issue on?
label: Version number
description: |
Please share the version of Invoke AI that you experienced the issue on. If this is not the latest version, please update first to confirm the issue still exists. If you are testing main, please include the commit hash instead.
placeholder: X.X.X
The version of Invoke you have installed. If it is not the latest version, please update and try again to confirm the issue still exists. If you are testing main, please include the commit hash instead.
placeholder: ex. 3.6.1
validations:
required: true
- type: input
id: browser-version
attributes:
label: Browser
description: Your web browser and version.
placeholder: ex. Firefox 123.0b3
validations:
required: true
- type: textarea
id: python-deps
attributes:
label: Python dependencies
description: |
If the problem occurred during image generation, click the gear icon at the bottom left corner, click "About", click the copy button and then paste here.
validations:
required: false
- type: textarea
id: what-happened
attributes:
label: What happened?
label: What happened
description: |
Briefly describe what happened, what you expected to happen and how to reproduce this bug.
placeholder: When using the webinterface and right-clicking on button X instead of the popup-menu there error Y appears
Describe what happened. Include any relevant error messages, stack traces and screenshots here.
placeholder: I clicked button X and then Y happened.
validations:
required: true
- type: textarea
id: what-you-expected
attributes:
label: Screenshots
description: If applicable, add screenshots to help explain your problem
placeholder: this is what the result looked like <screenshot>
label: What you expected to happen
description: Describe what you expected to happen.
placeholder: I expected Z to happen.
validations:
required: true
- type: textarea
id: how-to-repro
attributes:
label: How to reproduce the problem
description: List steps to reproduce the problem.
placeholder: Start the app, generate an image with these settings, then click button X.
validations:
required: false
- type: textarea
id: additional-context
attributes:
label: Additional context
description: Add any other context about the problem here
description: Any other context that might help us to understand the problem.
placeholder: Only happens when there is full moon and Friday the 13th on Christmas Eve 🎅🏻
validations:
required: false
- type: input
id: contact
id: discord-username
attributes:
label: Contact Details
description: __OPTIONAL__ How can we get in touch with you if we need more info (besides this issue)?
placeholder: ex. email@example.com, discordname, twitter, ...
label: Discord username
description: If you are on the Invoke discord and would prefer to be contacted there, please provide your username.
placeholder: supercoolusername123
validations:
required: false

59
.github/pr_labels.yml vendored Normal file
View File

@ -0,0 +1,59 @@
Root:
- changed-files:
- any-glob-to-any-file: '*'
PythonDeps:
- changed-files:
- any-glob-to-any-file: 'pyproject.toml'
Python:
- changed-files:
- all-globs-to-any-file:
- 'invokeai/**'
- '!invokeai/frontend/web/**'
PythonTests:
- changed-files:
- any-glob-to-any-file: 'tests/**'
CICD:
- changed-files:
- any-glob-to-any-file: .github/**
Docker:
- changed-files:
- any-glob-to-any-file: docker/**
Installer:
- changed-files:
- any-glob-to-any-file: installer/**
Documentation:
- changed-files:
- any-glob-to-any-file: docs/**
Invocations:
- changed-files:
- any-glob-to-any-file: 'invokeai/app/invocations/**'
Backend:
- changed-files:
- any-glob-to-any-file: 'invokeai/backend/**'
Api:
- changed-files:
- any-glob-to-any-file: 'invokeai/app/api/**'
Services:
- changed-files:
- any-glob-to-any-file: 'invokeai/app/services/**'
FrontendDeps:
- changed-files:
- any-glob-to-any-file:
- '**/*/package.json'
- '**/*/pnpm-lock.yaml'
Frontend:
- changed-files:
- any-glob-to-any-file: 'invokeai/frontend/web/**'

View File

@ -42,6 +42,21 @@ Please provide steps on how to test changes, any hardware or
software specifications as well as any other pertinent information.
-->
## Merge Plan
<!--
A merge plan describes how this PR should be handled after it is approved.
Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is merged"
A merge plan is particularly important for large PRs or PRs that touch the
database in any way.
-->
## Added/updated tests?
- [ ] Yes

View File

@ -40,10 +40,14 @@ jobs:
- name: Free up more disk space on the runner
# https://github.com/actions/runner-images/issues/2840#issuecomment-1284059930
run: |
echo "----- Free space before cleanup"
df -h
sudo rm -rf /usr/share/dotnet
sudo rm -rf "$AGENT_TOOLSDIRECTORY"
sudo swapoff /mnt/swapfile
sudo rm -rf /mnt/swapfile
echo "----- Free space after cleanup"
df -h
- name: Checkout
uses: actions/checkout@v3
@ -91,6 +95,7 @@ jobs:
# password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Build container
timeout-minutes: 40
id: docker_build
uses: docker/build-push-action@v4
with:

16
.github/workflows/label-pr.yml vendored Normal file
View File

@ -0,0 +1,16 @@
name: "Pull Request Labeler"
on:
- pull_request_target
jobs:
labeler:
permissions:
contents: read
pull-requests: write
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- uses: actions/labeler@v5
with:
configuration-path: .github/pr_labels.yml

View File

@ -22,12 +22,24 @@ jobs:
runs-on: ubuntu-22.04
steps:
- name: Setup Node 18
uses: actions/setup-node@v3
uses: actions/setup-node@v4
with:
node-version: '18'
- uses: actions/checkout@v3
- run: 'yarn install --frozen-lockfile'
- run: 'yarn run lint:tsc'
- run: 'yarn run lint:madge'
- run: 'yarn run lint:eslint'
- run: 'yarn run lint:prettier'
- name: Checkout
uses: actions/checkout@v4
- name: Setup pnpm
uses: pnpm/action-setup@v2
with:
version: '8.12.1'
- name: Install dependencies
run: 'pnpm install --prefer-frozen-lockfile'
- name: Typescript
run: 'pnpm run lint:tsc'
- name: Madge
run: 'pnpm run lint:dpdm'
- name: ESLint
run: 'pnpm run lint:eslint'
- name: Prettier
run: 'pnpm run lint:prettier'
- name: Knip
run: 'pnpm run lint:knip'

View File

@ -1,13 +1,15 @@
name: PyPI Release
on:
push:
paths:
- 'invokeai/version/invokeai_version.py'
workflow_dispatch:
inputs:
publish_package:
description: 'Publish build on PyPi? [true/false]'
required: true
default: 'false'
jobs:
release:
build-and-release:
if: github.repository == 'invoke-ai/InvokeAI'
runs-on: ubuntu-22.04
env:
@ -15,19 +17,43 @@ jobs:
TWINE_PASSWORD: ${{ secrets.PYPI_API_TOKEN }}
TWINE_NON_INTERACTIVE: 1
steps:
- name: checkout sources
uses: actions/checkout@v3
- name: Checkout
uses: actions/checkout@v4
- name: install deps
- name: Setup Node 18
uses: actions/setup-node@v4
with:
node-version: '18'
- name: Setup pnpm
uses: pnpm/action-setup@v2
with:
version: '8.12.1'
- name: Install frontend dependencies
run: pnpm install --prefer-frozen-lockfile
working-directory: invokeai/frontend/web
- name: Build frontend
run: pnpm run build
working-directory: invokeai/frontend/web
- name: Install python dependencies
run: pip install --upgrade build twine
- name: build package
- name: Build python package
run: python3 -m build
- name: check distribution
- name: Upload build as workflow artifact
uses: actions/upload-artifact@v4
with:
name: dist
path: dist
- name: Check distribution
run: twine check dist/*
- name: check PyPI versions
- name: Check PyPI versions
if: github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/heads/release/')
run: |
pip install --upgrade requests
@ -36,6 +62,6 @@ jobs:
EXISTS=scripts.pypi_helper.local_on_pypi(); \
print(f'PACKAGE_EXISTS={EXISTS}')" >> $GITHUB_ENV
- name: upload package
if: env.PACKAGE_EXISTS == 'False' && env.TWINE_PASSWORD != ''
- name: Publish build on PyPi
if: env.PACKAGE_EXISTS == 'False' && env.TWINE_PASSWORD != '' && github.event.inputs.publish_package == 'true'
run: twine upload dist/*

View File

@ -58,7 +58,7 @@ jobs:
- name: Check for changed python files
id: changed-files
uses: tj-actions/changed-files@v37
uses: tj-actions/changed-files@v41
with:
files_yaml: |
python:

3
.gitignore vendored
View File

@ -16,7 +16,7 @@ __pycache__/
.Python
build/
develop-eggs/
# dist/
dist/
downloads/
eggs/
.eggs/
@ -187,3 +187,4 @@ installer/install.bat
installer/install.sh
installer/update.bat
installer/update.sh
installer/InvokeAI-Installer/

View File

@ -1,21 +1,63 @@
# simple Makefile with scripts that are otherwise hard to remember
# to use, run from the repo root `make <command>`
default: help
help:
@echo Developer commands:
@echo
@echo "ruff Run ruff, fixing any safely-fixable errors and formatting"
@echo "ruff-unsafe Run ruff, fixing all fixable errors and formatting"
@echo "mypy Run mypy using the config in pyproject.toml to identify type mismatches and other coding errors"
@echo "mypy-all Run mypy ignoring the config in pyproject.tom but still ignoring missing imports"
@echo "test" Run the unit tests.
@echo "frontend-install" Install the pnpm modules needed for the front end
@echo "frontend-build Build the frontend in order to run on localhost:9090"
@echo "frontend-dev Run the frontend in developer mode on localhost:5173"
@echo "installer-zip Build the installer .zip file for the current version"
@echo "tag-release Tag the GitHub repository with the current version (use at release time only!)"
# Runs ruff, fixing any safely-fixable errors and formatting
ruff:
ruff check . --fix
ruff format .
ruff check . --fix
ruff format .
# Runs ruff, fixing all errors it can fix and formatting
ruff-unsafe:
ruff check . --fix --unsafe-fixes
ruff format .
ruff check . --fix --unsafe-fixes
ruff format .
# Runs mypy, using the config in pyproject.toml
mypy:
mypy scripts/invokeai-web.py
mypy scripts/invokeai-web.py
# Runs mypy, ignoring the config in pyproject.toml but still ignoring missing (untyped) imports
# (many files are ignored by the config, so this is useful for checking all files)
mypy-all:
mypy scripts/invokeai-web.py --config-file= --ignore-missing-imports
mypy scripts/invokeai-web.py --config-file= --ignore-missing-imports
# Run the unit tests
test:
pytest ./tests
# Install the pnpm modules needed for the front end
frontend-install:
rm -rf invokeai/frontend/web/node_modules
cd invokeai/frontend/web && pnpm install
# Build the frontend
frontend-build:
cd invokeai/frontend/web && pnpm build
# Run the frontend in dev mode
frontend-dev:
cd invokeai/frontend/web && pnpm dev
# Installer zip file
installer-zip:
cd installer && ./create_installer.sh
# Tag the release
tag-release:
cd installer && ./tag_release.sh

View File

@ -1,10 +1,10 @@
<div align="center">
![project hero](https://github.com/invoke-ai/InvokeAI/assets/31807370/1a917d94-e099-4fa1-a70f-7dd8d0691018)
![project hero](https://github.com/invoke-ai/InvokeAI/assets/31807370/6e3728c7-e90e-4711-905c-3b55844ff5be)
# Invoke AI - Generative AI for Professional Creatives
## Professional Creative Tools for Stable Diffusion, Custom-Trained Models, and more.
To learn more about Invoke AI, get started instantly, or implement our Business solutions, visit [invoke.ai](https://invoke.ai)
# Invoke - Professional Creative AI Tools for Visual Media
## To learn more about Invoke, or implement our Business solutions, visit [invoke.com](https://www.invoke.com/about)
[![discord badge]][discord link]
@ -56,7 +56,9 @@ the foundation for multiple commercial products.
<div align="center">
![canvas preview](https://github.com/invoke-ai/InvokeAI/raw/main/docs/assets/canvas_preview.png)
![Highlighted Features - Canvas and Workflows](https://github.com/invoke-ai/InvokeAI/assets/31807370/708f7a82-084f-4860-bfbe-e2588c53548d)
</div>
@ -125,8 +127,8 @@ and go to http://localhost:9090.
You must have Python 3.10 through 3.11 installed on your machine. Earlier or
later versions are not supported.
Node.js also needs to be installed along with yarn (can be installed with
the command `npm install -g yarn` if needed)
Node.js also needs to be installed along with `pnpm` (can be installed with
the command `npm install -g pnpm` if needed)
1. Open a command-line window on your machine. The PowerShell is recommended for Windows.
2. Create a directory to install InvokeAI into. You'll need at least 15 GB of free space:
@ -167,7 +169,7 @@ the command `npm install -g yarn` if needed)
_For Linux with an AMD GPU:_
```sh
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.6
```
_For non-GPU systems:_
@ -270,7 +272,7 @@ upgrade script.** See the next section for a Windows recipe.
3. Select option [1] to upgrade to the latest release.
4. Once the upgrade is finished you will be returned to the launcher
menu. Select option [7] "Re-run the configure script to fix a broken
menu. Select option [6] "Re-run the configure script to fix a broken
install or to complete a major upgrade".
This will run the configure script against the v2.3 directory and

View File

@ -2,14 +2,17 @@
## Any environment variables supported by InvokeAI can be specified here,
## in addition to the examples below.
# INVOKEAI_ROOT is the path to a path on the local filesystem where InvokeAI will store data.
# HOST_INVOKEAI_ROOT is the path on the docker host's filesystem where InvokeAI will store data.
# Outputs will also be stored here by default.
# This **must** be an absolute path.
INVOKEAI_ROOT=
# If relative, it will be relative to the docker directory in which the docker-compose.yml file is located
#HOST_INVOKEAI_ROOT=../../invokeai-data
# INVOKEAI_ROOT is the path to the root of the InvokeAI repository within the container.
# INVOKEAI_ROOT=~/invokeai
# Get this value from your HuggingFace account settings page.
# HUGGING_FACE_HUB_TOKEN=
## optional variables specific to the docker setup.
# GPU_DRIVER=cuda # or rocm
# GPU_DRIVER=nvidia #| rocm
# CONTAINER_UID=1000

View File

@ -18,8 +18,8 @@ ENV INVOKEAI_SRC=/opt/invokeai
ENV VIRTUAL_ENV=/opt/venv/invokeai
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
ARG TORCH_VERSION=2.1.0
ARG TORCHVISION_VERSION=0.16
ARG TORCH_VERSION=2.1.2
ARG TORCHVISION_VERSION=0.16.2
ARG GPU_DRIVER=cuda
ARG TARGETPLATFORM="linux/amd64"
# unused but available
@ -35,7 +35,7 @@ RUN --mount=type=cache,target=/root/.cache/pip \
if [ "$TARGETPLATFORM" = "linux/arm64" ] || [ "$GPU_DRIVER" = "cpu" ]; then \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cpu"; \
elif [ "$GPU_DRIVER" = "rocm" ]; then \
extra_index_url_arg="--index-url https://download.pytorch.org/whl/rocm5.6"; \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/rocm5.6"; \
else \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cu121"; \
fi &&\
@ -54,19 +54,21 @@ RUN --mount=type=cache,target=/root/.cache/pip \
if [ "$GPU_DRIVER" = "cuda" ] && [ "$TARGETPLATFORM" = "linux/amd64" ]; then \
pip install -e ".[xformers]"; \
else \
pip install -e "."; \
pip install $extra_index_url_arg -e "."; \
fi
# #### Build the Web UI ------------------------------------
FROM node:18 AS web-builder
FROM node:20-slim AS web-builder
ENV PNPM_HOME="/pnpm"
ENV PATH="$PNPM_HOME:$PATH"
RUN corepack enable
WORKDIR /build
COPY invokeai/frontend/web/ ./
RUN --mount=type=cache,target=/usr/lib/node_modules \
npm install --include dev
RUN --mount=type=cache,target=/usr/lib/node_modules \
yarn vite build
RUN --mount=type=cache,target=/pnpm/store \
pnpm install --frozen-lockfile
RUN npx vite build
#### Runtime stage ---------------------------------------
@ -100,6 +102,8 @@ ENV INVOKEAI_SRC=/opt/invokeai
ENV VIRTUAL_ENV=/opt/venv/invokeai
ENV INVOKEAI_ROOT=/invokeai
ENV PATH="$VIRTUAL_ENV/bin:$INVOKEAI_SRC:$PATH"
ENV CONTAINER_UID=${CONTAINER_UID:-1000}
ENV CONTAINER_GID=${CONTAINER_GID:-1000}
# --link requires buldkit w/ dockerfile syntax 1.4
COPY --link --from=builder ${INVOKEAI_SRC} ${INVOKEAI_SRC}
@ -117,7 +121,7 @@ WORKDIR ${INVOKEAI_SRC}
RUN cd /usr/lib/$(uname -p)-linux-gnu/pkgconfig/ && ln -sf opencv4.pc opencv.pc
RUN python3 -c "from patchmatch import patch_match"
RUN mkdir -p ${INVOKEAI_ROOT} && chown -R 1000:1000 ${INVOKEAI_ROOT}
RUN mkdir -p ${INVOKEAI_ROOT} && chown -R ${CONTAINER_UID}:${CONTAINER_GID} ${INVOKEAI_ROOT}
COPY docker/docker-entrypoint.sh ./
ENTRYPOINT ["/opt/invokeai/docker-entrypoint.sh"]

View File

@ -1,6 +1,14 @@
# InvokeAI Containerized
All commands are to be run from the `docker` directory: `cd docker`
All commands should be run within the `docker` directory: `cd docker`
## Quickstart :rocket:
On a known working Linux+Docker+CUDA (Nvidia) system, execute `./run.sh` in this directory. It will take a few minutes - depending on your internet speed - to install the core models. Once the application starts up, open `http://localhost:9090` in your browser to Invoke!
For more configuration options (using an AMD GPU, custom root directory location, etc): read on.
## Detailed setup
#### Linux
@ -18,12 +26,12 @@ All commands are to be run from the `docker` directory: `cd docker`
This is done via Docker Desktop preferences
## Quickstart
### Configure Invoke environment
1. Make a copy of `env.sample` and name it `.env` (`cp env.sample .env` (Mac/Linux) or `copy example.env .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
1. Make a copy of `.env.sample` and name it `.env` (`cp .env.sample .env` (Mac/Linux) or `copy .env.sample .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
a. the desired location of the InvokeAI runtime directory, or
b. an existing, v3.0.0 compatible runtime directory.
1. `docker compose up`
1. Execute `run.sh`
The image will be built automatically if needed.
@ -37,19 +45,21 @@ The runtime directory (holding models and outputs) will be created in the locati
The Docker daemon on the system must be already set up to use the GPU. In case of Linux, this involves installing `nvidia-docker-runtime` and configuring the `nvidia` runtime as default. Steps will be different for AMD. Please see Docker documentation for the most up-to-date instructions for using your GPU with Docker.
To use an AMD GPU, set `GPU_DRIVER=rocm` in your `.env` file.
## Customize
Check the `.env.sample` file. It contains some environment variables for running in Docker. Copy it, name it `.env`, and fill it in with your own values. Next time you run `docker compose up`, your custom values will be used.
Check the `.env.sample` file. It contains some environment variables for running in Docker. Copy it, name it `.env`, and fill it in with your own values. Next time you run `run.sh`, your custom values will be used.
You can also set these values in `docker-compose.yml` directly, but `.env` will help avoid conflicts when code is updated.
Example (values are optional, but setting `INVOKEAI_ROOT` is highly recommended):
Values are optional, but setting `INVOKEAI_ROOT` is highly recommended. The default is `~/invokeai`. Example:
```bash
INVOKEAI_ROOT=/Volumes/WorkDrive/invokeai
HUGGINGFACE_TOKEN=the_actual_token
CONTAINER_UID=1000
GPU_DRIVER=cuda
GPU_DRIVER=nvidia
```
Any environment variables supported by InvokeAI can be set here - please see the [Configuration docs](https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/) for further detail.

View File

@ -1,11 +0,0 @@
#!/usr/bin/env bash
set -e
build_args=""
[[ -f ".env" ]] && build_args=$(awk '$1 ~ /\=[^$]/ {print "--build-arg " $0 " "}' .env)
echo "docker compose build args:"
echo $build_args
docker compose build $build_args

View File

@ -2,23 +2,8 @@
version: '3.8'
services:
invokeai:
x-invokeai: &invokeai
image: "local/invokeai:latest"
# edit below to run on a container runtime other than nvidia-container-runtime.
# not yet tested with rocm/AMD GPUs
# Comment out the "deploy" section to run on CPU only
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
# For AMD support, comment out the deploy section above and uncomment the devices section below:
#devices:
# - /dev/kfd:/dev/kfd
# - /dev/dri:/dev/dri
build:
context: ..
dockerfile: docker/Dockerfile
@ -36,7 +21,9 @@ services:
ports:
- "${INVOKEAI_PORT:-9090}:9090"
volumes:
- ${INVOKEAI_ROOT:-~/invokeai}:${INVOKEAI_ROOT:-/invokeai}
- type: bind
source: ${HOST_INVOKEAI_ROOT:-${INVOKEAI_ROOT:-~/invokeai}}
target: ${INVOKEAI_ROOT:-/invokeai}
- ${HF_HOME:-~/.cache/huggingface}:${HF_HOME:-/invokeai/.cache/huggingface}
# - ${INVOKEAI_MODELS_DIR:-${INVOKEAI_ROOT:-/invokeai/models}}
# - ${INVOKEAI_MODELS_CONFIG_PATH:-${INVOKEAI_ROOT:-/invokeai/configs/models.yaml}}
@ -50,3 +37,27 @@ services:
# - |
# invokeai-model-install --yes --default-only --config_file ${INVOKEAI_ROOT}/config_custom.yaml
# invokeai-nodes-web --host 0.0.0.0
services:
invokeai-nvidia:
<<: *invokeai
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
invokeai-cpu:
<<: *invokeai
profiles:
- cpu
invokeai-rocm:
<<: *invokeai
devices:
- /dev/kfd:/dev/kfd
- /dev/dri:/dev/dri
profiles:
- rocm

View File

@ -1,11 +1,32 @@
#!/usr/bin/env bash
set -e
set -e -o pipefail
# This script is provided for backwards compatibility with the old docker setup.
# it doesn't do much aside from wrapping the usual docker compose CLI.
run() {
local scriptdir=$(dirname "${BASH_SOURCE[0]}")
cd "$scriptdir" || exit 1
SCRIPTDIR=$(dirname "${BASH_SOURCE[0]}")
cd "$SCRIPTDIR" || exit 1
local build_args=""
local profile=""
docker compose up -d
docker compose logs -f
touch .env
build_args=$(awk '$1 ~ /=[^$]/ && $0 !~ /^#/ {print "--build-arg " $0 " "}' .env) &&
profile="$(awk -F '=' '/GPU_DRIVER/ {print $2}' .env)"
[[ -z "$profile" ]] && profile="nvidia"
local service_name="invokeai-$profile"
if [[ ! -z "$build_args" ]]; then
printf "%s\n" "docker compose build args:"
printf "%s\n" "$build_args"
fi
docker compose build $build_args $service_name
unset build_args
printf "%s\n" "starting service $service_name"
docker compose --profile "$profile" up -d "$service_name"
docker compose logs -f
}
run

(13 binary image files changed; previews not shown.)

View File

@ -0,0 +1,277 @@
# The InvokeAI Download Queue
The DownloadQueueService provides a multithreaded parallel download
queue for arbitrary URLs, with queue prioritization, event handling,
and restart capabilities.
## Simple Example
```
from invokeai.app.services.download import DownloadQueueService, TqdmProgress

download_queue = DownloadQueueService()

for url in ['https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/assets/a-painting-of-a-fire.png?raw=true',
            'https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/assets/birdhouse.png?raw=true',
            'https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/assets/missing.png',
            'https://civitai.com/api/download/models/152309?type=Model&format=SafeTensor',
            ]:
    # urls start downloading as soon as download() is called
    download_queue.download(source=url,
                            dest='/tmp/downloads',
                            on_progress=TqdmProgress().update
                            )

download_queue.join()  # wait for all downloads to finish

for job in download_queue.list_jobs():
    print(job.model_dump_json(exclude_none=True, indent=4), "\n")
```
Output:
```
{
"source": "https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/assets/a-painting-of-a-fire.png?raw=true",
"dest": "/tmp/downloads",
"id": 0,
"priority": 10,
"status": "completed",
"download_path": "/tmp/downloads/a-painting-of-a-fire.png",
"job_started": "2023-12-04T05:34:41.742174",
"job_ended": "2023-12-04T05:34:42.592035",
"bytes": 666734,
"total_bytes": 666734
}
{
"source": "https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/assets/birdhouse.png?raw=true",
"dest": "/tmp/downloads",
"id": 1,
"priority": 10,
"status": "completed",
"download_path": "/tmp/downloads/birdhouse.png",
"job_started": "2023-12-04T05:34:41.741975",
"job_ended": "2023-12-04T05:34:42.652841",
"bytes": 774949,
"total_bytes": 774949
}
{
"source": "https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/assets/missing.png",
"dest": "/tmp/downloads",
"id": 2,
"priority": 10,
"status": "error",
"job_started": "2023-12-04T05:34:41.742079",
"job_ended": "2023-12-04T05:34:42.147625",
"bytes": 0,
"total_bytes": 0,
"error_type": "HTTPError(Not Found)",
"error": "Traceback (most recent call last):\n File \"/home/lstein/Projects/InvokeAI/invokeai/app/services/download/download_default.py\", line 182, in _download_next_item\n self._do_download(job)\n File \"/home/lstein/Projects/InvokeAI/invokeai/app/services/download/download_default.py\", line 206, in _do_download\n raise HTTPError(resp.reason)\nrequests.exceptions.HTTPError: Not Found\n"
}
{
"source": "https://civitai.com/api/download/models/152309?type=Model&format=SafeTensor",
"dest": "/tmp/downloads",
"id": 3,
"priority": 10,
"status": "completed",
"download_path": "/tmp/downloads/xl_more_art-full_v1.safetensors",
"job_started": "2023-12-04T05:34:42.147645",
"job_ended": "2023-12-04T05:34:43.735990",
"bytes": 719020768,
"total_bytes": 719020768
}
```
## The API
The default download queue is `DownloadQueueService`, an
implementation of ABC `DownloadQueueServiceBase`. It juggles multiple
background download requests and provides facilities for interrogating
and cancelling the requests. Access to a current or past download task
is mediated via `DownloadJob` objects which report the current status
of a job request.
### The Queue Object
A default download queue is located in
`ApiDependencies.invoker.services.download_queue`. However, you can
create additional instances if you need to isolate your queue from the
main one.
```
queue = DownloadQueueService(event_bus=events)
```
`DownloadQueueService()` takes three optional arguments:
| **Argument** | **Type** | **Default** | **Description** |
|----------------|-----------------|---------------|-----------------|
| `max_parallel_dl` | int | 5 | Maximum number of simultaneous downloads allowed |
| `event_bus` | EventServiceBase | None | System-wide FastAPI event bus for reporting download events |
| `requests_session` | requests.sessions.Session | None | An alternative requests Session object to use for the download |
`max_parallel_dl` specifies how many download jobs are allowed to run
simultaneously. Each will run in a different thread of execution.
`event_bus` is an EventServiceBase, typically the one created at
InvokeAI startup. If present, download events are periodically emitted
on this bus to allow clients to follow download progress.
`requests_session` is a `requests` library Session object. It is
used for testing.
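
For instance, a minimal sketch of an isolated queue with a larger worker pool (the URL and destination below are placeholders):

```
queue = DownloadQueueService(max_parallel_dl=10)
queue.download(source='https://example.com/some_model.safetensors',  # placeholder URL
               dest='/tmp/downloads')
queue.join()
```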
### The Job object
The queue operates on a series of download job objects. These objects
specify the source and destination of the download, and keep track of
the progress of the download.
The only job type currently implemented is `DownloadJob`, a pydantic object with the
following fields:
| **Field** | **Type** | **Default** | **Description** |
|----------------|-----------------|---------------|-----------------|
| _Fields passed in at job creation time_ |
| `source` | AnyHttpUrl | | Where to download from |
| `dest` | Path | | Where to download to |
| `access_token` | str | | [optional] string containing authentication token for access |
| `on_start` | Callable | | [optional] callback when the download starts |
| `on_progress` | Callable | | [optional] callback called at intervals during download progress |
| `on_complete` | Callable | | [optional] callback called after successful download completion |
| `on_error` | Callable | | [optional] callback called after an error occurs |
| `id` | int | auto assigned | Job ID, an integer >= 0 |
| `priority` | int | 10 | Job priority. Lower priorities run before higher priorities |
| |
| _Fields updated over the course of the download task_
| `status` | DownloadJobStatus| | Status code |
| `download_path` | Path | | Path to the location of the downloaded file |
| `job_started` | float | | Timestamp for when the job started running |
| `job_ended` | float | | Timestamp for when the job completed or errored out |
| `job_sequence` | int | | A counter that is incremented each time a model is dequeued |
| `bytes` | int | 0 | Bytes downloaded so far |
| `total_bytes` | int | 0 | Total size of the file at the remote site |
| `error_type` | str | | String version of the exception that caused an error during download |
| `error` | str | | String version of the traceback associated with an error |
| `cancelled` | bool | False | Set to true if the job was cancelled by the caller|
When you create a job, you can assign it a `priority`. If multiple
jobs are queued, the job with the lowest priority runs first.
Every job has a `source` and a `dest`. `source` is a pydantic.networks AnyHttpUrl object.
The `dest` is a path on the local filesystem that specifies the
destination for the downloaded object. Its semantics are
described below.
When the job is submitted, it is assigned a numeric `id`. The id can
then be used to fetch the job object from the queue.
The `status` field is updated by the queue to indicate where the job
is in its lifecycle. Values are defined in the string enum
`DownloadJobStatus`, a symbol available from
`invokeai.app.services.download_manager`. Possible values are:
| **Value** | **String Value** | **Description** |
|--------------|---------------------|-------------------|
| `WAITING` | waiting | Job is on the queue but not yet running|
| `RUNNING` | running | The download is started |
| `COMPLETED` | completed | Job has finished its work without an error |
| `ERROR` | error | Job encountered an error and will not run again|
`job_started` and `job_ended` indicate when the job
was started (using a python timestamp) and when it completed.
In case of an error, the job's status will be set to `DownloadJobStatus.ERROR`, the text of the
Exception that caused the error will be placed in the `error_type`
field and the traceback that led to the error will be in `error`.
A cancelled job will have status `DownloadJobStatus.ERROR` and an
`error_type` field of "DownloadJobCancelledException". In addition,
the job's `cancelled` property will be set to True.
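
Putting these lifecycle fields together, a minimal polling sketch (placeholder URL; the string comparisons below rely on `DownloadJobStatus` being a string enum, as noted above):

```
import time

job = queue.download(source='https://example.com/image.png',  # placeholder URL
                     dest='/tmp/downloads')
while job.status in ('waiting', 'running'):  # string values from the table above
    time.sleep(0.5)
print(job.status, job.bytes, job.total_bytes)
```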
### Callbacks
Download jobs can be associated with a series of callbacks, each with
the signature `Callable[["DownloadJob"], None]`. The callbacks are assigned
using optional arguments `on_start`, `on_progress`, `on_complete` and
`on_error`. When the corresponding event occurs, the callback will be
invoked and passed the job. The callback will be run in a `try:`
context in the same thread as the download job. Any exceptions that
occur during execution of the callback will be caught and converted
into a log error message, thereby allowing the download to continue.
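
For example, a sketch of an `on_error` callback that logs failures (the URL and function name are placeholders):

```
def log_failure(job):  # signature per above: Callable[["DownloadJob"], None]
    print(f'download of {job.source} failed: {job.error_type}')

download_queue.download(source='https://example.com/missing.png',  # placeholder URL
                        dest='/tmp/downloads',
                        on_error=log_failure,
                        )
```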
#### `TqdmProgress`
The `invokeai.app.services.download.download_default` module defines a
class named `TqdmProgress` which can be used as an `on_progress`
handler to display a completion bar in the console. Use as follows:
```
from invokeai.app.services.download import TqdmProgress
download_queue.download(source='http://some.server.somewhere/some_file',
                        dest='/tmp/downloads',
                        on_progress=TqdmProgress().update
                        )
```
### Events
If the queue was initialized with the InvokeAI event bus (the case
when using `ApiDependencies.invoker.services.download_queue`), then
download events will also be issued on the bus. The events are:
* `download_started` -- This is issued when a job is taken off the
queue and a request is made to the remote server for the URL headers, but before any data
has been downloaded. The event payload will contain the keys `source`
and `download_path`. The latter contains the path that the URL will be
downloaded to.
* `download_progress` -- This is issued periodically as the download
runs. The payload contains the keys `source`, `download_path`,
`current_bytes` and `total_bytes`. The latter two fields can be
used to display the percent complete (see the sketch after this list).
* `download_complete` -- This is issued when the download completes
successfully. The payload contains the keys `source`, `download_path`
and `total_bytes`.
* `download_error` -- This is issued when the download stops because
of an error condition. The payload contains the fields `error_type`
and `error`. The former is the text representation of the exception,
and the latter is a traceback showing where the error occurred.
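
As a sketch, a `download_progress` handler might compute the percentage like this (the handler name is hypothetical, and the assumption that it receives the payload as a plain dict is illustrative; only the payload keys come from the list above):

```
def on_download_progress(payload: dict) -> None:
    total = payload['total_bytes']
    if total > 0:  # total_bytes may be 0 before the remote file size is known
        pct = 100 * payload['current_bytes'] / total
        print(f"{payload['source']}: {pct:.1f}% complete")
```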
### Job control
To create a job call the queue's `download()` method. You can list all
jobs using `list_jobs()`, fetch a single job by its id with
`id_to_job()`, cancel a running job with `cancel_job()`, cancel all
running jobs with `cancel_all_jobs()`, and wait for all jobs to finish
with `join()`.
#### job = queue.download(source, dest, priority, access_token)
Create a new download job and put it on the queue, returning the
DownloadJob object.
#### jobs = queue.list_jobs()
Return a list of all active and inactive `DownloadJob`s.
#### job = queue.id_to_job(id)
Return the job corresponding to the given ID.
#### queue.prune_jobs()
Remove inactive (complete or errored) jobs from the listing returned
by `list_jobs()`.
#### queue.join()
Block until all pending jobs have run to completion or errored out.
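
Taken together, a sketch of the control methods described above (placeholder URL; `cancel_job()` is assumed here to take the `DownloadJob` object):

```
job = queue.download(source='https://example.com/big_file.safetensors',  # placeholder URL
                     dest='/tmp/downloads')
queue.cancel_job(job)  # assumed to accept the DownloadJob object
queue.join()           # wait for any remaining jobs to finish
queue.prune_jobs()     # drop completed/errored jobs from list_jobs()
print(queue.list_jobs())
```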

View File

@ -9,11 +9,15 @@ complex functionality.
## Invocations Directory
InvokeAI Nodes can be found in the `invokeai/app/invocations` directory. These can be used as examples to create your own nodes.
InvokeAI Nodes can be found in the `invokeai/app/invocations` directory. These
can be used as examples to create your own nodes.
New nodes should be added to a subfolder in the `nodes` directory found at the root level of the InvokeAI installation location. Nodes added to this folder will be able to be used upon application startup.
New nodes should be added to a subfolder in the `nodes` directory found at the root
level of the InvokeAI installation location. Nodes added to this folder will be
able to be used upon application startup.
Example `nodes` subfolder structure:
Example `nodes` subfolder structure:
```py
├── __init__.py # Invoke-managed custom node loader
@ -30,14 +34,14 @@ Example `nodes` subfolder structure:
└── fancy_node.py
```
Each node folder must have an `__init__.py` file that imports its nodes. Only nodes imported in the `__init__.py` file are loaded.
See the README in the nodes folder for more examples:
Each node folder must have an `__init__.py` file that imports its nodes. Only
nodes imported in the `__init__.py` file are loaded. See the README in the nodes
folder for more examples:
```py
from .cool_node import CoolInvocation
```
## Creating A New Invocation
In order to understand the process of creating a new Invocation, let us actually
@ -131,7 +135,6 @@ from invokeai.app.invocations.primitives import ImageField
class ResizeInvocation(BaseInvocation):
'''Resizes an image'''
# Inputs
image: ImageField = InputField(description="The input image")
width: int = InputField(default=512, ge=64, le=2048, description="Width of the new image")
height: int = InputField(default=512, ge=64, le=2048, description="Height of the new image")
@ -167,7 +170,6 @@ from invokeai.app.invocations.primitives import ImageField
class ResizeInvocation(BaseInvocation):
'''Resizes an image'''
# Inputs
image: ImageField = InputField(description="The input image")
width: int = InputField(default=512, ge=64, le=2048, description="Width of the new image")
height: int = InputField(default=512, ge=64, le=2048, description="Height of the new image")
@ -197,7 +199,6 @@ from invokeai.app.invocations.image import ImageOutput
class ResizeInvocation(BaseInvocation):
'''Resizes an image'''
# Inputs
image: ImageField = InputField(description="The input image")
width: int = InputField(default=512, ge=64, le=2048, description="Width of the new image")
height: int = InputField(default=512, ge=64, le=2048, description="Height of the new image")
@ -229,30 +230,17 @@ class ResizeInvocation(BaseInvocation):
height: int = InputField(default=512, ge=64, le=2048, description="Height of the new image")
def invoke(self, context: InvocationContext) -> ImageOutput:
# Load the image using InvokeAI's predefined Image Service. Returns the PIL image.
image = context.services.images.get_pil_image(self.image.image_name)
# Load the input image as a PIL image
image = context.images.get_pil(self.image.image_name)
# Resizing the image
# Resize the image
resized_image = image.resize((self.width, self.height))
# Save the image using InvokeAI's predefined Image Service. Returns the prepared PIL image.
output_image = context.services.images.create(
image=resized_image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
# Save the image
image_dto = context.images.save(image=resized_image)
# Returning the Image
return ImageOutput(
image=ImageField(
image_name=output_image.image_name,
),
width=output_image.width,
height=output_image.height,
)
# Return an ImageOutput
return ImageOutput.build(image_dto)
```
**Note:** Do not be overwhelmed by the `ImageOutput` process. InvokeAI has a
@ -343,27 +331,25 @@ class ImageColorStringOutput(BaseInvocationOutput):
That's all there is to it.
<!-- TODO: DANGER - we probably do not want people to create their own field types, because this requires a lot of work on the frontend to accommodate.
### Custom Input Fields
Now that you know how to create your own Invocations, let us dive into slightly
more advanced topics.
While creating your own Invocations, you might run into a scenario where the
existing input types in InvokeAI do not meet your requirements. In such cases,
you can create your own input types.
existing fields in InvokeAI do not meet your requirements. In such cases, you
can create your own fields.
Let us create one as an example. Let us say we want to create a color input
field that represents a color code. But before we start on that here are some
general good practices to keep in mind.
**Good Practices**
### Best Practices
- There is no naming convention for input fields but we highly recommend that
you name it something appropriate like `ColorField`.
- It is not mandatory but it is heavily recommended to add a relevant
`docstring` to describe your input field.
`docstring` to describe your field.
- Keep your field in the same file as the Invocation that it is made for or in
another file where it is relevant.
@ -378,10 +364,13 @@ class ColorField(BaseModel):
pass
```
Perfect. Now let us create our custom inputs for our field. This is exactly
similar how you created input fields for your Invocation. All the same rules
apply. Let us create four fields representing the _red(r)_, _blue(b)_,
_green(g)_ and _alpha(a)_ channel of the color.
Perfect. Now let us create the properties for our field. This is similar to how
you created input fields for your Invocation. All the same rules apply. Let us
create four fields representing the _red(r)_, _blue(b)_, _green(g)_ and
_alpha(a)_ channel of the color.
> Technically, the properties are _also_ called fields - but in this case, it
> refers to a `pydantic` field.
```python
class ColorField(BaseModel):
@ -396,25 +385,11 @@ That's it. We now have a new input field type that we can use in our Invocations
like this.
```python
color: ColorField = Field(default=ColorField(r=0, g=0, b=0, a=0), description='Background color of an image')
color: ColorField = InputField(default=ColorField(r=0, g=0, b=0, a=0), description='Background color of an image')
```
### Custom Components For Frontend
### Using the custom field
Every backend input type should have a corresponding frontend component so the
UI knows what to render when you use a particular field type.
When you start the UI, your custom field will be automatically recognized.
If you are using existing field types, we already have components for those. So
you don't have to worry about creating anything new. But this might not always
be the case. Sometimes you might want to create new field types and have the
frontend UI deal with it in a different way.
This is where we venture into the world of React and Javascript and create our
own new components for our Invocations. Do not fear the world of JS. It's
actually pretty straightforward.
Let us create a new component for our custom color field we created above. When
we use a color field, let us say we want the UI to display a color picker for
the user to pick from rather than entering values. That is what we will build
now.
-->
Custom fields only support connection inputs in the Workflow Editor.

File diff suppressed because it is too large

View File

@ -1,75 +0,0 @@
# Contributing to the Frontend
# InvokeAI Web UI
- [InvokeAI Web UI](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#invokeai-web-ui)
- [Stack](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#stack)
- [Contributing](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#contributing)
- [Dev Environment](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#dev-environment)
- [Production builds](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#production-builds)
The UI is a fairly straightforward Typescript React app, with the Unified Canvas being more complex.
Code is located in `invokeai/frontend/web/` for review.
## Stack
State management is Redux via [Redux Toolkit](https://github.com/reduxjs/redux-toolkit). We lean heavily on RTK:
- `createAsyncThunk` for HTTP requests
- `createEntityAdapter` for fetching images and models
- `createListenerMiddleware` for workflows
The API client and associated types are generated from the OpenAPI schema. See API_CLIENT.md.
Communication with server is a mix of HTTP and [socket.io](https://github.com/socketio/socket.io-client) (with a simple socket.io redux middleware to help).
[Chakra-UI](https://github.com/chakra-ui/chakra-ui) & [Mantine](https://github.com/mantinedev/mantine) for components and styling.
[Konva](https://github.com/konvajs/react-konva) for the canvas, but we are pushing the limits of what is feasible with it (and HTML canvas in general). We plan to rebuild it with [PixiJS](https://github.com/pixijs/pixijs) to take advantage of WebGL's improved raster handling.
[Vite](https://vitejs.dev/) for bundling.
Localisation is via [i18next](https://github.com/i18next/react-i18next), but translation happens on our [Weblate](https://hosted.weblate.org/engage/invokeai/) project. Only the English source strings should be changed on this repo.
## Contributing
Thanks for your interest in contributing to the InvokeAI Web UI!
We encourage you to ping @psychedelicious and @blessedcoolant on [Discord](https://discord.gg/ZmtBAhwWhy) if you want to contribute, just to touch base and ensure your work doesn't conflict with anything else going on. The project is very active.
### Dev Environment
**Setup**
1. Install [node](https://nodejs.org/en/download/). You can confirm node is installed with:
```bash
node --version
```
2. Install [yarn classic](https://classic.yarnpkg.com/lang/en/) and confirm it is installed by running this:
```bash
npm install --global yarn
yarn --version
```
From `invokeai/frontend/web/` run `yarn install` to get everything set up.
Start everything in dev mode:
1. Ensure your virtual environment is running
2. Start the dev server: `yarn dev`
3. Start the InvokeAI Nodes backend: `python scripts/invokeai-web.py # run from the repo root`
4. Point your browser to the dev server address e.g. [http://localhost:5173/](http://localhost:5173/)
### VSCode Remote Dev
We've noticed an intermittent issue with the VSCode Remote Dev port forwarding. If you use this feature of VSCode, you may intermittently click the Invoke button and then get nothing until the request times out. Suggest disabling the IDE's port forwarding feature and doing it manually via SSH:
`ssh -L 9090:localhost:9090 -L 5173:localhost:5173 user@host`
### Production builds
For a number of technical and logistical reasons, we need to commit UI build artefacts to the repo.
If you submit a PR, there is a good chance we will ask you to include a separate commit with a build of the app.
To build for production, run `yarn build`.

View File

@ -12,7 +12,7 @@ To get started, take a look at our [new contributors checklist](newContributorCh
Once you're setup, for more information, you can review the documentation specific to your area of interest:
* #### [InvokeAI Architecure](../ARCHITECTURE.md)
* #### [Frontend Documentation](./contributingToFrontend.md)
* #### [Frontend Documentation](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web)
* #### [Node Documentation](../INVOCATIONS.md)
* #### [Local Development](../LOCAL_DEVELOPMENT.md)

53
docs/deprecated/2to3.md Normal file
View File

@ -0,0 +1,53 @@
## :octicons-log-16: Important Changes Since Version 2.3
### Nodes
Behind the scenes, InvokeAI has been completely rewritten to support
"nodes," small unitary operations that can be combined into graphs to
form arbitrary workflows. For example, there is a prompt node that
processes the prompt string and feeds it to a text2latent node that
generates a latent image. The latents are then fed to a latent2image
node that translates the latent image into a PNG.
The WebGUI has a node editor that allows you to graphically design and
execute custom node graphs. The ability to save and load graphs is
still a work in progress, but coming soon.
### Command-Line Interface Retired
All "invokeai" command-line interfaces have been retired as of version
3.4.
To launch the Web GUI from the command-line, use the command
`invokeai-web` rather than the traditional `invokeai --web`.
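For example:
```bash
# Retired 2.x syntax:
invokeai --web
# Current syntax (3.4 and later):
invokeai-web
```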
### ControlNet
This version of InvokeAI features ControlNet, a system that allows you
to achieve exact poses for human and animal figures by providing a
model to follow. Full details are found in [ControlNet](features/CONTROLNET.md).
### New Schedulers
The list of schedulers has been completely revamped and brought up to date:
| **Short Name** | **Scheduler** | **Notes** |
|----------------|---------------------------------|-----------------------------|
| **ddim** | DDIMScheduler | |
| **ddpm** | DDPMScheduler | |
| **deis** | DEISMultistepScheduler | |
| **lms** | LMSDiscreteScheduler | |
| **pndm** | PNDMScheduler | |
| **heun** | HeunDiscreteScheduler | original noise schedule |
| **heun_k** | HeunDiscreteScheduler | using karras noise schedule |
| **euler** | EulerDiscreteScheduler | original noise schedule |
| **euler_k** | EulerDiscreteScheduler | using karras noise schedule |
| **kdpm_2** | KDPM2DiscreteScheduler | |
| **kdpm_2_a** | KDPM2AncestralDiscreteScheduler | |
| **dpmpp_2s** | DPMSolverSinglestepScheduler | |
| **dpmpp_2m**   | DPMSolverMultistepScheduler     | original noise schedule     |
| **dpmpp_2m_k** | DPMSolverMultistepScheduler | using karras noise schedule |
| **unipc** | UniPCMultistepScheduler | CPU only |
| **lcm** | LCMScheduler | |
Please see [3.0.0 Release Notes](https://github.com/invoke-ai/InvokeAI/releases/tag/v3.0.0) for further details.

View File

@ -154,14 +154,16 @@ groups in `invokeai.yaml`:
### Web Server
| Setting | Default Value | Description |
|----------|----------------|--------------|
| `host` | `localhost` | Name or IP address of the network interface that the web server will listen on |
| `port` | `9090` | Network port number that the web server will listen on |
| `allow_origins` | `[]` | A list of host names or IP addresses that are allowed to connect to the InvokeAI API in the format `['host1','host2',...]` |
| `allow_credentials` | `true` | Require credentials for a foreign host to access the InvokeAI API (don't change this) |
| `allow_methods` | `*` | List of HTTP methods ("GET", "POST") that the web server is allowed to use when accessing the API |
| `allow_headers` | `*` | List of HTTP headers that the web server will accept when accessing the API |
| Setting | Default Value | Description |
|---------------------|---------------|----------------------------------------------------------------------------------------------------------------------------|
| `host` | `localhost` | Name or IP address of the network interface that the web server will listen on |
| `port` | `9090` | Network port number that the web server will listen on |
| `allow_origins` | `[]` | A list of host names or IP addresses that are allowed to connect to the InvokeAI API in the format `['host1','host2',...]` |
| `allow_credentials` | `true` | Require credentials for a foreign host to access the InvokeAI API (don't change this) |
| `allow_methods` | `*` | List of HTTP methods ("GET", "POST") that the web server is allowed to use when accessing the API |
| `allow_headers` | `*` | List of HTTP headers that the web server will accept when accessing the API |
| `ssl_certfile` | null | Path to an SSL certificate file, used to enable HTTPS. |
| `ssl_keyfile` | null | Path to an SSL keyfile, if the key is not included in the certificate file. |
The documentation for InvokeAI's API can be accessed by browsing to the following URL: [http://localhost:9090/docs](http://localhost:9090/docs).
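As a quick sanity check against a running server on the default `host` and `port` (the `openapi.json` path is an assumption based on the server exposing a standard OpenAPI schema):
```bash
# Interactive docs in a browser: http://localhost:9090/docs
# Fetch the raw OpenAPI schema from the command line:
curl -s http://localhost:9090/openapi.json | head -c 300
```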

View File

@ -94,6 +94,8 @@ A model that helps generate creative QR codes that still scan. Can also be used
**Openpose**:
The OpenPose control model allows for the identification of the general pose of a character by pre-processing an existing image with a clear human structure. With advanced options, Openpose can also detect the face or hands in the image.
*Note:* The DWPose Processor has replaced the OpenPose processor in Invoke. Workflows and generations that relied on the OpenPose Processor will need to be updated to use the DWPose Processor instead.
**Mediapipe Face**:
The MediaPipe Face identification processor is able to clearly identify facial features in order to capture vivid expressions of human faces.

View File

@ -229,29 +229,28 @@ clarity on the intent and common use cases we expect for utilizing them.
currently being rendered by your browser into a merged copy of the image. This
lowers the resource requirements and should improve performance.
### Seam Correction
### Compositing / Seam Correction
When doing Inpainting or Outpainting, Invoke needs to merge the pixels generated
by Stable Diffusion into your existing image. To do this, the area around the
`seam` at the boundary between your image and the new generation is
by Stable Diffusion into your existing image. This is achieved through compositing - the area around the boundary between your image and the new generation is
automatically blended to produce a seamless output. In a fully automatic
process, a mask is generated to cover the seam, and then the area of the seam is
process, a mask is generated to cover the boundary, and then the area of the boundary is
Inpainted.
Although the default options should work well most of the time, sometimes it can
help to alter the parameters that control the seam Inpainting. A wider seam and
a blur setting of about 1/3 of the seam have been noted as producing
consistently strong results (e.g. 96 wide and 16 blur - adds up to 32 blur with
both sides). Seam strength of 0.7 is best for reducing hard seams.
help to alter the parameters that control the Compositing. A larger Blur has
been noted as producing consistently strong results, and a Strength of 0.7 is
best for reducing hard seams.
- **Mode** - What part of the image will have the Compositing applied to it.
- **Mask edge** will apply Compositing to the edge of the masked area
- **Mask** will apply Compositing to the entire masked area
- **Unmasked** will apply Compositing to the entire image
- **Steps** - Number of generation steps that will occur during the Coherence Pass, similar to Denoising Steps. Higher step counts will generally have better results.
- **Strength** - How much noise is added for the Coherence Pass, similar to Denoising Strength. A strength of 0 will result in an unchanged image, while a strength of 1 will result in an image with a completely new area as defined by the Mode setting.
- **Blur** - Adjusts the pixel radius of the mask. A larger blur radius will cause the mask to extend past the visibly masked area, while too small a blur radius will result in a mask that is smaller than the visibly masked area.
- **Blur Method** - The method of blur applied to the masked area.
- **Seam Size** - The size of the seam masked area. Set higher to make a larger
mask around the seam.
- **Seam Blur** - The size of the blur that is applied on _each_ side of the
masked area.
- **Seam Strength** - The Image To Image Strength parameter used for the
Inpainting generation that is applied to the seam area.
- **Seam Steps** - The number of generation steps that should be used to Inpaint
the seam.
### Infill & Scaling

BIN
docs/img/favicon.ico Normal file (binary file not shown; 4.2 KiB)

View File

@ -18,7 +18,7 @@ title: Home
width: 100%;
max-width: 100%;
height: 50px;
background-color: #448AFF;
background-color: #35A4DB;
color: #fff;
font-size: 16px;
border: none;
@ -43,7 +43,7 @@ title: Home
<div align="center" markdown>
[![project logo](assets/invoke_ai_banner.png)](https://github.com/invoke-ai/InvokeAI)
[![project logo](https://github.com/invoke-ai/InvokeAI/assets/31807370/6e3728c7-e90e-4711-905c-3b55844ff5be)](https://github.com/invoke-ai/InvokeAI)
[![discord badge]][discord link]
@ -117,6 +117,11 @@ Mac and Linux machines, and runs on GPU cards with as little as 4 GB of RAM.
## :octicons-gift-24: InvokeAI Features
### Installation
- [Automated Installer](installation/010_INSTALL_AUTOMATED.md)
- [Manual Installation](installation/020_INSTALL_MANUAL.md)
- [Docker Installation](installation/040_INSTALL_DOCKER.md)
### The InvokeAI Web Interface
- [WebUI overview](features/WEB.md)
- [WebUI hotkey reference guide](features/WEBUIHOTKEYS.md)
@ -145,60 +150,6 @@ Mac and Linux machines, and runs on GPU cards with as little as 4 GB of RAM.
- [Guide to InvokeAI Runtime Settings](features/CONFIGURATION.md)
- [Database Maintenance and other Command Line Utilities](features/UTILITIES.md)
## :octicons-log-16: Important Changes Since Version 2.3
### Nodes
Behind the scenes, InvokeAI has been completely rewritten to support
"nodes," small unitary operations that can be combined into graphs to
form arbitrary workflows. For example, there is a prompt node that
processes the prompt string and feeds it to a text2latent node that
generates a latent image. The latents are then fed to a latent2image
node that translates the latent image into a PNG.
The WebGUI has a node editor that allows you to graphically design and
execute custom node graphs. The ability to save and load graphs is
still a work in progress, but coming soon.
### Command-Line Interface Retired
All "invokeai" command-line interfaces have been retired as of version
3.4.
To launch the Web GUI from the command-line, use the command
`invokeai-web` rather than the traditional `invokeai --web`.
### ControlNet
This version of InvokeAI features ControlNet, a system that allows you
to achieve exact poses for human and animal figures by providing a
model to follow. Full details are found in [ControlNet](features/CONTROLNET.md).
### New Schedulers
The list of schedulers has been completely revamped and brought up to date:
| **Short Name** | **Scheduler** | **Notes** |
|----------------|---------------------------------|-----------------------------|
| **ddim** | DDIMScheduler | |
| **ddpm** | DDPMScheduler | |
| **deis** | DEISMultistepScheduler | |
| **lms** | LMSDiscreteScheduler | |
| **pndm** | PNDMScheduler | |
| **heun** | HeunDiscreteScheduler | original noise schedule |
| **heun_k** | HeunDiscreteScheduler | using karras noise schedule |
| **euler** | EulerDiscreteScheduler | original noise schedule |
| **euler_k** | EulerDiscreteScheduler | using karras noise schedule |
| **kdpm_2** | KDPM2DiscreteScheduler | |
| **kdpm_2_a** | KDPM2AncestralDiscreteScheduler | |
| **dpmpp_2s** | DPMSolverSinglestepScheduler | |
| **dpmpp_2m**   | DPMSolverMultistepScheduler     | original noise schedule     |
| **dpmpp_2m_k** | DPMSolverMultistepScheduler | using karras noise schedule |
| **unipc** | UniPCMultistepScheduler | CPU only |
| **lcm** | LCMScheduler | |
Please see [3.0.0 Release Notes](https://github.com/invoke-ai/InvokeAI/releases/tag/v3.0.0) for further details.
## :material-target: Troubleshooting
Please check out our **[:material-frequently-asked-questions:

View File

@ -477,7 +477,7 @@ Then type the following commands:
=== "AMD System"
```bash
pip install torch torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
pip install torch torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/rocm5.6
```
### Corrupted configuration file

View File

@ -154,7 +154,7 @@ manager, please follow these steps:
=== "ROCm (AMD)"
```bash
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.6
```
=== "CPU (Intel Macs & non-GPU systems)"
@ -230,13 +230,13 @@ manager, please follow these steps:
=== "local Webserver"
```bash
invokeai --web
invokeai-web
```
=== "Public Webserver"
```bash
invokeai --web --host 0.0.0.0
invokeai-web --host 0.0.0.0
```
=== "CLI"
@ -293,6 +293,19 @@ manager, please follow these steps:
## Developer Install
!!! warning
InvokeAI uses a SQLite database. By running on `main`, you accept responsibility for your database. This
means making regular backups (especially before pulling) and/or fixing it yourself in the event that a
PR introduces a schema change.
If you don't need persistent backend storage, you can use an ephemeral in-memory database by setting
`use_memory_db: true` under `Path:` in your `invokeai.yaml` file.
If this is untenable, you should run the application via the official installer or a manual install of the
python package from pypi. These releases will not break your database.
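Returning to the in-memory database note above, a minimal sketch of appending that setting (back up `invokeai.yaml` first; the `INVOKEAI_ROOT` location and exact group layout are assumptions, so check the configuration docs for your version):
```bash
cat >> "$INVOKEAI_ROOT/invokeai.yaml" <<'EOF'
Path:
  use_memory_db: true
EOF
```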
If you have an interest in how InvokeAI works, or you would like to
add features or bugfixes, you are encouraged to install the source
code for InvokeAI. For this to work, you will need to install the
@ -300,7 +313,7 @@ code for InvokeAI. For this to work, you will need to install the
on your system, please see the [Git Installation
Guide](https://github.com/git-guides/install-git)
You will also need to install the [frontend development toolchain](https://github.com/invoke-ai/InvokeAI/blob/main/docs/contributing/contribution_guides/contributingToFrontend.md).
You will also need to install the [frontend development toolchain](https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/README.md).
If you have a "normal" installation, you should create a totally separate virtual environment for the git-based installation, else the two may interfere.
@ -332,7 +345,7 @@ installation protocol (important!)
=== "ROCm (AMD)"
```bash
pip install -e . --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
pip install -e . --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.6
```
=== "CPU (Intel Macs & non-GPU systems)"
@ -348,7 +361,7 @@ installation protocol (important!)
Be sure to pass `-e` (for an editable install) and don't forget the
dot ("."). It is part of the command.
5. Install the [frontend toolchain](https://github.com/invoke-ai/InvokeAI/blob/main/docs/contributing/contribution_guides/contributingToFrontend.md) and do a production build of the UI as described.
5. Install the [frontend toolchain](https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/README.md) and do a production build of the UI as described.
6. You can now run `invokeai` and its related commands. The code will be
read from the repository, so that you can edit the .py source files
@ -388,3 +401,5 @@ environment variable INVOKEAI_ROOT to point to the installation directory.
Note that if you run into problems with the Conda installation, the InvokeAI
staff will **not** be able to help you out. Caveat Emptor!
[dev-chat]: https://discord.com/channels/1020123559063990373/1049495067846524939

View File

@ -134,7 +134,7 @@ recipes are available
When installing torch and torchvision manually with `pip`, remember to provide
the argument `--extra-index-url
https://download.pytorch.org/whl/rocm5.4.2` as described in the [Manual
https://download.pytorch.org/whl/rocm5.6` as described in the [Manual
Installation Guide](020_INSTALL_MANUAL.md).
This will be done automatically for you if you use the installer

View File

@ -69,7 +69,7 @@ a token and copy it, since you will need it for the next step.
### Setup
Set up your environment variables. In the `docker` directory, make a copy of `env.sample` and name it `.env`. Make changes as necessary.
Set up your environment variables. In the `docker` directory, make a copy of `.env.sample` and name it `.env`. Make changes as necessary.
Any environment variables supported by InvokeAI can be set here - please see the [CONFIGURATION](../features/CONFIGURATION.md) for further detail.
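For example (the variable shown is illustrative; any InvokeAI-supported setting can go in `.env`):
```bash
cd docker
cp .env.sample .env
# then edit .env as needed, e.g. to point at your runtime directory:
# INVOKEAI_ROOT=~/invokeai
```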

View File

@ -18,13 +18,18 @@ either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm
driver).
## **[Automated Installer](010_INSTALL_AUTOMATED.md)**
✅ This is the recommended installation method for first-time users.
## **[Automated Installer (Recommended)](010_INSTALL_AUTOMATED.md)**
✅ This is the recommended installation method for first-time users.
This is a script that will install all of InvokeAI's essential
third party libraries and InvokeAI itself. It includes access to a
"developer console" which will help us debug problems with you and
give you access to experimental features.
third party libraries and InvokeAI itself.
🖥️ **Download the latest installer .zip file here:** https://github.com/invoke-ai/InvokeAI/releases/latest
- *Look for the file labelled "InvokeAI-installer-v3.X.X.zip" at the bottom of the page*
- If you experience issues, read through the full [installation instructions](010_INSTALL_AUTOMATED.md) to make sure you have met all of the installation requirements. If you need more help, join the [Discord](https://discord.gg/invoke-ai) or create an issue on [GitHub](https://github.com/invoke-ai/InvokeAI).
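A sketch of the download-and-run flow on Linux/macOS (the version in the file name, the unpacked directory name, and the script name are assumptions; follow the full guide if they differ):
```bash
# after downloading InvokeAI-installer-v3.X.X.zip from the releases page
unzip InvokeAI-installer-v3.X.X.zip
cd InvokeAI-Installer
./install.sh   # Windows users run install.bat instead
```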
## **[Manual Installation](020_INSTALL_MANUAL.md)**
This method is recommended for experienced users and developers.

View File

@ -0,0 +1,10 @@
document.addEventListener("DOMContentLoaded", function () {
  // Load the kapa.ai "Ask AI" widget once the docs page has finished loading
  var script = document.createElement("script");
  script.src = "https://widget.kapa.ai/kapa-widget.bundle.js";
  // Widget configuration: site ID, project name, brand colour, and logo
  script.setAttribute("data-website-id", "b5973bb1-476b-451e-8cf4-98de86745a10");
  script.setAttribute("data-project-name", "Invoke.AI");
  script.setAttribute("data-project-color", "#11213C");
  script.setAttribute("data-project-logo", "https://avatars.githubusercontent.com/u/113954515?s=280&v=4");
  script.async = true;
  document.head.appendChild(script);
});

View File

@ -6,10 +6,17 @@ If you're not familiar with Diffusion, take a look at our [Diffusion Overview.](
## Features
### Workflow Library
The Workflow Library enables you to save workflows to the Invoke database, allowing you to easily create, modify, and share workflows as needed.
A curated set of workflows is provided by default - these are designed to help explain important nodes' usage in the Workflow Editor.
![workflow_library](../assets/nodes/workflow_library.png)
### Linear View
The Workflow Editor allows you to create a UI for your workflow, to make it easier to iterate on your generations.
To add an input to the Linear UI, right click on the input label and select "Add to Linear View".
To add an input to the Linear UI, right click on the **input label** and select "Add to Linear View".
The Linear UI View will also be part of the saved workflow, allowing you to share workflows and enable others to use them, regardless of complexity.
@ -30,7 +37,7 @@ Any node or input field can be renamed in the workflow editor. If the input fiel
Nodes have a "Use Cache" option in their footer. This allows for performance improvements by using the previously cached values during the workflow processing.
## Important Concepts
## Important Nodes & Concepts
There are several node grouping concepts that can be examined with a narrow focus. These (and other) groupings can be pieced together to make up functional graph setups, and are important to understanding how groups of nodes work together as part of a whole. Note that the screenshots below aren't examples of complete functioning node graphs (see Examples).
@ -56,7 +63,7 @@ The ImageToLatents node takes in a pixel image and a VAE and outputs a latents.
It is common to want to use both the same seed (for continuity) and random seeds (for variety). To define a seed, simply enter it into the 'Seed' field on a noise node. Conversely, the RandomInt node generates a random integer between 'Low' and 'High', and can be used as input to the 'Seed' edge point on a noise node to randomize your seed.
![groupsrandseed](../assets/nodes/groupsrandseed.png)
![groupsrandseed](../assets/nodes/groupsnoise.png)
### ControlNet

View File

@ -13,25 +13,40 @@ If you'd prefer, you can also just download the whole node folder from the linke
To use a community workflow, download the `.json` node graph file and load it into Invoke AI via the **Load Workflow** button in the Workflow Editor.
- Community Nodes
+ [Adapters-Linked](#adapters-linked-nodes)
+ [Autostereogram](#autostereogram-nodes)
+ [Average Images](#average-images)
+ [Clean Image Artifacts After Cut](#clean-image-artifacts-after-cut)
+ [Close Color Mask](#close-color-mask)
+ [Clothing Mask](#clothing-mask)
+ [Contrast Limited Adaptive Histogram Equalization](#contrast-limited-adaptive-histogram-equalization)
+ [Depth Map from Wavefront OBJ](#depth-map-from-wavefront-obj)
+ [Film Grain](#film-grain)
+ [Generative Grammar-Based Prompt Nodes](#generative-grammar-based-prompt-nodes)
+ [GPT2RandomPromptMaker](#gpt2randompromptmaker)
+ [Grid to Gif](#grid-to-gif)
+ [Halftone](#halftone)
+ [Ideal Size](#ideal-size)
+ [Hand Refiner with MeshGraphormer](#hand-refiner-with-meshgraphormer)
+ [Image and Mask Composition Pack](#image-and-mask-composition-pack)
+ [Image Dominant Color](#image-dominant-color)
+ [Image to Character Art Image Nodes](#image-to-character-art-image-nodes)
+ [Image Picker](#image-picker)
+ [Image Resize Plus](#image-resize-plus)
+ [Load Video Frame](#load-video-frame)
+ [Make 3D](#make-3d)
+ [Mask Operations](#mask-operations)
+ [Match Histogram](#match-histogram)
+ [Metadata-Linked](#metadata-linked-nodes)
+ [Negative Image](#negative-image)
+ [Nightmare Promptgen](#nightmare-promptgen)
+ [Oobabooga](#oobabooga)
+ [Prompt Tools](#prompt-tools)
+ [Remote Image](#remote-image)
+ [BriaAI Remove Background](#briaai-remove-background)
+ [Remove Background](#remove-background)
+ [Retroize](#retroize)
+ [Size Stepper Nodes](#size-stepper-nodes)
+ [Simple Skin Detection](#simple-skin-detection)
+ [Text font to Image](#text-font-to-image)
+ [Thresholding](#thresholding)
+ [Unsharp Mask](#unsharp-mask)
@ -41,6 +56,30 @@ To use a community workflow, download the `.json` node graph file and load i
- [Help](#help)
--------------------------------
### Adapters Linked Nodes
**Description:** A set of nodes for linked adapters (ControlNet, IP-Adapter & T2I-Adapter). This allows multiple adapters to be chained together without using a `collect` node, which means they can be used inside an `iterate` node without collection issues on every iteration.
- `ControlNet-Linked` - Collects ControlNet info to pass to other nodes.
- `IP-Adapter-Linked` - Collects IP-Adapter info to pass to other nodes.
- `T2I-Adapter-Linked` - Collects T2I-Adapter info to pass to other nodes.
Note: These are inherited from the core nodes so any update to the core nodes should be reflected in these.
**Node Link:** https://github.com/skunkworxdark/adapters-linked-nodes
--------------------------------
### Autostereogram Nodes
**Description:** Generate autostereogram images from a depth map. This is not a very practical node, but more of a '90s nostalgic indulgence; I used to love these images as a kid.
**Node Link:** https://github.com/skunkworxdark/autostereogram_nodes
**Example Usage:**
</br>
<img src="https://github.com/skunkworxdark/autostereogram_nodes/blob/main/images/spider.png" width="200" /> -> <img src="https://github.com/skunkworxdark/autostereogram_nodes/blob/main/images/spider-depth.png" width="200" /> -> <img src="https://github.com/skunkworxdark/autostereogram_nodes/raw/main/images/spider-dots.png" width="200" /> <img src="https://github.com/skunkworxdark/autostereogram_nodes/raw/main/images/spider-pattern.png" width="200" />
--------------------------------
### Average Images
@ -48,6 +87,46 @@ To use a community workflow, download the the `.json` node graph file and load i
**Node Link:** https://github.com/JPPhoto/average-images-node
--------------------------------
### Clean Image Artifacts After Cut
**Description:** Removes residual artifacts after an image is separated from its background.
**Node Link:** https://github.com/VeyDlin/clean-artifact-after-cut-node
**View:**
</br><img src="https://raw.githubusercontent.com/VeyDlin/clean-artifact-after-cut-node/master/.readme/node.png" width="500" />
--------------------------------
### Close Color Mask
**Description:** Generates a mask for images based on a closely matching color, useful for color-based selections.
**Node Link:** https://github.com/VeyDlin/close-color-mask-node
**View:**
</br><img src="https://raw.githubusercontent.com/VeyDlin/close-color-mask-node/master/.readme/node.png" width="500" />
--------------------------------
### Clothing Mask
**Description:** Employs a U2NET neural network trained for the segmentation of clothing items in images.
**Node Link:** https://github.com/VeyDlin/clothing-mask-node
**View:**
</br><img src="https://raw.githubusercontent.com/VeyDlin/clothing-mask-node/master/.readme/node.png" width="500" />
--------------------------------
### Contrast Limited Adaptive Histogram Equalization
**Description:** Enhances local image contrast using adaptive histogram equalization with contrast limiting.
**Node Link:** https://github.com/VeyDlin/clahe-node
**View:**
</br><img src="https://raw.githubusercontent.com/VeyDlin/clahe-node/master/.readme/node.png" width="500" />
--------------------------------
### Depth Map from Wavefront OBJ
@ -131,13 +210,18 @@ CMYK Halftone Output:
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/c59c578f-db8e-4d66-8c66-2851752d75ea" width="300" />
--------------------------------
### Ideal Size
**Description:** This node calculates an ideal image size for a first pass of a multi-pass upscaling. The aim is to avoid duplication that results from choosing a size larger than the model is capable of.
**Node Link:** https://github.com/JPPhoto/ideal-size-node
### Hand Refiner with MeshGraphormer
**Description:** Hand Refiner takes in your image and automatically generates a fixed depth map for the hands, along with a mask of the hands region, so you can conveniently use them with ControlNet to fix the wonky hands generated by Stable Diffusion.
**Node Link:** https://github.com/blessedcoolant/invoke_meshgraphormer
**View**
<img src="https://raw.githubusercontent.com/blessedcoolant/invoke_meshgraphormer/main/assets/preview.jpg" />
--------------------------------
### Image and Mask Composition Pack
**Description:** This is a pack of nodes for composing masks and images, including a simple text mask creator and both image and latent offset nodes. The offsets wrap around, so these can be used in conjunction with the Seamless node to progressively generate images centered on different parts of the seamless tiling.
@ -164,6 +248,16 @@ This includes 15 Nodes:
</br><img src="https://raw.githubusercontent.com/dwringer/composition-nodes/main/composition_pack_overview.jpg" width="500" />
--------------------------------
### Image Dominant Color
**Description:** Identifies and extracts the dominant color from an image using k-means clustering.
**Node Link:** https://github.com/VeyDlin/image-dominant-color-node
**View:**
</br><img src="https://raw.githubusercontent.com/VeyDlin/image-dominant-color-node/master/.readme/node.png" width="500" />
--------------------------------
### Image to Character Art Image Nodes
@ -185,6 +279,17 @@ This includes 15 Nodes:
**Node Link:** https://github.com/JPPhoto/image-picker-node
--------------------------------
### Image Resize Plus
**Description:** Provides various image resizing options such as fill, stretch, fit, center, and crop.
**Node Link:** https://github.com/VeyDlin/image-resize-plus-node
**View:**
</br><img src="https://raw.githubusercontent.com/VeyDlin/image-resize-plus-node/master/.readme/node.png" width="500" />
--------------------------------
### Load Video Frame
@ -209,6 +314,16 @@ This includes 15 Nodes:
<img src="https://gitlab.com/srcrr/shift3d/-/raw/main/example-1.png" width="300" />
<img src="https://gitlab.com/srcrr/shift3d/-/raw/main/example-2.png" width="300" />
--------------------------------
### Mask Operations
**Description:** Offers logical operations (OR, SUB, AND) for combining and manipulating image masks.
**Node Link:** https://github.com/VeyDlin/mask-operations-node
**View:**
</br><img src="https://raw.githubusercontent.com/VeyDlin/mask-operations-node/master/.readme/node.png" width="500" />
--------------------------------
### Match Histogram
@ -226,6 +341,37 @@ See full docs here: https://github.com/skunkworxdark/Prompt-tools-nodes/edit/mai
<img src="https://github.com/skunkworxdark/match_histogram/assets/21961335/ed12f329-a0ef-444a-9bae-129ed60d6097" width="300" />
--------------------------------
### Metadata Linked Nodes
**Description:** A set of nodes for Metadata. Collect Metadata from within an `iterate` node & extract metadata from an image.
- `Metadata Item Linked` - Allows collecting of metadata while within an iterate node with no need for a collect node or conversion to metadata node.
- `Metadata From Image` - Provides Metadata from an image.
- `Metadata To String` - Extracts a String value of a label from metadata.
- `Metadata To Integer` - Extracts an Integer value of a label from metadata.
- `Metadata To Float` - Extracts a Float value of a label from metadata.
- `Metadata To Scheduler` - Extracts a Scheduler value of a label from metadata.
**Node Link:** https://github.com/skunkworxdark/metadata-linked-nodes
--------------------------------
### Negative Image
**Description:** Creates a negative version of an image, effective for visual effects and mask inversion.
**Node Link:** https://github.com/VeyDlin/negative-image-node
**View:**
</br><img src="https://raw.githubusercontent.com/VeyDlin/negative-image-node/master/.readme/node.png" width="500" />
--------------------------------
### Nightmare Promptgen
**Description:** Nightmare Prompt Generator - Uses a local text generation model to create unique, imaginative (but usually nightmarish) prompts for InvokeAI. By default, it allows you to choose from some gpt-neo models I finetuned on over 2500 of my own InvokeAI prompts in Compel format, but you're able to add your own as well. Offers support for replacing any troublesome words with a random choice from a list you can also define.
**Node Link:** [https://github.com/gogurtenjoyer/nightmare-promptgen](https://github.com/gogurtenjoyer/nightmare-promptgen)
--------------------------------
### Oobabooga
@ -289,6 +435,26 @@ See full docs here: https://github.com/skunkworxdark/Prompt-tools-nodes/edit/mai
**Node Link:** https://github.com/fieldOfView/InvokeAI-remote_image
--------------------------------
### BriaAI Remove Background
**Description:** Implements one-click background removal with BriaAI's new version 1.4 model, which seems to be producing better results than any previous background removal tool.
**Node Link:** https://github.com/blessedcoolant/invoke_bria_rmbg
**View**
<img src="https://raw.githubusercontent.com/blessedcoolant/invoke_bria_rmbg/main/assets/preview.jpg" />
--------------------------------
### Remove Background
**Description:** An integration of the rembg package to remove backgrounds from images using multiple U2NET models.
**Node Link:** https://github.com/VeyDlin/remove-background-node
**View:**
</br><img src="https://raw.githubusercontent.com/VeyDlin/remove-background-node/master/.readme/node.png" width="500" />
--------------------------------
### Retroize
@ -301,6 +467,17 @@ See full docs here: https://github.com/skunkworxdark/Prompt-tools-nodes/edit/mai
<img src="https://github.com/Ar7ific1al/InvokeAI_nodes_retroize/assets/2306586/de8b4fa6-324c-4c2d-b36c-297600c73974" width="500" />
--------------------------------
### Simple Skin Detection
**Description:** Detects skin in images based on predefined color thresholds.
**Node Link:** https://github.com/VeyDlin/simple-skin-detection-node
**View:**
</br><img src="https://raw.githubusercontent.com/VeyDlin/simple-skin-detection-node/master/.readme/node.png" width="500" />
--------------------------------
### Size Stepper Nodes
@ -386,6 +563,7 @@ See full docs here: https://github.com/skunkworxdark/XYGrid_nodes/edit/main/READ
<img src="https://github.com/skunkworxdark/XYGrid_nodes/blob/main/images/collage.png" width="300" />
--------------------------------
### Example Node Template

View File

@ -36,6 +36,7 @@ their descriptions.
| Integer Math | Perform basic math operations on two integers |
| Convert Image Mode | Converts an image to a different mode. |
| Crop Image | Crops an image to a specified box. The box can be outside of the image. |
| Ideal Size | Calculates an ideal image size for latents for a first pass of a multi-pass upscaling to avoid duplication and other artifacts |
| Image Hue Adjustment | Adjusts the Hue of an image. |
| Inverse Lerp Image | Inverse linear interpolation of all pixels of an image |
| Image Primitive | An image primitive value |
@ -80,7 +81,7 @@ their descriptions.
| ONNX Text to Latents | Generates latents from conditionings. |
| ONNX Model Loader | Loads a main model, outputting its submodels. |
| OpenCV Inpaint | Simple inpaint using opencv. |
| Openpose Processor | Applies Openpose processing to image |
| DW Openpose Processor | Applies Openpose processing to image |
| PIDI Processor | Applies PIDI processing to image |
| Prompts from File | Loads prompts from a text file |
| Random Integer | Outputs a single random integer. |

View File

@ -1,6 +1,6 @@
# Example Workflows
We've curated some example workflows for you to get started with Workflows in InvokeAI
We've curated some example workflows for you to get started with Workflows in InvokeAI! These can also be found in the Workflow Library, located in the Workflow Editor of Invoke.
To use them, right click on your desired workflow, follow the link to GitHub and click the "⬇" button to download the raw file. You can then use the "Load Workflow" functionality in InvokeAI to load the workflow and start generating images!
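Equivalently, from the command line (the URL below is a placeholder for the raw-file link you reach via the "⬇" button):
```bash
# download a workflow's raw JSON, then use Load Workflow in the Workflow Editor
curl -LO "https://raw.githubusercontent.com/invoke-ai/InvokeAI/main/docs/workflows/<workflow>.json"
```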

View File

@ -13,46 +13,69 @@ We thank them for all of their time and hard work.
- [Lincoln D. Stein](mailto:lincoln.stein@gmail.com)
## **Current core team**
## **Current Core Team**
* @lstein (Lincoln Stein) - Co-maintainer
* @blessedcoolant - Co-maintainer
* @hipsterusername (Kent Keirsey) - Co-maintainer, CEO, Positive Vibes
* @psychedelicious (Spencer Mabrito) - Web Team Leader
* @Kyle0654 (Kyle Schouviller) - Node Architect and General Backend Wizard
* @damian0815 - Attention Systems and Compel Maintainer
* @ebr (Eugene Brodsky) - Cloud/DevOps/Software engineer; your friendly neighbourhood cluster-autoscaler
* @genomancer (Gregg Helt) - Controlnet support
* @StAlKeR7779 (Sergey Borisov) - Torch stack, ONNX, model management, optimization
* @chainchompa (Jennifer Player) - Web Development & Chain-Chomping
* @josh is toast (Josh Corbett) - Web Development
* @cheerio (Mary Rogers) - Lead Engineer & Web App Development
* @ebr (Eugene Brodsky) - Cloud/DevOps/Software engineer; your friendly neighbourhood cluster-autoscaler
* @sunija - Standalone version
* @genomancer (Gregg Helt) - Controlnet support
* @brandon (Brandon Rising) - Platform, Infrastructure, Backend Systems
* @ryanjdick (Ryan Dick) - Machine Learning & Training
* @millu (Millun Atluri) - Community Manager, Documentation, Node-wrangler
* @chainchompa (Jennifer Player) - Web Development & Chain-Chomping
* @JPPhoto - Core image generation nodes
* @dunkeroni - Image generation backend
* @SkunkWorxDark - Image generation backend
* @keturn (Kevin Turner) - Diffusers
* @millu (Millun Atluri) - Community Wizard, Documentation, Node-wrangler
* @glimmerleaf (Devon Hopkins) - Community Wizard
* @gogurt enjoyer - Discord moderator and end user support
* @whosawhatsis - Discord moderator and end user support
* @dwinrger - Discord moderator and end user support
* @526christian - Discord moderator and end user support
* @harvester62 - Discord moderator and end user support
## **Honored Team Alumni**
* @StAlKeR7779 (Sergey Borisov) - Torch stack, ONNX, model management, optimization
* @damian0815 - Attention Systems and Compel Maintainer
* @netsvetaev (Artur) - Localization support
* @Kyle0654 (Kyle Schouviller) - Node Architect and General Backend Wizard
* @tildebyte - Installation and configuration
* @mauwii (Matthias Wilde) - Installation, release, continuous integration
## **Full List of Contributors by Commit Name**
- 이승석
- AbdBarho
- ablattmann
- AdamOStark
- Adam Rice
- Airton Silva
- Aldo Hoeben
- Alexander Eichhorn
- Alexandre D. Roberge
- Alexandre Macabies
- Alfie John
- Andreas Rozek
- Andre LaBranche
- Andy Bearman
- Andy Luhrs
- Andy Pilate
- Anonymous
- Anthony Monthe
- Any-Winter-4079
- apolinario
- Ar7ific1al
- ArDiouscuros
- Armando C. Santisbon
- Arnold Cordewiner
- Arthur Holstvoogd
- artmen1516
- Artur
@ -64,13 +87,16 @@ We thank them for all of their time and hard work.
- blhook
- BlueAmulet
- Bouncyknighter
- Brandon
- Brandon Rising
- Brent Ozar
- Brian Racer
- bsilvereagle
- c67e708d
- camenduru
- CapableWeb
- Carson Katri
- chainchompa
- Chloe
- Chris Dawson
- Chris Hayes
@ -86,30 +112,45 @@ We thank them for all of their time and hard work.
- cpacker
- Cragin Godley
- creachec
- CrypticWit
- d8ahazard
- damian
- damian0815
- Damian at mba
- Damian Stewart
- Daniel Manzke
- Danny Beer
- Dan Sully
- Darren Ringer
- David Burnett
- David Ford
- David Regla
- David Sisco
- David Wager
- Daya Adianto
- db3000
- DekitaRPG
- Denis Olshin
- Dennis
- dependabot[bot]
- Dmitry Parnas
- Dobrynia100
- Dominic Letz
- DrGunnarMallon
- Drun555
- dunkeroni
- Edward Johan
- elliotsayes
- Elrik
- ElrikUnderlake
- Eric Khun
- Eric Wolf
- Eugene
- Eugene Brodsky
- ExperimentalCyborg
- Fabian Bahl
- Fabio 'MrWHO' Torchetti
- Fattire
- fattire
- Felipe Nogueira
- Félix Sanz
@ -118,8 +159,12 @@ We thank them for all of their time and hard work.
- gabrielrotbart
- gallegonovato
- Gérald LONLAS
- Gille
- GitHub Actions Bot
- glibesyck
- gogurtenjoyer
- Gohsuke Shimada
- greatwolf
- greentext2
- Gregg Helt
- H4rk
@ -131,6 +176,7 @@ We thank them for all of their time and hard work.
- Hosted Weblate
- Iman Karim
- ismail ihsan bülbül
- ItzAttila
- Ivan Efimov
- jakehl
- Jakub Kolčář
@ -141,6 +187,7 @@ We thank them for all of their time and hard work.
- Jason Toffaletti
- Jaulustus
- Jeff Mahoney
- Jennifer Player
- jeremy
- Jeremy Clark
- JigenD
@ -148,19 +195,26 @@ We thank them for all of their time and hard work.
- Johan Roxendal
- Johnathon Selstad
- Jonathan
- Jordan Hewitt
- Joseph Dries III
- Josh Corbett
- JPPhoto
- jspraul
- junzi
- Justin Wong
- Juuso V
- Kaspar Emanuel
- Katsuyuki-Karasawa
- Keerigan45
- Kent Keirsey
- Kevin Brack
- Kevin Coakley
- Kevin Gibbons
- Kevin Schaul
- Kevin Turner
- Kieran Klaassen
- krummrey
- Kyle
- Kyle Lacy
- Kyle Schouviller
- Lawrence Norton
@ -171,10 +225,15 @@ We thank them for all of their time and hard work.
- Lynne Whitehorn
- majick
- Marco Labarile
- Marta Nahorniuk
- Martin Kristiansen
- Mary Hipp
- maryhipp
- Mary Hipp Rogers
- mastercaster
- mastercaster9000
- Matthias Wild
- mauwii
- michaelk71
- mickr777
- Mihai
@ -182,11 +241,15 @@ We thank them for all of their time and hard work.
- Mikhail Tishin
- Millun Atluri
- Minjune Song
- Mitchell Allain
- mitien
- mofuzz
- Muhammad Usama
- Name
- _nderscore
- Neil Wang
- nekowaiz
- nemuruibai
- Netzer R
- Nicholas Koh
- Nicholas Körfer
@ -197,9 +260,11 @@ We thank them for all of their time and hard work.
- ofirkris
- Olivier Louvignes
- owenvincent
- pand4z31
- Patrick Esser
- Patrick Tien
- Patrick von Platen
- Paul Curry
- Paul Sajna
- pejotr
- Peter Baylies
@ -207,6 +272,7 @@ We thank them for all of their time and hard work.
- plucked
- prixt
- psychedelicious
- psychedelicious@windows
- Rainer Bernhardt
- Riccardo Giovanetti
- Rich Jones
@ -215,16 +281,22 @@ We thank them for all of their time and hard work.
- Robert Bolender
- Robin Rombach
- Rohan Barar
- Rohinish
- rpagliuca
- rromb
- Rupesh Sreeraman
- Ryan
- Ryan Cao
- Ryan Dick
- Saifeddine
- Saifeddine ALOUI
- Sam
- SammCheese
- Sam McLeod
- Sammy
- sammyf
- Samuel Husso
- Saurav Maheshkar
- Scott Lahteine
- Sean McLellan
- Sebastian Aigner
@ -232,16 +304,21 @@ We thank them for all of their time and hard work.
- Sergey Krashevich
- Shapor Naghibzadeh
- Shawn Zhong
- Simona Liliac
- Simon Vans-Colina
- skunkworxdark
- slashtechno
- SoheilRezaei
- Song, Pengcheng
- spezialspezial
- ssantos
- StAlKeR7779
- Stefan Tobler
- Stephan Koglin-Fischer
- SteveCaruso
- Steve Martinelli
- Steven Frank
- Surisen
- System X - Files
- Taylor Kems
- techicode
@ -260,26 +337,34 @@ We thank them for all of their time and hard work.
- tyler
- unknown
- user1
- vedant-3010
- Vedant Madane
- veprogames
- wa.code
- wfng92
- whjms
- whosawhatsis
- Will
- William Becher
- William Chong
- Wilson E. Alvarez
- woweenie
- Wubbbi
- xra
- Yeung Yiu Hung
- ymgenesis
- Yorzaren
- Yosuke Shinya
- yun saki
- ZachNagengast
- Zadagu
- zeptofine
- Zerdoumi
- Васянатор
- 冯不游
- 唐澤 克幸
## **Original CompVis Authors**
## **Original CompVis (Stable Diffusion) Authors**
- [Robin Rombach](https://github.com/rromb)
- [Patrick von Platen](https://github.com/patrickvonplaten)

View File

@ -0,0 +1,5 @@
:root {
--md-primary-fg-color: #35A4DB;
--md-primary-fg-color--light: #35A4DB;
--md-primary-fg-color--dark: #35A4DB;
}

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,8 +1,8 @@
{
"name": "Text to Image",
"name": "Text to Image - SD1.5",
"author": "InvokeAI",
"description": "Sample text to image workflow for Stable Diffusion 1.5/2",
"version": "1.0.1",
"version": "1.1.0",
"contact": "invoke@invoke.ai",
"tags": "text2image, SD1.5, SD2, default",
"notes": "",
@ -18,10 +18,19 @@
{
"nodeId": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"fieldName": "prompt"
},
{
"nodeId": "55705012-79b9-4aac-9f26-c0b10309785b",
"fieldName": "width"
},
{
"nodeId": "55705012-79b9-4aac-9f26-c0b10309785b",
"fieldName": "height"
}
],
"meta": {
"version": "1.0.0"
"category": "default",
"version": "2.0.0"
},
"nodes": [
{
@ -30,44 +39,56 @@
"data": {
"id": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"type": "compel",
"label": "Negative Compel Prompt",
"isOpen": true,
"notes": "",
"isIntermediate": true,
"useCache": true,
"version": "1.0.0",
"nodePack": "invokeai",
"inputs": {
"prompt": {
"id": "7739aff6-26cb-4016-8897-5a1fb2305e4e",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "Negative Prompt",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "StringField"
},
"value": ""
},
"clip": {
"id": "48d23dce-a6ae-472a-9f8c-22a714ea5ce0",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "ClipField"
}
}
},
"outputs": {
"conditioning": {
"id": "37cf3a9d-f6b7-4b64-8ff6-2558c5ecc447",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "ConditioningField"
}
}
},
"label": "Negative Compel Prompt",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
}
},
"width": 320,
"height": 261,
"height": 259,
"position": {
"x": 995.7263915923627,
"y": 239.67783573351227
"x": 1000,
"y": 350
}
},
{
@ -76,37 +97,60 @@
"data": {
"id": "55705012-79b9-4aac-9f26-c0b10309785b",
"type": "noise",
"label": "",
"isOpen": true,
"notes": "",
"isIntermediate": true,
"useCache": true,
"version": "1.0.1",
"nodePack": "invokeai",
"inputs": {
"seed": {
"id": "6431737c-918a-425d-a3b4-5d57e2f35d4d",
"name": "seed",
"type": "integer",
"fieldKind": "input",
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
},
"value": 0
},
"width": {
"id": "38fc5b66-fe6e-47c8-bba9-daf58e454ed7",
"name": "width",
"type": "integer",
"fieldKind": "input",
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
},
"value": 512
},
"height": {
"id": "16298330-e2bf-4872-a514-d6923df53cbb",
"name": "height",
"type": "integer",
"fieldKind": "input",
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
},
"value": 512
},
"use_cpu": {
"id": "c7c436d3-7a7a-4e76-91e4-c6deb271623c",
"name": "use_cpu",
"type": "boolean",
"fieldKind": "input",
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "BooleanField"
},
"value": true
}
},
@ -114,35 +158,40 @@
"noise": {
"id": "50f650dc-0184-4e23-a927-0497a96fe954",
"name": "noise",
"type": "LatentsField",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "LatentsField"
}
},
"width": {
"id": "bb8a452b-133d-42d1-ae4a-3843d7e4109a",
"name": "width",
"type": "integer",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
}
},
"height": {
"id": "35cfaa12-3b8b-4b7a-a884-327ff3abddd9",
"name": "height",
"type": "integer",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
}
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
}
},
"width": 320,
"height": 389,
"height": 388,
"position": {
"x": 993.4442117555518,
"y": 605.6757415334787
"x": 600,
"y": 325
}
},
{
@ -151,13 +200,24 @@
"data": {
"id": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"type": "main_model_loader",
"label": "",
"isOpen": true,
"notes": "",
"isIntermediate": true,
"useCache": true,
"version": "1.0.0",
"nodePack": "invokeai",
"inputs": {
"model": {
"id": "993eabd2-40fd-44fe-bce7-5d0c7075ddab",
"name": "model",
"type": "MainModelField",
"fieldKind": "input",
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "MainModelField"
},
"value": {
"model_name": "stable-diffusion-v1-5",
"base_model": "sd-1",
@ -169,35 +229,40 @@
"unet": {
"id": "5c18c9db-328d-46d0-8cb9-143391c410be",
"name": "unet",
"type": "UNetField",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "UNetField"
}
},
"clip": {
"id": "6effcac0-ec2f-4bf5-a49e-a2c29cf921f4",
"name": "clip",
"type": "ClipField",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "ClipField"
}
},
"vae": {
"id": "57683ba3-f5f5-4f58-b9a2-4b83dacad4a1",
"name": "vae",
"type": "VaeField",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "VaeField"
}
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
}
},
"width": 320,
"height": 226,
"position": {
"x": 163.04436745878343,
"y": 254.63156870373479
"x": 600,
"y": 25
}
},
{
@ -206,44 +271,56 @@
"data": {
"id": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"type": "compel",
"label": "Positive Compel Prompt",
"isOpen": true,
"notes": "",
"isIntermediate": true,
"useCache": true,
"version": "1.0.0",
"nodePack": "invokeai",
"inputs": {
"prompt": {
"id": "7739aff6-26cb-4016-8897-5a1fb2305e4e",
"name": "prompt",
"type": "string",
"fieldKind": "input",
"label": "Positive Prompt",
"value": ""
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "StringField"
},
"value": "Super cute tiger cub, national geographic award-winning photograph"
},
"clip": {
"id": "48d23dce-a6ae-472a-9f8c-22a714ea5ce0",
"name": "clip",
"type": "ClipField",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "ClipField"
}
}
},
"outputs": {
"conditioning": {
"id": "37cf3a9d-f6b7-4b64-8ff6-2558c5ecc447",
"name": "conditioning",
"type": "ConditioningField",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "ConditioningField"
}
}
},
"label": "Positive Compel Prompt",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.0.0"
}
},
"width": 320,
"height": 261,
"height": 259,
"position": {
"x": 595.7263915923627,
"y": 239.67783573351227
"x": 1000,
"y": 25
}
},
{
@ -252,21 +329,36 @@
"data": {
"id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"type": "rand_int",
"label": "Random Seed",
"isOpen": false,
"notes": "",
"isIntermediate": true,
"useCache": false,
"version": "1.0.0",
"nodePack": "invokeai",
"inputs": {
"low": {
"id": "3ec65a37-60ba-4b6c-a0b2-553dd7a84b84",
"name": "low",
"type": "integer",
"fieldKind": "input",
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
},
"value": 0
},
"high": {
"id": "085f853a-1a5f-494d-8bec-e4ba29a3f2d1",
"name": "high",
"type": "integer",
"fieldKind": "input",
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
},
"value": 2147483647
}
},
@ -274,23 +366,20 @@
"value": {
"id": "812ade4d-7699-4261-b9fc-a6c9d2ab55ee",
"name": "value",
"type": "integer",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
}
}
},
"label": "Random Seed",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": false,
"version": "1.0.0"
}
},
"width": 320,
"height": 218,
"height": 32,
"position": {
"x": 541.094822888628,
"y": 694.5704476446829
"x": 600,
"y": 275
}
},
{
@ -299,144 +388,224 @@
"data": {
"id": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"type": "denoise_latents",
"label": "",
"isOpen": true,
"notes": "",
"isIntermediate": true,
"useCache": true,
"version": "1.5.0",
"nodePack": "invokeai",
"inputs": {
"positive_conditioning": {
"id": "90b7f4f8-ada7-4028-8100-d2e54f192052",
"name": "positive_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "ConditioningField"
}
},
"negative_conditioning": {
"id": "9393779e-796c-4f64-b740-902a1177bf53",
"name": "negative_conditioning",
"type": "ConditioningField",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "ConditioningField"
}
},
"noise": {
"id": "8e17f1e5-4f98-40b1-b7f4-86aeeb4554c1",
"name": "noise",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "LatentsField"
}
},
"steps": {
"id": "9b63302d-6bd2-42c9-ac13-9b1afb51af88",
"name": "steps",
"type": "integer",
"fieldKind": "input",
"label": "",
"value": 10
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
},
"value": 50
},
"cfg_scale": {
"id": "87dd04d3-870e-49e1-98bf-af003a810109",
"name": "cfg_scale",
"type": "FloatPolymorphic",
"fieldKind": "input",
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": true,
"name": "FloatField"
},
"value": 7.5
},
"denoising_start": {
"id": "f369d80f-4931-4740-9bcd-9f0620719fab",
"name": "denoising_start",
"type": "float",
"fieldKind": "input",
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "FloatField"
},
"value": 0
},
"denoising_end": {
"id": "747d10e5-6f02-445c-994c-0604d814de8c",
"name": "denoising_end",
"type": "float",
"fieldKind": "input",
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "FloatField"
},
"value": 1
},
"scheduler": {
"id": "1de84a4e-3a24-4ec8-862b-16ce49633b9b",
"name": "scheduler",
"type": "Scheduler",
"fieldKind": "input",
"label": "",
"value": "euler"
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "SchedulerField"
},
"value": "unipc"
},
"unet": {
"id": "ffa6fef4-3ce2-4bdb-9296-9a834849489b",
"name": "unet",
"type": "UNetField",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "UNetField"
}
},
"control": {
"id": "077b64cb-34be-4fcc-83f2-e399807a02bd",
"name": "control",
"type": "ControlPolymorphic",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": true,
"name": "ControlField"
}
},
"ip_adapter": {
"id": "1d6948f7-3a65-4a65-a20c-768b287251aa",
"name": "ip_adapter",
"type": "IPAdapterPolymorphic",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": true,
"name": "IPAdapterField"
}
},
"t2i_adapter": {
"id": "75e67b09-952f-4083-aaf4-6b804d690412",
"name": "t2i_adapter",
"type": "T2IAdapterPolymorphic",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": true,
"name": "T2IAdapterField"
}
},
"cfg_rescale_multiplier": {
"id": "9101f0a6-5fe0-4826-b7b3-47e5d506826c",
"name": "cfg_rescale_multiplier",
"fieldKind": "input",
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "FloatField"
},
"value": 0
},
"latents": {
"id": "334d4ba3-5a99-4195-82c5-86fb3f4f7d43",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "LatentsField"
}
},
"denoise_mask": {
"id": "0d3dbdbf-b014-4e95-8b18-ff2ff9cb0bfa",
"name": "denoise_mask",
"type": "DenoiseMaskField",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "DenoiseMaskField"
}
}
},
"outputs": {
"latents": {
"id": "70fa5bbc-0c38-41bb-861a-74d6d78d2f38",
"name": "latents",
"type": "LatentsField",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "LatentsField"
}
},
"width": {
"id": "98ee0e6c-82aa-4e8f-8be5-dc5f00ee47f0",
"name": "width",
"type": "integer",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
}
},
"height": {
"id": "e8cb184a-5e1a-47c8-9695-4b8979564f5d",
"name": "height",
"type": "integer",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
}
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": true,
"useCache": true,
"version": "1.4.0"
}
},
"width": 320,
"height": 646,
"height": 703,
"position": {
"x": 1476.5794704734735,
"y": 256.80174342731783
"x": 1400,
"y": 25
}
},
{
@ -445,153 +614,185 @@
"data": {
"id": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"type": "l2i",
"label": "",
"isOpen": true,
"notes": "",
"isIntermediate": false,
"useCache": true,
"version": "1.2.0",
"nodePack": "invokeai",
"inputs": {
"metadata": {
"id": "ab375f12-0042-4410-9182-29e30db82c85",
"name": "metadata",
"type": "MetadataField",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "MetadataField"
}
},
"latents": {
"id": "3a7e7efd-bff5-47d7-9d48-615127afee78",
"name": "latents",
"type": "LatentsField",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "LatentsField"
}
},
"vae": {
"id": "a1f5f7a1-0795-4d58-b036-7820c0b0ef2b",
"name": "vae",
"type": "VaeField",
"fieldKind": "input",
"label": ""
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "VaeField"
}
},
"tiled": {
"id": "da52059a-0cee-4668-942f-519aa794d739",
"name": "tiled",
"type": "boolean",
"fieldKind": "input",
"label": "",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "BooleanField"
},
"value": false
},
"fp32": {
"id": "c4841df3-b24e-4140-be3b-ccd454c2522c",
"name": "fp32",
"type": "boolean",
"fieldKind": "input",
"label": "",
"value": false
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "BooleanField"
},
"value": true
}
},
"outputs": {
"image": {
"id": "72d667d0-cf85-459d-abf2-28bd8b823fe7",
"name": "image",
"type": "ImageField",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "ImageField"
}
},
"width": {
"id": "c8c907d8-1066-49d1-b9a6-83bdcd53addc",
"name": "width",
"type": "integer",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
}
},
"height": {
"id": "230f359c-b4ea-436c-b372-332d7dcdca85",
"name": "height",
"type": "integer",
"fieldKind": "output"
"fieldKind": "output",
"type": {
"isCollection": false,
"isCollectionOrScalar": false,
"name": "IntegerField"
}
}
},
"label": "",
"isOpen": true,
"notes": "",
"embedWorkflow": false,
"isIntermediate": false,
"useCache": true,
"version": "1.0.0"
}
},
"width": 320,
"height": 267,
"height": 266,
"position": {
"x": 2037.9648469717395,
"y": 426.10844427600136
"x": 1800,
"y": 25
}
}
],
"edges": [
{
"source": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"sourceHandle": "value",
"target": "55705012-79b9-4aac-9f26-c0b10309785b",
"targetHandle": "seed",
"id": "reactflow__edge-ea94bc37-d995-4a83-aa99-4af42479f2f2value-55705012-79b9-4aac-9f26-c0b10309785bseed",
"type": "default"
"source": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
"target": "55705012-79b9-4aac-9f26-c0b10309785b",
"type": "default",
"sourceHandle": "value",
"targetHandle": "seed"
},
{
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"sourceHandle": "clip",
"target": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"targetHandle": "clip",
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8clip-7d8bf987-284f-413a-b2fd-d825445a5d6cclip",
"type": "default"
},
{
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"target": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"type": "default",
"sourceHandle": "clip",
"target": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"targetHandle": "clip",
"targetHandle": "clip"
},
{
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8clip-93dc02a4-d05b-48ed-b99c-c9b616af3402clip",
"type": "default"
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"target": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"type": "default",
"sourceHandle": "clip",
"targetHandle": "clip"
},
{
"source": "55705012-79b9-4aac-9f26-c0b10309785b",
"sourceHandle": "noise",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"targetHandle": "noise",
"id": "reactflow__edge-55705012-79b9-4aac-9f26-c0b10309785bnoise-eea2702a-19fb-45b5-9d75-56b4211ec03cnoise",
"type": "default"
"source": "55705012-79b9-4aac-9f26-c0b10309785b",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"type": "default",
"sourceHandle": "noise",
"targetHandle": "noise"
},
{
"source": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"sourceHandle": "conditioning",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"targetHandle": "positive_conditioning",
"id": "reactflow__edge-7d8bf987-284f-413a-b2fd-d825445a5d6cconditioning-eea2702a-19fb-45b5-9d75-56b4211ec03cpositive_conditioning",
"type": "default"
},
{
"source": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"source": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"type": "default",
"sourceHandle": "conditioning",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"targetHandle": "negative_conditioning",
"targetHandle": "positive_conditioning"
},
{
"id": "reactflow__edge-93dc02a4-d05b-48ed-b99c-c9b616af3402conditioning-eea2702a-19fb-45b5-9d75-56b4211ec03cnegative_conditioning",
"type": "default"
},
{
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"sourceHandle": "unet",
"source": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"targetHandle": "unet",
"type": "default",
"sourceHandle": "conditioning",
"targetHandle": "negative_conditioning"
},
{
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8unet-eea2702a-19fb-45b5-9d75-56b4211ec03cunet",
"type": "default"
},
{
"source": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"sourceHandle": "latents",
"target": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"targetHandle": "latents",
"id": "reactflow__edge-eea2702a-19fb-45b5-9d75-56b4211ec03clatents-58c957f5-0d01-41fc-a803-b2bbf0413d4flatents",
"type": "default"
},
{
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"sourceHandle": "vae",
"target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"type": "default",
"sourceHandle": "unet",
"targetHandle": "unet"
},
{
"id": "reactflow__edge-eea2702a-19fb-45b5-9d75-56b4211ec03clatents-58c957f5-0d01-41fc-a803-b2bbf0413d4flatents",
"source": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
"target": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"targetHandle": "vae",
"type": "default",
"sourceHandle": "latents",
"targetHandle": "latents"
},
{
"id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8vae-58c957f5-0d01-41fc-a803-b2bbf0413d4fvae",
"type": "default"
"source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
"target": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
"type": "default",
"sourceHandle": "vae",
"targetHandle": "vae"
}
]
}
}
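The workflow fragment above reflects a schema migration: each input/output field's flat `type` string becomes a structured type object (`isCollection`/`isCollectionOrScalar`/`name`), and edges lose their `reactflow__edge-...` ids. A hypothetical migration helper, with the primitive-name mapping inferred from the diff (it is not an official API of the workflow library):

# Hypothetical helpers; the primitive-name mapping is inferred from the diff
# above (e.g. "integer" -> "IntegerField", "boolean" -> "BooleanField").
PRIMITIVE_TYPE_NAMES = {
    "integer": "IntegerField",
    "float": "FloatField",
    "boolean": "BooleanField",
    "string": "StringField",
}

def migrate_field(field: dict) -> dict:
    """Replace a field's flat `type` string with the structured type object."""
    old_type = field.pop("type")
    field["type"] = {
        # all fields shown in the diff are scalars; collection fields would
        # need real type information rather than these defaults
        "isCollection": False,
        "isCollectionOrScalar": False,
        # complex names like "LatentsField" already match the new `name` form
        "name": PRIMITIVE_TYPE_NAMES.get(old_type, old_type),
    }
    return field

def migrate_edge(edge: dict) -> dict:
    """Drop the redundant reactflow edge id; handles and type are unchanged."""
    edge.pop("id", None)
    return edge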


@ -2,56 +2,60 @@
set -e
cd "$(dirname "$0")"
BCYAN="\e[1;36m"
BYELLOW="\e[1;33m"
BGREEN="\e[1;32m"
BRED="\e[1;31m"
RED="\e[31m"
RESET="\e[0m"
function is_bin_in_path {
builtin type -P "$1" &>/dev/null
}
function git_show {
git show -s --format=oneline --abbrev-commit "$1" | cat
}
if [[ -v "VIRTUAL_ENV" ]]; then
# we can't just call 'deactivate' because this function is not exported
# to the environment of this script from the bash process that runs the script
echo "A virtual environment is activated. Please deactivate it before proceeding".
echo -e "${BRED}A virtual environment is activated. Please deactivate it before proceeding.${RESET}"
exit -1
fi
VERSION=$(cd ..; python -c "from invokeai.version import __version__ as version; print(version)")
cd "$(dirname "$0")"
echo
echo -e "${BYELLOW}This script must be run from the installer directory!${RESET}"
echo "The current working directory is $(pwd)"
read -p "If that looks right, press any key to proceed, or CTRL-C to exit..."
echo
# Some machines only have `python3` in PATH, others have `python` - make an alias.
# We can use a function to approximate an alias within a non-interactive shell.
if ! is_bin_in_path python && is_bin_in_path python3; then
function python {
python3 "$@"
}
fi
VERSION=$(
cd ..
python -c "from invokeai.version import __version__ as version; print(version)"
)
PATCH=""
VERSION="v${VERSION}${PATCH}"
LATEST_TAG="v3-latest"
echo Building installer for version $VERSION
echo "Be certain that you're in the 'installer' directory before continuing."
read -p "Press any key to continue, or CTRL-C to exit..."
read -e -p "Tag this repo with '${VERSION}' and '${LATEST_TAG}'? [n]: " input
RESPONSE=${input:='n'}
if [ "$RESPONSE" == 'y' ]; then
git push origin :refs/tags/$VERSION
if ! git tag -fa $VERSION ; then
echo "Existing/invalid tag"
exit -1
fi
git push origin :refs/tags/$LATEST_TAG
git tag -fa $LATEST_TAG
echo "remember to push --tags!"
fi
echo -e "${BGREEN}HEAD${RESET}:"
git_show HEAD
echo
# ----------------------
echo Building the wheel
# install the 'build' package in the user site packages, if needed
# could be improved by using a temporary venv, but it's tiny and harmless
if [[ $(python -c 'from importlib.util import find_spec; print(find_spec("build") is None)') == "True" ]]; then
pip install --user build
fi
rm -r ../build
python -m build --wheel --outdir dist/ ../.
# ----------------------
echo Building installer zip fles for InvokeAI $VERSION
echo
echo "Building installer zip files for InvokeAI ${VERSION}..."
echo
# get rid of any old ones
rm -f *.zip
@ -59,12 +63,11 @@ rm -rf InvokeAI-Installer
# copy content
mkdir InvokeAI-Installer
for f in templates lib *.txt *.reg; do
for f in templates *.txt *.reg; do
cp -r ${f} InvokeAI-Installer/
done
# Move the wheel
mv dist/*.whl InvokeAI-Installer/lib/
mkdir InvokeAI-Installer/lib
cp lib/*.py InvokeAI-Installer/lib
# Install scripts
# Mac/Linux
@ -72,13 +75,13 @@ cp install.sh.in InvokeAI-Installer/install.sh
chmod a+x InvokeAI-Installer/install.sh
# Windows
perl -p -e "s/^set INVOKEAI_VERSION=.*/set INVOKEAI_VERSION=$VERSION/" install.bat.in > InvokeAI-Installer/install.bat
cp install.bat.in InvokeAI-Installer/install.bat
cp WinLongPathsEnabled.reg InvokeAI-Installer/
# Zip everything up
zip -r InvokeAI-installer-$VERSION.zip InvokeAI-Installer
# clean up
rm -rf InvokeAI-Installer tmp dist
rm -rf InvokeAI-Installer tmp dist ../invokeai/frontend/web/dist/
exit 0


@ -15,7 +15,6 @@ if "%1" == "use-cache" (
@rem Config
@rem The version in the next line is replaced by an up to date release number
@rem when create_installer.sh is run. Change the release number there.
set INVOKEAI_VERSION=latest
set INSTRUCTIONS=https://invoke-ai.github.io/InvokeAI/installation/INSTALL_AUTOMATED/
set TROUBLESHOOTING=https://invoke-ai.github.io/InvokeAI/installation/INSTALL_AUTOMATED/#troubleshooting
set PYTHON_URL=https://www.python.org/downloads/windows/


@ -11,7 +11,7 @@ import sys
import venv
from pathlib import Path
from tempfile import TemporaryDirectory
from typing import Union
from typing import Optional, Tuple
SUPPORTED_PYTHON = ">=3.10.0,<=3.11.100"
INSTALLER_REQS = ["rich", "semver", "requests", "plumbum", "prompt-toolkit"]
@ -21,40 +21,20 @@ OS = platform.uname().system
ARCH = platform.uname().machine
VERSION = "latest"
### Feature flags
# Install the virtualenv into the runtime dir
FF_VENV_IN_RUNTIME = True
# Install the wheel packaged with the installer
FF_USE_LOCAL_WHEEL = True
class Installer:
"""
Deploys an InvokeAI installation into a given path
"""
reqs: list[str] = INSTALLER_REQS
def __init__(self) -> None:
self.reqs = INSTALLER_REQS
self.preflight()
if os.getenv("VIRTUAL_ENV") is not None:
print("A virtual environment is already activated. Please 'deactivate' before installation.")
sys.exit(-1)
self.bootstrap()
def preflight(self) -> None:
"""
Preflight checks
"""
# TODO
# verify python version
# on macOS verify XCode tools are present
# verify libmesa, libglx on linux
# check that the system arch is not i386 (?)
# check that the system has a GPU, and the type of GPU
pass
self.available_releases = get_github_releases()
def mktemp_venv(self) -> TemporaryDirectory:
"""
@ -78,12 +58,9 @@ class Installer:
return venv_dir
def bootstrap(self, verbose: bool = False) -> TemporaryDirectory:
def bootstrap(self, verbose: bool = False) -> TemporaryDirectory | None:
"""
Bootstrap the installer venv with packages required at install time
:return: path to the virtual environment directory that was bootstrapped
:rtype: TemporaryDirectory
"""
print("Initializing the installer. This may take a minute - please wait...")
@ -95,39 +72,27 @@ class Installer:
cmd.extend(self.reqs)
try:
res = subprocess.check_output(cmd).decode()
# upgrade pip to the latest version to avoid a confusing message
res = upgrade_pip(Path(venv_dir.name))
if verbose:
print(res)
# run the install prerequisites installation
res = subprocess.check_output(cmd).decode()
if verbose:
print(res)
return venv_dir
except subprocess.CalledProcessError as e:
print(e)
def app_venv(self, path: str = None):
def app_venv(self, venv_parent) -> Path:
"""
Create a virtualenv for the InvokeAI installation
"""
# explicit venv location
# currently unused in normal operation
# useful for testing or special cases
if path is not None:
venv_dir = Path(path)
# experimental / testing
elif not FF_VENV_IN_RUNTIME:
if OS == "Windows":
venv_dir_parent = os.getenv("APPDATA", "~/AppData/Roaming")
elif OS == "Darwin":
# there is no environment variable on macOS to find this
# TODO: confirm this is working as expected
venv_dir_parent = "~/Library/Application Support"
elif OS == "Linux":
venv_dir_parent = os.getenv("XDG_DATA_DIR", "~/.local/share")
venv_dir = Path(venv_dir_parent).expanduser().resolve() / f"InvokeAI/{VERSION}/venv"
# stable / current
else:
venv_dir = self.dest / ".venv"
venv_dir = venv_parent / ".venv"
# Prefer to copy python executables
# so that updates to system python don't break InvokeAI
@ -141,7 +106,7 @@ class Installer:
return venv_dir
def install(
self, root: str = "~/invokeai", version: str = "latest", yes_to_all=False, find_links: Path = None
self, version=None, root: str = "~/invokeai", yes_to_all=False, find_links: Optional[Path] = None
) -> None:
"""
Install the InvokeAI application into the given runtime path
@ -158,15 +123,20 @@ class Installer:
import messages
messages.welcome()
messages.welcome(self.available_releases)
default_path = os.environ.get("INVOKEAI_ROOT") or Path(root).expanduser().resolve()
self.dest = default_path if yes_to_all else messages.dest_path(root)
version = messages.choose_version(self.available_releases)
auto_dest = Path(os.environ.get("INVOKEAI_ROOT", root)).expanduser().resolve()
destination = auto_dest if yes_to_all else messages.dest_path(root)
if destination is None:
print("Could not find or create the destination directory. Installation cancelled.")
sys.exit(0)
# create the venv for the app
self.venv = self.app_venv()
self.venv = self.app_venv(venv_parent=destination)
self.instance = InvokeAiInstance(runtime=self.dest, venv=self.venv, version=version)
self.instance = InvokeAiInstance(runtime=destination, venv=self.venv, version=version)
# install dependencies and the InvokeAI application
(extra_index_url, optional_modules) = get_torch_source() if not yes_to_all else (None, None)
@ -190,7 +160,7 @@ class InvokeAiInstance:
A single runtime directory *may* be shared by multiple virtual environments, though this isn't currently tested or supported.
"""
def __init__(self, runtime: Path, venv: Path, version: str) -> None:
def __init__(self, runtime: Path, venv: Path, version: str = "stable") -> None:
self.runtime = runtime
self.venv = venv
self.pip = get_pip_from_venv(venv)
@ -199,6 +169,7 @@ class InvokeAiInstance:
set_sys_path(venv)
os.environ["INVOKEAI_ROOT"] = str(self.runtime.expanduser().resolve())
os.environ["VIRTUAL_ENV"] = str(self.venv.expanduser().resolve())
upgrade_pip(venv)
def get(self) -> tuple[Path, Path]:
"""
@ -212,54 +183,7 @@ class InvokeAiInstance:
def install(self, extra_index_url=None, optional_modules=None, find_links=None):
"""
Install this instance, including dependencies and the app itself
:param extra_index_url: the "--extra-index-url ..." line for pip to look in extra indexes.
:type extra_index_url: str
"""
import messages
# install torch first to ensure the correct version gets installed.
# works with either source or wheel install with negligible impact on installation times.
messages.simple_banner("Installing PyTorch :fire:")
self.install_torch(extra_index_url, find_links)
messages.simple_banner("Installing the InvokeAI Application :art:")
self.install_app(extra_index_url, optional_modules, find_links)
def install_torch(self, extra_index_url=None, find_links=None):
"""
Install PyTorch
"""
from plumbum import FG, local
pip = local[self.pip]
(
pip[
"install",
"--require-virtualenv",
"numpy~=1.24.0", # choose versions that won't be uninstalled during phase 2
"urllib3~=1.26.0",
"requests~=2.28.0",
"torch==2.1.0",
"torchmetrics==0.11.4",
"torchvision>=0.14.1",
"--force-reinstall",
"--find-links" if find_links is not None else None,
find_links,
"--extra-index-url" if extra_index_url is not None else None,
extra_index_url,
]
& FG
)
def install_app(self, extra_index_url=None, optional_modules=None, find_links=None):
"""
Install the application with pip.
Supports installation from PyPi or from a local source directory.
Install the package from PyPi.
:param extra_index_url: the "--extra-index-url ..." line for pip to look in extra indexes.
:type extra_index_url: str
@ -271,53 +195,52 @@ class InvokeAiInstance:
:type find_links: Path
"""
## this only applies to pypi installs; TODO actually use this
if self.version == "pre":
import messages
# not currently used, but may be useful for "install most recent version" option
if self.version == "prerelease":
version = None
pre = "--pre"
pre_flag = "--pre"
elif self.version == "stable":
version = None
pre_flag = None
else:
version = self.version
pre = None
pre_flag = None
## TODO: only local wheel will be installed as of now; support for --version arg is TODO
if FF_USE_LOCAL_WHEEL:
# if no wheel, try to do a source install before giving up
try:
src = str(next(Path(__file__).parent.glob("InvokeAI-*.whl")))
except StopIteration:
try:
src = Path(__file__).parents[1].expanduser().resolve()
# if the above directory contains one of these files, we'll do a source install
next(src.glob("pyproject.toml"))
next(src.glob("invokeai"))
except StopIteration:
print("Unable to find a wheel or perform a source install. Giving up.")
src = "invokeai"
if optional_modules:
src += optional_modules
if version:
src += f"=={version}"
elif version == "source":
# this makes an assumption about the location of the installer package in the source tree
src = Path(__file__).parents[1].expanduser().resolve()
else:
# will install from PyPi
src = f"invokeai=={version}" if version is not None else "invokeai"
messages.simple_banner("Installing the InvokeAI Application :art:")
from plumbum import FG, local
from plumbum import FG, ProcessExecutionError, local # type: ignore
pip = local[self.pip]
(
pip[
"install",
"--require-virtualenv",
"--use-pep517",
str(src) + (optional_modules if optional_modules else ""),
"--find-links" if find_links is not None else None,
find_links,
"--extra-index-url" if extra_index_url is not None else None,
extra_index_url,
pre,
]
& FG
)
pipeline = pip[
"install",
"--require-virtualenv",
"--force-reinstall",
"--use-pep517",
str(src),
"--find-links" if find_links is not None else None,
find_links,
"--extra-index-url" if extra_index_url is not None else None,
extra_index_url,
pre_flag,
]
try:
_ = pipeline & FG
except ProcessExecutionError as e:
print(f"Error: {e}")
print(
"Could not install InvokeAI. Please try downloading the latest version of the installer and install again."
)
sys.exit(1)
def configure(self):
"""
@ -373,7 +296,6 @@ class InvokeAiInstance:
ext = "bat" if OS == "Windows" else "sh"
# scripts = ['invoke', 'update']
scripts = ["invoke"]
for script in scripts:
@ -408,6 +330,23 @@ def get_pip_from_venv(venv_path: Path) -> str:
return str(venv_path.expanduser().resolve() / pip)
def upgrade_pip(venv_path: Path) -> str | None:
"""
Upgrade the pip executable in the given virtual environment
"""
python = "Scripts\\python.exe" if OS == "Windows" else "bin/python"
python = str(venv_path.expanduser().resolve() / python)
try:
result = subprocess.check_output([python, "-m", "pip", "install", "--upgrade", "pip"]).decode()
except subprocess.CalledProcessError as e:
print(e)
result = None
return result
def set_sys_path(venv_path: Path) -> None:
"""
Given a path to a virtual environment, set the sys.path, in a cross-platform fashion,
@ -431,7 +370,43 @@ def set_sys_path(venv_path: Path) -> None:
sys.path.append(str(Path(venv_path, lib, "site-packages").expanduser().resolve()))
def get_torch_source() -> (Union[str, None], str):
def get_github_releases() -> tuple[list, list] | None:
"""
Query Github for published (pre-)release versions.
Return a tuple where the first element is a list of stable releases and the second element is a list of pre-releases.
Return None if the query fails for any reason.
"""
import requests
## get latest releases using github api
url = "https://api.github.com/repos/invoke-ai/InvokeAI/releases"
releases, pre_releases = [], []
try:
res = requests.get(url)
res.raise_for_status()
tag_info = res.json()
for tag in tag_info:
if not tag["prerelease"]:
releases.append(tag["tag_name"].lstrip("v"))
else:
pre_releases.append(tag["tag_name"].lstrip("v"))
except requests.HTTPError as e:
print(f"Error: {e}")
print("Could not fetch version information from GitHub. Please check your network connection and try again.")
return
except Exception as e:
print(f"Error: {e}")
print("An unexpected error occurred while trying to fetch version information from GitHub. Please try again.")
return
releases.sort(reverse=True)
pre_releases.sort(reverse=True)
return releases, pre_releases
def get_torch_source() -> Tuple[str | None, str | None]:
"""
Determine the extra index URL for pip to use for torch installation.
This depends on the OS and the graphics accelerator in use.
@ -446,25 +421,26 @@ def get_torch_source() -> (Union[str, None], str):
:rtype: list
"""
from messages import graphical_accelerator
from messages import select_gpu
# device can be one of: "cuda", "rocm", "cpu", "idk"
device = graphical_accelerator()
# device can be one of: "cuda", "rocm", "cpu", "cuda_and_dml", "autodetect"
device = select_gpu()
url = None
optional_modules = "[onnx]"
if OS == "Linux":
if device == "rocm":
url = "https://download.pytorch.org/whl/rocm5.4.2"
elif device == "cpu":
if device.value == "rocm":
url = "https://download.pytorch.org/whl/rocm5.6"
elif device.value == "cpu":
url = "https://download.pytorch.org/whl/cpu"
if device == "cuda":
url = "https://download.pytorch.org/whl/cu121"
optional_modules = "[xformers,onnx-cuda]"
if device == "cuda_and_dml":
url = "https://download.pytorch.org/whl/cu121"
optional_modules = "[xformers,onnx-directml]"
elif OS == "Windows":
if device.value == "cuda":
url = "https://download.pytorch.org/whl/cu121"
optional_modules = "[xformers,onnx-cuda]"
if device.value == "cuda_and_dml":
url = "https://download.pytorch.org/whl/cu121"
optional_modules = "[xformers,onnx-directml]"
# in all other cases, Torch wheels should be coming from PyPi as of Torch 1.13
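To make the data flow concrete: get_torch_source() returns an (extra_index_url, optional_modules) pair, and install_app() folds both into the pip invocation. A minimal sketch of that combination, with a hypothetical version pin and a plain argument list standing in for the plumbum pipeline used above:

def build_pip_args(version, extra_index_url, optional_modules):
    # mirrors install_app() above: extras attach to the package name,
    # the version pin follows, and the extra index is passed only when set
    src = "invokeai"
    if optional_modules:
        src += optional_modules
    if version:
        src += f"=={version}"
    args = ["pip", "install", "--require-virtualenv", "--use-pep517", src]
    if extra_index_url is not None:
        args += ["--extra-index-url", extra_index_url]
    return args

# Linux + ROCm, per get_torch_source() above (the version is illustrative):
#   build_pip_args("3.7.0", "https://download.pytorch.org/whl/rocm5.6", "[onnx]")
#   -> [..., 'invokeai[onnx]==3.7.0',
#       '--extra-index-url', 'https://download.pytorch.org/whl/rocm5.6']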


@ -5,10 +5,11 @@ Installer user interaction
import os
import platform
from enum import Enum
from pathlib import Path
from prompt_toolkit import HTML, prompt
from prompt_toolkit.completion import PathCompleter
from prompt_toolkit.completion import FuzzyWordCompleter, PathCompleter
from prompt_toolkit.validation import Validator
from rich import box, print
from rich.console import Console, Group, group
@ -35,16 +36,26 @@ else:
console = Console(style=Style(color="grey74", bgcolor="grey19"))
def welcome():
def welcome(available_releases: tuple | None = None) -> None:
@group()
def text():
if (platform_specific := _platform_specific_help()) != "":
if (platform_specific := _platform_specific_help()) is not None:
yield platform_specific
yield ""
yield Text.from_markup(
"Some of the installation steps take a long time to run. Please be patient. If the script appears to hang for more than 10 minutes, please interrupt with [i]Control-C[/] and retry.",
justify="center",
)
if available_releases is not None:
latest_stable = available_releases[0][0]
last_pre = available_releases[1][0]
yield ""
yield Text.from_markup(
f"[red3]🠶[/] Latest stable release (recommended): [b bright_white]{latest_stable}", justify="center"
)
yield Text.from_markup(
f"[red3]🠶[/] Last published pre-release version: [b bright_white]{last_pre}", justify="center"
)
console.rule()
print(
@ -61,19 +72,30 @@ def welcome():
console.line()
def confirm_install(dest: Path) -> bool:
if dest.exists():
print(f":exclamation: Directory {dest} already exists :exclamation:")
dest_confirmed = Confirm.ask(
":stop_sign: (re)install in this location?",
default=False,
)
else:
print(f"InvokeAI will be installed in {dest}")
dest_confirmed = Confirm.ask("Use this location?", default=True)
def choose_version(available_releases: tuple | None = None) -> str:
"""
Prompt the user to choose an Invoke version to install
"""
# short circuit if we couldn't get a version list
# still try to install the latest stable version
if available_releases is None:
return "stable"
console.print(":grey_question: [orange3]Please choose an Invoke version to install.")
choices = available_releases[0] + available_releases[1]
response = prompt(
message=f" <Enter> to install the recommended release ({choices[0]}). <Tab> or type to pick a version: ",
complete_while_typing=True,
completer=FuzzyWordCompleter(choices),
)
console.print(f" Version {choices[0] if response == '' else response} will be installed.")
console.line()
return dest_confirmed
return "stable" if response == "" else response
def user_wants_auto_configuration() -> bool:
@ -109,7 +131,23 @@ def user_wants_auto_configuration() -> bool:
return choice.lower().startswith("a")
def dest_path(dest=None) -> Path:
def confirm_install(dest: Path) -> bool:
if dest.exists():
print(f":stop_sign: Directory {dest} already exists!")
print(" Is this location correct?")
default = False
else:
print(f":file_folder: InvokeAI will be installed in {dest}")
default = True
dest_confirmed = Confirm.ask(" Please confirm:", default=default)
console.line()
return dest_confirmed
def dest_path(dest=None) -> Path | None:
"""
Prompt the user for the destination path and create the path
@ -124,25 +162,21 @@ def dest_path(dest=None) -> Path:
else:
dest = Path.cwd().expanduser().resolve()
prev_dest = init_path = dest
dest_confirmed = confirm_install(dest)
dest_confirmed = False
while not dest_confirmed:
# if the given destination already exists, the starting point for browsing is its parent directory.
# the user may have made a typo, or otherwise wants to place the root dir next to an existing one.
# if the destination dir does NOT exist, then the user must have changed their mind about the selection.
# since we can't read their mind, start browsing at Path.cwd().
browse_start = (prev_dest.parent if prev_dest.exists() else Path.cwd()).expanduser().resolve()
browse_start = (dest or Path.cwd()).expanduser().resolve()
path_completer = PathCompleter(
only_directories=True,
expanduser=True,
get_paths=lambda: [browse_start], # noqa: B023
get_paths=lambda: [str(browse_start)], # noqa: B023
# get_paths=lambda: [".."].extend(list(browse_start.iterdir()))
)
console.line()
console.print(f"[orange3]Please select the destination directory for the installation:[/] \\[{browse_start}]: ")
console.print(f":grey_question: [orange3]Please select the install destination:[/] \\[{browse_start}]: ")
selected = prompt(
">>> ",
complete_in_thread=True,
@ -155,6 +189,7 @@ def dest_path(dest=None) -> Path:
)
prev_dest = dest
dest = Path(selected)
console.line()
dest_confirmed = confirm_install(dest.expanduser().resolve())
@ -182,41 +217,45 @@ def dest_path(dest=None) -> Path:
console.rule("Goodbye!")
def graphical_accelerator():
class GpuType(Enum):
CUDA = "cuda"
CUDA_AND_DML = "cuda_and_dml"
ROCM = "rocm"
CPU = "cpu"
AUTODETECT = "autodetect"
def select_gpu() -> GpuType:
"""
Prompt the user to select the graphical accelerator in their system
This does not validate user's choices (yet), but only offers choices
valid for the platform.
CUDA is the fallback.
We may be able to detect the GPU driver by shelling out to `modprobe` or `lspci`,
but this is not yet supported or reliable. Also, some users may have exotic preferences.
Prompt the user to select the GPU driver
"""
if ARCH == "arm64" and OS != "Darwin":
print(f"Only CPU acceleration is available on {ARCH} architecture. Proceeding with that.")
return "cpu"
return GpuType.CPU
nvidia = (
"an [gold1 b]NVIDIA[/] GPU (using CUDA™)",
"cuda",
GpuType.CUDA,
)
nvidia_with_dml = (
"an [gold1 b]NVIDIA[/] GPU (using CUDA™, and DirectML™ for ONNX) -- ALPHA",
"cuda_and_dml",
GpuType.CUDA_AND_DML,
)
amd = (
"an [gold1 b]AMD[/] GPU (using ROCm™)",
"rocm",
GpuType.ROCM,
)
cpu = (
"no compatible GPU, or specifically prefer to use the CPU",
"cpu",
"Do not install any GPU support, use CPU for generation (slow)",
GpuType.CPU,
)
idk = (
autodetect = (
"I'm not sure what to choose",
"idk",
GpuType.AUTODETECT,
)
options = []
if OS == "Windows":
options = [nvidia, nvidia_with_dml, cpu]
if OS == "Linux":
@ -230,7 +269,7 @@ def graphical_accelerator():
return options[0][1]
# "I don't know" is always added the last option
options.append(idk)
options.append(autodetect) # type: ignore
options = {str(i): opt for i, opt in enumerate(options, 1)}
@ -265,9 +304,9 @@ def graphical_accelerator():
),
)
if options[choice][1] == "idk":
if options[choice][1] is GpuType.AUTODETECT:
console.print(
"No problem. We will try to install a version that [i]should[/i] be compatible. :crossed_fingers:"
"No problem. We will install CUDA support first :crossed_fingers: If Invoke does not detect a GPU, please re-run the installer and select one of the other GPU types."
)
return options[choice][1]
@ -291,7 +330,7 @@ def windows_long_paths_registry() -> None:
"""
with open(str(Path(__file__).parent / "WinLongPathsEnabled.reg"), "r", encoding="utf-16le") as code:
syntax = Syntax(code.read(), line_numbers=True)
syntax = Syntax(code.read(), line_numbers=True, lexer="regedit")
console.print(
Panel(
@ -301,7 +340,7 @@ def windows_long_paths_registry() -> None:
"We will now apply a registry fix to enable long paths on Windows. InvokeAI needs this to function correctly. We are asking your permission to modify the Windows Registry on your behalf.",
"",
"This is the change that will be applied:",
syntax,
str(syntax),
]
)
),
@ -340,7 +379,7 @@ def introduction() -> None:
console.line(2)
def _platform_specific_help() -> str:
def _platform_specific_help() -> Text | None:
if OS == "Darwin":
text = Text.from_markup(
"""[b wheat1]macOS Users![/]\n\nPlease be sure you have the [b wheat1]Xcode command-line tools[/] installed before continuing.\nIf not, cancel with [i]Control-C[/] and follow the Xcode install instructions at [deep_sky_blue1]https://www.freecodecamp.org/news/install-xcode-command-line-tools/[/]."""
@ -354,5 +393,5 @@ def _platform_specific_help() -> str:
[deep_sky_blue1]https://learn.microsoft.com/en-US/cpp/windows/latest-supported-vc-redist?view=msvc-170[/]"""
)
else:
text = ""
return
return text
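The version picker in choose_version() is just prompt_toolkit's fuzzy completion over the combined release lists; a standalone sketch with an illustrative version list:

from prompt_toolkit import prompt
from prompt_toolkit.completion import FuzzyWordCompleter

choices = ["3.6.2", "3.6.1", "3.7.0rc1"]  # illustrative releases + pre-releases
response = prompt(
    message=f" <Enter> to install the recommended release ({choices[0]}). <Tab> or type to pick a version: ",
    complete_while_typing=True,
    completer=FuzzyWordCompleter(choices),
)
version = choices[0] if response == "" else response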

installer/tag_release.sh Executable file

@ -0,0 +1,71 @@
#!/bin/bash
set -e
BCYAN="\e[1;36m"
BYELLOW="\e[1;33m"
BGREEN="\e[1;32m"
BRED="\e[1;31m"
RED="\e[31m"
RESET="\e[0m"
function does_tag_exist {
git rev-parse --quiet --verify "refs/tags/$1" >/dev/null
}
function git_show_ref {
git show-ref --dereference $1 --abbrev 7
}
function git_show {
git show -s --format='%h %s' $1
}
VERSION=$(
cd ..
python -c "from invokeai.version import __version__ as version; print(version)"
)
PATCH=""
MAJOR_VERSION=$(echo $VERSION | sed 's/\..*$//')
VERSION="v${VERSION}${PATCH}"
LATEST_TAG="v${MAJOR_VERSION}-latest"
if does_tag_exist $VERSION; then
echo -e "${BCYAN}${VERSION}${RESET} already exists:"
git_show_ref tags/$VERSION
echo
fi
if does_tag_exist $LATEST_TAG; then
echo -e "${BCYAN}${LATEST_TAG}${RESET} already exists:"
git_show_ref tags/$LATEST_TAG
echo
fi
echo -e "${BGREEN}HEAD${RESET}:"
git_show
echo
echo -e -n "Create tags ${BCYAN}${VERSION}${RESET} and ${BCYAN}${LATEST_TAG}${RESET} @ ${BGREEN}HEAD${RESET}, ${RED}deleting existing tags on remote${RESET}? "
read -e -p 'y/n [n]: ' input
RESPONSE=${input:='n'}
if [ "$RESPONSE" == 'y' ]; then
echo
echo -e "Deleting ${BCYAN}${VERSION}${RESET} tag on remote..."
git push --delete origin $VERSION
echo -e "Tagging ${BGREEN}HEAD${RESET} with ${BCYAN}${VERSION}${RESET} locally..."
if ! git tag -fa $VERSION; then
echo "Existing/invalid tag"
exit -1
fi
echo -e "Deleting ${BCYAN}${LATEST_TAG}${RESET} tag on remote..."
git push --delete origin $LATEST_TAG
echo -e "Tagging ${BGREEN}HEAD${RESET} with ${BCYAN}${LATEST_TAG}${RESET} locally..."
git tag -fa $LATEST_TAG
echo -e "Pushing updated tags to remote..."
git push origin --tags
fi
exit 0


@ -15,7 +15,7 @@ echo 4. Download and install models
echo 5. Change InvokeAI startup options
echo 6. Re-run the configure script to fix a broken install or to complete a major upgrade
echo 7. Open the developer console
echo 8. Update InvokeAI
echo 8. Update InvokeAI (DEPRECATED - please use the installer)
echo 9. Run the InvokeAI image database maintenance script
echo 10. Command-line help
echo Q - Quit
@ -52,8 +52,10 @@ IF /I "%choice%" == "1" (
echo *** Type `exit` to quit this shell and deactivate the Python virtual environment ***
call cmd /k
) ELSE IF /I "%choice%" == "8" (
echo Running invokeai-update...
python -m invokeai.frontend.install.invokeai_update
echo UPDATING FROM WITHIN THE APP IS BEING DEPRECATED.
echo Please download the installer from https://github.com/invoke-ai/InvokeAI/releases/latest and run it to update your installation.
timeout 4
python -m invokeai.frontend.install.invokeai_update
) ELSE IF /I "%choice%" == "9" (
echo Running the db maintenance script...
python .venv\Scripts\invokeai-db-maintenance.exe
@ -77,4 +79,3 @@ pause
:ending
exit /b


@ -90,7 +90,9 @@ do_choice() {
;;
8)
clear
printf "Update InvokeAI\n"
printf "UPDATING FROM WITHIN THE APP IS BEING DEPRECATED\n"
printf "Please download the installer from https://github.com/invoke-ai/InvokeAI/releases/latest and run it to update your installation.\n"
sleep 4
python -m invokeai.frontend.install.invokeai_update
;;
9)
@ -122,7 +124,7 @@ do_dialog() {
5 "Change InvokeAI startup options"
6 "Re-run the configure script to fix a broken install or to complete a major upgrade"
7 "Open the developer console"
8 "Update InvokeAI"
8 "Update InvokeAI (DEPRECATED - please use the installer)"
9 "Run the InvokeAI image database maintenance script"
10 "Command-line help"
)


@ -1,72 +0,0 @@
@echo off
setlocal EnableExtensions EnableDelayedExpansion
PUSHD "%~dp0"
set INVOKE_AI_VERSION=latest
set arg=%1
if "%arg%" neq "" (
if "%arg:~0,2%" equ "/?" (
echo Usage: update.bat ^<release name or branch^>
echo Updates InvokeAI to use the indicated version of the code base.
echo Find the version or branch for the release you want, and pass it as the argument.
echo For example '.\update.bat v2.2.5' for release 2.2.5.
echo '.\update.bat main' for the latest development version
echo.
echo If no argument provided then will install the most recent release, equivalent to
echo '.\update.bat latest'
exit /b
) else (
set INVOKE_AI_VERSION=%arg%
)
)
set INVOKE_AI_SRC="https://github.com/invoke-ai/InvokeAI/archive/!INVOKE_AI_VERSION!.zip"
set INVOKE_AI_DEP=https://raw.githubusercontent.com/invoke-ai/InvokeAI/!INVOKE_AI_VERSION!/environments-and-requirements/requirements-base.txt
set INVOKE_AI_MODELS=https://raw.githubusercontent.com/invoke-ai/InvokeAI/$INVOKE_AI_VERSION/configs/INITIAL_MODELS.yaml
call curl -I "%INVOKE_AI_DEP%" -fs >.tmp.out
if %errorlevel% neq 0 (
echo '!INVOKE_AI_VERSION!' is not a known branch name or tag. Please check the version and try again.
echo "Press any key to continue"
pause
exit /b
)
del .tmp.out
echo This script will update InvokeAI and all its dependencies to !INVOKE_AI_SRC!.
echo If you do not want to do this, press control-C now!
pause
call curl -L "%INVOKE_AI_DEP%" > environments-and-requirements/requirements-base.txt
call curl -L "%INVOKE_AI_MODELS%" > configs/INITIAL_MODELS.yaml
call .venv\Scripts\activate.bat
call .venv\Scripts\python -mpip install -r requirements.txt
if %errorlevel% neq 0 (
echo Installation of requirements failed. See https://invoke-ai.github.io/InvokeAI/installation/INSTALL_AUTOMATED/#troubleshooting for suggestions.
pause
exit /b
)
call .venv\Scripts\python -mpip install !INVOKE_AI_SRC!
if %errorlevel% neq 0 (
echo Installation of InvokeAI failed. See https://invoke-ai.github.io/InvokeAI/installation/INSTALL_AUTOMATED/#troubleshooting for suggestions.
pause
exit /b
)
@rem call .venv\Scripts\invokeai-configure --root=.
@rem if %errorlevel% neq 0 (
@rem echo Configuration InvokeAI failed. See https://invoke-ai.github.io/InvokeAI/installation/INSTALL_AUTOMATED/#troubleshooting for suggestions.
@rem pause
@rem exit /b
@rem )
echo InvokeAI has been updated to '%INVOKE_AI_VERSION%'
echo "Press any key to continue"
pause
endlocal


@ -1,58 +0,0 @@
#!/usr/bin/env bash
set -eu
if [ $# -ge 1 ] && [ "${1:0:2}" == "-h" ]; then
echo "Usage: update.sh <release>"
echo "Updates InvokeAI to use the indicated version of the code base."
echo "Find the version or branch for the release you want, and pass it as the argument."
echo "For example: update.sh v2.2.5 for release 2.2.5."
echo " update.sh main for the current development version."
echo ""
echo "If no argument provided then will install the version tagged with 'latest', equivalent to"
echo "update.sh latest"
exit -1
fi
INVOKE_AI_VERSION=${1:-latest}
INVOKE_AI_SRC="https://github.com/invoke-ai/InvokeAI/archive/$INVOKE_AI_VERSION.zip"
INVOKE_AI_DEP=https://raw.githubusercontent.com/invoke-ai/InvokeAI/$INVOKE_AI_VERSION/environments-and-requirements/requirements-base.txt
INVOKE_AI_MODELS=https://raw.githubusercontent.com/invoke-ai/InvokeAI/$INVOKE_AI_VERSION/configs/INITIAL_MODELS.yaml
# ensure we're in the correct folder in case user's CWD is somewhere else
scriptdir=$(dirname "$0")
cd "$scriptdir"
function _err_exit {
if test "$1" -ne 0
then
echo "Something went wrong while installing InvokeAI and/or its requirements."
echo "Update cannot continue. Please report this error to https://github.com/invoke-ai/InvokeAI/issues"
echo -e "Error code $1; Error caught was '$2'"
read -p "Press any key to exit..."
exit
fi
}
if ! curl -I "$INVOKE_AI_DEP" -fs >/dev/null; then
echo \'$INVOKE_AI_VERSION\' is not a known branch name or tag. Please check the version and try again.
exit
fi
echo This script will update InvokeAI and all its dependencies to version \'$INVOKE_AI_VERSION\'.
echo If you do not want to do this, press control-C now!
read -p "Press any key to continue, or CTRL-C to exit..."
curl -L "$INVOKE_AI_DEP" > environments-and-requirements/requirements-base.txt
curl -L "$INVOKE_AI_MODELS" > configs/INITIAL_MODELS.yaml
. .venv/bin/activate
./.venv/bin/python -mpip install -r requirements.txt
_err_exit $? "The pip program failed to install InvokeAI's requirements."
./.venv/bin/python -mpip install $INVOKE_AI_SRC
_err_exit $? "The pip program failed to install InvokeAI."
echo InvokeAI updated to \'$INVOKE_AI_VERSION\'


@ -2,6 +2,12 @@
from logging import Logger
import torch
from invokeai.app.services.object_serializer.object_serializer_disk import ObjectSerializerDisk
from invokeai.app.services.object_serializer.object_serializer_forward_cache import ObjectSerializerForwardCache
from invokeai.app.services.shared.sqlite.sqlite_util import init_db
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningFieldData
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.version.invokeai_version import __version__
@ -9,27 +15,22 @@ from ..services.board_image_records.board_image_records_sqlite import SqliteBoar
from ..services.board_images.board_images_default import BoardImagesService
from ..services.board_records.board_records_sqlite import SqliteBoardRecordStorage
from ..services.boards.boards_default import BoardService
from ..services.bulk_download.bulk_download_default import BulkDownloadService
from ..services.config import InvokeAIAppConfig
from ..services.download import DownloadQueueService
from ..services.image_files.image_files_disk import DiskImageFileStorage
from ..services.image_records.image_records_sqlite import SqliteImageRecordStorage
from ..services.images.images_default import ImageService
from ..services.invocation_cache.invocation_cache_memory import MemoryInvocationCache
from ..services.invocation_processor.invocation_processor_default import DefaultInvocationProcessor
from ..services.invocation_queue.invocation_queue_memory import MemoryInvocationQueue
from ..services.invocation_services import InvocationServices
from ..services.invocation_stats.invocation_stats_default import InvocationStatsService
from ..services.invoker import Invoker
from ..services.item_storage.item_storage_sqlite import SqliteItemStorage
from ..services.latents_storage.latents_storage_disk import DiskLatentsStorage
from ..services.latents_storage.latents_storage_forward_cache import ForwardCacheLatentsStorage
from ..services.model_manager.model_manager_default import ModelManagerService
from ..services.model_metadata import ModelMetadataStoreSQL
from ..services.model_records import ModelRecordServiceSQL
from ..services.names.names_default import SimpleNameService
from ..services.session_processor.session_processor_default import DefaultSessionProcessor
from ..services.session_queue.session_queue_sqlite import SqliteSessionQueue
from ..services.shared.default_graphs import create_system_graphs
from ..services.shared.graph import GraphExecutionState, LibraryGraph
from ..services.shared.sqlite.sqlite_database import SqliteDatabase
from ..services.urls.urls_default import LocalUrlService
from ..services.workflow_records.workflow_records_sqlite import SqliteWorkflowRecordsStorage
from .events import FastAPIEventService
@ -60,14 +61,18 @@ class ApiDependencies:
invoker: Invoker
@staticmethod
def initialize(config: InvokeAIAppConfig, event_handler_id: int, logger: Logger = logger):
def initialize(config: InvokeAIAppConfig, event_handler_id: int, logger: Logger = logger) -> None:
logger.info(f"InvokeAI version {__version__}")
logger.info(f"Root directory = {str(config.root_path)}")
logger.debug(f"Internet connectivity is {config.internet_available}")
output_folder = config.output_path
if output_folder is None:
raise ValueError("Output folder is not set")
db = SqliteDatabase(config, logger)
image_files = DiskImageFileStorage(f"{output_folder}/images")
db = init_db(config=config, logger=logger, image_files=image_files)
configuration = config
logger = logger
@ -77,19 +82,26 @@ class ApiDependencies:
board_records = SqliteBoardRecordStorage(db=db)
boards = BoardService()
events = FastAPIEventService(event_handler_id)
graph_execution_manager = SqliteItemStorage[GraphExecutionState](db=db, table_name="graph_executions")
graph_library = SqliteItemStorage[LibraryGraph](db=db, table_name="graphs")
image_files = DiskImageFileStorage(f"{output_folder}/images")
bulk_download = BulkDownloadService()
image_records = SqliteImageRecordStorage(db=db)
images = ImageService()
invocation_cache = MemoryInvocationCache(max_cache_size=config.node_cache_size)
latents = ForwardCacheLatentsStorage(DiskLatentsStorage(f"{output_folder}/latents"))
model_manager = ModelManagerService(config, logger)
model_record_service = ModelRecordServiceSQL(db=db)
tensors = ObjectSerializerForwardCache(
ObjectSerializerDisk[torch.Tensor](output_folder / "tensors", ephemeral=True)
)
conditioning = ObjectSerializerForwardCache(
ObjectSerializerDisk[ConditioningFieldData](output_folder / "conditioning", ephemeral=True)
)
download_queue_service = DownloadQueueService(event_bus=events)
model_metadata_service = ModelMetadataStoreSQL(db=db)
model_manager = ModelManagerService.build_model_manager(
app_config=configuration,
model_record_service=ModelRecordServiceSQL(db=db, metadata_store=model_metadata_service),
download_queue=download_queue_service,
events=events,
)
names = SimpleNameService()
performance_statistics = InvocationStatsService()
processor = DefaultInvocationProcessor()
queue = MemoryInvocationQueue()
session_processor = DefaultSessionProcessor()
session_queue = SqliteSessionQueue(db=db)
urls = LocalUrlService()
@ -100,34 +112,30 @@ class ApiDependencies:
board_images=board_images,
board_records=board_records,
boards=boards,
bulk_download=bulk_download,
configuration=configuration,
events=events,
graph_execution_manager=graph_execution_manager,
graph_library=graph_library,
image_files=image_files,
image_records=image_records,
images=images,
invocation_cache=invocation_cache,
latents=latents,
logger=logger,
model_manager=model_manager,
model_records=model_record_service,
download_queue=download_queue_service,
names=names,
performance_statistics=performance_statistics,
processor=processor,
queue=queue,
session_processor=session_processor,
session_queue=session_queue,
urls=urls,
workflow_records=workflow_records,
tensors=tensors,
conditioning=conditioning,
)
create_system_graphs(services.graph_library)
ApiDependencies.invoker = Invoker(services)
db.clean()
@staticmethod
def shutdown():
def shutdown() -> None:
if ApiDependencies.invoker:
ApiDependencies.invoker.stop()
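The `tensors` and `conditioning` services above layer a forward cache over a disk serializer, so a value written by one invocation can be read back by the next without a disk round-trip. A minimal sketch of that layering with simplified stand-in classes (the real `ObjectSerializerDisk`/`ObjectSerializerForwardCache` live in `invokeai.app.services.object_serializer`, and their signatures may differ):

import pickle
import uuid
from pathlib import Path

class DiskSerializer:
    """Stand-in for ObjectSerializerDisk: persists objects under random names."""
    def __init__(self, output_dir: Path):
        self._dir = output_dir
        self._dir.mkdir(parents=True, exist_ok=True)

    def save(self, obj) -> str:
        name = uuid.uuid4().hex
        (self._dir / name).write_bytes(pickle.dumps(obj))
        return name

    def load(self, name: str):
        return pickle.loads((self._dir / name).read_bytes())

class ForwardCache:
    """Stand-in for ObjectSerializerForwardCache: serve recent saves from memory."""
    def __init__(self, underlying: DiskSerializer, max_items: int = 32):
        self._underlying = underlying
        self._cache: dict[str, object] = {}
        self._max_items = max_items

    def save(self, obj) -> str:
        name = self._underlying.save(obj)
        if len(self._cache) >= self._max_items:
            self._cache.pop(next(iter(self._cache)))  # evict the oldest entry
        self._cache[name] = obj
        return name

    def load(self, name: str):
        if name in self._cache:
            return self._cache[name]
        return self._underlying.load(name)

tensors = ForwardCache(DiskSerializer(Path("outputs/tensors")))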


@ -0,0 +1,28 @@
from typing import Any
from starlette.responses import Response
from starlette.staticfiles import StaticFiles
class NoCacheStaticFiles(StaticFiles):
"""
This class is used to override the default caching behavior of starlette for static files,
ensuring we *never* cache static files. It modifies the file response headers to strictly
never cache the files.
Static files include the javascript bundles, fonts, locales, and some images. Generated
images are not included, as they are served by a router.
"""
def __init__(self, *args: Any, **kwargs: Any):
self.cachecontrol = "max-age=0, no-cache, no-store, must-revalidate"
self.pragma = "no-cache"
self.expires = "0"
super().__init__(*args, **kwargs)
def file_response(self, *args: Any, **kwargs: Any) -> Response:
resp = super().file_response(*args, **kwargs)
resp.headers.setdefault("Cache-Control", self.cachecontrol)
resp.headers.setdefault("Pragma", self.pragma)
resp.headers.setdefault("Expires", self.expires)
return resp
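Downstream, this class replaces starlette's stock StaticFiles when the UI bundle is mounted; a sketch of that usage (the mount path and directory are illustrative, not taken from the diff):

from fastapi import FastAPI

app = FastAPI()
app.mount(
    "/static",
    NoCacheStaticFiles(directory="invokeai/frontend/web/dist/static"),
    name="static",
)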


@ -0,0 +1,111 @@
# Copyright (c) 2023 Lincoln D. Stein
"""FastAPI route for the download queue."""
from typing import List, Optional
from fastapi import Body, Path, Response
from fastapi.routing import APIRouter
from pydantic.networks import AnyHttpUrl
from starlette.exceptions import HTTPException
from invokeai.app.services.download import (
DownloadJob,
UnknownJobIDException,
)
from ..dependencies import ApiDependencies
download_queue_router = APIRouter(prefix="/v1/download_queue", tags=["download_queue"])
@download_queue_router.get(
"/",
operation_id="list_downloads",
)
async def list_downloads() -> List[DownloadJob]:
"""Get a list of active and inactive jobs."""
queue = ApiDependencies.invoker.services.download_queue
return queue.list_jobs()
@download_queue_router.patch(
"/",
operation_id="prune_downloads",
responses={
204: {"description": "All completed jobs have been pruned"},
400: {"description": "Bad request"},
},
)
async def prune_downloads() -> Response:
"""Prune completed and errored jobs."""
queue = ApiDependencies.invoker.services.download_queue
queue.prune_jobs()
return Response(status_code=204)
@download_queue_router.post(
"/i/",
operation_id="download",
)
async def download(
source: AnyHttpUrl = Body(description="download source"),
dest: str = Body(description="download destination"),
priority: int = Body(default=10, description="queue priority"),
access_token: Optional[str] = Body(default=None, description="token for authorization to download"),
) -> DownloadJob:
"""Download the source URL to the file or directory indicted in dest."""
queue = ApiDependencies.invoker.services.download_queue
return queue.download(source, Path(dest), priority, access_token)
@download_queue_router.get(
"/i/{id}",
operation_id="get_download_job",
responses={
200: {"description": "Success"},
404: {"description": "The requested download JobID could not be found"},
},
)
async def get_download_job(
id: int = Path(description="ID of the download job to fetch."),
) -> DownloadJob:
"""Get a download job using its ID."""
try:
job = ApiDependencies.invoker.services.download_queue.id_to_job(id)
return job
except UnknownJobIDException as e:
raise HTTPException(status_code=404, detail=str(e))
@download_queue_router.delete(
"/i/{id}",
operation_id="cancel_download_job",
responses={
204: {"description": "Job has been cancelled"},
404: {"description": "The requested download JobID could not be found"},
},
)
async def cancel_download_job(
id: int = Path(description="ID of the download job to cancel."),
) -> Response:
"""Cancel a download job using its ID."""
try:
queue = ApiDependencies.invoker.services.download_queue
job = queue.id_to_job(id)
queue.cancel_job(job)
return Response(status_code=204)
except UnknownJobIDException as e:
raise HTTPException(status_code=404, detail=str(e))
@download_queue_router.delete(
"/i",
operation_id="cancel_all_download_jobs",
responses={
204: {"description": "Download jobs have been cancelled"},
},
)
async def cancel_all_download_jobs() -> Response:
"""Cancel all download jobs."""
ApiDependencies.invoker.services.download_queue.cancel_all_jobs()
return Response(status_code=204)
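An illustrative client session against these routes, assuming the router is mounted under /api on a local instance and that the returned DownloadJob serializes its integer id (both are assumptions, not guaranteed by the code above):

import requests

BASE = "http://127.0.0.1:9090/api/v1/download_queue"

# enqueue a download (POST /i/)
job = requests.post(
    f"{BASE}/i/",
    json={"source": "https://example.com/model.safetensors", "dest": "/tmp/models"},
).json()

# fetch its status by ID (GET /i/{id})
status = requests.get(f"{BASE}/i/{job['id']}").json()

# prune completed and errored jobs (PATCH /)
requests.patch(f"{BASE}/")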


@ -2,13 +2,13 @@ import io
import traceback
from typing import Optional
from fastapi import Body, HTTPException, Path, Query, Request, Response, UploadFile
from fastapi import BackgroundTasks, Body, HTTPException, Path, Query, Request, Response, UploadFile
from fastapi.responses import FileResponse
from fastapi.routing import APIRouter
from PIL import Image
from pydantic import BaseModel, Field, ValidationError
from invokeai.app.invocations.baseinvocation import MetadataField, MetadataFieldValidator
from invokeai.app.invocations.fields import MetadataField, MetadataFieldValidator
from invokeai.app.services.image_records.image_records_common import ImageCategory, ImageRecordChanges, ResourceOrigin
from invokeai.app.services.images.images_common import ImageDTO, ImageUrlsDTO
from invokeai.app.services.shared.pagination import OffsetPaginatedResults
@ -375,16 +375,67 @@ async def unstar_images_in_list(
class ImagesDownloaded(BaseModel):
response: Optional[str] = Field(
description="If defined, the message to display to the user when images begin downloading"
default=None, description="The message to display to the user when images begin downloading"
)
bulk_download_item_name: Optional[str] = Field(
default=None, description="The name of the bulk download item for which events will be emitted"
)
@images_router.post("/download", operation_id="download_images_from_list", response_model=ImagesDownloaded)
@images_router.post(
"/download", operation_id="download_images_from_list", response_model=ImagesDownloaded, status_code=202
)
async def download_images_from_list(
image_names: list[str] = Body(description="The list of names of images to download", embed=True),
background_tasks: BackgroundTasks,
image_names: Optional[list[str]] = Body(
default=None, description="The list of names of images to download", embed=True
),
board_id: Optional[str] = Body(
default=None, description="The board from which image should be downloaded from", embed=True
default=None, description="The board from which image should be downloaded", embed=True
),
) -> ImagesDownloaded:
# return ImagesDownloaded(response="Your images are downloading")
raise HTTPException(status_code=501, detail="Endpoint is not yet implemented")
if (image_names is None or len(image_names) == 0) and board_id is None:
raise HTTPException(status_code=400, detail="No images or board id specified.")
bulk_download_item_id: str = ApiDependencies.invoker.services.bulk_download.generate_item_id(board_id)
background_tasks.add_task(
ApiDependencies.invoker.services.bulk_download.handler,
image_names,
board_id,
bulk_download_item_id,
)
return ImagesDownloaded(bulk_download_item_name=bulk_download_item_id + ".zip")
@images_router.api_route(
"/download/{bulk_download_item_name}",
methods=["GET"],
operation_id="get_bulk_download_item",
response_class=Response,
responses={
200: {
"description": "Return the complete bulk download item",
"content": {"application/zip": {}},
},
404: {"description": "Image not found"},
},
)
async def get_bulk_download_item(
background_tasks: BackgroundTasks,
bulk_download_item_name: str = Path(description="The bulk_download_item_name of the bulk download item to get"),
) -> FileResponse:
"""Gets a bulk download zip file"""
try:
path = ApiDependencies.invoker.services.bulk_download.get_path(bulk_download_item_name)
response = FileResponse(
path,
media_type="application/zip",
filename=bulk_download_item_name,
content_disposition_type="inline",
)
response.headers["Cache-Control"] = f"max-age={IMAGE_MAX_AGE}"
background_tasks.add_task(ApiDependencies.invoker.services.bulk_download.delete, bulk_download_item_name)
return response
except Exception:
raise HTTPException(status_code=404)
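The download flow is two-step: the POST enqueues a background task and immediately returns the zip's future name, and the GET serves the archive once it has been built (scheduling its deletion afterwards). A sketch of a client, with the base URL assumed and the wait simplified; a real client should watch for the bulk-download completion event rather than fetching immediately:

import requests

BASE = "http://127.0.0.1:9090/api/v1/images"

resp = requests.post(f"{BASE}/download", json={"image_names": ["abc123.png"]}).json()
zip_name = resp["bulk_download_item_name"]

# once the background task has finished building the archive:
archive = requests.get(f"{BASE}/download/{zip_name}")
with open(zip_name, "wb") as f:
    f.write(archive.content)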


@ -0,0 +1,751 @@
# Copyright (c) 2023 Lincoln D. Stein
"""FastAPI route for model configuration records."""
import pathlib
import shutil
from hashlib import sha1
from random import randbytes
from typing import Any, Dict, List, Optional, Set
from fastapi import Body, Path, Query, Response
from fastapi.routing import APIRouter
from pydantic import BaseModel, ConfigDict, Field
from starlette.exceptions import HTTPException
from typing_extensions import Annotated
from invokeai.app.services.model_install import ModelInstallJob
from invokeai.app.services.model_records import (
DuplicateModelException,
InvalidModelException,
ModelRecordOrderBy,
ModelSummary,
UnknownModelException,
)
from invokeai.app.services.shared.pagination import PaginatedResults
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
MainCheckpointConfig,
ModelFormat,
ModelType,
SubModelType,
)
from invokeai.backend.model_manager.merge import MergeInterpolationMethod, ModelMerger
from invokeai.backend.model_manager.metadata import AnyModelRepoMetadata
from invokeai.backend.model_manager.search import ModelSearch
from ..dependencies import ApiDependencies
model_manager_router = APIRouter(prefix="/v2/models", tags=["model_manager"])
class ModelsList(BaseModel):
"""Return list of configs."""
models: List[AnyModelConfig]
model_config = ConfigDict(use_enum_values=True)
class ModelTagSet(BaseModel):
"""Return tags for a set of models."""
key: str
name: str
author: str
tags: Set[str]
##############################################################################
# These are example inputs and outputs that are used in places where Swagger
# is unable to generate a correct example.
##############################################################################
example_model_config = {
"path": "string",
"name": "string",
"base": "sd-1",
"type": "main",
"format": "checkpoint",
"config": "string",
"key": "string",
"original_hash": "string",
"current_hash": "string",
"description": "string",
"source": "string",
"last_modified": 0,
"vae": "string",
"variant": "normal",
"prediction_type": "epsilon",
"repo_variant": "fp16",
"upcast_attention": False,
"ztsnr_training": False,
}
example_model_input = {
"path": "/path/to/model",
"name": "model_name",
"base": "sd-1",
"type": "main",
"format": "checkpoint",
"config": "configs/stable-diffusion/v1-inference.yaml",
"description": "Model description",
"vae": None,
"variant": "normal",
}
example_model_metadata = {
"name": "ip_adapter_sd_image_encoder",
"author": "InvokeAI",
"tags": [
"transformers",
"safetensors",
"clip_vision_model",
"endpoints_compatible",
"region:us",
"has_space",
"license:apache-2.0",
],
"files": [
{
"url": "https://huggingface.co/InvokeAI/ip_adapter_sd_image_encoder/resolve/main/README.md",
"path": "ip_adapter_sd_image_encoder/README.md",
"size": 628,
"sha256": None,
},
{
"url": "https://huggingface.co/InvokeAI/ip_adapter_sd_image_encoder/resolve/main/config.json",
"path": "ip_adapter_sd_image_encoder/config.json",
"size": 560,
"sha256": None,
},
{
"url": "https://huggingface.co/InvokeAI/ip_adapter_sd_image_encoder/resolve/main/model.safetensors",
"path": "ip_adapter_sd_image_encoder/model.safetensors",
"size": 2528373448,
"sha256": "6ca9667da1ca9e0b0f75e46bb030f7e011f44f86cbfb8d5a36590fcd7507b030",
},
],
"type": "huggingface",
"id": "InvokeAI/ip_adapter_sd_image_encoder",
"tag_dict": {"license": "apache-2.0"},
"last_modified": "2023-09-23T17:33:25Z",
}
##############################################################################
# ROUTES
##############################################################################
@model_manager_router.get(
"/",
operation_id="list_model_records",
)
async def list_model_records(
base_models: Optional[List[BaseModelType]] = Query(default=None, description="Base models to include"),
model_type: Optional[ModelType] = Query(default=None, description="The type of model to get"),
model_name: Optional[str] = Query(default=None, description="Exact match on the name of the model"),
model_format: Optional[ModelFormat] = Query(
default=None, description="Exact match on the format of the model (e.g. 'diffusers')"
),
) -> ModelsList:
"""Get a list of models."""
record_store = ApiDependencies.invoker.services.model_manager.store
found_models: list[AnyModelConfig] = []
if base_models:
for base_model in base_models:
found_models.extend(
record_store.search_by_attr(
base_model=base_model, model_type=model_type, model_name=model_name, model_format=model_format
)
)
else:
found_models.extend(
record_store.search_by_attr(model_type=model_type, model_name=model_name, model_format=model_format)
)
return ModelsList(models=found_models)
@model_manager_router.get(
"/get_by_attrs",
operation_id="get_model_records_by_attrs",
response_model=AnyModelConfig,
)
async def get_model_records_by_attrs(
name: str = Query(description="The name of the model"),
type: ModelType = Query(description="The type of the model"),
base: BaseModelType = Query(description="The base model of the model"),
) -> AnyModelConfig:
"""Gets a model by its attributes. The main use of this route is to provide backwards compatibility with the old
model manager, which identified models by a combination of name, base and type."""
configs = ApiDependencies.invoker.services.model_manager.store.search_by_attr(
base_model=base, model_type=type, model_name=name
)
if not configs:
raise HTTPException(status_code=404, detail="No model found with these attributes")
return configs[0]
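For context, an illustrative client call against the two lookup routes above; the base URL, mount point, and model name are assumptions:

import requests

BASE = "http://127.0.0.1:9090/api/v2/models"

# filter the full record list by base model and type
models = requests.get(BASE + "/", params={"base_models": "sd-1", "model_type": "main"}).json()["models"]

# old-style lookup by name/type/base for backwards compatibility
config = requests.get(
    f"{BASE}/get_by_attrs",
    params={"name": "my_model", "type": "main", "base": "sd-1"},
).json()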
@model_manager_router.get(
"/i/{key}",
operation_id="get_model_record",
responses={
200: {
"description": "The model configuration was retrieved successfully",
"content": {"application/json": {"example": example_model_config}},
},
400: {"description": "Bad request"},
404: {"description": "The model could not be found"},
},
)
async def get_model_record(
key: str = Path(description="Key of the model record to fetch."),
) -> AnyModelConfig:
"""Get a model record"""
record_store = ApiDependencies.invoker.services.model_manager.store
try:
config: AnyModelConfig = record_store.get_model(key)
return config
except UnknownModelException as e:
raise HTTPException(status_code=404, detail=str(e))
@model_manager_router.get("/summary", operation_id="list_model_summary")
async def list_model_summary(
page: int = Query(default=0, description="The page to get"),
per_page: int = Query(default=10, description="The number of models per page"),
order_by: ModelRecordOrderBy = Query(default=ModelRecordOrderBy.Default, description="The attribute to order by"),
) -> PaginatedResults[ModelSummary]:
"""Gets a page of model summary data."""
record_store = ApiDependencies.invoker.services.model_manager.store
results: PaginatedResults[ModelSummary] = record_store.list_models(page=page, per_page=per_page, order_by=order_by)
return results
@model_manager_router.get(
"/i/{key}/metadata",
operation_id="get_model_metadata",
responses={
200: {
"description": "The model metadata was retrieved successfully",
"content": {"application/json": {"example": example_model_metadata}},
},
400: {"description": "Bad request"},
},
)
async def get_model_metadata(
key: str = Path(description="Key of the model repo metadata to fetch."),
) -> Optional[AnyModelRepoMetadata]:
"""Get a model metadata object."""
record_store = ApiDependencies.invoker.services.model_manager.store
result: Optional[AnyModelRepoMetadata] = record_store.get_metadata(key)
return result
@model_manager_router.get(
"/tags",
operation_id="list_tags",
)
async def list_tags() -> Set[str]:
"""Get a unique set of all the model tags."""
record_store = ApiDependencies.invoker.services.model_manager.store
result: Set[str] = record_store.list_tags()
return result
class FoundModel(BaseModel):
path: str = Field(description="Path to the model")
is_installed: bool = Field(description="Whether or not the model is already installed")
@model_manager_router.get(
"/scan_folder",
operation_id="scan_for_models",
responses={
200: {"description": "Directory scanned successfully"},
400: {"description": "Invalid directory path"},
},
status_code=200,
response_model=List[FoundModel],
)
async def scan_for_models(
scan_path: Optional[str] = Query(default=None, description="Directory path to search for models"),
) -> List[FoundModel]:
path = pathlib.Path(scan_path) if scan_path else None
if path is None or not path.is_dir():
raise HTTPException(
status_code=400,
detail=f"The search path '{scan_path}' does not exist or is not a directory",
)
search = ModelSearch()
try:
found_model_paths = search.search(path)
models_path = ApiDependencies.invoker.services.configuration.models_path
# If the search path includes the main models directory, we need to exclude core models from the list.
# TODO(MM2): Core models should be handled by the model manager so we can determine if they are installed
# without needing to crawl the filesystem.
core_models_path = pathlib.Path(models_path, "core").resolve()
non_core_model_paths = [p for p in found_model_paths if not p.is_relative_to(core_models_path)]
installed_models = ApiDependencies.invoker.services.model_manager.store.search_by_attr()
resolved_installed_model_paths: list[str] = []
installed_model_sources: list[str] = []
# This call lists all installed models.
for model in installed_models:
path = pathlib.Path(model.path)
# If the model has a source, we need to add it to the list of installed sources.
if model.source:
installed_model_sources.append(model.source)
# If the path is not absolute, that means it is in the app models directory, and we need to join it with
# the models path before resolving.
if not path.is_absolute():
resolved_installed_model_paths.append(str(pathlib.Path(models_path, path).resolve()))
continue
resolved_installed_model_paths.append(str(path.resolve()))
scan_results: list[FoundModel] = []
# Check if the model is installed by comparing the resolved paths, appending to the scan result.
for p in non_core_model_paths:
path = str(p)
is_installed = path in resolved_installed_model_paths or path in installed_model_sources
found_model = FoundModel(path=path, is_installed=is_installed)
scan_results.append(found_model)
except Exception as e:
raise HTTPException(
status_code=500,
detail=f"An error occurred while searching the directory: {e}",
)
return scan_results
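# Note: a scanned model counts as installed if either its resolved absolute path
# matches a registered record's path, or the scanned path was recorded as the
# install *source* (covering models that were copied into the managed models
# directory at install time).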
@model_manager_router.get(
"/tags/search",
operation_id="search_by_metadata_tags",
)
async def search_by_metadata_tags(
tags: Optional[Set[str]] = Query(default=None, description="Tags to search for"),
) -> ModelsList:
"""Get a list of models."""
record_store = ApiDependencies.invoker.services.model_manager.store
results = record_store.search_by_metadata_tag(tags)
return ModelsList(models=results)
@model_manager_router.patch(
"/i/{key}",
operation_id="update_model_record",
responses={
200: {
"description": "The model was updated successfully",
"content": {"application/json": {"example": example_model_config}},
},
400: {"description": "Bad request"},
404: {"description": "The model could not be found"},
409: {"description": "There is already a model corresponding to the new name"},
},
status_code=200,
)
async def update_model_record(
key: Annotated[str, Path(description="Unique key of model")],
info: Annotated[
AnyModelConfig, Body(description="Model config", discriminator="type", example=example_model_input)
],
) -> AnyModelConfig:
"""Update model contents with a new config. If the model name or base fields are changed, then the model is renamed."""
logger = ApiDependencies.invoker.services.logger
record_store = ApiDependencies.invoker.services.model_manager.store
try:
model_response: AnyModelConfig = record_store.update_model(key, config=info)
logger.info(f"Updated model: {key}")
except UnknownModelException as e:
raise HTTPException(status_code=404, detail=str(e))
except ValueError as e:
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
return model_response
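# Rename sketch (hypothetical payload; the route expects a complete AnyModelConfig
# body, abbreviated here):
#
#   PATCH /api/v2/models/i/<key>
#   {"type": "main", "name": "my-renamed-model", ...}
#
# Changing `name` or `base` takes the rename path; a 409 is returned if another
# model already uses the new name.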
@model_manager_router.delete(
"/i/{key}",
operation_id="del_model_record",
responses={
204: {"description": "Model deleted successfully"},
404: {"description": "Model not found"},
},
status_code=204,
)
async def del_model_record(
key: str = Path(description="Unique key of model to remove from model registry."),
) -> Response:
"""
Delete a model record from the database.
The configuration record will be removed. The corresponding weights files will be
deleted as well if they reside within the InvokeAI "models" directory.
"""
logger = ApiDependencies.invoker.services.logger
try:
installer = ApiDependencies.invoker.services.model_manager.install
installer.delete(key)
logger.info(f"Deleted model: {key}")
return Response(status_code=204)
except UnknownModelException as e:
logger.error(str(e))
raise HTTPException(status_code=404, detail=str(e))
@model_manager_router.post(
"/i/",
operation_id="add_model_record",
responses={
201: {
"description": "The model added successfully",
"content": {"application/json": {"example": example_model_config}},
},
409: {"description": "There is already a model corresponding to this path or repo_id"},
415: {"description": "Unrecognized file/folder format"},
},
status_code=201,
)
async def add_model_record(
config: Annotated[
AnyModelConfig, Body(description="Model config", discriminator="type", example=example_model_input)
],
) -> AnyModelConfig:
"""Add a model using the configuration information appropriate for its type."""
logger = ApiDependencies.invoker.services.logger
record_store = ApiDependencies.invoker.services.model_manager.store
if config.key == "<NOKEY>":
config.key = sha1(randbytes(100)).hexdigest()
logger.info(f"Created model {config.key} for {config.name}")
try:
record_store.add_model(config.key, config)
except DuplicateModelException as e:
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
except InvalidModelException as e:
logger.error(str(e))
raise HTTPException(status_code=415)
# now fetch it out
result: AnyModelConfig = record_store.get_model(config.key)
return result
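# Note on keys: clients that do not care about the key may send the sentinel
# "<NOKEY>"; the route replaces it with a random SHA-1 hex digest before the
# record is stored, then re-fetches the stored config so the caller sees the
# final key.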
@model_manager_router.post(
"/install",
operation_id="install_model",
responses={
201: {"description": "The model imported successfully"},
415: {"description": "Unrecognized file/folder format"},
424: {"description": "The model appeared to import successfully, but could not be found in the model manager"},
409: {"description": "There is already a model corresponding to this path or repo_id"},
},
status_code=201,
)
async def install_model(
source: str = Query(description="Model source to install, can be a local path, repo_id, or remote URL"),
# TODO(MM2): Can we type this?
config: Optional[Dict[str, Any]] = Body(
description="Dict of fields that override auto-probed values in the model config record, such as name, description and prediction_type ",
default=None,
example={"name": "string", "description": "string"},
),
access_token: Optional[str] = None,
) -> ModelInstallJob:
"""Install a model using a string identifier.
`source` can be any of the following.
1. A path on the local filesystem ('C:\\users\\fred\\model.safetensors')
2. A URL pointing to a single downloadable model file
3. A HuggingFace repo_id with any of the following formats:
- model/name
- model/name:fp16:vae
- model/name::vae -- use default precision
- model/name:fp16:path/to/model.safetensors
- model/name::path/to/model.safetensors
`config` is an optional dict containing model configuration values that will override
the ones that are probed automatically.
`access_token` is an optional access token for use with URLs that require
authentication.
Models will be downloaded, probed, configured and installed in a
series of background threads. The return object has a `status` attribute
that can be used to monitor progress.
See the documentation for `import_model_record` for more information on
interpreting the job information returned by this route.
"""
logger = ApiDependencies.invoker.services.logger
try:
installer = ApiDependencies.invoker.services.model_manager.install
result: ModelInstallJob = installer.heuristic_import(
source=source,
config=config,
access_token=access_token,
)
logger.info(f"Started installation of {source}")
except UnknownModelException as e:
logger.error(str(e))
raise HTTPException(status_code=424, detail=str(e))
except InvalidModelException as e:
logger.error(str(e))
raise HTTPException(status_code=415)
except ValueError as e:
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
return result
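# Minimal client sketch (hypothetical; assumes the router is mounted at
# /api/v2/models on the default local port):
#
#   import requests
#
#   job = requests.post(
#       "http://localhost:9090/api/v2/models/install",
#       params={"source": "stabilityai/sd-vae-ft-mse"},
#   ).json()
#   print(job["id"], job["status"])  # poll /import/{id} until "completed"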
@model_manager_router.get(
"/import",
operation_id="list_model_install_jobs",
)
async def list_model_install_jobs() -> List[ModelInstallJob]:
"""Return the list of model install jobs.
Install jobs have a numeric `id`, a `status`, and other fields that provide information on
the nature of the job and its progress. The `status` is one of:
* "waiting" -- Job is waiting in the queue to run
* "downloading" -- Model file(s) are downloading
* "running" -- Model has downloaded and the model probing and registration process is running
* "completed" -- Installation completed successfully
* "error" -- An error occurred. Details will be in the "error_type" and "error" fields.
* "cancelled" -- Job was cancelled before completion.
Once completed, information about the model such as its size, base
model, type, and metadata can be retrieved from the `config_out`
field. For multi-file models such as diffusers, information on individual files
can be retrieved from `download_parts`.
See the example and schema below for more information.
"""
jobs: List[ModelInstallJob] = ApiDependencies.invoker.services.model_manager.install.list_jobs()
return jobs
@model_manager_router.get(
"/import/{id}",
operation_id="get_model_install_job",
responses={
200: {"description": "Success"},
404: {"description": "No such job"},
},
)
async def get_model_install_job(id: int = Path(description="Model install id")) -> ModelInstallJob:
"""
Return the model install job corresponding to the given job ID. See the documentation for 'List Model Install Jobs'
for information on the format of the return value.
"""
try:
result: ModelInstallJob = ApiDependencies.invoker.services.model_manager.install.get_job_by_id(id)
return result
except ValueError as e:
raise HTTPException(status_code=404, detail=str(e))
@model_manager_router.delete(
"/import/{id}",
operation_id="cancel_model_install_job",
responses={
201: {"description": "The job was cancelled successfully"},
415: {"description": "No such job"},
},
status_code=201,
)
async def cancel_model_install_job(id: int = Path(description="Model install job ID")) -> None:
"""Cancel the model install job(s) corresponding to the given job ID."""
installer = ApiDependencies.invoker.services.model_manager.install
try:
job = installer.get_job_by_id(id)
except ValueError as e:
raise HTTPException(status_code=415, detail=str(e))
installer.cancel_job(job)
@model_manager_router.patch(
"/import",
operation_id="prune_model_install_jobs",
responses={
204: {"description": "All completed and errored jobs have been pruned"},
400: {"description": "Bad request"},
},
)
async def prune_model_install_jobs() -> Response:
"""Prune all completed and errored jobs from the install job list."""
ApiDependencies.invoker.services.model_manager.install.prune_jobs()
return Response(status_code=204)
@model_manager_router.patch(
"/sync",
operation_id="sync_models_to_config",
responses={
204: {"description": "Model config record database resynced with files on disk"},
400: {"description": "Bad request"},
},
)
async def sync_models_to_config() -> Response:
"""
Traverse the models and autoimport directories.
Model files without a corresponding
record in the database are added. Orphan records without a model file are deleted.
"""
ApiDependencies.invoker.services.model_manager.install.sync_to_config()
return Response(status_code=204)
@model_manager_router.put(
"/convert/{key}",
operation_id="convert_model",
responses={
200: {
"description": "Model converted successfully",
"content": {"application/json": {"example": example_model_config}},
},
400: {"description": "Bad request"},
404: {"description": "Model not found"},
409: {"description": "There is already a model registered at this location"},
},
)
async def convert_model(
key: str = Path(description="Unique key of the safetensors main model to convert to diffusers format."),
) -> AnyModelConfig:
"""
Permanently convert a model into diffusers format, replacing the safetensors version.
Note that during the conversion process the key and model hash will change.
The return value is the model configuration for the converted model.
"""
model_manager = ApiDependencies.invoker.services.model_manager
logger = ApiDependencies.invoker.services.logger
loader = ApiDependencies.invoker.services.model_manager.load
store = ApiDependencies.invoker.services.model_manager.store
installer = ApiDependencies.invoker.services.model_manager.install
try:
model_config = store.get_model(key)
except UnknownModelException as e:
logger.error(str(e))
raise HTTPException(status_code=404, detail=str(e))
if not isinstance(model_config, MainCheckpointConfig):
logger.error(f"The model with key {key} is not a main checkpoint model.")
raise HTTPException(400, f"The model with key {key} is not a main checkpoint model.")
# loading the model will convert it into a cached diffusers file
model_manager.load_model_by_config(model_config, submodel_type=SubModelType.Scheduler)
# Get the path of the converted model from the loader
cache_path = loader.convert_cache.cache_path(key)
assert cache_path.exists()
# temporarily rename the original safetensors file so that there is no naming conflict
original_name = model_config.name
model_config.name = f"{original_name}.DELETE"
store.update_model(key, config=model_config)
# install the diffusers
try:
new_key = installer.install_path(
cache_path,
config={
"name": original_name,
"description": model_config.description,
"original_hash": model_config.original_hash,
"source": model_config.source,
},
)
except DuplicateModelException as e:
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
# get the original metadata
if orig_metadata := store.get_metadata(key):
store.metadata_store.add_metadata(new_key, orig_metadata)
# delete the original safetensors file
installer.delete(key)
# delete the cached version
shutil.rmtree(cache_path)
# return the config record for the new diffusers directory
new_config: AnyModelConfig = store.get_model(new_key)
return new_config
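# Conversion flow, summarized: the safetensors record is renamed to
# "<name>.DELETE" to free the name, the cached diffusers copy produced by the
# loader is installed under a brand-new key, metadata is carried over, and only
# then are the old record and the conversion cache removed. Callers must re-fetch
# the model list afterwards, since the original key no longer exists.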
@model_manager_router.put(
"/merge",
operation_id="merge",
responses={
200: {
"description": "Model converted successfully",
"content": {"application/json": {"example": example_model_config}},
},
400: {"description": "Bad request"},
404: {"description": "Model not found"},
409: {"description": "There is already a model registered at this location"},
},
)
async def merge(
keys: List[str] = Body(description="Keys for two to three models to merge", min_length=2, max_length=3),
merged_model_name: Optional[str] = Body(description="Name of destination model", default=None),
alpha: float = Body(description="Alpha weighting strength to apply to 2d and 3d models", default=0.5),
force: bool = Body(
description="Force merging of models created with different versions of diffusers",
default=False,
),
interp: Optional[MergeInterpolationMethod] = Body(description="Interpolation method", default=None),
merge_dest_directory: Optional[str] = Body(
description="Save the merged model to the designated directory (with 'merged_model_name' appended)",
default=None,
),
) -> AnyModelConfig:
"""
Merge diffusers models. The process is controlled by a set of parameters provided in the body of the request.
```
Argument Description [default]
-------- ----------------------
keys List of 2-3 model keys to merge together. All models must use the same base type.
merged_model_name Name for the merged model [Concat model names]
alpha Alpha value (0.0-1.0). Higher values give more weight to the second model [0.5]
force If true, force the merge even if the models were generated by different versions of the diffusers library [False]
interp Interpolation method. One of "weighted_sum", "sigmoid", "inv_sigmoid" or "add_difference" [weighted_sum]
merge_dest_directory Specify a directory to store the merged model in [models directory]
```
"""
logger = ApiDependencies.invoker.services.logger
try:
logger.info(f"Merging models: {keys} into {merge_dest_directory or '<MODELS>'}/{merged_model_name}")
dest = pathlib.Path(merge_dest_directory) if merge_dest_directory else None
installer = ApiDependencies.invoker.services.model_manager.install
merger = ModelMerger(installer)
model_names = [installer.record_store.get_model(x).name for x in keys]
response = merger.merge_diffusion_models_and_save(
model_keys=keys,
merged_model_name=merged_model_name or "+".join(model_names),
alpha=alpha,
interp=interp,
force=force,
merge_dest_directory=dest,
)
except UnknownModelException:
raise HTTPException(
status_code=404,
detail=f"One or more of the models '{keys}' not found",
)
except ValueError as e:
raise HTTPException(status_code=400, detail=str(e))
return response
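# Example request body (hypothetical keys and values):
#
#   PUT /api/v2/models/merge
#   {
#       "keys": ["aaaa1111", "bbbb2222"],
#       "merged_model_name": "my-blend",
#       "alpha": 0.35,
#       "interp": "weighted_sum"
#   }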


@@ -1,164 +0,0 @@
# Copyright (c) 2023 Lincoln D. Stein
"""FastAPI route for model configuration records."""
from hashlib import sha1
from random import randbytes
from typing import List, Optional
from fastapi import Body, Path, Query, Response
from fastapi.routing import APIRouter
from pydantic import BaseModel, ConfigDict
from starlette.exceptions import HTTPException
from typing_extensions import Annotated
from invokeai.app.services.model_records import (
DuplicateModelException,
InvalidModelException,
UnknownModelException,
)
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
ModelType,
)
from ..dependencies import ApiDependencies
model_records_router = APIRouter(prefix="/v1/model/record", tags=["models"])
class ModelsList(BaseModel):
"""Return list of configs."""
models: list[AnyModelConfig]
model_config = ConfigDict(use_enum_values=True)
@model_records_router.get(
"/",
operation_id="list_model_records",
)
async def list_model_records(
base_models: Optional[List[BaseModelType]] = Query(default=None, description="Base models to include"),
model_type: Optional[ModelType] = Query(default=None, description="The type of model to get"),
) -> ModelsList:
"""Get a list of models."""
record_store = ApiDependencies.invoker.services.model_records
found_models: list[AnyModelConfig] = []
if base_models:
for base_model in base_models:
found_models.extend(record_store.search_by_attr(base_model=base_model, model_type=model_type))
else:
found_models.extend(record_store.search_by_attr(model_type=model_type))
return ModelsList(models=found_models)
@model_records_router.get(
"/i/{key}",
operation_id="get_model_record",
responses={
200: {"description": "Success"},
400: {"description": "Bad request"},
404: {"description": "The model could not be found"},
},
)
async def get_model_record(
key: str = Path(description="Key of the model record to fetch."),
) -> AnyModelConfig:
"""Get a model record"""
record_store = ApiDependencies.invoker.services.model_records
try:
return record_store.get_model(key)
except UnknownModelException as e:
raise HTTPException(status_code=404, detail=str(e))
@model_records_router.patch(
"/i/{key}",
operation_id="update_model_record",
responses={
200: {"description": "The model was updated successfully"},
400: {"description": "Bad request"},
404: {"description": "The model could not be found"},
409: {"description": "There is already a model corresponding to the new name"},
},
status_code=200,
response_model=AnyModelConfig,
)
async def update_model_record(
key: Annotated[str, Path(description="Unique key of model")],
info: Annotated[AnyModelConfig, Body(description="Model config", discriminator="type")],
) -> AnyModelConfig:
"""Update model contents with a new config. If the model name or base fields are changed, then the model is renamed."""
logger = ApiDependencies.invoker.services.logger
record_store = ApiDependencies.invoker.services.model_records
try:
model_response = record_store.update_model(key, config=info)
logger.info(f"Updated model: {key}")
except UnknownModelException as e:
raise HTTPException(status_code=404, detail=str(e))
except ValueError as e:
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
return model_response
@model_records_router.delete(
"/i/{key}",
operation_id="del_model_record",
responses={
204: {"description": "Model deleted successfully"},
404: {"description": "Model not found"},
},
status_code=204,
)
async def del_model_record(
key: str = Path(description="Unique key of model to remove from model registry."),
) -> Response:
"""Delete Model"""
logger = ApiDependencies.invoker.services.logger
try:
record_store = ApiDependencies.invoker.services.model_records
record_store.del_model(key)
logger.info(f"Deleted model: {key}")
return Response(status_code=204)
except UnknownModelException as e:
logger.error(str(e))
raise HTTPException(status_code=404, detail=str(e))
@model_records_router.post(
"/i/",
operation_id="add_model_record",
responses={
201: {"description": "The model added successfully"},
409: {"description": "There is already a model corresponding to this path or repo_id"},
415: {"description": "Unrecognized file/folder format"},
},
status_code=201,
)
async def add_model_record(
config: Annotated[AnyModelConfig, Body(description="Model config", discriminator="type")],
) -> AnyModelConfig:
"""
Add a model using the configuration information appropriate for its type.
"""
logger = ApiDependencies.invoker.services.logger
record_store = ApiDependencies.invoker.services.model_records
if config.key == "<NOKEY>":
config.key = sha1(randbytes(100)).hexdigest()
logger.info(f"Created model {config.key} for {config.name}")
try:
record_store.add_model(config.key, config)
except DuplicateModelException as e:
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
except InvalidModelException as e:
logger.error(str(e))
raise HTTPException(status_code=415)
# now fetch it out
return record_store.get_model(config.key)


@@ -1,427 +0,0 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654), 2023 Kent Keirsey (https://github.com/hipsterusername), 2023 Lincoln D. Stein
import pathlib
from typing import Annotated, List, Literal, Optional, Union
from fastapi import Body, Path, Query, Response
from fastapi.routing import APIRouter
from pydantic import BaseModel, ConfigDict, Field, TypeAdapter
from starlette.exceptions import HTTPException
from invokeai.backend import BaseModelType, ModelType
from invokeai.backend.model_management import MergeInterpolationMethod
from invokeai.backend.model_management.models import (
OPENAPI_MODEL_CONFIGS,
InvalidModelException,
ModelNotFoundException,
SchedulerPredictionType,
)
from ..dependencies import ApiDependencies
models_router = APIRouter(prefix="/v1/models", tags=["models"])
UpdateModelResponse = Union[tuple(OPENAPI_MODEL_CONFIGS)]
UpdateModelResponseValidator = TypeAdapter(UpdateModelResponse)
ImportModelResponse = Union[tuple(OPENAPI_MODEL_CONFIGS)]
ImportModelResponseValidator = TypeAdapter(ImportModelResponse)
ConvertModelResponse = Union[tuple(OPENAPI_MODEL_CONFIGS)]
ConvertModelResponseValidator = TypeAdapter(ConvertModelResponse)
MergeModelResponse = Union[tuple(OPENAPI_MODEL_CONFIGS)]
ImportModelAttributes = Union[tuple(OPENAPI_MODEL_CONFIGS)]
class ModelsList(BaseModel):
models: list[Union[tuple(OPENAPI_MODEL_CONFIGS)]]
model_config = ConfigDict(use_enum_values=True)
ModelsListValidator = TypeAdapter(ModelsList)
@models_router.get(
"/",
operation_id="list_models",
responses={200: {"model": ModelsList}},
)
async def list_models(
base_models: Optional[List[BaseModelType]] = Query(default=None, description="Base models to include"),
model_type: Optional[ModelType] = Query(default=None, description="The type of model to get"),
) -> ModelsList:
"""Gets a list of models"""
if base_models and len(base_models) > 0:
models_raw = []
for base_model in base_models:
models_raw.extend(ApiDependencies.invoker.services.model_manager.list_models(base_model, model_type))
else:
models_raw = ApiDependencies.invoker.services.model_manager.list_models(None, model_type)
models = ModelsListValidator.validate_python({"models": models_raw})
return models
@models_router.patch(
"/{base_model}/{model_type}/{model_name}",
operation_id="update_model",
responses={
200: {"description": "The model was updated successfully"},
400: {"description": "Bad request"},
404: {"description": "The model could not be found"},
409: {"description": "There is already a model corresponding to the new name"},
},
status_code=200,
response_model=UpdateModelResponse,
)
async def update_model(
base_model: BaseModelType = Path(description="Base model"),
model_type: ModelType = Path(description="The type of model"),
model_name: str = Path(description="model name"),
info: Union[tuple(OPENAPI_MODEL_CONFIGS)] = Body(description="Model configuration"),
) -> UpdateModelResponse:
"""Update model contents with a new config. If the model name or base fields are changed, then the model is renamed."""
logger = ApiDependencies.invoker.services.logger
try:
previous_info = ApiDependencies.invoker.services.model_manager.list_model(
model_name=model_name,
base_model=base_model,
model_type=model_type,
)
# rename operation requested
if info.model_name != model_name or info.base_model != base_model:
ApiDependencies.invoker.services.model_manager.rename_model(
base_model=base_model,
model_type=model_type,
model_name=model_name,
new_name=info.model_name,
new_base=info.base_model,
)
logger.info(f"Successfully renamed {base_model.value}/{model_name}=>{info.base_model}/{info.model_name}")
# update information to support an update of attributes
model_name = info.model_name
base_model = info.base_model
new_info = ApiDependencies.invoker.services.model_manager.list_model(
model_name=model_name,
base_model=base_model,
model_type=model_type,
)
if new_info.get("path") != previous_info.get(
"path"
): # model manager moved model path during rename - don't overwrite it
info.path = new_info.get("path")
# replace empty string values with None/null to avoid phenomenon of vae: ''
info_dict = info.model_dump()
info_dict = {x: info_dict[x] if info_dict[x] else None for x in info_dict.keys()}
ApiDependencies.invoker.services.model_manager.update_model(
model_name=model_name,
base_model=base_model,
model_type=model_type,
model_attributes=info_dict,
)
model_raw = ApiDependencies.invoker.services.model_manager.list_model(
model_name=model_name,
base_model=base_model,
model_type=model_type,
)
model_response = UpdateModelResponseValidator.validate_python(model_raw)
except ModelNotFoundException as e:
raise HTTPException(status_code=404, detail=str(e))
except ValueError as e:
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
except Exception as e:
logger.error(str(e))
raise HTTPException(status_code=400, detail=str(e))
return model_response
@models_router.post(
"/import",
operation_id="import_model",
responses={
201: {"description": "The model imported successfully"},
404: {"description": "The model could not be found"},
415: {"description": "Unrecognized file/folder format"},
424: {"description": "The model appeared to import successfully, but could not be found in the model manager"},
409: {"description": "There is already a model corresponding to this path or repo_id"},
},
status_code=201,
response_model=ImportModelResponse,
)
async def import_model(
location: str = Body(description="A model path, repo_id or URL to import"),
prediction_type: Optional[Literal["v_prediction", "epsilon", "sample"]] = Body(
description="Prediction type for SDv2 checkpoints and rare SDv1 checkpoints",
default=None,
),
) -> ImportModelResponse:
"""Add a model using its local path, repo_id, or remote URL. Model characteristics will be probed and configured automatically"""
location = location.strip("\"' ")
items_to_import = {location}
prediction_types = {x.value: x for x in SchedulerPredictionType}
logger = ApiDependencies.invoker.services.logger
try:
installed_models = ApiDependencies.invoker.services.model_manager.heuristic_import(
items_to_import=items_to_import,
prediction_type_helper=lambda x: prediction_types.get(prediction_type),
)
info = installed_models.get(location)
if not info:
logger.error("Import failed")
raise HTTPException(status_code=415)
logger.info(f"Successfully imported {location}, got {info}")
model_raw = ApiDependencies.invoker.services.model_manager.list_model(
model_name=info.name, base_model=info.base_model, model_type=info.model_type
)
return ImportModelResponseValidator.validate_python(model_raw)
except ModelNotFoundException as e:
logger.error(str(e))
raise HTTPException(status_code=404, detail=str(e))
except InvalidModelException as e:
logger.error(str(e))
raise HTTPException(status_code=415)
except ValueError as e:
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
@models_router.post(
"/add",
operation_id="add_model",
responses={
201: {"description": "The model added successfully"},
404: {"description": "The model could not be found"},
424: {"description": "The model appeared to add successfully, but could not be found in the model manager"},
409: {"description": "There is already a model corresponding to this path or repo_id"},
},
status_code=201,
response_model=ImportModelResponse,
)
async def add_model(
info: Union[tuple(OPENAPI_MODEL_CONFIGS)] = Body(description="Model configuration"),
) -> ImportModelResponse:
"""Add a model using the configuration information appropriate for its type. Only local models can be added by path"""
logger = ApiDependencies.invoker.services.logger
try:
ApiDependencies.invoker.services.model_manager.add_model(
info.model_name,
info.base_model,
info.model_type,
model_attributes=info.model_dump(),
)
logger.info(f"Successfully added {info.model_name}")
model_raw = ApiDependencies.invoker.services.model_manager.list_model(
model_name=info.model_name,
base_model=info.base_model,
model_type=info.model_type,
)
return ImportModelResponseValidator.validate_python(model_raw)
except ModelNotFoundException as e:
logger.error(str(e))
raise HTTPException(status_code=404, detail=str(e))
except ValueError as e:
logger.error(str(e))
raise HTTPException(status_code=409, detail=str(e))
@models_router.delete(
"/{base_model}/{model_type}/{model_name}",
operation_id="del_model",
responses={
204: {"description": "Model deleted successfully"},
404: {"description": "Model not found"},
},
status_code=204,
response_model=None,
)
async def delete_model(
base_model: BaseModelType = Path(description="Base model"),
model_type: ModelType = Path(description="The type of model"),
model_name: str = Path(description="model name"),
) -> Response:
"""Delete Model"""
logger = ApiDependencies.invoker.services.logger
try:
ApiDependencies.invoker.services.model_manager.del_model(
model_name, base_model=base_model, model_type=model_type
)
logger.info(f"Deleted model: {model_name}")
return Response(status_code=204)
except ModelNotFoundException as e:
logger.error(str(e))
raise HTTPException(status_code=404, detail=str(e))
@models_router.put(
"/convert/{base_model}/{model_type}/{model_name}",
operation_id="convert_model",
responses={
200: {"description": "Model converted successfully"},
400: {"description": "Bad request"},
404: {"description": "Model not found"},
},
status_code=200,
response_model=ConvertModelResponse,
)
async def convert_model(
base_model: BaseModelType = Path(description="Base model"),
model_type: ModelType = Path(description="The type of model"),
model_name: str = Path(description="model name"),
convert_dest_directory: Optional[str] = Query(
default=None, description="Save the converted model to the designated directory"
),
) -> ConvertModelResponse:
"""Convert a checkpoint model into a diffusers model, optionally saving to the indicated destination directory, or `models` if none."""
logger = ApiDependencies.invoker.services.logger
try:
logger.info(f"Converting model: {model_name}")
dest = pathlib.Path(convert_dest_directory) if convert_dest_directory else None
ApiDependencies.invoker.services.model_manager.convert_model(
model_name,
base_model=base_model,
model_type=model_type,
convert_dest_directory=dest,
)
model_raw = ApiDependencies.invoker.services.model_manager.list_model(
model_name, base_model=base_model, model_type=model_type
)
response = ConvertModelResponseValidator.validate_python(model_raw)
except ModelNotFoundException as e:
raise HTTPException(status_code=404, detail=f"Model '{model_name}' not found: {str(e)}")
except ValueError as e:
raise HTTPException(status_code=400, detail=str(e))
return response
@models_router.get(
"/search",
operation_id="search_for_models",
responses={
200: {"description": "Directory searched successfully"},
404: {"description": "Invalid directory path"},
},
status_code=200,
response_model=List[pathlib.Path],
)
async def search_for_models(
search_path: pathlib.Path = Query(description="Directory path to search for models"),
) -> List[pathlib.Path]:
if not search_path.is_dir():
raise HTTPException(
status_code=404,
detail=f"The search path '{search_path}' does not exist or is not directory",
)
return ApiDependencies.invoker.services.model_manager.search_for_models(search_path)
@models_router.get(
"/ckpt_confs",
operation_id="list_ckpt_configs",
responses={
200: {"description": "paths retrieved successfully"},
},
status_code=200,
response_model=List[pathlib.Path],
)
async def list_ckpt_configs() -> List[pathlib.Path]:
"""Return a list of the legacy checkpoint configuration files stored in `ROOT/configs/stable-diffusion`, relative to ROOT."""
return ApiDependencies.invoker.services.model_manager.list_checkpoint_configs()
@models_router.post(
"/sync",
operation_id="sync_to_config",
responses={
201: {"description": "synchronization successful"},
},
status_code=201,
response_model=bool,
)
async def sync_to_config() -> bool:
"""Call after making changes to models.yaml, autoimport directories or models directory to synchronize
in-memory data structures with disk data structures."""
ApiDependencies.invoker.services.model_manager.sync_to_config()
return True
# There's some weird pydantic-fastapi behaviour that requires this to be a separate class
# TODO: After a few updates, see if it works inside the route operation handler?
class MergeModelsBody(BaseModel):
model_names: List[str] = Field(description="model name", min_length=2, max_length=3)
merged_model_name: Optional[str] = Field(description="Name of destination model")
alpha: Optional[float] = Field(description="Alpha weighting strength to apply to 2d and 3d models", default=0.5)
interp: Optional[MergeInterpolationMethod] = Field(description="Interpolation method")
force: Optional[bool] = Field(
description="Force merging of models created with different versions of diffusers",
default=False,
)
merge_dest_directory: Optional[str] = Field(
description="Save the merged model to the designated directory (with 'merged_model_name' appended)",
default=None,
)
model_config = ConfigDict(protected_namespaces=())
@models_router.put(
"/merge/{base_model}",
operation_id="merge_models",
responses={
200: {"description": "Model converted successfully"},
400: {"description": "Incompatible models"},
404: {"description": "One or more models not found"},
},
status_code=200,
response_model=MergeModelResponse,
)
async def merge_models(
body: Annotated[MergeModelsBody, Body(description="Model configuration", embed=True)],
base_model: BaseModelType = Path(description="Base model"),
) -> MergeModelResponse:
"""Convert a checkpoint model into a diffusers model"""
logger = ApiDependencies.invoker.services.logger
try:
logger.info(
f"Merging models: {body.model_names} into {body.merge_dest_directory or '<MODELS>'}/{body.merged_model_name}"
)
dest = pathlib.Path(body.merge_dest_directory) if body.merge_dest_directory else None
result = ApiDependencies.invoker.services.model_manager.merge_models(
model_names=body.model_names,
base_model=base_model,
merged_model_name=body.merged_model_name or "+".join(body.model_names),
alpha=body.alpha,
interp=body.interp,
force=body.force,
merge_dest_directory=dest,
)
model_raw = ApiDependencies.invoker.services.model_manager.list_model(
result.name,
base_model=base_model,
model_type=ModelType.Main,
)
response = ConvertModelResponseValidator.validate_python(model_raw)
except ModelNotFoundException:
raise HTTPException(
status_code=404,
detail=f"One or more of the models '{body.model_names}' not found",
)
except ValueError as e:
raise HTTPException(status_code=400, detail=str(e))
return response


@@ -1,276 +0,0 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from fastapi import HTTPException, Path
from fastapi.routing import APIRouter
from ...services.shared.graph import GraphExecutionState
from ..dependencies import ApiDependencies
session_router = APIRouter(prefix="/v1/sessions", tags=["sessions"])
# @session_router.post(
# "/",
# operation_id="create_session",
# responses={
# 200: {"model": GraphExecutionState},
# 400: {"description": "Invalid json"},
# },
# deprecated=True,
# )
# async def create_session(
# queue_id: str = Query(default="", description="The id of the queue to associate the session with"),
# graph: Optional[Graph] = Body(default=None, description="The graph to initialize the session with"),
# ) -> GraphExecutionState:
# """Creates a new session, optionally initializing it with an invocation graph"""
# session = ApiDependencies.invoker.create_execution_state(queue_id=queue_id, graph=graph)
# return session
# @session_router.get(
# "/",
# operation_id="list_sessions",
# responses={200: {"model": PaginatedResults[GraphExecutionState]}},
# deprecated=True,
# )
# async def list_sessions(
# page: int = Query(default=0, description="The page of results to get"),
# per_page: int = Query(default=10, description="The number of results per page"),
# query: str = Query(default="", description="The query string to search for"),
# ) -> PaginatedResults[GraphExecutionState]:
# """Gets a list of sessions, optionally searching"""
# if query == "":
# result = ApiDependencies.invoker.services.graph_execution_manager.list(page, per_page)
# else:
# result = ApiDependencies.invoker.services.graph_execution_manager.search(query, page, per_page)
# return result
@session_router.get(
"/{session_id}",
operation_id="get_session",
responses={
200: {"model": GraphExecutionState},
404: {"description": "Session not found"},
},
)
async def get_session(
session_id: str = Path(description="The id of the session to get"),
) -> GraphExecutionState:
"""Gets a session"""
session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
if session is None:
raise HTTPException(status_code=404)
else:
return session
# @session_router.post(
# "/{session_id}/nodes",
# operation_id="add_node",
# responses={
# 200: {"model": str},
# 400: {"description": "Invalid node or link"},
# 404: {"description": "Session not found"},
# },
# deprecated=True,
# )
# async def add_node(
# session_id: str = Path(description="The id of the session"),
# node: Annotated[Union[BaseInvocation.get_invocations()], Field(discriminator="type")] = Body( # type: ignore
# description="The node to add"
# ),
# ) -> str:
# """Adds a node to the graph"""
# session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
# if session is None:
# raise HTTPException(status_code=404)
# try:
# session.add_node(node)
# ApiDependencies.invoker.services.graph_execution_manager.set(
# session
# ) # TODO: can this be done automatically, or add node through an API?
# return session.id
# except NodeAlreadyExecutedError:
# raise HTTPException(status_code=400)
# except IndexError:
# raise HTTPException(status_code=400)
# @session_router.put(
# "/{session_id}/nodes/{node_path}",
# operation_id="update_node",
# responses={
# 200: {"model": GraphExecutionState},
# 400: {"description": "Invalid node or link"},
# 404: {"description": "Session not found"},
# },
# deprecated=True,
# )
# async def update_node(
# session_id: str = Path(description="The id of the session"),
# node_path: str = Path(description="The path to the node in the graph"),
# node: Annotated[Union[BaseInvocation.get_invocations()], Field(discriminator="type")] = Body( # type: ignore
# description="The new node"
# ),
# ) -> GraphExecutionState:
# """Updates a node in the graph and removes all linked edges"""
# session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
# if session is None:
# raise HTTPException(status_code=404)
# try:
# session.update_node(node_path, node)
# ApiDependencies.invoker.services.graph_execution_manager.set(
# session
# ) # TODO: can this be done automatically, or add node through an API?
# return session
# except NodeAlreadyExecutedError:
# raise HTTPException(status_code=400)
# except IndexError:
# raise HTTPException(status_code=400)
# @session_router.delete(
# "/{session_id}/nodes/{node_path}",
# operation_id="delete_node",
# responses={
# 200: {"model": GraphExecutionState},
# 400: {"description": "Invalid node or link"},
# 404: {"description": "Session not found"},
# },
# deprecated=True,
# )
# async def delete_node(
# session_id: str = Path(description="The id of the session"),
# node_path: str = Path(description="The path to the node to delete"),
# ) -> GraphExecutionState:
# """Deletes a node in the graph and removes all linked edges"""
# session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
# if session is None:
# raise HTTPException(status_code=404)
# try:
# session.delete_node(node_path)
# ApiDependencies.invoker.services.graph_execution_manager.set(
# session
# ) # TODO: can this be done automatically, or add node through an API?
# return session
# except NodeAlreadyExecutedError:
# raise HTTPException(status_code=400)
# except IndexError:
# raise HTTPException(status_code=400)
# @session_router.post(
# "/{session_id}/edges",
# operation_id="add_edge",
# responses={
# 200: {"model": GraphExecutionState},
# 400: {"description": "Invalid node or link"},
# 404: {"description": "Session not found"},
# },
# deprecated=True,
# )
# async def add_edge(
# session_id: str = Path(description="The id of the session"),
# edge: Edge = Body(description="The edge to add"),
# ) -> GraphExecutionState:
# """Adds an edge to the graph"""
# session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
# if session is None:
# raise HTTPException(status_code=404)
# try:
# session.add_edge(edge)
# ApiDependencies.invoker.services.graph_execution_manager.set(
# session
# ) # TODO: can this be done automatically, or add node through an API?
# return session
# except NodeAlreadyExecutedError:
# raise HTTPException(status_code=400)
# except IndexError:
# raise HTTPException(status_code=400)
# # TODO: the edge being in the path here is really ugly, find a better solution
# @session_router.delete(
# "/{session_id}/edges/{from_node_id}/{from_field}/{to_node_id}/{to_field}",
# operation_id="delete_edge",
# responses={
# 200: {"model": GraphExecutionState},
# 400: {"description": "Invalid node or link"},
# 404: {"description": "Session not found"},
# },
# deprecated=True,
# )
# async def delete_edge(
# session_id: str = Path(description="The id of the session"),
# from_node_id: str = Path(description="The id of the node the edge is coming from"),
# from_field: str = Path(description="The field of the node the edge is coming from"),
# to_node_id: str = Path(description="The id of the node the edge is going to"),
# to_field: str = Path(description="The field of the node the edge is going to"),
# ) -> GraphExecutionState:
# """Deletes an edge from the graph"""
# session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
# if session is None:
# raise HTTPException(status_code=404)
# try:
# edge = Edge(
# source=EdgeConnection(node_id=from_node_id, field=from_field),
# destination=EdgeConnection(node_id=to_node_id, field=to_field),
# )
# session.delete_edge(edge)
# ApiDependencies.invoker.services.graph_execution_manager.set(
# session
# ) # TODO: can this be done automatically, or add node through an API?
# return session
# except NodeAlreadyExecutedError:
# raise HTTPException(status_code=400)
# except IndexError:
# raise HTTPException(status_code=400)
# @session_router.put(
# "/{session_id}/invoke",
# operation_id="invoke_session",
# responses={
# 200: {"model": None},
# 202: {"description": "The invocation is queued"},
# 400: {"description": "The session has no invocations ready to invoke"},
# 404: {"description": "Session not found"},
# },
# deprecated=True,
# )
# async def invoke_session(
# queue_id: str = Query(description="The id of the queue to associate the session with"),
# session_id: str = Path(description="The id of the session to invoke"),
# all: bool = Query(default=False, description="Whether or not to invoke all remaining invocations"),
# ) -> Response:
# """Invokes a session"""
# session = ApiDependencies.invoker.services.graph_execution_manager.get(session_id)
# if session is None:
# raise HTTPException(status_code=404)
# if session.is_complete():
# raise HTTPException(status_code=400)
# ApiDependencies.invoker.invoke(queue_id, session, invoke_all=all)
# return Response(status_code=202)
# @session_router.delete(
# "/{session_id}/invoke",
# operation_id="cancel_session_invoke",
# responses={202: {"description": "The invocation is canceled"}},
# deprecated=True,
# )
# async def cancel_session_invoke(
# session_id: str = Path(description="The id of the session to cancel"),
# ) -> Response:
# """Invokes a session"""
# ApiDependencies.invoker.cancel(session_id)
# return Response(status_code=202)


@@ -23,10 +23,11 @@ class DynamicPromptsResponse(BaseModel):
)
async def parse_dynamicprompts(
prompt: str = Body(description="The prompt to parse with dynamicprompts"),
max_prompts: int = Body(default=1000, description="The max number of prompts to generate"),
max_prompts: int = Body(ge=1, le=10000, default=1000, description="The max number of prompts to generate"),
combinatorial: bool = Body(default=True, description="Whether to use the combinatorial generator"),
) -> DynamicPromptsResponse:
"""Creates a batch process"""
max_prompts = min(max_prompts, 10000)
generator: Union[RandomPromptGenerator, CombinatorialPromptGenerator]
try:
error: Optional[str] = None


@@ -12,14 +12,25 @@ class SocketIO:
__sio: AsyncServer
__app: ASGIApp
__sub_queue: str = "subscribe_queue"
__unsub_queue: str = "unsubscribe_queue"
__sub_bulk_download: str = "subscribe_bulk_download"
__unsub_bulk_download: str = "unsubscribe_bulk_download"
def __init__(self, app: FastAPI):
self.__sio = AsyncServer(async_mode="asgi", cors_allowed_origins="*")
self.__app = ASGIApp(socketio_server=self.__sio, socketio_path="socket.io")
self.__app = ASGIApp(socketio_server=self.__sio, socketio_path="/ws/socket.io")
app.mount("/ws", self.__app)
self.__sio.on("subscribe_queue", handler=self._handle_sub_queue)
self.__sio.on("unsubscribe_queue", handler=self._handle_unsub_queue)
self.__sio.on(self.__sub_queue, handler=self._handle_sub_queue)
self.__sio.on(self.__unsub_queue, handler=self._handle_unsub_queue)
local_handler.register(event_name=EventServiceBase.queue_event, _func=self._handle_queue_event)
local_handler.register(event_name=EventServiceBase.model_event, _func=self._handle_model_event)
self.__sio.on(self.__sub_bulk_download, handler=self._handle_sub_bulk_download)
self.__sio.on(self.__unsub_bulk_download, handler=self._handle_unsub_bulk_download)
local_handler.register(event_name=EventServiceBase.bulk_download_event, _func=self._handle_bulk_download_event)
async def _handle_queue_event(self, event: Event):
await self.__sio.emit(
@@ -28,10 +39,28 @@ class SocketIO:
room=event[1]["data"]["queue_id"],
)
async def _handle_sub_queue(self, sid, data, *args, **kwargs):
async def _handle_sub_queue(self, sid, data, *args, **kwargs) -> None:
if "queue_id" in data:
await self.__sio.enter_room(sid, data["queue_id"])
async def _handle_unsub_queue(self, sid, data, *args, **kwargs):
async def _handle_unsub_queue(self, sid, data, *args, **kwargs) -> None:
if "queue_id" in data:
await self.__sio.leave_room(sid, data["queue_id"])
async def _handle_model_event(self, event: Event) -> None:
await self.__sio.emit(event=event[1]["event"], data=event[1]["data"])
async def _handle_bulk_download_event(self, event: Event):
await self.__sio.emit(
event=event[1]["event"],
data=event[1]["data"],
room=event[1]["data"]["bulk_download_id"],
)
async def _handle_sub_bulk_download(self, sid, data, *args, **kwargs):
if "bulk_download_id" in data:
await self.__sio.enter_room(sid, data["bulk_download_id"])
async def _handle_unsub_bulk_download(self, sid, data, *args, **kwargs):
if "bulk_download_id" in data:
await self.__sio.leave_room(sid, data["bulk_download_id"])
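# Client-side subscription sketch (hypothetical, using the python-socketio
# AsyncClient; note the non-default socketio_path configured above):
#
#   import socketio
#
#   sio = socketio.AsyncClient()
#   await sio.connect("http://localhost:9090", socketio_path="/ws/socket.io")
#   await sio.emit("subscribe_bulk_download", {"bulk_download_id": "my-download"})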


@@ -3,8 +3,10 @@
# values from the command line or config file.
import sys
from invokeai.app.api.no_cache_staticfiles import NoCacheStaticFiles
from invokeai.version.invokeai_version import __version__
from .invocations.fields import InputFieldJSONSchemaExtra, OutputFieldJSONSchemaExtra
from .services.config import InvokeAIAppConfig
app_config = InvokeAIAppConfig.get_config()
@@ -27,8 +29,7 @@ if True: # hack to make flake8 happy with imports coming after setting up the c
from fastapi.middleware.gzip import GZipMiddleware
from fastapi.openapi.docs import get_redoc_html, get_swagger_ui_html
from fastapi.openapi.utils import get_openapi
from fastapi.responses import FileResponse, HTMLResponse
from fastapi.staticfiles import StaticFiles
from fastapi.responses import HTMLResponse
from fastapi_events.handlers.local import local_handler
from fastapi_events.middleware import EventHandlerASGIMiddleware
from pydantic.json_schema import models_json_schema
@@ -45,19 +46,16 @@ if True: # hack to make flake8 happy with imports coming after setting up the c
app_info,
board_images,
boards,
download_queue,
images,
model_records,
models,
model_manager,
session_queue,
sessions,
utilities,
workflows,
)
from .api.sockets import SocketIO
from .invocations.baseinvocation import (
BaseInvocation,
InputFieldJSONSchemaExtra,
OutputFieldJSONSchemaExtra,
UIConfigBase,
)
@@ -75,7 +73,7 @@ mimetypes.add_type("text/css", ".css")
# Create the app
# TODO: create this all in a method so configuration/etc. can be passed in?
app = FastAPI(title="Invoke AI", docs_url=None, redoc_url=None, separate_input_output_schemas=False)
app = FastAPI(title="Invoke - Community Edition", docs_url=None, redoc_url=None, separate_input_output_schemas=False)
# Add event handler
event_handler_id: int = id(app)
@@ -111,11 +109,9 @@ async def shutdown_event() -> None:
# Include all routers
app.include_router(sessions.session_router, prefix="/api")
app.include_router(utilities.utilities_router, prefix="/api")
app.include_router(models.models_router, prefix="/api")
app.include_router(model_records.model_records_router, prefix="/api")
app.include_router(model_manager.model_manager_router, prefix="/api")
app.include_router(download_queue.download_queue_router, prefix="/api")
app.include_router(images.images_router, prefix="/api")
app.include_router(boards.boards_router, prefix="/api")
app.include_router(board_images.board_images_router, prefix="/api")
@@ -152,6 +148,8 @@ def custom_openapi() -> dict[str, Any]:
# TODO: note that we assume the schema_key here is the TYPE.__name__
# This could break in some cases, figure out a better way to do it
output_type_titles[schema_key] = output_schema["title"]
openapi_schema["components"]["schemas"][schema_key] = output_schema
openapi_schema["components"]["schemas"][schema_key]["class"] = "output"
# Add Node Editor UI helper schemas
ui_config_schemas = models_json_schema(
@@ -174,23 +172,24 @@ def custom_openapi() -> dict[str, Any]:
outputs_ref = {"$ref": f"#/components/schemas/{output_type_title}"}
invoker_schema["output"] = outputs_ref
invoker_schema["class"] = "invocation"
openapi_schema["components"]["schemas"][f"{output_type_title}"]["class"] = "output"
from invokeai.backend.model_management.models import get_model_config_enums
# This code no longer seems to be necessary?
# Leave it here just in case
#
# from invokeai.backend.model_manager import get_model_config_formats
# formats = get_model_config_formats()
# for model_config_name, enum_set in formats.items():
for model_config_format_enum in set(get_model_config_enums()):
name = model_config_format_enum.__qualname__
# if model_config_name in openapi_schema["components"]["schemas"]:
# # print(f"Config with name {name} already defined")
# continue
if name in openapi_schema["components"]["schemas"]:
# print(f"Config with name {name} already defined")
continue
openapi_schema["components"]["schemas"][name] = {
"title": name,
"description": "An enumeration.",
"type": "string",
"enum": [v.value for v in model_config_format_enum],
}
# openapi_schema["components"]["schemas"][model_config_name] = {
# "title": model_config_name,
# "description": "An enumeration.",
# "type": "string",
# "enum": [v.value for v in enum_set],
# }
app.openapi_schema = openapi_schema
return app.openapi_schema
@@ -203,8 +202,8 @@ app.openapi = custom_openapi # type: ignore [method-assign] # this is a valid a
def overridden_swagger() -> HTMLResponse:
return get_swagger_ui_html(
openapi_url=app.openapi_url, # type: ignore [arg-type] # this is always a string
title=app.title,
swagger_favicon_url="/static/docs/favicon.ico",
title=f"{app.title} - Swagger UI",
swagger_favicon_url="static/docs/invoke-favicon-docs.svg",
)
@@ -212,25 +211,20 @@ def overridden_redoc() -> HTMLResponse:
def overridden_redoc() -> HTMLResponse:
return get_redoc_html(
openapi_url=app.openapi_url, # type: ignore [arg-type] # this is always a string
title=app.title,
redoc_favicon_url="/static/docs/favicon.ico",
title=f"{app.title} - Redoc",
redoc_favicon_url="static/docs/invoke-favicon-docs.svg",
)
web_root_path = Path(list(web_dir.__path__)[0])
# Cannot add headers to StaticFiles, so we must serve index.html with a custom route
# Add cache-control: no-store header to prevent caching of index.html, which leads to broken UIs at release
@app.get("/", include_in_schema=False, name="ui_root")
def get_index() -> FileResponse:
return FileResponse(Path(web_root_path, "dist/index.html"), headers={"Cache-Control": "no-store"})
# # Must mount *after* the other routes else it borks em
app.mount("/static", StaticFiles(directory=Path(web_root_path, "static/")), name="static") # docs favicon is in here
app.mount("/assets", StaticFiles(directory=Path(web_root_path, "dist/assets/")), name="assets")
app.mount("/locales", StaticFiles(directory=Path(web_root_path, "dist/locales/")), name="locales")
try:
app.mount("/", NoCacheStaticFiles(directory=Path(web_root_path, "dist"), html=True), name="ui")
except RuntimeError:
logger.warn(f"No UI found at {web_root_path}/dist, skipping UI mount")
app.mount(
"/static", NoCacheStaticFiles(directory=Path(web_root_path, "static/")), name="static"
) # docs favicon is in here
def invoke_api() -> None:
@@ -271,6 +265,8 @@ def invoke_api() -> None:
port=port,
loop="asyncio",
log_level=app_config.log_level,
ssl_certfile=app_config.ssl_certfile,
ssl_keyfile=app_config.ssl_keyfile,
)
server = uvicorn.Server(config)


@ -4,20 +4,37 @@ from __future__ import annotations
import inspect
import re
import warnings
from abc import ABC, abstractmethod
from enum import Enum
from inspect import signature
from types import UnionType
from typing import TYPE_CHECKING, Any, Callable, ClassVar, Iterable, Literal, Optional, Type, TypeVar, Union, cast
from typing import (
TYPE_CHECKING,
Annotated,
Any,
Callable,
ClassVar,
Iterable,
Literal,
Optional,
Type,
TypeVar,
Union,
cast,
)
import semver
from pydantic import BaseModel, ConfigDict, Field, RootModel, TypeAdapter, create_model
from pydantic.fields import FieldInfo, _Unset
from pydantic import BaseModel, ConfigDict, Field, TypeAdapter, create_model
from pydantic.fields import FieldInfo
from pydantic_core import PydanticUndefined
from typing_extensions import TypeAliasType
from invokeai.app.invocations.fields import (
FieldKind,
Input,
)
from invokeai.app.services.config.config_default import InvokeAIAppConfig
from invokeai.app.services.workflow_records.workflow_records_common import WorkflowWithoutID
from invokeai.app.shared.fields import FieldDescriptions
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.metaenum import MetaEnum
from invokeai.app.util.misc import uuid_string
from invokeai.backend.util.logging import InvokeAILogger
@ -38,391 +55,17 @@ class InvalidFieldError(TypeError):
pass
class Input(str, Enum, metaclass=MetaEnum):
class Classification(str, Enum, metaclass=MetaEnum):
"""
The type of input a field accepts.
- `Input.Direct`: The field must have its value provided directly, when the invocation and field \
are instantiated.
- `Input.Connection`: The field must have its value provided by a connection.
- `Input.Any`: The field may have its value provided either directly or by a connection.
The classification of an Invocation.
- `Stable`: The invocation, including its inputs/outputs and internal logic, is stable. You may build workflows with it, having confidence that they will not break because of a change in this invocation.
- `Beta`: The invocation is not yet stable, but is planned to be stable in the future. Workflows built around this invocation may break, but we are committed to supporting this invocation long-term.
- `Prototype`: The invocation is not yet stable and may be removed from the application at any time. Workflows built around this invocation may break, and we are *not* committed to supporting this invocation.
"""
Connection = "connection"
Direct = "direct"
Any = "any"
class FieldKind(str, Enum, metaclass=MetaEnum):
"""
The kind of field.
- `Input`: An input field on a node.
- `Output`: An output field on a node.
- `Internal`: A field which is treated as an input, but cannot be used in node definitions. Metadata is
one example. It is provided to nodes via the WithMetadata class, and we want to reserve the field name
"metadata" for this on all nodes. `FieldKind` is used to short-circuit the field name validation logic,
allowing "metadata" for that field.
- `NodeAttribute`: The field is a node attribute. These are fields which are not inputs or outputs,
but which are used to store information about the node. For example, the `id` and `type` fields are node
attributes.
The presence of this in `json_schema_extra["field_kind"]` is used when initializing node schemas on app
startup, and when generating the OpenAPI schema for the workflow editor.
"""
Input = "input"
Output = "output"
Internal = "internal"
NodeAttribute = "node_attribute"
class UIType(str, Enum, metaclass=MetaEnum):
"""
Type hints for the UI for situations in which the field type is not enough to infer the correct UI type.
- Model Fields
The most common node-author-facing use will be for model fields. Internally, there is no difference
between SD-1, SD-2 and SDXL model fields - they all use the class `MainModelField`. To ensure the
base-model-specific UI is rendered, use e.g. `ui_type=UIType.SDXLMainModelField` to indicate that
the field is an SDXL main model field.
- Any Field
We cannot infer the usage of `typing.Any` via schema parsing, so you *must* use `ui_type=UIType.Any` to
indicate that the field accepts any type. Use with caution. This cannot be used on outputs.
- Scheduler Field
Special handling in the UI is needed for this field, which otherwise would be parsed as a plain enum field.
- Internal Fields
Similar to the Any Field, the `collect` and `iterate` nodes make use of `typing.Any`. To facilitate
handling these types in the client, we use `UIType._Collection` and `UIType._CollectionItem`. These
should not be used by node authors.
- DEPRECATED Fields
These types are deprecated and should not be used by node authors. A warning will be logged if one is
used, and the type will be ignored. They are included here for backwards compatibility.
"""
# region Model Field Types
SDXLMainModel = "SDXLMainModelField"
SDXLRefinerModel = "SDXLRefinerModelField"
ONNXModel = "ONNXModelField"
VaeModel = "VAEModelField"
LoRAModel = "LoRAModelField"
ControlNetModel = "ControlNetModelField"
IPAdapterModel = "IPAdapterModelField"
# endregion
# region Misc Field Types
Scheduler = "SchedulerField"
Any = "AnyField"
# endregion
# region Internal Field Types
_Collection = "CollectionField"
_CollectionItem = "CollectionItemField"
# endregion
# region DEPRECATED
Boolean = "DEPRECATED_Boolean"
Color = "DEPRECATED_Color"
Conditioning = "DEPRECATED_Conditioning"
Control = "DEPRECATED_Control"
Float = "DEPRECATED_Float"
Image = "DEPRECATED_Image"
Integer = "DEPRECATED_Integer"
Latents = "DEPRECATED_Latents"
String = "DEPRECATED_String"
BooleanCollection = "DEPRECATED_BooleanCollection"
ColorCollection = "DEPRECATED_ColorCollection"
ConditioningCollection = "DEPRECATED_ConditioningCollection"
ControlCollection = "DEPRECATED_ControlCollection"
FloatCollection = "DEPRECATED_FloatCollection"
ImageCollection = "DEPRECATED_ImageCollection"
IntegerCollection = "DEPRECATED_IntegerCollection"
LatentsCollection = "DEPRECATED_LatentsCollection"
StringCollection = "DEPRECATED_StringCollection"
BooleanPolymorphic = "DEPRECATED_BooleanPolymorphic"
ColorPolymorphic = "DEPRECATED_ColorPolymorphic"
ConditioningPolymorphic = "DEPRECATED_ConditioningPolymorphic"
ControlPolymorphic = "DEPRECATED_ControlPolymorphic"
FloatPolymorphic = "DEPRECATED_FloatPolymorphic"
ImagePolymorphic = "DEPRECATED_ImagePolymorphic"
IntegerPolymorphic = "DEPRECATED_IntegerPolymorphic"
LatentsPolymorphic = "DEPRECATED_LatentsPolymorphic"
StringPolymorphic = "DEPRECATED_StringPolymorphic"
MainModel = "DEPRECATED_MainModel"
UNet = "DEPRECATED_UNet"
Vae = "DEPRECATED_Vae"
CLIP = "DEPRECATED_CLIP"
Collection = "DEPRECATED_Collection"
CollectionItem = "DEPRECATED_CollectionItem"
Enum = "DEPRECATED_Enum"
WorkflowField = "DEPRECATED_WorkflowField"
IsIntermediate = "DEPRECATED_IsIntermediate"
BoardField = "DEPRECATED_BoardField"
MetadataItem = "DEPRECATED_MetadataItem"
MetadataItemCollection = "DEPRECATED_MetadataItemCollection"
MetadataItemPolymorphic = "DEPRECATED_MetadataItemPolymorphic"
MetadataDict = "DEPRECATED_MetadataDict"
# endregion
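To ground the docstring above, a minimal sketch of a field declaration carrying a UI type hint. This is a fragment from a hypothetical node; the `MainModelField` type is assumed to come from `invokeai.app.invocations.model` and is not part of this diff:
# Hypothetical SDXL model field: the Python type alone cannot tell the UI which
# model picker to render, so ui_type supplies the hint.
model: MainModelField = InputField(
    description="The SDXL main model",
    input=Input.Direct,
    ui_type=UIType.SDXLMainModel,
)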
class UIComponent(str, Enum, metaclass=MetaEnum):
"""
The type of UI component to use for a field, used to override the default components, which are
inferred from the field type.
"""
None_ = "none"
Textarea = "textarea"
Slider = "slider"
class InputFieldJSONSchemaExtra(BaseModel):
"""
Extra attributes to be added to input fields and their OpenAPI schema. Used during graph execution,
and by the workflow editor during schema parsing and UI rendering.
"""
input: Input
orig_required: bool
field_kind: FieldKind
default: Optional[Any] = None
orig_default: Optional[Any] = None
ui_hidden: bool = False
ui_type: Optional[UIType] = None
ui_component: Optional[UIComponent] = None
ui_order: Optional[int] = None
ui_choice_labels: Optional[dict[str, str]] = None
model_config = ConfigDict(
validate_assignment=True,
json_schema_serialization_defaults_required=True,
)
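For orientation, a small sketch of the serialized form this model takes when `InputField()` dumps it into a field's `json_schema_extra`. The values are illustrative, and the dump here uses `mode="json"` so the str-enums render as plain strings:
extra = InputFieldJSONSchemaExtra(
    input=Input.Connection,
    orig_required=False,
    field_kind=FieldKind.Input,
)
print(extra.model_dump(mode="json", exclude_none=True))
# {'input': 'connection', 'orig_required': False, 'field_kind': 'input', 'ui_hidden': False}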
class OutputFieldJSONSchemaExtra(BaseModel):
"""
Extra attributes to be added to output fields and their OpenAPI schema. Used by the workflow editor
during schema parsing and UI rendering.
"""
field_kind: FieldKind
ui_hidden: bool
ui_type: Optional[UIType]
ui_order: Optional[int]
model_config = ConfigDict(
validate_assignment=True,
json_schema_serialization_defaults_required=True,
)
def InputField(
# copied from pydantic's Field
# TODO: Can we support default_factory?
default: Any = _Unset,
default_factory: Callable[[], Any] | None = _Unset,
title: str | None = _Unset,
description: str | None = _Unset,
pattern: str | None = _Unset,
strict: bool | None = _Unset,
gt: float | None = _Unset,
ge: float | None = _Unset,
lt: float | None = _Unset,
le: float | None = _Unset,
multiple_of: float | None = _Unset,
allow_inf_nan: bool | None = _Unset,
max_digits: int | None = _Unset,
decimal_places: int | None = _Unset,
min_length: int | None = _Unset,
max_length: int | None = _Unset,
# custom
input: Input = Input.Any,
ui_type: Optional[UIType] = None,
ui_component: Optional[UIComponent] = None,
ui_hidden: bool = False,
ui_order: Optional[int] = None,
ui_choice_labels: Optional[dict[str, str]] = None,
) -> Any:
"""
Creates an input field for an invocation.
This is a wrapper for Pydantic's [Field](https://docs.pydantic.dev/latest/api/fields/#pydantic.fields.Field) \
that adds a few extra parameters to support graph execution and the node editor UI.
:param Input input: [Input.Any] The kind of input this field requires. \
`Input.Direct` means a value must be provided on instantiation. \
`Input.Connection` means the value must be provided by a connection. \
`Input.Any` means either will do.
:param UIType ui_type: [None] Optionally provides an extra type hint for the UI. \
In some situations, the field's type is not enough to infer the correct UI type. \
For example, model selection fields should render a dropdown UI component to select a model. \
Internally, there is no difference between SD-1, SD-2 and SDXL model fields, they all use \
`MainModelField`. So to ensure the base-model-specific UI is rendered, you can use \
`UIType.SDXLMainModelField` to indicate that the field is an SDXL main model field.
:param UIComponent ui_component: [None] Optionally specifies a specific component to use in the UI. \
The UI will always render a suitable component, but sometimes you want something different than the default. \
For example, a `string` field will default to a single-line input, but you may want a multi-line textarea instead. \
For this case, you could provide `UIComponent.Textarea`.
:param bool ui_hidden: [False] Specifies whether or not this field should be hidden in the UI.
:param int ui_order: [None] Specifies the order in which this field should be rendered in the UI.
:param dict[str, str] ui_choice_labels: [None] Specifies the labels to use for the choices in an enum field.
"""
json_schema_extra_ = InputFieldJSONSchemaExtra(
input=input,
ui_type=ui_type,
ui_component=ui_component,
ui_hidden=ui_hidden,
ui_order=ui_order,
ui_choice_labels=ui_choice_labels,
field_kind=FieldKind.Input,
orig_required=True,
)
"""
There is a conflict between the typing of invocation definitions and the typing of an invocation's
`invoke()` function.
On instantiation of a node, the invocation definition is used to create the python class. At this time,
any number of fields may be optional, because they may be provided by connections.
On calling of `invoke()`, however, those fields may be required.
For example, consider a ResizeImageInvocation with an `image: ImageField` field.
`image` is required during the call to `invoke()`, but when the python class is instantiated,
the field may not be present. This is fine, because that image field will be provided by a
connection from an ancestor node, which outputs an image.
This means we want to type the `image` field as optional for the node class definition, but required
for the `invoke()` function.
If we use `typing.Optional` in the node class definition, the field will be typed as optional in the
`invoke()` method, and we'll have to do a lot of runtime checks to ensure the field is present - or
any static type analysis tools will complain.
To get around this, in node class definitions, we type all fields correctly for the `invoke()` function,
but secretly make them optional in `InputField()`. We also store the original required bool and/or default
value. When we call `invoke()`, we use this stored information to do an additional check on the class.
"""
if default_factory is not _Unset and default_factory is not None:
default = default_factory()
logger.warn('"default_factory" is not supported, calling it now to set "default"')
# These are the args we may wish to pass to the pydantic `Field()` function
field_args = {
"default": default,
"title": title,
"description": description,
"pattern": pattern,
"strict": strict,
"gt": gt,
"ge": ge,
"lt": lt,
"le": le,
"multiple_of": multiple_of,
"allow_inf_nan": allow_inf_nan,
"max_digits": max_digits,
"decimal_places": decimal_places,
"min_length": min_length,
"max_length": max_length,
}
# We only want to pass the args that were provided, otherwise the `Field()` function won't work as expected
provided_args = {k: v for (k, v) in field_args.items() if v is not PydanticUndefined}
# Because we are manually making fields optional, we need to store the original required bool for reference later
json_schema_extra_.orig_required = default is PydanticUndefined
# Make Input.Any and Input.Connection fields optional, providing None as a default if the field doesn't already have one
if input is Input.Any or input is Input.Connection:
default_ = None if default is PydanticUndefined else default
provided_args.update({"default": default_})
if default is not PydanticUndefined:
# Before invoking, we'll check for the original default value and set it on the field if the field has no value
json_schema_extra_.default = default
json_schema_extra_.orig_default = default
elif default is not PydanticUndefined:
default_ = default
provided_args.update({"default": default_})
json_schema_extra_.orig_default = default_
return Field(
**provided_args,
json_schema_extra=json_schema_extra_.model_dump(exclude_none=True),
)
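Putting the pieces together, a hedged sketch of a complete (but hypothetical) node definition using these parameters; the context APIs shown are the ones introduced elsewhere in this changeset, and the import locations assume the post-refactor `fields` module:
from invokeai.app.invocations.baseinvocation import BaseInvocation, invocation
from invokeai.app.invocations.fields import ImageField, Input, InputField, UIComponent
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext

@invocation("example_caption", title="Example Caption", tags=["example"], category="image", version="1.0.0")
class ExampleCaptionInvocation(BaseInvocation):
    """Hypothetical node illustrating the InputField() parameters documented above."""

    # Typed as required for invoke(), but made optional at instantiation because
    # the value arrives over a graph connection from an upstream node.
    image: ImageField = InputField(description="The image to caption", input=Input.Connection)
    # A plain string, rendered as a multi-line textarea instead of the default single-line input.
    caption: str = InputField(default="", description="Caption text", ui_component=UIComponent.Textarea)

    def invoke(self, context: InvocationContext) -> ImageOutput:
        image = context.images.get_pil(self.image.image_name)  # guaranteed present by invoke() time
        image_dto = context.images.save(image=image)
        return ImageOutput.build(image_dto)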
def OutputField(
# copied from pydantic's Field
default: Any = _Unset,
title: str | None = _Unset,
description: str | None = _Unset,
pattern: str | None = _Unset,
strict: bool | None = _Unset,
gt: float | None = _Unset,
ge: float | None = _Unset,
lt: float | None = _Unset,
le: float | None = _Unset,
multiple_of: float | None = _Unset,
allow_inf_nan: bool | None = _Unset,
max_digits: int | None = _Unset,
decimal_places: int | None = _Unset,
min_length: int | None = _Unset,
max_length: int | None = _Unset,
# custom
ui_type: Optional[UIType] = None,
ui_hidden: bool = False,
ui_order: Optional[int] = None,
) -> Any:
"""
Creates an output field for an invocation output.
This is a wrapper for Pydantic's [Field](https://docs.pydantic.dev/1.10/usage/schema/#field-customization) \
that adds a few extra parameters to support graph execution and the node editor UI.
:param UIType ui_type: [None] Optionally provides an extra type hint for the UI. \
In some situations, the field's type is not enough to infer the correct UI type. \
For example, model selection fields should render a dropdown UI component to select a model. \
Internally, there is no difference between SD-1, SD-2 and SDXL model fields, they all use \
`MainModelField`. So to ensure the base-model-specific UI is rendered, you can use \
`UIType.SDXLMainModelField` to indicate that the field is an SDXL main model field.
:param bool ui_hidden: [False] Specifies whether or not this field should be hidden in the UI.
:param int ui_order: [None] Specifies the order in which this field should be rendered in the UI.
"""
return Field(
default=default,
title=title,
description=description,
pattern=pattern,
strict=strict,
gt=gt,
ge=ge,
lt=lt,
le=le,
multiple_of=multiple_of,
allow_inf_nan=allow_inf_nan,
max_digits=max_digits,
decimal_places=decimal_places,
min_length=min_length,
max_length=max_length,
json_schema_extra=OutputFieldJSONSchemaExtra(
ui_type=ui_type,
ui_hidden=ui_hidden,
ui_order=ui_order,
field_kind=FieldKind.Output,
).model_dump(exclude_none=True),
)
Stable = "stable"
Beta = "beta"
Prototype = "prototype"
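A brief sketch of how a node author opts into a non-stable classification via the `classification` parameter that the `@invocation` decorator gains later in this diff; the node itself is hypothetical:
@invocation(
    "example_experimental_op",
    title="Example Experimental Op",
    tags=["example"],
    category="util",
    version="1.0.0",
    classification=Classification.Prototype,  # surfaced to the UI via UIConfig and schema["classification"]
)
class ExampleExperimentalInvocation(BaseInvocation):
    """Hypothetical prototype node; its API may change or disappear."""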
class UIConfigBase(BaseModel):
@ -438,6 +81,7 @@ class UIConfigBase(BaseModel):
description='The node\'s version. Should be a valid semver string e.g. "1.0.0" or "3.8.13".',
)
node_pack: Optional[str] = Field(default=None, description="The name of the node pack that provides this node, if it is a custom node")
classification: Classification = Field(default=Classification.Stable, description="The node's classification")
model_config = ConfigDict(
validate_assignment=True,
@ -445,33 +89,6 @@ class UIConfigBase(BaseModel):
)
class InvocationContext:
"""Initialized and provided to invocations on execution."""
services: InvocationServices
graph_execution_state_id: str
queue_id: str
queue_item_id: int
queue_batch_id: str
workflow: Optional[WorkflowWithoutID]
def __init__(
self,
services: InvocationServices,
queue_id: str,
queue_item_id: int,
queue_batch_id: str,
graph_execution_state_id: str,
workflow: Optional[WorkflowWithoutID],
):
self.services = services
self.graph_execution_state_id = graph_execution_state_id
self.queue_id = queue_id
self.queue_item_id = queue_item_id
self.queue_batch_id = queue_batch_id
self.workflow = workflow
class BaseInvocationOutput(BaseModel):
"""
Base class for all invocation outputs.
@ -480,6 +97,7 @@ class BaseInvocationOutput(BaseModel):
"""
_output_classes: ClassVar[set[BaseInvocationOutput]] = set()
_typeadapter: ClassVar[Optional[TypeAdapter[Any]]] = None
@classmethod
def register_output(cls, output: BaseInvocationOutput) -> None:
@ -492,10 +110,14 @@ class BaseInvocationOutput(BaseModel):
return cls._output_classes
@classmethod
def get_outputs_union(cls) -> UnionType:
"""Gets a union of all invocation outputs."""
outputs_union = Union[tuple(cls._output_classes)] # type: ignore [valid-type]
return outputs_union # type: ignore [return-value]
def get_typeadapter(cls) -> TypeAdapter[Any]:
"""Gets a pydantic TypeAdapter for the union of all invocation output types."""
if not cls._typeadapter:
InvocationOutputsUnion = TypeAliasType(
"InvocationOutputsUnion", Annotated[Union[tuple(cls._output_classes)], Field(discriminator="type")]
)
cls._typeadapter = TypeAdapter(InvocationOutputsUnion)
return cls._typeadapter
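A sketch of what this adapter provides: pydantic dispatches on the "type" discriminator to deserialize a plain dict into the right concrete output class. The payload below is illustrative, assuming `ImageOutput` registers the type string "image_output":
adapter = BaseInvocationOutput.get_typeadapter()
output = adapter.validate_python({
    "type": "image_output",  # discriminator selects the concrete class
    "image": {"image_name": "example.png"},
    "width": 512,
    "height": 512,
})
# BaseInvocation.get_typeadapter() offers the same for deserializing node dicts.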
@classmethod
def get_output_types(cls) -> Iterable[str]:
@ -544,6 +166,7 @@ class BaseInvocation(ABC, BaseModel):
"""
_invocation_classes: ClassVar[set[BaseInvocation]] = set()
_typeadapter: ClassVar[Optional[TypeAdapter[Any]]] = None
@classmethod
def get_type(cls) -> str:
@ -556,10 +179,14 @@ class BaseInvocation(ABC, BaseModel):
cls._invocation_classes.add(invocation)
@classmethod
def get_invocations_union(cls) -> UnionType:
"""Gets a union of all invocation types."""
invocations_union = Union[tuple(cls._invocation_classes)] # type: ignore [valid-type]
return invocations_union # type: ignore [return-value]
def get_typeadapter(cls) -> TypeAdapter[Any]:
"""Gets a pydantic TypeAdapter for the union of all invocation types."""
if not cls._typeadapter:
InvocationsUnion = TypeAliasType(
"InvocationsUnion", Annotated[Union[tuple(cls._invocation_classes)], Field(discriminator="type")]
)
cls._typeadapter = TypeAdapter(InvocationsUnion)
return cls._typeadapter
@classmethod
def get_invocations(cls) -> Iterable[BaseInvocation]:
@ -606,6 +233,7 @@ class BaseInvocation(ABC, BaseModel):
schema["category"] = uiconfig.category
if uiconfig.node_pack is not None:
schema["node_pack"] = uiconfig.node_pack
schema["classification"] = uiconfig.classification
schema["version"] = uiconfig.version
if "required" not in schema or not isinstance(schema["required"], list):
schema["required"] = []
@ -616,7 +244,7 @@ class BaseInvocation(ABC, BaseModel):
"""Invoke with provided context and return outputs."""
pass
def invoke_internal(self, context: InvocationContext) -> BaseInvocationOutput:
def invoke_internal(self, context: InvocationContext, services: "InvocationServices") -> BaseInvocationOutput:
"""
Internal invoke method; calls `invoke()` after some prep.
Handles optional fields that are required to call `invoke()`, and the invocation cache.
@ -641,23 +269,23 @@ class BaseInvocation(ABC, BaseModel):
raise MissingInputException(self.model_fields["type"].default, field_name)
# skip node cache codepath if it's disabled
if context.services.configuration.node_cache_size == 0:
if services.configuration.node_cache_size == 0:
return self.invoke(context)
output: BaseInvocationOutput
if self.use_cache:
key = context.services.invocation_cache.create_key(self)
cached_value = context.services.invocation_cache.get(key)
key = services.invocation_cache.create_key(self)
cached_value = services.invocation_cache.get(key)
if cached_value is None:
context.services.logger.debug(f'Invocation cache miss for type "{self.get_type()}": {self.id}')
services.logger.debug(f'Invocation cache miss for type "{self.get_type()}": {self.id}')
output = self.invoke(context)
context.services.invocation_cache.save(key, output)
services.invocation_cache.save(key, output)
return output
else:
context.services.logger.debug(f'Invocation cache hit for type "{self.get_type()}": {self.id}')
services.logger.debug(f'Invocation cache hit for type "{self.get_type()}": {self.id}')
return cached_value
else:
context.services.logger.debug(f'Skipping invocation cache for "{self.get_type()}": {self.id}')
services.logger.debug(f'Skipping invocation cache for "{self.get_type()}": {self.id}')
return self.invoke(context)
id: str = Field(
@ -698,9 +326,7 @@ RESERVED_NODE_ATTRIBUTE_FIELD_NAMES = {
"workflow",
}
RESERVED_INPUT_FIELD_NAMES = {
"metadata",
}
RESERVED_INPUT_FIELD_NAMES = {"metadata", "board"}
RESERVED_OUTPUT_FIELD_NAMES = {"type"}
@ -709,8 +335,10 @@ class _Model(BaseModel):
pass
# Get all pydantic model attrs, methods, etc
RESERVED_PYDANTIC_FIELD_NAMES = {m[0] for m in inspect.getmembers(_Model())}
with warnings.catch_warnings():
warnings.simplefilter("ignore", category=DeprecationWarning)
# Get all pydantic model attrs, methods, etc
RESERVED_PYDANTIC_FIELD_NAMES = {m[0] for m in inspect.getmembers(_Model())}
def validate_fields(model_fields: dict[str, FieldInfo], model_type: str) -> None:
@ -779,6 +407,7 @@ def invocation(
category: Optional[str] = None,
version: Optional[str] = None,
use_cache: Optional[bool] = True,
classification: Classification = Classification.Stable,
) -> Callable[[Type[TBaseInvocation]], Type[TBaseInvocation]]:
"""
Registers an invocation.
@ -789,6 +418,7 @@ def invocation(
:param Optional[str] category: Adds a category to the invocation. Used to group the invocations in the UI. Defaults to None.
:param Optional[str] version: Adds a version to the invocation. Must be a valid semver string. Defaults to None.
:param Optional[bool] use_cache: Whether or not to use the invocation cache. Defaults to True. The user may override this in the workflow editor.
:param Classification classification: The classification of the invocation. Defaults to Classification.Stable. Use Beta or Prototype if the invocation is unstable.
"""
def wrapper(cls: Type[TBaseInvocation]) -> Type[TBaseInvocation]:
@ -809,6 +439,7 @@ def invocation(
cls.UIConfig.title = title
cls.UIConfig.tags = tags
cls.UIConfig.category = category
cls.UIConfig.classification = classification
# Grab the node pack's name from the module name, if it's a custom node
is_custom_node = cls.__module__.rsplit(".", 1)[0] == "invokeai.app.invocations"
@ -905,37 +536,3 @@ def invocation_output(
return cls
return wrapper
class MetadataField(RootModel):
"""
Pydantic model for metadata with custom root of type dict[str, Any].
Metadata is stored without a strict schema.
"""
root: dict[str, Any] = Field(description="The metadata")
MetadataFieldValidator = TypeAdapter(MetadataField)
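A minimal usage sketch; the keys are illustrative, since metadata is deliberately schemaless:
metadata = MetadataFieldValidator.validate_python({"seed": 123, "positive_prompt": "a photo of a cat"})
assert metadata.root["seed"] == 123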
class WithMetadata(BaseModel):
metadata: Optional[MetadataField] = Field(
default=None,
description=FieldDescriptions.metadata,
json_schema_extra=InputFieldJSONSchemaExtra(
field_kind=FieldKind.Internal,
input=Input.Connection,
orig_required=False,
).model_dump(exclude_none=True),
)
class WithWorkflow:
workflow = None
def __init_subclass__(cls) -> None:
logger.warn(
f"{cls.__module__.split('.')[0]}.{cls.__name__}: WithWorkflow is deprecated. Use `context.workflow` to access the workflow."
)
super().__init_subclass__()
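Per the deprecation message, a one-line sketch of the replacement inside a node's `invoke()`:
workflow = context.workflow  # replaces the removed WithWorkflow mixin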


@ -5,9 +5,11 @@ import numpy as np
from pydantic import ValidationInfo, field_validator
from invokeai.app.invocations.primitives import IntegerCollectionOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.misc import SEED_MAX
from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation
from .baseinvocation import BaseInvocation, invocation
from .fields import InputField
@invocation(


@ -1,40 +1,39 @@
import re
from dataclasses import dataclass
from typing import List, Optional, Union
from typing import Iterator, List, Optional, Tuple, Union
import torch
from compel import Compel, ReturnedEmbeddingsType
from compel.prompt_parser import Blend, Conjunction, CrossAttentionControlSubstitute, FlattenedPrompt, Fragment
from transformers import CLIPTextModel, CLIPTokenizer
from invokeai.app.invocations.primitives import ConditioningField, ConditioningOutput
from invokeai.app.shared.fields import FieldDescriptions
from invokeai.app.invocations.fields import (
FieldDescriptions,
Input,
InputField,
OutputField,
UIComponent,
)
from invokeai.app.invocations.primitives import ConditioningOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.ti_utils import generate_ti_list
from invokeai.backend.lora import LoRAModelRaw
from invokeai.backend.model_patcher import ModelPatcher
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
BasicConditioningInfo,
ConditioningFieldData,
ExtraConditioningInfo,
SDXLConditioningInfo,
)
from invokeai.backend.util.devices import torch_dtype
from ...backend.model_management.lora import ModelPatcher
from ...backend.model_management.models import ModelNotFoundException, ModelType
from ...backend.util.devices import torch_dtype
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Input,
InputField,
InvocationContext,
OutputField,
UIComponent,
invocation,
invocation_output,
)
from .model import ClipField
@dataclass
class ConditioningFieldData:
conditionings: List[BasicConditioningInfo]
# unconditioned: Optional[torch.Tensor]
# unconditioned: Optional[torch.Tensor]
# class ConditioningAlgo(str, Enum):
@ -48,7 +47,7 @@ class ConditioningFieldData:
title="Prompt",
tags=["prompt", "compel"],
category="conditioning",
version="1.0.0",
version="1.0.1",
)
class CompelInvocation(BaseInvocation):
"""Parses a prompt into conditioning using the compel package."""
@ -66,49 +65,27 @@ class CompelInvocation(BaseInvocation):
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ConditioningOutput:
tokenizer_info = context.services.model_manager.get_model(
**self.clip.tokenizer.model_dump(),
context=context,
)
text_encoder_info = context.services.model_manager.get_model(
**self.clip.text_encoder.model_dump(),
context=context,
)
tokenizer_info = context.models.load(**self.clip.tokenizer.model_dump())
tokenizer_model = tokenizer_info.model
assert isinstance(tokenizer_model, CLIPTokenizer)
text_encoder_info = context.models.load(**self.clip.text_encoder.model_dump())
text_encoder_model = text_encoder_info.model
assert isinstance(text_encoder_model, CLIPTextModel)
def _lora_loader():
def _lora_loader() -> Iterator[Tuple[LoRAModelRaw, float]]:
for lora in self.clip.loras:
lora_info = context.services.model_manager.get_model(
**lora.model_dump(exclude={"weight"}), context=context
)
yield (lora_info.context.model, lora.weight)
lora_info = context.models.load(**lora.model_dump(exclude={"weight"}))
assert isinstance(lora_info.model, LoRAModelRaw)
yield (lora_info.model, lora.weight)
del lora_info
return
# loras = [(context.services.model_manager.get_model(**lora.dict(exclude={"weight"})).context.model, lora.weight) for lora in self.clip.loras]
# loras = [(context.models.get(**lora.dict(exclude={"weight"})).context.model, lora.weight) for lora in self.clip.loras]
ti_list = []
for trigger in re.findall(r"<[a-zA-Z0-9., _-]+>", self.prompt):
name = trigger[1:-1]
try:
ti_list.append(
(
name,
context.services.model_manager.get_model(
model_name=name,
base_model=self.clip.text_encoder.base_model,
model_type=ModelType.TextualInversion,
context=context,
).context.model,
)
)
except ModelNotFoundException:
# print(e)
# import traceback
# print(traceback.format_exc())
print(f'Warn: trigger: "{trigger}" not found')
ti_list = generate_ti_list(self.prompt, text_encoder_info.config.base, context)
with (
ModelPatcher.apply_ti(tokenizer_info.context.model, text_encoder_info.context.model, ti_list) as (
ModelPatcher.apply_ti(tokenizer_model, text_encoder_model, ti_list) as (
tokenizer,
ti_manager,
),
@ -116,8 +93,9 @@ class CompelInvocation(BaseInvocation):
# Apply the LoRA after text_encoder has been moved to its target device for faster patching.
ModelPatcher.apply_lora_text_encoder(text_encoder, _lora_loader()),
# Apply CLIP Skip after LoRA to prevent LoRA application from failing on skipped layers.
ModelPatcher.apply_clip_skip(text_encoder_info.context.model, self.clip.skipped_layers),
ModelPatcher.apply_clip_skip(text_encoder_model, self.clip.skipped_layers),
):
assert isinstance(text_encoder, CLIPTextModel)
compel = Compel(
tokenizer=tokenizer,
text_encoder=text_encoder,
@ -128,7 +106,7 @@ class CompelInvocation(BaseInvocation):
conjunction = Compel.parse_prompt_string(self.prompt)
if context.services.configuration.log_tokenization:
if context.config.get().log_tokenization:
log_tokenization_for_conjunction(conjunction, tokenizer)
c, options = compel.build_conditioning_tensor_for_conjunction(conjunction)
@ -149,17 +127,14 @@ class CompelInvocation(BaseInvocation):
]
)
conditioning_name = f"{context.graph_execution_state_id}_{self.id}_conditioning"
context.services.latents.save(conditioning_name, conditioning_data)
conditioning_name = context.conditioning.save(conditioning_data)
return ConditioningOutput(
conditioning=ConditioningField(
conditioning_name=conditioning_name,
),
)
return ConditioningOutput.build(conditioning_name)
class SDXLPromptInvocationBase:
"""Prompt processor for SDXL models."""
def run_clip_compel(
self,
context: InvocationContext,
@ -168,26 +143,25 @@ class SDXLPromptInvocationBase:
get_pooled: bool,
lora_prefix: str,
zero_on_empty: bool,
):
tokenizer_info = context.services.model_manager.get_model(
**clip_field.tokenizer.model_dump(),
context=context,
)
text_encoder_info = context.services.model_manager.get_model(
**clip_field.text_encoder.model_dump(),
context=context,
)
) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[ExtraConditioningInfo]]:
tokenizer_info = context.models.load(**clip_field.tokenizer.model_dump())
tokenizer_model = tokenizer_info.model
assert isinstance(tokenizer_model, CLIPTokenizer)
text_encoder_info = context.models.load(**clip_field.text_encoder.model_dump())
text_encoder_model = text_encoder_info.model
assert isinstance(text_encoder_model, CLIPTextModel)
# return zero on empty
if prompt == "" and zero_on_empty:
cpu_text_encoder = text_encoder_info.context.model
cpu_text_encoder = text_encoder_info.model
assert isinstance(cpu_text_encoder, torch.nn.Module)
c = torch.zeros(
(
1,
cpu_text_encoder.config.max_position_embeddings,
cpu_text_encoder.config.hidden_size,
),
dtype=text_encoder_info.context.cache.precision,
dtype=cpu_text_encoder.dtype,
)
if get_pooled:
c_pooled = torch.zeros(
@ -198,40 +172,21 @@ class SDXLPromptInvocationBase:
c_pooled = None
return c, c_pooled, None
def _lora_loader():
def _lora_loader() -> Iterator[Tuple[LoRAModelRaw, float]]:
for lora in clip_field.loras:
lora_info = context.services.model_manager.get_model(
**lora.model_dump(exclude={"weight"}), context=context
)
yield (lora_info.context.model, lora.weight)
lora_info = context.models.load(**lora.model_dump(exclude={"weight"}))
lora_model = lora_info.model
assert isinstance(lora_model, LoRAModelRaw)
yield (lora_model, lora.weight)
del lora_info
return
# loras = [(context.services.model_manager.get_model(**lora.dict(exclude={"weight"})).context.model, lora.weight) for lora in self.clip.loras]
# loras = [(context.models.get(**lora.dict(exclude={"weight"})).context.model, lora.weight) for lora in self.clip.loras]
ti_list = []
for trigger in re.findall(r"<[a-zA-Z0-9., _-]+>", prompt):
name = trigger[1:-1]
try:
ti_list.append(
(
name,
context.services.model_manager.get_model(
model_name=name,
base_model=clip_field.text_encoder.base_model,
model_type=ModelType.TextualInversion,
context=context,
).context.model,
)
)
except ModelNotFoundException:
# print(e)
# import traceback
# print(traceback.format_exc())
print(f'Warn: trigger: "{trigger}" not found')
ti_list = generate_ti_list(prompt, text_encoder_info.config.base, context)
with (
ModelPatcher.apply_ti(tokenizer_info.context.model, text_encoder_info.context.model, ti_list) as (
ModelPatcher.apply_ti(tokenizer_model, text_encoder_model, ti_list) as (
tokenizer,
ti_manager,
),
@ -239,8 +194,9 @@ class SDXLPromptInvocationBase:
# Apply the LoRA after text_encoder has been moved to its target device for faster patching.
ModelPatcher.apply_lora(text_encoder, _lora_loader(), lora_prefix),
# Apply CLIP Skip after LoRA to prevent LoRA application from failing on skipped layers.
ModelPatcher.apply_clip_skip(text_encoder_info.context.model, clip_field.skipped_layers),
ModelPatcher.apply_clip_skip(text_encoder_model, clip_field.skipped_layers),
):
assert isinstance(text_encoder, CLIPTextModel)
compel = Compel(
tokenizer=tokenizer,
text_encoder=text_encoder,
@ -253,7 +209,7 @@ class SDXLPromptInvocationBase:
conjunction = Compel.parse_prompt_string(prompt)
if context.services.configuration.log_tokenization:
if context.config.get().log_tokenization:
# TODO: better logging for and syntax
log_tokenization_for_conjunction(conjunction, tokenizer)
@ -286,7 +242,7 @@ class SDXLPromptInvocationBase:
title="SDXL Prompt",
tags=["sdxl", "compel", "prompt"],
category="conditioning",
version="1.0.0",
version="1.0.1",
)
class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
"""Parses a prompt into conditioning using the compel package."""
@ -357,6 +313,7 @@ class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
dim=1,
)
assert c2_pooled is not None
conditioning_data = ConditioningFieldData(
conditionings=[
SDXLConditioningInfo(
@ -368,14 +325,9 @@ class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
]
)
conditioning_name = f"{context.graph_execution_state_id}_{self.id}_conditioning"
context.services.latents.save(conditioning_name, conditioning_data)
conditioning_name = context.conditioning.save(conditioning_data)
return ConditioningOutput(
conditioning=ConditioningField(
conditioning_name=conditioning_name,
),
)
return ConditioningOutput.build(conditioning_name)
@invocation(
@ -383,7 +335,7 @@ class SDXLCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
title="SDXL Refiner Prompt",
tags=["sdxl", "compel", "prompt"],
category="conditioning",
version="1.0.0",
version="1.0.1",
)
class SDXLRefinerCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase):
"""Parses a prompt into conditioning using the compel package."""
@ -410,6 +362,7 @@ class SDXLRefinerCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase
add_time_ids = torch.tensor([original_size + crop_coords + (self.aesthetic_score,)])
assert c2_pooled is not None
conditioning_data = ConditioningFieldData(
conditionings=[
SDXLConditioningInfo(
@ -421,14 +374,9 @@ class SDXLRefinerCompelPromptInvocation(BaseInvocation, SDXLPromptInvocationBase
]
)
conditioning_name = f"{context.graph_execution_state_id}_{self.id}_conditioning"
context.services.latents.save(conditioning_name, conditioning_data)
conditioning_name = context.conditioning.save(conditioning_data)
return ConditioningOutput(
conditioning=ConditioningField(
conditioning_name=conditioning_name,
),
)
return ConditioningOutput.build(conditioning_name)
@invocation_output("clip_skip_output")
@ -449,7 +397,7 @@ class ClipSkipInvocation(BaseInvocation):
"""Skip layers in clip text_encoder model."""
clip: ClipField = InputField(description=FieldDescriptions.clip, input=Input.Connection, title="CLIP")
skipped_layers: int = InputField(default=0, description=FieldDescriptions.skipped_layers)
skipped_layers: int = InputField(default=0, ge=0, description=FieldDescriptions.skipped_layers)
def invoke(self, context: InvocationContext) -> ClipSkipInvocationOutput:
self.clip.skipped_layers += self.skipped_layers
@ -459,9 +407,9 @@ class ClipSkipInvocation(BaseInvocation):
def get_max_token_count(
tokenizer,
tokenizer: CLIPTokenizer,
prompt: Union[FlattenedPrompt, Blend, Conjunction],
truncate_if_too_long=False,
truncate_if_too_long: bool = False,
) -> int:
if type(prompt) is Blend:
blend: Blend = prompt
@ -473,7 +421,9 @@ def get_max_token_count(
return len(get_tokens_for_prompt_object(tokenizer, prompt, truncate_if_too_long))
def get_tokens_for_prompt_object(tokenizer, parsed_prompt: FlattenedPrompt, truncate_if_too_long=True) -> List[str]:
def get_tokens_for_prompt_object(
tokenizer: CLIPTokenizer, parsed_prompt: FlattenedPrompt, truncate_if_too_long: bool = True
) -> List[str]:
if type(parsed_prompt) is Blend:
raise ValueError("Blend is not supported here - you need to get tokens for each of its .children")
@ -486,24 +436,29 @@ def get_tokens_for_prompt_object(tokenizer, parsed_prompt: FlattenedPrompt, trun
for x in parsed_prompt.children
]
text = " ".join(text_fragments)
tokens = tokenizer.tokenize(text)
tokens: List[str] = tokenizer.tokenize(text)
if truncate_if_too_long:
max_tokens_length = tokenizer.model_max_length - 2 # typically 75: CLIP's limit of 77 minus the two begin/end special tokens
tokens = tokens[0:max_tokens_length]
return tokens
def log_tokenization_for_conjunction(c: Conjunction, tokenizer, display_label_prefix=None):
def log_tokenization_for_conjunction(
c: Conjunction, tokenizer: CLIPTokenizer, display_label_prefix: Optional[str] = None
) -> None:
display_label_prefix = display_label_prefix or ""
for i, p in enumerate(c.prompts):
if len(c.prompts) > 1:
this_display_label_prefix = f"{display_label_prefix}(conjunction part {i + 1}, weight={c.weights[i]})"
else:
assert display_label_prefix is not None
this_display_label_prefix = display_label_prefix
log_tokenization_for_prompt_object(p, tokenizer, display_label_prefix=this_display_label_prefix)
def log_tokenization_for_prompt_object(p: Union[Blend, FlattenedPrompt], tokenizer, display_label_prefix=None):
def log_tokenization_for_prompt_object(
p: Union[Blend, FlattenedPrompt], tokenizer: CLIPTokenizer, display_label_prefix: Optional[str] = None
) -> None:
display_label_prefix = display_label_prefix or ""
if type(p) is Blend:
blend: Blend = p
@ -543,7 +498,12 @@ def log_tokenization_for_prompt_object(p: Union[Blend, FlattenedPrompt], tokeniz
log_tokenization_for_text(text, tokenizer, display_label=display_label_prefix)
def log_tokenization_for_text(text, tokenizer, display_label=None, truncate_if_too_long=False):
def log_tokenization_for_text(
text: str,
tokenizer: CLIPTokenizer,
display_label: Optional[str] = None,
truncate_if_too_long: Optional[bool] = False,
) -> None:
"""shows how the prompt is tokenized
# usually tokens have '</w>' to indicate end-of-word,
# but for readability it has been replaced with ' '


@ -0,0 +1,17 @@
from typing import Literal
from invokeai.backend.stable_diffusion.schedulers import SCHEDULER_MAP
LATENT_SCALE_FACTOR = 8
"""
HACK: Many nodes are currently hard-coded to use a fixed latent scale factor of 8. This is fragile, and will need to
be addressed if future models use a different latent scale factor. Also, note that there may be places where the scale
factor is hard-coded to a literal '8' rather than using this constant.
The ratio of image:latent dimensions is LATENT_SCALE_FACTOR:1, or 8:1.
"""
SCHEDULER_NAME_VALUES = Literal[tuple(SCHEDULER_MAP.keys())]
"""A literal type representing the valid scheduler names."""
IMAGE_MODES = Literal["L", "RGB", "RGBA", "CMYK", "YCbCr", "LAB", "HSV", "I", "F"]
"""A literal type for PIL image modes supported by Invoke"""


@ -17,31 +17,30 @@ from controlnet_aux import (
MidasDetector,
MLSDdetector,
NormalBaeDetector,
OpenposeDetector,
PidiNetDetector,
SamDetector,
ZoeDetector,
)
from controlnet_aux.util import HWC3, ade_palette
from PIL import Image
from pydantic import BaseModel, ConfigDict, Field, field_validator
from pydantic import BaseModel, Field, field_validator, model_validator
from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.app.shared.fields import FieldDescriptions
from ...backend.model_management import BaseModelType
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
from invokeai.app.invocations.fields import (
FieldDescriptions,
ImageField,
Input,
InputField,
InvocationContext,
OutputField,
WithBoard,
WithMetadata,
invocation,
invocation_output,
)
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.image_util.depth_anything import DepthAnythingDetector
from invokeai.backend.image_util.dw_openpose import DWOpenposeDetector
from .baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
CONTROLNET_MODE_VALUES = Literal["balanced", "more_prompt", "more_control", "unbalanced"]
CONTROLNET_RESIZE_VALUES = Literal[
@ -55,10 +54,7 @@ CONTROLNET_RESIZE_VALUES = Literal[
class ControlNetModelField(BaseModel):
"""ControlNet model field"""
model_name: str = Field(description="Name of the ControlNet model")
base_model: BaseModelType = Field(description="Base model")
model_config = ConfigDict(protected_namespaces=())
key: str = Field(description="Model config record key for the ControlNet model")
class ControlField(BaseModel):
@ -75,17 +71,16 @@ class ControlField(BaseModel):
resize_mode: CONTROLNET_RESIZE_VALUES = Field(default="just_resize", description="The resize mode to use")
@field_validator("control_weight")
@classmethod
def validate_control_weight(cls, v):
"""Validate that all control weights are in the valid range"""
if isinstance(v, list):
for i in v:
if i < -1 or i > 2:
raise ValueError("Control weights must be within -1 to 2 range")
else:
if v < -1 or v > 2:
raise ValueError("Control weights must be within -1 to 2 range")
validate_weights(v)
return v
@model_validator(mode="after")
def validate_begin_end_step_percent(self):
validate_begin_end_step(self.begin_step_percent, self.end_step_percent)
return self
@invocation_output("control_output")
class ControlOutput(BaseInvocationOutput):
@ -95,17 +90,17 @@ class ControlOutput(BaseInvocationOutput):
control: ControlField = OutputField(description=FieldDescriptions.control)
@invocation("controlnet", title="ControlNet", tags=["controlnet"], category="controlnet", version="1.1.0")
@invocation("controlnet", title="ControlNet", tags=["controlnet"], category="controlnet", version="1.1.1")
class ControlNetInvocation(BaseInvocation):
"""Collects ControlNet info to pass to other nodes"""
image: ImageField = InputField(description="The control image")
control_model: ControlNetModelField = InputField(description=FieldDescriptions.controlnet_model, input=Input.Direct)
control_weight: Union[float, List[float]] = InputField(
default=1.0, description="The weight given to the ControlNet"
default=1.0, ge=-1, le=2, description="The weight given to the ControlNet"
)
begin_step_percent: float = InputField(
default=0, ge=-1, le=2, description="When the ControlNet is first applied (% of total steps)"
default=0, ge=0, le=1, description="When the ControlNet is first applied (% of total steps)"
)
end_step_percent: float = InputField(
default=1, ge=0, le=1, description="When the ControlNet is last applied (% of total steps)"
@ -113,6 +108,17 @@ class ControlNetInvocation(BaseInvocation):
control_mode: CONTROLNET_MODE_VALUES = InputField(default="balanced", description="The control mode used")
resize_mode: CONTROLNET_RESIZE_VALUES = InputField(default="just_resize", description="The resize mode used")
@field_validator("control_weight")
@classmethod
def validate_control_weight(cls, v):
validate_weights(v)
return v
@model_validator(mode="after")
def validate_begin_end_step_percent(self) -> "ControlNetInvocation":
validate_begin_end_step(self.begin_step_percent, self.end_step_percent)
return self
def invoke(self, context: InvocationContext) -> ControlOutput:
return ControlOutput(
control=ControlField(
@ -128,7 +134,7 @@ class ControlNetInvocation(BaseInvocation):
# This invocation exists for other invocations to subclass it - do not register with @invocation!
class ImageProcessorInvocation(BaseInvocation, WithMetadata):
class ImageProcessorInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Base class for invocations that preprocess images for ControlNet"""
image: ImageField = InputField(description="The image to process")
@ -137,23 +143,18 @@ class ImageProcessorInvocation(BaseInvocation, WithMetadata):
# superclass just passes through image without processing
return image
def load_image(self, context: InvocationContext) -> Image.Image:
# allows override for any special formatting specific to the preprocessor
return context.images.get_pil(self.image.image_name, "RGB")
def invoke(self, context: InvocationContext) -> ImageOutput:
raw_image = context.services.images.get_pil_image(self.image.image_name)
raw_image = self.load_image(context)
# image type should be PIL.PngImagePlugin.PngImageFile ?
processed_image = self.run_processor(raw_image)
# currently can't see processed image in node UI without a showImage node,
# so for now setting image_type to RESULT instead of INTERMEDIATE so will get saved in gallery
image_dto = context.services.images.create(
image=processed_image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.CONTROL,
session_id=context.graph_execution_state_id,
node_id=self.id,
is_intermediate=self.is_intermediate,
metadata=self.metadata,
workflow=context.workflow,
)
image_dto = context.images.save(image=processed_image)
# Builds an ImageOutput and its ImageField
processed_image_field = ImageField(image_name=image_dto.image_name)
@ -172,7 +173,7 @@ class ImageProcessorInvocation(BaseInvocation, WithMetadata):
title="Canny Processor",
tags=["controlnet", "canny"],
category="controlnet",
version="1.2.0",
version="1.2.1",
)
class CannyImageProcessorInvocation(ImageProcessorInvocation):
"""Canny edge detection for ControlNet"""
@ -184,6 +185,10 @@ class CannyImageProcessorInvocation(ImageProcessorInvocation):
default=200, ge=0, le=255, description="The high threshold of the Canny pixel gradient (0-255)"
)
def load_image(self, context: InvocationContext) -> Image.Image:
# Keep alpha channel for Canny processing to detect edges of transparent areas
return context.images.get_pil(self.image.image_name, "RGBA")
def run_processor(self, image):
canny_processor = CannyDetector()
processed_image = canny_processor(image, self.low_threshold, self.high_threshold)
@ -195,7 +200,7 @@ class CannyImageProcessorInvocation(ImageProcessorInvocation):
title="HED (softedge) Processor",
tags=["controlnet", "hed", "softedge"],
category="controlnet",
version="1.2.0",
version="1.2.1",
)
class HedImageProcessorInvocation(ImageProcessorInvocation):
"""Applies HED edge detection to image"""
@ -224,7 +229,7 @@ class HedImageProcessorInvocation(ImageProcessorInvocation):
title="Lineart Processor",
tags=["controlnet", "lineart"],
category="controlnet",
version="1.2.0",
version="1.2.1",
)
class LineartImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art processing to image"""
@ -246,7 +251,7 @@ class LineartImageProcessorInvocation(ImageProcessorInvocation):
title="Lineart Anime Processor",
tags=["controlnet", "lineart", "anime"],
category="controlnet",
version="1.2.0",
version="1.2.1",
)
class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies line art anime processing to image"""
@ -264,37 +269,12 @@ class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
return processed_image
@invocation(
"openpose_image_processor",
title="Openpose Processor",
tags=["controlnet", "openpose", "pose"],
category="controlnet",
version="1.2.0",
)
class OpenposeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Openpose processing to image"""
hand_and_face: bool = InputField(default=False, description="Whether to use hands and face mode")
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image):
openpose_processor = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
processed_image = openpose_processor(
image,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
hand_and_face=self.hand_and_face,
)
return processed_image
@invocation(
"midas_depth_image_processor",
title="Midas Depth Processor",
tags=["controlnet", "midas"],
category="controlnet",
version="1.2.0",
version="1.2.1",
)
class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Midas depth processing to image"""
@ -321,7 +301,7 @@ class MidasDepthImageProcessorInvocation(ImageProcessorInvocation):
title="Normal BAE Processor",
tags=["controlnet"],
category="controlnet",
version="1.2.0",
version="1.2.1",
)
class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies NormalBae processing to image"""
@ -338,7 +318,7 @@ class NormalbaeImageProcessorInvocation(ImageProcessorInvocation):
@invocation(
"mlsd_image_processor", title="MLSD Processor", tags=["controlnet", "mlsd"], category="controlnet", version="1.2.0"
"mlsd_image_processor", title="MLSD Processor", tags=["controlnet", "mlsd"], category="controlnet", version="1.2.1"
)
class MlsdImageProcessorInvocation(ImageProcessorInvocation):
"""Applies MLSD processing to image"""
@ -361,7 +341,7 @@ class MlsdImageProcessorInvocation(ImageProcessorInvocation):
@invocation(
"pidi_image_processor", title="PIDI Processor", tags=["controlnet", "pidi"], category="controlnet", version="1.2.0"
"pidi_image_processor", title="PIDI Processor", tags=["controlnet", "pidi"], category="controlnet", version="1.2.1"
)
class PidiImageProcessorInvocation(ImageProcessorInvocation):
"""Applies PIDI processing to image"""
@ -388,7 +368,7 @@ class PidiImageProcessorInvocation(ImageProcessorInvocation):
title="Content Shuffle Processor",
tags=["controlnet", "contentshuffle"],
category="controlnet",
version="1.2.0",
version="1.2.1",
)
class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
"""Applies content shuffle processing to image"""
@ -418,7 +398,7 @@ class ContentShuffleImageProcessorInvocation(ImageProcessorInvocation):
title="Zoe (Depth) Processor",
tags=["controlnet", "zoe", "depth"],
category="controlnet",
version="1.2.0",
version="1.2.1",
)
class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Zoe depth processing to image"""
@ -434,7 +414,7 @@ class ZoeDepthImageProcessorInvocation(ImageProcessorInvocation):
title="Mediapipe Face Processor",
tags=["controlnet", "mediapipe", "face"],
category="controlnet",
version="1.2.0",
version="1.2.1",
)
class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
"""Applies mediapipe face processing to image"""
@ -443,10 +423,6 @@ class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
min_confidence: float = InputField(default=0.5, ge=0, le=1, description="Minimum confidence for face detection")
def run_processor(self, image):
# MediaPipeFaceDetector throws an error if image has alpha channel
# so convert to RGB if needed
if image.mode == "RGBA":
image = image.convert("RGB")
mediapipe_face_processor = MediapipeFaceDetector()
processed_image = mediapipe_face_processor(image, max_faces=self.max_faces, min_confidence=self.min_confidence)
return processed_image
@ -457,7 +433,7 @@ class MediapipeFaceProcessorInvocation(ImageProcessorInvocation):
title="Leres (Depth) Processor",
tags=["controlnet", "leres", "depth"],
category="controlnet",
version="1.2.0",
version="1.2.1",
)
class LeresImageProcessorInvocation(ImageProcessorInvocation):
"""Applies leres processing to image"""
@ -486,7 +462,7 @@ class LeresImageProcessorInvocation(ImageProcessorInvocation):
title="Tile Resample Processor",
tags=["controlnet", "tile"],
category="controlnet",
version="1.2.0",
version="1.2.1",
)
class TileResamplerProcessorInvocation(ImageProcessorInvocation):
"""Tile resampler processor"""
@ -526,7 +502,7 @@ class TileResamplerProcessorInvocation(ImageProcessorInvocation):
title="Segment Anything Processor",
tags=["controlnet", "segmentanything"],
category="controlnet",
version="1.2.0",
version="1.2.1",
)
class SegmentAnythingProcessorInvocation(ImageProcessorInvocation):
"""Applies segment anything processing to image"""
@ -568,7 +544,7 @@ class SamDetectorReproducibleColors(SamDetector):
title="Color Map Processor",
tags=["controlnet"],
category="controlnet",
version="1.2.0",
version="1.2.1",
)
class ColorMapImageProcessorInvocation(ImageProcessorInvocation):
"""Generates a color map from the provided image"""
@ -576,7 +552,6 @@ class ColorMapImageProcessorInvocation(ImageProcessorInvocation):
color_map_tile_size: int = InputField(default=64, ge=0, description=FieldDescriptions.tile_size)
def run_processor(self, image: Image.Image):
image = image.convert("RGB")
np_image = np.array(image, dtype=np.uint8)
height, width = np_image.shape[:2]
@ -591,3 +566,57 @@ class ColorMapImageProcessorInvocation(ImageProcessorInvocation):
color_map = cv2.resize(color_map, (width, height), interpolation=cv2.INTER_NEAREST)
color_map = Image.fromarray(color_map)
return color_map
DEPTH_ANYTHING_MODEL_SIZES = Literal["large", "base", "small"]
@invocation(
"depth_anything_image_processor",
title="Depth Anything Processor",
tags=["controlnet", "depth", "depth anything"],
category="controlnet",
version="1.0.0",
)
class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
"""Generates a depth map based on the Depth Anything algorithm"""
model_size: DEPTH_ANYTHING_MODEL_SIZES = InputField(
default="small", description="The size of the depth model to use"
)
resolution: int = InputField(default=512, ge=64, multiple_of=64, description=FieldDescriptions.image_res)
offload: bool = InputField(default=False)
def run_processor(self, image: Image.Image):
depth_anything_detector = DepthAnythingDetector()
depth_anything_detector.load_model(model_size=self.model_size)
processed_image = depth_anything_detector(image=image, resolution=self.resolution, offload=self.offload)
return processed_image
@invocation(
"dw_openpose_image_processor",
title="DW Openpose Image Processor",
tags=["controlnet", "dwpose", "openpose"],
category="controlnet",
version="1.0.0",
)
class DWOpenposeImageProcessorInvocation(ImageProcessorInvocation):
"""Generates an openpose pose from an image using DWPose"""
draw_body: bool = InputField(default=True)
draw_face: bool = InputField(default=False)
draw_hands: bool = InputField(default=False)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image: Image.Image):
dw_openpose = DWOpenposeDetector()
processed_image = dw_openpose(
image,
draw_face=self.draw_face,
draw_hands=self.draw_hands,
draw_body=self.draw_body,
resolution=self.image_resolution,
)
return processed_image


@ -5,22 +5,24 @@ import cv2 as cv
import numpy
from PIL import Image, ImageOps
from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.app.invocations.fields import ImageField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from .baseinvocation import BaseInvocation, InputField, InvocationContext, WithMetadata, invocation
from .baseinvocation import BaseInvocation, invocation
from .fields import InputField, WithBoard, WithMetadata
@invocation("cv_inpaint", title="OpenCV Inpaint", tags=["opencv", "inpaint"], category="inpaint", version="1.2.0")
class CvInpaintInvocation(BaseInvocation, WithMetadata):
@invocation("cv_inpaint", title="OpenCV Inpaint", tags=["opencv", "inpaint"], category="inpaint", version="1.2.1")
class CvInpaintInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Simple inpaint using opencv."""
image: ImageField = InputField(description="The image to inpaint")
mask: ImageField = InputField(description="The mask to use when inpainting")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
mask = context.services.images.get_pil_image(self.mask.image_name)
image = context.images.get_pil(self.image.image_name)
mask = context.images.get_pil(self.mask.image_name)
# Convert to cv image/mask
# TODO: consider making these utility functions
@ -34,18 +36,6 @@ class CvInpaintInvocation(BaseInvocation, WithMetadata):
# TODO: consider making a utility function
image_inpainted = Image.fromarray(cv.cvtColor(cv_inpainted, cv.COLOR_BGR2RGB))
image_dto = context.services.images.create(
image=image_inpainted,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=context.workflow,
)
image_dto = context.images.save(image=image_inpainted)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
return ImageOutput.build(image_dto)
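This hunk is representative of the whole migration: image reads move from context.services.images.get_pil_image to context.images.get_pil, saves no longer spell out origin/category/node/session bookkeeping, and ImageOutput.build replaces the hand-assembled output. A minimal sketch of a node written against the new surface (the node itself is hypothetical):

@invocation("image_copy_example", title="Copy Image", tags=["image"], category="image", version="1.0.0")
class CopyImageInvocation(BaseInvocation, WithMetadata, WithBoard):
    """Hypothetical node illustrating the slimmed-down context API."""

    image: ImageField = InputField(description="The image to copy")

    def invoke(self, context: InvocationContext) -> ImageOutput:
        image = context.images.get_pil(self.image.image_name)  # was context.services.images.get_pil_image
        image_dto = context.images.save(image=image)  # metadata/board are applied via the mixins
        return ImageOutput.build(image_dto)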

View File

@ -13,15 +13,13 @@ from pydantic import field_validator
import invokeai.assets.fonts as font_assets
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
InputField,
InvocationContext,
OutputField,
WithMetadata,
invocation,
invocation_output,
)
from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.app.invocations.fields import ImageField, InputField, OutputField, WithBoard, WithMetadata
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.image_records.image_records_common import ImageCategory
from invokeai.app.services.shared.invocation_context import InvocationContext
@invocation_output("face_mask_output")
@ -306,37 +304,37 @@ def extract_face(
# Adjust the crop boundaries to stay within the original image's dimensions
if x_min < 0:
context.services.logger.warning("FaceTools --> -X-axis padding reached image edge.")
context.logger.warning("FaceTools --> -X-axis padding reached image edge.")
x_max -= x_min
x_min = 0
elif x_max > mask.width:
context.services.logger.warning("FaceTools --> +X-axis padding reached image edge.")
context.logger.warning("FaceTools --> +X-axis padding reached image edge.")
x_min -= x_max - mask.width
x_max = mask.width
if y_min < 0:
context.services.logger.warning("FaceTools --> +Y-axis padding reached image edge.")
context.logger.warning("FaceTools --> +Y-axis padding reached image edge.")
y_max -= y_min
y_min = 0
elif y_max > mask.height:
context.services.logger.warning("FaceTools --> -Y-axis padding reached image edge.")
context.logger.warning("FaceTools --> -Y-axis padding reached image edge.")
y_min -= y_max - mask.height
y_max = mask.height
# Ensure the crop is square and adjust the boundaries if needed
if x_max - x_min != crop_size:
context.services.logger.warning("FaceTools --> Limiting x-axis padding to constrain bounding box to a square.")
context.logger.warning("FaceTools --> Limiting x-axis padding to constrain bounding box to a square.")
diff = crop_size - (x_max - x_min)
x_min -= diff // 2
x_max += diff - diff // 2
if y_max - y_min != crop_size:
context.services.logger.warning("FaceTools --> Limiting y-axis padding to constrain bounding box to a square.")
context.logger.warning("FaceTools --> Limiting y-axis padding to constrain bounding box to a square.")
diff = crop_size - (y_max - y_min)
y_min -= diff // 2
y_max += diff - diff // 2
context.services.logger.info(f"FaceTools --> Calculated bounding box (8 multiple): {crop_size}")
context.logger.info(f"FaceTools --> Calculated bounding box (8 multiple): {crop_size}")
# Crop the output image to the specified size with the center of the face mesh as the center.
mask = mask.crop((x_min, y_min, x_max, y_max))
@ -368,7 +366,7 @@ def get_faces_list(
# Generate the face box mask and get the center of the face.
if not should_chunk:
context.services.logger.info("FaceTools --> Attempting full image face detection.")
context.logger.info("FaceTools --> Attempting full image face detection.")
result = generate_face_box_mask(
context=context,
minimum_confidence=minimum_confidence,
@ -380,7 +378,7 @@ def get_faces_list(
draw_mesh=draw_mesh,
)
if should_chunk or len(result) == 0:
context.services.logger.info("FaceTools --> Chunking image (chunk toggled on, or no face found in full image).")
context.logger.info("FaceTools --> Chunking image (chunk toggled on, or no face found in full image).")
width, height = image.size
image_chunks = []
x_offsets = []
@ -399,7 +397,7 @@ def get_faces_list(
x_offsets.append(x)
y_offsets.append(0)
fx += increment
context.services.logger.info(f"FaceTools --> Chunk starting at x = {x}")
context.logger.info(f"FaceTools --> Chunk starting at x = {x}")
elif height > width:
# Portrait - slice the image vertically
fy = 0.0
@ -411,10 +409,10 @@ def get_faces_list(
x_offsets.append(0)
y_offsets.append(y)
fy += increment
context.services.logger.info(f"FaceTools --> Chunk starting at y = {y}")
context.logger.info(f"FaceTools --> Chunk starting at y = {y}")
for idx in range(len(image_chunks)):
context.services.logger.info(f"FaceTools --> Evaluating faces in chunk {idx}")
context.logger.info(f"FaceTools --> Evaluating faces in chunk {idx}")
result = result + generate_face_box_mask(
context=context,
minimum_confidence=minimum_confidence,
@ -428,7 +426,7 @@ def get_faces_list(
if len(result) == 0:
# Give up
context.services.logger.warning(
context.logger.warning(
"FaceTools --> No face detected in chunked input image. Passing through original image."
)
@ -437,7 +435,7 @@ def get_faces_list(
return all_faces
@invocation("face_off", title="FaceOff", tags=["image", "faceoff", "face", "mask"], category="image", version="1.2.0")
@invocation("face_off", title="FaceOff", tags=["image", "faceoff", "face", "mask"], category="image", version="1.2.1")
class FaceOffInvocation(BaseInvocation, WithMetadata):
"""Bound, extract, and mask a face from an image using MediaPipe detection"""
@ -470,11 +468,11 @@ class FaceOffInvocation(BaseInvocation, WithMetadata):
)
if len(all_faces) == 0:
context.services.logger.warning("FaceOff --> No faces detected. Passing through original image.")
context.logger.warning("FaceOff --> No faces detected. Passing through original image.")
return None
if self.face_id > len(all_faces) - 1:
context.services.logger.warning(
context.logger.warning(
f"FaceOff --> Face ID {self.face_id} is outside of the number of faces detected ({len(all_faces)}). Passing through original image."
)
return None
@ -486,7 +484,7 @@ class FaceOffInvocation(BaseInvocation, WithMetadata):
return face_data
def invoke(self, context: InvocationContext) -> FaceOffOutput:
image = context.services.images.get_pil_image(self.image.image_name)
image = context.images.get_pil(self.image.image_name)
result = self.faceoff(context=context, image=image)
if result is None:
@ -500,24 +498,9 @@ class FaceOffInvocation(BaseInvocation, WithMetadata):
x = result["x_min"]
y = result["y_min"]
image_dto = context.services.images.create(
image=result_image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=context.workflow,
)
image_dto = context.images.save(image=result_image)
mask_dto = context.services.images.create(
image=result_mask,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.MASK,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
mask_dto = context.images.save(image=result_mask, image_category=ImageCategory.MASK)
output = FaceOffOutput(
image=ImageField(image_name=image_dto.image_name),
@ -531,7 +514,7 @@ class FaceOffInvocation(BaseInvocation, WithMetadata):
return output
@invocation("face_mask_detection", title="FaceMask", tags=["image", "face", "mask"], category="image", version="1.2.0")
@invocation("face_mask_detection", title="FaceMask", tags=["image", "face", "mask"], category="image", version="1.2.1")
class FaceMaskInvocation(BaseInvocation, WithMetadata):
"""Face mask creation using mediapipe face detection"""
@ -580,7 +563,7 @@ class FaceMaskInvocation(BaseInvocation, WithMetadata):
if len(intersected_face_ids) == 0:
id_range_str = ",".join([str(id) for id in id_range])
context.services.logger.warning(
context.logger.warning(
f"Face IDs must be in range of detected faces - requested {self.face_ids}, detected {id_range_str}. Passing through original image."
)
return FaceMaskResult(
@ -616,27 +599,12 @@ class FaceMaskInvocation(BaseInvocation, WithMetadata):
)
def invoke(self, context: InvocationContext) -> FaceMaskOutput:
image = context.services.images.get_pil_image(self.image.image_name)
image = context.images.get_pil(self.image.image_name)
result = self.facemask(context=context, image=image)
image_dto = context.services.images.create(
image=result["image"],
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=context.workflow,
)
image_dto = context.images.save(image=result["image"])
mask_dto = context.services.images.create(
image=result["mask"],
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.MASK,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
)
mask_dto = context.images.save(image=result["mask"], image_category=ImageCategory.MASK)
output = FaceMaskOutput(
image=ImageField(image_name=image_dto.image_name),
@ -649,9 +617,9 @@ class FaceMaskInvocation(BaseInvocation, WithMetadata):
@invocation(
"face_identifier", title="FaceIdentifier", tags=["image", "face", "identifier"], category="image", version="1.2.0"
"face_identifier", title="FaceIdentifier", tags=["image", "face", "identifier"], category="image", version="1.2.1"
)
class FaceIdentifierInvocation(BaseInvocation, WithMetadata):
class FaceIdentifierInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Outputs an image with detected face IDs printed on each face. For use with other FaceTools."""
image: ImageField = InputField(description="Image to face detect")
@ -705,21 +673,9 @@ class FaceIdentifierInvocation(BaseInvocation, WithMetadata):
return image
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
image = context.images.get_pil(self.image.image_name)
result_image = self.faceidentifier(context=context, image=image)
image_dto = context.services.images.create(
image=result_image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
workflow=context.workflow,
)
image_dto = context.images.save(image=result_image)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
return ImageOutput.build(image_dto)
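Two smaller renames ride along in the FaceTools hunks: context.services.logger becomes context.logger (the standard logging API is unchanged), and non-general images keep their category via a keyword on save. A fragment of the resulting call shapes, as used above:

# inside an invoke(self, context: InvocationContext) method:
context.logger.warning("FaceTools --> example message")  # was context.services.logger.warning
mask_dto = context.images.save(image=result_mask, image_category=ImageCategory.MASK)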

View File

@ -0,0 +1,566 @@
from enum import Enum
from typing import Any, Callable, Optional, Tuple
from pydantic import BaseModel, ConfigDict, Field, RootModel, TypeAdapter
from pydantic.fields import _Unset
from pydantic_core import PydanticUndefined
from invokeai.app.util.metaenum import MetaEnum
from invokeai.backend.util.logging import InvokeAILogger
logger = InvokeAILogger.get_logger()
class UIType(str, Enum, metaclass=MetaEnum):
"""
Type hints for the UI for situations in which the field type is not enough to infer the correct UI type.
- Model Fields
The most common node-author-facing use will be for model fields. Internally, there is no difference
between SD-1, SD-2 and SDXL model fields - they all use the class `MainModelField`. To ensure the
base-model-specific UI is rendered, use e.g. `ui_type=UIType.SDXLMainModel` to indicate that
the field is an SDXL main model field.
- Any Field
We cannot infer the usage of `typing.Any` via schema parsing, so you *must* use `ui_type=UIType.Any` to
indicate that the field accepts any type. Use with caution. This cannot be used on outputs.
- Scheduler Field
Special handling in the UI is needed for this field, which otherwise would be parsed as a plain enum field.
- Internal Fields
Similar to the Any Field, the `collect` and `iterate` nodes make use of `typing.Any`. To facilitate
handling these types in the client, we use `UIType._Collection` and `UIType._CollectionItem`. These
should not be used by node authors.
- DEPRECATED Fields
These types are deprecated and should not be used by node authors. A warning will be logged if one is
used, and the type will be ignored. They are included here for backwards compatibility.
"""
# region Model Field Types
SDXLMainModel = "SDXLMainModelField"
SDXLRefinerModel = "SDXLRefinerModelField"
ONNXModel = "ONNXModelField"
VaeModel = "VAEModelField"
LoRAModel = "LoRAModelField"
ControlNetModel = "ControlNetModelField"
IPAdapterModel = "IPAdapterModelField"
# endregion
# region Misc Field Types
Scheduler = "SchedulerField"
Any = "AnyField"
# endregion
# region Internal Field Types
_Collection = "CollectionField"
_CollectionItem = "CollectionItemField"
# endregion
# region DEPRECATED
Boolean = "DEPRECATED_Boolean"
Color = "DEPRECATED_Color"
Conditioning = "DEPRECATED_Conditioning"
Control = "DEPRECATED_Control"
Float = "DEPRECATED_Float"
Image = "DEPRECATED_Image"
Integer = "DEPRECATED_Integer"
Latents = "DEPRECATED_Latents"
String = "DEPRECATED_String"
BooleanCollection = "DEPRECATED_BooleanCollection"
ColorCollection = "DEPRECATED_ColorCollection"
ConditioningCollection = "DEPRECATED_ConditioningCollection"
ControlCollection = "DEPRECATED_ControlCollection"
FloatCollection = "DEPRECATED_FloatCollection"
ImageCollection = "DEPRECATED_ImageCollection"
IntegerCollection = "DEPRECATED_IntegerCollection"
LatentsCollection = "DEPRECATED_LatentsCollection"
StringCollection = "DEPRECATED_StringCollection"
BooleanPolymorphic = "DEPRECATED_BooleanPolymorphic"
ColorPolymorphic = "DEPRECATED_ColorPolymorphic"
ConditioningPolymorphic = "DEPRECATED_ConditioningPolymorphic"
ControlPolymorphic = "DEPRECATED_ControlPolymorphic"
FloatPolymorphic = "DEPRECATED_FloatPolymorphic"
ImagePolymorphic = "DEPRECATED_ImagePolymorphic"
IntegerPolymorphic = "DEPRECATED_IntegerPolymorphic"
LatentsPolymorphic = "DEPRECATED_LatentsPolymorphic"
StringPolymorphic = "DEPRECATED_StringPolymorphic"
MainModel = "DEPRECATED_MainModel"
UNet = "DEPRECATED_UNet"
Vae = "DEPRECATED_Vae"
CLIP = "DEPRECATED_CLIP"
Collection = "DEPRECATED_Collection"
CollectionItem = "DEPRECATED_CollectionItem"
Enum = "DEPRECATED_Enum"
WorkflowField = "DEPRECATED_WorkflowField"
IsIntermediate = "DEPRECATED_IsIntermediate"
BoardField = "DEPRECATED_BoardField"
MetadataItem = "DEPRECATED_MetadataItem"
MetadataItemCollection = "DEPRECATED_MetadataItemCollection"
MetadataItemPolymorphic = "DEPRECATED_MetadataItemPolymorphic"
MetadataDict = "DEPRECATED_MetadataDict"
class UIComponent(str, Enum, metaclass=MetaEnum):
"""
The type of UI component to use for a field, used to override the default components, which are
inferred from the field type.
"""
None_ = "none"
Textarea = "textarea"
Slider = "slider"
class FieldDescriptions:
denoising_start = "When to start denoising, expressed a percentage of total steps"
denoising_end = "When to stop denoising, expressed a percentage of total steps"
cfg_scale = "Classifier-Free Guidance scale"
cfg_rescale_multiplier = "Rescale multiplier for CFG guidance, used for models trained with zero-terminal SNR"
scheduler = "Scheduler to use during inference"
positive_cond = "Positive conditioning tensor"
negative_cond = "Negative conditioning tensor"
noise = "Noise tensor"
clip = "CLIP (tokenizer, text encoder, LoRAs) and skipped layer count"
unet = "UNet (scheduler, LoRAs)"
vae = "VAE"
cond = "Conditioning tensor"
controlnet_model = "ControlNet model to load"
vae_model = "VAE model to load"
lora_model = "LoRA model to load"
main_model = "Main model (UNet, VAE, CLIP) to load"
sdxl_main_model = "SDXL Main model (UNet, VAE, CLIP1, CLIP2) to load"
sdxl_refiner_model = "SDXL Refiner Main Modde (UNet, VAE, CLIP2) to load"
onnx_main_model = "ONNX Main model (UNet, VAE, CLIP) to load"
lora_weight = "The weight at which the LoRA is applied to each model"
compel_prompt = "Prompt to be parsed by Compel to create a conditioning tensor"
raw_prompt = "Raw prompt text (no parsing)"
sdxl_aesthetic = "The aesthetic score to apply to the conditioning tensor"
skipped_layers = "Number of layers to skip in text encoder"
seed = "Seed for random number generation"
steps = "Number of steps to run"
width = "Width of output (px)"
height = "Height of output (px)"
control = "ControlNet(s) to apply"
ip_adapter = "IP-Adapter to apply"
t2i_adapter = "T2I-Adapter(s) to apply"
denoised_latents = "Denoised latents tensor"
latents = "Latents tensor"
strength = "Strength of denoising (proportional to steps)"
metadata = "Optional metadata to be saved with the image"
metadata_collection = "Collection of Metadata"
metadata_item_polymorphic = "A single metadata item or collection of metadata items"
metadata_item_label = "Label for this metadata item"
metadata_item_value = "The value for this metadata item (may be any type)"
workflow = "Optional workflow to be saved with the image"
interp_mode = "Interpolation mode"
torch_antialias = "Whether or not to apply antialiasing (bilinear or bicubic only)"
fp32 = "Whether or not to use full float32 precision"
precision = "Precision to use"
tiled = "Processing using overlapping tiles (reduce memory consumption)"
detect_res = "Pixel resolution for detection"
image_res = "Pixel resolution for output image"
safe_mode = "Whether or not to use safe mode"
scribble_mode = "Whether or not to use scribble mode"
scale_factor = "The factor by which to scale"
blend_alpha = (
"Blending factor. 0.0 = use input A only, 1.0 = use input B only, 0.5 = 50% mix of input A and input B."
)
num_1 = "The first number"
num_2 = "The second number"
mask = "The mask to use for the operation"
board = "The board to save the image to"
image = "The image to process"
tile_size = "Tile size"
inclusive_low = "The inclusive low value"
exclusive_high = "The exclusive high value"
decimal_places = "The number of decimal places to round to"
freeu_s1 = 'Scaling factor for stage 1 to attenuate the contributions of the skip features. This is done to mitigate the "oversmoothing effect" in the enhanced denoising process.'
freeu_s2 = 'Scaling factor for stage 2 to attenuate the contributions of the skip features. This is done to mitigate the "oversmoothing effect" in the enhanced denoising process.'
freeu_b1 = "Scaling factor for stage 1 to amplify the contributions of backbone features."
freeu_b2 = "Scaling factor for stage 2 to amplify the contributions of backbone features."
class ImageField(BaseModel):
"""An image primitive field"""
image_name: str = Field(description="The name of the image")
class BoardField(BaseModel):
"""A board primitive field"""
board_id: str = Field(description="The id of the board")
class DenoiseMaskField(BaseModel):
"""An inpaint mask field"""
mask_name: str = Field(description="The name of the mask image")
masked_latents_name: Optional[str] = Field(default=None, description="The name of the masked image latents")
gradient: bool = Field(default=False, description="Used for gradient inpainting")
class LatentsField(BaseModel):
"""A latents tensor primitive field"""
latents_name: str = Field(description="The name of the latents")
seed: Optional[int] = Field(default=None, description="Seed used to generate this latents")
class ColorField(BaseModel):
"""A color primitive field"""
r: int = Field(ge=0, le=255, description="The red component")
g: int = Field(ge=0, le=255, description="The green component")
b: int = Field(ge=0, le=255, description="The blue component")
a: int = Field(ge=0, le=255, description="The alpha component")
def tuple(self) -> Tuple[int, int, int, int]:
return (self.r, self.g, self.b, self.a)
class ConditioningField(BaseModel):
"""A conditioning tensor primitive value"""
conditioning_name: str = Field(description="The name of conditioning tensor")
# endregion
class MetadataField(RootModel):
"""
Pydantic model for metadata with custom root of type dict[str, Any].
Metadata is stored without a strict schema.
"""
root: dict[str, Any] = Field(description="The metadata")
MetadataFieldValidator = TypeAdapter(MetadataField)
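MetadataField is schemaless by design, so the TypeAdapter is the one canonical way to validate incoming dicts, e.g.:

meta = MetadataFieldValidator.validate_python({"seed": 123, "scheduler": "euler"})
assert meta.root == {"seed": 123, "scheduler": "euler"}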
class Input(str, Enum, metaclass=MetaEnum):
"""
The type of input a field accepts.
- `Input.Direct`: The field must have its value provided directly, when the invocation and field \
are instantiated.
- `Input.Connection`: The field must have its value provided by a connection.
- `Input.Any`: The field may have its value provided either directly or by a connection.
"""
Connection = "connection"
Direct = "direct"
Any = "any"
class FieldKind(str, Enum, metaclass=MetaEnum):
"""
The kind of field.
- `Input`: An input field on a node.
- `Output`: An output field on a node.
- `Internal`: A field which is treated as an input, but cannot be used in node definitions. Metadata is
one example. It is provided to nodes via the WithMetadata class, and we want to reserve the field name
"metadata" for this on all nodes. `FieldKind` is used to short-circuit the field name validation logic,
allowing "metadata" for that field.
- `NodeAttribute`: The field is a node attribute. These are fields which are not inputs or outputs,
but which are used to store information about the node. For example, the `id` and `type` fields are node
attributes.
The presence of this in `json_schema_extra["field_kind"]` is used when initializing node schemas on app
startup, and when generating the OpenAPI schema for the workflow editor.
"""
Input = "input"
Output = "output"
Internal = "internal"
NodeAttribute = "node_attribute"
class InputFieldJSONSchemaExtra(BaseModel):
"""
Extra attributes to be added to input fields and their OpenAPI schema. Used during graph execution,
and by the workflow editor during schema parsing and UI rendering.
"""
input: Input
orig_required: bool
field_kind: FieldKind
default: Optional[Any] = None
orig_default: Optional[Any] = None
ui_hidden: bool = False
ui_type: Optional[UIType] = None
ui_component: Optional[UIComponent] = None
ui_order: Optional[int] = None
ui_choice_labels: Optional[dict[str, str]] = None
model_config = ConfigDict(
validate_assignment=True,
json_schema_serialization_defaults_required=True,
)
class WithMetadata(BaseModel):
"""
Inherit from this class if your node needs a metadata input field.
"""
metadata: Optional[MetadataField] = Field(
default=None,
description=FieldDescriptions.metadata,
json_schema_extra=InputFieldJSONSchemaExtra(
field_kind=FieldKind.Internal,
input=Input.Connection,
orig_required=False,
).model_dump(exclude_none=True),
)
class WithWorkflow:
workflow = None
def __init_subclass__(cls) -> None:
logger.warn(
f"{cls.__module__.split('.')[0]}.{cls.__name__}: WithWorkflow is deprecated. Use `context.workflow` to access the workflow."
)
super().__init_subclass__()
class WithBoard(BaseModel):
"""
Inherit from this class if your node needs a board input field.
"""
board: Optional[BoardField] = Field(
default=None,
description=FieldDescriptions.board,
json_schema_extra=InputFieldJSONSchemaExtra(
field_kind=FieldKind.Internal,
input=Input.Direct,
orig_required=False,
).model_dump(exclude_none=True),
)
class OutputFieldJSONSchemaExtra(BaseModel):
"""
Extra attributes to be added to output fields and their OpenAPI schema. Used by the workflow editor
during schema parsing and UI rendering.
"""
field_kind: FieldKind
ui_hidden: bool
ui_type: Optional[UIType]
ui_order: Optional[int]
model_config = ConfigDict(
validate_assignment=True,
json_schema_serialization_defaults_required=True,
)
def InputField(
# copied from pydantic's Field
# TODO: Can we support default_factory?
default: Any = _Unset,
default_factory: Callable[[], Any] | None = _Unset,
title: str | None = _Unset,
description: str | None = _Unset,
pattern: str | None = _Unset,
strict: bool | None = _Unset,
gt: float | None = _Unset,
ge: float | None = _Unset,
lt: float | None = _Unset,
le: float | None = _Unset,
multiple_of: float | None = _Unset,
allow_inf_nan: bool | None = _Unset,
max_digits: int | None = _Unset,
decimal_places: int | None = _Unset,
min_length: int | None = _Unset,
max_length: int | None = _Unset,
# custom
input: Input = Input.Any,
ui_type: Optional[UIType] = None,
ui_component: Optional[UIComponent] = None,
ui_hidden: bool = False,
ui_order: Optional[int] = None,
ui_choice_labels: Optional[dict[str, str]] = None,
) -> Any:
"""
Creates an input field for an invocation.
This is a wrapper for Pydantic's [Field](https://docs.pydantic.dev/latest/api/fields/#pydantic.fields.Field) \
that adds a few extra parameters to support graph execution and the node editor UI.
:param Input input: [Input.Any] The kind of input this field requires. \
`Input.Direct` means a value must be provided on instantiation. \
`Input.Connection` means the value must be provided by a connection. \
`Input.Any` means either will do.
:param UIType ui_type: [None] Optionally provides an extra type hint for the UI. \
In some situations, the field's type is not enough to infer the correct UI type. \
For example, model selection fields should render a dropdown UI component to select a model. \
Internally, there is no difference between SD-1, SD-2 and SDXL model fields, they all use \
`MainModelField`. So to ensure the base-model-specific UI is rendered, you can use \
`UIType.SDXLMainModel` to indicate that the field is an SDXL main model field.
:param UIComponent ui_component: [None] Optionally specifies a specific component to use in the UI. \
The UI will always render a suitable component, but sometimes you want something different than the default. \
For example, a `string` field will default to a single-line input, but you may want a multi-line textarea instead. \
For this case, you could provide `UIComponent.Textarea`.
:param bool ui_hidden: [False] Specifies whether or not this field should be hidden in the UI.
:param int ui_order: [None] Specifies the order in which this field should be rendered in the UI.
:param dict[str, str] ui_choice_labels: [None] Specifies the labels to use for the choices in an enum field.
"""
json_schema_extra_ = InputFieldJSONSchemaExtra(
input=input,
ui_type=ui_type,
ui_component=ui_component,
ui_hidden=ui_hidden,
ui_order=ui_order,
ui_choice_labels=ui_choice_labels,
field_kind=FieldKind.Input,
orig_required=True,
)
"""
There is a conflict between the typing of invocation definitions and the typing of an invocation's
`invoke()` function.
On instantiation of a node, the invocation definition is used to create the python class. At this time,
any number of fields may be optional, because they may be provided by connections.
On calling of `invoke()`, however, those fields may be required.
For example, consider a ResizeImageInvocation with an `image: ImageField` field.
`image` is required during the call to `invoke()`, but when the python class is instantiated,
the field may not be present. This is fine, because that image field will be provided by a
connection from an ancestor node, which outputs an image.
This means we want to type the `image` field as optional for the node class definition, but required
for the `invoke()` function.
If we use `typing.Optional` in the node class definition, the field will be typed as optional in the
`invoke()` method, and we'll have to do a lot of runtime checks to ensure the field is present - or
any static type analysis tools will complain.
To get around this, in node class definitions, we type all fields correctly for the `invoke()` function,
but secretly make them optional in `InputField()`. We also store the original required bool and/or default
value. When we call `invoke()`, we use this stored information to do an additional check on the class.
"""
if default_factory is not _Unset and default_factory is not None:
default = default_factory()
logger.warn('"default_factory" is not supported, calling it now to set "default"')
# These are the args we may wish pass to the pydantic `Field()` function
field_args = {
"default": default,
"title": title,
"description": description,
"pattern": pattern,
"strict": strict,
"gt": gt,
"ge": ge,
"lt": lt,
"le": le,
"multiple_of": multiple_of,
"allow_inf_nan": allow_inf_nan,
"max_digits": max_digits,
"decimal_places": decimal_places,
"min_length": min_length,
"max_length": max_length,
}
# We only want to pass the args that were provided, otherwise the `Field()` function won't work as expected
provided_args = {k: v for (k, v) in field_args.items() if v is not PydanticUndefined}
# Because we are manually making fields optional, we need to store the original required bool for reference later
json_schema_extra_.orig_required = default is PydanticUndefined
# Make Input.Any and Input.Connection fields optional, providing None as a default if the field doesn't already have one
if input is Input.Any or input is Input.Connection:
default_ = None if default is PydanticUndefined else default
provided_args.update({"default": default_})
if default is not PydanticUndefined:
# Before invoking, we'll check for the original default value and set it on the field if the field has no value
json_schema_extra_.default = default
json_schema_extra_.orig_default = default
elif default is not PydanticUndefined:
default_ = default
provided_args.update({"default": default_})
json_schema_extra_.orig_default = default_
return Field(
**provided_args,
json_schema_extra=json_schema_extra_.model_dump(exclude_none=True),
)
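The net effect of the block above: any Input.Any or Input.Connection field is quietly made optional at class-build time, while orig_required and orig_default preserve the invoke()-time contract. A quick check of what lands on the resulting FieldInfo (a sketch; the exact extra-dict contents depend on exclude_none):

field_info = InputField(description="The image to process", input=Input.Connection)
# no default was given, so the field is made optional with default=None...
assert field_info.default is None
# ...while the original requirement survives for the pre-invoke check
assert field_info.json_schema_extra["orig_required"] is True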
def OutputField(
# copied from pydantic's Field
default: Any = _Unset,
title: str | None = _Unset,
description: str | None = _Unset,
pattern: str | None = _Unset,
strict: bool | None = _Unset,
gt: float | None = _Unset,
ge: float | None = _Unset,
lt: float | None = _Unset,
le: float | None = _Unset,
multiple_of: float | None = _Unset,
allow_inf_nan: bool | None = _Unset,
max_digits: int | None = _Unset,
decimal_places: int | None = _Unset,
min_length: int | None = _Unset,
max_length: int | None = _Unset,
# custom
ui_type: Optional[UIType] = None,
ui_hidden: bool = False,
ui_order: Optional[int] = None,
) -> Any:
"""
Creates an output field for an invocation output.
This is a wrapper for Pydantic's [Field](https://docs.pydantic.dev/1.10/usage/schema/#field-customization) \
that adds a few extra parameters to support graph execution and the node editor UI.
:param UIType ui_type: [None] Optionally provides an extra type hint for the UI. \
In some situations, the field's type is not enough to infer the correct UI type. \
For example, model selection fields should render a dropdown UI component to select a model. \
Internally, there is no difference between SD-1, SD-2 and SDXL model fields, they all use \
`MainModelField`. So to ensure the base-model-specific UI is rendered, you can use \
`UIType.SDXLMainModel` to indicate that the field is an SDXL main model field.
:param bool ui_hidden: [False] Specifies whether or not this field should be hidden in the UI.
:param int ui_order: [None] Specifies the order in which this field should be rendered in the UI.
"""
return Field(
default=default,
title=title,
description=description,
pattern=pattern,
strict=strict,
gt=gt,
ge=ge,
lt=lt,
le=le,
multiple_of=multiple_of,
allow_inf_nan=allow_inf_nan,
max_digits=max_digits,
decimal_places=decimal_places,
min_length=min_length,
max_length=max_length,
json_schema_extra=OutputFieldJSONSchemaExtra(
ui_type=ui_type,
ui_hidden=ui_hidden,
ui_order=ui_order,
field_kind=FieldKind.Output,
).model_dump(exclude_none=True),
)
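For symmetry, a minimal output definition against this API (the output class is hypothetical; invocation_output and BaseInvocationOutput come from baseinvocation):

@invocation_output("scale_example_output")
class ScaleExampleOutput(BaseInvocationOutput):
    """Hypothetical output with a single float field."""

    scale: float = OutputField(description="The computed scale factor")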

File diff suppressed because it is too large

View File

@ -6,14 +6,16 @@ from typing import Literal, Optional, get_args
import numpy as np
from PIL import Image, ImageOps
from invokeai.app.invocations.primitives import ColorField, ImageField, ImageOutput
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.app.invocations.fields import ColorField, ImageField
from invokeai.app.invocations.primitives import ImageOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.misc import SEED_MAX
from invokeai.backend.image_util.cv2_inpaint import cv2_inpaint
from invokeai.backend.image_util.lama import LaMA
from invokeai.backend.image_util.patchmatch import PatchMatch
from .baseinvocation import BaseInvocation, InputField, InvocationContext, WithMetadata, invocation
from .baseinvocation import BaseInvocation, invocation
from .fields import InputField, WithBoard, WithMetadata
from .image import PIL_RESAMPLING_MAP, PIL_RESAMPLING_MODES
@ -118,8 +120,8 @@ def tile_fill_missing(im: Image.Image, tile_size: int = 16, seed: Optional[int]
return si
@invocation("infill_rgba", title="Solid Color Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.0")
class InfillColorInvocation(BaseInvocation, WithMetadata):
@invocation("infill_rgba", title="Solid Color Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.1")
class InfillColorInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Infills transparent areas of an image with a solid color"""
image: ImageField = InputField(description="The image to infill")
@ -129,33 +131,20 @@ class InfillColorInvocation(BaseInvocation, WithMetadata):
)
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
image = context.images.get_pil(self.image.image_name)
solid_bg = Image.new("RGBA", image.size, self.color.tuple())
infilled = Image.alpha_composite(solid_bg, image.convert("RGBA"))
infilled.paste(image, (0, 0), image.split()[-1])
image_dto = context.services.images.create(
image=infilled,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata,
workflow=context.workflow,
)
image_dto = context.images.save(image=infilled)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
return ImageOutput.build(image_dto)
@invocation("infill_tile", title="Tile Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.1")
class InfillTileInvocation(BaseInvocation, WithMetadata):
@invocation("infill_tile", title="Tile Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.2")
class InfillTileInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Infills transparent areas of an image with tiles of the image"""
image: ImageField = InputField(description="The image to infill")
@ -168,33 +157,20 @@ class InfillTileInvocation(BaseInvocation, WithMetadata):
)
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
image = context.images.get_pil(self.image.image_name)
infilled = tile_fill_missing(image.copy(), seed=self.seed, tile_size=self.tile_size)
infilled.paste(image, (0, 0), image.split()[-1])
image_dto = context.services.images.create(
image=infilled,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata,
workflow=context.workflow,
)
image_dto = context.images.save(image=infilled)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
return ImageOutput.build(image_dto)
@invocation(
"infill_patchmatch", title="PatchMatch Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.0"
"infill_patchmatch", title="PatchMatch Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.1"
)
class InfillPatchMatchInvocation(BaseInvocation, WithMetadata):
class InfillPatchMatchInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Infills transparent areas of an image using the PatchMatch algorithm"""
image: ImageField = InputField(description="The image to infill")
@ -202,7 +178,7 @@ class InfillPatchMatchInvocation(BaseInvocation, WithMetadata):
resample_mode: PIL_RESAMPLING_MODES = InputField(default="bicubic", description="The resampling mode")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name).convert("RGBA")
image = context.images.get_pil(self.image.image_name).convert("RGBA")
resample_mode = PIL_RESAMPLING_MAP[self.resample_mode]
@ -227,77 +203,38 @@ class InfillPatchMatchInvocation(BaseInvocation, WithMetadata):
infilled.paste(image, (0, 0), mask=image.split()[-1])
# image.paste(infilled, (0, 0), mask=image.split()[-1])
image_dto = context.services.images.create(
image=infilled,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata,
workflow=context.workflow,
)
image_dto = context.images.save(image=infilled)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
return ImageOutput.build(image_dto)
@invocation("infill_lama", title="LaMa Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.0")
class LaMaInfillInvocation(BaseInvocation, WithMetadata):
@invocation("infill_lama", title="LaMa Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.1")
class LaMaInfillInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Infills transparent areas of an image using the LaMa model"""
image: ImageField = InputField(description="The image to infill")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
image = context.images.get_pil(self.image.image_name)
infilled = infill_lama(image.copy())
image_dto = context.services.images.create(
image=infilled,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata,
workflow=context.workflow,
)
image_dto = context.images.save(image=infilled)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
return ImageOutput.build(image_dto)
@invocation("infill_cv2", title="CV2 Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.0")
class CV2InfillInvocation(BaseInvocation, WithMetadata):
@invocation("infill_cv2", title="CV2 Infill", tags=["image", "inpaint"], category="inpaint", version="1.2.1")
class CV2InfillInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Infills transparent areas of an image using OpenCV Inpainting"""
image: ImageField = InputField(description="The image to infill")
def invoke(self, context: InvocationContext) -> ImageOutput:
image = context.services.images.get_pil_image(self.image.image_name)
image = context.images.get_pil(self.image.image_name)
infilled = infill_cv2(image.copy())
image_dto = context.services.images.create(
image=infilled,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata,
workflow=context.workflow,
)
image_dto = context.images.save(image=infilled)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
return ImageOutput.build(image_dto)
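Every infill node now mixes in WithBoard alongside WithMetadata, so board routing becomes a declared field instead of bespoke save arguments. A sketch of what a caller can now set (the ids are placeholders, and the board presumably flows into context.images.save):

node = InfillColorInvocation(
    id="infill-1",
    image=ImageField(image_name="raw.png"),
    color=ColorField(r=127, g=127, b=127, a=255),
    board=BoardField(board_id="my-board"),
)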

View File

@ -1,37 +1,29 @@
import os
from builtins import float
from typing import List, Union
from pydantic import BaseModel, ConfigDict, Field
from pydantic import BaseModel, Field, field_validator, model_validator
from typing_extensions import Self
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Input,
InputField,
InvocationContext,
OutputField,
invocation,
invocation_output,
)
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField
from invokeai.app.invocations.primitives import ImageField
from invokeai.app.shared.fields import FieldDescriptions
from invokeai.backend.model_management.models.base import BaseModelType, ModelType
from invokeai.backend.model_management.models.ip_adapter import get_ip_adapter_image_encoder_model_id
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.model_manager.config import BaseModelType, ModelType
# LS: Consider moving these two classes into model.py
class IPAdapterModelField(BaseModel):
model_name: str = Field(description="Name of the IP-Adapter model")
base_model: BaseModelType = Field(description="Base model")
model_config = ConfigDict(protected_namespaces=())
key: str = Field(description="Key to the IP-Adapter model")
class CLIPVisionModelField(BaseModel):
model_name: str = Field(description="Name of the CLIP Vision image encoder model")
base_model: BaseModelType = Field(description="Base model (usually 'Any')")
model_config = ConfigDict(protected_namespaces=())
key: str = Field(description="Key to the CLIP Vision image encoder model")
class IPAdapterField(BaseModel):
@ -39,7 +31,6 @@ class IPAdapterField(BaseModel):
ip_adapter_model: IPAdapterModelField = Field(description="The IP-Adapter model to use.")
image_encoder_model: CLIPVisionModelField = Field(description="The name of the CLIP image encoder model.")
weight: Union[float, List[float]] = Field(default=1, description="The weight given to the IP-Adapter")
# weight: float = Field(default=1.0, ge=0, description="The weight of the IP-Adapter.")
begin_step_percent: float = Field(
default=0, ge=0, le=1, description="When the IP-Adapter is first applied (% of total steps)"
)
@ -47,6 +38,17 @@ class IPAdapterField(BaseModel):
default=1, ge=0, le=1, description="When the IP-Adapter is last applied (% of total steps)"
)
@field_validator("weight")
@classmethod
def validate_ip_adapter_weight(cls, v: float) -> float:
validate_weights(v)
return v
@model_validator(mode="after")
def validate_begin_end_step_percent(self) -> Self:
validate_begin_end_step(self.begin_step_percent, self.end_step_percent)
return self
@invocation_output("ip_adapter_output")
class IPAdapterOutput(BaseInvocationOutput):
@ -54,7 +56,7 @@ class IPAdapterOutput(BaseInvocationOutput):
ip_adapter: IPAdapterField = OutputField(description=FieldDescriptions.ip_adapter, title="IP-Adapter")
@invocation("ip_adapter", title="IP-Adapter", tags=["ip_adapter", "control"], category="ip_adapter", version="1.1.0")
@invocation("ip_adapter", title="IP-Adapter", tags=["ip_adapter", "control"], category="ip_adapter", version="1.1.2")
class IPAdapterInvocation(BaseInvocation):
"""Collects IP-Adapter info to pass to other nodes."""
@ -64,36 +66,37 @@ class IPAdapterInvocation(BaseInvocation):
description="The IP-Adapter model.", title="IP-Adapter Model", input=Input.Direct, ui_order=-1
)
# weight: float = InputField(default=1.0, description="The weight of the IP-Adapter.", ui_type=UIType.Float)
weight: Union[float, List[float]] = InputField(
default=1, ge=-1, description="The weight given to the IP-Adapter", title="Weight"
default=1, description="The weight given to the IP-Adapter", title="Weight"
)
begin_step_percent: float = InputField(
default=0, ge=-1, le=2, description="When the IP-Adapter is first applied (% of total steps)"
default=0, ge=0, le=1, description="When the IP-Adapter is first applied (% of total steps)"
)
end_step_percent: float = InputField(
default=1, ge=0, le=1, description="When the IP-Adapter is last applied (% of total steps)"
)
@field_validator("weight")
@classmethod
def validate_ip_adapter_weight(cls, v: float) -> float:
validate_weights(v)
return v
@model_validator(mode="after")
def validate_begin_end_step_percent(self) -> Self:
validate_begin_end_step(self.begin_step_percent, self.end_step_percent)
return self
def invoke(self, context: InvocationContext) -> IPAdapterOutput:
# Lookup the CLIP Vision encoder that is intended to be used with the IP-Adapter model.
ip_adapter_info = context.services.model_manager.model_info(
self.ip_adapter_model.model_name, self.ip_adapter_model.base_model, ModelType.IPAdapter
)
# HACK(ryand): This is bad for a couple of reasons: 1) we are bypassing the model manager to read the model
# directly, and 2) we are reading from disk every time this invocation is called without caching the result.
# A better solution would be to store the image encoder model reference in the IP-Adapter model info, but this
# is currently messy due to differences between how the model info is generated when installing a model from
# disk vs. downloading the model.
image_encoder_model_id = get_ip_adapter_image_encoder_model_id(
os.path.join(context.services.configuration.get_config().models_path, ip_adapter_info["path"])
)
ip_adapter_info = context.models.get_config(self.ip_adapter_model.key)
image_encoder_model_id = ip_adapter_info.image_encoder_model_id
image_encoder_model_name = image_encoder_model_id.split("/")[-1].strip()
image_encoder_model = CLIPVisionModelField(
model_name=image_encoder_model_name,
base_model=BaseModelType.Any,
image_encoder_models = context.models.search_by_attrs(
model_name=image_encoder_model_name, base_model=BaseModelType.Any, model_type=ModelType.CLIPVision
)
assert len(image_encoder_models) == 1
image_encoder_model = CLIPVisionModelField(key=image_encoder_models[0].key)
return IPAdapterOutput(
ip_adapter=IPAdapterField(
image=self.image,
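The weight and step-range checks are shared between IPAdapterField and the invocation through the util helpers imported at the top of this file. Their call shapes, as used in the validators above (the failure behaviour is an assumption):

from invokeai.app.invocations.util import validate_begin_end_step, validate_weights

validate_weights(0.75)           # scalar weight
validate_weights([0.5, 0.75])    # or a per-step list
validate_begin_end_step(0.0, 1.0)
# out-of-range or out-of-order values presumably raise, surfacing as a
# pydantic ValidationError on the node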

View File

@ -1,14 +1,17 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654)
import math
from contextlib import ExitStack
from functools import singledispatchmethod
from typing import List, Literal, Optional, Union
from typing import Any, Iterator, List, Literal, Optional, Tuple, Union
import einops
import numpy as np
import numpy.typing as npt
import torch
import torchvision.transforms as T
from diffusers import AutoencoderKL, AutoencoderTiny
from diffusers.configuration_utils import ConfigMixin
from diffusers.image_processor import VaeImageProcessor
from diffusers.models.adapter import T2IAdapter
from diffusers.models.attention_processor import (
@ -17,34 +20,44 @@ from diffusers.models.attention_processor import (
LoRAXFormersAttnProcessor,
XFormersAttnProcessor,
)
from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel
from diffusers.schedulers import DPMSolverSDEScheduler
from diffusers.schedulers import SchedulerMixin as Scheduler
from PIL import Image, ImageFilter
from pydantic import field_validator
from torchvision.transforms.functional import resize as tv_resize
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR, SCHEDULER_NAME_VALUES
from invokeai.app.invocations.fields import (
ConditioningField,
DenoiseMaskField,
FieldDescriptions,
ImageField,
Input,
InputField,
LatentsField,
OutputField,
UIType,
WithBoard,
WithMetadata,
)
from invokeai.app.invocations.ip_adapter import IPAdapterField
from invokeai.app.invocations.primitives import (
DenoiseMaskField,
DenoiseMaskOutput,
ImageField,
ImageOutput,
LatentsField,
LatentsOutput,
build_latents_output,
)
from invokeai.app.invocations.t2i_adapter import T2IAdapterField
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.app.shared.fields import FieldDescriptions
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.controlnet_utils import prepare_control_image
from invokeai.app.util.step_callback import stable_diffusion_step_callback
from invokeai.backend.ip_adapter.ip_adapter import IPAdapter, IPAdapterPlus
from invokeai.backend.model_management.models import ModelType, SilenceWarnings
from invokeai.backend.lora import LoRAModelRaw
from invokeai.backend.model_manager import BaseModelType, LoadedModel
from invokeai.backend.model_patcher import ModelPatcher
from invokeai.backend.stable_diffusion import PipelineIntermediateState, set_seamless
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import ConditioningData, IPAdapterConditioningInfo
from invokeai.backend.util.silence_warnings import SilenceWarnings
from ...backend.model_management.lora import ModelPatcher
from ...backend.model_management.models import BaseModelType
from ...backend.model_management.seamless import set_seamless
from ...backend.stable_diffusion import PipelineIntermediateState
from ...backend.stable_diffusion.diffusers_pipeline import (
ControlNetData,
IPAdapterData,
@ -58,16 +71,9 @@ from ...backend.util.devices import choose_precision, choose_torch_device
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Input,
InputField,
InvocationContext,
OutputField,
UIType,
WithMetadata,
invocation,
invocation_output,
)
from .compel import ConditioningField
from .controlnet_image_processors import ControlField
from .model import ModelInfo, UNetField, VaeField
@ -76,18 +82,10 @@ if choose_torch_device() == torch.device("mps"):
DEFAULT_PRECISION = choose_precision(choose_torch_device())
SAMPLER_NAME_VALUES = Literal[tuple(SCHEDULER_MAP.keys())]
# HACK: Many nodes are currently hard-coded to use a fixed latent scale factor of 8. This is fragile, and will need to
# be addressed if future models use a different latent scale factor. Also, note that there may be places where the scale
# factor is hard-coded to a literal '8' rather than using this constant.
# The ratio of image:latent dimensions is LATENT_SCALE_FACTOR:1, or 8:1.
LATENT_SCALE_FACTOR = 8
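The constant only encodes the 8:1 pixel-to-latent ratio described in the comment, e.g.:

# a 512x768 px image maps to a 64x96 latent
latent_width, latent_height = 512 // LATENT_SCALE_FACTOR, 768 // LATENT_SCALE_FACTOR
assert (latent_width, latent_height) == (64, 96)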
@invocation_output("scheduler_output")
class SchedulerOutput(BaseInvocationOutput):
scheduler: SAMPLER_NAME_VALUES = OutputField(description=FieldDescriptions.scheduler, ui_type=UIType.Scheduler)
scheduler: SCHEDULER_NAME_VALUES = OutputField(description=FieldDescriptions.scheduler, ui_type=UIType.Scheduler)
@invocation(
@ -100,7 +98,7 @@ class SchedulerOutput(BaseInvocationOutput):
class SchedulerInvocation(BaseInvocation):
"""Selects a scheduler."""
scheduler: SAMPLER_NAME_VALUES = InputField(
scheduler: SCHEDULER_NAME_VALUES = InputField(
default="euler",
description=FieldDescriptions.scheduler,
ui_type=UIType.Scheduler,
@ -115,7 +113,7 @@ class SchedulerInvocation(BaseInvocation):
title="Create Denoise Mask",
tags=["mask", "denoise"],
category="latents",
version="1.0.0",
version="1.0.1",
)
class CreateDenoiseMaskInvocation(BaseInvocation):
"""Creates mask for denoising model run."""
@ -130,10 +128,10 @@ class CreateDenoiseMaskInvocation(BaseInvocation):
ui_order=4,
)
def prep_mask_tensor(self, mask_image):
def prep_mask_tensor(self, mask_image: Image.Image) -> torch.Tensor:
if mask_image.mode != "L":
mask_image = mask_image.convert("L")
mask_tensor = image_resized_to_grid_as_tensor(mask_image, normalize=False)
mask_tensor: torch.Tensor = image_resized_to_grid_as_tensor(mask_image, normalize=False)
if mask_tensor.dim() == 3:
mask_tensor = mask_tensor.unsqueeze(0)
# if shape is not None:
@ -143,41 +141,90 @@ class CreateDenoiseMaskInvocation(BaseInvocation):
@torch.no_grad()
def invoke(self, context: InvocationContext) -> DenoiseMaskOutput:
if self.image is not None:
image = context.services.images.get_pil_image(self.image.image_name)
image = image_resized_to_grid_as_tensor(image.convert("RGB"))
if image.dim() == 3:
image = image.unsqueeze(0)
image = context.images.get_pil(self.image.image_name)
image_tensor = image_resized_to_grid_as_tensor(image.convert("RGB"))
if image_tensor.dim() == 3:
image_tensor = image_tensor.unsqueeze(0)
else:
image = None
image_tensor = None
mask = self.prep_mask_tensor(
context.services.images.get_pil_image(self.mask.image_name),
context.images.get_pil(self.mask.image_name),
)
if image is not None:
vae_info = context.services.model_manager.get_model(
**self.vae.vae.model_dump(),
context=context,
)
if image_tensor is not None:
vae_info = context.models.load(**self.vae.vae.model_dump())
img_mask = tv_resize(mask, image.shape[-2:], T.InterpolationMode.BILINEAR, antialias=False)
masked_image = image * torch.where(img_mask < 0.5, 0.0, 1.0)
img_mask = tv_resize(mask, image_tensor.shape[-2:], T.InterpolationMode.BILINEAR, antialias=False)
masked_image = image_tensor * torch.where(img_mask < 0.5, 0.0, 1.0)
# TODO:
masked_latents = ImageToLatentsInvocation.vae_encode(vae_info, self.fp32, self.tiled, masked_image.clone())
masked_latents_name = f"{context.graph_execution_state_id}__{self.id}_masked_latents"
context.services.latents.save(masked_latents_name, masked_latents)
masked_latents_name = context.tensors.save(tensor=masked_latents)
else:
masked_latents_name = None
mask_name = f"{context.graph_execution_state_id}__{self.id}_mask"
context.services.latents.save(mask_name, mask)
mask_name = context.tensors.save(tensor=mask)
return DenoiseMaskOutput(
denoise_mask=DenoiseMaskField(
mask_name=mask_name,
masked_latents_name=masked_latents_name,
),
return DenoiseMaskOutput.build(
mask_name=mask_name,
masked_latents_name=masked_latents_name,
gradient=False,
)
@invocation(
"create_gradient_mask",
title="Create Gradient Mask",
tags=["mask", "denoise"],
category="latents",
version="1.0.0",
)
class CreateGradientMaskInvocation(BaseInvocation):
"""Creates mask for denoising model run."""
mask: ImageField = InputField(default=None, description="Image which will be masked", ui_order=1)
edge_radius: int = InputField(
default=16, ge=0, description="How far to blur/expand the edges of the mask", ui_order=2
)
coherence_mode: Literal["Gaussian Blur", "Box Blur", "Staged"] = InputField(default="Gaussian Blur", ui_order=3)
minimum_denoise: float = InputField(
default=0.0, ge=0, le=1, description="Minimum denoise level for the coherence region", ui_order=4
)
@torch.no_grad()
def invoke(self, context: InvocationContext) -> DenoiseMaskOutput:
mask_image = context.images.get_pil(self.mask.image_name, mode="L")
if self.coherence_mode == "Box Blur":
blur_mask = mask_image.filter(ImageFilter.BoxBlur(self.edge_radius))
else: # Gaussian Blur OR Staged
# Gaussian Blur uses standard deviation. 1/2 radius is a good approximation
blur_mask = mask_image.filter(ImageFilter.GaussianBlur(self.edge_radius / 2))
mask_tensor: torch.Tensor = image_resized_to_grid_as_tensor(mask_image, normalize=False)
blur_tensor: torch.Tensor = image_resized_to_grid_as_tensor(blur_mask, normalize=False)
# redistribute blur so that the edges are 0 and blur out to 1
blur_tensor = (blur_tensor - 0.5) * 2
threshold = 1 - self.minimum_denoise
if self.coherence_mode == "Staged":
# wherever the blur_tensor is masked to any degree, convert it to threshold
blur_tensor = torch.where((blur_tensor < 1), threshold, blur_tensor)
else:
# wherever the blur_tensor is above threshold but less than 1, drop it to threshold
blur_tensor = torch.where((blur_tensor > threshold) & (blur_tensor < 1), threshold, blur_tensor)
# multiply original mask to force actually masked regions to 0
blur_tensor = mask_tensor * blur_tensor
mask_name = context.tensors.save(tensor=blur_tensor.unsqueeze(1))
return DenoiseMaskOutput.build(
mask_name=mask_name,
masked_latents_name=None,
gradient=True,
)
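A numeric walk-through of the remapping above, for the non-staged path with minimum_denoise=0.25:

import torch

blur = torch.tensor([0.5, 0.7, 0.9, 1.0])  # sample blurred mask values in [0.5, 1]
blur = (blur - 0.5) * 2                    # rescale [0.5, 1] onto [0, 1]
threshold = 1 - 0.25                       # 1 - minimum_denoise
blur = torch.where((blur > threshold) & (blur < 1), threshold, blur)
print(blur)  # tensor([0.0000, 0.4000, 0.7500, 1.0000])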
@ -188,10 +235,7 @@ def get_scheduler(
seed: int,
) -> Scheduler:
scheduler_class, scheduler_extra_config = SCHEDULER_MAP.get(scheduler_name, SCHEDULER_MAP["ddim"])
orig_scheduler_info = context.services.model_manager.get_model(
**scheduler_info.model_dump(),
context=context,
)
orig_scheduler_info = context.models.load(**scheduler_info.model_dump())
with orig_scheduler_info as orig_scheduler:
scheduler_config = orig_scheduler.config
@ -199,7 +243,7 @@ def get_scheduler(
scheduler_config = scheduler_config["_backup"]
scheduler_config = {
**scheduler_config,
**scheduler_extra_config,
**scheduler_extra_config, # FIXME
"_backup": scheduler_config,
}
@ -212,6 +256,7 @@ def get_scheduler(
# hack copied over from generate.py
if not hasattr(scheduler, "uses_inpainting_model"):
scheduler.uses_inpainting_model = lambda: False
assert isinstance(scheduler, Scheduler)
return scheduler
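The "_backup" entry is what keeps this loader idempotent: SCHEDULER_MAP overrides are layered onto the original config, and the pristine config is stashed so the next call can restore it before re-applying. In outline:

def apply_overrides(config: dict, extra: dict) -> dict:
    # restore the pristine config if overrides were applied before
    if "_backup" in config:
        config = config["_backup"]
    return {**config, **extra, "_backup": config}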
@ -220,7 +265,7 @@ def get_scheduler(
title="Denoise Latents",
tags=["latents", "denoise", "txt2img", "t2i", "t2l", "img2img", "i2i", "l2l"],
category="latents",
version="1.5.0",
version="1.5.2",
)
class DenoiseLatentsInvocation(BaseInvocation):
"""Denoises noisy latents to decodable images"""
@ -248,7 +293,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
description=FieldDescriptions.denoising_start,
)
denoising_end: float = InputField(default=1.0, ge=0, le=1, description=FieldDescriptions.denoising_end)
scheduler: SAMPLER_NAME_VALUES = InputField(
scheduler: SCHEDULER_NAME_VALUES = InputField(
default="euler",
description=FieldDescriptions.scheduler,
ui_type=UIType.Scheduler,
@ -279,7 +324,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
ui_order=7,
)
cfg_rescale_multiplier: float = InputField(
default=0, ge=0, lt=1, description=FieldDescriptions.cfg_rescale_multiplier
title="CFG Rescale Multiplier", default=0, ge=0, lt=1, description=FieldDescriptions.cfg_rescale_multiplier
)
latents: Optional[LatentsField] = InputField(
default=None,
@ -295,7 +340,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
)
@field_validator("cfg_scale")
def ge_one(cls, v):
def ge_one(cls, v: Union[List[float], float]) -> Union[List[float], float]:
"""validate that all cfg_scale values are >= 1"""
if isinstance(v, list):
for i in v:
@ -306,34 +351,18 @@ class DenoiseLatentsInvocation(BaseInvocation):
raise ValueError("cfg_scale must be greater than or equal to 1")
return v
# TODO: pass this an emitter method or something? or a session for dispatching?
def dispatch_progress(
self,
context: InvocationContext,
source_node_id: str,
intermediate_state: PipelineIntermediateState,
base_model: BaseModelType,
) -> None:
stable_diffusion_step_callback(
context=context,
intermediate_state=intermediate_state,
node=self.model_dump(),
source_node_id=source_node_id,
base_model=base_model,
)
def get_conditioning_data(
self,
context: InvocationContext,
scheduler,
unet,
seed,
scheduler: Scheduler,
unet: UNet2DConditionModel,
seed: int,
) -> ConditioningData:
positive_cond_data = context.services.latents.get(self.positive_conditioning.conditioning_name)
positive_cond_data = context.conditioning.load(self.positive_conditioning.conditioning_name)
c = positive_cond_data.conditionings[0].to(device=unet.device, dtype=unet.dtype)
extra_conditioning_info = c.extra_conditioning
negative_cond_data = context.services.latents.get(self.negative_conditioning.conditioning_name)
negative_cond_data = context.conditioning.load(self.negative_conditioning.conditioning_name)
uc = negative_cond_data.conditionings[0].to(device=unet.device, dtype=unet.dtype)
conditioning_data = ConditioningData(
@ -350,7 +379,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
),
)
conditioning_data = conditioning_data.add_scheduler_args_if_applicable(
conditioning_data = conditioning_data.add_scheduler_args_if_applicable( # FIXME
scheduler,
# for ddim scheduler
eta=0.0, # ddim_eta
@ -362,8 +391,8 @@ class DenoiseLatentsInvocation(BaseInvocation):
def create_pipeline(
self,
unet,
scheduler,
unet: UNet2DConditionModel,
scheduler: Scheduler,
) -> StableDiffusionGeneratorPipeline:
# TODO:
# configure_model_padding(
@ -374,10 +403,10 @@ class DenoiseLatentsInvocation(BaseInvocation):
class FakeVae:
class FakeVaeConfig:
def __init__(self):
def __init__(self) -> None:
self.block_out_channels = [0]
def __init__(self):
def __init__(self) -> None:
self.config = FakeVae.FakeVaeConfig()
return StableDiffusionGeneratorPipeline(
@ -394,11 +423,11 @@ class DenoiseLatentsInvocation(BaseInvocation):
def prep_control_data(
self,
context: InvocationContext,
control_input: Union[ControlField, List[ControlField]],
control_input: Optional[Union[ControlField, List[ControlField]]],
latents_shape: List[int],
exit_stack: ExitStack,
do_classifier_free_guidance: bool = True,
) -> List[ControlNetData]:
) -> Optional[List[ControlNetData]]:
# Assuming fixed dimensional scaling of LATENT_SCALE_FACTOR.
control_height_resize = latents_shape[2] * LATENT_SCALE_FACTOR
control_width_resize = latents_shape[3] * LATENT_SCALE_FACTOR
@ -420,18 +449,11 @@ class DenoiseLatentsInvocation(BaseInvocation):
# and if weight is None, populate with default 1.0?
controlnet_data = []
for control_info in control_list:
control_model = exit_stack.enter_context(
context.services.model_manager.get_model(
model_name=control_info.control_model.model_name,
model_type=ModelType.ControlNet,
base_model=control_info.control_model.base_model,
context=context,
)
)
control_model = exit_stack.enter_context(context.models.load(key=control_info.control_model.key))
# control_models.append(control_model)
control_image_field = control_info.image
input_image = context.services.images.get_pil_image(control_image_field.image_name)
input_image = context.images.get_pil(control_image_field.image_name)
# self.image.image_type, self.image.image_name
# FIXME: still need to test with different widths, heights, devices, dtypes
# and add in batch_size, num_images_per_prompt?
@ -489,27 +511,17 @@ class DenoiseLatentsInvocation(BaseInvocation):
conditioning_data.ip_adapter_conditioning = []
for single_ip_adapter in ip_adapter:
ip_adapter_model: Union[IPAdapter, IPAdapterPlus] = exit_stack.enter_context(
context.services.model_manager.get_model(
model_name=single_ip_adapter.ip_adapter_model.model_name,
model_type=ModelType.IPAdapter,
base_model=single_ip_adapter.ip_adapter_model.base_model,
context=context,
)
context.models.load(key=single_ip_adapter.ip_adapter_model.key)
)
image_encoder_model_info = context.services.model_manager.get_model(
model_name=single_ip_adapter.image_encoder_model.model_name,
model_type=ModelType.CLIPVision,
base_model=single_ip_adapter.image_encoder_model.base_model,
context=context,
)
image_encoder_model_info = context.models.load(key=single_ip_adapter.image_encoder_model.key)
# `single_ip_adapter.image` could be a list or a single ImageField. Normalize to a list here.
single_ipa_images = single_ip_adapter.image
if not isinstance(single_ipa_images, list):
single_ipa_images = [single_ipa_images]
single_ipa_image_fields = single_ip_adapter.image
if not isinstance(single_ipa_image_fields, list):
single_ipa_image_fields = [single_ipa_image_fields]
single_ipa_images = [context.services.images.get_pil_image(image.image_name) for image in single_ipa_images]
single_ipa_images = [context.images.get_pil(image.image_name) for image in single_ipa_image_fields]
# TODO(ryand): With some effort, the step of running the CLIP Vision encoder could be done before any other
# models are needed in memory. This would help to reduce peak memory utilization in low-memory environments.
@ -553,26 +565,20 @@ class DenoiseLatentsInvocation(BaseInvocation):
t2i_adapter_data = []
for t2i_adapter_field in t2i_adapter:
t2i_adapter_model_info = context.services.model_manager.get_model(
model_name=t2i_adapter_field.t2i_adapter_model.model_name,
model_type=ModelType.T2IAdapter,
base_model=t2i_adapter_field.t2i_adapter_model.base_model,
context=context,
)
image = context.services.images.get_pil_image(t2i_adapter_field.image.image_name)
t2i_adapter_model_config = context.models.get_config(key=t2i_adapter_field.t2i_adapter_model.key)
t2i_adapter_loaded_model = context.models.load(key=t2i_adapter_field.t2i_adapter_model.key)
image = context.images.get_pil(t2i_adapter_field.image.image_name)
# The max_unet_downscale is the maximum amount that the UNet model downscales the latent image internally.
if t2i_adapter_field.t2i_adapter_model.base_model == BaseModelType.StableDiffusion1:
if t2i_adapter_model_config.base == BaseModelType.StableDiffusion1:
max_unet_downscale = 8
elif t2i_adapter_field.t2i_adapter_model.base_model == BaseModelType.StableDiffusionXL:
elif t2i_adapter_model_config.base == BaseModelType.StableDiffusionXL:
max_unet_downscale = 4
else:
raise ValueError(
f"Unexpected T2I-Adapter base model type: '{t2i_adapter_field.t2i_adapter_model.base_model}'."
)
raise ValueError(f"Unexpected T2I-Adapter base model type: '{t2i_adapter_model_config.base}'.")
t2i_adapter_model: T2IAdapter
with t2i_adapter_model_info as t2i_adapter_model:
with t2i_adapter_loaded_model as t2i_adapter_model:
total_downscale_factor = t2i_adapter_model.total_downscale_factor
# Resize the T2I-Adapter input image.
@ -592,7 +598,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
do_classifier_free_guidance=False,
width=t2i_input_width,
height=t2i_input_height,
num_channels=t2i_adapter_model.config.in_channels,
num_channels=t2i_adapter_model.config["in_channels"], # mypy treats this as a FrozenDict
device=t2i_adapter_model.device,
dtype=t2i_adapter_model.dtype,
resize_mode=t2i_adapter_field.resize_mode,
@ -617,7 +623,15 @@ class DenoiseLatentsInvocation(BaseInvocation):
# original idea by https://github.com/AmericanPresidentJimmyCarter
# TODO: research more for second order schedulers timesteps
def init_scheduler(self, scheduler, device, steps, denoising_start, denoising_end):
def init_scheduler(
self,
scheduler: Union[Scheduler, ConfigMixin],
device: torch.device,
steps: int,
denoising_start: float,
denoising_end: float,
) -> Tuple[int, List[int], int]:
assert isinstance(scheduler, ConfigMixin)
if scheduler.config.get("cpu_only", False):
scheduler.set_timesteps(steps, device="cpu")
timesteps = scheduler.timesteps.to(device=device)
@ -629,11 +643,11 @@ class DenoiseLatentsInvocation(BaseInvocation):
_timesteps = timesteps[:: scheduler.order]
# get start timestep index
t_start_val = int(round(scheduler.config.num_train_timesteps * (1 - denoising_start)))
t_start_val = int(round(scheduler.config["num_train_timesteps"] * (1 - denoising_start)))
t_start_idx = len(list(filter(lambda ts: ts >= t_start_val, _timesteps)))
# get end timestep index
t_end_val = int(round(scheduler.config.num_train_timesteps * (1 - denoising_end)))
t_end_val = int(round(scheduler.config["num_train_timesteps"] * (1 - denoising_end)))
t_end_idx = len(list(filter(lambda ts: ts >= t_end_val, _timesteps[t_start_idx:])))
# apply order to indexes
@ -646,18 +660,20 @@ class DenoiseLatentsInvocation(BaseInvocation):
return num_inference_steps, timesteps, init_timestep
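A worked illustration of the index math above, with hypothetical timesteps for a first-order scheduler (num_train_timesteps=1000, 10 steps):
timesteps = [999, 899, 799, 699, 599, 499, 399, 299, 199, 99]

denoising_start, denoising_end = 0.3, 0.8
t_start_val = int(round(1000 * (1 - denoising_start)))                    # 700
t_start_idx = len([t for t in timesteps if t >= t_start_val])             # 3 (999, 899, 799)

t_end_val = int(round(1000 * (1 - denoising_end)))                        # 200
t_end_idx = len([t for t in timesteps[t_start_idx:] if t >= t_end_val])   # 5 (699..299)
# denoising then runs over timesteps[3:3 + 5] == [699, 599, 499, 399, 299]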
def prep_inpaint_mask(self, context, latents):
def prep_inpaint_mask(
self, context: InvocationContext, latents: torch.Tensor
) -> Tuple[Optional[torch.Tensor], Optional[torch.Tensor], bool]:
if self.denoise_mask is None:
return None, None
return None, None, False
mask = context.services.latents.get(self.denoise_mask.mask_name)
mask = context.tensors.load(self.denoise_mask.mask_name)
mask = tv_resize(mask, latents.shape[-2:], T.InterpolationMode.BILINEAR, antialias=False)
if self.denoise_mask.masked_latents_name is not None:
masked_latents = context.services.latents.get(self.denoise_mask.masked_latents_name)
masked_latents = context.tensors.load(self.denoise_mask.masked_latents_name)
else:
masked_latents = None
return 1 - mask, masked_latents
return 1 - mask, masked_latents, self.denoise_mask.gradient
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
@ -665,11 +681,11 @@ class DenoiseLatentsInvocation(BaseInvocation):
seed = None
noise = None
if self.noise is not None:
noise = context.services.latents.get(self.noise.latents_name)
noise = context.tensors.load(self.noise.latents_name)
seed = self.noise.seed
if self.latents is not None:
latents = context.services.latents.get(self.latents.latents_name)
latents = context.tensors.load(self.latents.latents_name)
if seed is None:
seed = self.latents.seed
@ -684,7 +700,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
if seed is None:
seed = 0
mask, masked_latents = self.prep_inpaint_mask(context, latents)
mask, masked_latents, gradient_mask = self.prep_inpaint_mask(context, latents)
# TODO(ryand): I have hard-coded `do_classifier_free_guidance=True` to mirror the behaviour of ControlNets,
# below. Investigate whether this is appropriate.
@ -695,35 +711,30 @@ class DenoiseLatentsInvocation(BaseInvocation):
do_classifier_free_guidance=True,
)
# Get the source node id (we are invoking the prepared node)
graph_execution_state = context.services.graph_execution_manager.get(context.graph_execution_state_id)
source_node_id = graph_execution_state.prepared_source_mapping[self.id]
# get the unet's config so that we can pass the base to dispatch_progress()
unet_config = context.models.get_config(self.unet.unet.key)
def step_callback(state: PipelineIntermediateState):
self.dispatch_progress(context, source_node_id, state, self.unet.unet.base_model)
def step_callback(state: PipelineIntermediateState) -> None:
context.util.sd_step_callback(state, unet_config.base)
def _lora_loader():
def _lora_loader() -> Iterator[Tuple[LoRAModelRaw, float]]:
for lora in self.unet.loras:
lora_info = context.services.model_manager.get_model(
**lora.model_dump(exclude={"weight"}),
context=context,
)
yield (lora_info.context.model, lora.weight)
lora_info = context.models.load(**lora.model_dump(exclude={"weight"}))
yield (lora_info.model, lora.weight)
del lora_info
return
unet_info = context.services.model_manager.get_model(
**self.unet.unet.model_dump(),
context=context,
)
unet_info = context.models.load(**self.unet.unet.model_dump())
assert isinstance(unet_info.model, UNet2DConditionModel)
with (
ExitStack() as exit_stack,
ModelPatcher.apply_freeu(unet_info.context.model, self.unet.freeu_config),
set_seamless(unet_info.context.model, self.unet.seamless_axes),
ModelPatcher.apply_freeu(unet_info.model, self.unet.freeu_config),
set_seamless(unet_info.model, self.unet.seamless_axes), # FIXME
unet_info as unet,
# Apply the LoRA after unet has been moved to its target device for faster patching.
ModelPatcher.apply_lora_unet(unet, _lora_loader()),
):
assert isinstance(unet, UNet2DConditionModel)
latents = latents.to(device=unet.device, dtype=unet.dtype)
if noise is not None:
noise = noise.to(device=unet.device, dtype=unet.dtype)
@ -777,6 +788,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
seed=seed,
mask=mask,
masked_latents=masked_latents,
gradient_mask=gradient_mask,
num_inference_steps=num_inference_steps,
conditioning_data=conditioning_data,
control_data=controlnet_data,
@ -791,9 +803,8 @@ class DenoiseLatentsInvocation(BaseInvocation):
if choose_torch_device() == torch.device("mps"):
mps.empty_cache()
name = f"{context.graph_execution_state_id}__{self.id}"
context.services.latents.save(name, result_latents)
return build_latents_output(latents_name=name, latents=result_latents, seed=seed)
name = context.tensors.save(tensor=result_latents)
return LatentsOutput.build(latents_name=name, latents=result_latents, seed=seed)
@invocation(
@ -801,9 +812,9 @@ class DenoiseLatentsInvocation(BaseInvocation):
title="Latents to Image",
tags=["latents", "image", "vae", "l2i"],
category="latents",
version="1.2.0",
version="1.2.1",
)
class LatentsToImageInvocation(BaseInvocation, WithMetadata):
class LatentsToImageInvocation(BaseInvocation, WithMetadata, WithBoard):
"""Generates an image from latents."""
latents: LatentsField = InputField(
@ -819,14 +830,12 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata):
@torch.no_grad()
def invoke(self, context: InvocationContext) -> ImageOutput:
latents = context.services.latents.get(self.latents.latents_name)
latents = context.tensors.load(self.latents.latents_name)
vae_info = context.services.model_manager.get_model(
**self.vae.vae.model_dump(),
context=context,
)
vae_info = context.models.load(**self.vae.vae.model_dump())
with set_seamless(vae_info.context.model, self.vae.seamless_axes), vae_info as vae:
with set_seamless(vae_info.model, self.vae.seamless_axes), vae_info as vae:
assert isinstance(vae, torch.nn.Module)
latents = latents.to(vae.device)
if self.fp32:
vae.to(dtype=torch.float32)
@ -853,7 +862,7 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata):
vae.to(dtype=torch.float16)
latents = latents.half()
if self.tiled or context.services.configuration.tiled_decode:
if self.tiled or context.config.get().tiled_decode:
vae.enable_tiling()
else:
vae.disable_tiling()
@ -877,22 +886,9 @@ class LatentsToImageInvocation(BaseInvocation, WithMetadata):
if choose_torch_device() == torch.device("mps"):
mps.empty_cache()
image_dto = context.services.images.create(
image=image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata,
workflow=context.workflow,
)
image_dto = context.images.save(image=image)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
return ImageOutput.build(image_dto)
LATENTS_INTERPOLATION_MODE = Literal["nearest", "linear", "bilinear", "bicubic", "trilinear", "area", "nearest-exact"]
@ -903,7 +899,7 @@ LATENTS_INTERPOLATION_MODE = Literal["nearest", "linear", "bilinear", "bicubic",
title="Resize Latents",
tags=["latents", "resize"],
category="latents",
version="1.0.0",
version="1.0.1",
)
class ResizeLatentsInvocation(BaseInvocation):
"""Resizes latents to explicit width/height (in pixels). Provided dimensions are floor-divided by 8."""
@ -926,7 +922,7 @@ class ResizeLatentsInvocation(BaseInvocation):
antialias: bool = InputField(default=False, description=FieldDescriptions.torch_antialias)
def invoke(self, context: InvocationContext) -> LatentsOutput:
latents = context.services.latents.get(self.latents.latents_name)
latents = context.tensors.load(self.latents.latents_name)
# TODO:
device = choose_torch_device()
@ -944,10 +940,8 @@ class ResizeLatentsInvocation(BaseInvocation):
if device == torch.device("mps"):
mps.empty_cache()
name = f"{context.graph_execution_state_id}__{self.id}"
# context.services.latents.set(name, resized_latents)
context.services.latents.save(name, resized_latents)
return build_latents_output(latents_name=name, latents=resized_latents, seed=self.latents.seed)
name = context.tensors.save(tensor=resized_latents)
return LatentsOutput.build(latents_name=name, latents=resized_latents, seed=self.latents.seed)
@invocation(
@ -955,7 +949,7 @@ class ResizeLatentsInvocation(BaseInvocation):
title="Scale Latents",
tags=["latents", "resize"],
category="latents",
version="1.0.0",
version="1.0.1",
)
class ScaleLatentsInvocation(BaseInvocation):
"""Scales latents by a given factor."""
@ -969,7 +963,7 @@ class ScaleLatentsInvocation(BaseInvocation):
antialias: bool = InputField(default=False, description=FieldDescriptions.torch_antialias)
def invoke(self, context: InvocationContext) -> LatentsOutput:
latents = context.services.latents.get(self.latents.latents_name)
latents = context.tensors.load(self.latents.latents_name)
# TODO:
device = choose_torch_device()
@ -988,10 +982,8 @@ class ScaleLatentsInvocation(BaseInvocation):
if device == torch.device("mps"):
mps.empty_cache()
name = f"{context.graph_execution_state_id}__{self.id}"
# context.services.latents.set(name, resized_latents)
context.services.latents.save(name, resized_latents)
return build_latents_output(latents_name=name, latents=resized_latents, seed=self.latents.seed)
name = context.tensors.save(tensor=resized_latents)
return LatentsOutput.build(latents_name=name, latents=resized_latents, seed=self.latents.seed)
@invocation(
@ -999,7 +991,7 @@ class ScaleLatentsInvocation(BaseInvocation):
title="Image to Latents",
tags=["latents", "image", "vae", "i2l"],
category="latents",
version="1.0.0",
version="1.0.1",
)
class ImageToLatentsInvocation(BaseInvocation):
"""Encodes an image into latents."""
@ -1015,8 +1007,9 @@ class ImageToLatentsInvocation(BaseInvocation):
fp32: bool = InputField(default=DEFAULT_PRECISION == "float32", description=FieldDescriptions.fp32)
@staticmethod
def vae_encode(vae_info, upcast, tiled, image_tensor):
def vae_encode(vae_info: LoadedModel, upcast: bool, tiled: bool, image_tensor: torch.Tensor) -> torch.Tensor:
with vae_info as vae:
assert isinstance(vae, torch.nn.Module)
orig_dtype = vae.dtype
if upcast:
vae.to(dtype=torch.float32)
@ -1060,12 +1053,9 @@ class ImageToLatentsInvocation(BaseInvocation):
@torch.no_grad()
def invoke(self, context: InvocationContext) -> LatentsOutput:
image = context.services.images.get_pil_image(self.image.image_name)
image = context.images.get_pil(self.image.image_name)
vae_info = context.services.model_manager.get_model(
**self.vae.vae.model_dump(),
context=context,
)
vae_info = context.models.load(**self.vae.vae.model_dump())
image_tensor = image_resized_to_grid_as_tensor(image.convert("RGB"))
if image_tensor.dim() == 3:
@ -1073,22 +1063,26 @@ class ImageToLatentsInvocation(BaseInvocation):
latents = self.vae_encode(vae_info, self.fp32, self.tiled, image_tensor)
name = f"{context.graph_execution_state_id}__{self.id}"
latents = latents.to("cpu")
context.services.latents.save(name, latents)
return build_latents_output(latents_name=name, latents=latents, seed=None)
name = context.tensors.save(tensor=latents)
return LatentsOutput.build(latents_name=name, latents=latents, seed=None)
@singledispatchmethod
@staticmethod
def _encode_to_tensor(vae: AutoencoderKL, image_tensor: torch.FloatTensor) -> torch.FloatTensor:
assert isinstance(vae, torch.nn.Module)
image_tensor_dist = vae.encode(image_tensor).latent_dist
latents = image_tensor_dist.sample().to(dtype=vae.dtype) # FIXME: uses torch.randn. make reproducible!
latents: torch.Tensor = image_tensor_dist.sample().to(
dtype=vae.dtype
) # FIXME: uses torch.randn. make reproducible!
return latents
@_encode_to_tensor.register
@staticmethod
def _(vae: AutoencoderTiny, image_tensor: torch.FloatTensor) -> torch.FloatTensor:
return vae.encode(image_tensor).latents
assert isinstance(vae, torch.nn.Module)
latents: torch.FloatTensor = vae.encode(image_tensor).latents
return latents
@invocation(
@ -1096,7 +1090,7 @@ class ImageToLatentsInvocation(BaseInvocation):
title="Blend Latents",
tags=["latents", "blend"],
category="latents",
version="1.0.0",
version="1.0.1",
)
class BlendLatentsInvocation(BaseInvocation):
"""Blend two latents using a given alpha. Latents must have same size."""
@ -1112,8 +1106,8 @@ class BlendLatentsInvocation(BaseInvocation):
alpha: float = InputField(default=0.5, description=FieldDescriptions.blend_alpha)
def invoke(self, context: InvocationContext) -> LatentsOutput:
latents_a = context.services.latents.get(self.latents_a.latents_name)
latents_b = context.services.latents.get(self.latents_b.latents_name)
latents_a = context.tensors.load(self.latents_a.latents_name)
latents_b = context.tensors.load(self.latents_b.latents_name)
if latents_a.shape != latents_b.shape:
raise Exception("Latents to blend must be the same size.")
@ -1121,7 +1115,12 @@ class BlendLatentsInvocation(BaseInvocation):
# TODO:
device = choose_torch_device()
def slerp(t, v0, v1, DOT_THRESHOLD=0.9995):
def slerp(
t: Union[float, npt.NDArray[Any]], # FIXME: maybe use np.float32 here?
v0: Union[torch.Tensor, npt.NDArray[Any]],
v1: Union[torch.Tensor, npt.NDArray[Any]],
DOT_THRESHOLD: float = 0.9995,
) -> Union[torch.Tensor, npt.NDArray[Any]]:
"""
Spherical linear interpolation
Args:
@ -1154,12 +1153,16 @@ class BlendLatentsInvocation(BaseInvocation):
v2 = s0 * v0 + s1 * v1
if inputs_are_torch:
v2 = torch.from_numpy(v2).to(device)
return v2
v2_torch: torch.Tensor = torch.from_numpy(v2).to(device)
return v2_torch
else:
assert isinstance(v2, np.ndarray)
return v2
# blend
blended_latents = slerp(self.alpha, latents_a, latents_b)
bl = slerp(self.alpha, latents_a, latents_b)
assert isinstance(bl, torch.Tensor)
blended_latents: torch.Tensor = bl # for type checking convenience
# https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
blended_latents = blended_latents.to("cpu")
@ -1167,10 +1170,8 @@ class BlendLatentsInvocation(BaseInvocation):
if device == torch.device("mps"):
mps.empty_cache()
name = f"{context.graph_execution_state_id}__{self.id}"
# context.services.latents.set(name, resized_latents)
context.services.latents.save(name, blended_latents)
return build_latents_output(latents_name=name, latents=blended_latents)
name = context.tensors.save(tensor=blended_latents)
return LatentsOutput.build(latents_name=name, latents=blended_latents)
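For reference, a self-contained numpy check of what the slerp above computes (the standard formula, not the class method): at t=0.5 two perpendicular unit vectors blend to the bisector and stay on the unit sphere, which plain lerp would not. The DOT_THRESHOLD guard exists because sin(theta_0) approaches zero for nearly parallel inputs, where implementations typically fall back to plain lerp.
import numpy as np

def slerp_np(t: float, v0: np.ndarray, v1: np.ndarray) -> np.ndarray:
    dot = float(np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1)))
    theta_0 = np.arccos(np.clip(dot, -1.0, 1.0))   # angle between the inputs
    s0 = np.sin((1 - t) * theta_0) / np.sin(theta_0)
    s1 = np.sin(t * theta_0) / np.sin(theta_0)
    return s0 * v0 + s1 * v1                       # same combination as v2 above

print(slerp_np(0.5, np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # ~[0.7071 0.7071]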
# The Crop Latents node was copied from @skunkworxdark's implementation here:
@ -1180,7 +1181,7 @@ class BlendLatentsInvocation(BaseInvocation):
title="Crop Latents",
tags=["latents", "crop"],
category="latents",
version="1.0.0",
version="1.0.1",
)
# TODO(ryand): Named `CropLatentsCoreInvocation` to prevent a conflict with custom node `CropLatentsInvocation`.
# Currently, if the class names conflict then 'GET /openapi.json' fails.
@ -1215,7 +1216,7 @@ class CropLatentsCoreInvocation(BaseInvocation):
)
def invoke(self, context: InvocationContext) -> LatentsOutput:
latents = context.services.latents.get(self.latents.latents_name)
latents = context.tensors.load(self.latents.latents_name)
x1 = self.x // LATENT_SCALE_FACTOR
y1 = self.y // LATENT_SCALE_FACTOR
@ -1224,7 +1225,61 @@ class CropLatentsCoreInvocation(BaseInvocation):
cropped_latents = latents[..., y1:y2, x1:x2]
name = f"{context.graph_execution_state_id}__{self.id}"
context.services.latents.save(name, cropped_latents)
name = context.tensors.save(tensor=cropped_latents)
return build_latents_output(latents_name=name, latents=cropped_latents)
return LatentsOutput.build(latents_name=name, latents=cropped_latents)
@invocation_output("ideal_size_output")
class IdealSizeOutput(BaseInvocationOutput):
"""Base class for invocations that output an image"""
width: int = OutputField(description="The ideal width of the image (in pixels)")
height: int = OutputField(description="The ideal height of the image (in pixels)")
@invocation(
"ideal_size",
title="Ideal Size",
tags=["latents", "math", "ideal_size"],
version="1.0.2",
)
class IdealSizeInvocation(BaseInvocation):
"""Calculates the ideal size for generation to avoid duplication"""
width: int = InputField(default=1024, description="Final image width")
height: int = InputField(default=576, description="Final image height")
unet: UNetField = InputField(default=None, description=FieldDescriptions.unet)
multiplier: float = InputField(
default=1.0,
description="Amount to multiply the model's dimensions by when calculating the ideal size (may result in initial generation artifacts if too large)",
)
def trim_to_multiple_of(self, *args: int, multiple_of: int = LATENT_SCALE_FACTOR) -> Tuple[int, ...]:
return tuple((x - x % multiple_of) for x in args)
def invoke(self, context: InvocationContext) -> IdealSizeOutput:
unet_config = context.models.get_config(**self.unet.unet.model_dump())
aspect = self.width / self.height
dimension: float = 512
if unet_config.base == BaseModelType.StableDiffusion2:
dimension = 768
elif unet_config.base == BaseModelType.StableDiffusionXL:
dimension = 1024
dimension = dimension * self.multiplier
min_dimension = math.floor(dimension * 0.5)
model_area = dimension * dimension # hardcoded for now since all models are trained on square images
if aspect > 1.0:
init_height = max(min_dimension, math.sqrt(model_area / aspect))
init_width = init_height * aspect
else:
init_width = max(min_dimension, math.sqrt(model_area * aspect))
init_height = init_width / aspect
scaled_width, scaled_height = self.trim_to_multiple_of(
math.floor(init_width),
math.floor(init_height),
)
return IdealSizeOutput(width=scaled_width, height=scaled_height)
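Worked numbers for the branch above, assuming an SDXL base (dimension = 1024, multiplier = 1.0) and a 1024x576 target:
aspect = 1024 / 576                                   # ~1.778 (wider than tall)
model_area = 1024 * 1024                              # 1048576
init_height = max(512, (model_area / aspect) ** 0.5)  # 768.0 exactly
init_width = init_height * aspect                     # ~1365.3
# trim_to_multiple_of floors both to a multiple of LATENT_SCALE_FACTOR (8):
# 1365 -> 1360, 768 -> 768, so the node returns 1360x768 -- roughly the
# model's native pixel area reshaped to the requested 16:9 aspect.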

View File

@ -5,10 +5,11 @@ from typing import Literal
import numpy as np
from pydantic import ValidationInfo, field_validator
from invokeai.app.invocations.fields import FieldDescriptions, InputField
from invokeai.app.invocations.primitives import FloatOutput, IntegerOutput
from invokeai.app.shared.fields import FieldDescriptions
from invokeai.app.services.shared.invocation_context import InvocationContext
from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation
from .baseinvocation import BaseInvocation, invocation
@invocation("add", title="Add Integers", tags=["math", "add"], category="math", version="1.0.0")

View File

@ -5,20 +5,22 @@ from pydantic import BaseModel, ConfigDict, Field
from invokeai.app.invocations.baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
InputField,
InvocationContext,
MetadataField,
OutputField,
UIType,
invocation,
invocation_output,
)
from invokeai.app.invocations.controlnet_image_processors import ControlField
from invokeai.app.invocations.fields import (
FieldDescriptions,
ImageField,
InputField,
MetadataField,
OutputField,
UIType,
)
from invokeai.app.invocations.ip_adapter import IPAdapterModelField
from invokeai.app.invocations.model import LoRAModelField, MainModelField, VAEModelField
from invokeai.app.invocations.primitives import ImageField
from invokeai.app.invocations.t2i_adapter import T2IAdapterField
from invokeai.app.shared.fields import FieldDescriptions
from invokeai.app.services.shared.invocation_context import InvocationContext
from ...version import __version__
@ -31,7 +33,7 @@ class MetadataItemField(BaseModel):
class LoRAMetadataField(BaseModel):
"""LoRA Metadata Field"""
lora: LoRAModelField = Field(description=FieldDescriptions.lora_model)
model: LoRAModelField = Field(description=FieldDescriptions.lora_model)
weight: float = Field(description=FieldDescriptions.lora_weight)
@ -112,7 +114,7 @@ GENERATION_MODES = Literal[
]
@invocation("core_metadata", title="Core Metadata", tags=["metadata"], category="metadata", version="1.0.1")
@invocation("core_metadata", title="Core Metadata", tags=["metadata"], category="metadata", version="1.1.1")
class CoreMetadataInvocation(BaseInvocation):
"""Collects core generation metadata into a MetadataField"""

View File

@ -1,31 +1,24 @@
import copy
from typing import List, Optional
from pydantic import BaseModel, ConfigDict, Field
from pydantic import BaseModel, Field
from invokeai.app.shared.fields import FieldDescriptions
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.shared.models import FreeUConfig
from ...backend.model_management import BaseModelType, ModelType, SubModelType
from ...backend.model_manager import SubModelType
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Input,
InputField,
InvocationContext,
OutputField,
invocation,
invocation_output,
)
class ModelInfo(BaseModel):
model_name: str = Field(description="Info to load submodel")
base_model: BaseModelType = Field(description="Base model")
model_type: ModelType = Field(description="Info to load submodel")
submodel: Optional[SubModelType] = Field(default=None, description="Info to load submodel")
model_config = ConfigDict(protected_namespaces=())
key: str = Field(description="Key of model as returned by ModelRecordServiceBase.get_model()")
submodel_type: Optional[SubModelType] = Field(default=None, description="Info to load submodel")
class LoraInfo(ModelInfo):
@ -55,7 +48,7 @@ class VaeField(BaseModel):
@invocation_output("unet_output")
class UNetOutput(BaseInvocationOutput):
"""Base class for invocations that output a UNet field"""
"""Base class for invocations that output a UNet field."""
unet: UNetField = OutputField(description=FieldDescriptions.unet, title="UNet")
@ -84,20 +77,13 @@ class ModelLoaderOutput(UNetOutput, CLIPOutput, VAEOutput):
class MainModelField(BaseModel):
"""Main model field"""
model_name: str = Field(description="Name of the model")
base_model: BaseModelType = Field(description="Base model")
model_type: ModelType = Field(description="Model Type")
model_config = ConfigDict(protected_namespaces=())
key: str = Field(description="Model key")
class LoRAModelField(BaseModel):
"""LoRA model field"""
model_name: str = Field(description="Name of the LoRA model")
base_model: BaseModelType = Field(description="Base model")
model_config = ConfigDict(protected_namespaces=())
key: str = Field(description="LoRA model key")
@invocation(
@ -105,7 +91,7 @@ class LoRAModelField(BaseModel):
title="Main Model",
tags=["model"],
category="model",
version="1.0.0",
version="1.0.1",
)
class MainModelLoaderInvocation(BaseInvocation):
"""Loads a main model, outputting its submodels."""
@ -114,85 +100,40 @@ class MainModelLoaderInvocation(BaseInvocation):
# TODO: precision?
def invoke(self, context: InvocationContext) -> ModelLoaderOutput:
base_model = self.model.base_model
model_name = self.model.model_name
model_type = ModelType.Main
key = self.model.key
# TODO: not found exceptions
if not context.services.model_manager.model_exists(
model_name=model_name,
base_model=base_model,
model_type=model_type,
):
raise Exception(f"Unknown {base_model} {model_type} model: {model_name}")
"""
if not context.services.model_manager.model_exists(
model_name=self.model_name,
model_type=SDModelType.Diffusers,
submodel=SDModelType.Tokenizer,
):
raise Exception(
f"Failed to find tokenizer submodel in {self.model_name}! Check if model corrupted"
)
if not context.services.model_manager.model_exists(
model_name=self.model_name,
model_type=SDModelType.Diffusers,
submodel=SDModelType.TextEncoder,
):
raise Exception(
f"Failed to find text_encoder submodel in {self.model_name}! Check if model corrupted"
)
if not context.services.model_manager.model_exists(
model_name=self.model_name,
model_type=SDModelType.Diffusers,
submodel=SDModelType.UNet,
):
raise Exception(
f"Failed to find unet submodel from {self.model_name}! Check if model corrupted"
)
"""
if not context.models.exists(key):
raise Exception(f"Unknown model {key}")
return ModelLoaderOutput(
unet=UNetField(
unet=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=SubModelType.UNet,
key=key,
submodel_type=SubModelType.UNet,
),
scheduler=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=SubModelType.Scheduler,
key=key,
submodel_type=SubModelType.Scheduler,
),
loras=[],
),
clip=ClipField(
tokenizer=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=SubModelType.Tokenizer,
key=key,
submodel_type=SubModelType.Tokenizer,
),
text_encoder=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=SubModelType.TextEncoder,
key=key,
submodel_type=SubModelType.TextEncoder,
),
loras=[],
skipped_layers=0,
),
vae=VaeField(
vae=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=SubModelType.Vae,
key=key,
submodel_type=SubModelType.Vae,
),
),
)
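These hunks all repeat one refactor: the (model_name, base_model, model_type) triple is collapsed into a single record key. A condensed sketch of the consumer-side pattern, with signatures inferred from this diff rather than from canonical docs (ModelInfo and SubModelType as defined/imported in this file):
def load_unet_by_key(context, model_key: str):
    if not context.models.exists(model_key):           # record lookup is key-only now
        raise Exception(f"Unknown model {model_key}")
    info = ModelInfo(key=model_key, submodel_type=SubModelType.UNet)
    loaded = context.models.load(**info.model_dump())  # returns a LoadedModel
    with loaded as unet:                               # keeps the model resident in the
        return unet.config                             # cache while it is being used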
@ -206,7 +147,7 @@ class LoraLoaderOutput(BaseInvocationOutput):
clip: Optional[ClipField] = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP")
@invocation("lora_loader", title="LoRA", tags=["model"], category="model", version="1.0.0")
@invocation("lora_loader", title="LoRA", tags=["model"], category="model", version="1.0.1")
class LoraLoaderInvocation(BaseInvocation):
"""Apply selected lora to unet and text_encoder."""
@ -229,21 +170,16 @@ class LoraLoaderInvocation(BaseInvocation):
if self.lora is None:
raise Exception("No LoRA provided")
base_model = self.lora.base_model
lora_name = self.lora.model_name
lora_key = self.lora.key
if not context.services.model_manager.model_exists(
base_model=base_model,
model_name=lora_name,
model_type=ModelType.Lora,
):
raise Exception(f"Unknown lora name: {lora_name}!")
if not context.models.exists(lora_key):
raise Exception(f"Unknown lora: {lora_key}!")
if self.unet is not None and any(lora.model_name == lora_name for lora in self.unet.loras):
raise Exception(f'Lora "{lora_name}" already applied to unet')
if self.unet is not None and any(lora.key == lora_key for lora in self.unet.loras):
raise Exception(f'Lora "{lora_key}" already applied to unet')
if self.clip is not None and any(lora.model_name == lora_name for lora in self.clip.loras):
raise Exception(f'Lora "{lora_name}" already applied to clip')
if self.clip is not None and any(lora.key == lora_key for lora in self.clip.loras):
raise Exception(f'Lora "{lora_key}" already applied to clip')
output = LoraLoaderOutput()
@ -251,10 +187,8 @@ class LoraLoaderInvocation(BaseInvocation):
output.unet = copy.deepcopy(self.unet)
output.unet.loras.append(
LoraInfo(
base_model=base_model,
model_name=lora_name,
model_type=ModelType.Lora,
submodel=None,
key=lora_key,
submodel_type=None,
weight=self.weight,
)
)
@ -263,10 +197,8 @@ class LoraLoaderInvocation(BaseInvocation):
output.clip = copy.deepcopy(self.clip)
output.clip.loras.append(
LoraInfo(
base_model=base_model,
model_name=lora_name,
model_type=ModelType.Lora,
submodel=None,
key=lora_key,
submodel_type=None,
weight=self.weight,
)
)
@ -288,7 +220,7 @@ class SDXLLoraLoaderOutput(BaseInvocationOutput):
title="SDXL LoRA",
tags=["lora", "model"],
category="model",
version="1.0.0",
version="1.0.1",
)
class SDXLLoraLoaderInvocation(BaseInvocation):
"""Apply selected lora to unet and text_encoder."""
@ -318,24 +250,19 @@ class SDXLLoraLoaderInvocation(BaseInvocation):
if self.lora is None:
raise Exception("No LoRA provided")
base_model = self.lora.base_model
lora_name = self.lora.model_name
lora_key = self.lora.key
if not context.services.model_manager.model_exists(
base_model=base_model,
model_name=lora_name,
model_type=ModelType.Lora,
):
raise Exception(f"Unknown lora name: {lora_name}!")
if not context.models.exists(lora_key):
raise Exception(f"Unknown lora: {lora_key}!")
if self.unet is not None and any(lora.model_name == lora_name for lora in self.unet.loras):
raise Exception(f'Lora "{lora_name}" already applied to unet')
if self.unet is not None and any(lora.key == lora_key for lora in self.unet.loras):
raise Exception(f'Lora "{lora_key}" already applied to unet')
if self.clip is not None and any(lora.model_name == lora_name for lora in self.clip.loras):
raise Exception(f'Lora "{lora_name}" already applied to clip')
if self.clip is not None and any(lora.key == lora_key for lora in self.clip.loras):
raise Exception(f'Lora "{lora_key}" already applied to clip')
if self.clip2 is not None and any(lora.model_name == lora_name for lora in self.clip2.loras):
raise Exception(f'Lora "{lora_name}" already applied to clip2')
if self.clip2 is not None and any(lora.key == lora_key for lora in self.clip2.loras):
raise Exception(f'Lora "{lora_key}" already applied to clip2')
output = SDXLLoraLoaderOutput()
@ -343,10 +270,8 @@ class SDXLLoraLoaderInvocation(BaseInvocation):
output.unet = copy.deepcopy(self.unet)
output.unet.loras.append(
LoraInfo(
base_model=base_model,
model_name=lora_name,
model_type=ModelType.Lora,
submodel=None,
key=lora_key,
submodel_type=None,
weight=self.weight,
)
)
@ -355,10 +280,8 @@ class SDXLLoraLoaderInvocation(BaseInvocation):
output.clip = copy.deepcopy(self.clip)
output.clip.loras.append(
LoraInfo(
base_model=base_model,
model_name=lora_name,
model_type=ModelType.Lora,
submodel=None,
key=lora_key,
submodel_type=None,
weight=self.weight,
)
)
@ -367,10 +290,8 @@ class SDXLLoraLoaderInvocation(BaseInvocation):
output.clip2 = copy.deepcopy(self.clip2)
output.clip2.loras.append(
LoraInfo(
base_model=base_model,
model_name=lora_name,
model_type=ModelType.Lora,
submodel=None,
key=lora_key,
submodel_type=None,
weight=self.weight,
)
)
@ -381,13 +302,10 @@ class SDXLLoraLoaderInvocation(BaseInvocation):
class VAEModelField(BaseModel):
"""Vae model field"""
model_name: str = Field(description="Name of the model")
base_model: BaseModelType = Field(description="Base model")
model_config = ConfigDict(protected_namespaces=())
key: str = Field(description="Model's key")
@invocation("vae_loader", title="VAE", tags=["vae", "model"], category="model", version="1.0.0")
@invocation("vae_loader", title="VAE", tags=["vae", "model"], category="model", version="1.0.1")
class VaeLoaderInvocation(BaseInvocation):
"""Loads a VAE model, outputting a VaeLoaderOutput"""
@ -398,25 +316,12 @@ class VaeLoaderInvocation(BaseInvocation):
)
def invoke(self, context: InvocationContext) -> VAEOutput:
base_model = self.vae_model.base_model
model_name = self.vae_model.model_name
model_type = ModelType.Vae
key = self.vae_model.key
if not context.services.model_manager.model_exists(
base_model=base_model,
model_name=model_name,
model_type=model_type,
):
raise Exception(f"Unknown vae name: {model_name}!")
return VAEOutput(
vae=VaeField(
vae=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
)
)
)
if not context.models.exists(key):
raise Exception(f"Unknown vae: {key}!")
return VAEOutput(vae=VaeField(vae=ModelInfo(key=key)))
@invocation_output("seamless_output")

View File

@ -4,17 +4,15 @@
import torch
from pydantic import field_validator
from invokeai.app.invocations.latent import LatentsField
from invokeai.app.shared.fields import FieldDescriptions
from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
from invokeai.app.invocations.fields import FieldDescriptions, InputField, LatentsField, OutputField
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.misc import SEED_MAX
from ...backend.util.devices import choose_torch_device, torch_dtype
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
InputField,
InvocationContext,
OutputField,
invocation,
invocation_output,
)
@ -69,13 +67,13 @@ class NoiseOutput(BaseInvocationOutput):
width: int = OutputField(description=FieldDescriptions.width)
height: int = OutputField(description=FieldDescriptions.height)
def build_noise_output(latents_name: str, latents: torch.Tensor, seed: int):
return NoiseOutput(
noise=LatentsField(latents_name=latents_name, seed=seed),
width=latents.size()[3] * 8,
height=latents.size()[2] * 8,
)
@classmethod
def build(cls, latents_name: str, latents: torch.Tensor, seed: int) -> "NoiseOutput":
return cls(
noise=LatentsField(latents_name=latents_name, seed=seed),
width=latents.size()[3] * LATENT_SCALE_FACTOR,
height=latents.size()[2] * LATENT_SCALE_FACTOR,
)
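For orientation, LATENT_SCALE_FACTOR is 8 here (the width/height fields below enforce the same multiple): the SD VAE downsamples each spatial axis by that factor, so the output dimensions come straight from the latent shape.
LATENT_SCALE_FACTOR = 8
latents_shape = (1, 4, 64, 96)                      # [B, C, H/8, W/8]
width = latents_shape[3] * LATENT_SCALE_FACTOR      # 96 * 8 = 768
height = latents_shape[2] * LATENT_SCALE_FACTOR     # 64 * 8 = 512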
@invocation(
@ -96,13 +94,13 @@ class NoiseInvocation(BaseInvocation):
)
width: int = InputField(
default=512,
multiple_of=8,
multiple_of=LATENT_SCALE_FACTOR,
gt=0,
description=FieldDescriptions.width,
)
height: int = InputField(
default=512,
multiple_of=8,
multiple_of=LATENT_SCALE_FACTOR,
gt=0,
description=FieldDescriptions.height,
)
@ -124,6 +122,5 @@ class NoiseInvocation(BaseInvocation):
seed=self.seed,
use_cpu=self.use_cpu,
)
name = f"{context.graph_execution_state_id}__{self.id}"
context.services.latents.save(name, noise)
return build_noise_output(latents_name=name, latents=noise, seed=self.seed)
name = context.tensors.save(tensor=noise)
return NoiseOutput.build(latents_name=name, latents=noise, seed=self.seed)

View File

@ -1,508 +0,0 @@
# Copyright (c) 2023 Borisov Sergey (https://github.com/StAlKeR7779)
import inspect
import re
# from contextlib import ExitStack
from typing import List, Literal, Union
import numpy as np
import torch
from diffusers.image_processor import VaeImageProcessor
from pydantic import BaseModel, ConfigDict, Field, field_validator
from tqdm import tqdm
from invokeai.app.invocations.primitives import ConditioningField, ConditioningOutput, ImageField, ImageOutput
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.app.shared.fields import FieldDescriptions
from invokeai.app.util.step_callback import stable_diffusion_step_callback
from invokeai.backend import BaseModelType, ModelType, SubModelType
from ...backend.model_management import ONNXModelPatcher
from ...backend.stable_diffusion import PipelineIntermediateState
from ...backend.util import choose_torch_device
from .baseinvocation import (
BaseInvocation,
BaseInvocationOutput,
Input,
InputField,
InvocationContext,
OutputField,
UIComponent,
UIType,
WithMetadata,
invocation,
invocation_output,
)
from .controlnet_image_processors import ControlField
from .latent import SAMPLER_NAME_VALUES, LatentsField, LatentsOutput, build_latents_output, get_scheduler
from .model import ClipField, ModelInfo, UNetField, VaeField
ORT_TO_NP_TYPE = {
"tensor(bool)": np.bool_,
"tensor(int8)": np.int8,
"tensor(uint8)": np.uint8,
"tensor(int16)": np.int16,
"tensor(uint16)": np.uint16,
"tensor(int32)": np.int32,
"tensor(uint32)": np.uint32,
"tensor(int64)": np.int64,
"tensor(uint64)": np.uint64,
"tensor(float16)": np.float16,
"tensor(float)": np.float32,
"tensor(double)": np.float64,
}
PRECISION_VALUES = Literal[tuple(ORT_TO_NP_TYPE.keys())]
@invocation("prompt_onnx", title="ONNX Prompt (Raw)", tags=["prompt", "onnx"], category="conditioning", version="1.0.0")
class ONNXPromptInvocation(BaseInvocation):
prompt: str = InputField(default="", description=FieldDescriptions.raw_prompt, ui_component=UIComponent.Textarea)
clip: ClipField = InputField(description=FieldDescriptions.clip, input=Input.Connection)
def invoke(self, context: InvocationContext) -> ConditioningOutput:
tokenizer_info = context.services.model_manager.get_model(
**self.clip.tokenizer.model_dump(),
)
text_encoder_info = context.services.model_manager.get_model(
**self.clip.text_encoder.model_dump(),
)
with tokenizer_info as orig_tokenizer, text_encoder_info as text_encoder: # , ExitStack() as stack:
loras = [
(
context.services.model_manager.get_model(**lora.model_dump(exclude={"weight"})).context.model,
lora.weight,
)
for lora in self.clip.loras
]
ti_list = []
for trigger in re.findall(r"<[a-zA-Z0-9., _-]+>", self.prompt):
name = trigger[1:-1]
try:
ti_list.append(
(
name,
context.services.model_manager.get_model(
model_name=name,
base_model=self.clip.text_encoder.base_model,
model_type=ModelType.TextualInversion,
).context.model,
)
)
except Exception:
# print(e)
# import traceback
# print(traceback.format_exc())
print(f'Warn: trigger: "{trigger}" not found')
if loras or ti_list:
text_encoder.release_session()
with (
ONNXModelPatcher.apply_lora_text_encoder(text_encoder, loras),
ONNXModelPatcher.apply_ti(orig_tokenizer, text_encoder, ti_list) as (tokenizer, ti_manager),
):
text_encoder.create_session()
# copy from
# https://github.com/huggingface/diffusers/blob/3ebbaf7c96801271f9e6c21400033b6aa5ffcf29/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion.py#L153
text_inputs = tokenizer(
self.prompt,
padding="max_length",
max_length=tokenizer.model_max_length,
truncation=True,
return_tensors="np",
)
text_input_ids = text_inputs.input_ids
"""
untruncated_ids = tokenizer(prompt, padding="max_length", return_tensors="np").input_ids
if not np.array_equal(text_input_ids, untruncated_ids):
removed_text = self.tokenizer.batch_decode(
untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]
)
logger.warning(
"The following part of your input was truncated because CLIP can only handle sequences up to"
f" {self.tokenizer.model_max_length} tokens: {removed_text}"
)
"""
prompt_embeds = text_encoder(input_ids=text_input_ids.astype(np.int32))[0]
conditioning_name = f"{context.graph_execution_state_id}_{self.id}_conditioning"
# TODO: hacky but works ;D maybe rename latents somehow?
context.services.latents.save(conditioning_name, (prompt_embeds, None))
return ConditioningOutput(
conditioning=ConditioningField(
conditioning_name=conditioning_name,
),
)
# Text to image
@invocation(
"t2l_onnx",
title="ONNX Text to Latents",
tags=["latents", "inference", "txt2img", "onnx"],
category="latents",
version="1.0.0",
)
class ONNXTextToLatentsInvocation(BaseInvocation):
"""Generates latents from conditionings."""
positive_conditioning: ConditioningField = InputField(
description=FieldDescriptions.positive_cond,
input=Input.Connection,
)
negative_conditioning: ConditioningField = InputField(
description=FieldDescriptions.negative_cond,
input=Input.Connection,
)
noise: LatentsField = InputField(
description=FieldDescriptions.noise,
input=Input.Connection,
)
steps: int = InputField(default=10, gt=0, description=FieldDescriptions.steps)
cfg_scale: Union[float, List[float]] = InputField(
default=7.5,
ge=1,
description=FieldDescriptions.cfg_scale,
)
scheduler: SAMPLER_NAME_VALUES = InputField(
default="euler", description=FieldDescriptions.scheduler, input=Input.Direct, ui_type=UIType.Scheduler
)
precision: PRECISION_VALUES = InputField(default="tensor(float16)", description=FieldDescriptions.precision)
unet: UNetField = InputField(
description=FieldDescriptions.unet,
input=Input.Connection,
)
control: Union[ControlField, list[ControlField]] = InputField(
default=None,
description=FieldDescriptions.control,
)
# seamless: bool = InputField(default=False, description="Whether or not to generate an image that can tile without seams", )
# seamless_axes: str = InputField(default="", description="The axes to tile the image on, 'x' and/or 'y'")
@field_validator("cfg_scale")
def ge_one(cls, v):
"""validate that all cfg_scale values are >= 1"""
if isinstance(v, list):
for i in v:
if i < 1:
raise ValueError("cfg_scale must be greater than 1")
else:
if v < 1:
raise ValueError("cfg_scale must be greater than 1")
return v
# based on
# https://github.com/huggingface/diffusers/blob/3ebbaf7c96801271f9e6c21400033b6aa5ffcf29/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion.py#L375
def invoke(self, context: InvocationContext) -> LatentsOutput:
c, _ = context.services.latents.get(self.positive_conditioning.conditioning_name)
uc, _ = context.services.latents.get(self.negative_conditioning.conditioning_name)
graph_execution_state = context.services.graph_execution_manager.get(context.graph_execution_state_id)
source_node_id = graph_execution_state.prepared_source_mapping[self.id]
if isinstance(c, torch.Tensor):
c = c.cpu().numpy()
if isinstance(uc, torch.Tensor):
uc = uc.cpu().numpy()
device = torch.device(choose_torch_device())
prompt_embeds = np.concatenate([uc, c])
latents = context.services.latents.get(self.noise.latents_name)
if isinstance(latents, torch.Tensor):
latents = latents.cpu().numpy()
# TODO: better execution device handling
latents = latents.astype(ORT_TO_NP_TYPE[self.precision])
# get the initial random noise unless the user supplied it
do_classifier_free_guidance = True
# latents_dtype = prompt_embeds.dtype
# latents_shape = (batch_size * num_images_per_prompt, 4, height // 8, width // 8)
# if latents.shape != latents_shape:
# raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
scheduler = get_scheduler(
context=context,
scheduler_info=self.unet.scheduler,
scheduler_name=self.scheduler,
seed=0, # TODO: refactor this node
)
def torch2numpy(latent: torch.Tensor):
return latent.cpu().numpy()
def numpy2torch(latent, device):
return torch.from_numpy(latent).to(device)
def dispatch_progress(
self, context: InvocationContext, source_node_id: str, intermediate_state: PipelineIntermediateState
) -> None:
stable_diffusion_step_callback(
context=context,
intermediate_state=intermediate_state,
node=self.model_dump(),
source_node_id=source_node_id,
)
scheduler.set_timesteps(self.steps)
latents = latents * np.float64(scheduler.init_noise_sigma)
extra_step_kwargs = {}
if "eta" in set(inspect.signature(scheduler.step).parameters.keys()):
extra_step_kwargs.update(
eta=0.0,
)
unet_info = context.services.model_manager.get_model(**self.unet.unet.model_dump())
with unet_info as unet: # , ExitStack() as stack:
# loras = [(stack.enter_context(context.services.model_manager.get_model(**lora.dict(exclude={"weight"}))), lora.weight) for lora in self.unet.loras]
loras = [
(
context.services.model_manager.get_model(**lora.model_dump(exclude={"weight"})).context.model,
lora.weight,
)
for lora in self.unet.loras
]
if loras:
unet.release_session()
with ONNXModelPatcher.apply_lora_unet(unet, loras):
# TODO:
_, _, h, w = latents.shape
unet.create_session(h, w)
timestep_dtype = next(
(input.type for input in unet.session.get_inputs() if input.name == "timestep"), "tensor(float16)"
)
timestep_dtype = ORT_TO_NP_TYPE[timestep_dtype]
for i in tqdm(range(len(scheduler.timesteps))):
t = scheduler.timesteps[i]
# expand the latents if we are doing classifier free guidance
latent_model_input = np.concatenate([latents] * 2) if do_classifier_free_guidance else latents
latent_model_input = scheduler.scale_model_input(numpy2torch(latent_model_input, device), t)
latent_model_input = latent_model_input.cpu().numpy()
# predict the noise residual
timestep = np.array([t], dtype=timestep_dtype)
noise_pred = unet(sample=latent_model_input, timestep=timestep, encoder_hidden_states=prompt_embeds)
noise_pred = noise_pred[0]
# perform guidance
if do_classifier_free_guidance:
noise_pred_uncond, noise_pred_text = np.split(noise_pred, 2)
noise_pred = noise_pred_uncond + self.cfg_scale * (noise_pred_text - noise_pred_uncond)
# compute the previous noisy sample x_t -> x_t-1
scheduler_output = scheduler.step(
numpy2torch(noise_pred, device), t, numpy2torch(latents, device), **extra_step_kwargs
)
latents = torch2numpy(scheduler_output.prev_sample)
state = PipelineIntermediateState(
run_id="test", step=i, timestep=timestep, latents=scheduler_output.prev_sample
)
dispatch_progress(self, context=context, source_node_id=source_node_id, intermediate_state=state)
# call the callback, if provided
# if callback is not None and i % callback_steps == 0:
# callback(i, t, latents)
torch.cuda.empty_cache()
name = f"{context.graph_execution_state_id}__{self.id}"
context.services.latents.save(name, latents)
return build_latents_output(latents_name=name, latents=torch.from_numpy(latents))
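The guidance step in the loop above, reduced to a toy check: the batch is run as [unconditional, conditional], split in two, and recombined with cfg_scale scaling the difference.
import numpy as np

cfg_scale = 7.5
noise_pred = np.array([[0.10], [0.30]])   # stacked [uncond, text] predictions
noise_pred_uncond, noise_pred_text = np.split(noise_pred, 2)
guided = noise_pred_uncond + cfg_scale * (noise_pred_text - noise_pred_uncond)
print(guided.item())                      # 0.1 + 7.5 * (0.3 - 0.1) = 1.6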
# Latent to image
@invocation(
"l2i_onnx",
title="ONNX Latents to Image",
tags=["latents", "image", "vae", "onnx"],
category="image",
version="1.2.0",
)
class ONNXLatentsToImageInvocation(BaseInvocation, WithMetadata):
"""Generates an image from latents."""
latents: LatentsField = InputField(
description=FieldDescriptions.denoised_latents,
input=Input.Connection,
)
vae: VaeField = InputField(
description=FieldDescriptions.vae,
input=Input.Connection,
)
# tiled: bool = InputField(default=False, description="Decode latents by overlapping tiles (less memory consumption)")
def invoke(self, context: InvocationContext) -> ImageOutput:
latents = context.services.latents.get(self.latents.latents_name)
if self.vae.vae.submodel != SubModelType.VaeDecoder:
raise Exception(f"Expected vae_decoder, found: {self.vae.vae.model_type}")
vae_info = context.services.model_manager.get_model(
**self.vae.vae.model_dump(),
)
# clear memory as vae decode can request a lot
torch.cuda.empty_cache()
with vae_info as vae:
vae.create_session()
# copied from
# https://github.com/huggingface/diffusers/blob/3ebbaf7c96801271f9e6c21400033b6aa5ffcf29/src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion.py#L427
latents = 1 / 0.18215 * latents
# image = self.vae_decoder(latent_sample=latents)[0]
# it seems like there is a strange result when using a half-precision vae decoder if batch size > 1
image = np.concatenate([vae(latent_sample=latents[i : i + 1])[0] for i in range(latents.shape[0])])
image = np.clip(image / 2 + 0.5, 0, 1)
image = image.transpose((0, 2, 3, 1))
image = VaeImageProcessor.numpy_to_pil(image)[0]
torch.cuda.empty_cache()
image_dto = context.services.images.create(
image=image,
image_origin=ResourceOrigin.INTERNAL,
image_category=ImageCategory.GENERAL,
node_id=self.id,
session_id=context.graph_execution_state_id,
is_intermediate=self.is_intermediate,
metadata=self.metadata,
workflow=context.workflow,
)
return ImageOutput(
image=ImageField(image_name=image_dto.image_name),
width=image_dto.width,
height=image_dto.height,
)
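The 1 / 0.18215 above undoes the fixed latent scaling of the SD-1.x VAE (0.18215 is the scaling_factor from that VAE's config). As a round trip, with hypothetical helper names:
SCALING_FACTOR = 0.18215

def scale_for_unet(raw_latents):          # applied after encoding
    return raw_latents * SCALING_FACTOR

def unscale_for_decode(latents):          # applied before decoding,
    return latents / SCALING_FACTOR       # same as `1 / 0.18215 * latents` above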
@invocation_output("model_loader_output_onnx")
class ONNXModelLoaderOutput(BaseInvocationOutput):
"""Model loader output"""
unet: UNetField = OutputField(default=None, description=FieldDescriptions.unet, title="UNet")
clip: ClipField = OutputField(default=None, description=FieldDescriptions.clip, title="CLIP")
vae_decoder: VaeField = OutputField(default=None, description=FieldDescriptions.vae, title="VAE Decoder")
vae_encoder: VaeField = OutputField(default=None, description=FieldDescriptions.vae, title="VAE Encoder")
class OnnxModelField(BaseModel):
"""Onnx model field"""
model_name: str = Field(description="Name of the model")
base_model: BaseModelType = Field(description="Base model")
model_type: ModelType = Field(description="Model Type")
model_config = ConfigDict(protected_namespaces=())
@invocation("onnx_model_loader", title="ONNX Main Model", tags=["onnx", "model"], category="model", version="1.0.0")
class OnnxModelLoaderInvocation(BaseInvocation):
"""Loads a main model, outputting its submodels."""
model: OnnxModelField = InputField(
description=FieldDescriptions.onnx_main_model, input=Input.Direct, ui_type=UIType.ONNXModel
)
def invoke(self, context: InvocationContext) -> ONNXModelLoaderOutput:
base_model = self.model.base_model
model_name = self.model.model_name
model_type = ModelType.ONNX
# TODO: not found exceptions
if not context.services.model_manager.model_exists(
model_name=model_name,
base_model=base_model,
model_type=model_type,
):
raise Exception(f"Unknown {base_model} {model_type} model: {model_name}")
"""
if not context.services.model_manager.model_exists(
model_name=self.model_name,
model_type=SDModelType.Diffusers,
submodel=SDModelType.Tokenizer,
):
raise Exception(
f"Failed to find tokenizer submodel in {self.model_name}! Check if model corrupted"
)
if not context.services.model_manager.model_exists(
model_name=self.model_name,
model_type=SDModelType.Diffusers,
submodel=SDModelType.TextEncoder,
):
raise Exception(
f"Failed to find text_encoder submodel in {self.model_name}! Check if model corrupted"
)
if not context.services.model_manager.model_exists(
model_name=self.model_name,
model_type=SDModelType.Diffusers,
submodel=SDModelType.UNet,
):
raise Exception(
f"Failed to find unet submodel from {self.model_name}! Check if model corrupted"
)
"""
return ONNXModelLoaderOutput(
unet=UNetField(
unet=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=SubModelType.UNet,
),
scheduler=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=SubModelType.Scheduler,
),
loras=[],
),
clip=ClipField(
tokenizer=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=SubModelType.Tokenizer,
),
text_encoder=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=SubModelType.TextEncoder,
),
loras=[],
skipped_layers=0,
),
vae_decoder=VaeField(
vae=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=SubModelType.VaeDecoder,
),
),
vae_encoder=VaeField(
vae=ModelInfo(
model_name=model_name,
base_model=base_model,
model_type=model_type,
submodel=SubModelType.VaeEncoder,
),
),
)
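
Note that the loader does no loading itself: it fans one model identity out into per-submodel ModelInfo records that downstream nodes resolve lazily. A hypothetical helper illustrating that pattern; submodel_infos and ModelInfoStub are illustrations only, not part of the codebase:

from dataclasses import dataclass

# Stand-in for the ModelInfo used above, just to keep the sketch runnable.
@dataclass
class ModelInfoStub:
    model_name: str
    base_model: str
    model_type: str
    submodel: str

def submodel_infos(model_name, base_model, model_type, submodels):
    # one record per submodel, all pointing at the same installed model
    return {s: ModelInfoStub(model_name, base_model, model_type, s) for s in submodels}

infos = submodel_infos(
    "stable-diffusion-v1-5", "sd-1", "onnx",
    ["unet", "scheduler", "tokenizer", "text_encoder", "vae_decoder", "vae_encoder"],
)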


@@ -40,8 +40,10 @@ from easing_functions import (
 from matplotlib.ticker import MaxNLocator

 from invokeai.app.invocations.primitives import FloatCollectionOutput
+from invokeai.app.services.shared.invocation_context import InvocationContext

-from .baseinvocation import BaseInvocation, InputField, InvocationContext, invocation
+from .baseinvocation import BaseInvocation, invocation
+from .fields import InputField


 @invocation(
@@ -109,7 +111,7 @@ EASING_FUNCTION_KEYS = Literal[tuple(EASING_FUNCTIONS_MAP.keys())]
     title="Step Param Easing",
     tags=["step", "easing"],
     category="step",
-    version="1.0.0",
+    version="1.0.1",
 )
 class StepParamEasingInvocation(BaseInvocation):
     """Experimental per-step parameter easing for denoising steps"""
@@ -148,19 +150,19 @@ class StepParamEasingInvocation(BaseInvocation):
         postlist = list(num_poststeps * [self.post_end_value])

         if log_diagnostics:
-            context.services.logger.debug("start_step: " + str(start_step))
-            context.services.logger.debug("end_step: " + str(end_step))
-            context.services.logger.debug("num_easing_steps: " + str(num_easing_steps))
-            context.services.logger.debug("num_presteps: " + str(num_presteps))
-            context.services.logger.debug("num_poststeps: " + str(num_poststeps))
-            context.services.logger.debug("prelist size: " + str(len(prelist)))
-            context.services.logger.debug("postlist size: " + str(len(postlist)))
-            context.services.logger.debug("prelist: " + str(prelist))
-            context.services.logger.debug("postlist: " + str(postlist))
+            context.logger.debug("start_step: " + str(start_step))
+            context.logger.debug("end_step: " + str(end_step))
+            context.logger.debug("num_easing_steps: " + str(num_easing_steps))
+            context.logger.debug("num_presteps: " + str(num_presteps))
+            context.logger.debug("num_poststeps: " + str(num_poststeps))
+            context.logger.debug("prelist size: " + str(len(prelist)))
+            context.logger.debug("postlist size: " + str(len(postlist)))
+            context.logger.debug("prelist: " + str(prelist))
+            context.logger.debug("postlist: " + str(postlist))

         easing_class = EASING_FUNCTIONS_MAP[self.easing]
         if log_diagnostics:
-            context.services.logger.debug("easing class: " + str(easing_class))
+            context.logger.debug("easing class: " + str(easing_class))
         easing_list = []
         if self.mirror:  # "expected" mirroring
             # if number of steps is even, squeeze duration down to (number_of_steps)/2
@@ -171,7 +173,7 @@ class StepParamEasingInvocation(BaseInvocation):
             base_easing_duration = int(np.ceil(num_easing_steps / 2.0))
             if log_diagnostics:
-                context.services.logger.debug("base easing duration: " + str(base_easing_duration))
+                context.logger.debug("base easing duration: " + str(base_easing_duration))
             even_num_steps = num_easing_steps % 2 == 0  # even number of steps
             easing_function = easing_class(
                 start=self.start_value,
@@ -183,14 +185,14 @@ class StepParamEasingInvocation(BaseInvocation):
                 easing_val = easing_function.ease(step_index)
                 base_easing_vals.append(easing_val)
                 if log_diagnostics:
-                    context.services.logger.debug("step_index: " + str(step_index) + ", easing_val: " + str(easing_val))
+                    context.logger.debug("step_index: " + str(step_index) + ", easing_val: " + str(easing_val))
             if even_num_steps:
                 mirror_easing_vals = list(reversed(base_easing_vals))
             else:
                 mirror_easing_vals = list(reversed(base_easing_vals[0:-1]))
             if log_diagnostics:
-                context.services.logger.debug("base easing vals: " + str(base_easing_vals))
-                context.services.logger.debug("mirror easing vals: " + str(mirror_easing_vals))
+                context.logger.debug("base easing vals: " + str(base_easing_vals))
+                context.logger.debug("mirror easing vals: " + str(mirror_easing_vals))
             easing_list = base_easing_vals + mirror_easing_vals

         # FIXME: add alt_mirror option (alternative to default or mirror), or remove entirely
@@ -225,12 +227,12 @@ class StepParamEasingInvocation(BaseInvocation):
                 step_val = easing_function.ease(step_index)
                 easing_list.append(step_val)
                 if log_diagnostics:
-                    context.services.logger.debug("step_index: " + str(step_index) + ", easing_val: " + str(step_val))
+                    context.logger.debug("step_index: " + str(step_index) + ", easing_val: " + str(step_val))

         if log_diagnostics:
-            context.services.logger.debug("prelist size: " + str(len(prelist)))
-            context.services.logger.debug("easing_list size: " + str(len(easing_list)))
-            context.services.logger.debug("postlist size: " + str(len(postlist)))
+            context.logger.debug("prelist size: " + str(len(prelist)))
+            context.logger.debug("easing_list size: " + str(len(easing_list)))
+            context.logger.debug("postlist size: " + str(len(postlist)))

         param_list = prelist + easing_list + postlist
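
The node therefore emits one value per denoising step: a flat pre-roll, the eased ramp, and a flat post-roll. A minimal standalone sketch of that construction, assuming the easing_functions package used above (QuadEaseInOut is one of the mapped classes); the step counts and endpoint values here are illustrative:

from easing_functions import QuadEaseInOut

num_presteps, num_easing_steps, num_poststeps = 3, 10, 3
pre_start_value, start_value, end_value, post_end_value = 0.0, 0.0, 1.0, 1.0

easing = QuadEaseInOut(start=start_value, end=end_value, duration=num_easing_steps - 1)
prelist = num_presteps * [pre_start_value]
easing_list = [easing.ease(i) for i in range(num_easing_steps)]
postlist = num_poststeps * [post_end_value]
param_list = prelist + easing_list + postlist  # one parameter value per denoising step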


@@ -1,20 +1,28 @@
 # Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654)

-from typing import Optional, Tuple
+from typing import Optional

 import torch
 from pydantic import BaseModel, Field

-from invokeai.app.shared.fields import FieldDescriptions
+from invokeai.app.invocations.constants import LATENT_SCALE_FACTOR
+from invokeai.app.invocations.fields import (
+    ColorField,
+    ConditioningField,
+    DenoiseMaskField,
+    FieldDescriptions,
+    ImageField,
+    Input,
+    InputField,
+    LatentsField,
+    OutputField,
+    UIComponent,
+)
+from invokeai.app.services.images.images_common import ImageDTO
+from invokeai.app.services.shared.invocation_context import InvocationContext

 from .baseinvocation import (
     BaseInvocation,
     BaseInvocationOutput,
-    Input,
-    InputField,
-    InvocationContext,
-    OutputField,
-    UIComponent,
     invocation,
     invocation_output,
 )
@@ -221,18 +229,6 @@ class StringCollectionInvocation(BaseInvocation):

 # region Image

-class ImageField(BaseModel):
-    """An image primitive field"""
-
-    image_name: str = Field(description="The name of the image")
-
-
-class BoardField(BaseModel):
-    """A board primitive field"""
-
-    board_id: str = Field(description="The id of the board")
-
-
 @invocation_output("image_output")
 class ImageOutput(BaseInvocationOutput):
     """Base class for nodes that output a single image"""
@@ -241,6 +237,14 @@ class ImageOutput(BaseInvocationOutput):
     width: int = OutputField(description="The width of the image in pixels")
     height: int = OutputField(description="The height of the image in pixels")

+    @classmethod
+    def build(cls, image_dto: ImageDTO) -> "ImageOutput":
+        return cls(
+            image=ImageField(image_name=image_dto.image_name),
+            width=image_dto.width,
+            height=image_dto.height,
+        )
+

 @invocation_output("image_collection_output")
 class ImageCollectionOutput(BaseInvocationOutput):
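
The new build classmethod centralizes output construction, so an image-producing node can return ImageOutput.build(image_dto) instead of assembling the field, width, and height by hand. A sketch of a node body using it; context.images.save is assumed here to be the new context API's save accessor, and make_pil_image is a hypothetical helper:

def invoke(self, context: InvocationContext) -> ImageOutput:
    image = self.make_pil_image()                 # hypothetical helper producing a PIL image
    image_dto = context.images.save(image=image)  # assumed accessor on the new context
    return ImageOutput.build(image_dto)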
@@ -251,16 +255,14 @@ class ImageCollectionOutput(BaseInvocationOutput):
     )

-@invocation("image", title="Image Primitive", tags=["primitives", "image"], category="primitives", version="1.0.0")
-class ImageInvocation(
-    BaseInvocation,
-):
+@invocation("image", title="Image Primitive", tags=["primitives", "image"], category="primitives", version="1.0.1")
+class ImageInvocation(BaseInvocation):
     """An image primitive value"""

     image: ImageField = InputField(description="The image to load")

     def invoke(self, context: InvocationContext) -> ImageOutput:
-        image = context.services.images.get_pil_image(self.image.image_name)
+        image = context.images.get_pil(self.image.image_name)

         return ImageOutput(
             image=ImageField(image_name=self.image.image_name),
@@ -290,42 +292,44 @@ class ImageCollectionInvocation(BaseInvocation):

 # region DenoiseMask

-class DenoiseMaskField(BaseModel):
-    """An inpaint mask field"""
-
-    mask_name: str = Field(description="The name of the mask image")
-    masked_latents_name: Optional[str] = Field(default=None, description="The name of the masked image latents")
-
-
 @invocation_output("denoise_mask_output")
 class DenoiseMaskOutput(BaseInvocationOutput):
     """Base class for nodes that output a single image"""

     denoise_mask: DenoiseMaskField = OutputField(description="Mask for denoise model run")

+    @classmethod
+    def build(
+        cls, mask_name: str, masked_latents_name: Optional[str] = None, gradient: bool = False
+    ) -> "DenoiseMaskOutput":
+        return cls(
+            denoise_mask=DenoiseMaskField(
+                mask_name=mask_name, masked_latents_name=masked_latents_name, gradient=gradient
+            ),
+        )
+

 # endregion

 # region Latents

-class LatentsField(BaseModel):
-    """A latents tensor primitive field"""
-
-    latents_name: str = Field(description="The name of the latents")
-    seed: Optional[int] = Field(default=None, description="Seed used to generate this latents")
-
-
 @invocation_output("latents_output")
 class LatentsOutput(BaseInvocationOutput):
     """Base class for nodes that output a single latents tensor"""

-    latents: LatentsField = OutputField(
-        description=FieldDescriptions.latents,
-    )
+    latents: LatentsField = OutputField(description=FieldDescriptions.latents)
     width: int = OutputField(description=FieldDescriptions.width)
     height: int = OutputField(description=FieldDescriptions.height)

+    @classmethod
+    def build(cls, latents_name: str, latents: torch.Tensor, seed: Optional[int] = None) -> "LatentsOutput":
+        return cls(
+            latents=LatentsField(latents_name=latents_name, seed=seed),
+            width=latents.size()[3] * LATENT_SCALE_FACTOR,
+            height=latents.size()[2] * LATENT_SCALE_FACTOR,
+        )
+
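
The width/height math in LatentsOutput.build relies on SD-family VAEs downsampling by a factor of 8, which is what LATENT_SCALE_FACTOR encodes: a latent tensor of shape (B, C, H/8, W/8) corresponds to an H x W pixel image. A worked example, assuming LATENT_SCALE_FACTOR == 8:

import torch

LATENT_SCALE_FACTOR = 8  # assumed value; SD VAEs downsample by 8x

latents = torch.zeros(1, 4, 64, 96)                # (B, C, H/8, W/8)
width = latents.size()[3] * LATENT_SCALE_FACTOR    # 96 * 8 = 768 px
height = latents.size()[2] * LATENT_SCALE_FACTOR   # 64 * 8 = 512 px
assert (width, height) == (768, 512)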
@invocation_output("latents_collection_output")
class LatentsCollectionOutput(BaseInvocationOutput):
@ -337,7 +341,7 @@ class LatentsCollectionOutput(BaseInvocationOutput):
@invocation(
"latents", title="Latents Primitive", tags=["primitives", "latents"], category="primitives", version="1.0.0"
"latents", title="Latents Primitive", tags=["primitives", "latents"], category="primitives", version="1.0.1"
)
class LatentsInvocation(BaseInvocation):
"""A latents tensor primitive value"""
@@ -345,9 +349,9 @@ class LatentsInvocation(BaseInvocation):
     latents: LatentsField = InputField(description="The latents tensor", input=Input.Connection)

     def invoke(self, context: InvocationContext) -> LatentsOutput:
-        latents = context.services.latents.get(self.latents.latents_name)
+        latents = context.tensors.load(self.latents.latents_name)

-        return build_latents_output(self.latents.latents_name, latents)
+        return LatentsOutput.build(self.latents.latents_name, latents)


 @invocation(
@@ -368,31 +372,11 @@ class LatentsCollectionInvocation(BaseInvocation):
         return LatentsCollectionOutput(collection=self.collection)


-def build_latents_output(latents_name: str, latents: torch.Tensor, seed: Optional[int] = None):
-    return LatentsOutput(
-        latents=LatentsField(latents_name=latents_name, seed=seed),
-        width=latents.size()[3] * 8,
-        height=latents.size()[2] * 8,
-    )
-
-
 # endregion

 # region Color

-class ColorField(BaseModel):
-    """A color primitive field"""
-
-    r: int = Field(ge=0, le=255, description="The red component")
-    g: int = Field(ge=0, le=255, description="The green component")
-    b: int = Field(ge=0, le=255, description="The blue component")
-    a: int = Field(ge=0, le=255, description="The alpha component")
-
-    def tuple(self) -> Tuple[int, int, int, int]:
-        return (self.r, self.g, self.b, self.a)
-
-
 @invocation_output("color_output")
 class ColorOutput(BaseInvocationOutput):
     """Base class for nodes that output a single color"""
@@ -424,18 +408,16 @@ class ColorInvocation(BaseInvocation):

 # region Conditioning

-class ConditioningField(BaseModel):
-    """A conditioning tensor primitive value"""
-
-    conditioning_name: str = Field(description="The name of conditioning tensor")
-
-
 @invocation_output("conditioning_output")
 class ConditioningOutput(BaseInvocationOutput):
     """Base class for nodes that output a single conditioning tensor"""

     conditioning: ConditioningField = OutputField(description=FieldDescriptions.cond)

+    @classmethod
+    def build(cls, conditioning_name: str) -> "ConditioningOutput":
+        return cls(conditioning=ConditioningField(conditioning_name=conditioning_name))
+

 @invocation_output("conditioning_collection_output")
 class ConditioningCollectionOutput(BaseInvocationOutput):
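
ConditioningOutput gets the same build treatment as the other outputs: a node saves its tensor, gets back a name, and wraps it in one call. A sketch under the same assumptions about the new context API (context.conditioning.save is not shown in this diff, and make_conditioning is a hypothetical helper):

def invoke(self, context: InvocationContext) -> ConditioningOutput:
    conditioning_name = context.conditioning.save(self.make_conditioning())  # assumed accessor
    return ConditioningOutput.build(conditioning_name)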

Some files were not shown because too many files have changed in this diff.