Compare commits

...

563 Commits

Author SHA1 Message Date
6539ef7c9f {release} v3.6.3 (#5696)
## What type of PR is this? (check all applicable)
Release Invoke 3.6.3


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Invoke 3.6.3 Release



## QA Instructions, Screenshots, Recordings
Test the installer:
[InvokeAI-installer-v3.6.3.zip](https://github.com/invoke-ai/InvokeAI/files/14233359/InvokeAI-installer-v3.6.3.zip)


## Merge Plan
Merge once approved
## [optional] Are there any post deployment tasks we need to perform?
1. Release on PyPi & GitHub
2. Announce on Discord
2024-02-11 16:02:30 -05:00
14c9a1e4f3 Merge branch 'main' into release/3.6.3 2024-02-11 15:36:05 -05:00
64b0feca31 Update ruff 2024-02-11 15:24:28 -05:00
0be9a2d906 Update string formatting 2024-02-11 15:24:28 -05:00
d925f721b9 fix references to .env.sample (#5695)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: it is text only, simple, and (hopefully) self-evident

      
## Have you updated all relevant documentation?
- [x] Yes - as far as I can grep.
- [ ] No


## Description

`.env.sample` was misspelled as `env.sample` in a few places.

This changes documentation only. You may need to re-build/deploy docs,
I'm not sure.
2024-02-11 13:43:14 -05:00
4e5be1891a {release} v3.6.3 2024-02-11 10:34:47 -07:00
156d4ec3b2 fix references to .env.sample 2024-02-10 21:11:22 -08:00
c45a43519a chore: bump deps
- ruff 0.1.11 -> 0.2.1
- update config format
2024-02-11 08:50:49 +11:00
763816ca0c chore: bump deps
- pydantic 2.5.3 -> 2.6.1
- uvicorn 0.25.0 -> 0.27.1
2024-02-11 08:50:49 +11:00
B N
83a7c9059f translationBot(ui): update translation (German)
Currently translated at 78.1% (1107 of 1416 strings)

Co-authored-by: B N <berndnieschalk@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-02-11 08:40:55 +11:00
c5f069a255 feat(backend): remove dependency on basicsr
`basicsr` has a hard dependency on torchvision <= 0.16 and is unmaintained. Extract the code we need from it and remove the dep.

Closes #5108
2024-02-11 08:34:54 +11:00
cd169ee082 fix(nodes): deep copy graph inputs (#5686)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

The change to memory session storage brings a subtle behaviour change.

Previously, we serialized and deserialized everything (e.g. field state,
invocation outputs, etc.) constantly. This meant we were effectively
working with deep-copied objects at all times. We could mutate objects
freely without worrying about other references to the object.

With memory storage, objects are now passed around by reference, and we
cannot handle them in the same way.

This is problematic for nodes that mutate their own inputs. There are
two ways this causes a problem:

- An output is used as input for multiple nodes. If the first node
mutates the output object while `invoke`ing, the next node will get the
mutated object.
- The invocation cache stores live python objects. When a node mutates
an output pulled from the cache, the next node that uses the cached
object will get the mutated object.

The solution is to deep-copy a node's inputs as they are set,
effectively reproducing the same behaviour as we had with the SQLite
session storage. Nodes can safely mutate their inputs and those changes
never leave the node's scope.

## Related Tickets & Documents


- Closes  #5665

The root issue affects CLIP Skip because that node mutates its input
`ClipField`. Specifically, it increments `self.clip.skipped_layers` and
passes `self.clip` as its output. I don't know if there are any other
nodes that do this.

## QA Instructions, Screenshots, Recordings

Two issues to reproduce. 

First is the caching issue:


![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/7a251e48-bc70-4b8e-8816-84aac41ce4d3)

Note the cache is enabled. Run this simple graph a couple times, and
check the outputs of the CLIP Skip node. You'll see the `skipped_layers`
value increasing each time.

Second is the nodes-sharing-inputs issue:


![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/ecdaefab-2beb-4950-b4bf-2a5738ce6832)

Note the cache is _disabled_. Run the graph a couple times and check the
outputs of the two CLIP Skip nodes. You'll see that one has the expected
value for `skipped_layers` and the other has double that.

Now update to the PR and try again. You should see `skipped_layers` is
the right value in all cases.


## Merge Plan

This PR can be merged when approved. It needs a real review with
braintime.

2024-02-09 13:24:10 -05:00
66b106f107 Merge branch 'main' into fix/nodes/deep-copy-inputs 2024-02-09 11:49:16 -05:00
b10d745dae fix(ui): when using control image dimensions, round to 8
The control image dimensions were set directly without rounding them to 8, causing an error during generation if they weren't a multiple of 8.
2024-02-09 08:44:11 -05:00
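The rounding described above, as a minimal sketch (the helper name is hypothetical; Stable Diffusion requires image dimensions to be multiples of 8):

```python
def round_to_multiple(value: int, multiple: int = 8) -> int:
    """Round a dimension to the nearest multiple of `multiple`."""
    return int(round(value / multiple) * multiple)

# Control image dimensions are snapped to the latent grid before generation.
assert round_to_multiple(517) == 520
assert round_to_multiple(511) == 512
```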
d20f98fb4f fix(nodes): deep copy graph inputs
The change to memory session storage brings a subtle behaviour change.

Previously, we serialized and deserialized everything (e.g. field state, invocation outputs, etc.) constantly. This meant we were effectively working with deep-copied objects at all times. We could mutate objects freely without worrying about other references to the object.

With memory storage, objects are now passed around by reference, and we cannot handle them in the same way.

This is problematic for nodes that mutate their own inputs. There are two ways this causes a problem:

- An output is used as input for multiple nodes. If the first node mutates the output object while `invoke`ing, the next node will get the mutated object.
- The invocation cache stores live python objects. When a node mutates an output pulled from the cache, the next node that uses the cached object will get the mutated object.

The solution is to deep-copy a node's inputs as they are set, effectively reproducing the same behaviour as we had with the SQLite session storage. Nodes can safely mutate their inputs and those changes never leave the node's scope.

Closes  #5665
2024-02-09 21:17:32 +11:00
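A minimal sketch of the deep-copy-on-set idea described above (the `Node`/`set_input` names are illustrative, not InvokeAI's actual API):

```python
from copy import deepcopy
from typing import Any

class Node:
    """A node whose inputs are deep-copied as they are set."""

    def __init__(self) -> None:
        self.inputs: dict[str, Any] = {}

    def set_input(self, name: str, value: Any) -> None:
        # Deep-copy on set: mutations inside this node never leak back to a
        # cached output or to sibling nodes sharing the same source object.
        self.inputs[name] = deepcopy(value)

shared_output = {"skipped_layers": 0}
a, b = Node(), Node()
a.set_input("clip", shared_output)
b.set_input("clip", shared_output)

a.inputs["clip"]["skipped_layers"] += 1         # node A mutates its input
assert b.inputs["clip"]["skipped_layers"] == 0  # node B is unaffected
assert shared_output["skipped_layers"] == 0     # the shared output is unaffected
```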
c9c150f850 feat(ui): use cfgRescaleMultiplier on canvas graphs 2024-02-09 18:53:08 +11:00
a60e2b7c77 fix existing graphs with cfg_RescaleMultiplier not used 2024-02-09 18:53:08 +11:00
da6e5b2ba1 fix(ui): fix lora count badge when none enabled 2024-02-08 19:22:28 -05:00
c65d497cbc fix(ui): filter disabled LoRAs on sdxl 2024-02-08 19:22:28 -05:00
B N
a68d8fe203 translationBot(ui): update translation (German)
Currently translated at 74.4% (1054 of 1416 strings)

translationBot(ui): update translation (German)

Currently translated at 69.6% (986 of 1416 strings)

translationBot(ui): update translation (German)

Currently translated at 68.6% (972 of 1416 strings)

Co-authored-by: B N <berndnieschalk@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-02-09 10:10:50 +11:00
5de2288cfa addressed feedback 2024-02-09 10:09:27 +11:00
2ce70b4457 added button on hover for exposing fields to linear workflow ui 2024-02-09 10:09:27 +11:00
6c5f743e2b Upgrade version of fastapi and socketio 2024-02-09 09:04:01 +11:00
bb242c4e1e Print correct version when a non-default version is selected for install (#5675)

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description

Small bugfix: the installer would always print the latest stable version
as the one to be installed, even if a different one was selected. The
selected version would still be installed correctly. This PR fixes the
message.

## QA Instructions, Screenshots, Recordings

Select a pre-release version on install and observe the correct version
being printed. Compare to current behaviour to ascertain the fix.

## Merge Plan

- "This PR can be merged when approved"

## Added/updated tests?

- [ ] Yes
- [x] No
2024-02-08 11:07:14 -05:00
c9e246ed1b fix(installer): print correct version when a non-default version is selected 2024-02-08 09:56:56 -05:00
B N
2175fe3823 translationBot(ui): update translation (German)
Currently translated at 66.2% (938 of 1416 strings)

Co-authored-by: B N <berndnieschalk@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-02-08 07:57:55 +11:00
f64fc2c8b7 feat(installer): add a deprecation message to the in-launcher updater 2024-02-07 14:31:26 -05:00
3d1b5c57ea fix(installer): more reliably upgrade pip 2024-02-07 14:31:26 -05:00
31b9538976 feat(installer): improve directory selection experience 2024-02-07 14:31:26 -05:00
97c1545cca feat(installer): show latest versions in the welcome panel 2024-02-07 14:31:26 -05:00
6a8a3b50bc feat(installer): add an interactive version chooser 2024-02-07 14:31:26 -05:00
5a816818dc feat(installer): get list of (pre-)releases from github api 2024-02-07 14:31:26 -05:00
1cb866d1fc fix(installer): small formatting fix in welcome banner 2024-02-07 14:31:26 -05:00
29bcc4b595 fix(installer): slightly better typing for GPU selection 2024-02-07 14:31:26 -05:00
ca2bb6f0cc fix(installer): bubble up exceptions during install 2024-02-07 14:31:26 -05:00
1c8fc908b2 fix(installer): minor logic fixes 2024-02-07 14:31:26 -05:00
d397beaa47 fix(installer): upgrade the temporary pip before installation 2024-02-07 14:31:26 -05:00
60eea09629 feat(installer): *always* force-reinstall
This has repeatedly shown itself useful in fixing install issues,
especially regarding pytorch CPU/GPU version, so there is little
downside to making this the default.

Performance impact of this should be negligible. Packages will
be reinstalled from pip cache if possible, and downloaded only if
necessary. Impact may be felt on slower disks.
2024-02-07 14:31:26 -05:00
5b7b1122cb tidy(installer): clean up unused code 2024-02-07 14:31:26 -05:00
dfc8d1bb10 tidy(installer): remove unused argument / env var 2024-02-07 14:31:26 -05:00
f9fa62164e tidy(installer): remove .whl publishing and bundling - we now install from pypi 2024-02-07 14:31:26 -05:00
d47905d2fb chore(installer): reorder messages in util script
fail fast if there's a virtualenv activated
2024-02-07 14:31:26 -05:00
03b1cde97d tidy(installer): remove unused update scripts and references thereto 2024-02-07 14:31:26 -05:00
7162ff04df tidy(installer): do not preinstall torch separately 2024-02-07 14:31:26 -05:00
32b1e974ca feat(installer): install from PyPi instead of using prepackaged wheel 2024-02-07 14:31:26 -05:00
82c3c7fc38 tidy(installer): remove unused experimental venv location 2024-02-07 14:31:26 -05:00
3dcbb79ef7 chore(installer): typing pass 2024-02-07 14:31:26 -05:00
3b41104427 Minor dep updates for diffusers and numpy (#5673)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [X] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because probably not needed

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description

These are further minor dep updates that I was able to test without any
regressions. This ensures we are up-to-date again.
The fixes are very minor, probably not noticeable in InvokeAI (at least
for diffusers) but it's still good to have them.

This is also to make sure that the RC is releasing with the latest
packages to ensure extended testing.

Greetings

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Merge Plan


## Added/updated tests?

- [ ] Yes
- [ ] No

## [optional] Are there any post deployment tasks we need to perform?
2024-02-07 13:29:29 -05:00
35bf7ee66d Minor dep updates 2024-02-07 17:58:28 +01:00
430e17a5d2 community node: BriaAI RMBG 1.4 (#5671)
## What type of PR is this? (check all applicable)

- [x] Community Node Submission


## Description

- Adds BriaAI's new 1.4 model for background removal. Far superior
results from what I've tested compared to any other BG removal so far:
https://github.com/blessedcoolant/invoke_bria_rmbg
2024-02-07 11:06:31 -05:00
400d66fa5d community node: BriaAI RMBG 1.4 2024-02-07 19:55:04 +05:30
800c481515 add actions for workflow library (#5669)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-02-07 14:14:54 +00:00
79ae9c4e64 feat(nodes): move profiler/stats cleanup logic to function
Harder to miss something going forward.
2024-02-07 11:26:15 +11:00
0dc6cb0535 feat(nodes): do not log stats errors
The stats service was logging error messages when attempting to retrieve stats for a graph that it wasn't tracking. This was rather noisy.

Instead of logging these errors within the service, we now will just raise the error and let the consumer of the service decide whether or not to log. Our usage of the service at this time is to suppress errors - we don't want to log anything to the console.

Note: With the improvements in the previous two commits, we shouldn't get these errors moving forward, but I still think this change is correct.
2024-02-07 11:26:15 +11:00
810fc19e43 feat(nodes): log stats for canceled graphs
When an invocation is canceled, we consider the graph canceled. Log the graph's stats before resetting them. No reason not to log these stats.

We also should stop the profiler at this point, because this graph is finished. If we don't stop it manually, it will stop itself and write the profile to disk when it is next started, but the resultant profile will include more than just its target graph.

Now we get both stats and profiles for canceled graphs.
2024-02-07 11:26:15 +11:00
e0e106367d fix(nodes): do not clear invocation stats on invoke error
When an invocation errors, we clear the stats for the whole graph. Later on, we check the graph for errors, see the failed invocation, and consider the graph failed. We then attempt to log the stats for the failed graph.

Except now the failed graph has no stats, and the stats service raises an error.

The user sees, in the terminal:
- An invocation error
- A stats error (scary!)
- No stats for the failed graph (uninformative!)

What the user should see:
- An invocation error
- Graph stats

The fix is simple - don't reset the graph stats when an invocation has an error.
2024-02-07 11:26:15 +11:00
14472dc09d translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-02-05 11:16:38 +11:00
e8095b73ae feat(ui): improve types for language picker
Makes it impossible to miss a language or typo.
2024-02-05 10:47:36 +11:00
c979cf5ecc tidy(ui): remove language translation strings
There's no need to do things like translate Arabic into Finnish. We never use those strings. Remove these translations entirely.
2024-02-05 10:47:36 +11:00
1b4dbd283e fix(ui): hardcode language picker languages
Hardcode the options in the dropdown, don't rely on translators to fill this in.

Also, add a number of missing languages (Azerbaijani, Finnish, Hungarian, Swedish, Turkish).
2024-02-05 10:47:36 +11:00
fb50a221f8 fix(ui): fix color input field alpha
Closes #5647

The alpha values in the UI are `0-1` but the backend wants `0-255`.

Previously, this was handled in `parseFieldValue` when building the graph. In a recent release, field types were refactored, which broke the alpha handling.

The logic for handling alpha values is moved into `ColorFieldInputComponent`, and `parseFieldValue` now just does no value transformations.

Though it would be a minor change, I'm leaving this function in because I don't want to change the rest of the logic except when necessary.
2024-02-05 09:28:20 +11:00
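The underlying conversion is a plain range mapping; a sketch in Python (the real fix lives in the TypeScript `ColorFieldInputComponent`):

```python
def alpha_ui_to_backend(alpha: float) -> int:
    """Map a UI alpha in [0, 1] to the backend's [0, 255] range."""
    return round(alpha * 255)

def alpha_backend_to_ui(alpha: int) -> float:
    """Map a backend alpha in [0, 255] back to the UI's [0, 1] range."""
    return alpha / 255

assert alpha_ui_to_backend(1.0) == 255
assert alpha_backend_to_ui(255) == 1.0
```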
52e07db06b Update communityNodes.md
added Autostereogram nodes
2024-02-05 09:26:41 +11:00
6643b5cec4 feat(ui): log trace when skipping reserved input field type 2024-02-05 09:24:46 +11:00
e8bf9ea058 fix(ui): do not swallow errors during schema parsing
Unknown errors were swallowed during schema parsing. Now they log a warning.
2024-02-05 09:24:46 +11:00
ce3d37e829 fix(ui): handle fields with single option literal
Closes #5616

Turns out the OpenAPI schema definition for a pydantic field with a `Literal` type annotation is different depending on the number of options.

When there is a single value (e.g. `Literal["foo"]`), this results in a `const` schema object. The schema parser didn't know how to handle this, and displayed a warning in the JS console.

This situation is now handled. When a `const` schema object is encountered, we interpret that as an `EnumField` with a single option.

I think this makes sense - if you had a truly constant value, you wouldn't make it a field, so a `const` must mean a dynamically generated enum that ended up with only a single option.
2024-02-05 09:15:09 +11:00
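The `const` behaviour can be seen directly from pydantic (output shown is indicative of pydantic 2.x; exact schema keys may vary by version):

```python
from typing import Literal
from pydantic import BaseModel

class SingleOption(BaseModel):
    mode: Literal["foo"]           # one option -> `const` in the schema

class MultiOption(BaseModel):
    mode: Literal["foo", "bar"]    # several options -> `enum` in the schema

print(SingleOption.model_json_schema()["properties"]["mode"])
# e.g. {'const': 'foo', 'title': 'Mode'}
print(MultiOption.model_json_schema()["properties"]["mode"])
# e.g. {'enum': ['foo', 'bar'], 'title': 'Mode', 'type': 'string'}
```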
8a61063e84 translationBot(ui): update translation (Turkish)
Currently translated at 57.5% (825 of 1433 strings)

Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2024-02-05 08:31:44 +11:00
87ff96553a translationBot(ui): update translation files
Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-02-05 08:31:44 +11:00
209bf105bc translationBot(ui): update translation (Turkish)
Currently translated at 57.3% (822 of 1433 strings)

Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2024-02-05 08:31:44 +11:00
804dbeba34 translationBot(ui): update translation files
Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-02-05 08:31:44 +11:00
067cd4dc2e translationBot(ui): update translation (Turkish)
Currently translated at 40.6% (582 of 1433 strings)

translationBot(ui): update translation (Turkish)

Currently translated at 38.8% (557 of 1433 strings)

Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2024-02-05 08:31:44 +11:00
feb4a3f242 translationBot(ui): update translation (Azerbaijani)
Currently translated at 0.1% (1 of 1433 strings)

translationBot(ui): added translation (Azerbaijani)

Co-authored-by: Mehrab Poladov <thepoladov@protonmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/az/
Translation: InvokeAI/Web UI
2024-02-05 08:31:44 +11:00
4a886c0a4a Minor dep updates 2024-02-04 13:04:36 -05:00
8e500283b6 Fix broken import in checkpoint_convert (#5635)
* Fix broken import in checkpoint_convert

* simplify the fix

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-02-04 12:56:51 +00:00
3205371654 feat(ui): better error handling for persist serialize function 2024-02-03 07:39:19 -05:00
d713620d9e refactor(ui): refactor reducer list
Instead of manually naming reducers, use each slice's `name` property. Makes typos impossible.
2024-02-03 07:39:19 -05:00
c1300fa8b1 refactor(ui): refactor persist config
Add more structure around persist configs to avoid bugs from typos and misplaced persist denylists.
2024-02-03 07:39:19 -05:00
0976ddba23 chore(invocation-stats): improve types in _prune_stale_stats 2024-02-03 07:34:06 -05:00
3ebb806410 fix(invocation-stats): use appropriate method to get the type of an invocation 2024-02-03 07:34:06 -05:00
9f274c79dc chore(item-storage): improve types
Provide type args to the generics.
2024-02-03 07:34:06 -05:00
88c08bbfc7 fix(item-storage-memory): throw when requested item does not exist
- `ItemStorageMemory.get` now throws an `ItemNotFoundError` when the requested `item_id` is not found.
- Update docstrings in ABC and tests.

The new memory item storage class implemented the `get` method incorrectly, returning `None` if the item didn't exist.

The ABC typed `get` as returning `T`, while the SQLite implementation typed `get` as returning `Optional[T]`. The SQLite implementation was referenced when writing the memory implementation.

This mismatched typing is a violation of the Liskov substitution principle, because the signature of `get` in the implementation is wider than the abstract class's definition. Using `pyright` in strict mode catches this.

In `invocation_stats_default`, this introduced an error. The `_prune_stats` method calls `get`, expecting the method to throw if the item is not found. If the graph is no longer stored in the bounded item storage, we will call `is_complete()` on `None`, causing the error.

Note: This error condition never arose in the SQLite implementation because it parsed the item with pydantic before returning it, which would throw if the item was not found. It implicitly threw, while the memory implementation did not.
2024-02-03 07:34:06 -05:00
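A minimal sketch of the narrowed contract (class and error names follow the commit text; the rest is illustrative):

```python
from abc import ABC, abstractmethod
from typing import Generic, TypeVar

T = TypeVar("T")

class ItemNotFoundError(KeyError):
    def __init__(self, item_id: str) -> None:
        super().__init__(f"Item with id {item_id} not found")

class ItemStorageABC(ABC, Generic[T]):
    @abstractmethod
    def get(self, item_id: str) -> T:
        """Return the item; raises ItemNotFoundError if it does not exist."""
        ...

class ItemStorageMemory(ItemStorageABC[T]):
    def __init__(self) -> None:
        self._items: dict[str, T] = {}

    def get(self, item_id: str) -> T:
        # Returning Optional[T] here would widen the signature relative to
        # the ABC and violate the Liskov substitution principle - raise instead.
        try:
            return self._items[item_id]
        except KeyError:
            raise ItemNotFoundError(item_id) from None
```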
c2af124622 fix(ui): refetch intermediates count when settings modal open
The `getIntermediatesCount` query is set to `refetchOnMountOrArgChange`. The intention was that when the settings modal opens (i.e. mounts), the `getIntermediatesCount` query would be refetched. But it doesn't work - modals only mount once; there is no lazy rendering for them.

So we have to imperatively refetch, by refetching as we open the modal.

Closes #5639
2024-02-03 12:14:37 +11:00
f972fe9836 perf: annotate 2024-02-03 10:18:26 +11:00
dcfc883ab3 perf: remove TypeAdapter 2024-02-03 10:18:26 +11:00
1d2bd6b8f7 perf: TypeAdapter instantiated once 2024-02-03 10:18:26 +11:00
f2777f5096 Port the command-line tools to use model_manager2 (#5546)
* Port the command-line tools to use model_manager2

1.Reimplement the following:

  - invokeai-model-install
  - invokeai-merge
  - invokeai-ti

  To avoid breaking the original model manager, the updated tools
  have been renamed invokeai-model-install2 and invokeai-merge2. The
  textual inversion training script should continue to work with
  existing installations. The "starter" models now live in
  `invokeai/configs/INITIAL_MODELS2.yaml`.

  When the full model manager 2 is in place and working, I'll rename
  these files and commands.

2. Add the `merge` route to the web API. This will merge two or three models,
   resulting in a new one.

   - Note that because the model installer selectively installs the `fp16` variant
     of models (rather than both 16- and 32-bit versions as previously),
     the diffusers merge script will choke on any huggingface diffusers models
     that were downloaded with the new installer. Previously-downloaded models
     should continue to merge correctly. I have a PR
     upstream https://github.com/huggingface/diffusers/pull/6670 to fix
     this.

3. (more important!)
  During implementation of the CLI tools, found and fixed a number of small
  runtime bugs in the model_manager2 implementation:

  - During model database migration, if a registered models file was
    not found on disk, the migration would be aborted. Now the
    offending model is skipped with a log warning.

  - Caught and fixed a condition in which the installer would download the
    entire diffusers repo when the user provided a single `.safetensors`
    file URL.

  - Caught and fixed a condition in which the installer would raise an
    exception and stop the app when a request for an unknown model's metadata
    was passed to Civitai. Now an error is logged and the installer continues.

  - Replaced the LoWRA starter LoRA with FlatColor. The former has been removed
    from Civitai.

* fix ruff issue

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-02-02 17:18:47 +00:00
d3320dc4ee convert checkpoints to safetensors (#5620)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description

It seems we elected to convert checkpoints into `.bin` files when we set this
up. Converting them to `.safetensors` doesn't seem to corrupt them anymore.

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Merge Plan


## Added/updated tests?

- [ ] Yes
- [ ] No

## [optional] Are there any post deployment tasks we need to perform?
2024-02-02 10:27:24 -05:00
72db2ee352 Merge branch 'main' into sdxl-convert-safetensors 2024-02-02 10:10:49 -05:00
60c3a4ad5e chore: add Hand Refiner to communityNodes.md 2024-02-02 08:12:32 -05:00
cf7a7928af Update mkdocs.yml 2024-02-01 20:43:49 -05:00
1057314508 Fix ruff? 2024-02-01 20:40:28 -05:00
73a077956b Why did my IDE change the comment? 2024-02-01 20:40:28 -05:00
5e1e50bd47 Fix hopefully last import 2024-02-01 20:40:28 -05:00
413fe566b8 Fix imports 2024-02-01 20:40:28 -05:00
c9b5f06c42 Update diffusers + hotfix 2024-02-01 20:40:28 -05:00
b53e432b0f translationBot(ui): update translation (German)
Currently translated at 60.8% (871 of 1432 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-02-02 11:16:45 +11:00
88164447e9 fix(ui): hide HRF if SDXL model selected 2024-02-02 11:10:54 +11:00
1ac85fd049 tidy(migrator): remove logic to check if graph_executions exists in migration 5
Initially I wanted to show how many sessions were being deleted. In hindsight, this is not great:
- It requires extra logic in the migrator, which should be as simple as possible.
- It may be alarming to see "Clearing 224591 old sessions".

The app still reports on freed space during the DB startup logic.
2024-02-02 09:20:41 +11:00
ee6fc4ab1d chore(item_storage): excise SqliteItemStorage 2024-02-02 09:20:41 +11:00
9f793bdae8 feat(item_storage): implement item_storage_memory with LRU eviction strategy
Implemented with OrderedDict.
2024-02-02 09:20:41 +11:00
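A minimal sketch of LRU eviction with `OrderedDict`, as the commit describes (the `max_size` default and method names are illustrative):

```python
from collections import OrderedDict
from typing import Generic, TypeVar

T = TypeVar("T")

class LRUMemoryStorage(Generic[T]):
    """Bounded in-memory storage; the least-recently-used item is evicted first."""

    def __init__(self, max_size: int = 10) -> None:
        self._max_size = max_size
        self._items: OrderedDict[str, T] = OrderedDict()

    def set(self, item_id: str, item: T) -> None:
        self._items[item_id] = item
        self._items.move_to_end(item_id)       # mark as most recently used
        if len(self._items) > self._max_size:
            self._items.popitem(last=False)    # evict the LRU entry

    def get(self, item_id: str) -> T:
        item = self._items[item_id]            # KeyError if missing
        self._items.move_to_end(item_id)       # a read also refreshes recency
        return item
```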
a0eecaecd0 feat(item_storage): implement item_storage_memory max_size
Implemented with unordered dict and set.
2024-02-02 09:20:41 +11:00
d532073f5b fix(db): check for graph_executions table before dropping
This is needed to not fail tests; see comment in code.
2024-02-02 09:20:41 +11:00
198e8c9d55 feat(db): add migration 5 to drop graph_executions table 2024-02-02 09:20:41 +11:00
30367deeca feat(nodes): use memory item storage 2024-02-02 09:20:41 +11:00
e73298aea2 tidy(item_storage): remove extraneous class attribute declarations 2024-02-02 09:20:41 +11:00
59279851a3 tidy(item_storage): remove unused list and search methods 2024-02-02 09:20:41 +11:00
2965357d99 feat(nodes): add ItemStorageMemory
The SQLite item storage class can be swapped for this to eliminate costly network calls.
2024-02-02 09:20:41 +11:00
8bd32ee142 feat(nodes): add delete method to ItemStorageABC 2024-02-02 09:20:41 +11:00
a4f892dcfb tidy(nodes): remove unused get_raw method on ItemStorageABC 2024-02-02 09:20:41 +11:00
e675983e20 fix(ui): download image opens in new tab (#5625)
* fix(ui): download image opens in new tab

In some environments, a simple `a` element cannot trigger a download of an image. Fetching the image directly can get around this and provide more reliable download functionality.

* use hook for imageUrlToBlob so token gets sent if needed

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-02-01 20:25:01 +00:00
e9558f97c4 perf(config): change default png_compress_level to 1
This substantially reduces the time spent encoding PNGs. In workflows with many image outputs, this is a drastic improvement.

For a tiled upscaling workflow going from 512x512 to a scale factor of 4, this can provide a speed increase of over 15%.
2024-02-02 00:32:00 +11:00
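For reference, the setting maps to Pillow's PNG `compress_level` (0-9, default 6); a quick sketch of the tradeoff:

```python
from io import BytesIO
from PIL import Image

img = Image.new("RGB", (512, 512), "white")

for level in (1, 6):  # 6 is Pillow's default; 1 trades file size for speed
    buf = BytesIO()
    img.save(buf, format="PNG", compress_level=level)
    print(f"compress_level={level}: {buf.tell()} bytes")
```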
a1a611f8cb chore(ui): lint 2024-02-02 00:20:28 +11:00
182dc859a0 chore(ui): update eslint rules
- Add `i18next/no-literal-string` (was removed from upstream config)
- Restore `path/no-relative-imports`, this was lost in the shuffle a while ago
2024-02-02 00:20:28 +11:00
c0240a8568 chore(ui): bump @invoke-ai/eslint-config-react 2024-02-02 00:20:28 +11:00
02bcff29e8 feat: update ROCm to 5.6 everywhere 2024-02-01 00:07:16 -05:00
d4ed64df7d feat: add force-reinstall option to the updater 2024-02-01 00:07:16 -05:00
701f14c1e3 fix: add PyTorch extra-index-url to the updater command 2024-02-01 00:07:16 -05:00
45bf2c7da6 chore(updater): address deprecation of pkg_resources
as per module docstring:
This module is deprecated. Users are directed to importlib.resources,
importlib.metadata and packaging instead.
2024-02-01 00:07:16 -05:00
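The replacement is straightforward for version lookups; a sketch (assuming the package is installed):

```python
# Before (deprecated):
#   from pkg_resources import get_distribution
#   version = get_distribution("invokeai").version

# After, using only the standard library:
from importlib.metadata import version

print(version("invokeai"))
```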
67ada70a26 docs: update link to frontend README 2024-01-31 22:34:59 -05:00
06bcc07f65 Merge branch 'main' into sdxl-convert-safetensors 2024-01-31 17:00:19 -05:00
4410ecf62c fix(stats): log errors at error level
They were erroneously at warning before.
2024-02-01 08:50:56 +11:00
9f6b9d4d23 fix(stats): preserve stack when raising GESStatsNotFoundError 2024-02-01 08:50:56 +11:00
b24e8dd829 feat(stats): refactor InvocationStatsService to output stats as dataclasses
This allows the stats to be written to disk as JSON and analyzed.

- Add dataclasses to hold stats.
- Move stats pretty-print logic to `__str__` of the new `InvocationStatsSummary` class.
- Add `get_stats` and `dump_stats` methods to `InvocationStatsServiceBase`.
- `InvocationStatsService` now throws if stats are requested for a session it doesn't know about. This avoids needing to do a lot of messy null checks.
- Update `DefaultInvocationProcessor` to use the new stats methods and suppresses the new errors.
2024-02-01 08:50:56 +11:00
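A minimal sketch of the dataclass-based summary (field and class names loosely follow the commit; they are illustrative):

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class NodeStatsSummary:
    node_type: str
    calls: int
    time_used_seconds: float

@dataclass
class InvocationStatsSummary:
    graph_id: str
    nodes: list[NodeStatsSummary]

    def __str__(self) -> str:
        # Pretty-print for the console, as before.
        lines = [f"Graph {self.graph_id}:"]
        for n in self.nodes:
            lines.append(f"  {n.node_type:<24} {n.calls:>3} calls  {n.time_used_seconds:.3f}s")
        return "\n".join(lines)

    def as_json(self) -> str:
        # Serializable, so stats can be written to disk and analyzed.
        return json.dumps(asdict(self), indent=2)

summary = InvocationStatsSummary("abc123", [NodeStatsSummary("clip_skip", 1, 0.012)])
print(summary)
print(summary.as_json())
```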
25291a2e01 select first image if no selectedImageName 2024-01-31 11:52:47 -05:00
332f3930a5 Allow civit ai API Key on Imports (#5608)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description
Small PR to allow users to pass in a civit api key via config options

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Merge Plan


## Added/updated tests?

- [ ] Yes
- [ ] No

## [optional] Are there any post deployment tasks we need to perform?
2024-01-31 10:51:33 -05:00
ed466a99ec Merge branch 'main' into fix-civit-model-imports 2024-01-31 10:12:44 -05:00
f68f8898c0 Workflow navigation & save-as (#5607)
* redo top panel of workflow editor

* add checkbox option to save to project, integrate save-as flow into first time saving workflow

* remove log

* remove workflowLibrary as a feature that can be disabled

* lint

* feat(ui): make SaveWorkflowAsDialog a singleton

Fixes an issue where the workflow name would erroneously be an empty string (when it should show the current workflow name).

Also makes it easier to interact with this component.

- Extract the dialog state to a hook
- Render the dialog once in `<NodeEditor />`
- Use the hook in the various buttons that should open the dialog
- Fix a few wonkily named components (pre-existing issue)

* fix(ui): when saving a never-before-saved workflow, do not append " (copy)" to the name

* fix(ui): do not obscure workflow library button with add node popover

This component is kinda janky :/ the popover content somehow renders invisibly over the button. I think it's related to the `<PopoverAnchor />`.

Need to redo this in the future, but for now, making the popover render lazily fixes this.

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-31 13:32:31 +00:00
a0996b1c0a Fix ruff styling 2024-01-31 07:16:14 -06:00
522ff4a042 civit -> civitai 2024-01-31 07:16:14 -06:00
a769f93be0 Remove unnecessary change 2024-01-31 07:16:14 -06:00
2c5ef92979 Move location of config property, comment for explanation of use 2024-01-31 07:16:14 -06:00
5d773dc94c Remove debug line 2024-01-31 07:16:14 -06:00
088e3420e6 Allow passing of civit api key via config 2024-01-31 07:16:14 -06:00
14efc95707 Allow passing of a civit api key 2024-01-31 07:16:14 -06:00
f48a2c5fd2 fix(ui): workflow settings styling
Got borked in the redesign.
2024-01-31 07:16:01 -06:00
74ae4d7774 translationBot(ui): update translation files
Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-01-31 23:05:11 +11:00
191203ea0c translationBot(ui): update translation (Turkish)
Currently translated at 36.1% (516 of 1427 strings)

Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2024-01-31 23:05:11 +11:00
6aceae5c22 translationBot(ui): update translation (Italian)
Currently translated at 97.2% (1388 of 1427 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-01-31 23:05:11 +11:00
8c6b3efd39 fix(ui): remove hard reset of cursor on canvas during state reset
Remove resetting the cursor when resetting state, letting event handlers take care of presentation
2024-01-31 23:03:14 +11:00
4602efd598 feat: add profiler util (#5601)
* feat(config): add profiling config settings

- `profile_graphs` enables graph profiling with cProfile
- `profiles_dir` sets the output for profiles

* feat(nodes): add Profiler util

Simple wrapper around cProfile.

* feat(nodes): use Profiler in invocation processor

* scripts: add generate_profile_graphs.sh script

Helper to generate graphs for profiles.

* pkg: add snakeviz and gprof2dot to dev deps

These are useful for profiling.

* tests: add tests for profiler util

* fix(profiler): handle previous profile not stopped cleanly

* feat(profiler): add profile_prefix config setting

The prefix is used when writing profile output files. Useful to organise profiles into sessions.

* tidy(profiler): add `_` to private API

* feat(profiler): simplify API

* feat(profiler): use child logger for profiler logs

* chore(profiler): update docstrings

* feat(profiler): stop() returns output path

* chore(profiler): fix docstring

* tests(profiler): update tests

* chore: ruff
2024-01-31 10:51:57 +00:00
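A minimal sketch of such a cProfile wrapper, with `stop()` returning the output path as the commits describe (directory and file naming are illustrative):

```python
import cProfile
import time
from pathlib import Path

class Profiler:
    """Thin wrapper around cProfile."""

    def __init__(self, output_dir: Path, prefix: str = "") -> None:
        self._output_dir = output_dir
        self._prefix = prefix
        self._profiler: cProfile.Profile | None = None

    def start(self) -> None:
        if self._profiler is not None:
            self._profiler.disable()  # previous profile was not stopped cleanly
        self._profiler = cProfile.Profile()
        self._profiler.enable()

    def stop(self) -> Path:
        assert self._profiler is not None, "profiler was never started"
        self._profiler.disable()
        self._output_dir.mkdir(parents=True, exist_ok=True)
        path = self._output_dir / f"{self._prefix}{int(time.time())}.prof"
        self._profiler.dump_stats(path)
        self._profiler = None
        return path  # view with e.g. snakeviz or gprof2dot
```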
f70c0936ca feat: disable/enable LoRAs with a switch (#5591)
* feat:  disable/enable LoRAs with a switch

* feat:  visually display previous weight when disabled

* style: 🚨 linting

* feat:  lora badge count reflects active loras

* style: 🚨 linting

* feat:  track disabled lora on state instead of weight

* style: 🚨 linting

* feat:  it all works now

tracking isEnabled on lora state, disabled slider when disabled, removed disabled loras from graph, updated badge counting and renamed lora add function

* style: 🚨 linting

* fix: 🐛 enabledLoRAs filter nullish coalescing

* refactor: 🎨 minor changes

renamed lora toggle action, removed errant comment, removed extraneous type annotation

* style: 🚨 linting
2024-01-31 05:50:03 +00:00
0d4de4cc63 changed hotkeys (#5542)
Adds ctrl/meta + scroll to change brush size on canvas.

* changed hotkeys

* new hotkey as an additional

* lint fixed

* added ctrl scroll and removed hotkey

* using

* added fix

* feedback changes

* brush size change logic

* feat(ui): also check for meta key when modifying brush size

* feat(ui): add comment linking to where brush size algo was determined

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-31 15:57:16 +11:00
1e855f8290 Update safetensors and transformers to their latest versions (#5562)
* Update Safetensors to the lastest version

* Update Transformers while at it

* Update transformers again
2024-01-31 04:54:56 +00:00
bb2787584d chore(deps-dev): bump vite in /invokeai/frontend/web
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 5.0.11 to 5.0.12.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v5.0.12/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v5.0.12/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-31 15:47:13 +11:00
a04981b418 This seems to work now 2024-01-30 21:32:08 -05:00
d7f16b7c87 fix(ui): the bottom button on floating side panel clears all queue items 2024-01-31 01:04:24 +11:00
4477e04d59 fix(ui): filter out interactive targets when pressing space on canvas tab
Improve input filtering for better accessibility
2024-01-30 09:56:21 +11:00
30e11b4b42 feat(ui): save the current staging image with shift+s 2024-01-30 09:56:21 +11:00
b93695b78f feat(ui): discard all staging images in canvas on escape 2024-01-30 09:56:21 +11:00
b01311813b fix(ui): activate move tool on pressing space
The canvas element is not guaranteed to be in focus (e.g. after accepting a new staging image), so we check the active tab name instead
2024-01-30 09:56:21 +11:00
5ae80fab87 fix(ui): accept staging image hotkey callback 2024-01-30 09:56:21 +11:00
c4291f2136 fix(ui): block gallery navigation when staging images on canvas 2024-01-30 09:56:21 +11:00
287d3c2b04 add UI library to rollup config (#5598)
* try rolling up ui library

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-29 13:13:09 -05:00
7fde19730e translationBot(ui): update translation (Turkish)
Currently translated at 22.8% (326 of 1426 strings)

Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2024-01-29 14:15:29 +11:00
13575642d8 chore: update issue template
- Improve spelling and grammar
- Add browser, GPU model, python deps fields
- Revise other fields
2024-01-29 14:11:00 +11:00
3f5370b284 feat(ui): add a copy button to the about modal
This copies the dependencies as JSON.
2024-01-28 20:50:08 -06:00
d048eb5b20 docs(ui): add STATE_MGMT.md
Supersedes the mini nanostores doc.
2024-01-29 07:28:20 +11:00
dd7031a472 docs(ui): update README.md
Also moved it to the frontend package root
2024-01-29 07:28:20 +11:00
4160d5ef26 update contributors list to bring into sync with discord roles (#5586)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

This brings `docs/other/CONTRIBUTORS.md` into sync with collaborator
roles in Discord as of January 27, 2024.

## Related Tickets & Documents


## QA Instructions, Screenshots, Recordings

N/A


## Merge Plan

Merge when approved.


## Added/updated tests?

- [ ] Yes
- [ ] No

## [optional] Are there any post deployment tasks we need to perform?
2024-01-28 11:28:22 -05:00
51bdf2fd19 Merge branch 'main' into docs/update-contributors 2024-01-28 11:26:35 -05:00
6a44697911 translationBot(ui): update translation (Turkish)
Currently translated at 10.5% (151 of 1426 strings)

translationBot(ui): update translation (Turkish)

Currently translated at 8.1% (116 of 1426 strings)

translationBot(ui): update translation (Turkish)

Currently translated at 6.6% (95 of 1426 strings)

Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2024-01-28 22:27:25 +11:00
7a1d0ec228 translationBot(ui): update translation files
Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-01-28 22:27:25 +11:00
b5928fd411 translationBot(ui): update translation (Italian)
Currently translated at 97.2% (1387 of 1426 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-01-28 22:27:25 +11:00
2f345d1976 chore(ui): lint 2024-01-28 19:57:53 +11:00
f5d0721fa8 chore(ui): bump @invoke-ai/eslint-config-react 2024-01-28 19:57:53 +11:00
c3b36cb61d chore(ui): remove chakra CLI
It doesn't work now that the theme is external. I'm not sure how to fix it and not sure if it really did much (I don't think I ever got autocomplete...). Maybe it can be implemented in `@invoke-ai/ui-library`.
2024-01-28 19:57:53 +11:00
189c430e46 chore(ui): format
Lots changed because the line length is now 120. May as well do it now.
2024-01-28 19:57:53 +11:00
b922ee566a chore(ui): use new prettier config 2024-01-28 19:57:53 +11:00
89da69f647 fix(ui): correct import in ReduxInit 2024-01-28 19:57:53 +11:00
138caa34de chore(ui): lint 2024-01-28 19:57:53 +11:00
26c3378ede chore(ui): use new eslint config, add some overrides 2024-01-28 19:57:53 +11:00
aa134a2db8 chore(ui): remove postinstall script 2024-01-28 19:57:53 +11:00
d0391cb430 chore(ui): bump @invoke-ai/ui-library, add @invoke-ai/eslint-config-react & @invoke-ai/prettier-config-react 2024-01-28 19:57:53 +11:00
c955ea9de0 Update CONTRIBUTORS.md 2024-01-27 17:04:32 -05:00
fc29a5d439 update contributors list to bring into sync with discord roles 2024-01-27 16:59:56 -05:00
7e9942dbab {fix} install docs house keeping (#5583)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
- Update docs to make link to automated installer easier to find
- Fixed issue in SDXL + refiner example workflow 

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings
Read over docs changes

## Merge Plan
Merge when approved

## [optional] Are there any post deployment tasks we need to perform?
Deploy new docs
2024-01-27 12:10:47 -05:00
c003967eaa Merge branch 'main' into feat/install_docs_update 2024-01-27 11:55:19 -05:00
b28fcc6be5 lint 2024-01-27 21:36:42 +11:00
418cdbabb7 add option for workflowCategories 2024-01-27 21:36:42 +11:00
18e61e92d9 {fix} install docs house keeping 2024-01-26 21:19:48 -06:00
de20711637 add nanostore for OpenAPI schema 2024-01-27 12:43:47 +11:00
55e91b97be dep 2024-01-27 12:43:47 +11:00
f79bbd2d6e account for baseUrl 2024-01-27 12:43:47 +11:00
e1c2c3905d Github action for ensuring PRs are labeled in a way that makes it easy to distinguish what's being changed (#5543)

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description


## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Merge Plan


## Added/updated tests?

- [ ] Yes
- [x] No

## [optional] Are there any post deployment tasks we need to perform?
2024-01-25 20:37:39 -05:00
03ac93bfc7 Merge branch 'main' into pr-labeler 2024-01-25 20:36:12 -05:00
89da976949 workflow library updates (#5568)
* don't show duplicate toasts if workflow actions fail due to auth

* dynamic order by options based on projectId

* add endpointName to auth toast to make it unique per endpoint

* lint

* update toast logic to check based on endpoint name w type safety

* fix save-as endpoint name

* lint

* fix type

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-25 11:43:47 -05:00
57dafd294d {release} v3.6.2
## What type of PR is this? (check all applicable)

Invoke v3.6.2 release


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Invoke v3.6.2

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

[InvokeAI-installer-v3.6.2.zip](https://github.com/invoke-ai/InvokeAI/files/14046191/InvokeAI-installer-v3.6.2.zip)
2024-01-24 22:05:10 -05:00
e611baa4b4 {release} v3.6.2 2024-01-24 21:40:03 -05:00
fc448d5b6d feat(ui): handle proxy configs rewriting paths
We can't assume that the base URL is `host:port/` - it could be `host:port/some/path/`. Make the path handling dynamic to account for this.
2024-01-25 13:29:56 +11:00
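The pitfall, illustrated with Python's `urljoin` (the actual handling is in the UI's fetch layer):

```python
from urllib.parse import urljoin

base = "http://host:9090/some/path/"

# A root-relative request silently drops the proxy's path prefix:
print(urljoin(base, "/api/v1/images"))  # -> http://host:9090/api/v1/images

# A relative path preserves it:
print(urljoin(base, "api/v1/images"))   # -> http://host:9090/some/path/api/v1/images
```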
e59954f956 fix workflow updating (#5567)
* retain id through workflow state so that we correctly update or save new workflows

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-24 16:10:19 -05:00
e160cbb1e9 Merge branch 'main' into pr-labeler 2024-01-24 15:44:35 -05:00
86c857b9c2 {release} v3.6.1 (#5564)
## What type of PR is this? (check all applicable)

Invoke 3.6.1 release

## QA Instructions, Screenshots, Recordings

[InvokeAI-installer-v3.6.1.zip](https://github.com/invoke-ai/InvokeAI/files/14041411/InvokeAI-installer-v3.6.1.zip)


## Merge Plan

This PR can be merged when approved

## [optional] Are there any post deployment tasks we need to perform?
PyPI Release & GitHub Release
2024-01-24 12:31:10 -05:00
0a13d7d2c7 {release} v3.6.1 2024-01-24 11:27:36 -05:00
68da5c6d22 feat: Add Depth Anything PreProcessor (#5548)
## What type of PR is this? (check all applicable)

- [x] Feature

## Have you discussed this change with the InvokeAI team?
- [x] Yes

## Have you updated all relevant documentation?
- [x] No


## Description

- This adds the newly released Depth Anything to InvokeAI. A new node
`Depth Anything Processor` has been added to generate depth maps using
this new technique. https://depth-anything.github.io

- All related checkpoints will be downloaded automatically on first
boot. The `DinoV2` models will be loaded to your torch cache dir and the
checkpoints pertaining to Depth Anything will be downloaded to
`any/annotators/depth_anything`.

- Alternatively you can find the checkpoints here and download them to
that folder:
https://huggingface.co/spaces/LiheYoung/Depth-Anything/tree/main/checkpoints

- This depth map can be used with any depth ControlNet model out there
but the folks at DepthAnything have also released a custom fine tuned
ControlNet model. From my limited testing, I still prefer the original
depth model because this one seems to be producing weird artifacts. Not
sure if that is a problem specific to Invoke or just the model itself.
I'll test more later. Place these in your controlnet folder like your
other ControlNets. You can get that here:
https://huggingface.co/spaces/LiheYoung/Depth-Anything/tree/main/checkpoints_controlnet

- Also available in the LinearUI

- DepthAnything has three models `large`, `base` and `small` -- I've
defaulted the processor to small but a user can change to the large
model if they wish to do so. Small is way faster but obviously somewhat
of a lesser quality.

- DepthAnything is now the default processor for depth controlnet
models.

## Screenshots


![opera_o3jHnWxVRi](https://github.com/invoke-ai/InvokeAI/assets/54517381/573c66f3-1492-45b0-b6df-25756f5e1d1a)

## Merge Plan

DO NOT MERGE YET. Test it first and I'm sure the model caching can be
done better. Coz I don't think I've done that at all. I would appreciate
if @brandonrising or @lstein or anyone can take a look at that part of
it.
2024-01-24 19:14:34 +05:30
f82744b95e fix: linting issues 2024-01-24 18:45:54 +05:30
5a67bc68a1 Merge branch 'main' into depth-anything 2024-01-23 22:31:19 -06:00
61cf4d4c70 feat: "Remix Image" option on images (#5553)
* feat:  "Remix Image" option on images

Adds a new "remix image" option where applicable, recalls all metadata except the seed

* refactor: 🚨 lint code

* feat:  add new remix hotkey to hotkeys modal

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-24 00:45:30 +00:00
9d20a2d5a3 Update huggingface deps to their lastest versions 2024-01-24 11:14:21 +11:00
8b0ac451e3 translationBot(ui): update translation (Italian)
Currently translated at 97.3% (1378 of 1415 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-01-24 11:05:17 +11:00
470dbe75a2 translationBot(ui): update translation (German)
Currently translated at 60.0% (850 of 1415 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-01-24 11:05:17 +11:00
b7d19b8130 add project as category to back-end 2024-01-24 10:59:04 +11:00
3dc13221d8 add project as a workflow category in the front-end 2024-01-24 10:59:04 +11:00
35184dbd86 fix: incorrect local file path 2024-01-24 03:37:16 +05:30
0868fc2558 Merge branch 'main' into depth-anything 2024-01-24 03:36:25 +05:30
92fb09c4df fix: Move the models to any folder to avoid boot warnings 2024-01-24 03:35:37 +05:30
b4cf5496b6 fix(ui): handle model names with spaces
Remove `trim()` from model identifier schema, which prevented parsed model identifiers from matching.

The root issue here is that model names are identifiers. This will be resolved in the model manager refactor.

Closes #5556
2024-01-23 15:48:10 -06:00
a0e68705dd feat(ui): improved dynamic prompts behaviour
- Bump `@invoke-ai/ui` for updated styles
- Update regex to parse prompts with newlines
- Update styling of overlay button when prompt has an error
- Fix bug where loading and error state sometimes weren't cleared
2024-01-23 15:26:12 -06:00
7cb49e65bd feat: Add Resolution to DepthAnything 2024-01-23 14:13:50 -06:00
39fedb090b feat: Make depth anything the default processor for depth controlnet 2024-01-23 14:13:50 -06:00
f36a691219 feat: Make the depth anything small model the default 2024-01-23 14:13:50 -06:00
6a2eb1d2e4 fix: Change the path of the annotator folder to annotators
Just making this change in case there are other models added to the folder in the future
2024-01-23 14:13:50 -06:00
13123daa3f feat: Add DepthAnything to Linear UI 2024-01-23 14:13:50 -06:00
c859eb865e fix: lint & other minor issues 2024-01-23 14:13:50 -06:00
8f5e2cbcc7 feat: Add Depth Anything PreProcessor 2024-01-23 14:13:50 -06:00
2aed6e2dba fix(ui): duplicate "base model" in merge UI
closes #5505
2024-01-23 14:13:18 -06:00
52b51a6088 fix(ui): recall/use size sets aspect ratio correctly
Added a new action that resets the aspect ratio when dispatched.

Closes #5456
2024-01-23 14:13:18 -06:00
52b24e01e2 feat(ui): remove chakra as direct dependency
Moved a number of things to `@invoke-ai/ui` to support this.

Unfortunately, the bundle size has increased a bit. I will work on that later.
2024-01-23 14:13:18 -06:00
1178fd8bd3 fix(ui): fix styling of some form elements 2024-01-23 14:13:18 -06:00
a0187cc9df fix(ui): remove unused import in storybook 2024-01-23 14:13:18 -06:00
2f656cc357 fix(ui): fix field context menu jank
Closes #5551
2024-01-23 14:13:18 -06:00
71f9ac9985 feat(ui): scrollable areas support x-axis scrolling
Closes #5490
2024-01-23 14:13:18 -06:00
8bbdfc45fa fix(ui): increase size of control adapters advanced toggle button 2024-01-23 14:13:18 -06:00
3cbb1a7671 chore(ui): bump @invoke-ai/ui
This includes a minor enhancement, increasing the contrast on tabs.
2024-01-23 14:13:18 -06:00
b74e0de74a tidy(ui): remove unused state from uiSlice 2024-01-23 14:13:18 -06:00
e7e7793896 feat(ui): remember open/closed state of accordions/expanders 2024-01-23 14:13:18 -06:00
504bdac14a tidy(ui): remove unused state from uiSlice 2024-01-23 14:13:18 -06:00
b76d2cd716 fix(ui): handle base model compat when recalling parameters
We had a one-behind issue with recalling metadata items that had a model.

For example, when recalling LoRAs, we check against the current main model to decide whether or not the requested LoRA is compatible and may be recalled.

When recalling all params, we are often also recalling the main model, but the compat logic didn't compare against this new main model.

The logic is updated to check against the new main model, if one is being set.

Closes #5512
2024-01-23 14:13:18 -06:00
022b32c724 fix(ui): reset clip skip to 0 if new model is sdxl
Clip skip wasn't actually used in SDXL graphs so enabling it didn't do anything, just a UI quirk.

Closes #5508
2024-01-23 14:13:18 -06:00
653b820da1 tidy(ui): clearer variable names in modelSelected listener 2024-01-23 14:13:18 -06:00
68232e642f Merge branch 'main' into pr-labeler 2024-01-23 09:20:43 -05:00
4ba0bf4dcf docs(ui): update README (#5473)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission

## Description

Update UI README


## Merge Plan

This PR can be merged when approved

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
2024-01-23 18:47:24 +05:30
5e4daf4bc6 docs: remove separate frontend docs and link to repo
The frontend docs should just be in the frontend. This is a standard practice for monorepos with developer information for specific packages within the monorepo.
2024-01-23 18:04:41 +11:00
7e0713c869 docs(ui): fix typo 2024-01-23 18:04:41 +11:00
099d516ac0 docs(ui): update README 2024-01-23 18:04:41 +11:00
b94f6a4a29 Fix python label, add test label 2024-01-22 15:14:02 -05:00
4caf63d53d Added a few more labels 2024-01-22 15:08:11 -05:00
6057229ceb Github action for ensuring PRs are labeled in a way that makes it easy to distinguish what's being changed 2024-01-22 11:22:33 -05:00
6a2856e46f Updated field descriptions 2024-01-23 02:26:30 +11:00
4dedd63b74 Update defaultNodes.md
Added ideal size node
2024-01-23 02:26:30 +11:00
db74837eb1 Update communityNodes.md
Removed ideal size node
2024-01-23 02:26:30 +11:00
892fe62264 Add Ideal Size node to core nodes
The Ideal Size node is useful for High-Res Optimization as it gives the optimum size for creating an initial generation with minimal artifacts (duplication and other strangeness) from today's models.

After inclusion, front end graph generation can be simplified by offloading calculations for HRO initial generation to this node.
2024-01-23 02:26:30 +11:00
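The commit doesn't spell out the math. One plausible formulation - an assumption for illustration, not necessarily the node's exact algorithm - keeps the requested aspect ratio, targets the model's trained pixel area, and snaps dimensions to multiples of 8:

```ts
// Hypothetical "ideal size" calculation: preserve aspect ratio, target the
// trained area (e.g. 512x512 for SD 1.x), snap to multiples of 8 for the VAE.
function idealSize(targetWidth: number, targetHeight: number, trainedDim = 512) {
  const aspect = targetWidth / targetHeight;
  const area = trainedDim * trainedDim;
  const width = Math.sqrt(area * aspect);
  const height = area / width;
  const snap = (n: number) => Math.max(8, Math.round(n / 8) * 8);
  return { width: snap(width), height: snap(height) };
}

// e.g. idealSize(1920, 1080) -> { width: 680, height: 384 }
```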
3c79476785 requested changes made 2024-01-22 21:06:40 +11:00
dad364da17 rebased and made more changes 2024-01-22 21:06:40 +11:00
37bc4f78d0 last 2024-01-22 21:06:40 +11:00
de0b43c81d more strings and translations added 2024-01-22 21:06:40 +11:00
ea1d2d6a4c added translation where needed 2024-01-22 21:06:40 +11:00
fafe8ccc59 fix(api): typo in no_cache_staticfiles.py 2024-01-22 16:10:25 +11:00
4b88cfac19 fix(api): typo in no_cache_staticfiles.py 2024-01-22 16:10:25 +11:00
5fa13fba36 chore: ruff 2024-01-22 16:10:25 +11:00
f28f761436 fix(api): add NoCacheStaticFiles to prevent *all* caching
The previous method wasn't totally foolproof, and locales/assets were cached.

To solve this once and for all (famous last words, I know), we can subclass `StaticFiles` and use maximally strict no-caching headers to disable caching on all static files.
2024-01-22 16:10:25 +11:00
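The fix itself lives on the Python side (a `StaticFiles` subclass), but as a hedged illustration of what "maximally strict no-caching headers" means in practice, here is the same idea sketched in TypeScript/Express - an analog, not the project's code:

```ts
import express from 'express';

const app = express();

// Serve static files while forbidding all caching. These header values are
// the conventional "never cache" set; the actual change applies the same
// idea by subclassing Starlette's StaticFiles.
app.use(
  '/assets',
  express.static('static', {
    etag: false,
    lastModified: false,
    setHeaders: (res) => {
      res.setHeader('Cache-Control', 'no-store, no-cache, must-revalidate, max-age=0');
      res.setHeader('Pragma', 'no-cache');
      res.setHeader('Expires', '0');
    },
  })
);
```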
27d7889780 tidy(ui): remove commented line 2024-01-22 09:37:26 +11:00
a1cf153097 fix(ui): generation accordion defaults to open 2024-01-22 09:37:26 +11:00
d121eefa12 tidy(ui): organise accordions files 2024-01-22 09:37:26 +11:00
c92e25a6a7 fix(ui): add tooltip to model select 2024-01-22 09:37:26 +11:00
8be03dead5 chore(ui): bump @invoke-ai/ui 2024-01-22 09:37:26 +11:00
1197133d06 fix(ui): fix lora name wrap 2024-01-22 09:37:26 +11:00
850458a554 chore(ui): lint 2024-01-22 09:37:26 +11:00
e96ad41729 feat(ui): use @invoke-ai/ui hooks for modifiers, global menu state 2024-01-22 09:37:26 +11:00
53cf518390 chore(ui): bump @invoke-ai/ui 2024-01-22 09:37:26 +11:00
b00ace852d fix(ui): settings modal switch widths 2024-01-22 09:37:26 +11:00
be72765d02 fix(ui): bump @invoke-ai/ui, fix TS issues 2024-01-22 09:37:26 +11:00
580d29257c feat(ui): add missing translations 2024-01-22 09:37:26 +11:00
5d068c1da1 feat(ui): migrate to @invoke-ai/ui 2024-01-22 09:37:26 +11:00
8e2ccab1f0 feat(ui): add @invoke-ai/ui 2024-01-22 09:37:26 +11:00
6f478eef62 chore(ui): bump deps 2024-01-22 09:37:26 +11:00
1ff1c370df translationBot(ui): update translation (Italian)
Currently translated at 97.3% (1365 of 1402 strings)

translationBot(ui): update translation (Italian)

Currently translated at 97.3% (1365 of 1402 strings)

translationBot(ui): update translation (Italian)

Currently translated at 97.3% (1365 of 1402 strings)

translationBot(ui): update translation (Italian)

Currently translated at 97.3% (1365 of 1402 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-01-19 15:11:32 +11:00
5ef87ef2a6 fix(ui): tidy component/props naming in ClearQueueIconButton.tsx 2024-01-19 15:03:16 +11:00
d0709d4f4e fix(ui): use trash icon for clear all queue 2024-01-19 15:03:16 +11:00
2a081b0a27 fix(ui): remove unnecessary fragments 2024-01-19 15:03:16 +11:00
d902533387 feat(ui): use global modifier state for clear queue button mode switch 2024-01-19 15:03:16 +11:00
1174713223 fix(ui): use cancel strings for cancel button 2024-01-19 15:03:16 +11:00
4b1740ad19 chore(ui): format 2024-01-19 15:03:16 +11:00
e03c88ce32 feat: 🚸 shift key queue cancellations 2024-01-19 15:03:16 +11:00
b917ffecbe chore(ui): format 2024-01-19 14:42:31 +11:00
2967a78c5a feat: 💄 update lots of icons 2024-01-19 14:42:31 +11:00
aa25ea62a5 fix(backend) installed models being redownloaded (#5526)
* fix

* fix ruff errors

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-01-18 16:53:53 -05:00
1ab0e86085 feat(ui): add about modal w/ app deps (#5462)
* resolved conflicts

* changed logo and some design changes

* feedback changes

* resolved conflicts

* changed logo and some design changes

* feedback changes

* lint fixed

* added translations

* some requested changes done

* all feedback changes done and replace links in settingsmenu comp

* fixed the gap between deps versions & changed heights

* feat(ui): minor about modal styling

* feat(ui): tag app endpoints with FetchOnReconnect

* fix(ui): remove unused translation string

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
Co-authored-by: Kent Keirsey <31807370+hipsterusername@users.noreply.github.com>
2024-01-18 13:30:56 +00:00
c9ddbb4241 lint 2024-01-18 23:29:47 +11:00
415a1c7a4f add ids 2024-01-18 23:29:47 +11:00
84a4836ab7 change actions to be for any InvExpander or InvSingleAccordion that has id passed in 2024-01-18 23:29:47 +11:00
dbd6c9c6ed remove actions we can get from mutation data 2024-01-18 23:29:47 +11:00
4f95c077d4 lint fix 2024-01-18 23:29:47 +11:00
0a4cbc4e16 undo 2024-01-18 23:29:47 +11:00
d45b76fab4 undo 2024-01-18 23:29:47 +11:00
9722135cda add various actions for commercial purposes 2024-01-18 23:29:47 +11:00
7366913a31 chore(deps): bump tj-actions/changed-files in /.github/workflows
Bumps [tj-actions/changed-files](https://github.com/tj-actions/changed-files) from 37 to 41.
- [Release notes](https://github.com/tj-actions/changed-files/releases)
- [Changelog](https://github.com/tj-actions/changed-files/blob/main/HISTORY.md)
- [Commits](https://github.com/tj-actions/changed-files/compare/v37...v41)

---
updated-dependencies:
- dependency-name: tj-actions/changed-files
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-17 00:25:21 -05:00
bd31b5606c Retain suffix (.safetensors, .bin) when renaming a checkpoint file or LoRA
- closes #5518
2024-01-16 17:27:50 -05:00
2953dea4a0 set width and height based on defaultModel if passed in 2024-01-17 07:08:11 +11:00
f3fed0b10f fix(ui): use native langs for language select
Use each language's own name for its option in the language select. This falls back to the English translation if the language name isn't translated.
2024-01-17 06:50:16 +11:00
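A minimal sketch of the idea using the standard `Intl.DisplayNames` browser API - the real implementation presumably reads names from the i18n catalog, so treat this as illustrative only:

```ts
// Render each language option in its own language, falling back to the
// English name when a native name can't be produced.
function languageOptionLabel(code: string): string {
  try {
    const native = new Intl.DisplayNames([code], { type: 'language' }).of(code);
    if (native) return native;
  } catch {
    // Invalid/unknown locale tag - fall through to the English fallback.
  }
  return new Intl.DisplayNames(['en'], { type: 'language' }).of(code) ?? code;
}

// languageOptionLabel('de') -> 'Deutsch'; languageOptionLabel('it') -> 'italiano'
```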
db57d426d9 fix(ui): force dark mode 2024-01-17 06:48:04 +11:00
4536e4a8b6 Model Manager Refactor: Install remote models and store their tags and other metadata (#5361)
* add basic functionality for model metadata fetching from hf and civitai

* add storage

* start unit tests

* add unit tests and documentation

* add missing dependency for pytests

* remove redundant fetch; add modified/published dates; updated docs

* add code to select diffusers files based on the variant type

* implement Civitai installs

* make huggingface parallel downloading work

* add unit tests for model installation manager

- Fixed race condition on selection of download destination path
- Add fixtures common to several model_manager_2 unit tests
- Added dummy model files for testing diffusers and safetensors downloading/probing
- Refactored code for selecting proper variant from list of huggingface repo files
- Regrouped ordering of methods in model_install_default.py

* improve Civitai model downloading

- Provide a better error message when Civitai requires an access token (doesn't give a 403 forbidden, but redirects
  to the HTML of an authorization page -- arrgh)
- Handle case of Civitai providing a primary download link plus additional links for VAEs, config files, etc

* add routes for retrieving metadata and tags

* code tidying and documentation

* fix ruff errors

* add file needed to maintain test root directory in repo for unit tests

* fix self->cls in classmethod

* add pydantic plugin for mypy

* use TestSession instead of requests.Session to prevent any internet activity

improve logging

fix error message formatting

fix logging again

fix forward vs reverse slash issue in Windows install tests

* Several fixes of problems detected during PR review:

- Implement cancel_model_install_job and get_model_install_job routes
  to allow for better control of model download and install.
- Fix thread deadlock that occurred after cancelling an install.
- Remove unneeded pytest_plugins section from tests/conftest.py
- Remove unused _in_terminal_state() from model_install_default.
- Remove outdated documentation from several spots.
- Add workaround for Civitai API results which don't return correct
  URL for the default model.

* fix docs and tests to match get_job_by_source() rather than get_job()

* Update invokeai/backend/model_manager/metadata/fetch/huggingface.py

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* Call CivitaiMetadata.model_validate_json() directly

Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>

* Second round of revisions suggested by @ryanjdick:

- Fix type mismatch in `list_all_metadata()` route.
- Do not have a default value for the model install job id
- Remove static class variable declarations from non Pydantic classes
- Change `id` field to `model_id` for the sqlite3 `model_tags` table.
- Changed AFTER DELETE triggers to ON DELETE CASCADE for the metadata and tags tables.
- Made the `id` field of the `model_metadata` table into a primary key to achieve uniqueness.

* Code cleanup suggested in PR review:

- Narrowed the declaration of the `parts` attribute of the download progress event
- Removed auto-conversion of str to Url in Url-containing sources
- Fixed handling of `InvalidModelConfigException`
- Made unknown sources raise `NotImplementedError` rather than `Exception`
- Improved status reporting on cached HuggingFace access tokens

* Multiple fixes:

- `job.total_size` returns a valid size for locally installed models
- new route `list_models` returns a paged summary of model, name,
  description, tags and other essential info
- fix a few type errors

* consolidated all invokeai root pytest fixtures into a single location

* Update invokeai/backend/model_manager/metadata/metadata_store.py

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>

* Small tweaks in response to review comments:

- Remove flake8 configuration from pyproject.toml
- Use `id` rather than `modelId` for huggingface `ModelInfo` object
- Use `last_modified` rather than `LastModified` for huggingface `ModelInfo` object
- Add `sha256` field to file metadata downloaded from huggingface
- Add `Invoker` argument to the model installer `start()` and `stop()` routines
  (but made it optional in order to facilitate use of the service outside the API)
- Removed redundant `PRAGMA foreign_keys` from metadata store initialization code.

* Additional tweaks and minor bug fixes

- Fix calculation of aggregate diffusers model size to only count the
  size of files, not files + directories (which gives different unit test
  results on different filesystems).
- Refactor _get_metadata() and _get_download_urls() to have distinct code paths
  for Civitai, HuggingFace and URL sources.
- Forward the `inplace` flag from the source to the job and added unit test for this.
- Attach cached model metadata to the job rather than to the model install service.

* fix unit test that was breaking on windows due to CR/LF changing size of test json files

* fix ruff formatting

* a few last minor fixes before merging:

- Turn job `error` and `error_type` into properties derived from the exception.
- Add TODO comment about the reason for handling temporary directory destruction
  manually rather than using tempfile.tmpdir().

* add unit tests for reporting HTTP download errors

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Ryan Dick <ryanjdick3@gmail.com>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-14 19:54:53 +00:00
426a7b900f feat(ui): resize options/gallery panels to min on window resize
Per user feedback, this is preferable to letting them expand when the window grows.

Also bumps `react-resizable-panels` now that one of my PRs is merged to fix an issue.
2024-01-14 11:33:44 +11:00
cc571d9ab2 Quick Fix for right gallery button 2024-01-14 09:49:52 +11:00
296c861e7d Handle bad id in log_stats(...). 2024-01-13 15:19:57 -05:00
aa45d21fd2 Reduce the number of graph_execution_manager.get(...) calls from the InvocationStatsService. 2024-01-13 15:19:57 -05:00
ac42513da9 Remove unused reset_all_stats(...). 2024-01-13 15:19:57 -05:00
e2387546fe Rename GIG -> GB. And move it to where it's being used. 2024-01-13 15:19:57 -05:00
c8929b35f0 Refactor the invocation stats service for better readability and to support reporting the execution wall time. 2024-01-13 15:19:57 -05:00
c000e270a0 Release/v3.6.0 (#5485)
## What type of PR is this? (check all applicable)

Release v3.6.0

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Invoke v3.6.0

## QA Instructions, Screenshots, Recordings



[InvokeAI-installer-v3.6.0.zip](https://github.com/invoke-ai/InvokeAI/files/13923761/InvokeAI-installer-v3.6.0.zip)

## [optional] Are there any post deployment tasks we need to perform?
1. Release on PyPI
2. Release on GitHub
3. Announce in #releases
2024-01-12 15:26:28 -05:00
8ff28da3b4 {release} v3.6.0 2024-01-12 15:00:49 -05:00
b7b376103c Update default workflows 2024-01-12 14:59:44 -05:00
08d379bb29 Update default workflows 2024-01-12 14:58:21 -05:00
74e644c4ba Allow bfloat16 to be configurable in invoke.yaml (#5469)
* feat: allow bfloat16 to be configurable in invoke.yaml

* fix: `torch_dtype()` util

- Use `choose_precision` to get the precision string
- Do not reference the deprecated `config.full_precision` flag (why does this still exist?); if a user had this enabled it would override their actual precision setting and potentially cause a lot of confusion.

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-12 18:40:37 +00:00
d4c36da3ee translationBot(ui): update translation (Italian)
Currently translated at 97.3% (1365 of 1402 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-01-12 22:52:38 +11:00
dfe0b73890 fix(ui): fix usages of panel helpers
Upstream breaking change.
2024-01-12 09:31:07 +11:00
c0c8fa9a89 fix(ui): use nodrag on invinput in workflow editor
Closes #5476
2024-01-12 09:31:07 +11:00
ad7139829c fix(ui): fix canvas space hotkey
Need to do some checks to ensure we aren't taking over input elements, and are focused on the canvas.

Closes #5478
2024-01-12 09:31:07 +11:00
a24e63d440 fix(ui): do not focus board search on load 2024-01-12 09:31:07 +11:00
59437a02c3 feat(ui): restore resizable prompt boxes
The autosize proved to be unpopular. Changed back to resizable.
2024-01-12 09:31:07 +11:00
98a44d7fa1 feat(ui): update assets
- Add various brand images, organise images
- Create favicon for docs pages (light blue version of key logo)
- Rename app title to `Invoke - Community Edition`
2024-01-12 08:02:59 +11:00
07416753be feat(ui): more context in storage errors 2024-01-12 07:54:18 +11:00
630854ce26 3.6 Docs updates (#5412)
* Update UNIFIED_CANVAS.md

* Update index.md

* Update structure

* Docs updates
2024-01-11 16:52:22 +00:00
b55c2b99a7 feat(ui): workflow library styling 2024-01-11 09:42:12 -05:00
f81d36c95f fix(ui): do not string workflow id on rehydrate 2024-01-11 09:42:12 -05:00
26b7aadd32 fix(db): fix workflows pagination math 2024-01-11 09:42:12 -05:00
8e7e3c2b4a Initial Styling Commit 2024-01-11 09:42:12 -05:00
f2e8b66be4 Fix "Cannot import name 'PagingArgumentParser' error when starting textual inversion
- Closes #5395
2024-01-11 13:57:06 +11:00
ff09fd30dc feat(ui): if in dev mode, reset API on reconnect
This retains the current good developer experience when working on the server - the UI should fully reset when you restart the server.
2024-01-11 12:51:15 +11:00
9fcc30c3d6 feat(ui): optimize reconnect queries
Add `FetchOnReconnect` tag, tagging relevant queries with it. This tag is invalidated in the socketConnected listener, when it is determined that the queue changed.
2024-01-11 12:51:15 +11:00
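In RTK Query terms the pattern looks roughly like this - endpoint names are hypothetical:

```ts
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';

// Tag the relevant queries so they can all be refetched in one shot.
export const api = createApi({
  baseQuery: fetchBaseQuery({ baseUrl: '/api/v1' }),
  tagTypes: ['FetchOnReconnect'],
  endpoints: (build) => ({
    getQueueStatus: build.query<unknown, void>({
      query: () => 'queue/status',
      providesTags: ['FetchOnReconnect'],
    }),
  }),
});

// In the socketConnected listener: if the queue changed while the client was
// disconnected, invalidating the tag refetches everything that carries it.
// dispatch(api.util.invalidateTags(['FetchOnReconnect']));
```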
b29a6522ef feat(ui): always check for change to queue status when reconnecting 2024-01-11 12:51:15 +11:00
936d19cd60 feat(ui): improve comments on socketConnected listener 2024-01-11 12:51:15 +11:00
f25b6ee5d1 chore(ui): lint 2024-01-11 12:51:15 +11:00
7dea079220 fix(ui): reduce reconnect requests
- Add checks to the "recovery" logic for socket connect events to reduce the number of network requests.
- Remove the `isInitialized` state from `systemSlice` and make it a nanostore local to the socketConnected listener. It didn't need to be global state. It's also now more clearly named `isFirstConnection`.
- Export the queue status selector (minor improvement, memoizes it correctly).
2024-01-11 12:51:15 +11:00
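A sketch of the listener-local flag using nanostores - function and variable names are illustrative:

```ts
import { atom } from 'nanostores';

// Local to the socketConnected listener module - no need for global redux state.
const $isFirstConnection = atom(true);

export function onSocketConnected(recoverQueueState: () => void) {
  if ($isFirstConnection.get()) {
    // First connection after app load: nothing to recover, just mark it.
    $isFirstConnection.set(false);
    return;
  }
  // A reconnection: server state may have changed while we were away.
  recoverQueueState();
}
```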
7fc08962fb translationBot(ui): update translation (Russian)
Currently translated at 97.0% (1361 of 1402 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-01-11 12:48:23 +11:00
71155d9e72 translationBot(ui): update translation (Hungarian)
Currently translated at 1.9% (28 of 1402 strings)

translationBot(ui): added translation (Hungarian)

Co-authored-by: ItzAttila <attila.gm.studio@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/hu/
Translation: InvokeAI/Web UI
2024-01-11 12:48:23 +11:00
6ccd72349d {release} v3.6.0rc6 (#5467)
## What type of PR is this? (check all applicable)

Release v3.6.0rc6

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Release candidate #6

## QA Instructions, Screenshots, Recordings

[InvokeAI-installer-v3.6.0rc6.zip](https://github.com/invoke-ai/InvokeAI/files/13890206/InvokeAI-installer-v3.6.0rc6.zip)



## Merge Plan
Merge when approved

## [optional] Are there any post deployment tasks we need to perform?
Release on PyPI & GitHub
2024-01-10 11:19:39 -05:00
30e12376d3 {release} v3.6.0rc6 2024-01-10 10:45:33 -05:00
23c8a893e1 fix(ui): fix gallery display bug, major lag
- Fixed a bug where, after you load more, changing boards didn't work. The offset and limit for the list image query had some wonky logic, now resolved.
- Addressed major lag in gallery when selecting an image.

Both issues were related to the useMultiselect and useGalleryImages hooks, which caused every image in the gallery to re-render whenever the selection changed. There's no way to memoize this away - we need to know when the selection changes. This is a longstanding issue.

The selection is only used in a callback, though - the onClick handler for an image to select it (or add it to the existing selection). We don't really need the reactivity for a callback, so we don't need to listen for changes to the selection.

The logic to handle multiple selection is moved to a new `galleryImageClicked` listener, which does all the selection right when it is needed.

The result is that gallery images no longer need to do heavy re-renders on any selection change.

Besides the multiselect click handler, there was also inefficient use of DND payloads. Previously, the `IMAGE_DTOS` type had a payload of image DTO objects. This was only used to drag gallery selection into a board. There is no need to hold onto image DTOs when we have the selection state already in redux. We were recalculating this payload for every image, on every tick.

This payload is now just the board id (the only piece of information we need for this particular DND event).

- I also removed some unused DND types while making this change.
2024-01-10 08:22:46 -05:00
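A condensed sketch of the `galleryImageClicked` listener pattern using Redux Toolkit's listener middleware - the state shape and selection logic are simplified stand-ins for the real thing:

```ts
import { createAction, createListenerMiddleware } from '@reduxjs/toolkit';

type RootState = { gallery: { selection: string[] } };

export const galleryImageClicked = createAction<{
  imageName: string;
  shiftKey: boolean;
}>('gallery/imageClicked');

export const listenerMiddleware = createListenerMiddleware();

listenerMiddleware.startListening({
  actionCreator: galleryImageClicked,
  effect: (action, { getState, dispatch }) => {
    // Read the selection only when a click happens, instead of subscribing
    // every gallery image to selection changes.
    const { selection } = (getState() as RootState).gallery;
    const { imageName, shiftKey } = action.payload;
    const next = shiftKey ? [...selection, imageName] : [imageName];
    dispatch({ type: 'gallery/selectionChanged', payload: next });
  },
});
```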
7d93329401 feat(ui): de-jank context menu
There was a lot of convoluted, janky logic related to trying to not mount the context menu's portal until it's needed. This was in the library from which the component was originally copied.

I've removed that and resolved the jank, at the cost of there being an extra portal for each instance of the context menu. Don't think this is going to be an issue. If it is, the whole context menu could be refactored to be a singleton.
2024-01-10 08:22:46 -05:00
968fb655a4 Report ci disk space + minor docker fixes (#5461)
* ci: add docker build timout; log free space on runner before and after build

* docker: bump frontend builder to node=20.x; skip linting on build

* chore: gitignore .pnpm-store

* update code owners for docker and CI

---------

Co-authored-by: Millun Atluri <Millu@users.noreply.github.com>
2024-01-10 05:20:26 +00:00
80ec9f4131 chore(ui): lint 2024-01-10 00:11:05 -05:00
f19def5f7b feat(ui): replace aspect ratio icon
closes #5448
2024-01-10 00:11:05 -05:00
9e1dd8ac9c fix(ui): reset canvas coords/dims on reset 2024-01-10 00:11:05 -05:00
ebd68b7a6c feat(ui): support reset canvas view when no image on canvas 2024-01-10 00:11:05 -05:00
68a231afea feat(ui): move canvas stage and base layer to nanostores 2024-01-10 00:11:05 -05:00
21ab650ac0 feat(ui): move canvas tool to nanostores
I was troubleshooting a hotkeys issue on canvas and thought I had broken the tool logic in a past change, so I redid it, moving it to nanostores. In the end, the issue was an upstream bug with the hotkeys library, but I like having the tool in nanostores so I'm leaving it.

It's ephemeral interaction state anyways, doesn't need to be in redux.
2024-01-10 00:11:05 -05:00
b501bd709f fix(ui): canvas bbox number input wonky
It was rounding dimensions when it shouldn't.

Closes #5453
2024-01-10 00:11:05 -05:00
4082f25062 feat(ui): do not optimize size when changing between models with same base model
There's a challenge to accomplish this due to our slice structure - the model is stored in `generationSlice`, but `canvasSlice` also needs to have awareness of it. For example, when the model changes, the canvas slice doesn't know what the previous model was, so it doesn't know whether or not to optimize the size.

This means we need to lift the "should we optimize size" information up. To do this, the `modelChanged` action creator accepts the previous model as an optional second arg.

Now the canvas has access to both the previous model and new model selection, and can decide whether or not it should optimize its size setting in the same way that the generation slice does.

Closes  #5452
2024-01-10 00:11:05 -05:00
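A sketch of the action creator carrying the previous model as an optional second argument - the types are illustrative:

```ts
import { createAction } from '@reduxjs/toolkit';

type MainModel = { name: string; base: 'sd-1' | 'sd-2' | 'sdxl' };

// generationSlice dispatches the old model alongside the new one, so that
// canvasSlice can compare the two without tracking the model itself.
export const modelChanged = createAction(
  'generation/modelChanged',
  (model: MainModel | null, previousModel?: MainModel | null) => ({
    payload: { model, previousModel },
  })
);

// In canvasSlice's reducer for this action (sketch): only optimize the bbox
// size when the base model actually changed.
// if (model?.base !== previousModel?.base) { optimizeSize(); }
```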
63d74b4ba6 feat(ui): remove unnecessary tabChanged listener
This was needed when we didn't support SDXL on canvas.
2024-01-10 00:11:05 -05:00
da5907613b fix(ui): fix typing of usGalleryImages
For some reason `ReturnType<typeof useListImagesQuery>` isn't working correctly, and destructuring `queryResult` results in `any` when the hook is used.

I've removed the explicit return typing so that consumers of the hook get correct types.
2024-01-10 00:11:05 -05:00
3a9201bd31 feat: pin deps
Organise deps into ~3 categories:
- Core generation dependencies, pinned for reproducible builds.
- Core application dependencies, pinned for reproducible builds.
- Auxiliary dependencies, pinned only if necessary.

I pinned / bumped these to latest:
- `controlnet_aux`
- `fastapi`
- `fastapi-events`
- `huggingface-hub`
- `numpy`
- `python-socketio`
- `torchmetrics`
- `transformers`
- `uvicorn`

I checked the release notes for these and didn't see any breaking changes that would affect us. There is a `fastapi` breaking change in v108 related to background tasks but it doesn't affect us.

I tested on a fresh venv. The app still works and I can generate on macOS.

Hopefully, enforcing explicit pinned versions will reduce the issues where people get CPU torch.

It also means we should periodically bump versions up to ensure we don't get too far behind on our dependencies and have to do painful upgrades.
2024-01-10 00:03:29 -05:00
d6e2cb7cef fix(ui): use memoized selector for workflow watcher
Minor perf improvement.
2024-01-10 15:32:16 +11:00
0809e832d4 fix(ui): use less brutally strict workflow validation
Workflow building would fail when a current image node was in the workflow due to the strict validation.

So we need to use the other workflow builder util first, which strips out extraneous data.

This bug was introduced during an attempt to optimize the workflow building logic, which was causing slowdowns on the workflow editor.
2024-01-10 14:31:14 +11:00
7269c9f02e Enable correct probing of LoRA latent-consistency/lcm-lora-sdxl (#5449)
- Closes #5435

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-01-08 17:18:26 -05:00
d86d7e5c33 do not show toast if 403 is triggered by forbidden image (#5447)
* do not show toast if 403 is triggered by lack of image access

* remove log

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-08 12:15:46 -05:00
5d87578746 {release} v3.5.0rc5 (#5446)
## What type of PR is this? (check all applicable)

Release - InvokeAI v3.5.0rc5


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Release - InvokeAI v3.5.0rc5


## QA Instructions, Screenshots, Recordings

[InvokeAI-installer-v3.6.0rc5.zip](https://github.com/invoke-ai/InvokeAI/files/13863661/InvokeAI-installer-v3.6.0rc5.zip)


## [optional] Are there any post deployment tasks we need to perform?
Release on PyPI & GitHub
2024-01-09 03:40:34 +11:00
04aef021fc {release} v3.5.0rc5 2024-01-08 10:42:16 -05:00
0fc08bb384 ui: redesign followups 8 (#5445)
* feat(ui): get rid of convoluted socket vs appSocket redux actions

There's no need to have `socket...` and `appSocket...` actions.

I did this initially due to a misunderstanding about the sequence of handling from middleware to reducers.

* feat(ui): bump deps

Mainly bumping to get latest `redux-remember`.

A change to socket.io required a change to the types in `useSocketIO`.

* chore(ui): format

* feat(ui): add error handling to redux persistence layer

- Add an error handler to `redux-remember` config using our logger
- Add custom errors representing storage set and get failures
- Update storage driver to raise these accordingly
- wrap method to clear idbkeyval storage and tidy its logic up

* feat(ui): add debuggingLoggerMiddleware

This simply logs every action and a diff of the state change.

Due to the noise this creates, it's not added by default at all. Add it to the middlewares if you want to use it.

* feat(ui): add $socket to window if in dev mode

* fix(ui): do not enable cancel hotkeys on inputs

* fix(ui): use JSON.stringify for ROARR logger serializer

A recent change to ROARR introduced limits to the size of data that will logged. This ends up making our logs far less useful. Change the serializer back to what it was previously.

* feat(ui): change diff util, update debuggerLoggerMiddleware

The previous diff library would present deleted things as `undefined`. Unfortunately, a JSON.stringify cycle will strip those values out. The ROARR logger does this and so the diffs end up being a lot less useful, not showing removed keys.

The new diff library uses a different format for the delta that serializes nicely.

* feat(ui): add migrations to redux persistence layer

- All persisted slices must now have a slice config, consisting of their initial state and a migrate callback (see the sketch after this commit). The migrate callback is very simple for now, with no type safety. It adds missing properties to the state. A future enhancement might be to model each slice's state with e.g. zod and have proper validation and types.
- Persisted slices now have a `_version` property
- The migrate callback is called inside `redux-remember`'s `unserialize` handler. I couldn't figure out a good way to put this into the reducer and do logging (reducers should have no side effects). Also I ran into a weird race condition that I couldn't figure out. And finally, the typings are tricky. This works for now.
- `generationSlice` and `canvasSlice` both need migrations for the new aspect ratio setup, this has been added
- Stuff related to persistence has been moved in to `store.ts` for simplicity

* feat(ui): clean up StorageError class

* fix(ui): scale method default is now 'auto'

* feat(ui): when changing controlnet model, enable autoconfig

* fix(ui): make embedding popover immediately accessible

Prevents hotkeys from being captured when embeddings are still loading.
2024-01-08 09:11:45 -05:00
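A minimal sketch of the per-slice persistence config described in the migration bullet above - names and state shapes are illustrative, not the actual InvokeAI types:

```ts
interface PersistConfig<T extends { _version: number }> {
  initialState: T;
  migrate: (state: unknown) => T;
}

interface CanvasState {
  _version: number;
  aspectRatio: { id: string; value: number; isLocked: boolean };
}

const initialCanvasState: CanvasState = {
  _version: 1,
  aspectRatio: { id: '1:1', value: 1, isLocked: false },
};

export const canvasPersistConfig: PersistConfig<CanvasState> = {
  initialState: initialCanvasState,
  migrate: (state) => {
    // Naive migration: backfill any missing keys from the initial state.
    // A future enhancement could validate with e.g. zod instead.
    return { ...initialCanvasState, ...(state as Partial<CanvasState>) };
  },
};
```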
5779542084 Updated icons + Minor UI Tweaks (#5427)
* feat: 💄 updated icons + minor ui tweaks

* revert: 💄 removes ui tweaks

* revert: 💄 removed more ui tweaks

removed more ui tweaks and a commented-out icon import

* style: 🚨 satisfy the linter

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-07 14:14:44 +11:00
ebda81e96e fix(ui): fix add node autoconnect (#5434)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Description

The new select component appears to close itself before calling the
onchange handler. This short-circuits the autoconnect logic. Tweaked so
the ordering is correct.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Closes #5425

## QA Instructions, Screenshots, Recordings

bug should be fixed

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

This PR can be merged when approved

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->
2024-01-07 08:32:52 +05:30
3fe332e85f fix(ui): fix add node autoconnect
The new select component appears to close itself before calling the onchange handler. This short-circuits the autoconnect logic. Tweaked so the ordering is correct.
2024-01-07 14:00:22 +11:00
3428ea1b3c feat(ui): use config for all numerical params
Centralize the initial/min/max/etc values for all numerical params. We used this for some but at some point stopped updating it.

All numerical params now use their respective configs. Far fewer hardcoded values throughout the app now.

Also updated the config types a bit to better accommodate slider vs number input constraints.
2024-01-07 13:49:29 +11:00
6024fc7baf Update diffusers to the lastest version 2024-01-06 21:47:51 -05:00
75c1c4ce5a fix(ui): fix gallery nav math
- Use the virtuoso grid item container and list containers to calculate imagesPerRow, skipping manual compensation for padding of images
- Round the imagesPerRow instead of flooring - we often will end up with values like 4.99999 due to floating point precision
- Update `getDownImage` comments & logic to be clearer
- Use variables for the ids in query selectors, preventing future typos
- Only scroll if the new selected image is different from the prev one
2024-01-06 20:52:09 -05:00
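The rounding fix in the second bullet, as a tiny sketch:

```ts
// Floating-point widths mean the naive division often lands just below an
// integer (e.g. 4.99999), so flooring undercounts by one.
function getImagesPerRow(listWidth: number, itemWidth: number): number {
  return Math.round(listWidth / itemWidth); // not Math.floor
}

// getImagesPerRow(999.9999, 200) -> 5, where Math.floor would give 4
```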
ffa05a0bb3 Only replace vae when it is the broken SDXL 1.0 version 2024-01-06 14:06:47 -05:00
a20e17330b blackify 2024-01-06 14:06:47 -05:00
4e83644433 if sdxl-vae-fp16-fix model is available then bake it in when converting ckpts 2024-01-06 14:06:47 -05:00
604f0083f2 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 100.0% (1402 of 1402 strings)

Co-authored-by: Surisen <zhonghx0804@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2024-01-07 01:21:04 +11:00
2a8a158823 translationBot(ui): update translation (Russian)
Currently translated at 96.2% (1349 of 1402 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-01-07 01:21:04 +11:00
f8c3db72e9 feat(ui): improved arrow key navigation in gallery
- Fix preexisting bug where gallery network requests were duplicated when triggering infinite scroll
- Refactor `useNextPrevImage` to not use `state => state` as an input selector - logic split up into different hooks
- Remove use instant scroll for arrow key navigation - smooth scroll is janky when you hold the arrow down and it fires rapidly
- Move gallery nav hotkeys to GalleryImageGrid component, so they work whenever the gallery is open (previously didn't work on canvas or workflow editor tabs)
- Use nanostores for gallery grid refs instead of passing context with virtuoso's context feature, making it much simpler to do the imperative gallery nav
- General gallery hook/component cleanup
2024-01-07 01:19:32 +11:00
60815807f9 fix(ui): fix merge issue w/ selectors 2024-01-07 01:19:32 +11:00
196fb0e014 added support for bottom key 2024-01-07 01:19:32 +11:00
eba668956d up button support in gallery navigation 2024-01-07 01:19:32 +11:00
ee5ec023f4 {release} v3.6.0rc4 (#5424)
## What type of PR is this? (check all applicable)

Release v3.6.0rc4


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Release for v3.6.0rc4

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings
[Uploading InvokeAI-installer-v3.6.0rc4.zip…](Installer Zip)

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan
- This PR can be merged when approved
<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
Release on PyPI & GitHub
2024-01-06 12:02:57 +11:00
d59661e0af {release} v3.6.0rc4 2024-01-06 11:08:00 +11:00
f51e8eeae1 fix(ui): better node footer spacing 2024-01-06 09:09:38 +11:00
6e06935e75 fix(ui): fix favicon
It wasn't in the right place to be bundled into `assets/` by vite.

Also replaced uncategorized board's fallback image with new logo.
2024-01-06 09:09:38 +11:00
f7f697849c Skip weight initialization when resizing text encoder token embeddings to accommodate new TI embeddings. This saves time. 2024-01-05 15:16:00 -05:00
8e17e29a5c fix text color for lora card (#5417)
* use label

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-05 09:57:16 -05:00
12e9f17f7a only GET intermediates if that setting is an option (#5416)
* only GET intermediates if that setting is an option

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-05 09:40:34 -05:00
cb7e56a9a3 chore(ui): lint 2024-01-05 08:34:46 -05:00
1a710a4c12 fix(ui): restore prev colors for workflow editor
Brand colors are now prefixed with "invoke".
2024-01-05 08:34:46 -05:00
d8d266d3be fix(ui): fix field title spacing
Closes #5405
2024-01-05 08:34:46 -05:00
4716632c23 fix(ui): tsc 2024-01-06 00:03:07 +11:00
3c4150d153 fix(ui): update most other selectors
Just a few stragglers left. Good enough for now.
2024-01-06 00:03:07 +11:00
b71b14d582 fix(ui): update workflow selectors 2024-01-06 00:03:07 +11:00
73481d4aec feat(ui): clean up canvas selectors
Do not memoize unless absolutely necessary. Minor perf improvement
2024-01-06 00:03:07 +11:00
2c049a3b94 feat(ui): clean up a few selectors that do not need to be memoized 2024-01-06 00:03:07 +11:00
367de44a8b fix(ui): tidy remaining selectors
These were just using overly verbose syntax - like explicitly typing `state: RootState`, which is unnecessary.
2024-01-06 00:03:07 +11:00
f5f378d04b fix(ui): revert back to lrumemoize 2024-01-06 00:03:07 +11:00
823edbfdef fix(ui): fix more state => state selectors 2024-01-06 00:03:07 +11:00
29bbb27289 fix(ui): re-add reselect patch
Accidentally removed it last commit.
2024-01-06 00:03:07 +11:00
a23502f7ff fix(ui): do not use state => state as an input selector
This is a no-no, whoops!
2024-01-06 00:03:07 +11:00
ce64dbefce chore(ui): lint 2024-01-06 00:03:07 +11:00
b47afdc3b5 feat(ui): patch reselect to use lruMemoize only
Pending resolution of https://github.com/reduxjs/reselect/issues/635, we can patch `reselect` to use `lruMemoize` exclusively.

Pin RTK and react-redux versions too just to be safe.

This reduces the major GC events that were causing lag/stutters in the app, particularly in canvas and workflow editor.
2024-01-06 00:03:07 +11:00
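The commit patches the library itself; the equivalent per-selector approach in reselect v5 looks like this (a sketch, not the patched code):

```ts
import { createSelectorCreator, lruMemoize } from 'reselect';

// reselect v5 defaults to weakMapMemoize; building selectors with lruMemoize
// explicitly gives the same effect as the patch, one selector at a time.
const createLruSelector = createSelectorCreator(lruMemoize);

type RootState = { gallery: { selection: string[] } };

export const selectSelectionCount = createLruSelector(
  (state: RootState) => state.gallery.selection,
  (selection) => selection.length
);
```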
cde9c3090f fix(ui): use useAppSelector instead of useSelector 2024-01-06 00:03:07 +11:00
6924b04d7c feat(ui): use lruMemoize for all entity adapter selectors 2024-01-06 00:03:07 +11:00
83fbd4bdf2 ui: slightly reposition floating bars. 2024-01-05 23:59:08 +11:00
6460dcc7e0 use torch.bfloat16 on cuda systems 2024-01-04 23:25:52 -05:00
59aa009c93 {release} v3.6.0rc3 (#5408)
## What type of PR is this? (check all applicable)

Release v3.6.0rc3


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [] Yes
- [X] No


## Description
Next release candidate

## Related Tickets & Documents
N/A
<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings
[Uploading InvokeAI-installer-v3.6.0rc3.zip…](Installer zip)

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan
This PR can be merged when approved
<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
Release on PyPI & GitHub
2024-01-05 10:39:20 +11:00
59d2a012cd {release} v3.6.0rc3 2024-01-05 09:40:21 +11:00
7e3b620830 Update README.md 2024-01-04 15:56:44 -05:00
e16b55816f fix(ui): clarify comparison in usePanel 2024-01-05 07:09:37 +11:00
895cb8637e fix(ui): fix panel resize bug
A bug that caused panels to be collapsed on a fresh indexedDb was fixed in dd32c632cd, but that fix re-introduced a different bug that caused the panels to expand on window resize if they were already collapsed.

Revert the previous change and instead add one imperative resize outside the observer, so that on startup, we set both panels to their minimum sizes.
2024-01-05 07:09:37 +11:00
fe5bceb1ed translationBot(ui): update translation (Italian)
Currently translated at 97.3% (1363 of 1400 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-01-05 07:04:04 +11:00
5d475a40f5 lint 2024-01-04 12:51:36 -05:00
bca7ea1674 option to override logo component 2024-01-04 12:51:36 -05:00
f27bb402fb render one or the other 2024-01-04 11:11:10 -05:00
dd32c632cd fix default panel width (#5403)
* fix default panel width

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-04 11:04:21 -05:00
9e2e740033 custom components for nav, gallery header, and app info (#5400)
* replace custom header with custom nav component to go below settings

* add option for custom gallery header

* add option for custom app info text on logo hover

* add data-testid for tabs

* remove descriptions

* lint

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-04 15:30:27 +00:00
d6362ce0bd fix(ui): fix invalid nesting of button in button 2024-01-04 09:36:59 -05:00
2347a00a70 fix(ui): do not show loading state on floating invoke button if disabled 2024-01-04 09:36:59 -05:00
0b7dc721cf chore(ui): lint 2024-01-04 09:36:59 -05:00
ac04a834ef feat(ui): tidy hotkeysmodal state 2024-01-04 09:36:59 -05:00
bbca053b48 feat(ui): style settings modal 2024-01-04 09:36:59 -05:00
fcf2006502 feat(ui): increase brightnesst of accordion title 2024-01-04 09:36:59 -05:00
ac0d0019bd chore(ui): lint 2024-01-04 09:36:59 -05:00
2d922a0a65 feat(ui): restore floating options and gallery buttons 2024-01-04 09:36:59 -05:00
8db14911d7 feat(ui): give tooltips padding from screen edge
We can pass a popperjs modifier to the tooltip to give it this padding.
2024-01-04 09:36:59 -05:00
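Roughly, with popper.js's standard `preventOverflow` modifier (the Chakra prop pass-through is assumed):

```ts
// Give popper-positioned tooltips breathing room from the viewport edge.
const tooltipModifiers = [
  {
    name: 'preventOverflow',
    options: { padding: 10 }, // px of clearance from the screen edge
  },
];

// e.g. <Tooltip label="Invoke" modifiers={tooltipModifiers}>...</Tooltip>
```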
01bab58b20 fix(ui): do not resize panel when window resizes if panel is collapsed 2024-01-04 09:36:59 -05:00
7a57bc99cf feat(ui): statusindicator changes
We are now using the left-hand vertical strip for the settings menu button. This is a good place for the status indicator.

Really, we only need to display something *if there is a problem*. If the app is processing, the progress bar indicates that.

For the case where the panels are collapsed, I'll add the floating buttons back in some form, and we'll indicate via those if the app is processing something.
2024-01-04 09:36:59 -05:00
d3b6d86e74 feat(ui): tweak badge styles 2024-01-04 09:36:59 -05:00
360b6cb286 fix(ui): fix logo version tooltip 2024-01-04 09:36:59 -05:00
8f9e9e639e fix(ui): fix hotkey key & untranslated string 2024-01-04 09:36:59 -05:00
6930d8ba41 feat(ui): do not wrap expander content in box 2024-01-04 09:36:59 -05:00
7ad74e680d feat(ui): make invexpander button styles less complex
Just make it like a normal button - normal and hover states, no difference when it's expanded. The icon clearly indicates this, and you can see the extra components.
2024-01-04 09:36:59 -05:00
c56a6a4ddd feat(ui): make expander divider button, add hover, remove color
On one hand I like the color, but on the other it makes this divider a focus point, which doesn't really make sense to me. I tried several shades but think it adds a bit too much distraction for your eyes.
2024-01-04 09:36:59 -05:00
afad764a00 feat(ui): make badges a bit paler
too stabby in the eye region
2024-01-04 09:36:59 -05:00
49a72bd714 feat(ui): use wrench icon in settings menu for settings 2024-01-04 09:36:59 -05:00
8cf14287b6 feat(ui): simplify App.tsx layout
There was an extra div, needed for the fullscreen file upload dropzone, that made styling the main app containers a bit awkward.

Refactor the uploader a bit to simplify this - no longer need so many app-level wrappers. Much cleaner.
2024-01-04 09:36:59 -05:00
0db47dd5e7 ui: Bolden text & add activation color for expanded state 2024-01-04 09:36:59 -05:00
71f6f77ae8 ui: Change background and padding of advanced settings 2024-01-04 09:36:59 -05:00
6f16229c41 fix: tone down the base color saturation by one step 2024-01-04 09:36:59 -05:00
0cc0d794d1 fix: Minor alignment issues with the queue badge 2024-01-04 09:36:59 -05:00
535639cb95 feat: Update status and progress colors to match new theme 2024-01-04 09:36:59 -05:00
2250bca8d9 feat: Remove Header
Remove header and incorporate everything else into the side bar and other areas
2024-01-04 09:36:59 -05:00
4ce39a5974 fix(ui): remove unused icons 2024-01-04 13:59:25 +11:00
644e9287f0 chore(ui): control adapter docstrings 2024-01-04 13:59:25 +11:00
6a5e0be022 fix(ui): reduce minStepsBetweenThumbs for ca begin/end 2024-01-04 13:59:25 +11:00
707f0f7091 feat(ui): update favicon to new logo 2024-01-04 13:59:25 +11:00
8e709fe05a fix(ui): remove shift+enter to cancel
Whoops!
2024-01-04 13:59:25 +11:00
154da609cb translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-01-04 01:19:31 +11:00
21975d6268 chore(ui): lint 2024-01-03 09:09:50 -05:00
31035b3e63 fix(ui): fix scaled bbox sliders
Removed logic related to aspect ratio from the components.

When the main bbox changes, if the scale method is auto, the reducers will handle the scaled bbox size appropriately.

Somehow linking up the manual mode to the aspect ratio is tricky, and instead of adding complexity for a rarely-used mode, I'm leaving manual mode as fully manual.
2024-01-03 09:09:50 -05:00
6c05818887 fix(ui): workaround canvas weirdness with locked aspect ratio
Cannot figure out how to allow the bbox to be transformed from all handles when the aspect ratio is locked. Only the bottom right handle works as expected.

As a workaround, when the aspect ratio is locked, you can only resize the bbox from the bottom right handle.
2024-01-03 09:09:50 -05:00
77c5b051f0 fix(ui): clean up actionsDenylist 2024-01-03 09:09:50 -05:00
4fdc4c15f9 feat(ui): add optimal size handling 2024-01-03 09:09:50 -05:00
1a4be78013 fix(ui): make aspect ratio preview pixel-perfect size 2024-01-03 09:09:50 -05:00
eb16ad3d6f fix(ui): remove old esc hotkey 2024-01-03 09:09:50 -05:00
1fee08639d feat(ui): tweak board search UI 2024-01-03 09:09:50 -05:00
7caaf40835 feat(ui): reworked hotkeys modal
- Displays all as list
- Uses chakra `Kbd` component for keys
- Provides search box
2024-01-03 09:09:50 -05:00
6bfe994622 feat(ui): bump fontSize in fallback component 2024-01-03 09:09:50 -05:00
8a6f03cd46 feat(ui): improved panel interactions 2024-01-03 09:09:50 -05:00
4ce9f9dc36 fix(ui): scaled bounding box uses canvas aspect ratio 2024-01-03 09:09:50 -05:00
00297716d6 Release: v3.6.0rc2 (#5386)
## What type of PR is this? (check all applicable)

Release

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
v3.6.0rc2 release

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings
Test latest main & [Uploading
InvokeAI-installer-v3.6.0rc2.zip…](Installer zip)

## Merge Plan
PR can be merged immediately
<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [X] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
Publish release on PyPI and GitHub
2024-01-03 14:58:45 +11:00
50c0dc71eb {release} v3.6.0rc2 2024-01-03 14:21:10 +11:00
29ccc6a3d8 chore(ui): lint 2024-01-03 13:18:50 +11:00
f92a5cbabc fix(ui): fix up hotkeys
- Add Shift+X back (this has been missing for a long time)
- Add secondary toggle options hotkey
2024-01-03 13:18:50 +11:00
acbf10f7ba feat(ui): improve error indicator for model fields 2024-01-03 13:18:50 +11:00
46d830b9fa feat(ui): new logo! 2024-01-03 13:18:50 +11:00
db17ec7a4b feat(ui): use dropzone noKeyboard opt instead of manual listener to disable on spacebar 2024-01-03 13:18:50 +11:00
6320d18846 fix(ui): upscale dropdown activates when clicking image actions
Weird issue with `react-select`... Made the popover lazy as a workaround.

Also updated styling of the popover.
2024-01-03 13:18:50 +11:00
37c8b9d06a fix(ui): fix sdxl style prompts
- Do not _merge_ prompt and style prompt when concat is enabled - either use the prompt as style, or use the style directly.
- Set style prompt metadata correctly.
- Add metadata recall for style prompt.
2024-01-03 13:18:50 +11:00
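The corrected selection logic from the first bullet, reduced to a sketch:

```ts
// When concat is on, the main prompt itself is used as the style prompt;
// when off, the user-entered style prompt is used directly. The two strings
// are never merged.
function getStylePrompt(prompt: string, stylePrompt: string, shouldConcat: boolean): string {
  return shouldConcat ? prompt : stylePrompt;
}
```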
7ba2108eb0 fix(ui): increase contrast between disabled and enabled inputs 2024-01-03 13:18:50 +11:00
8aeeee4752 fix(ui): fix erroneous vae model display
`react-select` has some weird behaviour where if the value is `undefined`, it shows the last-selected value instead of nothing. Must fall back to `null`
2024-01-03 13:18:50 +11:00
930de51910 feat(ui): add badges for advanced settings 2024-01-03 13:18:50 +11:00
b1b5c0d3b2 fix(ui): fix workflow editor model selector, excise ONNX
Ensure workflow editor model selector component gets a value

This introduced some funky type issues related to ONNX models. ONNX doesn't work anyway (unmaintained). Instead of fixing the types to work with a non-working feature, ONNX is now removed entirely from the UI.

- Remove all refs to ONNX (and Olives)
- Fix some type issues
- Add ONNX nodes to the nodes denylist (so they are not visible in UI)
- Update VAE graph helper, which still had some ONNX logic. It's a very simple change and doesn't change any logic. Just removes some conditions that were for ONNX. I tested it and nothing broke.
- Regenerate types
- Fix prettier and eslint ignores for generated types
- Lint
2024-01-03 13:18:50 +11:00
ebe717099e feat(ui): add $store to window in dev mode
Helpful for troubleshooting.
2024-01-03 13:18:50 +11:00
06245bc761 feat(ui): add support for default values for sliders 2024-01-03 13:18:50 +11:00
b4c0dafdc8 feat(ui): remove unused iterations component 2024-01-03 13:18:50 +11:00
0cefacb3a2 feat(ui): add support for default values for numberinputs 2024-01-03 13:18:50 +11:00
baa5f75976 fix(ui): fix node styles
Got borked when adjusting control adapter styling. Should revisit this later.
2024-01-03 13:18:50 +11:00
989aaedc7f feat(nodes): add title for cfg rescale mult on denoise_latents 2024-01-03 13:18:50 +11:00
93e08df849 fix(ui): min fallback on nodes number fields -> -NUMPY_RAND_MAX 2024-01-03 13:18:50 +11:00
4a43e1c1b8 fix(ui): restore global hotkeys 2024-01-03 13:18:50 +11:00
2bbab9d94e Updater recommends db backup when installing RC (#5381)
* Updater suggests db backup when installing RC

* Update invokeai_update.py to be more specific

* Update invokeai_update.py

* Update invokeai_update.py

* Update invokeai_update.py

* Update invokeai_update.py
2024-01-02 22:44:45 +00:00
a456f6e6f0 lint 2024-01-02 10:02:33 -05:00
a408f562d6 option to use new brand for loader 2024-01-02 10:02:33 -05:00
cefdf9ed00 define text color for tooltips 2024-01-02 10:02:33 -05:00
5413bf07e2 Sisco/docker allow relative paths for invokeai data (#5344)
* Update docker-compose.yml to bind local data path

* Update LOCAL_DATA_PATH in .env.sample

* Add fallback to INVOKEAI_ROOT envar if LOCAL_DATA_PATH not present.

* rename LOCAL_DATA_PATH to INVOKAI_LOCAL_ROOT

* Whoops, didn't mean to include this

* Update docker/docker-compose.yml

Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>

* [chore] rename envar

* Apply suggestions from code review

---------

Co-authored-by: Eugene Brodsky <ebr@users.noreply.github.com>
2024-01-02 13:17:57 +00:00
4cffe282bd feat(ui): disable scan models tab
not working yet WIP
2024-01-02 07:28:53 -05:00
ae8ffe9d51 chore(ui): lint 2024-01-02 07:28:53 -05:00
870cc5b733 feat(ui): dynamic prompts loading ux
- Prompt must have an open curly brace followed by a close curly brace to enable dynamic prompts processing
- If the given prompt already had a dynamic prompt cached, do not re-process
- If processing is not needed, user may invoke immediately
- Invoke button shows loading state when dynamic prompts are processing, tooltip says generating
- Dynamic prompts preview icon in prompt box shows loading state when processing, tooltip says generating
2024-01-02 07:28:53 -05:00
0b4eb888c5 feat(ui): canvas bbox interaction tweaks
Making the math match the previous implementation
2024-01-02 07:28:53 -05:00
11f1cb5391 fix(ui): fix canvas bbox style when cursor leaves canvas 2024-01-02 07:28:53 -05:00
1e2e26cfc2 feat(ui): add open queue to queue action menu 2024-01-02 07:28:53 -05:00
e9bce6e1c3 fix(ui): fix cut off badge on queue actions menu 2024-01-02 07:28:53 -05:00
799ef0e7c1 fix(ui): control adapter models select disable if incompatible 2024-01-02 07:28:53 -05:00
61c10a7ca8 fix(ui): fix canvas bbox interactions 2024-01-02 07:28:53 -05:00
93880223e6 feat(ui): move strength up one 2024-01-02 07:28:53 -05:00
271456b745 fix(ui): fix badges for image settings canvas 2024-01-02 07:28:53 -05:00
cecee33bc0 feat(ui): support grid size of 8 on canvas
- Support grid size of 8 on canvas
- Internal canvas math works on 8
- Update gridlines rendering to show 64 spaced lines and 32/16/8 when zoomed in
- Bbox manipulation defaults to grid of 64 - hold shift to get grid of 8

Besides being something we support internally, supporting 8 on canvas avoids a lot of hacky logic needed to work well with aspect ratios.
2024-01-02 07:28:53 -05:00
4f43eda09b feat(ui): modularize imagesize components
Canvas and non-canvas have separate width and height and need their own separate aspect ratios. In order to not duplicate a lot of aspect ratio logic, the components relating to image size have been modularized.
2024-01-02 07:28:53 -05:00
011757c497 fix(ui): add numberinput to control adapter weight
Required some rejiggering of the InvControl and InvSlider styles.
2024-01-02 07:28:53 -05:00
2700d0e769 fix(nodes): fix constraints/validation for controlnet
- Fix `weight` and `begin_step_percent`, the constraints were mixed up
- Add model validator to ensure `begin_step_percent < end_step_percent`
- Bump version
2024-01-02 07:28:53 -05:00
d256d93a2a feat(ui): use larger chevrons for number input steppers 2024-01-02 07:28:53 -05:00
f3c8e986a5 feat(ui): bump badge fontsize to 10px 2024-01-02 07:28:53 -05:00
48f5e4f313 fix(ui): missing denoise strength
It was accidentally hidden everywhere.
2024-01-02 07:28:53 -05:00
5950ffe064 Release/v3.6.0rc1 (#5372)
## What type of PR is this? (check all applicable)

InvokeAI 3.6.0rc1 Release 


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Update version & frontend build for Invoke v3.6.0rc1

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?

Upload release to PyPI & create release on GitHub
2024-01-02 11:21:37 +11:00
49ca949cd6 {release} Update version to 3.6.0rc1 2024-01-02 10:23:55 +11:00
5d69f1cbf5 Remove frontend build from repo permanently 2024-01-02 10:18:11 +11:00
9169006171 chore(ui): lint 2024-01-01 08:13:23 -05:00
28b74523d0 fix(ui): fix dynamic prompts with single prompt
Closes #5292

The special handling for single prompt is totally extraneous and caused a bug.
2024-01-01 08:13:23 -05:00
9359c03c3c feat(ui): use zod-less workflow builder when appropriate 2024-01-01 08:13:23 -05:00
598241e0f2 fix(ui): InvContextMenu.placement = 'auto-end'
This ensures the context menus don't get cut off when the window size is very small.
2024-01-01 08:13:23 -05:00
e698a8006c feat(ui): use lruMemoize for argsMemoize on selectors
This provides a small performance improvement, on the order of a few ms per interaction.
2024-01-01 08:13:23 -05:00
34e7b5a7fb chore(ui): lint 2024-01-01 08:13:23 -05:00
5c3dd62ae0 feat(ui): update useGlobalModifiers to store each key independently
This reduces rerenders when the user presses a modifier key.
2024-01-01 08:13:23 -05:00
7e2eeec1f3 feat(ui): optimized workflow building
- Store workflow in nanostore as singleton instead of building for each consumer
- Debounce the build (already was indirectly debounced)
- When the workflow is needed, imperatively grab it from the nanostores, instead of letting react handle it via reactivity
2024-01-01 08:13:23 -05:00
7eb79266c4 feat(ui): split dnd overlay to separate component
This reduces top-level rerenders when zooming in and out on workflow editor
2024-01-01 08:13:23 -05:00
5d4610d981 feat(ui): store node templates in separate slice
Flattens the `nodes` slice. May offer minor perf improvements in addition to just being cleaner.
2024-01-01 08:13:23 -05:00
7c548c5bf3 feat(ui): move canvas interaction state to nanostores
This drastically reduces the computation needed when moving the cursor. It also correctly separates ephemeral interaction state from redux, where it is not needed.

Also removed some unused canvas state.
2024-01-01 08:13:23 -05:00
2a38606342 fix(ui): show denoising strength on canvas 2024-01-01 08:13:23 -05:00
793cf39964 feat(ui): bump react-resizable-panels & improve usePanel hook 2024-01-01 08:13:23 -05:00
ab3e689ee0 fix(ui): fix workflow library new workflow/settings closing
Need to make the menu not lazy. A better solution is to refactor how the settings component works, rendering it in a different part of the component tree.
2024-01-01 08:13:23 -05:00
20f497054f feat(ui): optimized useMouseOverNode
Manually hook into pubsub to eliminate extraneous rerenders on hook change
2024-01-01 08:13:23 -05:00
6209fef63d fix(ui): focus add node popover on open
Need an extra ref to pass to the InvSelect component.
2024-01-01 08:13:23 -05:00
5168415999 feat(ui): use nanostores for useMouseOverNode
This greatly reduces the weight of the event handlers.
2024-01-01 08:13:23 -05:00
b490c8ae27 chore(ui): bump deps
Includes vite v5 - only change needed is to set .mts for vite config files.
2024-01-01 08:13:23 -05:00
6f354f16ba feat(ui): canvas perf improvements 2024-01-01 08:13:23 -05:00
e108a2302e fix(ui): fix uninteractable canvas bbox 2024-01-01 08:13:23 -05:00
2ffecef792 feat(ui): bump react-resizable-panels, improve panel resize logic 2024-01-01 08:13:23 -05:00
2663a07e94 feat(ui): misc canvas perf improvements
- disable listening when not needed
- use useMemo for gridlines
2024-01-01 08:13:23 -05:00
8d2ef5afc3 feat(ui): disable onlyRenderVisibleElements on Flow
This can cause stuttering when nodes are being moved in and out of the viewport. I think it's better to improve rendering/perf in other ways.
2024-01-01 08:13:23 -05:00
539887b215 feat(ui): misc perf/rerender improvements
More efficient selectors, memoized/stable references to objects, lazy popover/menu rendering.
2024-01-01 08:13:23 -05:00
2ba505cce9 feat(ui): use pubsub for globalcontextmenuclose
Far more efficient than the crude redux incrementor thing.
2024-01-01 08:13:23 -05:00
bd92a31d15 feat(ui): add createLruSelector
This uses the previous implementation of the memoization function in reselect. It's possible for the new weakmap-based memoization to cause memory leaks in certain scenarios, so we will avoid it for now.
2024-01-01 08:13:23 -05:00
ee2529f3fd lru 2024-01-01 08:13:23 -05:00
89b7082bc0 fix(ui): remove debug stmts 2024-01-01 08:13:23 -05:00
55dfabb892 feat(ui): make label widths grow
Fixes issue where translations overflowed due to hardcoded widths.
2024-01-01 08:13:23 -05:00
2a41fd0b29 fix(ui): fix field title styling 2024-01-01 08:13:23 -05:00
966919ea4a translationBot(ui): update translation (Russian)
Currently translated at 98.1% (1335 of 1360 strings)

Co-authored-by: Васянатор <ilabulanov339@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translation: InvokeAI/Web UI
2024-01-01 11:38:27 +11:00
d3acdcf12f translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-01-01 11:38:27 +11:00
52f9749bf5 feat(ui): partial rebuild of model manager internal logic 2023-12-29 08:26:14 -05:00
2a661450c3 feat(ui): increase size of clear icon on selects 2023-12-29 08:26:14 -05:00
2d96c62fdb feat(ui): more memoization 2023-12-29 08:26:14 -05:00
3e6173ee8c feat(ui): only show refiner models on refiner model select 2023-12-29 08:26:14 -05:00
4e9841c924 feat(ui): add refiner cfg scale & steps defaults & marks 2023-12-29 08:26:14 -05:00
f4ea495d23 feat(ui): InvSwitch and InvSliderThumb are round 2023-12-29 08:26:14 -05:00
43a4b815e8 fix(ui): fix InvSlider vertical thumb styling 2023-12-29 08:26:14 -05:00
4134f18319 fix(ui): InvEditable, linear field view styling 2023-12-29 08:26:14 -05:00
cd292f6c1c fix(ui): remove errant console.log 2023-12-29 08:26:14 -05:00
3ce8f3d6fe feat(ui): more memoization 2023-12-29 08:26:14 -05:00
10fd4f6a61 feat(ui): update panel lib, move gallery to percentages 2023-12-29 08:26:14 -05:00
47b1fd4bce chore(ui): bump deps 2023-12-29 08:26:14 -05:00
300805a25a fix(ui): fix typing issues 2023-12-29 08:26:14 -05:00
56527da73e feat(ui): memoize all components 2023-12-29 08:26:14 -05:00
ca4b8e65c1 feat(ui): use stable objects for animation/native element styles 2023-12-29 08:26:14 -05:00
f5194f9e2d feat(ui): generation accordion badges 2023-12-29 08:26:14 -05:00
ccbbb417f9 feat(ui): fix control adapters styling 2023-12-29 08:26:14 -05:00
37786a26a5 feat(ui): move scaling up to image settings -> advanced 2023-12-29 08:26:14 -05:00
4f2930412e feat(ui): use primitive style props or memoized sx objects 2023-12-29 08:26:14 -05:00
83049a3a5b fix(ui): typo in canvas model handler 2023-12-29 08:26:14 -05:00
38256f97b3 fix(ui): fix word break on LoRACard 2023-12-29 08:26:14 -05:00
77f2aabda4 feat(ui): sort model select options with compatible base model first 2023-12-29 08:26:14 -05:00
e32eb2a649 fix(ui): restore labels in model manager selects 2023-12-29 08:26:14 -05:00
f4cdfa3b9c fix(ui): canvas layer select cut off 2023-12-29 08:26:14 -05:00
e99b715e9e fix(ui): board collapse button styling 2023-12-29 08:26:14 -05:00
ed96c40239 feat(ui): change queue icon 2023-12-29 08:26:14 -05:00
1b3bb932b9 feat(ui): reduce button fontweight to semibold 2023-12-29 08:26:14 -05:00
f0b102d830 feat(ui): ux improvements & redesign
This is a squash merge of a bajillion messy small commits created while iterating on the UI component library and redesign.
2023-12-29 08:26:14 -05:00
a47d91f0e7 feat(api): add max_prompts constraints 2023-12-29 08:26:14 -05:00
358c1f5791 Release/v3.5.1 (#5363)
## What type of PR is this? (check all applicable)

Release v3.5.1

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
InvokeAI v3.5.1 release

## [optional] Are there any post deployment tasks we need to perform?
1. Release on PyPi
2. Create GH release
3. Announce on Discord
2023-12-29 15:20:24 +11:00
faec320d48 {release} v3.5.1 2023-12-29 13:33:47 +11:00
fd074abdc4 Add frontend build 2023-12-29 13:16:23 +11:00
d8eb58cd58 Add frontend build 2023-12-29 13:15:37 +11:00
8937d66412 Add Tiled Upscaling to default workflows (#5362)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [X] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [X] No


## Description
Add Tiled Upscaling to default workflows

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2023-12-29 12:43:47 +11:00
a6935ae7fb Add Tiled Upscaling to default workflows 2023-12-29 12:26:50 +11:00
69968eb67b add nightmare promptgen to communityNodes.md (#5360)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [x] Community Node Submission

## Description
Adds nightmare promptgen to the community nodes list.
2023-12-29 08:06:44 +11:00
e57f5f129c add nightmare promptgen to communityNodes.md 2023-12-28 13:15:52 -05:00
1b8651fa26 fix(ui): do not create extraneous pos var 2023-12-28 20:44:02 +11:00
f6664960ca Update useBuildNode.ts
Added the rect's top-left coordinates to get equivalent behavior.
2023-12-28 20:44:02 +11:00
84a001720c Added back bounds check 2023-12-28 20:44:02 +11:00
c9951cd86b Eliminate constant console deprecation warnings
React Flow 11.10 eliminates the need to use project() and issues a deprecation warning to the console every time onMouseMove is called (see https://reactflow.dev/whats-new/2023-11-10#rename-usereactflowproject-to-usereactflowscreentoflowposition). This code change eliminates that warning.
2023-12-28 20:44:02 +11:00
1176 changed files with 41019 additions and 36830 deletions

.github/CODEOWNERS (8 changes)

@@ -1,5 +1,5 @@
 # continuous integration
-/.github/workflows/ @lstein @blessedcoolant @hipsterusername
+/.github/workflows/ @lstein @blessedcoolant @hipsterusername @ebr
 # documentation
 /docs/ @lstein @blessedcoolant @hipsterusername @Millu
@@ -10,7 +10,7 @@
 # installation and configuration
 /pyproject.toml @lstein @blessedcoolant @hipsterusername
-/docker/ @lstein @blessedcoolant @hipsterusername
+/docker/ @lstein @blessedcoolant @hipsterusername @ebr
 /scripts/ @ebr @lstein @hipsterusername
 /installer/ @lstein @ebr @hipsterusername
 /invokeai/assets @lstein @ebr @hipsterusername
@@ -26,9 +26,7 @@
 # front ends
 /invokeai/frontend/CLI @lstein @hipsterusername
 /invokeai/frontend/install @lstein @ebr @hipsterusername
 /invokeai/frontend/merge @lstein @blessedcoolant @hipsterusername
 /invokeai/frontend/training @lstein @blessedcoolant @hipsterusername
 /invokeai/frontend/web @psychedelicious @blessedcoolant @maryhipp @hipsterusername


@@ -6,10 +6,6 @@ title: '[bug]: '
 labels: ['bug']
-# assignees:
-#   - moderator_bot
-#   - lstein
 body:
   - type: markdown
     attributes:
@@ -18,10 +14,9 @@ body:
   - type: checkboxes
     attributes:
-      label: Is there an existing issue for this?
+      label: Is there an existing issue for this problem?
       description: |
-        Please use the [search function](https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen+label%3Abug)
-        first to see if an issue already exists for the bug you encountered.
+        Please [search](https://github.com/invoke-ai/InvokeAI/issues) first to see if an issue already exists for the problem.
       options:
         - label: I have searched the existing issues
           required: true
@@ -33,80 +28,119 @@ body:
   - type: dropdown
     id: os_dropdown
     attributes:
-      label: OS
-      description: Which operating System did you use when the bug occured
+      label: Operating system
+      description: Your computer's operating system.
       multiple: false
       options:
         - 'Linux'
         - 'Windows'
         - 'macOS'
+        - 'other'
     validations:
       required: true
   - type: dropdown
     id: gpu_dropdown
     attributes:
-      label: GPU
-      description: Which kind of Graphic-Adapter is your System using
+      label: GPU vendor
+      description: Your GPU's vendor.
       multiple: false
       options:
-        - 'cuda'
-        - 'amd'
-        - 'mps'
-        - 'cpu'
+        - 'Nvidia (CUDA)'
+        - 'AMD (ROCm)'
+        - 'Apple Silicon (MPS)'
+        - 'None (CPU)'
     validations:
       required: true
+  - type: input
+    id: gpu_model
+    attributes:
+      label: GPU model
+      description: Your GPU's model. If on Apple Silicon, this is your Mac's chip. Leave blank if on CPU.
+      placeholder: ex. RTX 2080 Ti, Mac M1 Pro
+    validations:
+      required: false
   - type: input
     id: vram
     attributes:
-      label: VRAM
-      description: Size of the VRAM if known
+      label: GPU VRAM
+      description: Your GPU's VRAM. If on Apple Silicon, this is your Mac's unified memory. Leave blank if on CPU.
       placeholder: 8GB
     validations:
       required: false
   - type: input
     id: version-number
     attributes:
-      label: What version did you experience this issue on?
+      label: Version number
       description: |
-        Please share the version of Invoke AI that you experienced the issue on. If this is not the latest version, please update first to confirm the issue still exists. If you are testing main, please include the commit hash instead.
-      placeholder: X.X.X
+        The version of Invoke you have installed. If it is not the latest version, please update and try again to confirm the issue still exists. If you are testing main, please include the commit hash instead.
+      placeholder: ex. 3.6.1
     validations:
       required: true
+  - type: input
+    id: browser-version
+    attributes:
+      label: Browser
+      description: Your web browser and version.
+      placeholder: ex. Firefox 123.0b3
+    validations:
+      required: true
+  - type: textarea
+    id: python-deps
+    attributes:
+      label: Python dependencies
+      description: |
+        If the problem occurred during image generation, click the gear icon at the bottom left corner, click "About", click the copy button and then paste here.
+    validations:
+      required: false
   - type: textarea
     id: what-happened
     attributes:
-      label: What happened?
+      label: What happened
       description: |
-        Briefly describe what happened, what you expected to happen and how to reproduce this bug.
-      placeholder: When using the webinterface and right-clicking on button X instead of the popup-menu there error Y appears
+        Describe what happened. Include any relevant error messages, stack traces and screenshots here.
+      placeholder: I clicked button X and then Y happened.
     validations:
       required: true
   - type: textarea
+    id: what-you-expected
     attributes:
-      label: Screenshots
-      description: If applicable, add screenshots to help explain your problem
-      placeholder: this is what the result looked like <screenshot>
+      label: What you expected to happen
+      description: Describe what you expected to happen.
+      placeholder: I expected Z to happen.
+    validations:
+      required: true
+  - type: textarea
+    id: how-to-repro
+    attributes:
+      label: How to reproduce the problem
+      description: List steps to reproduce the problem.
+      placeholder: Start the app, generate an image with these settings, then click button X.
     validations:
       required: false
   - type: textarea
+    id: additional-context
     attributes:
       label: Additional context
-      description: Add any other context about the problem here
+      description: Any other context that might help us to understand the problem.
       placeholder: Only happens when there is full moon and Friday the 13th on Christmas Eve 🎅🏻
     validations:
       required: false
   - type: input
-    id: contact
+    id: discord-username
     attributes:
-      label: Contact Details
-      description: __OPTIONAL__ How can we get in touch with you if we need more info (besides this issue)?
-      placeholder: ex. email@example.com, discordname, twitter, ...
+      label: Discord username
+      description: If you are on the Invoke discord and would prefer to be contacted there, please provide your username.
+      placeholder: supercoolusername123
     validations:
       required: false

.github/pr_labels.yml (new file, 59 additions)

@@ -0,0 +1,59 @@
Root:
- changed-files:
- any-glob-to-any-file: '*'
PythonDeps:
- changed-files:
- any-glob-to-any-file: 'pyproject.toml'
Python:
- changed-files:
- all-globs-to-any-file:
- 'invokeai/**'
- '!invokeai/frontend/web/**'
PythonTests:
- changed-files:
- any-glob-to-any-file: 'tests/**'
CICD:
- changed-files:
- any-glob-to-any-file: .github/**
Docker:
- changed-files:
- any-glob-to-any-file: docker/**
Installer:
- changed-files:
- any-glob-to-any-file: installer/**
Documentation:
- changed-files:
- any-glob-to-any-file: docs/**
Invocations:
- changed-files:
- any-glob-to-any-file: 'invokeai/app/invocations/**'
Backend:
- changed-files:
- any-glob-to-any-file: 'invokeai/backend/**'
Api:
- changed-files:
- any-glob-to-any-file: 'invokeai/app/api/**'
Services:
- changed-files:
- any-glob-to-any-file: 'invokeai/app/services/**'
FrontendDeps:
- changed-files:
- any-glob-to-any-file:
- '**/*/package.json'
- '**/*/pnpm-lock.yaml'
Frontend:
- changed-files:
- any-glob-to-any-file: 'invokeai/frontend/web/**'


@@ -40,10 +40,14 @@ jobs:
       - name: Free up more disk space on the runner
         # https://github.com/actions/runner-images/issues/2840#issuecomment-1284059930
         run: |
+          echo "----- Free space before cleanup"
+          df -h
           sudo rm -rf /usr/share/dotnet
           sudo rm -rf "$AGENT_TOOLSDIRECTORY"
           sudo swapoff /mnt/swapfile
           sudo rm -rf /mnt/swapfile
+          echo "----- Free space after cleanup"
+          df -h
       - name: Checkout
         uses: actions/checkout@v3
@@ -91,6 +95,7 @@ jobs:
       #     password: ${{ secrets.DOCKERHUB_TOKEN }}
       - name: Build container
+        timeout-minutes: 40
         id: docker_build
         uses: docker/build-push-action@v4
         with:

.github/workflows/label-pr.yml (new file, 16 additions)

@@ -0,0 +1,16 @@
name: "Pull Request Labeler"
on:
- pull_request_target
jobs:
labeler:
permissions:
contents: read
pull-requests: write
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- uses: actions/labeler@v5
with:
configuration-path: .github/pr_labels.yml


@@ -58,7 +58,7 @@ jobs:
       - name: Check for changed python files
         id: changed-files
-        uses: tj-actions/changed-files@v37
+        uses: tj-actions/changed-files@v41
         with:
           files_yaml: |
             python:


@@ -1,10 +1,10 @@
 <div align="center">
-![project hero](https://github.com/invoke-ai/InvokeAI/assets/31807370/1a917d94-e099-4fa1-a70f-7dd8d0691018)
+![project hero](https://github.com/invoke-ai/InvokeAI/assets/31807370/6e3728c7-e90e-4711-905c-3b55844ff5be)
-# Invoke AI - Generative AI for Professional Creatives
+# Invoke - Professional Creative AI Tools for Visual Media
-## Professional Creative Tools for Stable Diffusion, Custom-Trained Models, and more.
-To learn more about Invoke AI, get started instantly, or implement our Business solutions, visit [invoke.ai](https://invoke.ai)
+## To learn more about Invoke, or implement our Business solutions, visit [invoke.com](https://www.invoke.com/about)
 [![discord badge]][discord link]
@@ -56,7 +56,9 @@ the foundation for multiple commercial products.
 <div align="center">
-![canvas preview](https://github.com/invoke-ai/InvokeAI/raw/main/docs/assets/canvas_preview.png)
+![Highlighted Features - Canvas and Workflows](https://github.com/invoke-ai/InvokeAI/assets/31807370/708f7a82-084f-4860-bfbe-e2588c53548d)
 </div>
@@ -167,7 +169,7 @@ the command `npm install -g pnpm` if needed)
 _For Linux with an AMD GPU:_
 ```sh
-pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
+pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.6
 ```
 _For non-GPU systems:_


@@ -2,10 +2,13 @@
 ## Any environment variables supported by InvokeAI can be specified here,
 ## in addition to the examples below.
-# INVOKEAI_ROOT is the path to a path on the local filesystem where InvokeAI will store data.
+# HOST_INVOKEAI_ROOT is the path on the docker host's filesystem where InvokeAI will store data.
 # Outputs will also be stored here by default.
-# This **must** be an absolute path.
-INVOKEAI_ROOT=
+# If relative, it will be relative to the docker directory in which the docker-compose.yml file is located
+#HOST_INVOKEAI_ROOT=../../invokeai-data
+# INVOKEAI_ROOT is the path to the root of the InvokeAI repository within the container.
+# INVOKEAI_ROOT=~/invokeai
 # Get this value from your HuggingFace account settings page.
 # HUGGING_FACE_HUB_TOKEN=


@@ -59,7 +59,7 @@ RUN --mount=type=cache,target=/root/.cache/pip \
 # #### Build the Web UI ------------------------------------
-FROM node:18-slim AS web-builder
+FROM node:20-slim AS web-builder
 ENV PNPM_HOME="/pnpm"
 ENV PATH="$PNPM_HOME:$PATH"
 RUN corepack enable
@@ -68,7 +68,7 @@ WORKDIR /build
 COPY invokeai/frontend/web/ ./
 RUN --mount=type=cache,target=/pnpm/store \
     pnpm install --frozen-lockfile
-RUN pnpm run build
+RUN npx vite build
 #### Runtime stage ---------------------------------------


@@ -28,7 +28,7 @@ This is done via Docker Desktop preferences
 ### Configure Invoke environment
-1. Make a copy of `env.sample` and name it `.env` (`cp env.sample .env` (Mac/Linux) or `copy example.env .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
+1. Make a copy of `.env.sample` and name it `.env` (`cp .env.sample .env` (Mac/Linux) or `copy example.env .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
    a. the desired location of the InvokeAI runtime directory, or
    b. an existing, v3.0.0 compatible runtime directory.
 1. Execute `run.sh`


@@ -21,7 +21,9 @@ x-invokeai: &invokeai
     ports:
       - "${INVOKEAI_PORT:-9090}:9090"
     volumes:
-      - ${INVOKEAI_ROOT:-~/invokeai}:${INVOKEAI_ROOT:-/invokeai}
+      - type: bind
+        source: ${HOST_INVOKEAI_ROOT:-${INVOKEAI_ROOT:-~/invokeai}}
+        target: ${INVOKEAI_ROOT:-/invokeai}
       - ${HF_HOME:-~/.cache/huggingface}:${HF_HOME:-/invokeai/.cache/huggingface}
       # - ${INVOKEAI_MODELS_DIR:-${INVOKEAI_ROOT:-/invokeai/models}}
       # - ${INVOKEAI_MODELS_CONFIG_PATH:-${INVOKEAI_ROOT:-/invokeai/configs/models.yaml}}

(13 binary image assets changed; previews and size metadata omitted.)


@@ -15,8 +15,13 @@ model. These are the:
  their metadata, and `ModelRecordServiceBase` to store that
  information. It is also responsible for managing the InvokeAI
  `models` directory and its contents.
* _ModelMetadataStore_ and _ModelMetaDataFetch_ Backend modules that
  are able to retrieve metadata from online model repositories,
  transform them into Pydantic models, and cache them to the InvokeAI
  SQL database.
* _DownloadQueueServiceBase_
  A multithreaded downloader responsible
  for downloading models from a remote source to disk. The download
  queue has special methods for downloading repo_id folders from
@@ -30,13 +35,13 @@ model. These are the:

## Location of the Code

The four main services can be found in
`invokeai/app/services` in the following directories:

* `invokeai/app/services/model_records/`
* `invokeai/app/services/model_install/`
* `invokeai/app/services/downloads/`
* `invokeai/app/services/model_loader/` (**under development**)

Code related to the FastAPI web API can be found in
`invokeai/app/api/routers/model_records.py`.
@@ -402,15 +407,18 @@ functionality:
  the download, installation and registration process.
- Downloading a model from an arbitrary URL and installing it in
  `models_dir`.
- Special handling for Civitai model URLs which allow the user to
  paste in a model page's URL or download link.
- Special handling for HuggingFace repo_ids to recursively download
  the contents of the repository, paying attention to alternative
  variants such as fp16.
- Saving tags and other metadata about the model into the invokeai database
  when fetching from a repo that provides that type of information
  (currently only Civitai and HuggingFace).
### Initializing the installer
@@ -426,16 +434,24 @@ following initialization pattern:
```
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.model_records import ModelRecordServiceSQL
from invokeai.app.services.model_install import ModelInstallService
from invokeai.app.services.download import DownloadQueueService
from invokeai.app.services.shared.sqlite import SqliteDatabase
from invokeai.backend.util.logging import InvokeAILogger

config = InvokeAIAppConfig.get_config()
config.parse_args()
logger = InvokeAILogger.get_logger(config=config)
db = SqliteDatabase(config, logger)

record_store = ModelRecordServiceSQL(db)
queue = DownloadQueueService()
queue.start()

installer = ModelInstallService(app_config=config,
                                record_store=record_store,
                                download_queue=queue
                                )
installer.start()
```
The full form of `ModelInstallService()` takes the following
@@ -443,9 +459,12 @@ required parameters:

| **Argument**     | **Type**                     | **Description**              |
|------------------|------------------------------|------------------------------|
| `app_config`     | InvokeAIAppConfig            | InvokeAI app configuration object |
| `record_store`   | ModelRecordServiceBase       | Config record storage database |
| `download_queue` | DownloadQueueServiceBase     | Download queue object |
| `metadata_store` | Optional[ModelMetadataStore] | Metadata storage object |
| `session`        | Optional[requests.Session]   | Swap in a different Session object (usually for debugging) |

Once initialized, the installer will provide the following methods:
@@ -474,14 +493,14 @@ source7 = URLModelSource(url='https://civitai.com/api/download/models/63006', ac
for source in [source1, source2, source3, source4, source5, source6, source7]:
    install_job = installer.install_model(source)

source2job = installer.wait_for_installs(timeout=120)
for source in sources:
    job = source2job[source]
    if job.complete:
        model_config = job.config_out
        model_key = model_config.key
        print(f"{source} installed as {model_key}")
    elif job.errored:
        print(f"{source}: {job.error_type}.\nStack trace:\n{job.error}")
```
@@ -515,43 +534,117 @@ The full list of arguments to `import_model()` is as follows:

| **Argument**     | **Type**                     | **Default** | **Description**                           |
|------------------|------------------------------|-------------|-------------------------------------------|
| `source`         | ModelSource                  | None        | The source of the model, Path, URL or repo_id |
| `config`         | Dict[str, Any]               | None        | Override all or a portion of model's probed attributes |

The next few sections describe the various types of ModelSource that
can be passed to `import_model()`.

`config` can be used to override all or a portion of the configuration
attributes returned by the model prober. See the section below for
details.

#### LocalModelSource
This is used for a model that is located on a locally-accessible Posix
filesystem, such as a local disk or networked fileshare.
| **Argument** | **Type** | **Default** | **Description** |
|------------------|------------------------------|-------------|-------------------------------------------|
| `path`           | str \| Path                  | None        | Path to the model file or directory |
| `inplace` | bool | False | If set, the model file(s) will be left in their location; otherwise they will be copied into the InvokeAI root's `models` directory |
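For illustration, here is a minimal sketch of registering a local file in place. The import location of `LocalModelSource` is an assumption (adjust it to wherever the source classes live in your tree), and `installer` is the `ModelInstallService` built in the initialization example above; the path is hypothetical:

```
from pathlib import Path

# Assumed import path for the model source classes.
from invokeai.app.services.model_install import LocalModelSource

# Register the model where it already lives rather than copying it
# into the InvokeAI root's models directory.
source = LocalModelSource(path=Path('/data/checkpoints/my-model.safetensors'), inplace=True)
job = installer.import_model(source)
installer.wait_for_installs(timeout=120)
```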
#### URLModelSource
This is used for a single-file model that is accessible via a URL. The
fields are:
| **Argument** | **Type** | **Default** | **Description** |
|------------------|------------------------------|-------------|-------------------------------------------|
| `url` | AnyHttpUrl | None | The URL for the model file. |
| `access_token` | str | None | An access token needed to gain access to this file. |
The `AnyHttpUrl` class can be imported from `pydantic.networks`.
Ordinarily, no metadata is retrieved from these sources. However,
there is special-case code in the installer that looks for HuggingFace
and Civitai URLs and fetches the corresponding model metadata from
the corresponding repo.
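A sketch of the same flow for a URL source (the URL is a placeholder and the import path is again an assumption):

```
from pydantic.networks import AnyHttpUrl

# Assumed import path for the model source classes.
from invokeai.app.services.model_install import URLModelSource

source = URLModelSource(
    url=AnyHttpUrl('https://example.com/models/my-model.safetensors'),  # placeholder URL
    access_token='my-download-token',  # only needed for gated downloads
)
job = installer.import_model(source)
```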
#### CivitaiModelSource
This is used for a model that is hosted by the Civitai web site.
| **Argument** | **Type** | **Default** | **Description** |
|------------------|------------------------------|-------------|-------------------------------------------|
| `version_id` | int | None | The ID of the particular version of the desired model. |
| `access_token` | str | None | An access token needed to gain access to a subscriber's-only model. |
Civitai has two model IDs, both of which are integers. The `model_id`
corresponds to a collection of model versions that may differ in
arbitrary ways, such as derivation from different checkpoint training
steps, SFW vs NSFW generation, pruned vs non-pruned, etc. The
`version_id` points to a specific version. Please use the latter.
Some Civitai models require an access token to download. These can be
generated from the Civitai profile page of a logged-in
account. Somewhat annoyingly, if you fail to provide the access token
when downloading a model that needs it, Civitai generates a redirect
to a login page rather than a 403 Forbidden error. The installer
attempts to catch this event and issue an informative error
message. Otherwise you will get an "unrecognized model suffix" error
when the model prober tries to identify the type of the HTML login
page.
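A hedged sketch, reusing the version ID from the download-link example earlier in this document (the import path is an assumption, and the token is a placeholder):

```
# Assumed import path for the model source classes.
from invokeai.app.services.model_install import CivitaiModelSource

# 63006 is a *version* ID (the integer from a model page's download
# link), not the model ID for the whole collection of versions.
source = CivitaiModelSource(version_id=63006, access_token='my-civitai-token')
job = installer.import_model(source)
```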
#### HFModelSource
HuggingFace has the most complicated `ModelSource` structure:
| **Argument** | **Type** | **Default** | **Description** |
|------------------|------------------------------|-------------|-------------------------------------------|
| `repo_id` | str | None | The ID of the desired model. |
| `variant` | ModelRepoVariant | ModelRepoVariant('fp16') | The desired variant. |
| `subfolder` | Path | None | Look for the model in a subfolder of the repo. |
| `access_token` | str | None | An access token needed to gain access to a subscriber's-only model. |
The `repo_id` is the repository ID, such as `stabilityai/sdxl-turbo`.
The `variant` is one of the various diffusers formats that HuggingFace
supports and is used to pick out, from the hodgepodge of files in
a typical HuggingFace repository, the particular components needed for
a complete diffusers model. `ModelRepoVariant` is an enum that can be
imported from `invokeai.backend.model_manager` and has the following
values:
| **Name** | **String Value** |
|----------------------------|---------------------------|
| ModelRepoVariant.DEFAULT | "default" |
| ModelRepoVariant.FP16 | "fp16" |
| ModelRepoVariant.FP32 | "fp32" |
| ModelRepoVariant.ONNX | "onnx" |
| ModelRepoVariant.OPENVINO | "openvino" |
| ModelRepoVariant.FLAX | "flax" |
You can also pass the string forms to `variant` directly. Note that
InvokeAI may not be able to load and run all variants. At the current
time, specifying `ModelRepoVariant.DEFAULT` will retrieve model files
that are unqualified, e.g. `pytorch_model.safetensors` rather than
`pytorch_model.fp16.safetensors`. These are usually the 32-bit
safetensors forms of the model.
If `subfolder` is specified, then the requested model resides in a
subfolder of the main model repository. This is typically used to
fetch and install VAEs.
Some models require you to be registered with HuggingFace and logged
in. To download these files, you must provide an
`access_token`. Internally, if no access token is provided, then
`HfFolder.get_token()` will be called to fill it in with the cached
one.
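Putting these fields together, a minimal sketch of fetching a VAE subfolder at a chosen variant (the repo_id is the example used above; the import path for `HFModelSource` is an assumption):

```
from pathlib import Path

# Assumed import path for the model source classes.
from invokeai.app.services.model_install import HFModelSource
from invokeai.backend.model_manager import ModelRepoVariant

source = HFModelSource(
    repo_id='stabilityai/sdxl-turbo',
    variant=ModelRepoVariant.FP16,  # or simply the string 'fp16'
    subfolder=Path('vae'),
)
job = installer.import_model(source)
```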
#### Monitoring the install job process
@@ -563,7 +656,8 @@ The `ModelInstallJob` class has the following structure:

| **Attribute**  | **Type**        | **Description**  |
|----------------|-----------------|------------------|
| `id`           | `int`           | Integer ID for this job |
| `status`       | `InstallStatus` | An enum of [`waiting`, `downloading`, `running`, `completed`, `error` and `cancelled`] |
| `config_in`    | `dict`          | Overriding configuration values provided by the caller |
| `config_out`   | `AnyModelConfig`| After successful completion, contains the configuration record written to the database |
| `inplace`      | `boolean`       | True if the caller asked to install the model in place using its local path |
@@ -578,30 +672,70 @@ broadcast to the InvokeAI event bus. The events will appear on the bus
as an event of type `EventServiceBase.model_event`, a timestamp and
the following event names:

##### `model_install_downloading`

For remote models only, `model_install_downloading` events will be
issued at regular intervals as the download progresses. The event's
payload contains the following keys:

| **Key**        | **Type**  | **Description**  |
|----------------|-----------|------------------|
| `source`       | str       | String representation of the requested source |
| `local_path`   | str       | String representation of the path to the downloading model (usually a temporary directory) |
| `bytes`        | int       | How many bytes downloaded so far |
| `total_bytes`  | int       | Total size of all the files that make up the model |
| `parts`        | List[Dict]| Information on the progress of the individual files that make up the model |

`parts` is a list of dictionaries that give information on each of
the component pieces of the download. The dictionary's keys are
`source`, `local_path`, `bytes` and `total_bytes`, and correspond to
the like-named keys in the main event.

Note that downloading events will not be issued for local models, and
that downloading events occur *before* the running event.

##### `model_install_running`

`model_install_running` is issued when all the required downloads have
completed (if applicable) and the model probing, copying and
registration process has started. The payload will contain the key
`source`.

##### `model_install_completed`

`model_install_completed` is issued once at the end of a successful
installation. The payload will contain the keys `source`,
`total_bytes` and `key`, where `key` is the ID under which the model
has been registered.

##### `model_install_error`

`model_install_error` is emitted if the installation process fails for
some reason. The payload will contain the keys `source`, `error_type`
and `error`. `error_type` is a short message indicating the nature of
the error, and `error` is the long traceback to help debug the
problem.

##### `model_install_cancelled`

`model_install_cancelled` is issued if the model installation is
cancelled, or if one or more of its files' downloads are
cancelled. The payload will contain `source`.

##### Following the model status

You may poll the `ModelInstallJob` object returned by `import_model()`
to ascertain the state of the install. The job status can be read from
the job's `status` attribute, an `InstallStatus` enum which has the
enumerated values `WAITING`, `DOWNLOADING`, `RUNNING`, `COMPLETED`,
`ERROR` and `CANCELLED`.

For convenience, install jobs also provide the following boolean
properties: `waiting`, `downloading`, `running`, `complete`, `errored`
and `cancelled`, as well as `in_terminal_state`. The last will return
True if the job is in the complete, errored or cancelled states.
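As a concrete illustration of the polling approach, a minimal sketch using only the job attributes documented above (`installer` and `source` are assumed to exist from the earlier examples):

```
import time

job = installer.import_model(source)

# in_terminal_state becomes True once the job is complete, errored or cancelled.
while not job.in_terminal_state:
    time.sleep(1)

if job.complete:
    print(f"installed as {job.config_out.key}")
elif job.errored:
    print(f"install failed: {job.error_type}\n{job.error}")
elif job.cancelled:
    print("install was cancelled")
```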
#### Model configuration and probing
@@ -621,17 +755,9 @@ overriding values for any of the model's configuration
attributes. Here is an example of setting the
`SchedulerPredictionType` and `name` for an sd-2 model:

```
install_job = installer.import_model(
    source=HFModelSource(repo_id='stabilityai/stable-diffusion-2-1', variant='fp32'),
    config=dict(
        prediction_type=SchedulerPredictionType('v_prediction'),
        name='stable diffusion 2 base model',
@@ -643,29 +769,38 @@ install_job = installer.import_model(
This section describes additional methods provided by the installer class.

#### jobs = installer.wait_for_installs([timeout])

Block until all pending installs are completed or errored and then
returns a list of completed jobs. The optional `timeout` argument will
return from the call if jobs aren't completed in the specified
time. An argument of 0 (the default) will block indefinitely.

#### jobs = installer.list_jobs()

Return a list of all active and complete `ModelInstallJobs`.

#### jobs = installer.get_job_by_source(source)

Return a list of `ModelInstallJob` corresponding to the indicated
model source.

#### jobs = installer.get_job_by_id(id)

Return a list of `ModelInstallJob` corresponding to the indicated
model id.

#### jobs = installer.cancel_job(job)

Cancel the indicated job.

#### installer.prune_jobs

Remove jobs that are in a terminal state (i.e. complete, errored or
cancelled) from the job list returned by `list_jobs()` and
`get_job()`.

#### installer.app_config, installer.record_store, installer.event_bus

Properties that provide access to the installer's `InvokeAIAppConfig`,
`ModelRecordServiceBase` and `EventServiceBase` objects.
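To tie these methods together, a short sketch of inspecting and tidying the job list, using only the method names documented above (`source` is assumed from the earlier examples):

```
jobs = installer.list_jobs()
for job in jobs:
    print(job.id, job.status)

# Jobs can be looked up again by their source or by integer id.
by_source = installer.get_job_by_source(source)
by_id = installer.get_job_by_id(jobs[0].id)

# Cancel anything still in flight, then drop terminal jobs from the list.
for job in jobs:
    if not job.in_terminal_state:
        installer.cancel_job(job)
installer.prune_jobs()
```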
@@ -726,120 +861,6 @@ the API starts up. Its effect is to call `sync_to_config()` to
synchronize the model record store database with what's currently on
disk.
*** ***
## Get on line: The Download Queue ## Get on line: The Download Queue
@ -879,7 +900,6 @@ following fields:
| `job_started` | float | | Timestamp for when the job started running | | `job_started` | float | | Timestamp for when the job started running |
| `job_ended` | float | | Timestamp for when the job completed or errored out | | `job_ended` | float | | Timestamp for when the job completed or errored out |
| `job_sequence` | int | | A counter that is incremented each time a model is dequeued | | `job_sequence` | int | | A counter that is incremented each time a model is dequeued |
| `preserve_partial_downloads`| bool | False | Resume partial downloads when relaunched. |
| `error` | Exception | | A copy of the Exception that caused an error during download | | `error` | Exception | | A copy of the Exception that caused an error during download |
When you create a job, you can assign it a `priority`. If multiple When you create a job, you can assign it a `priority`. If multiple
@ -1184,3 +1204,362 @@ other resources that it might have been using.
This will start/pause/cancel all jobs that have been submitted to the This will start/pause/cancel all jobs that have been submitted to the
queue and have not yet reached a terminal state. queue and have not yet reached a terminal state.
***
## This Meta be Good: Model Metadata Storage
The modules found under `invokeai.backend.model_manager.metadata`
provide a straightforward API for fetching model metadata from online
repositories. Currently two repositories are supported: HuggingFace
and Civitai. However, the modules are easily extended for additional
repos, provided that they have defined APIs for metadata access.
Metadata comprises any descriptive information that is not essential
for getting the model to run. For example "author" is metadata, while
"type", "base" and "format" are not. The latter fields are part of the
model's config, as defined in `invokeai.backend.model_manager.config`.
### Example Usage:
```
from invokeai.backend.model_manager.metadata import (
AnyModelRepoMetadata,
CivitaiMetadataFetch,
CivitaiMetadata,
ModelMetadataStore,
)
# to access the initialized sql database
from invokeai.app.api.dependencies import ApiDependencies
civitai = CivitaiMetadataFetch()
# fetch the metadata
model_metadata = civitai.from_url("https://civitai.com/models/215796")
# get some common metadata fields
author = model_metadata.author
tags = model_metadata.tags
# get some Civitai-specific fields
assert isinstance(model_metadata, CivitaiMetadata)
trained_words = model_metadata.trained_words
base_model = model_metadata.base_model_trained_on
thumbnail = model_metadata.thumbnail_url
# cache the metadata to the database using the key corresponding to
# an existing model config record in the `model_config` table
sql_cache = ModelMetadataStore(ApiDependencies.invoker.services.db)
sql_cache.add_metadata('fb237ace520b6716adc98bcb16e8462c', model_metadata)
# now we can search the database by tag, author or model name
# matches will contain a list of model keys that match the search
matches = sql_cache.search_by_tag({"tool", "turbo"})
```
### Structure of the Metadata objects
There is a short class hierarchy of Metadata objects, all of which
descend from the Pydantic `BaseModel`.
#### `ModelMetadataBase`
This is the common base class for metadata:
| **Field Name** | **Type** | **Description** |
|----------------|-----------------|------------------|
| `name` | str | Repository's name for the model |
| `author` | str | Model's author |
| `tags` | Set[str] | Model tags |
Note that the model config record also has a `name` field. It is
intended that the config record version be locally customizable, while
the metadata version is read-only. However, enforcing this is expected
to be part of the business logic.
Descendants of the base class add additional fields.
#### `HuggingFaceMetadata`
This descends from `ModelMetadataBase` and adds the following fields:
| **Field Name** | **Type** | **Description** |
|----------------|-----------------|------------------|
| `type` | Literal["huggingface"] | Used for the discriminated union of metadata classes|
| `id` | str | HuggingFace repo_id |
| `tag_dict` | Dict[str, Any] | A dictionary of tag/value pairs provided in addition to `tags` |
| `last_modified`| datetime | Date of last commit of this model to the repo |
| `files` | List[Path] | List of the files in the model repo |
#### `CivitaiMetadata`
This descends from `ModelMetadataBase` and adds the following fields:
| **Field Name** | **Type** | **Description** |
|----------------|-----------------|------------------|
| `type` | Literal["civitai"] | Used for the discriminated union of metadata classes|
| `id` | int | Civitai model id |
| `version_name` | str | Name of this version of the model (distinct from model name) |
| `version_id` | int | Civitai model version id (distinct from model id) |
| `created` | datetime | Date this version of the model was created |
| `updated` | datetime | Date this version of the model was last updated |
| `published` | datetime | Date this version of the model was published to Civitai |
| `description` | str | Model description. Quite verbose and contains HTML tags |
| `version_description` | str | Model version description, usually describes changes to the model |
| `nsfw` | bool | Whether the model tends to generate NSFW content |
| `restrictions` | LicenseRestrictions | An object that describes what is and isn't allowed with this model |
| `trained_words`| Set[str] | Trigger words for this model, if any |
| `download_url` | AnyHttpUrl | URL for downloading this version of the model |
| `base_model_trained_on` | str | Name of the model that this version was trained on |
| `thumbnail_url` | AnyHttpUrl | URL to access a representative thumbnail image of the model's output |
| `weight_min` | int | For LoRA sliders, the minimum suggested weight to apply |
| `weight_max` | int | For LoRA sliders, the maximum suggested weight to apply |
Note that `weight_min` and `weight_max` are not currently populated
and take the default values of (-1.0, +2.0). The issue is that these
values aren't part of Civitai's structured data; they appear only in
the free-text description, so a regular expression or an LLM would be
needed to extract them.
Also be aware that `base_model_trained_on` is free text and doesn't
correspond to our `ModelType` enum.
`CivitaiMetadata` also defines some convenience properties relating to
licensing restrictions: `credit_required`, `allow_commercial_use`,
`allow_derivatives` and `allow_different_license`.
#### `AnyModelRepoMetadata`
This is a discriminated Union of `CivitaiMetadata` and
`HuggingFaceMetadata`.
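Because the union is discriminated on the `type` field, a serialized record can be re-validated into the correct concrete class. A minimal sketch, assuming the union is a standard pydantic v2 discriminated union, and reusing the `model_metadata` object from the example above:

```
from pydantic import TypeAdapter

from invokeai.backend.model_manager.metadata import AnyModelRepoMetadata

adapter = TypeAdapter(AnyModelRepoMetadata)

# Round-trip through JSON; the `type` field selects the concrete class.
json_blob = model_metadata.model_dump_json()
metadata = adapter.validate_json(json_blob)  # -> CivitaiMetadata in this case
```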
### Fetching Metadata from Online Repos
The `HuggingFaceMetadataFetch` and `CivitaiMetadataFetch` classes will
retrieve metadata from their corresponding repositories and return
`AnyModelRepoMetadata` objects. Their base class
`ModelMetadataFetchBase` is an abstract class that defines two
methods: `from_url()` and `from_id()`. The former accepts the type of
model URLs that the user will try to cut and paste into the model
import form. The latter accepts a string ID in the format recognized
by the repository of choice. Both methods return an
`AnyModelRepoMetadata`.
The base class also has a class method `from_json()` which will take
the JSON representation of a `ModelMetadata` object, validate it, and
return the corresponding `AnyModelRepoMetadata` object.
When initializing one of the metadata fetching classes, you may
provide a `requests.Session` argument. This allows you to customize
the low-level HTTP fetch requests and is used, for instance, in the
testing suite to avoid hitting the internet.
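For example, a custom session can inject headers, retries, or a mock transport for offline tests. A minimal sketch, assuming the session is accepted as the first positional argument:

```
import requests

from invokeai.backend.model_manager.metadata import CivitaiMetadataFetch

session = requests.Session()
session.headers.update({"User-Agent": "invokeai-docs-example"})  # illustrative header

fetcher = CivitaiMetadataFetch(session)
metadata = fetcher.from_url("https://civitai.com/models/215796")
```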
The HuggingFace and Civitai fetcher subclasses add additional
repo-specific fetching methods:
#### HuggingFaceMetadataFetch
This overrides its base class `from_json()` method to return a
`HuggingFaceMetadata` object directly.
#### CivitaiMetadataFetch
This adds the following methods:
`from_civitai_modelid()` This takes the ID of a model, finds the
default version of the model, and then retrieves the metadata for
that version, returning a `CivitaiMetadata` object directly.
`from_civitai_versionid()` This takes the ID of a model version and
retrieves its metadata. It is functionally equivalent to `from_id()`;
the only difference is that it returns a `CivitaiMetadata` object
rather than an `AnyModelRepoMetadata`.
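A short sketch of these two entry points (the version ID below is a made-up placeholder):

```
fetcher = CivitaiMetadataFetch()
default_version = fetcher.from_civitai_modelid(215796)  # default version of the model
one_version = fetcher.from_civitai_versionid(123456)    # a specific, hypothetical version ID
```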
### Metadata Storage
The `ModelMetadataStore` provides a simple facility to store model
metadata in the `invokeai.db` database. The data is stored as a JSON
blob, with a few common fields (`name`, `author`, `tags`) broken out
to be searchable.
When a metadata object is saved to the database, it is identified
using the model key, _and this key must correspond to an existing
model key in the model_config table_. There is a foreign key integrity
constraint between the `model_config.id` field and the
`model_metadata.id` field such that if you attempt to save metadata
under an unknown key, the attempt will result in an
`UnknownModelException`. Likewise, when a model is deleted from
`model_config`, the deletion of the corresponding metadata record will
be triggered.
Tags are stored in a normalized fashion in the tables `model_tags` and
`tags`. Triggers keep the tag table in sync with the `model_metadata`
table.
To create the storage object, initialize it with the InvokeAI
`SqliteDatabase` object. This is typically done as follows:
```
from invokeai.app.api.dependencies import ApiDependencies
metadata_store = ModelMetadataStore(ApiDependencies.invoker.services.db)
```
You can then access the storage with the following methods:
#### `add_metadata(key, metadata)`
Add the metadata using a previously-defined model key.
There is currently no `delete_metadata()` method. The metadata will
persist until the matching config is deleted from the `model_config`
table.
#### `get_metadata(key) -> AnyModelRepoMetadata`
Retrieve the metadata corresponding to the model key.
#### `update_metadata(key, new_metadata)`
Update an existing metadata record with new metadata.
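For example, to refresh a cached record with freshly fetched metadata, reusing the placeholder key and the `metadata_store` handle from the examples in this section:

```
key = 'fb237ace520b6716adc98bcb16e8462c'
stale = metadata_store.get_metadata(key)    # the currently cached copy
fresh = CivitaiMetadataFetch().from_url("https://civitai.com/models/215796")
metadata_store.update_metadata(key, fresh)  # overwrite it with the new metadata
```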
#### `search_by_tag(tags: Set[str]) -> Set[str]`
Given a set of tags, find models that are tagged with them. If
multiple tags are provided then a matching model must be tagged with
*all* the tags in the set. This method returns a set of model keys and
is intended to be used in conjunction with the `ModelRecordService`:
```
model_config_store = ApiDependencies.invoker.services.model_records
matches = metadata_store.search_by_tag({'license:other'})
models = [model_config_store.get(x) for x in matches]
```
#### `search_by_name(name: str) -> Set[str]`
Find all model metadata records that have the given name and return a
set of keys to the corresponding model config objects.
#### `search_by_author(author: str) -> Set[str]`
Find all model metadata records that have the given author and return
a set of keys to the corresponding model config objects.
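Both return model keys, so they combine with the `ModelRecordService` in the same way as `search_by_tag()`. The author name below is purely illustrative:

```
matches = metadata_store.search_by_author('SomeAuthor')  # hypothetical author name
models = [model_config_store.get(key) for key in matches]
```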
# The remainder of this documentation is provisional, pending implementation of the Load service
## Let's get loaded, the lowdown on ModelLoadService
The `ModelLoadService` is responsible for loading a named model into
memory so that it can be used for inference. Despite the fact that it
does a lot under the covers, it is very straightforward to use.
An application-wide model loader is created at API initialization time
and stored in
`ApiDependencies.invoker.services.model_loader`. However, you can
create alternative instances if you wish.
### Creating a ModelLoadService object
The class is defined in
`invokeai.app.services.model_loader_service`. It is initialized with
an InvokeAIAppConfig object, from which it gets configuration
information such as the user's desired GPU and precision, and with a
previously-created `ModelRecordServiceBase` object, from which it
loads the requested model's configuration information.
Here is a typical initialization pattern:
```
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.model_record_service import ModelRecordServiceBase
from invokeai.app.services.model_loader_service import ModelLoadService
config = InvokeAIAppConfig.get_config()
store = ModelRecordServiceBase.open(config)
loader = ModelLoadService(config, store)
```
Note that we are relying on the contents of the application
configuration to choose the implementation of
`ModelRecordServiceBase`.
### get_model(key, [submodel_type], [context]) -> ModelInfo:
*** TO DO: change to get_model(key, context=None, **kwargs)
The `get_model()` method, like its similarly-named cousin in
`ModelRecordService`, receives the unique key that identifies the
model. It loads the model into memory, gets the model ready for use,
and returns a `ModelInfo` object.
The optional second argument, `submodel_type`, is a `SubModelType`
string enum, such as "vae". It is mandatory when used with a main
model, and selects which part of the main model to load.

The optional third argument, `context`, can be provided by an
invocation to trigger model load event reporting. See below for
details.
The returned `ModelInfo` object shares some fields in common with
`ModelConfigBase`, but is otherwise a completely different beast:
| **Field Name** | **Type** | **Description** |
|----------------|-----------------|------------------|
| `key` | str | The model key derived from the ModelRecordService database |
| `name` | str | Name of this model |
| `base_model` | BaseModelType | Base model for this model |
| `type` | ModelType or SubModelType | Either the model type (non-main) or the submodel type (main models)|
| `location` | Path or str | Location of the model on the filesystem |
| `precision` | torch.dtype | The torch.precision to use for inference |
| `context` | ModelCache.ModelLocker | A context class used to lock the model in VRAM while in use |
The types for `ModelInfo` and `SubModelType` can be imported from
`invokeai.app.services.model_loader_service`.
To use the model, treat the `ModelInfo` object as a context manager
using the following pattern:
```
model_info = loader.get_model('f13dd932c0c35c22dcb8d6cda4203764', SubModelType('vae'))
with model_info as vae:
image = vae.decode(latents)[0]
```
The `vae` model will stay locked in the GPU during the period of time
it is in the context manager's scope.
`get_model()` may raise any of the following exceptions:
- `UnknownModelException` -- key not in database
- `ModelNotFoundException` -- key in database but model not found at path
- `InvalidModelException` -- the model is guilty of a variety of sins
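A caller that wants to distinguish these cases can catch each exception separately. A minimal sketch; the import location of the exception classes is an assumption:

```
# Assumes UnknownModelException, ModelNotFoundException and InvalidModelException
# have been imported from the model manager's exception module.
try:
    model_info = loader.get_model('f13dd932c0c35c22dcb8d6cda4203764', SubModelType('vae'))
except UnknownModelException:
    ...  # the key was never registered in the record store
except ModelNotFoundException:
    ...  # the record exists, but the files are missing on disk
except InvalidModelException:
    ...  # the files exist but could not be loaded as a model
```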
**TO DO:** Resolve discrepancy between `ModelInfo.location` and
`ModelConfig.path`.
### Emitting model loading events
When the `context` argument is passed to `get_model()`, it will
retrieve the invocation event bus from the passed `InvocationContext`
object and emit events on it. The two events are
"model_load_started" and "model_load_completed". Both carry the
following payload:
```
payload=dict(
queue_id=queue_id,
queue_item_id=queue_item_id,
queue_batch_id=queue_batch_id,
graph_execution_state_id=graph_execution_state_id,
model_key=model_key,
submodel=submodel,
hash=model_info.hash,
location=str(model_info.location),
precision=str(model_info.precision),
)
```
@ -1,76 +0,0 @@
# Contributing to the Frontend
# InvokeAI Web UI
- [InvokeAI Web UI](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#invokeai-web-ui)
- [Stack](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#stack)
- [Contributing](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#contributing)
- [Dev Environment](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#dev-environment)
- [Production builds](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web/docs#production-builds)
The UI is a fairly straightforward TypeScript React app, with the Unified Canvas being more complex.
Code is located in `invokeai/frontend/web/` for review.
## Stack
State management is Redux via [Redux Toolkit](https://github.com/reduxjs/redux-toolkit). We lean heavily on RTK:
- `createAsyncThunk` for HTTP requests
- `createEntityAdapter` for fetching images and models
- `createListenerMiddleware` for workflows
The API client and associated types are generated from the OpenAPI schema. See API_CLIENT.md.
Communication with the server is a mix of HTTP and [socket.io](https://github.com/socketio/socket.io-client) (with a simple socket.io redux middleware to help).
[Chakra-UI](https://github.com/chakra-ui/chakra-ui) & [Mantine](https://github.com/mantinedev/mantine) for components and styling.
[Konva](https://github.com/konvajs/react-konva) for the canvas, but we are pushing the limits of what is feasible with it (and HTML canvas in general). We plan to rebuild it with [PixiJS](https://github.com/pixijs/pixijs) to take advantage of WebGL's improved raster handling.
[Vite](https://vitejs.dev/) for bundling.
Localisation is via [i18next](https://github.com/i18next/react-i18next), but translation happens on our [Weblate](https://hosted.weblate.org/engage/invokeai/) project. Only the English source strings should be changed on this repo.
## Contributing
Thanks for your interest in contributing to the InvokeAI Web UI!
We encourage you to ping @psychedelicious and @blessedcoolant on [Discord](https://discord.gg/ZmtBAhwWhy) if you want to contribute, just to touch base and ensure your work doesn't conflict with anything else going on. The project is very active.
### Dev Environment
**Setup**
1. Install [node](https://nodejs.org/en/download/). You can confirm node is installed with:
```bash
node --version
```
2. Install [pnpm](https://pnpm.io/) and confirm it is installed by running this:
```bash
npm install --global pnpm
pnpm --version
```
From `invokeai/frontend/web/` run `pnpm install` to get everything set up.
Start everything in dev mode:
1. Ensure your virtual environment is running
2. Start the dev server: `pnpm dev`
3. Start the InvokeAI Nodes backend: `python scripts/invokeai-web.py # run from the repo root`
4. Point your browser to the dev server address e.g. [http://localhost:5173/](http://localhost:5173/)
### VSCode Remote Dev
We've noticed an intermittent issue with the VSCode Remote Dev port forwarding. If you use this feature of VSCode, you may intermittently click the Invoke button and then get nothing until the request times out. Suggest disabling the IDE's port forwarding feature and doing it manually via SSH:
`ssh -L 9090:localhost:9090 -L 5173:localhost:5173 user@host`
### Production builds
For a number of technical and logistical reasons, we need to commit UI build artefacts to the repo.
If you submit a PR, there is a good chance we will ask you to include a separate commit with a build of the app.
To build for production, run `pnpm build`.
@ -12,7 +12,7 @@ To get started, take a look at our [new contributors checklist](newContributorCh
Once you're set up, for more information, you can review the documentation specific to your area of interest:

* #### [InvokeAI Architecture](../ARCHITECTURE.md)
* #### [Frontend Documentation](https://github.com/invoke-ai/InvokeAI/tree/main/invokeai/frontend/web)
* #### [Node Documentation](../INVOCATIONS.md)
* #### [Local Development](../LOCAL_DEVELOPMENT.md)
docs/deprecated/2to3.md Normal file
@ -0,0 +1,53 @@
## :octicons-log-16: Important Changes Since Version 2.3
### Nodes
Behind the scenes, InvokeAI has been completely rewritten to support
"nodes," small unitary operations that can be combined into graphs to
form arbitrary workflows. For example, there is a prompt node that
processes the prompt string and feeds it to a text2latent node that
generates a latent image. The latents are then fed to a latent2image
node that translates the latent image into a PNG.
The WebGUI has a node editor that allows you to graphically design and
execute custom node graphs. The ability to save and load graphs is
still a work in progress, but coming soon.
### Command-Line Interface Retired
All "invokeai" command-line interfaces have been retired as of version
3.4.
To launch the Web GUI from the command-line, use the command
`invokeai-web` rather than the traditional `invokeai --web`.
### ControlNet
This version of InvokeAI features ControlNet, a system that allows you
to achieve exact poses for human and animal figures by providing a
model to follow. Full details are found in [ControlNet](features/CONTROLNET.md)
### New Schedulers
The list of schedulers has been completely revamped and brought up to date:
| **Short Name** | **Scheduler** | **Notes** |
|----------------|---------------------------------|-----------------------------|
| **ddim** | DDIMScheduler | |
| **ddpm** | DDPMScheduler | |
| **deis** | DEISMultistepScheduler | |
| **lms** | LMSDiscreteScheduler | |
| **pndm** | PNDMScheduler | |
| **heun** | HeunDiscreteScheduler | original noise schedule |
| **heun_k** | HeunDiscreteScheduler | using karras noise schedule |
| **euler** | EulerDiscreteScheduler | original noise schedule |
| **euler_k** | EulerDiscreteScheduler | using karras noise schedule |
| **kdpm_2** | KDPM2DiscreteScheduler | |
| **kdpm_2_a** | KDPM2AncestralDiscreteScheduler | |
| **dpmpp_2s** | DPMSolverSinglestepScheduler | |
| **dpmpp_2m** | DPMSolverMultistepScheduler | original noise schedule |
| **dpmpp_2m_k** | DPMSolverMultistepScheduler | using karras noise schedule |
| **unipc** | UniPCMultistepScheduler | CPU only |
| **lcm** | LCMScheduler | |
Please see [3.0.0 Release Notes](https://github.com/invoke-ai/InvokeAI/releases/tag/v3.0.0) for further details.
@ -229,29 +229,28 @@ clarity on the intent and common use cases we expect for utilizing them.
currently being rendered by your browser into a merged copy of the image. This
lowers the resource requirements and should improve performance.

### Compositing / Seam Correction

When doing Inpainting or Outpainting, Invoke needs to merge the pixels generated
by Stable Diffusion into your existing image. This is achieved through
compositing: the area around the boundary between your image and the new
generation is automatically blended to produce a seamless output. In a fully
automatic process, a mask is generated to cover the boundary, and then the area
of the boundary is Inpainted.

Although the default options should work well most of the time, sometimes it can
help to alter the parameters that control the Compositing. A larger blur radius
has been noted to produce consistently strong results, and a Strength of 0.7 is
best for reducing hard seams.

- **Mode** - What part of the image will have the Compositing applied to it.
    - **Mask edge** will apply Compositing to the edge of the masked area
    - **Mask** will apply Compositing to the entire masked area
    - **Unmasked** will apply Compositing to the entire image
- **Steps** - Number of generation steps that will occur during the Coherence Pass, similar to Denoising Steps. Higher step counts will generally have better results.
- **Strength** - How much noise is added for the Coherence Pass, similar to Denoising Strength. A strength of 0 will result in an unchanged image, while a strength of 1 will result in an image with a completely new area as defined by the Mode setting.
- **Blur** - Adjusts the pixel radius of the mask. A larger blur radius will cause the mask to extend past the visibly masked area, while too small of a blur radius will result in a mask that is smaller than the visibly masked area.
- **Blur Method** - The method of blur applied to the masked area.

### Infill & Scaling
docs/img/favicon.ico (new binary file, 4.2 KiB, not shown)
@ -18,7 +18,7 @@ title: Home
width: 100%; width: 100%;
max-width: 100%; max-width: 100%;
height: 50px; height: 50px;
background-color: #448AFF; background-color: #35A4DB;
color: #fff; color: #fff;
font-size: 16px; font-size: 16px;
border: none; border: none;
@ -43,7 +43,7 @@ title: Home
<div align="center" markdown> <div align="center" markdown>
[![project logo](assets/invoke_ai_banner.png)](https://github.com/invoke-ai/InvokeAI) [![project logo](https://github.com/invoke-ai/InvokeAI/assets/31807370/6e3728c7-e90e-4711-905c-3b55844ff5be)](https://github.com/invoke-ai/InvokeAI)
[![discord badge]][discord link] [![discord badge]][discord link]
@ -117,6 +117,11 @@ Mac and Linux machines, and runs on GPU cards with as little as 4 GB of RAM.
## :octicons-gift-24: InvokeAI Features ## :octicons-gift-24: InvokeAI Features
### Installation
- [Automated Installer](installation/010_INSTALL_AUTOMATED.md)
- [Manual Installation](installation/020_INSTALL_MANUAL.md)
- [Docker Installation](installation/040_INSTALL_DOCKER.md)
### The InvokeAI Web Interface ### The InvokeAI Web Interface
- [WebUI overview](features/WEB.md) - [WebUI overview](features/WEB.md)
- [WebUI hotkey reference guide](features/WEBUIHOTKEYS.md) - [WebUI hotkey reference guide](features/WEBUIHOTKEYS.md)
@ -145,60 +150,6 @@ Mac and Linux machines, and runs on GPU cards with as little as 4 GB of RAM.
- [Guide to InvokeAI Runtime Settings](features/CONFIGURATION.md) - [Guide to InvokeAI Runtime Settings](features/CONFIGURATION.md)
- [Database Maintenance and other Command Line Utilities](features/UTILITIES.md) - [Database Maintenance and other Command Line Utilities](features/UTILITIES.md)
## :octicons-log-16: Important Changes Since Version 2.3
### Nodes
Behind the scenes, InvokeAI has been completely rewritten to support
"nodes," small unitary operations that can be combined into graphs to
form arbitrary workflows. For example, there is a prompt node that
processes the prompt string and feeds it to a text2latent node that
generates a latent image. The latents are then fed to a latent2image
node that translates the latent image into a PNG.
The WebGUI has a node editor that allows you to graphically design and
execute custom node graphs. The ability to save and load graphs is
still a work in progress, but coming soon.
### Command-Line Interface Retired
All "invokeai" command-line interfaces have been retired as of version
3.4.
To launch the Web GUI from the command-line, use the command
`invokeai-web` rather than the traditional `invokeai --web`.
### ControlNet
This version of InvokeAI features ControlNet, a system that allows you
to achieve exact poses for human and animal figures by providing a
model to follow. Full details are found in [ControlNet](features/CONTROLNET.md)
### New Schedulers
The list of schedulers has been completely revamped and brought up to date:
| **Short Name** | **Scheduler** | **Notes** |
|----------------|---------------------------------|-----------------------------|
| **ddim** | DDIMScheduler | |
| **ddpm** | DDPMScheduler | |
| **deis** | DEISMultistepScheduler | |
| **lms** | LMSDiscreteScheduler | |
| **pndm** | PNDMScheduler | |
| **heun** | HeunDiscreteScheduler | original noise schedule |
| **heun_k** | HeunDiscreteScheduler | using karras noise schedule |
| **euler** | EulerDiscreteScheduler | original noise schedule |
| **euler_k** | EulerDiscreteScheduler | using karras noise schedule |
| **kdpm_2** | KDPM2DiscreteScheduler | |
| **kdpm_2_a** | KDPM2AncestralDiscreteScheduler | |
| **dpmpp_2s** | DPMSolverSinglestepScheduler | |
| **dpmpp_2m** | DPMSolverMultistepScheduler | original noise schedule |
| **dpmpp_2m_k** | DPMSolverMultistepScheduler | using karras noise schedule |
| **unipc** | UniPCMultistepScheduler | CPU only |
| **lcm** | LCMScheduler | |
Please see [3.0.0 Release Notes](https://github.com/invoke-ai/InvokeAI/releases/tag/v3.0.0) for further details.
## :material-target: Troubleshooting ## :material-target: Troubleshooting
Please check out our **[:material-frequently-asked-questions: Please check out our **[:material-frequently-asked-questions:
@ -477,7 +477,7 @@ Then type the following commands:
=== "AMD System" === "AMD System"
```bash ```bash
pip install torch torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/rocm5.4.2 pip install torch torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/rocm5.6
``` ```
### Corrupted configuration file ### Corrupted configuration file
@ -154,7 +154,7 @@ manager, please follow these steps:
=== "ROCm (AMD)" === "ROCm (AMD)"
```bash ```bash
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.4.2 pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.6
``` ```
=== "CPU (Intel Macs & non-GPU systems)" === "CPU (Intel Macs & non-GPU systems)"
@ -313,7 +313,7 @@ code for InvokeAI. For this to work, you will need to install the
on your system, please see the [Git Installation on your system, please see the [Git Installation
Guide](https://github.com/git-guides/install-git) Guide](https://github.com/git-guides/install-git)
You will also need to install the [frontend development toolchain](https://github.com/invoke-ai/InvokeAI/blob/main/docs/contributing/contribution_guides/contributingToFrontend.md). You will also need to install the [frontend development toolchain](https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/README.md).
If you have a "normal" installation, you should create a totally separate virtual environment for the git-based installation, else the two may interfere. If you have a "normal" installation, you should create a totally separate virtual environment for the git-based installation, else the two may interfere.
@ -345,7 +345,7 @@ installation protocol (important!)
=== "ROCm (AMD)" === "ROCm (AMD)"
```bash ```bash
pip install -e . --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.4.2 pip install -e . --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.6
``` ```
=== "CPU (Intel Macs & non-GPU systems)" === "CPU (Intel Macs & non-GPU systems)"
@ -361,7 +361,7 @@ installation protocol (important!)
Be sure to pass `-e` (for an editable install) and don't forget the Be sure to pass `-e` (for an editable install) and don't forget the
dot ("."). It is part of the command. dot ("."). It is part of the command.
5. Install the [frontend toolchain](https://github.com/invoke-ai/InvokeAI/blob/main/docs/contributing/contribution_guides/contributingToFrontend.md) and do a production build of the UI as described. 5. Install the [frontend toolchain](https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/README.md) and do a production build of the UI as described.
6. You can now run `invokeai` and its related commands. The code will be 6. You can now run `invokeai` and its related commands. The code will be
read from the repository, so that you can edit the .py source files read from the repository, so that you can edit the .py source files
@ -134,7 +134,7 @@ recipes are available
When installing torch and torchvision manually with `pip`, remember to provide When installing torch and torchvision manually with `pip`, remember to provide
the argument `--extra-index-url the argument `--extra-index-url
https://download.pytorch.org/whl/rocm5.4.2` as described in the [Manual https://download.pytorch.org/whl/rocm5.6` as described in the [Manual
Installation Guide](020_INSTALL_MANUAL.md). Installation Guide](020_INSTALL_MANUAL.md).
This will be done automatically for you if you use the installer This will be done automatically for you if you use the installer
@ -69,7 +69,7 @@ a token and copy it, since you will need in for the next step.
### Setup ### Setup
Set up your environment variables. In the `docker` directory, make a copy of `.env.sample` and name it `.env`. Make changes as necessary.
Any environment variables supported by InvokeAI can be set here - please see the [CONFIGURATION](../features/CONFIGURATION.md) for further detail. Any environment variables supported by InvokeAI can be set here - please see the [CONFIGURATION](../features/CONFIGURATION.md) for further detail.
@ -18,13 +18,18 @@ either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm
driver). driver).
## **[Automated Installer (Recommended)](010_INSTALL_AUTOMATED.md)**

✅ This is the recommended installation method for first-time users.

This is a script that will install all of InvokeAI's essential
third party libraries and InvokeAI itself.

🖥️ **Download the latest installer .zip file here**: https://github.com/invoke-ai/InvokeAI/releases/latest

- *Look for the file labelled "InvokeAI-installer-v3.X.X.zip" at the bottom of the page*
- If you experience issues, read through the full [installation instructions](010_INSTALL_AUTOMATED.md) to make sure you have met all of the installation requirements. If you need more help, join the [Discord](https://discord.gg/invoke-ai) or create an issue on [Github](https://github.com/invoke-ai/InvokeAI).

## **[Manual Installation](020_INSTALL_MANUAL.md)**

This method is recommended for experienced users and developers.
@ -6,10 +6,17 @@ If you're not familiar with Diffusion, take a look at our [Diffusion Overview.](
## Features ## Features
### Workflow Library
The Workflow Library enables you to save workflows to the Invoke database, allowing you to easily create, modify, and share workflows as needed.
A curated set of workflows are provided by default - these are designed to help explain important nodes' usage in the Workflow Editor.
![workflow_library](../assets/nodes/workflow_library.png)
### Linear View ### Linear View
The Workflow Editor allows you to create a UI for your workflow, to make it easier to iterate on your generations. The Workflow Editor allows you to create a UI for your workflow, to make it easier to iterate on your generations.
To add an input to the Linear UI, right click on the input label and select "Add to Linear View". To add an input to the Linear UI, right click on the **input label** and select "Add to Linear View".
The Linear UI View will also be part of the saved workflow, allowing you to share workflows and enabling others to use them, regardless of complexity.
@ -30,7 +37,7 @@ Any node or input field can be renamed in the workflow editor. If the input fiel
Nodes have a "Use Cache" option in their footer. This allows for performance improvements by using the previously cached values during the workflow processing. Nodes have a "Use Cache" option in their footer. This allows for performance improvements by using the previously cached values during the workflow processing.
## Important Concepts ## Important Nodes & Concepts
There are several node grouping concepts that can be examined with a narrow focus. These (and other) groupings can be pieced together to make up functional graph setups, and are important to understanding how groups of nodes work together as part of a whole. Note that the screenshots below aren't examples of complete functioning node graphs (see Examples). There are several node grouping concepts that can be examined with a narrow focus. These (and other) groupings can be pieced together to make up functional graph setups, and are important to understanding how groups of nodes work together as part of a whole. Note that the screenshots below aren't examples of complete functioning node graphs (see Examples).
@ -56,7 +63,7 @@ The ImageToLatents node takes in a pixel image and a VAE and outputs a latents.
It is common to want to use both the same seed (for continuity) and random seeds (for variety). To define a seed, simply enter it into the 'Seed' field on a noise node. Conversely, the RandomInt node generates a random integer between 'Low' and 'High', and can be used as input to the 'Seed' edge point on a noise node to randomize your seed. It is common to want to use both the same seed (for continuity) and random seeds (for variety). To define a seed, simply enter it into the 'Seed' field on a noise node. Conversely, the RandomInt node generates a random integer between 'Low' and 'High', and can be used as input to the 'Seed' edge point on a noise node to randomize your seed.
![groupsrandseed](../assets/nodes/groupsrandseed.png) ![groupsrandseed](../assets/nodes/groupsnoise.png)
### ControlNet ### ControlNet
@ -14,6 +14,7 @@ To use a community workflow, download the `.json` node graph file and load i
- Community Nodes - Community Nodes
+ [Adapters-Linked](#adapters-linked-nodes) + [Adapters-Linked](#adapters-linked-nodes)
+ [Autostereogram](#autostereogram-nodes)
+ [Average Images](#average-images) + [Average Images](#average-images)
+ [Clean Image Artifacts After Cut](#clean-image-artifacts-after-cut) + [Clean Image Artifacts After Cut](#clean-image-artifacts-after-cut)
+ [Close Color Mask](#close-color-mask) + [Close Color Mask](#close-color-mask)
@ -25,7 +26,7 @@ To use a community workflow, download the `.json` node graph file and load i
+ [GPT2RandomPromptMaker](#gpt2randompromptmaker) + [GPT2RandomPromptMaker](#gpt2randompromptmaker)
+ [Grid to Gif](#grid-to-gif) + [Grid to Gif](#grid-to-gif)
+ [Halftone](#halftone) + [Halftone](#halftone)
+ [Ideal Size](#ideal-size) + [Hand Refiner with MeshGraphormer](#hand-refiner-with-meshgraphormer)
+ [Image and Mask Composition Pack](#image-and-mask-composition-pack) + [Image and Mask Composition Pack](#image-and-mask-composition-pack)
+ [Image Dominant Color](#image-dominant-color) + [Image Dominant Color](#image-dominant-color)
+ [Image to Character Art Image Nodes](#image-to-character-art-image-nodes) + [Image to Character Art Image Nodes](#image-to-character-art-image-nodes)
@ -36,10 +37,12 @@ To use a community workflow, download the `.json` node graph file and load i
+ [Mask Operations](#mask-operations) + [Mask Operations](#mask-operations)
+ [Match Histogram](#match-histogram) + [Match Histogram](#match-histogram)
+ [Metadata-Linked](#metadata-linked-nodes) + [Metadata-Linked](#metadata-linked-nodes)
+ [Negative Image](#negative-image) + [Negative Image](#negative-image)
+ [Nightmare Promptgen](#nightmare-promptgen)
+ [Oobabooga](#oobabooga) + [Oobabooga](#oobabooga)
+ [Prompt Tools](#prompt-tools) + [Prompt Tools](#prompt-tools)
+ [Remote Image](#remote-image) + [Remote Image](#remote-image)
+ [BriaAI Background Remove](#briaai-remove-background)
+ [Remove Background](#remove-background) + [Remove Background](#remove-background)
+ [Retroize](#retroize) + [Retroize](#retroize)
+ [Size Stepper Nodes](#size-stepper-nodes) + [Size Stepper Nodes](#size-stepper-nodes)
@ -66,6 +69,17 @@ Note: These are inherited from the core nodes so any update to the core nodes sh
**Node Link:** https://github.com/skunkworxdark/adapters-linked-nodes **Node Link:** https://github.com/skunkworxdark/adapters-linked-nodes
--------------------------------
### Autostereogram Nodes
**Description:** Generate autostereogram images from a depth map. This is not a very practical node, but more of a 90s nostalgic indulgence, as I used to love these images as a kid.
**Node Link:** https://github.com/skunkworxdark/autostereogram_nodes
**Example Usage:**
</br>
<img src="https://github.com/skunkworxdark/autostereogram_nodes/blob/main/images/spider.png" width="200" /> -> <img src="https://github.com/skunkworxdark/autostereogram_nodes/blob/main/images/spider-depth.png" width="200" /> -> <img src="https://github.com/skunkworxdark/autostereogram_nodes/raw/main/images/spider-dots.png" width="200" /> <img src="https://github.com/skunkworxdark/autostereogram_nodes/raw/main/images/spider-pattern.png" width="200" />
-------------------------------- --------------------------------
### Average Images ### Average Images
@ -196,13 +210,18 @@ CMYK Halftone Output:
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/c59c578f-db8e-4d66-8c66-2851752d75ea" width="300" /> <img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/c59c578f-db8e-4d66-8c66-2851752d75ea" width="300" />
-------------------------------- --------------------------------
### Hand Refiner with MeshGraphormer

**Description:** Hand Refiner takes in your image and automatically generates a fixed depth map for the hands, along with a mask of the hands region, that will conveniently allow you to use them with ControlNet to fix the wonky hands generated by Stable Diffusion.

**Node Link:** https://github.com/blessedcoolant/invoke_meshgraphormer

**View**
<img src="https://raw.githubusercontent.com/blessedcoolant/invoke_meshgraphormer/main/assets/preview.jpg" />
-------------------------------- --------------------------------
### Image and Mask Composition Pack ### Image and Mask Composition Pack
**Description:** This is a pack of nodes for composing masks and images, including a simple text mask creator and both image and latent offset nodes. The offsets wrap around, so these can be used in conjunction with the Seamless node to progressively generate centered on different parts of the seamless tiling. **Description:** This is a pack of nodes for composing masks and images, including a simple text mask creator and both image and latent offset nodes. The offsets wrap around, so these can be used in conjunction with the Seamless node to progressively generate centered on different parts of the seamless tiling.
@ -346,6 +365,13 @@ Node Link: https://github.com/VeyDlin/negative-image-node
View: View:
</br><img src="https://raw.githubusercontent.com/VeyDlin/negative-image-node/master/.readme/node.png" width="500" /> </br><img src="https://raw.githubusercontent.com/VeyDlin/negative-image-node/master/.readme/node.png" width="500" />
--------------------------------
### Nightmare Promptgen
**Description:** Nightmare Prompt Generator - uses a local text generation model to create unique, imaginative (but usually nightmarish) prompts for InvokeAI. By default, it allows you to choose from some gpt-neo models I finetuned on over 2500 of my own InvokeAI prompts in Compel format, but you're able to add your own as well. It also supports replacing any troublesome words with a random choice from a list you can define.
**Node Link:** [https://github.com/gogurtenjoyer/nightmare-promptgen](https://github.com/gogurtenjoyer/nightmare-promptgen)
-------------------------------- --------------------------------
### Oobabooga ### Oobabooga
@ -409,6 +435,17 @@ See full docs here: https://github.com/skunkworxdark/Prompt-tools-nodes/edit/mai
**Node Link:** https://github.com/fieldOfView/InvokeAI-remote_image **Node Link:** https://github.com/fieldOfView/InvokeAI-remote_image
--------------------------------
### BriaAI Remove Background
**Description:** Implements one-click background removal with BriaAI's new version 1.4 model, which seems to be producing better results than any previous background removal tool.
**Node Link:** https://github.com/blessedcoolant/invoke_bria_rmbg
**View**
<img src="https://raw.githubusercontent.com/blessedcoolant/invoke_bria_rmbg/main/assets/preview.jpg" />
-------------------------------- --------------------------------
### Remove Background ### Remove Background
@ -36,6 +36,7 @@ their descriptions.
| Integer Math | Perform basic math operations on two integers | | Integer Math | Perform basic math operations on two integers |
| Convert Image Mode | Converts an image to a different mode. | | Convert Image Mode | Converts an image to a different mode. |
| Crop Image | Crops an image to a specified box. The box can be outside of the image. | | Crop Image | Crops an image to a specified box. The box can be outside of the image. |
| Ideal Size | Calculates an ideal image size for latents for a first pass of a multi-pass upscaling to avoid duplication and other artifacts |
| Image Hue Adjustment | Adjusts the Hue of an image. | | Image Hue Adjustment | Adjusts the Hue of an image. |
| Inverse Lerp Image | Inverse linear interpolation of all pixels of an image | | Inverse Lerp Image | Inverse linear interpolation of all pixels of an image |
| Image Primitive | An image primitive value | | Image Primitive | An image primitive value |
@ -1,6 +1,6 @@
# Example Workflows

We've curated some example workflows for you to get started with Workflows in InvokeAI! These can also be found in the Workflow Library, located in the Workflow Editor of Invoke.

To use them, right click on your desired workflow, follow the link to GitHub and click the "⬇" button to download the raw file. You can then use the "Load Workflow" functionality in InvokeAI to load the workflow and start generating images!
@ -13,46 +13,69 @@ We thank them for all of their time and hard work.
- [Lincoln D. Stein](mailto:lincoln.stein@gmail.com) - [Lincoln D. Stein](mailto:lincoln.stein@gmail.com)
## **Current core team** ## **Current Core Team**
* @lstein (Lincoln Stein) - Co-maintainer * @lstein (Lincoln Stein) - Co-maintainer
* @blessedcoolant - Co-maintainer * @blessedcoolant - Co-maintainer
* @hipsterusername (Kent Keirsey) - Co-maintainer, CEO, Positive Vibes * @hipsterusername (Kent Keirsey) - Co-maintainer, CEO, Positive Vibes
* @psychedelicious (Spencer Mabrito) - Web Team Leader * @psychedelicious (Spencer Mabrito) - Web Team Leader
* @Kyle0654 (Kyle Schouviller) - Node Architect and General Backend Wizard * @chainchompa (Jennifer Player) - Web Development & Chain-Chomping
* @damian0815 - Attention Systems and Compel Maintainer * @josh is toast (Josh Corbett) - Web Development
* @ebr (Eugene Brodsky) - Cloud/DevOps/Software engineer; your friendly neighbourhood cluster-autoscaler
* @genomancer (Gregg Helt) - Controlnet support
* @StAlKeR7779 (Sergey Borisov) - Torch stack, ONNX, model management, optimization
* @cheerio (Mary Rogers) - Lead Engineer & Web App Development * @cheerio (Mary Rogers) - Lead Engineer & Web App Development
* @ebr (Eugene Brodsky) - Cloud/DevOps/Sofware engineer; your friendly neighbourhood cluster-autoscaler
* @sunija - Standalone version
* @genomancer (Gregg Helt) - Controlnet support
* @brandon (Brandon Rising) - Platform, Infrastructure, Backend Systems * @brandon (Brandon Rising) - Platform, Infrastructure, Backend Systems
* @ryanjdick (Ryan Dick) - Machine Learning & Training * @ryanjdick (Ryan Dick) - Machine Learning & Training
* @millu (Millun Atluri) - Community Manager, Documentation, Node-wrangler * @JPPhoto - Core image generation nodes
* @chainchompa (Jennifer Player) - Web Development & Chain-Chomping * @dunkeroni - Image generation backend
* @SkunkWorxDark - Image generation backend
* @keturn (Kevin Turner) - Diffusers * @keturn (Kevin Turner) - Diffusers
* @millu (Millun Atluri) - Community Wizard, Documentation, Node-wrangler
* @glimmerleaf (Devon Hopkins) - Community Wizard
* @gogurt enjoyer - Discord moderator and end user support * @gogurt enjoyer - Discord moderator and end user support
* @whosawhatsis - Discord moderator and end user support * @whosawhatsis - Discord moderator and end user support
* @dwinrger - Discord moderator and end user support * @dwinrger - Discord moderator and end user support
* @526christian - Discord moderator and end user support * @526christian - Discord moderator and end user support
* @harvester62 - Discord moderator and end user support
## **Honored Team Alumni**
* @StAlKeR7779 (Sergey Borisov) - Torch stack, ONNX, model management, optimization
* @damian0815 - Attention Systems and Compel Maintainer
* @netsvetaev (Artur) - Localization support
* @Kyle0654 (Kyle Schouviller) - Node Architect and General Backend Wizard
* @tildebyte - Installation and configuration
* @mauwii (Matthias Wilde) - Installation, release, continuous integration
## **Full List of Contributors by Commit Name** ## **Full List of Contributors by Commit Name**
- 이승석
- AbdBarho - AbdBarho
- ablattmann - ablattmann
- AdamOStark - AdamOStark
- Adam Rice - Adam Rice
- Airton Silva - Airton Silva
- Aldo Hoeben
- Alexander Eichhorn - Alexander Eichhorn
- Alexandre D. Roberge - Alexandre D. Roberge
- Alexandre Macabies
- Alfie John
- Andreas Rozek - Andreas Rozek
- Andre LaBranche - Andre LaBranche
- Andy Bearman - Andy Bearman
- Andy Luhrs - Andy Luhrs
- Andy Pilate - Andy Pilate
- Anonymous
- Anthony Monthe
- Any-Winter-4079 - Any-Winter-4079
- apolinario - apolinario
- Ar7ific1al
- ArDiouscuros - ArDiouscuros
- Armando C. Santisbon - Armando C. Santisbon
- Arnold Cordewiner
- Arthur Holstvoogd - Arthur Holstvoogd
- artmen1516 - artmen1516
- Artur - Artur
@ -64,13 +87,16 @@ We thank them for all of their time and hard work.
- blhook - blhook
- BlueAmulet - BlueAmulet
- Bouncyknighter - Bouncyknighter
- Brandon
- Brandon Rising - Brandon Rising
- Brent Ozar - Brent Ozar
- Brian Racer - Brian Racer
- bsilvereagle - bsilvereagle
- c67e708d - c67e708d
- camenduru
- CapableWeb - CapableWeb
- Carson Katri - Carson Katri
- chainchompa
- Chloe - Chloe
- Chris Dawson - Chris Dawson
- Chris Hayes - Chris Hayes
@ -86,30 +112,45 @@ We thank them for all of their time and hard work.
- cpacker
- Cragin Godley
- creachec
- CrypticWit
- d8ahazard
- damian
- damian0815
- Damian at mba
- Damian Stewart
- Daniel Manzke
- Danny Beer
- Dan Sully
- Darren Ringer
- David Burnett
- David Ford
- David Regla
- David Sisco
- David Wager
- Daya Adianto
- db3000
- DekitaRPG
- Denis Olshin
- Dennis
- dependabot[bot]
- Dmitry Parnas
- Dobrynia100
- Dominic Letz
- DrGunnarMallon
- Drun555
- dunkeroni
- Edward Johan
- elliotsayes
- Elrik
- ElrikUnderlake
- Eric Khun
- Eric Wolf
- Eugene
- Eugene Brodsky
- ExperimentalCyborg
- Fabian Bahl
- Fabio 'MrWHO' Torchetti
- Fattire
- fattire
- Felipe Nogueira
- Félix Sanz
@ -118,8 +159,12 @@ We thank them for all of their time and hard work.
- gabrielrotbart
- gallegonovato
- Gérald LONLAS
- Gille
- GitHub Actions Bot
- glibesyck
- gogurtenjoyer
- Gohsuke Shimada
- greatwolf
- greentext2
- Gregg Helt
- H4rk
@ -131,6 +176,7 @@ We thank them for all of their time and hard work.
- Hosted Weblate
- Iman Karim
- ismail ihsan bülbül
- ItzAttila
- Ivan Efimov
- jakehl
- Jakub Kolčář
@ -141,6 +187,7 @@ We thank them for all of their time and hard work.
- Jason Toffaletti
- Jaulustus
- Jeff Mahoney
- Jennifer Player
- jeremy
- Jeremy Clark
- JigenD
@ -148,19 +195,26 @@ We thank them for all of their time and hard work.
- Johan Roxendal
- Johnathon Selstad
- Jonathan
- Jordan Hewitt
- Joseph Dries III
- Josh Corbett
- JPPhoto
- jspraul
- junzi
- Justin Wong
- Juuso V
- Kaspar Emanuel
- Katsuyuki-Karasawa
- Keerigan45
- Kent Keirsey
- Kevin Brack
- Kevin Coakley
- Kevin Gibbons
- Kevin Schaul
- Kevin Turner
- Kieran Klaassen
- krummrey
- Kyle
- Kyle Lacy
- Kyle Schouviller
- Lawrence Norton
@ -171,10 +225,15 @@ We thank them for all of their time and hard work.
- Lynne Whitehorn
- majick
- Marco Labarile
- Marta Nahorniuk
- Martin Kristiansen
- Mary Hipp
- maryhipp
- Mary Hipp Rogers
- mastercaster
- mastercaster9000
- Matthias Wild
- mauwii
- michaelk71
- mickr777
- Mihai
@ -182,11 +241,15 @@ We thank them for all of their time and hard work.
- Mikhail Tishin
- Millun Atluri
- Minjune Song
- Mitchell Allain
- mitien
- mofuzz
- Muhammad Usama
- Name
- _nderscore
- Neil Wang
- nekowaiz
- nemuruibai
- Netzer R
- Nicholas Koh
- Nicholas Körfer
@ -197,9 +260,11 @@ We thank them for all of their time and hard work.
- ofirkris
- Olivier Louvignes
- owenvincent
- pand4z31
- Patrick Esser
- Patrick Tien
- Patrick von Platen
- Paul Curry
- Paul Sajna
- pejotr
- Peter Baylies
@ -207,6 +272,7 @@ We thank them for all of their time and hard work.
- plucked
- prixt
- psychedelicious
- psychedelicious@windows
- Rainer Bernhardt
- Riccardo Giovanetti
- Rich Jones
@ -215,16 +281,22 @@ We thank them for all of their time and hard work.
- Robert Bolender
- Robin Rombach
- Rohan Barar
- Rohinish
- rpagliuca
- rromb
- Rupesh Sreeraman
- Ryan
- Ryan Cao
- Ryan Dick
- Saifeddine
- Saifeddine ALOUI
- Sam
- SammCheese
- Sam McLeod
- Sammy
- sammyf
- Samuel Husso
- Saurav Maheshkar
- Scott Lahteine
- Sean McLellan
- Sebastian Aigner
@ -232,16 +304,21 @@ We thank them for all of their time and hard work.
- Sergey Krashevich
- Shapor Naghibzadeh
- Shawn Zhong
- Simona Liliac
- Simon Vans-Colina
- skunkworxdark
- slashtechno
- SoheilRezaei
- Song, Pengcheng
- spezialspezial
- ssantos
- StAlKeR7779
- Stefan Tobler
- Stephan Koglin-Fischer
- SteveCaruso
- Steve Martinelli
- Steven Frank
- Surisen
- System X - Files
- Taylor Kems
- techicode
@ -260,26 +337,34 @@ We thank them for all of their time and hard work.
- tyler
- unknown
- user1
- vedant-3010
- Vedant Madane
- veprogames
- wa.code
- wfng92
- whjms
- whosawhatsis
- Will
- William Becher
- William Chong
- Wilson E. Alvarez
- woweenie
- Wubbbi
- xra
- Yeung Yiu Hung
- ymgenesis
- Yorzaren
- Yosuke Shinya
- yun saki
- ZachNagengast
- Zadagu
- zeptofine
- Zerdoumi
- Васянатор
- 冯不游
- 唐澤 克幸

-## **Original CompVis Authors**
+## **Original CompVis (Stable Diffusion) Authors**

- [Robin Rombach](https://github.com/rromb)
- [Patrick von Platen](https://github.com/patrickvonplaten)


@ -0,0 +1,5 @@
:root {
--md-primary-fg-color: #35A4DB;
--md-primary-fg-color--light: #35A4DB;
--md-primary-fg-color--dark: #35A4DB;
}

File diff suppressed because it is too large


@ -14,11 +14,19 @@ function is_bin_in_path {
}
function git_show {
-git show -s --format='%h %s' $1
+git show -s --format=oneline --abbrev-commit "$1" | cat
}
if [[ -v "VIRTUAL_ENV" ]]; then
# we can't just call 'deactivate' because this function is not exported
# to the environment of this script from the bash process that runs the script
echo -e "${BRED}A virtual environment is activated. Please deactivate it before proceeding.${RESET}"
exit -1
fi
cd "$(dirname "$0")" cd "$(dirname "$0")"
echo
echo -e "${BYELLOW}This script must be run from the installer directory!${RESET}" echo -e "${BYELLOW}This script must be run from the installer directory!${RESET}"
echo "The current working directory is $(pwd)" echo "The current working directory is $(pwd)"
read -p "If that looks right, press any key to proceed, or CTRL-C to exit..." read -p "If that looks right, press any key to proceed, or CTRL-C to exit..."
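The `git_show` change above swaps a custom `%h %s` format for `--format=oneline --abbrev-commit` and pipes through `cat` so git never opens a pager. A rough Python equivalent of the new helper (an illustration, not part of the repo):

```python
import subprocess

def git_show(ref: str = "HEAD") -> str:
    # Same effect as: git show -s --format=oneline --abbrev-commit "$ref" | cat
    # -s suppresses the patch body; capturing output means no pager is involved.
    result = subprocess.run(
        ["git", "show", "-s", "--format=oneline", "--abbrev-commit", ref],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()
```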
@ -32,13 +40,6 @@ if ! is_bin_in_path python && is_bin_in_path python3; then
}
fi
-if [[ -v "VIRTUAL_ENV" ]]; then
-# we can't just call 'deactivate' because this function is not exported
-# to the environment of this script from the bash process that runs the script
-echo -e "${BRED}A virtual environment is activated. Please deactivate it before proceeding.${RESET}"
-exit -1
-fi
VERSION=$(
cd ..
python -c "from invokeai.version import __version__ as version; print(version)"
@ -47,38 +48,9 @@ PATCH=""
VERSION="v${VERSION}${PATCH}" VERSION="v${VERSION}${PATCH}"
echo -e "${BGREEN}HEAD${RESET}:" echo -e "${BGREEN}HEAD${RESET}:"
git_show git_show HEAD
echo echo
# ---------------------- FRONTEND ----------------------
pushd ../invokeai/frontend/web >/dev/null
echo
echo "Installing frontend dependencies..."
echo
pnpm i --frozen-lockfile
echo
echo "Building frontend..."
echo
pnpm build
popd
# ---------------------- BACKEND ----------------------
echo
echo "Building wheel..."
echo
# install the 'build' package in the user site packages, if needed
# could be improved by using a temporary venv, but it's tiny and harmless
if [[ $(python -c 'from importlib.util import find_spec; print(find_spec("build") is None)') == "True" ]]; then
pip install --user build
fi
rm -rf ../build
python -m build --wheel --outdir dist/ ../.
# ----------------------
echo
@ -97,16 +69,13 @@ done
mkdir InvokeAI-Installer/lib
cp lib/*.py InvokeAI-Installer/lib
# Move the wheel
mv dist/*.whl InvokeAI-Installer/lib/
# Install scripts
# Mac/Linux
cp install.sh.in InvokeAI-Installer/install.sh
chmod a+x InvokeAI-Installer/install.sh
# Windows
-perl -p -e "s/^set INVOKEAI_VERSION=.*/set INVOKEAI_VERSION=$VERSION/" install.bat.in >InvokeAI-Installer/install.bat
+cp install.bat.in InvokeAI-Installer/install.bat
cp WinLongPathsEnabled.reg InvokeAI-Installer/
# Zip everything up


@ -15,7 +15,6 @@ if "%1" == "use-cache" (
@rem Config
@rem The version in the next line is replaced by an up to date release number
@rem when create_installer.sh is run. Change the release number there.
-set INVOKEAI_VERSION=latest
set INSTRUCTIONS=https://invoke-ai.github.io/InvokeAI/installation/INSTALL_AUTOMATED/
set TROUBLESHOOTING=https://invoke-ai.github.io/InvokeAI/installation/INSTALL_AUTOMATED/#troubleshooting
set PYTHON_URL=https://www.python.org/downloads/windows/


@ -11,7 +11,7 @@ import sys
import venv
from pathlib import Path
from tempfile import TemporaryDirectory
-from typing import Union
+from typing import Optional, Tuple
SUPPORTED_PYTHON = ">=3.10.0,<=3.11.100"
INSTALLER_REQS = ["rich", "semver", "requests", "plumbum", "prompt-toolkit"]
@ -21,40 +21,20 @@ OS = platform.uname().system
ARCH = platform.uname().machine
VERSION = "latest"
-### Feature flags
-# Install the virtualenv into the runtime dir
-FF_VENV_IN_RUNTIME = True
-# Install the wheel packaged with the installer
-FF_USE_LOCAL_WHEEL = True
class Installer:
"""
Deploys an InvokeAI installation into a given path
"""
-reqs: list[str] = INSTALLER_REQS
def __init__(self) -> None:
+self.reqs = INSTALLER_REQS
-self.preflight()
if os.getenv("VIRTUAL_ENV") is not None:
print("A virtual environment is already activated. Please 'deactivate' before installation.")
sys.exit(-1)
self.bootstrap()
+self.available_releases = get_github_releases()
-def preflight(self) -> None:
-"""
-Preflight checks
-"""
-# TODO
-# verify python version
-# on macOS verify XCode tools are present
-# verify libmesa, libglx on linux
-# check that the system arch is not i386 (?)
-# check that the system has a GPU, and the type of GPU
-pass
def mktemp_venv(self) -> TemporaryDirectory:
"""
@ -78,12 +58,9 @@ class Installer:
return venv_dir
-def bootstrap(self, verbose: bool = False) -> TemporaryDirectory:
+def bootstrap(self, verbose: bool = False) -> TemporaryDirectory | None:
"""
Bootstrap the installer venv with packages required at install time
-:return: path to the virtual environment directory that was bootstrapped
-:rtype: TemporaryDirectory
"""
print("Initializing the installer. This may take a minute - please wait...")
@ -95,39 +72,27 @@ class Installer:
cmd.extend(self.reqs)
try:
-res = subprocess.check_output(cmd).decode()
+# upgrade pip to the latest version to avoid a confusing message
+res = upgrade_pip(Path(venv_dir.name))
if verbose:
print(res)
+# run the install prerequisites installation
+res = subprocess.check_output(cmd).decode()
+if verbose:
+print(res)
return venv_dir
except subprocess.CalledProcessError as e:
print(e)
-def app_venv(self, path: str = None):
+def app_venv(self, venv_parent) -> Path:
"""
Create a virtualenv for the InvokeAI installation
"""
-# explicit venv location
+venv_dir = venv_parent / ".venv"
# currently unused in normal operation
# useful for testing or special cases
if path is not None:
venv_dir = Path(path)
# experimental / testing
elif not FF_VENV_IN_RUNTIME:
if OS == "Windows":
venv_dir_parent = os.getenv("APPDATA", "~/AppData/Roaming")
elif OS == "Darwin":
# there is no environment variable on macOS to find this
# TODO: confirm this is working as expected
venv_dir_parent = "~/Library/Application Support"
elif OS == "Linux":
venv_dir_parent = os.getenv("XDG_DATA_DIR", "~/.local/share")
venv_dir = Path(venv_dir_parent).expanduser().resolve() / f"InvokeAI/{VERSION}/venv"
# stable / current
else:
venv_dir = self.dest / ".venv"
# Prefer to copy python executables
# so that updates to system python don't break InvokeAI
@ -141,7 +106,7 @@ class Installer:
return venv_dir
def install(
-self, root: str = "~/invokeai", version: str = "latest", yes_to_all=False, find_links: Path = None
+self, version=None, root: str = "~/invokeai", yes_to_all=False, find_links: Optional[Path] = None
) -> None:
"""
Install the InvokeAI application into the given runtime path
@ -158,15 +123,20 @@ class Installer:
import messages
-messages.welcome()
+messages.welcome(self.available_releases)
-default_path = os.environ.get("INVOKEAI_ROOT") or Path(root).expanduser().resolve()
+version = messages.choose_version(self.available_releases)
-self.dest = default_path if yes_to_all else messages.dest_path(root)
+auto_dest = Path(os.environ.get("INVOKEAI_ROOT", root)).expanduser().resolve()
+destination = auto_dest if yes_to_all else messages.dest_path(root)
+if destination is None:
+print("Could not find or create the destination directory. Installation cancelled.")
+sys.exit(0)
# create the venv for the app
-self.venv = self.app_venv()
+self.venv = self.app_venv(venv_parent=destination)
-self.instance = InvokeAiInstance(runtime=self.dest, venv=self.venv, version=version)
+self.instance = InvokeAiInstance(runtime=destination, venv=self.venv, version=version)
# install dependencies and the InvokeAI application
(extra_index_url, optional_modules) = get_torch_source() if not yes_to_all else (None, None)
@ -190,7 +160,7 @@ class InvokeAiInstance:
A single runtime directory *may* be shared by multiple virtual environments, though this isn't currently tested or supported.
"""
-def __init__(self, runtime: Path, venv: Path, version: str) -> None:
+def __init__(self, runtime: Path, venv: Path, version: str = "stable") -> None:
self.runtime = runtime
self.venv = venv
self.pip = get_pip_from_venv(venv)
@ -199,6 +169,7 @@ class InvokeAiInstance:
set_sys_path(venv)
os.environ["INVOKEAI_ROOT"] = str(self.runtime.expanduser().resolve())
os.environ["VIRTUAL_ENV"] = str(self.venv.expanduser().resolve())
+upgrade_pip(venv)
def get(self) -> tuple[Path, Path]:
"""
@ -212,54 +183,7 @@ class InvokeAiInstance:
def install(self, extra_index_url=None, optional_modules=None, find_links=None):
"""
-Install this instance, including dependencies and the app itself
+Install the package from PyPi.
-:param extra_index_url: the "--extra-index-url ..." line for pip to look in extra indexes.
-:type extra_index_url: str
-"""
import messages
# install torch first to ensure the correct version gets installed.
# works with either source or wheel install with negligible impact on installation times.
messages.simple_banner("Installing PyTorch :fire:")
self.install_torch(extra_index_url, find_links)
messages.simple_banner("Installing the InvokeAI Application :art:")
self.install_app(extra_index_url, optional_modules, find_links)
def install_torch(self, extra_index_url=None, find_links=None):
"""
Install PyTorch
"""
from plumbum import FG, local
pip = local[self.pip]
(
pip[
"install",
"--require-virtualenv",
"numpy~=1.24.0", # choose versions that won't be uninstalled during phase 2
"urllib3~=1.26.0",
"requests~=2.28.0",
"torch==2.1.2",
"torchmetrics==0.11.4",
"torchvision>=0.16.2",
"--force-reinstall",
"--find-links" if find_links is not None else None,
find_links,
"--extra-index-url" if extra_index_url is not None else None,
extra_index_url,
]
& FG
)
def install_app(self, extra_index_url=None, optional_modules=None, find_links=None):
"""
Install the application with pip.
Supports installation from PyPi or from a local source directory.
:param extra_index_url: the "--extra-index-url ..." line for pip to look in extra indexes.
:type extra_index_url: str
@ -271,53 +195,52 @@ class InvokeAiInstance:
:type find_links: Path
"""
-## this only applies to pypi installs; TODO actually use this
+import messages
-if self.version == "pre":
+# not currently used, but may be useful for "install most recent version" option
+if self.version == "prerelease":
version = None
-pre = "--pre"
+pre_flag = "--pre"
+elif self.version == "stable":
+version = None
+pre_flag = None
else:
version = self.version
-pre = None
+pre_flag = None
-## TODO: only local wheel will be installed as of now; support for --version arg is TODO
+src = "invokeai"
-if FF_USE_LOCAL_WHEEL:
+if optional_modules:
-# if no wheel, try to do a source install before giving up
+src += optional_modules
-try:
+if version:
-src = str(next(Path(__file__).parent.glob("InvokeAI-*.whl")))
+src += f"=={version}"
-except StopIteration:
-try:
-src = Path(__file__).parents[1].expanduser().resolve()
-# if the above directory contains one of these files, we'll do a source install
-next(src.glob("pyproject.toml"))
-next(src.glob("invokeai"))
-except StopIteration:
-print("Unable to find a wheel or perform a source install. Giving up.")
-elif version == "source":
+messages.simple_banner("Installing the InvokeAI Application :art:")
-# this makes an assumption about the location of the installer package in the source tree
-src = Path(__file__).parents[1].expanduser().resolve()
-else:
-# will install from PyPi
-src = f"invokeai=={version}" if version is not None else "invokeai"
-from plumbum import FG, local
+from plumbum import FG, ProcessExecutionError, local  # type: ignore
pip = local[self.pip]
-(
+pipeline = pip[
-pip[
+"install",
-"install",
+"--require-virtualenv",
-"--require-virtualenv",
+"--force-reinstall",
"--use-pep517",
-str(src) + (optional_modules if optional_modules else ""),
+str(src),
"--find-links" if find_links is not None else None,
find_links,
"--extra-index-url" if extra_index_url is not None else None,
extra_index_url,
-pre,
+pre_flag,
]
-& FG
-)
+try:
+_ = pipeline & FG
+except ProcessExecutionError as e:
+print(f"Error: {e}")
+print(
+"Could not install InvokeAI. Please try downloading the latest version of the installer and install again."
+)
+sys.exit(1)
def configure(self):
"""
@ -373,7 +296,6 @@ class InvokeAiInstance:
ext = "bat" if OS == "Windows" else "sh" ext = "bat" if OS == "Windows" else "sh"
# scripts = ['invoke', 'update']
scripts = ["invoke"] scripts = ["invoke"]
for script in scripts: for script in scripts:
@ -408,6 +330,23 @@ def get_pip_from_venv(venv_path: Path) -> str:
return str(venv_path.expanduser().resolve() / pip)
def upgrade_pip(venv_path: Path) -> str | None:
"""
Upgrade the pip executable in the given virtual environment
"""
python = "Scripts\\python.exe" if OS == "Windows" else "bin/python"
python = str(venv_path.expanduser().resolve() / python)
try:
result = subprocess.check_output([python, "-m", "pip", "install", "--upgrade", "pip"]).decode()
except subprocess.CalledProcessError as e:
print(e)
result = None
return result
def set_sys_path(venv_path: Path) -> None:
"""
Given a path to a virtual environment, set the sys.path, in a cross-platform fashion,
@ -431,7 +370,43 @@ def set_sys_path(venv_path: Path) -> None:
sys.path.append(str(Path(venv_path, lib, "site-packages").expanduser().resolve()))
-def get_torch_source() -> (Union[str, None], str):
+def get_github_releases() -> tuple[list, list] | None:
"""
Query Github for published (pre-)release versions.
Return a tuple where the first element is a list of stable releases and the second element is a list of pre-releases.
Return None if the query fails for any reason.
"""
import requests
## get latest releases using github api
url = "https://api.github.com/repos/invoke-ai/InvokeAI/releases"
releases, pre_releases = [], []
try:
res = requests.get(url)
res.raise_for_status()
tag_info = res.json()
for tag in tag_info:
if not tag["prerelease"]:
releases.append(tag["tag_name"].lstrip("v"))
else:
pre_releases.append(tag["tag_name"].lstrip("v"))
except requests.HTTPError as e:
print(f"Error: {e}")
print("Could not fetch version information from GitHub. Please check your network connection and try again.")
return
except Exception as e:
print(f"Error: {e}")
print("An unexpected error occurred while trying to fetch version information from GitHub. Please try again.")
return
releases.sort(reverse=True)
pre_releases.sort(reverse=True)
return releases, pre_releases
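One caveat worth noting: `releases.sort(reverse=True)` sorts tag strings lexicographically, so a hypothetical "3.10.0" would sort before "3.9.0". Since `semver` is already in INSTALLER_REQS, a version-aware sort is straightforward; a minimal sketch (not what the code above does):

```python
import semver

def sort_releases(tags: list[str]) -> list[str]:
    # Lexicographic sorting mis-orders multi-digit components ("3.10.0" < "3.9.0").
    parsed, unparsed = [], []
    for tag in tags:
        try:
            parsed.append(semver.VersionInfo.parse(tag))
        except ValueError:
            unparsed.append(tag)  # tags that aren't strict semver, e.g. "3.6.3rc1"
    parsed.sort(reverse=True)
    return [str(v) for v in parsed] + sorted(unparsed, reverse=True)
```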
-def get_torch_source() -> (Union[str, None], str):
+def get_torch_source() -> Tuple[str | None, str | None]:
"""
Determine the extra index URL for pip to use for torch installation.
This depends on the OS and the graphics accelerator in use.
@ -446,25 +421,26 @@ def get_torch_source() -> (Union[str, None], str):
:rtype: list
"""
-from messages import graphical_accelerator
+from messages import select_gpu
-# device can be one of: "cuda", "rocm", "cpu", "idk"
+# device can be one of: "cuda", "rocm", "cpu", "cuda_and_dml, autodetect"
-device = graphical_accelerator()
+device = select_gpu()
url = None
optional_modules = "[onnx]"
if OS == "Linux":
-if device == "rocm":
+if device.value == "rocm":
-url = "https://download.pytorch.org/whl/rocm5.4.2"
+url = "https://download.pytorch.org/whl/rocm5.6"
-elif device == "cpu":
+elif device.value == "cpu":
url = "https://download.pytorch.org/whl/cpu"
-if device == "cuda":
+elif OS == "Windows":
-url = "https://download.pytorch.org/whl/cu121"
+if device.value == "cuda":
-optional_modules = "[xformers,onnx-cuda]"
+url = "https://download.pytorch.org/whl/cu121"
-if device == "cuda_and_dml":
+optional_modules = "[xformers,onnx-cuda]"
-url = "https://download.pytorch.org/whl/cu121"
+if device.value == "cuda_and_dml":
-optional_modules = "[xformers,onnx-directml]"
+url = "https://download.pytorch.org/whl/cu121"
+optional_modules = "[xformers,onnx-directml]"
# in all other cases, Torch wheels should be coming from PyPi as of Torch 1.13
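For illustration, the `(extra_index_url, optional_modules)` pair returned here translates into a pip invocation along these lines for a Windows + CUDA selection (values taken from the table above; the invocation itself is a sketch):

```python
import subprocess
import sys

# Illustrative only: what the installer's pip call amounts to for device == "cuda"
# on Windows, per the URL table above.
extra_index_url = "https://download.pytorch.org/whl/cu121"
optional_modules = "[xformers,onnx-cuda]"

subprocess.check_call(
    [
        sys.executable, "-m", "pip", "install",
        f"invokeai{optional_modules}",
        "--extra-index-url", extra_index_url,
    ]
)
```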


@ -5,10 +5,11 @@ Installer user interaction
import os
import platform
+from enum import Enum
from pathlib import Path
from prompt_toolkit import HTML, prompt
-from prompt_toolkit.completion import PathCompleter
+from prompt_toolkit.completion import FuzzyWordCompleter, PathCompleter
from prompt_toolkit.validation import Validator
from rich import box, print
from rich.console import Console, Group, group
@ -35,16 +36,26 @@ else:
console = Console(style=Style(color="grey74", bgcolor="grey19"))
-def welcome():
+def welcome(available_releases: tuple | None = None) -> None:
@group()
def text():
-if (platform_specific := _platform_specific_help()) != "":
+if (platform_specific := _platform_specific_help()) is not None:
yield platform_specific
yield ""
yield Text.from_markup(
"Some of the installation steps take a long time to run. Please be patient. If the script appears to hang for more than 10 minutes, please interrupt with [i]Control-C[/] and retry.",
justify="center",
)
if available_releases is not None:
latest_stable = available_releases[0][0]
last_pre = available_releases[1][0]
yield ""
yield Text.from_markup(
f"[red3]🠶[/] Latest stable release (recommended): [b bright_white]{latest_stable}", justify="center"
)
yield Text.from_markup(
f"[red3]🠶[/] Last published pre-release version: [b bright_white]{last_pre}", justify="center"
)
console.rule()
print(
@ -61,19 +72,30 @@ def welcome():
console.line()
-def confirm_install(dest: Path) -> bool:
+def choose_version(available_releases: tuple | None = None) -> str:
-if dest.exists():
+"""
-print(f":exclamation: Directory {dest} already exists :exclamation:")
+Prompt the user to choose an Invoke version to install
-dest_confirmed = Confirm.ask(
+"""
-":stop_sign: (re)install in this location?",
-default=False,
+# short circuit if we couldn't get a version list
-)
+# still try to install the latest stable version
-else:
+if available_releases is None:
-print(f"InvokeAI will be installed in {dest}")
+return "stable"
-dest_confirmed = Confirm.ask("Use this location?", default=True)
+console.print(":grey_question: [orange3]Please choose an Invoke version to install.")
console.print(":grey_question: [orange3]Please choose an Invoke version to install.")
choices = available_releases[0] + available_releases[1]
response = prompt(
message=f" <Enter> to install the recommended release ({choices[0]}). <Tab> or type to pick a version: ",
complete_while_typing=True,
completer=FuzzyWordCompleter(choices),
)
console.print(f" Version {choices[0] if response == '' else response} will be installed.")
console.line()
-return dest_confirmed
+return "stable" if response == "" else response
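`choose_version` pairs a free-text prompt with prompt_toolkit's `FuzzyWordCompleter`, so the user can either press Enter for the recommended release or fuzzy-complete any published tag. A self-contained sketch of that interaction (the version list here is made up):

```python
from prompt_toolkit import prompt
from prompt_toolkit.completion import FuzzyWordCompleter

# Hypothetical choices; the installer builds this list from the GitHub releases API.
choices = ["3.6.3", "3.6.2", "3.6.1"]

response = prompt(
    f"<Enter> for {choices[0]}, <Tab> or type to pick a version: ",
    completer=FuzzyWordCompleter(choices),
    complete_while_typing=True,
)
selected = choices[0] if response == "" else response
print(f"Version {selected} will be installed.")
```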
def user_wants_auto_configuration() -> bool:
@ -109,7 +131,23 @@ def user_wants_auto_configuration() -> bool:
return choice.lower().startswith("a")
-def dest_path(dest=None) -> Path:
+def confirm_install(dest: Path) -> bool:
if dest.exists():
print(f":stop_sign: Directory {dest} already exists!")
print(" Is this location correct?")
default = False
else:
print(f":file_folder: InvokeAI will be installed in {dest}")
default = True
dest_confirmed = Confirm.ask(" Please confirm:", default=default)
console.line()
return dest_confirmed
+def dest_path(dest=None) -> Path | None:
"""
Prompt the user for the destination path and create the path
@ -124,25 +162,21 @@ def dest_path(dest=None) -> Path:
else:
dest = Path.cwd().expanduser().resolve()
prev_dest = init_path = dest
-dest_confirmed = False
+dest_confirmed = confirm_install(dest)
while not dest_confirmed:
-# if the given destination already exists, the starting point for browsing is its parent directory.
+browse_start = (dest or Path.cwd()).expanduser().resolve()
-# the user may have made a typo, or otherwise wants to place the root dir next to an existing one.
-# if the destination dir does NOT exist, then the user must have changed their mind about the selection.
-# since we can't read their mind, start browsing at Path.cwd().
-browse_start = (prev_dest.parent if prev_dest.exists() else Path.cwd()).expanduser().resolve()
path_completer = PathCompleter(
only_directories=True,
expanduser=True,
-get_paths=lambda: [browse_start],  # noqa: B023
+get_paths=lambda: [str(browse_start)],  # noqa: B023
# get_paths=lambda: [".."].extend(list(browse_start.iterdir()))
)
console.line()
-console.print(f"[orange3]Please select the destination directory for the installation:[/] \[{browse_start}]: ")
+console.print(f":grey_question: [orange3]Please select the install destination:[/] \[{browse_start}]: ")
selected = prompt(
">>> ",
complete_in_thread=True,
@ -155,6 +189,7 @@ def dest_path(dest=None) -> Path:
)
prev_dest = dest
dest = Path(selected)
console.line()
dest_confirmed = confirm_install(dest.expanduser().resolve())
@ -182,41 +217,45 @@ def dest_path(dest=None) -> Path:
console.rule("Goodbye!") console.rule("Goodbye!")
def graphical_accelerator(): class GpuType(Enum):
CUDA = "cuda"
CUDA_AND_DML = "cuda_and_dml"
ROCM = "rocm"
CPU = "cpu"
AUTODETECT = "autodetect"
def select_gpu() -> GpuType:
""" """
Prompt the user to select the graphical accelerator in their system Prompt the user to select the GPU driver
This does not validate user's choices (yet), but only offers choices
valid for the platform.
CUDA is the fallback.
We may be able to detect the GPU driver by shelling out to `modprobe` or `lspci`,
but this is not yet supported or reliable. Also, some users may have exotic preferences.
""" """
if ARCH == "arm64" and OS != "Darwin": if ARCH == "arm64" and OS != "Darwin":
print(f"Only CPU acceleration is available on {ARCH} architecture. Proceeding with that.") print(f"Only CPU acceleration is available on {ARCH} architecture. Proceeding with that.")
return "cpu" return GpuType.CPU
nvidia = ( nvidia = (
"an [gold1 b]NVIDIA[/] GPU (using CUDA™)", "an [gold1 b]NVIDIA[/] GPU (using CUDA™)",
"cuda", GpuType.CUDA,
) )
nvidia_with_dml = ( nvidia_with_dml = (
"an [gold1 b]NVIDIA[/] GPU (using CUDA™, and DirectML™ for ONNX) -- ALPHA", "an [gold1 b]NVIDIA[/] GPU (using CUDA™, and DirectML™ for ONNX) -- ALPHA",
"cuda_and_dml", GpuType.CUDA_AND_DML,
) )
amd = ( amd = (
"an [gold1 b]AMD[/] GPU (using ROCm™)", "an [gold1 b]AMD[/] GPU (using ROCm™)",
"rocm", GpuType.ROCM,
) )
cpu = ( cpu = (
"no compatible GPU, or specifically prefer to use the CPU", "Do not install any GPU support, use CPU for generation (slow)",
"cpu", GpuType.CPU,
) )
idk = ( autodetect = (
"I'm not sure what to choose", "I'm not sure what to choose",
"idk", GpuType.AUTODETECT,
) )
options = []
if OS == "Windows":
options = [nvidia, nvidia_with_dml, cpu]
if OS == "Linux":
@ -230,7 +269,7 @@ def graphical_accelerator():
return options[0][1]
# "I don't know" is always added the last option
-options.append(idk)
+options.append(autodetect)  # type: ignore
options = {str(i): opt for i, opt in enumerate(options, 1)}
@ -265,9 +304,9 @@ def graphical_accelerator():
),
)
-if options[choice][1] == "idk":
+if options[choice][1] is GpuType.AUTODETECT:
console.print(
-"No problem. We will try to install a version that [i]should[/i] be compatible. :crossed_fingers:"
+"No problem. We will install CUDA support first :crossed_fingers: If Invoke does not detect a GPU, please re-run the installer and select one of the other GPU types."
)
return options[choice][1]
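Replacing the old string constants with the `GpuType` enum means a mistyped device value fails loudly when parsed, instead of silently matching none of the `if` branches in `get_torch_source()`. A small standalone demonstration of that property, using the same enum values:

```python
from enum import Enum

class GpuType(Enum):
    CUDA = "cuda"
    CUDA_AND_DML = "cuda_and_dml"
    ROCM = "rocm"
    CPU = "cpu"
    AUTODETECT = "autodetect"

# The stored value round-trips back to the member:
assert GpuType("cuda") is GpuType.CUDA

# A typo now raises immediately rather than being ignored:
try:
    GpuType("cudda")
except ValueError as err:
    print(err)  # 'cudda' is not a valid GpuType
```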
@ -291,7 +330,7 @@ def windows_long_paths_registry() -> None:
""" """
with open(str(Path(__file__).parent / "WinLongPathsEnabled.reg"), "r", encoding="utf-16le") as code: with open(str(Path(__file__).parent / "WinLongPathsEnabled.reg"), "r", encoding="utf-16le") as code:
syntax = Syntax(code.read(), line_numbers=True) syntax = Syntax(code.read(), line_numbers=True, lexer="regedit")
console.print( console.print(
Panel( Panel(
@ -301,7 +340,7 @@ def windows_long_paths_registry() -> None:
"We will now apply a registry fix to enable long paths on Windows. InvokeAI needs this to function correctly. We are asking your permission to modify the Windows Registry on your behalf.", "We will now apply a registry fix to enable long paths on Windows. InvokeAI needs this to function correctly. We are asking your permission to modify the Windows Registry on your behalf.",
"", "",
"This is the change that will be applied:", "This is the change that will be applied:",
syntax, str(syntax),
] ]
) )
), ),
@ -340,7 +379,7 @@ def introduction() -> None:
console.line(2)
-def _platform_specific_help() -> str:
+def _platform_specific_help() -> Text | None:
if OS == "Darwin":
text = Text.from_markup(
"""[b wheat1]macOS Users![/]\n\nPlease be sure you have the [b wheat1]Xcode command-line tools[/] installed before continuing.\nIf not, cancel with [i]Control-C[/] and follow the Xcode install instructions at [deep_sky_blue1]https://www.freecodecamp.org/news/install-xcode-command-line-tools/[/]."""
@ -354,5 +393,5 @@ def _platform_specific_help() -> str:
[deep_sky_blue1]https://learn.microsoft.com/en-US/cpp/windows/latest-supported-vc-redist?view=msvc-170[/]"""
)
else:
-text = ""
+return
return text


@ -15,7 +15,7 @@ echo 4. Download and install models
echo 5. Change InvokeAI startup options
echo 6. Re-run the configure script to fix a broken install or to complete a major upgrade
echo 7. Open the developer console
-echo 8. Update InvokeAI
+echo 8. Update InvokeAI (DEPRECATED - please use the installer)
echo 9. Run the InvokeAI image database maintenance script
echo 10. Command-line help
echo Q - Quit
@ -52,8 +52,10 @@ IF /I "%choice%" == "1" (
echo *** Type `exit` to quit this shell and deactivate the Python virtual environment ***
call cmd /k
) ELSE IF /I "%choice%" == "8" (
-echo Running invokeai-update...
+echo UPDATING FROM WITHIN THE APP IS BEING DEPRECATED.
-python -m invokeai.frontend.install.invokeai_update
+echo Please download the installer from https://github.com/invoke-ai/InvokeAI/releases/latest and run it to update your installation.
+timeout 4
+python -m invokeai.frontend.install.invokeai_update
) ELSE IF /I "%choice%" == "9" (
echo Running the db maintenance script...
python .venv\Scripts\invokeai-db-maintenance.exe
@ -77,4 +79,3 @@ pause
:ending
exit /b


@ -90,7 +90,9 @@ do_choice() {
;;
8)
clear
-printf "Update InvokeAI\n"
+printf "UPDATING FROM WITHIN THE APP IS BEING DEPRECATED\n"
+printf "Please download the installer from https://github.com/invoke-ai/InvokeAI/releases/latest and run it to update your installation.\n"
+sleep 4
python -m invokeai.frontend.install.invokeai_update
;;
9)
@ -122,7 +124,7 @@ do_dialog() {
5 "Change InvokeAI startup options" 5 "Change InvokeAI startup options"
6 "Re-run the configure script to fix a broken install or to complete a major upgrade" 6 "Re-run the configure script to fix a broken install or to complete a major upgrade"
7 "Open the developer console" 7 "Open the developer console"
8 "Update InvokeAI" 8 "Update InvokeAI (DEPRECATED - please use the installer)"
9 "Run the InvokeAI image database maintenance script" 9 "Run the InvokeAI image database maintenance script"
10 "Command-line help" 10 "Command-line help"
) )


@ -1,72 +0,0 @@
@echo off
setlocal EnableExtensions EnableDelayedExpansion
PUSHD "%~dp0"
set INVOKE_AI_VERSION=latest
set arg=%1
if "%arg%" neq "" (
if "%arg:~0,2%" equ "/?" (
echo Usage: update.bat ^<release name or branch^>
echo Updates InvokeAI to use the indicated version of the code base.
echo Find the version or branch for the release you want, and pass it as the argument.
echo For example '.\update.bat v2.2.5' for release 2.2.5.
echo '.\update.bat main' for the latest development version
echo.
echo If no argument provided then will install the most recent release, equivalent to
echo '.\update.bat latest'
exit /b
) else (
set INVOKE_AI_VERSION=%arg%
)
)
set INVOKE_AI_SRC="https://github.com/invoke-ai/InvokeAI/archive/!INVOKE_AI_VERSION!.zip"
set INVOKE_AI_DEP=https://raw.githubusercontent.com/invoke-ai/InvokeAI/!INVOKE_AI_VERSION!/environments-and-requirements/requirements-base.txt
set INVOKE_AI_MODELS=https://raw.githubusercontent.com/invoke-ai/InvokeAI/$INVOKE_AI_VERSION/configs/INITIAL_MODELS.yaml
call curl -I "%INVOKE_AI_DEP%" -fs >.tmp.out
if %errorlevel% neq 0 (
echo '!INVOKE_AI_VERSION!' is not a known branch name or tag. Please check the version and try again.
echo "Press any key to continue"
pause
exit /b
)
del .tmp.out
echo This script will update InvokeAI and all its dependencies to !INVOKE_AI_SRC!.
echo If you do not want to do this, press control-C now!
pause
call curl -L "%INVOKE_AI_DEP%" > environments-and-requirements/requirements-base.txt
call curl -L "%INVOKE_AI_MODELS%" > configs/INITIAL_MODELS.yaml
call .venv\Scripts\activate.bat
call .venv\Scripts\python -mpip install -r requirements.txt
if %errorlevel% neq 0 (
echo Installation of requirements failed. See https://invoke-ai.github.io/InvokeAI/installation/INSTALL_AUTOMATED/#troubleshooting for suggestions.
pause
exit /b
)
call .venv\Scripts\python -mpip install !INVOKE_AI_SRC!
if %errorlevel% neq 0 (
echo Installation of InvokeAI failed. See https://invoke-ai.github.io/InvokeAI/installation/INSTALL_AUTOMATED/#troubleshooting for suggestions.
pause
exit /b
)
@rem call .venv\Scripts\invokeai-configure --root=.
@rem if %errorlevel% neq 0 (
@rem echo Configuration InvokeAI failed. See https://invoke-ai.github.io/InvokeAI/installation/INSTALL_AUTOMATED/#troubleshooting for suggestions.
@rem pause
@rem exit /b
@rem )
echo InvokeAI has been updated to '%INVOKE_AI_VERSION%'
echo "Press any key to continue"
pause
endlocal


@ -1,58 +0,0 @@
#!/usr/bin/env bash
set -eu
if [ $# -ge 1 ] && [ "${1:0:2}" == "-h" ]; then
echo "Usage: update.sh <release>"
echo "Updates InvokeAI to use the indicated version of the code base."
echo "Find the version or branch for the release you want, and pass it as the argument."
echo "For example: update.sh v2.2.5 for release 2.2.5."
echo " update.sh main for the current development version."
echo ""
echo "If no argument provided then will install the version tagged with 'latest', equivalent to"
echo "update.sh latest"
exit -1
fi
INVOKE_AI_VERSION=${1:-latest}
INVOKE_AI_SRC="https://github.com/invoke-ai/InvokeAI/archive/$INVOKE_AI_VERSION.zip"
INVOKE_AI_DEP=https://raw.githubusercontent.com/invoke-ai/InvokeAI/$INVOKE_AI_VERSION/environments-and-requirements/requirements-base.txt
INVOKE_AI_MODELS=https://raw.githubusercontent.com/invoke-ai/InvokeAI/$INVOKE_AI_VERSION/configs/INITIAL_MODELS.yaml
# ensure we're in the correct folder in case user's CWD is somewhere else
scriptdir=$(dirname "$0")
cd "$scriptdir"
function _err_exit {
if test "$1" -ne 0
then
echo "Something went wrong while installing InvokeAI and/or its requirements."
echo "Update cannot continue. Please report this error to https://github.com/invoke-ai/InvokeAI/issues"
echo -e "Error code $1; Error caught was '$2'"
read -p "Press any key to exit..."
exit
fi
}
if ! curl -I "$INVOKE_AI_DEP" -fs >/dev/null; then
echo \'$INVOKE_AI_VERSION\' is not a known branch name or tag. Please check the version and try again.
exit
fi
echo This script will update InvokeAI and all its dependencies to version \'$INVOKE_AI_VERSION\'.
echo If you do not want to do this, press control-C now!
read -p "Press any key to continue, or CTRL-C to exit..."
curl -L "$INVOKE_AI_DEP" > environments-and-requirements/requirements-base.txt
curl -L "$INVOKE_AI_MODELS" > configs/INITIAL_MODELS.yaml
. .venv/bin/activate
./.venv/bin/python -mpip install -r requirements.txt
_err_exit $? "The pip program failed to install InvokeAI's requirements."
./.venv/bin/python -mpip install $INVOKE_AI_SRC
_err_exit $? "The pip program failed to install InvokeAI."
echo InvokeAI updated to \'$INVOKE_AI_VERSION\'


@ -2,7 +2,9 @@
from logging import Logger
+from invokeai.app.services.item_storage.item_storage_memory import ItemStorageMemory
from invokeai.app.services.shared.sqlite.sqlite_util import init_db
+from invokeai.backend.model_manager.metadata import ModelMetadataStore
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.version.invokeai_version import __version__
@ -21,7 +23,6 @@ from ..services.invocation_queue.invocation_queue_memory import MemoryInvocation
from ..services.invocation_services import InvocationServices
from ..services.invocation_stats.invocation_stats_default import InvocationStatsService
from ..services.invoker import Invoker
-from ..services.item_storage.item_storage_sqlite import SqliteItemStorage
from ..services.latents_storage.latents_storage_disk import DiskLatentsStorage
from ..services.latents_storage.latents_storage_forward_cache import ForwardCacheLatentsStorage
from ..services.model_install import ModelInstallService
@ -61,7 +62,7 @@ class ApiDependencies:
invoker: Invoker
@staticmethod
-def initialize(config: InvokeAIAppConfig, event_handler_id: int, logger: Logger = logger):
+def initialize(config: InvokeAIAppConfig, event_handler_id: int, logger: Logger = logger) -> None:
logger.info(f"InvokeAI version {__version__}")
logger.info(f"Root directory = {str(config.root_path)}")
logger.debug(f"Internet connectivity is {config.internet_available}")
@ -79,7 +80,7 @@ class ApiDependencies:
board_records = SqliteBoardRecordStorage(db=db)
boards = BoardService()
events = FastAPIEventService(event_handler_id)
-graph_execution_manager = SqliteItemStorage[GraphExecutionState](db=db, table_name="graph_executions")
+graph_execution_manager = ItemStorageMemory[GraphExecutionState]()
image_records = SqliteImageRecordStorage(db=db)
images = ImageService()
invocation_cache = MemoryInvocationCache(max_cache_size=config.node_cache_size)
@ -87,8 +88,13 @@ class ApiDependencies:
model_manager = ModelManagerService(config, logger)
model_record_service = ModelRecordServiceSQL(db=db)
download_queue_service = DownloadQueueService(event_bus=events)
+metadata_store = ModelMetadataStore(db=db)
model_install_service = ModelInstallService(
-app_config=config, record_store=model_record_service, event_bus=events
+app_config=config,
+record_store=model_record_service,
+download_queue=download_queue_service,
+metadata_store=metadata_store,
+event_bus=events,
)
names = SimpleNameService()
performance_statistics = InvocationStatsService()
@ -131,6 +137,6 @@ class ApiDependencies:
db.clean()
@staticmethod
-def shutdown():
+def shutdown() -> None:
if ApiDependencies.invoker:
ApiDependencies.invoker.stop()
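The swap from `SqliteItemStorage` to `ItemStorageMemory` keeps graph execution state in process memory rather than in the database. A toy illustration of what a bounded in-memory item store of this shape can look like (this is not the actual `ItemStorageMemory` implementation):

```python
from collections import OrderedDict
from typing import Generic, Optional, TypeVar

T = TypeVar("T")

class ToyItemStorageMemory(Generic[T]):
    """Illustrative only: an in-memory store with simple LRU-style eviction."""

    def __init__(self, max_items: int = 10) -> None:
        self._items: "OrderedDict[str, T]" = OrderedDict()
        self._max_items = max_items

    def set(self, item_id: str, item: T) -> None:
        self._items[item_id] = item
        self._items.move_to_end(item_id)
        if len(self._items) > self._max_items:
            self._items.popitem(last=False)  # evict the least recently used entry

    def get(self, item_id: str) -> Optional[T]:
        item = self._items.get(item_id)
        if item is not None:
            self._items.move_to_end(item_id)
        return item
```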


@ -0,0 +1,28 @@
from typing import Any
from starlette.responses import Response
from starlette.staticfiles import StaticFiles
class NoCacheStaticFiles(StaticFiles):
"""
This class is used to override the default caching behavior of starlette for static files,
ensuring we *never* cache static files. It modifies the file response headers to strictly
never cache the files.
Static files include the javascript bundles, fonts, locales, and some images. Generated
images are not included, as they are served by a router.
"""
def __init__(self, *args: Any, **kwargs: Any):
self.cachecontrol = "max-age=0, no-cache, no-store, must-revalidate"
self.pragma = "no-cache"
self.expires = "0"
super().__init__(*args, **kwargs)
def file_response(self, *args: Any, **kwargs: Any) -> Response:
resp = super().file_response(*args, **kwargs)
resp.headers.setdefault("Cache-Control", self.cachecontrol)
resp.headers.setdefault("Pragma", self.pragma)
resp.headers.setdefault("Expires", self.expires)
return resp
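Usage is the same as Starlette's stock `StaticFiles`; a minimal sketch of mounting it (the mount path and directory here are assumptions, not InvokeAI's actual values):

```python
from fastapi import FastAPI

app = FastAPI()

# Hypothetical mount point and directory; substitute the app's real ones.
app.mount(
    "/static",
    NoCacheStaticFiles(directory="static"),  # class defined above
    name="static",
)
```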


@ -1,10 +1,10 @@
# Copyright (c) 2023 Lincoln D. Stein
"""FastAPI route for model configuration records."""
+import pathlib
from hashlib import sha1
from random import randbytes
-from typing import Any, Dict, List, Optional
+from typing import Any, Dict, List, Optional, Set
from fastapi import Body, Path, Query, Response
from fastapi.routing import APIRouter
@ -16,13 +16,19 @@ from invokeai.app.services.model_install import ModelInstallJob, ModelSource
from invokeai.app.services.model_records import (
DuplicateModelException,
InvalidModelException,
+ModelRecordOrderBy,
+ModelSummary,
UnknownModelException,
)
+from invokeai.app.services.shared.pagination import PaginatedResults
from invokeai.backend.model_manager.config import (
AnyModelConfig,
BaseModelType,
+ModelFormat,
ModelType,
)
+from invokeai.backend.model_manager.merge import MergeInterpolationMethod, ModelMerger
+from invokeai.backend.model_manager.metadata import AnyModelRepoMetadata
from ..dependencies import ApiDependencies
@ -32,11 +38,20 @@ model_records_router = APIRouter(prefix="/v1/model/record", tags=["model_manager
class ModelsList(BaseModel):
"""Return list of configs."""
-models: list[AnyModelConfig]
+models: List[AnyModelConfig]
model_config = ConfigDict(use_enum_values=True)
class ModelTagSet(BaseModel):
"""Return tags for a set of models."""
key: str
name: str
author: str
tags: Set[str]
@model_records_router.get(
"/",
operation_id="list_model_records",
@ -45,7 +60,7 @@ async def list_model_records(
base_models: Optional[List[BaseModelType]] = Query(default=None, description="Base models to include"),
model_type: Optional[ModelType] = Query(default=None, description="The type of model to get"),
model_name: Optional[str] = Query(default=None, description="Exact match on the name of the model"),
-model_format: Optional[str] = Query(
+model_format: Optional[ModelFormat] = Query(
default=None, description="Exact match on the format of the model (e.g. 'diffusers')"
),
) -> ModelsList:
@ -86,6 +101,59 @@ async def get_model_record(
raise HTTPException(status_code=404, detail=str(e))
@model_records_router.get("/meta", operation_id="list_model_summary")
async def list_model_summary(
page: int = Query(default=0, description="The page to get"),
per_page: int = Query(default=10, description="The number of models per page"),
order_by: ModelRecordOrderBy = Query(default=ModelRecordOrderBy.Default, description="The attribute to order by"),
) -> PaginatedResults[ModelSummary]:
"""Gets a page of model summary data."""
return ApiDependencies.invoker.services.model_records.list_models(page=page, per_page=per_page, order_by=order_by)
@model_records_router.get(
"/meta/i/{key}",
operation_id="get_model_metadata",
responses={
200: {"description": "Success"},
400: {"description": "Bad request"},
404: {"description": "No metadata available"},
},
)
async def get_model_metadata(
key: str = Path(description="Key of the model repo metadata to fetch."),
) -> Optional[AnyModelRepoMetadata]:
"""Get a model metadata object."""
record_store = ApiDependencies.invoker.services.model_records
result = record_store.get_metadata(key)
if not result:
raise HTTPException(status_code=404, detail="No metadata for a model with this key")
return result
@model_records_router.get(
"/tags",
operation_id="list_tags",
)
async def list_tags() -> Set[str]:
"""Get a unique set of all the model tags."""
record_store = ApiDependencies.invoker.services.model_records
return record_store.list_tags()
@model_records_router.get(
"/tags/search",
operation_id="search_by_metadata_tags",
)
async def search_by_metadata_tags(
tags: Set[str] = Query(default=None, description="Tags to search for"),
) -> ModelsList:
"""Get a list of models."""
record_store = ApiDependencies.invoker.services.model_records
results = record_store.search_by_metadata_tag(tags)
return ModelsList(models=results)
@model_records_router.patch(
"/i/{key}",
operation_id="update_model_record",
@ -159,9 +227,7 @@ async def del_model_record(
async def add_model_record(
config: Annotated[AnyModelConfig, Body(description="Model config", discriminator="type")],
) -> AnyModelConfig:
-"""
+"""Add a model using the configuration information appropriate for its type."""
-Add a model using the configuration information appropriate for its type.
-"""
logger = ApiDependencies.invoker.services.logger
record_store = ApiDependencies.invoker.services.model_records
if config.key == "<NOKEY>":
@ -243,7 +309,7 @@ async def import_model(
Installation occurs in the background. Either use list_model_install_jobs()
to poll for completion, or listen on the event bus for the following events:
-"model_install_started"
+"model_install_running"
"model_install_completed"
"model_install_error"
@ -279,16 +345,46 @@ async def import_model(
operation_id="list_model_install_jobs", operation_id="list_model_install_jobs",
) )
async def list_model_install_jobs() -> List[ModelInstallJob]: async def list_model_install_jobs() -> List[ModelInstallJob]:
""" """Return list of model install jobs."""
Return list of model install jobs.
If the optional 'source' argument is provided, then the list will be filtered
for partial string matches against the install source.
"""
jobs: List[ModelInstallJob] = ApiDependencies.invoker.services.model_install.list_jobs() jobs: List[ModelInstallJob] = ApiDependencies.invoker.services.model_install.list_jobs()
return jobs return jobs
@model_records_router.get(
"/import/{id}",
operation_id="get_model_install_job",
responses={
200: {"description": "Success"},
404: {"description": "No such job"},
},
)
async def get_model_install_job(id: int = Path(description="Model install id")) -> ModelInstallJob:
"""Return model install job corresponding to the given source."""
try:
return ApiDependencies.invoker.services.model_install.get_job_by_id(id)
except ValueError as e:
raise HTTPException(status_code=404, detail=str(e))
@model_records_router.delete(
"/import/{id}",
operation_id="cancel_model_install_job",
responses={
201: {"description": "The job was cancelled successfully"},
415: {"description": "No such job"},
},
status_code=201,
)
async def cancel_model_install_job(id: int = Path(description="Model install job ID")) -> None:
"""Cancel the model install job(s) corresponding to the given job ID."""
installer = ApiDependencies.invoker.services.model_install
try:
job = installer.get_job_by_id(id)
except ValueError as e:
raise HTTPException(status_code=415, detail=str(e))
installer.cancel_job(job)
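Together with `list_model_install_jobs`, these endpoints let a client poll and cancel installs by numeric job ID. A hedged sketch of the client side (the host, port, and `/api` prefix are assumptions; the `/v1/model/record` prefix comes from the router definition above):

```python
import requests

BASE = "http://localhost:9090/api/v1/model/record"  # host/port/prefix are assumptions

# Poll one install job by its integer id
job = requests.get(f"{BASE}/import/1")
job.raise_for_status()
print(job.json())

# Cancel it
requests.delete(f"{BASE}/import/1").raise_for_status()

# Prune completed and errored jobs from the list
requests.patch(f"{BASE}/import").raise_for_status()
```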
@model_records_router.patch(
"/import",
operation_id="prune_model_install_jobs",
@ -298,9 +394,7 @@ async def list_model_install_jobs() -> List[ModelInstallJob]:
},
)
async def prune_model_install_jobs() -> Response:
-"""
+"""Prune all completed and errored jobs from the install job list."""
-Prune all completed and errored jobs from the install job list.
-"""
ApiDependencies.invoker.services.model_install.prune_jobs()
return Response(status_code=204)
@ -315,8 +409,64 @@ async def prune_model_install_jobs() -> Response:
) )
async def sync_models_to_config() -> Response: async def sync_models_to_config() -> Response:
""" """
Traverse the models and autoimport directories. Model files without a corresponding Traverse the models and autoimport directories.
Model files without a corresponding
record in the database are added. Orphan records without a model file are deleted. record in the database are added. Orphan records without a model file are deleted.
""" """
ApiDependencies.invoker.services.model_install.sync_to_config() ApiDependencies.invoker.services.model_install.sync_to_config()
return Response(status_code=204) return Response(status_code=204)
@model_records_router.put(
"/merge",
operation_id="merge",
)
async def merge(
keys: List[str] = Body(description="Keys for two to three models to merge", min_length=2, max_length=3),
merged_model_name: Optional[str] = Body(description="Name of destination model", default=None),
alpha: float = Body(description="Alpha weighting strength to apply to 2d and 3d models", default=0.5),
force: bool = Body(
description="Force merging of models created with different versions of diffusers",
default=False,
),
interp: Optional[MergeInterpolationMethod] = Body(description="Interpolation method", default=None),
merge_dest_directory: Optional[str] = Body(
description="Save the merged model to the designated directory (with 'merged_model_name' appended)",
default=None,
),
) -> AnyModelConfig:
"""
Merge diffusers models.
keys: List of 2-3 model keys to merge together. All models must use the same base type.
merged_model_name: Name for the merged model [Concat model names]
alpha: Alpha value (0.0-1.0). Higher values give more weight to the second model [0.5]
force: If true, force the merge even if the models were generated by different versions of the diffusers library [False]
interp: Interpolation method. One of "weighted_sum", "sigmoid", "inv_sigmoid" or "add_difference" [weighted_sum]
merge_dest_directory: Specify a directory to store the merged model in [models directory]
"""
logger = ApiDependencies.invoker.services.logger
try:
logger.info(f"Merging models: {keys} into {merge_dest_directory or '<MODELS>'}/{merged_model_name}")
dest = pathlib.Path(merge_dest_directory) if merge_dest_directory else None
installer = ApiDependencies.invoker.services.model_install
merger = ModelMerger(installer)
model_names = [installer.record_store.get_model(x).name for x in keys]
response = merger.merge_diffusion_models_and_save(
model_keys=keys,
merged_model_name=merged_model_name or "+".join(model_names),
alpha=alpha,
interp=interp,
force=force,
merge_dest_directory=dest,
)
except UnknownModelException:
raise HTTPException(
status_code=404,
detail=f"One or more of the models '{keys}' were not found",
)
except ValueError as e:
raise HTTPException(status_code=400, detail=str(e))
return response
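A hedged client-side sketch of calling this endpoint; the host and the /api/v1/model/record prefix are assumptions, and the model keys are placeholders:
import requests

BASE = "http://127.0.0.1:9090/api/v1/model/record"  # assumed host and route prefix

resp = requests.put(
    f"{BASE}/merge",
    json={
        "keys": ["model-key-a", "model-key-b"],  # placeholder keys; models must share a base type
        "merged_model_name": "my-merged-model",
        "alpha": 0.5,                            # weight toward the second model
        "interp": "weighted_sum",                # or "sigmoid", "inv_sigmoid", "add_difference"
        "force": False,
    },
)
resp.raise_for_status()
merged = resp.json()  # the resulting AnyModelConfig record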

View File

@ -23,10 +23,11 @@ class DynamicPromptsResponse(BaseModel):
) )
async def parse_dynamicprompts( async def parse_dynamicprompts(
prompt: str = Body(description="The prompt to parse with dynamicprompts"), prompt: str = Body(description="The prompt to parse with dynamicprompts"),
max_prompts: int = Body(default=1000, description="The max number of prompts to generate"), max_prompts: int = Body(ge=1, le=10000, default=1000, description="The max number of prompts to generate"),
combinatorial: bool = Body(default=True, description="Whether to use the combinatorial generator"), combinatorial: bool = Body(default=True, description="Whether to use the combinatorial generator"),
) -> DynamicPromptsResponse: ) -> DynamicPromptsResponse:
"""Creates a batch process""" """Creates a batch process"""
max_prompts = min(max_prompts, 10000)
generator: Union[RandomPromptGenerator, CombinatorialPromptGenerator] generator: Union[RandomPromptGenerator, CombinatorialPromptGenerator]
try: try:
error: Optional[str] = None error: Optional[str] = None

View File

@ -14,7 +14,7 @@ class SocketIO:
def __init__(self, app: FastAPI): def __init__(self, app: FastAPI):
self.__sio = AsyncServer(async_mode="asgi", cors_allowed_origins="*") self.__sio = AsyncServer(async_mode="asgi", cors_allowed_origins="*")
self.__app = ASGIApp(socketio_server=self.__sio, socketio_path="socket.io") self.__app = ASGIApp(socketio_server=self.__sio, socketio_path="/ws/socket.io")
app.mount("/ws", self.__app) app.mount("/ws", self.__app)
self.__sio.on("subscribe_queue", handler=self._handle_sub_queue) self.__sio.on("subscribe_queue", handler=self._handle_sub_queue)
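With the ASGI app mounted at /ws and socketio_path set to /ws/socket.io, clients must use the full handshake path. A minimal sketch with the python-socketio client (host, port, and the payload shape are assumptions):
import socketio

sio = socketio.Client()
# The Engine.IO handshake now happens at /ws/socket.io rather than the default /socket.io.
sio.connect("http://127.0.0.1:9090", socketio_path="/ws/socket.io")
sio.emit("subscribe_queue", {"queue_id": "default"})  # payload shape is an assumption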

View File

@ -3,6 +3,7 @@
# values from the command line or config file. # values from the command line or config file.
import sys import sys
from invokeai.app.api.no_cache_staticfiles import NoCacheStaticFiles
from invokeai.version.invokeai_version import __version__ from invokeai.version.invokeai_version import __version__
from .services.config import InvokeAIAppConfig from .services.config import InvokeAIAppConfig
@ -27,8 +28,7 @@ if True: # hack to make flake8 happy with imports coming after setting up the c
from fastapi.middleware.gzip import GZipMiddleware from fastapi.middleware.gzip import GZipMiddleware
from fastapi.openapi.docs import get_redoc_html, get_swagger_ui_html from fastapi.openapi.docs import get_redoc_html, get_swagger_ui_html
from fastapi.openapi.utils import get_openapi from fastapi.openapi.utils import get_openapi
from fastapi.responses import FileResponse, HTMLResponse from fastapi.responses import HTMLResponse
from fastapi.staticfiles import StaticFiles
from fastapi_events.handlers.local import local_handler from fastapi_events.handlers.local import local_handler
from fastapi_events.middleware import EventHandlerASGIMiddleware from fastapi_events.middleware import EventHandlerASGIMiddleware
from pydantic.json_schema import models_json_schema from pydantic.json_schema import models_json_schema
@ -76,7 +76,7 @@ mimetypes.add_type("text/css", ".css")
# Create the app # Create the app
# TODO: create this all in a method so configuration/etc. can be passed in? # TODO: create this all in a method so configuration/etc. can be passed in?
app = FastAPI(title="Invoke AI", docs_url=None, redoc_url=None, separate_input_output_schemas=False) app = FastAPI(title="Invoke - Community Edition", docs_url=None, redoc_url=None, separate_input_output_schemas=False)
# Add event handler # Add event handler
event_handler_id: int = id(app) event_handler_id: int = id(app)
@ -205,8 +205,8 @@ app.openapi = custom_openapi # type: ignore [method-assign] # this is a valid a
def overridden_swagger() -> HTMLResponse: def overridden_swagger() -> HTMLResponse:
return get_swagger_ui_html( return get_swagger_ui_html(
openapi_url=app.openapi_url, # type: ignore [arg-type] # this is always a string openapi_url=app.openapi_url, # type: ignore [arg-type] # this is always a string
title=app.title, title=f"{app.title} - Swagger UI",
swagger_favicon_url="/static/docs/favicon.ico", swagger_favicon_url="static/docs/invoke-favicon-docs.svg",
) )
@ -214,26 +214,20 @@ def overridden_swagger() -> HTMLResponse:
def overridden_redoc() -> HTMLResponse: def overridden_redoc() -> HTMLResponse:
return get_redoc_html( return get_redoc_html(
openapi_url=app.openapi_url, # type: ignore [arg-type] # this is always a string openapi_url=app.openapi_url, # type: ignore [arg-type] # this is always a string
title=app.title, title=f"{app.title} - Redoc",
redoc_favicon_url="/static/docs/favicon.ico", redoc_favicon_url="static/docs/invoke-favicon-docs.svg",
) )
web_root_path = Path(list(web_dir.__path__)[0]) web_root_path = Path(list(web_dir.__path__)[0])
# Only serve the UI if it has a build try:
if (web_root_path / "dist").exists(): app.mount("/", NoCacheStaticFiles(directory=Path(web_root_path, "dist"), html=True), name="ui")
# Cannot add headers to StaticFiles, so we must serve index.html with a custom route except RuntimeError:
# Add cache-control: no-store header to prevent caching of index.html, which leads to broken UIs at release logger.warning(f"No UI found at {web_root_path}/dist, skipping UI mount")
@app.get("/", include_in_schema=False, name="ui_root") app.mount(
def get_index() -> FileResponse: "/static", NoCacheStaticFiles(directory=Path(web_root_path, "static/")), name="static"
return FileResponse(Path(web_root_path, "dist/index.html"), headers={"Cache-Control": "no-store"}) ) # docs favicon is in here
# # Must mount *after* the other routes else it borks em
app.mount("/assets", StaticFiles(directory=Path(web_root_path, "dist/assets/")), name="assets")
app.mount("/locales", StaticFiles(directory=Path(web_root_path, "dist/locales/")), name="locales")
app.mount("/static", StaticFiles(directory=Path(web_root_path, "static/")), name="static") # docs favicon is in here
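NoCacheStaticFiles is InvokeAI's own helper; its source is not part of this diff. A plausible minimal sketch, assuming it simply forces a no-store Cache-Control header onto every static response:
from starlette.responses import Response
from starlette.staticfiles import StaticFiles
from starlette.types import Scope

class NoCacheStaticFiles(StaticFiles):
    """Serve static files while telling the client never to cache them."""

    async def get_response(self, path: str, scope: Scope) -> Response:
        response = await super().get_response(path, scope)
        # no-store avoids the stale index.html problem described in the removed comment above.
        response.headers["Cache-Control"] = "no-store"
        return response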
def invoke_api() -> None: def invoke_api() -> None:

View File

@ -24,11 +24,13 @@ from controlnet_aux import (
) )
from controlnet_aux.util import HWC3, ade_palette from controlnet_aux.util import HWC3, ade_palette
from PIL import Image from PIL import Image
from pydantic import BaseModel, ConfigDict, Field, field_validator from pydantic import BaseModel, ConfigDict, Field, field_validator, model_validator
from invokeai.app.invocations.primitives import ImageField, ImageOutput from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.app.shared.fields import FieldDescriptions from invokeai.app.shared.fields import FieldDescriptions
from invokeai.backend.image_util.depth_anything import DepthAnythingDetector
from ...backend.model_management import BaseModelType from ...backend.model_management import BaseModelType
from .baseinvocation import ( from .baseinvocation import (
@ -75,17 +77,16 @@ class ControlField(BaseModel):
resize_mode: CONTROLNET_RESIZE_VALUES = Field(default="just_resize", description="The resize mode to use") resize_mode: CONTROLNET_RESIZE_VALUES = Field(default="just_resize", description="The resize mode to use")
@field_validator("control_weight") @field_validator("control_weight")
@classmethod
def validate_control_weight(cls, v): def validate_control_weight(cls, v):
"""Validate that all control weights in the valid range""" validate_weights(v)
if isinstance(v, list):
for i in v:
if i < -1 or i > 2:
raise ValueError("Control weights must be within -1 to 2 range")
else:
if v < -1 or v > 2:
raise ValueError("Control weights must be within -1 to 2 range")
return v return v
@model_validator(mode="after")
def validate_begin_end_step_percent(self):
validate_begin_end_step(self.begin_step_percent, self.end_step_percent)
return self
@invocation_output("control_output") @invocation_output("control_output")
class ControlOutput(BaseInvocationOutput): class ControlOutput(BaseInvocationOutput):
@ -95,17 +96,17 @@ class ControlOutput(BaseInvocationOutput):
control: ControlField = OutputField(description=FieldDescriptions.control) control: ControlField = OutputField(description=FieldDescriptions.control)
@invocation("controlnet", title="ControlNet", tags=["controlnet"], category="controlnet", version="1.1.0") @invocation("controlnet", title="ControlNet", tags=["controlnet"], category="controlnet", version="1.1.1")
class ControlNetInvocation(BaseInvocation): class ControlNetInvocation(BaseInvocation):
"""Collects ControlNet info to pass to other nodes""" """Collects ControlNet info to pass to other nodes"""
image: ImageField = InputField(description="The control image") image: ImageField = InputField(description="The control image")
control_model: ControlNetModelField = InputField(description=FieldDescriptions.controlnet_model, input=Input.Direct) control_model: ControlNetModelField = InputField(description=FieldDescriptions.controlnet_model, input=Input.Direct)
control_weight: Union[float, List[float]] = InputField( control_weight: Union[float, List[float]] = InputField(
default=1.0, description="The weight given to the ControlNet" default=1.0, ge=-1, le=2, description="The weight given to the ControlNet"
) )
begin_step_percent: float = InputField( begin_step_percent: float = InputField(
default=0, ge=-1, le=2, description="When the ControlNet is first applied (% of total steps)" default=0, ge=0, le=1, description="When the ControlNet is first applied (% of total steps)"
) )
end_step_percent: float = InputField( end_step_percent: float = InputField(
default=1, ge=0, le=1, description="When the ControlNet is last applied (% of total steps)" default=1, ge=0, le=1, description="When the ControlNet is last applied (% of total steps)"
@ -113,6 +114,17 @@ class ControlNetInvocation(BaseInvocation):
control_mode: CONTROLNET_MODE_VALUES = InputField(default="balanced", description="The control mode used") control_mode: CONTROLNET_MODE_VALUES = InputField(default="balanced", description="The control mode used")
resize_mode: CONTROLNET_RESIZE_VALUES = InputField(default="just_resize", description="The resize mode used") resize_mode: CONTROLNET_RESIZE_VALUES = InputField(default="just_resize", description="The resize mode used")
@field_validator("control_weight")
@classmethod
def validate_control_weight(cls, v):
validate_weights(v)
return v
@model_validator(mode="after")
def validate_begin_end_step_percent(self) -> "ControlNetInvocation":
validate_begin_end_step(self.begin_step_percent, self.end_step_percent)
return self
def invoke(self, context: InvocationContext) -> ControlOutput: def invoke(self, context: InvocationContext) -> ControlOutput:
return ControlOutput( return ControlOutput(
control=ControlField( control=ControlField(
@ -591,3 +603,33 @@ class ColorMapImageProcessorInvocation(ImageProcessorInvocation):
color_map = cv2.resize(color_map, (width, height), interpolation=cv2.INTER_NEAREST) color_map = cv2.resize(color_map, (width, height), interpolation=cv2.INTER_NEAREST)
color_map = Image.fromarray(color_map) color_map = Image.fromarray(color_map)
return color_map return color_map
DEPTH_ANYTHING_MODEL_SIZES = Literal["large", "base", "small"]
@invocation(
"depth_anything_image_processor",
title="Depth Anything Processor",
tags=["controlnet", "depth", "depth anything"],
category="controlnet",
version="1.0.0",
)
class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
"""Generates a depth map based on the Depth Anything algorithm"""
model_size: DEPTH_ANYTHING_MODEL_SIZES = InputField(
default="small", description="The size of the depth model to use"
)
resolution: int = InputField(default=512, ge=64, multiple_of=64, description=FieldDescriptions.image_res)
offload: bool = InputField(default=False)
def run_processor(self, image):
depth_anything_detector = DepthAnythingDetector()
depth_anything_detector.load_model(model_size=self.model_size)
if image.mode == "RGBA":
image = image.convert("RGB")
processed_image = depth_anything_detector(image=image, resolution=self.resolution, offload=self.offload)
return processed_image
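The detector can also be driven directly; a short sketch using only the API exercised above (file paths are placeholders):
from PIL import Image

from invokeai.backend.image_util.depth_anything import DepthAnythingDetector

detector = DepthAnythingDetector()
detector.load_model(model_size="small")  # "large", "base" or "small"

image = Image.open("photo.png").convert("RGB")  # the processor converts RGBA to RGB first
depth_map = detector(image=image, resolution=512, offload=True)
depth_map.save("depth.png")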

View File

@ -2,7 +2,7 @@ import os
from builtins import float from builtins import float
from typing import List, Union from typing import List, Union
from pydantic import BaseModel, ConfigDict, Field from pydantic import BaseModel, ConfigDict, Field, field_validator, model_validator
from invokeai.app.invocations.baseinvocation import ( from invokeai.app.invocations.baseinvocation import (
BaseInvocation, BaseInvocation,
@ -15,6 +15,7 @@ from invokeai.app.invocations.baseinvocation import (
invocation_output, invocation_output,
) )
from invokeai.app.invocations.primitives import ImageField from invokeai.app.invocations.primitives import ImageField
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.shared.fields import FieldDescriptions from invokeai.app.shared.fields import FieldDescriptions
from invokeai.backend.model_management.models.base import BaseModelType, ModelType from invokeai.backend.model_management.models.base import BaseModelType, ModelType
from invokeai.backend.model_management.models.ip_adapter import get_ip_adapter_image_encoder_model_id from invokeai.backend.model_management.models.ip_adapter import get_ip_adapter_image_encoder_model_id
@ -39,7 +40,6 @@ class IPAdapterField(BaseModel):
ip_adapter_model: IPAdapterModelField = Field(description="The IP-Adapter model to use.") ip_adapter_model: IPAdapterModelField = Field(description="The IP-Adapter model to use.")
image_encoder_model: CLIPVisionModelField = Field(description="The name of the CLIP image encoder model.") image_encoder_model: CLIPVisionModelField = Field(description="The name of the CLIP image encoder model.")
weight: Union[float, List[float]] = Field(default=1, description="The weight given to the IP-Adapter") weight: Union[float, List[float]] = Field(default=1, description="The weight given to the IP-Adapter")
# weight: float = Field(default=1.0, ge=0, description="The weight of the IP-Adapter.")
begin_step_percent: float = Field( begin_step_percent: float = Field(
default=0, ge=0, le=1, description="When the IP-Adapter is first applied (% of total steps)" default=0, ge=0, le=1, description="When the IP-Adapter is first applied (% of total steps)"
) )
@ -47,6 +47,17 @@ class IPAdapterField(BaseModel):
default=1, ge=0, le=1, description="When the IP-Adapter is last applied (% of total steps)" default=1, ge=0, le=1, description="When the IP-Adapter is last applied (% of total steps)"
) )
@field_validator("weight")
@classmethod
def validate_ip_adapter_weight(cls, v):
validate_weights(v)
return v
@model_validator(mode="after")
def validate_begin_end_step_percent(self):
validate_begin_end_step(self.begin_step_percent, self.end_step_percent)
return self
@invocation_output("ip_adapter_output") @invocation_output("ip_adapter_output")
class IPAdapterOutput(BaseInvocationOutput): class IPAdapterOutput(BaseInvocationOutput):
@ -54,7 +65,7 @@ class IPAdapterOutput(BaseInvocationOutput):
ip_adapter: IPAdapterField = OutputField(description=FieldDescriptions.ip_adapter, title="IP-Adapter") ip_adapter: IPAdapterField = OutputField(description=FieldDescriptions.ip_adapter, title="IP-Adapter")
@invocation("ip_adapter", title="IP-Adapter", tags=["ip_adapter", "control"], category="ip_adapter", version="1.1.0") @invocation("ip_adapter", title="IP-Adapter", tags=["ip_adapter", "control"], category="ip_adapter", version="1.1.1")
class IPAdapterInvocation(BaseInvocation): class IPAdapterInvocation(BaseInvocation):
"""Collects IP-Adapter info to pass to other nodes.""" """Collects IP-Adapter info to pass to other nodes."""
@ -64,18 +75,27 @@ class IPAdapterInvocation(BaseInvocation):
description="The IP-Adapter model.", title="IP-Adapter Model", input=Input.Direct, ui_order=-1 description="The IP-Adapter model.", title="IP-Adapter Model", input=Input.Direct, ui_order=-1
) )
# weight: float = InputField(default=1.0, description="The weight of the IP-Adapter.", ui_type=UIType.Float)
weight: Union[float, List[float]] = InputField( weight: Union[float, List[float]] = InputField(
default=1, ge=-1, description="The weight given to the IP-Adapter", title="Weight" default=1, description="The weight given to the IP-Adapter", title="Weight"
) )
begin_step_percent: float = InputField( begin_step_percent: float = InputField(
default=0, ge=-1, le=2, description="When the IP-Adapter is first applied (% of total steps)" default=0, ge=0, le=1, description="When the IP-Adapter is first applied (% of total steps)"
) )
end_step_percent: float = InputField( end_step_percent: float = InputField(
default=1, ge=0, le=1, description="When the IP-Adapter is last applied (% of total steps)" default=1, ge=0, le=1, description="When the IP-Adapter is last applied (% of total steps)"
) )
@field_validator("weight")
@classmethod
def validate_ip_adapter_weight(cls, v):
validate_weights(v)
return v
@model_validator(mode="after")
def validate_begin_end_step_percent(self):
validate_begin_end_step(self.begin_step_percent, self.end_step_percent)
return self
def invoke(self, context: InvocationContext) -> IPAdapterOutput: def invoke(self, context: InvocationContext) -> IPAdapterOutput:
# Lookup the CLIP Vision encoder that is intended to be used with the IP-Adapter model. # Lookup the CLIP Vision encoder that is intended to be used with the IP-Adapter model.
ip_adapter_info = context.services.model_manager.model_info( ip_adapter_info = context.services.model_manager.model_info(

View File

@ -1,5 +1,6 @@
# Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654) # Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654)
import math
from contextlib import ExitStack from contextlib import ExitStack
from functools import singledispatchmethod from functools import singledispatchmethod
from typing import List, Literal, Optional, Union from typing import List, Literal, Optional, Union
@ -220,7 +221,7 @@ def get_scheduler(
title="Denoise Latents", title="Denoise Latents",
tags=["latents", "denoise", "txt2img", "t2i", "t2l", "img2img", "i2i", "l2l"], tags=["latents", "denoise", "txt2img", "t2i", "t2l", "img2img", "i2i", "l2l"],
category="latents", category="latents",
version="1.5.0", version="1.5.1",
) )
class DenoiseLatentsInvocation(BaseInvocation): class DenoiseLatentsInvocation(BaseInvocation):
"""Denoises noisy latents to decodable images""" """Denoises noisy latents to decodable images"""
@ -279,7 +280,7 @@ class DenoiseLatentsInvocation(BaseInvocation):
ui_order=7, ui_order=7,
) )
cfg_rescale_multiplier: float = InputField( cfg_rescale_multiplier: float = InputField(
default=0, ge=0, lt=1, description=FieldDescriptions.cfg_rescale_multiplier title="CFG Rescale Multiplier", default=0, ge=0, lt=1, description=FieldDescriptions.cfg_rescale_multiplier
) )
latents: Optional[LatentsField] = InputField( latents: Optional[LatentsField] = InputField(
default=None, default=None,
@ -1228,3 +1229,57 @@ class CropLatentsCoreInvocation(BaseInvocation):
context.services.latents.save(name, cropped_latents) context.services.latents.save(name, cropped_latents)
return build_latents_output(latents_name=name, latents=cropped_latents) return build_latents_output(latents_name=name, latents=cropped_latents)
@invocation_output("ideal_size_output")
class IdealSizeOutput(BaseInvocationOutput):
"""Base class for invocations that output an image"""
width: int = OutputField(description="The ideal width of the image (in pixels)")
height: int = OutputField(description="The ideal height of the image (in pixels)")
@invocation(
"ideal_size",
title="Ideal Size",
tags=["latents", "math", "ideal_size"],
version="1.0.2",
)
class IdealSizeInvocation(BaseInvocation):
"""Calculates the ideal size for generation to avoid duplication"""
width: int = InputField(default=1024, description="Final image width")
height: int = InputField(default=576, description="Final image height")
unet: UNetField = InputField(default=None, description=FieldDescriptions.unet)
multiplier: float = InputField(
default=1.0,
description="Amount to multiply the model's dimensions by when calculating the ideal size (may result in initial generation artifacts if too large)",
)
def trim_to_multiple_of(self, *args, multiple_of=LATENT_SCALE_FACTOR):
return tuple((x - x % multiple_of) for x in args)
def invoke(self, context: InvocationContext) -> IdealSizeOutput:
aspect = self.width / self.height
dimension = 512
if self.unet.unet.base_model == BaseModelType.StableDiffusion2:
dimension = 768
elif self.unet.unet.base_model == BaseModelType.StableDiffusionXL:
dimension = 1024
dimension = dimension * self.multiplier
min_dimension = math.floor(dimension * 0.5)
model_area = dimension * dimension # hardcoded for now since all models are trained on square images
if aspect > 1.0:
init_height = max(min_dimension, math.sqrt(model_area / aspect))
init_width = init_height * aspect
else:
init_width = max(min_dimension, math.sqrt(model_area * aspect))
init_height = init_width / aspect
scaled_width, scaled_height = self.trim_to_multiple_of(
math.floor(init_width),
math.floor(init_height),
)
return IdealSizeOutput(width=scaled_width, height=scaled_height)
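Worked through for an SDXL model at a 1920x1080 target with multiplier 1.0, assuming LATENT_SCALE_FACTOR is 8 (an assumption about the constant's value):
import math

aspect = 1920 / 1080                                    # ≈ 1.778, so the width branch runs
dimension = 1024                                        # SDXL base dimension
model_area = dimension**2                               # 1,048,576
init_height = max(512, math.sqrt(model_area / aspect))  # 768.0
init_width = init_height * aspect                       # ≈ 1365.3
w, h = math.floor(init_width), math.floor(init_height)
print(w - w % 8, h - h % 8)                             # -> 1360 768
So a 1920x1080 request generates at 1360x768 and relies on upscaling for the final resolution.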

View File

@ -1,6 +1,6 @@
from typing import Union from typing import Union
from pydantic import BaseModel, ConfigDict, Field from pydantic import BaseModel, ConfigDict, Field, field_validator, model_validator
from invokeai.app.invocations.baseinvocation import ( from invokeai.app.invocations.baseinvocation import (
BaseInvocation, BaseInvocation,
@ -14,6 +14,7 @@ from invokeai.app.invocations.baseinvocation import (
) )
from invokeai.app.invocations.controlnet_image_processors import CONTROLNET_RESIZE_VALUES from invokeai.app.invocations.controlnet_image_processors import CONTROLNET_RESIZE_VALUES
from invokeai.app.invocations.primitives import ImageField from invokeai.app.invocations.primitives import ImageField
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights
from invokeai.app.shared.fields import FieldDescriptions from invokeai.app.shared.fields import FieldDescriptions
from invokeai.backend.model_management.models.base import BaseModelType from invokeai.backend.model_management.models.base import BaseModelType
@ -37,6 +38,17 @@ class T2IAdapterField(BaseModel):
) )
resize_mode: CONTROLNET_RESIZE_VALUES = Field(default="just_resize", description="The resize mode to use") resize_mode: CONTROLNET_RESIZE_VALUES = Field(default="just_resize", description="The resize mode to use")
@field_validator("weight")
@classmethod
def validate_t2i_adapter_weight(cls, v):
validate_weights(v)
return v
@model_validator(mode="after")
def validate_begin_end_step_percent(self):
validate_begin_end_step(self.begin_step_percent, self.end_step_percent)
return self
@invocation_output("t2i_adapter_output") @invocation_output("t2i_adapter_output")
class T2IAdapterOutput(BaseInvocationOutput): class T2IAdapterOutput(BaseInvocationOutput):
@ -44,7 +56,7 @@ class T2IAdapterOutput(BaseInvocationOutput):
@invocation( @invocation(
"t2i_adapter", title="T2I-Adapter", tags=["t2i_adapter", "control"], category="t2i_adapter", version="1.0.0" "t2i_adapter", title="T2I-Adapter", tags=["t2i_adapter", "control"], category="t2i_adapter", version="1.0.1"
) )
class T2IAdapterInvocation(BaseInvocation): class T2IAdapterInvocation(BaseInvocation):
"""Collects T2I-Adapter info to pass to other nodes.""" """Collects T2I-Adapter info to pass to other nodes."""
@ -61,7 +73,7 @@ class T2IAdapterInvocation(BaseInvocation):
default=1, ge=0, description="The weight given to the T2I-Adapter", title="Weight" default=1, ge=0, description="The weight given to the T2I-Adapter", title="Weight"
) )
begin_step_percent: float = InputField( begin_step_percent: float = InputField(
default=0, ge=-1, le=2, description="When the T2I-Adapter is first applied (% of total steps)" default=0, ge=0, le=1, description="When the T2I-Adapter is first applied (% of total steps)"
) )
end_step_percent: float = InputField( end_step_percent: float = InputField(
default=1, ge=0, le=1, description="When the T2I-Adapter is last applied (% of total steps)" default=1, ge=0, le=1, description="When the T2I-Adapter is last applied (% of total steps)"
@ -71,6 +83,17 @@ class T2IAdapterInvocation(BaseInvocation):
description="The resize mode applied to the T2I-Adapter input image so that it matches the target output size.", description="The resize mode applied to the T2I-Adapter input image so that it matches the target output size.",
) )
@field_validator("weight")
@classmethod
def validate_t2i_adapter_weight(cls, v):
validate_weights(v)
return v
@model_validator(mode="after")
def validate_begin_end_step_percent(self):
validate_begin_end_step(self.begin_step_percent, self.end_step_percent)
return self
def invoke(self, context: InvocationContext) -> T2IAdapterOutput: def invoke(self, context: InvocationContext) -> T2IAdapterOutput:
return T2IAdapterOutput( return T2IAdapterOutput(
t2i_adapter=T2IAdapterField( t2i_adapter=T2IAdapterField(

View File

@ -5,12 +5,12 @@ from typing import Literal
import cv2 import cv2
import numpy as np import numpy as np
import torch import torch
from basicsr.archs.rrdbnet_arch import RRDBNet
from PIL import Image from PIL import Image
from pydantic import ConfigDict from pydantic import ConfigDict
from invokeai.app.invocations.primitives import ImageField, ImageOutput from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.backend.image_util.basicsr.rrdbnet_arch import RRDBNet
from invokeai.backend.image_util.realesrgan.realesrgan import RealESRGAN from invokeai.backend.image_util.realesrgan.realesrgan import RealESRGAN
from invokeai.backend.util.devices import choose_torch_device from invokeai.backend.util.devices import choose_torch_device

View File

@ -0,0 +1,14 @@
from typing import Union
def validate_weights(weights: Union[float, list[float]]) -> None:
"""Validate that all control weights in the valid range"""
to_validate = weights if isinstance(weights, list) else [weights]
if any(i < -1 or i > 2 for i in to_validate):
raise ValueError("Control weights must be within -1 to 2 range")
def validate_begin_end_step(begin_step_percent: float, end_step_percent: float) -> None:
"""Validate that begin_step_percent is less than end_step_percent"""
if begin_step_percent >= end_step_percent:
raise ValueError("Begin step percent must be less than end step percent")
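A quick sketch of how these helpers behave (values chosen for illustration):
from invokeai.app.invocations.util import validate_begin_end_step, validate_weights

validate_weights(0.75)             # ok: scalar within [-1, 2]
validate_weights([0.5, 1.5])       # ok: every element within range
validate_begin_end_step(0.0, 1.0)  # ok: begin < end

try:
    validate_weights([0.5, 2.5])   # 2.5 is out of range
except ValueError as err:
    print(err)                     # Control weights must be within -1 to 2 range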

View File

@ -1,5 +1,7 @@
"""Init file for InvokeAI configure package.""" """Init file for InvokeAI configure package."""
from invokeai.app.services.config.config_common import PagingArgumentParser
from .config_default import InvokeAIAppConfig, get_invokeai_config from .config_default import InvokeAIAppConfig, get_invokeai_config
__all__ = ["InvokeAIAppConfig", "get_invokeai_config"] __all__ = ["InvokeAIAppConfig", "get_invokeai_config", "PagingArgumentParser"]

View File

@ -173,10 +173,10 @@ from __future__ import annotations
import os import os
from pathlib import Path from pathlib import Path
from typing import Any, ClassVar, Dict, List, Literal, Optional, Union, get_type_hints from typing import Any, ClassVar, Dict, List, Literal, Optional, Union
from omegaconf import DictConfig, OmegaConf from omegaconf import DictConfig, OmegaConf
from pydantic import Field, TypeAdapter from pydantic import Field
from pydantic.config import JsonDict from pydantic.config import JsonDict
from pydantic_settings import SettingsConfigDict from pydantic_settings import SettingsConfigDict
@ -209,7 +209,7 @@ class InvokeAIAppConfig(InvokeAISettings):
"""Configuration object for InvokeAI App.""" """Configuration object for InvokeAI App."""
singleton_config: ClassVar[Optional[InvokeAIAppConfig]] = None singleton_config: ClassVar[Optional[InvokeAIAppConfig]] = None
singleton_init: ClassVar[Optional[Dict]] = None singleton_init: ClassVar[Optional[Dict[str, Any]]] = None
# fmt: off # fmt: off
type: Literal["InvokeAI"] = "InvokeAI" type: Literal["InvokeAI"] = "InvokeAI"
@ -251,7 +251,11 @@ class InvokeAIAppConfig(InvokeAISettings):
log_level : Literal["debug", "info", "warning", "error", "critical"] = Field(default="info", description="Emit logging messages at this level or higher", json_schema_extra=Categories.Logging) log_level : Literal["debug", "info", "warning", "error", "critical"] = Field(default="info", description="Emit logging messages at this level or higher", json_schema_extra=Categories.Logging)
log_sql : bool = Field(default=False, description="Log SQL queries", json_schema_extra=Categories.Logging) log_sql : bool = Field(default=False, description="Log SQL queries", json_schema_extra=Categories.Logging)
# Development
dev_reload : bool = Field(default=False, description="Automatically reload when Python sources are changed.", json_schema_extra=Categories.Development) dev_reload : bool = Field(default=False, description="Automatically reload when Python sources are changed.", json_schema_extra=Categories.Development)
profile_graphs : bool = Field(default=False, description="Enable graph profiling", json_schema_extra=Categories.Development)
profile_prefix : Optional[str] = Field(default=None, description="An optional prefix for profile output files.", json_schema_extra=Categories.Development)
profiles_dir : Path = Field(default=Path('profiles'), description="Directory for graph profiles", json_schema_extra=Categories.Development)
version : bool = Field(default=False, description="Show InvokeAI version and exit", json_schema_extra=Categories.Other) version : bool = Field(default=False, description="Show InvokeAI version and exit", json_schema_extra=Categories.Other)
@ -263,14 +267,14 @@ class InvokeAIAppConfig(InvokeAISettings):
# DEVICE # DEVICE
device : Literal["auto", "cpu", "cuda", "cuda:1", "mps"] = Field(default="auto", description="Generation device", json_schema_extra=Categories.Device) device : Literal["auto", "cpu", "cuda", "cuda:1", "mps"] = Field(default="auto", description="Generation device", json_schema_extra=Categories.Device)
precision : Literal["auto", "float16", "float32", "autocast"] = Field(default="auto", description="Floating point precision", json_schema_extra=Categories.Device) precision : Literal["auto", "float16", "bfloat16", "float32", "autocast"] = Field(default="auto", description="Floating point precision", json_schema_extra=Categories.Device)
# GENERATION # GENERATION
sequential_guidance : bool = Field(default=False, description="Whether to calculate guidance in serial instead of in parallel, lowering memory requirements", json_schema_extra=Categories.Generation) sequential_guidance : bool = Field(default=False, description="Whether to calculate guidance in serial instead of in parallel, lowering memory requirements", json_schema_extra=Categories.Generation)
attention_type : Literal["auto", "normal", "xformers", "sliced", "torch-sdp"] = Field(default="auto", description="Attention type", json_schema_extra=Categories.Generation) attention_type : Literal["auto", "normal", "xformers", "sliced", "torch-sdp"] = Field(default="auto", description="Attention type", json_schema_extra=Categories.Generation)
attention_slice_size: Literal["auto", "balanced", "max", 1, 2, 3, 4, 5, 6, 7, 8] = Field(default="auto", description='Slice size, valid when attention_type=="sliced"', json_schema_extra=Categories.Generation) attention_slice_size: Literal["auto", "balanced", "max", 1, 2, 3, 4, 5, 6, 7, 8] = Field(default="auto", description='Slice size, valid when attention_type=="sliced"', json_schema_extra=Categories.Generation)
force_tiled_decode : bool = Field(default=False, description="Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty)", json_schema_extra=Categories.Generation) force_tiled_decode : bool = Field(default=False, description="Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty)", json_schema_extra=Categories.Generation)
png_compress_level : int = Field(default=6, description="The compress_level setting of PIL.Image.save(), used for PNG encoding. All settings are lossless. 0 = fastest, largest filesize, 9 = slowest, smallest filesize", json_schema_extra=Categories.Generation) png_compress_level : int = Field(default=1, description="The compress_level setting of PIL.Image.save(), used for PNG encoding. All settings are lossless. 0 = fastest, largest filesize, 9 = slowest, smallest filesize", json_schema_extra=Categories.Generation)
# QUEUE # QUEUE
max_queue_size : int = Field(default=10000, gt=0, description="Maximum number of items in the session queue", json_schema_extra=Categories.Queue) max_queue_size : int = Field(default=10000, gt=0, description="Maximum number of items in the session queue", json_schema_extra=Categories.Queue)
@ -280,6 +284,9 @@ class InvokeAIAppConfig(InvokeAISettings):
deny_nodes : Optional[List[str]] = Field(default=None, description="List of nodes to deny. Omit to deny none.", json_schema_extra=Categories.Nodes) deny_nodes : Optional[List[str]] = Field(default=None, description="List of nodes to deny. Omit to deny none.", json_schema_extra=Categories.Nodes)
node_cache_size : int = Field(default=512, description="How many cached nodes to keep in memory", json_schema_extra=Categories.Nodes) node_cache_size : int = Field(default=512, description="How many cached nodes to keep in memory", json_schema_extra=Categories.Nodes)
# MODEL IMPORT
civitai_api_key : Optional[str] = Field(default=os.environ.get("CIVITAI_API_KEY"), description="API key for CivitAI", json_schema_extra=Categories.Other)
# DEPRECATED FIELDS - STILL HERE IN ORDER TO OBTAIN VALUES FROM PRE-3.1 CONFIG FILES # DEPRECATED FIELDS - STILL HERE IN ORDER TO OBTAIN VALUES FROM PRE-3.1 CONFIG FILES
always_use_cpu : bool = Field(default=False, description="If true, use the CPU for rendering even if a GPU is available.", json_schema_extra=Categories.MemoryPerformance) always_use_cpu : bool = Field(default=False, description="If true, use the CPU for rendering even if a GPU is available.", json_schema_extra=Categories.MemoryPerformance)
max_cache_size : Optional[float] = Field(default=None, gt=0, description="Maximum memory amount used by model cache for rapid switching", json_schema_extra=Categories.MemoryPerformance) max_cache_size : Optional[float] = Field(default=None, gt=0, description="Maximum memory amount used by model cache for rapid switching", json_schema_extra=Categories.MemoryPerformance)
@ -289,6 +296,7 @@ class InvokeAIAppConfig(InvokeAISettings):
lora_dir : Optional[Path] = Field(default=None, description='Path to a directory of LoRA/LyCORIS models to be imported on startup.', json_schema_extra=Categories.Paths) lora_dir : Optional[Path] = Field(default=None, description='Path to a directory of LoRA/LyCORIS models to be imported on startup.', json_schema_extra=Categories.Paths)
embedding_dir : Optional[Path] = Field(default=None, description='Path to a directory of Textual Inversion embeddings to be imported on startup.', json_schema_extra=Categories.Paths) embedding_dir : Optional[Path] = Field(default=None, description='Path to a directory of Textual Inversion embeddings to be imported on startup.', json_schema_extra=Categories.Paths)
controlnet_dir : Optional[Path] = Field(default=None, description='Path to a directory of ControlNet embeddings to be imported on startup.', json_schema_extra=Categories.Paths) controlnet_dir : Optional[Path] = Field(default=None, description='Path to a directory of ControlNet embeddings to be imported on startup.', json_schema_extra=Categories.Paths)
# this is not referred to in the source code and can be removed entirely # this is not referred to in the source code and can be removed entirely
#free_gpu_mem : Optional[bool] = Field(default=None, description="If true, purge model from GPU after each generation.", json_schema_extra=Categories.MemoryPerformance) #free_gpu_mem : Optional[bool] = Field(default=None, description="If true, purge model from GPU after each generation.", json_schema_extra=Categories.MemoryPerformance)
@ -301,8 +309,8 @@ class InvokeAIAppConfig(InvokeAISettings):
self, self,
argv: Optional[list[str]] = None, argv: Optional[list[str]] = None,
conf: Optional[DictConfig] = None, conf: Optional[DictConfig] = None,
clobber=False, clobber: Optional[bool] = False,
): ) -> None:
""" """
Update settings with contents of init file, environment, and command-line settings. Update settings with contents of init file, environment, and command-line settings.
@ -328,16 +336,12 @@ class InvokeAIAppConfig(InvokeAISettings):
super().parse_args(argv) super().parse_args(argv)
if self.singleton_init and not clobber: if self.singleton_init and not clobber:
hints = get_type_hints(self.__class__) # When setting values in this way, set validate_assignment to true if you want to validate the value.
for k in self.singleton_init: for k, v in self.singleton_init.items():
setattr( setattr(self, k, v)
self,
k,
TypeAdapter(hints[k]).validate_python(self.singleton_init[k]),
)
@classmethod @classmethod
def get_config(cls, **kwargs: Dict[str, Any]) -> InvokeAIAppConfig: def get_config(cls, **kwargs: Any) -> InvokeAIAppConfig:
"""Return a singleton InvokeAIAppConfig configuration object.""" """Return a singleton InvokeAIAppConfig configuration object."""
if ( if (
cls.singleton_config is None cls.singleton_config is None
@ -449,13 +453,18 @@ class InvokeAIAppConfig(InvokeAISettings):
disabled_in_config = not self.xformers_enabled disabled_in_config = not self.xformers_enabled
return disabled_in_config and self.attention_type != "xformers" return disabled_in_config and self.attention_type != "xformers"
@property
def profiles_path(self) -> Path:
"""Path to the graph profiles directory."""
return self._resolve(self.profiles_dir)
@staticmethod @staticmethod
def find_root() -> Path: def find_root() -> Path:
"""Choose the runtime root directory when not specified on command line or init file.""" """Choose the runtime root directory when not specified on command line or init file."""
return _find_root() return _find_root()
def get_invokeai_config(**kwargs) -> InvokeAIAppConfig: def get_invokeai_config(**kwargs: Any) -> InvokeAIAppConfig:
"""Legacy function which returns InvokeAIAppConfig.get_config().""" """Legacy function which returns InvokeAIAppConfig.get_config()."""
return InvokeAIAppConfig.get_config(**kwargs) return InvokeAIAppConfig.get_config(**kwargs)
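A minimal usage sketch of the singleton accessor, using only methods shown in this diff:
from invokeai.app.services.config import InvokeAIAppConfig

config = InvokeAIAppConfig.get_config()  # first call constructs the singleton
config.parse_args([])                    # fold in init-file, environment and CLI settings
print(config.precision, config.png_compress_level)

assert InvokeAIAppConfig.get_config() is config  # later calls return the same object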

View File

@ -34,6 +34,7 @@ class ServiceInactiveException(Exception):
DownloadEventHandler = Callable[["DownloadJob"], None] DownloadEventHandler = Callable[["DownloadJob"], None]
DownloadExceptionHandler = Callable[["DownloadJob", Optional[Exception]], None]
@total_ordering @total_ordering
@ -55,6 +56,7 @@ class DownloadJob(BaseModel):
job_ended: Optional[str] = Field( job_ended: Optional[str] = Field(
default=None, description="Timestamp for when the download job ended (completed or errored)" default=None, description="Timestamp for when the download job ended (completed or errored)"
) )
content_type: Optional[str] = Field(default=None, description="Content type of downloaded file")
bytes: int = Field(default=0, description="Bytes downloaded so far") bytes: int = Field(default=0, description="Bytes downloaded so far")
total_bytes: int = Field(default=0, description="Total file size (bytes)") total_bytes: int = Field(default=0, description="Total file size (bytes)")
@ -70,7 +72,11 @@ class DownloadJob(BaseModel):
_on_progress: Optional[DownloadEventHandler] = PrivateAttr(default=None) _on_progress: Optional[DownloadEventHandler] = PrivateAttr(default=None)
_on_complete: Optional[DownloadEventHandler] = PrivateAttr(default=None) _on_complete: Optional[DownloadEventHandler] = PrivateAttr(default=None)
_on_cancelled: Optional[DownloadEventHandler] = PrivateAttr(default=None) _on_cancelled: Optional[DownloadEventHandler] = PrivateAttr(default=None)
_on_error: Optional[DownloadEventHandler] = PrivateAttr(default=None) _on_error: Optional[DownloadExceptionHandler] = PrivateAttr(default=None)
def __hash__(self) -> int:
"""Return hash of the string representation of this object, for indexing."""
return hash(str(self))
def __le__(self, other: "DownloadJob") -> bool: def __le__(self, other: "DownloadJob") -> bool:
"""Return True if this job's priority is less than another's.""" """Return True if this job's priority is less than another's."""
@ -87,6 +93,26 @@ class DownloadJob(BaseModel):
"""Call to cancel the job.""" """Call to cancel the job."""
return self._cancelled return self._cancelled
@property
def complete(self) -> bool:
"""Return true if job completed without errors."""
return self.status == DownloadJobStatus.COMPLETED
@property
def running(self) -> bool:
"""Return true if the job is running."""
return self.status == DownloadJobStatus.RUNNING
@property
def errored(self) -> bool:
"""Return true if the job is errored."""
return self.status == DownloadJobStatus.ERROR
@property
def in_terminal_state(self) -> bool:
"""Return true if job has finished, one way or another."""
return self.status not in [DownloadJobStatus.WAITING, DownloadJobStatus.RUNNING]
@property @property
def on_start(self) -> Optional[DownloadEventHandler]: def on_start(self) -> Optional[DownloadEventHandler]:
"""Return the on_start event handler.""" """Return the on_start event handler."""
@ -103,7 +129,7 @@ class DownloadJob(BaseModel):
return self._on_complete return self._on_complete
@property @property
def on_error(self) -> Optional[DownloadEventHandler]: def on_error(self) -> Optional[DownloadExceptionHandler]:
"""Return the on_error event handler.""" """Return the on_error event handler."""
return self._on_error return self._on_error
@ -118,7 +144,7 @@ class DownloadJob(BaseModel):
on_progress: Optional[DownloadEventHandler] = None, on_progress: Optional[DownloadEventHandler] = None,
on_complete: Optional[DownloadEventHandler] = None, on_complete: Optional[DownloadEventHandler] = None,
on_cancelled: Optional[DownloadEventHandler] = None, on_cancelled: Optional[DownloadEventHandler] = None,
on_error: Optional[DownloadEventHandler] = None, on_error: Optional[DownloadExceptionHandler] = None,
) -> None: ) -> None:
"""Set the callbacks for download events.""" """Set the callbacks for download events."""
self._on_start = on_start self._on_start = on_start
@ -150,10 +176,10 @@ class DownloadQueueServiceBase(ABC):
on_progress: Optional[DownloadEventHandler] = None, on_progress: Optional[DownloadEventHandler] = None,
on_complete: Optional[DownloadEventHandler] = None, on_complete: Optional[DownloadEventHandler] = None,
on_cancelled: Optional[DownloadEventHandler] = None, on_cancelled: Optional[DownloadEventHandler] = None,
on_error: Optional[DownloadEventHandler] = None, on_error: Optional[DownloadExceptionHandler] = None,
) -> DownloadJob: ) -> DownloadJob:
""" """
Create a download job. Create and enqueue a download job.
:param source: Source of the download as a URL. :param source: Source of the download as a URL.
:param dest: Path to download to. See below. :param dest: Path to download to. See below.
@ -175,6 +201,25 @@ class DownloadQueueServiceBase(ABC):
""" """
pass pass
@abstractmethod
def submit_download_job(
self,
job: DownloadJob,
on_start: Optional[DownloadEventHandler] = None,
on_progress: Optional[DownloadEventHandler] = None,
on_complete: Optional[DownloadEventHandler] = None,
on_cancelled: Optional[DownloadEventHandler] = None,
on_error: Optional[DownloadExceptionHandler] = None,
) -> None:
"""
Enqueue a download job.
:param job: The DownloadJob
:param on_start, on_progress, on_complete, on_cancelled, on_error: Callbacks for the
indicated events.
"""
pass
@abstractmethod @abstractmethod
def list_jobs(self) -> List[DownloadJob]: def list_jobs(self) -> List[DownloadJob]:
""" """
@ -197,21 +242,21 @@ class DownloadQueueServiceBase(ABC):
pass pass
@abstractmethod @abstractmethod
def cancel_all_jobs(self): def cancel_all_jobs(self) -> None:
"""Cancel all active and enquedjobs.""" """Cancel all active and enquedjobs."""
pass pass
@abstractmethod @abstractmethod
def prune_jobs(self): def prune_jobs(self) -> None:
"""Prune completed and errored queue items from the job list.""" """Prune completed and errored queue items from the job list."""
pass pass
@abstractmethod @abstractmethod
def cancel_job(self, job: DownloadJob): def cancel_job(self, job: DownloadJob) -> None:
"""Cancel the job, clearing partial downloads and putting it into ERROR state.""" """Cancel the job, clearing partial downloads and putting it into ERROR state."""
pass pass
@abstractmethod @abstractmethod
def join(self): def join(self) -> None:
"""Wait until all jobs are off the queue.""" """Wait until all jobs are off the queue."""
pass pass

View File

@ -5,10 +5,9 @@ import os
import re import re
import threading import threading
import traceback import traceback
from logging import Logger
from pathlib import Path from pathlib import Path
from queue import Empty, PriorityQueue from queue import Empty, PriorityQueue
from typing import Any, Dict, List, Optional, Set from typing import Any, Dict, List, Optional
import requests import requests
from pydantic.networks import AnyHttpUrl from pydantic.networks import AnyHttpUrl
@ -21,6 +20,7 @@ from invokeai.backend.util.logging import InvokeAILogger
from .download_base import ( from .download_base import (
DownloadEventHandler, DownloadEventHandler,
DownloadExceptionHandler,
DownloadJob, DownloadJob,
DownloadJobCancelledException, DownloadJobCancelledException,
DownloadJobStatus, DownloadJobStatus,
@ -36,18 +36,6 @@ DOWNLOAD_CHUNK_SIZE = 100000
class DownloadQueueService(DownloadQueueServiceBase): class DownloadQueueService(DownloadQueueServiceBase):
"""Class for queued download of models.""" """Class for queued download of models."""
_jobs: Dict[int, DownloadJob]
_max_parallel_dl: int = 5
_worker_pool: Set[threading.Thread]
_queue: PriorityQueue[DownloadJob]
_stop_event: threading.Event
_lock: threading.Lock
_logger: Logger
_events: Optional[EventServiceBase] = None
_next_job_id: int = 0
_accept_download_requests: bool = False
_requests: requests.sessions.Session
def __init__( def __init__(
self, self,
max_parallel_dl: int = 5, max_parallel_dl: int = 5,
@ -99,6 +87,33 @@ class DownloadQueueService(DownloadQueueServiceBase):
self._stop_event.set() self._stop_event.set()
self._worker_pool.clear() self._worker_pool.clear()
def submit_download_job(
self,
job: DownloadJob,
on_start: Optional[DownloadEventHandler] = None,
on_progress: Optional[DownloadEventHandler] = None,
on_complete: Optional[DownloadEventHandler] = None,
on_cancelled: Optional[DownloadEventHandler] = None,
on_error: Optional[DownloadExceptionHandler] = None,
) -> None:
"""Enqueue a download job."""
if not self._accept_download_requests:
raise ServiceInactiveException(
"The download service is not currently accepting requests. Please call start() to initialize the service."
)
with self._lock:
job.id = self._next_job_id
self._next_job_id += 1
job.set_callbacks(
on_start=on_start,
on_progress=on_progress,
on_complete=on_complete,
on_cancelled=on_cancelled,
on_error=on_error,
)
self._jobs[job.id] = job
self._queue.put(job)
def download( def download(
self, self,
source: AnyHttpUrl, source: AnyHttpUrl,
@ -109,32 +124,27 @@ class DownloadQueueService(DownloadQueueServiceBase):
on_progress: Optional[DownloadEventHandler] = None, on_progress: Optional[DownloadEventHandler] = None,
on_complete: Optional[DownloadEventHandler] = None, on_complete: Optional[DownloadEventHandler] = None,
on_cancelled: Optional[DownloadEventHandler] = None, on_cancelled: Optional[DownloadEventHandler] = None,
on_error: Optional[DownloadEventHandler] = None, on_error: Optional[DownloadExceptionHandler] = None,
) -> DownloadJob: ) -> DownloadJob:
"""Create a download job and return its ID.""" """Create and enqueue a download job and return it."""
if not self._accept_download_requests: if not self._accept_download_requests:
raise ServiceInactiveException( raise ServiceInactiveException(
"The download service is not currently accepting requests. Please call start() to initialize the service." "The download service is not currently accepting requests. Please call start() to initialize the service."
) )
with self._lock: job = DownloadJob(
id = self._next_job_id source=source,
self._next_job_id += 1 dest=dest,
job = DownloadJob( priority=priority,
id=id, access_token=access_token,
source=source, )
dest=dest, self.submit_download_job(
priority=priority, job,
access_token=access_token, on_start=on_start,
) on_progress=on_progress,
job.set_callbacks( on_complete=on_complete,
on_start=on_start, on_cancelled=on_cancelled,
on_progress=on_progress, on_error=on_error,
on_complete=on_complete, )
on_cancelled=on_cancelled,
on_error=on_error,
)
self._jobs[id] = job
self._queue.put(job)
return job return job
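A short usage sketch of the queue, assuming the module path invokeai.app.services.download.download_default and that start() takes no arguments (neither is confirmed by this diff); the URL and destination are placeholders:
from pathlib import Path

from invokeai.app.services.download.download_default import DownloadQueueService

queue = DownloadQueueService()
queue.start()  # the service must be started before download() will accept jobs

job = queue.download(
    source="https://example.com/model.safetensors",  # placeholder URL
    dest=Path("/tmp/models"),
    on_complete=lambda j: print(f"saved to {j.download_path}"),
    on_error=lambda j, exc: print(f"{j.source} failed: {exc}"),  # new two-argument handler
)
queue.join()                 # block until the queue drains
assert job.in_terminal_state
queue.prune_jobs()           # drop completed/errored jobs from the bookkeeping dict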
def join(self) -> None: def join(self) -> None:
@ -150,7 +160,7 @@ class DownloadQueueService(DownloadQueueServiceBase):
with self._lock: with self._lock:
to_delete = set() to_delete = set()
for job_id, job in self._jobs.items(): for job_id, job in self._jobs.items():
if self._in_terminal_state(job): if job.in_terminal_state:
to_delete.add(job_id) to_delete.add(job_id)
for job_id in to_delete: for job_id in to_delete:
del self._jobs[job_id] del self._jobs[job_id]
@ -172,19 +182,12 @@ class DownloadQueueService(DownloadQueueServiceBase):
with self._lock: with self._lock:
job.cancel() job.cancel()
def cancel_all_jobs(self, preserve_partial: bool = False) -> None: def cancel_all_jobs(self) -> None:
"""Cancel all jobs (those not in enqueued, running or paused state).""" """Cancel all jobs (those not in enqueued, running or paused state)."""
for job in self._jobs.values(): for job in self._jobs.values():
if not self._in_terminal_state(job): if not job.in_terminal_state:
self.cancel_job(job) self.cancel_job(job)
def _in_terminal_state(self, job: DownloadJob) -> bool:
return job.status in [
DownloadJobStatus.COMPLETED,
DownloadJobStatus.CANCELLED,
DownloadJobStatus.ERROR,
]
     def _start_workers(self, max_workers: int) -> None:
         """Start the requested number of worker threads."""
         self._stop_event.clear()
@@ -205,7 +208,6 @@ class DownloadQueueService(DownloadQueueServiceBase):
                 job = self._queue.get(timeout=1)
             except Empty:
                 continue
             try:
                 job.job_started = get_iso_timestamp()
                 self._do_download(job)
@@ -214,7 +216,7 @@ class DownloadQueueService(DownloadQueueServiceBase):
             except (OSError, HTTPError) as excp:
                 job.error_type = excp.__class__.__name__ + f"({str(excp)})"
                 job.error = traceback.format_exc()
-                self._signal_job_error(job)
+                self._signal_job_error(job, excp)
             except DownloadJobCancelledException:
                 self._signal_job_cancelled(job)
                 self._cleanup_cancelled_job(job)
@@ -235,6 +237,8 @@ class DownloadQueueService(DownloadQueueServiceBase):
         resp = self._requests.get(str(url), headers=header, stream=True)
         if not resp.ok:
             raise HTTPError(resp.reason)
+
+        job.content_type = resp.headers.get("Content-Type")
         content_length = int(resp.headers.get("content-length", 0))
         job.total_bytes = content_length
@@ -296,6 +300,7 @@ class DownloadQueueService(DownloadQueueServiceBase):
                 self._signal_job_progress(job)

         # if we get here we are done and can rename the file to the original dest
+        self._logger.debug(f"{job.source}: saved to {job.download_path} (bytes={job.bytes})")
         in_progress_path.rename(job.download_path)

     def _validate_filename(self, directory: str, filename: str) -> bool:
@@ -322,7 +327,9 @@ class DownloadQueueService(DownloadQueueServiceBase):
             try:
                 job.on_start(job)
             except Exception as e:
-                self._logger.error(e)
+                self._logger.error(
+                    f"An error occurred while processing the on_start callback: {traceback.format_exception(e)}"
+                )
         if self._event_bus:
             assert job.download_path
             self._event_bus.emit_download_started(str(job.source), job.download_path.as_posix())
@@ -332,7 +339,9 @@ class DownloadQueueService(DownloadQueueServiceBase):
             try:
                 job.on_progress(job)
             except Exception as e:
-                self._logger.error(e)
+                self._logger.error(
+                    f"An error occurred while processing the on_progress callback: {traceback.format_exception(e)}"
+                )
         if self._event_bus:
             assert job.download_path
             self._event_bus.emit_download_progress(
@@ -348,7 +357,9 @@ class DownloadQueueService(DownloadQueueServiceBase):
             try:
                 job.on_complete(job)
             except Exception as e:
-                self._logger.error(e)
+                self._logger.error(
+                    f"An error occurred while processing the on_complete callback: {traceback.format_exception(e)}"
+                )
         if self._event_bus:
             assert job.download_path
             self._event_bus.emit_download_complete(
@@ -356,29 +367,36 @@ class DownloadQueueService(DownloadQueueServiceBase):
             )

     def _signal_job_cancelled(self, job: DownloadJob) -> None:
+        if job.status not in [DownloadJobStatus.RUNNING, DownloadJobStatus.WAITING]:
+            return
         job.status = DownloadJobStatus.CANCELLED
         if job.on_cancelled:
             try:
                 job.on_cancelled(job)
             except Exception as e:
-                self._logger.error(e)
+                self._logger.error(
+                    f"An error occurred while processing the on_cancelled callback: {traceback.format_exception(e)}"
+                )
         if self._event_bus:
             self._event_bus.emit_download_cancelled(str(job.source))

-    def _signal_job_error(self, job: DownloadJob) -> None:
+    def _signal_job_error(self, job: DownloadJob, excp: Optional[Exception] = None) -> None:
         job.status = DownloadJobStatus.ERROR
+        self._logger.error(f"{str(job.source)}: {traceback.format_exception(excp)}")
         if job.on_error:
             try:
-                job.on_error(job)
+                job.on_error(job, excp)
             except Exception as e:
-                self._logger.error(e)
+                self._logger.error(
+                    f"An error occurred while processing the on_error callback: {traceback.format_exception(e)}"
+                )
         if self._event_bus:
             assert job.error_type
             assert job.error
             self._event_bus.emit_download_error(str(job.source), error_type=job.error_type, error=job.error)

     def _cleanup_cancelled_job(self, job: DownloadJob) -> None:
-        self._logger.warning(f"Cleaning up leftover files from cancelled download job {job.download_path}")
+        self._logger.debug(f"Cleaning up leftover files from cancelled download job {job.download_path}")
         try:
             if job.download_path:
                 partial_file = self._in_progress_path(job.download_path)

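With this change the queue forwards the triggering exception to `_signal_job_error`, and `on_error` callbacks are invoked as `job.on_error(job, excp)`. A minimal sketch of a matching caller-side callback (the function name and printed wording are illustrative, not part of the API):

from typing import Optional

def my_on_error(job, excp: Optional[Exception] = None) -> None:
    # The queue now passes the exception that aborted the download, when one exists.
    reason = f"{type(excp).__name__}: {excp}" if excp is not None else "unknown error"
    print(f"download of {job.source} failed: {reason}")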
View File

@@ -1,7 +1,7 @@
 # Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)

-from typing import Any, Optional
+from typing import Any, Dict, List, Optional, Union

 from invokeai.app.services.invocation_processor.invocation_processor_common import ProgressImage
 from invokeai.app.services.session_queue.session_queue_common import (
@@ -404,53 +404,72 @@ class EventServiceBase:
             },
         )

-    def emit_model_install_started(self, source: str) -> None:
+    def emit_model_install_downloading(
+        self,
+        source: str,
+        local_path: str,
+        bytes: int,
+        total_bytes: int,
+        parts: List[Dict[str, Union[str, int]]],
+    ) -> None:
         """
-        Emitted when an install job is started.
+        Emit at intervals while the install job is in progress (remote models only).
+
+        :param source: Source of the model
+        :param local_path: Where model is downloading to
+        :param parts: Progress of downloading URLs that comprise the model, if any.
+        :param bytes: Number of bytes downloaded so far.
+        :param total_bytes: Total size of download, including all files.
+
+        This emits a Dict with keys "source", "local_path", "bytes" and "total_bytes".
+        """
+        self.__emit_model_event(
+            event_name="model_install_downloading",
+            payload={
+                "source": source,
+                "local_path": local_path,
+                "bytes": bytes,
+                "total_bytes": total_bytes,
+                "parts": parts,
+            },
+        )
+
+    def emit_model_install_running(self, source: str) -> None:
+        """
+        Emit once when an install job becomes active.

         :param source: Source of the model; local path, repo_id or url
         """
         self.__emit_model_event(
-            event_name="model_install_started",
+            event_name="model_install_running",
             payload={"source": source},
         )

-    def emit_model_install_completed(self, source: str, key: str) -> None:
+    def emit_model_install_completed(self, source: str, key: str, total_bytes: Optional[int] = None) -> None:
         """
-        Emitted when an install job is completed successfully.
+        Emit when an install job is completed successfully.

         :param source: Source of the model; local path, repo_id or url
         :param key: Model config record key
+        :param total_bytes: Size of the model (may be None for installation of a local path)
         """
         self.__emit_model_event(
             event_name="model_install_completed",
             payload={
                 "source": source,
+                "total_bytes": total_bytes,
                 "key": key,
             },
         )

-    def emit_model_install_progress(
-        self,
-        source: str,
-        current_bytes: int,
-        total_bytes: int,
-    ) -> None:
+    def emit_model_install_cancelled(self, source: str) -> None:
         """
-        Emitted while the install job is in progress.
-        (Downloaded models only)
+        Emit when an install job is cancelled.

-        :param source: Source of the model
-        :param current_bytes: Number of bytes downloaded so far
-        :param total_bytes: Total bytes to download
+        :param source: Source of the model; local path, repo_id or url
         """
         self.__emit_model_event(
-            event_name="model_install_progress",
-            payload={
-                "source": source,
-                "current_bytes": int,
-                "total_bytes": int,
-            },
+            event_name="model_install_cancelled",
+            payload={"source": source},
         )

     def emit_model_install_error(
@@ -460,10 +479,11 @@ class EventServiceBase:
         error: str,
     ) -> None:
         """
-        Emitted when an install job encounters an exception.
+        Emit when an install job encounters an exception.

         :param source: Source of the model
-        :param exception: The exception that raised the error
+        :param error_type: The name of the exception
+        :param error: A text description of the exception
         """
         self.__emit_model_event(
             event_name="model_install_error",

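On the consumer side, `model_install_downloading` now carries byte-level progress while `model_install_running` marks the one-time transition to active installation. A hypothetical handler for the downloading payload (the handler name and wiring are illustrative; the payload keys are the ones emitted above):

def on_model_install_downloading(payload: dict) -> None:
    # Keys per emit_model_install_downloading(): source, local_path, bytes, total_bytes, parts.
    total = payload["total_bytes"]
    pct = (100 * payload["bytes"] / total) if total else 0.0
    print(f"{payload['source']}: {pct:.1f}% of {total} bytes ({len(payload['parts'])} part(s))")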
View File

@@ -1,11 +1,16 @@
 import time
 import traceback
+from contextlib import suppress
 from threading import BoundedSemaphore, Event, Thread
 from typing import Optional

 import invokeai.backend.util.logging as logger
 from invokeai.app.invocations.baseinvocation import InvocationContext
 from invokeai.app.services.invocation_queue.invocation_queue_common import InvocationQueueItem
+from invokeai.app.services.invocation_stats.invocation_stats_common import (
+    GESStatsNotFoundError,
+)
+from invokeai.app.util.profiler import Profiler

 from ..invoker import Invoker
 from .invocation_processor_base import InvocationProcessorABC
@@ -18,7 +23,7 @@ class DefaultInvocationProcessor(InvocationProcessorABC):
     __invoker: Invoker
     __threadLimit: BoundedSemaphore

-    def start(self, invoker) -> None:
+    def start(self, invoker: Invoker) -> None:
         # if we do want multithreading at some point, we could make this configurable
         self.__threadLimit = BoundedSemaphore(1)
         self.__invoker = invoker
@@ -39,6 +44,27 @@ class DefaultInvocationProcessor(InvocationProcessorABC):
         self.__threadLimit.acquire()
         queue_item: Optional[InvocationQueueItem] = None

+        profiler = (
+            Profiler(
+                logger=self.__invoker.services.logger,
+                output_dir=self.__invoker.services.configuration.profiles_path,
+                prefix=self.__invoker.services.configuration.profile_prefix,
+            )
+            if self.__invoker.services.configuration.profile_graphs
+            else None
+        )
+
+        def stats_cleanup(graph_execution_state_id: str) -> None:
+            if profiler:
+                profile_path = profiler.stop()
+                stats_path = profile_path.with_suffix(".json")
+                self.__invoker.services.performance_statistics.dump_stats(
+                    graph_execution_state_id=graph_execution_state_id, output_path=stats_path
+                )
+            with suppress(GESStatsNotFoundError):
+                self.__invoker.services.performance_statistics.log_stats(graph_execution_state_id)
+                self.__invoker.services.performance_statistics.reset_stats(graph_execution_state_id)
+
         while not stop_event.is_set():
             try:
                 queue_item = self.__invoker.services.queue.get()
@@ -49,6 +75,10 @@ class DefaultInvocationProcessor(InvocationProcessorABC):
                 # do not hammer the queue
                 time.sleep(0.5)
                 continue
+
+            if profiler and profiler.profile_id != queue_item.graph_execution_state_id:
+                profiler.start(profile_id=queue_item.graph_execution_state_id)
+
             try:
                 graph_execution_state = self.__invoker.services.graph_execution_manager.get(
                     queue_item.graph_execution_state_id
@@ -132,13 +162,12 @@ class DefaultInvocationProcessor(InvocationProcessorABC):
                     source_node_id=source_node_id,
                     result=outputs.model_dump(),
                 )
-                self.__invoker.services.performance_statistics.log_stats()

             except KeyboardInterrupt:
                 pass

             except CanceledException:
-                self.__invoker.services.performance_statistics.reset_stats(graph_execution_state.id)
+                stats_cleanup(graph_execution_state.id)
                 pass

             except Exception as e:
@@ -163,7 +192,6 @@ class DefaultInvocationProcessor(InvocationProcessorABC):
                     error_type=e.__class__.__name__,
                     error=error,
                 )
-                self.__invoker.services.performance_statistics.reset_stats(graph_execution_state.id)
                 pass

             # Check queue to see if this is canceled, and skip if so
@@ -201,6 +229,7 @@ class DefaultInvocationProcessor(InvocationProcessorABC):
                     queue_id=queue_item.session_queue_id,
                     graph_execution_state_id=graph_execution_state.id,
                 )
+                stats_cleanup(graph_execution_state.id)

         except KeyboardInterrupt:
             pass  # Log something? KeyboardInterrupt is probably not going to be seen by the processor

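The profiler is created only when `profile_graphs` is enabled in the app config, and `stats_cleanup` then reuses the profile's output path for the JSON stats dump. A minimal sketch of the configuration side, assuming the config object can be constructed directly in a script (`profile_graphs`, `profile_prefix` and `profiles_path` are the fields the processor reads above):

from invokeai.app.services.config import InvokeAIAppConfig

config = InvokeAIAppConfig(profile_graphs=True, profile_prefix="bench")
# Each profiled session then produces a cProfile file plus a matching .json stats dump here:
print(config.profiles_path)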
View File

@@ -30,23 +30,15 @@ writes to the system log is stored in InvocationServices.performance_statistics.

 from abc import ABC, abstractmethod
 from contextlib import AbstractContextManager
-from typing import Dict
+from pathlib import Path

 from invokeai.app.invocations.baseinvocation import BaseInvocation
-from invokeai.backend.model_management.model_cache import CacheStats
-
-from .invocation_stats_common import NodeLog
+from invokeai.app.services.invocation_stats.invocation_stats_common import InvocationStatsSummary


 class InvocationStatsServiceBase(ABC):
     "Abstract base class for recording node memory/time performance statistics"

-    # {graph_id => NodeLog}
-    _stats: Dict[str, NodeLog]
-    _cache_stats: Dict[str, CacheStats]
-    ram_used: float
-    ram_changed: float
-
     @abstractmethod
     def __init__(self):
         """
@@ -71,51 +63,36 @@ class InvocationStatsServiceBase(ABC):
     @abstractmethod
     def reset_stats(self, graph_execution_state_id: str):
         """
-        Reset all statistics for the indicated graph
-        :param graph_execution_state_id
+        Reset all statistics for the indicated graph.
+
+        :param graph_execution_state_id: The id of the session whose stats to reset.
+        :raises GESStatsNotFoundError: if the graph isn't tracked in the stats.
         """
         pass

     @abstractmethod
-    def reset_all_stats(self):
-        """Zero all statistics"""
-        pass
-
-    @abstractmethod
-    def update_invocation_stats(
-        self,
-        graph_id: str,
-        invocation_type: str,
-        time_used: float,
-        vram_used: float,
-    ):
-        """
-        Add timing information on execution of a node. Usually
-        used internally.
-        :param graph_id: ID of the graph that is currently executing
-        :param invocation_type: String literal type of the node
-        :param time_used: Time used by node's execution (sec)
-        :param vram_used: Maximum VRAM used during execution (GB)
-        """
-        pass
-
-    @abstractmethod
-    def log_stats(self):
+    def log_stats(self, graph_execution_state_id: str):
         """
         Write out the accumulated statistics to the log or somewhere else.
+
+        :param graph_execution_state_id: The id of the session whose stats to log.
+        :raises GESStatsNotFoundError: if the graph isn't tracked in the stats.
         """
         pass

     @abstractmethod
-    def update_mem_stats(
-        self,
-        ram_used: float,
-        ram_changed: float,
-    ):
+    def get_stats(self, graph_execution_state_id: str) -> InvocationStatsSummary:
         """
-        Update the collector with RAM memory usage info.
-
-        :param ram_used: How much RAM is currently in use.
-        :param ram_changed: How much RAM changed since last generation.
+        Gets the accumulated statistics for the indicated graph.
+
+        :param graph_execution_state_id: The id of the session whose stats to get.
+        :raises GESStatsNotFoundError: if the graph isn't tracked in the stats.
+        """
+        pass
+
+    @abstractmethod
+    def dump_stats(self, graph_execution_state_id: str, output_path: Path) -> None:
+        """
+        Write out the accumulated statistics to the indicated path as JSON.
+
+        :param graph_execution_state_id: The id of the session whose stats to dump.
+        :param output_path: The file to write the stats to.
+        :raises GESStatsNotFoundError: if the graph isn't tracked in the stats.
         """
         pass

View File

@@ -1,25 +1,183 @@
-from dataclasses import dataclass, field
-from typing import Dict
-
-# size of GIG in bytes
-GIG = 1073741824
-
-
-@dataclass
-class NodeStats:
-    """Class for tracking execution stats of an invocation node"""
-
-    calls: int = 0
-    time_used: float = 0.0  # seconds
-    max_vram: float = 0.0  # GB
-    cache_hits: int = 0
-    cache_misses: int = 0
-    cache_high_watermark: int = 0
-
-
-@dataclass
-class NodeLog:
-    """Class for tracking node usage"""
-
-    # {node_type => NodeStats}
-    nodes: Dict[str, NodeStats] = field(default_factory=dict)
+from collections import defaultdict
+from dataclasses import asdict, dataclass
+from typing import Any, Optional
+
+
+class GESStatsNotFoundError(Exception):
+    """Raised when execution stats are not found for a given Graph Execution State."""
+
+
+@dataclass
+class NodeExecutionStatsSummary:
+    """The stats for a specific type of node."""
+
+    node_type: str
+    num_calls: int
+    time_used_seconds: float
+    peak_vram_gb: float
+
+
+@dataclass
+class ModelCacheStatsSummary:
+    """The stats for the model cache."""
+
+    high_water_mark_gb: float
+    cache_size_gb: float
+    total_usage_gb: float
+    cache_hits: int
+    cache_misses: int
+    models_cached: int
+    models_cleared: int
+
+
+@dataclass
+class GraphExecutionStatsSummary:
+    """The stats for the graph execution state."""
+
+    graph_execution_state_id: str
+    execution_time_seconds: float
+    # `wall_time_seconds`, `ram_usage_gb` and `ram_change_gb` are derived from the node execution stats.
+    # In some situations, there are no node stats, so these values are optional.
+    wall_time_seconds: Optional[float]
+    ram_usage_gb: Optional[float]
+    ram_change_gb: Optional[float]
+
+
+@dataclass
+class InvocationStatsSummary:
+    """
+    The accumulated stats for a graph execution.
+    Its `__str__` method returns a human-readable stats summary.
+    """
+
+    vram_usage_gb: Optional[float]
+    graph_stats: GraphExecutionStatsSummary
+    model_cache_stats: ModelCacheStatsSummary
+    node_stats: list[NodeExecutionStatsSummary]
+
+    def __str__(self) -> str:
+        _str = ""
+        _str = f"Graph stats: {self.graph_stats.graph_execution_state_id}\n"
+        _str += f"{'Node':>30} {'Calls':>7} {'Seconds':>9} {'VRAM Used':>10}\n"
+
+        for summary in self.node_stats:
+            _str += f"{summary.node_type:>30} {summary.num_calls:>7} {summary.time_used_seconds:>8.3f}s {summary.peak_vram_gb:>9.3f}G\n"
+
+        _str += f"TOTAL GRAPH EXECUTION TIME: {self.graph_stats.execution_time_seconds:7.3f}s\n"
+
+        if self.graph_stats.wall_time_seconds is not None:
+            _str += f"TOTAL GRAPH WALL TIME: {self.graph_stats.wall_time_seconds:7.3f}s\n"
+
+        if self.graph_stats.ram_usage_gb is not None and self.graph_stats.ram_change_gb is not None:
+            _str += f"RAM used by InvokeAI process: {self.graph_stats.ram_usage_gb:4.2f}G ({self.graph_stats.ram_change_gb:+5.3f}G)\n"
+
+        _str += f"RAM used to load models: {self.model_cache_stats.total_usage_gb:4.2f}G\n"
+        if self.vram_usage_gb:
+            _str += f"VRAM in use: {self.vram_usage_gb:4.3f}G\n"
+        _str += "RAM cache statistics:\n"
+        _str += f"   Model cache hits: {self.model_cache_stats.cache_hits}\n"
+        _str += f"   Model cache misses: {self.model_cache_stats.cache_misses}\n"
+        _str += f"   Models cached: {self.model_cache_stats.models_cached}\n"
+        _str += f"   Models cleared from cache: {self.model_cache_stats.models_cleared}\n"
+        _str += f"   Cache high water mark: {self.model_cache_stats.high_water_mark_gb:4.2f}/{self.model_cache_stats.cache_size_gb:4.2f}G\n"
+
+        return _str
+
+    def as_dict(self) -> dict[str, Any]:
+        """Returns the stats as a dictionary."""
+        return asdict(self)
+
+
+@dataclass
+class NodeExecutionStats:
+    """Class for tracking execution stats of an invocation node."""
+
+    invocation_type: str
+
+    start_time: float  # Seconds since the epoch.
+    end_time: float  # Seconds since the epoch.
+
+    start_ram_gb: float  # GB
+    end_ram_gb: float  # GB
+    peak_vram_gb: float  # GB
+
+    def total_time(self) -> float:
+        return self.end_time - self.start_time
+
+
+class GraphExecutionStats:
+    """Class for tracking execution stats of a graph."""
+
+    def __init__(self):
+        self._node_stats_list: list[NodeExecutionStats] = []
+
+    def add_node_execution_stats(self, node_stats: NodeExecutionStats):
+        self._node_stats_list.append(node_stats)
+
+    def get_total_run_time(self) -> float:
+        """Get the total time spent executing nodes in the graph."""
+        total = 0.0
+        for node_stats in self._node_stats_list:
+            total += node_stats.total_time()
+        return total
+
+    def get_first_node_stats(self) -> NodeExecutionStats | None:
+        """Get the stats of the first node in the graph (by start_time)."""
+        first_node = None
+        for node_stats in self._node_stats_list:
+            if first_node is None or node_stats.start_time < first_node.start_time:
+                first_node = node_stats
+
+        assert first_node is not None
+        return first_node
+
+    def get_last_node_stats(self) -> NodeExecutionStats | None:
+        """Get the stats of the last node in the graph (by end_time)."""
+        last_node = None
+        for node_stats in self._node_stats_list:
+            if last_node is None or node_stats.end_time > last_node.end_time:
+                last_node = node_stats
+
+        return last_node
+
+    def get_graph_stats_summary(self, graph_execution_state_id: str) -> GraphExecutionStatsSummary:
+        """Get a summary of the graph stats."""
+        first_node = self.get_first_node_stats()
+        last_node = self.get_last_node_stats()
+
+        wall_time_seconds: Optional[float] = None
+        ram_usage_gb: Optional[float] = None
+        ram_change_gb: Optional[float] = None
+
+        if last_node and first_node:
+            wall_time_seconds = last_node.end_time - first_node.start_time
+            ram_usage_gb = last_node.end_ram_gb
+            ram_change_gb = last_node.end_ram_gb - first_node.start_ram_gb
+
+        return GraphExecutionStatsSummary(
+            graph_execution_state_id=graph_execution_state_id,
+            execution_time_seconds=self.get_total_run_time(),
+            wall_time_seconds=wall_time_seconds,
+            ram_usage_gb=ram_usage_gb,
+            ram_change_gb=ram_change_gb,
+        )
+
+    def get_node_stats_summaries(self) -> list[NodeExecutionStatsSummary]:
+        """Get a summary of the node stats."""
+        summaries: list[NodeExecutionStatsSummary] = []
+        node_stats_by_type: dict[str, list[NodeExecutionStats]] = defaultdict(list)
+
+        for node_stats in self._node_stats_list:
+            node_stats_by_type[node_stats.invocation_type].append(node_stats)
+
+        for node_type, node_type_stats_list in node_stats_by_type.items():
+            num_calls = len(node_type_stats_list)
+            time_used = sum([n.total_time() for n in node_type_stats_list])
+            peak_vram = max([n.peak_vram_gb for n in node_type_stats_list])
+            summary = NodeExecutionStatsSummary(
+                node_type=node_type, num_calls=num_calls, time_used_seconds=time_used, peak_vram_gb=peak_vram
+            )
+            summaries.append(summary)
+
+        return summaries

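Taken together, these classes split raw per-node samples (`NodeExecutionStats`) from the derived summaries. A self-contained sketch of feeding the accumulator and reading a summary back (the node type, times and sizes are made up):

import time

stats = GraphExecutionStats()
now = time.time()
for offset in (0.0, 0.5):
    stats.add_node_execution_stats(
        NodeExecutionStats(
            invocation_type="example_node",  # hypothetical node type
            start_time=now + offset,
            end_time=now + offset + 0.5,
            start_ram_gb=4.0,
            end_ram_gb=4.2,
            peak_vram_gb=1.1,
        )
    )

summary = stats.get_graph_stats_summary("some-session-id")
print(summary.execution_time_seconds)                 # 1.0: sum of both node run times
print(stats.get_node_stats_summaries()[0].num_calls)  # 2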
View File

@@ -1,5 +1,7 @@
+import json
 import time
-from typing import Dict
+from contextlib import contextmanager
+from pathlib import Path

 import psutil
 import torch
@@ -7,161 +9,163 @@ import torch

 import invokeai.backend.util.logging as logger
 from invokeai.app.invocations.baseinvocation import BaseInvocation
 from invokeai.app.services.invoker import Invoker
-from invokeai.app.services.model_manager.model_manager_base import ModelManagerServiceBase
+from invokeai.app.services.item_storage.item_storage_common import ItemNotFoundError
 from invokeai.backend.model_management.model_cache import CacheStats

 from .invocation_stats_base import InvocationStatsServiceBase
-from .invocation_stats_common import GIG, NodeLog, NodeStats
+from .invocation_stats_common import (
+    GESStatsNotFoundError,
+    GraphExecutionStats,
+    GraphExecutionStatsSummary,
+    InvocationStatsSummary,
+    ModelCacheStatsSummary,
+    NodeExecutionStats,
+    NodeExecutionStatsSummary,
+)
+
+# Size of 1GB in bytes.
+GB = 2**30


 class InvocationStatsService(InvocationStatsServiceBase):
     """Accumulate performance information about a running graph. Collects time spent in each node,
     as well as the maximum and current VRAM utilisation for CUDA systems"""

+    _invoker: Invoker
+
     def __init__(self):
-        # {graph_id => NodeLog}
-        self._stats: Dict[str, NodeLog] = {}
-        self._cache_stats: Dict[str, CacheStats] = {}
-        self.ram_used: float = 0.0
-        self.ram_changed: float = 0.0
+        # Maps graph_execution_state_id to GraphExecutionStats.
+        self._stats: dict[str, GraphExecutionStats] = {}
+        # Maps graph_execution_state_id to model manager CacheStats.
+        self._cache_stats: dict[str, CacheStats] = {}

     def start(self, invoker: Invoker) -> None:
         self._invoker = invoker

-    class StatsContext:
-        """Context manager for collecting statistics."""
-
-        invocation: BaseInvocation
-        collector: "InvocationStatsServiceBase"
-        graph_id: str
-        start_time: float
-        ram_used: int
-        model_manager: ModelManagerServiceBase
-
-        def __init__(
-            self,
-            invocation: BaseInvocation,
-            graph_id: str,
-            model_manager: ModelManagerServiceBase,
-            collector: "InvocationStatsServiceBase",
-        ):
-            """Initialize statistics for this run."""
-            self.invocation = invocation
-            self.collector = collector
-            self.graph_id = graph_id
-            self.start_time = 0.0
-            self.ram_used = 0
-            self.model_manager = model_manager
-
-        def __enter__(self):
-            self.start_time = time.time()
-            if torch.cuda.is_available():
-                torch.cuda.reset_peak_memory_stats()
-            self.ram_used = psutil.Process().memory_info().rss
-            if self.model_manager:
-                self.model_manager.collect_cache_stats(self.collector._cache_stats[self.graph_id])
-
-        def __exit__(self, *args):
-            """Called on exit from the context."""
-            ram_used = psutil.Process().memory_info().rss
-            self.collector.update_mem_stats(
-                ram_used=ram_used / GIG,
-                ram_changed=(ram_used - self.ram_used) / GIG,
-            )
-            self.collector.update_invocation_stats(
-                graph_id=self.graph_id,
-                invocation_type=self.invocation.type,  # type: ignore # `type` is not on the `BaseInvocation` model, but *is* on all invocations
-                time_used=time.time() - self.start_time,
-                vram_used=torch.cuda.max_memory_allocated() / GIG if torch.cuda.is_available() else 0.0,
-            )
-
-    def collect_stats(
-        self,
-        invocation: BaseInvocation,
-        graph_execution_state_id: str,
-    ) -> StatsContext:
-        if not self._stats.get(graph_execution_state_id):  # first time we're seeing this
-            self._stats[graph_execution_state_id] = NodeLog()
+    @contextmanager
+    def collect_stats(self, invocation: BaseInvocation, graph_execution_state_id: str):
+        if not self._stats.get(graph_execution_state_id):
+            # First time we're seeing this graph_execution_state_id.
+            self._stats[graph_execution_state_id] = GraphExecutionStats()
             self._cache_stats[graph_execution_state_id] = CacheStats()
-        return self.StatsContext(invocation, graph_execution_state_id, self._invoker.services.model_manager, self)

-    def reset_all_stats(self):
-        """Zero all statistics"""
-        self._stats = {}
+        # Prune stale stats. There should be none since we're starting a new graph, but just in case.
+        self._prune_stale_stats()

-    def reset_stats(self, graph_execution_id: str):
-        try:
-            self._stats.pop(graph_execution_id)
-        except KeyError:
-            logger.warning(f"Attempted to clear statistics for unknown graph {graph_execution_id}")
+        # Record state before the invocation.
+        start_time = time.time()
+        start_ram = psutil.Process().memory_info().rss
+        if torch.cuda.is_available():
+            torch.cuda.reset_peak_memory_stats()
+        if self._invoker.services.model_manager:
+            self._invoker.services.model_manager.collect_cache_stats(self._cache_stats[graph_execution_state_id])

-    def update_mem_stats(
-        self,
-        ram_used: float,
-        ram_changed: float,
-    ):
-        self.ram_used = ram_used
-        self.ram_changed = ram_changed
+        try:
+            # Let the invocation run.
+            yield None
+        finally:
+            # Record state after the invocation.
+            node_stats = NodeExecutionStats(
+                invocation_type=invocation.get_type(),
+                start_time=start_time,
+                end_time=time.time(),
+                start_ram_gb=start_ram / GB,
+                end_ram_gb=psutil.Process().memory_info().rss / GB,
+                peak_vram_gb=torch.cuda.max_memory_allocated() / GB if torch.cuda.is_available() else 0.0,
+            )
+            self._stats[graph_execution_state_id].add_node_execution_stats(node_stats)

-    def update_invocation_stats(
-        self,
-        graph_id: str,
-        invocation_type: str,
-        time_used: float,
-        vram_used: float,
-    ):
-        if not self._stats[graph_id].nodes.get(invocation_type):
-            self._stats[graph_id].nodes[invocation_type] = NodeStats()
-        stats = self._stats[graph_id].nodes[invocation_type]
-        stats.calls += 1
-        stats.time_used += time_used
-        stats.max_vram = max(stats.max_vram, vram_used)
+    def _prune_stale_stats(self):
+        """Check all graphs being tracked and prune any that have completed/errored.
+
+        This shouldn't be necessary, but we don't have totally robust upstream handling of graph completions/errors, so
+        for now we call this function periodically to prevent them from accumulating.
+        """
+        to_prune: list[str] = []
+        for graph_execution_state_id in self._stats:
+            try:
+                graph_execution_state = self._invoker.services.graph_execution_manager.get(graph_execution_state_id)
+            except ItemNotFoundError:
+                # TODO(ryand): What would cause this? Should this exception just be allowed to propagate?
+                logger.warning(f"Failed to get graph state for {graph_execution_state_id}.")
+                continue
+
+            if not graph_execution_state.is_complete():
+                # The graph is still running, don't prune it.
+                continue
+
+            to_prune.append(graph_execution_state_id)
+
+        for graph_execution_state_id in to_prune:
+            del self._stats[graph_execution_state_id]
+            del self._cache_stats[graph_execution_state_id]
+
+        if len(to_prune) > 0:
+            logger.info(f"Pruned stale graph stats for {to_prune}.")

-    def log_stats(self):
-        completed = set()
-        errored = set()
-        for graph_id, _node_log in self._stats.items():
-            try:
-                current_graph_state = self._invoker.services.graph_execution_manager.get(graph_id)
-            except Exception:
-                errored.add(graph_id)
-                continue
-
-            if not current_graph_state.is_complete():
-                continue
-
-            total_time = 0
-            logger.info(f"Graph stats: {graph_id}")
-            logger.info(f"{'Node':>30} {'Calls':>7}{'Seconds':>9} {'VRAM Used':>10}")
-            for node_type, stats in self._stats[graph_id].nodes.items():
-                logger.info(f"{node_type:>30} {stats.calls:>4} {stats.time_used:7.3f}s {stats.max_vram:4.3f}G")
-                total_time += stats.time_used
-
-            cache_stats = self._cache_stats[graph_id]
-            hwm = cache_stats.high_watermark / GIG
-            tot = cache_stats.cache_size / GIG
-            loaded = sum(list(cache_stats.loaded_model_sizes.values())) / GIG
-
-            logger.info(f"TOTAL GRAPH EXECUTION TIME: {total_time:7.3f}s")
-            logger.info("RAM used by InvokeAI process: " + "%4.2fG" % self.ram_used + f" ({self.ram_changed:+5.3f}G)")
-            logger.info(f"RAM used to load models: {loaded:4.2f}G")
-            if torch.cuda.is_available():
-                logger.info("VRAM in use: " + "%4.3fG" % (torch.cuda.memory_allocated() / GIG))
-            logger.info("RAM cache statistics:")
-            logger.info(f"   Model cache hits: {cache_stats.hits}")
-            logger.info(f"   Model cache misses: {cache_stats.misses}")
-            logger.info(f"   Models cached: {cache_stats.in_cache}")
-            logger.info(f"   Models cleared from cache: {cache_stats.cleared}")
-            logger.info(f"   Cache high water mark: {hwm:4.2f}/{tot:4.2f}G")
-
-            completed.add(graph_id)
-
-        for graph_id in completed:
-            del self._stats[graph_id]
-            del self._cache_stats[graph_id]
-
-        for graph_id in errored:
-            del self._stats[graph_id]
-            del self._cache_stats[graph_id]
+    def reset_stats(self, graph_execution_state_id: str):
+        try:
+            del self._stats[graph_execution_state_id]
+            del self._cache_stats[graph_execution_state_id]
+        except KeyError as e:
+            raise GESStatsNotFoundError(
+                f"Attempted to clear statistics for unknown graph {graph_execution_state_id}: {e}."
+            ) from e
+
+    def get_stats(self, graph_execution_state_id: str) -> InvocationStatsSummary:
+        graph_stats_summary = self._get_graph_summary(graph_execution_state_id)
+        node_stats_summaries = self._get_node_summaries(graph_execution_state_id)
+        model_cache_stats_summary = self._get_model_cache_summary(graph_execution_state_id)
+        vram_usage_gb = torch.cuda.memory_allocated() / GB if torch.cuda.is_available() else None
+
+        return InvocationStatsSummary(
+            graph_stats=graph_stats_summary,
+            model_cache_stats=model_cache_stats_summary,
+            node_stats=node_stats_summaries,
+            vram_usage_gb=vram_usage_gb,
+        )
+
+    def log_stats(self, graph_execution_state_id: str) -> None:
+        stats = self.get_stats(graph_execution_state_id)
+        logger.info(str(stats))
+
+    def dump_stats(self, graph_execution_state_id: str, output_path: Path) -> None:
+        stats = self.get_stats(graph_execution_state_id)
+        with open(output_path, "w") as f:
+            f.write(json.dumps(stats.as_dict(), indent=2))
+
+    def _get_model_cache_summary(self, graph_execution_state_id: str) -> ModelCacheStatsSummary:
+        try:
+            cache_stats = self._cache_stats[graph_execution_state_id]
+        except KeyError as e:
+            raise GESStatsNotFoundError(
+                f"Attempted to get model cache statistics for unknown graph {graph_execution_state_id}: {e}."
+            ) from e
+
+        return ModelCacheStatsSummary(
+            cache_hits=cache_stats.hits,
+            cache_misses=cache_stats.misses,
+            high_water_mark_gb=cache_stats.high_watermark / GB,
+            cache_size_gb=cache_stats.cache_size / GB,
+            total_usage_gb=sum(list(cache_stats.loaded_model_sizes.values())) / GB,
+            models_cached=cache_stats.in_cache,
+            models_cleared=cache_stats.cleared,
+        )
+
+    def _get_graph_summary(self, graph_execution_state_id: str) -> GraphExecutionStatsSummary:
+        try:
+            graph_stats = self._stats[graph_execution_state_id]
+        except KeyError as e:
+            raise GESStatsNotFoundError(
+                f"Attempted to get graph statistics for unknown graph {graph_execution_state_id}: {e}."
+            ) from e
+
+        return graph_stats.get_graph_stats_summary(graph_execution_state_id)
+
+    def _get_node_summaries(self, graph_execution_state_id: str) -> list[NodeExecutionStatsSummary]:
+        try:
+            graph_stats = self._stats[graph_execution_state_id]
+        except KeyError as e:
+            raise GESStatsNotFoundError(
+                f"Attempted to get node statistics for unknown graph {graph_execution_state_id}: {e}."
+            ) from e
+
+        return graph_stats.get_node_stats_summaries()

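Because `collect_stats` is now a generator-based context manager, the caller reduces to a single `with` block, and the `finally` clause guarantees a `NodeExecutionStats` sample is recorded even if the invocation raises. A sketch of the intended call-site shape (assuming `stats` is an `InvocationStatsService` and `invocation`/`context` come from the running session):

with stats.collect_stats(invocation, graph_execution_state.id):
    outputs = invocation.invoke(context)  # timing and RAM/VRAM deltas are captured on exit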
View File

@@ -1,10 +1,8 @@
 from abc import ABC, abstractmethod
-from typing import Callable, Generic, Optional, TypeVar
+from typing import Callable, Generic, TypeVar

 from pydantic import BaseModel

-from invokeai.app.services.shared.pagination import PaginatedResults
-
 T = TypeVar("T", bound=BaseModel)
@@ -22,26 +20,26 @@ class ItemStorageABC(ABC, Generic[T]):

     @abstractmethod
     def get(self, item_id: str) -> T:
-        """Gets the item, parsing it into a Pydantic model"""
+        """
+        Gets the item.
+
+        :param item_id: the id of the item to get
+        :raises ItemNotFoundError: if the item is not found
+        """
         pass

     @abstractmethod
-    def get_raw(self, item_id: str) -> Optional[str]:
-        """Gets the raw item as a string, skipping Pydantic parsing"""
-        pass
-
-    @abstractmethod
     def set(self, item: T) -> None:
-        """Sets the item"""
+        """
+        Sets the item. The id will be extracted based on id_field.
+
+        :param item: the item to set
+        """
         pass

     @abstractmethod
-    def list(self, page: int = 0, per_page: int = 10) -> PaginatedResults[T]:
-        """Gets a paginated list of items"""
-        pass
-
-    @abstractmethod
-    def search(self, query: str, page: int = 0, per_page: int = 10) -> PaginatedResults[T]:
+    def delete(self, item_id: str) -> None:
+        """
+        Deletes the item, if it exists.
+        """
         pass

     def on_changed(self, on_changed: Callable[[T], None]) -> None:

View File

@@ -0,0 +1,5 @@
class ItemNotFoundError(KeyError):
"""Raised when an item is not found in storage"""
def __init__(self, item_id: str) -> None:
super().__init__(f"Item with id {item_id} not found")

View File

@@ -0,0 +1,52 @@
from collections import OrderedDict
from contextlib import suppress
from typing import Generic, TypeVar
from pydantic import BaseModel
from invokeai.app.services.item_storage.item_storage_base import ItemStorageABC
from invokeai.app.services.item_storage.item_storage_common import ItemNotFoundError
T = TypeVar("T", bound=BaseModel)
class ItemStorageMemory(ItemStorageABC[T], Generic[T]):
"""
Provides a simple in-memory storage for items, with a maximum number of items to store.
The storage uses the LRU strategy to evict items from storage when the max has been reached.
"""
def __init__(self, id_field: str = "id", max_items: int = 10) -> None:
super().__init__()
if max_items < 1:
raise ValueError("max_items must be at least 1")
if not id_field:
raise ValueError("id_field must not be empty")
self._id_field = id_field
self._items: OrderedDict[str, T] = OrderedDict()
self._max_items = max_items
def get(self, item_id: str) -> T:
# If the item exists, move it to the end of the OrderedDict.
item = self._items.pop(item_id, None)
if item is None:
raise ItemNotFoundError(item_id)
self._items[item_id] = item
return item
def set(self, item: T) -> None:
item_id = getattr(item, self._id_field)
if item_id in self._items:
# If item already exists, remove it and add it to the end
self._items.pop(item_id)
elif len(self._items) >= self._max_items:
# If cache is full, evict the least recently used item
self._items.popitem(last=False)
self._items[item_id] = item
self._on_changed(item)
def delete(self, item_id: str) -> None:
# This is a no-op if the item doesn't exist.
with suppress(KeyError):
del self._items[item_id]
self._on_deleted(item_id)

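A quick sketch of the LRU behaviour above, using a made-up pydantic model and ids:

from pydantic import BaseModel

class Session(BaseModel):
    id: str

storage = ItemStorageMemory[Session](id_field="id", max_items=2)
storage.set(Session(id="a"))
storage.set(Session(id="b"))
storage.get("a")               # touching "a" leaves "b" least recently used
storage.set(Session(id="c"))   # at capacity, so "b" is evicted
try:
    storage.get("b")
except ItemNotFoundError:
    print("b was evicted")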
View File

@@ -1,147 +0,0 @@
import sqlite3
import threading
from typing import Generic, Optional, TypeVar, get_args
from pydantic import BaseModel, TypeAdapter
from invokeai.app.services.shared.pagination import PaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from .item_storage_base import ItemStorageABC
T = TypeVar("T", bound=BaseModel)
class SqliteItemStorage(ItemStorageABC, Generic[T]):
_table_name: str
_conn: sqlite3.Connection
_cursor: sqlite3.Cursor
_id_field: str
_lock: threading.RLock
_validator: Optional[TypeAdapter[T]]
def __init__(self, db: SqliteDatabase, table_name: str, id_field: str = "id"):
super().__init__()
self._lock = db.lock
self._conn = db.conn
self._table_name = table_name
self._id_field = id_field # TODO: validate that T has this field
self._cursor = self._conn.cursor()
self._validator: Optional[TypeAdapter[T]] = None
self._create_table()
def _create_table(self):
try:
self._lock.acquire()
self._cursor.execute(
f"""CREATE TABLE IF NOT EXISTS {self._table_name} (
item TEXT,
id TEXT GENERATED ALWAYS AS (json_extract(item, '$.{self._id_field}')) VIRTUAL NOT NULL);"""
)
self._cursor.execute(
f"""CREATE UNIQUE INDEX IF NOT EXISTS {self._table_name}_id ON {self._table_name}(id);"""
)
finally:
self._lock.release()
def _parse_item(self, item: str) -> T:
if self._validator is None:
"""
We don't get access to `__orig_class__` in `__init__()`, and we need this before start(), so
we can create it when it is first needed instead.
__orig_class__ is technically an implementation detail of the typing module, not a supported API
"""
self._validator = TypeAdapter(get_args(self.__orig_class__)[0]) # type: ignore [attr-defined]
return self._validator.validate_json(item)
def set(self, item: T):
try:
self._lock.acquire()
self._cursor.execute(
f"""INSERT OR REPLACE INTO {self._table_name} (item) VALUES (?);""",
(item.model_dump_json(warnings=False, exclude_none=True),),
)
self._conn.commit()
finally:
self._lock.release()
self._on_changed(item)
def get(self, id: str) -> Optional[T]:
try:
self._lock.acquire()
self._cursor.execute(f"""SELECT item FROM {self._table_name} WHERE id = ?;""", (str(id),))
result = self._cursor.fetchone()
finally:
self._lock.release()
if not result:
return None
return self._parse_item(result[0])
def get_raw(self, id: str) -> Optional[str]:
try:
self._lock.acquire()
self._cursor.execute(f"""SELECT item FROM {self._table_name} WHERE id = ?;""", (str(id),))
result = self._cursor.fetchone()
finally:
self._lock.release()
if not result:
return None
return result[0]
def delete(self, id: str):
try:
self._lock.acquire()
self._cursor.execute(f"""DELETE FROM {self._table_name} WHERE id = ?;""", (str(id),))
self._conn.commit()
finally:
self._lock.release()
self._on_deleted(id)
def list(self, page: int = 0, per_page: int = 10) -> PaginatedResults[T]:
try:
self._lock.acquire()
self._cursor.execute(
f"""SELECT item FROM {self._table_name} LIMIT ? OFFSET ?;""",
(per_page, page * per_page),
)
result = self._cursor.fetchall()
items = [self._parse_item(r[0]) for r in result]
self._cursor.execute(f"""SELECT count(*) FROM {self._table_name};""")
count = self._cursor.fetchone()[0]
finally:
self._lock.release()
pageCount = int(count / per_page) + 1
return PaginatedResults[T](items=items, page=page, pages=pageCount, per_page=per_page, total=count)
def search(self, query: str, page: int = 0, per_page: int = 10) -> PaginatedResults[T]:
try:
self._lock.acquire()
self._cursor.execute(
f"""SELECT item FROM {self._table_name} WHERE item LIKE ? LIMIT ? OFFSET ?;""",
(f"%{query}%", per_page, page * per_page),
)
result = self._cursor.fetchall()
items = [self._parse_item(r[0]) for r in result]
self._cursor.execute(
f"""SELECT count(*) FROM {self._table_name} WHERE item LIKE ?;""",
(f"%{query}%",),
)
count = self._cursor.fetchone()[0]
finally:
self._lock.release()
pageCount = int(count / per_page) + 1
return PaginatedResults[T](items=items, page=page, pages=pageCount, per_page=per_page, total=count)

View File

@@ -1,6 +1,7 @@
 """Initialization file for model install service package."""

 from .model_install_base import (
+    CivitaiModelSource,
     HFModelSource,
     InstallStatus,
     LocalModelSource,
@@ -22,4 +23,5 @@ __all__ = [
     "LocalModelSource",
     "HFModelSource",
     "URLModelSource",
+    "CivitaiModelSource",
 ]

View File

@@ -1,27 +1,42 @@
+# Copyright 2023 Lincoln D. Stein and the InvokeAI development team
+"""Baseclass definitions for the model installer."""
+
 import re
 import traceback
 from abc import ABC, abstractmethod
 from enum import Enum
 from pathlib import Path
-from typing import Any, Dict, List, Literal, Optional, Union
+from typing import Any, Dict, List, Literal, Optional, Set, Union

-from pydantic import BaseModel, Field, field_validator
+from pydantic import BaseModel, Field, PrivateAttr, field_validator
 from pydantic.networks import AnyHttpUrl
 from typing_extensions import Annotated

 from invokeai.app.services.config import InvokeAIAppConfig
+from invokeai.app.services.download import DownloadJob, DownloadQueueServiceBase
 from invokeai.app.services.events import EventServiceBase
+from invokeai.app.services.invoker import Invoker
 from invokeai.app.services.model_records import ModelRecordServiceBase
-from invokeai.backend.model_manager import AnyModelConfig
+from invokeai.backend.model_manager import AnyModelConfig, ModelRepoVariant
+from invokeai.backend.model_manager.metadata import AnyModelRepoMetadata, ModelMetadataStore


 class InstallStatus(str, Enum):
     """State of an install job running in the background."""

     WAITING = "waiting"  # waiting to be dequeued
+    DOWNLOADING = "downloading"  # downloading of model files in process
     RUNNING = "running"  # being processed
     COMPLETED = "completed"  # finished running
     ERROR = "error"  # terminated with an error message
+    CANCELLED = "cancelled"  # terminated with an error message
+
+
+class ModelInstallPart(BaseModel):
+    url: AnyHttpUrl
+    path: Path
+    bytes: int = 0
+    total_bytes: int = 0


 class UnknownInstallJobException(Exception):
@@ -74,12 +89,31 @@ class LocalModelSource(StringLikeSource):
         return Path(self.path).as_posix()


+class CivitaiModelSource(StringLikeSource):
+    """A Civitai version id, with optional variant and access token."""
+
+    version_id: int
+    variant: Optional[ModelRepoVariant] = None
+    access_token: Optional[str] = None
+    type: Literal["civitai"] = "civitai"
+
+    def __str__(self) -> str:
+        """Return string version of repoid when string rep needed."""
+        base: str = str(self.version_id)
+        base += f" ({self.variant})" if self.variant else ""
+        return base
+
+
 class HFModelSource(StringLikeSource):
-    """A HuggingFace repo_id, with optional variant and sub-folder."""
+    """
+    A HuggingFace repo_id with optional variant, sub-folder and access token.
+    Note that the variant option, if not provided to the constructor, will default to fp16, which is
+    what people (almost) always want.
+    """

     repo_id: str
-    variant: Optional[str] = None
-    subfolder: Optional[str | Path] = None
+    variant: Optional[ModelRepoVariant] = ModelRepoVariant.FP16
+    subfolder: Optional[Path] = None
     access_token: Optional[str] = None
     type: Literal["hf"] = "hf"

@@ -103,19 +137,22 @@ class URLModelSource(StringLikeSource):

     url: AnyHttpUrl
     access_token: Optional[str] = None
-    type: Literal["generic_url"] = "generic_url"
+    type: Literal["url"] = "url"

     def __str__(self) -> str:
         """Return string version of the url when string rep needed."""
         return str(self.url)


-ModelSource = Annotated[Union[LocalModelSource, HFModelSource, URLModelSource], Field(discriminator="type")]
+ModelSource = Annotated[
+    Union[LocalModelSource, HFModelSource, CivitaiModelSource, URLModelSource], Field(discriminator="type")
+]


 class ModelInstallJob(BaseModel):
     """Object that tracks the current status of an install request."""

+    id: int = Field(description="Unique ID for this job")
     status: InstallStatus = Field(default=InstallStatus.WAITING, description="Current status of install process")
     config_in: Dict[str, Any] = Field(
         default_factory=dict, description="Configuration information (e.g. 'description') to apply to model."
@@ -128,15 +165,74 @@ class ModelInstallJob(BaseModel):
     )
     source: ModelSource = Field(description="Source (URL, repo_id, or local path) of model")
     local_path: Path = Field(description="Path to locally-downloaded model; may be the same as the source")
-    error_type: Optional[str] = Field(default=None, description="Class name of the exception that led to status==ERROR")
-    error: Optional[str] = Field(default=None, description="Error traceback")  # noqa #501
+    bytes: int = Field(
+        default=0, description="For a remote model, the number of bytes downloaded so far (may not be available)"
+    )
+    total_bytes: int = Field(default=0, description="Total size of the model to be installed")
+    source_metadata: Optional[AnyModelRepoMetadata] = Field(
+        default=None, description="Metadata provided by the model source"
+    )
+    download_parts: Set[DownloadJob] = Field(
+        default_factory=set, description="Download jobs contributing to this install"
+    )
+    # internal flags and transitory settings
+    _install_tmpdir: Optional[Path] = PrivateAttr(default=None)
+    _exception: Optional[Exception] = PrivateAttr(default=None)

     def set_error(self, e: Exception) -> None:
         """Record the error and traceback from an exception."""
-        self.error_type = e.__class__.__name__
-        self.error = "".join(traceback.format_exception(e))
+        self._exception = e
         self.status = InstallStatus.ERROR

+    def cancel(self) -> None:
+        """Call to cancel the job."""
+        self.status = InstallStatus.CANCELLED
+
+    @property
+    def error_type(self) -> Optional[str]:
+        """Class name of the exception that led to status==ERROR."""
+        return self._exception.__class__.__name__ if self._exception else None
+
+    @property
+    def error(self) -> Optional[str]:
+        """Error traceback."""
+        return "".join(traceback.format_exception(self._exception)) if self._exception else None
+
+    @property
+    def cancelled(self) -> bool:
+        """Return true if job was cancelled."""
+        return self.status == InstallStatus.CANCELLED
+
+    @property
+    def errored(self) -> bool:
+        """Return true if job has errored."""
+        return self.status == InstallStatus.ERROR
+
+    @property
+    def waiting(self) -> bool:
+        """Return true if job is waiting to run."""
+        return self.status == InstallStatus.WAITING
+
+    @property
+    def downloading(self) -> bool:
+        """Return true if job is downloading."""
+        return self.status == InstallStatus.DOWNLOADING
+
+    @property
+    def running(self) -> bool:
+        """Return true if job is running."""
+        return self.status == InstallStatus.RUNNING
+
+    @property
+    def complete(self) -> bool:
+        """Return true if job completed without errors."""
+        return self.status == InstallStatus.COMPLETED
+
+    @property
+    def in_terminal_state(self) -> bool:
+        """Return true if job is in a terminal state."""
+        return self.status in [InstallStatus.COMPLETED, InstallStatus.ERROR, InstallStatus.CANCELLED]


 class ModelInstallServiceBase(ABC):
     """Abstract base class for InvokeAI model installation."""
@@ -146,6 +242,8 @@ class ModelInstallServiceBase(ABC):
         self,
         app_config: InvokeAIAppConfig,
         record_store: ModelRecordServiceBase,
+        download_queue: DownloadQueueServiceBase,
+        metadata_store: ModelMetadataStore,
         event_bus: Optional["EventServiceBase"] = None,
     ):
         """
@@ -156,12 +254,14 @@ class ModelInstallServiceBase(ABC):
         :param event_bus: InvokeAI event bus for reporting events to.
         """

+    # make the invoker optional here because we don't need it and it
+    # makes the installer harder to use outside the web app
     @abstractmethod
-    def start(self, *args: Any, **kwarg: Any) -> None:
+    def start(self, invoker: Optional[Invoker] = None) -> None:
         """Start the installer service."""

     @abstractmethod
-    def stop(self, *args: Any, **kwarg: Any) -> None:
+    def stop(self, invoker: Optional[Invoker] = None) -> None:
         """Stop the model install service. After this the object can be safely deleted."""

     @property
@@ -264,9 +364,13 @@ class ModelInstallServiceBase(ABC):
     """

     @abstractmethod
-    def get_job(self, source: ModelSource) -> List[ModelInstallJob]:
+    def get_job_by_source(self, source: ModelSource) -> List[ModelInstallJob]:
         """Return the ModelInstallJob(s) corresponding to the provided source."""

+    @abstractmethod
+    def get_job_by_id(self, id: int) -> ModelInstallJob:
+        """Return the ModelInstallJob corresponding to the provided id. Raises ValueError if no job has that ID."""
+
     @abstractmethod
     def list_jobs(self) -> List[ModelInstallJob]:  # noqa D102
         """
@@ -278,16 +382,19 @@ class ModelInstallServiceBase(ABC):
         """Prune all completed and errored jobs."""

     @abstractmethod
-    def wait_for_installs(self) -> List[ModelInstallJob]:
+    def cancel_job(self, job: ModelInstallJob) -> None:
+        """Cancel the indicated job."""
+
+    @abstractmethod
+    def wait_for_installs(self, timeout: int = 0) -> List[ModelInstallJob]:
         """
         Wait for all pending installs to complete.

         This will block until all pending installs have
-        completed, been cancelled, or errored out. It will
-        block indefinitely if one or more jobs are in the
-        paused state.
+        completed, been cancelled, or errored out.

-        It will return the current list of jobs.
+        :param timeout: Wait up to indicated number of seconds. Raise an Exception('timeout') if
+        installs do not complete within the indicated time.
         """

     @abstractmethod

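With `type` as the pydantic discriminator, each source kind round-trips cleanly through the API, and `HFModelSource` now defaults to the fp16 variant. A sketch of constructing the new source types (the ids are placeholders):

civitai = CivitaiModelSource(version_id=123456)  # type="civitai"
hf = HFModelSource(repo_id="org/model")          # variant defaults to ModelRepoVariant.FP16
print(str(civitai))  # "123456"

A caller that enqueues one of these can then poll the returned `ModelInstallJob` with the new status properties, e.g. loop until `job.in_terminal_state` before reading `job.error` or the install result.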
View File

@ -1,60 +1,72 @@
"""Model installation class.""" """Model installation class."""
import os
import re
import threading import threading
import time
from hashlib import sha256 from hashlib import sha256
from logging import Logger
from pathlib import Path from pathlib import Path
from queue import Queue from queue import Empty, Queue
from random import randbytes from random import randbytes
from shutil import copyfile, copytree, move, rmtree from shutil import copyfile, copytree, move, rmtree
from tempfile import mkdtemp
from typing import Any, Dict, List, Optional, Set, Union from typing import Any, Dict, List, Optional, Set, Union
from huggingface_hub import HfFolder
from pydantic.networks import AnyHttpUrl
from requests import Session
from invokeai.app.services.config import InvokeAIAppConfig from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.events import EventServiceBase from invokeai.app.services.download import DownloadJob, DownloadQueueServiceBase
from invokeai.app.services.model_records import DuplicateModelException, ModelRecordServiceBase, UnknownModelException from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.model_records import DuplicateModelException, ModelRecordServiceBase, ModelRecordServiceSQL
from invokeai.backend.model_manager.config import ( from invokeai.backend.model_manager.config import (
AnyModelConfig, AnyModelConfig,
BaseModelType, BaseModelType,
InvalidModelConfigException, InvalidModelConfigException,
ModelRepoVariant,
ModelType, ModelType,
) )
from invokeai.backend.model_manager.hash import FastModelHash from invokeai.backend.model_manager.hash import FastModelHash
from invokeai.backend.model_manager.metadata import (
AnyModelRepoMetadata,
CivitaiMetadataFetch,
HuggingFaceMetadataFetch,
ModelMetadataStore,
ModelMetadataWithFiles,
RemoteModelFile,
)
from invokeai.backend.model_manager.probe import ModelProbe from invokeai.backend.model_manager.probe import ModelProbe
from invokeai.backend.model_manager.search import ModelSearch from invokeai.backend.model_manager.search import ModelSearch
from invokeai.backend.util import Chdir, InvokeAILogger from invokeai.backend.util import Chdir, InvokeAILogger
from invokeai.backend.util.devices import choose_precision, choose_torch_device
from .model_install_base import ( from .model_install_base import (
CivitaiModelSource,
HFModelSource,
InstallStatus, InstallStatus,
LocalModelSource, LocalModelSource,
ModelInstallJob, ModelInstallJob,
ModelInstallServiceBase, ModelInstallServiceBase,
ModelSource, ModelSource,
URLModelSource,
) )
# marker that the queue is done and that thread should exit TMPDIR_PREFIX = "tmpinstall_"
STOP_JOB = ModelInstallJob(
source=LocalModelSource(path="stop"),
local_path=Path("/dev/null"),
)
class ModelInstallService(ModelInstallServiceBase): class ModelInstallService(ModelInstallServiceBase):
"""class for InvokeAI model installation.""" """class for InvokeAI model installation."""
_app_config: InvokeAIAppConfig
_record_store: ModelRecordServiceBase
_event_bus: Optional[EventServiceBase] = None
_install_queue: Queue[ModelInstallJob]
_install_jobs: List[ModelInstallJob]
_logger: Logger
_cached_model_paths: Set[Path]
_models_installed: Set[str]
def __init__( def __init__(
self, self,
app_config: InvokeAIAppConfig, app_config: InvokeAIAppConfig,
record_store: ModelRecordServiceBase, record_store: ModelRecordServiceBase,
download_queue: DownloadQueueServiceBase,
metadata_store: Optional[ModelMetadataStore] = None,
event_bus: Optional[EventServiceBase] = None, event_bus: Optional[EventServiceBase] = None,
session: Optional[Session] = None,
): ):
""" """
Initialize the installer object. Initialize the installer object.
@ -67,10 +79,26 @@ class ModelInstallService(ModelInstallServiceBase):
self._record_store = record_store self._record_store = record_store
self._event_bus = event_bus self._event_bus = event_bus
self._logger = InvokeAILogger.get_logger(name=self.__class__.__name__) self._logger = InvokeAILogger.get_logger(name=self.__class__.__name__)
self._install_jobs = [] self._install_jobs: List[ModelInstallJob] = []
self._install_queue = Queue() self._install_queue: Queue[ModelInstallJob] = Queue()
self._cached_model_paths = set() self._cached_model_paths: Set[Path] = set()
self._models_installed = set() self._models_installed: Set[str] = set()
self._lock = threading.Lock()
self._stop_event = threading.Event()
self._downloads_changed_event = threading.Event()
self._download_queue = download_queue
self._download_cache: Dict[AnyHttpUrl, ModelInstallJob] = {}
self._running = False
self._session = session
self._next_job_id = 0
# There may not necessarily be a metadata store initialized
# so we create one and initialize it with the same sql database
# used by the record store service.
if metadata_store:
self._metadata_store = metadata_store
else:
assert isinstance(record_store, ModelRecordServiceSQL)
self._metadata_store = ModelMetadataStore(record_store.db)
    @property
    def app_config(self) -> InvokeAIAppConfig:  # noqa D102

@@ -84,69 +112,31 @@ class ModelInstallService(ModelInstallServiceBase):
    def event_bus(self) -> Optional[EventServiceBase]:  # noqa D102
        return self._event_bus

-    def start(self, *args: Any, **kwarg: Any) -> None:
-        """Start the installer thread."""
-        self._start_installer_thread()
-        self.sync_to_config()
+    # make the invoker optional here because we don't need it and it
+    # makes the installer harder to use outside the web app
+    def start(self, invoker: Optional[Invoker] = None) -> None:
+        """Start the installer thread."""
+        with self._lock:
+            if self._running:
+                raise Exception("Attempt to start the installer service twice")
+            self._start_installer_thread()
+            self._remove_dangling_install_dirs()
+            self.sync_to_config()

-    def stop(self, *args: Any, **kwarg: Any) -> None:
-        """Stop the installer thread; after this the object can be deleted and garbage collected."""
-        self._install_queue.put(STOP_JOB)
+    def stop(self, invoker: Optional[Invoker] = None) -> None:
+        """Stop the installer thread; after this the object can be deleted and garbage collected."""
+        with self._lock:
+            if not self._running:
+                raise Exception("Attempt to stop the install service before it was started")
+            self._stop_event.set()
+            with self._install_queue.mutex:
+                self._install_queue.queue.clear()  # get rid of pending jobs
+            active_jobs = [x for x in self.list_jobs() if x.running]
+            if active_jobs:
+                self._logger.warning("Waiting for active install job to complete")
+            self.wait_for_installs()
+            self._download_cache.clear()
+            self._running = False

-    def _start_installer_thread(self) -> None:
-        threading.Thread(target=self._install_next_item, daemon=True).start()
-
-    def _install_next_item(self) -> None:
-        done = False
-        while not done:
-            job = self._install_queue.get()
-            if job == STOP_JOB:
-                done = True
-                continue
-            assert job.local_path is not None
-            try:
-                self._signal_job_running(job)
-                if job.inplace:
-                    key = self.register_path(job.local_path, job.config_in)
-                else:
-                    key = self.install_path(job.local_path, job.config_in)
-                job.config_out = self.record_store.get_model(key)
-                self._signal_job_completed(job)
-            except (OSError, DuplicateModelException, InvalidModelConfigException) as excp:
-                self._signal_job_errored(job, excp)
-            finally:
-                self._install_queue.task_done()
-        self._logger.info("Install thread exiting")
-
-    def _signal_job_running(self, job: ModelInstallJob) -> None:
-        job.status = InstallStatus.RUNNING
-        self._logger.info(f"{job.source}: model installation started")
-        if self._event_bus:
-            self._event_bus.emit_model_install_started(str(job.source))
-
-    def _signal_job_completed(self, job: ModelInstallJob) -> None:
-        job.status = InstallStatus.COMPLETED
-        assert job.config_out
-        self._logger.info(
-            f"{job.source}: model installation completed. {job.local_path} registered key {job.config_out.key}"
-        )
-        if self._event_bus:
-            assert job.local_path is not None
-            assert job.config_out is not None
-            key = job.config_out.key
-            self._event_bus.emit_model_install_completed(str(job.source), key)
-
-    def _signal_job_errored(self, job: ModelInstallJob, excp: Exception) -> None:
-        job.set_error(excp)
-        self._logger.info(f"{job.source}: model installation encountered an exception: {job.error_type}")
-        if self._event_bus:
-            error_type = job.error_type
-            error = job.error
-            assert error_type is not None
-            assert error is not None
-            self._event_bus.emit_model_install_error(str(job.source), error_type, error)
    def register_path(
        self,

@@ -172,7 +162,12 @@ class ModelInstallService(ModelInstallServiceBase):
        info: AnyModelConfig = self._probe_model(Path(model_path), config)
        old_hash = info.original_hash
        dest_path = self.app_config.models_path / info.base.value / info.type.value / model_path.name
-        new_path = self._copy_model(model_path, dest_path)
+        try:
+            new_path = self._copy_model(model_path, dest_path)
+        except FileExistsError as excp:
+            raise DuplicateModelException(
+                f"A model named {model_path.name} is already installed at {dest_path.as_posix()}"
+            ) from excp
        new_hash = FastModelHash.hash(new_path)
        assert new_hash == old_hash, f"{model_path}: Model hash changed during installation, possibly corrupted."
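
install_path now translates the low-level FileExistsError into the domain-level DuplicateModelException, chaining with `from` so the original traceback survives. A self-contained sketch of the idiom (copy_model and the exclusive-create check are illustrative, not the real helper):

import os
import shutil
from pathlib import Path
from tempfile import mkdtemp

class DuplicateModelException(Exception):
    """Domain-level error for an already-installed model."""

def copy_model(src: Path, dest: Path) -> Path:
    try:
        # O_EXCL turns "destination already exists" into a FileExistsError.
        os.close(os.open(dest, os.O_CREAT | os.O_EXCL | os.O_WRONLY))
    except FileExistsError as excp:
        # Chain with `from excp` so the original traceback is preserved.
        raise DuplicateModelException(
            f"A model named {src.name} is already installed at {dest.as_posix()}"
        ) from excp
    return Path(shutil.copy2(src, dest))

root = Path(mkdtemp())
(root / "model.safetensors").write_bytes(b"weights")
installed = copy_model(root / "model.safetensors", root / "installed.safetensors")
try:
    copy_model(root / "model.safetensors", installed)  # second copy collides
except DuplicateModelException as e:
    assert isinstance(e.__cause__, FileExistsError)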
@@ -182,43 +177,56 @@ class ModelInstallService(ModelInstallServiceBase):
            info,
        )

-    def import_model(
-        self,
-        source: ModelSource,
-        config: Optional[Dict[str, Any]] = None,
-    ) -> ModelInstallJob:  # noqa D102
-        if not config:
-            config = {}
-
-        # Installing a local path
-        if isinstance(source, LocalModelSource) and Path(source.path).exists():  # a path that is already on disk
-            job = ModelInstallJob(
-                source=source,
-                config_in=config,
-                local_path=Path(source.path),
-            )
-            self._install_jobs.append(job)
-            self._install_queue.put(job)
-            return job
-        else:  # here is where we'd download a URL or repo_id. Implementation pending download queue.
-            raise UnknownModelException("File or directory not found")
+    def import_model(self, source: ModelSource, config: Optional[Dict[str, Any]] = None) -> ModelInstallJob:  # noqa D102
+        if isinstance(source, LocalModelSource):
+            install_job = self._import_local_model(source, config)
+            self._install_queue.put(install_job)  # synchronously install
+        elif isinstance(source, CivitaiModelSource):
+            install_job = self._import_from_civitai(source, config)
+        elif isinstance(source, HFModelSource):
+            install_job = self._import_from_hf(source, config)
+        elif isinstance(source, URLModelSource):
+            install_job = self._import_from_url(source, config)
+        else:
+            raise ValueError(f"Unsupported model source: '{type(source)}'")
+
+        self._install_jobs.append(install_job)
+        return install_job

    def list_jobs(self) -> List[ModelInstallJob]:  # noqa D102
        return self._install_jobs

-    def get_job(self, source: ModelSource) -> List[ModelInstallJob]:  # noqa D102
+    def get_job_by_source(self, source: ModelSource) -> List[ModelInstallJob]:  # noqa D102
        return [x for x in self._install_jobs if x.source == source]

+    def get_job_by_id(self, id: int) -> ModelInstallJob:  # noqa D102
+        jobs = [x for x in self._install_jobs if x.id == id]
+        if not jobs:
+            raise ValueError(f"No job with id {id} known")
+        assert len(jobs) == 1
+        assert isinstance(jobs[0], ModelInstallJob)
+        return jobs[0]
+
-    def wait_for_installs(self) -> List[ModelInstallJob]:  # noqa D102
+    def wait_for_installs(self, timeout: int = 0) -> List[ModelInstallJob]:  # noqa D102
+        """Block until all installation jobs are done."""
+        start = time.time()
+        while len(self._download_cache) > 0:
+            if self._downloads_changed_event.wait(timeout=5):  # in case we miss an event
+                self._downloads_changed_event.clear()
+            if timeout > 0 and time.time() - start > timeout:
+                raise Exception("Timeout exceeded")
        self._install_queue.join()
        return self._install_jobs

+    def cancel_job(self, job: ModelInstallJob) -> None:
+        """Cancel the indicated job."""
+        job.cancel()
+        with self._lock:
+            self._cancel_download_parts(job)
+
    def prune_jobs(self) -> None:
        """Prune all completed and errored jobs."""
-        unfinished_jobs = [
-            x for x in self._install_jobs if x.status not in [InstallStatus.COMPLETED, InstallStatus.ERROR]
-        ]
+        unfinished_jobs = [x for x in self._install_jobs if not x.in_terminal_state]
        self._install_jobs = unfinished_jobs
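
prune_jobs now leans on a single in_terminal_state predicate instead of enumerating statuses at the call site. A sketch of how such a property might be defined (the status set shown is an assumption, not the exact InvokeAI enum):

from enum import Enum

class InstallStatus(str, Enum):
    WAITING = "waiting"
    DOWNLOADING = "downloading"
    RUNNING = "running"
    COMPLETED = "completed"
    ERROR = "error"
    CANCELLED = "cancelled"

class Job:
    def __init__(self) -> None:
        self.status = InstallStatus.WAITING

    @property
    def in_terminal_state(self) -> bool:
        # One authoritative definition of "finished" keeps call sites like
        # prune_jobs() from drifting out of sync as statuses are added.
        return self.status in (InstallStatus.COMPLETED, InstallStatus.ERROR, InstallStatus.CANCELLED)

jobs = [Job(), Job()]
jobs[0].status = InstallStatus.COMPLETED
unfinished = [j for j in jobs if not j.in_terminal_state]
assert len(unfinished) == 1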
    def sync_to_config(self) -> None:

@@ -234,10 +242,108 @@ class ModelInstallService(ModelInstallServiceBase):
        self._cached_model_paths = {Path(x.path) for x in self.record_store.all_models()}
        callback = self._scan_install if install else self._scan_register
        search = ModelSearch(on_model_found=callback)
-        self._models_installed: Set[str] = set()
+        self._models_installed.clear()
        search.search(scan_dir)
        return list(self._models_installed)
def unregister(self, key: str) -> None: # noqa D102
self.record_store.del_model(key)
def delete(self, key: str) -> None: # noqa D102
"""Unregister the model. Delete its files only if they are within our models directory."""
model = self.record_store.get_model(key)
models_dir = self.app_config.models_path
model_path = models_dir / model.path
if model_path.is_relative_to(models_dir):
self.unconditionally_delete(key)
else:
self.unregister(key)
def unconditionally_delete(self, key: str) -> None: # noqa D102
model = self.record_store.get_model(key)
path = self.app_config.models_path / model.path
if path.is_dir():
rmtree(path)
else:
path.unlink()
self.unregister(key)
# --------------------------------------------------------------------------------------------
# Internal functions that manage the installer threads
# --------------------------------------------------------------------------------------------
def _start_installer_thread(self) -> None:
threading.Thread(target=self._install_next_item, daemon=True).start()
self._running = True
def _install_next_item(self) -> None:
done = False
while not done:
if self._stop_event.is_set():
done = True
continue
try:
job = self._install_queue.get(timeout=1)
except Empty:
continue
assert job.local_path is not None
try:
if job.cancelled:
self._signal_job_cancelled(job)
elif job.errored:
self._signal_job_errored(job)
elif (
job.waiting or job.downloading
): # local jobs will be in waiting state, remote jobs will be downloading state
job.total_bytes = self._stat_size(job.local_path)
job.bytes = job.total_bytes
self._signal_job_running(job)
if job.inplace:
key = self.register_path(job.local_path, job.config_in)
else:
key = self.install_path(job.local_path, job.config_in)
job.config_out = self.record_store.get_model(key)
# enter the metadata, if there is any
if job.source_metadata:
self._metadata_store.add_metadata(key, job.source_metadata)
self._signal_job_completed(job)
except InvalidModelConfigException as excp:
if any(x.content_type is not None and "text/html" in x.content_type for x in job.download_parts):
job.set_error(
InvalidModelConfigException(
f"At least one file in {job.local_path} is an HTML page, not a model. This can happen when an access token is required to download."
)
)
else:
job.set_error(excp)
self._signal_job_errored(job)
except (OSError, DuplicateModelException) as excp:
job.set_error(excp)
self._signal_job_errored(job)
finally:
# if this is an install of a remote file, then clean up the temporary directory
if job._install_tmpdir is not None:
rmtree(job._install_tmpdir)
self._install_queue.task_done()
self._logger.info("Install thread exiting")
# --------------------------------------------------------------------------------------------
# Internal functions that manage the models directory
# --------------------------------------------------------------------------------------------
def _remove_dangling_install_dirs(self) -> None:
"""Remove leftover tmpdirs from aborted installs."""
path = self._app_config.models_path
for tmpdir in path.glob(f"{TMPDIR_PREFIX}*"):
self._logger.info(f"Removing dangling temporary directory {tmpdir}")
rmtree(tmpdir)
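
Temporary install directories carry the TMPDIR_PREFIX so that anything left behind by a crash can be swept on the next start. A runnable stdlib sketch of the same mkdtemp/glob pattern:

from pathlib import Path
from shutil import rmtree
from tempfile import mkdtemp

TMPDIR_PREFIX = "tmpinstall_"

def make_install_tmpdir(models_path: Path) -> Path:
    # Creating the tmpdir inside models_path keeps the final move on the
    # same filesystem (a cheap rename instead of a copy).
    return Path(mkdtemp(dir=models_path, prefix=TMPDIR_PREFIX))

def remove_dangling_install_dirs(models_path: Path) -> None:
    for tmpdir in models_path.glob(f"{TMPDIR_PREFIX}*"):
        rmtree(tmpdir)

root = Path(mkdtemp())          # stand-in for the models directory
leftover = make_install_tmpdir(root)
remove_dangling_install_dirs(root)
assert not leftover.exists()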
    def _scan_models_directory(self) -> None:
        """
        Scan the models directory for new and missing models.

@@ -320,28 +426,6 @@ class ModelInstallService(ModelInstallServiceBase):
            pass
        return True
-    def unregister(self, key: str) -> None:  # noqa D102
-        self.record_store.del_model(key)
-
-    def delete(self, key: str) -> None:  # noqa D102
-        """Unregister the model. Delete its files only if they are within our models directory."""
-        model = self.record_store.get_model(key)
-        models_dir = self.app_config.models_path
-        model_path = models_dir / model.path
-        if model_path.is_relative_to(models_dir):
-            self.unconditionally_delete(key)
-        else:
-            self.unregister(key)
-
-    def unconditionally_delete(self, key: str) -> None:  # noqa D102
-        model = self.record_store.get_model(key)
-        path = self.app_config.models_path / model.path
-        if path.is_dir():
-            rmtree(path)
-        else:
-            path.unlink()
-        self.unregister(key)
    def _copy_model(self, old_path: Path, new_path: Path) -> Path:
        if old_path == new_path:
            return old_path

@@ -397,3 +481,280 @@ class ModelInstallService(ModelInstallServiceBase):
        info.config = legacy_conf.relative_to(self.app_config.root_dir).as_posix()
        self.record_store.add_model(key, info)
        return key
def _next_id(self) -> int:
with self._lock:
id = self._next_job_id
self._next_job_id += 1
return id
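
_next_id hands out job ids under the service lock. A sketch of that counter, plus the stdlib alternative; note that next() on itertools.count is atomic only as a CPython/GIL implementation detail, so the explicit lock is the portable choice:

import itertools
import threading

class LockedCounter:
    """Mirrors the diff's approach: an explicit lock around a plain int."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._next_id = 0

    def next_id(self) -> int:
        with self._lock:
            value = self._next_id
            self._next_id += 1
            return value

counter = LockedCounter()
assert counter.next_id() == 0 and counter.next_id() == 1

# Stdlib alternative: next() on itertools.count is atomic under CPython's
# GIL, though that is an implementation detail rather than a guarantee.
ids = itertools.count()
assert next(ids) == 0 and next(ids) == 1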
@staticmethod
def _guess_variant() -> ModelRepoVariant:
"""Guess the best HuggingFace variant type to download."""
precision = choose_precision(choose_torch_device())
return ModelRepoVariant.FP16 if precision == "float16" else ModelRepoVariant.DEFAULT
def _import_local_model(self, source: LocalModelSource, config: Optional[Dict[str, Any]]) -> ModelInstallJob:
return ModelInstallJob(
id=self._next_id(),
source=source,
config_in=config or {},
local_path=Path(source.path),
inplace=source.inplace,
)
def _import_from_civitai(self, source: CivitaiModelSource, config: Optional[Dict[str, Any]]) -> ModelInstallJob:
if not source.access_token:
self._logger.info("No Civitai access token provided; some models may not be downloadable.")
metadata = CivitaiMetadataFetch(self._session).from_id(str(source.version_id))
assert isinstance(metadata, ModelMetadataWithFiles)
remote_files = metadata.download_urls(session=self._session)
return self._import_remote_model(source=source, config=config, metadata=metadata, remote_files=remote_files)
def _import_from_hf(self, source: HFModelSource, config: Optional[Dict[str, Any]]) -> ModelInstallJob:
# Add user's cached access token to HuggingFace requests
source.access_token = source.access_token or HfFolder.get_token()
if not source.access_token:
self._logger.info("No HuggingFace access token present; some models may not be downloadable.")
metadata = HuggingFaceMetadataFetch(self._session).from_id(source.repo_id)
assert isinstance(metadata, ModelMetadataWithFiles)
remote_files = metadata.download_urls(
variant=source.variant or self._guess_variant(),
subfolder=source.subfolder,
session=self._session,
)
return self._import_remote_model(
source=source,
config=config,
remote_files=remote_files,
metadata=metadata,
)
def _import_from_url(self, source: URLModelSource, config: Optional[Dict[str, Any]]) -> ModelInstallJob:
# URLs from Civitai or HuggingFace will be handled specially
url_patterns = {
r"^https?://civitai.com/": CivitaiMetadataFetch,
r"^https?://huggingface.co/[^/]+/[^/]+$": HuggingFaceMetadataFetch,
}
metadata = None
for pattern, fetcher in url_patterns.items():
if re.match(pattern, str(source.url), re.IGNORECASE):
metadata = fetcher(self._session).from_url(source.url)
break
self._logger.debug(f"metadata={metadata}")
if metadata and isinstance(metadata, ModelMetadataWithFiles):
remote_files = metadata.download_urls(session=self._session)
else:
remote_files = [RemoteModelFile(url=source.url, path=Path("."), size=0)]
return self._import_remote_model(
source=source,
config=config,
metadata=metadata,
remote_files=remote_files,
)
def _import_remote_model(
self,
source: ModelSource,
remote_files: List[RemoteModelFile],
metadata: Optional[AnyModelRepoMetadata],
config: Optional[Dict[str, Any]],
) -> ModelInstallJob:
# TODO: Replace with tempfile.tmpdir() when multithreading is cleaned up.
# Currently the tmpdir isn't automatically removed at exit because it is
# being held in a daemon thread.
tmpdir = Path(
mkdtemp(
dir=self._app_config.models_path,
prefix=TMPDIR_PREFIX,
)
)
install_job = ModelInstallJob(
id=self._next_id(),
source=source,
config_in=config or {},
source_metadata=metadata,
local_path=tmpdir, # local path may change once the download has started due to content-disposition handling
bytes=0,
total_bytes=0,
)
# we remember the path up to the top of the tmpdir so that it may be
# removed safely at the end of the install process.
install_job._install_tmpdir = tmpdir
assert install_job.total_bytes is not None # to avoid type checking complaints in the loop below
self._logger.info(f"Queuing {source} for downloading")
self._logger.debug(f"remote_files={remote_files}")
for model_file in remote_files:
url = model_file.url
path = model_file.path
self._logger.info(f"Downloading {url} => {path}")
install_job.total_bytes += model_file.size
assert hasattr(source, "access_token")
dest = tmpdir / path.parent
dest.mkdir(parents=True, exist_ok=True)
download_job = DownloadJob(
source=url,
dest=dest,
access_token=source.access_token,
)
self._download_cache[download_job.source] = install_job # matches a download job to an install job
install_job.download_parts.add(download_job)
self._download_queue.submit_download_job(
download_job,
on_start=self._download_started_callback,
on_progress=self._download_progress_callback,
on_complete=self._download_complete_callback,
on_error=self._download_error_callback,
on_cancelled=self._download_cancelled_callback,
)
return install_job
def _stat_size(self, path: Path) -> int:
size = 0
if path.is_file():
size = path.stat().st_size
elif path.is_dir():
for root, _, files in os.walk(path):
size += sum(self._stat_size(Path(root, x)) for x in files)
return size
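
_stat_size sums file sizes recursively so that a model folder reports its full on-disk footprint. A simplified, runnable version of the helper with a quick self-check:

import os
from pathlib import Path
from tempfile import mkdtemp

def stat_size(path: Path) -> int:
    # Files report st_size directly; directories sum every file underneath.
    if path.is_file():
        return path.stat().st_size
    size = 0
    if path.is_dir():
        for root, _, files in os.walk(path):
            size += sum(Path(root, name).stat().st_size for name in files)
    return size

model_dir = Path(mkdtemp())
(model_dir / "weights.bin").write_bytes(b"x" * 10)
(model_dir / "sub").mkdir()
(model_dir / "sub" / "config.json").write_bytes(b"x" * 5)
assert stat_size(model_dir) == 15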
# ------------------------------------------------------------------
# Callbacks are executed by the download queue in a separate thread
# ------------------------------------------------------------------
def _download_started_callback(self, download_job: DownloadJob) -> None:
self._logger.info(f"{download_job.source}: model download started")
with self._lock:
install_job = self._download_cache[download_job.source]
install_job.status = InstallStatus.DOWNLOADING
assert download_job.download_path
if install_job.local_path == install_job._install_tmpdir:
partial_path = download_job.download_path.relative_to(install_job._install_tmpdir)
dest_name = partial_path.parts[0]
install_job.local_path = install_job._install_tmpdir / dest_name
# Update the total bytes count for remote sources.
if not install_job.total_bytes:
install_job.total_bytes = sum(x.total_bytes for x in install_job.download_parts)
def _download_progress_callback(self, download_job: DownloadJob) -> None:
with self._lock:
install_job = self._download_cache[download_job.source]
if install_job.cancelled: # This catches the case in which the caller directly calls job.cancel()
self._cancel_download_parts(install_job)
else:
# update sizes
install_job.bytes = sum(x.bytes for x in install_job.download_parts)
self._signal_job_downloading(install_job)
def _download_complete_callback(self, download_job: DownloadJob) -> None:
with self._lock:
install_job = self._download_cache[download_job.source]
self._download_cache.pop(download_job.source, None)
# are there any more active jobs left in this task?
if all(x.complete for x in install_job.download_parts):
# now enqueue job for actual installation into the models directory
self._install_queue.put(install_job)
# Let other threads know that the number of downloads has changed
self._downloads_changed_event.set()
def _download_error_callback(self, download_job: DownloadJob, excp: Optional[Exception] = None) -> None:
with self._lock:
install_job = self._download_cache.pop(download_job.source, None)
assert install_job is not None
assert excp is not None
install_job.set_error(excp)
self._logger.error(
f"Cancelling {install_job.source} due to an error while downloading {download_job.source}: {str(excp)}"
)
self._cancel_download_parts(install_job)
# Let other threads know that the number of downloads has changed
self._downloads_changed_event.set()
def _download_cancelled_callback(self, download_job: DownloadJob) -> None:
with self._lock:
install_job = self._download_cache.pop(download_job.source, None)
if not install_job:
return
self._downloads_changed_event.set()
self._logger.warning(f"Download {download_job.source} cancelled.")
# if install job has already registered an error, then do not replace its status with cancelled
if not install_job.errored:
install_job.cancel()
self._cancel_download_parts(install_job)
# Let other threads know that the number of downloads has changed
self._downloads_changed_event.set()
def _cancel_download_parts(self, install_job: ModelInstallJob) -> None:
        # on multipart downloads, _cancel_download_parts() will get called repeatedly from the download callbacks
# do not lock here because it gets called within a locked context
for s in install_job.download_parts:
self._download_queue.cancel_job(s)
if all(x.in_terminal_state for x in install_job.download_parts):
# When all parts have reached their terminal state, we finalize the job to clean up the temporary directory and other resources
self._install_queue.put(install_job)
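
Both the completion and cancellation callbacks enqueue the install job only once every download part has reached a terminal state. A condensed sketch of that last-part-wins check (single-threaded here; the diff serializes the callbacks with self._lock):

from queue import Queue
from typing import List

class Part:
    def __init__(self) -> None:
        self.in_terminal_state = False

class JobLike:
    def __init__(self, parts: List[Part]) -> None:
        self.download_parts = parts

install_queue: "Queue[JobLike]" = Queue()

def on_part_finished(job: JobLike, part: Part) -> None:
    part.in_terminal_state = True
    # Every part's callback runs this check, but only the callback for the
    # final part sees all() succeed, so the job is enqueued exactly once.
    if all(p.in_terminal_state for p in job.download_parts):
        install_queue.put(job)

job = JobLike([Part(), Part()])
on_part_finished(job, job.download_parts[0])
assert install_queue.empty()
on_part_finished(job, job.download_parts[1])
assert install_queue.qsize() == 1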
# ------------------------------------------------------------------------------------------------
# Internal methods that put events on the event bus
# ------------------------------------------------------------------------------------------------
def _signal_job_running(self, job: ModelInstallJob) -> None:
job.status = InstallStatus.RUNNING
self._logger.info(f"{job.source}: model installation started")
if self._event_bus:
self._event_bus.emit_model_install_running(str(job.source))
def _signal_job_downloading(self, job: ModelInstallJob) -> None:
if self._event_bus:
parts: List[Dict[str, str | int]] = [
{
"url": str(x.source),
"local_path": str(x.download_path),
"bytes": x.bytes,
"total_bytes": x.total_bytes,
}
for x in job.download_parts
]
assert job.bytes is not None
assert job.total_bytes is not None
self._event_bus.emit_model_install_downloading(
str(job.source),
local_path=job.local_path.as_posix(),
parts=parts,
bytes=job.bytes,
total_bytes=job.total_bytes,
)
def _signal_job_completed(self, job: ModelInstallJob) -> None:
job.status = InstallStatus.COMPLETED
assert job.config_out
self._logger.info(
f"{job.source}: model installation completed. {job.local_path} registered key {job.config_out.key}"
)
if self._event_bus:
assert job.local_path is not None
assert job.config_out is not None
key = job.config_out.key
self._event_bus.emit_model_install_completed(str(job.source), key)
def _signal_job_errored(self, job: ModelInstallJob) -> None:
self._logger.info(f"{job.source}: model installation encountered an exception: {job.error_type}\n{job.error}")
if self._event_bus:
error_type = job.error_type
error = job.error
assert error_type is not None
assert error is not None
self._event_bus.emit_model_install_error(str(job.source), error_type, error)
def _signal_job_cancelled(self, job: ModelInstallJob) -> None:
self._logger.info(f"{job.source}: model installation was cancelled")
if self._event_bus:
self._event_bus.emit_model_install_cancelled(str(job.source))
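
Every _signal_* helper logs first and emits on the event bus only if one is attached, which keeps the installer usable outside the web app. A minimal sketch of that optional-observer pattern (EventBus here is a stand-in, not InvokeAI's EventServiceBase):

from typing import Callable, Dict, List, Optional

class EventBus:
    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[str], None]]] = {}

    def on(self, event: str, handler: Callable[[str], None]) -> None:
        self._handlers.setdefault(event, []).append(handler)

    def emit(self, event: str, payload: str) -> None:
        for handler in self._handlers.get(event, []):
            handler(payload)

class Service:
    def __init__(self, event_bus: Optional[EventBus] = None) -> None:
        self._event_bus = event_bus

    def _signal_job_completed(self, source: str) -> None:
        # Guarding on the bus keeps the service usable headless (scripts,
        # tests) where no UI is listening.
        if self._event_bus:
            self._event_bus.emit("model_install_completed", source)

bus = EventBus()
bus.on("model_install_completed", lambda src: print(f"done: {src}"))
Service(bus)._signal_job_completed("https://example.com/model.safetensors")
Service(None)._signal_job_completed("ignored")  # no bus: silently skipped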


@@ -4,6 +4,8 @@ from .model_records_base import (  # noqa F401
    InvalidModelException,
    ModelRecordServiceBase,
    UnknownModelException,
+    ModelSummary,
+    ModelRecordOrderBy,
)
from .model_records_sql import ModelRecordServiceSQL  # noqa F401

@@ -13,4 +15,6 @@ __all__ = [
    "DuplicateModelException",
    "InvalidModelException",
    "UnknownModelException",
+    "ModelSummary",
+    "ModelRecordOrderBy",
]


@@ -4,10 +4,15 @@ Abstract base class for storing and retrieving model configuration records.
"""

from abc import ABC, abstractmethod
+from enum import Enum
from pathlib import Path
-from typing import List, Optional, Union
+from typing import Any, Dict, List, Optional, Set, Tuple, Union
+
+from pydantic import BaseModel, Field
+
+from invokeai.app.services.shared.pagination import PaginatedResults
from invokeai.backend.model_manager.config import AnyModelConfig, BaseModelType, ModelFormat, ModelType
+from invokeai.backend.model_manager.metadata import AnyModelRepoMetadata, ModelMetadataStore


class DuplicateModelException(Exception):

@@ -26,11 +31,33 @@ class ConfigFileVersionMismatchException(Exception):
    """Raised on an attempt to open a config with an incompatible version."""
class ModelRecordOrderBy(str, Enum):
"""The order in which to return model summaries."""
Default = "default" # order by type, base, format and name
Type = "type"
Base = "base"
Name = "name"
Format = "format"
class ModelSummary(BaseModel):
"""A short summary of models for UI listing purposes."""
key: str = Field(description="model key")
type: ModelType = Field(description="model type")
base: BaseModelType = Field(description="base model")
format: ModelFormat = Field(description="model format")
name: str = Field(description="model name")
description: str = Field(description="short description of model")
tags: Set[str] = Field(description="tags associated with model")
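
ModelSummary is an ordinary pydantic model, so a row dict from the summary query validates directly. A trimmed sketch, assuming pydantic v2 (the enum-typed fields are omitted here to keep it self-contained):

from typing import Set
from pydantic import BaseModel, Field

class ModelSummary(BaseModel):
    key: str = Field(description="model key")
    name: str = Field(description="model name")
    description: str = Field(description="short description of model")
    tags: Set[str] = Field(description="tags associated with model")

row = {"key": "abc123", "name": "dreamshaper", "description": "", "tags": ["sd-1", "photo"]}
summary = ModelSummary.model_validate(row)  # pydantic v2 API
assert summary.tags == {"sd-1", "photo"}    # the JSON list is coerced to a set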
class ModelRecordServiceBase(ABC):
    """Abstract base class for storage and retrieval of model configs."""

    @abstractmethod
-    def add_model(self, key: str, config: Union[dict, AnyModelConfig]) -> AnyModelConfig:
+    def add_model(self, key: str, config: Union[Dict[str, Any], AnyModelConfig]) -> AnyModelConfig:
        """
        Add a model to the database.

@@ -54,7 +81,7 @@ class ModelRecordServiceBase(ABC):
        pass

    @abstractmethod
-    def update_model(self, key: str, config: Union[dict, AnyModelConfig]) -> AnyModelConfig:
+    def update_model(self, key: str, config: Union[Dict[str, Any], AnyModelConfig]) -> AnyModelConfig:
        """
        Update the model, returning the updated version.

@@ -75,6 +102,47 @@ class ModelRecordServiceBase(ABC):
        """
        pass
@property
@abstractmethod
def metadata_store(self) -> ModelMetadataStore:
"""Return a ModelMetadataStore initialized on the same database."""
pass
@abstractmethod
def get_metadata(self, key: str) -> Optional[AnyModelRepoMetadata]:
"""
Retrieve metadata (if any) from when model was downloaded from a repo.
:param key: Model key
"""
pass
@abstractmethod
def list_all_metadata(self) -> List[Tuple[str, AnyModelRepoMetadata]]:
"""List metadata for all models that have it."""
pass
@abstractmethod
def search_by_metadata_tag(self, tags: Set[str]) -> List[AnyModelConfig]:
"""
Search model metadata for ones with all listed tags and return their corresponding configs.
:param tags: Set of tags to search for. All tags must be present.
"""
pass
@abstractmethod
def list_tags(self) -> Set[str]:
"""Return a unique set of all the model tags in the metadata database."""
pass
@abstractmethod
def list_models(
self, page: int = 0, per_page: int = 10, order_by: ModelRecordOrderBy = ModelRecordOrderBy.Default
) -> PaginatedResults[ModelSummary]:
"""Return a paginated summary listing of each model in the database."""
pass
    @abstractmethod
    def exists(self, key: str) -> bool:
        """


@@ -42,9 +42,11 @@ Typical usage:

import json
import sqlite3
+from math import ceil
from pathlib import Path
-from typing import List, Optional, Union
+from typing import Any, Dict, List, Optional, Set, Tuple, Union

+from invokeai.app.services.shared.pagination import PaginatedResults
from invokeai.backend.model_manager.config import (
    AnyModelConfig,
    BaseModelType,

@@ -52,11 +54,14 @@ from invokeai.backend.model_manager.config import (
    ModelFormat,
    ModelType,
)
+from invokeai.backend.model_manager.metadata import AnyModelRepoMetadata, ModelMetadataStore, UnknownMetadataException

from ..shared.sqlite.sqlite_database import SqliteDatabase
from .model_records_base import (
    DuplicateModelException,
+    ModelRecordOrderBy,
    ModelRecordServiceBase,
+    ModelSummary,
    UnknownModelException,
)

@@ -64,9 +69,6 @@ from .model_records_base import (

class ModelRecordServiceSQL(ModelRecordServiceBase):
    """Implementation of the ModelConfigStore ABC using a SQL database."""

-    _db: SqliteDatabase
-    _cursor: sqlite3.Cursor
-
    def __init__(self, db: SqliteDatabase):
        """
        Initialize a new object from preexisting sqlite3 connection and threading lock objects.

@@ -78,7 +80,12 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
        self._db = db
        self._cursor = self._db.conn.cursor()

+    @property
+    def db(self) -> SqliteDatabase:
+        """Return the underlying database."""
+        return self._db
+
-    def add_model(self, key: str, config: Union[dict, AnyModelConfig]) -> AnyModelConfig:
+    def add_model(self, key: str, config: Union[Dict[str, Any], AnyModelConfig]) -> AnyModelConfig:
        """
        Add a model to the database.

@@ -293,3 +300,95 @@ class ModelRecordServiceSQL(ModelRecordServiceBase):
        )
        results = [ModelConfigFactory.make_config(json.loads(x[0])) for x in self._cursor.fetchall()]
        return results
@property
def metadata_store(self) -> ModelMetadataStore:
"""Return a ModelMetadataStore initialized on the same database."""
return ModelMetadataStore(self._db)
def get_metadata(self, key: str) -> Optional[AnyModelRepoMetadata]:
"""
Retrieve metadata (if any) from when model was downloaded from a repo.
:param key: Model key
"""
store = self.metadata_store
try:
metadata = store.get_metadata(key)
return metadata
except UnknownMetadataException:
return None
def search_by_metadata_tag(self, tags: Set[str]) -> List[AnyModelConfig]:
"""
Search model metadata for ones with all listed tags and return their corresponding configs.
:param tags: Set of tags to search for. All tags must be present.
"""
store = ModelMetadataStore(self._db)
keys = store.search_by_tag(tags)
return [self.get_model(x) for x in keys]
def list_tags(self) -> Set[str]:
"""Return a unique set of all the model tags in the metadata database."""
store = ModelMetadataStore(self._db)
return store.list_tags()
def list_all_metadata(self) -> List[Tuple[str, AnyModelRepoMetadata]]:
"""List metadata for all models that have it."""
store = ModelMetadataStore(self._db)
return store.list_all_metadata()
def list_models(
self, page: int = 0, per_page: int = 10, order_by: ModelRecordOrderBy = ModelRecordOrderBy.Default
) -> PaginatedResults[ModelSummary]:
"""Return a paginated summary listing of each model in the database."""
ordering = {
ModelRecordOrderBy.Default: "a.type, a.base, a.format, a.name",
ModelRecordOrderBy.Type: "a.type",
ModelRecordOrderBy.Base: "a.base",
ModelRecordOrderBy.Name: "a.name",
ModelRecordOrderBy.Format: "a.format",
}
def _fixup(summary: Dict[str, str]) -> Dict[str, Union[str, int, Set[str]]]:
"""Fix up results so that there are no null values."""
result: Dict[str, Union[str, int, Set[str]]] = {}
for key, item in summary.items():
result[key] = item or ""
result["tags"] = set(json.loads(summary["tags"] or "[]"))
return result
# Lock so that the database isn't updated while we're doing the two queries.
with self._db.lock:
# query1: get the total number of model configs
self._cursor.execute(
"""--sql
select count(*) from model_config;
""",
(),
)
total = int(self._cursor.fetchone()[0])
# query2: fetch key fields from the join of model_config and model_metadata
self._cursor.execute(
f"""--sql
SELECT a.id as key, a.type, a.base, a.format, a.name,
json_extract(a.config, '$.description') as description,
json_extract(b.metadata, '$.tags') as tags
FROM model_config AS a
LEFT JOIN model_metadata AS b on a.id=b.id
ORDER BY {ordering[order_by]} -- using ? to bind doesn't work here for some reason
LIMIT ?
OFFSET ?;
""",
(
per_page,
page * per_page,
),
)
rows = self._cursor.fetchall()
items = [ModelSummary.model_validate(_fixup(dict(x))) for x in rows]
return PaginatedResults(
page=page, pages=ceil(total / per_page), per_page=per_page, total=total, items=items
)
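
list_models is two statements under one lock: a COUNT(*) for the total and a LIMIT/OFFSET page; the ORDER BY column is interpolated because SQL parameters cannot bind identifiers. A self-contained sqlite3 sketch of the same arithmetic:

import sqlite3
from math import ceil

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE model_config (id TEXT PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO model_config VALUES (?, ?)",
    [(f"key{i}", f"model{i}") for i in range(23)],
)

def list_models(page: int = 0, per_page: int = 10):
    cur = conn.cursor()
    total = cur.execute("SELECT count(*) FROM model_config").fetchone()[0]
    rows = cur.execute(
        "SELECT id, name FROM model_config ORDER BY name LIMIT ? OFFSET ?",
        (per_page, page * per_page),
    ).fetchall()
    return {"page": page, "pages": ceil(total / per_page), "total": total, "items": rows}

result = list_models(page=2, per_page=10)
assert result["pages"] == 3 and len(result["items"]) == 3  # last page holds the remainder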


@@ -2,7 +2,7 @@

import copy
import itertools
-from typing import Annotated, Any, Optional, Union, get_args, get_origin, get_type_hints
+from typing import Annotated, Any, Optional, TypeVar, Union, get_args, get_origin, get_type_hints

import networkx as nx
from pydantic import BaseModel, ConfigDict, field_validator, model_validator

@@ -141,6 +141,16 @@ def are_connections_compatible(
    return are_connection_types_compatible(from_node_field, to_node_field)
T = TypeVar("T")
def copydeep(obj: T) -> T:
"""Deep-copies an object. If it is a pydantic model, use the model's copy method."""
if isinstance(obj, BaseModel):
return obj.model_copy(deep=True)
return copy.deepcopy(obj)
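
The reason for copydeep: without it, two consumers of one node output share a single object, and a mutation made by one is visible to the other. A stdlib-only demonstration of the aliasing bug and the fix:

import copy

results = {"node_a": [1, 2, 3]}

# Aliasing bug: the consumer receives the very same list object.
consumer = results["node_a"]
consumer.append(99)                         # the "local" mutation...
assert results["node_a"] == [1, 2, 3, 99]   # ...is visible upstream

# Deep-copying the input gives each consumer an independent object.
results["node_a"] = [1, 2, 3]
consumer = copy.deepcopy(results["node_a"])
consumer.append(99)
assert results["node_a"] == [1, 2, 3]       # source output untouched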
class NodeAlreadyInGraphError(ValueError):
    pass

@@ -1118,17 +1128,22 @@ class GraphExecutionState(BaseModel):
    def _prepare_inputs(self, node: BaseInvocation):
        input_edges = [e for e in self.execution_graph.edges if e.destination.node_id == node.id]
+        # Inputs must be deep-copied, else if a node mutates the object, other nodes that get the same input
+        # will see the mutation.
        if isinstance(node, CollectInvocation):
            output_collection = [
-                getattr(self.results[edge.source.node_id], edge.source.field)
+                copydeep(getattr(self.results[edge.source.node_id], edge.source.field))
                for edge in input_edges
                if edge.destination.field == "item"
            ]
            node.collection = output_collection
        else:
            for edge in input_edges:
-                output_value = getattr(self.results[edge.source.node_id], edge.source.field)
-                setattr(node, edge.destination.field, output_value)
+                setattr(
+                    node,
+                    edge.destination.field,
+                    copydeep(getattr(self.results[edge.source.node_id], edge.source.field)),
+                )

    # TODO: Add API for modifying underlying graph that checks if the change will be valid given the current execution state
    def _is_edge_valid(self, edge: Edge) -> bool:


@@ -6,6 +6,8 @@ from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_1 import build_migration_1
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_2 import build_migration_2
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_3 import build_migration_3
+from invokeai.app.services.shared.sqlite_migrator.migrations.migration_4 import build_migration_4
+from invokeai.app.services.shared.sqlite_migrator.migrations.migration_5 import build_migration_5
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_impl import SqliteMigrator

@@ -28,7 +30,9 @@ def init_db(config: InvokeAIAppConfig, logger: Logger, image_files: ImageFileSto
    migrator = SqliteMigrator(db=db)
    migrator.register_migration(build_migration_1())
    migrator.register_migration(build_migration_2(image_files=image_files, logger=logger))
-    migrator.register_migration(build_migration_3())
+    migrator.register_migration(build_migration_3(app_config=config, logger=logger))
+    migrator.register_migration(build_migration_4())
+    migrator.register_migration(build_migration_5())
    migrator.run_migrations()
    return db
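
Migrations 4 and 5 slot into the existing chain: each step declares a from_version/to_version pair and the runner applies them in order. This is not InvokeAI's SqliteMigrator, just a minimal sketch of the versioned-step contract using SQLite's user_version pragma:

import sqlite3
from typing import Callable, List

class Migration:
    def __init__(self, from_version: int, to_version: int,
                 callback: Callable[[sqlite3.Cursor], None]) -> None:
        self.from_version = from_version
        self.to_version = to_version
        self.callback = callback

def run_migrations(conn: sqlite3.Connection, migrations: List[Migration]) -> int:
    cursor = conn.cursor()
    version = cursor.execute("PRAGMA user_version").fetchone()[0]
    for migration in sorted(migrations, key=lambda m: m.from_version):
        if migration.from_version == version:
            migration.callback(cursor)  # apply this step's schema changes
            version = migration.to_version
            # PRAGMA arguments cannot be bound as parameters; version is an int.
            cursor.execute(f"PRAGMA user_version = {version}")
    conn.commit()
    return version

conn = sqlite3.connect(":memory:")
chain = [
    Migration(0, 1, lambda c: c.execute("CREATE TABLE a (x)")),
    Migration(1, 2, lambda c: c.execute("CREATE TABLE b (x)")),
]
assert run_migrations(conn, chain) == 2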


@@ -11,8 +11,6 @@ from invokeai.app.services.workflow_records.workflow_records_common import (
    UnsafeWorkflowWithVersionValidator,
)

-from .util.migrate_yaml_config_1 import MigrateModelYamlToDb1

class Migration2Callback:
    def __init__(self, image_files: ImageFileStorageBase, logger: Logger):

@@ -25,8 +23,6 @@ class Migration2Callback:
        self._drop_old_workflow_tables(cursor)
        self._add_workflow_library(cursor)
        self._drop_model_manager_metadata(cursor)
-        self._recreate_model_config(cursor)
-        self._migrate_model_config_records(cursor)
        self._migrate_embedded_workflows(cursor)

    def _add_images_has_workflow(self, cursor: sqlite3.Cursor) -> None:

@@ -100,45 +96,6 @@ class Migration2Callback:
        """Drops the `model_manager_metadata` table."""
        cursor.execute("DROP TABLE IF EXISTS model_manager_metadata;")

-    def _recreate_model_config(self, cursor: sqlite3.Cursor) -> None:
-        """
-        Drops the `model_config` table, recreating it.
-
-        In 3.4.0, this table used explicit columns but was changed to use json_extract 3.5.0.
-
-        Because this table is not used in production, we are able to simply drop it and recreate it.
-        """
-
-        cursor.execute("DROP TABLE IF EXISTS model_config;")
-
-        cursor.execute(
-            """--sql
-            CREATE TABLE IF NOT EXISTS model_config (
-                id TEXT NOT NULL PRIMARY KEY,
-                -- The next 3 fields are enums in python, unrestricted string here
-                base TEXT GENERATED ALWAYS as (json_extract(config, '$.base')) VIRTUAL NOT NULL,
-                type TEXT GENERATED ALWAYS as (json_extract(config, '$.type')) VIRTUAL NOT NULL,
-                name TEXT GENERATED ALWAYS as (json_extract(config, '$.name')) VIRTUAL NOT NULL,
-                path TEXT GENERATED ALWAYS as (json_extract(config, '$.path')) VIRTUAL NOT NULL,
-                format TEXT GENERATED ALWAYS as (json_extract(config, '$.format')) VIRTUAL NOT NULL,
-                original_hash TEXT,  -- could be null
-                -- Serialized JSON representation of the whole config object,
-                -- which will contain additional fields from subclasses
-                config TEXT NOT NULL,
-                created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-                -- Updated via trigger
-                updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-                -- unique constraint on combo of name, base and type
-                UNIQUE(name, base, type)
-            );
-            """
-        )
-
-    def _migrate_model_config_records(self, cursor: sqlite3.Cursor) -> None:
-        """After updating the model config table, we repopulate it."""
-        model_record_migrator = MigrateModelYamlToDb1(cursor)
-        model_record_migrator.migrate()

    def _migrate_embedded_workflows(self, cursor: sqlite3.Cursor) -> None:
        """
        In the v3.5.0 release, InvokeAI changed how it handles embedded workflows. The `images` table in


@@ -1,13 +1,16 @@

import sqlite3
+from logging import Logger

+from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration

from .util.migrate_yaml_config_1 import MigrateModelYamlToDb1


class Migration3Callback:
-    def __init__(self) -> None:
-        pass
+    def __init__(self, app_config: InvokeAIAppConfig, logger: Logger) -> None:
+        self._app_config = app_config
+        self._logger = logger

    def __call__(self, cursor: sqlite3.Cursor) -> None:
        self._drop_model_manager_metadata(cursor)

@@ -54,11 +57,12 @@ class Migration3Callback:
    def _migrate_model_config_records(self, cursor: sqlite3.Cursor) -> None:
        """After updating the model config table, we repopulate it."""
-        model_record_migrator = MigrateModelYamlToDb1(cursor)
+        self._logger.info("Migrating model config records from models.yaml to database")
+        model_record_migrator = MigrateModelYamlToDb1(self._app_config, self._logger, cursor)
        model_record_migrator.migrate()


-def build_migration_3() -> Migration:
+def build_migration_3(app_config: InvokeAIAppConfig, logger: Logger) -> Migration:
    """
    Build the migration from database version 2 to 3.

@@ -69,7 +73,7 @@ def build_migration_3() -> Migration:
    migration_3 = Migration(
        from_version=2,
        to_version=3,
-        callback=Migration3Callback(),
+        callback=Migration3Callback(app_config=app_config, logger=logger),
    )
    return migration_3


@@ -0,0 +1,83 @@
import sqlite3
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration
class Migration4Callback:
"""Callback to do step 4 of migration."""
def __call__(self, cursor: sqlite3.Cursor) -> None: # noqa D102
self._create_model_metadata(cursor)
self._create_model_tags(cursor)
self._create_tags(cursor)
self._create_triggers(cursor)
def _create_model_metadata(self, cursor: sqlite3.Cursor) -> None:
"""Create the table used to store model metadata downloaded from remote sources."""
cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS model_metadata (
id TEXT NOT NULL PRIMARY KEY,
name TEXT GENERATED ALWAYS AS (json_extract(metadata, '$.name')) VIRTUAL NOT NULL,
author TEXT GENERATED ALWAYS AS (json_extract(metadata, '$.author')) VIRTUAL NOT NULL,
-- Serialized JSON representation of the whole metadata object,
-- which will contain additional fields from subclasses
metadata TEXT NOT NULL,
created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
-- Updated via trigger
updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
FOREIGN KEY(id) REFERENCES model_config(id) ON DELETE CASCADE
);
"""
)
def _create_model_tags(self, cursor: sqlite3.Cursor) -> None:
cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS model_tags (
model_id TEXT NOT NULL,
tag_id INTEGER NOT NULL,
FOREIGN KEY(model_id) REFERENCES model_config(id) ON DELETE CASCADE,
FOREIGN KEY(tag_id) REFERENCES tags(tag_id) ON DELETE CASCADE,
UNIQUE(model_id,tag_id)
);
"""
)
def _create_tags(self, cursor: sqlite3.Cursor) -> None:
cursor.execute(
"""--sql
CREATE TABLE IF NOT EXISTS tags (
tag_id INTEGER NOT NULL PRIMARY KEY,
tag_text TEXT NOT NULL UNIQUE
);
"""
)
def _create_triggers(self, cursor: sqlite3.Cursor) -> None:
cursor.execute(
"""--sql
CREATE TRIGGER IF NOT EXISTS model_metadata_updated_at
AFTER UPDATE
ON model_metadata FOR EACH ROW
BEGIN
UPDATE model_metadata SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
WHERE id = old.id;
END;
"""
)
def build_migration_4() -> Migration:
"""
Build the migration from database version 3 to 4.
Adds the tables needed to store model metadata and tags.
"""
migration_4 = Migration(
from_version=3,
to_version=4,
callback=Migration4Callback(),
)
return migration_4
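
The updated_at trigger created by migration 4 can be exercised in isolation. A runnable sqlite3 sketch (schema trimmed to the relevant columns; SQLite's default non-recursive triggers keep the trigger's own UPDATE from re-firing):

import sqlite3
import time

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript(
    """
    CREATE TABLE model_metadata (
        id TEXT NOT NULL PRIMARY KEY,
        metadata TEXT NOT NULL,
        updated_at DATETIME NOT NULL DEFAULT (STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW'))
    );
    CREATE TRIGGER model_metadata_updated_at
    AFTER UPDATE ON model_metadata FOR EACH ROW
    BEGIN
        UPDATE model_metadata SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
        WHERE id = old.id;
    END;
    """
)
cur.execute("INSERT INTO model_metadata (id, metadata) VALUES ('m1', '{}')")
before = cur.execute("SELECT updated_at FROM model_metadata").fetchone()[0]
time.sleep(0.01)  # %f has millisecond resolution; let the clock advance
cur.execute("UPDATE model_metadata SET metadata = 'updated' WHERE id = 'm1'")
after = cur.execute("SELECT updated_at FROM model_metadata").fetchone()[0]
assert after > before  # the trigger refreshed the timestamp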


@@ -0,0 +1,34 @@
import sqlite3
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration
class Migration5Callback:
def __call__(self, cursor: sqlite3.Cursor) -> None:
self._drop_graph_executions(cursor)
def _drop_graph_executions(self, cursor: sqlite3.Cursor) -> None:
"""Drops the `graph_executions` table."""
cursor.execute(
"""--sql
DROP TABLE IF EXISTS graph_executions;
"""
)
def build_migration_5() -> Migration:
"""
Build the migration from database version 4 to 5.
Introduced in v3.6.3, this migration:
- Drops the `graph_executions` table. We are able to do this because we are moving the graph storage
to be purely in-memory.
"""
migration_5 = Migration(
from_version=4,
to_version=5,
callback=Migration5Callback(),
)
return migration_5


@@ -23,7 +23,6 @@ from invokeai.backend.model_manager.config import (
    ModelType,
)
from invokeai.backend.model_manager.hash import FastModelHash
-from invokeai.backend.util.logging import InvokeAILogger

ModelsValidator = TypeAdapter(AnyModelConfig)

@@ -46,10 +45,9 @@ class MigrateModelYamlToDb1:
    logger: Logger
    cursor: sqlite3.Cursor

-    def __init__(self, cursor: sqlite3.Cursor = None) -> None:
-        self.config = InvokeAIAppConfig.get_config()
-        self.config.parse_args()
-        self.logger = InvokeAILogger.get_logger()
+    def __init__(self, config: InvokeAIAppConfig, logger: Logger, cursor: sqlite3.Cursor = None) -> None:
+        self.config = config
+        self.logger = logger
        self.cursor = cursor

    def get_yaml(self) -> DictConfig:

@@ -74,7 +72,12 @@ class MigrateModelYamlToDb1:
                continue
            base_type, model_type, model_name = str(model_key).split("/")
-            hash = FastModelHash.hash(self.config.models_path / stanza.path)
+            try:
+                hash = FastModelHash.hash(self.config.models_path / stanza.path)
+            except OSError:
+                self.logger.warning(f"The model at {stanza.path} is not a valid file or directory. Skipping migration.")
+                continue
            assert isinstance(model_key, str)
            new_key = sha1(model_key.encode("utf-8")).hexdigest()


@@ -1,5 +1,4 @@
{
-  "id": "6bfa0b3a-7090-4cd9-ad2d-a4b8662b6e71",
  "name": "ESRGAN Upscaling with Canny ControlNet",
  "author": "InvokeAI",
  "description": "Sample workflow for using Upscaling with ControlNet with SD1.5",
@@ -77,12 +76,12 @@
          }
        }
      },
-      "width": 320,
-      "height": 256,
      "position": {
        "x": 1250,
        "y": 1500
-      }
+      },
+      "width": 320,
+      "height": 219
    },
    {
      "id": "d8ace142-c05f-4f1d-8982-88dc7473958d",

@@ -148,12 +147,12 @@
          }
        }
      },
-      "width": 320,
-      "height": 227,
      "position": {
        "x": 700,
        "y": 1375
-      }
+      },
+      "width": 320,
+      "height": 193
    },
    {
      "id": "771bdf6a-0813-4099-a5d8-921a138754d4",

@@ -214,12 +213,12 @@
          }
        }
      },
-      "width": 320,
-      "height": 225,
      "position": {
        "x": 375,
        "y": 1900
-      }
+      },
+      "width": 320,
+      "height": 189
    },
    {
      "id": "f7564dd2-9539-47f2-ac13-190804461f4e",

@@ -315,12 +314,12 @@
          }
        }
      },
-      "width": 320,
-      "height": 340,
      "position": {
        "x": 775,
        "y": 1900
-      }
+      },
+      "width": 320,
+      "height": 295
    },
    {
      "id": "1d887701-df21-4966-ae6e-a7d82307d7bd",

@@ -416,12 +415,12 @@
          }
        }
      },
-      "width": 320,
-      "height": 340,
      "position": {
        "x": 1200,
        "y": 1900
-      }
+      },
+      "width": 320,
+      "height": 293
    },
    {
      "id": "ca1d020c-89a8-4958-880a-016d28775cfa",

@@ -434,7 +433,7 @@
      "notes": "",
      "isIntermediate": true,
      "useCache": true,
-      "version": "1.1.0",
+      "version": "1.1.1",
      "nodePack": "invokeai",
      "inputs": {
        "image": {

@@ -537,12 +536,12 @@
          }
        }
      },
-      "width": 320,
-      "height": 511,
      "position": {
        "x": 1650,
        "y": 1900
-      }
+      },
+      "width": 320,
+      "height": 451
    },
    {
      "id": "f50624ce-82bf-41d0-bdf7-8aab11a80d48",

@@ -640,12 +639,12 @@
          }
        }
      },
-      "width": 320,
-      "height": 32,
      "position": {
        "x": 1650,
        "y": 1775
-      }
+      },
+      "width": 320,
+      "height": 24
    },
    {
      "id": "c3737554-8d87-48ff-a6f8-e71d2867f434",

@@ -658,7 +657,7 @@
      "notes": "",
      "isIntermediate": true,
      "useCache": true,
-      "version": "1.5.0",
+      "version": "1.5.1",
      "nodePack": "invokeai",
      "inputs": {
        "positive_conditioning": {

@@ -866,12 +865,12 @@
          }
        }
      },
-      "width": 320,
-      "height": 705,
      "position": {
        "x": 2128.740065979906,
        "y": 1232.6219060454753
-      }
+      },
+      "width": 320,
+      "height": 612
    },
    {
      "id": "3ed9b2ef-f4ec-40a7-94db-92e63b583ec0",

@@ -978,12 +977,12 @@
          }
        }
      },
-      "width": 320,
-      "height": 267,
      "position": {
        "x": 2559.4751127537957,
        "y": 1246.6000376741406
-      }
+      },
+      "width": 320,
+      "height": 224
    },
    {
      "id": "5ca498a4-c8c8-4580-a396-0c984317205d",

@@ -1079,12 +1078,12 @@
          }
        }
      },
-      "width": 320,
-      "height": 32,
      "position": {
        "x": 1650,
        "y": 1675
-      }
+      },
+      "width": 320,
+      "height": 24
    },
    {
      "id": "63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16b",

@@ -1137,12 +1136,12 @@
          }
        }
      },
-      "width": 320,
-      "height": 256,
      "position": {
        "x": 1250,
        "y": 1200
-      }
+      },
+      "width": 320,
+      "height": 219
    },
    {
      "id": "eb8f6f8a-c7b1-4914-806e-045ee2717a35",

@@ -1195,168 +1194,168 @@
          }
        }
      },
-      "width": 320,
-      "height": 32,
      "position": {
        "x": 1650,
        "y": 1600
-      }
+      },
+      "width": 320,
+      "height": 24
    }
  ],
  "edges": [
    {
      "id": "5ca498a4-c8c8-4580-a396-0c984317205d-f50624ce-82bf-41d0-bdf7-8aab11a80d48-collapsed",
+      "type": "collapsed",
      "source": "5ca498a4-c8c8-4580-a396-0c984317205d",
-      "target": "f50624ce-82bf-41d0-bdf7-8aab11a80d48",
-      "type": "collapsed"
+      "target": "f50624ce-82bf-41d0-bdf7-8aab11a80d48"
    },
    {
      "id": "eb8f6f8a-c7b1-4914-806e-045ee2717a35-f50624ce-82bf-41d0-bdf7-8aab11a80d48-collapsed",
+      "type": "collapsed",
      "source": "eb8f6f8a-c7b1-4914-806e-045ee2717a35",
-      "target": "f50624ce-82bf-41d0-bdf7-8aab11a80d48",
-      "type": "collapsed"
+      "target": "f50624ce-82bf-41d0-bdf7-8aab11a80d48"
    },
    {
      "id": "reactflow__edge-771bdf6a-0813-4099-a5d8-921a138754d4image-f7564dd2-9539-47f2-ac13-190804461f4eimage",
+      "type": "default",
      "source": "771bdf6a-0813-4099-a5d8-921a138754d4",
      "target": "f7564dd2-9539-47f2-ac13-190804461f4e",
-      "type": "default",
      "sourceHandle": "image",
      "targetHandle": "image"
    },
    {
      "id": "reactflow__edge-f7564dd2-9539-47f2-ac13-190804461f4eimage-1d887701-df21-4966-ae6e-a7d82307d7bdimage",
+      "type": "default",
      "source": "f7564dd2-9539-47f2-ac13-190804461f4e",
      "target": "1d887701-df21-4966-ae6e-a7d82307d7bd",
-      "type": "default",
      "sourceHandle": "image",
      "targetHandle": "image"
    },
    {
      "id": "reactflow__edge-5ca498a4-c8c8-4580-a396-0c984317205dwidth-f50624ce-82bf-41d0-bdf7-8aab11a80d48width",
+      "type": "default",
      "source": "5ca498a4-c8c8-4580-a396-0c984317205d",
      "target": "f50624ce-82bf-41d0-bdf7-8aab11a80d48",
-      "type": "default",
      "sourceHandle": "width",
      "targetHandle": "width"
    },
    {
      "id": "reactflow__edge-5ca498a4-c8c8-4580-a396-0c984317205dheight-f50624ce-82bf-41d0-bdf7-8aab11a80d48height",
+      "type": "default",
      "source": "5ca498a4-c8c8-4580-a396-0c984317205d",
      "target": "f50624ce-82bf-41d0-bdf7-8aab11a80d48",
-      "type": "default",
      "sourceHandle": "height",
      "targetHandle": "height"
    },
    {
      "id": "reactflow__edge-f50624ce-82bf-41d0-bdf7-8aab11a80d48noise-c3737554-8d87-48ff-a6f8-e71d2867f434noise",
+      "type": "default",
      "source": "f50624ce-82bf-41d0-bdf7-8aab11a80d48",
      "target": "c3737554-8d87-48ff-a6f8-e71d2867f434",
-      "type": "default",
      "sourceHandle": "noise",
      "targetHandle": "noise"
    },
    {
      "id": "reactflow__edge-5ca498a4-c8c8-4580-a396-0c984317205dlatents-c3737554-8d87-48ff-a6f8-e71d2867f434latents",
+      "type": "default",
      "source": "5ca498a4-c8c8-4580-a396-0c984317205d",
      "target": "c3737554-8d87-48ff-a6f8-e71d2867f434",
-      "type": "default",
      "sourceHandle": "latents",
      "targetHandle": "latents"
    },
    {
      "id": "reactflow__edge-e8bf67fe-67de-4227-87eb-79e86afdfc74conditioning-c3737554-8d87-48ff-a6f8-e71d2867f434negative_conditioning",
+      "type": "default",
      "source": "e8bf67fe-67de-4227-87eb-79e86afdfc74",
      "target": "c3737554-8d87-48ff-a6f8-e71d2867f434",
-      "type": "default",
      "sourceHandle": "conditioning",
      "targetHandle": "negative_conditioning"
    },
    {
      "id": "reactflow__edge-63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16bconditioning-c3737554-8d87-48ff-a6f8-e71d2867f434positive_conditioning",
+      "type": "default",
      "source": "63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16b",
      "target": "c3737554-8d87-48ff-a6f8-e71d2867f434",
-      "type": "default",
      "sourceHandle": "conditioning",
      "targetHandle": "positive_conditioning"
    },
    {
      "id": "reactflow__edge-d8ace142-c05f-4f1d-8982-88dc7473958dclip-63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16bclip",
+      "type": "default",
      "source": "d8ace142-c05f-4f1d-8982-88dc7473958d",
      "target": "63b6ab7e-5b05-4d1b-a3b1-42d8e53ce16b",
-      "type": "default",
      "sourceHandle": "clip",
      "targetHandle": "clip"
    },
    {
      "id": "reactflow__edge-d8ace142-c05f-4f1d-8982-88dc7473958dclip-e8bf67fe-67de-4227-87eb-79e86afdfc74clip",
+      "type": "default",
      "source": "d8ace142-c05f-4f1d-8982-88dc7473958d",
      "target": "e8bf67fe-67de-4227-87eb-79e86afdfc74",
-      "type": "default",
      "sourceHandle": "clip",
      "targetHandle": "clip"
    },
    {
      "id": "reactflow__edge-1d887701-df21-4966-ae6e-a7d82307d7bdimage-ca1d020c-89a8-4958-880a-016d28775cfaimage",
+      "type": "default",
      "source": "1d887701-df21-4966-ae6e-a7d82307d7bd",
      "target": "ca1d020c-89a8-4958-880a-016d28775cfa",
-      "type": "default",
      "sourceHandle": "image",
      "targetHandle": "image"
    },
    {
      "id": "reactflow__edge-ca1d020c-89a8-4958-880a-016d28775cfacontrol-c3737554-8d87-48ff-a6f8-e71d2867f434control",
+      "type": "default",
      "source": "ca1d020c-89a8-4958-880a-016d28775cfa",
      "target": "c3737554-8d87-48ff-a6f8-e71d2867f434",
-      "type": "default",
      "sourceHandle": "control",
      "targetHandle": "control"
    },
    {
      "id": "reactflow__edge-c3737554-8d87-48ff-a6f8-e71d2867f434latents-3ed9b2ef-f4ec-40a7-94db-92e63b583ec0latents",
+      "type": "default",
      "source": "c3737554-8d87-48ff-a6f8-e71d2867f434",
      "target": "3ed9b2ef-f4ec-40a7-94db-92e63b583ec0",
-      "type": "default",
      "sourceHandle": "latents",
      "targetHandle": "latents"
    },
    {
      "id": "reactflow__edge-d8ace142-c05f-4f1d-8982-88dc7473958dvae-3ed9b2ef-f4ec-40a7-94db-92e63b583ec0vae",
+      "type": "default",
      "source": "d8ace142-c05f-4f1d-8982-88dc7473958d",
      "target": "3ed9b2ef-f4ec-40a7-94db-92e63b583ec0",
-      "type": "default",
      "sourceHandle": "vae",
      "targetHandle": "vae"
    },
    {
      "id": "reactflow__edge-f7564dd2-9539-47f2-ac13-190804461f4eimage-5ca498a4-c8c8-4580-a396-0c984317205dimage",
"type": "default",
"source": "f7564dd2-9539-47f2-ac13-190804461f4e", "source": "f7564dd2-9539-47f2-ac13-190804461f4e",
"target": "5ca498a4-c8c8-4580-a396-0c984317205d", "target": "5ca498a4-c8c8-4580-a396-0c984317205d",
"type": "default",
"sourceHandle": "image", "sourceHandle": "image",
"targetHandle": "image" "targetHandle": "image"
}, },
{ {
"id": "reactflow__edge-d8ace142-c05f-4f1d-8982-88dc7473958dunet-c3737554-8d87-48ff-a6f8-e71d2867f434unet", "id": "reactflow__edge-d8ace142-c05f-4f1d-8982-88dc7473958dunet-c3737554-8d87-48ff-a6f8-e71d2867f434unet",
"type": "default",
"source": "d8ace142-c05f-4f1d-8982-88dc7473958d", "source": "d8ace142-c05f-4f1d-8982-88dc7473958d",
"target": "c3737554-8d87-48ff-a6f8-e71d2867f434", "target": "c3737554-8d87-48ff-a6f8-e71d2867f434",
"type": "default",
"sourceHandle": "unet", "sourceHandle": "unet",
"targetHandle": "unet" "targetHandle": "unet"
}, },
{ {
"id": "reactflow__edge-d8ace142-c05f-4f1d-8982-88dc7473958dvae-5ca498a4-c8c8-4580-a396-0c984317205dvae", "id": "reactflow__edge-d8ace142-c05f-4f1d-8982-88dc7473958dvae-5ca498a4-c8c8-4580-a396-0c984317205dvae",
"type": "default",
"source": "d8ace142-c05f-4f1d-8982-88dc7473958d", "source": "d8ace142-c05f-4f1d-8982-88dc7473958d",
"target": "5ca498a4-c8c8-4580-a396-0c984317205d", "target": "5ca498a4-c8c8-4580-a396-0c984317205d",
"type": "default",
"sourceHandle": "vae", "sourceHandle": "vae",
"targetHandle": "vae" "targetHandle": "vae"
}, },
{ {
"id": "reactflow__edge-eb8f6f8a-c7b1-4914-806e-045ee2717a35value-f50624ce-82bf-41d0-bdf7-8aab11a80d48seed", "id": "reactflow__edge-eb8f6f8a-c7b1-4914-806e-045ee2717a35value-f50624ce-82bf-41d0-bdf7-8aab11a80d48seed",
"type": "default",
"source": "eb8f6f8a-c7b1-4914-806e-045ee2717a35", "source": "eb8f6f8a-c7b1-4914-806e-045ee2717a35",
"target": "f50624ce-82bf-41d0-bdf7-8aab11a80d48", "target": "f50624ce-82bf-41d0-bdf7-8aab11a80d48",
"type": "default",
"sourceHandle": "value", "sourceHandle": "value",
"targetHandle": "seed" "targetHandle": "seed"
} }

View File

@@ -1,5 +1,4 @@
 {
-"id": "1e385b84-86f8-452e-9697-9e5abed20518",
 "name": "Multi ControlNet (Canny & Depth)",
 "author": "InvokeAI",
 "description": "A sample workflow using canny & depth ControlNets to guide the generation process. ",
@@ -93,12 +92,12 @@
 }
 }
 },
-"width": 320,
-"height": 225,
 "position": {
 "x": 3625,
 "y": -75
-}
+},
+"width": 320,
+"height": 189
 },
 {
 "id": "a33199c2-8340-401e-b8a2-42ffa875fc1c",
@@ -111,7 +110,7 @@
 "notes": "",
 "isIntermediate": true,
 "useCache": true,
-"version": "1.1.0",
+"version": "1.1.1",
 "nodePack": "invokeai",
 "inputs": {
 "image": {
@@ -214,12 +213,12 @@
 }
 }
 },
-"width": 320,
-"height": 511,
 "position": {
 "x": 4477.604342844504,
 "y": -49.39005411272677
-}
+},
+"width": 320,
+"height": 451
 },
 {
 "id": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
@@ -272,12 +271,12 @@
 }
 }
 },
-"width": 320,
-"height": 256,
 "position": {
 "x": 4075,
 "y": -825
-}
+},
+"width": 320,
+"height": 219
 },
 {
 "id": "54486974-835b-4d81-8f82-05f9f32ce9e9",
@@ -343,12 +342,12 @@
 }
 }
 },
-"width": 320,
-"height": 227,
 "position": {
 "x": 3600,
 "y": -1000
-}
+},
+"width": 320,
+"height": 193
 },
 {
 "id": "7ce68934-3419-42d4-ac70-82cfc9397306",
@@ -401,12 +400,12 @@
 }
 }
 },
-"width": 320,
-"height": 256,
 "position": {
 "x": 4075,
 "y": -1125
-}
+},
+"width": 320,
+"height": 219
 },
 {
 "id": "d204d184-f209-4fae-a0a1-d152800844e1",
@@ -419,7 +418,7 @@
 "notes": "",
 "isIntermediate": true,
 "useCache": true,
-"version": "1.1.0",
+"version": "1.1.1",
 "nodePack": "invokeai",
 "inputs": {
 "image": {
@@ -522,12 +521,12 @@
 }
 }
 },
-"width": 320,
-"height": 511,
 "position": {
 "x": 4479.68542130465,
 "y": -618.4221638099414
-}
+},
+"width": 320,
+"height": 451
 },
 {
 "id": "c4b23e64-7986-40c4-9cad-46327b12e204",
@@ -588,12 +587,12 @@
 }
 }
 },
-"width": 320,
-"height": 225,
 "position": {
 "x": 3625,
 "y": -425
-}
+},
+"width": 320,
+"height": 189
 },
 {
 "id": "ca4d5059-8bfb-447f-b415-da0faba5a143",
@@ -633,12 +632,12 @@
 }
 }
 },
-"width": 320,
-"height": 104,
 "position": {
 "x": 4875,
 "y": -575
-}
+},
+"width": 320,
+"height": 87
 },
 {
 "id": "018b1214-c2af-43a7-9910-fb687c6726d7",
@@ -734,12 +733,12 @@
 }
 }
 },
-"width": 320,
-"height": 340,
 "position": {
 "x": 4100,
 "y": -75
-}
+},
+"width": 320,
+"height": 293
 },
 {
 "id": "c826ba5e-9676-4475-b260-07b85e88753c",
@@ -835,12 +834,12 @@
 }
 }
 },
-"width": 320,
-"height": 340,
 "position": {
 "x": 4095.757337055795,
 "y": -455.63440891935863
-}
+},
+"width": 320,
+"height": 293
 },
 {
 "id": "9db25398-c869-4a63-8815-c6559341ef12",
@@ -947,12 +946,12 @@
 }
 }
 },
-"width": 320,
-"height": 267,
 "position": {
 "x": 5675,
 "y": -825
-}
+},
+"width": 320,
+"height": 224
 },
 {
 "id": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
@@ -965,7 +964,7 @@
 "notes": "",
 "isIntermediate": true,
 "useCache": true,
-"version": "1.5.0",
+"version": "1.5.1",
 "nodePack": "invokeai",
 "inputs": {
 "positive_conditioning": {
@@ -1173,12 +1172,12 @@
 }
 }
 },
-"width": 320,
-"height": 705,
 "position": {
 "x": 5274.672987098195,
 "y": -823.0752416664332
-}
+},
+"width": 320,
+"height": 612
 },
 {
 "id": "2e77a0a1-db6a-47a2-a8bf-1e003be6423b",
@@ -1275,12 +1274,12 @@
 }
 }
 },
-"width": 320,
-"height": 32,
 "position": {
 "x": 4875,
 "y": -675
-}
+},
+"width": 320,
+"height": 24
 },
 {
 "id": "8b260b4d-3fd6-44d4-b1be-9f0e43c628ce",
@@ -1333,146 +1332,146 @@
 }
 }
 },
-"width": 320,
-"height": 32,
 "position": {
 "x": 4875,
 "y": -750
-}
+},
+"width": 320,
+"height": 24
 }
 ],
 "edges": [
 {
 "id": "8b260b4d-3fd6-44d4-b1be-9f0e43c628ce-2e77a0a1-db6a-47a2-a8bf-1e003be6423b-collapsed",
+"type": "collapsed",
 "source": "8b260b4d-3fd6-44d4-b1be-9f0e43c628ce",
-"target": "2e77a0a1-db6a-47a2-a8bf-1e003be6423b",
-"type": "collapsed"
+"target": "2e77a0a1-db6a-47a2-a8bf-1e003be6423b"
 },
 {
 "id": "reactflow__edge-54486974-835b-4d81-8f82-05f9f32ce9e9clip-7ce68934-3419-42d4-ac70-82cfc9397306clip",
+"type": "default",
 "source": "54486974-835b-4d81-8f82-05f9f32ce9e9",
 "target": "7ce68934-3419-42d4-ac70-82cfc9397306",
-"type": "default",
 "sourceHandle": "clip",
 "targetHandle": "clip"
 },
 {
 "id": "reactflow__edge-54486974-835b-4d81-8f82-05f9f32ce9e9clip-273e3f96-49ea-4dc5-9d5b-9660390f14e1clip",
+"type": "default",
 "source": "54486974-835b-4d81-8f82-05f9f32ce9e9",
 "target": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
-"type": "default",
 "sourceHandle": "clip",
 "targetHandle": "clip"
 },
 {
 "id": "reactflow__edge-a33199c2-8340-401e-b8a2-42ffa875fc1ccontrol-ca4d5059-8bfb-447f-b415-da0faba5a143item",
+"type": "default",
 "source": "a33199c2-8340-401e-b8a2-42ffa875fc1c",
 "target": "ca4d5059-8bfb-447f-b415-da0faba5a143",
-"type": "default",
 "sourceHandle": "control",
 "targetHandle": "item"
 },
 {
 "id": "reactflow__edge-d204d184-f209-4fae-a0a1-d152800844e1control-ca4d5059-8bfb-447f-b415-da0faba5a143item",
+"type": "default",
 "source": "d204d184-f209-4fae-a0a1-d152800844e1",
 "target": "ca4d5059-8bfb-447f-b415-da0faba5a143",
-"type": "default",
 "sourceHandle": "control",
 "targetHandle": "item"
 },
 {
 "id": "reactflow__edge-8e860e51-5045-456e-bf04-9a62a2a5c49eimage-018b1214-c2af-43a7-9910-fb687c6726d7image",
+"type": "default",
 "source": "8e860e51-5045-456e-bf04-9a62a2a5c49e",
 "target": "018b1214-c2af-43a7-9910-fb687c6726d7",
-"type": "default",
 "sourceHandle": "image",
 "targetHandle": "image"
 },
 {
 "id": "reactflow__edge-018b1214-c2af-43a7-9910-fb687c6726d7image-a33199c2-8340-401e-b8a2-42ffa875fc1cimage",
+"type": "default",
 "source": "018b1214-c2af-43a7-9910-fb687c6726d7",
 "target": "a33199c2-8340-401e-b8a2-42ffa875fc1c",
-"type": "default",
 "sourceHandle": "image",
 "targetHandle": "image"
 },
 {
 "id": "reactflow__edge-c4b23e64-7986-40c4-9cad-46327b12e204image-c826ba5e-9676-4475-b260-07b85e88753cimage",
+"type": "default",
 "source": "c4b23e64-7986-40c4-9cad-46327b12e204",
 "target": "c826ba5e-9676-4475-b260-07b85e88753c",
-"type": "default",
 "sourceHandle": "image",
 "targetHandle": "image"
 },
 {
 "id": "reactflow__edge-c826ba5e-9676-4475-b260-07b85e88753cimage-d204d184-f209-4fae-a0a1-d152800844e1image",
+"type": "default",
 "source": "c826ba5e-9676-4475-b260-07b85e88753c",
 "target": "d204d184-f209-4fae-a0a1-d152800844e1",
-"type": "default",
 "sourceHandle": "image",
 "targetHandle": "image"
 },
 {
 "id": "reactflow__edge-54486974-835b-4d81-8f82-05f9f32ce9e9vae-9db25398-c869-4a63-8815-c6559341ef12vae",
+"type": "default",
 "source": "54486974-835b-4d81-8f82-05f9f32ce9e9",
 "target": "9db25398-c869-4a63-8815-c6559341ef12",
-"type": "default",
 "sourceHandle": "vae",
 "targetHandle": "vae"
 },
 {
 "id": "reactflow__edge-ac481b7f-08bf-4a9d-9e0c-3a82ea5243celatents-9db25398-c869-4a63-8815-c6559341ef12latents",
+"type": "default",
 "source": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
 "target": "9db25398-c869-4a63-8815-c6559341ef12",
-"type": "default",
 "sourceHandle": "latents",
 "targetHandle": "latents"
 },
 {
 "id": "reactflow__edge-ca4d5059-8bfb-447f-b415-da0faba5a143collection-ac481b7f-08bf-4a9d-9e0c-3a82ea5243cecontrol",
+"type": "default",
 "source": "ca4d5059-8bfb-447f-b415-da0faba5a143",
 "target": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
-"type": "default",
 "sourceHandle": "collection",
 "targetHandle": "control"
 },
 {
 "id": "reactflow__edge-54486974-835b-4d81-8f82-05f9f32ce9e9unet-ac481b7f-08bf-4a9d-9e0c-3a82ea5243ceunet",
+"type": "default",
 "source": "54486974-835b-4d81-8f82-05f9f32ce9e9",
 "target": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
-"type": "default",
 "sourceHandle": "unet",
 "targetHandle": "unet"
 },
 {
 "id": "reactflow__edge-273e3f96-49ea-4dc5-9d5b-9660390f14e1conditioning-ac481b7f-08bf-4a9d-9e0c-3a82ea5243cenegative_conditioning",
+"type": "default",
 "source": "273e3f96-49ea-4dc5-9d5b-9660390f14e1",
 "target": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
-"type": "default",
 "sourceHandle": "conditioning",
 "targetHandle": "negative_conditioning"
 },
 {
 "id": "reactflow__edge-7ce68934-3419-42d4-ac70-82cfc9397306conditioning-ac481b7f-08bf-4a9d-9e0c-3a82ea5243cepositive_conditioning",
+"type": "default",
 "source": "7ce68934-3419-42d4-ac70-82cfc9397306",
 "target": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
-"type": "default",
 "sourceHandle": "conditioning",
 "targetHandle": "positive_conditioning"
 },
 {
 "id": "reactflow__edge-2e77a0a1-db6a-47a2-a8bf-1e003be6423bnoise-ac481b7f-08bf-4a9d-9e0c-3a82ea5243cenoise",
+"type": "default",
 "source": "2e77a0a1-db6a-47a2-a8bf-1e003be6423b",
 "target": "ac481b7f-08bf-4a9d-9e0c-3a82ea5243ce",
-"type": "default",
 "sourceHandle": "noise",
 "targetHandle": "noise"
 },
 {
 "id": "reactflow__edge-8b260b4d-3fd6-44d4-b1be-9f0e43c628cevalue-2e77a0a1-db6a-47a2-a8bf-1e003be6423bseed",
+"type": "default",
 "source": "8b260b4d-3fd6-44d4-b1be-9f0e43c628ce",
 "target": "2e77a0a1-db6a-47a2-a8bf-1e003be6423b",
-"type": "default",
 "sourceHandle": "value",
 "targetHandle": "seed"
 }

View File

@@ -20,7 +20,6 @@
 "category": "default",
 "version": "2.0.0"
 },
-"id": "d1609af5-eb0a-4f73-b573-c9af96a8d6bf",
 "nodes": [
 {
 "id": "c2eaf1ba-5708-4679-9e15-945b8b432692",
@@ -73,12 +72,12 @@
 }
 }
 },
-"width": 320,
-"height": 32,
 "position": {
 "x": 925,
 "y": -200
-}
+},
+"width": 320,
+"height": 24
 },
 {
 "id": "1b7e0df8-8589-4915-a4ea-c0088f15d642",
@@ -168,12 +167,12 @@
 }
 }
 },
-"width": 320,
-"height": 580,
 "position": {
 "x": 475,
 "y": -400
-}
+},
+"width": 320,
+"height": 506
 },
 {
 "id": "1b89067c-3f6b-42c8-991f-e3055789b251",
@@ -233,12 +232,12 @@
 }
 }
 },
-"width": 320,
-"height": 32,
 "position": {
 "x": 925,
 "y": -400
-}
+},
+"width": 320,
+"height": 24
 },
 {
 "id": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
@@ -304,12 +303,12 @@
 }
 }
 },
-"width": 320,
-"height": 227,
 "position": {
 "x": 0,
 "y": -375
-}
+},
+"width": 320,
+"height": 193
 },
 {
 "id": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
@@ -362,12 +361,12 @@
 }
 }
 },
-"width": 320,
-"height": 32,
 "position": {
 "x": 925,
 "y": -275
-}
+},
+"width": 320,
+"height": 24
 },
 {
 "id": "0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77",
@@ -465,12 +464,12 @@
 }
 }
 },
-"width": 320,
-"height": 32,
 "position": {
 "x": 925,
 "y": 25
-}
+},
+"width": 320,
+"height": 24
 },
 {
 "id": "dfc20e07-7aef-4fc0-a3a1-7bf68ec6a4e5",
@@ -524,12 +523,12 @@
 }
 }
 },
-"width": 320,
-"height": 32,
 "position": {
 "x": 925,
 "y": -50
-}
+},
+"width": 320,
+"height": 24
 },
 {
 "id": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
@@ -636,12 +635,12 @@
 }
 }
 },
-"width": 320,
-"height": 267,
 "position": {
 "x": 2037.861329274915,
 "y": -329.8393457509562
-}
+},
+"width": 320,
+"height": 224
 },
 {
 "id": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
@@ -654,7 +653,7 @@
 "notes": "",
 "isIntermediate": true,
 "useCache": true,
-"version": "1.5.0",
+"version": "1.5.1",
 "nodePack": "invokeai",
 "inputs": {
 "positive_conditioning": {
@@ -862,112 +861,112 @@
 }
 }
 },
-"width": 320,
-"height": 705,
 "position": {
 "x": 1570.9941088179146,
 "y": -407.6505491604564
-}
+},
+"width": 320,
+"height": 612
 }
 ],
 "edges": [
 {
 "id": "1b89067c-3f6b-42c8-991f-e3055789b251-fc9d0e35-a6de-4a19-84e1-c72497c823f6-collapsed",
+"type": "collapsed",
 "source": "1b89067c-3f6b-42c8-991f-e3055789b251",
-"target": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
-"type": "collapsed"
+"target": "fc9d0e35-a6de-4a19-84e1-c72497c823f6"
 },
 {
 "id": "dfc20e07-7aef-4fc0-a3a1-7bf68ec6a4e5-0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77-collapsed",
+"type": "collapsed",
 "source": "dfc20e07-7aef-4fc0-a3a1-7bf68ec6a4e5",
-"target": "0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77",
-"type": "collapsed"
+"target": "0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77"
 },
 {
 "id": "reactflow__edge-1b7e0df8-8589-4915-a4ea-c0088f15d642collection-1b89067c-3f6b-42c8-991f-e3055789b251collection",
+"type": "default",
 "source": "1b7e0df8-8589-4915-a4ea-c0088f15d642",
 "target": "1b89067c-3f6b-42c8-991f-e3055789b251",
-"type": "default",
 "sourceHandle": "collection",
 "targetHandle": "collection"
 },
 {
 "id": "reactflow__edge-d6353b7f-b447-4e17-8f2e-80a88c91d426clip-fc9d0e35-a6de-4a19-84e1-c72497c823f6clip",
+"type": "default",
 "source": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
 "target": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
-"type": "default",
 "sourceHandle": "clip",
 "targetHandle": "clip"
 },
 {
 "id": "reactflow__edge-1b89067c-3f6b-42c8-991f-e3055789b251item-fc9d0e35-a6de-4a19-84e1-c72497c823f6prompt",
+"type": "default",
 "source": "1b89067c-3f6b-42c8-991f-e3055789b251",
 "target": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
-"type": "default",
 "sourceHandle": "item",
 "targetHandle": "prompt"
 },
 {
 "id": "reactflow__edge-d6353b7f-b447-4e17-8f2e-80a88c91d426clip-c2eaf1ba-5708-4679-9e15-945b8b432692clip",
+"type": "default",
 "source": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
 "target": "c2eaf1ba-5708-4679-9e15-945b8b432692",
-"type": "default",
 "sourceHandle": "clip",
 "targetHandle": "clip"
 },
 {
 "id": "reactflow__edge-dfc20e07-7aef-4fc0-a3a1-7bf68ec6a4e5value-0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77seed",
+"type": "default",
 "source": "dfc20e07-7aef-4fc0-a3a1-7bf68ec6a4e5",
 "target": "0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77",
-"type": "default",
 "sourceHandle": "value",
 "targetHandle": "seed"
 },
 {
 "id": "reactflow__edge-fc9d0e35-a6de-4a19-84e1-c72497c823f6conditioning-2fb1577f-0a56-4f12-8711-8afcaaaf1d5epositive_conditioning",
+"type": "default",
 "source": "fc9d0e35-a6de-4a19-84e1-c72497c823f6",
 "target": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
-"type": "default",
 "sourceHandle": "conditioning",
 "targetHandle": "positive_conditioning"
 },
 {
 "id": "reactflow__edge-c2eaf1ba-5708-4679-9e15-945b8b432692conditioning-2fb1577f-0a56-4f12-8711-8afcaaaf1d5enegative_conditioning",
+"type": "default",
 "source": "c2eaf1ba-5708-4679-9e15-945b8b432692",
 "target": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
-"type": "default",
 "sourceHandle": "conditioning",
 "targetHandle": "negative_conditioning"
 },
 {
 "id": "reactflow__edge-0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77noise-2fb1577f-0a56-4f12-8711-8afcaaaf1d5enoise",
+"type": "default",
 "source": "0eb5f3f5-1b91-49eb-9ef0-41d67c7eae77",
 "target": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
-"type": "default",
 "sourceHandle": "noise",
 "targetHandle": "noise"
 },
 {
 "id": "reactflow__edge-d6353b7f-b447-4e17-8f2e-80a88c91d426unet-2fb1577f-0a56-4f12-8711-8afcaaaf1d5eunet",
+"type": "default",
 "source": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
 "target": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
-"type": "default",
 "sourceHandle": "unet",
 "targetHandle": "unet"
 },
 {
 "id": "reactflow__edge-2fb1577f-0a56-4f12-8711-8afcaaaf1d5elatents-491ec988-3c77-4c37-af8a-39a0c4e7a2a1latents",
+"type": "default",
 "source": "2fb1577f-0a56-4f12-8711-8afcaaaf1d5e",
 "target": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
-"type": "default",
 "sourceHandle": "latents",
 "targetHandle": "latents"
 },
 {
 "id": "reactflow__edge-d6353b7f-b447-4e17-8f2e-80a88c91d426vae-491ec988-3c77-4c37-af8a-39a0c4e7a2a1vae",
+"type": "default",
 "source": "d6353b7f-b447-4e17-8f2e-80a88c91d426",
 "target": "491ec988-3c77-4c37-af8a-39a0c4e7a2a1",
-"type": "default",
 "sourceHandle": "vae",
 "targetHandle": "vae"
 }

View File

@@ -80,12 +80,12 @@
 }
 }
 },
-"width": 320,
-"height": 32,
 "position": {
 "x": 750,
 "y": -225
-}
+},
+"width": 320,
+"height": 24
 },
 {
 "id": "719dabe8-8297-4749-aea1-37be301cd425",
@@ -126,12 +126,12 @@
 }
 }
 },
-"width": 320,
-"height": 258,
 "position": {
 "x": 750,
 "y": -125
-}
+},
+"width": 320,
+"height": 219
 },
 {
 "id": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
@@ -279,12 +279,12 @@
 }
 }
 },
-"width": 320,
-"height": 32,
 "position": {
 "x": 750,
 "y": 200
-}
+},
+"width": 320,
+"height": 24
 },
 {
 "id": "55705012-79b9-4aac-9f26-c0b10309785b",
@@ -382,12 +382,12 @@
 }
 }
 },
-"width": 320,
-"height": 388,
 "position": {
 "x": 375,
 "y": 0
-}
+},
+"width": 320,
+"height": 336
 },
 {
 "id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
@@ -441,12 +441,12 @@
 }
 }
 },
-"width": 320,
-"height": 32,
 "position": {
 "x": 375,
 "y": -50
-}
+},
+"width": 320,
+"height": 24
 },
 {
 "id": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
@@ -471,8 +471,7 @@
 "isCollection": false,
 "isCollectionOrScalar": false,
 "name": "SDXLMainModelField"
-},
-"value": null
+}
 }
 },
 "outputs": {
@@ -518,12 +517,12 @@
 }
 }
 },
-"width": 320,
-"height": 257,
 "position": {
 "x": 375,
 "y": -500
-}
+},
+"width": 320,
+"height": 219
 },
 {
 "id": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
@@ -671,12 +670,12 @@
 }
 }
 },
-"width": 320,
-"height": 32,
 "position": {
 "x": 750,
 "y": -175
-}
+},
+"width": 320,
+"height": 24
 },
 {
 "id": "63e91020-83b2-4f35-b174-ad9692aabb48",
@@ -783,12 +782,12 @@
 }
 }
 },
-"width": 320,
-"height": 266,
 "position": {
 "x": 1475,
 "y": -500
-}
+},
+"width": 320,
+"height": 224
 },
 {
 "id": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
@@ -801,7 +800,7 @@
 "notes": "",
 "isIntermediate": true,
 "useCache": true,
-"version": "1.5.0",
+"version": "1.5.1",
 "nodePack": "invokeai",
 "inputs": {
 "positive_conditioning": {
@@ -1009,12 +1008,12 @@
 }
 }
 },
-"width": 320,
-"height": 702,
 "position": {
 "x": 1125,
 "y": -500
-}
+},
+"width": 320,
+"height": 612
 },
 {
 "id": "0093692f-9cf4-454d-a5b8-62f0e3eb3bb8",
@@ -1038,8 +1037,7 @@
 "isCollection": false,
 "isCollectionOrScalar": false,
 "name": "VAEModelField"
-},
-"value": null
+}
 }
 },
 "outputs": {
@@ -1055,12 +1053,12 @@
 }
 }
 },
-"width": 320,
-"height": 161,
 "position": {
 "x": 375,
 "y": -225
-}
+},
+"width": 320,
+"height": 139
 },
 {
 "id": "ade2c0d3-0384-4157-b39b-29ce429cfa15",
@@ -1101,12 +1099,12 @@
 }
 }
 },
-"width": 320,
-"height": 258,
 "position": {
 "x": 750,
 "y": -500
-}
+},
+"width": 320,
+"height": 219
 },
 {
 "id": "ad8fa655-3a76-43d0-9c02-4d7644dea650",
@@ -1159,162 +1157,162 @@
 }
 }
 },
-"width": 320,
-"height": 32,
 "position": {
 "x": 750,
 "y": 150
-}
+},
+"width": 320,
+"height": 24
 }
 ],
 "edges": [
 {
 "id": "3774ec24-a69e-4254-864c-097d07a6256f-faf965a4-7530-427b-b1f3-4ba6505c2a08-collapsed",
+"type": "collapsed",
 "source": "3774ec24-a69e-4254-864c-097d07a6256f",
-"target": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
-"type": "collapsed"
+"target": "faf965a4-7530-427b-b1f3-4ba6505c2a08"
 },
 {
 "id": "ad8fa655-3a76-43d0-9c02-4d7644dea650-3193ad09-a7c2-4bf4-a3a9-1c61cc33a204-collapsed",
+"type": "collapsed",
 "source": "ad8fa655-3a76-43d0-9c02-4d7644dea650",
-"target": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
-"type": "collapsed"
+"target": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204"
 },
 {
 "id": "reactflow__edge-ea94bc37-d995-4a83-aa99-4af42479f2f2value-55705012-79b9-4aac-9f26-c0b10309785bseed",
+"type": "default",
 "source": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
 "target": "55705012-79b9-4aac-9f26-c0b10309785b",
-"type": "default",
 "sourceHandle": "value",
 "targetHandle": "seed"
 },
 {
 "id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22clip-faf965a4-7530-427b-b1f3-4ba6505c2a08clip",
+"type": "default",
 "source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
 "target": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
-"type": "default",
 "sourceHandle": "clip",
 "targetHandle": "clip"
 },
 {
 "id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22clip2-faf965a4-7530-427b-b1f3-4ba6505c2a08clip2",
+"type": "default",
 "source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
 "target": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
-"type": "default",
 "sourceHandle": "clip2",
 "targetHandle": "clip2"
 },
 {
 "id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22clip-3193ad09-a7c2-4bf4-a3a9-1c61cc33a204clip",
+"type": "default",
 "source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
 "target": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
-"type": "default",
 "sourceHandle": "clip",
 "targetHandle": "clip"
 },
 {
 "id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22clip2-3193ad09-a7c2-4bf4-a3a9-1c61cc33a204clip2",
+"type": "default",
 "source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
 "target": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
-"type": "default",
 "sourceHandle": "clip2",
 "targetHandle": "clip2"
 },
 {
 "id": "reactflow__edge-30d3289c-773c-4152-a9d2-bd8a99c8fd22unet-50a36525-3c0a-4cc5-977c-e4bfc3fd6dfbunet",
+"type": "default",
 "source": "30d3289c-773c-4152-a9d2-bd8a99c8fd22",
 "target": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
-"type": "default",
 "sourceHandle": "unet",
 "targetHandle": "unet"
 },
 {
 "id": "reactflow__edge-faf965a4-7530-427b-b1f3-4ba6505c2a08conditioning-50a36525-3c0a-4cc5-977c-e4bfc3fd6dfbpositive_conditioning",
+"type": "default",
 "source": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
 "target": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
-"type": "default",
 "sourceHandle": "conditioning",
 "targetHandle": "positive_conditioning"
 },
 {
 "id": "reactflow__edge-3193ad09-a7c2-4bf4-a3a9-1c61cc33a204conditioning-50a36525-3c0a-4cc5-977c-e4bfc3fd6dfbnegative_conditioning",
+"type": "default",
 "source": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
 "target": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
-"type": "default",
 "sourceHandle": "conditioning",
 "targetHandle": "negative_conditioning"
 },
 {
 "id": "reactflow__edge-55705012-79b9-4aac-9f26-c0b10309785bnoise-50a36525-3c0a-4cc5-977c-e4bfc3fd6dfbnoise",
+"type": "default",
 "source": "55705012-79b9-4aac-9f26-c0b10309785b",
 "target": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
-"type": "default",
 "sourceHandle": "noise",
 "targetHandle": "noise"
 },
 {
 "id": "reactflow__edge-50a36525-3c0a-4cc5-977c-e4bfc3fd6dfblatents-63e91020-83b2-4f35-b174-ad9692aabb48latents",
+"type": "default",
 "source": "50a36525-3c0a-4cc5-977c-e4bfc3fd6dfb",
 "target": "63e91020-83b2-4f35-b174-ad9692aabb48",
-"type": "default",
 "sourceHandle": "latents",
 "targetHandle": "latents"
 },
 {
 "id": "reactflow__edge-0093692f-9cf4-454d-a5b8-62f0e3eb3bb8vae-63e91020-83b2-4f35-b174-ad9692aabb48vae",
+"type": "default",
 "source": "0093692f-9cf4-454d-a5b8-62f0e3eb3bb8",
 "target": "63e91020-83b2-4f35-b174-ad9692aabb48",
-"type": "default",
 "sourceHandle": "vae",
 "targetHandle": "vae"
 },
 {
 "id": "reactflow__edge-ade2c0d3-0384-4157-b39b-29ce429cfa15value-faf965a4-7530-427b-b1f3-4ba6505c2a08prompt",
+"type": "default",
 "source": "ade2c0d3-0384-4157-b39b-29ce429cfa15",
 "target": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
-"type": "default",
 "sourceHandle": "value",
 "targetHandle": "prompt"
 },
 {
 "id": "reactflow__edge-719dabe8-8297-4749-aea1-37be301cd425value-3193ad09-a7c2-4bf4-a3a9-1c61cc33a204prompt",
+"type": "default",
 "source": "719dabe8-8297-4749-aea1-37be301cd425",
 "target": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
-"type": "default",
 "sourceHandle": "value",
 "targetHandle": "prompt"
 },
 {
 "id": "reactflow__edge-719dabe8-8297-4749-aea1-37be301cd425value-ad8fa655-3a76-43d0-9c02-4d7644dea650string_left",
+"type": "default",
 "source": "719dabe8-8297-4749-aea1-37be301cd425",
 "target": "ad8fa655-3a76-43d0-9c02-4d7644dea650",
-"type": "default",
 "sourceHandle": "value",
 "targetHandle": "string_left"
 },
 {
 "id": "reactflow__edge-ad8fa655-3a76-43d0-9c02-4d7644dea650value-3193ad09-a7c2-4bf4-a3a9-1c61cc33a204style",
+"type": "default",
 "source": "ad8fa655-3a76-43d0-9c02-4d7644dea650",
 "target": "3193ad09-a7c2-4bf4-a3a9-1c61cc33a204",
-"type": "default",
 "sourceHandle": "value",
 "targetHandle": "style"
 },
 {
 "id": "reactflow__edge-ade2c0d3-0384-4157-b39b-29ce429cfa15value-3774ec24-a69e-4254-864c-097d07a6256fstring_left",
+"type": "default",
 "source": "ade2c0d3-0384-4157-b39b-29ce429cfa15",
 "target": "3774ec24-a69e-4254-864c-097d07a6256f",
-"type": "default",
 "sourceHandle": "value",
 "targetHandle": "string_left"
 },
 {
 "id": "reactflow__edge-3774ec24-a69e-4254-864c-097d07a6256fvalue-faf965a4-7530-427b-b1f3-4ba6505c2a08style",
+"type": "default",
 "source": "3774ec24-a69e-4254-864c-097d07a6256f",
 "target": "faf965a4-7530-427b-b1f3-4ba6505c2a08",
-"type": "default",
 "sourceHandle": "value",
 "targetHandle": "style"
 }
 ]
 }

View File

@@ -84,12 +84,12 @@
 }
 }
 },
-"width": 320,
-"height": 259,
 "position": {
 "x": 1000,
 "y": 350
-}
+},
+"width": 320,
+"height": 219
 },
 {
 "id": "55705012-79b9-4aac-9f26-c0b10309785b",
@@ -187,12 +187,12 @@
 }
 }
 },
-"width": 320,
-"height": 388,
 "position": {
 "x": 600,
 "y": 325
-}
+},
+"width": 320,
+"height": 388
 },
 {
 "id": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
@@ -258,12 +258,12 @@
 }
 }
 },
-"width": 320,
-"height": 226,
 "position": {
 "x": 600,
 "y": 25
-}
+},
+"width": 320,
+"height": 193
 },
 {
 "id": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
@@ -316,12 +316,12 @@
 }
 }
 },
-"width": 320,
-"height": 259,
 "position": {
 "x": 1000,
 "y": 25
-}
+},
+"width": 320,
+"height": 219
 },
 {
 "id": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
@@ -375,12 +375,12 @@
 }
 }
 },
-"width": 320,
-"height": 32,
 "position": {
 "x": 600,
 "y": 275
-}
+},
+"width": 320,
+"height": 32
 },
 {
 "id": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
@@ -393,7 +393,7 @@
 "notes": "",
 "isIntermediate": true,
 "useCache": true,
-"version": "1.5.0",
+"version": "1.5.1",
 "nodePack": "invokeai",
 "inputs": {
 "positive_conditioning": {
@@ -601,12 +601,12 @@
 }
 }
 },
-"width": 320,
-"height": 703,
 "position": {
 "x": 1400,
 "y": 25
-}
+},
+"width": 320,
+"height": 612
 },
 {
 "id": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
@@ -713,86 +713,86 @@
 }
 }
 },
-"width": 320,
-"height": 266,
 "position": {
 "x": 1800,
 "y": 25
-}
+},
+"width": 320,
+"height": 224
 }
 ],
 "edges": [
 {
 "id": "reactflow__edge-ea94bc37-d995-4a83-aa99-4af42479f2f2value-55705012-79b9-4aac-9f26-c0b10309785bseed",
+"type": "default",
 "source": "ea94bc37-d995-4a83-aa99-4af42479f2f2",
 "target": "55705012-79b9-4aac-9f26-c0b10309785b",
-"type": "default",
 "sourceHandle": "value",
 "targetHandle": "seed"
 },
 {
 "id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8clip-7d8bf987-284f-413a-b2fd-d825445a5d6cclip",
+"type": "default",
 "source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
 "target": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
-"type": "default",
 "sourceHandle": "clip",
 "targetHandle": "clip"
 },
 {
 "id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8clip-93dc02a4-d05b-48ed-b99c-c9b616af3402clip",
+"type": "default",
 "source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
 "target": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
-"type": "default",
 "sourceHandle": "clip",
 "targetHandle": "clip"
 },
 {
 "id": "reactflow__edge-55705012-79b9-4aac-9f26-c0b10309785bnoise-eea2702a-19fb-45b5-9d75-56b4211ec03cnoise",
+"type": "default",
 "source": "55705012-79b9-4aac-9f26-c0b10309785b",
 "target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
-"type": "default",
 "sourceHandle": "noise",
 "targetHandle": "noise"
 },
 {
 "id": "reactflow__edge-7d8bf987-284f-413a-b2fd-d825445a5d6cconditioning-eea2702a-19fb-45b5-9d75-56b4211ec03cpositive_conditioning",
+"type": "default",
 "source": "7d8bf987-284f-413a-b2fd-d825445a5d6c",
 "target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
-"type": "default",
 "sourceHandle": "conditioning",
 "targetHandle": "positive_conditioning"
 },
 {
 "id": "reactflow__edge-93dc02a4-d05b-48ed-b99c-c9b616af3402conditioning-eea2702a-19fb-45b5-9d75-56b4211ec03cnegative_conditioning",
+"type": "default",
 "source": "93dc02a4-d05b-48ed-b99c-c9b616af3402",
 "target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
-"type": "default",
 "sourceHandle": "conditioning",
 "targetHandle": "negative_conditioning"
 },
 {
 "id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8unet-eea2702a-19fb-45b5-9d75-56b4211ec03cunet",
+"type": "default",
 "source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
 "target": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
-"type": "default",
 "sourceHandle": "unet",
 "targetHandle": "unet"
 },
 {
 "id": "reactflow__edge-eea2702a-19fb-45b5-9d75-56b4211ec03clatents-58c957f5-0d01-41fc-a803-b2bbf0413d4flatents",
+"type": "default",
 "source": "eea2702a-19fb-45b5-9d75-56b4211ec03c",
 "target": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
-"type": "default",
 "sourceHandle": "latents",
 "targetHandle": "latents"
 },
 {
 "id": "reactflow__edge-c8d55139-f380-4695-b7f2-8b3d1e1e3db8vae-58c957f5-0d01-41fc-a803-b2bbf0413d4fvae",
+"type": "default",
 "source": "c8d55139-f380-4695-b7f2-8b3d1e1e3db8",
 "target": "58c957f5-0d01-41fc-a803-b2bbf0413d4f",
-"type": "default",
 "sourceHandle": "vae",
 "targetHandle": "vae"
 }
 ]
 }

View File

@ -25,10 +25,9 @@
} }
], ],
"meta": { "meta": {
"version": "2.0.0", "category": "default",
"category": "default" "version": "2.0.0"
}, },
"id": "a9d70c39-4cdd-4176-9942-8ff3fe32d3b1",
"nodes": [ "nodes": [
{ {
"id": "85b77bb2-c67a-416a-b3e8-291abe746c44", "id": "85b77bb2-c67a-416a-b3e8-291abe746c44",
@ -80,12 +79,12 @@
} }
} }
}, },
"width": 320,
"height": 256,
"position": { "position": {
"x": 3425, "x": 3425,
"y": -300 "y": -300
} },
"width": 320,
"height": 219
}, },
{ {
"id": "24e9d7ed-4836-4ec4-8f9e-e747721f9818", "id": "24e9d7ed-4836-4ec4-8f9e-e747721f9818",
@ -150,12 +149,12 @@
} }
} }
}, },
"width": 320,
"height": 227,
"position": { "position": {
"x": 2500, "x": 2500,
"y": -600 "y": -600
} },
"width": 320,
"height": 193
}, },
{ {
"id": "c41e705b-f2e3-4d1a-83c4-e34bb9344966", "id": "c41e705b-f2e3-4d1a-83c4-e34bb9344966",
@ -243,12 +242,12 @@
} }
} }
}, },
"width": 320,
"height": 252,
"position": { "position": {
"x": 2975, "x": 2975,
"y": -600 "y": -600
} },
"width": 320,
"height": 218
}, },
{ {
"id": "c3fa6872-2599-4a82-a596-b3446a66cf8b", "id": "c3fa6872-2599-4a82-a596-b3446a66cf8b",
@ -300,12 +299,12 @@
} }
} }
}, },
"width": 320,
"height": 256,
"position": { "position": {
"x": 3425, "x": 3425,
"y": -575 "y": -575
} },
"width": 320,
"height": 219
}, },
{ {
"id": "ad487d0c-dcbb-49c5-bb8e-b28d4cbc5a63", "id": "ad487d0c-dcbb-49c5-bb8e-b28d4cbc5a63",
@ -318,7 +317,7 @@
"notes": "", "notes": "",
"isIntermediate": true, "isIntermediate": true,
"useCache": true, "useCache": true,
"version": "1.5.0", "version": "1.5.1",
"inputs": { "inputs": {
"positive_conditioning": { "positive_conditioning": {
"id": "025ff44b-c4c6-4339-91b4-5f461e2cadc5", "id": "025ff44b-c4c6-4339-91b4-5f461e2cadc5",
@ -525,12 +524,12 @@
} }
} }
}, },
"width": 320,
"height": 705,
"position": { "position": {
"x": 3975, "x": 3975,
"y": -575 "y": -575
} },
"width": 320,
"height": 612
}, },
{ {
"id": "ea18915f-2c5b-4569-b725-8e9e9122e8d3", "id": "ea18915f-2c5b-4569-b725-8e9e9122e8d3",
@ -627,12 +626,12 @@
} }
} }
}, },
"width": 320,
"height": 32,
"position": { "position": {
"x": 3425, "x": 3425,
"y": 75 "y": 75
} },
"width": 320,
"height": 24
}, },
{ {
"id": "6fd74a17-6065-47a5-b48b-f4e2b8fa7953", "id": "6fd74a17-6065-47a5-b48b-f4e2b8fa7953",
@ -685,12 +684,12 @@
} }
} }
}, },
"width": 320,
"height": 32,
"position": { "position": {
"x": 3425, "x": 3425,
"y": 0 "y": 0
} },
"width": 320,
"height": 24
}, },
{ {
"id": "a9683c0a-6b1f-4a5e-8187-c57e764b3400", "id": "a9683c0a-6b1f-4a5e-8187-c57e764b3400",
@@ -796,106 +795,106 @@
           }
         }
       },
-      "width": 320,
-      "height": 267,
       "position": {
         "x": 4450,
         "y": -550
-      }
+      },
+      "width": 320,
+      "height": 224
     }
   ],
   "edges": [
     {
       "id": "6fd74a17-6065-47a5-b48b-f4e2b8fa7953-ea18915f-2c5b-4569-b725-8e9e9122e8d3-collapsed",
+      "type": "collapsed",
       "source": "6fd74a17-6065-47a5-b48b-f4e2b8fa7953",
-      "target": "ea18915f-2c5b-4569-b725-8e9e9122e8d3",
-      "type": "collapsed"
+      "target": "ea18915f-2c5b-4569-b725-8e9e9122e8d3"
     },
     {
       "id": "reactflow__edge-24e9d7ed-4836-4ec4-8f9e-e747721f9818clip-c41e705b-f2e3-4d1a-83c4-e34bb9344966clip",
+      "type": "default",
       "source": "24e9d7ed-4836-4ec4-8f9e-e747721f9818",
       "target": "c41e705b-f2e3-4d1a-83c4-e34bb9344966",
-      "type": "default",
       "sourceHandle": "clip",
       "targetHandle": "clip"
     },
     {
       "id": "reactflow__edge-c41e705b-f2e3-4d1a-83c4-e34bb9344966clip-c3fa6872-2599-4a82-a596-b3446a66cf8bclip",
+      "type": "default",
       "source": "c41e705b-f2e3-4d1a-83c4-e34bb9344966",
       "target": "c3fa6872-2599-4a82-a596-b3446a66cf8b",
-      "type": "default",
       "sourceHandle": "clip",
       "targetHandle": "clip"
     },
     {
       "id": "reactflow__edge-24e9d7ed-4836-4ec4-8f9e-e747721f9818unet-c41e705b-f2e3-4d1a-83c4-e34bb9344966unet",
+      "type": "default",
       "source": "24e9d7ed-4836-4ec4-8f9e-e747721f9818",
       "target": "c41e705b-f2e3-4d1a-83c4-e34bb9344966",
-      "type": "default",
       "sourceHandle": "unet",
       "targetHandle": "unet"
     },
     {
       "id": "reactflow__edge-c41e705b-f2e3-4d1a-83c4-e34bb9344966unet-ad487d0c-dcbb-49c5-bb8e-b28d4cbc5a63unet",
+      "type": "default",
       "source": "c41e705b-f2e3-4d1a-83c4-e34bb9344966",
       "target": "ad487d0c-dcbb-49c5-bb8e-b28d4cbc5a63",
-      "type": "default",
       "sourceHandle": "unet",
       "targetHandle": "unet"
     },
     {
       "id": "reactflow__edge-85b77bb2-c67a-416a-b3e8-291abe746c44conditioning-ad487d0c-dcbb-49c5-bb8e-b28d4cbc5a63negative_conditioning",
+      "type": "default",
       "source": "85b77bb2-c67a-416a-b3e8-291abe746c44",
       "target": "ad487d0c-dcbb-49c5-bb8e-b28d4cbc5a63",
-      "type": "default",
       "sourceHandle": "conditioning",
       "targetHandle": "negative_conditioning"
     },
     {
       "id": "reactflow__edge-c3fa6872-2599-4a82-a596-b3446a66cf8bconditioning-ad487d0c-dcbb-49c5-bb8e-b28d4cbc5a63positive_conditioning",
+      "type": "default",
       "source": "c3fa6872-2599-4a82-a596-b3446a66cf8b",
       "target": "ad487d0c-dcbb-49c5-bb8e-b28d4cbc5a63",
-      "type": "default",
       "sourceHandle": "conditioning",
       "targetHandle": "positive_conditioning"
     },
     {
       "id": "reactflow__edge-ea18915f-2c5b-4569-b725-8e9e9122e8d3noise-ad487d0c-dcbb-49c5-bb8e-b28d4cbc5a63noise",
+      "type": "default",
       "source": "ea18915f-2c5b-4569-b725-8e9e9122e8d3",
       "target": "ad487d0c-dcbb-49c5-bb8e-b28d4cbc5a63",
-      "type": "default",
       "sourceHandle": "noise",
       "targetHandle": "noise"
     },
     {
       "id": "reactflow__edge-6fd74a17-6065-47a5-b48b-f4e2b8fa7953value-ea18915f-2c5b-4569-b725-8e9e9122e8d3seed",
+      "type": "default",
       "source": "6fd74a17-6065-47a5-b48b-f4e2b8fa7953",
       "target": "ea18915f-2c5b-4569-b725-8e9e9122e8d3",
-      "type": "default",
       "sourceHandle": "value",
       "targetHandle": "seed"
     },
     {
       "id": "reactflow__edge-ad487d0c-dcbb-49c5-bb8e-b28d4cbc5a63latents-a9683c0a-6b1f-4a5e-8187-c57e764b3400latents",
+      "type": "default",
       "source": "ad487d0c-dcbb-49c5-bb8e-b28d4cbc5a63",
       "target": "a9683c0a-6b1f-4a5e-8187-c57e764b3400",
-      "type": "default",
       "sourceHandle": "latents",
       "targetHandle": "latents"
     },
     {
       "id": "reactflow__edge-24e9d7ed-4836-4ec4-8f9e-e747721f9818vae-a9683c0a-6b1f-4a5e-8187-c57e764b3400vae",
+      "type": "default",
       "source": "24e9d7ed-4836-4ec4-8f9e-e747721f9818",
       "target": "a9683c0a-6b1f-4a5e-8187-c57e764b3400",
-      "type": "default",
       "sourceHandle": "vae",
       "targetHandle": "vae"
     },
     {
       "id": "reactflow__edge-c41e705b-f2e3-4d1a-83c4-e34bb9344966clip-85b77bb2-c67a-416a-b3e8-291abe746c44clip",
+      "type": "default",
       "source": "c41e705b-f2e3-4d1a-83c4-e34bb9344966",
       "target": "85b77bb2-c67a-416a-b3e8-291abe746c44",
-      "type": "default",
       "sourceHandle": "clip",
       "targetHandle": "clip"
     }

View File

@@ -31,6 +31,7 @@ class WorkflowRecordOrderBy(str, Enum, metaclass=MetaEnum):
 class WorkflowCategory(str, Enum, metaclass=MetaEnum):
     User = "user"
     Default = "default"
+    Project = "project"


 class WorkflowMeta(BaseModel):
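
The new member behaves like any other value of a `str`-backed Enum. A minimal sketch for reference (InvokeAI's `MetaEnum` metaclass is omitted here, so this is an illustration rather than the project's exact class):

```python
from enum import Enum


class WorkflowCategory(str, Enum):  # illustration only; the real class also uses MetaEnum
    User = "user"
    Default = "default"
    Project = "project"


# Lookup by value and value access work as with any str-backed Enum.
assert WorkflowCategory("project") is WorkflowCategory.Project
assert WorkflowCategory.Project.value == "project"
assert WorkflowCategory.Project == "project"  # str-backed members compare equal to their value
```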

View File

@@ -169,7 +169,7 @@ class SqliteWorkflowRecordsStorage(WorkflowRecordsStorageBase):
         self._cursor.execute(count_query, count_params)
         total = self._cursor.fetchone()[0]
-        pages = int(total / per_page) + 1
+        pages = total // per_page + (total % per_page > 0)
         return PaginatedResults(
             items=workflows,
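
The old formula over-counts whenever `total` is an exact multiple of `per_page`, and reports one page even for zero rows; the replacement is the usual integer ceiling-division idiom. A quick comparison of the two expressions (nothing here beyond the code in the hunk above):

```python
def pages_old(total: int, per_page: int) -> int:
    return int(total / per_page) + 1


def pages_new(total: int, per_page: int) -> int:
    # Floor division plus one extra page if there is a partial remainder.
    return total // per_page + (total % per_page > 0)


for total in (0, 9, 10, 11, 20):
    print(total, pages_old(total, 10), pages_new(total, 10))
# 0  -> old: 1, new: 0
# 9  -> old: 1, new: 1
# 10 -> old: 2, new: 1   (exact multiple: old formula adds a phantom page)
# 11 -> old: 2, new: 2
# 20 -> old: 3, new: 2
```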

View File

@@ -0,0 +1,67 @@
import cProfile
from logging import Logger
from pathlib import Path
from typing import Optional


class Profiler:
    """
    Simple wrapper around cProfile.

    Usage
    ```
    # Create a profiler
    profiler = Profiler(logger, output_dir, "sql_query_perf")

    # Start a new profile
    profiler.start("my_profile")

    # Do stuff
    profiler.stop()
    ```

    Visualize a profile as a flamegraph with [snakeviz](https://jiffyclub.github.io/snakeviz/)
    ```sh
    snakeviz my_profile.prof
    ```

    Visualize a profile as a directed graph with [graphviz](https://graphviz.org/download/) & [gprof2dot](https://github.com/jrfonseca/gprof2dot)
    ```sh
    gprof2dot -f pstats my_profile.prof | dot -Tpng -o my_profile.png
    # SVG or PDF may be nicer - you can search for function names
    gprof2dot -f pstats my_profile.prof | dot -Tsvg -o my_profile.svg
    gprof2dot -f pstats my_profile.prof | dot -Tpdf -o my_profile.pdf
    ```
    """

    def __init__(self, logger: Logger, output_dir: Path, prefix: Optional[str] = None) -> None:
        self._logger = logger.getChild(f"profiler.{prefix}" if prefix else "profiler")
        self._output_dir = output_dir
        self._output_dir.mkdir(parents=True, exist_ok=True)
        self._profiler: Optional[cProfile.Profile] = None
        self._prefix = prefix
        self.profile_id: Optional[str] = None

    def start(self, profile_id: str) -> None:
        # Starting while a profile is running implicitly stops (and dumps) the old one.
        if self._profiler:
            self.stop()
        self.profile_id = profile_id
        self._profiler = cProfile.Profile()
        self._profiler.enable()
        self._logger.info(f"Started profiling {self.profile_id}.")

    def stop(self) -> Path:
        if not self._profiler:
            raise RuntimeError("Profiler not initialized. Call start() first.")
        self._profiler.disable()
        filename = f"{self._prefix}_{self.profile_id}.prof" if self._prefix else f"{self.profile_id}.prof"
        path = Path(self._output_dir, filename)
        self._profiler.dump_stats(path)
        self._logger.info(f"Stopped profiling, profile dumped to {path}.")
        self._profiler = None
        self.profile_id = None
        return path
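
A minimal usage sketch for the new class; the logger name, output directory, and profile id below are arbitrary examples, not values from the codebase:

```python
import logging
from pathlib import Path

logger = logging.getLogger("invokeai")  # hypothetical logger name
profiler = Profiler(logger=logger, output_dir=Path("profiles"), prefix="graph_execution")

profiler.start("session_1234")  # hypothetical profile id
# ... run the code under test ...
stats_path = profiler.stop()  # e.g. profiles/graph_execution_session_1234.prof
print(f"Profile written to {stats_path}")
```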

Some files were not shown because too many files have changed in this diff.