Compare commits

253 Commits

Author SHA1 Message Date
b3abc7252d recover gracefully from VRAM out of memory errors 2024-02-24 12:10:52 -05:00
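A minimal sketch of what this kind of graceful recovery can look like, assuming a CUDA backend; the wrapper is illustrative, not the commit's actual code:

```python
import torch

def run_with_oom_recovery(generate):
    """Run a generation step; recover from VRAM exhaustion instead of crashing."""
    try:
        return generate()
    except torch.cuda.OutOfMemoryError:
        # Release cached allocations so subsequent requests can succeed,
        # then surface an actionable error instead of taking down the app.
        torch.cuda.empty_cache()
        raise RuntimeError("Out of VRAM; try a smaller image or batch size.") from None
```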
9986fce1a6 translationBot(ui): update translation (German)
Currently translated at 80.0% (1176 of 1470 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-02-23 07:57:15 +11:00
228f1d7f62 translationBot(ui): update translation (Italian)
Currently translated at 95.6% (1406 of 1470 strings)

translationBot(ui): update translation (Italian)

Currently translated at 93.9% (1381 of 1470 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-02-23 07:57:15 +11:00
01a6378dc1 translationBot(ui): update translation (German)
Currently translated at 78.8% (1159 of 1470 strings)

Co-authored-by: B N <berndnieschalk@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-02-23 07:57:15 +11:00
e01769294f translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-02-20 22:33:03 +11:00
16aa261e28 updated tooltip popovers (#5751)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [X] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description
Added new tooltip popovers and updated copy of existing ones

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Merge Plan


## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2024-02-19 13:12:47 -05:00
1dabf18d14 Merge branch 'main' into chainchompa/tooltip-popovers 2024-02-19 13:04:15 -05:00
115d92b1ae updated copy 2024-02-19 12:50:35 -05:00
f0d4c71960 updated tooltip popovers 2024-02-19 12:50:11 -05:00
3e48edda6f add latent-upscale to communityNodes.md (#5728)
Adds the 'latent upscale' community node
2024-02-19 16:53:35 +00:00
716b584f03 translationBot(ui): update translation (Italian)
Currently translated at 97.1% (1384 of 1424 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-02-19 08:18:33 +11:00
d43b843c23 translationBot(ui): update translation (German)
Currently translated at 80.2% (1143 of 1424 strings)

Co-authored-by: B N <berndnieschalk@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-02-18 01:47:01 +11:00
f36b5990ed fix(ui): do not provide auth headers for openapi.json 2024-02-15 10:38:26 -05:00
5706237ec7 {release} 3.7.0 (#5727)
## What type of PR is this? (check all applicable)

Release - Invoke 3.7.0

## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Invoke 3.7.0 Release

## QA Instructions, Screenshots, Recordings
Test Installer: 

[InvokeAI-installer-v3.7.0.zip](https://github.com/invoke-ai/InvokeAI/files/14298200/InvokeAI-installer-v3.7.0.zip)


## Merge Plan
Merge once approved

## Added/updated tests?

- [ ] Yes
- [X] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
1. Release on PyPi
2. Release on GitHub
3. Announce on Discord
2024-02-15 07:59:20 -07:00
163b22a7b3 {release} 3.7.0 2024-02-15 07:34:31 -07:00
c5aeb36230 fix: repair Dockerfile for ROCm
With these changes, the Docker image can be built and executed
successfully on hosts with AMD devices with ROCm acceleration.
Previously, a ROCm-enabled version of torch would be installed, but
later removed during installation of InvokeAI itself. This was caused by
InvokeAI needing a newer torch version than was previously installed.

The fix consists of multiple components:
* Update the hardcoded versions of torch and torchvision to the versions
  currently used in pyproject.toml, so that a new version need not be
  installed during installation of InvokeAI.
* Specify --extra-index-url on installation of InvokeAI so that even if
  a version mismatch occurs, the correct torch version should still be
  installed. This also necessitates changing --index-url to
  --extra-index-url for the Torch repo. Otherwise non-torch dependencies
  would not be found.
* In run.sh, build the image for the selected service.
2024-02-14 22:25:40 -05:00
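A sketch of the pip invocation the Dockerfile change implies, expressed in Python for illustration; the ROCm 5.6 index URL is an assumption based on the ROCm bump elsewhere in this log:

```python
import subprocess
import sys

# --extra-index-url (not --index-url) keeps PyPI available for non-torch
# dependencies while still resolving torch from the ROCm wheel repo, so a
# version mismatch is corrected from the right index.
subprocess.check_call([
    sys.executable, "-m", "pip", "install",
    "--extra-index-url", "https://download.pytorch.org/whl/rocm5.6",  # assumed URL
    "InvokeAI",
])
```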
5e77f0d93b Reorder exposed fields in workflow tab (#5711)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [x] Feature
- [ ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description


## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Merge Plan


## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2024-02-14 18:32:19 -05:00
d3acb81743 Merge branch 'main' into chainchompa/reorder-exposed-fields 2024-02-14 18:26:35 -05:00
e0f2404c00 added reset to default back in, removed unneeded activation constraints 2024-02-14 18:07:15 -05:00
5ed7972e5f merge conflict 2024-02-14 17:28:59 -05:00
792131be01 added drag icon, added vertical strategy for smoother scrolling 2024-02-14 17:27:21 -05:00
fc278c5cb1 fix(images_default): correct get_metadata error message
The error was misleading, indicating an issue with getting the image DTO, when it was actually an issue with getting metadata.
2024-02-14 16:21:39 -05:00
d7f6af1f07 possible fix: seamless not being seamless with baked 2024-02-14 16:13:11 -05:00
ff9bd040cc possible fix: Seamless not working with custom VAEs 2024-02-14 16:13:11 -05:00
17d5f7bebd Critical Space Removal 2024-02-14 16:13:11 -05:00
30dae0f5aa adding back skipped layer 2024-02-14 16:13:11 -05:00
161000cde6 Merge branch 'main' into chainchompa/reorder-exposed-fields 2024-02-14 15:00:54 -05:00
de832f6862 formatting 2024-02-14 15:00:18 -05:00
21ba3c63de cleanup 2024-02-14 14:52:48 -05:00
a948bd1310 refactored dndsortable to be its own component 2024-02-14 14:47:28 -05:00
2071972a8c refactored to just use a new dnd context, got reordering working and fixed flicker 2024-02-14 14:20:08 -05:00
5ed2f6e6c1 bump 2024-02-14 10:15:50 -05:00
b77f6bd0ad Update accelerate 0.26.1 -> 0.27.0 2024-02-14 10:15:50 -05:00
34cc26a4ed revert to using fetch, add token if needed (#5720)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-02-14 10:04:12 -05:00
9d6e4ff1fb workflow tab (#5680)
* new workflow tab UI - still using shared state with workflow editor tab

* polish workflow details

* remove workflow tab, add edit/view mode to workflow slice and get that working to switch between within editor tab

* UI updates for view/edit mode

* cleanup

* add warning to view mode

* lint

* start with isTouched false

* working on styling mode toggle

* more UX iteration

* lint

* cleanup

* save original field values to state, add indicator if they have been changed and give user choice to reset

* lint

* fix import and commit translation

* dont switch to view mode when loading a workflow

* warns before clearing editor

* use folder icon

* fix(ui): track do not erase value when resetting field value

- When adding an exposed field, we need to add it to originalExposedFieldValues
- When removing an exposed field, we need to remove it from originalExposedFieldValues
- add `useFieldValue` and `useOriginalFieldValue` hooks to encapsulate related logic

* feat(ui): use IconButton for workflow view/edit button

* feat(ui): change icon for new workflow

It was the same as the workflow tab icon, confusing because you think it's going to somehow take you to the tab.

* feat(ui): use render props for NewWorkflowConfirmationAlertDialog

There was a lot of potentially sensitive logic shared between the new workflow button and menu items. Also, two instances of ConfirmationAlertDialog.

Using a render prop deduplicates the logic & components

* fix(ui): do not mark workflow touched when loading workflow

This was occurring because the `nodesChanged` action is called by reactflow when loading a workflow. Specifically, it calculates and sets the node dimensions as it loads.

The existing logic set `isTouched` whenever this action was called.

The changes reactflow emits have types, and we can use the change types and data to determine if a change should result in the workflow being marked as touched.

* chore(ui): lint

* chore(ui): lint

* delete empty file

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-02-14 09:02:07 -05:00
85bbf65967 only refetch intermediates on modal open if it is enabled 2024-02-14 09:47:15 +11:00
3726293258 feat(nodes): improve types in graph.py
Methods `get_node` and `complete` were typed as returning a dynamically created unions `InvocationsUnion` and `InvocationOutputsUnion`, respectively.

Static type analysers cannot work with dynamic objects, so these methods end up as effectively un-annotated, returning `Unknown`.

They now return `BaseInvocation` and `BaseInvocationOutput`, respectively, which are the superclasses of all members of each union. This gives us the best type annotation that is possible.

Note: the return types of these methods are never introspected, so it doesn't really matter what they are at runtime.
2024-02-14 07:56:10 +11:00
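A minimal sketch of the annotation change; the method shapes are illustrative, only the two superclass names come from the commit:

```python
class BaseInvocation:
    """Superclass of every member of the dynamic InvocationsUnion."""

class BaseInvocationOutput:
    """Superclass of every member of the dynamic InvocationOutputsUnion."""

class Graph:
    # Before: `-> InvocationsUnion`, a dynamically built union that static
    # analysers cannot resolve, so callers effectively saw `Unknown`.
    def get_node(self, node_id: str) -> BaseInvocation:
        ...

class GraphExecutionState:
    # Likewise, `complete` now returns the common superclass of all outputs.
    def complete(self, node_id: str) -> BaseInvocationOutput:
        ...
```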
8bd65be8c8 Quick Seamless Fixes (#5685)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ X ] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [ X ] No, because: It's small

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ X ] No


## Description
This pulls out some of the updates from the WIP Seamless branch that has
yet to be completed, and hardcodes values that are exposed in that
branch. Given that seamless currently does not generate seamless
textures, and this fix results in seamless outputs, it's an improvement
even if it doesn't resolve this in a "perfect" way that exposes all
variables to the end user.

better over perfect.


![f07b7e49-80c2-4659-bb36-d50ec80b1f8b](https://github.com/invoke-ai/InvokeAI/assets/31807370/36a40bd9-8fc4-41d5-bd1e-209fc828987e)
2024-02-13 11:08:07 -07:00
783442c40d Merge branch 'main' into SeamlessFixes 2024-02-13 10:38:55 -07:00
8a147bd6e6 added sortable to linear view, not saving yet 2024-02-13 11:53:49 -05:00
273994b742 chore: bump diffusers 0.26.2 -> 0.26.3
https://github.com/huggingface/diffusers/releases/tag/v0.26.3

This fixes an issue with `DPMSolverSinglestepScheduler` with even numbers of steps.
2024-02-13 08:40:42 -05:00
3339ad4df8 feat(nodes): seamless.py minor cleanup 2024-02-13 13:34:48 +11:00
c3b2a8cb27 Quick Seamless Fixes 2024-02-13 13:34:48 +11:00
daa780940b translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-02-13 13:20:30 +11:00
2289680ae1 translationBot(ui): update translation (Italian)
Currently translated at 97.2% (1377 of 1416 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-02-13 13:20:30 +11:00
cda85a0637 translationBot(ui): update translation (German)
Currently translated at 79.4% (1128 of 1419 strings)

translationBot(ui): update translation (German)

Currently translated at 78.1% (1107 of 1416 strings)

Co-authored-by: B N <berndnieschalk@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-02-13 13:20:30 +11:00
1d9801e7be fix(ui): add input el for workflow upload button
Need this to select the file
2024-02-13 13:18:31 +11:00
3ecb1e580f update bc button is only ever used in modal context 2024-02-13 13:18:31 +11:00
6301e58a2e move upload button into workflow library modal 2024-02-13 13:18:31 +11:00
5dd552effa Update 020_INSTALL_MANUAL.md (#5700)
updated the commands for running InvokeAI locally and the web server

Co-authored-by: Millun Atluri <Millu@users.noreply.github.com>
2024-02-13 00:36:00 +00:00
25ce505628 exposed field loading state (#5704)
* remove thunk for receivedOpenApiSchema and use RTK query instead. add loading state for exposed fields

* clean up

* ignore any

* fix(ui): do not log on canceled openapi.json queries

- Rely on RTK Query for the `loadSchema` query by providing a custom `jsonReplacer` in our `dynamicBaseQuery`, so we don't need to manage error state.
- Detect when the query was canceled and do not log the error message in those situations.

* feat(ui): `utilitiesApi.endpoints.loadSchema` -> `appInfoApi.endpoints.getOpenAPISchema`

- Utilities is for server actions, move this to `appInfo` bc it fits better there.
- Rename to match convention for HTTP GET queries.
- Fix inverted logic in the `matchRejected` listener (typo'd this)

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-02-12 18:48:32 -05:00
1dd07fb1eb Updated docs on OpenPose 2024-02-12 11:12:45 -05:00
e82c21b5ba chore: rename DWPose to DW Openpose 2024-02-12 11:12:45 -05:00
50b93992cf cleanup: Remove Openpose Image Processor 2024-02-12 11:12:45 -05:00
f8e566d62a cleanup: unused util functions 2024-02-12 11:12:45 -05:00
f588b95c7f cleanup: remove unused code from the DWPose implementation 2024-02-12 11:12:45 -05:00
67daf1751c fix: lint errors 2024-02-12 11:12:45 -05:00
7d80261d47 chore: Add code attribution for the DWPoseDetector 2024-02-12 11:12:45 -05:00
67cbfeb33d feat: Add output image resizing for DWPose 2024-02-12 11:12:45 -05:00
f7998b4be0 feat: Add DWPose to Linear UI 2024-02-12 11:12:45 -05:00
675c73c94f fix: ruff lint errors 2024-02-12 11:12:45 -05:00
0a27b0379f feat: Initial implementation of DWPoseDetector 2024-02-12 11:12:45 -05:00
0ef18b6477 fix(ui): enable lora when recalling
Closes #5698
2024-02-12 16:47:46 +11:00
6539ef7c9f {release} v3.6.3 (#5696)
## What type of PR is this? (check all applicable)
Release Invoke 3.6.3


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
Invoke 3.6.3 Release



## QA Instructions, Screenshots, Recordings
Test the installer:
[InvokeAI-installer-v3.6.3.zip](https://github.com/invoke-ai/InvokeAI/files/14233359/InvokeAI-installer-v3.6.3.zip)


## Merge Plan
Merge once approved
## [optional] Are there any post deployment tasks we need to perform?
1. Release on PyPi & GitHub
2. Announce on Discord
2024-02-11 16:02:30 -05:00
14c9a1e4f3 Merge branch 'main' into release/3.6.3 2024-02-11 15:36:05 -05:00
64b0feca31 Update ruff 2024-02-11 15:24:28 -05:00
0be9a2d906 Update string formatting 2024-02-11 15:24:28 -05:00
d925f721b9 fix references to .env.sample (#5695)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [x] No, because: it is text only, simple, and (hopefully) self-evident

      
## Have you updated all relevant documentation?
- [x] Yes - as far as I can grep.
- [ ] No


## Description

`.env.sample` was misspelled as `env.sample` in a few places.

This changes documentation only. You may need to re-build/deploy docs,
I'm not sure.
2024-02-11 13:43:14 -05:00
4e5be1891a {release} v3.6.3 2024-02-11 10:34:47 -07:00
156d4ec3b2 fix references to .env.sample 2024-02-10 21:11:22 -08:00
c45a43519a chore: bump deps
- ruff 0.1.11 -> 0.2.1
- update config format
2024-02-11 08:50:49 +11:00
763816ca0c chore: bump deps
- pydantic 2.5.3 -> 2.6.1
- uvicorn 0.25.0 -> 0.27.1
2024-02-11 08:50:49 +11:00
83a7c9059f translationBot(ui): update translation (German)
Currently translated at 78.1% (1107 of 1416 strings)

Co-authored-by: B N <berndnieschalk@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-02-11 08:40:55 +11:00
c5f069a255 feat(backend): remove dependency on basicsr
`basicsr` has a hard dependency on torchvision <= 0.16 and is unmaintained. Extract the code we need from it and remove the dep.

Closes #5108
2024-02-11 08:34:54 +11:00
cd169ee082 fix(nodes): deep copy graph inputs (#5686)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission

## Description

The change to memory session storage brings a subtle behaviour change.

Previously, we serialized and deserialized everything (e.g. field state,
invocation outputs, etc) constantly. This meant we were effectively
working with deep-copied objects at all times. We could mutate objects
freely without worrying about other references to the object.

With memory storage, objects are now passed around by reference, and we
cannot handle them in the same way.

This is problematic for nodes that mutate their own inputs. There are
two ways this causes a problem:

- An output is used as input for multiple nodes. If the first node
mutates the output object while `invoke`ing, the next node will get the
mutated object.
- The invocation cache stores live python objects. When a node mutates
an output pulled from the cache, the next node that uses the cached
object will get the mutated object.

The solution is to deep-copy a node's inputs as they are set,
effectively reproducing the same behaviour as we had with the SQLite
session storage. Nodes can safely mutate their inputs and those changes
never leave the node's scope.

## Related Tickets & Documents


- Closes  #5665

The root issue affects CLIP Skip because that node mutates its input
`ClipField`. Specifically, it increments `self.clip.skipped_layers` and
passes `self.clip` as its output. I don't know if there are any other
nodes that do this.

## QA Instructions, Screenshots, Recordings

Two issues to reproduce. 

First is the caching issue:


![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/7a251e48-bc70-4b8e-8816-84aac41ce4d3)

Note the cache is enabled. Run this simple graph a couple times, and
check the outputs of the CLIP Skip node. You'll see the `skipped_layers`
value increasing each time.

Second is the nodes-sharing-inputs issue:


![image](https://github.com/invoke-ai/InvokeAI/assets/4822129/ecdaefab-2beb-4950-b4bf-2a5738ce6832)

Note the cache is _disabled_. Run the graph a couple times and check the
outputs of the two CLIP Skip nodes. You'll see that one has the expected
value for `skipped_layers` and the other has double that.

Now update to the PR and try again. You should see `skipped_layers` is
the right value in all cases.


## Merge Plan

This PR can be merged when approved. It needs a real review with
braintime.

2024-02-09 13:24:10 -05:00
66b106f107 Merge branch 'main' into fix/nodes/deep-copy-inputs 2024-02-09 11:49:16 -05:00
b10d745dae fix(ui): when using control image dimensions, round to 8
The control image dimensions were set directly without rounding them to 8, causing an error during generation if they weren't a multiple of 8.
2024-02-09 08:44:11 -05:00
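The fix amounts to snapping each control image dimension to the nearest multiple of 8 before generation; a minimal sketch (the helper name is hypothetical):

```python
def round_to_multiple(value: float, multiple: int = 8) -> int:
    """Snap a dimension to the nearest multiple; SD latents require /8 sizes."""
    return int(round(value / multiple) * multiple)

assert round_to_multiple(513) == 512
assert round_to_multiple(517) == 520
```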
d20f98fb4f fix(nodes): deep copy graph inputs
The change to memory session storage brings a subtle behaviour change.

Previously, we serialized and deserialized everything (e.g. field state, invocation outputs, etc) constantly. This meant we were effectively working with deep-copied objects at all times. We could mutate objects freely without worrying about other references to the object.

With memory storage, objects are now passed around by reference, and we cannot handle them in the same way.

This is problematic for nodes that mutate their own inputs. There are two ways this causes a problem:

- An output is used as input for multiple nodes. If the first node mutates the output object while `invoke`ing, the next node will get the mutated object.
- The invocation cache stores live python objects. When a node mutates an output pulled from the cache, the next node that uses the cached object will get the mutated object.

The solution is to deep-copy a node's inputs as they are set, effectively reproducing the same behaviour as we had with the SQLite session storage. Nodes can safely mutate their inputs and those changes never leave the node's scope.

Closes  #5665
2024-02-09 21:17:32 +11:00
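A minimal sketch of the failure mode and the fix, using the `ClipField` example from the PR; the assignment pattern is illustrative:

```python
from copy import deepcopy

class ClipField:
    def __init__(self) -> None:
        self.skipped_layers = 0

# Without copying, two nodes fed from one output share the same object:
shared_output = ClipField()
node_a_input = shared_output          # passed by reference
node_a_input.skipped_layers += 1      # node A mutates while invoking
assert shared_output.skipped_layers == 1  # mutation leaks to the next consumer

# Deep-copying on assignment restores the isolation the SQLite
# serialize/deserialize round-trip used to provide implicitly:
node_b_input = deepcopy(shared_output)
node_b_input.skipped_layers += 1
assert shared_output.skipped_layers == 1  # unchanged; mutation stays local
```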
c9c150f850 feat(ui): use cfgRescaleMultiplier on canvas graphs 2024-02-09 18:53:08 +11:00
a60e2b7c77 fix existing graphs with cfg_RescaleMultiplier not used 2024-02-09 18:53:08 +11:00
da6e5b2ba1 fix(ui): fix lora count badge when none enabled 2024-02-08 19:22:28 -05:00
c65d497cbc fix(ui): filter disabled LoRAs on sdxl 2024-02-08 19:22:28 -05:00
a68d8fe203 translationBot(ui): update translation (German)
Currently translated at 74.4% (1054 of 1416 strings)

translationBot(ui): update translation (German)

Currently translated at 69.6% (986 of 1416 strings)

translationBot(ui): update translation (German)

Currently translated at 68.6% (972 of 1416 strings)

Co-authored-by: B N <berndnieschalk@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-02-09 10:10:50 +11:00
5de2288cfa addressed feedback 2024-02-09 10:09:27 +11:00
2ce70b4457 added button on hover for exposing fields to linear workflow ui 2024-02-09 10:09:27 +11:00
6c5f743e2b Upgrade version of fastapi and socketio 2024-02-09 09:04:01 +11:00
bb242c4e1e Print correct version when a non-default version is selected for install (#5675)

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description

Small bugfix: the installer would always print the latest stable version
as the one to be installed, even if a different one was selected. The
selected version would still be installed correctly. This PR fixes the
message.

## QA Instructions, Screenshots, Recordings

Select a pre-release version on install and observe the correct version
being printed. Compare to current behaviour to ascertain the fix.

## Merge Plan

- "This PR can be merged when approved"

## Added/updated tests?

- [ ] Yes
- [x] No
2024-02-08 11:07:14 -05:00
c9e246ed1b fix(installer): print correct version when a non-default version is selected 2024-02-08 09:56:56 -05:00
2175fe3823 translationBot(ui): update translation (German)
Currently translated at 66.2% (938 of 1416 strings)

Co-authored-by: B N <berndnieschalk@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-02-08 07:57:55 +11:00
f64fc2c8b7 feat(installer): add a deprecation message to the in-launcher updater 2024-02-07 14:31:26 -05:00
3d1b5c57ea fix(installer): more reliably upgrade pip 2024-02-07 14:31:26 -05:00
31b9538976 feat(installer): improve directory selection experience 2024-02-07 14:31:26 -05:00
97c1545cca feat(installer): show latest versions in the welcome panel 2024-02-07 14:31:26 -05:00
6a8a3b50bc feat(installer): add an interactive version chooser 2024-02-07 14:31:26 -05:00
5a816818dc feat(installer): get list of (pre-)releases from github api 2024-02-07 14:31:26 -05:00
1cb866d1fc fix(installer): small formatting fix in welcome banner 2024-02-07 14:31:26 -05:00
29bcc4b595 fix(installer) slightly better typing for GPU selection 2024-02-07 14:31:26 -05:00
ca2bb6f0cc fix(installer): bubble up exceptions during install 2024-02-07 14:31:26 -05:00
1c8fc908b2 fix(installer): minor logic fixes 2024-02-07 14:31:26 -05:00
d397beaa47 fix(installer): upgrade the temporary pip before installation 2024-02-07 14:31:26 -05:00
60eea09629 feat(installer): *always* force-reinstall
This has repeatedly shown itself useful in fixing install issues,
especially regarding pytorch CPU/GPU version, so there is little
downside to making this the default.

Performance impact of this should be negligible. Packages will
be reinstalled from pip cache if possible, and downloaded only if
necessary. Impact may be felt on slower disks.
2024-02-07 14:31:26 -05:00
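A sketch of what the installer's behaviour amounts to, expressed as a Python-driven pip call; the exact arguments the installer passes are not shown in this log:

```python
import subprocess
import sys

# --force-reinstall lays packages down again even when versions already
# match, which repairs mismatched pytorch CPU/GPU wheels. pip's cache
# keeps the cost low: wheels are only downloaded when missing.
subprocess.check_call([
    sys.executable, "-m", "pip", "install", "--force-reinstall", "InvokeAI",
])
```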
5b7b1122cb tidy(installer): clean up unused code 2024-02-07 14:31:26 -05:00
dfc8d1bb10 tidy(installer): remove unused argument / env var 2024-02-07 14:31:26 -05:00
f9fa62164e tidy(installer): remove .whl publishing and bundling - we now install from pypi 2024-02-07 14:31:26 -05:00
d47905d2fb chore(installer): reorder messages in util script
fail fast if there's a virtualenv activated
2024-02-07 14:31:26 -05:00
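A minimal sketch of the fail-fast check, assuming the conventional `VIRTUAL_ENV` marker:

```python
import os
import sys

# An activated virtualenv exports VIRTUAL_ENV; bail out before doing any
# work rather than failing midway through.
if os.environ.get("VIRTUAL_ENV"):
    sys.exit("Deactivate your virtualenv before running this script.")
```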
03b1cde97d tidy(installer): remove unused update scripts and references thereto 2024-02-07 14:31:26 -05:00
7162ff04df tidy(installer): do not preinstall torch separately 2024-02-07 14:31:26 -05:00
32b1e974ca feat(installer): install from PyPi instead of using prepackaged wheel 2024-02-07 14:31:26 -05:00
82c3c7fc38 tidy(installer): remove unused experimental venv location 2024-02-07 14:31:26 -05:00
3dcbb79ef7 chore(installer): typing pass 2024-02-07 14:31:26 -05:00
3b41104427 Minor dep updates for diffusers and numpy (#5673)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [X] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [ ] Yes
- [X] No, because probably not needed

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description

These are more minor dep updates that I was able to test without any
regressions. This will ensure we are up-to-date again.
The fixes are very minor, probably not noticeable in InvokeAI (at least
for diffusers) but it's still good to have them.

This is also to make sure that the RC is releasing with the latest
packages to ensure extended testing.

Greetings

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Merge Plan


## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2024-02-07 13:29:29 -05:00
35bf7ee66d Minor dep updates 2024-02-07 17:58:28 +01:00
430e17a5d2 community node: BriaAI RMBG 1.4 (#5671)
## What type of PR is this? (check all applicable)

- [x] Community Node Submission


## Description

- Adds BriaAI's new 1.4 model for background removal. Far superior
results from what I've tested compared to any other BG removal so far:
https://github.com/blessedcoolant/invoke_bria_rmbg
2024-02-07 11:06:31 -05:00
400d66fa5d community node: BriaAI RMBG 1.4 2024-02-07 19:55:04 +05:30
800c481515 add actions for workflow library (#5669)
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-02-07 14:14:54 +00:00
79ae9c4e64 feat(nodes): move profiler/stats cleanup logic to function
Harder to miss something going forward.
2024-02-07 11:26:15 +11:00
0dc6cb0535 feat(nodes): do not log stats errors
The stats service was logging error messages when attempting to retrieve stats for a graph that it wasn't tracking. This was rather noisy.

Instead of logging these errors within the service, we now will just raise the error and let the consumer of the service decide whether or not to log. Our usage of the service at this time is to suppress errors - we don't want to log anything to the console.

Note: With the improvements in the previous two commits, we shouldn't get these errors moving forward, but I still think this change is correct.
2024-02-07 11:26:15 +11:00
810fc19e43 feat(nodes): log stats for canceled graphs
When an invocation is canceled, we consider the graph canceled. Log the graph's stats before resetting them. No reason not to log these stats.

We also should stop the profiler at this point, because this graph is finished. If we don't stop it manually, it will stop itself and write the profile to disk when it is next started, but the resultant profile will include more than just its target graph.

Now we get both stats and profiles for canceled graphs.
2024-02-07 11:26:15 +11:00
e0e106367d fix(nodes): do not clear invocation stats on invoke error
When an invocation errors, we clear the stats for the whole graph. Later on, we check the graph for errors, see the failed invocation, and consider the graph failed. We then attempt to log the stats for the failed graph.

Except by then the failed graph has no stats, and the stats service raises an error.

The user sees, in the terminal:
- An invocation error
- A stats error (scary!)
- No stats for the failed graph (uninformative!)

What the user should see:
- An invocation error
- Graph stats

The fix is simple - don't reset the graph stats when an invocation has an error.
2024-02-07 11:26:15 +11:00
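The common thread in these stats commits is raise-and-let-the-consumer-decide; a minimal sketch, borrowing the `GESStatsNotFoundError` name that appears later in this log, with everything else assumed:

```python
class GESStatsNotFoundError(Exception):
    """Raised when stats are requested for an untracked graph."""

class InvocationStatsService:
    def __init__(self) -> None:
        self._stats: dict[str, dict] = {}

    def get_stats(self, graph_id: str) -> dict:
        # Raise instead of logging: the service stays quiet and the
        # consumer decides whether the error is worth reporting.
        try:
            return self._stats[graph_id]
        except KeyError as e:
            raise GESStatsNotFoundError(graph_id) from e

# The consumer chooses to suppress rather than log noise:
service = InvocationStatsService()
try:
    service.get_stats("unknown-graph")
except GESStatsNotFoundError:
    pass  # graph not tracked; nothing to report
```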
14472dc09d translationBot(ui): update translation files
Updated by "Cleanup translation files" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-02-05 11:16:38 +11:00
e8095b73ae feat(ui): improve types for language picker
Makes it impossible to miss a language or typo.
2024-02-05 10:47:36 +11:00
c979cf5ecc tidy(ui): remove language translation strings
There's no need to do things like translate Arabic into Finnish. We never use those strings. Remove these translations entirely.
2024-02-05 10:47:36 +11:00
1b4dbd283e fix(ui): hardcode language picker languages
Hardcode the options in the dropdown, don't rely on translators to fill this in.

Also, add a number of missing languages (Azerbaijani, Finnish, Hungarian, Swedish, Turkish).
2024-02-05 10:47:36 +11:00
fb50a221f8 fix(ui): fix color input field alpha
Closes #5647

The alpha values in the UI are `0-1` but the backend wants `0-255`.

Previously, this was handled in `parseFieldValue` when building the graph. In a recent release, field types were refactored and this broke the alpha handling.

The logic for handling alpha values is moved into `ColorFieldInputComponent`, and `parseFieldValue` now just does no value transformations.

Though it would be a minor change, I'm leaving this function in because I don't want to change the rest of the logic except when necessary.
2024-02-05 09:28:20 +11:00
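The underlying conversion is a one-liner; a sketch of the 0-1 to 0-255 mapping now done in the component (the function name is hypothetical):

```python
def to_backend_alpha(ui_alpha: float) -> int:
    """Map the UI's 0-1 alpha onto the backend's 0-255 integer range."""
    return round(ui_alpha * 255)

assert to_backend_alpha(0.0) == 0
assert to_backend_alpha(0.5) == 128
assert to_backend_alpha(1.0) == 255
```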
52e07db06b Update communityNodes.md
added Autostereogram nodes
2024-02-05 09:26:41 +11:00
6643b5cec4 feat(ui): log trace when skipping reserved input field type 2024-02-05 09:24:46 +11:00
e8bf9ea058 fix(ui): do not swallow errors during schema parsing
Unknown errors were swallowed during schema parsing. Now they log a warning.
2024-02-05 09:24:46 +11:00
ce3d37e829 fix(ui): handle fields with single option literal
Closes #5616

Turns out the OpenAPI schema definition for a pydantic field with a `Literal` type annotation is different depending on the number of options.

When there is a single value (e.g. `Literal["foo"]`, this results in a `const` schema object. The schema parser didn't know how to handle this, and displayed a warning in the JS console.

 This situation is now handled. When a `const` schema object is encountered, we interpret that as an `EnumField` with a single option.

 I think this makes sense - if you had a truly constant value, you wouldn't make it a field, so a `const` must mean a dynamically generated enum that ended up with only a single option.
2024-02-05 09:15:09 +11:00
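A sketch of the backend side of this, assuming pydantic v2; the exact schema output varies by version, hence the hedged comments:

```python
from typing import Literal

from pydantic import BaseModel

class SingleOption(BaseModel):
    mode: Literal["foo"]

class MultiOption(BaseModel):
    mode: Literal["foo", "bar"]

# A single-valued Literal serializes as a `const` schema object, while
# multiple values produce an `enum` - the parser now treats the former
# as an EnumField with one option.
print(SingleOption.model_json_schema()["properties"]["mode"])
# e.g. {'const': 'foo', 'title': 'Mode'}
print(MultiOption.model_json_schema()["properties"]["mode"])
# e.g. {'enum': ['foo', 'bar'], 'title': 'Mode', 'type': 'string'}
```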
8a61063e84 translationBot(ui): update translation (Turkish)
Currently translated at 57.5% (825 of 1433 strings)

Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2024-02-05 08:31:44 +11:00
87ff96553a translationBot(ui): update translation files
Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-02-05 08:31:44 +11:00
209bf105bc translationBot(ui): update translation (Turkish)
Currently translated at 57.3% (822 of 1433 strings)

Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2024-02-05 08:31:44 +11:00
804dbeba34 translationBot(ui): update translation files
Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-02-05 08:31:44 +11:00
067cd4dc2e translationBot(ui): update translation (Turkish)
Currently translated at 40.6% (582 of 1433 strings)

translationBot(ui): update translation (Turkish)

Currently translated at 38.8% (557 of 1433 strings)

Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2024-02-05 08:31:44 +11:00
feb4a3f242 translationBot(ui): update translation (Azerbaijani)
Currently translated at 0.1% (1 of 1433 strings)

translationBot(ui): added translation (Azerbaijani)

Co-authored-by: Mehrab Poladov <thepoladov@protonmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/az/
Translation: InvokeAI/Web UI
2024-02-05 08:31:44 +11:00
4a886c0a4a Minor dep updates 2024-02-04 13:04:36 -05:00
8e500283b6 Fix broken import in checkpoint_convert (#5635)
* Fix broken import in checkpoint_convert

* simplify the fix

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-02-04 12:56:51 +00:00
3205371654 feat(ui): better error handling for persist serialize function 2024-02-03 07:39:19 -05:00
d713620d9e refactor(ui): refactor reducer list
Instead of manually naming reducers, use each slice's `name` property. Makes typos impossible.
2024-02-03 07:39:19 -05:00
c1300fa8b1 refactor(ui): refactor persist config
Add more structure around persist configs to avoid bugs from typos and misplaced persist denylists.
2024-02-03 07:39:19 -05:00
0976ddba23 chore(invocation-stats): improve types in _prune_stale_stats 2024-02-03 07:34:06 -05:00
3ebb806410 fix(invocation-stats): use appropriate method to get the type of an invocation 2024-02-03 07:34:06 -05:00
9f274c79dc chore(item-storage): improve types
Provide type args to the generics.
2024-02-03 07:34:06 -05:00
88c08bbfc7 fix(item-storage-memory): throw when requested item does not exist
- `ItemStorageMemory.get` now throws an `ItemNotFoundError` when the requested `item_id` is not found.
- Update docstrings in ABC and tests.

The new memory item storage implementation implemented the `get` method incorrectly, by returning `None` if the item didn't exist.

The ABC typed `get` as returning `T`, while the SQLite implementation typed `get` as returning `Optional[T]`. The SQLite implementation was referenced when writing the memory implementation.

This mismatched typing is a violation of the Liskov substitution principle, because the signature of `get` in the implementation is wider than the abstract class's definition. Using `pyright` in strict mode catches this.

In `invocation_stats_default`, this introduced an error. The `_prune_stats` method calls `get`, expecting the method to throw if the item is not found. If the graph is no longer stored in the bounded item storage, we will call `is_complete()` on `None`, causing the error.

Note: This error condition never arose the SQLite implementation because it parsed the item with pydantic before returning it, which would throw if the item was not found. It implicitly threw, while the memory implementation did not.
2024-02-03 07:34:06 -05:00
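A minimal sketch of the contract being fixed; the storage internals are illustrative:

```python
from abc import ABC, abstractmethod
from typing import Generic, TypeVar

T = TypeVar("T")

class ItemNotFoundError(Exception):
    pass

class ItemStorageABC(ABC, Generic[T]):
    @abstractmethod
    def get(self, item_id: str) -> T:
        """Return the item; raises ItemNotFoundError if absent."""

class ItemStorageMemory(ItemStorageABC[T]):
    def __init__(self) -> None:
        self._items: dict[str, T] = {}

    def get(self, item_id: str) -> T:
        # Returning Optional[T] here would widen the ABC's signature
        # (a Liskov violation); raising keeps the contract narrow.
        if item_id not in self._items:
            raise ItemNotFoundError(item_id)
        return self._items[item_id]
```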
c2af124622 fix(ui): refetch intermediates count when settings modal open
The `getIntermediatesCount` query is set to `refetchOnMountOrArgChange`. The intention was that when the settings modal opens (i.e. mounts), the `getIntermediatesCount` query would be refetched. But it doesn't work - modals only mount once; there is no lazy rendering for them.

So we have to imperatively refetch, by refetching as we open the modal.

Closes #5639
2024-02-03 12:14:37 +11:00
f972fe9836 perf: annotate 2024-02-03 10:18:26 +11:00
dcfc883ab3 perf: remove TypeAdapter 2024-02-03 10:18:26 +11:00
1d2bd6b8f7 perf: TypeAdapter instantiated once 2024-02-03 10:18:26 +11:00
f2777f5096 Port the command-line tools to use model_manager2 (#5546)
* Port the command-line tools to use model_manager2

1.Reimplement the following:

  - invokeai-model-install
  - invokeai-merge
  - invokeai-ti

  To avoid breaking the original model manager, the updated tools
  have been renamed invokeai-model-install2 and invokeai-merge2. The
  textual inversion training script should continue to work with
  existing installations. The "starter" models now live in
  `invokeai/configs/INITIAL_MODELS2.yaml`.

  When the full model manager 2 is in place and working, I'll rename
  these files and commands.

2. Add the `merge` route to the web API. This will merge two or three models,
   resulting in a new one.

   - Note that because the model installer selectively installs the `fp16` variant
     of models (rather than both 16- and 32-bit versions as previously),
     the diffusers merge script will choke on any huggingface diffusers models
     that were downloaded with the new installer. Previously-downloaded models
     should continue to merge correctly. I have a PR
     upstream https://github.com/huggingface/diffusers/pull/6670 to fix
     this.

3. (more important!)
  During implementation of the CLI tools, found and fixed a number of small
  runtime bugs in the model_manager2 implementation:

  - During model database migration, if a registered model's file was
    not found on disk, the migration would be aborted. Now the
    offending model is skipped with a log warning.

  - Caught and fixed a condition in which the installer would download the
    entire diffusers repo when the user provided a single `.safetensors`
    file URL.

  - Caught and fixed a condition in which the installer would raise an
    exception and stop the app when a request for an unknown model's metadata
    was passed to Civitai. Now an error is logged and the installer continues.

  - Replaced the LoWRA starter LoRA with FlatColor. The former has been removed
    from Civitai.

* fix ruff issue

---------

Co-authored-by: Lincoln Stein <lstein@gmail.com>
2024-02-02 17:18:47 +00:00
d3320dc4ee convert checkpoints to safetensors (#5620)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description

It seems we elected to convert checkpoints into .bin files when we set this
up. Converting to .safetensors doesn't seem to corrupt them anymore.

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Merge Plan


## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2024-02-02 10:27:24 -05:00
72db2ee352 Merge branch 'main' into sdxl-convert-safetensors 2024-02-02 10:10:49 -05:00
60c3a4ad5e chore: add Hand Refiner to communityNodes.md 2024-02-02 08:12:32 -05:00
cf7a7928af Update mkdocs.yml 2024-02-01 20:43:49 -05:00
1057314508 Fix ruff? 2024-02-01 20:40:28 -05:00
73a077956b Why did my IDE change the comment? 2024-02-01 20:40:28 -05:00
5e1e50bd47 Fix hopefully last import 2024-02-01 20:40:28 -05:00
413fe566b8 Fix imports 2024-02-01 20:40:28 -05:00
c9b5f06c42 Update diffusers + hotfix 2024-02-01 20:40:28 -05:00
b53e432b0f translationBot(ui): update translation (German)
Currently translated at 60.8% (871 of 1432 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2024-02-02 11:16:45 +11:00
88164447e9 fix(ui): hide HRF if SDXL model selected 2024-02-02 11:10:54 +11:00
1ac85fd049 tidy(migrator): remove logic to check if graph_executions exists in migration 5
Initially I wanted to show how many sessions were being deleted. In hindsight, this is not great:
- It requires extra logic in the migrator, which should be as simple as possible.
- It may be alarming to see "Clearing 224591 old sessions".

The app still reports on freed space during the DB startup logic.
2024-02-02 09:20:41 +11:00
ee6fc4ab1d chore(item_storage): excise SqliteItemStorage 2024-02-02 09:20:41 +11:00
9f793bdae8 feat(item_storage): implement item_storage_memory with LRU eviction strategy
Implemented with OrderedDict.
2024-02-02 09:20:41 +11:00
a0eecaecd0 feat(item_storage): implement item_storage_memory max_size
Implemented with unordered dict and set.
2024-02-02 09:20:41 +11:00
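A minimal sketch of the approach these two commits describe, combining the OrderedDict LRU ordering with max_size eviction; the details are assumptions:

```python
from collections import OrderedDict
from typing import Generic, TypeVar

T = TypeVar("T")

class ItemStorageMemory(Generic[T]):
    """Bounded in-memory storage with LRU eviction via OrderedDict."""

    def __init__(self, max_size: int = 10) -> None:
        self._items: OrderedDict[str, T] = OrderedDict()
        self._max_size = max_size

    def set(self, item_id: str, item: T) -> None:
        self._items[item_id] = item
        self._items.move_to_end(item_id)  # mark as most recently used
        if len(self._items) > self._max_size:
            self._items.popitem(last=False)  # evict the least recently used

    def get(self, item_id: str) -> T:
        item = self._items[item_id]  # raises KeyError if absent
        self._items.move_to_end(item_id)
        return item
```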
d532073f5b fix(db): check for graph_executions table before dropping
This is needed to not fail tests; see comment in code.
2024-02-02 09:20:41 +11:00
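A sketch of the guard, using the standard sqlite_master lookup; the migration's actual code isn't shown here:

```python
import sqlite3

def table_exists(conn: sqlite3.Connection, name: str) -> bool:
    """Check sqlite_master before issuing DROP TABLE, so tests that never
    created the table don't fail."""
    cursor = conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' AND name=?;",
        (name,),
    )
    return cursor.fetchone() is not None

conn = sqlite3.connect(":memory:")
if table_exists(conn, "graph_executions"):
    conn.execute("DROP TABLE graph_executions;")
```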
198e8c9d55 feat(db): add migration 5 to drop graph_executions table 2024-02-02 09:20:41 +11:00
30367deeca feat(nodes): use memory item storage 2024-02-02 09:20:41 +11:00
e73298aea2 tidy(item_storage): remove extraneous class attribute declarations 2024-02-02 09:20:41 +11:00
59279851a3 tidy(item_storage): remove unused list and search methods 2024-02-02 09:20:41 +11:00
2965357d99 feat(nodes): add ItemStorageMemory
The sqlite item storage class can be swapped for this to eliminate costly network calls.
2024-02-02 09:20:41 +11:00
8bd32ee142 feat(nodes): add delete method to ItemStorageABC 2024-02-02 09:20:41 +11:00
a4f892dcfb tidy(nodes): remove unused get_raw method on ItemStorageABC 2024-02-02 09:20:41 +11:00
e675983e20 fix(ui): download image opens in new tab (#5625)
* fix(ui): download image opens in new tab

In some environments, a simple `a` element cannot trigger a download of an image. Fetching the image directly can get around this and provide more reliable download functionality.

* use hook for imageUrlToBlob so token gets sent if needed

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-02-01 20:25:01 +00:00
e9558f97c4 perf(config): change default png_compress_level to 1
This substantially reduces the time spent encoding PNGs. In workflows with many image outputs, this is a drastic improvement.

For a tiled upscaling workflow going from 512x512 to a scale factor of 4, this can provide over 15% speed increase.
2024-02-02 00:32:00 +11:00
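A minimal illustration with Pillow, whose PNG writer exposes the same 0-9 `compress_level` knob (default 6):

```python
from PIL import Image

img = Image.new("RGB", (2048, 2048), "white")
# Level 1 trades somewhat larger files for drastically faster encoding -
# the PNG encode dominates in workflows that emit many images.
img.save("output.png", compress_level=1)
```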
a1a611f8cb chore(ui): lint 2024-02-02 00:20:28 +11:00
182dc859a0 chore(ui): update eslint rules
- Add `i18next/no-literal-string` (was removed from upstream config)
- Restore `path/no-relative-imports`, this was lost in the shuffle a while ago
2024-02-02 00:20:28 +11:00
c0240a8568 chore(ui): bump @invoke-ai/eslint-config-react 2024-02-02 00:20:28 +11:00
02bcff29e8 feat: update ROCm to 5.6 everywhere 2024-02-01 00:07:16 -05:00
d4ed64df7d feat: add force-reinstall option to the updater 2024-02-01 00:07:16 -05:00
701f14c1e3 fix: add PyTorch extra-index-url to the updater command 2024-02-01 00:07:16 -05:00
45bf2c7da6 chore(updater): address deprecation of pkg_resources
as per module docstring:
This module is deprecated. Users are directed to importlib.resources,
importlib.metadata and packaging instead.
2024-02-01 00:07:16 -05:00
67ada70a26 docs: update link to frontend README 2024-01-31 22:34:59 -05:00
06bcc07f65 Merge branch 'main' into sdxl-convert-safetensors 2024-01-31 17:00:19 -05:00
4410ecf62c fix(stats): log errors at error level
They were erroneously at warning before.
2024-02-01 08:50:56 +11:00
9f6b9d4d23 fix(stats): preserve stack when raising GESStatsNotFoundError 2024-02-01 08:50:56 +11:00
b24e8dd829 feat(stats): refactor InvocationStatsService to output stats as dataclasses
This allows the stats to be written to disk as JSON and analyzed.

- Add dataclasses to hold stats.
- Move stats pretty-print logic to `__str__` of the new `InvocationStatsSummary` class.
- Add `get_stats` and `dump_stats` methods to `InvocationStatsServiceBase`.
- `InvocationStatsService` now throws if stats are requested for a session it doesn't know about. This avoids needing to do a lot of messy null checks.
- Update `DefaultInvocationProcessor` to use the new stats methods and suppresses the new errors.
2024-02-01 08:50:56 +11:00
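A minimal sketch of the shape this refactor describes; the field names are illustrative:

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class InvocationStatsSummary:
    """Dataclass-based stats summary; pretty-printing lives in __str__."""
    graph_id: str
    total_time_s: float
    ram_gb: float

    def __str__(self) -> str:
        return (
            f"Graph {self.graph_id}: {self.total_time_s:.2f}s, "
            f"{self.ram_gb:.2f}GB RAM"
        )

def dump_stats(summary: InvocationStatsSummary, path: str) -> None:
    # Dataclasses serialize cleanly to JSON for offline analysis.
    with open(path, "w") as f:
        json.dump(asdict(summary), f, indent=2)
```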
25291a2e01 select first image if no selectedImageName 2024-01-31 11:52:47 -05:00
332f3930a5 Allow civit ai API Key on Imports (#5608)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [x] Bug Fix
- [ ] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [ ] Yes
- [ ] No


## Description
Small PR to allow users to pass in a civit api key via config options

## Related Tickets & Documents


- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings


## Merge Plan


## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2024-01-31 10:51:33 -05:00
ed466a99ec Merge branch 'main' into fix-civit-model-imports 2024-01-31 10:12:44 -05:00
f68f8898c0 Workflow navigation & save-as (#5607)
* redo top panel of workflow editor

* add checkbox option to save to project, integrate save-as flow into first time saving workflow

* remove log

* remove workflowLibrary as a feature that can be disabled

* lint

* feat(ui): make SaveWorkflowAsDialog a singleton

Fixes an issue where the workflow name would erroneously be an empty string (when it should show the current workflow name).

Also makes it easier to interact with this component.

- Extract the dialog state to a hook
- Render the dialog once in `<NodeEditor />`
- Use the hook in the various buttons that should open the dialog
- Fix a few wonkily named components (pre-existing issue)

* fix(ui): when saving a never-before-saved workflow, do not append " (copy)" to the name

* fix(ui): do not obscure workflow library button with add node popover

This component is kinda janky :/ the popover content somehow renders invisibly over the button. I think it's related to the `<PopoverAnchor />`.

Need to redo this in the future, but for now, making the popover render lazily fixes this.

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-31 13:32:31 +00:00
a0996b1c0a Fix ruff styling 2024-01-31 07:16:14 -06:00
522ff4a042 civit -> civitai 2024-01-31 07:16:14 -06:00
a769f93be0 Remove unnecessary change 2024-01-31 07:16:14 -06:00
2c5ef92979 Move location of config property, comment for explanation of use 2024-01-31 07:16:14 -06:00
5d773dc94c Remove debug line 2024-01-31 07:16:14 -06:00
088e3420e6 Allow passing of civit api key via config 2024-01-31 07:16:14 -06:00
14efc95707 Allow passing of a civit api key 2024-01-31 07:16:14 -06:00
f48a2c5fd2 fix(ui): workflow settings styling
Got borked in the redesign.
2024-01-31 07:16:01 -06:00
74ae4d7774 translationBot(ui): update translation files
Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-01-31 23:05:11 +11:00
191203ea0c translationBot(ui): update translation (Turkish)
Currently translated at 36.1% (516 of 1427 strings)

Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2024-01-31 23:05:11 +11:00
6aceae5c22 translationBot(ui): update translation (Italian)
Currently translated at 97.2% (1388 of 1427 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-01-31 23:05:11 +11:00
8c6b3efd39 fix(ui): remove hard reset of cursor on canvas during state reset
Remove resetting the cursor when resetting state, letting event handlers take care of presentation
2024-01-31 23:03:14 +11:00
4602efd598 feat: add profiler util (#5601)
* feat(config): add profiling config settings

- `profile_graphs` enables graph profiling with cProfile
- `profiles_dir` sets the output for profiles

* feat(nodes): add Profiler util

Simple wrapper around cProfile.

* feat(nodes): use Profiler in invocation processor

* scripts: add generate_profile_graphs.sh script

Helper to generate graphs for profiles.

* pkg: add snakeviz and gprof2dot to dev deps

These are useful for profiling.

* tests: add tests for profiler util

* fix(profiler): handle previous profile not stopped cleanly

* feat(profiler): add profile_prefix config setting

The prefix is used when writing profile output files. Useful to organise profiles into sessions.

* tidy(profiler): add `_` to private API

* feat(profiler): simplify API

* feat(profiler): use child logger for profiler logs

* chore(profiler): update docstrings

* feat(profiler): stop() returns output path

* chore(profiler): fix docstring

* tests(profiler): update tests

* chore: ruff
2024-01-31 10:51:57 +00:00
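Taken together, the commits above describe a thin `Profiler` wrapper around cProfile: `start()` begins a profile, `stop()` writes it under the configured profiles directory (optionally prefixed with `profile_prefix`) and returns the output path, and logs go to a child logger. A minimal sketch of that shape, with names taken loosely from the commit messages rather than InvokeAI's actual code:

```python
# Minimal sketch of a cProfile wrapper matching the commits above.
# All names are assumptions based on the commit messages.
import cProfile
import logging
from pathlib import Path

class Profiler:
    def __init__(self, logger: logging.Logger, output_dir: Path, prefix: str | None = None):
        self._logger = logger.getChild("profiler")  # child logger for profiler logs
        self._output_dir = output_dir
        self._prefix = prefix
        self._profiler: cProfile.Profile | None = None
        self._profile_id: str | None = None

    def start(self, profile_id: str) -> None:
        if self._profiler is not None:
            # Previous profile was not stopped cleanly; discard it.
            self._logger.warning("Profiler was still running; discarding old profile")
            self._profiler.disable()
        self._profile_id = profile_id
        self._profiler = cProfile.Profile()
        self._profiler.enable()

    def stop(self) -> Path:
        """Stop profiling and return the path of the written .prof file."""
        if self._profiler is None:
            raise RuntimeError("Profiler was not started")
        self._profiler.disable()
        stem = f"{self._prefix}_{self._profile_id}" if self._prefix else self._profile_id
        self._output_dir.mkdir(parents=True, exist_ok=True)
        path = self._output_dir / f"{stem}.prof"
        self._profiler.dump_stats(path)
        self._profiler = None
        return path
```

The resulting `.prof` files can then be inspected with snakeviz or gprof2dot, the dev dependencies added above.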
f70c0936ca feat: disable/enable LoRAs with a switch (#5591)
* feat: disable/enable LoRAs with a switch

* feat:  visually display previous weight when disabled

* style: 🚨 linting

* feat:  lora badge count reflects active loras

* style: 🚨 linting

* feat:  track disabled lora on state instead of weight

* style: 🚨 linting

* feat:  it all works now

tracking isEnabled on lora state, disabled slider when disabled, removed disabled loras from graph, updated badge counting and renamed lora add function

* style: 🚨 linting

* fix: 🐛 enabledLoRAs filter nullish coalescing

* refactor: 🎨 minor changes

renamed lora toggle action, removed errant comment, removed extraneous type annotation

* style: 🚨 linting
2024-01-31 05:50:03 +00:00
0d4de4cc63 changed hotkeys (#5542)
Adds ctrl/meta + scroll to change brush size on canvas.

* changed hotkeys

* new hotkey as an additional

* lint fixed

* added ctrl scroll and removed hotkey

* using

* added fix

* feedback changes

* brush size change logic

* feat(ui): also check for meta key when modifying brush size

* feat(ui): add comment linking to where brush size algo was determined

---------

Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
2024-01-31 15:57:16 +11:00
1e855f8290 Update safetensors and transformers to their latest versions (#5562)
* Update Safetensors to the latest version

* Update Transformers while at it

* Update transformers again
2024-01-31 04:54:56 +00:00
bb2787584d chore(deps-dev): bump vite in /invokeai/frontend/web
Bumps [vite](https://github.com/vitejs/vite/tree/HEAD/packages/vite) from 5.0.11 to 5.0.12.
- [Release notes](https://github.com/vitejs/vite/releases)
- [Changelog](https://github.com/vitejs/vite/blob/v5.0.12/packages/vite/CHANGELOG.md)
- [Commits](https://github.com/vitejs/vite/commits/v5.0.12/packages/vite)

---
updated-dependencies:
- dependency-name: vite
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-01-31 15:47:13 +11:00
a04981b418 This seems to work now 2024-01-30 21:32:08 -05:00
d7f16b7c87 fix(ui): the bottom button on floating side panel clears all queue items 2024-01-31 01:04:24 +11:00
4477e04d59 fix(ui): filter out interactive targets when pressing space on canvas tab
Improve input filtering for better accessibility
2024-01-30 09:56:21 +11:00
30e11b4b42 feat(ui): save the current staging image with shift+s 2024-01-30 09:56:21 +11:00
b93695b78f feat(ui): discard all staging images in canvas on escape 2024-01-30 09:56:21 +11:00
b01311813b fix(ui): activate move tool on pressing space
The canvas element is not guaranteed to be in focus (e.g. after accepting a new staging image), so we check the active tab name instead
2024-01-30 09:56:21 +11:00
5ae80fab87 fix(ui): accept staging image hotkey callback 2024-01-30 09:56:21 +11:00
c4291f2136 fix(ui): block gallery navigation when staging images on canvas 2024-01-30 09:56:21 +11:00
287d3c2b04 add UI library to rollup config (#5598)
* try rolling up ui library

* lint

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-29 13:13:09 -05:00
7fde19730e translationBot(ui): update translation (Turkish)
Currently translated at 22.8% (326 of 1426 strings)

Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2024-01-29 14:15:29 +11:00
13575642d8 chore: update issue template
- Improve spelling and grammar
- Add browser, GPU model, python deps fields
- Revise other fields
2024-01-29 14:11:00 +11:00
3f5370b284 feat(ui): add a copy button to the about modal
This copies the dependencies as JSON.
2024-01-28 20:50:08 -06:00
d048eb5b20 docs(ui): add STATE_MGMT.md
Supersedes the mini nanostores doc.
2024-01-29 07:28:20 +11:00
dd7031a472 docs(ui): update README.md
Also moved it to the frontend package root
2024-01-29 07:28:20 +11:00
4160d5ef26 update contributors list to bring into sync with discord roles (#5586)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description

This brings `docs/other/CONTRIBUTORS.md` into sync with collaborator
roles in Discord as of January 27, 2024.

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

## QA Instructions, Screenshots, Recordings

N/A

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

Merge when approved.

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2024-01-28 11:28:22 -05:00
51bdf2fd19 Merge branch 'main' into docs/update-contributors 2024-01-28 11:26:35 -05:00
6a44697911 translationBot(ui): update translation (Turkish)
Currently translated at 10.5% (151 of 1426 strings)

translationBot(ui): update translation (Turkish)

Currently translated at 8.1% (116 of 1426 strings)

translationBot(ui): update translation (Turkish)

Currently translated at 6.6% (95 of 1426 strings)

Co-authored-by: Ufuk Sarp Selçok <ilkel@live.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2024-01-28 22:27:25 +11:00
7a1d0ec228 translationBot(ui): update translation files
Updated by "Remove blank strings" hook in Weblate.

Co-authored-by: Hosted Weblate <hosted@weblate.org>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/
Translation: InvokeAI/Web UI
2024-01-28 22:27:25 +11:00
b5928fd411 translationBot(ui): update translation (Italian)
Currently translated at 97.2% (1387 of 1426 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2024-01-28 22:27:25 +11:00
2f345d1976 chore(ui): lint 2024-01-28 19:57:53 +11:00
f5d0721fa8 chore(ui): bump @invoke-ai/eslint-config-react 2024-01-28 19:57:53 +11:00
c3b36cb61d chore(ui): remove chakra CLI
It doesn't work now that the theme is external. I'm not sure how to fix it and not sure if it really did much (I don't think I ever got autocomplete...). Maybe it can be implemented in `@invoke-ai/ui-library`.
2024-01-28 19:57:53 +11:00
189c430e46 chore(ui): format
Lots of changes because the line length is now 120. May as well do it now.
2024-01-28 19:57:53 +11:00
b922ee566a chore(ui): use new prettier config 2024-01-28 19:57:53 +11:00
89da69f647 fix(ui): correct import in ReduxInit 2024-01-28 19:57:53 +11:00
138caa34de chore(ui): lint 2024-01-28 19:57:53 +11:00
26c3378ede chore(ui): use new eslint config, add some overrides 2024-01-28 19:57:53 +11:00
aa134a2db8 chore(ui): remove postinstall script 2024-01-28 19:57:53 +11:00
d0391cb430 chore(ui): bump @invoke-ai/ui-library, add @invoke-ai/eslint-config-react & @invoke-ai/prettier-config-react 2024-01-28 19:57:53 +11:00
c955ea9de0 Update CONTRIBUTORS.md 2024-01-27 17:04:32 -05:00
fc29a5d439 update contributors list to bring into sync with discord roles 2024-01-27 16:59:56 -05:00
7e9942dbab {fix} install docs house keeping (#5583)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [X] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [X] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [X] Yes
- [ ] No


## Description
- Update docs to make link to automated installer easier to find
- Fixed issue in SDXL + refiner example workflow 

## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings
Read over docs changes
<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan
Merge when approved
<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## [optional] Are there any post deployment tasks we need to perform?
Deploy new docs
2024-01-27 12:10:47 -05:00
c003967eaa Merge branch 'main' into feat/install_docs_update 2024-01-27 11:55:19 -05:00
b28fcc6be5 lint 2024-01-27 21:36:42 +11:00
418cdbabb7 add option for workflowCategories 2024-01-27 21:36:42 +11:00
18e61e92d9 {fix} install docs house keeping 2024-01-26 21:19:48 -06:00
de20711637 add nanostore for open API schema 2024-01-27 12:43:47 +11:00
55e91b97be dep 2024-01-27 12:43:47 +11:00
f79bbd2d6e account for baseUrl 2024-01-27 12:43:47 +11:00
e1c2c3905d Github action for ensuring PRs are labeled in a way that makes it easy to distinguish what's being changed (#5543)

## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [x] Optimization
- [ ] Documentation Update
- [ ] Community Node Submission


## Have you discussed this change with the InvokeAI team?
- [x] Yes
- [ ] No, because:

      
## Have you updated all relevant documentation?
- [x] Yes
- [ ] No


## Description


## Related Tickets & Documents

<!--
For pull requests that relate or close an issue, please include them
below. 

For example having the text: "closes #1234" would connect the current
pull
request to issue 1234.  And when we merge the pull request, Github will
automatically close the issue.
-->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- 
Please provide steps on how to test changes, any hardware or 
software specifications as well as any other pertinent information. 
-->

## Merge Plan

<!--
A merge plan describes how this PR should be handled after it is
approved.

Example merge plans:
- "This PR can be merged when approved"
- "This must be squash-merged when approved"
- "DO NOT MERGE - I will rebase and tidy commits before merging"
- "#dev-chat on discord needs to be advised of this change when it is
merged"

A merge plan is particularly important for large PRs or PRs that touch
the
database in any way.
-->

## Added/updated tests?

- [ ] Yes
- [x] No : _please replace this line with details on why tests
      have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2024-01-25 20:37:39 -05:00
03ac93bfc7 Merge branch 'main' into pr-labeler 2024-01-25 20:36:12 -05:00
89da976949 workflow library updates (#5568)
* don't show duplicate toasts if workflow actions fail due to auth

* dynamic order by options based on projectId

* add endpointName to auth toast to make it unique per endpoint

* lint

* update toast logic to check based on endpoint name w type safety

* fix save-as endpoint name

* lint

* fix type

---------

Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2024-01-25 11:43:47 -05:00
e160cbb1e9 Merge branch 'main' into pr-labeler 2024-01-24 15:44:35 -05:00
68232e642f Merge branch 'main' into pr-labeler 2024-01-23 09:20:43 -05:00
b94f6a4a29 Fix python label, add test label 2024-01-22 15:14:02 -05:00
4caf63d53d Added a few more labels 2024-01-22 15:08:11 -05:00
6057229ceb Github action for ensuring PRs are labeled in a way that makes it easy to distinguish what's being changed 2024-01-22 11:22:33 -05:00
779 changed files with 16145 additions and 15214 deletions


@ -6,10 +6,6 @@ title: '[bug]: '
labels: ['bug']
# assignees:
# - moderator_bot
# - lstein
body:
- type: markdown
attributes:
@ -18,10 +14,9 @@ body:
- type: checkboxes
attributes:
label: Is there an existing issue for this?
label: Is there an existing issue for this problem?
description: |
Please use the [search function](https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen+label%3Abug)
first to see if an issue already exists for the bug you encountered.
Please [search](https://github.com/invoke-ai/InvokeAI/issues) first to see if an issue already exists for the problem.
options:
- label: I have searched the existing issues
required: true
@ -33,80 +28,119 @@ body:
- type: dropdown
id: os_dropdown
attributes:
label: OS
description: Which operating System did you use when the bug occured
label: Operating system
description: Your computer's operating system.
multiple: false
options:
- 'Linux'
- 'Windows'
- 'macOS'
- 'other'
validations:
required: true
- type: dropdown
id: gpu_dropdown
attributes:
label: GPU
description: Which kind of Graphic-Adapter is your System using
label: GPU vendor
description: Your GPU's vendor.
multiple: false
options:
- 'cuda'
- 'amd'
- 'mps'
- 'cpu'
- 'Nvidia (CUDA)'
- 'AMD (ROCm)'
- 'Apple Silicon (MPS)'
- 'None (CPU)'
validations:
required: true
- type: input
id: gpu_model
attributes:
label: GPU model
description: Your GPU's model. If on Apple Silicon, this is your Mac's chip. Leave blank if on CPU.
placeholder: ex. RTX 2080 Ti, Mac M1 Pro
validations:
required: false
- type: input
id: vram
attributes:
label: VRAM
description: Size of the VRAM if known
label: GPU VRAM
description: Your GPU's VRAM. If on Apple Silicon, this is your Mac's unified memory. Leave blank if on CPU.
placeholder: 8GB
validations:
required: false
- type: input
id: version-number
attributes:
label: What version did you experience this issue on?
label: Version number
description: |
Please share the version of Invoke AI that you experienced the issue on. If this is not the latest version, please update first to confirm the issue still exists. If you are testing main, please include the commit hash instead.
placeholder: X.X.X
The version of Invoke you have installed. If it is not the latest version, please update and try again to confirm the issue still exists. If you are testing main, please include the commit hash instead.
placeholder: ex. 3.6.1
validations:
required: true
- type: input
id: browser-version
attributes:
label: Browser
description: Your web browser and version.
placeholder: ex. Firefox 123.0b3
validations:
required: true
- type: textarea
id: python-deps
attributes:
label: Python dependencies
description: |
If the problem occurred during image generation, click the gear icon at the bottom left corner, click "About", click the copy button and then paste here.
validations:
required: false
- type: textarea
id: what-happened
attributes:
label: What happened?
label: What happened
description: |
Briefly describe what happened, what you expected to happen and how to reproduce this bug.
placeholder: When using the webinterface and right-clicking on button X instead of the popup-menu there error Y appears
Describe what happened. Include any relevant error messages, stack traces and screenshots here.
placeholder: I clicked button X and then Y happened.
validations:
required: true
- type: textarea
id: what-you-expected
attributes:
label: Screenshots
description: If applicable, add screenshots to help explain your problem
placeholder: this is what the result looked like <screenshot>
label: What you expected to happen
description: Describe what you expected to happen.
placeholder: I expected Z to happen.
validations:
required: true
- type: textarea
id: how-to-repro
attributes:
label: How to reproduce the problem
description: List steps to reproduce the problem.
placeholder: Start the app, generate an image with these settings, then click button X.
validations:
required: false
- type: textarea
id: additional-context
attributes:
label: Additional context
description: Add any other context about the problem here
description: Any other context that might help us to understand the problem.
placeholder: Only happens when there is full moon and Friday the 13th on Christmas Eve 🎅🏻
validations:
required: false
- type: input
id: contact
id: discord-username
attributes:
label: Contact Details
description: __OPTIONAL__ How can we get in touch with you if we need more info (besides this issue)?
placeholder: ex. email@example.com, discordname, twitter, ...
label: Discord username
description: If you are on the Invoke discord and would prefer to be contacted there, please provide your username.
placeholder: supercoolusername123
validations:
required: false

.github/pr_labels.yml (new file, 59 lines)

@ -0,0 +1,59 @@
Root:
- changed-files:
- any-glob-to-any-file: '*'
PythonDeps:
- changed-files:
- any-glob-to-any-file: 'pyproject.toml'
Python:
- changed-files:
- all-globs-to-any-file:
- 'invokeai/**'
- '!invokeai/frontend/web/**'
PythonTests:
- changed-files:
- any-glob-to-any-file: 'tests/**'
CICD:
- changed-files:
- any-glob-to-any-file: .github/**
Docker:
- changed-files:
- any-glob-to-any-file: docker/**
Installer:
- changed-files:
- any-glob-to-any-file: installer/**
Documentation:
- changed-files:
- any-glob-to-any-file: docs/**
Invocations:
- changed-files:
- any-glob-to-any-file: 'invokeai/app/invocations/**'
Backend:
- changed-files:
- any-glob-to-any-file: 'invokeai/backend/**'
Api:
- changed-files:
- any-glob-to-any-file: 'invokeai/app/api/**'
Services:
- changed-files:
- any-glob-to-any-file: 'invokeai/app/services/**'
FrontendDeps:
- changed-files:
- any-glob-to-any-file:
- '**/*/package.json'
- '**/*/pnpm-lock.yaml'
Frontend:
- changed-files:
- any-glob-to-any-file: 'invokeai/frontend/web/**'

.github/workflows/label-pr.yml (new file, 16 lines)

@ -0,0 +1,16 @@
name: "Pull Request Labeler"
on:
- pull_request_target
jobs:
labeler:
permissions:
contents: read
pull-requests: write
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- uses: actions/labeler@v5
with:
configuration-path: .github/pr_labels.yml


@ -169,7 +169,7 @@ the command `npm install -g pnpm` if needed)
_For Linux with an AMD GPU:_
```sh
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.6
```
_For non-GPU systems:_


@ -18,8 +18,8 @@ ENV INVOKEAI_SRC=/opt/invokeai
ENV VIRTUAL_ENV=/opt/venv/invokeai
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
ARG TORCH_VERSION=2.1.0
ARG TORCHVISION_VERSION=0.16
ARG TORCH_VERSION=2.1.2
ARG TORCHVISION_VERSION=0.16.2
ARG GPU_DRIVER=cuda
ARG TARGETPLATFORM="linux/amd64"
# unused but available
@ -35,7 +35,7 @@ RUN --mount=type=cache,target=/root/.cache/pip \
if [ "$TARGETPLATFORM" = "linux/arm64" ] || [ "$GPU_DRIVER" = "cpu" ]; then \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cpu"; \
elif [ "$GPU_DRIVER" = "rocm" ]; then \
extra_index_url_arg="--index-url https://download.pytorch.org/whl/rocm5.6"; \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/rocm5.6"; \
else \
extra_index_url_arg="--extra-index-url https://download.pytorch.org/whl/cu121"; \
fi &&\
@ -54,7 +54,7 @@ RUN --mount=type=cache,target=/root/.cache/pip \
if [ "$GPU_DRIVER" = "cuda" ] && [ "$TARGETPLATFORM" = "linux/amd64" ]; then \
pip install -e ".[xformers]"; \
else \
pip install -e "."; \
pip install $extra_index_url_arg -e "."; \
fi
# #### Build the Web UI ------------------------------------


@ -28,7 +28,7 @@ This is done via Docker Desktop preferences
### Configure Invoke environment
1. Make a copy of `env.sample` and name it `.env` (`cp env.sample .env` (Mac/Linux) or `copy example.env .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
1. Make a copy of `.env.sample` and name it `.env` (`cp .env.sample .env` (Mac/Linux) or `copy example.env .env` (Windows)). Make changes as necessary. Set `INVOKEAI_ROOT` to an absolute path to:
a. the desired location of the InvokeAI runtime directory, or
b. an existing, v3.0.0 compatible runtime directory.
1. Execute `run.sh`


@ -21,7 +21,7 @@ run() {
printf "%s\n" "$build_args"
fi
docker compose build $build_args
docker compose build $build_args $service_name
unset build_args
printf "%s\n" "starting service $service_name"


@ -94,6 +94,8 @@ A model that helps generate creative QR codes that still scan. Can also be used
**Openpose**:
The OpenPose control model allows for the identification of the general pose of a character by pre-processing an existing image with a clear human structure. With advanced options, Openpose can also detect the face or hands in the image.
*Note:* The DWPose Processor has replaced the OpenPose processor in Invoke. Workflows and generations that relied on the OpenPose Processor will need to be updated to use the DWPose Processor instead.
**Mediapipe Face**:
The MediaPipe Face identification processor is able to clearly identify facial features in order to capture vivid expressions of human faces.

docs/img/favicon.ico (new binary file, 4.2 KiB)


@ -117,6 +117,11 @@ Mac and Linux machines, and runs on GPU cards with as little as 4 GB of RAM.
## :octicons-gift-24: InvokeAI Features
### Installation
- [Automated Installer](installation/010_INSTALL_AUTOMATED.md)
- [Manual Installation](installation/020_INSTALL_MANUAL.md)
- [Docker Installation](installation/040_INSTALL_DOCKER.md)
### The InvokeAI Web Interface
- [WebUI overview](features/WEB.md)
- [WebUI hotkey reference guide](features/WEBUIHOTKEYS.md)


@ -477,7 +477,7 @@ Then type the following commands:
=== "AMD System"
```bash
pip install torch torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
pip install torch torchvision --force-reinstall --extra-index-url https://download.pytorch.org/whl/rocm5.6
```
### Corrupted configuration file


@ -154,7 +154,7 @@ manager, please follow these steps:
=== "ROCm (AMD)"
```bash
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.6
```
=== "CPU (Intel Macs & non-GPU systems)"
@ -230,13 +230,13 @@ manager, please follow these steps:
=== "local Webserver"
```bash
invokeai --web
invokeai-web
```
=== "Public Webserver"
```bash
invokeai --web --host 0.0.0.0
invokeai-web --host 0.0.0.0
```
=== "CLI"
@ -313,7 +313,7 @@ code for InvokeAI. For this to work, you will need to install the
on your system, please see the [Git Installation
Guide](https://github.com/git-guides/install-git)
You will also need to install the [frontend development toolchain](https://github.com/invoke-ai/InvokeAI/blob/main/docs/contributing/contribution_guides/contributingToFrontend.md).
You will also need to install the [frontend development toolchain](https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/README.md).
If you have a "normal" installation, you should create a totally separate virtual environment for the git-based installation, else the two may interfere.
@ -345,7 +345,7 @@ installation protocol (important!)
=== "ROCm (AMD)"
```bash
pip install -e . --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
pip install -e . --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.6
```
=== "CPU (Intel Macs & non-GPU systems)"
@ -361,7 +361,7 @@ installation protocol (important!)
Be sure to pass `-e` (for an editable install) and don't forget the
dot ("."). It is part of the command.
5. Install the [frontend toolchain](https://github.com/invoke-ai/InvokeAI/blob/main/docs/contributing/contribution_guides/contributingToFrontend.md) and do a production build of the UI as described.
5. Install the [frontend toolchain](https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/frontend/web/README.md) and do a production build of the UI as described.
6. You can now run `invokeai` and its related commands. The code will be
read from the repository, so that you can edit the .py source files
@ -402,4 +402,4 @@ environment variable INVOKEAI_ROOT to point to the installation directory.
Note that if you run into problems with the Conda installation, the InvokeAI
staff will **not** be able to help you out. Caveat Emptor!
[dev-chat]: https://discord.com/channels/1020123559063990373/1049495067846524939
[dev-chat]: https://discord.com/channels/1020123559063990373/1049495067846524939


@ -134,7 +134,7 @@ recipes are available
When installing torch and torchvision manually with `pip`, remember to provide
the argument `--extra-index-url
https://download.pytorch.org/whl/rocm5.4.2` as described in the [Manual
https://download.pytorch.org/whl/rocm5.6` as described in the [Manual
Installation Guide](020_INSTALL_MANUAL.md).
This will be done automatically for you if you use the installer


@ -69,7 +69,7 @@ a token and copy it, since you will need it for the next step.
### Setup
Set up your environment variables. In the `docker` directory, make a copy of `env.sample` and name it `.env`. Make changes as necessary.
Set up your environment variables. In the `docker` directory, make a copy of `.env.sample` and name it `.env`. Make changes as necessary.
Any environment variables supported by InvokeAI can be set here - please see the [CONFIGURATION](../features/CONFIGURATION.md) for further detail.


@ -18,13 +18,18 @@ either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm
driver).
## **[Automated Installer](010_INSTALL_AUTOMATED.md)**
✅ This is the recommended installation method for first-time users.
## **[Automated Installer (Recommended)](010_INSTALL_AUTOMATED.md)**
✅ This is the recommended installation method for first-time users.
This is a script that will install all of InvokeAI's essential
third party libraries and InvokeAI itself. It includes access to a
"developer console" which will help us debug problems with you and
give you to access experimental features.
third party libraries and InvokeAI itself.
🖥️ **Download the latest installer .zip file here** : https://github.com/invoke-ai/InvokeAI/releases/latest
- *Look for the file labelled "InvokeAI-installer-v3.X.X.zip" at the bottom of the page*
- If you experience issues, read through the full [installation instructions](010_INSTALL_AUTOMATED.md) to make sure you have met all of the installation requirements. If you need more help, join the [Discord](https://discord.gg/invoke-ai) or create an issue on [Github](https://github.com/invoke-ai/InvokeAI).
## **[Manual Installation](020_INSTALL_MANUAL.md)**
This method is recommended for experienced users and developers.


@ -14,6 +14,7 @@ To use a community workflow, download the the `.json` node graph file and load i
- Community Nodes
+ [Adapters-Linked](#adapters-linked-nodes)
+ [Autostereogram](#autostereogram-nodes)
+ [Average Images](#average-images)
+ [Clean Image Artifacts After Cut](#clean-image-artifacts-after-cut)
+ [Close Color Mask](#close-color-mask)
@ -25,11 +26,13 @@ To use a community workflow, download the the `.json` node graph file and load i
+ [GPT2RandomPromptMaker](#gpt2randompromptmaker)
+ [Grid to Gif](#grid-to-gif)
+ [Halftone](#halftone)
+ [Hand Refiner with MeshGraphormer](#hand-refiner-with-meshgraphormer)
+ [Image and Mask Composition Pack](#image-and-mask-composition-pack)
+ [Image Dominant Color](#image-dominant-color)
+ [Image to Character Art Image Nodes](#image-to-character-art-image-nodes)
+ [Image Picker](#image-picker)
+ [Image Resize Plus](#image-resize-plus)
+ [Latent Upscale](#latent-upscale)
+ [Load Video Frame](#load-video-frame)
+ [Make 3D](#make-3d)
+ [Mask Operations](#mask-operations)
@ -40,6 +43,7 @@ To use a community workflow, download the the `.json` node graph file and load i
+ [Oobabooga](#oobabooga)
+ [Prompt Tools](#prompt-tools)
+ [Remote Image](#remote-image)
+ [BriaAI Background Remove](#briaai-remove-background)
+ [Remove Background](#remove-background)
+ [Retroize](#retroize)
+ [Size Stepper Nodes](#size-stepper-nodes)
@ -66,6 +70,17 @@ Note: These are inherited from the core nodes so any update to the core nodes sh
**Node Link:** https://github.com/skunkworxdark/adapters-linked-nodes
--------------------------------
### Autostereogram Nodes
**Description:** Generate autostereogram images from a depth map. This is not a very practically useful node but more a 90s nostalgic indulgence as I used to love these images as a kid.
**Node Link:** https://github.com/skunkworxdark/autostereogram_nodes
**Example Usage:**
</br>
<img src="https://github.com/skunkworxdark/autostereogram_nodes/blob/main/images/spider.png" width="200" /> -> <img src="https://github.com/skunkworxdark/autostereogram_nodes/blob/main/images/spider-depth.png" width="200" /> -> <img src="https://github.com/skunkworxdark/autostereogram_nodes/raw/main/images/spider-dots.png" width="200" /> <img src="https://github.com/skunkworxdark/autostereogram_nodes/raw/main/images/spider-pattern.png" width="200" />
--------------------------------
### Average Images
@ -196,6 +211,18 @@ CMYK Halftone Output:
<img src="https://github.com/invoke-ai/InvokeAI/assets/34005131/c59c578f-db8e-4d66-8c66-2851752d75ea" width="300" />
--------------------------------
### Hand Refiner with MeshGraphormer
**Description**: Hand Refiner takes in your image and automatically generates a fixed depth map for the hands, along with a mask of the hands region, which conveniently allows you to use them with ControlNet to fix the wonky hands generated by Stable Diffusion.
**Node Link:** https://github.com/blessedcoolant/invoke_meshgraphormer
**View**
<img src="https://raw.githubusercontent.com/blessedcoolant/invoke_meshgraphormer/main/assets/preview.jpg" />
--------------------------------
### Image and Mask Composition Pack
**Description:** This is a pack of nodes for composing masks and images, including a simple text mask creator and both image and latent offset nodes. The offsets wrap around, so these can be used in conjunction with the Seamless node to progressively generate centered on different parts of the seamless tiling.
@ -264,6 +291,13 @@ View:
</br><img src="https://raw.githubusercontent.com/VeyDlin/image-resize-plus-node/master/.readme/node.png" width="500" />
--------------------------------
### Latent Upscale
**Description:** This node uses a small (~2.4 MB) model to upscale the latents used in a Stable Diffusion 1.5 or Stable Diffusion XL image generation, rather than the typical interpolation method, avoiding the traditional downsides of the latent upscale technique.
**Node Link:** [https://github.com/gogurtenjoyer/latent-upscale](https://github.com/gogurtenjoyer/latent-upscale)
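As a rough illustration of the contrast the description draws (not this node's actual code), the "typical interpolation method" resizes the latent tensor directly, while this node instead runs it through a small learned model; `load_latent_upscaler` below is hypothetical:

```python
# Illustration only: naive latent interpolation vs. a learned upscaler.
import torch
import torch.nn.functional as F

latents = torch.randn(1, 4, 64, 64)  # SD 1.5 latent for a 512x512 image

# Typical interpolation method (the approach this node avoids):
naive = F.interpolate(latents, scale_factor=2, mode="bilinear", align_corners=False)

# Model-based upscale (sketch; the real node ships its own ~2.4 MB model):
# upscaler = load_latent_upscaler("latent-upscale.pt")  # hypothetical loader
# upscaled = upscaler(latents)  # -> shape (1, 4, 128, 128)
```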
--------------------------------
### Load Video Frame
@ -409,6 +443,17 @@ See full docs here: https://github.com/skunkworxdark/Prompt-tools-nodes/edit/mai
**Node Link:** https://github.com/fieldOfView/InvokeAI-remote_image
--------------------------------
### BriaAI Remove Background
**Description**: Implements one-click background removal with BriaAI's new version 1.4 model, which seems to be producing better results than any other previous background removal tool.
**Node Link:** https://github.com/blessedcoolant/invoke_bria_rmbg
**View**
<img src="https://raw.githubusercontent.com/blessedcoolant/invoke_bria_rmbg/main/assets/preview.jpg" />
--------------------------------
### Remove Background


@ -81,7 +81,7 @@ their descriptions.
| ONNX Text to Latents | Generates latents from conditionings. |
| ONNX Model Loader | Loads a main model, outputting its submodels. |
| OpenCV Inpaint | Simple inpaint using opencv. |
| Openpose Processor | Applies Openpose processing to image |
| DW Openpose Processor | Applies Openpose processing to image |
| PIDI Processor | Applies PIDI processing to image |
| Prompts from File | Loads prompts from a text file |
| Random Integer | Outputs a single random integer. |


@ -13,46 +13,69 @@ We thank them for all of their time and hard work.
- [Lincoln D. Stein](mailto:lincoln.stein@gmail.com)
## **Current core team**
## **Current Core Team**
* @lstein (Lincoln Stein) - Co-maintainer
* @blessedcoolant - Co-maintainer
* @hipsterusername (Kent Keirsey) - Co-maintainer, CEO, Positive Vibes
* @psychedelicious (Spencer Mabrito) - Web Team Leader
* @Kyle0654 (Kyle Schouviller) - Node Architect and General Backend Wizard
* @damian0815 - Attention Systems and Compel Maintainer
* @ebr (Eugene Brodsky) - Cloud/DevOps/Software engineer; your friendly neighbourhood cluster-autoscaler
* @genomancer (Gregg Helt) - Controlnet support
* @StAlKeR7779 (Sergey Borisov) - Torch stack, ONNX, model management, optimization
* @chainchompa (Jennifer Player) - Web Development & Chain-Chomping
* @josh is toast (Josh Corbett) - Web Development
* @cheerio (Mary Rogers) - Lead Engineer & Web App Development
* @ebr (Eugene Brodsky) - Cloud/DevOps/Software engineer; your friendly neighbourhood cluster-autoscaler
* @sunija - Standalone version
* @genomancer (Gregg Helt) - Controlnet support
* @brandon (Brandon Rising) - Platform, Infrastructure, Backend Systems
* @ryanjdick (Ryan Dick) - Machine Learning & Training
* @millu (Millun Atluri) - Community Manager, Documentation, Node-wrangler
* @chainchompa (Jennifer Player) - Web Development & Chain-Chomping
* @JPPhoto - Core image generation nodes
* @dunkeroni - Image generation backend
* @SkunkWorxDark - Image generation backend
* @keturn (Kevin Turner) - Diffusers
* @millu (Millun Atluri) - Community Wizard, Documentation, Node-wrangler
* @glimmerleaf (Devon Hopkins) - Community Wizard
* @gogurt enjoyer - Discord moderator and end user support
* @whosawhatsis - Discord moderator and end user support
* @dwinrger - Discord moderator and end user support
* @526christian - Discord moderator and end user support
* @harvester62 - Discord moderator and end user support
## **Honored Team Alumni**
* @StAlKeR7779 (Sergey Borisov) - Torch stack, ONNX, model management, optimization
* @damian0815 - Attention Systems and Compel Maintainer
* @netsvetaev (Artur) - Localization support
* @Kyle0654 (Kyle Schouviller) - Node Architect and General Backend Wizard
* @tildebyte - Installation and configuration
* @mauwii (Matthias Wilde) - Installation, release, continuous integration
## **Full List of Contributors by Commit Name**
- 이승석
- AbdBarho
- ablattmann
- AdamOStark
- Adam Rice
- Airton Silva
- Aldo Hoeben
- Alexander Eichhorn
- Alexandre D. Roberge
- Alexandre Macabies
- Alfie John
- Andreas Rozek
- Andre LaBranche
- Andy Bearman
- Andy Luhrs
- Andy Pilate
- Anonymous
- Anthony Monthe
- Any-Winter-4079
- apolinario
- Ar7ific1al
- ArDiouscuros
- Armando C. Santisbon
- Arnold Cordewiner
- Arthur Holstvoogd
- artmen1516
- Artur
@ -64,13 +87,16 @@ We thank them for all of their time and hard work.
- blhook
- BlueAmulet
- Bouncyknighter
- Brandon
- Brandon Rising
- Brent Ozar
- Brian Racer
- bsilvereagle
- c67e708d
- camenduru
- CapableWeb
- Carson Katri
- chainchompa
- Chloe
- Chris Dawson
- Chris Hayes
@ -86,30 +112,45 @@ We thank them for all of their time and hard work.
- cpacker
- Cragin Godley
- creachec
- CrypticWit
- d8ahazard
- damian
- damian0815
- Damian at mba
- Damian Stewart
- Daniel Manzke
- Danny Beer
- Dan Sully
- Darren Ringer
- David Burnett
- David Ford
- David Regla
- David Sisco
- David Wager
- Daya Adianto
- db3000
- DekitaRPG
- Denis Olshin
- Dennis
- dependabot[bot]
- Dmitry Parnas
- Dobrynia100
- Dominic Letz
- DrGunnarMallon
- Drun555
- dunkeroni
- Edward Johan
- elliotsayes
- Elrik
- ElrikUnderlake
- Eric Khun
- Eric Wolf
- Eugene
- Eugene Brodsky
- ExperimentalCyborg
- Fabian Bahl
- Fabio 'MrWHO' Torchetti
- Fattire
- fattire
- Felipe Nogueira
- Félix Sanz
@ -118,8 +159,12 @@ We thank them for all of their time and hard work.
- gabrielrotbart
- gallegonovato
- Gérald LONLAS
- Gille
- GitHub Actions Bot
- glibesyck
- gogurtenjoyer
- Gohsuke Shimada
- greatwolf
- greentext2
- Gregg Helt
- H4rk
@ -131,6 +176,7 @@ We thank them for all of their time and hard work.
- Hosted Weblate
- Iman Karim
- ismail ihsan bülbül
- ItzAttila
- Ivan Efimov
- jakehl
- Jakub Kolčář
@ -141,6 +187,7 @@ We thank them for all of their time and hard work.
- Jason Toffaletti
- Jaulustus
- Jeff Mahoney
- Jennifer Player
- jeremy
- Jeremy Clark
- JigenD
@ -148,19 +195,26 @@ We thank them for all of their time and hard work.
- Johan Roxendal
- Johnathon Selstad
- Jonathan
- Jordan Hewitt
- Joseph Dries III
- Josh Corbett
- JPPhoto
- jspraul
- junzi
- Justin Wong
- Juuso V
- Kaspar Emanuel
- Katsuyuki-Karasawa
- Keerigan45
- Kent Keirsey
- Kevin Brack
- Kevin Coakley
- Kevin Gibbons
- Kevin Schaul
- Kevin Turner
- Kieran Klaassen
- krummrey
- Kyle
- Kyle Lacy
- Kyle Schouviller
- Lawrence Norton
@ -171,10 +225,15 @@ We thank them for all of their time and hard work.
- Lynne Whitehorn
- majick
- Marco Labarile
- Marta Nahorniuk
- Martin Kristiansen
- Mary Hipp
- maryhipp
- Mary Hipp Rogers
- mastercaster
- mastercaster9000
- Matthias Wild
- mauwii
- michaelk71
- mickr777
- Mihai
@ -182,11 +241,15 @@ We thank them for all of their time and hard work.
- Mikhail Tishin
- Millun Atluri
- Minjune Song
- Mitchell Allain
- mitien
- mofuzz
- Muhammad Usama
- Name
- _nderscore
- Neil Wang
- nekowaiz
- nemuruibai
- Netzer R
- Nicholas Koh
- Nicholas Körfer
@ -197,9 +260,11 @@ We thank them for all of their time and hard work.
- ofirkris
- Olivier Louvignes
- owenvincent
- pand4z31
- Patrick Esser
- Patrick Tien
- Patrick von Platen
- Paul Curry
- Paul Sajna
- pejotr
- Peter Baylies
@ -207,6 +272,7 @@ We thank them for all of their time and hard work.
- plucked
- prixt
- psychedelicious
- psychedelicious@windows
- Rainer Bernhardt
- Riccardo Giovanetti
- Rich Jones
@ -215,17 +281,22 @@ We thank them for all of their time and hard work.
- Robert Bolender
- Robin Rombach
- Rohan Barar
- rohinish404
- Rohinish
- rpagliuca
- rromb
- Rupesh Sreeraman
- Ryan
- Ryan Cao
- Ryan Dick
- Saifeddine
- Saifeddine ALOUI
- Sam
- SammCheese
- Sam McLeod
- Sammy
- sammyf
- Samuel Husso
- Saurav Maheshkar
- Scott Lahteine
- Sean McLellan
- Sebastian Aigner
@ -233,16 +304,21 @@ We thank them for all of their time and hard work.
- Sergey Krashevich
- Shapor Naghibzadeh
- Shawn Zhong
- Simona Liliac
- Simon Vans-Colina
- skunkworxdark
- slashtechno
- SoheilRezaei
- Song, Pengcheng
- spezialspezial
- ssantos
- StAlKeR7779
- Stefan Tobler
- Stephan Koglin-Fischer
- SteveCaruso
- Steve Martinelli
- Steven Frank
- Surisen
- System X - Files
- Taylor Kems
- techicode
@ -261,26 +337,34 @@ We thank them for all of their time and hard work.
- tyler
- unknown
- user1
- vedant-3010
- Vedant Madane
- veprogames
- wa.code
- wfng92
- whjms
- whosawhatsis
- Will
- William Becher
- William Chong
- Wilson E. Alvarez
- woweenie
- Wubbbi
- xra
- Yeung Yiu Hung
- ymgenesis
- Yorzaren
- Yosuke Shinya
- yun saki
- ZachNagengast
- Zadagu
- zeptofine
- Zerdoumi
- Васянатор
- 冯不游
- 唐澤 克幸
## **Original CompVis Authors**
## **Original CompVis (Stable Diffusion) Authors**
- [Robin Rombach](https://github.com/rromb)
- [Patrick von Platen](https://github.com/patrickvonplaten)

File diff suppressed because it is too large.


@ -14,11 +14,19 @@ function is_bin_in_path {
}
function git_show {
git show -s --format='%h %s' $1
git show -s --format=oneline --abbrev-commit "$1" | cat
}
if [[ -v "VIRTUAL_ENV" ]]; then
# we can't just call 'deactivate' because this function is not exported
# to the environment of this script from the bash process that runs the script
echo -e "${BRED}A virtual environment is activated. Please deactivate it before proceeding.${RESET}"
exit -1
fi
cd "$(dirname "$0")"
echo
echo -e "${BYELLOW}This script must be run from the installer directory!${RESET}"
echo "The current working directory is $(pwd)"
read -p "If that looks right, press any key to proceed, or CTRL-C to exit..."
@ -32,13 +40,6 @@ if ! is_bin_in_path python && is_bin_in_path python3; then
}
fi
if [[ -v "VIRTUAL_ENV" ]]; then
# we can't just call 'deactivate' because this function is not exported
# to the environment of this script from the bash process that runs the script
echo -e "${BRED}A virtual environment is activated. Please deactivate it before proceeding.${RESET}"
exit -1
fi
VERSION=$(
cd ..
python -c "from invokeai.version import __version__ as version; print(version)"
@ -47,38 +48,9 @@ PATCH=""
VERSION="v${VERSION}${PATCH}"
echo -e "${BGREEN}HEAD${RESET}:"
git_show
git_show HEAD
echo
# ---------------------- FRONTEND ----------------------
pushd ../invokeai/frontend/web >/dev/null
echo
echo "Installing frontend dependencies..."
echo
pnpm i --frozen-lockfile
echo
echo "Building frontend..."
echo
pnpm build
popd
# ---------------------- BACKEND ----------------------
echo
echo "Building wheel..."
echo
# install the 'build' package in the user site packages, if needed
# could be improved by using a temporary venv, but it's tiny and harmless
if [[ $(python -c 'from importlib.util import find_spec; print(find_spec("build") is None)') == "True" ]]; then
pip install --user build
fi
rm -rf ../build
python -m build --wheel --outdir dist/ ../.
# ----------------------
echo
@ -97,16 +69,13 @@ done
mkdir InvokeAI-Installer/lib
cp lib/*.py InvokeAI-Installer/lib
# Move the wheel
mv dist/*.whl InvokeAI-Installer/lib/
# Install scripts
# Mac/Linux
cp install.sh.in InvokeAI-Installer/install.sh
chmod a+x InvokeAI-Installer/install.sh
# Windows
perl -p -e "s/^set INVOKEAI_VERSION=.*/set INVOKEAI_VERSION=$VERSION/" install.bat.in >InvokeAI-Installer/install.bat
cp install.bat.in InvokeAI-Installer/install.bat
cp WinLongPathsEnabled.reg InvokeAI-Installer/
# Zip everything up


@ -15,7 +15,6 @@ if "%1" == "use-cache" (
@rem Config
@rem The version in the next line is replaced by an up to date release number
@rem when create_installer.sh is run. Change the release number there.
set INVOKEAI_VERSION=latest
set INSTRUCTIONS=https://invoke-ai.github.io/InvokeAI/installation/INSTALL_AUTOMATED/
set TROUBLESHOOTING=https://invoke-ai.github.io/InvokeAI/installation/INSTALL_AUTOMATED/#troubleshooting
set PYTHON_URL=https://www.python.org/downloads/windows/


@ -11,7 +11,7 @@ import sys
import venv
from pathlib import Path
from tempfile import TemporaryDirectory
from typing import Union
from typing import Optional, Tuple
SUPPORTED_PYTHON = ">=3.10.0,<=3.11.100"
INSTALLER_REQS = ["rich", "semver", "requests", "plumbum", "prompt-toolkit"]
@ -21,40 +21,20 @@ OS = platform.uname().system
ARCH = platform.uname().machine
VERSION = "latest"
### Feature flags
# Install the virtualenv into the runtime dir
FF_VENV_IN_RUNTIME = True
# Install the wheel packaged with the installer
FF_USE_LOCAL_WHEEL = True
class Installer:
"""
Deploys an InvokeAI installation into a given path
"""
reqs: list[str] = INSTALLER_REQS
def __init__(self) -> None:
self.reqs = INSTALLER_REQS
self.preflight()
if os.getenv("VIRTUAL_ENV") is not None:
print("A virtual environment is already activated. Please 'deactivate' before installation.")
sys.exit(-1)
self.bootstrap()
def preflight(self) -> None:
"""
Preflight checks
"""
# TODO
# verify python version
# on macOS verify XCode tools are present
# verify libmesa, libglx on linux
# check that the system arch is not i386 (?)
# check that the system has a GPU, and the type of GPU
pass
self.available_releases = get_github_releases()
def mktemp_venv(self) -> TemporaryDirectory:
"""
@ -78,12 +58,9 @@ class Installer:
return venv_dir
def bootstrap(self, verbose: bool = False) -> TemporaryDirectory:
def bootstrap(self, verbose: bool = False) -> TemporaryDirectory | None:
"""
Bootstrap the installer venv with packages required at install time
:return: path to the virtual environment directory that was bootstrapped
:rtype: TemporaryDirectory
"""
print("Initializing the installer. This may take a minute - please wait...")
@ -95,39 +72,27 @@ class Installer:
cmd.extend(self.reqs)
try:
res = subprocess.check_output(cmd).decode()
# upgrade pip to the latest version to avoid a confusing message
res = upgrade_pip(Path(venv_dir.name))
if verbose:
print(res)
# run the install prerequisites installation
res = subprocess.check_output(cmd).decode()
if verbose:
print(res)
return venv_dir
except subprocess.CalledProcessError as e:
print(e)
def app_venv(self, path: str = None):
def app_venv(self, venv_parent) -> Path:
"""
Create a virtualenv for the InvokeAI installation
"""
# explicit venv location
# currently unused in normal operation
# useful for testing or special cases
if path is not None:
venv_dir = Path(path)
# experimental / testing
elif not FF_VENV_IN_RUNTIME:
if OS == "Windows":
venv_dir_parent = os.getenv("APPDATA", "~/AppData/Roaming")
elif OS == "Darwin":
# there is no environment variable on macOS to find this
# TODO: confirm this is working as expected
venv_dir_parent = "~/Library/Application Support"
elif OS == "Linux":
venv_dir_parent = os.getenv("XDG_DATA_DIR", "~/.local/share")
venv_dir = Path(venv_dir_parent).expanduser().resolve() / f"InvokeAI/{VERSION}/venv"
# stable / current
else:
venv_dir = self.dest / ".venv"
venv_dir = venv_parent / ".venv"
# Prefer to copy python executables
# so that updates to system python don't break InvokeAI
@ -141,7 +106,7 @@ class Installer:
return venv_dir
def install(
self, root: str = "~/invokeai", version: str = "latest", yes_to_all=False, find_links: Path = None
self, version=None, root: str = "~/invokeai", yes_to_all=False, find_links: Optional[Path] = None
) -> None:
"""
Install the InvokeAI application into the given runtime path
@ -158,15 +123,20 @@ class Installer:
import messages
messages.welcome()
messages.welcome(self.available_releases)
default_path = os.environ.get("INVOKEAI_ROOT") or Path(root).expanduser().resolve()
self.dest = default_path if yes_to_all else messages.dest_path(root)
version = messages.choose_version(self.available_releases)
auto_dest = Path(os.environ.get("INVOKEAI_ROOT", root)).expanduser().resolve()
destination = auto_dest if yes_to_all else messages.dest_path(root)
if destination is None:
print("Could not find or create the destination directory. Installation cancelled.")
sys.exit(0)
# create the venv for the app
self.venv = self.app_venv()
self.venv = self.app_venv(venv_parent=destination)
self.instance = InvokeAiInstance(runtime=self.dest, venv=self.venv, version=version)
self.instance = InvokeAiInstance(runtime=destination, venv=self.venv, version=version)
# install dependencies and the InvokeAI application
(extra_index_url, optional_modules) = get_torch_source() if not yes_to_all else (None, None)
@ -190,7 +160,7 @@ class InvokeAiInstance:
A single runtime directory *may* be shared by multiple virtual environments, though this isn't currently tested or supported.
"""
def __init__(self, runtime: Path, venv: Path, version: str) -> None:
def __init__(self, runtime: Path, venv: Path, version: str = "stable") -> None:
self.runtime = runtime
self.venv = venv
self.pip = get_pip_from_venv(venv)
@ -199,6 +169,7 @@ class InvokeAiInstance:
set_sys_path(venv)
os.environ["INVOKEAI_ROOT"] = str(self.runtime.expanduser().resolve())
os.environ["VIRTUAL_ENV"] = str(self.venv.expanduser().resolve())
upgrade_pip(venv)
def get(self) -> tuple[Path, Path]:
"""
@ -212,54 +183,7 @@ class InvokeAiInstance:
def install(self, extra_index_url=None, optional_modules=None, find_links=None):
"""
Install this instance, including dependencies and the app itself
:param extra_index_url: the "--extra-index-url ..." line for pip to look in extra indexes.
:type extra_index_url: str
"""
import messages
# install torch first to ensure the correct version gets installed.
# works with either source or wheel install with negligible impact on installation times.
messages.simple_banner("Installing PyTorch :fire:")
self.install_torch(extra_index_url, find_links)
messages.simple_banner("Installing the InvokeAI Application :art:")
self.install_app(extra_index_url, optional_modules, find_links)
def install_torch(self, extra_index_url=None, find_links=None):
"""
Install PyTorch
"""
from plumbum import FG, local
pip = local[self.pip]
(
pip[
"install",
"--require-virtualenv",
"numpy==1.26.3", # choose versions that won't be uninstalled during phase 2
"urllib3~=1.26.0",
"requests~=2.28.0",
"torch==2.1.2",
"torchmetrics==0.11.4",
"torchvision==0.16.2",
"--force-reinstall",
"--find-links" if find_links is not None else None,
find_links,
"--extra-index-url" if extra_index_url is not None else None,
extra_index_url,
]
& FG
)
def install_app(self, extra_index_url=None, optional_modules=None, find_links=None):
"""
Install the application with pip.
Supports installation from PyPi or from a local source directory.
Install the package from PyPi.
:param extra_index_url: the "--extra-index-url ..." line for pip to look in extra indexes.
:type extra_index_url: str
@ -271,53 +195,52 @@ class InvokeAiInstance:
:type find_links: Path
"""
## this only applies to pypi installs; TODO actually use this
if self.version == "pre":
import messages
# not currently used, but may be useful for "install most recent version" option
if self.version == "prerelease":
version = None
pre = "--pre"
pre_flag = "--pre"
elif self.version == "stable":
version = None
pre_flag = None
else:
version = self.version
pre = None
pre_flag = None
## TODO: only local wheel will be installed as of now; support for --version arg is TODO
if FF_USE_LOCAL_WHEEL:
# if no wheel, try to do a source install before giving up
try:
src = str(next(Path(__file__).parent.glob("InvokeAI-*.whl")))
except StopIteration:
try:
src = Path(__file__).parents[1].expanduser().resolve()
# if the above directory contains one of these files, we'll do a source install
next(src.glob("pyproject.toml"))
next(src.glob("invokeai"))
except StopIteration:
print("Unable to find a wheel or perform a source install. Giving up.")
src = "invokeai"
if optional_modules:
src += optional_modules
if version:
src += f"=={version}"
elif version == "source":
# this makes an assumption about the location of the installer package in the source tree
src = Path(__file__).parents[1].expanduser().resolve()
else:
# will install from PyPi
src = f"invokeai=={version}" if version is not None else "invokeai"
messages.simple_banner("Installing the InvokeAI Application :art:")
from plumbum import FG, local
from plumbum import FG, ProcessExecutionError, local # type: ignore
pip = local[self.pip]
(
pip[
"install",
"--require-virtualenv",
"--use-pep517",
str(src) + (optional_modules if optional_modules else ""),
"--find-links" if find_links is not None else None,
find_links,
"--extra-index-url" if extra_index_url is not None else None,
extra_index_url,
pre,
]
& FG
)
pipeline = pip[
"install",
"--require-virtualenv",
"--force-reinstall",
"--use-pep517",
str(src),
"--find-links" if find_links is not None else None,
find_links,
"--extra-index-url" if extra_index_url is not None else None,
extra_index_url,
pre_flag,
]
try:
_ = pipeline & FG
except ProcessExecutionError as e:
print(f"Error: {e}")
print(
"Could not install InvokeAI. Please try downloading the latest version of the installer and install again."
)
sys.exit(1)
def configure(self):
"""
@ -373,7 +296,6 @@ class InvokeAiInstance:
ext = "bat" if OS == "Windows" else "sh"
# scripts = ['invoke', 'update']
scripts = ["invoke"]
for script in scripts:
@ -408,6 +330,23 @@ def get_pip_from_venv(venv_path: Path) -> str:
return str(venv_path.expanduser().resolve() / pip)
def upgrade_pip(venv_path: Path) -> str | None:
"""
Upgrade the pip executable in the given virtual environment
"""
python = "Scripts\\python.exe" if OS == "Windows" else "bin/python"
python = str(venv_path.expanduser().resolve() / python)
try:
result = subprocess.check_output([python, "-m", "pip", "install", "--upgrade", "pip"]).decode()
except subprocess.CalledProcessError as e:
print(e)
result = None
return result
def set_sys_path(venv_path: Path) -> None:
"""
Given a path to a virtual environment, set the sys.path, in a cross-platform fashion,
@ -431,7 +370,43 @@ def set_sys_path(venv_path: Path) -> None:
sys.path.append(str(Path(venv_path, lib, "site-packages").expanduser().resolve()))
def get_torch_source() -> (Union[str, None], str):
def get_github_releases() -> tuple[list, list] | None:
"""
Query Github for published (pre-)release versions.
Return a tuple where the first element is a list of stable releases and the second element is a list of pre-releases.
Return None if the query fails for any reason.
"""
import requests
## get latest releases using github api
url = "https://api.github.com/repos/invoke-ai/InvokeAI/releases"
releases, pre_releases = [], []
try:
res = requests.get(url)
res.raise_for_status()
tag_info = res.json()
for tag in tag_info:
if not tag["prerelease"]:
releases.append(tag["tag_name"].lstrip("v"))
else:
pre_releases.append(tag["tag_name"].lstrip("v"))
except requests.HTTPError as e:
print(f"Error: {e}")
print("Could not fetch version information from GitHub. Please check your network connection and try again.")
return
except Exception as e:
print(f"Error: {e}")
print("An unexpected error occurred while trying to fetch version information from GitHub. Please try again.")
return
releases.sort(reverse=True)
pre_releases.sort(reverse=True)
return releases, pre_releases
def get_torch_source() -> Tuple[str | None, str | None]:
"""
Determine the extra index URL for pip to use for torch installation.
This depends on the OS and the graphics accelerator in use.
@ -446,25 +421,26 @@ def get_torch_source() -> (Union[str, None], str):
:rtype: list
"""
from messages import graphical_accelerator
from messages import select_gpu
# device can be one of: "cuda", "rocm", "cpu", "idk"
device = graphical_accelerator()
# device can be one of: "cuda", "rocm", "cpu", "cuda_and_dml, autodetect"
device = select_gpu()
url = None
optional_modules = "[onnx]"
if OS == "Linux":
if device == "rocm":
url = "https://download.pytorch.org/whl/rocm5.4.2"
elif device == "cpu":
if device.value == "rocm":
url = "https://download.pytorch.org/whl/rocm5.6"
elif device.value == "cpu":
url = "https://download.pytorch.org/whl/cpu"
if device == "cuda":
url = "https://download.pytorch.org/whl/cu121"
optional_modules = "[xformers,onnx-cuda]"
if device == "cuda_and_dml":
url = "https://download.pytorch.org/whl/cu121"
optional_modules = "[xformers,onnx-directml]"
elif OS == "Windows":
if device.value == "cuda":
url = "https://download.pytorch.org/whl/cu121"
optional_modules = "[xformers,onnx-cuda]"
if device.value == "cuda_and_dml":
url = "https://download.pytorch.org/whl/cu121"
optional_modules = "[xformers,onnx-directml]"
# in all other cases, Torch wheels should be coming from PyPi as of Torch 1.13


@ -5,10 +5,11 @@ Installer user interaction
import os
import platform
from enum import Enum
from pathlib import Path
from prompt_toolkit import HTML, prompt
from prompt_toolkit.completion import PathCompleter
from prompt_toolkit.completion import FuzzyWordCompleter, PathCompleter
from prompt_toolkit.validation import Validator
from rich import box, print
from rich.console import Console, Group, group
@ -35,16 +36,26 @@ else:
console = Console(style=Style(color="grey74", bgcolor="grey19"))
def welcome():
def welcome(available_releases: tuple | None = None) -> None:
@group()
def text():
if (platform_specific := _platform_specific_help()) != "":
if (platform_specific := _platform_specific_help()) is not None:
yield platform_specific
yield ""
yield Text.from_markup(
"Some of the installation steps take a long time to run. Please be patient. If the script appears to hang for more than 10 minutes, please interrupt with [i]Control-C[/] and retry.",
justify="center",
)
if available_releases is not None:
latest_stable = available_releases[0][0]
last_pre = available_releases[1][0]
yield ""
yield Text.from_markup(
f"[red3]🠶[/] Latest stable release (recommended): [b bright_white]{latest_stable}", justify="center"
)
yield Text.from_markup(
f"[red3]🠶[/] Last published pre-release version: [b bright_white]{last_pre}", justify="center"
)
console.rule()
print(
@ -61,19 +72,30 @@ def welcome():
console.line()
def confirm_install(dest: Path) -> bool:
if dest.exists():
print(f":exclamation: Directory {dest} already exists :exclamation:")
dest_confirmed = Confirm.ask(
":stop_sign: (re)install in this location?",
default=False,
)
else:
print(f"InvokeAI will be installed in {dest}")
dest_confirmed = Confirm.ask("Use this location?", default=True)
def choose_version(available_releases: tuple | None = None) -> str:
"""
Prompt the user to choose an Invoke version to install
"""
# short circuit if we couldn't get a version list
# still try to install the latest stable version
if available_releases is None:
return "stable"
console.print(":grey_question: [orange3]Please choose an Invoke version to install.")
choices = available_releases[0] + available_releases[1]
response = prompt(
message=f" <Enter> to install the recommended release ({choices[0]}). <Tab> or type to pick a version: ",
complete_while_typing=True,
completer=FuzzyWordCompleter(choices),
)
console.print(f" Version {choices[0] if response == '' else response} will be installed.")
console.line()
return dest_confirmed
return "stable" if response == "" else response
def user_wants_auto_configuration() -> bool:
@ -109,7 +131,23 @@ def user_wants_auto_configuration() -> bool:
return choice.lower().startswith("a")
def dest_path(dest=None) -> Path:
def confirm_install(dest: Path) -> bool:
if dest.exists():
print(f":stop_sign: Directory {dest} already exists!")
print(" Is this location correct?")
default = False
else:
print(f":file_folder: InvokeAI will be installed in {dest}")
default = True
dest_confirmed = Confirm.ask(" Please confirm:", default=default)
console.line()
return dest_confirmed
def dest_path(dest=None) -> Path | None:
"""
Prompt the user for the destination path and create the path
@ -124,25 +162,21 @@ def dest_path(dest=None) -> Path:
else:
dest = Path.cwd().expanduser().resolve()
prev_dest = init_path = dest
dest_confirmed = confirm_install(dest)
dest_confirmed = False
while not dest_confirmed:
# if the given destination already exists, the starting point for browsing is its parent directory.
# the user may have made a typo, or otherwise wants to place the root dir next to an existing one.
# if the destination dir does NOT exist, then the user must have changed their mind about the selection.
# since we can't read their mind, start browsing at Path.cwd().
browse_start = (prev_dest.parent if prev_dest.exists() else Path.cwd()).expanduser().resolve()
browse_start = (dest or Path.cwd()).expanduser().resolve()
path_completer = PathCompleter(
only_directories=True,
expanduser=True,
get_paths=lambda: [browse_start], # noqa: B023
get_paths=lambda: [str(browse_start)], # noqa: B023
# get_paths=lambda: [".."].extend(list(browse_start.iterdir()))
)
console.line()
console.print(f"[orange3]Please select the destination directory for the installation:[/] \\[{browse_start}]: ")
console.print(f":grey_question: [orange3]Please select the install destination:[/] \\[{browse_start}]: ")
selected = prompt(
">>> ",
complete_in_thread=True,
@ -155,6 +189,7 @@ def dest_path(dest=None) -> Path:
)
prev_dest = dest
dest = Path(selected)
console.line()
dest_confirmed = confirm_install(dest.expanduser().resolve())
@ -182,41 +217,45 @@ def dest_path(dest=None) -> Path:
console.rule("Goodbye!")
def graphical_accelerator():
class GpuType(Enum):
CUDA = "cuda"
CUDA_AND_DML = "cuda_and_dml"
ROCM = "rocm"
CPU = "cpu"
AUTODETECT = "autodetect"
def select_gpu() -> GpuType:
"""
Prompt the user to select the graphical accelerator in their system
This does not validate user's choices (yet), but only offers choices
valid for the platform.
CUDA is the fallback.
We may be able to detect the GPU driver by shelling out to `modprobe` or `lspci`,
but this is not yet supported or reliable. Also, some users may have exotic preferences.
Prompt the user to select the GPU driver
"""
if ARCH == "arm64" and OS != "Darwin":
print(f"Only CPU acceleration is available on {ARCH} architecture. Proceeding with that.")
return "cpu"
return GpuType.CPU
nvidia = (
"an [gold1 b]NVIDIA[/] GPU (using CUDA™)",
"cuda",
GpuType.CUDA,
)
nvidia_with_dml = (
"an [gold1 b]NVIDIA[/] GPU (using CUDA™, and DirectML™ for ONNX) -- ALPHA",
"cuda_and_dml",
GpuType.CUDA_AND_DML,
)
amd = (
"an [gold1 b]AMD[/] GPU (using ROCm™)",
"rocm",
GpuType.ROCM,
)
cpu = (
"no compatible GPU, or specifically prefer to use the CPU",
"cpu",
"Do not install any GPU support, use CPU for generation (slow)",
GpuType.CPU,
)
idk = (
autodetect = (
"I'm not sure what to choose",
"idk",
GpuType.AUTODETECT,
)
options = []
if OS == "Windows":
options = [nvidia, nvidia_with_dml, cpu]
if OS == "Linux":
@ -230,7 +269,7 @@ def graphical_accelerator():
return options[0][1]
# "I don't know" is always added the last option
options.append(idk)
options.append(autodetect) # type: ignore
options = {str(i): opt for i, opt in enumerate(options, 1)}
@ -265,9 +304,9 @@ def graphical_accelerator():
),
)
if options[choice][1] == "idk":
if options[choice][1] is GpuType.AUTODETECT:
console.print(
"No problem. We will try to install a version that [i]should[/i] be compatible. :crossed_fingers:"
"No problem. We will install CUDA support first :crossed_fingers: If Invoke does not detect a GPU, please re-run the installer and select one of the other GPU types."
)
return options[choice][1]
@ -291,7 +330,7 @@ def windows_long_paths_registry() -> None:
"""
with open(str(Path(__file__).parent / "WinLongPathsEnabled.reg"), "r", encoding="utf-16le") as code:
syntax = Syntax(code.read(), line_numbers=True)
syntax = Syntax(code.read(), line_numbers=True, lexer="regedit")
console.print(
Panel(
@ -301,7 +340,7 @@ def windows_long_paths_registry() -> None:
"We will now apply a registry fix to enable long paths on Windows. InvokeAI needs this to function correctly. We are asking your permission to modify the Windows Registry on your behalf.",
"",
"This is the change that will be applied:",
syntax,
str(syntax),
]
)
),
@ -340,7 +379,7 @@ def introduction() -> None:
console.line(2)
def _platform_specific_help() -> str:
def _platform_specific_help() -> Text | None:
if OS == "Darwin":
text = Text.from_markup(
"""[b wheat1]macOS Users![/]\n\nPlease be sure you have the [b wheat1]Xcode command-line tools[/] installed before continuing.\nIf not, cancel with [i]Control-C[/] and follow the Xcode install instructions at [deep_sky_blue1]https://www.freecodecamp.org/news/install-xcode-command-line-tools/[/]."""
@ -354,5 +393,5 @@ def _platform_specific_help() -> str:
[deep_sky_blue1]https://learn.microsoft.com/en-US/cpp/windows/latest-supported-vc-redist?view=msvc-170[/]"""
)
else:
text = ""
return
return text

View File

@ -15,7 +15,7 @@ echo 4. Download and install models
echo 5. Change InvokeAI startup options
echo 6. Re-run the configure script to fix a broken install or to complete a major upgrade
echo 7. Open the developer console
echo 8. Update InvokeAI
echo 8. Update InvokeAI (DEPRECATED - please use the installer)
echo 9. Run the InvokeAI image database maintenance script
echo 10. Command-line help
echo Q - Quit
@ -52,8 +52,10 @@ IF /I "%choice%" == "1" (
echo *** Type `exit` to quit this shell and deactivate the Python virtual environment ***
call cmd /k
) ELSE IF /I "%choice%" == "8" (
echo Running invokeai-update...
python -m invokeai.frontend.install.invokeai_update
echo UPDATING FROM WITHIN THE APP IS BEING DEPRECATED.
echo Please download the installer from https://github.com/invoke-ai/InvokeAI/releases/latest and run it to update your installation.
timeout 4
python -m invokeai.frontend.install.invokeai_update
) ELSE IF /I "%choice%" == "9" (
echo Running the db maintenance script...
python .venv\Scripts\invokeai-db-maintenance.exe
@ -77,4 +79,3 @@ pause
:ending
exit /b

View File

@ -90,7 +90,9 @@ do_choice() {
;;
8)
clear
printf "Update InvokeAI\n"
printf "UPDATING FROM WITHIN THE APP IS BEING DEPRECATED\n"
printf "Please download the installer from https://github.com/invoke-ai/InvokeAI/releases/latest and run it to update your installation.\n"
sleep 4
python -m invokeai.frontend.install.invokeai_update
;;
9)
@ -122,7 +124,7 @@ do_dialog() {
5 "Change InvokeAI startup options"
6 "Re-run the configure script to fix a broken install or to complete a major upgrade"
7 "Open the developer console"
8 "Update InvokeAI"
8 "Update InvokeAI (DEPRECATED - please use the installer)"
9 "Run the InvokeAI image database maintenance script"
10 "Command-line help"
)

View File

@ -1,72 +0,0 @@
@echo off
setlocal EnableExtensions EnableDelayedExpansion
PUSHD "%~dp0"
set INVOKE_AI_VERSION=latest
set arg=%1
if "%arg%" neq "" (
if "%arg:~0,2%" equ "/?" (
echo Usage: update.bat ^<release name or branch^>
echo Updates InvokeAI to use the indicated version of the code base.
echo Find the version or branch for the release you want, and pass it as the argument.
echo For example '.\update.bat v2.2.5' for release 2.2.5.
echo '.\update.bat main' for the latest development version
echo.
echo If no argument provided then will install the most recent release, equivalent to
echo '.\update.bat latest'
exit /b
) else (
set INVOKE_AI_VERSION=%arg%
)
)
set INVOKE_AI_SRC="https://github.com/invoke-ai/InvokeAI/archive/!INVOKE_AI_VERSION!.zip"
set INVOKE_AI_DEP=https://raw.githubusercontent.com/invoke-ai/InvokeAI/!INVOKE_AI_VERSION!/environments-and-requirements/requirements-base.txt
set INVOKE_AI_MODELS=https://raw.githubusercontent.com/invoke-ai/InvokeAI/!INVOKE_AI_VERSION!/configs/INITIAL_MODELS.yaml
call curl -I "%INVOKE_AI_DEP%" -fs >.tmp.out
if %errorlevel% neq 0 (
echo '!INVOKE_AI_VERSION!' is not a known branch name or tag. Please check the version and try again.
echo "Press any key to continue"
pause
exit /b
)
del .tmp.out
echo This script will update InvokeAI and all its dependencies to !INVOKE_AI_SRC!.
echo If you do not want to do this, press control-C now!
pause
call curl -L "%INVOKE_AI_DEP%" > environments-and-requirements/requirements-base.txt
call curl -L "%INVOKE_AI_MODELS%" > configs/INITIAL_MODELS.yaml
call .venv\Scripts\activate.bat
call .venv\Scripts\python -mpip install -r requirements.txt
if %errorlevel% neq 0 (
echo Installation of requirements failed. See https://invoke-ai.github.io/InvokeAI/installation/INSTALL_AUTOMATED/#troubleshooting for suggestions.
pause
exit /b
)
call .venv\Scripts\python -mpip install !INVOKE_AI_SRC!
if %errorlevel% neq 0 (
echo Installation of InvokeAI failed. See https://invoke-ai.github.io/InvokeAI/installation/INSTALL_AUTOMATED/#troubleshooting for suggestions.
pause
exit /b
)
@rem call .venv\Scripts\invokeai-configure --root=.
@rem if %errorlevel% neq 0 (
@rem echo Configuration InvokeAI failed. See https://invoke-ai.github.io/InvokeAI/installation/INSTALL_AUTOMATED/#troubleshooting for suggestions.
@rem pause
@rem exit /b
@rem )
echo InvokeAI has been updated to '%INVOKE_AI_VERSION%'
echo "Press any key to continue"
pause
endlocal

View File

@ -1,58 +0,0 @@
#!/usr/bin/env bash
set -eu
if [ $# -ge 1 ] && [ "${1:0:2}" == "-h" ]; then
echo "Usage: update.sh <release>"
echo "Updates InvokeAI to use the indicated version of the code base."
echo "Find the version or branch for the release you want, and pass it as the argument."
echo "For example: update.sh v2.2.5 for release 2.2.5."
echo " update.sh main for the current development version."
echo ""
echo "If no argument provided then will install the version tagged with 'latest', equivalent to"
echo "update.sh latest"
exit -1
fi
INVOKE_AI_VERSION=${1:-latest}
INVOKE_AI_SRC="https://github.com/invoke-ai/InvokeAI/archive/$INVOKE_AI_VERSION.zip"
INVOKE_AI_DEP=https://raw.githubusercontent.com/invoke-ai/InvokeAI/$INVOKE_AI_VERSION/environments-and-requirements/requirements-base.txt
INVOKE_AI_MODELS=https://raw.githubusercontent.com/invoke-ai/InvokeAI/$INVOKE_AI_VERSION/configs/INITIAL_MODELS.yaml
# ensure we're in the correct folder in case user's CWD is somewhere else
scriptdir=$(dirname "$0")
cd "$scriptdir"
function _err_exit {
if test "$1" -ne 0
then
echo "Something went wrong while installing InvokeAI and/or its requirements."
echo "Update cannot continue. Please report this error to https://github.com/invoke-ai/InvokeAI/issues"
echo -e "Error code $1; Error caught was '$2'"
read -p "Press any key to exit..."
exit
fi
}
if ! curl -I "$INVOKE_AI_DEP" -fs >/dev/null; then
echo \'$INVOKE_AI_VERSION\' is not a known branch name or tag. Please check the version and try again.
exit
fi
echo This script will update InvokeAI and all its dependencies to version \'$INVOKE_AI_VERSION\'.
echo If you do not want to do this, press control-C now!
read -p "Press any key to continue, or CTRL-C to exit..."
curl -L "$INVOKE_AI_DEP" > environments-and-requirements/requirements-base.txt
curl -L "$INVOKE_AI_MODELS" > configs/INITIAL_MODELS.yaml
. .venv/bin/activate
./.venv/bin/python -mpip install -r requirements.txt
_err_exit $? "The pip program failed to install InvokeAI's requirements."
./.venv/bin/python -mpip install $INVOKE_AI_SRC
_err_exit $? "The pip program failed to install InvokeAI."
echo InvokeAI updated to \'$INVOKE_AI_VERSION\'

View File

@ -2,6 +2,7 @@
from logging import Logger
from invokeai.app.services.item_storage.item_storage_memory import ItemStorageMemory
from invokeai.app.services.shared.sqlite.sqlite_util import init_db
from invokeai.backend.model_manager.metadata import ModelMetadataStore
from invokeai.backend.util.logging import InvokeAILogger
@ -22,7 +23,6 @@ from ..services.invocation_queue.invocation_queue_memory import MemoryInvocation
from ..services.invocation_services import InvocationServices
from ..services.invocation_stats.invocation_stats_default import InvocationStatsService
from ..services.invoker import Invoker
from ..services.item_storage.item_storage_sqlite import SqliteItemStorage
from ..services.latents_storage.latents_storage_disk import DiskLatentsStorage
from ..services.latents_storage.latents_storage_forward_cache import ForwardCacheLatentsStorage
from ..services.model_install import ModelInstallService
@ -80,7 +80,7 @@ class ApiDependencies:
board_records = SqliteBoardRecordStorage(db=db)
boards = BoardService()
events = FastAPIEventService(event_handler_id)
graph_execution_manager = SqliteItemStorage[GraphExecutionState](db=db, table_name="graph_executions")
graph_execution_manager = ItemStorageMemory[GraphExecutionState]()
image_records = SqliteImageRecordStorage(db=db)
images = ImageService()
invocation_cache = MemoryInvocationCache(max_cache_size=config.node_cache_size)

View File

@ -1,7 +1,7 @@
# Copyright (c) 2023 Lincoln D. Stein
"""FastAPI route for model configuration records."""
import pathlib
from hashlib import sha1
from random import randbytes
from typing import Any, Dict, List, Optional, Set
@ -27,6 +27,7 @@ from invokeai.backend.model_manager.config import (
ModelFormat,
ModelType,
)
from invokeai.backend.model_manager.merge import MergeInterpolationMethod, ModelMerger
from invokeai.backend.model_manager.metadata import AnyModelRepoMetadata
from ..dependencies import ApiDependencies
@ -415,3 +416,57 @@ async def sync_models_to_config() -> Response:
"""
ApiDependencies.invoker.services.model_install.sync_to_config()
return Response(status_code=204)
@model_records_router.put(
"/merge",
operation_id="merge",
)
async def merge(
keys: List[str] = Body(description="Keys for two to three models to merge", min_length=2, max_length=3),
merged_model_name: Optional[str] = Body(description="Name of destination model", default=None),
alpha: float = Body(description="Alpha weighting strength to apply to 2d and 3d models", default=0.5),
force: bool = Body(
description="Force merging of models created with different versions of diffusers",
default=False,
),
interp: Optional[MergeInterpolationMethod] = Body(description="Interpolation method", default=None),
merge_dest_directory: Optional[str] = Body(
description="Save the merged model to the designated directory (with 'merged_model_name' appended)",
default=None,
),
) -> AnyModelConfig:
"""
Merge diffusers models.
keys: List of 2-3 model keys to merge together. All models must use the same base type.
merged_model_name: Name for the merged model [Concat model names]
alpha: Alpha value (0.0-1.0). Higher values give more weight to the second model [0.5]
force: If true, force the merge even if the models were generated by different versions of the diffusers library [False]
interp: Interpolation method. One of "weighted_sum", "sigmoid", "inv_sigmoid" or "add_difference" [weighted_sum]
merge_dest_directory: Specify a directory to store the merged model in [models directory]
"""
print(f"here i am, keys={keys}")
logger = ApiDependencies.invoker.services.logger
try:
logger.info(f"Merging models: {keys} into {merge_dest_directory or '<MODELS>'}/{merged_model_name}")
dest = pathlib.Path(merge_dest_directory) if merge_dest_directory else None
installer = ApiDependencies.invoker.services.model_install
merger = ModelMerger(installer)
model_names = [installer.record_store.get_model(x).name for x in keys]
response = merger.merge_diffusion_models_and_save(
model_keys=keys,
merged_model_name=merged_model_name or "+".join(model_names),
alpha=alpha,
interp=interp,
force=force,
merge_dest_directory=dest,
)
except UnknownModelException:
raise HTTPException(
status_code=404,
detail=f"One or more of the models '{keys}' not found",
)
except ValueError as e:
raise HTTPException(status_code=400, detail=str(e))
return response
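A hypothetical client call against the merge route above. The host, port, and route prefix are assumptions (they depend on how `model_records_router` is mounted); the model keys are placeholders:

```python
import requests

resp = requests.put(
    "http://127.0.0.1:9090/api/v1/model/record/merge",  # prefix is an assumption
    json={
        "keys": ["<model-key-1>", "<model-key-2>"],  # 2-3 keys, same base type
        "merged_model_name": "my-merged-model",
        "alpha": 0.5,
        "interp": "weighted_sum",
    },
)
resp.raise_for_status()
print(resp.json())  # the merged model's config record is returned
```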

View File

@ -14,7 +14,7 @@ class SocketIO:
def __init__(self, app: FastAPI):
self.__sio = AsyncServer(async_mode="asgi", cors_allowed_origins="*")
self.__app = ASGIApp(socketio_server=self.__sio, socketio_path="socket.io")
self.__app = ASGIApp(socketio_server=self.__sio, socketio_path="/ws/socket.io")
app.mount("/ws", self.__app)
self.__sio.on("subscribe_queue", handler=self._handle_sub_queue)
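Given the `/ws` mount above, a hedged sketch of a client connection using the python-socketio client (host, port, and payload shape are assumptions); the `socketio_path` must mirror the path configured on the server:

```python
import socketio

sio = socketio.Client()
# The path below mirrors socketio_path="/ws/socket.io" on the server side.
sio.connect("http://127.0.0.1:9090", socketio_path="/ws/socket.io")
sio.emit("subscribe_queue", {"queue_id": "default"})  # payload shape is an assumption
```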

View File

@ -17,7 +17,6 @@ from controlnet_aux import (
MidasDetector,
MLSDdetector,
NormalBaeDetector,
OpenposeDetector,
PidiNetDetector,
SamDetector,
ZoeDetector,
@ -31,6 +30,7 @@ from invokeai.app.invocations.util import validate_begin_end_step, validate_weig
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.app.shared.fields import FieldDescriptions
from invokeai.backend.image_util.depth_anything import DepthAnythingDetector
from invokeai.backend.image_util.dw_openpose import DWOpenposeDetector
from ...backend.model_management import BaseModelType
from .baseinvocation import (
@ -276,31 +276,6 @@ class LineartAnimeImageProcessorInvocation(ImageProcessorInvocation):
return processed_image
@invocation(
"openpose_image_processor",
title="Openpose Processor",
tags=["controlnet", "openpose", "pose"],
category="controlnet",
version="1.2.0",
)
class OpenposeImageProcessorInvocation(ImageProcessorInvocation):
"""Applies Openpose processing to image"""
hand_and_face: bool = InputField(default=False, description="Whether to use hands and face mode")
detect_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.detect_res)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image):
openpose_processor = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
processed_image = openpose_processor(
image,
detect_resolution=self.detect_resolution,
image_resolution=self.image_resolution,
hand_and_face=self.hand_and_face,
)
return processed_image
@invocation(
"midas_depth_image_processor",
title="Midas Depth Processor",
@ -624,7 +599,7 @@ class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
resolution: int = InputField(default=512, ge=64, multiple_of=64, description=FieldDescriptions.image_res)
offload: bool = InputField(default=False)
def run_processor(self, image):
def run_processor(self, image: Image.Image):
depth_anything_detector = DepthAnythingDetector()
depth_anything_detector.load_model(model_size=self.model_size)
@ -633,3 +608,30 @@ class DepthAnythingImageProcessorInvocation(ImageProcessorInvocation):
processed_image = depth_anything_detector(image=image, resolution=self.resolution, offload=self.offload)
return processed_image
@invocation(
"dw_openpose_image_processor",
title="DW Openpose Image Processor",
tags=["controlnet", "dwpose", "openpose"],
category="controlnet",
version="1.0.0",
)
class DWOpenposeImageProcessorInvocation(ImageProcessorInvocation):
"""Generates an openpose pose from an image using DWPose"""
draw_body: bool = InputField(default=True)
draw_face: bool = InputField(default=False)
draw_hands: bool = InputField(default=False)
image_resolution: int = InputField(default=512, ge=0, description=FieldDescriptions.image_res)
def run_processor(self, image):
dw_openpose = DWOpenposeDetector()
processed_image = dw_openpose(
image,
draw_face=self.draw_face,
draw_hands=self.draw_hands,
draw_body=self.draw_body,
resolution=self.image_resolution,
)
return processed_image

View File

@ -5,12 +5,12 @@ from typing import Literal
import cv2
import numpy as np
import torch
from basicsr.archs.rrdbnet_arch import RRDBNet
from PIL import Image
from pydantic import ConfigDict
from invokeai.app.invocations.primitives import ImageField, ImageOutput
from invokeai.app.services.image_records.image_records_common import ImageCategory, ResourceOrigin
from invokeai.backend.image_util.basicsr.rrdbnet_arch import RRDBNet
from invokeai.backend.image_util.realesrgan.realesrgan import RealESRGAN
from invokeai.backend.util.devices import choose_torch_device

View File

@ -173,10 +173,10 @@ from __future__ import annotations
import os
from pathlib import Path
from typing import Any, ClassVar, Dict, List, Literal, Optional, Union, get_type_hints
from typing import Any, ClassVar, Dict, List, Literal, Optional, Union
from omegaconf import DictConfig, OmegaConf
from pydantic import Field, TypeAdapter
from pydantic import Field
from pydantic.config import JsonDict
from pydantic_settings import SettingsConfigDict
@ -251,7 +251,11 @@ class InvokeAIAppConfig(InvokeAISettings):
log_level : Literal["debug", "info", "warning", "error", "critical"] = Field(default="info", description="Emit logging messages at this level or higher", json_schema_extra=Categories.Logging)
log_sql : bool = Field(default=False, description="Log SQL queries", json_schema_extra=Categories.Logging)
# Development
dev_reload : bool = Field(default=False, description="Automatically reload when Python sources are changed.", json_schema_extra=Categories.Development)
profile_graphs : bool = Field(default=False, description="Enable graph profiling", json_schema_extra=Categories.Development)
profile_prefix : Optional[str] = Field(default=None, description="An optional prefix for profile output files.", json_schema_extra=Categories.Development)
profiles_dir : Path = Field(default=Path('profiles'), description="Directory for graph profiles", json_schema_extra=Categories.Development)
version : bool = Field(default=False, description="Show InvokeAI version and exit", json_schema_extra=Categories.Other)
@ -270,7 +274,7 @@ class InvokeAIAppConfig(InvokeAISettings):
attention_type : Literal["auto", "normal", "xformers", "sliced", "torch-sdp"] = Field(default="auto", description="Attention type", json_schema_extra=Categories.Generation)
attention_slice_size: Literal["auto", "balanced", "max", 1, 2, 3, 4, 5, 6, 7, 8] = Field(default="auto", description='Slice size, valid when attention_type=="sliced"', json_schema_extra=Categories.Generation)
force_tiled_decode : bool = Field(default=False, description="Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty)", json_schema_extra=Categories.Generation)
png_compress_level : int = Field(default=6, description="The compress_level setting of PIL.Image.save(), used for PNG encoding. All settings are lossless. 0 = fastest, largest filesize, 9 = slowest, smallest filesize", json_schema_extra=Categories.Generation)
png_compress_level : int = Field(default=1, description="The compress_level setting of PIL.Image.save(), used for PNG encoding. All settings are lossless. 0 = fastest, largest filesize, 9 = slowest, smallest filesize", json_schema_extra=Categories.Generation)
# QUEUE
max_queue_size : int = Field(default=10000, gt=0, description="Maximum number of items in the session queue", json_schema_extra=Categories.Queue)
@ -280,6 +284,9 @@ class InvokeAIAppConfig(InvokeAISettings):
deny_nodes : Optional[List[str]] = Field(default=None, description="List of nodes to deny. Omit to deny none.", json_schema_extra=Categories.Nodes)
node_cache_size : int = Field(default=512, description="How many cached nodes to keep in memory", json_schema_extra=Categories.Nodes)
# MODEL IMPORT
civitai_api_key : Optional[str] = Field(default=os.environ.get("CIVITAI_API_KEY"), description="API key for CivitAI", json_schema_extra=Categories.Other)
# DEPRECATED FIELDS - STILL HERE IN ORDER TO OBTAIN VALUES FROM PRE-3.1 CONFIG FILES
always_use_cpu : bool = Field(default=False, description="If true, use the CPU for rendering even if a GPU is available.", json_schema_extra=Categories.MemoryPerformance)
max_cache_size : Optional[float] = Field(default=None, gt=0, description="Maximum memory amount used by model cache for rapid switching", json_schema_extra=Categories.MemoryPerformance)
@ -289,6 +296,7 @@ class InvokeAIAppConfig(InvokeAISettings):
lora_dir : Optional[Path] = Field(default=None, description='Path to a directory of LoRA/LyCORIS models to be imported on startup.', json_schema_extra=Categories.Paths)
embedding_dir : Optional[Path] = Field(default=None, description='Path to a directory of Textual Inversion embeddings to be imported on startup.', json_schema_extra=Categories.Paths)
controlnet_dir : Optional[Path] = Field(default=None, description='Path to a directory of ControlNet embeddings to be imported on startup.', json_schema_extra=Categories.Paths)
# this is not referred to in the source code and can be removed entirely
#free_gpu_mem : Optional[bool] = Field(default=None, description="If true, purge model from GPU after each generation.", json_schema_extra=Categories.MemoryPerformance)
@ -328,13 +336,9 @@ class InvokeAIAppConfig(InvokeAISettings):
super().parse_args(argv)
if self.singleton_init and not clobber:
hints = get_type_hints(self.__class__)
for k in self.singleton_init:
setattr(
self,
k,
TypeAdapter(hints[k]).validate_python(self.singleton_init[k]),
)
# When setting values in this way, set validate_assignment to true if you want to validate the value.
for k, v in self.singleton_init.items():
setattr(self, k, v)
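The comment above refers to pydantic's `validate_assignment` model setting. A standalone illustration (not InvokeAI code) of what enabling it changes:

```python
from pydantic import BaseModel, ConfigDict

class Settings(BaseModel):
    model_config = ConfigDict(validate_assignment=True)
    max_cache_size: float = 6.0

s = Settings()
s.max_cache_size = "7.5"  # validated and coerced on assignment because of the flag
assert s.max_cache_size == 7.5
```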
@classmethod
def get_config(cls, **kwargs: Any) -> InvokeAIAppConfig:
@ -449,6 +453,11 @@ class InvokeAIAppConfig(InvokeAISettings):
disabled_in_config = not self.xformers_enabled
return disabled_in_config and self.attention_type != "xformers"
@property
def profiles_path(self) -> Path:
"""Path to the graph profiles directory."""
return self._resolve(self.profiles_dir)
@staticmethod
def find_root() -> Path:
"""Choose the runtime root directory when not specified on command line or init file."""

View File

@ -208,7 +208,6 @@ class DownloadQueueService(DownloadQueueServiceBase):
job = self._queue.get(timeout=1)
except Empty:
continue
try:
job.job_started = get_iso_timestamp()
self._do_download(job)

View File

@ -154,7 +154,7 @@ class ImageService(ImageServiceABC):
self.__invoker.services.logger.error("Image record not found")
raise
except Exception as e:
self.__invoker.services.logger.error("Problem getting image DTO")
self.__invoker.services.logger.error("Problem getting image metadata")
raise e
def get_workflow(self, image_name: str) -> Optional[WorkflowWithoutID]:

View File

@ -1,11 +1,16 @@
import time
import traceback
from contextlib import suppress
from threading import BoundedSemaphore, Event, Thread
from typing import Optional
import invokeai.backend.util.logging as logger
from invokeai.app.invocations.baseinvocation import InvocationContext
from invokeai.app.services.invocation_queue.invocation_queue_common import InvocationQueueItem
from invokeai.app.services.invocation_stats.invocation_stats_common import (
GESStatsNotFoundError,
)
from invokeai.app.util.profiler import Profiler
from ..invoker import Invoker
from .invocation_processor_base import InvocationProcessorABC
@ -18,7 +23,7 @@ class DefaultInvocationProcessor(InvocationProcessorABC):
__invoker: Invoker
__threadLimit: BoundedSemaphore
def start(self, invoker) -> None:
def start(self, invoker: Invoker) -> None:
# if we do want multithreading at some point, we could make this configurable
self.__threadLimit = BoundedSemaphore(1)
self.__invoker = invoker
@ -39,6 +44,27 @@ class DefaultInvocationProcessor(InvocationProcessorABC):
self.__threadLimit.acquire()
queue_item: Optional[InvocationQueueItem] = None
profiler = (
Profiler(
logger=self.__invoker.services.logger,
output_dir=self.__invoker.services.configuration.profiles_path,
prefix=self.__invoker.services.configuration.profile_prefix,
)
if self.__invoker.services.configuration.profile_graphs
else None
)
def stats_cleanup(graph_execution_state_id: str) -> None:
if profiler:
profile_path = profiler.stop()
stats_path = profile_path.with_suffix(".json")
self.__invoker.services.performance_statistics.dump_stats(
graph_execution_state_id=graph_execution_state_id, output_path=stats_path
)
with suppress(GESStatsNotFoundError):
self.__invoker.services.performance_statistics.log_stats(graph_execution_state_id)
self.__invoker.services.performance_statistics.reset_stats(graph_execution_state_id)
while not stop_event.is_set():
try:
queue_item = self.__invoker.services.queue.get()
@ -49,6 +75,10 @@ class DefaultInvocationProcessor(InvocationProcessorABC):
# do not hammer the queue
time.sleep(0.5)
continue
if profiler and profiler.profile_id != queue_item.graph_execution_state_id:
profiler.start(profile_id=queue_item.graph_execution_state_id)
try:
graph_execution_state = self.__invoker.services.graph_execution_manager.get(
queue_item.graph_execution_state_id
@ -137,7 +167,7 @@ class DefaultInvocationProcessor(InvocationProcessorABC):
pass
except CanceledException:
self.__invoker.services.performance_statistics.reset_stats(graph_execution_state.id)
stats_cleanup(graph_execution_state.id)
pass
except Exception as e:
@ -162,7 +192,6 @@ class DefaultInvocationProcessor(InvocationProcessorABC):
error_type=e.__class__.__name__,
error=error,
)
self.__invoker.services.performance_statistics.reset_stats(graph_execution_state.id)
pass
# Check queue to see if this is canceled, and skip if so
@ -194,13 +223,13 @@ class DefaultInvocationProcessor(InvocationProcessorABC):
error=traceback.format_exc(),
)
elif is_complete:
self.__invoker.services.performance_statistics.log_stats(graph_execution_state.id)
self.__invoker.services.events.emit_graph_execution_complete(
queue_batch_id=queue_item.session_queue_batch_id,
queue_item_id=queue_item.session_queue_item_id,
queue_id=queue_item.session_queue_id,
graph_execution_state_id=graph_execution_state.id,
)
stats_cleanup(graph_execution_state.id)
except KeyboardInterrupt:
pass # Log something? KeyboardInterrupt is probably not going to be seen by the processor

View File

@ -30,8 +30,10 @@ writes to the system log is stored in InvocationServices.performance_statistics.
from abc import ABC, abstractmethod
from contextlib import AbstractContextManager
from pathlib import Path
from invokeai.app.invocations.baseinvocation import BaseInvocation
from invokeai.app.services.invocation_stats.invocation_stats_common import InvocationStatsSummary
class InvocationStatsServiceBase(ABC):
@ -61,8 +63,9 @@ class InvocationStatsServiceBase(ABC):
@abstractmethod
def reset_stats(self, graph_execution_state_id: str):
"""
Reset all statistics for the indicated graph
:param graph_execution_state_id
Reset all statistics for the indicated graph.
:param graph_execution_state_id: The id of the session whose stats to reset.
:raises GESStatsNotFoundError: if the graph isn't tracked in the stats.
"""
pass
@ -70,5 +73,26 @@ class InvocationStatsServiceBase(ABC):
def log_stats(self, graph_execution_state_id: str):
"""
Write out the accumulated statistics to the log or somewhere else.
:param graph_execution_state_id: The id of the session whose stats to log.
:raises GESStatsNotFoundError: if the graph isn't tracked in the stats.
"""
pass
@abstractmethod
def get_stats(self, graph_execution_state_id: str) -> InvocationStatsSummary:
"""
Gets the accumulated statistics for the indicated graph.
:param graph_execution_state_id: The id of the session whose stats to get.
:raises GESStatsNotFoundError: if the graph isn't tracked in the stats.
"""
pass
@abstractmethod
def dump_stats(self, graph_execution_state_id: str, output_path: Path) -> None:
"""
Write out the accumulated statistics to the indicated path as JSON.
:param graph_execution_state_id: The id of the session whose stats to dump.
:param output_path: The file to write the stats to.
:raises GESStatsNotFoundError: if the graph isn't tracked in the stats.
"""
pass

View File

@ -1,5 +1,91 @@
from collections import defaultdict
from dataclasses import dataclass
from dataclasses import asdict, dataclass
from typing import Any, Optional
class GESStatsNotFoundError(Exception):
"""Raised when execution stats are not found for a given Graph Execution State."""
@dataclass
class NodeExecutionStatsSummary:
"""The stats for a specific type of node."""
node_type: str
num_calls: int
time_used_seconds: float
peak_vram_gb: float
@dataclass
class ModelCacheStatsSummary:
"""The stats for the model cache."""
high_water_mark_gb: float
cache_size_gb: float
total_usage_gb: float
cache_hits: int
cache_misses: int
models_cached: int
models_cleared: int
@dataclass
class GraphExecutionStatsSummary:
"""The stats for the graph execution state."""
graph_execution_state_id: str
execution_time_seconds: float
# `wall_time_seconds`, `ram_usage_gb` and `ram_change_gb` are derived from the node execution stats.
# In some situations, there are no node stats, so these values are optional.
wall_time_seconds: Optional[float]
ram_usage_gb: Optional[float]
ram_change_gb: Optional[float]
@dataclass
class InvocationStatsSummary:
"""
The accumulated stats for a graph execution.
Its `__str__` method returns a human-readable stats summary.
"""
vram_usage_gb: Optional[float]
graph_stats: GraphExecutionStatsSummary
model_cache_stats: ModelCacheStatsSummary
node_stats: list[NodeExecutionStatsSummary]
def __str__(self) -> str:
_str = ""
_str = f"Graph stats: {self.graph_stats.graph_execution_state_id}\n"
_str += f"{'Node':>30} {'Calls':>7} {'Seconds':>9} {'VRAM Used':>10}\n"
for summary in self.node_stats:
_str += f"{summary.node_type:>30} {summary.num_calls:>7} {summary.time_used_seconds:>8.3f}s {summary.peak_vram_gb:>9.3f}G\n"
_str += f"TOTAL GRAPH EXECUTION TIME: {self.graph_stats.execution_time_seconds:7.3f}s\n"
if self.graph_stats.wall_time_seconds is not None:
_str += f"TOTAL GRAPH WALL TIME: {self.graph_stats.wall_time_seconds:7.3f}s\n"
if self.graph_stats.ram_usage_gb is not None and self.graph_stats.ram_change_gb is not None:
_str += f"RAM used by InvokeAI process: {self.graph_stats.ram_usage_gb:4.2f}G ({self.graph_stats.ram_change_gb:+5.3f}G)\n"
_str += f"RAM used to load models: {self.model_cache_stats.total_usage_gb:4.2f}G\n"
if self.vram_usage_gb:
_str += f"VRAM in use: {self.vram_usage_gb:4.3f}G\n"
_str += "RAM cache statistics:\n"
_str += f" Model cache hits: {self.model_cache_stats.cache_hits}\n"
_str += f" Model cache misses: {self.model_cache_stats.cache_misses}\n"
_str += f" Models cached: {self.model_cache_stats.models_cached}\n"
_str += f" Models cleared from cache: {self.model_cache_stats.models_cleared}\n"
_str += f" Cache high water mark: {self.model_cache_stats.high_water_mark_gb:4.2f}/{self.model_cache_stats.cache_size_gb:4.2f}G\n"
return _str
def as_dict(self) -> dict[str, Any]:
"""Returns the stats as a dictionary."""
return asdict(self)
@dataclass
@ -55,12 +141,33 @@ class GraphExecutionStats:
return last_node
def get_pretty_log(self, graph_execution_state_id: str) -> str:
log = f"Graph stats: {graph_execution_state_id}\n"
log += f"{'Node':>30} {'Calls':>7}{'Seconds':>9} {'VRAM Used':>10}\n"
def get_graph_stats_summary(self, graph_execution_state_id: str) -> GraphExecutionStatsSummary:
"""Get a summary of the graph stats."""
first_node = self.get_first_node_stats()
last_node = self.get_last_node_stats()
# Log stats aggregated by node type.
wall_time_seconds: Optional[float] = None
ram_usage_gb: Optional[float] = None
ram_change_gb: Optional[float] = None
if last_node and first_node:
wall_time_seconds = last_node.end_time - first_node.start_time
ram_usage_gb = last_node.end_ram_gb
ram_change_gb = last_node.end_ram_gb - first_node.start_ram_gb
return GraphExecutionStatsSummary(
graph_execution_state_id=graph_execution_state_id,
execution_time_seconds=self.get_total_run_time(),
wall_time_seconds=wall_time_seconds,
ram_usage_gb=ram_usage_gb,
ram_change_gb=ram_change_gb,
)
def get_node_stats_summaries(self) -> list[NodeExecutionStatsSummary]:
"""Get a summary of the node stats."""
summaries: list[NodeExecutionStatsSummary] = []
node_stats_by_type: dict[str, list[NodeExecutionStats]] = defaultdict(list)
for node_stats in self._node_stats_list:
node_stats_by_type[node_stats.invocation_type].append(node_stats)
@ -68,17 +175,9 @@ class GraphExecutionStats:
num_calls = len(node_type_stats_list)
time_used = sum([n.total_time() for n in node_type_stats_list])
peak_vram = max([n.peak_vram_gb for n in node_type_stats_list])
log += f"{node_type:>30} {num_calls:>4} {time_used:7.3f}s {peak_vram:4.3f}G\n"
summary = NodeExecutionStatsSummary(
node_type=node_type, num_calls=num_calls, time_used_seconds=time_used, peak_vram_gb=peak_vram
)
summaries.append(summary)
# Log stats for the entire graph.
log += f"TOTAL GRAPH EXECUTION TIME: {self.get_total_run_time():7.3f}s\n"
first_node = self.get_first_node_stats()
last_node = self.get_last_node_stats()
if first_node is not None and last_node is not None:
total_wall_time = last_node.end_time - first_node.start_time
ram_change = last_node.end_ram_gb - first_node.start_ram_gb
log += f"TOTAL GRAPH WALL TIME: {total_wall_time:7.3f}s\n"
log += f"RAM used by InvokeAI process: {last_node.end_ram_gb:4.2f}G ({ram_change:+5.3f}G)\n"
return log
return summaries

View File

@ -1,5 +1,7 @@
import json
import time
from contextlib import contextmanager
from pathlib import Path
import psutil
import torch
@ -7,10 +9,19 @@ import torch
import invokeai.backend.util.logging as logger
from invokeai.app.invocations.baseinvocation import BaseInvocation
from invokeai.app.services.invoker import Invoker
from invokeai.app.services.item_storage.item_storage_common import ItemNotFoundError
from invokeai.backend.model_management.model_cache import CacheStats
from .invocation_stats_base import InvocationStatsServiceBase
from .invocation_stats_common import GraphExecutionStats, NodeExecutionStats
from .invocation_stats_common import (
GESStatsNotFoundError,
GraphExecutionStats,
GraphExecutionStatsSummary,
InvocationStatsSummary,
ModelCacheStatsSummary,
NodeExecutionStats,
NodeExecutionStatsSummary,
)
# Size of 1GB in bytes.
GB = 2**30
@ -53,7 +64,7 @@ class InvocationStatsService(InvocationStatsServiceBase):
finally:
# Record state after the invocation.
node_stats = NodeExecutionStats(
invocation_type=invocation.type,
invocation_type=invocation.get_type(),
start_time=start_time,
end_time=time.time(),
start_ram_gb=start_ram / GB,
@ -68,11 +79,11 @@ class InvocationStatsService(InvocationStatsServiceBase):
This shouldn't be necessary, but we don't have totally robust upstream handling of graph completions/errors, so
for now we call this function periodically to prevent them from accumulating.
"""
to_prune = []
to_prune: list[str] = []
for graph_execution_state_id in self._stats:
try:
graph_execution_state = self._invoker.services.graph_execution_manager.get(graph_execution_state_id)
except Exception:
except ItemNotFoundError:
# TODO(ryand): What would cause this? Should this exception just be allowed to propagate?
logger.warning(f"Failed to get graph state for {graph_execution_state_id}.")
continue
@ -95,31 +106,66 @@ class InvocationStatsService(InvocationStatsServiceBase):
del self._stats[graph_execution_state_id]
del self._cache_stats[graph_execution_state_id]
except KeyError as e:
logger.warning(f"Attempted to clear statistics for unknown graph {graph_execution_state_id}: {e}.")
raise GESStatsNotFoundError(
f"Attempted to clear statistics for unknown graph {graph_execution_state_id}: {e}."
) from e
def log_stats(self, graph_execution_state_id: str):
def get_stats(self, graph_execution_state_id: str) -> InvocationStatsSummary:
graph_stats_summary = self._get_graph_summary(graph_execution_state_id)
node_stats_summaries = self._get_node_summaries(graph_execution_state_id)
model_cache_stats_summary = self._get_model_cache_summary(graph_execution_state_id)
vram_usage_gb = torch.cuda.memory_allocated() / GB if torch.cuda.is_available() else None
return InvocationStatsSummary(
graph_stats=graph_stats_summary,
model_cache_stats=model_cache_stats_summary,
node_stats=node_stats_summaries,
vram_usage_gb=vram_usage_gb,
)
def log_stats(self, graph_execution_state_id: str) -> None:
stats = self.get_stats(graph_execution_state_id)
logger.info(str(stats))
def dump_stats(self, graph_execution_state_id: str, output_path: Path) -> None:
stats = self.get_stats(graph_execution_state_id)
with open(output_path, "w") as f:
f.write(json.dumps(stats.as_dict(), indent=2))
def _get_model_cache_summary(self, graph_execution_state_id: str) -> ModelCacheStatsSummary:
try:
graph_stats = self._stats[graph_execution_state_id]
cache_stats = self._cache_stats[graph_execution_state_id]
except KeyError as e:
logger.warning(f"Attempted to log statistics for unknown graph {graph_execution_state_id}: {e}.")
return
raise GESStatsNotFoundError(
f"Attempted to get model cache statistics for unknown graph {graph_execution_state_id}: {e}."
) from e
log = graph_stats.get_pretty_log(graph_execution_state_id)
return ModelCacheStatsSummary(
cache_hits=cache_stats.hits,
cache_misses=cache_stats.misses,
high_water_mark_gb=cache_stats.high_watermark / GB,
cache_size_gb=cache_stats.cache_size / GB,
total_usage_gb=sum(list(cache_stats.loaded_model_sizes.values())) / GB,
models_cached=cache_stats.in_cache,
models_cleared=cache_stats.cleared,
)
hwm = cache_stats.high_watermark / GB
tot = cache_stats.cache_size / GB
loaded = sum(list(cache_stats.loaded_model_sizes.values())) / GB
log += f"RAM used to load models: {loaded:4.2f}G\n"
if torch.cuda.is_available():
log += f"VRAM in use: {(torch.cuda.memory_allocated() / GB):4.3f}G\n"
log += "RAM cache statistics:\n"
log += f" Model cache hits: {cache_stats.hits}\n"
log += f" Model cache misses: {cache_stats.misses}\n"
log += f" Models cached: {cache_stats.in_cache}\n"
log += f" Models cleared from cache: {cache_stats.cleared}\n"
log += f" Cache high water mark: {hwm:4.2f}/{tot:4.2f}G\n"
logger.info(log)
def _get_graph_summary(self, graph_execution_state_id: str) -> GraphExecutionStatsSummary:
try:
graph_stats = self._stats[graph_execution_state_id]
except KeyError as e:
raise GESStatsNotFoundError(
f"Attempted to get graph statistics for unknown graph {graph_execution_state_id}: {e}."
) from e
del self._stats[graph_execution_state_id]
del self._cache_stats[graph_execution_state_id]
return graph_stats.get_graph_stats_summary(graph_execution_state_id)
def _get_node_summaries(self, graph_execution_state_id: str) -> list[NodeExecutionStatsSummary]:
try:
graph_stats = self._stats[graph_execution_state_id]
except KeyError as e:
raise GESStatsNotFoundError(
f"Attempted to get node statistics for unknown graph {graph_execution_state_id}: {e}."
) from e
return graph_stats.get_node_stats_summaries()
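A hedged usage sketch of the summary API introduced above; `stats_service` and the session id are placeholders:

```python
from pathlib import Path

summary = stats_service.get_stats(session_id)  # raises GESStatsNotFoundError if untracked
print(summary)  # human-readable table via InvocationStatsSummary.__str__
stats_service.dump_stats(session_id, output_path=Path("profiles") / f"{session_id}.json")
```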

View File

@ -1,10 +1,8 @@
from abc import ABC, abstractmethod
from typing import Callable, Generic, Optional, TypeVar
from typing import Callable, Generic, TypeVar
from pydantic import BaseModel
from invokeai.app.services.shared.pagination import PaginatedResults
T = TypeVar("T", bound=BaseModel)
@ -22,26 +20,26 @@ class ItemStorageABC(ABC, Generic[T]):
@abstractmethod
def get(self, item_id: str) -> T:
"""Gets the item, parsing it into a Pydantic model"""
pass
@abstractmethod
def get_raw(self, item_id: str) -> Optional[str]:
"""Gets the raw item as a string, skipping Pydantic parsing"""
"""
Gets the item.
:param item_id: the id of the item to get
:raises ItemNotFoundError: if the item is not found
"""
pass
@abstractmethod
def set(self, item: T) -> None:
"""Sets the item"""
"""
Sets the item. The id will be extracted based on id_field.
:param item: the item to set
"""
pass
@abstractmethod
def list(self, page: int = 0, per_page: int = 10) -> PaginatedResults[T]:
"""Gets a paginated list of items"""
pass
@abstractmethod
def search(self, query: str, page: int = 0, per_page: int = 10) -> PaginatedResults[T]:
def delete(self, item_id: str) -> None:
"""
Deletes the item, if it exists.
"""
pass
def on_changed(self, on_changed: Callable[[T], None]) -> None:

View File

@ -0,0 +1,5 @@
class ItemNotFoundError(KeyError):
"""Raised when an item is not found in storage"""
def __init__(self, item_id: str) -> None:
super().__init__(f"Item with id {item_id} not found")

View File

@ -0,0 +1,52 @@
from collections import OrderedDict
from contextlib import suppress
from typing import Generic, TypeVar
from pydantic import BaseModel
from invokeai.app.services.item_storage.item_storage_base import ItemStorageABC
from invokeai.app.services.item_storage.item_storage_common import ItemNotFoundError
T = TypeVar("T", bound=BaseModel)
class ItemStorageMemory(ItemStorageABC[T], Generic[T]):
"""
Provides a simple in-memory storage for items, with a maximum number of items to store.
The storage uses the LRU strategy to evict items from storage when the max has been reached.
"""
def __init__(self, id_field: str = "id", max_items: int = 10) -> None:
super().__init__()
if max_items < 1:
raise ValueError("max_items must be at least 1")
if not id_field:
raise ValueError("id_field must not be empty")
self._id_field = id_field
self._items: OrderedDict[str, T] = OrderedDict()
self._max_items = max_items
def get(self, item_id: str) -> T:
# If the item exists, move it to the end of the OrderedDict.
item = self._items.pop(item_id, None)
if item is None:
raise ItemNotFoundError(item_id)
self._items[item_id] = item
return item
def set(self, item: T) -> None:
item_id = getattr(item, self._id_field)
if item_id in self._items:
# If item already exists, remove it and add it to the end
self._items.pop(item_id)
elif len(self._items) >= self._max_items:
# If cache is full, evict the least recently used item
self._items.popitem(last=False)
self._items[item_id] = item
self._on_changed(item)
def delete(self, item_id: str) -> None:
# This is a no-op if the item doesn't exist.
with suppress(KeyError):
del self._items[item_id]
self._on_deleted(item_id)
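A minimal sketch of the LRU behavior described in the class docstring, assuming a pydantic model with an `id` field:

```python
from pydantic import BaseModel

class Session(BaseModel):
    id: str

storage: ItemStorageMemory[Session] = ItemStorageMemory(max_items=2)
storage.set(Session(id="a"))
storage.set(Session(id="b"))
storage.get("a")              # refreshes "a" as most recently used
storage.set(Session(id="c"))  # evicts "b", the least recently used entry
```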

View File

@ -1,147 +0,0 @@
import sqlite3
import threading
from typing import Generic, Optional, TypeVar, get_args
from pydantic import BaseModel, TypeAdapter
from invokeai.app.services.shared.pagination import PaginatedResults
from invokeai.app.services.shared.sqlite.sqlite_database import SqliteDatabase
from .item_storage_base import ItemStorageABC
T = TypeVar("T", bound=BaseModel)
class SqliteItemStorage(ItemStorageABC, Generic[T]):
_table_name: str
_conn: sqlite3.Connection
_cursor: sqlite3.Cursor
_id_field: str
_lock: threading.RLock
_validator: Optional[TypeAdapter[T]]
def __init__(self, db: SqliteDatabase, table_name: str, id_field: str = "id"):
super().__init__()
self._lock = db.lock
self._conn = db.conn
self._table_name = table_name
self._id_field = id_field # TODO: validate that T has this field
self._cursor = self._conn.cursor()
self._validator: Optional[TypeAdapter[T]] = None
self._create_table()
def _create_table(self):
try:
self._lock.acquire()
self._cursor.execute(
f"""CREATE TABLE IF NOT EXISTS {self._table_name} (
item TEXT,
id TEXT GENERATED ALWAYS AS (json_extract(item, '$.{self._id_field}')) VIRTUAL NOT NULL);"""
)
self._cursor.execute(
f"""CREATE UNIQUE INDEX IF NOT EXISTS {self._table_name}_id ON {self._table_name}(id);"""
)
finally:
self._lock.release()
def _parse_item(self, item: str) -> T:
if self._validator is None:
"""
We don't get access to `__orig_class__` in `__init__()`, and we need this before start(), so
we can create it when it is first needed instead.
__orig_class__ is technically an implementation detail of the typing module, not a supported API
"""
self._validator = TypeAdapter(get_args(self.__orig_class__)[0]) # type: ignore [attr-defined]
return self._validator.validate_json(item)
def set(self, item: T):
try:
self._lock.acquire()
self._cursor.execute(
f"""INSERT OR REPLACE INTO {self._table_name} (item) VALUES (?);""",
(item.model_dump_json(warnings=False, exclude_none=True),),
)
self._conn.commit()
finally:
self._lock.release()
self._on_changed(item)
def get(self, id: str) -> Optional[T]:
try:
self._lock.acquire()
self._cursor.execute(f"""SELECT item FROM {self._table_name} WHERE id = ?;""", (str(id),))
result = self._cursor.fetchone()
finally:
self._lock.release()
if not result:
return None
return self._parse_item(result[0])
def get_raw(self, id: str) -> Optional[str]:
try:
self._lock.acquire()
self._cursor.execute(f"""SELECT item FROM {self._table_name} WHERE id = ?;""", (str(id),))
result = self._cursor.fetchone()
finally:
self._lock.release()
if not result:
return None
return result[0]
def delete(self, id: str):
try:
self._lock.acquire()
self._cursor.execute(f"""DELETE FROM {self._table_name} WHERE id = ?;""", (str(id),))
self._conn.commit()
finally:
self._lock.release()
self._on_deleted(id)
def list(self, page: int = 0, per_page: int = 10) -> PaginatedResults[T]:
try:
self._lock.acquire()
self._cursor.execute(
f"""SELECT item FROM {self._table_name} LIMIT ? OFFSET ?;""",
(per_page, page * per_page),
)
result = self._cursor.fetchall()
items = [self._parse_item(r[0]) for r in result]
self._cursor.execute(f"""SELECT count(*) FROM {self._table_name};""")
count = self._cursor.fetchone()[0]
finally:
self._lock.release()
pageCount = int(count / per_page) + 1
return PaginatedResults[T](items=items, page=page, pages=pageCount, per_page=per_page, total=count)
def search(self, query: str, page: int = 0, per_page: int = 10) -> PaginatedResults[T]:
try:
self._lock.acquire()
self._cursor.execute(
f"""SELECT item FROM {self._table_name} WHERE item LIKE ? LIMIT ? OFFSET ?;""",
(f"%{query}%", per_page, page * per_page),
)
result = self._cursor.fetchall()
items = [self._parse_item(r[0]) for r in result]
self._cursor.execute(
f"""SELECT count(*) FROM {self._table_name} WHERE item LIKE ?;""",
(f"%{query}%",),
)
count = self._cursor.fetchone()[0]
finally:
self._lock.release()
pageCount = int(count / per_page) + 1
return PaginatedResults[T](items=items, page=page, pages=pageCount, per_page=per_page, total=count)

View File

@ -165,8 +165,8 @@ class ModelInstallJob(BaseModel):
)
source: ModelSource = Field(description="Source (URL, repo_id, or local path) of model")
local_path: Path = Field(description="Path to locally-downloaded model; may be the same as the source")
bytes: Optional[int] = Field(
default=None, description="For a remote model, the number of bytes downloaded so far (may not be available)"
bytes: int = Field(
default=0, description="For a remote model, the number of bytes downloaded so far (may not be available)"
)
total_bytes: int = Field(default=0, description="Total size of the model to be installed")
source_metadata: Optional[AnyModelRepoMetadata] = Field(

View File

@ -535,19 +535,19 @@ class ModelInstallService(ModelInstallServiceBase):
def _import_from_url(self, source: URLModelSource, config: Optional[Dict[str, Any]]) -> ModelInstallJob:
# URLs from Civitai or HuggingFace will be handled specially
url_patterns = {
r"https?://civitai.com/": CivitaiMetadataFetch,
r"https?://huggingface.co/": HuggingFaceMetadataFetch,
r"^https?://civitai.com/": CivitaiMetadataFetch,
r"^https?://huggingface.co/[^/]+/[^/]+$": HuggingFaceMetadataFetch,
}
metadata = None
for pattern, fetcher in url_patterns.items():
if re.match(pattern, str(source.url), re.IGNORECASE):
metadata = fetcher(self._session).from_url(source.url)
break
self._logger.debug(f"metadata={metadata}")
if metadata and isinstance(metadata, ModelMetadataWithFiles):
remote_files = metadata.download_urls(session=self._session)
else:
remote_files = [RemoteModelFile(url=source.url, path=Path("."), size=0)]
return self._import_remote_model(
source=source,
config=config,
@ -586,6 +586,7 @@ class ModelInstallService(ModelInstallServiceBase):
assert install_job.total_bytes is not None # to avoid type checking complaints in the loop below
self._logger.info(f"Queuing {source} for downloading")
self._logger.debug(f"remote_files={remote_files}")
for model_file in remote_files:
url = model_file.url
path = model_file.path
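A small illustration of the tightened HuggingFace pattern from the `url_patterns` change above: it now matches only bare `owner/name` repo URLs, not deep links to individual files:

```python
import re

pattern = r"^https?://huggingface.co/[^/]+/[^/]+$"
assert re.match(pattern, "https://huggingface.co/stabilityai/sdxl-turbo", re.IGNORECASE)
assert not re.match(
    pattern, "https://huggingface.co/stabilityai/sdxl-turbo/blob/main/file.safetensors", re.IGNORECASE
)
```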

View File

@ -2,7 +2,7 @@
import copy
import itertools
from typing import Annotated, Any, Optional, Union, get_args, get_origin, get_type_hints
from typing import Annotated, Any, Optional, TypeVar, Union, get_args, get_origin, get_type_hints
import networkx as nx
from pydantic import BaseModel, ConfigDict, field_validator, model_validator
@ -141,6 +141,16 @@ def are_connections_compatible(
return are_connection_types_compatible(from_node_field, to_node_field)
T = TypeVar("T")
def copydeep(obj: T) -> T:
"""Deep-copies an object. If it is a pydantic model, use the model's copy method."""
if isinstance(obj, BaseModel):
return obj.model_copy(deep=True)
return copy.deepcopy(obj)
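A small sketch of the aliasing bug `copydeep` guards against: without a deep copy, two consumers of one output share the same object, so a mutation in one node leaks into the other (model and field names are illustrative):

```python
from pydantic import BaseModel

class Payload(BaseModel):
    names: list[str]

source = Payload(names=["a"])
aliased = source             # what consumers would get without copydeep
isolated = copydeep(source)  # each consumer receives its own copy
source.names.append("b")
assert aliased.names == ["a", "b"]  # the mutation is visible through the alias
assert isolated.names == ["a"]      # the deep copy is unaffected
```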
class NodeAlreadyInGraphError(ValueError):
pass
@ -530,7 +540,7 @@ class Graph(BaseModel):
except NodeNotFoundError:
return False
def get_node(self, node_path: str) -> InvocationsUnion:
def get_node(self, node_path: str) -> BaseInvocation:
"""Gets a node from the graph using a node path."""
# Materialized graphs may have nodes at the top level
graph, node_id = self._get_graph_and_node(node_path)
@ -881,7 +891,7 @@ class GraphExecutionState(BaseModel):
# If next is still none, there's no next node, return None
return next_node
def complete(self, node_id: str, output: InvocationOutputsUnion):
def complete(self, node_id: str, output: BaseInvocationOutput) -> None:
"""Marks a node as complete"""
if node_id not in self.execution_graph.nodes:
@ -1118,17 +1128,22 @@ class GraphExecutionState(BaseModel):
def _prepare_inputs(self, node: BaseInvocation):
input_edges = [e for e in self.execution_graph.edges if e.destination.node_id == node.id]
# Inputs must be deep-copied, else if a node mutates the object, other nodes that get the same input
# will see the mutation.
if isinstance(node, CollectInvocation):
output_collection = [
getattr(self.results[edge.source.node_id], edge.source.field)
copydeep(getattr(self.results[edge.source.node_id], edge.source.field))
for edge in input_edges
if edge.destination.field == "item"
]
node.collection = output_collection
else:
for edge in input_edges:
output_value = getattr(self.results[edge.source.node_id], edge.source.field)
setattr(node, edge.destination.field, output_value)
setattr(
node,
edge.destination.field,
copydeep(getattr(self.results[edge.source.node_id], edge.source.field)),
)
# TODO: Add API for modifying underlying graph that checks if the change will be valid given the current execution state
def _is_edge_valid(self, edge: Edge) -> bool:

View File

@ -7,6 +7,7 @@ from invokeai.app.services.shared.sqlite_migrator.migrations.migration_1 import
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_2 import build_migration_2
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_3 import build_migration_3
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_4 import build_migration_4
from invokeai.app.services.shared.sqlite_migrator.migrations.migration_5 import build_migration_5
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_impl import SqliteMigrator
@ -31,6 +32,7 @@ def init_db(config: InvokeAIAppConfig, logger: Logger, image_files: ImageFileSto
migrator.register_migration(build_migration_2(image_files=image_files, logger=logger))
migrator.register_migration(build_migration_3(app_config=config, logger=logger))
migrator.register_migration(build_migration_4())
migrator.register_migration(build_migration_5())
migrator.run_migrations()
return db

View File

@ -0,0 +1,34 @@
import sqlite3
from invokeai.app.services.shared.sqlite_migrator.sqlite_migrator_common import Migration
class Migration5Callback:
def __call__(self, cursor: sqlite3.Cursor) -> None:
self._drop_graph_executions(cursor)
def _drop_graph_executions(self, cursor: sqlite3.Cursor) -> None:
"""Drops the `graph_executions` table."""
cursor.execute(
"""--sql
DROP TABLE IF EXISTS graph_executions;
"""
)
def build_migration_5() -> Migration:
"""
Build the migration from database version 4 to 5.
Introduced in v3.6.3, this migration:
- Drops the `graph_executions` table. We are able to do this because we are moving the graph storage
to be purely in-memory.
"""
migration_5 = Migration(
from_version=4,
to_version=5,
callback=Migration5Callback(),
)
return migration_5

View File

@ -72,7 +72,12 @@ class MigrateModelYamlToDb1:
continue
base_type, model_type, model_name = str(model_key).split("/")
hash = FastModelHash.hash(self.config.models_path / stanza.path)
try:
hash = FastModelHash.hash(self.config.models_path / stanza.path)
except OSError:
self.logger.warning(f"The model at {stanza.path} is not a valid file or directory. Skipping migration.")
continue
assert isinstance(model_key, str)
new_key = sha1(model_key.encode("utf-8")).hexdigest()

View File

@ -0,0 +1,67 @@
import cProfile
from logging import Logger
from pathlib import Path
from typing import Optional
class Profiler:
"""
Simple wrapper around cProfile.
Usage
```
# Create a profiler
profiler = Profiler(logger, output_dir, "sql_query_perf")
# Start a new profile
profiler.start("my_profile")
# Do stuff
profiler.stop()
```
Visualize a profile as a flamegraph with [snakeviz](https://jiffyclub.github.io/snakeviz/)
```sh
snakeviz my_profile.prof
```
Visualize a profile as a directed graph with [graphviz](https://graphviz.org/download/) & [gprof2dot](https://github.com/jrfonseca/gprof2dot)
```sh
gprof2dot -f pstats my_profile.prof | dot -Tpng -o my_profile.png
# SVG or PDF may be nicer - you can search for function names
gprof2dot -f pstats my_profile.prof | dot -Tsvg -o my_profile.svg
gprof2dot -f pstats my_profile.prof | dot -Tpdf -o my_profile.pdf
```
"""
def __init__(self, logger: Logger, output_dir: Path, prefix: Optional[str] = None) -> None:
self._logger = logger.getChild(f"profiler.{prefix}" if prefix else "profiler")
self._output_dir = output_dir
self._output_dir.mkdir(parents=True, exist_ok=True)
self._profiler: Optional[cProfile.Profile] = None
self._prefix = prefix
self.profile_id: Optional[str] = None
def start(self, profile_id: str) -> None:
if self._profiler:
self.stop()
self.profile_id = profile_id
self._profiler = cProfile.Profile()
self._profiler.enable()
self._logger.info(f"Started profiling {self.profile_id}.")
def stop(self) -> Path:
if not self._profiler:
raise RuntimeError("Profiler not initialized. Call start() first.")
self._profiler.disable()
filename = f"{self._prefix}_{self.profile_id}.prof" if self._prefix else f"{self.profile_id}.prof"
path = Path(self._output_dir, filename)
self._profiler.dump_stats(path)
self._logger.info(f"Stopped profiling, profile dumped to {path}.")
self._profiler = None
self.profile_id = None
return path
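
Beyond snakeviz and gprof2dot, the dumped `.prof` file can be inspected with the standard library alone. A small sketch, assuming the filename produced by the docstring's example (`prefix` "sql_query_perf", profile id "my_profile"):

```py
import pstats

# Load the dump written by Profiler.stop() and print the ten functions
# with the highest cumulative time.
stats = pstats.Stats("sql_query_perf_my_profile.prof")
stats.sort_stats(pstats.SortKey.CUMULATIVE).print_stats(10)
```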

View File

@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2018-2022 BasicSR Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,18 @@
"""
Adapted from https://github.com/XPixelGroup/BasicSR
License: Apache-2.0
As of Feb 2024, `basicsr` appears to be unmaintained. It imports a function from `torchvision` that is removed in
`torchvision` 0.17. Here is the deprecation warning:
UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be **removed in
0.17**. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in
torchvision.transforms.v2.functional.
As a result, a dependency on `basicsr` means we cannot keep our `torchvision` dependency up to date.
Because we only rely on a single class `RRDBNet` from `basicsr`, we've copied the relevant code here and removed the
dependency on `basicsr`.
The code is almost unchanged; only a few type annotations have been added. The license is also copied.
"""

View File

@ -0,0 +1,75 @@
from typing import Type
import torch
from torch import nn as nn
from torch.nn import init as init
from torch.nn.modules.batchnorm import _BatchNorm
@torch.no_grad()
def default_init_weights(
module_list: list[nn.Module] | nn.Module, scale: float = 1, bias_fill: float = 0, **kwargs
) -> None:
"""Initialize network weights.
Args:
module_list (list[nn.Module] | nn.Module): Modules to be initialized.
scale (float): Scale initialized weights, especially for residual
blocks. Default: 1.
bias_fill (float): The value to fill bias. Default: 0
kwargs (dict): Other arguments for initialization function.
"""
if not isinstance(module_list, list):
module_list = [module_list]
for module in module_list:
for m in module.modules():
if isinstance(m, nn.Conv2d):
init.kaiming_normal_(m.weight, **kwargs)
m.weight.data *= scale
if m.bias is not None:
m.bias.data.fill_(bias_fill)
elif isinstance(m, nn.Linear):
init.kaiming_normal_(m.weight, **kwargs)
m.weight.data *= scale
if m.bias is not None:
m.bias.data.fill_(bias_fill)
elif isinstance(m, _BatchNorm):
init.constant_(m.weight, 1)
if m.bias is not None:
m.bias.data.fill_(bias_fill)
def make_layer(basic_block: Type[nn.Module], num_basic_block: int, **kwarg) -> nn.Sequential:
"""Make layers by stacking the same blocks.
Args:
basic_block (Type[nn.Module]): nn.Module class for basic block.
num_basic_block (int): number of blocks.
Returns:
nn.Sequential: Stacked blocks in nn.Sequential.
"""
layers = []
for _ in range(num_basic_block):
layers.append(basic_block(**kwarg))
return nn.Sequential(*layers)
# TODO: may write a cpp file
def pixel_unshuffle(x: torch.Tensor, scale: int) -> torch.Tensor:
"""Pixel unshuffle.
Args:
x (Tensor): Input feature with shape (b, c, hh, hw).
scale (int): Downsample ratio.
Returns:
Tensor: the pixel unshuffled feature.
"""
b, c, hh, hw = x.size()
out_channel = c * (scale**2)
assert hh % scale == 0 and hw % scale == 0
h = hh // scale
w = hw // scale
x_view = x.view(b, c, h, scale, w, scale)
return x_view.permute(0, 1, 3, 5, 2, 4).reshape(b, out_channel, h, w)
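
This hand-rolled rearrangement should match `torch.nn.functional.pixel_unshuffle` (available since torch 1.8), which makes for a quick sanity check of the vendored copy:

```py
import torch
import torch.nn.functional as F

x = torch.randn(2, 3, 8, 8)
out = pixel_unshuffle(x, scale=2)
assert out.shape == (2, 12, 4, 4)  # channels x4, spatial halved

# The view/permute/reshape above follows torch's channel ordering,
# so the two results should be identical.
assert torch.equal(out, F.pixel_unshuffle(x, downscale_factor=2))
```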

View File

@ -0,0 +1,125 @@
import torch
from torch import nn as nn
from torch.nn import functional as F
from .arch_util import default_init_weights, make_layer, pixel_unshuffle
class ResidualDenseBlock(nn.Module):
"""Residual Dense Block.
Used in RRDB block in ESRGAN.
Args:
num_feat (int): Channel number of intermediate features.
num_grow_ch (int): Channels for each growth.
"""
def __init__(self, num_feat: int = 64, num_grow_ch: int = 32) -> None:
super(ResidualDenseBlock, self).__init__()
self.conv1 = nn.Conv2d(num_feat, num_grow_ch, 3, 1, 1)
self.conv2 = nn.Conv2d(num_feat + num_grow_ch, num_grow_ch, 3, 1, 1)
self.conv3 = nn.Conv2d(num_feat + 2 * num_grow_ch, num_grow_ch, 3, 1, 1)
self.conv4 = nn.Conv2d(num_feat + 3 * num_grow_ch, num_grow_ch, 3, 1, 1)
self.conv5 = nn.Conv2d(num_feat + 4 * num_grow_ch, num_feat, 3, 1, 1)
self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
# initialization
default_init_weights([self.conv1, self.conv2, self.conv3, self.conv4, self.conv5], 0.1)
def forward(self, x: torch.Tensor) -> torch.Tensor:
x1 = self.lrelu(self.conv1(x))
x2 = self.lrelu(self.conv2(torch.cat((x, x1), 1)))
x3 = self.lrelu(self.conv3(torch.cat((x, x1, x2), 1)))
x4 = self.lrelu(self.conv4(torch.cat((x, x1, x2, x3), 1)))
x5 = self.conv5(torch.cat((x, x1, x2, x3, x4), 1))
# Empirically, we use 0.2 to scale the residual for better performance
return x5 * 0.2 + x
class RRDB(nn.Module):
"""Residual in Residual Dense Block.
Used in RRDB-Net in ESRGAN.
Args:
num_feat (int): Channel number of intermediate features.
num_grow_ch (int): Channels for each growth.
"""
def __init__(self, num_feat: int, num_grow_ch: int = 32) -> None:
super(RRDB, self).__init__()
self.rdb1 = ResidualDenseBlock(num_feat, num_grow_ch)
self.rdb2 = ResidualDenseBlock(num_feat, num_grow_ch)
self.rdb3 = ResidualDenseBlock(num_feat, num_grow_ch)
def forward(self, x: torch.Tensor) -> torch.Tensor:
out = self.rdb1(x)
out = self.rdb2(out)
out = self.rdb3(out)
# Empirically, we use 0.2 to scale the residual for better performance
return out * 0.2 + x
class RRDBNet(nn.Module):
"""Networks consisting of Residual in Residual Dense Block, which is used
in ESRGAN.
ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks.
We extend ESRGAN for scale x2 and scale x1.
Note: This is one option for scale 1, scale 2 in RRDBNet.
We first employ pixel-unshuffle (an inverse operation of pixel-shuffle) to reduce the spatial size
and enlarge the channel size before feeding inputs into the main ESRGAN architecture.
Args:
num_in_ch (int): Channel number of inputs.
num_out_ch (int): Channel number of outputs.
num_feat (int): Channel number of intermediate features.
Default: 64
num_block (int): Block number in the trunk network. Defaults: 23
num_grow_ch (int): Channels for each growth. Default: 32.
"""
def __init__(
self,
num_in_ch: int,
num_out_ch: int,
scale: int = 4,
num_feat: int = 64,
num_block: int = 23,
num_grow_ch: int = 32,
) -> None:
super(RRDBNet, self).__init__()
self.scale = scale
if scale == 2:
num_in_ch = num_in_ch * 4
elif scale == 1:
num_in_ch = num_in_ch * 16
self.conv_first = nn.Conv2d(num_in_ch, num_feat, 3, 1, 1)
self.body = make_layer(RRDB, num_block, num_feat=num_feat, num_grow_ch=num_grow_ch)
self.conv_body = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
# upsample
self.conv_up1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
self.conv_up2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
self.conv_hr = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1)
self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
def forward(self, x: torch.Tensor) -> torch.Tensor:
if self.scale == 2:
feat = pixel_unshuffle(x, scale=2)
elif self.scale == 1:
feat = pixel_unshuffle(x, scale=4)
else:
feat = x
feat = self.conv_first(feat)
body_feat = self.conv_body(self.body(feat))
feat = feat + body_feat
# upsample
feat = self.lrelu(self.conv_up1(F.interpolate(feat, scale_factor=2, mode="nearest")))
feat = self.lrelu(self.conv_up2(F.interpolate(feat, scale_factor=2, mode="nearest")))
out = self.conv_last(self.lrelu(self.conv_hr(feat)))
return out
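
A minimal smoke test of the vendored network, assuming the default x4 ESRGAN configuration (lower `num_block` for a faster check; the weights here are random, not a trained checkpoint):

```py
import torch

net = RRDBNet(num_in_ch=3, num_out_ch=3, scale=4, num_feat=64, num_block=23)
net.eval()

with torch.no_grad():
    lr = torch.randn(1, 3, 32, 32)  # low-resolution input
    sr = net(lr)                    # super-resolved output

assert sr.shape == (1, 3, 128, 128)  # two nearest-neighbor 2x upsamples = 4x
```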

View File

@ -0,0 +1,81 @@
import numpy as np
import torch
from controlnet_aux.util import resize_image
from PIL import Image
from invokeai.backend.image_util.dw_openpose.utils import draw_bodypose, draw_facepose, draw_handpose
from invokeai.backend.image_util.dw_openpose.wholebody import Wholebody
def draw_pose(pose, H, W, draw_face=True, draw_body=True, draw_hands=True, resolution=512):
bodies = pose["bodies"]
faces = pose["faces"]
hands = pose["hands"]
candidate = bodies["candidate"]
subset = bodies["subset"]
canvas = np.zeros(shape=(H, W, 3), dtype=np.uint8)
if draw_body:
canvas = draw_bodypose(canvas, candidate, subset)
if draw_hands:
canvas = draw_handpose(canvas, hands)
if draw_face:
canvas = draw_facepose(canvas, faces)
dwpose_image = resize_image(
canvas,
resolution,
)
dwpose_image = Image.fromarray(dwpose_image)
return dwpose_image
class DWOpenposeDetector:
"""
Code from the original implementation of the DW Openpose Detector.
Credits: https://github.com/IDEA-Research/DWPose
"""
def __init__(self) -> None:
self.pose_estimation = Wholebody()
def __call__(
self, image: Image.Image, draw_face=False, draw_body=True, draw_hands=False, resolution=512
) -> Image.Image:
np_image = np.array(image)
H, W, C = np_image.shape
with torch.no_grad():
candidate, subset = self.pose_estimation(np_image)
nums, keys, locs = candidate.shape
candidate[..., 0] /= float(W)
candidate[..., 1] /= float(H)
body = candidate[:, :18].copy()
body = body.reshape(nums * 18, locs)
score = subset[:, :18]
for i in range(len(score)):
for j in range(len(score[i])):
if score[i][j] > 0.3:
score[i][j] = int(18 * i + j)
else:
score[i][j] = -1
un_visible = subset < 0.3
candidate[un_visible] = -1
# foot = candidate[:, 18:24]
faces = candidate[:, 24:92]
hands = candidate[:, 92:113]
hands = np.vstack([hands, candidate[:, 113:]])
bodies = {"candidate": body, "subset": score}
pose = {"bodies": bodies, "hands": hands, "faces": faces}
return draw_pose(
pose, H, W, draw_face=draw_face, draw_hands=draw_hands, draw_body=draw_body, resolution=resolution
)
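
A hedged usage sketch: inside a configured InvokeAI environment, instantiating the detector downloads the two ONNX models on first use (see `wholebody.py` below), then maps a PIL image to a rendered pose image. The filenames are illustrative:

```py
from PIL import Image

detector = DWOpenposeDetector()   # triggers model downloads on first run
image = Image.open("person.jpg")  # hypothetical input image
pose_map = detector(image, draw_body=True, draw_hands=True, resolution=512)
pose_map.save("pose.png")
```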

View File

@ -0,0 +1,128 @@
# Code from the original DWPose Implementation: https://github.com/IDEA-Research/DWPose
import cv2
import numpy as np
def nms(boxes, scores, nms_thr):
"""Single class NMS implemented in Numpy."""
x1 = boxes[:, 0]
y1 = boxes[:, 1]
x2 = boxes[:, 2]
y2 = boxes[:, 3]
areas = (x2 - x1 + 1) * (y2 - y1 + 1)
order = scores.argsort()[::-1]
keep = []
while order.size > 0:
i = order[0]
keep.append(i)
xx1 = np.maximum(x1[i], x1[order[1:]])
yy1 = np.maximum(y1[i], y1[order[1:]])
xx2 = np.minimum(x2[i], x2[order[1:]])
yy2 = np.minimum(y2[i], y2[order[1:]])
w = np.maximum(0.0, xx2 - xx1 + 1)
h = np.maximum(0.0, yy2 - yy1 + 1)
inter = w * h
ovr = inter / (areas[i] + areas[order[1:]] - inter)
inds = np.where(ovr <= nms_thr)[0]
order = order[inds + 1]
return keep
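
A quick worked example of the `nms` routine: two heavily overlapping boxes collapse to the higher-scoring one, while a distant box survives. With the +1 area convention above, boxes A and B have IoU = 100/142 ≈ 0.70, above the 0.5 threshold:

```py
import numpy as np

boxes = np.array([
    [0, 0, 10, 10],    # box A
    [1, 1, 11, 11],    # box B, overlaps A heavily
    [50, 50, 60, 60],  # box C, disjoint
], dtype=np.float32)
scores = np.array([0.9, 0.8, 0.7])

# B is suppressed by the higher-scoring A; A and C are kept.
assert nms(boxes, scores, nms_thr=0.5) == [0, 2]
```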
def multiclass_nms(boxes, scores, nms_thr, score_thr):
"""Multiclass NMS implemented in Numpy. Class-aware version."""
final_dets = []
num_classes = scores.shape[1]
for cls_ind in range(num_classes):
cls_scores = scores[:, cls_ind]
valid_score_mask = cls_scores > score_thr
if valid_score_mask.sum() == 0:
continue
else:
valid_scores = cls_scores[valid_score_mask]
valid_boxes = boxes[valid_score_mask]
keep = nms(valid_boxes, valid_scores, nms_thr)
if len(keep) > 0:
cls_inds = np.ones((len(keep), 1)) * cls_ind
dets = np.concatenate([valid_boxes[keep], valid_scores[keep, None], cls_inds], 1)
final_dets.append(dets)
if len(final_dets) == 0:
return None
return np.concatenate(final_dets, 0)
def demo_postprocess(outputs, img_size, p6=False):
grids = []
expanded_strides = []
strides = [8, 16, 32] if not p6 else [8, 16, 32, 64]
hsizes = [img_size[0] // stride for stride in strides]
wsizes = [img_size[1] // stride for stride in strides]
for hsize, wsize, stride in zip(hsizes, wsizes, strides, strict=False):
xv, yv = np.meshgrid(np.arange(wsize), np.arange(hsize))
grid = np.stack((xv, yv), 2).reshape(1, -1, 2)
grids.append(grid)
shape = grid.shape[:2]
expanded_strides.append(np.full((*shape, 1), stride))
grids = np.concatenate(grids, 1)
expanded_strides = np.concatenate(expanded_strides, 1)
outputs[..., :2] = (outputs[..., :2] + grids) * expanded_strides
outputs[..., 2:4] = np.exp(outputs[..., 2:4]) * expanded_strides
return outputs
def preprocess(img, input_size, swap=(2, 0, 1)):
if len(img.shape) == 3:
padded_img = np.ones((input_size[0], input_size[1], 3), dtype=np.uint8) * 114
else:
padded_img = np.ones(input_size, dtype=np.uint8) * 114
r = min(input_size[0] / img.shape[0], input_size[1] / img.shape[1])
resized_img = cv2.resize(
img,
(int(img.shape[1] * r), int(img.shape[0] * r)),
interpolation=cv2.INTER_LINEAR,
).astype(np.uint8)
padded_img[: int(img.shape[0] * r), : int(img.shape[1] * r)] = resized_img
padded_img = padded_img.transpose(swap)
padded_img = np.ascontiguousarray(padded_img, dtype=np.float32)
return padded_img, r
def inference_detector(session, oriImg):
input_shape = (640, 640)
img, ratio = preprocess(oriImg, input_shape)
ort_inputs = {session.get_inputs()[0].name: img[None, :, :, :]}
output = session.run(None, ort_inputs)
predictions = demo_postprocess(output[0], input_shape)[0]
boxes = predictions[:, :4]
scores = predictions[:, 4:5] * predictions[:, 5:]
boxes_xyxy = np.ones_like(boxes)
boxes_xyxy[:, 0] = boxes[:, 0] - boxes[:, 2] / 2.0
boxes_xyxy[:, 1] = boxes[:, 1] - boxes[:, 3] / 2.0
boxes_xyxy[:, 2] = boxes[:, 0] + boxes[:, 2] / 2.0
boxes_xyxy[:, 3] = boxes[:, 1] + boxes[:, 3] / 2.0
boxes_xyxy /= ratio
dets = multiclass_nms(boxes_xyxy, scores, nms_thr=0.45, score_thr=0.1)
if dets is not None:
final_boxes, final_scores, final_cls_inds = dets[:, :4], dets[:, 4], dets[:, 5]
isscore = final_scores > 0.3
iscat = final_cls_inds == 0
isbbox = [i and j for (i, j) in zip(isscore, iscat, strict=False)]
final_boxes = final_boxes[isbbox]
else:
final_boxes = np.array([])
return final_boxes

View File

@ -0,0 +1,361 @@
# Code from the original DWPose Implementation: https://github.com/IDEA-Research/DWPose
from typing import List, Tuple
import cv2
import numpy as np
import onnxruntime as ort
def preprocess(
img: np.ndarray, out_bbox, input_size: Tuple[int, int] = (192, 256)
) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
"""Do preprocessing for RTMPose model inference.
Args:
img (np.ndarray): Input image.
out_bbox: Detected person bounding boxes, formatted as (left, top, right, bottom).
input_size (tuple): Model input size in shape (w, h).
Returns:
tuple:
- resized_img (np.ndarray): Preprocessed image.
- center (np.ndarray): Center of image.
- scale (np.ndarray): Scale of image.
"""
# get shape of image
img_shape = img.shape[:2]
out_img, out_center, out_scale = [], [], []
if len(out_bbox) == 0:
out_bbox = [[0, 0, img_shape[1], img_shape[0]]]
for i in range(len(out_bbox)):
x0 = out_bbox[i][0]
y0 = out_bbox[i][1]
x1 = out_bbox[i][2]
y1 = out_bbox[i][3]
bbox = np.array([x0, y0, x1, y1])
# get center and scale
center, scale = bbox_xyxy2cs(bbox, padding=1.25)
# do affine transformation
resized_img, scale = top_down_affine(input_size, scale, center, img)
# normalize image
mean = np.array([123.675, 116.28, 103.53])
std = np.array([58.395, 57.12, 57.375])
resized_img = (resized_img - mean) / std
out_img.append(resized_img)
out_center.append(center)
out_scale.append(scale)
return out_img, out_center, out_scale
def inference(sess: ort.InferenceSession, img: np.ndarray) -> np.ndarray:
"""Inference RTMPose model.
Args:
sess (ort.InferenceSession): ONNXRuntime session.
img (np.ndarray): Input image in shape.
Returns:
outputs (np.ndarray): Output of RTMPose model.
"""
all_out = []
# build input
for i in range(len(img)):
input = [img[i].transpose(2, 0, 1)]
# build output
sess_input = {sess.get_inputs()[0].name: input}
sess_output = []
for out in sess.get_outputs():
sess_output.append(out.name)
# run model
outputs = sess.run(sess_output, sess_input)
all_out.append(outputs)
return all_out
def postprocess(
outputs: List[np.ndarray],
model_input_size: Tuple[int, int],
center: Tuple[int, int],
scale: Tuple[int, int],
simcc_split_ratio: float = 2.0,
) -> Tuple[np.ndarray, np.ndarray]:
"""Postprocess for RTMPose model output.
Args:
outputs (np.ndarray): Output of RTMPose model.
model_input_size (tuple): RTMPose model Input image size.
center (tuple): Center of bbox in shape (x, y).
scale (tuple): Scale of bbox in shape (w, h).
simcc_split_ratio (float): Split ratio of simcc.
Returns:
tuple:
- keypoints (np.ndarray): Rescaled keypoints.
- scores (np.ndarray): Model predict scores.
"""
all_key = []
all_score = []
for i in range(len(outputs)):
# use simcc to decode
simcc_x, simcc_y = outputs[i]
keypoints, scores = decode(simcc_x, simcc_y, simcc_split_ratio)
# rescale keypoints
keypoints = keypoints / model_input_size * scale[i] + center[i] - scale[i] / 2
all_key.append(keypoints[0])
all_score.append(scores[0])
return np.array(all_key), np.array(all_score)
def bbox_xyxy2cs(bbox: np.ndarray, padding: float = 1.0) -> Tuple[np.ndarray, np.ndarray]:
"""Transform the bbox format from (x,y,w,h) into (center, scale)
Args:
bbox (ndarray): Bounding box(es) in shape (4,) or (n, 4), formatted
as (left, top, right, bottom)
padding (float): BBox padding factor that will be multiplied with the scale.
Default: 1.0
Returns:
tuple: A tuple containing center and scale.
- np.ndarray[float32]: Center (x, y) of the bbox in shape (2,) or
(n, 2)
- np.ndarray[float32]: Scale (w, h) of the bbox in shape (2,) or
(n, 2)
"""
# convert single bbox from (4, ) to (1, 4)
dim = bbox.ndim
if dim == 1:
bbox = bbox[None, :]
# get bbox center and scale
x1, y1, x2, y2 = np.hsplit(bbox, [1, 2, 3])
center = np.hstack([x1 + x2, y1 + y2]) * 0.5
scale = np.hstack([x2 - x1, y2 - y1]) * padding
if dim == 1:
center = center[0]
scale = scale[0]
return center, scale
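
A quick numeric check of the conversion: a 100x200 box anchored at the origin, with the 1.25 padding used by `preprocess` above:

```py
import numpy as np

center, scale = bbox_xyxy2cs(np.array([0, 0, 100, 200]), padding=1.25)
assert np.allclose(center, [50.0, 100.0])  # box midpoint
assert np.allclose(scale, [125.0, 250.0])  # (w, h) scaled by the padding factor
```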
def _fix_aspect_ratio(bbox_scale: np.ndarray, aspect_ratio: float) -> np.ndarray:
"""Extend the scale to match the given aspect ratio.
Args:
bbox_scale (np.ndarray): The image scale (w, h) in shape (2, )
aspect_ratio (float): The ratio of ``w/h``
Returns:
np.ndarray: The reshaped image scale in (2, )
"""
w, h = np.hsplit(bbox_scale, [1])
bbox_scale = np.where(w > h * aspect_ratio, np.hstack([w, w / aspect_ratio]), np.hstack([h * aspect_ratio, h]))
return bbox_scale
def _rotate_point(pt: np.ndarray, angle_rad: float) -> np.ndarray:
"""Rotate a point by an angle.
Args:
pt (np.ndarray): 2D point coordinates (x, y) in shape (2, )
angle_rad (float): rotation angle in radian
Returns:
np.ndarray: Rotated point in shape (2, )
"""
sn, cs = np.sin(angle_rad), np.cos(angle_rad)
rot_mat = np.array([[cs, -sn], [sn, cs]])
return rot_mat @ pt
def _get_3rd_point(a: np.ndarray, b: np.ndarray) -> np.ndarray:
"""To calculate the affine matrix, three pairs of points are required. This
function is used to get the 3rd point, given 2D points a & b.
The 3rd point is defined by rotating vector `a - b` by 90 degrees
anticlockwise, using b as the rotation center.
Args:
a (np.ndarray): The 1st point (x,y) in shape (2, )
b (np.ndarray): The 2nd point (x,y) in shape (2, )
Returns:
np.ndarray: The 3rd point.
"""
direction = a - b
c = b + np.r_[-direction[1], direction[0]]
return c
def get_warp_matrix(
center: np.ndarray,
scale: np.ndarray,
rot: float,
output_size: Tuple[int, int],
shift: Tuple[float, float] = (0.0, 0.0),
inv: bool = False,
) -> np.ndarray:
"""Calculate the affine transformation matrix that can warp the bbox area
in the input image to the output size.
Args:
center (np.ndarray[2, ]): Center of the bounding box (x, y).
scale (np.ndarray[2, ]): Scale of the bounding box
wrt [width, height].
rot (float): Rotation angle (degree).
output_size (np.ndarray[2, ] | list(2,)): Size of the
destination heatmaps.
shift (tuple): Shift translation ratio wrt the width/height, in the range 0-1.
Default (0., 0.).
inv (bool): Option to inverse the affine transform direction.
(inv=False: src->dst or inv=True: dst->src)
Returns:
np.ndarray: A 2x3 transformation matrix
"""
shift = np.array(shift)
src_w = scale[0]
dst_w = output_size[0]
dst_h = output_size[1]
# compute transformation matrix
rot_rad = np.deg2rad(rot)
src_dir = _rotate_point(np.array([0.0, src_w * -0.5]), rot_rad)
dst_dir = np.array([0.0, dst_w * -0.5])
# get three reference points of the src rectangle in the original image
src = np.zeros((3, 2), dtype=np.float32)
src[0, :] = center + scale * shift
src[1, :] = center + src_dir + scale * shift
src[2, :] = _get_3rd_point(src[0, :], src[1, :])
# get three reference points of the dst rectangle in the input image
dst = np.zeros((3, 2), dtype=np.float32)
dst[0, :] = [dst_w * 0.5, dst_h * 0.5]
dst[1, :] = np.array([dst_w * 0.5, dst_h * 0.5]) + dst_dir
dst[2, :] = _get_3rd_point(dst[0, :], dst[1, :])
if inv:
warp_mat = cv2.getAffineTransform(np.float32(dst), np.float32(src))
else:
warp_mat = cv2.getAffineTransform(np.float32(src), np.float32(dst))
return warp_mat
def top_down_affine(
input_size: dict, bbox_scale: dict, bbox_center: dict, img: np.ndarray
) -> Tuple[np.ndarray, np.ndarray]:
"""Get the bbox image as the model input by affine transform.
Args:
input_size (dict): The input size of the model.
bbox_scale (dict): The bbox scale of the img.
bbox_center (dict): The bbox center of the img.
img (np.ndarray): The original image.
Returns:
tuple: A tuple containing the transformed image and the adjusted bbox scale.
- np.ndarray[float32]: img after affine transform.
- np.ndarray[float32]: bbox scale after affine transform.
"""
w, h = input_size
warp_size = (int(w), int(h))
# reshape bbox to fixed aspect ratio
bbox_scale = _fix_aspect_ratio(bbox_scale, aspect_ratio=w / h)
# get the affine matrix
center = bbox_center
scale = bbox_scale
rot = 0
warp_mat = get_warp_matrix(center, scale, rot, output_size=(w, h))
# do affine transform
img = cv2.warpAffine(img, warp_mat, warp_size, flags=cv2.INTER_LINEAR)
return img, bbox_scale
def get_simcc_maximum(simcc_x: np.ndarray, simcc_y: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
"""Get maximum response location and value from simcc representations.
Note:
instance number: N
num_keypoints: K
heatmap height: H
heatmap width: W
Args:
simcc_x (np.ndarray): x-axis SimCC in shape (K, Wx) or (N, K, Wx)
simcc_y (np.ndarray): y-axis SimCC in shape (K, Wy) or (N, K, Wy)
Returns:
tuple:
- locs (np.ndarray): locations of maximum heatmap responses in shape
(K, 2) or (N, K, 2)
- vals (np.ndarray): values of maximum heatmap responses in shape
(K,) or (N, K)
"""
N, K, Wx = simcc_x.shape
simcc_x = simcc_x.reshape(N * K, -1)
simcc_y = simcc_y.reshape(N * K, -1)
# get maximum value locations
x_locs = np.argmax(simcc_x, axis=1)
y_locs = np.argmax(simcc_y, axis=1)
locs = np.stack((x_locs, y_locs), axis=-1).astype(np.float32)
max_val_x = np.amax(simcc_x, axis=1)
max_val_y = np.amax(simcc_y, axis=1)
# take the per-keypoint minimum of the x/y maxima as a conservative confidence
mask = max_val_x > max_val_y
max_val_x[mask] = max_val_y[mask]
vals = max_val_x
locs[vals <= 0.0] = -1
# reshape
locs = locs.reshape(N, K, 2)
vals = vals.reshape(N, K)
return locs, vals
def decode(simcc_x: np.ndarray, simcc_y: np.ndarray, simcc_split_ratio) -> Tuple[np.ndarray, np.ndarray]:
"""Modulate simcc distribution with Gaussian.
Args:
simcc_x (np.ndarray[K, Wx]): model predicted simcc in x.
simcc_y (np.ndarray[K, Wy]): model predicted simcc in y.
simcc_split_ratio (int): The split ratio of simcc.
Returns:
tuple: A tuple containing center and scale.
- np.ndarray[float32]: keypoints in shape (K, 2) or (n, K, 2)
- np.ndarray[float32]: scores in shape (K,) or (n, K)
"""
keypoints, scores = get_simcc_maximum(simcc_x, simcc_y)
keypoints /= simcc_split_ratio
return keypoints, scores
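
A shape-level sketch of the SimCC decode path with random logits (N instances, K keypoints, one classification-bin axis per coordinate; the bin counts are illustrative):

```py
import numpy as np

N, K = 2, 17
simcc_x = np.random.rand(N, K, 384)  # per-keypoint x-axis bins
simcc_y = np.random.rand(N, K, 512)  # per-keypoint y-axis bins

keypoints, scores = decode(simcc_x, simcc_y, simcc_split_ratio=2.0)
assert keypoints.shape == (N, K, 2)  # (x, y) per keypoint, in model-input pixels
assert scores.shape == (N, K)
```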
def inference_pose(session, out_bbox, oriImg):
h, w = session.get_inputs()[0].shape[2:]
model_input_size = (w, h)
resized_img, center, scale = preprocess(oriImg, out_bbox, model_input_size)
outputs = inference(session, resized_img)
keypoints, scores = postprocess(outputs, model_input_size, center, scale)
return keypoints, scores

View File

@ -0,0 +1,155 @@
# Code from the original DWPose Implementation: https://github.com/IDEA-Research/DWPose
import math
import cv2
import matplotlib
import numpy as np
eps = 0.01
def draw_bodypose(canvas, candidate, subset):
H, W, C = canvas.shape
candidate = np.array(candidate)
subset = np.array(subset)
stickwidth = 4
limbSeq = [
[2, 3],
[2, 6],
[3, 4],
[4, 5],
[6, 7],
[7, 8],
[2, 9],
[9, 10],
[10, 11],
[2, 12],
[12, 13],
[13, 14],
[2, 1],
[1, 15],
[15, 17],
[1, 16],
[16, 18],
[3, 17],
[6, 18],
]
colors = [
[255, 0, 0],
[255, 85, 0],
[255, 170, 0],
[255, 255, 0],
[170, 255, 0],
[85, 255, 0],
[0, 255, 0],
[0, 255, 85],
[0, 255, 170],
[0, 255, 255],
[0, 170, 255],
[0, 85, 255],
[0, 0, 255],
[85, 0, 255],
[170, 0, 255],
[255, 0, 255],
[255, 0, 170],
[255, 0, 85],
]
for i in range(17):
for n in range(len(subset)):
index = subset[n][np.array(limbSeq[i]) - 1]
if -1 in index:
continue
Y = candidate[index.astype(int), 0] * float(W)
X = candidate[index.astype(int), 1] * float(H)
mX = np.mean(X)
mY = np.mean(Y)
length = ((X[0] - X[1]) ** 2 + (Y[0] - Y[1]) ** 2) ** 0.5
angle = math.degrees(math.atan2(X[0] - X[1], Y[0] - Y[1]))
polygon = cv2.ellipse2Poly((int(mY), int(mX)), (int(length / 2), stickwidth), int(angle), 0, 360, 1)
cv2.fillConvexPoly(canvas, polygon, colors[i])
canvas = (canvas * 0.6).astype(np.uint8)
for i in range(18):
for n in range(len(subset)):
index = int(subset[n][i])
if index == -1:
continue
x, y = candidate[index][0:2]
x = int(x * W)
y = int(y * H)
cv2.circle(canvas, (int(x), int(y)), 4, colors[i], thickness=-1)
return canvas
def draw_handpose(canvas, all_hand_peaks):
H, W, C = canvas.shape
edges = [
[0, 1],
[1, 2],
[2, 3],
[3, 4],
[0, 5],
[5, 6],
[6, 7],
[7, 8],
[0, 9],
[9, 10],
[10, 11],
[11, 12],
[0, 13],
[13, 14],
[14, 15],
[15, 16],
[0, 17],
[17, 18],
[18, 19],
[19, 20],
]
for peaks in all_hand_peaks:
peaks = np.array(peaks)
for ie, e in enumerate(edges):
x1, y1 = peaks[e[0]]
x2, y2 = peaks[e[1]]
x1 = int(x1 * W)
y1 = int(y1 * H)
x2 = int(x2 * W)
y2 = int(y2 * H)
if x1 > eps and y1 > eps and x2 > eps and y2 > eps:
cv2.line(
canvas,
(x1, y1),
(x2, y2),
matplotlib.colors.hsv_to_rgb([ie / float(len(edges)), 1.0, 1.0]) * 255,
thickness=2,
)
for _, keypoint in enumerate(peaks):
x, y = keypoint
x = int(x * W)
y = int(y * H)
if x > eps and y > eps:
cv2.circle(canvas, (x, y), 4, (0, 0, 255), thickness=-1)
return canvas
def draw_facepose(canvas, all_lmks):
H, W, C = canvas.shape
for lmks in all_lmks:
lmks = np.array(lmks)
for lmk in lmks:
x, y = lmk
x = int(x * W)
y = int(y * H)
if x > eps and y > eps:
cv2.circle(canvas, (x, y), 3, (255, 255, 255), thickness=-1)
return canvas

View File

@ -0,0 +1,67 @@
# Code from the original DWPose Implementation: https://github.com/IDEA-Research/DWPose
# Modified pathing to suit Invoke
import pathlib
import numpy as np
import onnxruntime as ort
from invokeai.app.services.config.config_default import InvokeAIAppConfig
from invokeai.backend.util.devices import choose_torch_device
from invokeai.backend.util.util import download_with_progress_bar
from .onnxdet import inference_detector
from .onnxpose import inference_pose
DWPOSE_MODELS = {
"yolox_l.onnx": {
"local": "any/annotators/dwpose/yolox_l.onnx",
"url": "https://huggingface.co/yzd-v/DWPose/resolve/main/yolox_l.onnx?download=true",
},
"dw-ll_ucoco_384.onnx": {
"local": "any/annotators/dwpose/dw-ll_ucoco_384.onnx",
"url": "https://huggingface.co/yzd-v/DWPose/resolve/main/dw-ll_ucoco_384.onnx?download=true",
},
}
config = InvokeAIAppConfig.get_config()
class Wholebody:
def __init__(self):
device = choose_torch_device()
providers = ["CUDAExecutionProvider"] if device == "cuda" else ["CPUExecutionProvider"]
DET_MODEL_PATH = pathlib.Path(config.models_path / DWPOSE_MODELS["yolox_l.onnx"]["local"])
if not DET_MODEL_PATH.exists():
download_with_progress_bar(DWPOSE_MODELS["yolox_l.onnx"]["url"], DET_MODEL_PATH)
POSE_MODEL_PATH = pathlib.Path(config.models_path / DWPOSE_MODELS["dw-ll_ucoco_384.onnx"]["local"])
if not POSE_MODEL_PATH.exists():
download_with_progress_bar(DWPOSE_MODELS["dw-ll_ucoco_384.onnx"]["url"], POSE_MODEL_PATH)
onnx_det = DET_MODEL_PATH
onnx_pose = POSE_MODEL_PATH
self.session_det = ort.InferenceSession(path_or_bytes=onnx_det, providers=providers)
self.session_pose = ort.InferenceSession(path_or_bytes=onnx_pose, providers=providers)
def __call__(self, oriImg):
det_result = inference_detector(self.session_det, oriImg)
keypoints, scores = inference_pose(self.session_pose, det_result, oriImg)
keypoints_info = np.concatenate((keypoints, scores[..., None]), axis=-1)
# compute neck joint
neck = np.mean(keypoints_info[:, [5, 6]], axis=1)
# neck score when visualizing pred
neck[:, 2:4] = np.logical_and(keypoints_info[:, 5, 2:4] > 0.3, keypoints_info[:, 6, 2:4] > 0.3).astype(int)
new_keypoints_info = np.insert(keypoints_info, 17, neck, axis=1)
mmpose_idx = [17, 6, 8, 10, 7, 9, 12, 14, 16, 13, 15, 2, 1, 4, 3]
openpose_idx = [1, 2, 3, 4, 6, 7, 8, 9, 10, 12, 13, 14, 15, 16, 17]
new_keypoints_info[:, openpose_idx] = new_keypoints_info[:, mmpose_idx]
keypoints_info = new_keypoints_info
keypoints, scores = keypoints_info[..., :2], keypoints_info[..., 2]
return keypoints, scores

View File

@ -7,10 +7,10 @@ import cv2
import numpy as np
import numpy.typing as npt
import torch
from basicsr.archs.rrdbnet_arch import RRDBNet
from cv2.typing import MatLike
from tqdm import tqdm
from invokeai.backend.image_util.basicsr.rrdbnet_arch import RRDBNet
from invokeai.backend.util.devices import choose_torch_device
"""

View File

@ -0,0 +1,281 @@
"""Utility (backend) functions used by model_install.py"""
import re
from logging import Logger
from pathlib import Path
from typing import Any, Dict, List, Optional
import omegaconf
from huggingface_hub import HfFolder
from pydantic import BaseModel, Field
from pydantic.dataclasses import dataclass
from pydantic.networks import AnyHttpUrl
from requests import HTTPError
from tqdm import tqdm
import invokeai.configs as configs
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.download import DownloadQueueService
from invokeai.app.services.events.events_base import EventServiceBase
from invokeai.app.services.image_files.image_files_disk import DiskImageFileStorage
from invokeai.app.services.model_install import (
HFModelSource,
LocalModelSource,
ModelInstallService,
ModelInstallServiceBase,
ModelSource,
URLModelSource,
)
from invokeai.app.services.model_records import ModelRecordServiceBase, ModelRecordServiceSQL
from invokeai.app.services.shared.sqlite.sqlite_util import init_db
from invokeai.backend.model_manager import (
BaseModelType,
InvalidModelConfigException,
ModelType,
)
from invokeai.backend.model_manager.metadata import UnknownMetadataException
from invokeai.backend.util.logging import InvokeAILogger
# name of the starter models file
INITIAL_MODELS = "INITIAL_MODELS2.yaml"
def initialize_record_store(app_config: InvokeAIAppConfig) -> ModelRecordServiceBase:
"""Return an initialized ModelConfigRecordServiceBase object."""
logger = InvokeAILogger.get_logger(config=app_config)
image_files = DiskImageFileStorage(f"{app_config.output_path}/images")
db = init_db(config=app_config, logger=logger, image_files=image_files)
obj: ModelRecordServiceBase = ModelRecordServiceSQL(db)
return obj
def initialize_installer(
app_config: InvokeAIAppConfig, event_bus: Optional[EventServiceBase] = None
) -> ModelInstallServiceBase:
"""Return an initialized ModelInstallService object."""
record_store = initialize_record_store(app_config)
metadata_store = record_store.metadata_store
download_queue = DownloadQueueService()
installer = ModelInstallService(
app_config=app_config,
record_store=record_store,
metadata_store=metadata_store,
download_queue=download_queue,
event_bus=event_bus,
)
download_queue.start()
installer.start()
return installer
class UnifiedModelInfo(BaseModel):
"""Catchall class for information in INITIAL_MODELS2.yaml."""
name: Optional[str] = None
base: Optional[BaseModelType] = None
type: Optional[ModelType] = None
source: Optional[str] = None
subfolder: Optional[str] = None
description: Optional[str] = None
recommended: bool = False
installed: bool = False
default: bool = False
requires: List[str] = Field(default_factory=list)
@dataclass
class InstallSelections:
"""Lists of models to install and remove."""
install_models: List[UnifiedModelInfo] = Field(default_factory=list)
remove_models: List[str] = Field(default_factory=list)
class TqdmEventService(EventServiceBase):
"""An event service to track downloads."""
def __init__(self) -> None:
"""Create a new TqdmEventService object."""
super().__init__()
self._bars: Dict[str, tqdm] = {}
self._last: Dict[str, int] = {}
def dispatch(self, event_name: str, payload: Any) -> None:
"""Dispatch an event by appending it to self.events."""
if payload["event"] == "model_install_downloading":
data = payload["data"]
dest = data["local_path"]
total_bytes = data["total_bytes"]
bytes = data["bytes"]
if dest not in self._bars:
self._bars[dest] = tqdm(desc=Path(dest).name, initial=0, total=total_bytes, unit="iB", unit_scale=True)
self._last[dest] = 0
self._bars[dest].update(bytes - self._last[dest])
self._last[dest] = bytes
class InstallHelper(object):
"""Capture information stored jointly in INITIAL_MODELS.yaml and the installed models db."""
def __init__(self, app_config: InvokeAIAppConfig, logger: Logger):
"""Create new InstallHelper object."""
self._app_config = app_config
self.all_models: Dict[str, UnifiedModelInfo] = {}
omega = omegaconf.OmegaConf.load(Path(configs.__path__[0]) / INITIAL_MODELS)
assert isinstance(omega, omegaconf.dictconfig.DictConfig)
self._installer = initialize_installer(app_config, TqdmEventService())
self._initial_models = omega
self._installed_models: List[str] = []
self._starter_models: List[str] = []
self._default_model: Optional[str] = None
self._logger = logger
self._initialize_model_lists()
@property
def installer(self) -> ModelInstallServiceBase:
"""Return the installer object used internally."""
return self._installer
def _initialize_model_lists(self) -> None:
"""
Initialize our model slots.
Set up the following:
installed_models -- list of installed model keys
starter_models -- list of starter model keys from INITIAL_MODELS
all_models -- dict of key => UnifiedModelInfo
default_model -- key to default model
"""
# previously-installed models
for model in self._installer.record_store.all_models():
info = UnifiedModelInfo.parse_obj(model.dict())
info.installed = True
model_key = f"{model.base.value}/{model.type.value}/{model.name}"
self.all_models[model_key] = info
self._installed_models.append(model_key)
for key in self._initial_models.keys():
assert isinstance(key, str)
if key in self.all_models:
# we want to preserve the description
description = self.all_models[key].description or self._initial_models[key].get("description")
self.all_models[key].description = description
else:
base_model, model_type, model_name = key.split("/")
info = UnifiedModelInfo(
name=model_name,
type=ModelType(model_type),
base=BaseModelType(base_model),
source=self._initial_models[key].source,
description=self._initial_models[key].get("description"),
recommended=self._initial_models[key].get("recommended", False),
default=self._initial_models[key].get("default", False),
subfolder=self._initial_models[key].get("subfolder"),
requires=list(self._initial_models[key].get("requires", [])),
)
self.all_models[key] = info
if not self.default_model():
self._default_model = key
elif self._initial_models[key].get("default", False):
self._default_model = key
self._starter_models.append(key)
# previously-installed models
for model in self._installer.record_store.all_models():
info = UnifiedModelInfo.parse_obj(model.dict())
info.installed = True
model_key = f"{model.base.value}/{model.type.value}/{model.name}"
self.all_models[model_key] = info
self._installed_models.append(model_key)
def recommended_models(self) -> List[UnifiedModelInfo]:
"""List of the models recommended in INITIAL_MODELS.yaml."""
return [self._to_model(x) for x in self._starter_models if self._to_model(x).recommended]
def installed_models(self) -> List[UnifiedModelInfo]:
"""List of models already installed."""
return [self._to_model(x) for x in self._installed_models]
def starter_models(self) -> List[UnifiedModelInfo]:
"""List of starter models."""
return [self._to_model(x) for x in self._starter_models]
def default_model(self) -> Optional[UnifiedModelInfo]:
"""Return the default model."""
return self._to_model(self._default_model) if self._default_model else None
def _to_model(self, key: str) -> UnifiedModelInfo:
return self.all_models[key]
def _add_required_models(self, model_list: List[UnifiedModelInfo]) -> None:
installed = {x.source for x in self.installed_models()}
reverse_source = {x.source: x for x in self.all_models.values()}
additional_models: List[UnifiedModelInfo] = []
for model_info in model_list:
for requirement in model_info.requires:
if requirement not in installed and reverse_source.get(requirement):
additional_models.append(reverse_source[requirement])
model_list.extend(additional_models)
def _make_install_source(self, model_info: UnifiedModelInfo) -> ModelSource:
assert model_info.source
model_path_id_or_url = model_info.source.strip("\"' ")
model_path = Path(model_path_id_or_url)
if model_path.exists(): # local file on disk
return LocalModelSource(path=model_path.absolute(), inplace=True)
if re.match(r"^[^/]+/[^/]+$", model_path_id_or_url): # hugging face repo_id
return HFModelSource(
repo_id=model_path_id_or_url,
access_token=HfFolder.get_token(),
subfolder=model_info.subfolder,
)
if re.match(r"^(http|https):", model_path_id_or_url):
return URLModelSource(url=AnyHttpUrl(model_path_id_or_url))
raise ValueError(f"Unsupported model source: {model_path_id_or_url}")
def add_or_delete(self, selections: InstallSelections) -> None:
"""Add or delete selected models."""
installer = self._installer
self._add_required_models(selections.install_models)
for model in selections.install_models:
source = self._make_install_source(model)
config = (
{
"description": model.description,
"name": model.name,
}
if model.name
else None
)
try:
installer.import_model(
source=source,
config=config,
)
except (UnknownMetadataException, InvalidModelConfigException, HTTPError, OSError) as e:
self._logger.warning(f"{source}: {e}")
for model_to_remove in selections.remove_models:
parts = model_to_remove.split("/")
if len(parts) == 1:
base_model, model_type, model_name = (None, None, model_to_remove)
else:
base_model, model_type, model_name = parts
matches = installer.record_store.search_by_attr(
base_model=BaseModelType(base_model) if base_model else None,
model_type=ModelType(model_type) if model_type else None,
model_name=model_name,
)
if len(matches) > 1:
print(f"{model_to_remove} is ambiguous. Please use model_type:model_name (e.g. main:my_model) to disambiguate.")
elif not matches:
print(f"{model_to_remove}: unknown model")
else:
for m in matches:
print(f"Deleting {m.type}:{m.name}")
installer.delete(m.key)
installer.wait_for_installs()
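
The `_make_install_source` method above resolves a bare string with a three-way precedence: an existing local path wins, then an `owner/name` Hugging Face repo_id, then an http(s) URL. A standalone sketch of the same dispatch (simplified: tokens and subfolders omitted, inputs illustrative):

```py
import re
from pathlib import Path


def classify_source(raw: str) -> str:
    """Mirror the precedence used by _make_install_source."""
    candidate = raw.strip("\"' ")
    if Path(candidate).exists():
        return "local"  # would become a LocalModelSource
    if re.match(r"^[^/]+/[^/]+$", candidate):
        return "huggingface"  # would become an HFModelSource
    if re.match(r"^(http|https):", candidate):
        return "url"  # would become a URLModelSource
    raise ValueError(f"Unsupported model source: {candidate}")


assert classify_source("runwayml/stable-diffusion-v1-5") == "huggingface"
assert classify_source("https://example.com/model.safetensors") == "url"
```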

View File

@ -849,7 +849,7 @@ def migrate_if_needed(opt: Namespace, root: Path) -> bool:
# -------------------------------------
def main():
def main() -> None:
parser = argparse.ArgumentParser(description="InvokeAI model downloader")
parser.add_argument(
"--skip-sd-weights",

View File

@ -104,12 +104,14 @@ class ModelInstall(object):
prediction_type_helper: Optional[Callable[[Path], SchedulerPredictionType]] = None,
model_manager: Optional[ModelManager] = None,
access_token: Optional[str] = None,
civitai_api_key: Optional[str] = None,
):
self.config = config
self.mgr = model_manager or ModelManager(config.model_conf_path)
self.datasets = OmegaConf.load(Dataset_path)
self.prediction_helper = prediction_type_helper
self.access_token = access_token or HfFolder.get_token()
self.civitai_api_key = civitai_api_key or config.civitai_api_key
self.reverse_paths = self._reverse_paths(self.datasets)
def all_models(self) -> Dict[str, ModelLoadInfo]:
@ -326,7 +328,11 @@ class ModelInstall(object):
def _install_url(self, url: str) -> AddModelResult:
with TemporaryDirectory(dir=self.config.models_path) as staging:
location = download_with_resume(url, Path(staging))
CIVITAI_RE = r".*civitai.com.*"
civit_url = re.match(CIVITAI_RE, url, re.IGNORECASE)
location = download_with_resume(
url, Path(staging), access_token=self.civitai_api_key if civit_url else None
)
if not location:
logger.error(f"Unable to download {url}. Skipping.")
info = ModelProbe().heuristic_probe(location, self.prediction_helper)

View File

@ -42,8 +42,7 @@ from diffusers.schedulers import (
PNDMScheduler,
UnCLIPScheduler,
)
from diffusers.utils import is_accelerate_available, is_omegaconf_available
from diffusers.utils.import_utils import BACKENDS_MAPPING
from diffusers.utils import is_accelerate_available
from picklescan.scanner import scan_file_path
from transformers import (
AutoFeatureExtractor,
@ -1211,9 +1210,6 @@ def download_from_original_stable_diffusion_ckpt(
if prediction_type == "v-prediction":
prediction_type = "v_prediction"
if not is_omegaconf_available():
raise ValueError(BACKENDS_MAPPING["omegaconf"][1])
if from_safetensors:
from safetensors.torch import load_file as safe_load
@ -1647,11 +1643,6 @@ def download_controlnet_from_original_ckpt(
cross_attention_dim: Optional[bool] = None,
scan_needed: bool = False,
) -> DiffusionPipeline:
if not is_omegaconf_available():
raise ValueError(BACKENDS_MAPPING["omegaconf"][1])
from omegaconf import OmegaConf
if from_safetensors:
from safetensors import safe_open

View File

@ -287,6 +287,14 @@ class ModelCache(object):
if torch.device(source_device).type == torch.device(target_device).type:
return
if target_device.type == "cuda":
vram_device = (
target_device if target_device.index is not None else torch.device(str(target_device), index=0)
)
free_mem, _ = torch.cuda.mem_get_info(torch.device(vram_device))
if cache_entry.size > free_mem:
raise torch.cuda.OutOfMemoryError
start_model_to_time = time.time()
snapshot_before = self._capture_memory_snapshot()
cache_entry.model.to(target_device)
@ -356,6 +364,10 @@ class ModelCache(object):
self.cache.logger.debug(f"Locking {self.key} in {self.cache.execution_device}")
self.cache._print_cuda_stats()
except torch.cuda.OutOfMemoryError:
self.cache.logger.warning("Out of GPU memory encountered.")
self.cache_entry.unlock()
raise
except Exception:
self.cache_entry.unlock()
raise
@ -524,7 +536,6 @@ class ModelCache(object):
break
if not cache_entry.locked and cache_entry.loaded:
self._move_model_to_device(model_key, self.storage_device)
vram_in_use = torch.cuda.memory_allocated()
self.logger.debug(f"{(vram_in_use/GIG):.2f}GB VRAM used for models; max allowed={(reserved/GIG):.2f}GB")
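
The hunks above add a preflight check before moving a model onto a CUDA device: compare the cache entry's size against the device's free VRAM and raise `OutOfMemoryError` early, rather than failing partway through the copy. A standalone sketch of the same pattern (function name and sizes are illustrative):

```py
import torch


def ensure_vram_available(required_bytes: int, device: torch.device) -> None:
    """Raise early if a model of the given size cannot fit in free VRAM."""
    if device.type != "cuda":
        return  # the preflight only applies to CUDA devices
    # Fall back to index 0 when the device has no explicit index.
    vram_device = device if device.index is not None else torch.device("cuda", 0)
    free_mem, _total = torch.cuda.mem_get_info(vram_device)
    if required_bytes > free_mem:
        raise torch.cuda.OutOfMemoryError(
            f"Need {required_bytes / 2**30:.2f} GiB, "
            f"only {free_mem / 2**30:.2f} GiB free"
        )
```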

View File

@ -141,7 +141,7 @@ class StableDiffusionXLModel(DiffusersModel):
version=base_model,
model_config=config,
output_path=output_path,
use_safetensors=False, # corrupts sdxl models for some reason
use_safetensors=True,
**kwargs,
)
else:

View File

@ -1,10 +1,11 @@
from __future__ import annotations
from contextlib import contextmanager
from typing import List, Union
from typing import Callable, List, Union
import torch.nn as nn
from diffusers.models import AutoencoderKL, UNet2DConditionModel
from diffusers.models.autoencoders.autoencoder_kl import AutoencoderKL
from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel
def _conv_forward_asymmetric(self, input, weight, bias):
@ -26,70 +27,51 @@ def _conv_forward_asymmetric(self, input, weight, bias):
@contextmanager
def set_seamless(model: Union[UNet2DConditionModel, AutoencoderKL], seamless_axes: List[str]):
# Callable: (input: Tensor, weight: Tensor, bias: Optional[Tensor]) -> Tensor
to_restore: list[tuple[nn.Conv2d | nn.ConvTranspose2d, Callable]] = []
try:
to_restore = []
# Hard coded to skip down block layers, allowing for seamless tiling at the expense of prompt adherence
skipped_layers = 1
for m_name, m in model.named_modules():
if isinstance(model, UNet2DConditionModel):
if ".attentions." in m_name:
if not isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
continue
if isinstance(model, UNet2DConditionModel) and m_name.startswith("down_blocks.") and ".resnets." in m_name:
# down_blocks.1.resnets.1.conv1
_, block_num, _, resnet_num, submodule_name = m_name.split(".")
block_num = int(block_num)
resnet_num = int(resnet_num)
if block_num >= len(model.down_blocks) - skipped_layers:
continue
if ".resnets." in m_name:
if ".conv2" in m_name:
continue
if ".conv_shortcut" in m_name:
continue
"""
if isinstance(model, UNet2DConditionModel):
if False and ".upsamplers." in m_name:
# Skip the second resnet (could be configurable)
if resnet_num > 0:
continue
if False and ".downsamplers." in m_name:
# Skip Conv2d layers (could be configurable)
if submodule_name == "conv2":
continue
if True and ".resnets." in m_name:
if True and ".conv1" in m_name:
if False and "down_blocks" in m_name:
continue
if False and "mid_block" in m_name:
continue
if False and "up_blocks" in m_name:
continue
m.asymmetric_padding_mode = {}
m.asymmetric_padding = {}
m.asymmetric_padding_mode["x"] = "circular" if ("x" in seamless_axes) else "constant"
m.asymmetric_padding["x"] = (
m._reversed_padding_repeated_twice[0],
m._reversed_padding_repeated_twice[1],
0,
0,
)
m.asymmetric_padding_mode["y"] = "circular" if ("y" in seamless_axes) else "constant"
m.asymmetric_padding["y"] = (
0,
0,
m._reversed_padding_repeated_twice[2],
m._reversed_padding_repeated_twice[3],
)
if True and ".conv2" in m_name:
continue
if True and ".conv_shortcut" in m_name:
continue
if True and ".attentions." in m_name:
continue
if False and m_name in ["conv_in", "conv_out"]:
continue
"""
if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
m.asymmetric_padding_mode = {}
m.asymmetric_padding = {}
m.asymmetric_padding_mode["x"] = "circular" if ("x" in seamless_axes) else "constant"
m.asymmetric_padding["x"] = (
m._reversed_padding_repeated_twice[0],
m._reversed_padding_repeated_twice[1],
0,
0,
)
m.asymmetric_padding_mode["y"] = "circular" if ("y" in seamless_axes) else "constant"
m.asymmetric_padding["y"] = (
0,
0,
m._reversed_padding_repeated_twice[2],
m._reversed_padding_repeated_twice[3],
)
to_restore.append((m, m._conv_forward))
m._conv_forward = _conv_forward_asymmetric.__get__(m, nn.Conv2d)
to_restore.append((m, m._conv_forward))
m._conv_forward = _conv_forward_asymmetric.__get__(m, nn.Conv2d)
yield
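Stripped of the removed experiments, the new control flow is: visit every module, keep only Conv2d/ConvTranspose2d layers, skip the deepest skipped_layers UNet down-block resnets and everything except conv1, then swap in the asymmetric _conv_forward so each axis can pad circularly on its own. The saved (module, forward) pairs in to_restore exist so the original forwards can be put back when the context exits. A deliberately simplified sketch of the core idea (it toggles padding_mode on whole layers rather than padding each axis independently, so it approximates rather than reproduces the code above):

from contextlib import contextmanager
import torch.nn as nn

@contextmanager
def simple_seamless(module: nn.Module, seamless_axes: list):
    """Make convolutions wrap around so outputs tile seamlessly; restore on exit."""
    to_restore = []
    try:
        for m in module.modules():
            if isinstance(m, nn.Conv2d) and seamless_axes:
                to_restore.append((m, m.padding_mode))
                # "circular" padding wraps activations around the padded edge,
                # which is what makes opposite borders of the output match up.
                m.padding_mode = "circular"
        yield
    finally:
        for m, mode in to_restore:
            m.padding_mode = mode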

View File

@@ -0,0 +1,177 @@
"""
invokeai.backend.model_manager.merge exports:
merge_diffusion_models() -- combine multiple models by location and return a pipeline object
merge_diffusion_models_and_save() -- combine multiple models by record-store key and register the merged result
Copyright (c) 2023 Lincoln Stein and the InvokeAI Development Team
"""
import warnings
from enum import Enum
from pathlib import Path
from typing import Any, List, Optional, Set
import torch
from diffusers import AutoPipelineForText2Image
from diffusers import logging as dlogging
from invokeai.app.services.model_install import ModelInstallServiceBase
from invokeai.backend.util.devices import choose_torch_device, torch_dtype
from . import (
AnyModelConfig,
BaseModelType,
ModelType,
ModelVariantType,
)
from .config import MainDiffusersConfig
class MergeInterpolationMethod(str, Enum):
WeightedSum = "weighted_sum"
Sigmoid = "sigmoid"
InvSigmoid = "inv_sigmoid"
AddDifference = "add_difference"
class ModelMerger(object):
"""Wrapper class for model merge function."""
def __init__(self, installer: ModelInstallServiceBase):
"""
Initialize a ModelMerger object.
:param store: Underlying storage manager for the running process.
:param config: InvokeAIAppConfig object (if not provided, default will be selected).
"""
self._installer = installer
def merge_diffusion_models(
self,
model_paths: List[Path],
alpha: float = 0.5,
interp: Optional[MergeInterpolationMethod] = None,
force: bool = False,
variant: Optional[str] = None,
**kwargs: Any,
) -> Any: # pipe.merge is an untyped function.
"""
:param model_paths: up to three models, designated by their local paths or HuggingFace repo_ids
:param alpha: The interpolation parameter. Ranges from 0 to 1. It affects the ratio in which the checkpoints are merged. A 0.8 alpha
would mean that the first model checkpoints would affect the final result far less than an alpha of 0.2
:param interp: The interpolation method to use for the merging. Supports "sigmoid", "inv_sigmoid", "add_difference" and None.
Passing None uses the default interpolation which is weighted sum interpolation. For merging three checkpoints, only "add_difference" is supported.
:param force: Whether to ignore mismatch in model_config.json for the current models. Defaults to False.
**kwargs - the default DiffusionPipeline.get_config_dict kwargs:
cache_dir, resume_download, force_download, proxies, local_files_only, use_auth_token, revision, torch_dtype, device_map
"""
with warnings.catch_warnings():
warnings.simplefilter("ignore")
verbosity = dlogging.get_verbosity()
dlogging.set_verbosity_error()
dtype = torch.float16 if variant == "fp16" else torch_dtype(choose_torch_device())
# Note that checkpoint_merger will not work with downloaded HuggingFace fp16 models
# until upstream https://github.com/huggingface/diffusers/pull/6670 is merged and released.
pipe = AutoPipelineForText2Image.from_pretrained(
model_paths[0],
custom_pipeline="checkpoint_merger",
torch_dtype=dtype,
variant=variant,
)
merged_pipe = pipe.merge(
pretrained_model_name_or_path_list=model_paths,
alpha=alpha,
interp=interp.value if interp else None, # diffusers API treats None as "weighted sum"
force=force,
torch_dtype=dtype,
variant=variant,
**kwargs,
)
dlogging.set_verbosity(verbosity)
return merged_pipe
def merge_diffusion_models_and_save(
self,
model_keys: List[str],
merged_model_name: str,
alpha: float = 0.5,
force: bool = False,
interp: Optional[MergeInterpolationMethod] = None,
merge_dest_directory: Optional[Path] = None,
variant: Optional[str] = None,
**kwargs: Any,
) -> AnyModelConfig:
"""
:param model_keys: up to three models, designated by their InvokeAI record-store keys
:param merged_model_name: name for new model
:param alpha: The interpolation parameter. Ranges from 0 to 1. It affects the ratio in which the checkpoints are merged. A 0.8 alpha
would mean that the first model checkpoints would affect the final result far less than an alpha of 0.2
:param interp: The interpolation method to use for the merging. Supports "weighted_average", "sigmoid", "inv_sigmoid", "add_difference" and None.
Passing None uses the default interpolation which is weighted sum interpolation. For merging three checkpoints, only "add_difference" is supported. Add_difference is A+(B-C).
:param force: Whether to ignore mismatch in model_config.json for the current models. Defaults to False.
:param merge_dest_directory: Save the merged model to the designated directory (with 'merged_model_name' appended)
**kwargs - the default DiffusionPipeline.get_config_dict kwargs:
cache_dir, resume_download, force_download, proxies, local_files_only, use_auth_token, revision, torch_dtype, device_map
"""
model_paths: List[Path] = []
model_names: List[str] = []
config = self._installer.app_config
store = self._installer.record_store
base_models: Set[BaseModelType] = set()
vae = None
variant = None if self._installer.app_config.full_precision else "fp16"
assert (
len(model_keys) <= 2 or interp == MergeInterpolationMethod.AddDifference
), "When merging three models, only the 'add_difference' merge method is supported"
for key in model_keys:
info = store.get_model(key)
model_names.append(info.name)
assert isinstance(
info, MainDiffusersConfig
), f"{info.name} ({info.key}) is not a diffusers model. It must be optimized before merging"
assert info.variant == ModelVariantType(
"normal"
), f"{info.name} ({info.key}) is a {info.variant} model, which cannot currently be merged"
# pick up the first model's vae
if key == model_keys[0]:
vae = info.vae
# tally base models used
base_models.add(info.base)
model_paths.extend([config.models_path / info.path])
assert len(base_models) == 1, f"All models to merge must have same base model, but found bases {base_models}"
base_model = base_models.pop()
merge_method = None if interp == "weighted_sum" else MergeInterpolationMethod(interp)
merged_pipe = self.merge_diffusion_models(model_paths, alpha, merge_method, force, variant=variant, **kwargs)
dump_path = (
Path(merge_dest_directory)
if merge_dest_directory
else config.models_path / base_model.value / ModelType.Main.value
)
dump_path.mkdir(parents=True, exist_ok=True)
dump_path = dump_path / merged_model_name
dtype = torch.float16 if variant == "fp16" else torch_dtype(choose_torch_device())
merged_pipe.save_pretrained(dump_path.as_posix(), safe_serialization=True, torch_dtype=dtype, variant=variant)
# register model and get its unique key
key = self._installer.register_path(dump_path)
# update model's config
model_config = self._installer.record_store.get_model(key)
model_config.update(
{
"name": merged_model_name,
"description": f"Merge of models {', '.join(model_names)}",
"vae": vae,
}
)
self._installer.record_store.update_model(key, model_config)
return model_config
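A hypothetical usage sketch for the new module; installer is assumed to be an already-initialized ModelInstallServiceBase, and the model keys are placeholders:

merger = ModelMerger(installer)
merged = merger.merge_diffusion_models_and_save(
    model_keys=["<key-of-model-A>", "<key-of-model-B>"],
    merged_model_name="modelA+modelB",
    alpha=0.5,  # 0.5 weights both parents equally
    interp=MergeInterpolationMethod.WeightedSum,
)
print(merged.name)  # the merged model is now registered with the installer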

View File

@@ -170,6 +170,8 @@ class CivitaiMetadataFetch(ModelMetadataFetchBase):
if model_id is None:
version_url = CIVITAI_VERSION_ENDPOINT + str(version_id)
version = self._requests.get(version_url).json()
if error := version.get("error"):
raise UnknownMetadataException(error)
model_id = version["modelId"]
model_url = CIVITAI_MODEL_ENDPOINT + str(model_id)

View File

@@ -12,7 +12,7 @@ import psutil
import torch
from compel.cross_attention_control import Arguments
from diffusers.models.attention_processor import Attention, AttentionProcessor, AttnProcessor, SlicedAttnProcessor
from diffusers.models.unet_2d_condition import UNet2DConditionModel
from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel
from torch import nn
import invokeai.backend.util.logging as logger

View File

@@ -11,6 +11,7 @@ import logging
import math
import os
import random
from argparse import Namespace
from pathlib import Path
from typing import Optional
@@ -30,8 +31,6 @@ from diffusers.optimization import get_scheduler
from diffusers.utils import check_min_version
from diffusers.utils.import_utils import is_xformers_available
from huggingface_hub import HfFolder, Repository, whoami
# TODO: remove and import from diffusers.utils when the new version of diffusers is released
from packaging import version
from PIL import Image
from torch.utils.data import Dataset
@@ -41,8 +40,8 @@ from transformers import CLIPTextModel, CLIPTokenizer
# invokeai stuff
from invokeai.app.services.config import InvokeAIAppConfig, PagingArgumentParser
from invokeai.app.services.model_manager import ModelManagerService
from invokeai.backend.model_management.models import SubModelType
from invokeai.backend.install.install_helper import initialize_record_store
from invokeai.backend.model_manager import BaseModelType, ModelType
if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"):
PIL_INTERPOLATION = {
@@ -77,7 +76,7 @@ def save_progress(text_encoder, placeholder_token_id, accelerator, placeholder_t
torch.save(learned_embeds_dict, save_path)
def parse_args():
def parse_args() -> Namespace:
config = InvokeAIAppConfig.get_config()
parser = PagingArgumentParser(description="Textual inversion training")
general_group = parser.add_argument_group("General")
@@ -444,7 +443,7 @@ class TextualInversionDataset(Dataset):
self.templates = imagenet_style_templates_small if learnable_property == "style" else imagenet_templates_small
self.flip_transform = transforms.RandomHorizontalFlip(p=self.flip_p)
def __len__(self):
def __len__(self) -> int:
return self._length
def __getitem__(self, i):
@@ -509,11 +508,10 @@ def do_textual_inversion_training(
initializer_token: str,
save_steps: int = 500,
only_save_embeds: bool = False,
revision: str = None,
tokenizer_name: str = None,
tokenizer_name: Optional[str] = None,
learnable_property: str = "object",
repeats: int = 100,
seed: int = None,
seed: Optional[int] = None,
resolution: int = 512,
center_crop: bool = False,
train_batch_size: int = 16,
@@ -530,18 +528,18 @@ def do_textual_inversion_training(
adam_weight_decay: float = 1e-02,
adam_epsilon: float = 1e-08,
push_to_hub: bool = False,
hub_token: str = None,
hub_token: Optional[str] = None,
logging_dir: Path = Path("logs"),
mixed_precision: str = "fp16",
allow_tf32: bool = False,
report_to: str = "tensorboard",
local_rank: int = -1,
checkpointing_steps: int = 500,
resume_from_checkpoint: Path = None,
resume_from_checkpoint: Optional[Path] = None,
enable_xformers_memory_efficient_attention: bool = False,
hub_model_id: str = None,
hub_model_id: Optional[str] = None,
**kwargs,
):
) -> None:
assert model, "Please specify a base model with --model"
assert train_data_dir, "Please specify a directory containing the training images using --train_data_dir"
assert placeholder_token, "Please specify a trigger term using --placeholder_token"
@@ -564,8 +562,6 @@ def do_textual_inversion_training(
project_config=accelerator_config,
)
model_manager = ModelManagerService(config, logger)
# Make one log on every process with the configuration for debugging.
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
@@ -603,44 +599,37 @@ def do_textual_inversion_training(
elif output_dir is not None:
os.makedirs(output_dir, exist_ok=True)
known_models = model_manager.model_names()
model_name = model.split("/")[-1]
model_meta = next((mm for mm in known_models if mm[0].endswith(model_name)), None)
assert model_meta is not None, f"Unknown model: {model}"
model_info = model_manager.model_info(*model_meta)
assert model_info["model_format"] == "diffusers", "This script only works with models of type 'diffusers'"
tokenizer_info = model_manager.get_model(*model_meta, submodel=SubModelType.Tokenizer)
noise_scheduler_info = model_manager.get_model(*model_meta, submodel=SubModelType.Scheduler)
text_encoder_info = model_manager.get_model(*model_meta, submodel=SubModelType.TextEncoder)
vae_info = model_manager.get_model(*model_meta, submodel=SubModelType.Vae)
unet_info = model_manager.get_model(*model_meta, submodel=SubModelType.UNet)
model_records = initialize_record_store(config)
base, type, name = model.split("/") # note frontend still returns old-style keys
try:
model_config = model_records.search_by_attr(
model_name=name, model_type=ModelType(type), base_model=BaseModelType(base)
)[0]
except IndexError:
raise Exception(f"Unknown model {model}")
model_path = config.models_path / model_config.path
pipeline_args = {"local_files_only": True}
if tokenizer_name:
tokenizer = CLIPTokenizer.from_pretrained(tokenizer_name, **pipeline_args)
else:
tokenizer = CLIPTokenizer.from_pretrained(tokenizer_info.location, subfolder="tokenizer", **pipeline_args)
tokenizer = CLIPTokenizer.from_pretrained(model_path, subfolder="tokenizer", **pipeline_args)
# Load scheduler and models
noise_scheduler = DDPMScheduler.from_pretrained(
noise_scheduler_info.location, subfolder="scheduler", **pipeline_args
)
noise_scheduler = DDPMScheduler.from_pretrained(model_path, subfolder="scheduler", **pipeline_args)
text_encoder = CLIPTextModel.from_pretrained(
text_encoder_info.location,
model_path,
subfolder="text_encoder",
revision=revision,
**pipeline_args,
)
vae = AutoencoderKL.from_pretrained(
vae_info.location,
model_path,
subfolder="vae",
revision=revision,
**pipeline_args,
)
unet = UNet2DConditionModel.from_pretrained(
unet_info.location,
model_path,
subfolder="unet",
revision=revision,
**pipeline_args,
)
@@ -728,7 +717,7 @@ def do_textual_inversion_training(
max_train_steps = num_train_epochs * num_update_steps_per_epoch
overrode_max_train_steps = True
lr_scheduler = get_scheduler(
scheduler = get_scheduler(
lr_scheduler,
optimizer=optimizer,
num_warmup_steps=lr_warmup_steps * gradient_accumulation_steps,
@@ -737,7 +726,7 @@ def do_textual_inversion_training(
# Prepare everything with our `accelerator`.
text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
text_encoder, optimizer, train_dataloader, lr_scheduler
text_encoder, optimizer, train_dataloader, scheduler
)
# For mixed precision training we cast the unet and vae weights to half-precision
@@ -863,7 +852,7 @@ def do_textual_inversion_training(
accelerator.backward(loss)
optimizer.step()
lr_scheduler.step()
scheduler.step()
optimizer.zero_grad()
# Let's make sure we don't update any embedding weights besides the newly added token
@@ -893,7 +882,7 @@ def do_textual_inversion_training(
accelerator.save_state(save_path)
logger.info(f"Saved state to {save_path}")
logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]}
logs = {"loss": loss.detach().item(), "lr": scheduler.get_last_lr()[0]}
progress_bar.set_postfix(**logs)
accelerator.log(logs, step=global_step)
@@ -910,7 +899,7 @@ def do_textual_inversion_training(
save_full_model = not only_save_embeds
if save_full_model:
pipeline = StableDiffusionPipeline.from_pretrained(
unet_info.location,
model_path,
text_encoder=accelerator.unwrap_model(text_encoder),
vae=vae,
unet=unet,
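The net effect of this refactor: the old-style "base/type/name" key still sent by the frontend is resolved through the new record store to a single local diffusers directory, and every submodel (tokenizer, scheduler, text encoder, VAE, UNet) loads from that path via a subfolder argument. A minimal sketch, with record_store and app_config standing in for the initialized services:

base, type_, name = "sd-1/main/stable-diffusion-v1-5".split("/")  # example key
configs = record_store.search_by_attr(
    model_name=name, model_type=ModelType(type_), base_model=BaseModelType(base)
)
if not configs:
    raise Exception(f"Unknown model {name}")
model_path = app_config.models_path / configs[0].path
# Each submodel then loads from a subfolder of that directory, e.g.:
# CLIPTokenizer.from_pretrained(model_path, subfolder="tokenizer", local_files_only=True)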

View File

@@ -3,7 +3,7 @@ from typing import Any, Dict, List, Optional, Tuple, Union
import diffusers
import torch
from diffusers.configuration_utils import ConfigMixin, register_to_config
from diffusers.loaders import FromOriginalControlnetMixin
from diffusers.loaders import FromOriginalControlNetMixin
from diffusers.models.attention_processor import AttentionProcessor, AttnProcessor
from diffusers.models.controlnet import ControlNetConditioningEmbedding, ControlNetOutput, zero_module
from diffusers.models.embeddings import (
@@ -14,8 +14,13 @@ from diffusers.models.embeddings import (
Timesteps,
)
from diffusers.models.modeling_utils import ModelMixin
from diffusers.models.unet_2d_blocks import CrossAttnDownBlock2D, DownBlock2D, UNetMidBlock2DCrossAttn, get_down_block
from diffusers.models.unet_2d_condition import UNet2DConditionModel
from diffusers.models.unets.unet_2d_blocks import (
CrossAttnDownBlock2D,
DownBlock2D,
UNetMidBlock2DCrossAttn,
get_down_block,
)
from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel
from torch import nn
from invokeai.backend.util.logging import InvokeAILogger
@@ -27,7 +32,7 @@ from invokeai.backend.util.logging import InvokeAILogger
logger = InvokeAILogger.get_logger(__name__)
class ControlNetModel(ModelMixin, ConfigMixin, FromOriginalControlnetMixin):
class ControlNetModel(ModelMixin, ConfigMixin, FromOriginalControlNetMixin):
"""
A ControlNet model.
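Both renames in this file track upstream diffusers reorganizations: the UNet building blocks moved into a unets subpackage, and FromOriginalControlnetMixin was re-cased to FromOriginalControlNetMixin. Code that must run against both old and new diffusers releases could hedge with a fallback import, sketched here (the exact version boundary is not pinned by this diff):

try:  # newer diffusers: UNet modules live under diffusers.models.unets
    from diffusers.models.unets.unet_2d_condition import UNet2DConditionModel
except ImportError:  # older releases kept them directly under diffusers.models
    from diffusers.models.unet_2d_condition import UNet2DConditionModel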

View File

@@ -286,7 +286,7 @@ def download_with_resume(url: str, dest: Path, access_token: str = None) -> Path
open_mode = "wb"
exist_size = 0
resp = requests.get(url, header, stream=True)
resp = requests.get(url, headers=header, stream=True, allow_redirects=True)
content_length = int(resp.headers.get("content-length", 0))
if dest.is_dir():
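The fix passes the header dict by keyword (requests.get's second positional parameter is params, so the old call sent the headers as query parameters instead) and explicitly follows redirects. For context, a compact sketch of the resume pattern the surrounding function implements; names and chunk size are illustrative:

import requests
from pathlib import Path

def fetch_with_resume(url: str, dest: Path) -> None:
    exist_size = dest.stat().st_size if dest.exists() else 0
    headers = {"Range": f"bytes={exist_size}-"} if exist_size else {}
    resp = requests.get(url, headers=headers, stream=True, allow_redirects=True)
    # HTTP 206 (Partial Content) means the server honored the Range header.
    mode = "ab" if resp.status_code == 206 else "wb"
    with open(dest, mode) as f:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            f.write(chunk)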

View File

@@ -0,0 +1,157 @@
# This file predefines a few models that the user may want to install.
sd-1/main/stable-diffusion-v1-5:
description: Stable Diffusion version 1.5 diffusers model (4.27 GB)
source: runwayml/stable-diffusion-v1-5
recommended: True
default: True
sd-1/main/stable-diffusion-v1-5-inpainting:
description: RunwayML SD 1.5 model optimized for inpainting, diffusers version (4.27 GB)
source: runwayml/stable-diffusion-inpainting
recommended: True
sd-2/main/stable-diffusion-2-1:
description: Stable Diffusion version 2.1 diffusers model, trained on 768 pixel images (5.21 GB)
source: stabilityai/stable-diffusion-2-1
recommended: False
sd-2/main/stable-diffusion-2-inpainting:
description: Stable Diffusion version 2.0 inpainting model (5.21 GB)
source: stabilityai/stable-diffusion-2-inpainting
recommended: False
sdxl/main/stable-diffusion-xl-base-1-0:
description: Stable Diffusion XL base model (12 GB)
source: stabilityai/stable-diffusion-xl-base-1.0
recommended: True
sdxl-refiner/main/stable-diffusion-xl-refiner-1-0:
description: Stable Diffusion XL refiner model (12 GB)
source: stabilityai/stable-diffusion-xl-refiner-1.0
recommended: False
sdxl/vae/sdxl-vae-fp16-fix:
description: Version of the SDXL-1.0 VAE that works in half precision mode
source: madebyollin/sdxl-vae-fp16-fix
recommended: True
sd-1/main/Analog-Diffusion:
description: An SD-1.5 model trained on diverse analog photographs (2.13 GB)
source: wavymulder/Analog-Diffusion
recommended: False
sd-1/main/Deliberate:
description: Versatile model that produces detailed images up to 768px (4.27 GB)
source: XpucT/Deliberate
recommended: False
sd-1/main/Dungeons-and-Diffusion:
description: Dungeons & Dragons characters (2.13 GB)
source: 0xJustin/Dungeons-and-Diffusion
recommended: False
sd-1/main/dreamlike-photoreal-2:
description: A photorealistic model trained on 768 pixel images based on SD 1.5 (2.13 GB)
source: dreamlike-art/dreamlike-photoreal-2.0
recommended: False
sd-1/main/Inkpunk-Diffusion:
description: Stylized illustrations inspired by Gorillaz, FLCL and Shinkawa; prompt with "nvinkpunk" (4.27 GB)
source: Envvi/Inkpunk-Diffusion
recommended: False
sd-1/main/openjourney:
description: An SD 1.5 model fine tuned on Midjourney; prompt with "mdjrny-v4 style" (2.13 GB)
source: prompthero/openjourney
recommended: False
sd-1/main/seek.art_MEGA:
source: coreco/seek.art_MEGA
description: A general use SD-1.5 "anything" model that supports multiple styles (2.1 GB)
recommended: False
sd-1/main/trinart_stable_diffusion_v2:
description: An SD-1.5 model finetuned with ~40K assorted high resolution manga/anime-style images (2.13 GB)
source: naclbit/trinart_stable_diffusion_v2
recommended: False
sd-1/controlnet/qrcode_monster:
source: monster-labs/control_v1p_sd15_qrcode_monster
subfolder: v2
sd-1/controlnet/canny:
source: lllyasviel/control_v11p_sd15_canny
recommended: True
sd-1/controlnet/inpaint:
source: lllyasviel/control_v11p_sd15_inpaint
sd-1/controlnet/mlsd:
source: lllyasviel/control_v11p_sd15_mlsd
sd-1/controlnet/depth:
source: lllyasviel/control_v11f1p_sd15_depth
recommended: True
sd-1/controlnet/normal_bae:
source: lllyasviel/control_v11p_sd15_normalbae
sd-1/controlnet/seg:
source: lllyasviel/control_v11p_sd15_seg
sd-1/controlnet/lineart:
source: lllyasviel/control_v11p_sd15_lineart
recommended: True
sd-1/controlnet/lineart_anime:
source: lllyasviel/control_v11p_sd15s2_lineart_anime
sd-1/controlnet/openpose:
source: lllyasviel/control_v11p_sd15_openpose
recommended: True
sd-1/controlnet/scribble:
source: lllyasviel/control_v11p_sd15_scribble
recommended: False
sd-1/controlnet/softedge:
source: lllyasviel/control_v11p_sd15_softedge
sd-1/controlnet/shuffle:
source: lllyasviel/control_v11e_sd15_shuffle
sd-1/controlnet/tile:
source: lllyasviel/control_v11f1e_sd15_tile
sd-1/controlnet/ip2p:
source: lllyasviel/control_v11e_sd15_ip2p
sd-1/t2i_adapter/canny-sd15:
source: TencentARC/t2iadapter_canny_sd15v2
sd-1/t2i_adapter/sketch-sd15:
source: TencentARC/t2iadapter_sketch_sd15v2
sd-1/t2i_adapter/depth-sd15:
source: TencentARC/t2iadapter_depth_sd15v2
sd-1/t2i_adapter/zoedepth-sd15:
source: TencentARC/t2iadapter_zoedepth_sd15v1
sdxl/t2i_adapter/canny-sdxl:
source: TencentARC/t2i-adapter-canny-sdxl-1.0
sdxl/t2i_adapter/zoedepth-sdxl:
source: TencentARC/t2i-adapter-depth-zoe-sdxl-1.0
sdxl/t2i_adapter/lineart-sdxl:
source: TencentARC/t2i-adapter-lineart-sdxl-1.0
sdxl/t2i_adapter/sketch-sdxl:
source: TencentARC/t2i-adapter-sketch-sdxl-1.0
sd-1/embedding/EasyNegative:
source: https://huggingface.co/embed/EasyNegative/resolve/main/EasyNegative.safetensors
recommended: True
description: A textual inversion to use in the negative prompt to reduce bad anatomy
sd-1/lora/FlatColor:
source: https://civitai.com/models/6433/loraflatcolor
recommended: True
description: A LoRA that generates scenery using solid blocks of color
sd-1/lora/Ink scenery:
source: https://civitai.com/api/download/models/83390
description: Generate India ink-like landscapes
sd-1/ip_adapter/ip_adapter_sd15:
source: InvokeAI/ip_adapter_sd15
recommended: True
requires:
- InvokeAI/ip_adapter_sd_image_encoder
description: IP-Adapter for SD 1.5 models
sd-1/ip_adapter/ip_adapter_plus_sd15:
source: InvokeAI/ip_adapter_plus_sd15
recommended: False
requires:
- InvokeAI/ip_adapter_sd_image_encoder
description: Refined IP-Adapter for SD 1.5 models
sd-1/ip_adapter/ip_adapter_plus_face_sd15:
source: InvokeAI/ip_adapter_plus_face_sd15
recommended: False
requires:
- InvokeAI/ip_adapter_sd_image_encoder
description: Refined IP-Adapter for SD 1.5 models, adapted for faces
sdxl/ip_adapter/ip_adapter_sdxl:
source: InvokeAI/ip_adapter_sdxl
recommended: False
requires:
- InvokeAI/ip_adapter_sdxl_image_encoder
description: IP-Adapter for SDXL models
any/clip_vision/ip_adapter_sd_image_encoder:
source: InvokeAI/ip_adapter_sd_image_encoder
recommended: False
description: Required model for using IP-Adapters with SD-1/2 models
any/clip_vision/ip_adapter_sdxl_image_encoder:
source: InvokeAI/ip_adapter_sdxl_image_encoder
recommended: False
description: Required model for using IP-Adapters with SDXL models
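Each top-level key in this file packs base/type/name into a single string, and the value supplies the download source plus optional flags (recommended, default, subfolder, requires). A hedged sketch of how a loader might walk it; the file name and surrounding services are assumptions:

from omegaconf import OmegaConf

initial_models = OmegaConf.load("INITIAL_MODELS.yaml")  # path is an assumption
for key, spec in initial_models.items():
    base, model_type, name = str(key).split("/")
    if spec.get("recommended", False):
        print(f"recommended: {name} ({base}/{model_type}) from {spec.source}")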

View File

@@ -2,3 +2,5 @@
Wrapper for invokeai.backend.install.invokeai_configure
"""
from ...backend.install.invokeai_configure import main as invokeai_configure # noqa: F401
__all__ = ["invokeai_configure"]

View File

@@ -5,14 +5,14 @@ pip install <path_to_git_source>.
import os
import platform
from distutils.version import LooseVersion
from importlib.metadata import PackageNotFoundError, distribution, distributions
import pkg_resources
import psutil
import requests
from rich import box, print
from rich.console import Console, group
from rich.panel import Panel
from rich.prompt import Prompt
from rich.prompt import Confirm, Prompt
from rich.style import Style
from invokeai.version import __version__
@@ -61,6 +61,65 @@ def get_pypi_versions():
return latest_version, latest_release_candidate, versions
def get_torch_extra_index_url() -> str | None:
"""
Determine torch wheel source URL and optional modules based on the user's OS.
"""
resolved_url = None
# In all other cases (like MacOS (MPS) or Linux+CUDA), there is no need to specify the extra index URL.
torch_package_urls = {
"windows_cuda": "https://download.pytorch.org/whl/cu121",
"linux_rocm": "https://download.pytorch.org/whl/rocm5.6",
"linux_cpu": "https://download.pytorch.org/whl/cpu",
}
nvidia_packages_present = (
len([d.metadata["Name"] for d in distributions() if d.metadata["Name"].startswith("nvidia")]) > 0
)
device = "cuda" if nvidia_packages_present else None
manual_gpu_selection_prompt = (
"[bold]We tried and failed to guess your GPU capabilities[/] :thinking_face:. Please select the GPU type:"
)
if OS == "Linux":
if not device:
# do we even need to offer a CPU-only install option?
print(manual_gpu_selection_prompt)
print("1: NVIDIA (CUDA)")
print("2: AMD (ROCm)")
print("3: No GPU - CPU only")
answer = Prompt.ask("Choice:", choices=["1", "2", "3"], default="1")
match answer:
case "1":
device = "cuda"
case "2":
device = "rocm"
case "3":
device = "cpu"
if device != "cuda":
resolved_url = torch_package_urls[f"linux_{device}"]
if OS == "Windows":
if not device:
print(manual_gpu_selection_prompt)
print("1: NVIDIA (CUDA)")
print("2: No GPU - CPU only")
answer = Prompt.ask("Your choice:", choices=["1", "2"], default="1")
match answer:
case "1":
device = "cuda"
case "2":
device = "cpu"
if device == "cuda":
resolved_url = torch_package_urls[f"windows_{device}"]
return resolved_url
def welcome(latest_release: str, latest_prerelease: str):
@group()
def text():
@@ -89,12 +148,11 @@ def welcome(latest_release: str, latest_prerelease: str):
def get_extras():
extras = ""
try:
_ = pkg_resources.get_distribution("xformers")
distribution("xformers")
extras = "[xformers]"
except pkg_resources.DistributionNotFound:
pass
except PackageNotFoundError:
extras = ""
return extras
@@ -125,8 +183,22 @@ def main():
extras = get_extras()
console.line()
force_reinstall = Confirm.ask(
"[bold]Force reinstallation of all dependencies?[/] This [i]may[/] help fix a broken upgrade, but is usually not necessary.",
default=False,
)
console.line()
flags = []
if (index_url := get_torch_extra_index_url()) is not None:
flags.append(f"--extra-index-url {index_url}")
if force_reinstall:
flags.append("--force-reinstall")
flags = " ".join(flags)
print(f":crossed_fingers: Upgrading to [yellow]{release}[/yellow]")
cmd = f'pip install "invokeai{extras}=={release}" --use-pep517 --upgrade'
cmd = f'pip install "invokeai{extras}=={release}" --use-pep517 --upgrade {flags}'
print("")
print("")

View File

@@ -0,0 +1,645 @@
#!/usr/bin/env python
# Copyright (c) 2022 Lincoln D. Stein (https://github.com/lstein)
# Before running stable-diffusion on an internet-isolated machine,
# run this script from one with internet connectivity. The
# two machines must share a common .cache directory.
"""
This is the npyscreen frontend to the model installation application.
It is currently named model_install2.py, but will ultimately replace model_install.py.
"""
import argparse
import curses
import sys
import traceback
import warnings
from argparse import Namespace
from shutil import get_terminal_size
from typing import Any, Dict, List, Optional, Set
import npyscreen
import torch
from npyscreen import widget
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.model_install import ModelInstallService
from invokeai.backend.install.install_helper import InstallHelper, InstallSelections, UnifiedModelInfo
from invokeai.backend.model_manager import ModelType
from invokeai.backend.util import choose_precision, choose_torch_device
from invokeai.backend.util.logging import InvokeAILogger
from invokeai.frontend.install.widgets import (
MIN_COLS,
MIN_LINES,
CenteredTitleText,
CyclingForm,
MultiSelectColumns,
SingleSelectColumns,
TextBox,
WindowTooSmallException,
set_min_terminal_size,
)
warnings.filterwarnings("ignore", category=UserWarning) # noqa: E402
config = InvokeAIAppConfig.get_config()
logger = InvokeAILogger.get_logger("ModelInstallService")
logger.setLevel("WARNING")
# logger.setLevel('DEBUG')
# build a table mapping all non-printable characters to None
# for stripping control characters
# from https://stackoverflow.com/questions/92438/stripping-non-printable-characters-from-a-string-in-python
NOPRINT_TRANS_TABLE = {i: None for i in range(0, sys.maxunicode + 1) if not chr(i).isprintable()}
# maximum number of installed models we can display before overflowing vertically
MAX_OTHER_MODELS = 72
def make_printable(s: str) -> str:
"""Replace non-printable characters in a string."""
return s.translate(NOPRINT_TRANS_TABLE)
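Because str.translate deletes any character mapped to None, the table strips control characters while leaving printable text, including non-ASCII, untouched:

# "\x07" (BEL) and "\x1b" (ESC) are non-printable and are dropped; the
# printable bracket sequence and the accented character survive.
assert make_printable("abc\x07dé\x1b[31mf") == "abcdé[31mf"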
class addModelsForm(CyclingForm, npyscreen.FormMultiPage):
"""Main form for interactive TUI."""
# for responsive resizing set to False, but this seems to cause a crash!
FIX_MINIMUM_SIZE_WHEN_CREATED = True
# for persistence
current_tab = 0
def __init__(self, parentApp: npyscreen.NPSAppManaged, name: str, multipage: bool = False, **keywords: Any):
self.multipage = multipage
self.subprocess = None
super().__init__(parentApp=parentApp, name=name, **keywords)
def create(self) -> None:
self.installer = self.parentApp.install_helper.installer
self.model_labels = self._get_model_labels()
self.keypress_timeout = 10
self.counter = 0
self.subprocess_connection = None
window_width, window_height = get_terminal_size()
# npyscreen has no typing hints
self.nextrely -= 1 # type: ignore
self.add_widget_intelligent(
npyscreen.FixedText,
value="Use ctrl-N and ctrl-P to move to the <N>ext and <P>revious fields. Cursor keys navigate, and <space> selects.",
editable=False,
color="CAUTION",
)
self.nextrely += 1 # type: ignore
self.tabs = self.add_widget_intelligent(
SingleSelectColumns,
values=[
"STARTERS",
"MAINS",
"CONTROLNETS",
"T2I-ADAPTERS",
"IP-ADAPTERS",
"LORAS",
"TI EMBEDDINGS",
],
value=[self.current_tab],
columns=7,
max_height=2,
relx=8,
scroll_exit=True,
)
self.tabs.on_changed = self._toggle_tables
top_of_table = self.nextrely # type: ignore
self.starter_pipelines = self.add_starter_pipelines()
bottom_of_table = self.nextrely # type: ignore
self.nextrely = top_of_table
self.pipeline_models = self.add_pipeline_widgets(
model_type=ModelType.Main, window_width=window_width, exclude=self.starter_models
)
# self.pipeline_models['autoload_pending'] = True
bottom_of_table = max(bottom_of_table, self.nextrely)
self.nextrely = top_of_table
self.controlnet_models = self.add_model_widgets(
model_type=ModelType.ControlNet,
window_width=window_width,
)
bottom_of_table = max(bottom_of_table, self.nextrely)
self.nextrely = top_of_table
self.t2i_models = self.add_model_widgets(
model_type=ModelType.T2IAdapter,
window_width=window_width,
)
bottom_of_table = max(bottom_of_table, self.nextrely)
self.nextrely = top_of_table
self.ipadapter_models = self.add_model_widgets(
model_type=ModelType.IPAdapter,
window_width=window_width,
)
bottom_of_table = max(bottom_of_table, self.nextrely)
self.nextrely = top_of_table
self.lora_models = self.add_model_widgets(
model_type=ModelType.Lora,
window_width=window_width,
)
bottom_of_table = max(bottom_of_table, self.nextrely)
self.nextrely = top_of_table
self.ti_models = self.add_model_widgets(
model_type=ModelType.TextualInversion,
window_width=window_width,
)
bottom_of_table = max(bottom_of_table, self.nextrely)
self.nextrely = bottom_of_table + 1
self.nextrely += 1
back_label = "BACK"
cancel_label = "CANCEL"
current_position = self.nextrely
if self.multipage:
self.back_button = self.add_widget_intelligent(
npyscreen.ButtonPress,
name=back_label,
when_pressed_function=self.on_back,
)
else:
self.nextrely = current_position
self.cancel_button = self.add_widget_intelligent(
npyscreen.ButtonPress, name=cancel_label, when_pressed_function=self.on_cancel
)
self.nextrely = current_position
label = "APPLY CHANGES"
self.nextrely = current_position
self.done = self.add_widget_intelligent(
npyscreen.ButtonPress,
name=label,
relx=window_width - len(label) - 15,
when_pressed_function=self.on_done,
)
# This restores the selected page on return from an installation
for _i in range(1, self.current_tab + 1):
self.tabs.h_cursor_line_down(1)
self._toggle_tables([self.current_tab])
############# diffusers tab ##########
def add_starter_pipelines(self) -> dict[str, npyscreen.widget]:
"""Add widgets responsible for selecting diffusers models"""
widgets: Dict[str, npyscreen.widget] = {}
all_models = self.all_models # master dict of all models, indexed by key
model_list = [x for x in self.starter_models if all_models[x].type in ["main", "vae"]]
model_labels = [self.model_labels[x] for x in model_list]
widgets.update(
label1=self.add_widget_intelligent(
CenteredTitleText,
name="Select from a starter set of Stable Diffusion models from HuggingFace and Civitae.",
editable=False,
labelColor="CAUTION",
)
)
self.nextrely -= 1
# if user has already installed some initial models, then don't patronize them
# by showing more recommendations
show_recommended = len(self.installed_models) == 0
checked = [
model_list.index(x)
for x in model_list
if (show_recommended and all_models[x].recommended) or all_models[x].installed
]
widgets.update(
models_selected=self.add_widget_intelligent(
MultiSelectColumns,
columns=1,
name="Install Starter Models",
values=model_labels,
value=checked,
max_height=len(model_list) + 1,
relx=4,
scroll_exit=True,
),
models=model_list,
)
self.nextrely += 1
return widgets
############# Add a set of model install widgets ########
def add_model_widgets(
self,
model_type: ModelType,
window_width: int = 120,
install_prompt: Optional[str] = None,
exclude: Optional[Set[str]] = None,
) -> dict[str, npyscreen.widget]:
"""Generic code to create model selection widgets"""
if exclude is None:
exclude = set()
widgets: Dict[str, npyscreen.widget] = {}
all_models = self.all_models
model_list = sorted(
[x for x in all_models if all_models[x].type == model_type and x not in exclude],
key=lambda x: all_models[x].name or "",
)
model_labels = [self.model_labels[x] for x in model_list]
show_recommended = len(self.installed_models) == 0
truncated = False
if len(model_list) > 0:
max_width = max([len(x) for x in model_labels])
columns = window_width // (max_width + 8) # 8 characters for "[x] " and padding
columns = min(len(model_list), columns) or 1
prompt = (
install_prompt
or f"Select the desired {model_type.value.title()} models to install. Unchecked models will be purged from disk."
)
widgets.update(
label1=self.add_widget_intelligent(
CenteredTitleText,
name=prompt,
editable=False,
labelColor="CAUTION",
)
)
if len(model_labels) > MAX_OTHER_MODELS:
model_labels = model_labels[0:MAX_OTHER_MODELS]
truncated = True
widgets.update(
models_selected=self.add_widget_intelligent(
MultiSelectColumns,
columns=columns,
name=f"Install {model_type} Models",
values=model_labels,
value=[
model_list.index(x)
for x in model_list
if (show_recommended and all_models[x].recommended) or all_models[x].installed
],
max_height=len(model_list) // columns + 1,
relx=4,
scroll_exit=True,
),
models=model_list,
)
if truncated:
widgets.update(
warning_message=self.add_widget_intelligent(
npyscreen.FixedText,
value=f"Too many models to display (max={MAX_OTHER_MODELS}). Some are not displayed.",
editable=False,
color="CAUTION",
)
)
self.nextrely += 1
widgets.update(
download_ids=self.add_widget_intelligent(
TextBox,
name="Additional URLs, or HuggingFace repo_ids to install (Space separated. Use shift-control-V to paste):",
max_height=6,
scroll_exit=True,
editable=True,
)
)
return widgets
### Tab for arbitrary diffusers widgets ###
def add_pipeline_widgets(
self,
model_type: ModelType = ModelType.Main,
window_width: int = 120,
**kwargs,
) -> dict[str, npyscreen.widget]:
"""Similar to add_model_widgets() but adds some additional widgets at the bottom
to support the autoload directory"""
widgets = self.add_model_widgets(
model_type=model_type,
window_width=window_width,
install_prompt=f"Installed {model_type.value.title()} models. Unchecked models in the InvokeAI root directory will be deleted. Enter URLs, paths or repo_ids to import.",
**kwargs,
)
return widgets
def resize(self) -> None:
super().resize()
if s := self.starter_pipelines.get("models_selected"):
if model_list := self.starter_pipelines.get("models"):
s.values = [self.model_labels[x] for x in model_list]
def _toggle_tables(self, value: List[int]) -> None:
selected_tab = value[0]
widgets = [
self.starter_pipelines,
self.pipeline_models,
self.controlnet_models,
self.t2i_models,
self.ipadapter_models,
self.lora_models,
self.ti_models,
]
for group in widgets:
for _k, v in group.items():
try:
v.hidden = True
v.editable = False
except Exception:
pass
for _k, v in widgets[selected_tab].items():
try:
v.hidden = False
if not isinstance(v, (npyscreen.FixedText, npyscreen.TitleFixedText, CenteredTitleText)):
v.editable = True
except Exception:
pass
self.__class__.current_tab = selected_tab # for persistence
self.display()
def _get_model_labels(self) -> dict[str, str]:
"""Return a list of trimmed labels for all models."""
window_width, window_height = get_terminal_size()
checkbox_width = 4
spacing_width = 2
result = {}
models = self.all_models
label_width = max([len(models[x].name or "") for x in self.starter_models])
description_width = window_width - label_width - checkbox_width - spacing_width
for key in self.all_models:
description = models[key].description
description = (
description[0 : description_width - 3] + "..."
if description and len(description) > description_width
else description
if description
else ""
)
result[key] = f"%-{label_width}s %s" % (models[key].name, description)
return result
def _get_columns(self) -> int:
window_width, window_height = get_terminal_size()
cols = 4 if window_width > 240 else 3 if window_width > 160 else 2 if window_width > 80 else 1
return min(cols, len(self.installed_models))
def confirm_deletions(self, selections: InstallSelections) -> bool:
remove_models = selections.remove_models
if remove_models:
model_names = [self.all_models[x].name or "" for x in remove_models]
mods = "\n".join(model_names)
is_ok = npyscreen.notify_ok_cancel(
f"These unchecked models will be deleted from disk. Continue?\n---------\n{mods}"
)
assert isinstance(is_ok, bool) # npyscreen doesn't have return type annotations
return is_ok
else:
return True
@property
def all_models(self) -> Dict[str, UnifiedModelInfo]:
# npyscreen doesn't have typing hints
return self.parentApp.install_helper.all_models # type: ignore
@property
def starter_models(self) -> List[str]:
return self.parentApp.install_helper._starter_models # type: ignore
@property
def installed_models(self) -> List[str]:
return self.parentApp.install_helper._installed_models # type: ignore
def on_back(self) -> None:
self.parentApp.switchFormPrevious()
self.editing = False
def on_cancel(self) -> None:
self.parentApp.setNextForm(None)
self.parentApp.user_cancelled = True
self.editing = False
def on_done(self) -> None:
self.marshall_arguments()
if not self.confirm_deletions(self.parentApp.install_selections):
return
self.parentApp.setNextForm(None)
self.parentApp.user_cancelled = False
self.editing = False
def marshall_arguments(self) -> None:
"""
Assemble arguments and store as attributes of the application:
.starter_models: dict of model names to install from INITIAL_CONFIGURE.yaml
True => Install
False => Remove
.scan_directory: Path to a directory of models to scan and import
.autoscan_on_startup: True if invokeai should scan and import at startup time
.import_model_paths: list of URLs, repo_ids and file paths to import
"""
selections = self.parentApp.install_selections
all_models = self.all_models
# Defined models (in INITIAL_CONFIG.yaml or models.yaml) to add/remove
ui_sections = [
self.starter_pipelines,
self.pipeline_models,
self.controlnet_models,
self.t2i_models,
self.ipadapter_models,
self.lora_models,
self.ti_models,
]
for section in ui_sections:
if "models_selected" not in section:
continue
selected = {section["models"][x] for x in section["models_selected"].value}
models_to_install = [x for x in selected if not self.all_models[x].installed]
models_to_remove = [x for x in section["models"] if x not in selected and self.all_models[x].installed]
selections.remove_models.extend(models_to_remove)
selections.install_models.extend([all_models[x] for x in models_to_install])
# models located in the 'download_ids" section
for section in ui_sections:
if downloads := section.get("download_ids"):
models = [UnifiedModelInfo(source=x) for x in downloads.value.split()]
selections.install_models.extend(models)
class AddModelApplication(npyscreen.NPSAppManaged): # type: ignore
def __init__(self, opt: Namespace, install_helper: InstallHelper):
super().__init__()
self.program_opts = opt
self.user_cancelled = False
self.install_selections = InstallSelections()
self.install_helper = install_helper
def onStart(self) -> None:
npyscreen.setTheme(npyscreen.Themes.DefaultTheme)
self.main_form = self.addForm(
"MAIN",
addModelsForm,
name="Install Stable Diffusion Models",
cycle_widgets=False,
)
def list_models(installer: ModelInstallService, model_type: ModelType):
"""Print out all models of type model_type."""
models = installer.record_store.search_by_attr(model_type=model_type)
print(f"Installed models of type `{model_type}`:")
for model in models:
path = (config.models_path / model.path).resolve()
print(f"{model.name:40}{model.base.value:14}{path}")
# --------------------------------------------------------
def select_and_download_models(opt: Namespace) -> None:
"""Prompt user for install/delete selections and execute."""
precision = "float32" if opt.full_precision else choose_precision(torch.device(choose_torch_device()))
# unsure how to avoid a typing complaint in the next line: config.precision is an enumerated Literal
config.precision = precision # type: ignore
install_helper = InstallHelper(config, logger)
installer = install_helper.installer
if opt.list_models:
list_models(installer, opt.list_models)
elif opt.add or opt.delete:
selections = InstallSelections(
install_models=[UnifiedModelInfo(source=x) for x in (opt.add or [])], remove_models=opt.delete or []
)
install_helper.add_or_delete(selections)
elif opt.default_only:
selections = InstallSelections(install_models=[install_helper.default_model()])
install_helper.add_or_delete(selections)
elif opt.yes_to_all:
selections = InstallSelections(install_models=install_helper.recommended_models())
install_helper.add_or_delete(selections)
# this is where the TUI is called
else:
if not set_min_terminal_size(MIN_COLS, MIN_LINES):
raise WindowTooSmallException(
"Could not increase terminal size. Try running again with a larger window or smaller font size."
)
installApp = AddModelApplication(opt, install_helper)
try:
installApp.run()
except KeyboardInterrupt:
print("Aborted...")
sys.exit(-1)
install_helper.add_or_delete(installApp.install_selections)
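Every non-interactive path above funnels through the same InstallSelections dataclass, so a scripted install reduces to a few lines. A hedged sketch; the repo id is only an example:

selections = InstallSelections(
    install_models=[UnifiedModelInfo(source="runwayml/stable-diffusion-v1-5")],
    remove_models=[],
)
install_helper = InstallHelper(config, logger)
install_helper.add_or_delete(selections)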
# -------------------------------------
def main() -> None:
parser = argparse.ArgumentParser(description="InvokeAI model downloader")
parser.add_argument(
"--add",
nargs="*",
help="List of URLs, local paths or repo_ids of models to install",
)
parser.add_argument(
"--delete",
nargs="*",
help="List of names of models to delete. Use type:name to disambiguate, as in `controlnet:my_model`",
)
parser.add_argument(
"--full-precision",
dest="full_precision",
action=argparse.BooleanOptionalAction,
type=bool,
default=False,
help="use 32-bit weights instead of faster 16-bit weights",
)
parser.add_argument(
"--yes",
"-y",
dest="yes_to_all",
action="store_true",
help='answer "yes" to all prompts',
)
parser.add_argument(
"--default_only",
action="store_true",
help="Only install the default model",
)
parser.add_argument(
"--list-models",
choices=[x.value for x in ModelType],
help="list installed models",
)
parser.add_argument(
"--root_dir",
dest="root",
type=str,
default=None,
help="path to root of install directory",
)
opt = parser.parse_args()
invoke_args = []
if opt.root:
invoke_args.extend(["--root", opt.root])
if opt.full_precision:
invoke_args.extend(["--precision", "float32"])
config.parse_args(invoke_args)
logger = InvokeAILogger().get_logger(config=config)
if not config.model_conf_path.exists():
logger.info("Your InvokeAI root directory is not set up. Calling invokeai-configure.")
from invokeai.frontend.install.invokeai_configure import invokeai_configure
invokeai_configure()
sys.exit(0)
try:
select_and_download_models(opt)
except AssertionError as e:
logger.error(e)
sys.exit(-1)
except KeyboardInterrupt:
curses.nocbreak()
curses.echo()
curses.endwin()
logger.info("Goodbye! Come back soon.")
except WindowTooSmallException as e:
logger.error(str(e))
except widget.NotEnoughSpaceForWidget as e:
if str(e).startswith("Height of 1 allocated"):
logger.error("Insufficient vertical space for the interface. Please make your window taller and try again")
input("Press any key to continue...")
except Exception as e:
if str(e).startswith("addwstr"):
logger.error(
"Insufficient horizontal space for the interface. Please make your window wider and try again."
)
else:
print(f"An exception has occurred: {str(e)} Details:")
print(traceback.format_exc(), file=sys.stderr)
input("Press any key to continue...")
# -------------------------------------
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,438 @@
"""
invokeai.frontend.merge exports a single function called merge_diffusion_models().
It merges 2-3 models together and creates a new InvokeAI-registered diffusion model.
Copyright (c) 2023-24 Lincoln Stein and the InvokeAI Development Team
"""
import argparse
import curses
import re
import sys
from argparse import Namespace
from pathlib import Path
from typing import List, Optional, Tuple
import npyscreen
from npyscreen import widget
import invokeai.backend.util.logging as logger
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.app.services.model_install import ModelInstallServiceBase
from invokeai.app.services.model_records import ModelRecordServiceBase
from invokeai.backend.install.install_helper import initialize_installer
from invokeai.backend.model_manager import (
BaseModelType,
ModelFormat,
ModelType,
ModelVariantType,
)
from invokeai.backend.model_manager.merge import ModelMerger
from invokeai.frontend.install.widgets import FloatTitleSlider, SingleSelectColumns, TextBox
config = InvokeAIAppConfig.get_config()
BASE_TYPES = [
(BaseModelType.StableDiffusion1, "Models Built on SD-1.x"),
(BaseModelType.StableDiffusion2, "Models Built on SD-2.x"),
(BaseModelType.StableDiffusionXL, "Models Built on SDXL"),
]
def _parse_args() -> Namespace:
parser = argparse.ArgumentParser(description="InvokeAI model merging")
parser.add_argument(
"--root_dir",
type=Path,
default=config.root,
help="Path to the invokeai runtime directory",
)
parser.add_argument(
"--front_end",
"--gui",
dest="front_end",
action="store_true",
default=False,
help="Activate the text-based graphical front end for collecting parameters. Aside from --root_dir, other parameters will be ignored.",
)
parser.add_argument(
"--models",
dest="model_names",
type=str,
nargs="+",
help="Two to three model names to be merged",
)
parser.add_argument(
"--base_model",
type=str,
choices=[x[0].value for x in BASE_TYPES],
help="The base model shared by the models to be merged",
)
parser.add_argument(
"--merged_model_name",
"--destination",
dest="merged_model_name",
type=str,
help="Name of the output model. If not specified, will be the concatenation of the input model names.",
)
parser.add_argument(
"--alpha",
type=float,
default=0.5,
help="The interpolation parameter, ranging from 0 to 1. It affects the ratio in which the checkpoints are merged. Higher values give more weight to the 2d and 3d models",
)
parser.add_argument(
"--interpolation",
dest="interp",
type=str,
choices=["weighted_sum", "sigmoid", "inv_sigmoid", "add_difference"],
default="weighted_sum",
help='Interpolation method to use. If three models are present, only "add_difference" will work.',
)
parser.add_argument(
"--force",
action="store_true",
help="Try to merge models even if they are incompatible with each other",
)
parser.add_argument(
"--clobber",
"--overwrite",
dest="clobber",
action="store_true",
help="Overwrite the merged model if --merged_model_name already exists",
)
return parser.parse_args()
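For orientation, a hypothetical command line and the Namespace it parses into (the invokeai-merge entry-point name is an assumption; only the flags defined above are real):

# invokeai-merge --models modelA modelB --base_model sd-1 \
#     --interpolation sigmoid --alpha 0.35 --merged_model_name blendAB
#
# -> Namespace(model_names=["modelA", "modelB"], base_model="sd-1",
#              interp="sigmoid", alpha=0.35, merged_model_name="blendAB",
#              force=False, clobber=False, front_end=False, ...)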
# ------------------------- GUI HERE -------------------------
class mergeModelsForm(npyscreen.FormMultiPageAction):
interpolations = ["weighted_sum", "sigmoid", "inv_sigmoid"]
def __init__(self, parentApp, name):
self.parentApp = parentApp
self.ALLOW_RESIZE = True
self.FIX_MINIMUM_SIZE_WHEN_CREATED = False
super().__init__(parentApp, name)
@property
def model_record_store(self) -> ModelRecordServiceBase:
installer: ModelInstallServiceBase = self.parentApp.installer
return installer.record_store
def afterEditing(self) -> None:
self.parentApp.setNextForm(None)
def create(self) -> None:
window_height, window_width = curses.initscr().getmaxyx()
self.current_base = 0
self.models = self.get_models(BASE_TYPES[self.current_base][0])
self.model_names = [x[1] for x in self.models]
max_width = max([len(x) for x in self.model_names])
max_width += 6
horizontal_layout = max_width * 3 < window_width
self.add_widget_intelligent(
npyscreen.FixedText,
color="CONTROL",
value="Select two models to merge and optionally a third.",
editable=False,
)
self.add_widget_intelligent(
npyscreen.FixedText,
color="CONTROL",
value="Use up and down arrows to move, <space> to select an item, <tab> and <shift-tab> to move from one field to the next.",
editable=False,
)
self.nextrely += 1
self.base_select = self.add_widget_intelligent(
SingleSelectColumns,
values=[x[1] for x in BASE_TYPES],
value=[self.current_base],
columns=4,
max_height=2,
relx=8,
scroll_exit=True,
)
self.base_select.on_changed = self._populate_models
self.add_widget_intelligent(
npyscreen.FixedText,
value="MODEL 1",
color="GOOD",
editable=False,
rely=6 if horizontal_layout else None,
)
self.model1 = self.add_widget_intelligent(
npyscreen.SelectOne,
values=self.model_names,
value=0,
max_height=len(self.model_names),
max_width=max_width,
scroll_exit=True,
rely=7,
)
self.add_widget_intelligent(
npyscreen.FixedText,
value="MODEL 2",
color="GOOD",
editable=False,
relx=max_width + 3 if horizontal_layout else None,
rely=6 if horizontal_layout else None,
)
self.model2 = self.add_widget_intelligent(
npyscreen.SelectOne,
name="(2)",
values=self.model_names,
value=1,
max_height=len(self.model_names),
max_width=max_width,
relx=max_width + 3 if horizontal_layout else None,
rely=7 if horizontal_layout else None,
scroll_exit=True,
)
self.add_widget_intelligent(
npyscreen.FixedText,
value="MODEL 3",
color="GOOD",
editable=False,
relx=max_width * 2 + 3 if horizontal_layout else None,
rely=6 if horizontal_layout else None,
)
models_plus_none = self.model_names.copy()
models_plus_none.insert(0, "None")
self.model3 = self.add_widget_intelligent(
npyscreen.SelectOne,
name="(3)",
values=models_plus_none,
value=0,
max_height=len(self.model_names) + 1,
max_width=max_width,
scroll_exit=True,
relx=max_width * 2 + 3 if horizontal_layout else None,
rely=7 if horizontal_layout else None,
)
for m in [self.model1, self.model2, self.model3]:
m.when_value_edited = self.models_changed
self.merged_model_name = self.add_widget_intelligent(
TextBox,
name="Name for merged model:",
labelColor="CONTROL",
max_height=3,
value="",
scroll_exit=True,
)
self.force = self.add_widget_intelligent(
npyscreen.Checkbox,
name="Force merge of models created by different diffusers library versions",
labelColor="CONTROL",
value=True,
scroll_exit=True,
)
self.nextrely += 1
self.merge_method = self.add_widget_intelligent(
npyscreen.TitleSelectOne,
name="Merge Method:",
values=self.interpolations,
value=0,
labelColor="CONTROL",
max_height=len(self.interpolations) + 1,
scroll_exit=True,
)
self.alpha = self.add_widget_intelligent(
FloatTitleSlider,
name="Weight (alpha) to assign to second and third models:",
out_of=1.0,
step=0.01,
lowest=0,
value=0.5,
labelColor="CONTROL",
scroll_exit=True,
)
self.model1.editing = True
def models_changed(self) -> None:
models = self.model1.values
selected_model1 = self.model1.value[0]
selected_model2 = self.model2.value[0]
selected_model3 = self.model3.value[0]
merged_model_name = f"{models[selected_model1]}+{models[selected_model2]}"
self.merged_model_name.value = merged_model_name
if selected_model3 > 0:
self.merge_method.values = ["add_difference ( A+(B-C) )"]
self.merged_model_name.value += f"+{models[selected_model3 -1]}" # In model3 there is one more element in the list (None). So we have to subtract one.
else:
self.merge_method.values = self.interpolations
self.merge_method.value = 0
def on_ok(self) -> None:
if self.validate_field_values() and self.check_for_overwrite():
self.parentApp.setNextForm(None)
self.editing = False
self.parentApp.merge_arguments = self.marshall_arguments()
npyscreen.notify("Starting the merge...")
else:
self.editing = True
def on_cancel(self) -> None:
sys.exit(0)
def marshall_arguments(self) -> dict:
model_keys = [x[0] for x in self.models]
models = [
model_keys[self.model1.value[0]],
model_keys[self.model2.value[0]],
]
if self.model3.value[0] > 0:
models.append(model_keys[self.model3.value[0] - 1])
interp = "add_difference"
else:
interp = self.interpolations[self.merge_method.value[0]]
args = {
"model_keys": models,
"alpha": self.alpha.value,
"interp": interp,
"force": self.force.value,
"merged_model_name": self.merged_model_name.value,
}
return args
def check_for_overwrite(self) -> bool:
model_out = self.merged_model_name.value
if model_out not in self.model_names:
return True
else:
result: bool = npyscreen.notify_yes_no(
f"The chosen merged model destination, {model_out}, is already in use. Overwrite?"
)
return result
def validate_field_values(self) -> bool:
bad_fields = []
model_names = self.model_names
selected_models = {model_names[self.model1.value[0]], model_names[self.model2.value[0]]}
if self.model3.value[0] > 0:
selected_models.add(model_names[self.model3.value[0] - 1])
if len(selected_models) < 2:
bad_fields.append(f"Please select two or three DIFFERENT models to compare. You selected {selected_models}")
if len(bad_fields) > 0:
message = "The following problems were detected and must be corrected:"
for problem in bad_fields:
message += f"\n* {problem}"
npyscreen.notify_confirm(message)
return False
else:
return True
def get_models(self, base_model: Optional[BaseModelType] = None) -> List[Tuple[str, str]]: # key to name
models = [
(x.key, x.name)
for x in self.model_record_store.search_by_attr(model_type=ModelType.Main, base_model=base_model)
if x.format == ModelFormat("diffusers")
and hasattr(x, "variant")
and x.variant == ModelVariantType("normal")
]
return sorted(models, key=lambda x: x[1])
def _populate_models(self, value: List[int]) -> None:
base_model = BASE_TYPES[value[0]][0]
self.models = self.get_models(base_model)
self.model_names = [x[1] for x in self.models]
models_plus_none = self.model_names.copy()
models_plus_none.insert(0, "None")
self.model1.values = self.model_names
self.model2.values = self.model_names
self.model3.values = models_plus_none
self.display()
# npyscreen is untyped and causes mypy to get naggy
class Mergeapp(npyscreen.NPSAppManaged): # type: ignore
def __init__(self, installer: ModelInstallServiceBase):
"""Initialize the npyscreen application."""
super().__init__()
self.installer = installer
def onStart(self) -> None:
npyscreen.setTheme(npyscreen.Themes.ElegantTheme)
self.main = self.addForm("MAIN", mergeModelsForm, name="Merge Models Settings")
def run_gui(args: Namespace) -> None:
installer = initialize_installer(config)
mergeapp = Mergeapp(installer)
mergeapp.run()
merge_args = mergeapp.merge_arguments
merger = ModelMerger(installer)
merger.merge_diffusion_models_and_save(**merge_args)
logger.info(f'Models merged into new model: "{merge_args["merged_model_name"]}".')
def run_cli(args: Namespace) -> None:
assert 0 <= args.alpha <= 1.0, "alpha must be between 0 and 1"
assert (
args.model_names and len(args.model_names) >= 2 and len(args.model_names) <= 3
), "Please provide the --models argument to list 2 to 3 models to merge. Use --help for full usage."
if not args.merged_model_name:
args.merged_model_name = "+".join(args.model_names)
logger.info(f'No --merged_model_name provided. Defaulting to "{args.merged_model_name}"')
installer = initialize_installer(config)
store = installer.record_store
assert (
len(store.search_by_attr(args.merged_model_name, args.base_model, ModelType.Main)) == 0 or args.clobber
), f'A model named "{args.merged_model_name}" already exists. Use --clobber to overwrite.'
merger = ModelMerger(installer)
model_keys = []
for name in args.model_names:
if len(name) == 32 and re.match(r"^[0-9a-f]{32}$", name):
model_keys.append(name)
else:
models = store.search_by_attr(
model_name=name, model_type=ModelType.Main, base_model=BaseModelType(args.base_model)
)
assert len(models) > 0, f"{name}: Unknown model"
assert len(models) < 2, f"{name}: More than one model by this name. Please specify the model key instead."
model_keys.append(models[0].key)
merger.merge_diffusion_models_and_save(
alpha=args.alpha,
model_keys=model_keys,
merged_model_name=args.merged_model_name,
interp=args.interp,
force=args.force,
)
logger.info(f'Models merged into new model: "{args.merged_model_name}".')
def main() -> None:
args = _parse_args()
if args.root_dir:
config.parse_args(["--root", str(args.root_dir)])
else:
config.parse_args([])
try:
if args.front_end:
run_gui(args)
else:
run_cli(args)
except widget.NotEnoughSpaceForWidget as e:
if str(e).startswith("Height of 1 allocated"):
logger.error("You need to have at least two diffusers models defined in models.yaml in order to merge")
else:
logger.error("Not enough room for the user interface. Try making this window larger.")
sys.exit(-1)
except Exception as e:
logger.error(str(e))
sys.exit(-1)
except KeyboardInterrupt:
sys.exit(-1)
if __name__ == "__main__":
main()

View File

@ -3,7 +3,7 @@
"""
This is the frontend to "textual_inversion_training.py".
Copyright (c) 2023 Lincoln Stein and the InvokeAI Development Team
Copyright (c) 2023-24 Lincoln Stein and the InvokeAI Development Team
"""
@ -14,7 +14,7 @@ import sys
import traceback
from argparse import Namespace
from pathlib import Path
from typing import List, Tuple
from typing import Dict, List, Optional, Tuple
import npyscreen
from npyscreen import widget
@ -22,8 +22,9 @@ from omegaconf import OmegaConf
import invokeai.backend.util.logging as logger
from invokeai.app.services.config import InvokeAIAppConfig
from ...backend.training import do_textual_inversion_training, parse_args
from invokeai.backend.install.install_helper import initialize_installer
from invokeai.backend.model_manager import ModelType
from invokeai.backend.training import do_textual_inversion_training, parse_args
TRAINING_DATA = "text-inversion-training-data"
TRAINING_DIR = "text-inversion-output"
@ -44,19 +45,21 @@ class textualInversionForm(npyscreen.FormMultiPageAction):
precisions = ["no", "fp16", "bf16"]
learnable_properties = ["object", "style"]
def __init__(self, parentApp, name, saved_args=None):
def __init__(self, parentApp: npyscreen.NPSAppManaged, name: str, saved_args: Optional[Dict[str, str]] = None):
self.saved_args = saved_args or {}
super().__init__(parentApp, name)
def afterEditing(self):
def afterEditing(self) -> None:
self.parentApp.setNextForm(None)
def create(self):
def create(self) -> None:
self.model_names, default = self.get_model_names()
default_initializer_token = ""
default_placeholder_token = ""
saved_args = self.saved_args
assert config is not None
try:
default = self.model_names.index(saved_args["model"])
except Exception:
@ -71,7 +74,7 @@ class textualInversionForm(npyscreen.FormMultiPageAction):
self.model = self.add_widget_intelligent(
npyscreen.TitleSelectOne,
name="Model Name:",
values=self.model_names,
values=sorted(self.model_names),
value=default,
max_height=len(self.model_names) + 1,
scroll_exit=True,
@ -236,7 +239,7 @@ class textualInversionForm(npyscreen.FormMultiPageAction):
)
self.model.editing = True
def initializer_changed(self):
def initializer_changed(self) -> None:
placeholder = self.placeholder_token.value
self.prompt_token.value = f"(Trigger by using <{placeholder}> in your prompts)"
self.train_data_dir.value = str(config.root_dir / TRAINING_DATA / placeholder)
@ -275,10 +278,13 @@ class textualInversionForm(npyscreen.FormMultiPageAction):
return True
def get_model_names(self) -> Tuple[List[str], int]:
conf = OmegaConf.load(config.root_dir / "configs/models.yaml")
model_names = [idx for idx in sorted(conf.keys()) if conf[idx].get("format", None) == "diffusers"]
defaults = [idx for idx in range(len(model_names)) if "default" in conf[model_names[idx]]]
default = defaults[0] if len(defaults) > 0 else 0
global config
assert config is not None
installer = initialize_installer(config)
store = installer.record_store
main_models = store.search_by_attr(model_type=ModelType.Main)
model_names = [f"{x.base.value}/{x.type.value}/{x.name}" for x in main_models if x.format == "diffusers"]
default = 0
return (model_names, default)
def marshall_arguments(self) -> dict:
@ -326,7 +332,7 @@ class textualInversionForm(npyscreen.FormMultiPageAction):
class MyApplication(npyscreen.NPSAppManaged):
def __init__(self, saved_args=None):
def __init__(self, saved_args: Optional[Dict[str, str]] = None):
super().__init__()
self.ti_arguments = None
self.saved_args = saved_args
@ -341,11 +347,12 @@ class MyApplication(npyscreen.NPSAppManaged):
)
def copy_to_embeddings_folder(args: dict):
def copy_to_embeddings_folder(args: Dict[str, str]) -> None:
"""
Copy learned_embeds.bin into the embeddings folder, and offer to
delete the full model and checkpoints.
"""
assert config is not None
source = Path(args["output_dir"], "learned_embeds.bin")
dest_dir_name = args["placeholder_token"].strip("<>")
destination = config.root_dir / "embeddings" / dest_dir_name
@ -358,10 +365,11 @@ def copy_to_embeddings_folder(args: dict):
logger.info(f'Keeping {args["output_dir"]}')
def save_args(args: dict):
def save_args(args: dict) -> None:
"""
Save the current argument values to an omegaconf file
"""
assert config is not None
dest_dir = config.root_dir / TRAINING_DIR
os.makedirs(dest_dir, exist_ok=True)
conf_file = dest_dir / CONF_FILE
@ -373,6 +381,7 @@ def previous_args() -> dict:
"""
Get the previous arguments used.
"""
assert config is not None
conf_file = config.root_dir / TRAINING_DIR / CONF_FILE
try:
conf = OmegaConf.load(conf_file)
@ -383,24 +392,26 @@ def previous_args() -> dict:
return conf
def do_front_end(args: Namespace):
def do_front_end() -> None:
global config
saved_args = previous_args()
myapplication = MyApplication(saved_args=saved_args)
myapplication.run()
if args := myapplication.ti_arguments:
os.makedirs(args["output_dir"], exist_ok=True)
if my_args := myapplication.ti_arguments:
os.makedirs(my_args["output_dir"], exist_ok=True)
# Automatically add angle brackets around the trigger
if not re.match("^<.+>$", args["placeholder_token"]):
args["placeholder_token"] = f"<{args['placeholder_token']}>"
if not re.match("^<.+>$", my_args["placeholder_token"]):
my_args["placeholder_token"] = f"<{my_args['placeholder_token']}>"
args["only_save_embeds"] = True
save_args(args)
my_args["only_save_embeds"] = True
save_args(my_args)
try:
do_textual_inversion_training(InvokeAIAppConfig.get_config(), **args)
copy_to_embeddings_folder(args)
print(my_args)
do_textual_inversion_training(config, **my_args)
copy_to_embeddings_folder(my_args)
except Exception as e:
logger.error("An exception occurred during training. The exception was:")
logger.error(str(e))
@ -408,11 +419,12 @@ def do_front_end(args: Namespace):
logger.error(traceback.format_exc())
def main():
def main() -> None:
global config
args = parse_args()
args: Namespace = parse_args()
config = InvokeAIAppConfig.get_config()
config.parse_args([])
# change root if needed
if args.root_dir:
@ -420,7 +432,7 @@ def main():
try:
if args.front_end:
do_front_end(args)
do_front_end()
else:
do_textual_inversion_training(config, **vars(args))
except AssertionError as e:

View File

@ -0,0 +1,454 @@
#!/usr/bin/env python
"""
This is the frontend to "textual_inversion_training.py".
Copyright (c) 2023-24 Lincoln Stein and the InvokeAI Development Team
"""
import os
import re
import shutil
import sys
import traceback
from argparse import Namespace
from pathlib import Path
from typing import Dict, List, Optional, Tuple
import npyscreen
from npyscreen import widget
from omegaconf import OmegaConf
import invokeai.backend.util.logging as logger
from invokeai.app.services.config import InvokeAIAppConfig
from invokeai.backend.install.install_helper import initialize_installer
from invokeai.backend.model_manager import ModelType
from invokeai.backend.training import do_textual_inversion_training, parse_args
TRAINING_DATA = "text-inversion-training-data"
TRAINING_DIR = "text-inversion-output"
CONF_FILE = "preferences.conf"
config = None
class textualInversionForm(npyscreen.FormMultiPageAction):
resolutions = [512, 768, 1024]
lr_schedulers = [
"linear",
"cosine",
"cosine_with_restarts",
"polynomial",
"constant",
"constant_with_warmup",
]
precisions = ["no", "fp16", "bf16"]
learnable_properties = ["object", "style"]
def __init__(self, parentApp: npyscreen.NPSAppManaged, name: str, saved_args: Optional[Dict[str, str]] = None):
self.saved_args = saved_args or {}
super().__init__(parentApp, name)
def afterEditing(self) -> None:
self.parentApp.setNextForm(None)
def create(self) -> None:
self.model_names, default = self.get_model_names()
default_initializer_token = ""
default_placeholder_token = ""
saved_args = self.saved_args
assert config is not None
try:
default = self.model_names.index(saved_args["model"])
except Exception:
pass
self.add_widget_intelligent(
npyscreen.FixedText,
value="Use ctrl-N and ctrl-P to move to the <N>ext and <P>revious fields, cursor arrows to make a selection, and space to toggle checkboxes.",
editable=False,
)
self.model = self.add_widget_intelligent(
npyscreen.TitleSelectOne,
name="Model Name:",
values=sorted(self.model_names),
value=default,
max_height=len(self.model_names) + 1,
scroll_exit=True,
)
self.placeholder_token = self.add_widget_intelligent(
npyscreen.TitleText,
name="Trigger Term:",
value="", # saved_args.get('placeholder_token',''), # to restore previous term
scroll_exit=True,
)
self.placeholder_token.when_value_edited = self.initializer_changed
self.nextrely -= 1
self.nextrelx += 30
self.prompt_token = self.add_widget_intelligent(
npyscreen.FixedText,
name="Trigger term for use in prompt",
value="",
editable=False,
scroll_exit=True,
)
self.nextrelx -= 30
self.initializer_token = self.add_widget_intelligent(
npyscreen.TitleText,
name="Initializer:",
value=saved_args.get("initializer_token", default_initializer_token),
scroll_exit=True,
)
self.resume_from_checkpoint = self.add_widget_intelligent(
npyscreen.Checkbox,
name="Resume from last saved checkpoint",
value=False,
scroll_exit=True,
)
self.learnable_property = self.add_widget_intelligent(
npyscreen.TitleSelectOne,
name="Learnable property:",
values=self.learnable_properties,
value=self.learnable_properties.index(saved_args.get("learnable_property", "object")),
max_height=4,
scroll_exit=True,
)
self.train_data_dir = self.add_widget_intelligent(
npyscreen.TitleFilename,
name="Data Training Directory:",
select_dir=True,
must_exist=False,
value=str(
saved_args.get(
"train_data_dir",
config.root_dir / TRAINING_DATA / default_placeholder_token,
)
),
scroll_exit=True,
)
self.output_dir = self.add_widget_intelligent(
npyscreen.TitleFilename,
name="Output Destination Directory:",
select_dir=True,
must_exist=False,
value=str(
saved_args.get(
"output_dir",
config.root_dir / TRAINING_DIR / default_placeholder_token,
)
),
scroll_exit=True,
)
self.resolution = self.add_widget_intelligent(
npyscreen.TitleSelectOne,
name="Image resolution (pixels):",
values=self.resolutions,
value=self.resolutions.index(saved_args.get("resolution", 512)),
max_height=4,
scroll_exit=True,
)
self.center_crop = self.add_widget_intelligent(
npyscreen.Checkbox,
name="Center crop images before resizing to resolution",
value=saved_args.get("center_crop", False),
scroll_exit=True,
)
self.mixed_precision = self.add_widget_intelligent(
npyscreen.TitleSelectOne,
name="Mixed Precision:",
values=self.precisions,
value=self.precisions.index(saved_args.get("mixed_precision", "fp16")),
max_height=4,
scroll_exit=True,
)
self.num_train_epochs = self.add_widget_intelligent(
npyscreen.TitleSlider,
name="Number of training epochs:",
out_of=1000,
step=50,
lowest=1,
value=saved_args.get("num_train_epochs", 100),
scroll_exit=True,
)
self.max_train_steps = self.add_widget_intelligent(
npyscreen.TitleSlider,
name="Max Training Steps:",
out_of=10000,
step=500,
lowest=1,
value=saved_args.get("max_train_steps", 3000),
scroll_exit=True,
)
self.train_batch_size = self.add_widget_intelligent(
npyscreen.TitleSlider,
name="Batch Size (reduce if you run out of memory):",
out_of=50,
step=1,
lowest=1,
value=saved_args.get("train_batch_size", 8),
scroll_exit=True,
)
self.gradient_accumulation_steps = self.add_widget_intelligent(
npyscreen.TitleSlider,
name="Gradient Accumulation Steps (may need to decrease this to resume from a checkpoint):",
out_of=10,
step=1,
lowest=1,
value=saved_args.get("gradient_accumulation_steps", 4),
scroll_exit=True,
)
self.lr_warmup_steps = self.add_widget_intelligent(
npyscreen.TitleSlider,
name="Warmup Steps:",
out_of=100,
step=1,
lowest=0,
value=saved_args.get("lr_warmup_steps", 0),
scroll_exit=True,
)
self.learning_rate = self.add_widget_intelligent(
npyscreen.TitleText,
name="Learning Rate:",
value=str(
saved_args.get("learning_rate", "5.0e-04"),
),
scroll_exit=True,
)
self.scale_lr = self.add_widget_intelligent(
npyscreen.Checkbox,
name="Scale learning rate by number GPUs, steps and batch size",
value=saved_args.get("scale_lr", True),
scroll_exit=True,
)
self.enable_xformers_memory_efficient_attention = self.add_widget_intelligent(
npyscreen.Checkbox,
name="Use xformers acceleration",
value=saved_args.get("enable_xformers_memory_efficient_attention", False),
scroll_exit=True,
)
self.lr_scheduler = self.add_widget_intelligent(
npyscreen.TitleSelectOne,
name="Learning rate scheduler:",
values=self.lr_schedulers,
max_height=7,
value=self.lr_schedulers.index(saved_args.get("lr_scheduler", "constant")),
scroll_exit=True,
)
self.model.editing = True
def initializer_changed(self) -> None:
placeholder = self.placeholder_token.value
self.prompt_token.value = f"(Trigger by using <{placeholder}> in your prompts)"
self.train_data_dir.value = str(config.root_dir / TRAINING_DATA / placeholder)
self.output_dir.value = str(config.root_dir / TRAINING_DIR / placeholder)
self.resume_from_checkpoint.value = Path(self.output_dir.value).exists()
def on_ok(self):
if self.validate_field_values():
self.parentApp.setNextForm(None)
self.editing = False
self.parentApp.ti_arguments = self.marshall_arguments()
npyscreen.notify("Launching textual inversion training. This will take a while...")
else:
self.editing = True
def on_cancel(self):
sys.exit(0)
def validate_field_values(self) -> bool:
bad_fields = []
if self.model.value is None:
bad_fields.append("Model Name must correspond to a known model in models.yaml")
if not re.match("^[a-zA-Z0-9.-]+$", self.placeholder_token.value):
bad_fields.append("Trigger term must only contain alphanumeric characters, the dot and hyphen")
if self.train_data_dir.value is None:
bad_fields.append("Data Training Directory cannot be empty")
if self.output_dir.value is None:
bad_fields.append("The Output Destination Directory cannot be empty")
if len(bad_fields) > 0:
message = "The following problems were detected and must be corrected:"
for problem in bad_fields:
message += f"\n* {problem}"
npyscreen.notify_confirm(message)
return False
else:
return True
def get_model_names(self) -> Tuple[List[str], int]:
global config
assert config is not None
installer = initialize_installer(config)
store = installer.record_store
main_models = store.search_by_attr(model_type=ModelType.Main)
model_names = [f"{x.base.value}/{x.type.value}/{x.name}" for x in main_models if x.format == "diffusers"]
default = 0
return (model_names, default)
def marshall_arguments(self) -> dict:
args = {}
# the choices
args.update(
model=self.model_names[self.model.value[0]],
resolution=self.resolutions[self.resolution.value[0]],
lr_scheduler=self.lr_schedulers[self.lr_scheduler.value[0]],
mixed_precision=self.precisions[self.mixed_precision.value[0]],
learnable_property=self.learnable_properties[self.learnable_property.value[0]],
)
# all the strings and booleans
for attr in (
"initializer_token",
"placeholder_token",
"train_data_dir",
"output_dir",
"scale_lr",
"center_crop",
"enable_xformers_memory_efficient_attention",
):
args[attr] = getattr(self, attr).value
# all the integers
for attr in (
"train_batch_size",
"gradient_accumulation_steps",
"num_train_epochs",
"max_train_steps",
"lr_warmup_steps",
):
args[attr] = int(getattr(self, attr).value)
# the floats (just one)
args.update(learning_rate=float(self.learning_rate.value))
# a special case
if self.resume_from_checkpoint.value and Path(self.output_dir.value).exists():
args["resume_from_checkpoint"] = "latest"
return args
class MyApplication(npyscreen.NPSAppManaged):
def __init__(self, saved_args: Optional[Dict[str, str]] = None):
super().__init__()
self.ti_arguments = None
self.saved_args = saved_args
def onStart(self):
npyscreen.setTheme(npyscreen.Themes.DefaultTheme)
self.main = self.addForm(
"MAIN",
textualInversionForm,
name="Textual Inversion Settings",
saved_args=self.saved_args,
)
def copy_to_embeddings_folder(args: Dict[str, str]) -> None:
"""
Copy learned_embeds.bin into the embeddings folder, and offer to
delete the full model and checkpoints.
"""
assert config is not None
source = Path(args["output_dir"], "learned_embeds.bin")
dest_dir_name = args["placeholder_token"].strip("<>")
destination = config.root_dir / "embeddings" / dest_dir_name
os.makedirs(destination, exist_ok=True)
logger.info(f"Training completed. Copying learned_embeds.bin into {str(destination)}")
shutil.copy(source, destination)
if (input("Delete training logs and intermediate checkpoints? [y] ") or "y").startswith(("y", "Y")):
shutil.rmtree(Path(args["output_dir"]))
else:
logger.info(f'Keeping {args["output_dir"]}')
def save_args(args: dict) -> None:
"""
Save the current argument values to an omegaconf file
"""
assert config is not None
dest_dir = config.root_dir / TRAINING_DIR
os.makedirs(dest_dir, exist_ok=True)
conf_file = dest_dir / CONF_FILE
conf = OmegaConf.create(args)
OmegaConf.save(config=conf, f=conf_file)
def previous_args() -> dict:
"""
Get the previous arguments used.
"""
assert config is not None
conf_file = config.root_dir / TRAINING_DIR / CONF_FILE
try:
conf = OmegaConf.load(conf_file)
conf["placeholder_token"] = conf["placeholder_token"].strip("<>")
except Exception:
conf = None
return conf
def do_front_end() -> None:
global config
saved_args = previous_args()
myapplication = MyApplication(saved_args=saved_args)
myapplication.run()
if my_args := myapplication.ti_arguments:
os.makedirs(my_args["output_dir"], exist_ok=True)
# Automatically add angle brackets around the trigger
if not re.match("^<.+>$", my_args["placeholder_token"]):
my_args["placeholder_token"] = f"<{my_args['placeholder_token']}>"
my_args["only_save_embeds"] = True
save_args(my_args)
try:
print(my_args)
do_textual_inversion_training(config, **my_args)
copy_to_embeddings_folder(my_args)
except Exception as e:
logger.error("An exception occurred during training. The exception was:")
logger.error(str(e))
logger.error("DETAILS:")
logger.error(traceback.format_exc())
def main() -> None:
global config
args: Namespace = parse_args()
config = InvokeAIAppConfig.get_config()
config.parse_args([])
# change root if needed
if args.root_dir:
config.root = args.root_dir
try:
if args.front_end:
do_front_end()
else:
do_textual_inversion_training(config, **vars(args))
except AssertionError as e:
logger.error(e)
sys.exit(-1)
except KeyboardInterrupt:
pass
except (widget.NotEnoughSpaceForWidget, Exception) as e:
if str(e).startswith("Height of 1 allocated"):
logger.error("You need to have at least one diffusers models defined in models.yaml in order to train")
elif str(e).startswith("addwstr"):
logger.error("Not enough window space for the interface. Please make your window larger and try again.")
else:
logger.error(e)
sys.exit(-1)
if __name__ == "__main__":
main()

View File

@ -1,131 +1,26 @@
module.exports = {
env: {
browser: true,
es6: true,
node: true,
},
extends: [
'eslint:recommended',
'plugin:@typescript-eslint/recommended',
'plugin:react/recommended',
'plugin:react-hooks/recommended',
'plugin:react/jsx-runtime',
'prettier',
'plugin:storybook/recommended',
],
parser: '@typescript-eslint/parser',
parserOptions: {
ecmaFeatures: {
jsx: true,
},
ecmaVersion: 2018,
sourceType: 'module',
},
plugins: [
'react',
'@typescript-eslint',
'eslint-plugin-react-hooks',
'i18next',
'path',
'unused-imports',
'simple-import-sort',
'eslint-plugin-import',
// These rules are too strict for normal usage, but are useful for optimizing rerenders
// '@arthurgeron/react-usememo',
],
root: true,
extends: ['@invoke-ai/eslint-config-react'],
plugins: ['path', 'i18next'],
rules: {
// TODO(psyche): Enable this rule. Requires no default exports in components - many changes.
'react-refresh/only-export-components': 'off',
// TODO(psyche): Enable this rule. Requires a lot of eslint-disable-next-line comments.
'@typescript-eslint/consistent-type-assertions': 'off',
// https://github.com/qdanik/eslint-plugin-path
'path/no-relative-imports': ['error', { maxDepth: 0 }],
curly: 'error',
'i18next/no-literal-string': 'warn',
'react/jsx-no-bind': ['error', { allowBind: true }],
'react/jsx-curly-brace-presence': [
'error',
{ props: 'never', children: 'never' },
],
'react-hooks/exhaustive-deps': 'error',
'no-var': 'error',
'brace-style': 'error',
'prefer-template': 'error',
'import/no-duplicates': 'error',
radix: 'error',
'space-before-blocks': 'error',
'import/prefer-default-export': 'off',
'@typescript-eslint/no-unused-vars': 'off',
'unused-imports/no-unused-imports': 'error',
'unused-imports/no-unused-vars': [
'warn',
{
vars: 'all',
varsIgnorePattern: '^_',
args: 'after-used',
argsIgnorePattern: '^_',
},
],
// These rules are too strict for normal usage, but are useful for optimizing rerenders
// '@arthurgeron/react-usememo/require-usememo': [
// 'warn',
// {
// strict: false,
// checkHookReturnObject: false,
// fix: { addImports: true },
// checkHookCalls: false,
// },
// ],
// '@arthurgeron/react-usememo/require-memo': 'warn',
'@typescript-eslint/ban-ts-comment': 'warn',
'@typescript-eslint/no-explicit-any': 'warn',
'@typescript-eslint/no-empty-interface': [
'error',
{
allowSingleExtends: true,
},
],
'@typescript-eslint/consistent-type-imports': [
'error',
{
prefer: 'type-imports',
fixStyle: 'separate-type-imports',
disallowTypeAnnotations: true,
},
],
'@typescript-eslint/no-import-type-side-effects': 'error',
'simple-import-sort/imports': 'error',
'simple-import-sort/exports': 'error',
// Prefer @invoke-ai/ui components over chakra
'no-restricted-imports': 'off',
'@typescript-eslint/no-restricted-imports': [
'warn',
{
paths: [
{
name: '@chakra-ui/react',
message: "Please import from '@invoke-ai/ui' instead.",
},
{
name: '@chakra-ui/layout',
message: "Please import from '@invoke-ai/ui' instead.",
},
{
name: '@chakra-ui/portal',
message: "Please import from '@invoke-ai/ui' instead.",
},
],
},
],
// https://github.com/edvardchen/eslint-plugin-i18next/blob/HEAD/docs/rules/no-literal-string.md
'i18next/no-literal-string': 'error',
},
overrides: [
/**
* Overrides for stories
*/
{
files: ['*.stories.tsx'],
rules: {
// We may not have i18n available in stories.
'i18next/no-literal-string': 'off',
},
},
],
settings: {
react: {
version: 'detect',
},
},
};

View File

@ -1,9 +1,5 @@
module.exports = {
trailingComma: 'es5',
tabWidth: 2,
semi: true,
singleQuote: true,
endOfLine: 'auto',
...require('@invoke-ai/prettier-config-react'),
overrides: [
{
files: ['public/locales/*.json'],

View File

@ -1,7 +1,7 @@
import { PropsWithChildren, memo, useEffect } from 'react';
import { modelChanged } from '../src/features/parameters/store/generationSlice';
import { useAppDispatch } from '../src/app/store/storeHooks';
import { useGlobalModifiersInit } from '@invoke-ai/ui';
import { useGlobalModifiersInit } from '@invoke-ai/ui-library';
/**
* Initializes some state for storybook. Must be in a different component
* so that it is run inside the redux context.

View File

@ -1,13 +1,7 @@
{
"entry": ["src/main.tsx"],
"extensions": [".ts", ".tsx"],
"ignorePatterns": [
"**/node_modules/**",
"dist/**",
"public/**",
"**/*.stories.tsx",
"config/**"
],
"ignorePatterns": ["**/node_modules/**", "dist/**", "public/**", "**/*.stories.tsx", "config/**"],
"ignoreUnresolved": [],
"ignoreUnimported": ["src/i18.d.ts", "vite.config.ts", "src/vite-env.d.ts"],
"respectGitignore": true,

View File

@ -0,0 +1,150 @@
# Invoke UI
<!-- @import "[TOC]" {cmd="toc" depthFrom=2 depthTo=3 orderedList=false} -->
<!-- code_chunk_output -->
- [Dev environment](#dev-environment)
- [Setup](#setup)
- [Package scripts](#package-scripts)
- [Type generation](#type-generation)
- [Localization](#localization)
- [VSCode](#vscode)
- [Contributing](#contributing)
- [Check in before investing your time](#check-in-before-investing-your-time)
- [Commit format](#commit-format)
- [Submitting a PR](#submitting-a-pr)
- [Other docs](#other-docs)
<!-- /code_chunk_output -->
Invoke's UI is made possible by many contributors and open-source libraries. Thank you!
## Dev environment
### Setup
1. Install [node] and [pnpm].
1. Run `pnpm i` to install all packages.
#### Run in dev mode
1. From `invokeai/frontend/web/`, run `pnpm dev`.
1. From repo root, run `python scripts/invokeai-web.py`.
1. Point your browser to the dev server address, e.g. <http://localhost:5173/>
### Package scripts
- `dev`: run the frontend in dev mode, enabling hot reloading
- `build`: run all checks (madge, eslint, prettier, tsc) and then build the frontend
- `typegen`: generate types from the OpenAPI schema (see [Type generation])
- `lint:madge`: check frontend for circular dependencies
- `lint:eslint`: check frontend for code quality
- `lint:prettier`: check frontend for code formatting
- `lint:tsc`: check frontend for type issues
- `lint`: run all checks concurrently
- `fix`: run `eslint` and `prettier`, fixing fixable issues
### Type generation
We use [openapi-typescript] to generate types from the app's OpenAPI schema.
The generated types are committed to the repo in [schema.ts].
```sh
# from the repo root, start the server
python scripts/invokeai-web.py
# from invokeai/frontend/web/, run the script
pnpm typegen
```
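The generated [schema.ts] exports `paths` and `components` interfaces, from which request and response types can be derived. A minimal sketch, assuming a path alias to `src/` and an `ImageDTO` schema name (both illustrative, not guaranteed to exist in the generated file):
```ts
import type { components, paths } from 'services/api/schema';
// A named schema from the OpenAPI spec (name assumed for illustration).
type ImageDTO = components['schemas']['ImageDTO'];
// The JSON body of a successful response for a given path and method
// (the path is likewise assumed for illustration).
type ListImagesResponse =
  paths['/api/v1/images/']['get']['responses']['200']['content']['application/json'];
```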
### Localization
We use [i18next] for localization, but translation to languages other than English happens on our [Weblate] project.
Only the English source strings should be changed on this repo.
### VSCode
#### Example debugger config
```jsonc
{
"version": "0.2.0",
"configurations": [
{
"type": "chrome",
"request": "launch",
"name": "Invoke UI",
"url": "http://localhost:5173",
"webRoot": "${workspaceFolder}/invokeai/frontend/web",
},
],
}
```
#### Remote dev
We've noticed an intermittent timeout issue with the VSCode remote dev port forwarding.
We suggest disabling the editor's port forwarding feature and doing it manually via SSH:
```sh
ssh -L 9090:localhost:9090 -L 5173:localhost:5173 user@host
```
## Contributing
Thanks for your interest in contributing to the Invoke Web UI!
Please follow these guidelines when contributing.
### Check in before investing your time
Please check in before you invest your time on anything besides a trivial fix, in case it conflicts with ongoing work or isn't aligned with the vision for the app.
If a feature request or issue doesn't already exist for the thing you want to work on, please create one.
Ping `@psychedelicious` on [discord] in the `#frontend-dev` channel or in the feature request / issue you want to work on - we're happy to chat.
### Code conventions
- This is a fairly complex app with a deep component tree. Please use memoization (`useCallback`, `useMemo`, `memo`) with enthusiasm.
- If you need to add some global, ephemeral state, please use [nanostores] if possible.
- Be careful with your redux selectors. If they need to be parameterized, consider creating them inside a `useMemo` (see the sketch after this list).
- Feel free to use `lodash` (via `lodash-es`) to make the intent of your code clear.
- Please add comments describing the "why", not the "how" (unless it is really arcane).
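A minimal sketch of the memoization and state conventions above. The `RootState` shape, `$isDragging` atom, and `ItemLabel` component are hypothetical names for illustration only:
```tsx
import { createSelector } from '@reduxjs/toolkit';
import { useStore } from '@nanostores/react';
import { atom } from 'nanostores';
import { memo, useMemo } from 'react';
import { useSelector } from 'react-redux';
// Hypothetical slice shape, for illustration only.
type RootState = { items: { byId: Record<string, { name: string }> } };
// Global, ephemeral state that needs no persistence: a nanostores atom.
export const $isDragging = atom(false);
export const ItemLabel = memo(({ itemId }: { itemId: string }) => {
  // A parameterized selector created inside useMemo, so each component
  // instance memoizes its own selector for its own itemId.
  const selectItemName = useMemo(
    () =>
      createSelector(
        (state: RootState) => state.items.byId,
        (byId) => byId[itemId]?.name ?? ''
      ),
    [itemId]
  );
  const name = useSelector(selectItemName);
  const isDragging = useStore($isDragging);
  return <span>{isDragging ? '…' : name}</span>;
});
ItemLabel.displayName = 'ItemLabel';
```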
### Commit format
Please use the [conventional commits] spec for the web UI, with a scope of "ui":
- `chore(ui): bump deps`
- `chore(ui): lint`
- `feat(ui): add some cool new feature`
- `fix(ui): fix some bug`
### Submitting a PR
- Ensure your branch is tidy. Use an interactive rebase to clean up the commit history and reword the commit messages if they are not descriptive.
- Run `pnpm lint`. Some issues are auto-fixable with `pnpm fix`.
- Fill out the PR form when creating the PR.
- It doesn't need to be super detailed, but a screenshot or video is nice if you changed something visually.
- If a section isn't relevant, delete it. There are no UI tests at this time.
## Other docs
- [Workflows - Design and Implementation]
- [State Management]
[node]: https://nodejs.org/en/download/
[pnpm]: https://github.com/pnpm/pnpm
[discord]: https://discord.gg/ZmtBAhwWhy
[i18next]: https://github.com/i18next/react-i18next
[Weblate]: https://hosted.weblate.org/engage/invokeai/
[openapi-typescript]: https://github.com/drwpow/openapi-typescript
[Type generation]: #type-generation
[schema.ts]: ../src/services/api/schema.ts
[conventional commits]: https://www.conventionalcommits.org/en/v1.0.0/
[Workflows - Design and Implementation]: ./docs/WORKFLOWS_DESIGN_IMPLEMENTATION.md
[State Management]: ./docs/STATE_MGMT.md

View File

@ -22,12 +22,13 @@ export const packageConfig: UserConfig = {
fileName: (format) => `invoke-ai-ui.${format}.js`,
},
rollupOptions: {
external: ['react', 'react-dom', '@emotion/react', '@chakra-ui/react'],
external: ['react', 'react-dom', '@emotion/react', '@chakra-ui/react', '@invoke-ai/ui-library'],
output: {
globals: {
react: 'React',
'react-dom': 'ReactDOM',
'@emotion/react': 'EmotionReact',
'@invoke-ai/ui-library': 'UiLibrary',
},
},
},

View File

@ -1,111 +0,0 @@
# Invoke UI
<!-- @import "[TOC]" {cmd="toc" depthFrom=1 depthTo=6 orderedList=false} -->
<!-- code_chunk_output -->
- [Invoke UI](#invoke-ui)
- [Core Libraries](#core-libraries)
- [Package Scripts](#package-scripts)
- [Client Types Generation](#client-types-generation)
- [Contributing](#contributing)
- [Localization](#localization)
- [Dev Environment](#dev-environment)
- [VSCode Remote Dev](#vscode-remote-dev)
<!-- /code_chunk_output -->
## Core Libraries
Invoke's UI is made possible by a number of excellent open-source libraries. The most heavily-used are listed below, but there are many others.
- [Redux Toolkit]
- [redux-remember]
- [Socket.IO]
- [Chakra UI]
- [KonvaJS]
- [Vite]
- [openapi-typescript]
- [reactflow]
- [zod]
## Package Scripts
See `package.json` for all scripts.
Run with `pnpm <script name>`.
- `dev`: run the frontend in dev mode, enabling hot reloading
- `build`: run all checks (madge, eslint, prettier, tsc) and then build the frontend
- `typegen`: generate types from the OpenAPI schema (see [Client Types Generation])
- `lint:madge`: check frontend for circular dependencies
- `lint:eslint`: check frontend for code quality
- `lint:prettier`: check frontend for code formatting
- `lint:tsc`: check frontend for type issues
- `lint`: run all checks concurrently
- `fix`: run `eslint` and `prettier`, fixing fixable issues
### Client Types Generation
We use [openapi-typescript] to generate types from the app's OpenAPI schema.
The generated types are written to `invokeai/frontend/web/src/services/api/schema.d.ts`. This file is committed to the repo.
The server must be started and available at <http://127.0.0.1:9090>.
```sh
# from the repo root, start the server
python scripts/invokeai-web.py
# from invokeai/frontend/web/, run the script
pnpm typegen
```
## Contributing
Thanks for your interest in contributing to the Invoke Web UI!
We encourage you to ping @psychedelicious and @blessedcoolant on [discord] if you want to contribute, just to touch base and ensure your work doesn't conflict with anything else going on. The project is very active.
### Localization
We use [i18next] for localization, but translation to languages other than English happens on our [Weblate] project.
**Only the English source strings should be changed on this repo.**
### Dev Environment
Install [node] and [pnpm].
From `invokeai/frontend/web/` run `pnpm i` to get everything set up.
Start everything in dev mode:
1. From `invokeai/frontend/web/`: `pnpm dev`
2. From repo root: `python scripts/invokeai-web.py`
3. Point your browser to the dev server address e.g. <http://localhost:5173/>
### VSCode Remote Dev
We've noticed an intermittent issue with the VSCode Remote Dev port forwarding. If you use this feature of VSCode, you may intermittently click the Invoke button and then get nothing until the request times out.
We suggest disabling the IDE's port forwarding feature and doing it manually via SSH:
```sh
ssh -L 9090:localhost:9090 -L 5173:localhost:5173 user@host
```
[node]: https://nodejs.org/en/download/
[pnpm]: https://github.com/pnpm/pnpm
[discord]: https://discord.gg/ZmtBAhwWhy
[Redux Toolkit]: https://github.com/reduxjs/redux-toolkit
[redux-remember]: https://github.com/zewish/redux-remember
[Socket.IO]: https://github.com/socketio/socket.io
[Chakra UI]: https://github.com/chakra-ui/chakra-ui
[KonvaJS]: https://github.com/konvajs/react-konva
[Vite]: https://github.com/vitejs/vite
[i18next]: https://github.com/i18next/react-i18next
[Weblate]: https://hosted.weblate.org/engage/invokeai/
[openapi-typescript]: https://github.com/drwpow/openapi-typescript
[reactflow]: https://github.com/xyflow/xyflow
[zod]: https://github.com/colinhacks/zod
[Client Types Generation]: #client-types-generation

View File

@ -0,0 +1,38 @@
# State Management
The app makes heavy use of Redux Toolkit, its Query library, and `nanostores`.
## Redux
TODO
## `nanostores`
[nanostores] is a tiny state management library. It provides both imperative and declarative APIs.
### Example
```ts
import { useStore } from '@nanostores/react';
import { atom } from 'nanostores';
export const $myStringOption = atom<string | null>(null);
// Outside a component, or within a callback for performance-critical logic
$myStringOption.get();
$myStringOption.set('new value');
// Inside a component
const myStringOption = useStore($myStringOption);
```
### Where to put nanostores
- For global application state, export your stores from `invokeai/frontend/web/src/app/store/nanostores/`.
- For feature state, create a file for the stores next to the redux slice definition (e.g. `invokeai/frontend/web/src/features/myFeature/myFeatureNanostores.ts`).
- For hooks with global state, export the store from the same file the hook is in, or put it next to the hook.
### When to use nanostores
- For non-serializable data that needs to be available throughout the app, use `nanostores` instead of a global.
- For ephemeral global state (i.e. state that does not need to be persisted), use `nanostores` instead of redux.
- For performance-critical code and in callbacks, redux selectors can be problematic due to the declarative reactivity system. Consider refactoring to use `nanostores` if there's a **measurable** performance issue (see the sketch below).
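A sketch of that last point, assuming a hypothetical `$brushSize` atom and drawing helper: reading the atom imperatively inside an event callback gets the current value on demand without subscribing the component to every change.
```ts
import { atom } from 'nanostores';
import { useCallback } from 'react';
export const $brushSize = atom(50);
// Hypothetical drawing helper, stubbed for illustration.
const drawBrushPreview = (x: number, y: number, size: number): void => {
  // ...render the preview...
};
// The callback reads the current value on demand, so the component
// never subscribes to (or re-renders on) $brushSize changes.
export const useBrushPreview = () =>
  useCallback((e: PointerEvent) => {
    drawBrushPreview(e.clientX, e.clientY, $brushSize.get());
  }, []);
```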
[nanostores]: https://github.com/nanostores/nanostores/

View File

@ -19,8 +19,8 @@
"dist"
],
"scripts": {
"dev": "concurrently \"vite dev\" \"pnpm run theme:watch\"",
"dev:host": "concurrently \"vite dev --host\" \"pnpm run theme:watch\"",
"dev": "vite dev",
"dev:host": "vite dev --host",
"build": "pnpm run lint && vite build",
"typegen": "node scripts/typegen.js",
"preview": "vite preview",
@ -31,9 +31,6 @@
"lint": "concurrently -g -n eslint,prettier,tsc,madge -c cyan,green,magenta,yellow \"pnpm run lint:eslint\" \"pnpm run lint:prettier\" \"pnpm run lint:tsc\" \"pnpm run lint:madge\"",
"fix": "eslint --fix . && prettier --log-level warn --write .",
"preinstall": "npx only-allow pnpm",
"postinstall": "pnpm run theme",
"theme": "chakra-cli tokens node_modules/@invoke-ai/ui",
"theme:watch": "chakra-cli tokens node_modules/@invoke-ai/ui --watch",
"storybook": "storybook dev -p 6006",
"build-storybook": "storybook build",
"unimported": "npx unimported"
@ -55,9 +52,10 @@
"@chakra-ui/react-use-size": "^2.1.0",
"@dagrejs/graphlib": "^2.1.13",
"@dnd-kit/core": "^6.1.0",
"@dnd-kit/sortable": "^8.0.0",
"@dnd-kit/utilities": "^3.2.2",
"@fontsource-variable/inter": "^5.0.16",
"@invoke-ai/ui": "0.0.13",
"@invoke-ai/ui-library": "^0.0.18",
"@mantine/form": "6.0.21",
"@nanostores/react": "^0.7.1",
"@reduxjs/toolkit": "2.0.1",
@ -107,7 +105,6 @@
"zod-validation-error": "^3.0.0"
},
"peerDependencies": {
"@chakra-ui/cli": "^2.4.1",
"@chakra-ui/react": "^2.8.2",
"react": "^18.2.0",
"react-dom": "^18.2.0",
@ -115,7 +112,8 @@
},
"devDependencies": {
"@arthurgeron/eslint-plugin-react-usememo": "^2.2.3",
"@chakra-ui/cli": "^2.4.1",
"@invoke-ai/eslint-config-react": "^0.0.13",
"@invoke-ai/prettier-config-react": "^0.0.6",
"@storybook/addon-docs": "^7.6.10",
"@storybook/addon-essentials": "^7.6.10",
"@storybook/addon-interactions": "^7.6.10",
@ -155,7 +153,7 @@
"storybook": "^7.6.10",
"ts-toolbelt": "^9.6.0",
"typescript": "^5.3.3",
"vite": "^5.0.11",
"vite": "^5.0.12",
"vite-plugin-css-injected-by-js": "^3.3.1",
"vite-plugin-dts": "^3.7.1",
"vite-plugin-eslint": "^1.8.1",

View File

@ -22,15 +22,18 @@ dependencies:
'@dnd-kit/core':
specifier: ^6.1.0
version: 6.1.0(react-dom@18.2.0)(react@18.2.0)
'@dnd-kit/sortable':
specifier: ^8.0.0
version: 8.0.0(@dnd-kit/core@6.1.0)(react@18.2.0)
'@dnd-kit/utilities':
specifier: ^3.2.2
version: 3.2.2(react@18.2.0)
'@fontsource-variable/inter':
specifier: ^5.0.16
version: 5.0.16
'@invoke-ai/ui':
specifier: 0.0.13
version: 0.0.13(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@fontsource-variable/inter@5.0.16)(@internationalized/date@3.5.1)(@types/react@18.2.48)(i18next@23.7.16)(react-dom@18.2.0)(react@18.2.0)
'@invoke-ai/ui-library':
specifier: ^0.0.18
version: 0.0.18(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@fontsource-variable/inter@5.0.16)(@internationalized/date@3.5.1)(@types/react@18.2.48)(i18next@23.7.16)(react-dom@18.2.0)(react@18.2.0)
'@mantine/form':
specifier: 6.0.21
version: 6.0.21(react@18.2.0)
@ -177,9 +180,12 @@ devDependencies:
'@arthurgeron/eslint-plugin-react-usememo':
specifier: ^2.2.3
version: 2.2.3
'@chakra-ui/cli':
specifier: ^2.4.1
version: 2.4.1
'@invoke-ai/eslint-config-react':
specifier: ^0.0.13
version: 0.0.13(@typescript-eslint/eslint-plugin@6.19.0)(@typescript-eslint/parser@6.19.0)(eslint-config-prettier@9.1.0)(eslint-plugin-import@2.29.1)(eslint-plugin-react-hooks@4.6.0)(eslint-plugin-react-refresh@0.4.5)(eslint-plugin-react@7.33.2)(eslint-plugin-simple-import-sort@10.0.0)(eslint-plugin-storybook@0.6.15)(eslint-plugin-unused-imports@3.0.0)(eslint@8.56.0)
'@invoke-ai/prettier-config-react':
specifier: ^0.0.6
version: 0.0.6(prettier@3.2.4)
'@storybook/addon-docs':
specifier: ^7.6.10
version: 7.6.10(@types/react-dom@18.2.18)(@types/react@18.2.48)(react-dom@18.2.0)(react@18.2.0)
@ -206,7 +212,7 @@ devDependencies:
version: 7.6.10(react-dom@18.2.0)(react@18.2.0)(typescript@5.3.3)
'@storybook/react-vite':
specifier: ^7.6.10
version: 7.6.10(react-dom@18.2.0)(react@18.2.0)(typescript@5.3.3)(vite@5.0.11)
version: 7.6.10(react-dom@18.2.0)(react@18.2.0)(typescript@5.3.3)(vite@5.0.12)
'@storybook/test':
specifier: ^7.6.10
version: 7.6.10
@ -239,7 +245,7 @@ devDependencies:
version: 6.19.0(eslint@8.56.0)(typescript@5.3.3)
'@vitejs/plugin-react-swc':
specifier: ^3.5.0
version: 3.5.0(vite@5.0.11)
version: 3.5.0(vite@5.0.12)
concurrently:
specifier: ^8.2.2
version: 8.2.2
@ -298,20 +304,20 @@ devDependencies:
specifier: ^5.3.3
version: 5.3.3
vite:
specifier: ^5.0.11
version: 5.0.11(@types/node@20.11.5)
specifier: ^5.0.12
version: 5.0.12(@types/node@20.11.5)
vite-plugin-css-injected-by-js:
specifier: ^3.3.1
version: 3.3.1(vite@5.0.11)
version: 3.3.1(vite@5.0.12)
vite-plugin-dts:
specifier: ^3.7.1
version: 3.7.1(@types/node@20.11.5)(typescript@5.3.3)(vite@5.0.11)
version: 3.7.1(@types/node@20.11.5)(typescript@5.3.3)(vite@5.0.12)
vite-plugin-eslint:
specifier: ^1.8.1
version: 1.8.1(eslint@8.56.0)(vite@5.0.11)
version: 1.8.1(eslint@8.56.0)(vite@5.0.12)
vite-tsconfig-paths:
specifier: ^4.3.1
version: 4.3.1(typescript@5.3.3)(vite@5.0.11)
version: 4.3.1(typescript@5.3.3)(vite@5.0.12)
packages:
@ -1852,19 +1858,6 @@ packages:
react: 18.2.0
dev: false
/@chakra-ui/cli@2.4.1:
resolution: {integrity: sha512-GZZuHUA1cXJWpmYNiVTLPihvY4VhIssRl+AXgw/0IbeodTMop3jWlIioPKLAQeXu5CwvRA6iESyGjnu1V8Zykg==}
hasBin: true
dependencies:
chokidar: 3.5.3
cli-check-node: 1.3.4
cli-handle-unhandled: 1.1.1
cli-welcome: 2.2.2
commander: 9.5.0
esbuild: 0.17.19
prettier: 2.8.8
dev: true
/@chakra-ui/clickable@2.1.0(react@18.2.0):
resolution: {integrity: sha512-flRA/ClPUGPYabu+/GLREZVZr9j2uyyazCAUHAdrTUEdDYCr31SVGhgh7dgKdtq23bOvAQJpIJjw/0Bs0WvbXw==}
peerDependencies:
@ -2894,6 +2887,18 @@ packages:
tslib: 2.6.2
dev: false
/@dnd-kit/sortable@8.0.0(@dnd-kit/core@6.1.0)(react@18.2.0):
resolution: {integrity: sha512-U3jk5ebVXe1Lr7c2wU7SBZjcWdQP+j7peHJfCspnA81enlu88Mgd7CC8Q+pub9ubP7eKVETzJW+IBAhsqbSu/g==}
peerDependencies:
'@dnd-kit/core': ^6.1.0
react: '>=16.8.0'
dependencies:
'@dnd-kit/core': 6.1.0(react-dom@18.2.0)(react@18.2.0)
'@dnd-kit/utilities': 3.2.2(react@18.2.0)
react: 18.2.0
tslib: 2.6.2
dev: false
/@dnd-kit/utilities@3.2.2(react@18.2.0):
resolution: {integrity: sha512-+MKAJEOfaBe5SmV6t34p80MMKhjvUz0vRrvVJbPT0WElzaOJ/1xs+D+KDv+tD/NE5ujfrChEcshd4fLn0wpiqg==}
peerDependencies:
@ -2907,7 +2912,7 @@ packages:
resolution: {integrity: sha512-m4HEDZleaaCH+XgDDsPF15Ht6wTLsgDTeR3WYj9Q/k76JtWhrJjcP4+/XlG8LGT/Rol9qUfOIztXeA84ATpqPQ==}
dependencies:
'@babel/helper-module-imports': 7.22.15
'@babel/runtime': 7.23.6
'@babel/runtime': 7.23.8
'@emotion/hash': 0.9.1
'@emotion/memoize': 0.8.1
'@emotion/serialize': 1.1.3
@ -2966,7 +2971,7 @@ packages:
'@types/react':
optional: true
dependencies:
'@babel/runtime': 7.23.6
'@babel/runtime': 7.23.8
'@emotion/babel-plugin': 11.11.0
'@emotion/cache': 11.11.0
'@emotion/serialize': 1.1.3
@ -3041,15 +3046,6 @@ packages:
dev: true
optional: true
/@esbuild/android-arm64@0.17.19:
resolution: {integrity: sha512-KBMWvEZooR7+kzY0BtbTQn0OAYY7CsiydT63pVEaPtVYF0hXbUaOyZog37DKxK7NF3XacBJOpYT4adIJh+avxA==}
engines: {node: '>=12'}
cpu: [arm64]
os: [android]
requiresBuild: true
dev: true
optional: true
/@esbuild/android-arm64@0.18.20:
resolution: {integrity: sha512-Nz4rJcchGDtENV0eMKUNa6L12zz2zBDXuhj/Vjh18zGqB44Bi7MBMSXjgunJgjRhCmKOjnPuZp4Mb6OKqtMHLQ==}
engines: {node: '>=12'}
@ -3068,15 +3064,6 @@ packages:
dev: true
optional: true
/@esbuild/android-arm@0.17.19:
resolution: {integrity: sha512-rIKddzqhmav7MSmoFCmDIb6e2W57geRsM94gV2l38fzhXMwq7hZoClug9USI2pFRGL06f4IOPHHpFNOkWieR8A==}
engines: {node: '>=12'}
cpu: [arm]
os: [android]
requiresBuild: true
dev: true
optional: true
/@esbuild/android-arm@0.18.20:
resolution: {integrity: sha512-fyi7TDI/ijKKNZTUJAQqiG5T7YjJXgnzkURqmGj13C6dCqckZBLdl4h7bkhHt/t0WP+zO9/zwroDvANaOqO5Sw==}
engines: {node: '>=12'}
@ -3095,15 +3082,6 @@ packages:
dev: true
optional: true
/@esbuild/android-x64@0.17.19:
resolution: {integrity: sha512-uUTTc4xGNDT7YSArp/zbtmbhO0uEEK9/ETW29Wk1thYUJBz3IVnvgEiEwEa9IeLyvnpKrWK64Utw2bgUmDveww==}
engines: {node: '>=12'}
cpu: [x64]
os: [android]
requiresBuild: true
dev: true
optional: true
/@esbuild/android-x64@0.18.20:
resolution: {integrity: sha512-8GDdlePJA8D6zlZYJV/jnrRAi6rOiNaCC/JclcXpB+KIuvfBN4owLtgzY2bsxnx666XjJx2kDPUmnTtR8qKQUg==}
engines: {node: '>=12'}
@ -3122,15 +3100,6 @@ packages:
dev: true
optional: true
/@esbuild/darwin-arm64@0.17.19:
resolution: {integrity: sha512-80wEoCfF/hFKM6WE1FyBHc9SfUblloAWx6FJkFWTWiCoht9Mc0ARGEM47e67W9rI09YoUxJL68WHfDRYEAvOhg==}
engines: {node: '>=12'}
cpu: [arm64]
os: [darwin]
requiresBuild: true
dev: true
optional: true
/@esbuild/darwin-arm64@0.18.20:
resolution: {integrity: sha512-bxRHW5kHU38zS2lPTPOyuyTm+S+eobPUnTNkdJEfAddYgEcll4xkT8DB9d2008DtTbl7uJag2HuE5NZAZgnNEA==}
engines: {node: '>=12'}
@ -3149,15 +3118,6 @@ packages:
dev: true
optional: true
/@esbuild/darwin-x64@0.17.19:
resolution: {integrity: sha512-IJM4JJsLhRYr9xdtLytPLSH9k/oxR3boaUIYiHkAawtwNOXKE8KoU8tMvryogdcT8AU+Bflmh81Xn6Q0vTZbQw==}
engines: {node: '>=12'}
cpu: [x64]
os: [darwin]
requiresBuild: true
dev: true
optional: true
/@esbuild/darwin-x64@0.18.20:
resolution: {integrity: sha512-pc5gxlMDxzm513qPGbCbDukOdsGtKhfxD1zJKXjCCcU7ju50O7MeAZ8c4krSJcOIJGFR+qx21yMMVYwiQvyTyQ==}
engines: {node: '>=12'}
@ -3176,15 +3136,6 @@ packages:
dev: true
optional: true
/@esbuild/freebsd-arm64@0.17.19:
resolution: {integrity: sha512-pBwbc7DufluUeGdjSU5Si+P3SoMF5DQ/F/UmTSb8HXO80ZEAJmrykPyzo1IfNbAoaqw48YRpv8shwd1NoI0jcQ==}
engines: {node: '>=12'}
cpu: [arm64]
os: [freebsd]
requiresBuild: true
dev: true
optional: true
/@esbuild/freebsd-arm64@0.18.20:
resolution: {integrity: sha512-yqDQHy4QHevpMAaxhhIwYPMv1NECwOvIpGCZkECn8w2WFHXjEwrBn3CeNIYsibZ/iZEUemj++M26W3cNR5h+Tw==}
engines: {node: '>=12'}
@ -3203,15 +3154,6 @@ packages:
dev: true
optional: true
/@esbuild/freebsd-x64@0.17.19:
resolution: {integrity: sha512-4lu+n8Wk0XlajEhbEffdy2xy53dpR06SlzvhGByyg36qJw6Kpfk7cp45DR/62aPH9mtJRmIyrXAS5UWBrJT6TQ==}
engines: {node: '>=12'}
cpu: [x64]
os: [freebsd]
requiresBuild: true
dev: true
optional: true
/@esbuild/freebsd-x64@0.18.20:
resolution: {integrity: sha512-tgWRPPuQsd3RmBZwarGVHZQvtzfEBOreNuxEMKFcd5DaDn2PbBxfwLcj4+aenoh7ctXcbXmOQIn8HI6mCSw5MQ==}
engines: {node: '>=12'}
@ -3230,15 +3172,6 @@ packages:
dev: true
optional: true
/@esbuild/linux-arm64@0.17.19:
resolution: {integrity: sha512-ct1Tg3WGwd3P+oZYqic+YZF4snNl2bsnMKRkb3ozHmnM0dGWuxcPTTntAF6bOP0Sp4x0PjSF+4uHQ1xvxfRKqg==}
engines: {node: '>=12'}
cpu: [arm64]
os: [linux]
requiresBuild: true
dev: true
optional: true
/@esbuild/linux-arm64@0.18.20:
resolution: {integrity: sha512-2YbscF+UL7SQAVIpnWvYwM+3LskyDmPhe31pE7/aoTMFKKzIc9lLbyGUpmmb8a8AixOL61sQ/mFh3jEjHYFvdA==}
engines: {node: '>=12'}
@ -3257,15 +3190,6 @@ packages:
dev: true
optional: true
/@esbuild/linux-arm@0.17.19:
resolution: {integrity: sha512-cdmT3KxjlOQ/gZ2cjfrQOtmhG4HJs6hhvm3mWSRDPtZ/lP5oe8FWceS10JaSJC13GBd4eH/haHnqf7hhGNLerA==}
engines: {node: '>=12'}
cpu: [arm]
os: [linux]
requiresBuild: true
dev: true
optional: true
/@esbuild/linux-arm@0.18.20:
resolution: {integrity: sha512-/5bHkMWnq1EgKr1V+Ybz3s1hWXok7mDFUMQ4cG10AfW3wL02PSZi5kFpYKrptDsgb2WAJIvRcDm+qIvXf/apvg==}
engines: {node: '>=12'}
@ -3284,15 +3208,6 @@ packages:
dev: true
optional: true
/@esbuild/linux-ia32@0.17.19:
resolution: {integrity: sha512-w4IRhSy1VbsNxHRQpeGCHEmibqdTUx61Vc38APcsRbuVgK0OPEnQ0YD39Brymn96mOx48Y2laBQGqgZ0j9w6SQ==}
engines: {node: '>=12'}
cpu: [ia32]
os: [linux]
requiresBuild: true
dev: true
optional: true
/@esbuild/linux-ia32@0.18.20:
resolution: {integrity: sha512-P4etWwq6IsReT0E1KHU40bOnzMHoH73aXp96Fs8TIT6z9Hu8G6+0SHSw9i2isWrD2nbx2qo5yUqACgdfVGx7TA==}
engines: {node: '>=12'}
@ -3311,15 +3226,6 @@ packages:
dev: true
optional: true
/@esbuild/linux-loong64@0.17.19:
resolution: {integrity: sha512-2iAngUbBPMq439a+z//gE+9WBldoMp1s5GWsUSgqHLzLJ9WoZLZhpwWuym0u0u/4XmZ3gpHmzV84PonE+9IIdQ==}
engines: {node: '>=12'}
cpu: [loong64]
os: [linux]
requiresBuild: true
dev: true
optional: true
/@esbuild/linux-loong64@0.18.20:
resolution: {integrity: sha512-nXW8nqBTrOpDLPgPY9uV+/1DjxoQ7DoB2N8eocyq8I9XuqJ7BiAMDMf9n1xZM9TgW0J8zrquIb/A7s3BJv7rjg==}
engines: {node: '>=12'}
@ -3338,15 +3244,6 @@ packages:
dev: true
optional: true
/@esbuild/linux-mips64el@0.17.19:
resolution: {integrity: sha512-LKJltc4LVdMKHsrFe4MGNPp0hqDFA1Wpt3jE1gEyM3nKUvOiO//9PheZZHfYRfYl6AwdTH4aTcXSqBerX0ml4A==}
engines: {node: '>=12'}
cpu: [mips64el]
os: [linux]
requiresBuild: true
dev: true
optional: true
/@esbuild/linux-mips64el@0.18.20:
resolution: {integrity: sha512-d5NeaXZcHp8PzYy5VnXV3VSd2D328Zb+9dEq5HE6bw6+N86JVPExrA6O68OPwobntbNJ0pzCpUFZTo3w0GyetQ==}
engines: {node: '>=12'}
@ -3365,15 +3262,6 @@ packages:
dev: true
optional: true
/@esbuild/linux-ppc64@0.17.19:
resolution: {integrity: sha512-/c/DGybs95WXNS8y3Ti/ytqETiW7EU44MEKuCAcpPto3YjQbyK3IQVKfF6nbghD7EcLUGl0NbiL5Rt5DMhn5tg==}
engines: {node: '>=12'}
cpu: [ppc64]
os: [linux]
requiresBuild: true
dev: true
optional: true
/@esbuild/linux-ppc64@0.18.20:
resolution: {integrity: sha512-WHPyeScRNcmANnLQkq6AfyXRFr5D6N2sKgkFo2FqguP44Nw2eyDlbTdZwd9GYk98DZG9QItIiTlFLHJHjxP3FA==}
engines: {node: '>=12'}
@ -3392,15 +3280,6 @@ packages:
dev: true
optional: true
/@esbuild/linux-riscv64@0.17.19:
resolution: {integrity: sha512-FC3nUAWhvFoutlhAkgHf8f5HwFWUL6bYdvLc/TTuxKlvLi3+pPzdZiFKSWz/PF30TB1K19SuCxDTI5KcqASJqA==}
engines: {node: '>=12'}
cpu: [riscv64]
os: [linux]
requiresBuild: true
dev: true
optional: true
/@esbuild/linux-riscv64@0.18.20:
resolution: {integrity: sha512-WSxo6h5ecI5XH34KC7w5veNnKkju3zBRLEQNY7mv5mtBmrP/MjNBCAlsM2u5hDBlS3NGcTQpoBvRzqBcRtpq1A==}
engines: {node: '>=12'}
@ -3419,15 +3298,6 @@ packages:
dev: true
optional: true
/@esbuild/linux-s390x@0.17.19:
resolution: {integrity: sha512-IbFsFbxMWLuKEbH+7sTkKzL6NJmG2vRyy6K7JJo55w+8xDk7RElYn6xvXtDW8HCfoKBFK69f3pgBJSUSQPr+4Q==}
engines: {node: '>=12'}
cpu: [s390x]
os: [linux]
requiresBuild: true
dev: true
optional: true
/@esbuild/linux-s390x@0.18.20:
resolution: {integrity: sha512-+8231GMs3mAEth6Ja1iK0a1sQ3ohfcpzpRLH8uuc5/KVDFneH6jtAJLFGafpzpMRO6DzJ6AvXKze9LfFMrIHVQ==}
engines: {node: '>=12'}
@ -3446,15 +3316,6 @@ packages:
dev: true
optional: true
/@esbuild/linux-x64@0.17.19:
resolution: {integrity: sha512-68ngA9lg2H6zkZcyp22tsVt38mlhWde8l3eJLWkyLrp4HwMUr3c1s/M2t7+kHIhvMjglIBrFpncX1SzMckomGw==}
engines: {node: '>=12'}
cpu: [x64]
os: [linux]
requiresBuild: true
dev: true
optional: true
/@esbuild/linux-x64@0.18.20:
resolution: {integrity: sha512-UYqiqemphJcNsFEskc73jQ7B9jgwjWrSayxawS6UVFZGWrAAtkzjxSqnoclCXxWtfwLdzU+vTpcNYhpn43uP1w==}
engines: {node: '>=12'}
@ -3473,15 +3334,6 @@ packages:
dev: true
optional: true
/@esbuild/netbsd-x64@0.17.19:
resolution: {integrity: sha512-CwFq42rXCR8TYIjIfpXCbRX0rp1jo6cPIUPSaWwzbVI4aOfX96OXY8M6KNmtPcg7QjYeDmN+DD0Wp3LaBOLf4Q==}
engines: {node: '>=12'}
cpu: [x64]
os: [netbsd]
requiresBuild: true
dev: true
optional: true
/@esbuild/netbsd-x64@0.18.20:
resolution: {integrity: sha512-iO1c++VP6xUBUmltHZoMtCUdPlnPGdBom6IrO4gyKPFFVBKioIImVooR5I83nTew5UOYrk3gIJhbZh8X44y06A==}
engines: {node: '>=12'}
@ -3500,15 +3352,6 @@ packages:
dev: true
optional: true
/@esbuild/openbsd-x64@0.17.19:
resolution: {integrity: sha512-cnq5brJYrSZ2CF6c35eCmviIN3k3RczmHz8eYaVlNasVqsNY+JKohZU5MKmaOI+KkllCdzOKKdPs762VCPC20g==}
engines: {node: '>=12'}
cpu: [x64]
os: [openbsd]
requiresBuild: true
dev: true
optional: true
/@esbuild/openbsd-x64@0.18.20:
resolution: {integrity: sha512-e5e4YSsuQfX4cxcygw/UCPIEP6wbIL+se3sxPdCiMbFLBWu0eiZOJ7WoD+ptCLrmjZBK1Wk7I6D/I3NglUGOxg==}
engines: {node: '>=12'}
@ -3527,15 +3370,6 @@ packages:
dev: true
optional: true
/@esbuild/sunos-x64@0.17.19:
resolution: {integrity: sha512-vCRT7yP3zX+bKWFeP/zdS6SqdWB8OIpaRq/mbXQxTGHnIxspRtigpkUcDMlSCOejlHowLqII7K2JKevwyRP2rg==}
engines: {node: '>=12'}
cpu: [x64]
os: [sunos]
requiresBuild: true
dev: true
optional: true
/@esbuild/sunos-x64@0.18.20:
resolution: {integrity: sha512-kDbFRFp0YpTQVVrqUd5FTYmWo45zGaXe0X8E1G/LKFC0v8x0vWrhOWSLITcCn63lmZIxfOMXtCfti/RxN/0wnQ==}
engines: {node: '>=12'}
@ -3554,15 +3388,6 @@ packages:
dev: true
optional: true
/@esbuild/win32-arm64@0.17.19:
resolution: {integrity: sha512-yYx+8jwowUstVdorcMdNlzklLYhPxjniHWFKgRqH7IFlUEa0Umu3KuYplf1HUZZ422e3NU9F4LGb+4O0Kdcaag==}
engines: {node: '>=12'}
cpu: [arm64]
os: [win32]
requiresBuild: true
dev: true
optional: true
/@esbuild/win32-arm64@0.18.20:
resolution: {integrity: sha512-ddYFR6ItYgoaq4v4JmQQaAI5s7npztfV4Ag6NrhiaW0RrnOXqBkgwZLofVTlq1daVTQNhtI5oieTvkRPfZrePg==}
engines: {node: '>=12'}
@ -3581,15 +3406,6 @@ packages:
dev: true
optional: true
/@esbuild/win32-ia32@0.17.19:
resolution: {integrity: sha512-eggDKanJszUtCdlVs0RB+h35wNlb5v4TWEkq4vZcmVt5u/HiDZrTXe2bWFQUez3RgNHwx/x4sk5++4NSSicKkw==}
engines: {node: '>=12'}
cpu: [ia32]
os: [win32]
requiresBuild: true
dev: true
optional: true
/@esbuild/win32-ia32@0.18.20:
resolution: {integrity: sha512-Wv7QBi3ID/rROT08SABTS7eV4hX26sVduqDOTe1MvGMjNd3EjOz4b7zeexIR62GTIEKrfJXKL9LFxTYgkyeu7g==}
engines: {node: '>=12'}
@ -3608,15 +3424,6 @@ packages:
dev: true
optional: true
/@esbuild/win32-x64@0.17.19:
resolution: {integrity: sha512-lAhycmKnVOuRYNtRtatQR1LPQf2oYCkRGkSFnseDAKPl8lu5SOsK/e1sXe5a0Pc5kHIHe6P2I/ilntNv2xf3cA==}
engines: {node: '>=12'}
cpu: [x64]
os: [win32]
requiresBuild: true
dev: true
optional: true
/@esbuild/win32-x64@0.18.20:
resolution: {integrity: sha512-kTdfRcSiDfQca/y9QIkng02avJ+NCaQvrMejlsB3RRv5sE9rRoeBPISaZpKxHELzRxZyLvNts1P27W3wV+8geQ==}
engines: {node: '>=12'}
@ -3759,8 +3566,44 @@ packages:
'@swc/helpers': 0.5.3
dev: false
/@invoke-ai/ui@0.0.13(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@fontsource-variable/inter@5.0.16)(@internationalized/date@3.5.1)(@types/react@18.2.48)(i18next@23.7.16)(react-dom@18.2.0)(react@18.2.0):
resolution: {integrity: sha512-X4Txij2dMnzPUXTPhorBHezByJQ/ceyHxCM+zZ0gpFsSyXUieOFWjaSu+dAVpghS9y0dxFQGayHvNyX6VsX/PA==}
/@invoke-ai/eslint-config-react@0.0.13(@typescript-eslint/eslint-plugin@6.19.0)(@typescript-eslint/parser@6.19.0)(eslint-config-prettier@9.1.0)(eslint-plugin-import@2.29.1)(eslint-plugin-react-hooks@4.6.0)(eslint-plugin-react-refresh@0.4.5)(eslint-plugin-react@7.33.2)(eslint-plugin-simple-import-sort@10.0.0)(eslint-plugin-storybook@0.6.15)(eslint-plugin-unused-imports@3.0.0)(eslint@8.56.0):
resolution: {integrity: sha512-dfo9k+wPHdvpy1z6ABoYXR/Ttzs1FAnbC46ttIxVhZuqDq8K5cLWznivrOfl7f0hJb8Cb8HiuQb4pHDxhHBDqA==}
peerDependencies:
'@typescript-eslint/eslint-plugin': ^6.19.0
'@typescript-eslint/parser': ^6.19.0
eslint: ^8.56.0
eslint-config-prettier: ^9.1.0
eslint-plugin-import: ^2.29.1
eslint-plugin-react: ^7.33.2
eslint-plugin-react-hooks: ^4.6.0
eslint-plugin-react-refresh: ^0.4.5
eslint-plugin-simple-import-sort: ^10.0.0
eslint-plugin-storybook: ^0.6.15
eslint-plugin-unused-imports: ^3.0.0
dependencies:
'@typescript-eslint/eslint-plugin': 6.19.0(@typescript-eslint/parser@6.19.0)(eslint@8.56.0)(typescript@5.3.3)
'@typescript-eslint/parser': 6.19.0(eslint@8.56.0)(typescript@5.3.3)
eslint: 8.56.0
eslint-config-prettier: 9.1.0(eslint@8.56.0)
eslint-plugin-import: 2.29.1(@typescript-eslint/parser@6.19.0)(eslint@8.56.0)
eslint-plugin-react: 7.33.2(eslint@8.56.0)
eslint-plugin-react-hooks: 4.6.0(eslint@8.56.0)
eslint-plugin-react-refresh: 0.4.5(eslint@8.56.0)
eslint-plugin-simple-import-sort: 10.0.0(eslint@8.56.0)
eslint-plugin-storybook: 0.6.15(eslint@8.56.0)(typescript@5.3.3)
eslint-plugin-unused-imports: 3.0.0(@typescript-eslint/eslint-plugin@6.19.0)(eslint@8.56.0)
dev: true
/@invoke-ai/prettier-config-react@0.0.6(prettier@3.2.4):
resolution: {integrity: sha512-qHE6GAw/Aka/8TLTN9U1U+8pxjaFe5irDv/uSgzqmrBR1rGiVyMp19pEficWRRt+03zYdquiiDjTmoabWQxY0Q==}
peerDependencies:
prettier: ^3.2.4
dependencies:
prettier: 3.2.4
dev: true
/@invoke-ai/ui-library@0.0.18(@chakra-ui/form-control@2.2.0)(@chakra-ui/icon@3.2.0)(@chakra-ui/media-query@3.3.0)(@chakra-ui/menu@2.2.1)(@chakra-ui/spinner@2.1.0)(@chakra-ui/system@2.6.2)(@fontsource-variable/inter@5.0.16)(@internationalized/date@3.5.1)(@types/react@18.2.48)(i18next@23.7.16)(react-dom@18.2.0)(react@18.2.0):
resolution: {integrity: sha512-Yme+2+pzYy3TPb7ZT0hYmBwahH29ZRSVIxLKSexh3BsbJXbTzGssRQU78QvK6Ymxemgbso3P8Rs+IW0zNhQKjQ==}
peerDependencies:
'@fontsource-variable/inter': ^5.0.16
react: ^18.2.0
@ -3782,11 +3625,12 @@ packages:
framer-motion: 10.18.0(react-dom@18.2.0)(react@18.2.0)
lodash-es: 4.17.21
nanostores: 0.9.5
overlayscrollbars: 2.4.6
overlayscrollbars-react: 0.5.3(overlayscrollbars@2.4.6)(react@18.2.0)
overlayscrollbars: 2.4.7
overlayscrollbars-react: 0.5.4(overlayscrollbars@2.4.7)(react@18.2.0)
react: 18.2.0
react-dom: 18.2.0(react@18.2.0)
react-i18next: 14.0.0(i18next@23.7.16)(react-dom@18.2.0)(react@18.2.0)
react-i18next: 14.0.1(i18next@23.7.16)(react-dom@18.2.0)(react@18.2.0)
react-icons: 5.0.1(react@18.2.0)
react-select: 5.8.0(@types/react@18.2.48)(react-dom@18.2.0)(react@18.2.0)
transitivePeerDependencies:
- '@chakra-ui/form-control'
@@ -3882,7 +3726,7 @@ packages:
chalk: 4.1.2
dev: true
/@joshwooding/vite-plugin-react-docgen-typescript@0.3.0(typescript@5.3.3)(vite@5.0.11):
/@joshwooding/vite-plugin-react-docgen-typescript@0.3.0(typescript@5.3.3)(vite@5.0.12):
resolution: {integrity: sha512-2D6y7fNvFmsLmRt6UCOFJPvFoPMJGT0Uh1Wg0RaigUp7kdQPs6yYn8Dmx6GZkOH/NW0yMTwRz/p0SRMMRo50vA==}
peerDependencies:
typescript: '>= 4.3.x'
@@ -3896,7 +3740,7 @@ packages:
magic-string: 0.27.0
react-docgen-typescript: 2.2.2(typescript@5.3.3)
typescript: 5.3.3
vite: 5.0.11(@types/node@20.11.5)
vite: 5.0.12(@types/node@20.11.5)
dev: true
/@jridgewell/gen-mapping@0.3.3:
@@ -5131,7 +4975,7 @@ packages:
- supports-color
dev: true
/@storybook/builder-vite@7.6.10(typescript@5.3.3)(vite@5.0.11):
/@storybook/builder-vite@7.6.10(typescript@5.3.3)(vite@5.0.12):
resolution: {integrity: sha512-qxe19axiNJVdIKj943e1ucAmADwU42fTGgMSdBzzrvfH3pSOmx2057aIxRzd8YtBRnj327eeqpgCHYIDTunMYQ==}
peerDependencies:
'@preact/preset-vite': '*'
@@ -5163,7 +5007,7 @@ packages:
magic-string: 0.30.5
rollup: 3.29.4
typescript: 5.3.3
vite: 5.0.11(@types/node@20.11.5)
vite: 5.0.12(@types/node@20.11.5)
transitivePeerDependencies:
- encoding
- supports-color
@@ -5519,7 +5363,7 @@ packages:
react-dom: 18.2.0(react@18.2.0)
dev: true
/@storybook/react-vite@7.6.10(react-dom@18.2.0)(react@18.2.0)(typescript@5.3.3)(vite@5.0.11):
/@storybook/react-vite@7.6.10(react-dom@18.2.0)(react@18.2.0)(typescript@5.3.3)(vite@5.0.12):
resolution: {integrity: sha512-YE2+J1wy8nO+c6Nv/hBMu91Edew3K184L1KSnfoZV8vtq2074k1Me/8pfe0QNuq631AncpfCYNb37yBAXQ/80w==}
engines: {node: '>=16'}
peerDependencies:
@@ -5527,16 +5371,16 @@ packages:
react-dom: ^16.8.0 || ^17.0.0 || ^18.0.0
vite: ^3.0.0 || ^4.0.0 || ^5.0.0
dependencies:
'@joshwooding/vite-plugin-react-docgen-typescript': 0.3.0(typescript@5.3.3)(vite@5.0.11)
'@joshwooding/vite-plugin-react-docgen-typescript': 0.3.0(typescript@5.3.3)(vite@5.0.12)
'@rollup/pluginutils': 5.1.0
'@storybook/builder-vite': 7.6.10(typescript@5.3.3)(vite@5.0.11)
'@storybook/builder-vite': 7.6.10(typescript@5.3.3)(vite@5.0.12)
'@storybook/react': 7.6.10(react-dom@18.2.0)(react@18.2.0)(typescript@5.3.3)
'@vitejs/plugin-react': 3.1.0(vite@5.0.11)
'@vitejs/plugin-react': 3.1.0(vite@5.0.12)
magic-string: 0.30.5
react: 18.2.0
react-docgen: 7.0.3
react-dom: 18.2.0(react@18.2.0)
vite: 5.0.11(@types/node@20.11.5)
vite: 5.0.12(@types/node@20.11.5)
transitivePeerDependencies:
- '@preact/preset-vite'
- encoding
@@ -6611,18 +6455,18 @@ packages:
resolution: {integrity: sha512-zuVdFrMJiuCDQUMCzQaD6KL28MjnqqN8XnAqiEq9PNm/hCPTSGfrXCOfwj1ow4LFb/tNymJPwsNbVePc1xFqrQ==}
dev: true
/@vitejs/plugin-react-swc@3.5.0(vite@5.0.11):
/@vitejs/plugin-react-swc@3.5.0(vite@5.0.12):
resolution: {integrity: sha512-1PrOvAaDpqlCV+Up8RkAh9qaiUjoDUcjtttyhXDKw53XA6Ve16SOp6cCOpRs8Dj8DqUQs6eTW5YkLcLJjrXAig==}
peerDependencies:
vite: ^4 || ^5
dependencies:
'@swc/core': 1.3.101
vite: 5.0.11(@types/node@20.11.5)
vite: 5.0.12(@types/node@20.11.5)
transitivePeerDependencies:
- '@swc/helpers'
dev: true
/@vitejs/plugin-react@3.1.0(vite@5.0.11):
/@vitejs/plugin-react@3.1.0(vite@5.0.12):
resolution: {integrity: sha512-AfgcRL8ZBhAlc3BFdigClmTUMISmmzHn7sB2h9U1odvc5U/MjWXsAaz18b/WoppUTDBzxOJwo2VdClfUcItu9g==}
engines: {node: ^14.18.0 || >=16.0.0}
peerDependencies:
@@ -6633,7 +6477,7 @@ packages:
'@babel/plugin-transform-react-jsx-source': 7.23.3(@babel/core@7.23.7)
magic-string: 0.27.0
react-refresh: 0.14.0
vite: 5.0.11(@types/node@20.11.5)
vite: 5.0.12(@types/node@20.11.5)
transitivePeerDependencies:
- supports-color
dev: true
@@ -7640,7 +7484,7 @@ packages:
resolution: {integrity: sha512-Cg7TFGpIr01vOQNODXOOaGz2NpCU5gl8x1qJFbb6hbZxR7XrcE2vtbAsTAbJ7/xwJtUuJEw8K8Zr/AE0LHlesg==}
engines: {node: '>=10', npm: '>=6'}
dependencies:
'@babel/runtime': 7.23.6
'@babel/runtime': 7.23.8
cosmiconfig: 7.1.0
resolve: 1.22.8
dev: false
@@ -7959,17 +7803,6 @@ packages:
engines: {node: '>=6'}
dev: true
/clear-any-console@1.16.2:
resolution: {integrity: sha512-OL/7wZpNy9x0GBSzz3poWja84Nr7iaH8aYNsJ5Uet2BVLj6Lm1zvWpZN/yH46Vv3ae7YfHmLLMmfHj911fshJg==}
dev: true
/cli-check-node@1.3.4:
resolution: {integrity: sha512-iLGgQXm82iP8eH3R67qbOWs5qqUOLmNnMy5Lzl/RybcMh3y+H2zWU5POzuQ6oDUOdz4XWuxcFhP75szqd6frLg==}
dependencies:
chalk: 3.0.0
log-symbols: 3.0.0
dev: true
/cli-cursor@3.1.0:
resolution: {integrity: sha512-I/zHAwsKf9FqGoXM4WWRACob9+SNukZTd94DWF57E4toouRulbCxcUh6RKUEOQlYTHJnzkPMySvPNaaSLNfLZw==}
engines: {node: '>=8'}
@@ -7977,19 +7810,6 @@ packages:
restore-cursor: 3.1.0
dev: true
/cli-handle-error@4.4.0:
resolution: {integrity: sha512-RyBCnKlc7xVr79cKb9RfBq+4fjwQeX8HKeNzIPnI/W+DWWIUUKh2ur576DpwJ3kZt2UGHlIAOF7N9txy+mgZsA==}
dependencies:
chalk: 3.0.0
log-symbols: 3.0.0
dev: true
/cli-handle-unhandled@1.1.1:
resolution: {integrity: sha512-Em91mJvU7VdgT2MxQpyY633vW1tDzRjPDbii6ZjEBHHLLh0xDoVkFt/wjvi9nSvJcz9rJmvtJSK8KL/hvF0Stg==}
dependencies:
cli-handle-error: 4.4.0
dev: true
/cli-spinners@2.9.2:
resolution: {integrity: sha512-ywqV+5MmyL4E7ybXgKys4DugZbX0FC6LnwrhjuykIjnK9k8OQacQ7axGKnjDXWNhns0xot3bZI5h55H8yo9cJg==}
engines: {node: '>=6'}
@@ -8004,14 +7824,6 @@ packages:
'@colors/colors': 1.5.0
dev: true
/cli-welcome@2.2.2:
resolution: {integrity: sha512-LgDGS0TW4nIf8v81wpuZzfOEDPcy68u0jKR0Fy5IaWftqdminI6FoDiMFt1mjPylqKGNv/wFsZ7fCs93IeDMIw==}
dependencies:
chalk: 2.4.2
clear-any-console: 1.16.2
prettier: 2.8.8
dev: true
/cliui@8.0.1:
resolution: {integrity: sha512-BSeNnyus75C4//NQ9gQt1/csTXyo/8Sb+afLAkzAptFuMsod9HFokGNudZpi/oQV73hnVK+sR+5PVRMd+Dr7YQ==}
engines: {node: '>=12'}
@@ -8606,7 +8418,7 @@ packages:
dependencies:
debug: 4.3.4
is-url: 1.2.4
postcss: 8.4.32
postcss: 8.4.33
postcss-values-parser: 2.0.1
transitivePeerDependencies:
- supports-color
@@ -8617,8 +8429,8 @@ packages:
engines: {node: ^12.0.0 || ^14.0.0 || >=16.0.0}
dependencies:
is-url: 1.2.4
postcss: 8.4.32
postcss-values-parser: 6.0.2(postcss@8.4.32)
postcss: 8.4.33
postcss-values-parser: 6.0.2(postcss@8.4.33)
dev: true
/detective-sass@3.0.2:
@@ -8968,36 +8780,6 @@ packages:
- supports-color
dev: true
/esbuild@0.17.19:
resolution: {integrity: sha512-XQ0jAPFkK/u3LcVRcvVHQcTIqD6E2H1fvZMA5dQPSOWb3suUbWbfbRf94pjc0bNzRYLfIrDRQXr7X+LHIm5oHw==}
engines: {node: '>=12'}
hasBin: true
requiresBuild: true
optionalDependencies:
'@esbuild/android-arm': 0.17.19
'@esbuild/android-arm64': 0.17.19
'@esbuild/android-x64': 0.17.19
'@esbuild/darwin-arm64': 0.17.19
'@esbuild/darwin-x64': 0.17.19
'@esbuild/freebsd-arm64': 0.17.19
'@esbuild/freebsd-x64': 0.17.19
'@esbuild/linux-arm': 0.17.19
'@esbuild/linux-arm64': 0.17.19
'@esbuild/linux-ia32': 0.17.19
'@esbuild/linux-loong64': 0.17.19
'@esbuild/linux-mips64el': 0.17.19
'@esbuild/linux-ppc64': 0.17.19
'@esbuild/linux-riscv64': 0.17.19
'@esbuild/linux-s390x': 0.17.19
'@esbuild/linux-x64': 0.17.19
'@esbuild/netbsd-x64': 0.17.19
'@esbuild/openbsd-x64': 0.17.19
'@esbuild/sunos-x64': 0.17.19
'@esbuild/win32-arm64': 0.17.19
'@esbuild/win32-ia32': 0.17.19
'@esbuild/win32-x64': 0.17.19
dev: true
/esbuild@0.18.20:
resolution: {integrity: sha512-ceqxoedUrcayh7Y7ZX6NdbbDzGROiyVBgC4PriJThBKSVPWnnFHZAkfI1lJT8QFkOwH4qOS2SJkS4wvpGl8BpA==}
engines: {node: '>=12'}
@@ -9198,6 +8980,14 @@ packages:
eslint: 8.56.0
dev: true
/eslint-plugin-react-refresh@0.4.5(eslint@8.56.0):
resolution: {integrity: sha512-D53FYKJa+fDmZMtriODxvhwrO+IOqrxoEo21gMA0sjHdU6dPVH4OhyFip9ypl8HOF5RV5KdTo+rBQLvnY2cO8w==}
peerDependencies:
eslint: '>=7'
dependencies:
eslint: 8.56.0
dev: true
/eslint-plugin-react@7.33.2(eslint@8.56.0):
resolution: {integrity: sha512-73QQMKALArI8/7xGLNI/3LylrEYrlKZSb5C9+q3OtOewTnMQi5cT+aE9E41sLCmli3I9PGGmD1yiZydyo4FEPw==}
engines: {node: '>=4'}
@@ -10908,13 +10698,6 @@ packages:
resolution: {integrity: sha512-v2kDEe57lecTulaDIuNTPy3Ry4gLGJ6Z1O3vE1krgXZNrsQ+LFTGHVxVjcXPs17LhbZVGedAJv8XZ1tvj5FvSg==}
dev: true
/log-symbols@3.0.0:
resolution: {integrity: sha512-dSkNGuI7iG3mfvDzUuYZyvk5dD9ocYCYzNU6CYDE6+Xqd+gwme6Z00NS3dUh8mq/73HaEtT7m6W+yUPtU6BZnQ==}
engines: {node: '>=8'}
dependencies:
chalk: 2.4.2
dev: true
/log-symbols@4.1.0:
resolution: {integrity: sha512-8XPvpAA8uyhfteu8pIvQxpJZ7SYYdpUivZpGy6sFsBuKRY/7rQGavedeB8aK+Zkyq6upMFVL/9AW6vOYzfRyLg==}
engines: {node: '>=10'}
@@ -11565,10 +11348,24 @@ packages:
react: 18.2.0
dev: false
/overlayscrollbars-react@0.5.4(overlayscrollbars@2.4.7)(react@18.2.0):
resolution: {integrity: sha512-FPKx9XnXovTnI4+2JXig5uEaTLSEJ6svOwPzIfBBXTHBRNsz2+WhYUmfM0K/BNYxjgDEwuPm+NQhEoOA0RoG1g==}
peerDependencies:
overlayscrollbars: ^2.0.0
react: '>=16.8.0'
dependencies:
overlayscrollbars: 2.4.7
react: 18.2.0
dev: false
/overlayscrollbars@2.4.6:
resolution: {integrity: sha512-C7tmhetwMv9frEvIT/RfkAVEgbjRNz/Gh2zE8BVmN+jl35GRaAnz73rlGQCMRoC2arpACAXyMNnJkzHb7GBrcA==}
dev: false
/overlayscrollbars@2.4.7:
resolution: {integrity: sha512-02X2/nHno35dzebCx+EO2tRDaKAOltZqUKdUqvq3Pt8htCuhJbYi+mjr0CYerVeGRRoZ2Uo6/8XrNg//DJJ+GA==}
dev: false
/p-limit@2.3.0:
resolution: {integrity: sha512-//88mFWSJx8lxCzwdAABTJL2MyWB12+eIY7MDL2SqLmAkeKU9qxRvWuSyTjm3FUmpBEMuFfckAIqEaVGUDxb6w==}
engines: {node: '>=6'}
@@ -11774,7 +11571,7 @@ packages:
uniq: 1.0.1
dev: true
/postcss-values-parser@6.0.2(postcss@8.4.32):
/postcss-values-parser@6.0.2(postcss@8.4.33):
resolution: {integrity: sha512-YLJpK0N1brcNJrs9WatuJFtHaV9q5aAOj+S4DI5S7jgHlRfm0PIbDCAFRYMQD5SHq7Fy6xsDhyutgS0QOAs0qw==}
engines: {node: '>=10'}
peerDependencies:
@@ -11782,19 +11579,10 @@ packages:
dependencies:
color-name: 1.1.4
is-url-superb: 4.0.0
postcss: 8.4.32
postcss: 8.4.33
quote-unquote: 1.0.0
dev: true
/postcss@8.4.32:
resolution: {integrity: sha512-D/kj5JNu6oo2EIy+XL/26JEDTlIbB8hw85G8StOE6L74RQAVVP5rej6wxCNqyMbR4RkPfqvezVbPw81Ngd6Kcw==}
engines: {node: ^10 || ^12 || >=14}
dependencies:
nanoid: 3.3.7
picocolors: 1.0.0
source-map-js: 1.0.2
dev: true
/postcss@8.4.33:
resolution: {integrity: sha512-Kkpbhhdjw2qQs2O2DGX+8m5OVqEcbB9HRBvuYM9pgrjEFUg30A9LmXNlTAUj4S9kgtGyrMbTzVjH7E+s5Re2yg==}
engines: {node: ^10 || ^12 || >=14}
@@ -12207,6 +11995,26 @@ packages:
react-dom: 18.2.0(react@18.2.0)
dev: false
/react-i18next@14.0.1(i18next@23.7.16)(react-dom@18.2.0)(react@18.2.0):
resolution: {integrity: sha512-TMV8hFismBmpMdIehoFHin/okfvgjFhp723RYgIqB4XyhDobVMyukyM3Z8wtTRmajyFMZrBl/OaaXF2P6WjUAw==}
peerDependencies:
i18next: '>= 23.2.3'
react: '>= 16.8.0'
react-dom: '*'
react-native: '*'
peerDependenciesMeta:
react-dom:
optional: true
react-native:
optional: true
dependencies:
'@babel/runtime': 7.23.8
html-parse-stringify: 3.0.1
i18next: 23.7.16
react: 18.2.0
react-dom: 18.2.0(react@18.2.0)
dev: false
/react-icons@5.0.1(react@18.2.0):
resolution: {integrity: sha512-WqLZJ4bLzlhmsvme6iFdgO8gfZP17rfjYEJ2m9RsZjZ+cc4k1hTzknEz63YS1MeT50kVzoa1Nz36f4BEx+Wigw==}
peerDependencies:
@@ -14020,15 +13828,15 @@ packages:
engines: {node: '>= 0.8'}
dev: true
/vite-plugin-css-injected-by-js@3.3.1(vite@5.0.11):
/vite-plugin-css-injected-by-js@3.3.1(vite@5.0.12):
resolution: {integrity: sha512-PjM/X45DR3/V1K1fTRs8HtZHEQ55kIfdrn+dzaqNBFrOYO073SeSNCxp4j7gSYhV9NffVHaEnOL4myoko0ePAg==}
peerDependencies:
vite: '>2.0.0-0'
dependencies:
vite: 5.0.11(@types/node@20.11.5)
vite: 5.0.12(@types/node@20.11.5)
dev: true
/vite-plugin-dts@3.7.1(@types/node@20.11.5)(typescript@5.3.3)(vite@5.0.11):
/vite-plugin-dts@3.7.1(@types/node@20.11.5)(typescript@5.3.3)(vite@5.0.12):
resolution: {integrity: sha512-VZJckNFpVfRAkmOxhGT5OgTUVWVXxkNQqLpBUuiNGAr9HbtvmvsPLo2JB3Xhn+o/Z9+CT6YZfYa4bX9SGR5hNw==}
engines: {node: ^14.18.0 || >=16.0.0}
peerDependencies:
@@ -14044,7 +13852,7 @@ packages:
debug: 4.3.4
kolorist: 1.8.0
typescript: 5.3.3
vite: 5.0.11(@types/node@20.11.5)
vite: 5.0.12(@types/node@20.11.5)
vue-tsc: 1.8.27(typescript@5.3.3)
transitivePeerDependencies:
- '@types/node'
@@ -14052,7 +13860,7 @@ packages:
- supports-color
dev: true
/vite-plugin-eslint@1.8.1(eslint@8.56.0)(vite@5.0.11):
/vite-plugin-eslint@1.8.1(eslint@8.56.0)(vite@5.0.12):
resolution: {integrity: sha512-PqdMf3Y2fLO9FsNPmMX+//2BF5SF8nEWspZdgl4kSt7UvHDRHVVfHvxsD7ULYzZrJDGRxR81Nq7TOFgwMnUang==}
peerDependencies:
eslint: '>=7'
@@ -14062,10 +13870,10 @@ packages:
'@types/eslint': 8.56.0
eslint: 8.56.0
rollup: 2.79.1
vite: 5.0.11(@types/node@20.11.5)
vite: 5.0.12(@types/node@20.11.5)
dev: true
/vite-tsconfig-paths@4.3.1(typescript@5.3.3)(vite@5.0.11):
/vite-tsconfig-paths@4.3.1(typescript@5.3.3)(vite@5.0.12):
resolution: {integrity: sha512-cfgJwcGOsIxXOLU/nELPny2/LUD/lcf1IbfyeKTv2bsupVbTH/xpFtdQlBmIP1GEK2CjjLxYhFfB+QODFAx5aw==}
peerDependencies:
vite: '*'
@@ -14076,14 +13884,14 @@ packages:
debug: 4.3.4
globrex: 0.1.2
tsconfck: 3.0.1(typescript@5.3.3)
vite: 5.0.11(@types/node@20.11.5)
vite: 5.0.12(@types/node@20.11.5)
transitivePeerDependencies:
- supports-color
- typescript
dev: true
/vite@5.0.11(@types/node@20.11.5):
resolution: {integrity: sha512-XBMnDjZcNAw/G1gEiskiM1v6yzM4GE5aMGvhWTlHAYYhxb7S3/V1s3m2LDHa8Vh6yIWYYB0iJwsEaS523c4oYA==}
/vite@5.0.12(@types/node@20.11.5):
resolution: {integrity: sha512-4hsnEkG3q0N4Tzf1+t6NdN9dg/L3BM+q8SWgbSPnJvrgH2kgdyzfVJwbR1ic69/4uMJJ/3dqDZZE5/WwqW8U1w==}
engines: {node: ^18.0.0 || >=20.0.0}
hasBin: true
peerDependencies:


@@ -7,7 +7,6 @@
"img2img": "صورة إلى صورة",
"unifiedCanvas": "لوحة موحدة",
"nodes": "عقد",
"langArabic": "العربية",
"nodesDesc": "نظام مبني على العقد لإنتاج الصور قيد التطوير حاليًا. تبقى على اتصال مع تحديثات حول هذه الميزة المذهلة.",
"postProcessing": "معالجة بعد الإصدار",
"postProcessDesc1": "Invoke AI توفر مجموعة واسعة من ميزات المعالجة بعد الإصدار. تحسين الصور واستعادة الوجوه متاحين بالفعل في واجهة الويب. يمكنك الوصول إليهم من الخيارات المتقدمة في قائمة الخيارات في علامة التبويب Text To Image و Image To Image. يمكن أيضًا معالجة الصور مباشرةً باستخدام أزرار الإجراء على الصورة فوق عرض الصورة الحالي أو في العارض.",


@@ -0,0 +1,5 @@
{
"accessibility": {
"about": "Haqqında"
}
}

File diff suppressed because it is too large


@@ -86,6 +86,7 @@
"back": "Back",
"batch": "Batch Manager",
"cancel": "Cancel",
"copy": "Copy",
"copyError": "$t(gallery.copy) Error",
"close": "Close",
"on": "On",
@@ -117,24 +118,7 @@
"inpaint": "inpaint",
"input": "Input",
"installed": "Installed",
"langArabic": "العربية",
"langBrPortuguese": "Português do Brasil",
"langDutch": "Nederlands",
"langEnglish": "English",
"langFrench": "Français",
"langGerman": "German",
"langHebrew": "Hebrew",
"langItalian": "Italiano",
"langJapanese": "日本語",
"langKorean": "한국어",
"langPolish": "Polski",
"langPortuguese": "Português",
"langRussian": "Русский",
"langSimplifiedChinese": "简体中文",
"langSpanish": "Español",
"languagePickerLabel": "Language",
"langUkranian": "Украї́нська",
"lightMode": "Light Mode",
"linear": "Linear",
"load": "Load",
"loading": "Loading",
@@ -191,6 +175,7 @@
"statusUpscaling": "Upscaling",
"statusUpscalingESRGAN": "Upscaling (ESRGAN)",
"template": "Template",
"toResolve": "To resolve",
"training": "Training",
"trainingDesc1": "A dedicated workflow for training your own embeddings and checkpoints using Textual Inversion and Dreambooth from the web interface.",
"trainingDesc2": "InvokeAI already supports training custom embeddourings using Textual Inversion using the main script.",
@@ -251,6 +236,9 @@
"fill": "Fill",
"h": "H",
"handAndFace": "Hand and Face",
"face": "Face",
"body": "Body",
"hands": "Hands",
"hed": "HED",
"hedDescription": "Holistically-Nested Edge Detection",
"hideAdvanced": "Hide Advanced",
@@ -277,8 +265,8 @@
"noneDescription": "No processing applied",
"normalBae": "Normal BAE",
"normalBaeDescription": "Normal BAE processing",
"openPose": "Openpose",
"openPoseDescription": "Human pose estimation using Openpose",
"dwOpenpose": "DW Openpose",
"dwOpenposeDescription": "Human pose estimation using DW Openpose",
"pidi": "PIDI",
"pidiDescription": "PIDI image processing",
"processor": "Processor",
@@ -913,6 +901,7 @@
"doesNotExist": "does not exist",
"downloadWorkflow": "Download Workflow JSON",
"edge": "Edge",
"editMode": "Edit in Workflow Editor",
"enum": "Enum",
"enumDescription": "Enums are values that may be one of a number of options.",
"executionStateCompleted": "Completed",
@@ -1008,11 +997,16 @@
"problemReadingMetadata": "Problem reading metadata from image",
"problemReadingWorkflow": "Problem reading workflow from image",
"problemSettingTitle": "Problem Setting Title",
"resetToDefaultValue": "Reset to default value",
"reloadNodeTemplates": "Reload Node Templates",
"removeLinearView": "Remove from Linear View",
"reorderLinearView": "Reorder Linear View",
"newWorkflow": "New Workflow",
"newWorkflowDesc": "Create a new workflow?",
"newWorkflowDesc2": "Your current workflow has unsaved changes.",
"clearWorkflow": "Clear Workflow",
"clearWorkflowDesc": "Clear this workflow and start a new one?",
"clearWorkflowDesc2": "Your current workflow has unsaved changes.",
"scheduler": "Scheduler",
"schedulerDescription": "TODO",
"sDXLMainModelField": "SDXL Model",
@@ -1077,6 +1071,7 @@
"vaeModelFieldDescription": "TODO",
"validateConnections": "Validate Connections and Graph",
"validateConnectionsHelp": "Prevent invalid connections from being made, and invalid graphs from being invoked",
"viewMode": "Use in Linear View",
"unableToGetWorkflowVersion": "Unable to get workflow schema version",
"unrecognizedWorkflowVersion": "Unrecognized workflow schema version {{version}}",
"version": "Version",
@@ -1372,6 +1367,7 @@
"problemCopyingCanvasDesc": "Unable to export base layer",
"problemCopyingImage": "Unable to Copy Image",
"problemCopyingImageLink": "Unable to Copy Image Link",
"problemDownloadingImage": "Unable to Download Image",
"problemDownloadingCanvas": "Problem Downloading Canvas",
"problemDownloadingCanvasDesc": "Unable to export base layer",
"problemImportingMask": "Problem Importing Mask",
@@ -1428,9 +1424,8 @@
"clipSkip": {
"heading": "CLIP Skip",
"paragraphs": [
"Choose how many layers of the CLIP model to skip.",
"Some models work better with certain CLIP Skip settings.",
"A higher value typically results in a less detailed image."
"How many layers of the CLIP model to skip.",
"Certain models are better suited to be used with CLIP Skip."
]
},
"paramNegativeConditioning": {
@@ -1450,7 +1445,8 @@
"paramScheduler": {
"heading": "Scheduler",
"paragraphs": [
"Scheduler defines how to iteratively add noise to an image or how to update a sample based on a model's output."
"Scheduler used during the generation process.",
"Each scheduler defines how to iteratively add noise to an image or how to update a sample based on a model's output."
]
},
"compositingBlur": {
@@ -1463,61 +1459,55 @@
},
"compositingCoherencePass": {
"heading": "Coherence Pass",
"paragraphs": [
"A second round of denoising helps to composite the Inpainted/Outpainted image."
]
"paragraphs": ["A second round of denoising helps to composite the Inpainted/Outpainted image."]
},
"compositingCoherenceMode": {
"heading": "Mode",
"paragraphs": ["The mode of the Coherence Pass."]
"paragraphs": ["Method used to create a coherent image with the newly generated masked area."]
},
"compositingCoherenceSteps": {
"heading": "Steps",
"paragraphs": [
"Number of denoising steps used in the Coherence Pass.",
"Same as the main Steps parameter."
]
"paragraphs": ["Number of steps in the Coherence Pass.", "Similar to Generation Steps."]
},
"compositingStrength": {
"heading": "Strength",
"paragraphs": [
"Denoising strength for the Coherence Pass.",
"Same as the Image to Image Denoising Strength parameter."
]
"paragraphs": ["Amount of noise added for the Coherence Pass.", "Similar to Denoising Strength."]
},
"compositingMaskAdjustments": {
"heading": "Mask Adjustments",
"paragraphs": ["Adjust the mask."]
},
"controlNetBeginEnd": {
"heading": "Begin / End Step Percentage",
"paragraphs": [
"Which steps of the denoising process will have the ControlNet applied.",
"ControlNets applied at the beginning of the process guide composition, and ControlNets applied at the end guide details."
]
},
"controlNetControlMode": {
"heading": "Control Mode",
"paragraphs": [
"Lends more weight to either the prompt or ControlNet."
]
},
"controlNetResizeMode": {
"heading": "Resize Mode",
"paragraphs": [
"How the ControlNet image will be fit to the image output size."
]
},
"controlNet": {
"heading": "ControlNet",
"paragraphs": [
"ControlNets provide guidance to the generation process, helping create images with controlled composition, structure, or style, depending on the model selected."
]
},
"controlNetBeginEnd": {
"heading": "Begin / End Step Percentage",
"paragraphs": [
"The part of the of the denoising process that will have the Control Adapter applied.",
"Generally, Control Adapters applied at the start of the process guide composition, and Control Adapters applied at the end guide details."
]
},
"controlNetControlMode": {
"heading": "Control Mode",
"paragraphs": ["Lend more weight to either the prompt or ControlNet."]
},
"controlNetProcessor": {
"heading": "Processor",
"paragraphs": [
"Method of processing the input image to guide the generation process. Different processors will providedifferent effects or styles in your generated images."
]
},
"controlNetResizeMode": {
"heading": "Resize Mode",
"paragraphs": ["Method to fit Control Adapter's input image size to the output generation size."]
},
"controlNetWeight": {
"heading": "Weight",
"paragraphs": [
"How strongly the ControlNet will impact the generated image."
"Weight of the Control Adapter. Higher weight will lead to larger impacts on the final image."
]
},
"dynamicPrompts": {
@@ -1530,9 +1520,7 @@
},
"dynamicPromptsMaxPrompts": {
"heading": "Max Prompts",
"paragraphs": [
"Limits the number of prompts that can be generated by Dynamic Prompts."
]
"paragraphs": ["Limits the number of prompts that can be generated by Dynamic Prompts."]
},
"dynamicPromptsSeedBehaviour": {
"heading": "Seed Behaviour",
@@ -1543,15 +1531,23 @@
"Per Image will use a unique seed for each image. This provides more variation."
]
},
"imageFit": {
"heading": "Fit Initial Image to Output Size",
"paragraphs": [
"Resizes the initial image to the width and height of the output image. Recommended to enable."
]
},
"infillMethod": {
"heading": "Infill Method",
"paragraphs": ["Method to infill the selected area."]
"paragraphs": ["Method of infilling during the Outpainting or Inpainting process."]
},
"lora": {
"heading": "LoRA Weight",
"paragraphs": [
"Higher LoRA weight will lead to larger impacts on the final image."
]
"heading": "LoRA",
"paragraphs": ["Lightweight models that are used in conjunction with base models."]
},
"loraWeight": {
"heading": "Weight",
"paragraphs": ["Weight of the LoRA. Higher weight will lead to larger impacts on the final image."]
},
"noiseUseCPU": {
"heading": "Use CPU Noise",
@@ -1561,16 +1557,25 @@
"There is no performance impact to enabling CPU Noise."
]
},
"paramAspect": {
"heading": "Aspect",
"paragraphs": [
"Aspect ratio of the generated image. Changing the ratio will update the Width and Height accordingly.",
"“Optimize” will set the Width and Height to optimal dimensions for the chosen model."
]
},
"paramCFGScale": {
"heading": "CFG Scale",
"paragraphs": [
"Controls how much your prompt influences the generation process."
"Controls how much the prompt influences the generation process.",
"High CFG Scale values can result in over-saturation and distorted generation results. "
]
},
"paramCFGRescaleMultiplier": {
"heading": "CFG Rescale Multiplier",
"paragraphs": [
"Rescale multiplier for CFG guidance, used for models trained using zero-terminal SNR (ztsnr). Suggested value 0.7."
"Rescale multiplier for CFG guidance, used for models trained using zero-terminal SNR (ztsnr).",
"Suggested value of 0.7 for these models."
]
},
"paramDenoisingStrength": {
@@ -1580,6 +1585,16 @@
"0 will result in an identical image, while 1 will result in a completely new image."
]
},
"paramHeight": {
"heading": "Height",
"paragraphs": ["Height of the generated image. Must be a multiple of 8."]
},
"paramHrf": {
"heading": "Enable High Resolution Fix",
"paragraphs": [
"Generate high quality images at a larger resolution than optimal for the model. Generally used to prevent duplication in the generated image."
]
},
"paramIterations": {
"heading": "Iterations",
"paragraphs": [
@@ -1590,8 +1605,7 @@
"paramModel": {
"heading": "Model",
"paragraphs": [
"Model used for the denoising steps.",
"Different models are typically trained to specialize in producing particular aesthetic results and content."
"Model used for generation. Different models are trained to specialize in producing different aesthetic results and content."
]
},
"paramRatio": {
@@ -1605,7 +1619,7 @@
"heading": "Seed",
"paragraphs": [
"Controls the starting noise used for generation.",
"Disable “Random Seed” to produce identical results with the same generation settings."
"Disable the “Random” option to produce identical results with the same generation settings."
]
},
"paramSteps": {
@@ -1615,23 +1629,93 @@
"Higher step counts will typically create better images but will require more generation time."
]
},
"paramUpscaleMethod": {
"heading": "Upscale Method",
"paragraphs": ["Method used to upscale the image for High Resolution Fix."]
},
"paramVAE": {
"heading": "VAE",
"paragraphs": [
"Model used for translating AI output into the final image."
]
"paragraphs": ["Model used for translating AI output into the final image."]
},
"paramVAEPrecision": {
"heading": "VAE Precision",
"paragraphs": [
"The precision used during VAE encoding and decoding. FP16/half precision is more efficient, at the expense of minor image variations."
"The precision used during VAE encoding and decoding.",
"Fp16/Half precision is more efficient, at the expense of minor image variations."
]
},
"paramWidth": {
"heading": "Width",
"paragraphs": ["Width of the generated image. Must be a multiple of 8."]
},
"patchmatchDownScaleSize": {
"heading": "Downscale",
"paragraphs": [
"How much downscaling occurs before infilling.",
"Higher downscaling will improve performance and reduce quality."
]
},
"refinerModel": {
"heading": "Refiner Model",
"paragraphs": [
"Model used during the refiner portion of the generation process.",
"Similar to the Generation Model."
]
},
"refinerPositiveAestheticScore": {
"heading": "Positive Aesthetic Score",
"paragraphs": [
"Weight generations to be more similar to images with a high aesthetic score, based on the training data."
]
},
"refinerNegativeAestheticScore": {
"heading": "Negative Aesthetic Score",
"paragraphs": [
"Weight generations to be more similar to images with a low aesthetic score, based on the training data."
]
},
"refinerScheduler": {
"heading": "Scheduler",
"paragraphs": [
"Scheduler used during the refiner portion of the generation process.",
"Similar to the Generation Scheduler."
]
},
"refinerStart": {
"heading": "Refiner Start",
"paragraphs": [
"Where in the generation process the refiner will start to be used.",
"0 means the refiner will be used for the entire generation process, 0.8 means the refiner will be used for the last 20% of the generation process."
]
},
"refinerSteps": {
"heading": "Steps",
"paragraphs": [
"Number of steps that will be performed during the refiner portion of the generation process.",
"Similar to the Generation Steps."
]
},
"refinerCfgScale": {
"heading": "CFG Scale",
"paragraphs": [
"Controls how much the prompt influences the generation process.",
"Similar to the Generation CFG Scale."
]
},
"scaleBeforeProcessing": {
"heading": "Scale Before Processing",
"paragraphs": [
"Scales the selected area to the size best suited for the model before the image generation process."
"“Auto” scales the selected area to the size best suited for the model before the image generation process.",
"“Manual” allows you to choose the width and height the selected area will be scaled to before the image generation process."
]
},
"seamlessTilingXAxis": {
"heading": "Seamless Tiling X Axis",
"paragraphs": ["Seamlessly tile an image along the horizontal axis."]
},
"seamlessTilingYAxis": {
"heading": "Seamless Tiling Y Axis",
"paragraphs": ["Seamlessly tile an image along the vertical axis."]
}
},
"ui": {
@@ -1716,6 +1800,7 @@
"downloadWorkflow": "Save to File",
"saveWorkflow": "Save Workflow",
"saveWorkflowAs": "Save Workflow As",
"saveWorkflowToProject": "Save Workflow to Project",
"savingWorkflow": "Saving Workflow...",
"problemSavingWorkflow": "Problem Saving Workflow",
"workflowSaved": "Workflow Saved",
@@ -1730,6 +1815,7 @@
"clearWorkflowSearchFilter": "Clear Workflow Search Filter",
"workflowName": "Workflow Name",
"newWorkflowCreated": "New Workflow Created",
"workflowCleared": "Workflow Cleared",
"workflowEditorMenu": "Workflow Editor Menu",
"workflowIsOpen": "Workflow is Open"
},


@@ -7,7 +7,6 @@
"img2img": "Imagen a Imagen",
"unifiedCanvas": "Lienzo Unificado",
"nodes": "Editor del flujo de trabajo",
"langSpanish": "Español",
"nodesDesc": "Un sistema de generación de imágenes basado en nodos, actualmente se encuentra en desarrollo. Mantente pendiente a nuestras actualizaciones acerca de esta fabulosa funcionalidad.",
"postProcessing": "Post-procesamiento",
"postProcessDesc1": "Invoke AI ofrece una gran variedad de funciones de post-procesamiento, El aumento de tamaño y Restauración de Rostros ya se encuentran disponibles en la interfaz web, puedes acceder desde el menú de Opciones Avanzadas en las pestañas de Texto a Imagen y de Imagen a Imagen. También puedes acceder a estas funciones directamente mediante el botón de acciones en el menú superior de la imagen actual o en el visualizador.",
@@ -43,25 +42,10 @@
"statusMergedModels": "Modelos combinados",
"githubLabel": "Github",
"discordLabel": "Discord",
"langEnglish": "Inglés",
"langDutch": "Holandés",
"langFrench": "Francés",
"langGerman": "Alemán",
"langItalian": "Italiano",
"langArabic": "Árabe",
"langJapanese": "Japones",
"langPolish": "Polaco",
"langBrPortuguese": "Portugués brasileño",
"langRussian": "Ruso",
"langSimplifiedChinese": "Chino simplificado",
"langUkranian": "Ucraniano",
"back": "Atrás",
"statusConvertingModel": "Convertir el modelo",
"statusModelConverted": "Modelo adaptado",
"statusMergingModels": "Fusionar modelos",
"langPortuguese": "Portugués",
"langKorean": "Coreano",
"langHebrew": "Hebreo",
"loading": "Cargando",
"loadingInvokeAI": "Cargando invocar a la IA",
"postprocessing": "Tratamiento posterior",
@@ -77,7 +61,6 @@
"imagePrompt": "Indicación de imagen",
"batch": "Administrador de lotes",
"darkMode": "Modo oscuro",
"lightMode": "Modo claro",
"modelManager": "Administrador de modelos",
"communityLabel": "Comunidad"
},


@@ -27,22 +27,12 @@
"statusModelChanged": "Malli vaihdettu",
"statusConvertingModel": "Muunnetaan mallia",
"statusModelConverted": "Malli muunnettu",
"langFrench": "Ranska",
"langItalian": "Italia",
"languagePickerLabel": "Kielen valinta",
"hotkeysLabel": "Pikanäppäimet",
"reportBugLabel": "Raportoi Bugista",
"langPolish": "Puola",
"langDutch": "Hollanti",
"settingsLabel": "Asetukset",
"githubLabel": "Github",
"langGerman": "Saksa",
"langPortuguese": "Portugali",
"discordLabel": "Discord",
"langEnglish": "Englanti",
"langRussian": "Venäjä",
"langUkranian": "Ukraina",
"langSpanish": "Espanja",
"upload": "Lataa",
"statusMergedModels": "Mallit yhdistelty",
"img2img": "Kuva kuvaksi",


@@ -7,7 +7,6 @@
"img2img": "Image en image",
"unifiedCanvas": "Canvas unifié",
"nodes": "Nœuds",
"langFrench": "Français",
"nodesDesc": "Un système basé sur les nœuds pour la génération d'images est actuellement en développement. Restez à l'écoute pour des mises à jour à ce sujet.",
"postProcessing": "Post-traitement",
"postProcessDesc1": "Invoke AI offre une grande variété de fonctionnalités de post-traitement. Le redimensionnement d'images et la restauration de visages sont déjà disponibles dans la WebUI. Vous pouvez y accéder à partir du menu 'Options avancées' des onglets 'Texte vers image' et 'Image vers image'. Vous pouvez également traiter les images directement en utilisant les boutons d'action d'image au-dessus de l'affichage d'image actuel ou dans le visualiseur.",
@@ -47,7 +46,6 @@
"statusMergingModels": "Mélange des modèles",
"loadingInvokeAI": "Chargement de Invoke AI",
"cancel": "Annuler",
"langEnglish": "Anglais",
"statusConvertingModel": "Conversion du modèle",
"statusModelConverted": "Modèle converti",
"loading": "Chargement",


@@ -111,17 +111,6 @@
"githubLabel": "גיטהאב",
"discordLabel": "דיסקורד",
"settingsLabel": "הגדרות",
"langEnglish": "אנגלית",
"langDutch": "הולנדית",
"langArabic": "ערבית",
"langFrench": "צרפתית",
"langGerman": "גרמנית",
"langJapanese": "יפנית",
"langBrPortuguese": "פורטוגזית",
"langRussian": "רוסית",
"langSimplifiedChinese": "סינית",
"langUkranian": "אוקראינית",
"langSpanish": "ספרדית",
"img2img": "תמונה לתמונה",
"unifiedCanvas": "קנבס מאוחד",
"nodes": "צמתים",
@@ -152,9 +141,7 @@
"statusMergedModels": "מודלים מוזגו",
"hotkeysLabel": "מקשים חמים",
"reportBugLabel": "דווח באג",
"langItalian": "איטלקית",
"upload": "העלאה",
"langPolish": "פולנית",
"training": "אימון",
"load": "טעינה",
"back": "אחורה",


@@ -34,7 +34,6 @@
"delete": "Törlés",
"data": "Adat",
"discordLabel": "Discord",
"folder": "Mappa",
"langEnglish": "Angol"
"folder": "Mappa"
}
}


@@ -7,7 +7,6 @@
"img2img": "Immagine a Immagine",
"unifiedCanvas": "Tela unificata",
"nodes": "Editor del flusso di lavoro",
"langItalian": "Italiano",
"nodesDesc": "Attualmente è in fase di sviluppo un sistema basato su nodi per la generazione di immagini. Resta sintonizzato per gli aggiornamenti su questa fantastica funzionalità.",
"postProcessing": "Post-elaborazione",
"postProcessDesc1": "Invoke AI offre un'ampia varietà di funzionalità di post-elaborazione. Ampliamento Immagine e Restaura Volti sono già disponibili nell'interfaccia Web. È possibile accedervi dal menu 'Opzioni avanzate' delle schede 'Testo a Immagine' e 'Immagine a Immagine'. È inoltre possibile elaborare le immagini direttamente, utilizzando i pulsanti di azione dell'immagine sopra la visualizzazione dell'immagine corrente o nel visualizzatore.",
@@ -43,26 +42,11 @@
"statusModelChanged": "Modello cambiato",
"githubLabel": "GitHub",
"discordLabel": "Discord",
"langArabic": "Arabo",
"langEnglish": "Inglese",
"langFrench": "Francese",
"langGerman": "Tedesco",
"langJapanese": "Giapponese",
"langPolish": "Polacco",
"langBrPortuguese": "Portoghese Basiliano",
"langRussian": "Russo",
"langUkranian": "Ucraino",
"langSpanish": "Spagnolo",
"statusMergingModels": "Fusione Modelli",
"statusMergedModels": "Modelli fusi",
"langSimplifiedChinese": "Cinese semplificato",
"langDutch": "Olandese",
"statusModelConverted": "Modello Convertito",
"statusConvertingModel": "Conversione Modello",
"langKorean": "Coreano",
"langPortuguese": "Portoghese",
"loading": "Caricamento in corso",
"langHebrew": "Ebraico",
"loadingInvokeAI": "Caricamento Invoke AI",
"postprocessing": "Post Elaborazione",
"txt2img": "Testo a Immagine",
@@ -76,7 +60,6 @@
"dontAskMeAgain": "Non chiedermelo più",
"imagePrompt": "Prompt Immagine",
"darkMode": "Modalità scura",
"lightMode": "Modalità chiara",
"batch": "Gestione Lotto",
"modelManager": "Gestore modello",
"communityLabel": "Comunità",
@@ -125,7 +108,8 @@
"localSystem": "Sistema locale",
"green": "Verde",
"blue": "Blu",
"alpha": "Alfa"
"alpha": "Alfa",
"copy": "Copia"
},
"gallery": {
"generations": "Generazioni",
@@ -384,7 +368,11 @@
"desc": "Apre e chiude le opzioni e i pannelli della galleria",
"title": "Attiva/disattiva le Opzioni e la Galleria"
},
"clearSearch": "Cancella ricerca"
"clearSearch": "Cancella ricerca",
"remixImage": {
"desc": "Utilizza tutti i parametri tranne il seme dell'immagine corrente",
"title": "Remixa l'immagine"
}
},
"modelManager": {
"modelManager": "Gestione Modelli",
@@ -670,7 +658,8 @@
"aspect": "Aspetto",
"setToOptimalSizeTooLarge": "$t(parameters.setToOptimalSize) (potrebbe essere troppo grande)",
"boxBlur": "Box",
"gaussianBlur": "Gaussian"
"gaussianBlur": "Gaussian",
"remixImage": "Remixa l'immagine"
},
"settings": {
"models": "Modelli",
@@ -806,7 +795,8 @@
"workflowDeleted": "Flusso di lavoro eliminato",
"problemRetrievingWorkflow": "Problema nel recupero del flusso di lavoro",
"resetInitialImage": "Reimposta l'immagine iniziale",
"uploadInitialImage": "Carica l'immagine iniziale"
"uploadInitialImage": "Carica l'immagine iniziale",
"problemDownloadingImage": "Impossibile scaricare l'immagine"
},
"tooltip": {
"feature": {
@@ -1145,7 +1135,14 @@
"newWorkflow": "Nuovo flusso di lavoro",
"newWorkflowDesc": "Creare un nuovo flusso di lavoro?",
"newWorkflowDesc2": "Il flusso di lavoro attuale presenta modifiche non salvate.",
"unsupportedAnyOfLength": "unione di troppi elementi ({{count}})"
"unsupportedAnyOfLength": "unione di troppi elementi ({{count}})",
"clearWorkflowDesc": "Cancellare questo flusso di lavoro e avviarne uno nuovo?",
"clearWorkflow": "Cancella il flusso di lavoro",
"clearWorkflowDesc2": "Il tuo flusso di lavoro attuale presenta modifiche non salvate.",
"viewMode": "Utilizzare nella vista lineare",
"reorderLinearView": "Riordina la vista lineare",
"editMode": "Modifica nell'editor del flusso di lavoro",
"resetToDefaultValue": "Ripristina il valore predefinito"
},
"boards": {
"autoAddBoard": "Aggiungi automaticamente bacheca",
@@ -1202,7 +1199,6 @@
"f": "F",
"h": "A",
"prompt": "Prompt",
"openPoseDescription": "Stima della posa umana utilizzando Openpose",
"resizeMode": "Ridimensionamento",
"weight": "Peso",
"selectModel": "Seleziona un modello",
@@ -1245,7 +1241,15 @@
"scribble": "Scarabocchio",
"amult": "Angolo di illuminazione",
"coarse": "Approssimativo",
"resizeSimple": "Ridimensiona (semplice)"
"resizeSimple": "Ridimensiona (semplice)",
"large": "Grande",
"small": "Piccolo",
"depthAnythingDescription": "Generazione di mappe di profondità utilizzando la tecnica Depth Anything",
"modelSize": "Dimensioni del modello",
"dwOpenposeDescription": "Stima della posa umana utilizzando DW Openpose",
"face": "Viso",
"body": "Corpo",
"hands": "Mani"
},
"queue": {
"queueFront": "Aggiungi all'inizio della coda",
@@ -1375,7 +1379,8 @@
"popovers": {
"paramScheduler": {
"paragraphs": [
"Il campionatore definisce come aggiungere in modo iterativo il rumore a un'immagine o come aggiornare un campione in base all'output di un modello."
"Il campionatore utilizzato durante il processo di generazione.",
"Ciascun campionatore definisce come aggiungere in modo iterativo il rumore a un'immagine o come aggiornare un campione in base all'output di un modello."
],
"heading": "Campionatore"
},
@@ -1388,8 +1393,8 @@
"compositingCoherenceSteps": {
"heading": "Passi",
"paragraphs": [
"Numero di passi di riduzione del rumore utilizzati nel Passaggio di Coerenza.",
"Uguale al parametro principale Passi."
"Numero di passi utilizzati nel Passaggio di Coerenza.",
"Simile ai passi di generazione."
]
},
"compositingBlur": {
@@ -1401,14 +1406,13 @@
"compositingCoherenceMode": {
"heading": "Modalità",
"paragraphs": [
"La modalità del Passaggio di Coerenza."
"Metodo utilizzato per creare un'immagine coerente con l'area mascherata appena generata."
]
},
"clipSkip": {
"paragraphs": [
"Scegli quanti livelli del modello CLIP saltare.",
"Alcuni modelli funzionano meglio con determinate impostazioni di CLIP Skip.",
"Un valore più alto in genere produce un'immagine meno dettagliata."
"Alcuni modelli funzionano meglio con determinate impostazioni di CLIP Skip."
]
},
"compositingCoherencePass": {
@@ -1420,8 +1424,8 @@
"compositingStrength": {
"heading": "Forza",
"paragraphs": [
"Intensità di riduzione del rumore per il passaggio di coerenza.",
"Uguale al parametro intensità di riduzione del rumore da immagine a immagine."
"Quantità di rumore aggiunta per il Passaggio di Coerenza.",
"Simile alla forza di riduzione del rumore."
]
},
"paramNegativeConditioning": {
@@ -1447,8 +1451,8 @@
"controlNetBeginEnd": {
"heading": "Percentuale passi Inizio / Fine",
"paragraphs": [
"A quali passi del processo di rimozione del rumore verrà applicato ControlNet.",
"I ControlNet applicati all'inizio del processo guidano la composizione, mentre i ControlNet applicati alla fine guidano i dettagli."
"La parte del processo di rimozione del rumore in cui verrà applicato l'adattatore di controllo.",
"In genere, gli adattatori di controllo applicati all'inizio del processo guidano la composizione, mentre quelli applicati alla fine guidano i dettagli."
]
},
"noiseUseCPU": {
@@ -1461,7 +1465,7 @@
},
"scaleBeforeProcessing": {
"paragraphs": [
"Ridimensiona l'area selezionata alla dimensione più adatta al modello prima del processo di generazione dell'immagine."
"\"Auto\" ridimensiona l'area selezionata alla dimensione più adatta al modello prima del processo di generazione dell'immagine."
],
"heading": "Scala prima dell'elaborazione"
},
@@ -1496,20 +1500,21 @@
"paramVAEPrecision": {
"heading": "Precisione VAE",
"paragraphs": [
"La precisione utilizzata durante la codifica e decodifica VAE. FP16/mezza precisione è più efficiente, a scapito di minori variazioni dell'immagine."
"La precisione utilizzata durante la codifica e decodifica VAE.",
"Fp16/Mezza precisione è più efficiente, a scapito di minori variazioni dell'immagine."
]
},
"paramSeed": {
"paragraphs": [
"Controlla il rumore iniziale utilizzato per la generazione.",
"Disabilita seme \"Casuale\" per produrre risultati identici con le stesse impostazioni di generazione."
"Disabilita l'opzione \"Casuale\" per produrre risultati identici con le stesse impostazioni di generazione."
],
"heading": "Seme"
},
"controlNetResizeMode": {
"heading": "Modalità ridimensionamento",
"paragraphs": [
"Come l'immagine ControlNet verrà adattata alle dimensioni di output dell'immagine."
"Metodo per adattare le dimensioni dell'immagine in ingresso dell'adattatore di controllo alle dimensioni della generazione di output."
]
},
"dynamicPromptsSeedBehaviour": {
@@ -1524,8 +1529,7 @@
"paramModel": {
"heading": "Modello",
"paragraphs": [
"Modello utilizzato per i passaggi di riduzione del rumore.",
"Diversi modelli sono generalmente addestrati per specializzarsi nella produzione di particolari risultati e contenuti estetici."
"Modello utilizzato per la generazione. Diversi modelli vengono addestrati per specializzarsi nella produzione di risultati e contenuti estetici diversi."
]
},
"paramDenoisingStrength": {
@@ -1543,25 +1547,26 @@
},
"infillMethod": {
"paragraphs": [
"Metodo per riempire l'area selezionata."
"Metodo di riempimento durante il processo di Outpainting o Inpainting."
],
"heading": "Metodo di riempimento"
},
"controlNetWeight": {
"heading": "Peso",
"paragraphs": [
"Quanto forte sarà l'impatto di ControlNet sull'immagine generata."
"Peso dell'adattatore di controllo. Un peso maggiore porterà a impatti maggiori sull'immagine finale."
]
},
"paramCFGScale": {
"heading": "Scala CFG",
"paragraphs": [
"Controlla quanto il tuo prompt influenza il processo di generazione."
"Controlla quanto il prompt influenza il processo di generazione.",
"Valori elevati della scala CFG possono provocare una saturazione eccessiva e distorsioni nei risultati della generazione. "
]
},
"controlNetControlMode": {
"paragraphs": [
"Attribuisce più peso al prompt o a ControlNet."
"Attribuisce più peso al prompt oppure a ControlNet."
],
"heading": "Modalità di controllo"
},
@@ -1573,9 +1578,9 @@
]
},
"lora": {
"heading": "Peso LoRA",
"heading": "LoRA",
"paragraphs": [
"Un peso LoRA più elevato porterà a impatti maggiori sull'immagine finale."
"Modelli leggeri utilizzati insieme ai modelli base."
]
},
"controlNet": {
@@ -1587,8 +1592,65 @@
"paramCFGRescaleMultiplier": {
"heading": "Moltiplicatore di riscala CFG",
"paragraphs": [
"Moltiplicatore di riscala per la guida CFG, utilizzato per modelli addestrati utilizzando SNR a terminale zero (ztsnr). Valore suggerito 0.7."
"Moltiplicatore di riscala per la guida CFG, utilizzato per modelli addestrati utilizzando SNR a terminale zero (ztsnr).",
"Valore suggerito di 0.7 per questi modelli."
]
},
"controlNetProcessor": {
"heading": "Processore",
"paragraphs": [
"Metodo di elaborazione dell'immagine di input per guidare il processo di generazione. Processori diversi forniranno effetti o stili diversi nelle immagini generate."
]
},
"imageFit": {
"heading": "Adatta l'immagine iniziale alle dimensioni di output",
"paragraphs": [
"Ridimensiona l'immagine iniziale in base alla larghezza e all'altezza dell'immagine di output. Si consiglia di abilitarlo."
]
},
"loraWeight": {
"heading": "Peso",
"paragraphs": [
"Peso del LoRA. Un peso maggiore comporterà un impatto maggiore sull'immagine finale."
]
},
"paramAspect": {
"heading": "Aspetto",
"paragraphs": [
"Proporzioni dell'immagine generata. La modifica del rapporto aggiornerà di conseguenza la larghezza e l'altezza.",
"\"Ottimizza\" imposterà la larghezza e l'altezza alle dimensioni ottimali per il modello scelto."
]
},
"paramHeight": {
"heading": "Altezza",
"paragraphs": [
"Altezza dell'immagine generata. Deve essere un multiplo di 8."
]
},
"paramHrf": {
"heading": "Abilita correzione alta risoluzione",
"paragraphs": [
"Genera immagini di alta qualità con una risoluzione maggiore di quella ottimale per il modello. Generalmente utilizzato per impedire la duplicazione nell'immagine generata."
]
},
"paramUpscaleMethod": {
"heading": "Metodo di ampliamento",
"paragraphs": [
"Metodo utilizzato per eseguire l'ampliamento dell'immagine per la correzione ad alta risoluzione."
]
},
"patchmatchDownScaleSize": {
"heading": "Ridimensiona",
"paragraphs": [
"Quanto ridimensionamento avviene prima del riempimento.",
"Un ridimensionamento più elevato migliorerà le prestazioni e ridurrà la qualità."
]
},
"paramWidth": {
"paragraphs": [
"Larghezza dell'immagine generata. Deve essere un multiplo di 8."
],
"heading": "Larghezza"
}
},
"sdxl": {
@@ -1677,7 +1739,11 @@
"userWorkflows": "I miei flussi di lavoro",
"newWorkflowCreated": "Nuovo flusso di lavoro creato",
"downloadWorkflow": "Salva su file",
"uploadWorkflow": "Carica da file"
"uploadWorkflow": "Carica da file",
"projectWorkflows": "Flussi di lavoro del progetto",
"noWorkflows": "Nessun flusso di lavoro",
"workflowCleared": "Flusso di lavoro cancellato",
"saveWorkflowToProject": "Salva flusso di lavoro nel progetto"
},
"app": {
"storeNotInitialized": "Il negozio non è inizializzato"

Some files were not shown because too many files have changed in this diff