Compare commits


100 Commits

Author SHA1 Message Date
448f6a04f4 Merge branch 'feat/controlnet_backend' of https://github.com/invoke-ai/InvokeAI into feat/controlnet_backend 2023-04-25 13:37:54 -07:00
cf6941f665 Added minimal viable node support for ControlNet in TextToImageInvocation. 2023-04-25 13:30:00 -07:00
6bd74de8f1 Resolving rebase: Initial implementation of ControlNet and MultiControlNet support for InvokeAI. 2023-04-25 02:54:02 -07:00
76e5d0595d fix(ui): fix no progress images when gallery is empty (#3268)
When gallery was empty (and there is therefore no selected image), no
progress images were displayed.

- fix by correcting the logic in CurrentImageDisplay
- also fix app crash introduced by fixing the first bug
2023-04-25 17:48:24 +12:00
f03cb8f134 fix(ui): fix no progress images when gallery is empty 2023-04-25 15:00:54 +10:00
c2a0e8afc3 [Bugfix] prevent cli crash (#3132)
Prevent legacy CLI crash caused by removal of convert option
    
- Compensatory change to the CLI that prevents it from crashing when it
tries to import a model.
- Bug introduced when the "convert" option was removed from the model
manager.
2023-04-25 03:55:33 +01:00
31a904b903 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-25 03:28:45 +01:00
c174cab3ee [Bugfix] fixes and code cleanup to update and installation routines (#3101)
- Fix the update script to work again, and resolve the ambiguity between
updating to a tag and updating to a branch by making these two operations
explicitly separate.
- Remove dangling functions and arguments related to legacy checkpoint
conversion. These are no longer needed now that all legacy models are
either converted at import time or on the fly in RAM.
2023-04-25 03:28:23 +01:00
fe12938c23 update to diffusers 0.15 and fix code for name changes (#3201)
- This is a port of #3184 to the main branch
2023-04-25 03:23:24 +01:00
4fa5c963a1 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-25 03:10:51 +01:00
48ce256ba2 Merge branch 'main' into lstein/enhance/diffusers-0.15 2023-04-25 02:49:59 +01:00
7555b1f876 Event service will now sleep for 100ms between polls instead of 1ms, reducing CPU usage significantly (#3256)
I noticed that the current invokeai-new.py was using almost all of a CPU
core. After a bit of profiling I noticed that there were many thousands
of calls to epoll(), which suggested to me that something wasn't sleeping
properly in asyncio's loop.

A bit of further investigation with Python profiling revealed that the
__dispatch_from_queue() method in FastAPIEventService
(app/api/events.py:33) was also being called thousands of times.

I believe the asyncio.sleep(0.001) in that method is too aggressive (it
means the queue is polled every 1 ms) and that 0.1 (100 ms) is still
entirely reasonable. A sketch of the pattern follows this entry.
2023-04-24 19:35:27 +12:00
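The fix is a bounded-polling pattern: drain the queue as fast as events arrive, but back off when it is empty. A minimal sketch in TypeScript, for illustration only — the actual change is a one-line edit to `asyncio.sleep` in the Python `FastAPIEventService`, visible in the diff further down, and the names here are illustrative, not the real API:

```ts
// Sketch of throttled queue polling: busy only while there is work to do.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function dispatchFromQueue(
  queue: unknown[],
  dispatch: (event: unknown) => void,
): Promise<void> {
  for (;;) {
    const event = queue.shift();
    if (event !== undefined) {
      dispatch(event); // drain immediately while events are queued
    } else {
      // Was effectively 1 ms; 100 ms keeps latency imperceptible for UI
      // events while cutting idle wakeups by roughly two orders of magnitude.
      await sleep(100);
    }
  }
}
```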
a537231f19 Merge branch 'main' into reduce-event-polling 2023-04-24 19:14:10 +12:00
8044d1b840 translationBot(ui): update translation (Turkish)
Currently translated at 11.3% (58 of 512 strings)

translationBot(ui): added translation (Turkish)

Co-authored-by: ismail ihsan bülbül <e-ben@msn.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/tr/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
2b58ce4ae4 translationBot(ui): update translation (Chinese (Simplified))
Currently translated at 75.0% (380 of 506 strings)

Co-authored-by: Patrick Tien <ivetien@outlook.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/zh_Hans/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
ef605cd76c translationBot(ui): update translation (German)
Currently translated at 81.8% (414 of 506 strings)

Co-authored-by: Fabian Bahl <fabian98@bahl-netz.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
a84b5b168f translationBot(ui): update translation (Swedish)
Currently translated at 34.7% (176 of 506 strings)

translationBot(ui): added translation (Swedish)

Co-authored-by: figgefigge <qvintuz@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/sv/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
16f6ee04d0 translationBot(ui): update translation (German)
Currently translated at 81.8% (414 of 506 strings)

translationBot(ui): update translation (German)

Currently translated at 80.8% (409 of 506 strings)

Co-authored-by: Alexander Eichhorn <pfannkuchensack@einfach-doof.de>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
44be057aa3 translationBot(ui): update translation (Ukrainian)
Currently translated at 100.0% (512 of 512 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (512 of 512 strings)

translationBot(ui): update translation (English)

Currently translated at 100.0% (512 of 512 strings)

translationBot(ui): update translation (Ukrainian)

Currently translated at 100.0% (506 of 506 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (506 of 506 strings)

translationBot(ui): update translation (Russian)

Currently translated at 100.0% (506 of 506 strings)

Co-authored-by: System X - Files <vasyasos@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/en/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/uk/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
422f6967b2 translationBot(ui): update translation (Ukrainian)
Currently translated at 75.8% (384 of 506 strings)

translationBot(ui): update translation (Russian)

Currently translated at 85.5% (433 of 506 strings)

Co-authored-by: mitien <mitien@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ru/
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/uk/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
4528cc8ba6 translationBot(ui): update translation (Italian)
Currently translated at 100.0% (512 of 512 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (511 of 511 strings)

translationBot(ui): update translation (Italian)

Currently translated at 100.0% (506 of 506 strings)

Co-authored-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/it/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
87e91ebc1d translationBot(ui): update translation (Spanish)
Currently translated at 100.0% (512 of 512 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (511 of 511 strings)

translationBot(ui): update translation (Spanish)

Currently translated at 100.0% (506 of 506 strings)

Co-authored-by: gallegonovato <fran-carro@hotmail.es>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/es/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
fd00d111ea translationBot(ui): update translation (Dutch)
Currently translated at 100.0% (504 of 504 strings)

Co-authored-by: Dennis <dennis@vanzoerlandt.nl>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/nl/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
b8dc9000bd translationBot(ui): update translation (German)
Currently translated at 73.4% (370 of 504 strings)

Co-authored-by: Jaulustus <jaulustus@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/de/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
58c1066765 translationBot(ui): update translation (Finnish)
Currently translated at 18.2% (92 of 504 strings)

translationBot(ui): added translation (Finnish)

Co-authored-by: Juuso V <juuso.vantola@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/fi/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
37096a697b translationBot(ui): added translation (Mongolian)
Co-authored-by: Bouncyknighter <gebifirm@gmail.com>
2023-04-24 16:05:16 +10:00
17d0920186 translationBot(ui): update translation (Japanese)
Currently translated at 73.0% (368 of 504 strings)

Co-authored-by: 唐澤 克幸 <4ranci0ne@gmail.com>
Translate-URL: https://hosted.weblate.org/projects/invokeai/web-ui/ja/
Translation: InvokeAI/Web UI
2023-04-24 16:05:16 +10:00
1e05538364 translationBot(ui): added translation (Vietnamese)
Co-authored-by: techybrain-dev <techybrain.dev@gmail.com>
2023-04-24 16:05:16 +10:00
cf28617cd6 Event service will now sleep for 100ms between polls instead of 1ms, reducing CPU usage significantly 2023-04-23 21:27:02 +01:00
d0d8640711 feat(ui): add reload schema button (#3252) 2023-04-23 19:51:37 +12:00
e6158d1874 feat(ui): add reload schema button 2023-04-23 17:49:02 +10:00
2e9d1ea8a3 feat(ui): add support for shouldFetchImages if UI needs to re-fetch an image URL (#3250)
* if `shouldFetchImages` is passed in, the UI will make an additional
request to get a valid image URL when an invocation is complete
* this is necessary in order to have optional authorization for images
(a sketch follows this entry)
2023-04-23 16:00:13 +10:00
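A rough sketch of what the prop implies for the client, with hypothetical names (`getImageUrl` and the payload shape are assumptions, not the actual generated client API): with `shouldFetchImages` set, the UI resolves the final URL via an extra request instead of constructing it locally, so deployments can gate image access behind authorization.

```ts
interface CompletedImage {
  imageType: string;
  imageName: string;
}

// Hypothetical client helper; the real generated API client differs.
declare function getImageUrl(imageType: string, imageName: string): Promise<string>;

async function resolveImageUrl(
  image: CompletedImage,
  shouldFetchImages: boolean,
): Promise<string> {
  if (!shouldFetchImages) {
    // Default: build the URL directly from the invocation-complete payload.
    return `api/v1/images/${image.imageType}/${image.imageName}`;
  }
  // Optional-auth deployments: ask the server for a valid (e.g. signed) URL.
  return getImageUrl(image.imageType, image.imageName);
}
```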
59b0153236 add to types 2023-04-23 15:59:55 +10:00
9f8ff912c4 feat(ui): add support for shouldFetchImages if UI needs to re-fetch an image URL 2023-04-23 15:59:55 +10:00
f0e4a2124a [Nodes UI] More Work (#3248)
- Style the Minimap
- Made the Node UI Legend responsive
- Set a min width for nodes on spawn so resizing doesn't snap
- Initial implementation of Node Search
- Added Fuse.js to handle the node filtering (a sketch follows this entry)
2023-04-23 17:51:40 +12:00
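The node filtering follows Fuse.js's standard index-and-search pattern. A small sketch, assuming a node template shape (the field names are assumptions, not the actual UI types):

```ts
import Fuse from 'fuse.js';

interface NodeTemplate {
  type: string;
  title: string;
  description: string;
}

const nodeTemplates: NodeTemplate[] = [
  { type: 'txt2img', title: 'Text to Image', description: 'Generate an image from a prompt' },
  { type: 'img_blur', title: 'Blur Image', description: 'Apply gaussian blur to an image' },
];

// Index the fields a user is likely to type; a lower threshold means
// stricter fuzzy matching (0 = exact, 1 = match anything).
const fuse = new Fuse(nodeTemplates, {
  keys: ['title', 'type', 'description'],
  threshold: 0.3,
});

const matches = fuse.search('blur').map((result) => result.item);
```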
11ab5c7d56 fix(ui): Fix up arrow not working on unfiltered list 2023-04-23 15:18:35 +12:00
3f334d9e5e feat(ui): Add fusejs to NodeSearch 2023-04-23 15:14:44 +12:00
ff891b1ff2 feat(ui): Basic Node Search Component
Very buggy
2023-04-23 13:35:02 +12:00
2914ee10b0 Merge branch 'main' into lstein/enhance/diffusers-0.15 2023-04-22 20:21:59 +01:00
e29c2fb782 Merge branch 'more-nodes-work' of https://github.com/blessedcoolant/InvokeAI into more-nodes-work 2023-04-23 02:53:25 +12:00
b763f1809e feat(ui): Stylize Node Minimap 2023-04-23 02:52:32 +12:00
d26b44104a fix(ui): minor tidy 2023-04-23 00:45:03 +10:00
b73fd2a6d2 fix(ui): Set Min Width for Nodes 2023-04-23 00:55:43 +12:00
f258aba6d1 chore(ui): Make the Node UI Legend Responsive 2023-04-23 00:55:22 +12:00
2e70848aa0 Responsive Mobile Layout (#3207)
The first draft for a Responsive Mobile Layout for InvokeAI. Some basic
documentation to help contributors. // Notes from: @blessedcoolant

---

The whole rework needs to be done using the `mobile first` concept: the
base design is catered to mobile, and we add responsive changes as we grow
to larger screens.

**Added**

- Basic breakpoints have been added to the `theme.ts` file that indicate
at which values Chakra makes the responsive changes.
- A basic `useResolution` hook has been added that returns
`mobile`, `tablet` or `desktop` based on the breakpoint (a sketch of such a
hook follows this entry). We can customize this hook further to do more
complex checks for us if need be.

**Syntax**

- Any Chakra component is directly capable of taking different values
for the different breakpoints set in our `theme.ts` file. These can be
passed in a few ways, the most descriptive being an object. For example:

`flexDir={{ base: 'column', xl: 'row' }}` - This sets the flex direction
to column from `0em` and above, then switches it to row automatically once
we hit `xl` and above resolutions, which in our case is `80em` (1280px).
This same format is applicable to any element in Chakra.

`flexDir={['column', null, null, 'row', null]}` - The same values can
also be passed as an array, with each entry corresponding to one of our
breakpoints in order; setting `null` just bypasses that breakpoint. This
is a good shorthand, but I think we should stick to the object syntax
above for readability.

**Note**: I've modified a few elements here and there to give an idea of
how the responsive syntax works, for reference.

---

**Problems to be solved** @SammCheese 

- Some issues you might run into are with the resizable components. We've
decided we will not use resizable components at smaller resolutions; they
don't make sense there. So you'll need conditional rendering around these.
- Some components that need custom layouts for different screens might be
better ported over to `Grid`, using `gridTemplateAreas` to swap out the
design layout. I've demonstrated an example of this in a commit I've made.
I'll let you be the judge of where we might need this.
- The header will probably need to be converted to a burger menu of some
sort, with model switching handled correctly UX-wise. We'll discuss this
on Discord.

---

Anyone willing to contribute to this PR can feel free to join the
discussion on Discord.

https://discord.com/channels/1020123559063990373/1020839344170348605/threads/1097323866780606615
2023-04-22 22:34:30 +10:00
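A sketch of the `useResolution` hook described in the entry above, built on Chakra's `useBreakpointValue`. The mapping follows the stated classification (`base`/`sm` mobile, `md`/`lg` tablet, `xl`/`2xl` desktop); the actual implementation in the PR may differ:

```ts
import { useBreakpointValue } from '@chakra-ui/react';

export type Resolution = 'mobile' | 'tablet' | 'desktop';

export function useResolution(): Resolution {
  // Chakra resolves the current breakpoint and picks the matching value;
  // breakpoints not listed fall through to the nearest smaller one.
  const resolution = useBreakpointValue<Resolution>({
    base: 'mobile', // base and sm
    md: 'tablet',   // md and lg
    xl: 'desktop',  // xl and 2xl
  });
  return resolution ?? 'desktop'; // first render may be undefined
}
```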
e973aeef0d Merge branch 'main' into responsive-ui 2023-04-22 14:31:19 +02:00
50e1ac731d fix(ui): make input/outputs renderfn callback 2023-04-22 22:25:17 +10:00
43addc1548 fix(ui): memoize everything nodes 2023-04-22 22:25:17 +10:00
4901911c1a fix(ui): improve nodes performance 2023-04-22 22:25:17 +10:00
44a653925a feat(ui): node styling, controls
- custom node controls
- fix some types
- fix badge colors via colorScheme
- style nodes
2023-04-22 22:25:17 +10:00
94a07a8da7 feat(ui): Make Nodes always spawn in center of work area 2023-04-22 22:25:17 +10:00
ad41afe65e feat(ui): Make Nodes Resizable 2023-04-22 22:25:17 +10:00
77fa7519c4 chore(ui): Cleanup Invocation Component 2023-04-22 22:25:17 +10:00
6e29148d4d delete ImageToImageContent.tsx 2023-04-22 08:43:14 +02:00
3044f3bfe5 fix(ui): adapt NodeEditor for smaller screens 2023-04-22 08:33:05 +02:00
67a8627cf6 add dev:host script 2023-04-22 08:30:09 +02:00
3fb433cb91 Merge branch 'main' of https://github.com/invoke-ai/InvokeAI into responsive-ui 2023-04-22 08:27:00 +02:00
5f498e10bd Partial migration of UI to nodes API (#3195)
* feat(ui): add axios client generator and simple example

* fix(ui): update client & nodes test code w/ new Edge type

* chore(ui): organize generated files

* chore(ui): update .eslintignore, .prettierignore

* chore(ui): update openapi.json

* feat(backend): fixes for nodes/generator

* feat(ui): generate object args for api client

* feat(ui): more nodes api prototyping

* feat(ui): nodes cancel

* chore(ui): regenerate api client

* fix(ui): disable OG web server socket connection

* fix(ui): fix scrollbar styles typing and prop

just noticed the typo, and made the types stronger.

* feat(ui): add socketio types

* feat(ui): wip nodes

- extract api client method arg types instead of manually declaring them
- update example to display images
- general tidy up

* start building out node translations from frontend state and add notes about missing features

* use reference to sampler_name

* use reference to sampler_name

* add optional apiUrl prop

* feat(ui): start hooking up dynamic txt2img node generation, create middleware for session invocation

* feat(ui): write separate nodes socket layer, txt2img generating and rendering w single node

* feat(ui): img2img implementation

* feat(ui): get intermediate images working but types are stubbed out

* chore(ui): add support for package mode

* feat(ui): add nodes mode script

* feat(ui): handle random seeds

* fix(ui): fix middleware types

* feat(ui): add rtk action type guard

* feat(ui): disable NodeAPITest

This was polluting the network/socket logs.

* feat(ui): fix parameters panel border color

This commit should be elsewhere but I don't want to break my flow

* feat(ui): make thunk types more consistent

* feat(ui): add type guards for outputs

* feat(ui): load images on socket connect

Rudimentary

* chore(ui): bump redux-toolkit

* docs(ui): update readme

* chore(ui): regenerate api client

* chore(ui): add typescript as dev dependency

I am having trouble with TS versions after vscode updated and now uses TS 5. `madge` has installed 3.9.10 and for whatever reason my vscode wants to use that. Manually specifying 4.9.5 and then setting vscode to use that as the workspace TS fixes the issue.

* feat(ui): begin migrating gallery to nodes

Along the way, migrate to use RTK `createEntityAdapter` for gallery images, and separate `results` and `uploads` into separate slices. Much cleaner this way.

* feat(ui): clean up & comment results slice

* fix(ui): separate thunk for initial gallery load so it properly gets index 0

* feat(ui): POST upload working

* fix(ui): restore removed type

* feat(ui): patch api generation for headers access

* chore(ui): regenerate api

* feat(ui): wip gallery migration

* feat(ui): wip gallery migration

* chore(ui): regenerate api

* feat(ui): wip refactor socket events

* feat(ui): disable panels based on app props

* feat(ui): invert logic to be disabled

* disable panels when app mounts

* feat(ui): add support to disableTabs

* docs(ui): organise and update docs

* lang(ui): add toast strings

* feat(ui): wip events, comments, and general refactoring

* feat(ui): add optional token for auth

* feat(ui): export StatusIndicator and ModelSelect for header use

* feat(ui) working on making socket URL dynamic

* feat(ui): dynamic middleware loading

* feat(ui): prep for socket jwt

* feat(ui): migrate cancelation

also updated action names to be event-like instead of declaration-like

sorry, i was scattered and this commit has a lot of unrelated stuff in it.

* fix(ui): fix img2img type

* chore(ui): regenerate api client

* feat(ui): improve InvocationCompleteEvent types

* feat(ui): increase StatusIndicator font size

* fix(ui): fix middleware order for multi-node graphs

* feat(ui): add exampleGraphs object w/ iterations example

* feat(ui): generate iterations graph

* feat(ui): update ModelSelect for nodes API

* feat(ui): add hi-res functionality for txt2img generations

* feat(ui): "subscribe" to particular nodes

feels like a dirty hack but oh well it works

* feat(ui): first steps to node editor ui

* fix(ui): disable event subscription

it is not fully baked just yet

* feat(ui): wip node editor

* feat(ui): remove extraneous field types

* feat(ui): nodes before deleting stuff

* feat(ui): cleanup nodes ui stuff

* feat(ui): hook up nodes to redux

* fix(ui): fix handle

* fix(ui): add basic node edges & connection validation

* feat(ui): add connection validation styling

* feat(ui): increase edge width

* feat(ui): it blends

* feat(ui): wip model handling and graph topology validation

* feat(ui): validation connections w/ graphlib

* docs(ui): update nodes doc

* feat(ui): wip node editor

* chore(ui): rebuild api, update types

* add redux-dynamic-middlewares as a dependency

* feat(ui): add url host transformation

* feat(ui): handle already-connected fields

* feat(ui): rewrite SqliteItemStore in sqlalchemy

* fix(ui): fix sqlalchemy dynamic model instantiation

* feat(ui, nodes): metadata wip

* feat(ui, nodes): models

* feat(ui, nodes): more metadata wip

* feat(ui): wip range/iterate

* fix(nodes): fix sqlite typing

* feat(ui): export new type for invoke component

* tests(nodes): fix test instantiation of ImageField

* feat(nodes): fix LoadImageInvocation

* feat(nodes): add `title` ui hint

* feat(nodes): make ImageField attrs optional

* feat(ui): wip nodes etc

* feat(nodes): roll back sqlalchemy

* fix(nodes): partially address feedback

* fix(backend): roll back changes to pngwriter

* feat(nodes): wip address metadata feedback

* feat(nodes): add seeded rng to RandomRange

* feat(nodes): address feedback

* feat(nodes): move GET images error handling to DiskImageStorage

* feat(nodes): move GET images error handling to DiskImageStorage

* fix(nodes): fix image output schema customization

* feat(ui): img2img/txt2img -> linear

- remove txt2img and img2img tabs
- add linear tab
- add initial image selection to linear parameters accordion

* feat(ui): tidy graph builders

* feat(ui): tidy misc

* feat(ui): improve invocation union types

* feat(ui): wip metadata viewer recall

* feat(ui): move fonts to normal deps

* feat(nodes): fix broken upload

* feat(nodes): add metadata module + tests, thumbnails

- `MetadataModule` is stateless and needed in places where the `InvocationContext` is not available, so I have not made it a `service`
- Handles loading/parsing/building metadata, and creating png info objects
- added tests for MetadataModule
- Lifted thumbnail stuff to util

* fix(nodes): revert change to RandomRangeInvocation

* feat(nodes): address feedback

- make metadata a service
- rip out pydantic validation, implement metadata parsing as simple functions
- update tests
- address other minor feedback items

* fix(nodes): fix other tests

* fix(nodes): add metadata service to cli

* fix(nodes): fix latents/image field parsing

* feat(nodes): customise LatentsField schema

* feat(nodes): move metadata parsing to frontend

* fix(nodes): fix metadata test

---------

Co-authored-by: maryhipp <maryhipp@gmail.com>
Co-authored-by: Mary Hipp <maryhipp@Marys-MacBook-Air.local>
2023-04-22 13:10:20 +10:00
fdad62e88b chore: add ".version" and ".last_model" to gitignore (#3208)
Mistakenly closed the previous PR.
2023-04-20 18:26:27 +01:00
955c81acef Merge branch 'main' into patch-1 2023-04-20 18:26:06 +01:00
e1058f3416 update CODEOWNERS for changed team composition (#3234)
Remove @mauwii and @keturn until they are able to reengage with the
development effort. @GreggHelt2 is designated co-codeowner for the
backend.
2023-04-20 17:19:10 +01:00
edf16a253d Merge branch 'main' into patch-1 2023-04-20 14:16:10 +02:00
46f5ef4100 Merge branch 'main' into dev/codeowner-fix-main 2023-04-19 22:40:56 +01:00
b843255236 update CODEOWNERS for changed team composition 2023-04-19 17:37:48 -04:00
3a968e5072 Update NSFW.md
The outdated doc said to change the '.invokeai' file, but it's now named 'invokeai.init', as far as I know.
2023-04-18 21:18:32 -04:00
69433c9f68 Merge branch 'main' into lstein/enhance/diffusers-0.15 2023-04-18 19:21:53 -04:00
bd8ffd36bf bump to diffusers 0.15.1, remove dangling module 2023-04-18 19:20:38 -04:00
fd80e84ea6 Merge branch 'main' into patch-1 2023-04-18 19:14:28 -04:00
4824237a98 Added CPU instruction for README (#3225)
Since the change itself is quite straightforward, I'll just describe
the context. I tried using the automatic installer on my laptop, and it
kept erroring out on line 140-something of installer.py: "ERROR: Can not
perform a '--user' install. User site-packages are not visible in this
virtualenv."
I got tired of fighting with pip, so I moved on to a command-line install.
That worked immediately, but at the time the README lacked instructions
for CPU, so instead of opening any helpful hyperlinks in the readme, I
took a few minutes to grab the link from installer.py - thus this PR.
2023-04-18 19:07:37 -04:00
2c9a05eb59 Added CPU instruction for README 2023-04-18 18:46:55 +03:00
2feeb1f44c fix(ui): more responsive layout work 2023-04-18 04:29:31 +12:00
554f353773 fix(ui): Fix Width and Height showing 0 as input 2023-04-18 04:28:58 +12:00
aee27e94c9 fix(ui): Fix site header on really small screens 2023-04-18 01:25:53 +12:00
695893e1ac fix(ui): Improve parameters panel and preview display 2023-04-18 01:09:48 +12:00
b800a8eb2e feat(ui): responsive wip
- Fixed a bunch of padding and margin issues across the app
- Fixed the Invoke logo compressing
- Disabled the visibility of the options panel pin button in tablet and mobile views
- Refined the header menu options in mobile and tablet views
- Refined other site header elements in mobile and tablet views
- Aligned Tab Icons to center in mobile and tablet views
2023-04-18 00:50:09 +12:00
9749ef34b5 layout improvements 2023-04-17 13:30:33 +02:00
9a43362127 Revert "Merge branch 'responsive-ui' of https://github.com/SammCheese/InvokeAI into pr/3207"
This reverts commit 866024ea6c, reversing
changes made to 601cc1f92c.
2023-04-17 13:51:08 +12:00
866024ea6c Merge branch 'responsive-ui' of https://github.com/SammCheese/InvokeAI into pr/3207 2023-04-17 13:50:44 +12:00
601cc1f92c help(ui): Basic responsive updates to demonstrate
Made some basic responsive changes to demonstrate how to go about making them.

There are a bunch of problems not addressed yet, like dealing with the resizable component, etc.
2023-04-17 13:50:13 +12:00
d6a9a4464d feat(ui): Add Basic useResolution Component
This hook just classifies `base` and `sm` as mobile, `md` and `lg` as tablet, and `xl` and `2xl` as desktop.

This is a basic hook for quicker work with resolutions. It can be modified and adjusted to our needs. All resolution-related work can go into this hook.
2023-04-17 13:48:42 +12:00
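One way the hook is meant to be used, per the notes above, is gating layout decisions such as the desktop-only resizable panels. A usage sketch (the helper name and module path are placeholders, not actual code from the PR):

```ts
import { useResolution } from './useResolution';

// Derive layout decisions from the classified resolution in one place.
export function usePanelLayout() {
  const resolution = useResolution();
  return {
    // Resizable panels don't make sense on small screens, so keep
    // resizing desktop-only and render a fixed layout elsewhere.
    resizable: resolution === 'desktop',
    // Collapse to a single column on mobile.
    columns: resolution === 'mobile' ? 1 : 2,
  };
}
```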
dac271725a feat(ui): Add Basic Breakpoints 2023-04-17 13:26:10 +12:00
e1fbecfcf7 fix(ui): Syntax issue with the HidePreview icon 2023-04-17 12:42:06 +12:00
2ec4f5af10 remove unused import to pass lint & revert package.json 2023-04-15 21:53:33 +02:00
281662a6e1 chore: add ".version" and ".last_model" to gitignore
Mistakenly closed the previous PR
2023-04-15 21:46:47 +02:00
2edd032ec7 draft mobile layout 2023-04-15 21:34:03 +02:00
aab262d991 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-14 20:12:38 -04:00
47b9910b48 update to diffusers 0.15 and fix code for name changes
- This is a port of #3184 to the main branch
2023-04-14 15:35:03 -04:00
54c8d542dc Merge branch 'feat/controlnet_backend' of https://github.com/invoke-ai/InvokeAI into feat/controlnet_backend 2023-04-11 19:52:00 -07:00
75c2df3016 Initial implementation of ControlNet and MultiControlNet support for InvokeAI. 2023-04-11 19:50:31 -07:00
8ac8be44a2 Merge branch 'feat/controlnet_backend' of https://github.com/invoke-ai/InvokeAI into feat/controlnet_backend 2023-04-10 15:39:54 -07:00
5ab2164bdc Initial implementation of ControlNet and MultiControlNet support for InvokeAI. 2023-04-10 15:28:15 -07:00
5b11bcdfb8 Initial implementation of ControlNet and MultiControlNet support for InvokeAI. 2023-04-09 12:42:40 -07:00
cee159dfa3 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-09 12:08:09 -04:00
cd1b350dae Merge branch 'main' into bugfix/release-updater 2023-04-07 18:56:21 -04:00
8334757af9 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-07 18:55:54 -04:00
bc2b9500e3 Merge branch 'main' into bugfix/prevent-cli-crash 2023-04-06 15:38:46 -04:00
32857d81c5 prevent legacy CLI crash caused by removal of convert option
- Compensatory change to the CLI that prevents it from crashing
  when it tries to import a model.
- Bug introduced when the "convert" option removed from the model
  manager.
2023-04-06 15:36:05 -04:00
28f75d80d5 Merge branch 'main' into bugfix/release-updater 2023-04-05 18:25:33 -04:00
b917ffa4d7 Merge branch 'main' into bugfix/release-updater 2023-04-05 17:37:27 -04:00
f682fb8040 fix invokeai-update script
- This commit fixes the update script to work again, as well as fixing
  the ambiguity between updating to a tag and updating to a branch.
2023-04-02 11:08:12 -04:00
137 changed files with 3539 additions and 1398 deletions

14
.github/CODEOWNERS vendored
View File

@ -1,16 +1,16 @@
# continuous integration
/.github/workflows/ @mauwii @lstein @blessedcoolant
/.github/workflows/ @lstein @blessedcoolant
# documentation
/docs/ @lstein @mauwii @tildebyte @blessedcoolant
/mkdocs.yml @lstein @mauwii @blessedcoolant
/docs/ @lstein @tildebyte @blessedcoolant
/mkdocs.yml @lstein @blessedcoolant
# nodes
/invokeai/app/ @Kyle0654 @blessedcoolant
# installation and configuration
/pyproject.toml @mauwii @lstein @blessedcoolant
/docker/ @mauwii @lstein @blessedcoolant
/pyproject.toml @lstein @blessedcoolant
/docker/ @lstein @blessedcoolant
/scripts/ @ebr @lstein
/installer/ @lstein @ebr
/invokeai/assets @lstein @ebr
@ -22,11 +22,11 @@
/invokeai/backend @blessedcoolant @psychedelicious @lstein
# generation, model management, postprocessing
/invokeai/backend @keturn @damian0815 @lstein @blessedcoolant @jpphoto
/invokeai/backend @damian0815 @lstein @blessedcoolant @jpphoto @gregghelt2
# front ends
/invokeai/frontend/CLI @lstein
/invokeai/frontend/install @lstein @ebr @mauwii
/invokeai/frontend/install @lstein @ebr
/invokeai/frontend/merge @lstein @blessedcoolant @hipsterusername
/invokeai/frontend/training @lstein @blessedcoolant @hipsterusername
/invokeai/frontend/web @psychedelicious @blessedcoolant

2
.gitignore vendored
View File

@ -9,6 +9,8 @@ models/ldm/stable-diffusion-v1/model.ckpt
configs/models.user.yaml
config/models.user.yml
invokeai.init
.version
.last_model
# ignore the Anaconda/Miniconda installer used while building Docker image
anaconda.sh

View File

@ -148,6 +148,11 @@ not supported.
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/rocm5.4.2
```
_For non-GPU systems:_
```terminal
pip install InvokeAI --use-pep517 --extra-index-url https://download.pytorch.org/whl/cpu
```
_For Macintoshes, either Intel or M1/M2:_
```sh

View File

@ -32,7 +32,7 @@ turned on and off on the command line using `--nsfw_checker` and
At installation time, InvokeAI will ask whether the checker should be
activated by default (neither argument given on the command line). The
response is stored in the InvokeAI initialization file (usually
`.invokeai` in your home directory). You can change the default at any
`invokeai.init` in your home directory). You can change the default at any
time by opening this file in a text editor and commenting or
uncommenting the line `--nsfw_checker`.

View File

@ -3,6 +3,8 @@
import os
from argparse import Namespace
from invokeai.app.services.metadata import PngMetadataService, MetadataServiceBase
from ..services.default_graphs import create_system_graphs
from ..services.latent_storage import DiskLatentsStorage, ForwardCacheLatentsStorage
@ -60,7 +62,9 @@ class ApiDependencies:
latents = ForwardCacheLatentsStorage(DiskLatentsStorage(f'{output_folder}/latents'))
images = DiskImageStorage(f'{output_folder}/images')
metadata = PngMetadataService()
images = DiskImageStorage(f'{output_folder}/images', metadata_service=metadata)
# TODO: build a file/path manager?
db_location = os.path.join(output_folder, "invokeai.db")
@ -70,6 +74,7 @@ class ApiDependencies:
events=events,
latents=latents,
images=images,
metadata=metadata,
queue=MemoryInvocationQueue(),
graph_library=SqliteItemStorage[LibraryGraph](
filename=db_location, table_name="graphs"

View File

@ -45,7 +45,7 @@ class FastAPIEventService(EventServiceBase):
)
except Empty:
await asyncio.sleep(0.001)
await asyncio.sleep(0.1)
pass
except asyncio.CancelledError as e:

View File

@ -1,7 +1,19 @@
from typing import Optional
from pydantic import BaseModel, Field
from invokeai.app.models.image import ImageType
from invokeai.app.modules.metadata import ImageMetadata
from invokeai.app.services.metadata import InvokeAIMetadata
class ImageResponseMetadata(BaseModel):
"""An image's metadata. Used only in HTTP responses."""
created: int = Field(description="The creation timestamp of the image")
width: int = Field(description="The width of the image in pixels")
height: int = Field(description="The height of the image in pixels")
invokeai: Optional[InvokeAIMetadata] = Field(
description="The image's InvokeAI-specific metadata"
)
class ImageResponse(BaseModel):
@ -11,11 +23,12 @@ class ImageResponse(BaseModel):
image_name: str = Field(description="The name of the image")
image_url: str = Field(description="The url of the image")
thumbnail_url: str = Field(description="The url of the image's thumbnail")
metadata: ImageMetadata = Field(description="The image's metadata")
metadata: ImageResponseMetadata = Field(description="The image's metadata")
class ProgressImage(BaseModel):
"""The progress image sent intermittently during processing"""
width: int = Field(description="The effective width of the image in pixels")
height: int = Field(description="The effective height of the image in pixels")
dataURL: str = Field(description="The image data as a b64 data URL")

View File

@ -3,16 +3,16 @@ import io
from datetime import datetime, timezone
import json
import os
from typing import Any
import uuid
from fastapi import HTTPException, Path, Query, Request, UploadFile
from fastapi.responses import FileResponse, Response
from fastapi.routing import APIRouter
from PIL import Image
from invokeai.app.api.models.images import ImageResponse
from invokeai.app.modules.metadata import ImageMetadata, InvokeAIMetadata
from invokeai.app.api.models.images import ImageResponse, ImageResponseMetadata
from invokeai.app.services.metadata import InvokeAIMetadata
from invokeai.app.services.item_storage import PaginatedResults
from invokeai.app.util.image_paths import build_image_path
from ...services.image_storage import ImageType
from ..dependencies import ApiDependencies
@ -35,18 +35,6 @@ async def get_image(
return FileResponse(path)
else:
raise HTTPException(status_code=404)
@images_router.get("/path/{image_type}/{image_name}", operation_id="get_image_path")
async def get_image_path(
image_type: ImageType = Path(description="The type of image to get"),
image_name: str = Path(description="The name of the image to get"),
) -> str:
"""Gets a result location"""
path = build_image_path(image_type=image_type, image_name=image_name)
return path
@images_router.get(
@ -87,6 +75,7 @@ async def upload_image(
raise HTTPException(status_code=415, detail="Not an image")
contents = await file.read()
try:
img = Image.open(io.BytesIO(contents))
except:
@ -94,27 +83,22 @@ async def upload_image(
raise HTTPException(status_code=415, detail="Failed to read image")
filename = f"{uuid.uuid4()}_{str(int(datetime.now(timezone.utc).timestamp()))}.png"
(image_path, thumbnail_path, ctime) = ApiDependencies.invoker.services.images.save(
ImageType.UPLOAD, filename, img
)
# TODO: handle old `sd-metadata` style metadata
invokeai_metadata = img.info.get("invokeai", None)
invokeai_metadata = ApiDependencies.invoker.services.metadata.get_metadata(img)
if invokeai_metadata is not None:
invokeai_metadata = InvokeAIMetadata(**json.loads(invokeai_metadata))
# TODO: should creation of this object should happen elsewhere?
res = ImageResponse(
image_type=ImageType.UPLOAD,
image_name=filename,
image_url=f"api/v1/images/{ImageType.UPLOAD.value}/{filename}",
thumbnail_url=f"api/v1/images/{ImageType.UPLOAD.value}/thumbnails/{os.path.splitext(filename)[0]}.webp",
metadata=ImageMetadata(
metadata=ImageResponseMetadata(
created=ctime,
width=img.width,
height=img.height,
mode=img.mode,
invokeai=invokeai_metadata,
),
)

View File

@ -13,6 +13,8 @@ from typing import (
from pydantic import BaseModel
from pydantic.fields import Field
from invokeai.app.services.metadata import PngMetadataService
from .services.default_graphs import create_system_graphs
from .services.latent_storage import DiskLatentsStorage, ForwardCacheLatentsStorage
@ -200,6 +202,8 @@ def invoke_cli():
events = EventServiceBase()
metadata = PngMetadataService()
output_folder = os.path.abspath(
os.path.join(os.path.dirname(__file__), "../../../outputs")
)
@ -211,7 +215,8 @@ def invoke_cli():
model_manager=model_manager,
events=events,
latents = ForwardCacheLatentsStorage(DiskLatentsStorage(f'{output_folder}/latents')),
images=DiskImageStorage(f'{output_folder}/images'),
images=DiskImageStorage(f'{output_folder}/images', metadata_service=metadata),
metadata=metadata,
queue=MemoryInvocationQueue(),
graph_library=SqliteItemStorage[LibraryGraph](
filename=db_location, table_name="graphs"

View File

@ -8,7 +8,6 @@ from PIL import Image, ImageOps
from pydantic import BaseModel, Field
from invokeai.app.models.image import ImageField, ImageType
from invokeai.app.modules.metadata import MetadataModule
from .baseinvocation import BaseInvocation, InvocationContext, InvocationConfig
from .image import ImageOutput, build_image_output
@ -58,8 +57,8 @@ class CvInpaintInvocation(BaseInvocation, CvInvocationConfig):
context.graph_execution_state_id, self.id
)
metadata = MetadataModule.build_metadata(
session_id=context.graph_execution_state_id, invocation=self
metadata = context.services.metadata.build_metadata(
session_id=context.graph_execution_state_id, node=self
)
context.services.images.save(image_type, image_name, image_inpainted, metadata)

View File

@ -4,13 +4,14 @@ from functools import partial
from typing import Literal, Optional, Union
import numpy as np
from diffusers import ControlNetModel
from torch import Tensor
import torch
from pydantic import BaseModel, Field
from invokeai.app.models.image import ImageField, ImageType
from invokeai.app.invocations.util.choose_model import choose_model
from invokeai.app.modules.metadata import MetadataModule
from .baseinvocation import BaseInvocation, InvocationContext, InvocationConfig
from .image import ImageOutput, build_image_output
from ...backend.generator import Txt2Img, Img2Img, Inpaint, InvokeAIGenerator
@ -54,6 +55,9 @@ class TextToImageInvocation(BaseInvocation, SDImageInvocation):
seamless: bool = Field(default=False, description="Whether or not to generate an image that can tile without seams", )
model: str = Field(default="", description="The model to use (currently ignored)")
progress_images: bool = Field(default=False, description="Whether or not to produce progress images during generation", )
control_model: Optional[str] = Field(default=None, description="The control model to use")
control_image: Optional[ImageField] = Field(default=None, description="The processed control image")
# control_strength: Optional[float] = Field(default=1.0, ge=0, le=1, description="The strength of the controlnet")
# fmt: on
# TODO: pass this an emitter method or something? or a session for dispatching?
@ -66,25 +70,41 @@ class TextToImageInvocation(BaseInvocation, SDImageInvocation):
stable_diffusion_step_callback(
context=context,
intermediate_state=intermediate_state,
invocation_dict=self.dict(),
node=self.dict(),
source_node_id=source_node_id,
)
def invoke(self, context: InvocationContext) -> ImageOutput:
# Handle invalid model parameter
model = choose_model(context.services.model_manager, self.model)
# loading controlnet image (currently requires pre-processed image)
control_image = (
None if self.control_image is None
else context.services.images.get(
self.control_image.image_type, self.control_image.image_name
)
)
# loading controlnet model
if (self.control_model is None or self.control_model==''):
control_model = None
else:
# FIXME: change this to dropdown menu?
control_model = ControlNetModel.from_pretrained(self.control_model,
torch_dtype=torch.float16).to("cuda")
# Get the source node id (we are invoking the prepared node)
graph_execution_state = context.services.graph_execution_manager.get(
context.graph_execution_state_id
)
source_node_id = graph_execution_state.prepared_source_mapping[self.id]
outputs = Txt2Img(model).generate(
txt2img = Txt2Img(model, control_model=control_model)
outputs = txt2img.generate(
prompt=self.prompt,
step_callback=partial(self.dispatch_progress, context, source_node_id),
control_image=control_image,
**self.dict(
exclude={"prompt"}
exclude={"prompt", "control_image" }
), # Shorthand for passing all of the parameters above manually
)
# Outputs is an infinite iterator that will return a new InvokeAIGeneratorOutput object
@ -99,8 +119,8 @@ class TextToImageInvocation(BaseInvocation, SDImageInvocation):
context.graph_execution_state_id, self.id
)
metadata = MetadataModule.build_metadata(
session_id=context.graph_execution_state_id, invocation=self
metadata = context.services.metadata.build_metadata(
session_id=context.graph_execution_state_id, node=self
)
context.services.images.save(
@ -137,7 +157,7 @@ class ImageToImageInvocation(TextToImageInvocation):
stable_diffusion_step_callback(
context=context,
intermediate_state=intermediate_state,
invocation_dict=self.dict(),
node=self.dict(),
source_node_id=source_node_id,
)
@ -184,8 +204,8 @@ class ImageToImageInvocation(TextToImageInvocation):
context.graph_execution_state_id, self.id
)
metadata = MetadataModule.build_metadata(
session_id=context.graph_execution_state_id, invocation=self
metadata = context.services.metadata.build_metadata(
session_id=context.graph_execution_state_id, node=self
)
context.services.images.save(image_type, image_name, result_image, metadata)
@ -219,7 +239,7 @@ class InpaintInvocation(ImageToImageInvocation):
stable_diffusion_step_callback(
context=context,
intermediate_state=intermediate_state,
invocation_dict=self.dict(),
node=self.dict(),
source_node_id=source_node_id,
)
@ -270,8 +290,8 @@ class InpaintInvocation(ImageToImageInvocation):
context.graph_execution_state_id, self.id
)
metadata = MetadataModule.build_metadata(
session_id=context.graph_execution_state_id, invocation=self
metadata = context.services.metadata.build_metadata(
session_id=context.graph_execution_state_id, node=self
)
context.services.images.save(image_type, image_name, result_image, metadata)

View File

@ -6,8 +6,6 @@ import numpy
from PIL import Image, ImageFilter, ImageOps
from pydantic import BaseModel, Field
from invokeai.app.modules.metadata import MetadataModule
from ..models.image import ImageField, ImageType
from .baseinvocation import (
BaseInvocation,
@ -36,7 +34,6 @@ class ImageOutput(BaseInvocationOutput):
image: ImageField = Field(default=None, description="The output image")
width: Optional[int] = Field(default=None, description="The width of the image in pixels")
height: Optional[int] = Field(default=None, description="The height of the image in pixels")
mode: Optional[str] = Field(default=None, description="The image mode (ie pixel format)")
# fmt: on
class Config:
@ -151,8 +148,8 @@ class CropImageInvocation(BaseInvocation, PILInvocationConfig):
context.graph_execution_state_id, self.id
)
metadata = MetadataModule.build_metadata(
session_id=context.graph_execution_state_id, invocation=self
metadata = context.services.metadata.build_metadata(
session_id=context.graph_execution_state_id, node=self
)
context.services.images.save(image_type, image_name, image_crop, metadata)
@ -209,8 +206,8 @@ class PasteImageInvocation(BaseInvocation, PILInvocationConfig):
context.graph_execution_state_id, self.id
)
metadata = MetadataModule.build_metadata(
session_id=context.graph_execution_state_id, invocation=self
metadata = context.services.metadata.build_metadata(
session_id=context.graph_execution_state_id, node=self
)
context.services.images.save(image_type, image_name, new_image, metadata)
@ -246,8 +243,8 @@ class MaskFromAlphaInvocation(BaseInvocation, PILInvocationConfig):
context.graph_execution_state_id, self.id
)
metadata = MetadataModule.build_metadata(
session_id=context.graph_execution_state_id, invocation=self
metadata = context.services.metadata.build_metadata(
session_id=context.graph_execution_state_id, node=self
)
context.services.images.save(image_type, image_name, image_mask, metadata)
@ -283,8 +280,8 @@ class BlurInvocation(BaseInvocation, PILInvocationConfig):
context.graph_execution_state_id, self.id
)
metadata = MetadataModule.build_metadata(
session_id=context.graph_execution_state_id, invocation=self
metadata = context.services.metadata.build_metadata(
session_id=context.graph_execution_state_id, node=self
)
context.services.images.save(image_type, image_name, blur_image, metadata)
@ -320,8 +317,8 @@ class LerpInvocation(BaseInvocation, PILInvocationConfig):
context.graph_execution_state_id, self.id
)
metadata = MetadataModule.build_metadata(
session_id=context.graph_execution_state_id, invocation=self
metadata = context.services.metadata.build_metadata(
session_id=context.graph_execution_state_id, node=self
)
context.services.images.save(image_type, image_name, lerp_image, metadata)
@ -362,8 +359,8 @@ class InverseLerpInvocation(BaseInvocation, PILInvocationConfig):
context.graph_execution_state_id, self.id
)
metadata = MetadataModule.build_metadata(
session_id=context.graph_execution_state_id, invocation=self
metadata = context.services.metadata.build_metadata(
session_id=context.graph_execution_state_id, node=self
)
context.services.images.save(image_type, image_name, ilerp_image, metadata)

View File

@ -6,7 +6,6 @@ from pydantic import BaseModel, Field
import torch
from invokeai.app.invocations.util.choose_model import choose_model
from invokeai.app.modules.metadata import MetadataModule
from invokeai.app.util.step_callback import stable_diffusion_step_callback
@ -32,6 +31,8 @@ class LatentsField(BaseModel):
latents_name: Optional[str] = Field(default=None, description="The name of the latents")
class Config:
schema_extra = {"required": ["latents_name"]}
class LatentsOutput(BaseInvocationOutput):
"""Base class for invocations that output latents"""
@ -176,7 +177,7 @@ class TextToLatentsInvocation(BaseInvocation):
stable_diffusion_step_callback(
context=context,
intermediate_state=intermediate_state,
invocation_dict=self.dict(),
node=self.dict(),
source_node_id=source_node_id,
)
@ -358,8 +359,8 @@ class LatentsToImageInvocation(BaseInvocation):
context.graph_execution_state_id, self.id
)
metadata = MetadataModule.build_metadata(
session_id=context.graph_execution_state_id, invocation=self
metadata = context.services.metadata.build_metadata(
session_id=context.graph_execution_state_id, node=self
)
context.services.images.save(image_type, image_name, image, metadata)

View File

@ -3,7 +3,6 @@ from typing import Literal, Union
from pydantic import Field
from invokeai.app.models.image import ImageField, ImageType
from invokeai.app.modules.metadata import MetadataModule
from .baseinvocation import BaseInvocation, InvocationContext, InvocationConfig
from .image import ImageOutput, build_image_output
@ -45,8 +44,8 @@ class RestoreFaceInvocation(BaseInvocation):
context.graph_execution_state_id, self.id
)
metadata = MetadataModule.build_metadata(
session_id=context.graph_execution_state_id, invocation=self
metadata = context.services.metadata.build_metadata(
session_id=context.graph_execution_state_id, node=self
)
context.services.images.save(image_type, image_name, results[0][0], metadata)

View File

@ -5,7 +5,6 @@ from typing import Literal, Union
from pydantic import Field
from invokeai.app.models.image import ImageField, ImageType
from invokeai.app.modules.metadata import MetadataModule
from .baseinvocation import BaseInvocation, InvocationContext, InvocationConfig
from .image import ImageOutput, build_image_output
@ -49,8 +48,8 @@ class UpscaleInvocation(BaseInvocation):
context.graph_execution_state_id, self.id
)
metadata = MetadataModule.build_metadata(
session_id=context.graph_execution_state_id, invocation=self
metadata = context.services.metadata.build_metadata(
session_id=context.graph_execution_state_id, node=self
)
context.services.images.save(image_type, image_name, results[0][0], metadata)

View File

@ -9,6 +9,14 @@ class ImageType(str, Enum):
UPLOAD = "uploads"
def is_image_type(obj):
try:
ImageType(obj)
except ValueError:
return False
return True
class ImageField(BaseModel):
"""An image field used for passing image objects between invocations"""
@ -18,6 +26,4 @@ class ImageField(BaseModel):
image_name: Optional[str] = Field(default=None, description="The name of the image")
class Config:
schema_extra = {
"required": ["image_type", "image_name"]
}
schema_extra = {"required": ["image_type", "image_name"]}

View File

@ -1,180 +0,0 @@
import json
from typing import Any, Dict, Literal, Optional
from PIL import Image, PngImagePlugin
from pydantic import (
BaseModel,
Extra,
Field,
StrictBool,
StrictInt,
StrictStr,
ValidationError,
root_validator,
)
from invokeai.app.models.image import ImageType
class MetadataImageField(BaseModel):
"""A non-nullable version of ImageField"""
image_type: Literal[tuple([t.value for t in ImageType])] # type: ignore
image_name: StrictStr
class MetadataLatentsField(BaseModel):
"""A non-nullable version of LatentsField"""
latents_name: StrictStr
# Union of all valid metadata field types - use mostly strict types
NodeMetadataFieldTypes = (
StrictStr | StrictInt | float | StrictBool # we want to cast ints to floats here
)
class NodeMetadataField(BaseModel):
"""Helper class used as a hack for arbitrary metadata field keys."""
__root__: Dict[StrictStr, NodeMetadataFieldTypes]
# `extra=Extra.allow` allows this to model any potential node with `id` and `type` fields
class NodeMetadata(BaseModel, extra=Extra.allow):
"""Node metadata model, used for validation of metadata."""
@root_validator
def validate_node_metadata(cls, values: dict[str, Any]) -> dict[str, Any]:
"""Parses the node metadata, ignoring invalid values"""
parsed: dict[str, Any] = {}
# Conditionally build the parsed metadata, silently skipping invalid values
for name, value in values.items():
# explicitly parse `id` and `type` as strings
if name == "id":
if type(value) is not str:
continue
parsed[name] = value
elif name == "type":
if type(value) is not str:
continue
parsed[name] = value
else:
try:
if type(value) is dict:
# we only allow certain dicts, else just ignore the value entirely
if "image_name" in value or "image_type" in value:
# parse as an ImageField
parsed[name] = MetadataImageField.parse_obj(value)
elif "latents_name" in value:
# this is a LatentsField
parsed[name] = MetadataLatentsField.parse_obj(value)
else:
# hack to get parse and validate arbitrary keys
NodeMetadataField.parse_obj({name: value})
parsed[name] = value
except ValidationError:
# TODO: do we want to somehow alert when metadata is not fully valid?
continue
return parsed
class InvokeAIMetadata(BaseModel):
session_id: Optional[StrictStr] = Field(
description="The session in which this image was created"
)
node: Optional[NodeMetadata] = Field(description="The node that created this image")
@root_validator(pre=True)
def validate_invokeai_metadata(cls, values: dict[str, Any]) -> dict[str, Any]:
parsed: dict[str, Any] = {}
# Conditionally build the parsed metadata, silently skipping invalid values
for name, value in values.items():
if name == "session_id":
if type(value) is not str:
continue
parsed[name] = value
elif name == "node":
try:
p = NodeMetadata.parse_obj(value)
# check for empty NodeMetadata object
if len(p.dict().items()) == 0:
continue
except ValidationError:
continue
parsed[name] = value
return parsed
class ImageMetadata(BaseModel):
"""An image's metadata. Used only in HTTP responses."""
created: int = Field(description="The creation timestamp of the image")
width: int = Field(description="The width of the image in pixels")
height: int = Field(description="The height of the image in pixels")
mode: str = Field(description="The color mode of the image")
invokeai: Optional[InvokeAIMetadata] = Field(
description="The image's InvokeAI-specific metadata"
)
class MetadataModule:
"""Handles loading metadata from images and parsing it."""
# TODO: Support parsing old format metadata **hurk**
@staticmethod
def _load_metadata(image: Image.Image, key="invokeai") -> Any:
"""Loads a specific info entry from a PIL Image."""
raw_metadata = image.info.get(key)
# metadata should always be a dict
if type(raw_metadata) is not str:
return None
loaded_metadata = json.loads(raw_metadata)
return loaded_metadata
@staticmethod
def _parse_invokeai_metadata(
metadata: Any,
) -> InvokeAIMetadata | None:
"""Parses an object as InvokeAI metadata."""
if type(metadata) is not dict:
return None
parsed_metadata = InvokeAIMetadata.parse_obj(metadata)
return parsed_metadata
@staticmethod
def get_metadata(image: Image.Image) -> InvokeAIMetadata | None:
"""Gets the InvokeAI metadata from a PIL Image, skipping invalid values"""
loaded_metadata = MetadataModule._load_metadata(image)
parsed_metadata = MetadataModule._parse_invokeai_metadata(loaded_metadata)
return parsed_metadata
@staticmethod
def build_metadata(
session_id: StrictStr, invocation: BaseModel
) -> InvokeAIMetadata:
"""Builds an InvokeAIMetadata object"""
metadata = InvokeAIMetadata(
session_id=session_id, node=NodeMetadata(**invocation.dict())
)
return metadata
@staticmethod
def build_png_info(metadata: InvokeAIMetadata | None):
png_info = PngImagePlugin.PngInfo()
if metadata is not None:
png_info.add_text("invokeai", metadata.json())
return png_info

View File

@ -2,7 +2,7 @@
from typing import Any
from invokeai.app.api.models.images import ProgressImage
from invokeai.app.util.get_timestamp import get_timestamp
from invokeai.app.util.misc import get_timestamp
class EventServiceBase:
@ -14,6 +14,7 @@ class EventServiceBase:
pass
def __emit_session_event(self, event_name: str, payload: dict) -> None:
payload["timestamp"] = get_timestamp()
self.dispatch(
event_name=EventServiceBase.session_event,
payload=dict(event=event_name, data=payload),
@ -24,7 +25,7 @@ class EventServiceBase:
def emit_generator_progress(
self,
graph_execution_state_id: str,
invocation_dict: dict,
node: dict,
source_node_id: str,
progress_image: ProgressImage | None,
step: int,
@ -35,12 +36,11 @@ class EventServiceBase:
event_name="generator_progress",
payload=dict(
graph_execution_state_id=graph_execution_state_id,
invocation=invocation_dict,
node=node,
source_node_id=source_node_id,
progress_image=progress_image.dict() if progress_image is not None else None,
step=step,
total_steps=total_steps,
timestamp=get_timestamp(),
),
)
@ -48,26 +48,24 @@ class EventServiceBase:
self,
graph_execution_state_id: str,
result: dict,
invocation_dict: dict,
node: dict,
source_node_id: str,
) -> None:
"""Emitted when an invocation has completed"""
print(result)
self.__emit_session_event(
event_name="invocation_complete",
payload=dict(
graph_execution_state_id=graph_execution_state_id,
invocation=invocation_dict,
node=node,
source_node_id=source_node_id,
result=result,
timestamp=get_timestamp(),
),
)
def emit_invocation_error(
self,
graph_execution_state_id: str,
invocation_dict: dict,
node: dict,
source_node_id: str,
error: str,
) -> None:
@ -76,24 +74,22 @@ class EventServiceBase:
event_name="invocation_error",
payload=dict(
graph_execution_state_id=graph_execution_state_id,
invocation=invocation_dict,
node=node,
source_node_id=source_node_id,
error=error,
timestamp=get_timestamp(),
),
)
def emit_invocation_started(
self, graph_execution_state_id: str, invocation_dict: dict, source_node_id: str
self, graph_execution_state_id: str, node: dict, source_node_id: str
) -> None:
"""Emitted when an invocation has started"""
self.__emit_session_event(
event_name="invocation_started",
payload=dict(
graph_execution_state_id=graph_execution_state_id,
invocation=invocation_dict,
node=node,
source_node_id=source_node_id,
timestamp=get_timestamp(),
),
)
@ -103,6 +99,5 @@ class EventServiceBase:
event_name="graph_execution_state_complete",
payload=dict(
graph_execution_state_id=graph_execution_state_id,
timestamp=get_timestamp(),
),
)

View File

@ -1,27 +1,25 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
import datetime
import os
import json
from glob import glob
from abc import ABC, abstractmethod
from pathlib import Path
from queue import Queue
from typing import Any, Dict, List, Tuple
from typing import Dict, List, Tuple
from PIL.Image import Image
import PIL.Image as PILImage
from PIL import PngImagePlugin
from invokeai.app.api.models.images import ImageResponse
from invokeai.app.models.image import ImageType
from invokeai.app.modules.metadata import ImageMetadata, InvokeAIMetadata, MetadataModule
from invokeai.app.api.models.images import ImageResponse, ImageResponseMetadata
from invokeai.app.models.image import ImageType
from invokeai.app.services.metadata import (
InvokeAIMetadata,
MetadataServiceBase,
build_invokeai_metadata_pnginfo,
)
from invokeai.app.services.item_storage import PaginatedResults
from invokeai.app.util.get_timestamp import get_timestamp
from invokeai.app.util.image_paths import build_image_path
from invokeai.app.util.misc import get_timestamp
from invokeai.app.util.thumbnails import get_thumbnail_name, make_thumbnail
from invokeai.backend.image_util import PngWriter
class ImageStorageBase(ABC):
"""Responsible for storing and retrieving images."""
@ -48,14 +46,18 @@ class ImageStorageBase(ABC):
# TODO: make this a bit more flexible for e.g. cloud storage
@abstractmethod
def validate_path(
self, path: str
) -> bool:
def validate_path(self, path: str) -> bool:
"""Validates an image path."""
pass
@abstractmethod
def save(self, image_type: ImageType, image_name: str, image: Image, metadata: InvokeAIMetadata | None = None) -> Tuple[str, str, int]:
def save(
self,
image_type: ImageType,
image_name: str,
image: Image,
metadata: InvokeAIMetadata | None = None,
) -> Tuple[str, str, int]:
"""Saves an image and a 256x256 WEBP thumbnail. Returns a tuple of the image path, thumbnail path, and created timestamp."""
pass
@ -76,12 +78,14 @@ class DiskImageStorage(ImageStorageBase):
__cache_ids: Queue # TODO: this is an incredibly naive cache
__cache: Dict[str, Image]
__max_cache_size: int
__metadata_service: MetadataServiceBase
def __init__(self, output_folder: str):
def __init__(self, output_folder: str, metadata_service: MetadataServiceBase):
self.__output_folder = output_folder
self.__cache = dict()
self.__cache_ids = Queue()
self.__max_cache_size = 10 # TODO: get this from config
self.__metadata_service = metadata_service
Path(output_folder).mkdir(parents=True, exist_ok=True)
@ -114,22 +118,22 @@ class DiskImageStorage(ImageStorageBase):
for path in page_of_image_paths:
filename = os.path.basename(path)
img = PILImage.open(path)
invokeai_metadata = MetadataModule.get_metadata(img)
invokeai_metadata = self.__metadata_service.get_metadata(img)
page_of_images.append(
ImageResponse(
image_type=image_type.value,
image_name=filename,
image_url=build_image_path(image_type.value, filename),
thumbnail_url = build_image_path(image_type.value, filename, True),
# TODO: DiskImageStorage should not be building URLs...?
image_url=f"api/v1/images/{image_type.value}/{filename}",
thumbnail_url=f"api/v1/images/{image_type.value}/thumbnails/{os.path.splitext(filename)[0]}.webp",
# TODO: Creation of this object should happen elsewhere (?), just making it fit here so it works
metadata=ImageMetadata(
metadata=ImageResponseMetadata(
created=int(os.path.getctime(path)),
width=img.width,
height=img.height,
mode=img.mode,
invokeai=invokeai_metadata
invokeai=invokeai_metadata,
),
)
)
@ -172,26 +176,31 @@ class DiskImageStorage(ImageStorageBase):
return path
def validate_path(
self, path: str
) -> bool:
def validate_path(self, path: str) -> bool:
try:
os.stat(path)
return True
except Exception:
return False
def save(self, image_type: ImageType, image_name: str, image: Image, metadata: InvokeAIMetadata | None = None) -> Tuple[str, str, int]:
def save(
self,
image_type: ImageType,
image_name: str,
image: Image,
metadata: InvokeAIMetadata | None = None,
) -> Tuple[str, str, int]:
image_path = self.get_path(image_type, image_name)
# TODO: Reading the image and then saving it strips the metadata...
if metadata:
pnginfo = build_invokeai_metadata_pnginfo(metadata=metadata)
image.save(image_path, "PNG", pnginfo=pnginfo)
else:
image.save(image_path) # this saved image has an empty info
thumbnail_name = get_thumbnail_name(image_name)
thumbnail_path = self.get_path(image_type, thumbnail_name, is_thumbnail=True)
png_info = MetadataModule.build_png_info(metadata=metadata)
image.save(image_path, "PNG", pnginfo=png_info)
thumbnail_image = make_thumbnail(image)
thumbnail_image.save(thumbnail_path)
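
A minimal usage sketch of the reworked save path; per the base-class docstring it returns the image path, thumbnail path, and created timestamp. The module path, output folder, and file name below are assumptions for illustration:

import PIL.Image as PILImage
from invokeai.app.models.image import ImageType
from invokeai.app.services.image_storage import DiskImageStorage  # assumed module path
from invokeai.app.services.metadata import PngMetadataService

storage = DiskImageStorage("outputs", metadata_service=PngMetadataService())
img = PILImage.new("RGB", (512, 512))  # hypothetical image
image_path, thumbnail_path, created = storage.save(
    ImageType.RESULT,   # assumption: a RESULT member exists on ImageType
    "example.png",
    img,
    metadata=None,      # no InvokeAI metadata is embedded in this case
)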

View File

@ -1,4 +1,5 @@
# Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654)
from invokeai.app.services.metadata import MetadataServiceBase
from invokeai.backend import ModelManager
from .events import EventServiceBase
@ -14,6 +15,7 @@ class InvocationServices:
events: EventServiceBase
latents: LatentsStorageBase
images: ImageStorageBase
metadata: MetadataServiceBase
queue: InvocationQueueABC
model_manager: ModelManager
restoration: RestorationServices
@ -29,6 +31,7 @@ class InvocationServices:
events: EventServiceBase,
latents: LatentsStorageBase,
images: ImageStorageBase,
metadata: MetadataServiceBase,
queue: InvocationQueueABC,
graph_library: ItemStorageABC["LibraryGraph"],
graph_execution_manager: ItemStorageABC["GraphExecutionState"],
@ -39,6 +42,7 @@ class InvocationServices:
self.events = events
self.latents = latents
self.images = images
self.metadata = metadata
self.queue = queue
self.graph_library = graph_library
self.graph_execution_manager = graph_execution_manager
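
A hedged wiring sketch for the new dependency: the metadata service is constructed once and shared between the image storage and the service container (the storage module path is an assumption):

from invokeai.app.services.image_storage import DiskImageStorage  # assumed module path
from invokeai.app.services.metadata import PngMetadataService

metadata = PngMetadataService()
images = DiskImageStorage("outputs", metadata_service=metadata)
# both instances are then handed to InvocationServices(..., images=images, metadata=metadata, ...)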

View File

@ -0,0 +1,96 @@
import json
from abc import ABC, abstractmethod
from typing import Any, Dict, Optional, TypedDict
from PIL import Image, PngImagePlugin
from pydantic import BaseModel
from invokeai.app.models.image import ImageType, is_image_type
class MetadataImageField(TypedDict):
"""Pydantic-less ImageField, used for metadata parsing."""
image_type: ImageType
image_name: str
class MetadataLatentsField(TypedDict):
"""Pydantic-less LatentsField, used for metadata parsing."""
latents_name: str
# TODO: This is a placeholder for `InvocationsUnion` pending resolution of circular imports
NodeMetadata = Dict[
str, str | int | float | bool | MetadataImageField | MetadataLatentsField
]
class InvokeAIMetadata(TypedDict, total=False):
"""InvokeAI-specific metadata format."""
session_id: Optional[str]
node: Optional[NodeMetadata]
def build_invokeai_metadata_pnginfo(
metadata: InvokeAIMetadata | None,
) -> PngImagePlugin.PngInfo:
"""Builds a PngInfo object with key `"invokeai"` and value `metadata`"""
pnginfo = PngImagePlugin.PngInfo()
if metadata is not None:
pnginfo.add_text("invokeai", json.dumps(metadata))
return pnginfo
class MetadataServiceBase(ABC):
@abstractmethod
def get_metadata(self, image: Image.Image) -> InvokeAIMetadata | None:
"""Gets the InvokeAI metadata from a PIL Image, skipping invalid values"""
pass
@abstractmethod
def build_metadata(
self, session_id: str, node: BaseModel
) -> InvokeAIMetadata | None:
"""Builds an InvokeAIMetadata object"""
pass
class PngMetadataService(MetadataServiceBase):
"""Handles loading and building metadata for images."""
# TODO: Use `InvocationsUnion` to **validate** metadata as representing a fully-functioning node
def _load_metadata(self, image: Image.Image) -> dict | None:
"""Loads a specific info entry from a PIL Image."""
try:
info = image.info.get("invokeai")
if type(info) is not str:
return None
loaded_metadata = json.loads(info)
if type(loaded_metadata) is not dict:
return None
if len(loaded_metadata.items()) == 0:
return None
return loaded_metadata
except:
return None
def get_metadata(self, image: Image.Image) -> dict | None:
"""Retrieves an image's metadata as a dict"""
loaded_metadata = self._load_metadata(image)
return loaded_metadata
def build_metadata(self, session_id: str, node: BaseModel) -> InvokeAIMetadata:
metadata = InvokeAIMetadata(session_id=session_id, node=node.dict())
return metadata
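
A round-trip sketch of the service above: build metadata for a node, embed it while saving a PNG, then read it back. The node model and file name are hypothetical:

import PIL.Image
from pydantic import BaseModel
from invokeai.app.services.metadata import (
    PngMetadataService,
    build_invokeai_metadata_pnginfo,
)

class DemoNode(BaseModel):  # hypothetical stand-in for a real invocation model
    id: str = "1"
    type: str = "txt2img"
    steps: int = 30

svc = PngMetadataService()
meta = svc.build_metadata(session_id="session-123", node=DemoNode())
img = PIL.Image.new("RGB", (64, 64))
img.save("meta_demo.png", "PNG", pnginfo=build_invokeai_metadata_pnginfo(meta))
assert svc.get_metadata(PIL.Image.open("meta_demo.png")) == meta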

View File

@ -49,7 +49,7 @@ class DefaultInvocationProcessor(InvocationProcessorABC):
# Send starting event
self.__invoker.services.events.emit_invocation_started(
graph_execution_state_id=graph_execution_state.id,
invocation_dict=invocation.dict(),
node=invocation.dict(),
source_node_id=source_node_id
)
@ -79,7 +79,7 @@ class DefaultInvocationProcessor(InvocationProcessorABC):
# Send complete event
self.__invoker.services.events.emit_invocation_complete(
graph_execution_state_id=graph_execution_state.id,
invocation_dict=invocation.dict(),
node=invocation.dict(),
source_node_id=source_node_id,
result=outputs.dict(),
)
@ -104,7 +104,7 @@ class DefaultInvocationProcessor(InvocationProcessorABC):
# Send error event
self.__invoker.services.events.emit_invocation_error(
graph_execution_state_id=graph_execution_state.id,
invocation_dict=invocation.dict(),
node=invocation.dict(),
source_node_id=source_node_id,
error=error,
)

View File

@ -1,12 +0,0 @@
from typing import Optional
from invokeai.app.models.image import ImageType
from invokeai.app.util.thumbnails import get_thumbnail_name
def build_image_path(image_type: ImageType, image_name: str, is_thumbnail: Optional[bool] = False) -> str:
"""Gets path to access an image"""
if is_thumbnail is None:
return f"api/v1/images/{image_type}/{image_name}"
else:
thumbnail_name = get_thumbnail_name(image_name)
return f"api/v1/images/{image_type}/thumbnails/{thumbnail_name}"

View File

@ -9,7 +9,7 @@ from ...backend.stable_diffusion import PipelineIntermediateState
def stable_diffusion_step_callback(
context: InvocationContext,
intermediate_state: PipelineIntermediateState,
invocation_dict: dict,
node: dict,
source_node_id: str,
):
if context.services.queue.is_canceled(context.graph_execution_state_id):
@ -47,9 +47,9 @@ def stable_diffusion_step_callback(
context.services.events.emit_generator_progress(
graph_execution_state_id=context.graph_execution_state_id,
invocation_dict=invocation_dict,
node=node,
source_node_id=source_node_id,
progress_image=ProgressImage(width=width, height=height, dataURL=dataURL),
step=intermediate_state.step,
total_steps=invocation_dict["steps"],
total_steps=node["steps"],
)
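
The renamed node dict is also where the callback reads per-node parameters back out; a tiny hedged sketch (values hypothetical):

node = {"id": "5", "type": "txt2img", "steps": 30}  # invocation.dict(), hypothetical
total_steps = node["steps"]                         # 30 denoising steps for this node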

View File

@ -10,7 +10,7 @@ from .generator import (
Img2Img,
Inpaint
)
from .model_management import ModelManager
from .model_management import ModelManager, SDModelComponent
from .safety_checker import SafetyChecker
from .args import Args
from .globals import Globals

View File

@ -86,9 +86,11 @@ class InvokeAIGenerator(metaclass=ABCMeta):
def __init__(self,
model_info: dict,
params: InvokeAIGeneratorBasicParams=InvokeAIGeneratorBasicParams(),
**kwargs,
):
self.model_info=model_info
self.params=params
self.kwargs = kwargs
def generate(self,
prompt: str='',
@ -129,9 +131,12 @@ class InvokeAIGenerator(metaclass=ABCMeta):
model=model,
scheduler_name=generator_args.get('scheduler')
)
uc, c, extra_conditioning_info = get_uc_and_c_and_ec(prompt,model=model)
# get conditioning from prompt via Compel package
uc, c, extra_conditioning_info = get_uc_and_c_and_ec(prompt, model=model)
gen_class = self._generator_class()
generator = gen_class(model, self.params.precision)
generator = gen_class(model, self.params.precision, **self.kwargs)
if self.params.variation_amount > 0:
generator.set_variation(generator_args.get('seed'),
generator_args.get('variation_amount'),
@ -281,7 +286,7 @@ class Generator:
precision: str
model: DiffusionPipeline
def __init__(self, model: DiffusionPipeline, precision: str):
def __init__(self, model: DiffusionPipeline, precision: str, **kwargs):
self.model = model
self.precision = precision
self.seed = None
@ -354,7 +359,6 @@ class Generator:
seed = seed if seed is not None and seed >= 0 else self.new_seed()
first_seed = seed
seed, initial_noise = self.generate_initial_noise(seed, width, height)
# There used to be an additional self.model.ema_scope() here, but it breaks
# the inpaint-1.5 model. Not sure what it did.... ?
with scope(self.model.device.type):
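
The **kwargs threading in these hunks lets ControlNet-specific arguments ride from InvokeAIGenerator down into the concrete Generator subclass without widening every signature. A self-contained toy sketch of the forwarding pattern (class and argument names here are illustrative, not InvokeAI's):

class BaseGenerator:
    def __init__(self, model, precision, **kwargs):
        self.model = model
        self.precision = precision

class ControlAwareGenerator(BaseGenerator):
    def __init__(self, model, precision, control_model=None, **kwargs):
        self.control_model = control_model            # consume what this subclass understands
        super().__init__(model, precision, **kwargs)  # forward the rest unchanged

gen = ControlAwareGenerator("pipeline", "float16", control_model="controlnet")
assert gen.control_model == "controlnet"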

View File

@ -4,6 +4,10 @@ invokeai.backend.generator.txt2img inherits from invokeai.backend.generator
import PIL.Image
import torch
from typing import Any, Callable, Dict, List, Optional, Tuple, Union
from diffusers.models.controlnet import ControlNetModel, ControlNetOutput
from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_controlnet import MultiControlNetModel
from ..stable_diffusion import (
ConditioningData,
PostprocessingSettings,
@ -13,8 +17,13 @@ from .base import Generator
class Txt2Img(Generator):
def __init__(self, model, precision):
super().__init__(model, precision)
def __init__(self, model, precision,
control_model: Optional[Union[ControlNetModel, List[ControlNetModel]]] = None,
**kwargs):
self.control_model = control_model
if isinstance(self.control_model, list):
self.control_model = MultiControlNetModel(self.control_model)
super().__init__(model, precision, **kwargs)
@torch.no_grad()
def get_make_image(
@ -42,9 +51,12 @@ class Txt2Img(Generator):
kwargs are 'width' and 'height'
"""
self.perlin = perlin
control_image = kwargs.get("control_image", None)
do_classifier_free_guidance = cfg_scale > 1.0
# noinspection PyTypeChecker
pipeline: StableDiffusionGeneratorPipeline = self.model
pipeline.control_model = self.control_model
pipeline.scheduler = sampler
uc, c, extra_conditioning_info = conditioning
@ -61,6 +73,37 @@ class Txt2Img(Generator):
),
).add_scheduler_args_if_applicable(pipeline.scheduler, eta=ddim_eta)
# FIXME: still need to test with different widths, heights, devices, dtypes
# and add in batch_size, num_images_per_prompt?
if control_image is not None:
if isinstance(self.control_model, ControlNetModel):
control_image = pipeline.prepare_control_image(
image=control_image,
do_classifier_free_guidance=do_classifier_free_guidance,
width=width,
height=height,
# batch_size=batch_size * num_images_per_prompt,
# num_images_per_prompt=num_images_per_prompt,
device=self.control_model.device,
dtype=self.control_model.dtype,
)
elif isinstance(self.control_model, MultiControlNetModel):
images = []
for image_ in control_image:
image_ = self.model.prepare_control_image(
image=image_,
do_classifier_free_guidance=do_classifier_free_guidance,
width=width,
height=height,
# batch_size=batch_size * num_images_per_prompt,
# num_images_per_prompt=num_images_per_prompt,
device=self.control_model.device,
dtype=self.control_model.dtype,
)
images.append(image_)
control_image = images
kwargs["control_image"] = control_image
def make_image(x_T: torch.Tensor, _: int) -> PIL.Image.Image:
pipeline_output = pipeline.image_from_embeddings(
latents=torch.zeros_like(x_T, dtype=self.torch_dtype()),
@ -68,6 +111,7 @@ class Txt2Img(Generator):
num_inference_steps=steps,
conditioning_data=conditioning_data,
callback=step_callback,
**kwargs,
)
if (

View File

@ -5,6 +5,7 @@ from .convert_ckpt_to_diffusers import (
convert_ckpt_to_diffusers,
load_pipeline_from_original_stable_diffusion_ckpt,
)
from .model_manager import ModelManager
from .model_manager import ModelManager,SDModelComponent

View File

@ -9,16 +9,20 @@ from typing import Any, Callable, Generic, List, Optional, Type, TypeVar, Union
import einops
import PIL.Image
import numpy as np
from accelerate.utils import set_seed
import psutil
import torch
import torchvision.transforms as T
from compel import EmbeddingsProvider
from diffusers.models import AutoencoderKL, UNet2DConditionModel
from diffusers.models.controlnet import ControlNetModel, ControlNetOutput
from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion import (
StableDiffusionPipeline,
)
from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_controlnet import MultiControlNetModel
from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img import (
StableDiffusionImg2ImgPipeline,
)
@ -27,6 +31,7 @@ from diffusers.pipelines.stable_diffusion.safety_checker import (
)
from diffusers.schedulers import KarrasDiffusionSchedulers
from diffusers.schedulers.scheduling_utils import SchedulerMixin, SchedulerOutput
from diffusers.utils import PIL_INTERPOLATION
from diffusers.utils.import_utils import is_xformers_available
from diffusers.utils.outputs import BaseOutput
from torchvision.transforms.functional import resize as tv_resize
@ -304,6 +309,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
feature_extractor: Optional[CLIPFeatureExtractor],
requires_safety_checker: bool = False,
precision: str = "float32",
control_model: ControlNetModel = None,
):
super().__init__(
vae,
@ -324,6 +330,8 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
scheduler=scheduler,
safety_checker=safety_checker,
feature_extractor=feature_extractor,
# FIXME: can't currently register control module
# control_model=control_model,
)
self.invokeai_diffuser = InvokeAIDiffuserComponent(
self.unet, self._unet_forward, is_running_diffusers=True
@ -343,6 +351,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
self._model_group = FullyLoadedModelGroup(self.unet.device)
self._model_group.install(*self._submodels)
self.control_model = control_model
def _adjust_memory_efficient_attention(self, latents: torch.Tensor):
"""
@ -445,8 +454,15 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
@property
def _submodels(self) -> Sequence[torch.nn.Module]:
module_names, _, _ = self.extract_init_dict(dict(self.config))
values = [getattr(self, name) for name in module_names.keys()]
return [m for m in values if isinstance(m, torch.nn.Module)]
submodels = []
for name in module_names.keys():
if hasattr(self, name):
value = getattr(self, name)
else:
value = getattr(self.config, name)
if isinstance(value, torch.nn.Module):
submodels.append(value)
return submodels
def image_from_embeddings(
self,
@ -457,6 +473,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
noise: torch.Tensor,
callback: Callable[[PipelineIntermediateState], None] = None,
run_id=None,
**kwargs,
) -> InvokeAIStableDiffusionPipelineOutput:
r"""
Function invoked when calling the pipeline for generation.
@ -477,6 +494,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
noise=noise,
run_id=run_id,
callback=callback,
**kwargs,
)
# https://discuss.huggingface.co/t/memory-usage-by-later-pipeline-stages/23699
torch.cuda.empty_cache()
@ -501,6 +519,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
additional_guidance: List[Callable] = None,
run_id=None,
callback: Callable[[PipelineIntermediateState], None] = None,
**kwargs,
) -> tuple[torch.Tensor, Optional[AttentionMapSaver]]:
if timesteps is None:
self.scheduler.set_timesteps(
@ -518,6 +537,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
additional_guidance=additional_guidance,
run_id=run_id,
callback=callback,
**kwargs,
)
return result.latents, result.attention_map_saver
@ -530,6 +550,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
noise: torch.Tensor,
run_id: str = None,
additional_guidance: List[Callable] = None,
**kwargs,
):
self._adjust_memory_efficient_attention(latents)
if run_id is None:
@ -544,7 +565,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
yield PipelineIntermediateState(
run_id=run_id,
step=-1,
timestep=self.scheduler.num_train_timesteps,
timestep=self.scheduler.config.num_train_timesteps,
latents=latents,
)
@ -568,6 +589,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
step_index=i,
total_step_count=len(timesteps),
additional_guidance=additional_guidance,
**kwargs,
)
latents = step_output.prev_sample
@ -608,6 +630,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
step_index: int,
total_step_count: int,
additional_guidance: List[Callable] = None,
**kwargs,
):
# invokeai_diffuser has batched timesteps, but diffusers schedulers expect a single value
timestep = t[0]
@ -619,6 +642,33 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
# i.e. before or after passing it to InvokeAIDiffuserComponent
latent_model_input = self.scheduler.scale_model_input(latents, timestep)
if (self.control_model is not None) and (kwargs.get("control_image") is not None):
control_image = kwargs.get("control_image") # should be a processed tensor derived from the control image(s)
control_scale = kwargs.get("control_scale", 1.0) # control_scale default is 1.0
# handle the case where multiple control models are used but only a single
# control_scale is given: broadcast control_scale to one value per control model
if isinstance(self.control_model, MultiControlNetModel) and isinstance(control_scale, float):
control_scale = [control_scale] * len(self.control_model.nets)
if conditioning_data.guidance_scale > 1.0:
# expand the latents input to the control model when doing classifier-free guidance
# (for now this is effectively always true: a conditional elsewhere stops execution
# when classifier-free guidance is <= 1.0)
latent_control_input = torch.cat([latent_model_input] * 2)
else:
latent_control_input = latent_model_input
# controlnet inference
down_block_res_samples, mid_block_res_sample = self.control_model(
latent_control_input,
timestep,
encoder_hidden_states=torch.cat([conditioning_data.unconditioned_embeddings,
conditioning_data.text_embeddings]),
controlnet_cond=control_image,
conditioning_scale=control_scale,
return_dict=False,
)
else:
down_block_res_samples, mid_block_res_sample = None, None
# predict the noise residual
noise_pred = self.invokeai_diffuser.do_diffusion_step(
latent_model_input,
@ -628,6 +678,8 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
conditioning_data.guidance_scale,
step_index=step_index,
total_step_count=total_step_count,
down_block_additional_residuals=down_block_res_samples,
mid_block_additional_residual=mid_block_res_sample,
)
# compute the previous noisy sample x_t -> x_t-1
@ -649,6 +701,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
t,
text_embeddings,
cross_attention_kwargs: Optional[dict[str, Any]] = None,
**kwargs,
):
"""predict the noise residual"""
if is_inpainting_model(self.unet) and latents.size(1) == 4:
@ -668,7 +721,8 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
# First three args should be positional, not keywords, so torch hooks can see them.
return self.unet(
latents, t, text_embeddings, cross_attention_kwargs=cross_attention_kwargs
latents, t, text_embeddings, cross_attention_kwargs=cross_attention_kwargs,
**kwargs,
).sample
def img2img_from_embeddings(
@ -915,7 +969,7 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
@property
def channels(self) -> int:
"""Compatible with DiffusionWrapper"""
return self.unet.in_channels
return self.unet.config.in_channels
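
This property change, like the scheduler's num_train_timesteps access earlier in the file, tracks diffusers 0.15's deprecation of reading hyperparameters directly off model and scheduler objects in favor of their .config. A short sketch (the scheduler class is an assumption):

from diffusers import DDIMScheduler

scheduler = DDIMScheduler()
# diffusers >= 0.15: hyperparameters live on .config
print(scheduler.config.num_train_timesteps)  # 1000 by default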
def decode_latents(self, latents):
# Explicit call to get the vae loaded, since `decode` isn't the forward method.
@ -930,3 +984,48 @@ class StableDiffusionGeneratorPipeline(StableDiffusionPipeline):
debug_image(
img, f"latents {msg} {i+1}/{len(decoded)}", debug_status=True
)
# Copied from diffusers pipeline_stable_diffusion_controlnet.py
# Returns torch.Tensor of shape (batch_size, 3, height, width)
def prepare_control_image(
self,
image,
width=512,
height=512,
batch_size=1,
num_images_per_prompt=1,
device="cuda",
dtype=torch.float16,
do_classifier_free_guidance=True,
):
if not isinstance(image, torch.Tensor):
if isinstance(image, PIL.Image.Image):
image = [image]
if isinstance(image[0], PIL.Image.Image):
images = []
for image_ in image:
image_ = image_.convert("RGB")
image_ = image_.resize((width, height), resample=PIL_INTERPOLATION["lanczos"])
image_ = np.array(image_)
image_ = image_[None, :]
images.append(image_)
image = images
image = np.concatenate(image, axis=0)
image = np.array(image).astype(np.float32) / 255.0
image = image.transpose(0, 3, 1, 2)
image = torch.from_numpy(image)
elif isinstance(image[0], torch.Tensor):
image = torch.cat(image, dim=0)
image_batch_size = image.shape[0]
if image_batch_size == 1:
repeat_by = batch_size
else:
# image batch size is the same as prompt batch size
repeat_by = num_images_per_prompt
image = image.repeat_interleave(repeat_by, dim=0)
image = image.to(device=device, dtype=dtype)
if do_classifier_free_guidance:
image = torch.cat([image] * 2)
return image
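
A usage sketch for the helper above, assuming pipeline is a StableDiffusionGeneratorPipeline instance: with classifier-free guidance enabled, one PIL image comes back resized, scaled to [0, 1], and duplicated along the batch dimension for the unconditioned pass.

import PIL.Image
import torch

control = PIL.Image.new("RGB", (768, 512))  # hypothetical control image
cond = pipeline.prepare_control_image(
    control,
    width=512,
    height=512,
    device="cpu",          # hypothetical; the defaults above are cuda/float16
    dtype=torch.float32,
    do_classifier_free_guidance=True,
)
assert cond.shape == (2, 3, 512, 512)  # batch doubled by the final torch.cat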

View File

@ -1,7 +1,6 @@
# adapted from bloc97's CrossAttentionControl colab
# https://github.com/bloc97/CrossAttentionControl
import enum
import math
from typing import Callable, Optional
@ -10,8 +9,7 @@ import diffusers
import psutil
import torch
from compel.cross_attention_control import Arguments
from diffusers.models.cross_attention import AttnProcessor
from diffusers.models.unet_2d_condition import UNet2DConditionModel
from diffusers.models.attention_processor import AttentionProcessor
from torch import nn
from ...util import torch_dtype
@ -188,7 +186,7 @@ class Context:
class InvokeAICrossAttentionMixin:
"""
Enable InvokeAI-flavoured CrossAttention calculation, which does aggressive low-memory slicing and calls
Enable InvokeAI-flavoured Attention calculation, which does aggressive low-memory slicing and calls
through both to an attention_slice_wrangler and a slicing_strategy_getter for custom attention map wrangling
and dynamic slicing strategy selection.
"""
@ -209,7 +207,7 @@ class InvokeAICrossAttentionMixin:
Set custom attention calculator to be called when attention is calculated
:param wrangler: Callback, with args (module, suggested_attention_slice, dim, offset, slice_size),
which returns either the suggested_attention_slice or an adjusted equivalent.
`module` is the current CrossAttention module for which the callback is being invoked.
`module` is the current Attention module for which the callback is being invoked.
`suggested_attention_slice` is the default-calculated attention slice
`dim` is -1 if the attention map has not been sliced, or 0 or 1 for dimension-0 or dimension-1 slicing.
If `dim` is >= 0, `offset` and `slice_size` specify the slice start and length.
@ -345,11 +343,11 @@ class InvokeAICrossAttentionMixin:
def restore_default_cross_attention(
model,
is_running_diffusers: bool,
restore_attention_processor: Optional[AttnProcessor] = None,
restore_attention_processor: Optional[AttentionProcessor] = None,
):
if is_running_diffusers:
unet = model
unet.set_attn_processor(restore_attention_processor or CrossAttnProcessor())
unet.set_attn_processor(restore_attention_processor or AttnProcessor())
else:
remove_attention_function(model)
@ -408,12 +406,9 @@ def override_cross_attention(model, context: Context, is_running_diffusers=False
def get_cross_attention_modules(
model, which: CrossAttentionType
) -> list[tuple[str, InvokeAICrossAttentionMixin]]:
from ldm.modules.attention import CrossAttention # avoid circular import
cross_attention_class: type = (
InvokeAIDiffusersCrossAttention
if isinstance(model, UNet2DConditionModel)
else CrossAttention
)
which_attn = "attn1" if which is CrossAttentionType.SELF else "attn2"
attention_module_tuples = [
@ -428,10 +423,10 @@ def get_cross_attention_modules(
print(
f"Error! CrossAttentionControl found an unexpected number of {cross_attention_class} modules in the model "
+ f"(expected {expected_count}, found {cross_attention_modules_in_model_count}). Either monkey-patching failed "
+ f"or some assumption has changed about the structure of the model itself. Please fix the monkey-patching, "
+ "or some assumption has changed about the structure of the model itself. Please fix the monkey-patching, "
+ f"and/or update the {expected_count} above to an appropriate number, and/or find and inform someone who knows "
+ f"what it means. This error is non-fatal, but it is likely that .swap() and attention map display will not "
+ f"work properly until it is fixed."
+ "what it means. This error is non-fatal, but it is likely that .swap() and attention map display will not "
+ "work properly until it is fixed."
)
return attention_module_tuples
@ -550,7 +545,7 @@ def get_mem_free_total(device):
class InvokeAIDiffusersCrossAttention(
diffusers.models.attention.CrossAttention, InvokeAICrossAttentionMixin
diffusers.models.attention.Attention, InvokeAICrossAttentionMixin
):
def __init__(self, **kwargs):
super().__init__(**kwargs)
@ -572,8 +567,8 @@ class InvokeAIDiffusersCrossAttention(
"""
# base implementation
class CrossAttnProcessor:
def __call__(self, attn: CrossAttention, hidden_states, encoder_hidden_states=None, attention_mask=None):
class AttnProcessor:
def __call__(self, attn: Attention, hidden_states, encoder_hidden_states=None, attention_mask=None):
batch_size, sequence_length, _ = hidden_states.shape
attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length)
@ -601,9 +596,9 @@ class CrossAttnProcessor:
from dataclasses import dataclass, field
import torch
from diffusers.models.cross_attention import (
CrossAttention,
CrossAttnProcessor,
from diffusers.models.attention_processor import (
Attention,
AttnProcessor,
SlicedAttnProcessor,
)
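
This file-wide rename tracks diffusers 0.15: the models.cross_attention module became models.attention_processor, CrossAttention became Attention, and CrossAttnProcessor became AttnProcessor. A side-by-side import sketch:

# diffusers < 0.15 (imports removed by this diff):
#   from diffusers.models.cross_attention import CrossAttention, CrossAttnProcessor
# diffusers >= 0.15 (imports added by this diff):
from diffusers.models.attention_processor import Attention, AttnProcessor, AttentionProcessor

# AttentionProcessor is the typing union used in the signatures above; AttnProcessor()
# is the concrete default, e.g. unet.set_attn_processor(AttnProcessor())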
@ -653,7 +648,7 @@ class SlicedSwapCrossAttnProcesser(SlicedAttnProcessor):
def __call__(
self,
attn: CrossAttention,
attn: Attention,
hidden_states,
encoder_hidden_states=None,
attention_mask=None,

View File

@ -5,7 +5,7 @@ from typing import Any, Callable, Dict, Optional, Union
import numpy as np
import torch
from diffusers.models.cross_attention import AttnProcessor
from diffusers.models.attention_processor import AttentionProcessor
from typing_extensions import TypeAlias
from invokeai.backend.globals import Globals
@ -101,7 +101,7 @@ class InvokeAIDiffuserComponent:
def override_cross_attention(
self, conditioning: ExtraConditioningInfo, step_count: int
) -> Dict[str, AttnProcessor]:
) -> Dict[str, AttentionProcessor]:
"""
setup cross attention .swap control. for diffusers this replaces the attention processor, so
the previous attention processor is returned so that the caller can restore it later.
@ -118,7 +118,7 @@ class InvokeAIDiffuserComponent:
)
def restore_default_cross_attention(
self, restore_attention_processor: Optional["AttnProcessor"] = None
self, restore_attention_processor: Optional["AttentionProcessor"] = None
):
self.conditioning = None
self.cross_attention_control_context = None
@ -168,6 +168,7 @@ class InvokeAIDiffuserComponent:
unconditional_guidance_scale: float,
step_index: Optional[int] = None,
total_step_count: Optional[int] = None,
**kwargs,
):
"""
:param x: current latents
@ -196,7 +197,7 @@ class InvokeAIDiffuserComponent:
if wants_hybrid_conditioning:
unconditioned_next_x, conditioned_next_x = self._apply_hybrid_conditioning(
x, sigma, unconditioning, conditioning
x, sigma, unconditioning, conditioning, **kwargs,
)
elif wants_cross_attention_control:
(
@ -208,13 +209,14 @@ class InvokeAIDiffuserComponent:
unconditioning,
conditioning,
cross_attention_control_types_to_do,
**kwargs,
)
elif self.sequential_guidance:
(
unconditioned_next_x,
conditioned_next_x,
) = self._apply_standard_conditioning_sequentially(
x, sigma, unconditioning, conditioning
x, sigma, unconditioning, conditioning, **kwargs,
)
else:
@ -222,7 +224,7 @@ class InvokeAIDiffuserComponent:
unconditioned_next_x,
conditioned_next_x,
) = self._apply_standard_conditioning(
x, sigma, unconditioning, conditioning
x, sigma, unconditioning, conditioning, **kwargs,
)
combined_next_x = self._combine(
@ -262,20 +264,20 @@ class InvokeAIDiffuserComponent:
# TODO remove when compvis codepath support is dropped
if step_index is None and sigma is None:
raise ValueError(
f"Either step_index or sigma is required when doing cross attention control, but both are None."
"Either step_index or sigma is required when doing cross attention control, but both are None."
)
percent_through = self.estimate_percent_through(step_index, sigma)
return percent_through
# methods below are called from do_diffusion_step and should be considered private to this class.
def _apply_standard_conditioning(self, x, sigma, unconditioning, conditioning):
def _apply_standard_conditioning(self, x, sigma, unconditioning, conditioning, **kwargs):
# fast batched path
x_twice = torch.cat([x] * 2)
sigma_twice = torch.cat([sigma] * 2)
both_conditionings = torch.cat([unconditioning, conditioning])
both_results = self.model_forward_callback(
x_twice, sigma_twice, both_conditionings
x_twice, sigma_twice, both_conditionings, **kwargs,
)
unconditioned_next_x, conditioned_next_x = both_results.chunk(2)
if conditioned_next_x.device.type == "mps":
@ -289,16 +291,17 @@ class InvokeAIDiffuserComponent:
sigma,
unconditioning: torch.Tensor,
conditioning: torch.Tensor,
**kwargs,
):
# low-memory sequential path
unconditioned_next_x = self.model_forward_callback(x, sigma, unconditioning)
conditioned_next_x = self.model_forward_callback(x, sigma, conditioning)
unconditioned_next_x = self.model_forward_callback(x, sigma, unconditioning, **kwargs)
conditioned_next_x = self.model_forward_callback(x, sigma, conditioning, **kwargs)
if conditioned_next_x.device.type == "mps":
# prevent a result filled with zeros. seems to be a torch bug.
conditioned_next_x = conditioned_next_x.clone()
return unconditioned_next_x, conditioned_next_x
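
Each of these paths returns an (unconditioned, conditioned) pair that _combine then merges. A sketch of the standard classifier-free-guidance combination, assuming _combine implements the usual formula:

import torch

def combine(unconditioned_next_x: torch.Tensor,
            conditioned_next_x: torch.Tensor,
            guidance_scale: float) -> torch.Tensor:
    # move from the unconditioned prediction toward the conditioned one,
    # scaled by the guidance factor
    delta = conditioned_next_x - unconditioned_next_x
    return unconditioned_next_x + guidance_scale * delta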
def _apply_hybrid_conditioning(self, x, sigma, unconditioning, conditioning):
def _apply_hybrid_conditioning(self, x, sigma, unconditioning, conditioning, **kwargs):
assert isinstance(conditioning, dict)
assert isinstance(unconditioning, dict)
x_twice = torch.cat([x] * 2)
@ -313,7 +316,7 @@ class InvokeAIDiffuserComponent:
else:
both_conditionings[k] = torch.cat([unconditioning[k], conditioning[k]])
unconditioned_next_x, conditioned_next_x = self.model_forward_callback(
x_twice, sigma_twice, both_conditionings
x_twice, sigma_twice, both_conditionings, **kwargs,
).chunk(2)
return unconditioned_next_x, conditioned_next_x
@ -324,6 +327,7 @@ class InvokeAIDiffuserComponent:
unconditioning,
conditioning,
cross_attention_control_types_to_do,
**kwargs,
):
if self.is_running_diffusers:
return self._apply_cross_attention_controlled_conditioning__diffusers(
@ -332,6 +336,7 @@ class InvokeAIDiffuserComponent:
unconditioning,
conditioning,
cross_attention_control_types_to_do,
**kwargs,
)
else:
return self._apply_cross_attention_controlled_conditioning__compvis(
@ -340,6 +345,7 @@ class InvokeAIDiffuserComponent:
unconditioning,
conditioning,
cross_attention_control_types_to_do,
**kwargs,
)
def _apply_cross_attention_controlled_conditioning__diffusers(
@ -349,6 +355,7 @@ class InvokeAIDiffuserComponent:
unconditioning,
conditioning,
cross_attention_control_types_to_do,
**kwargs,
):
context: Context = self.cross_attention_control_context
@ -364,6 +371,7 @@ class InvokeAIDiffuserComponent:
sigma,
unconditioning,
{"swap_cross_attn_context": cross_attn_processor_context},
**kwargs,
)
# do requested cross attention types for conditioning (positive prompt)
@ -375,6 +383,7 @@ class InvokeAIDiffuserComponent:
sigma,
conditioning,
{"swap_cross_attn_context": cross_attn_processor_context},
**kwargs,
)
return unconditioned_next_x, conditioned_next_x
@ -385,6 +394,7 @@ class InvokeAIDiffuserComponent:
unconditioning,
conditioning,
cross_attention_control_types_to_do,
**kwargs,
):
# print('pct', percent_through, ': doing cross attention control on', cross_attention_control_types_to_do)
# slower non-batched path (20% slower on mac MPS)
@ -398,13 +408,13 @@ class InvokeAIDiffuserComponent:
context: Context = self.cross_attention_control_context
try:
unconditioned_next_x = self.model_forward_callback(x, sigma, unconditioning)
unconditioned_next_x = self.model_forward_callback(x, sigma, unconditioning, **kwargs)
# process x using the original prompt, saving the attention maps
# print("saving attention maps for", cross_attention_control_types_to_do)
for ca_type in cross_attention_control_types_to_do:
context.request_save_attention_maps(ca_type)
_ = self.model_forward_callback(x, sigma, conditioning)
_ = self.model_forward_callback(x, sigma, conditioning, **kwargs,)
context.clear_requests(cleanup=False)
# process x again, using the saved attention maps to control where self.edited_conditioning will be applied
@ -415,7 +425,7 @@ class InvokeAIDiffuserComponent:
self.conditioning.cross_attention_control_args.edited_conditioning
)
conditioned_next_x = self.model_forward_callback(
x, sigma, edited_conditioning
x, sigma, edited_conditioning, **kwargs,
)
context.clear_requests(cleanup=True)
@ -599,7 +609,6 @@ class InvokeAIDiffuserComponent:
)
# below is fugly omg
num_actual_conditionings = len(c_or_weighted_c_list)
conditionings = [uc] + [c for c, weight in weighted_cond_list]
weights = [1] + [weight for c, weight in weighted_cond_list]
chunk_count = ceil(len(conditionings) / 2)

View File

@ -1,10 +1,9 @@
"""
'''
Minimalist updater script. Prompts user for the tag or branch to update to and runs
pip install <path_to_git_source>.
"""
'''
import os
import platform
import requests
from rich import box, print
from rich.console import Console, Group, group
@ -16,8 +15,10 @@ from rich.text import Text
from invokeai.version import __version__
INVOKE_AI_SRC = "https://github.com/invoke-ai/InvokeAI/archive"
INVOKE_AI_REL = "https://api.github.com/repos/invoke-ai/InvokeAI/releases"
INVOKE_AI_SRC="https://github.com/invoke-ai/InvokeAI/archive"
INVOKE_AI_TAG="https://github.com/invoke-ai/InvokeAI/archive/refs/tags"
INVOKE_AI_BRANCH="https://github.com/invoke-ai/InvokeAI/archive/refs/heads"
INVOKE_AI_REL="https://api.github.com/repos/invoke-ai/InvokeAI/releases"
OS = platform.uname().system
ARCH = platform.uname().machine
@ -28,22 +29,22 @@ if OS == "Windows":
else:
console = Console(style=Style(color="grey74", bgcolor="grey19"))
def get_versions() -> dict:
def get_versions()->dict:
return requests.get(url=INVOKE_AI_REL).json()
def welcome(versions: dict):
@group()
def text():
yield f"InvokeAI Version: [bold yellow]{__version__}"
yield ""
yield "This script will update InvokeAI to the latest release, or to a development version of your choice."
yield ""
yield "[bold yellow]Options:"
yield f"""[1] Update to the latest official release ([italic]{versions[0]['tag_name']}[/italic])
yield f'InvokeAI Version: [bold yellow]{__version__}'
yield ''
yield 'This script will update InvokeAI to the latest release, or to a development version of your choice.'
yield ''
yield '[bold yellow]Options:'
yield f'''[1] Update to the latest official release ([italic]{versions[0]['tag_name']}[/italic])
[2] Update to the bleeding-edge development version ([italic]main[/italic])
[3] Manually enter the tag or branch name you wish to update"""
[3] Manually enter the [bold]tag name[/bold] for the version you wish to update to
[4] Manually enter the [bold]branch name[/bold] for the version you wish to update to'''
console.rule()
print(
@ -59,33 +60,41 @@ def welcome(versions: dict):
)
console.line()
def main():
versions = get_versions()
welcome(versions)
tag = None
choice = Prompt.ask("Choice:", choices=["1", "2", "3"], default="1")
branch = None
release = None
choice = Prompt.ask('Choice:',choices=['1','2','3','4'],default='1')
if choice=='1':
release = versions[0]['tag_name']
elif choice=='2':
release = 'main'
elif choice=='3':
tag = Prompt.ask('Enter an InvokeAI tag name')
elif choice=='4':
branch = Prompt.ask('Enter an InvokeAI branch name')
if choice == "1":
tag = versions[0]["tag_name"]
elif choice == "2":
tag = "main"
elif choice == "3":
tag = Prompt.ask("Enter an InvokeAI tag or branch name")
print(f":crossed_fingers: Upgrading to [yellow]{tag}[/yellow]")
cmd = f"pip install {INVOKE_AI_SRC}/{tag}.zip --use-pep517"
print("")
print("")
if os.system(cmd) == 0:
print(f":heavy_check_mark: Upgrade successful")
print(f':crossed_fingers: Upgrading to [yellow]{tag if tag else release}[/yellow]')
if release:
cmd = f'pip install {INVOKE_AI_SRC}/{release}.zip --use-pep517 --upgrade'
elif tag:
cmd = f'pip install {INVOKE_AI_TAG}/{tag}.zip --use-pep517 --upgrade'
else:
print(f":exclamation: [bold red]Upgrade failed[/red bold]")
cmd = f'pip install {INVOKE_AI_BRANCH}/{branch}.zip --use-pep517 --upgrade'
print('')
print('')
if os.system(cmd)==0:
print(f':heavy_check_mark: Upgrade successful')
else:
print(f':exclamation: [bold red]Upgrade failed[/red bold]')
if __name__ == "__main__":
try:
main()
except KeyboardInterrupt:
pass
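
The three branches build pip targets against different GitHub archive roots. A hypothetical illustration of the resulting commands (the version, tag, and branch values are made up):

INVOKE_AI_SRC = "https://github.com/invoke-ai/InvokeAI/archive"
INVOKE_AI_TAG = "https://github.com/invoke-ai/InvokeAI/archive/refs/tags"
INVOKE_AI_BRANCH = "https://github.com/invoke-ai/InvokeAI/archive/refs/heads"

release, tag, branch = "v2.3.5", "v2.3.5", "main"  # hypothetical selections
print(f"pip install {INVOKE_AI_SRC}/{release}.zip --use-pep517 --upgrade")    # choices 1 and 2
print(f"pip install {INVOKE_AI_TAG}/{tag}.zip --use-pep517 --upgrade")        # choice 3
print(f"pip install {INVOKE_AI_BRANCH}/{branch}.zip --use-pep517 --upgrade")  # choice 4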

View File

@ -81,6 +81,7 @@ interface InvokeProps extends PropsWithChildren {
disabledTabs?: InvokeTabName[];
token?: string;
shouldTransformUrls?: boolean;
shouldFetchImages?: boolean;
}
declare function Invoke(props: InvokeProps): JSX.Element;

View File

@ -6,6 +6,7 @@
"prepare": "cd ../../../ && husky install invokeai/frontend/web/.husky",
"dev": "concurrently \"vite dev\" \"yarn run theme:watch\"",
"dev:nodes": "concurrently \"vite dev --mode nodes\" \"yarn run theme:watch\"",
"dev:host": "concurrently \"vite dev --host\" \"yarn run theme:watch\"",
"build": "yarn run lint && vite build",
"api:web": "openapi -i http://localhost:9090/openapi.json -o src/services/api --client axios --useOptions --useUnionTypes --exportSchemas true --indent 2 --request src/services/fixtures/request.ts",
"api:file": "openapi -i src/services/fixtures/openapi.json -o src/services/api --client axios --useOptions --useUnionTypes --exportSchemas true --indent 2 --request src/services/fixtures/request.ts",
@ -53,6 +54,7 @@
"dateformat": "^5.0.3",
"formik": "^2.2.9",
"framer-motion": "^9.0.4",
"fuse.js": "^6.6.2",
"i18next": "^22.4.10",
"i18next-browser-languagedetector": "^7.0.1",
"i18next-http-backend": "^2.1.1",

View File

@ -18,7 +18,7 @@
"training": "Training",
"trainingDesc1": "Ein spezieller Arbeitsablauf zum Trainieren Ihrer eigenen Embeddings und Checkpoints mit Textual Inversion und Dreambooth über die Weboberfläche.",
"trainingDesc2": "InvokeAI unterstützt bereits das Training von benutzerdefinierten Embeddings mit Textual Inversion unter Verwendung des Hauptskripts.",
"upload": "Upload",
"upload": "Hochladen",
"close": "Schließen",
"load": "Laden",
"statusConnected": "Verbunden",
@ -41,12 +41,34 @@
"statusUpscaling": "Hochskalierung",
"statusUpscalingESRGAN": "Hochskalierung (ESRGAN)",
"statusLoadingModel": "Laden des Modells",
"statusModelChanged": "Modell Geändert"
"statusModelChanged": "Modell Geändert",
"cancel": "Abbruch",
"accept": "Annehmen",
"back": "Zurück",
"langEnglish": "Englisch",
"langDutch": "Niederländisch",
"langFrench": "Französisch",
"oceanTheme": "Ozean",
"langItalian": "Italienisch",
"langPortuguese": "Portogisisch",
"langRussian": "Russisch",
"langUkranian": "Ukrainisch",
"hotkeysLabel": "Tastenkombinationen",
"githubLabel": "Github",
"discordLabel": "Discord",
"txt2img": "Text zu Bild",
"postprocessing": "Nachbearbeitung",
"langPolish": "Polnisch",
"langJapanese": "Japanisch",
"langArabic": "Arabisch",
"langKorean": "Koreanisch",
"langHebrew": "Hebräisch",
"langSpanish": "Spanisch"
},
"gallery": {
"generations": "Erzeugungen",
"showGenerations": "Zeige Erzeugnisse",
"uploads": "Uploads",
"uploads": "Hochgelades",
"showUploads": "Zeige Uploads",
"galleryImageSize": "Bildgröße",
"galleryImageResetSize": "Größe zurücksetzen",
@ -312,7 +334,11 @@
"deleteModel": "Model löschen",
"deleteConfig": "Konfiguration löschen",
"deleteMsg1": "Möchten Sie diesen Model-Eintrag wirklich aus InvokeAI löschen?",
"deleteMsg2": "Dadurch wird die Modellprüfpunktdatei nicht von Ihrer Festplatte gelöscht. Sie können sie bei Bedarf erneut hinzufügen."
"deleteMsg2": "Dadurch wird die Modellprüfpunktdatei nicht von Ihrer Festplatte gelöscht. Sie können sie bei Bedarf erneut hinzufügen.",
"customConfig": "Benutzerdefinierte Konfiguration",
"invokeRoot": "InvokeAI Ordner",
"formMessageDiffusersVAELocationDesc": "Falls nicht angegeben, sucht InvokeAI nach der VAE-Datei innerhalb des oben angegebenen Modell Speicherortes.",
"checkpointModels": "Kontrollpunkte"
},
"parameters": {
"images": "Bilder",
@ -370,7 +396,10 @@
"useInitImg": "Ausgangsbild verwenden",
"deleteImage": "Bild löschen",
"initialImage": "Ursprüngliches Bild",
"showOptionsPanel": "Optionsleiste zeigen"
"showOptionsPanel": "Optionsleiste zeigen",
"cancel": {
"setType": "Abbruchart festlegen"
}
},
"settings": {
"displayInProgress": "Bilder in Bearbeitung anzeigen",
@ -489,5 +518,25 @@
"betaDarkenOutside": "Außen abdunkeln",
"betaLimitToBox": "Begrenzung auf das Feld",
"betaPreserveMasked": "Maskiertes bewahren"
},
"accessibility": {
"modelSelect": "Model Auswahl",
"uploadImage": "Bild hochladen",
"previousImage": "Voriges Bild",
"useThisParameter": "Benutze diesen Parameter",
"copyMetadataJson": "Kopiere metadata JSON",
"zoomIn": "Vergrößern",
"rotateClockwise": "Im Uhrzeigersinn drehen",
"flipHorizontally": "Horizontal drehen",
"flipVertically": "Vertikal drehen",
"modifyConfig": "Optionen einstellen",
"toggleAutoscroll": "Auroscroll ein/ausschalten",
"toggleLogViewer": "Log Betrachter ein/ausschalten",
"showGallery": "Zeige Galerie",
"showOptionsPanel": "Zeige Optionen",
"reset": "Zurücksetzen",
"nextImage": "Nächstes Bild",
"zoomOut": "Verkleinern",
"rotateCounterClockwise": "Gegen den Uhrzeigersinn verdrehen"
}
}

View File

@ -8,7 +8,7 @@
"nextImage": "Next Image",
"useThisParameter": "Use this parameter",
"copyMetadataJson": "Copy metadata JSON",
"exitViewer": "ExitViewer",
"exitViewer": "Exit Viewer",
"zoomIn": "Zoom In",
"zoomOut": "Zoom Out",
"rotateCounterClockwise": "Rotate Counter-Clockwise",
@ -19,7 +19,8 @@
"toggleAutoscroll": "Toggle autoscroll",
"toggleLogViewer": "Toggle Log Viewer",
"showGallery": "Show Gallery",
"showOptionsPanel": "Show Options Panel"
"showOptionsPanel": "Show Options Panel",
"menu": "Menu"
},
"common": {
"hotkeysLabel": "Hotkeys",

View File

@ -73,7 +73,8 @@
"postprocessing": "Tratamiento posterior",
"txt2img": "De texto a imagen",
"accept": "Aceptar",
"cancel": "Cancelar"
"cancel": "Cancelar",
"linear": "Lineal"
},
"gallery": {
"generations": "Generaciones",
@ -483,7 +484,9 @@
"negativePrompts": "Preguntas negativas",
"imageToImage": "Imagen a imagen",
"denoisingStrength": "Intensidad de la eliminación del ruido",
"hiresStrength": "Alta resistencia"
"hiresStrength": "Alta resistencia",
"showPreview": "Mostrar la vista previa",
"hidePreview": "Ocultar la vista previa"
},
"settings": {
"models": "Modelos",
@ -529,7 +532,11 @@
"metadataLoadFailed": "Error al cargar metadatos",
"initialImageSet": "Imágen inicial establecida",
"initialImageNotSet": "Imagen inicial no establecida",
"initialImageNotSetDesc": "Error al establecer la imágen inicial"
"initialImageNotSetDesc": "Error al establecer la imágen inicial",
"serverError": "Error en el servidor",
"disconnected": "Desconectado del servidor",
"canceled": "Procesando la cancelación",
"connected": "Conectado al servidor"
},
"tooltip": {
"feature": {
@ -625,6 +632,7 @@
"toggleAutoscroll": "Activar el autodesplazamiento",
"toggleLogViewer": "Alternar el visor de registros",
"showGallery": "Mostrar galería",
"showOptionsPanel": "Mostrar el panel de opciones"
"showOptionsPanel": "Mostrar el panel de opciones",
"menu": "Menú"
}
}

View File

@ -0,0 +1,122 @@
{
"accessibility": {
"reset": "Resetoi",
"useThisParameter": "Käytä tätä parametria",
"modelSelect": "Mallin Valinta",
"exitViewer": "Poistu katselimesta",
"uploadImage": "Lataa kuva",
"copyMetadataJson": "Kopioi metadata JSON:iin",
"invokeProgressBar": "Invoken edistymispalkki",
"nextImage": "Seuraava kuva",
"previousImage": "Edellinen kuva",
"zoomIn": "Lähennä",
"flipHorizontally": "Käännä vaakasuoraan",
"zoomOut": "Loitonna",
"rotateCounterClockwise": "Kierrä vastapäivään",
"rotateClockwise": "Kierrä myötäpäivään",
"flipVertically": "Käännä pystysuoraan",
"showGallery": "Näytä galleria",
"modifyConfig": "Muokkaa konfiguraatiota",
"toggleAutoscroll": "Kytke automaattinen vieritys",
"toggleLogViewer": "Kytke lokin katselutila",
"showOptionsPanel": "Näytä asetukset"
},
"common": {
"postProcessDesc2": "Erillinen käyttöliittymä tullaan julkaisemaan helpottaaksemme työnkulkua jälkikäsittelyssä.",
"training": "Kouluta",
"statusLoadingModel": "Ladataan mallia",
"statusModelChanged": "Malli vaihdettu",
"statusConvertingModel": "Muunnetaan mallia",
"statusModelConverted": "Malli muunnettu",
"langFrench": "Ranska",
"langItalian": "Italia",
"languagePickerLabel": "Kielen valinta",
"hotkeysLabel": "Pikanäppäimet",
"reportBugLabel": "Raportoi Bugista",
"langPolish": "Puola",
"themeLabel": "Teema",
"langDutch": "Hollanti",
"settingsLabel": "Asetukset",
"githubLabel": "Github",
"darkTheme": "Tumma",
"lightTheme": "Vaalea",
"greenTheme": "Vihreä",
"langGerman": "Saksa",
"langPortuguese": "Portugali",
"discordLabel": "Discord",
"langEnglish": "Englanti",
"oceanTheme": "Meren sininen",
"langRussian": "Venäjä",
"langUkranian": "Ukraina",
"langSpanish": "Espanja",
"upload": "Lataa",
"statusMergedModels": "Mallit yhdistelty",
"img2img": "Kuva kuvaksi",
"nodes": "Solmut",
"nodesDesc": "Solmupohjainen järjestelmä kuvien generoimiseen on parhaillaan kehitteillä. Pysy kuulolla päivityksistä tähän uskomattomaan ominaisuuteen liittyen.",
"postProcessDesc1": "Invoke AI tarjoaa monenlaisia jälkikäsittelyominaisuukisa. Kuvan laadun skaalaus sekä kasvojen korjaus ovat jo saatavilla WebUI:ssä. Voit ottaa ne käyttöön lisäasetusten valikosta teksti kuvaksi sekä kuva kuvaksi -välilehdiltä. Voit myös suoraan prosessoida kuvia käyttämällä kuvan toimintapainikkeita nykyisen kuvan yläpuolella tai tarkastelussa.",
"postprocessing": "Jälkikäsitellään",
"postProcessing": "Jälkikäsitellään",
"cancel": "Peruuta",
"close": "Sulje",
"accept": "Hyväksy",
"statusConnected": "Yhdistetty",
"statusError": "Virhe",
"statusProcessingComplete": "Prosessointi valmis",
"load": "Lataa",
"back": "Takaisin",
"statusGeneratingTextToImage": "Generoidaan tekstiä kuvaksi",
"trainingDesc2": "InvokeAI tukee jo mukautettujen upotusten kouluttamista tekstin inversiolla käyttäen pääskriptiä.",
"statusDisconnected": "Yhteys katkaistu",
"statusPreparing": "Valmistellaan",
"statusIterationComplete": "Iteraatio valmis",
"statusMergingModels": "Yhdistellään malleja",
"statusProcessingCanceled": "Valmistelu peruutettu",
"statusSavingImage": "Tallennetaan kuvaa",
"statusGeneratingImageToImage": "Generoidaan kuvaa kuvaksi",
"statusRestoringFacesGFPGAN": "Korjataan kasvoja (GFPGAN)",
"statusRestoringFacesCodeFormer": "Korjataan kasvoja (CodeFormer)",
"statusGeneratingInpainting": "Generoidaan sisällemaalausta",
"statusGeneratingOutpainting": "Generoidaan ulosmaalausta",
"statusRestoringFaces": "Korjataan kasvoja",
"pinOptionsPanel": "Kiinnitä asetukset -paneeli",
"loadingInvokeAI": "Ladataan Invoke AI:ta",
"loading": "Ladataan",
"statusGenerating": "Generoidaan",
"txt2img": "Teksti kuvaksi",
"trainingDesc1": "Erillinen työnkulku omien upotusten ja tarkastuspisteiden kouluttamiseksi käyttäen tekstin inversiota ja dreamboothia selaimen käyttöliittymässä.",
"postProcessDesc3": "Invoke AI:n komentorivi tarjoaa paljon muita ominaisuuksia, kuten esimerkiksi Embiggenin.",
"unifiedCanvas": "Yhdistetty kanvas",
"statusGenerationComplete": "Generointi valmis"
},
"gallery": {
"uploads": "Lataukset",
"showUploads": "Näytä lataukset",
"galleryImageResetSize": "Resetoi koko",
"maintainAspectRatio": "Säilytä kuvasuhde",
"galleryImageSize": "Kuvan koko",
"pinGallery": "Kiinnitä galleria",
"showGenerations": "Näytä generaatiot",
"singleColumnLayout": "Yhden sarakkeen asettelu",
"generations": "Generoinnit",
"gallerySettings": "Gallerian asetukset",
"autoSwitchNewImages": "Vaihda uusiin kuviin automaattisesti",
"allImagesLoaded": "Kaikki kuvat ladattu",
"noImagesInGallery": "Ei kuvia galleriassa",
"loadMore": "Lataa lisää"
},
"hotkeys": {
"keyboardShortcuts": "näppäimistön pikavalinnat",
"appHotkeys": "Sovelluksen pikanäppäimet",
"generalHotkeys": "Yleiset pikanäppäimet",
"galleryHotkeys": "Gallerian pikanäppäimet",
"unifiedCanvasHotkeys": "Yhdistetyn kanvaan pikanäppäimet",
"cancel": {
"desc": "Peruuta kuvan luominen",
"title": "Peruuta"
},
"invoke": {
"desc": "Luo kuva"
}
}
}

View File

@ -73,7 +73,8 @@
"postprocessing": "Post Elaborazione",
"txt2img": "Testo a Immagine",
"accept": "Accetta",
"cancel": "Annulla"
"cancel": "Annulla",
"linear": "Lineare"
},
"gallery": {
"generations": "Generazioni",
@ -483,7 +484,9 @@
},
"hSymmetryStep": "Passi Simmetria Orizzontale",
"vSymmetryStep": "Passi Simmetria Verticale",
"symmetry": "Simmetria"
"symmetry": "Simmetria",
"hidePreview": "Nascondi l'anteprima",
"showPreview": "Mostra l'anteprima"
},
"settings": {
"models": "Modelli",
@ -529,7 +532,11 @@
"metadataLoadFailed": "Impossibile caricare i metadati",
"initialImageSet": "Immagine iniziale impostata",
"initialImageNotSet": "Immagine iniziale non impostata",
"initialImageNotSetDesc": "Impossibile caricare l'immagine iniziale"
"initialImageNotSetDesc": "Impossibile caricare l'immagine iniziale",
"serverError": "Errore del Server",
"disconnected": "Disconnesso dal Server",
"connected": "Connesso al Server",
"canceled": "Elaborazione annullata"
},
"tooltip": {
"feature": {
@ -625,6 +632,7 @@
"showOptionsPanel": "Mostra il pannello opzioni",
"flipVertically": "Capovolgi verticalmente",
"toggleAutoscroll": "Attiva/disattiva lo scorrimento automatico",
"modifyConfig": "Modifica configurazione"
"modifyConfig": "Modifica configurazione",
"menu": "Menu"
}
}

View File

@ -37,7 +37,43 @@
"statusUpscaling": "アップスケーリング",
"statusUpscalingESRGAN": "アップスケーリング (ESRGAN)",
"statusLoadingModel": "モデルを読み込む",
"statusModelChanged": "モデルを変更"
"statusModelChanged": "モデルを変更",
"cancel": "キャンセル",
"accept": "同意",
"langBrPortuguese": "Português do Brasil",
"langRussian": "Русский",
"langSimplifiedChinese": "简体中文",
"langUkranian": "Украї́нська",
"langSpanish": "Español",
"img2img": "img2img",
"unifiedCanvas": "Unified Canvas",
"statusMergingModels": "モデルのマージ",
"statusModelConverted": "変換済モデル",
"statusGeneratingInpainting": "Inpaintingを生成",
"statusIterationComplete": "Iteration Complete",
"statusGeneratingOutpainting": "Outpaintingを生成",
"loading": "ロード中",
"loadingInvokeAI": "Invoke AIをロード中",
"statusConvertingModel": "モデルの変換",
"statusMergedModels": "マージ済モデル",
"pinOptionsPanel": "オプションパネルを固定",
"githubLabel": "Github",
"hotkeysLabel": "ホットキー",
"langHebrew": "עברית",
"discordLabel": "Discord",
"langItalian": "Italiano",
"langEnglish": "English",
"oceanTheme": "オーシャン",
"langArabic": "アラビア語",
"langDutch": "Nederlands",
"langFrench": "Français",
"langGerman": "Deutsch",
"langPortuguese": "Português",
"nodes": "ノード",
"langKorean": "한국어",
"langPolish": "Polski",
"txt2img": "txt2img",
"postprocessing": "Post Processing"
},
"gallery": {
"uploads": "アップロード",
@ -46,11 +82,14 @@
"galleryImageResetSize": "サイズをリセット",
"gallerySettings": "ギャラリーの設定",
"maintainAspectRatio": "アスペクト比を維持",
"singleColumnLayout": "シングルカラムレイアウト",
"singleColumnLayout": "1カラムレイアウト",
"pinGallery": "ギャラリーにピン留め",
"allImagesLoaded": "すべての画像を読み込む",
"loadMore": "さらに読み込む",
"noImagesInGallery": "ギャラリーに画像がありません"
"noImagesInGallery": "ギャラリーに画像がありません",
"generations": "生成",
"showGenerations": "生成過程を見る",
"autoSwitchNewImages": "新しい画像に自動切替"
},
"hotkeys": {
"keyboardShortcuts": "キーボードショートカット",
@ -59,14 +98,16 @@
"galleryHotkeys": "ギャラリーのホットキー",
"unifiedCanvasHotkeys": "Unified Canvasのホットキー",
"invoke": {
"desc": "画像を生成"
"desc": "画像を生成",
"title": "Invoke"
},
"cancel": {
"title": "キャンセル",
"desc": "画像の生成をキャンセル"
},
"focusPrompt": {
"desc": "プロンプトテキストボックスにフォーカス"
"desc": "プロンプトテキストボックスにフォーカス",
"title": "プロジェクトにフォーカス"
},
"toggleOptions": {
"title": "オプションパネルのトグル",
@ -410,5 +451,27 @@
"accept": "同意",
"showHide": "表示/非表示",
"discardAll": "すべて破棄"
},
"accessibility": {
"modelSelect": "モデルを選択",
"invokeProgressBar": "進捗バー",
"reset": "リセット",
"uploadImage": "画像をアップロード",
"previousImage": "前の画像",
"nextImage": "次の画像",
"useThisParameter": "このパラメータを使用する",
"copyMetadataJson": "メタデータをコピー(JSON)",
"zoomIn": "ズームイン",
"exitViewer": "ExitViewer",
"zoomOut": "ズームアウト",
"rotateCounterClockwise": "反時計回りに回転",
"rotateClockwise": "時計回りに回転",
"flipHorizontally": "水平方向に反転",
"flipVertically": "垂直方向に反転",
"toggleAutoscroll": "自動スクロールの切替",
"modifyConfig": "Modify Config",
"toggleLogViewer": "Log Viewerの切替",
"showGallery": "ギャラリーを表示",
"showOptionsPanel": "オプションパネルを表示"
}
}

View File

@ -0,0 +1 @@
{}

View File

@ -62,7 +62,18 @@
"statusConvertingModel": "Omzetten van model",
"statusModelConverted": "Model omgezet",
"statusMergingModels": "Samenvoegen van modellen",
"statusMergedModels": "Modellen samengevoegd"
"statusMergedModels": "Modellen samengevoegd",
"cancel": "Annuleer",
"accept": "Akkoord",
"langPortuguese": "Português",
"pinOptionsPanel": "Zet deelscherm Opties vast",
"loading": "Bezig met laden",
"loadingInvokeAI": "Bezig met laden van Invoke AI",
"oceanTheme": "Oceaan",
"langHebrew": "עברית",
"langKorean": "한국어",
"txt2img": "Tekst naar afbeelding",
"postprocessing": "Nabewerking"
},
"gallery": {
"generations": "Gegenereerde afbeeldingen",
@ -301,7 +312,7 @@
"name": "Naam",
"nameValidationMsg": "Geef een naam voor je model",
"description": "Beschrijving",
"descriptionValidationMsg": "Voeg een beschrijving toe voor je model.",
"descriptionValidationMsg": "Voeg een beschrijving toe voor je model",
"config": "Configuratie",
"configValidationMsg": "Pad naar het configuratiebestand van je model.",
"modelLocation": "Locatie model",
@ -391,7 +402,13 @@
"modelMergeInterpAddDifferenceHelp": "In deze stand wordt model 3 eerst van model 2 afgehaald. Wat daar uitkomt wordt gemengd met model 1, gebruikmakend van de hierboven ingestelde alfawaarde.",
"inverseSigmoid": "Keer Sigmoid om",
"sigmoid": "Sigmoid",
"weightedSum": "Gewogen som"
"weightedSum": "Gewogen som",
"v2_base": "v2 (512px)",
"v2_768": "v2 (768px)",
"none": "geen",
"addDifference": "Voeg verschil toe",
"scanForModels": "Scan naar modellen",
"pickModelType": "Kies modelsoort"
},
"parameters": {
"images": "Afbeeldingen",
@ -561,7 +578,7 @@
"autoSaveToGallery": "Bewaar automatisch naar galerij",
"saveBoxRegionOnly": "Bewaar alleen tekengebied",
"limitStrokesToBox": "Beperk streken tot tekenvak",
"showCanvasDebugInfo": "Toon foutopsporingsgegevens canvas",
"showCanvasDebugInfo": "Toon aanvullende canvasgegevens",
"clearCanvasHistory": "Wis canvasgeschiedenis",
"clearHistory": "Wis geschiedenis",
"clearCanvasHistoryMessage": "Het wissen van de canvasgeschiedenis laat het huidige canvas ongemoeid, maar wist onherstelbaar de geschiedenis voor het ongedaan maken en herhalen.",
@ -587,5 +604,27 @@
"betaDarkenOutside": "Verduister buiten tekenvak",
"betaLimitToBox": "Beperk tot tekenvak",
"betaPreserveMasked": "Behoud masker"
},
"accessibility": {
"exitViewer": "Stop viewer",
"zoomIn": "Zoom in",
"rotateCounterClockwise": "Draai tegen de klok in",
"modelSelect": "Modelkeuze",
"invokeProgressBar": "Voortgangsbalk Invoke",
"reset": "Herstel",
"uploadImage": "Upload afbeelding",
"previousImage": "Vorige afbeelding",
"nextImage": "Volgende afbeelding",
"useThisParameter": "Gebruik deze parameter",
"copyMetadataJson": "Kopieer metagegevens-JSON",
"zoomOut": "Zoom uit",
"rotateClockwise": "Draai met de klok mee",
"flipHorizontally": "Spiegel horizontaal",
"flipVertically": "Spiegel verticaal",
"modifyConfig": "Wijzig configuratie",
"toggleAutoscroll": "Autom. scrollen aan/uit",
"toggleLogViewer": "Logboekviewer aan/uit",
"showGallery": "Toon galerij",
"showOptionsPanel": "Toon deelscherm Opties"
}
}

View File

@ -9,7 +9,7 @@
"lightTheme": "Светлая",
"greenTheme": "Зеленая",
"img2img": "Изображение в изображение (img2img)",
"unifiedCanvas": "Универсальный холст",
"unifiedCanvas": "Единый холст",
"nodes": "Ноды",
"langRussian": "Русский",
"nodesDesc": "Cистема генерации изображений на основе нодов (узлов) уже разрабатывается. Следите за новостями об этой замечательной функции.",
@ -53,7 +53,28 @@
"loading": "Загрузка",
"loadingInvokeAI": "Загрузка Invoke AI",
"back": "Назад",
"statusConvertingModel": "Конвертация модели"
"statusConvertingModel": "Конвертация модели",
"cancel": "Отменить",
"accept": "Принять",
"oceanTheme": "Океан",
"langUkranian": "Украинский",
"langEnglish": "Английский",
"postprocessing": "Постобработка",
"langArabic": "Арабский",
"langSpanish": "Испанский",
"langSimplifiedChinese": "Китайский (упрощенный)",
"langDutch": "Нидерландский",
"langFrench": "Французский",
"langGerman": "Немецкий",
"langHebrew": "Иврит",
"langItalian": "Итальянский",
"langJapanese": "Японский",
"langKorean": "Корейский",
"langPolish": "Польский",
"langPortuguese": "Португальский",
"txt2img": "Текст в изображение (txt2img)",
"langBrPortuguese": "Португальский (Бразилия)",
"linear": "Линейная обработка"
},
"gallery": {
"generations": "Генерации",
@ -72,11 +93,11 @@
"noImagesInGallery": "Изображений нет"
},
"hotkeys": {
"keyboardShortcuts": "Клавиатурные сокращения",
"keyboardShortcuts": "Горячие клавиши",
"appHotkeys": "Горячие клавиши приложения",
"generalHotkeys": "Общие горячие клавиши",
"galleryHotkeys": "Горячие клавиши галереи",
"unifiedCanvasHotkeys": "Горячие клавиши универсального холста",
"unifiedCanvasHotkeys": "Горячие клавиши Единого холста",
"invoke": {
"title": "Invoke",
"desc": "Сгенерировать изображение"
@ -266,12 +287,12 @@
"desc": "Сбросить вид холста"
},
"previousStagingImage": {
"title": "Previous Staging Image",
"desc": "Предыдущее изображение"
"title": "Предыдущее изображение",
"desc": "Предыдущая область изображения"
},
"nextStagingImage": {
"title": "Next Staging Image",
"desc": "Следующее изображение"
"title": "Следующее изображение",
"desc": "Следующая область изображения"
},
"acceptStagingImage": {
"title": "Принять изображение",
@ -353,7 +374,42 @@
"modelConverted": "Модель преобразована",
"invokeRoot": "Каталог InvokeAI",
"modelsMerged": "Модели объединены",
"mergeModels": "Объединить модели"
"mergeModels": "Объединить модели",
"scanForModels": "Просканировать модели",
"sigmoid": "Сигмоид",
"formMessageDiffusersModelLocation": "Расположение Diffusers-модели",
"modelThree": "Модель 3",
"modelMergeHeaderHelp2": "Только Diffusers-модели доступны для объединения. Если вы хотите объединить checkpoint-модели, сначала преобразуйте их в Diffusers.",
"pickModelType": "Выбрать тип модели",
"formMessageDiffusersVAELocation": "Расположение VAE",
"v1": "v1",
"convertToDiffusersSaveLocation": "Путь сохранения",
"customSaveLocation": "Пользовательский путь сохранения",
"alpha": "Альфа",
"diffusersModels": "Diffusers",
"customConfig": "Пользовательский конфиг",
"pathToCustomConfig": "Путь к пользовательскому конфигу",
"inpainting": "v1 Inpainting",
"sameFolder": "В ту же папку",
"modelOne": "Модель 1",
"mergedModelCustomSaveLocation": "Пользовательский путь",
"none": "пусто",
"addDifference": "Добавить разницу",
"vaeRepoIDValidationMsg": "Онлайн репозиторий VAE",
"convertToDiffusersHelpText2": "Этот процесс заменит вашу запись в Model Manager на версию той же модели в Diffusers.",
"custom": "Пользовательский",
"modelTwo": "Модель 2",
"mergedModelSaveLocation": "Путь сохранения",
"merge": "Объединить",
"interpolationType": "Тип интерполяции",
"modelMergeInterpAddDifferenceHelp": "В этом режиме Модель 3 сначала вычитается из Модели 2. Результирующая версия смешивается с Моделью 1 с установленным выше коэффициентом Альфа.",
"modelMergeHeaderHelp1": "Вы можете объединить до трех разных моделей, чтобы создать смешанную, соответствующую вашим потребностям.",
"modelMergeAlphaHelp": "Альфа влияет на силу смешивания моделей. Более низкие значения альфа приводят к меньшему влиянию второй модели.",
"inverseSigmoid": "Обратный Сигмоид",
"weightedSum": "Взвешенная сумма",
"safetensorModels": "SafeTensors",
"v2_768": "v2 (768px)",
"v2_base": "v2 (512px)"
},
"parameters": {
"images": "Изображения",
@ -380,7 +436,7 @@
"scale": "Масштаб",
"otherOptions": "Другие параметры",
"seamlessTiling": "Бесшовный узор",
"hiresOptim": "Высокое разрешение",
"hiresOptim": "Оптимизация High Res",
"imageFit": "Уместить изображение",
"codeformerFidelity": "Точность",
"seamSize": "Размер шва",
@ -397,11 +453,11 @@
"infillScalingHeader": "Заполнение и масштабирование",
"img2imgStrength": "Сила обработки img2img",
"toggleLoopback": "Зациклить обработку",
"invoke": "Вызвать",
"invoke": "Invoke",
"promptPlaceholder": "Введите запрос здесь (на английском). [исключенные токены], (более значимые)++, (менее значимые)--, swap и blend тоже доступны (смотрите Github)",
"sendTo": "Отправить",
"sendToImg2Img": "Отправить в img2img",
"sendToUnifiedCanvas": "Отправить на холст",
"sendToUnifiedCanvas": "Отправить на Единый холст",
"copyImageToLink": "Скопировать ссылку",
"downloadImage": "Скачать",
"openInViewer": "Открыть в просмотрщике",
@ -413,7 +469,24 @@
"info": "Метаданные",
"deleteImage": "Удалить изображение",
"initialImage": "Исходное изображение",
"showOptionsPanel": "Показать панель настроек"
"showOptionsPanel": "Показать панель настроек",
"vSymmetryStep": "Шаг верт. симметрии",
"cancel": {
"immediate": "Отменить немедленно",
"schedule": "Отменить после текущей итерации",
"isScheduled": "Отмена",
"setType": "Установить тип отмены"
},
"general": "Основное",
"hiresStrength": "Сила High Res",
"symmetry": "Симметрия",
"hSymmetryStep": "Шаг гор. симметрии",
"hidePreview": "Скрыть предпросмотр",
"imageToImage": "Изображение в изображение",
"denoisingStrength": "Сила шумоподавления",
"copyImage": "Скопировать изображение",
"negativePrompts": "Исключающий запрос",
"showPreview": "Показать предпросмотр"
},
"settings": {
"models": "Модели",
@ -423,10 +496,11 @@
"displayHelpIcons": "Показывать значки подсказок",
"useCanvasBeta": "Показывать инструменты слева (Beta UI)",
"enableImageDebugging": "Включить отладку",
"resetWebUI": "Вернуть умолчания",
"resetWebUI": "Сброс настроек Web UI",
"resetWebUIDesc1": "Сброс настроек веб-интерфейса удаляет только локальный кэш браузера с вашими изображениями и настройками. Он не удаляет изображения с диска.",
"resetWebUIDesc2": "Если изображения не отображаются в галерее или не работает что-то еще, пожалуйста, попробуйте сбросить настройки, прежде чем сообщать о проблеме на GitHub.",
"resetComplete": "Интерфейс сброшен. Обновите эту страницу."
"resetComplete": "Интерфейс сброшен. Обновите эту страницу.",
"useSlidersForAll": "Использовать ползунки для всех параметров"
},
"toast": {
"tempFoldersEmptied": "Временная папка очищена",
@ -441,7 +515,7 @@
"imageSavedToGallery": "Изображение сохранено в галерею",
"canvasMerged": "Холст объединен",
"sentToImageToImage": "Отправить в img2img",
"sentToUnifiedCanvas": "Отправить на холст",
"sentToUnifiedCanvas": "Отправлено на Единый холст",
"parametersSet": "Параметры заданы",
"parametersNotSet": "Параметры не заданы",
"parametersNotSetDesc": "Не найдены метаданные изображения.",
@ -458,7 +532,11 @@
"metadataLoadFailed": "Не удалось загрузить метаданные",
"initialImageSet": "Исходное изображение задано",
"initialImageNotSet": "Исходное изображение не задано",
"initialImageNotSetDesc": "Не получилось загрузить исходное изображение"
"initialImageNotSetDesc": "Не получилось загрузить исходное изображение",
"serverError": "Ошибка сервера",
"disconnected": "Отключено от сервера",
"connected": "Подключено к серверу",
"canceled": "Обработка отменена"
},
"tooltip": {
"feature": {
@ -507,7 +585,7 @@
"autoSaveToGallery": "Автосохранение в галерее",
"saveBoxRegionOnly": "Сохранять только выделение",
"limitStrokesToBox": "Ограничить штрихи выделением",
"showCanvasDebugInfo": "Показать отладку холста",
"showCanvasDebugInfo": "Показать доп. информацию о холсте",
"clearCanvasHistory": "Очистить историю холста",
"clearHistory": "Очистить историю",
"clearCanvasHistoryMessage": "Очистка истории холста оставляет текущий холст нетронутым, но удаляет историю отмен и повторов.",
@ -535,6 +613,26 @@
"betaPreserveMasked": "Сохранять маскируемую область"
},
"accessibility": {
"modelSelect": "Выбор модели"
"modelSelect": "Выбор модели",
"uploadImage": "Загрузить изображение",
"nextImage": "Следующее изображение",
"previousImage": "Предыдущее изображение",
"zoomIn": "Приблизить",
"zoomOut": "Отдалить",
"rotateClockwise": "Повернуть по часовой стрелке",
"rotateCounterClockwise": "Повернуть против часовой стрелки",
"flipVertically": "Перевернуть вертикально",
"flipHorizontally": "Отразить горизонтально",
"toggleAutoscroll": "Включить автопрокрутку",
"toggleLogViewer": "Показать или скрыть просмотрщик логов",
"showOptionsPanel": "Показать опции",
"showGallery": "Показать галерею",
"invokeProgressBar": "Индикатор выполнения",
"reset": "Сброс",
"modifyConfig": "Изменить конфиг",
"useThisParameter": "Использовать этот параметр",
"copyMetadataJson": "Скопировать метаданные JSON",
"exitViewer": "Закрыть просмотрщик",
"menu": "Меню"
}
}

View File

@ -0,0 +1,254 @@
{
"accessibility": {
"copyMetadataJson": "Kopiera metadata JSON",
"zoomIn": "Zooma in",
"exitViewer": "Avslutningsvisare",
"modelSelect": "Välj modell",
"uploadImage": "Ladda upp bild",
"invokeProgressBar": "Invoke förloppsmätare",
"nextImage": "Nästa bild",
"toggleAutoscroll": "Växla automatisk rullning",
"flipHorizontally": "Vänd vågrätt",
"flipVertically": "Vänd lodrätt",
"zoomOut": "Zooma ut",
"toggleLogViewer": "Växla logvisare",
"reset": "Starta om",
"previousImage": "Föregående bild",
"useThisParameter": "Använd denna parametern",
"showGallery": "Visa galleri",
"rotateCounterClockwise": "Rotera moturs",
"rotateClockwise": "Rotera medurs",
"modifyConfig": "Ändra konfiguration",
"showOptionsPanel": "Visa inställningspanelen"
},
"common": {
"hotkeysLabel": "Snabbtangenter",
"reportBugLabel": "Rapportera bugg",
"githubLabel": "Github",
"discordLabel": "Discord",
"settingsLabel": "Inställningar",
"darkTheme": "Mörk",
"lightTheme": "Ljus",
"greenTheme": "Grön",
"oceanTheme": "Hav",
"langEnglish": "Engelska",
"langDutch": "Nederländska",
"langFrench": "Franska",
"langGerman": "Tyska",
"langItalian": "Italienska",
"langArabic": "العربية",
"langHebrew": "עברית",
"langPolish": "Polski",
"langPortuguese": "Português",
"langBrPortuguese": "Português do Brasil",
"langSimplifiedChinese": "简体中文",
"langJapanese": "日本語",
"langKorean": "한국어",
"langRussian": "Русский",
"unifiedCanvas": "Förenad kanvas",
"nodesDesc": "Ett nodbaserat system för bildgenerering är under utveckling. Håll utkik för uppdateringar om denna fantastiska funktion.",
"langUkranian": "Украї́нська",
"langSpanish": "Español",
"postProcessDesc2": "Ett dedikerat användargränssnitt kommer snart att släppas för att underlätta mer avancerade arbetsflöden av efterbehandling.",
"trainingDesc1": "Ett dedikerat arbetsflöde för träning av dina egna inbäddningar och kontrollpunkter genom Textual Inversion eller Dreambooth från webbgränssnittet.",
"trainingDesc2": "InvokeAI stöder redan träning av anpassade inbäddningar med hjälp av Textual Inversion genom huvudscriptet.",
"upload": "Ladda upp",
"close": "Stäng",
"cancel": "Avbryt",
"accept": "Acceptera",
"statusDisconnected": "Frånkopplad",
"statusGeneratingTextToImage": "Genererar text till bild",
"statusGeneratingImageToImage": "Genererar Bild till bild",
"statusGeneratingInpainting": "Genererar Måla i",
"statusGenerationComplete": "Generering klar",
"statusModelConverted": "Modell konverterad",
"statusMergingModels": "Sammanfogar modeller",
"pinOptionsPanel": "Nåla fast inställningspanelen",
"loading": "Laddar",
"loadingInvokeAI": "Laddar Invoke AI",
"statusRestoringFaces": "Återskapar ansikten",
"languagePickerLabel": "Språkväljare",
"themeLabel": "Tema",
"txt2img": "Text till bild",
"nodes": "Noder",
"img2img": "Bild till bild",
"postprocessing": "Efterbehandling",
"postProcessing": "Efterbehandling",
"load": "Ladda",
"training": "Träning",
"postProcessDesc1": "Invoke AI erbjuder ett brett utbud av efterbehandlingsfunktioner. Uppskalning och ansiktsåterställning finns redan tillgängligt i webbgränssnittet. Du kommer åt dem ifrån Avancerade inställningar-menyn under Bild till bild-fliken. Du kan också behandla bilder direkt genom att använda knappen bildåtgärder ovanför nuvarande bild eller i bildvisaren.",
"postProcessDesc3": "Invoke AI's kommandotolk erbjuder många olika funktioner, bland annat \"Förstora\".",
"statusGenerating": "Genererar",
"statusError": "Fel",
"back": "Bakåt",
"statusConnected": "Ansluten",
"statusPreparing": "Förbereder",
"statusProcessingCanceled": "Bearbetning avbruten",
"statusProcessingComplete": "Bearbetning färdig",
"statusGeneratingOutpainting": "Genererar Fyll ut",
"statusIterationComplete": "Itterering klar",
"statusSavingImage": "Sparar bild",
"statusRestoringFacesGFPGAN": "Återskapar ansikten (GFPGAN)",
"statusRestoringFacesCodeFormer": "Återskapar ansikten (CodeFormer)",
"statusUpscaling": "Skala upp",
"statusUpscalingESRGAN": "Uppskalning (ESRGAN)",
"statusModelChanged": "Modell ändrad",
"statusLoadingModel": "Laddar modell",
"statusConvertingModel": "Konverterar modell",
"statusMergedModels": "Modeller sammanfogade"
},
"gallery": {
"generations": "Generationer",
"showGenerations": "Visa generationer",
"uploads": "Uppladdningar",
"showUploads": "Visa uppladdningar",
"galleryImageSize": "Bildstorlek",
"allImagesLoaded": "Alla bilder laddade",
"loadMore": "Ladda mer",
"galleryImageResetSize": "Återställ storlek",
"gallerySettings": "Galleriinställningar",
"maintainAspectRatio": "Behåll bildförhållande",
"pinGallery": "Nåla fast galleri",
"noImagesInGallery": "Inga bilder i galleriet",
"autoSwitchNewImages": "Ändra automatiskt till nya bilder",
"singleColumnLayout": "Enkolumnslayout"
},
"hotkeys": {
"generalHotkeys": "Allmänna snabbtangenter",
"galleryHotkeys": "Gallerisnabbtangenter",
"unifiedCanvasHotkeys": "Snabbtangenter för sammanslagskanvas",
"invoke": {
"title": "Anropa",
"desc": "Genererar en bild"
},
"cancel": {
"title": "Avbryt",
"desc": "Avbryt bildgenerering"
},
"focusPrompt": {
"desc": "Fokusera området för promptinmatning",
"title": "Fokusprompt"
},
"pinOptions": {
"desc": "Nåla fast alternativpanelen",
"title": "Nåla fast alternativ"
},
"toggleOptions": {
"title": "Växla inställningar",
"desc": "Öppna och stäng alternativpanelen"
},
"toggleViewer": {
"title": "Växla visaren",
"desc": "Öppna och stäng bildvisaren"
},
"toggleGallery": {
"title": "Växla galleri",
"desc": "Öppna eller stäng galleribyrån"
},
"maximizeWorkSpace": {
"title": "Maximera arbetsyta",
"desc": "Stäng paneler och maximera arbetsyta"
},
"changeTabs": {
"title": "Växla flik",
"desc": "Byt till en annan arbetsyta"
},
"consoleToggle": {
"title": "Växla konsol",
"desc": "Öppna och stäng konsol"
},
"setSeed": {
"desc": "Använd seed för nuvarande bild",
"title": "välj seed"
},
"setParameters": {
"title": "Välj parametrar",
"desc": "Använd alla parametrar från nuvarande bild"
},
"setPrompt": {
"desc": "Använd prompt för nuvarande bild",
"title": "Välj prompt"
},
"restoreFaces": {
"title": "Återskapa ansikten",
"desc": "Återskapa nuvarande bild"
},
"upscale": {
"title": "Skala upp",
"desc": "Skala upp nuvarande bild"
},
"showInfo": {
"title": "Visa info",
"desc": "Visa metadata för nuvarande bild"
},
"sendToImageToImage": {
"title": "Skicka till Bild till bild",
"desc": "Skicka nuvarande bild till Bild till bild"
},
"deleteImage": {
"title": "Radera bild",
"desc": "Radera nuvarande bild"
},
"closePanels": {
"title": "Stäng paneler",
"desc": "Stäng öppna paneler"
},
"previousImage": {
"title": "Föregående bild",
"desc": "Visa föregående bild"
},
"nextImage": {
"title": "Nästa bild",
"desc": "Visa nästa bild"
},
"toggleGalleryPin": {
"title": "Växla gallerinål",
"desc": "Nålar fast eller nålar av galleriet i gränssnittet"
},
"increaseGalleryThumbSize": {
"title": "Förstora galleriets bildstorlek",
"desc": "Förstora miniatyrbildernas storlek"
},
"decreaseGalleryThumbSize": {
"title": "Minska gelleriets bildstorlek",
"desc": "Minska miniatyrbildernas storlek i galleriet"
},
"decreaseBrushSize": {
"desc": "Förminska storleken på kanvas- pensel eller suddgummi",
"title": "Minska penselstorlek"
},
"increaseBrushSize": {
"title": "Öka penselstorlek",
"desc": "Öka stoleken på kanvas- pensel eller suddgummi"
},
"increaseBrushOpacity": {
"title": "Öka penselns opacitet",
"desc": "Öka opaciteten för kanvaspensel"
},
"decreaseBrushOpacity": {
"desc": "Minska kanvaspenselns opacitet",
"title": "Minska penselns opacitet"
},
"moveTool": {
"title": "Flytta",
"desc": "Tillåt kanvasnavigation"
},
"fillBoundingBox": {
"title": "Fyll ram",
"desc": "Fyller ramen med pensels färg"
},
"keyboardShortcuts": "Snabbtangenter",
"appHotkeys": "Appsnabbtangenter",
"selectBrush": {
"desc": "Välj kanvaspensel",
"title": "Välj pensel"
},
"selectEraser": {
"desc": "Välj kanvassuddgummi",
"title": "Välj suddgummi"
},
"eraseBoundingBox": {
"title": "Ta bort ram"
}
}
}

View File

@ -0,0 +1,64 @@
{
"accessibility": {
"invokeProgressBar": "Invoke ilerleme durumu",
"nextImage": "Sonraki Resim",
"useThisParameter": "Kullanıcı parametreleri",
"copyMetadataJson": "Metadata verilerini kopyala (JSON)",
"exitViewer": "Görüntüleme Modundan Çık",
"zoomIn": "Yakınlaştır",
"zoomOut": "Uzaklaştır",
"rotateCounterClockwise": "Döndür (Saat yönünün tersine)",
"rotateClockwise": "Döndür (Saat yönünde)",
"flipHorizontally": "Yatay Çevir",
"flipVertically": "Dikey Çevir",
"modifyConfig": "Ayarları Değiştir",
"toggleAutoscroll": "Otomatik kaydırmayı aç/kapat",
"toggleLogViewer": "Günlük Görüntüleyici Aç/Kapa",
"showOptionsPanel": "Ayarlar Panelini Göster",
"modelSelect": "Model Seçin",
"reset": "Sıfırla",
"uploadImage": "Resim Yükle",
"previousImage": "Önceki Resim",
"menu": "Menü",
"showGallery": "Galeriyi Göster"
},
"common": {
"hotkeysLabel": "Kısayol Tuşları",
"themeLabel": "Tema",
"languagePickerLabel": "Dil Seçimi",
"reportBugLabel": "Hata Bildir",
"githubLabel": "Github",
"discordLabel": "Discord",
"settingsLabel": "Ayarlar",
"darkTheme": "Karanlık Tema",
"lightTheme": "Aydınlık Tema",
"greenTheme": "Yeşil Tema",
"oceanTheme": "Okyanus Tema",
"langArabic": "Arapça",
"langEnglish": "İngilizce",
"langDutch": "Hollandaca",
"langFrench": "Fransızca",
"langGerman": "Almanca",
"langItalian": "İtalyanca",
"langJapanese": "Japonca",
"langPolish": "Lehçe",
"langPortuguese": "Portekizce",
"langBrPortuguese": "Portekizcr (Brezilya)",
"langRussian": "Rusça",
"langSimplifiedChinese": "Çince (Basit)",
"langUkranian": "Ukraynaca",
"langSpanish": "İspanyolca",
"txt2img": "Metinden Resime",
"img2img": "Resimden Metine",
"linear": "Çizgisel",
"nodes": "Düğümler",
"postprocessing": "İşlem Sonrası",
"postProcessing": "İşlem Sonrası",
"postProcessDesc2": "Daha gelişmiş özellikler için ve iş akışını kolaylaştırmak için özel bir kullanıcı arayüzü çok yakında yayınlanacaktır.",
"postProcessDesc3": "Invoke AI komut satırı arayüzü, bir çok yeni özellik sunmaktadır.",
"langKorean": "Korece",
"unifiedCanvas": "Akıllı Tuval",
"nodesDesc": "Görüntülerin oluşturulmasında hazırladığımız yeni bir sistem geliştirme aşamasındadır. Bu harika özellikler ve çok daha fazlası için bizi takip etmeye devam edin.",
"postProcessDesc1": "Invoke AI son kullanıcıya yönelik bir çok özellik sunar. Görüntü kalitesi yükseltme, yüz restorasyonu WebUI üzerinden kullanılabilir. Metinden resime ve resimden metne araçlarına gelişmiş seçenekler menüsünden ulaşabilirsiniz. İsterseniz mevcut görüntü ekranının üzerindeki veya görüntüleyicideki görüntüyü doğrudan düzenleyebilirsiniz."
}
}

View File

@ -16,9 +16,9 @@
"postProcessing": "Постобробка",
"postProcessDesc1": "Invoke AI пропонує широкий спектр функцій постобробки. Збільшення зображення (upscale) та відновлення облич вже доступні в інтерфейсі. Отримайте доступ до них з меню 'Додаткові параметри' на вкладках 'Зображення із тексту' та 'Зображення із зображення'. Обробляйте зображення безпосередньо, використовуючи кнопки дій із зображеннями над поточним зображенням або в режимі перегляду.",
"postProcessDesc2": "Найближчим часом буде випущено спеціальний інтерфейс для більш сучасних процесів постобробки.",
"postProcessDesc3": "Інтерфейс командного рядка Invoke AI пропонує різні інші функції, включаючи збільшення Embiggen",
"postProcessDesc3": "Інтерфейс командного рядка Invoke AI пропонує різні інші функції, включаючи збільшення Embiggen.",
"training": "Навчання",
"trainingDesc1": "Спеціальний інтерфейс для навчання власних моделей з використанням Textual Inversion та Dreambooth",
"trainingDesc1": "Спеціальний інтерфейс для навчання власних моделей з використанням Textual Inversion та Dreambooth.",
"trainingDesc2": "InvokeAI вже підтримує навчання моделей за допомогою TI, через інтерфейс командного рядка.",
"upload": "Завантажити",
"close": "Закрити",
@ -43,7 +43,38 @@
"statusUpscaling": "Збільшення",
"statusUpscalingESRGAN": "Збільшення (ESRGAN)",
"statusLoadingModel": "Завантаження моделі",
"statusModelChanged": "Модель змінено"
"statusModelChanged": "Модель змінено",
"cancel": "Скасувати",
"accept": "Підтвердити",
"back": "Назад",
"postprocessing": "Постобробка",
"statusModelConverted": "Модель сконвертована",
"statusMergingModels": "Злиття моделей",
"loading": "Завантаження",
"loadingInvokeAI": "Завантаження Invoke AI",
"langHebrew": "Іврит",
"langKorean": "Корейська",
"langPortuguese": "Португальська",
"pinOptionsPanel": "Закріпити панель налаштувань",
"oceanTheme": "Океан",
"langArabic": "Арабська",
"langSimplifiedChinese": "Китайська (спрощена)",
"langSpanish": "Іспанська",
"langEnglish": "Англійська",
"langGerman": "Німецька",
"langItalian": "Італійська",
"langJapanese": "Японська",
"langPolish": "Польська",
"langBrPortuguese": "Португальська (Бразилія)",
"langRussian": "Російська",
"githubLabel": "Github",
"txt2img": "Текст в зображення (txt2img)",
"discordLabel": "Discord",
"langDutch": "Голландська",
"langFrench": "Французька",
"statusMergedModels": "Моделі об'єднані",
"statusConvertingModel": "Конвертація моделі",
"linear": "Лінійна обробка"
},
"gallery": {
"generations": "Генерації",
@ -284,15 +315,15 @@
"description": "Опис",
"descriptionValidationMsg": "Введіть опис моделі",
"config": "Файл конфігурації",
"configValidationMsg": "Шлях до файлу конфігурації",
"configValidationMsg": "Шлях до файлу конфігурації.",
"modelLocation": "Розташування моделі",
"modelLocationValidationMsg": "Шлях до файлу з моделлю",
"modelLocationValidationMsg": "Шлях до файлу з моделлю.",
"vaeLocation": "Розтышування VAE",
"vaeLocationValidationMsg": "Шлях до VAE",
"vaeLocationValidationMsg": "Шлях до VAE.",
"width": "Ширина",
"widthValidationMsg": "Початкова ширина зображень",
"widthValidationMsg": "Початкова ширина зображень.",
"height": "Висота",
"heightValidationMsg": "Початкова висота зображень",
"heightValidationMsg": "Початкова висота зображень.",
"addModel": "Додати модель",
"updateModel": "Оновити модель",
"availableModels": "Доступні моделі",
@ -319,7 +350,66 @@
"deleteModel": "Видалити модель",
"deleteConfig": "Видалити конфігурацію",
"deleteMsg1": "Ви точно хочете видалити модель із InvokeAI?",
"deleteMsg2": "Це не призведе до видалення файлу моделі з диску. Позніше ви можете додати його знову."
"deleteMsg2": "Це не призведе до видалення файлу моделі з диску. Позніше ви можете додати його знову.",
"allModels": "Усі моделі",
"diffusersModels": "Diffusers",
"scanForModels": "Сканувати моделі",
"convert": "Конвертувати",
"convertToDiffusers": "Конвертувати в Diffusers",
"formMessageDiffusersVAELocationDesc": "Якщо не надано, InvokeAI буде шукати файл VAE в розташуванні моделі, вказаній вище.",
"convertToDiffusersHelpText3": "Файл моделі на диску НЕ буде видалено або змінено. Ви можете знову додати його в Model Manager, якщо потрібно.",
"customConfig": "Користувальницький конфіг",
"invokeRoot": "Каталог InvokeAI",
"custom": "Користувальницький",
"modelTwo": "Модель 2",
"modelThree": "Модель 3",
"mergedModelName": "Назва об'єднаної моделі",
"alpha": "Альфа",
"interpolationType": "Тип інтерполяції",
"mergedModelSaveLocation": "Шлях збереження",
"mergedModelCustomSaveLocation": "Користувальницький шлях",
"invokeAIFolder": "Каталог InvokeAI",
"ignoreMismatch": "Ігнорувати невідповідності між вибраними моделями",
"modelMergeHeaderHelp2": "Тільки Diffusers-моделі доступні для об'єднання. Якщо ви хочете об'єднати checkpoint-моделі, спочатку перетворіть їх на Diffusers.",
"checkpointModels": "Checkpoints",
"repo_id": "ID репозиторію",
"v2_base": "v2 (512px)",
"repoIDValidationMsg": "Онлайн-репозиторій моделі",
"formMessageDiffusersModelLocationDesc": "Вкажіть хоча б одне.",
"formMessageDiffusersModelLocation": "Шлях до Diffusers-моделі",
"v2_768": "v2 (768px)",
"formMessageDiffusersVAELocation": "Шлях до VAE",
"convertToDiffusersHelpText5": "Переконайтеся, що у вас достатньо місця на диску. Моделі зазвичай займають від 4 до 7 Гб.",
"convertToDiffusersSaveLocation": "Шлях збереження",
"v1": "v1",
"convertToDiffusersHelpText6": "Ви хочете перетворити цю модель?",
"inpainting": "v1 Inpainting",
"modelConverted": "Модель перетворено",
"sameFolder": "У ту ж папку",
"statusConverting": "Перетворення",
"merge": "Об'єднати",
"mergeModels": "Об'єднати моделі",
"modelOne": "Модель 1",
"sigmoid": "Сігмоїд",
"weightedSum": "Зважена сума",
"none": "пусто",
"addDifference": "Додати різницю",
"pickModelType": "Вибрати тип моделі",
"convertToDiffusersHelpText4": "Це одноразова дія. Вона може зайняти від 30 до 60 секунд в залежності від характеристик вашого комп'ютера.",
"pathToCustomConfig": "Шлях до конфігу користувача",
"safetensorModels": "SafeTensors",
"addCheckpointModel": "Додати модель Checkpoint/Safetensor",
"addDiffuserModel": "Додати Diffusers",
"vaeRepoID": "ID репозиторію VAE",
"vaeRepoIDValidationMsg": "Онлайн-репозиторій VAE",
"modelMergeInterpAddDifferenceHelp": "У цьому режимі Модель 3 спочатку віднімається з Моделі 2. Результуюча версія змішується з Моделью 1 із встановленим вище коефіцієнтом Альфа.",
"customSaveLocation": "Користувальницький шлях збереження",
"modelMergeAlphaHelp": "Альфа впливає силу змішування моделей. Нижчі значення альфа призводять до меншого впливу другої моделі.",
"convertToDiffusersHelpText1": "Ця модель буде конвертована в формат 🧨 Diffusers.",
"convertToDiffusersHelpText2": "Цей процес замінить ваш запис в Model Manager на версію тієї ж моделі в Diffusers.",
"modelsMerged": "Моделі об'єднані",
"modelMergeHeaderHelp1": "Ви можете об'єднати до трьох різних моделей, щоб створити змішану, що відповідає вашим потребам.",
"inverseSigmoid": "Зворотній Сігмоїд"
},
"parameters": {
"images": "Зображення",
@ -346,7 +436,7 @@
"scale": "Масштаб",
"otherOptions": "інші параметри",
"seamlessTiling": "Безшовний узор",
"hiresOptim": "Висока роздільна здатність",
"hiresOptim": "Оптимізація High Res",
"imageFit": "Вмістити зображення",
"codeformerFidelity": "Точність",
"seamSize": "Размір шву",
@ -379,7 +469,24 @@
"info": "Метадані",
"deleteImage": "Видалити зображення",
"initialImage": "Початкове зображення",
"showOptionsPanel": "Показати панель налаштувань"
"showOptionsPanel": "Показати панель налаштувань",
"general": "Основне",
"cancel": {
"immediate": "Скасувати негайно",
"schedule": "Скасувати після поточної ітерації",
"isScheduled": "Відміна",
"setType": "Встановити тип скасування"
},
"vSymmetryStep": "Крок верт. симетрії",
"hiresStrength": "Сила High Res",
"hidePreview": "Сховати попередній перегляд",
"showPreview": "Показати попередній перегляд",
"imageToImage": "Зображення до зображення",
"denoisingStrength": "Сила шумоподавлення",
"copyImage": "Копіювати зображення",
"symmetry": "Симетрія",
"hSymmetryStep": "Крок гор. симетрії",
"negativePrompts": "Виключний запит"
},
"settings": {
"models": "Моделі",
@ -392,7 +499,8 @@
"resetWebUI": "Повернути початкові",
"resetWebUIDesc1": "Скидання настройок веб-інтерфейсу видаляє лише локальний кеш браузера з вашими зображеннями та налаштуваннями. Це не призводить до видалення зображень з диску.",
"resetWebUIDesc2": "Якщо зображення не відображаються в галереї або не працює ще щось, спробуйте скинути налаштування, перш ніж повідомляти про проблему на GitHub.",
"resetComplete": "Інтерфейс скинуто. Оновіть цю сторінку."
"resetComplete": "Інтерфейс скинуто. Оновіть цю сторінку.",
"useSlidersForAll": "Використовувати повзунки для всіх параметрів"
},
"toast": {
"tempFoldersEmptied": "Тимчасова папка очищена",
@ -410,21 +518,25 @@
"sentToUnifiedCanvas": "Надіслати на полотно",
"parametersSet": "Параметри задані",
"parametersNotSet": "Параметри не задані",
"parametersNotSetDesc": "Не знайдені метадані цього зображення",
"parametersNotSetDesc": "Не знайдені метадані цього зображення.",
"parametersFailed": "Проблема із завантаженням параметрів",
"parametersFailedDesc": "Неможливо завантажити початкове зображення",
"parametersFailedDesc": "Неможливо завантажити початкове зображення.",
"seedSet": "Сід заданий",
"seedNotSet": "Сід не заданий",
"seedNotSetDesc": "Не вдалося знайти сід для зображення",
"seedNotSetDesc": "Не вдалося знайти сід для зображення.",
"promptSet": "Запит заданий",
"promptNotSet": "Запит не заданий",
"promptNotSetDesc": "Не вдалося знайти запит для зображення",
"promptNotSetDesc": "Не вдалося знайти запит для зображення.",
"upscalingFailed": "Збільшення не вдалося",
"faceRestoreFailed": "Відновлення облич не вдалося",
"metadataLoadFailed": "Не вдалося завантажити метадані",
"initialImageSet": "Початкове зображення задане",
"initialImageNotSet": "Початкове зображення не задане",
"initialImageNotSetDesc": "Не вдалося завантажити початкове зображення"
"initialImageNotSetDesc": "Не вдалося завантажити початкове зображення",
"serverError": "Помилка сервера",
"disconnected": "Відключено від сервера",
"connected": "Підключено до сервера",
"canceled": "Обробку скасовано"
},
"tooltip": {
"feature": {
@ -473,10 +585,10 @@
"autoSaveToGallery": "Автозбереження до галереї",
"saveBoxRegionOnly": "Зберiгати тiльки видiлення",
"limitStrokesToBox": "Обмежити штрихи виділенням",
"showCanvasDebugInfo": "Показати налаштування полотна",
"showCanvasDebugInfo": "Показати дод. інформацію про полотно",
"clearCanvasHistory": "Очистити iсторiю полотна",
"clearHistory": "Очистити iсторiю",
"clearCanvasHistoryMessage": "Очищення історії полотна залишає поточне полотно незайманим, але видаляє історію скасування та повтору",
"clearCanvasHistoryMessage": "Очищення історії полотна залишає поточне полотно незайманим, але видаляє історію скасування та повтору.",
"clearCanvasHistoryConfirm": "Ви впевнені, що хочете очистити історію полотна?",
"emptyTempImageFolder": "Очистити тимчасову папку",
"emptyFolder": "Очистити папку",
@ -499,5 +611,28 @@
"betaDarkenOutside": "Затемнити зовні",
"betaLimitToBox": "Обмежити виділенням",
"betaPreserveMasked": "Зберiгати замасковану область"
},
"accessibility": {
"nextImage": "Наступне зображення",
"modelSelect": "Вибір моделі",
"invokeProgressBar": "Індикатор виконання",
"reset": "Скинути",
"uploadImage": "Завантажити зображення",
"useThisParameter": "Використовувати цей параметр",
"exitViewer": "Вийти з переглядача",
"zoomIn": "Збільшити",
"zoomOut": "Зменшити",
"rotateCounterClockwise": "Обертати проти годинникової стрілки",
"rotateClockwise": "Обертати за годинниковою стрілкою",
"toggleAutoscroll": "Увімкнути автопрокручування",
"toggleLogViewer": "Показати або приховати переглядач журналів",
"showGallery": "Показати галерею",
"previousImage": "Попереднє зображення",
"copyMetadataJson": "Скопіювати метадані JSON",
"flipVertically": "Перевернути по вертикалі",
"flipHorizontally": "Відобразити по горизонталі",
"showOptionsPanel": "Показати опції",
"modifyConfig": "Змінити конфігурацію",
"menu": "Меню"
}
}

View File

@ -0,0 +1 @@
{}

View File

@ -481,5 +481,22 @@
"betaDarkenOutside": "暗化外部区域",
"betaLimitToBox": "限制在框内",
"betaPreserveMasked": "保留遮罩层"
},
"accessibility": {
"modelSelect": "模型选择",
"invokeProgressBar": "Invoke 进度条",
"reset": "重置",
"nextImage": "下一张图片",
"useThisParameter": "使用此参数",
"uploadImage": "上传图片",
"previousImage": "上一张图片",
"copyMetadataJson": "复制JSON元数据",
"exitViewer": "退出视口ExitViewer",
"zoomIn": "放大",
"zoomOut": "缩小",
"rotateCounterClockwise": "逆时针旋转",
"rotateClockwise": "顺时针旋转",
"flipHorizontally": "水平翻转",
"flipVertically": "垂直翻转"
}
}

View File

@ -18,6 +18,7 @@ import { PropsWithChildren, useEffect } from 'react';
import { setDisabledPanels, setDisabledTabs } from 'features/ui/store/uiSlice';
import { InvokeTabName } from 'features/ui/store/tabMap';
import { shouldTransformUrlsChanged } from 'features/system/store/systemSlice';
import { setShouldFetchImages } from 'features/gallery/store/resultsSlice';
keepGUIAlive();
@ -26,6 +27,7 @@ interface Props extends PropsWithChildren {
disabledPanels: string[];
disabledTabs: InvokeTabName[];
shouldTransformUrls?: boolean;
shouldFetchImages: boolean;
};
}
@ -50,6 +52,10 @@ const App = (props: Props) => {
);
}, [dispatch, props.options.shouldTransformUrls]);
useEffect(() => {
dispatch(setShouldFetchImages(props.options.shouldFetchImages));
}, [dispatch, props.options.shouldFetchImages]);
useEffect(() => {
setColorMode(['light'].includes(currentTheme) ? 'light' : 'dark');
}, [setColorMode, currentTheme]);
@ -67,7 +73,12 @@ const App = (props: Props) => {
h={APP_HEIGHT}
>
{props.children || <SiteHeader />}
<Flex gap={4} w="full" h="full">
<Flex
gap={4}
w={{ base: '100vw', xl: 'full' }}
h="full"
flexDir={{ base: 'column', xl: 'row' }}
>
<InvokeTabs />
<ImageGalleryPanel />
</Flex>

View File

@ -31,13 +31,13 @@ export const DIFFUSERS_SAMPLERS: Array<string> = [
];
// Valid image widths
export const WIDTHS: Array<number> = Array.from(Array(65)).map(
(_x, i) => i * 64
export const WIDTHS: Array<number> = Array.from(Array(64)).map(
(_x, i) => (i + 1) * 64
);
// Valid image heights
export const HEIGHTS: Array<number> = Array.from(Array(65)).map(
(_x, i) => i * 64
export const HEIGHTS: Array<number> = Array.from(Array(64)).map(
(_x, i) => (i + 1) * 64
);
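This hunk fixes an off-by-one: the old expression produced 65 values starting at an invalid width of 0, while the new one produces 64 valid multiples of 64, from 64 up to 4096. A quick sketch of the two outputs:

// Old: Array.from(Array(65)).map((_x, i) => i * 64)
//   -> [0, 64, 128, ..., 4096]   (65 values; 0 is not a valid dimension)
// New: Array.from(Array(64)).map((_x, i) => (i + 1) * 64)
//   -> [64, 128, ..., 4096]      (64 values, all valid)
const widths: Array<number> = Array.from(Array(64)).map((_x, i) => (i + 1) * 64);
console.log(widths[0], widths[widths.length - 1]); // 64 4096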
// Valid upscaling levels
@ -60,3 +60,5 @@ export const IN_PROGRESS_IMAGE_TYPES: Array<{
{ key: 'Fast', value: 'latents' },
{ key: 'Accurate', value: 'full-res' },
];
export const NODE_MIN_WIDTH = 250;

View File

@ -0,0 +1,18 @@
import { useBreakpoint } from '@chakra-ui/react';
export default function useResolution():
| 'mobile'
| 'tablet'
| 'desktop'
| 'unknown' {
const breakpointValue = useBreakpoint();
const mobileResolutions = ['base', 'sm'];
const tabletResolutions = ['md', 'lg'];
const desktopResolutions = ['xl', '2xl'];
if (mobileResolutions.includes(breakpointValue)) return 'mobile';
if (tabletResolutions.includes(breakpointValue)) return 'tablet';
if (desktopResolutions.includes(breakpointValue)) return 'desktop';
return 'unknown';
}
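A minimal usage sketch for the new hook; the component and panel names below are hypothetical, not part of this diff, but ImageGalleryPanel later in this changeset uses the same mobile/tablet check:

import useResolution from 'common/hooks/useResolution';

// Hypothetical component: pick a layout based on the current breakpoint.
const ExampleLayout = () => {
  const resolution = useResolution();
  if (['mobile', 'tablet'].includes(resolution)) {
    return <StackedPanel />; // assumed small-screen layout
  }
  return <ResizableDrawerLayout />; // assumed desktop layout
};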

View File

@ -0,0 +1,119 @@
/**
* PARTIAL ZOD IMPLEMENTATION
*
* doesn't work well because, like most validators, zod is not built to skip invalid values.
* it mostly works, but it seems clearer and simpler to manually parse for now.
*
* in the future it would be really nice if we could use zod for some things:
* - zodios (axios + zod): https://github.com/ecyrbe/zodios
* - openapi to zodios: https://github.com/astahmer/openapi-zod-client
*/
// import { z } from 'zod';
// const zMetadataStringField = z.string();
// export type MetadataStringField = z.infer<typeof zMetadataStringField>;
// const zMetadataIntegerField = z.number().int();
// export type MetadataIntegerField = z.infer<typeof zMetadataIntegerField>;
// const zMetadataFloatField = z.number();
// export type MetadataFloatField = z.infer<typeof zMetadataFloatField>;
// const zMetadataBooleanField = z.boolean();
// export type MetadataBooleanField = z.infer<typeof zMetadataBooleanField>;
// const zMetadataImageField = z.object({
// image_type: z.union([
// z.literal('results'),
// z.literal('uploads'),
// z.literal('intermediates'),
// ]),
// image_name: z.string().min(1),
// });
// export type MetadataImageField = z.infer<typeof zMetadataImageField>;
// const zMetadataLatentsField = z.object({
// latents_name: z.string().min(1),
// });
// export type MetadataLatentsField = z.infer<typeof zMetadataLatentsField>;
// /**
// * zod Schema for any node field. Use a `transform()` to manually parse, skipping invalid values.
// */
// const zAnyMetadataField = z.any().transform((val, ctx) => {
// // Grab the field name from the path
// const fieldName = String(ctx.path[ctx.path.length - 1]);
// // `id` and `type` must be strings if they exist
// if (['id', 'type'].includes(fieldName)) {
// const reservedStringPropertyResult = zMetadataStringField.safeParse(val);
// if (reservedStringPropertyResult.success) {
// return reservedStringPropertyResult.data;
// }
// return;
// }
// // Parse the rest of the fields, only returning the data if the parsing is successful
// const stringFieldResult = zMetadataStringField.safeParse(val);
// if (stringFieldResult.success) {
// return stringFieldResult.data;
// }
// const integerFieldResult = zMetadataIntegerField.safeParse(val);
// if (integerFieldResult.success) {
// return integerFieldResult.data;
// }
// const floatFieldResult = zMetadataFloatField.safeParse(val);
// if (floatFieldResult.success) {
// return floatFieldResult.data;
// }
// const booleanFieldResult = zMetadataBooleanField.safeParse(val);
// if (booleanFieldResult.success) {
// return booleanFieldResult.data;
// }
// const imageFieldResult = zMetadataImageField.safeParse(val);
// if (imageFieldResult.success) {
// return imageFieldResult.data;
// }
// const latentsFieldResult = zMetadataImageField.safeParse(val);
// if (latentsFieldResult.success) {
// return latentsFieldResult.data;
// }
// });
// /**
// * The node metadata schema.
// */
// const zNodeMetadata = z.object({
// session_id: z.string().min(1).optional(),
// node: z.record(z.string().min(1), zAnyMetadataField).optional(),
// });
// export type NodeMetadata = z.infer<typeof zNodeMetadata>;
// const zMetadata = z.object({
// invokeai: zNodeMetadata.optional(),
// 'sd-metadata': z.record(z.string().min(1), z.any()).optional(),
// });
// export type Metadata = z.infer<typeof zMetadata>;
// export const parseMetadata = (
// metadata: Record<string, any>
// ): Metadata | undefined => {
// const result = zMetadata.safeParse(metadata);
// if (!result.success) {
// console.log(result.error.issues);
// return;
// }
// return result.data;
// };
export default {};

View File

@ -0,0 +1,169 @@
import { forEach, size } from 'lodash';
import { ImageField, LatentsField } from 'services/api';
const OBJECT_TYPESTRING = '[object Object]';
const STRING_TYPESTRING = '[object String]';
const NUMBER_TYPESTRING = '[object Number]';
const BOOLEAN_TYPESTRING = '[object Boolean]';
const ARRAY_TYPESTRING = '[object Array]';
const isObject = (obj: unknown): obj is Record<string | number, any> =>
Object.prototype.toString.call(obj) === OBJECT_TYPESTRING;
const isString = (obj: unknown): obj is string =>
Object.prototype.toString.call(obj) === STRING_TYPESTRING;
const isNumber = (obj: unknown): obj is number =>
Object.prototype.toString.call(obj) === NUMBER_TYPESTRING;
const isBoolean = (obj: unknown): obj is boolean =>
Object.prototype.toString.call(obj) === BOOLEAN_TYPESTRING;
const isArray = (obj: unknown): obj is Array<any> =>
Object.prototype.toString.call(obj) === ARRAY_TYPESTRING;
const parseImageField = (imageField: unknown): ImageField | undefined => {
// Must be an object
if (!isObject(imageField)) {
return;
}
// An ImageField must have both `image_name` and `image_type`
if (!('image_name' in imageField && 'image_type' in imageField)) {
return;
}
// An ImageField's `image_type` must be one of the allowed values
if (
!['results', 'uploads', 'intermediates'].includes(imageField.image_type)
) {
return;
}
// An ImageField's `image_name` must be a string
if (typeof imageField.image_name !== 'string') {
return;
}
// Build a valid ImageField
return {
image_type: imageField.image_type,
image_name: imageField.image_name,
};
};
const parseLatentsField = (latentsField: unknown): LatentsField | undefined => {
// Must be an object
if (!isObject(latentsField)) {
return;
}
// A LatentsField must have a `latents_name`
if (!('latents_name' in latentsField)) {
return;
}
// A LatentsField's `latents_name` must be a string
if (typeof latentsField.latents_name !== 'string') {
return;
}
// Build a valid LatentsField
return {
latents_name: latentsField.latents_name,
};
};
type NodeMetadata = {
[key: string]: string | number | boolean | ImageField | LatentsField;
};
type InvokeAIMetadata = {
session_id?: string;
node?: NodeMetadata;
};
export const parseNodeMetadata = (
nodeMetadata: Record<string | number, any>
): NodeMetadata | undefined => {
if (!isObject(nodeMetadata)) {
return;
}
const parsed: NodeMetadata = {};
forEach(nodeMetadata, (nodeItem, nodeKey) => {
// `id` and `type` must be strings if they are present
if (['id', 'type'].includes(nodeKey)) {
if (isString(nodeItem)) {
parsed[nodeKey] = nodeItem;
}
return;
}
// the only valid object types are ImageField and LatentsField
if (isObject(nodeItem)) {
if ('image_name' in nodeItem || 'image_type' in nodeItem) {
const imageField = parseImageField(nodeItem);
if (imageField) {
parsed[nodeKey] = imageField;
}
return;
}
if ('latents_name' in nodeItem) {
const latentsField = parseLatentsField(nodeItem);
if (latentsField) {
parsed[nodeKey] = latentsField;
}
return;
}
}
// otherwise we accept any string, number or boolean
if (isString(nodeItem) || isNumber(nodeItem) || isBoolean(nodeItem)) {
parsed[nodeKey] = nodeItem;
return;
}
});
if (size(parsed) === 0) {
return;
}
return parsed;
};
export const parseInvokeAIMetadata = (
metadata: Record<string | number, any> | undefined
): InvokeAIMetadata | undefined => {
if (metadata === undefined) {
return;
}
if (!isObject(metadata)) {
return;
}
const parsed: InvokeAIMetadata = {};
forEach(metadata, (item, key) => {
if (key === 'session_id' && isString(item)) {
parsed['session_id'] = item;
}
if (key === 'node' && isObject(item)) {
const nodeMetadata = parseNodeMetadata(item);
if (nodeMetadata) {
parsed['node'] = nodeMetadata;
}
}
});
if (size(parsed) === 0) {
return;
}
return parsed;
};
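For illustration, how the parser behaves on mixed input (the sample values are invented for this sketch): invalid entries are silently dropped rather than failing the whole parse, which is exactly the skip-invalid behavior the zod experiment above struggled to express.

const raw = {
  session_id: 'abc-123', // kept: string
  node: {
    id: 'txt2img', // kept: reserved fields must be strings
    steps: 30, // kept: number
    image: { image_type: 'results', image_name: 'foo.png' }, // kept: valid ImageField
    junk: { unexpected: true }, // dropped: neither ImageField nor LatentsField
  },
};
const parsed = parseInvokeAIMetadata(raw);
// -> { session_id: 'abc-123', node: { id: 'txt2img', steps: 30, image: { ... } } }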

View File

@ -30,6 +30,7 @@ interface Props extends PropsWithChildren {
disabledTabs?: InvokeTabName[];
token?: string;
shouldTransformUrls?: boolean;
shouldFetchImages?: boolean;
}
export default function Component({
@ -39,6 +40,7 @@ export default function Component({
token,
children,
shouldTransformUrls,
shouldFetchImages = false,
}: Props) {
useEffect(() => {
// configure API client token
@ -70,7 +72,12 @@ export default function Component({
<React.Suspense fallback={<Loading showText />}>
<ThemeLocaleProvider>
<App
options={{ disabledPanels, disabledTabs, shouldTransformUrls }}
options={{
disabledPanels,
disabledTabs,
shouldTransformUrls,
shouldFetchImages,
}}
>
{children}
</App>
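A sketch of how a host app might consume the new prop, assuming Component is this package's exported entry point as declared above; the other props are those already in its Props interface:

// Hypothetical host app: enable re-fetching of result images via the API.
<Component shouldFetchImages shouldTransformUrls={false} />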

View File

@ -101,8 +101,8 @@ const currentImageButtonsSelector = createSelector(
shouldShowImageDetails,
activeTabName,
isLightboxOpen,
selectedImage,
shouldHidePreview,
selectedImage,
};
},
{
@ -132,8 +132,8 @@ const CurrentImageButtons = (props: CurrentImageButtonsProps) => {
// currentImage,
isLightboxOpen,
activeTabName,
selectedImage,
shouldHidePreview,
selectedImage,
} = useAppSelector(currentImageButtonsSelector);
const { getUrl, shouldTransformUrls } = useGetUrl();
@ -151,11 +151,17 @@ const CurrentImageButtons = (props: CurrentImageButtonsProps) => {
};
const handleCopyImage = async () => {
if (!selectedImage) return;
if (!selectedImage?.url) {
return;
}
const blob = await fetch(getUrl(selectedImage.url)).then((res) =>
res.blob()
);
const url = getUrl(selectedImage.url);
if (!url) {
return;
}
const blob = await fetch(url).then((res) => res.blob());
const data = [new ClipboardItem({ [blob.type]: blob })];
await navigator.clipboard.write(data);
@ -175,6 +181,10 @@ const CurrentImageButtons = (props: CurrentImageButtonsProps) => {
: window.location.toString() + selectedImage.url
: '';
if (!url) {
return;
}
navigator.clipboard.writeText(url).then(() => {
toast({
title: t('toast.imageLinkCopied'),
@ -425,9 +435,10 @@ const CurrentImageButtons = (props: CurrentImageButtonsProps) => {
return (
<Flex
sx={{
flexWrap: 'wrap',
justifyContent: 'center',
alignItems: 'center',
columnGap: '0.5em',
gap: 2,
}}
{...props}
>
@ -476,7 +487,7 @@ const CurrentImageButtons = (props: CurrentImageButtonsProps) => {
{t('parameters.copyImageToLink')}
</IAIButton>
<Link download={true} href={getUrl(selectedImage!.url)}>
<Link download={true} href={getUrl(selectedImage?.url)}>
<IAIButton leftIcon={<FaDownload />} size="sm" w="100%">
{t('parameters.downloadImage')}
</IAIButton>

View File

@ -1,6 +1,7 @@
import { Flex, Icon } from '@chakra-ui/react';
import { createSelector } from '@reduxjs/toolkit';
import { useAppSelector } from 'app/storeHooks';
import { systemSelector } from 'features/system/store/systemSelectors';
import { isEqual } from 'lodash';
import { MdPhoto } from 'react-icons/md';
@ -12,12 +13,12 @@ import CurrentImageButtons from './CurrentImageButtons';
import CurrentImagePreview from './CurrentImagePreview';
export const currentImageDisplaySelector = createSelector(
[gallerySelector, selectedImageSelector],
(gallery, selectedImage) => {
const { currentImage, intermediateImage } = gallery;
[systemSelector, selectedImageSelector],
(system, selectedImage) => {
const { progressImage } = system;
return {
hasAnImageToDisplay: selectedImage || intermediateImage,
hasAnImageToDisplay: selectedImage || progressImage,
};
},
{

View File

@ -13,7 +13,7 @@ const CurrentImageHidden = () => {
color: 'base.400',
}}
>
<FaEyeSlash size={'30vh'} />
<FaEyeSlash fontSize="25vh" />
</Flex>
);
};

View File

@ -41,8 +41,8 @@ export const imagesSelector = createSelector(
return {
shouldShowImageDetails,
imageToDisplay,
shouldHidePreview,
imageToDisplay,
};
},
{

View File

@ -254,7 +254,7 @@ const ImageGalleryContent = () => {
const isSelected = currentImageUuid === name;
return (
<HoverableImage
key={name}
key={`${name}-${image.thumbnail}`}
image={image}
isSelected={isSelected}
/>

View File

@ -26,6 +26,8 @@ import {
import { isStagingSelector } from 'features/canvas/store/canvasSelectors';
import { requestCanvasRescale } from 'features/canvas/store/thunks/requestCanvasScale';
import { lightboxSelector } from 'features/lightbox/store/lightboxSelectors';
import useResolution from 'common/hooks/useResolution';
import { Flex } from '@chakra-ui/react';
const GALLERY_TAB_WIDTHS: Record<
InvokeTabName,
@ -97,6 +99,8 @@ export default function ImageGalleryPanel() {
shouldPinGallery && dispatch(requestCanvasRescale());
};
const resolution = useResolution();
useHotkeys(
'g',
() => {
@ -179,25 +183,53 @@ export default function ImageGalleryPanel() {
[galleryImageMinimumWidth]
);
return (
<ResizableDrawer
direction="right"
isResizable={isResizable || !shouldPinGallery}
isOpen={shouldShowGallery}
onClose={handleCloseGallery}
isPinned={shouldPinGallery && !isLightboxOpen}
minWidth={
shouldPinGallery
? GALLERY_TAB_WIDTHS[activeTabName].galleryMinWidth
: 200
}
maxWidth={
shouldPinGallery
? GALLERY_TAB_WIDTHS[activeTabName].galleryMaxWidth
: undefined
}
>
<ImageGalleryContent />
</ResizableDrawer>
);
const calcGalleryMinHeight = () => {
if (resolution === 'desktop') return;
return 300;
};
const imageGalleryContent = () => {
return (
<Flex
w="100vw"
h={{ base: 300, xl: '100vh' }}
paddingRight={{ base: 8, xl: 0 }}
paddingBottom={{ base: 4, xl: 0 }}
>
<ImageGalleryContent />
</Flex>
);
};
const resizableImageGalleryContent = () => {
return (
<ResizableDrawer
direction="right"
isResizable={isResizable || !shouldPinGallery}
isOpen={shouldShowGallery}
onClose={handleCloseGallery}
isPinned={shouldPinGallery && !isLightboxOpen}
minWidth={
shouldPinGallery
? GALLERY_TAB_WIDTHS[activeTabName].galleryMinWidth
: 200
}
maxWidth={
shouldPinGallery
? GALLERY_TAB_WIDTHS[activeTabName].galleryMaxWidth
: undefined
}
minHeight={calcGalleryMinHeight()}
>
<ImageGalleryContent />
</ResizableDrawer>
);
};
const renderImageGallery = () => {
if (['mobile', 'tablet'].includes(resolution)) return imageGalleryContent();
return resizableImageGalleryContent();
};
return renderImageGallery();
}

View File

@ -192,21 +192,21 @@ const ImageMetadataViewer = memo(({ image }: ImageMetadataViewerProps) => {
<MetadataItem
label="Seed"
value={node.seed}
onClick={() => dispatch(setSeed(node.seed))}
onClick={() => dispatch(setSeed(Number(node.seed)))}
/>
)}
{node.threshold !== undefined && (
<MetadataItem
label="Noise Threshold"
value={node.threshold}
onClick={() => dispatch(setThreshold(node.threshold))}
onClick={() => dispatch(setThreshold(Number(node.threshold)))}
/>
)}
{node.perlin !== undefined && (
<MetadataItem
label="Perlin Noise"
value={node.perlin}
onClick={() => dispatch(setPerlin(node.perlin))}
onClick={() => dispatch(setPerlin(Number(node.perlin)))}
/>
)}
{node.scheduler && (
@ -220,14 +220,14 @@ const ImageMetadataViewer = memo(({ image }: ImageMetadataViewerProps) => {
<MetadataItem
label="Steps"
value={node.steps}
onClick={() => dispatch(setSteps(node.steps))}
onClick={() => dispatch(setSteps(Number(node.steps)))}
/>
)}
{node.cfg_scale !== undefined && (
<MetadataItem
label="CFG scale"
value={node.cfg_scale}
onClick={() => dispatch(setCfgScale(node.cfg_scale))}
onClick={() => dispatch(setCfgScale(Number(node.cfg_scale)))}
/>
)}
{node.variations && node.variations.length > 0 && (
@ -257,14 +257,14 @@ const ImageMetadataViewer = memo(({ image }: ImageMetadataViewerProps) => {
<MetadataItem
label="Width"
value={node.width}
onClick={() => dispatch(setWidth(node.width))}
onClick={() => dispatch(setWidth(Number(node.width)))}
/>
)}
{node.height && (
<MetadataItem
label="Height"
value={node.height}
onClick={() => dispatch(setHeight(node.height))}
onClick={() => dispatch(setHeight(Number(node.height)))}
/>
)}
{/* {init_image_path && (
@ -279,7 +279,9 @@ const ImageMetadataViewer = memo(({ image }: ImageMetadataViewerProps) => {
<MetadataItem
label="Image to image strength"
value={node.strength}
onClick={() => dispatch(setImg2imgStrength(node.strength))}
onClick={() =>
dispatch(setImg2imgStrength(Number(node.strength)))
}
/>
)}
{node.fit && (

View File

@ -1,4 +1,8 @@
import { createEntityAdapter, createSlice } from '@reduxjs/toolkit';
import {
PayloadAction,
createEntityAdapter,
createSlice,
} from '@reduxjs/toolkit';
import { Image } from 'app/invokeai';
import { invocationComplete } from 'services/events/actions';
@ -13,6 +17,7 @@ import {
extractTimestampFromImageName,
} from 'services/util/deserializeImageField';
import { deserializeImageResponse } from 'services/util/deserializeImageResponse';
import { imageReceived, thumbnailReceived } from 'services/thunks/image';
// use `createEntityAdapter` to create a slice for results images
// https://redux-toolkit.js.org/api/createEntityAdapter#overview
@ -34,12 +39,9 @@ type AdditionalResultsState = {
pages: number; // the total number of pages available
isLoading: boolean; // whether we are loading more images or not, mostly a placeholder
nextPage: number; // the next page to request
shouldFetchImages: boolean; // whether we need to re-fetch images or not
};
// export type ResultsState = ReturnType<
// typeof resultsAdapter.getInitialState<AdditionalResultsState>
// >;
export const initialResultsState =
resultsAdapter.getInitialState<AdditionalResultsState>({
// provide the additional initial state
@ -47,6 +49,7 @@ export const initialResultsState =
pages: 0,
isLoading: false,
nextPage: 0,
shouldFetchImages: false,
});
export type ResultsState = typeof initialResultsState;
@ -61,6 +64,10 @@ const resultsSlice = createSlice({
// here we just use the function itself as the reducer. we'll call this on `invocation_complete`
// to add a single result
resultAdded: resultsAdapter.upsertOne,
setShouldFetchImages: (state, action: PayloadAction<boolean>) => {
state.shouldFetchImages = action.payload;
},
},
extraReducers: (builder) => {
// here we can respond to a fulfilled call of the `getNextResultsPage` thunk
@ -97,12 +104,15 @@ const resultsSlice = createSlice({
*/
builder.addCase(invocationComplete, (state, action) => {
const { data } = action.payload;
const { result, invocation, graph_execution_state_id } = data;
const { result, node, graph_execution_state_id } = data;
if (isImageOutput(result)) {
const name = result.image.image_name;
const type = result.image.image_type;
const { url, thumbnail } = buildImageUrls(type, name);
// if we need to refetch, set URLs to placeholder for now
const { url, thumbnail } = state.shouldFetchImages
? { url: '', thumbnail: '' }
: buildImageUrls(type, name);
const timestamp = extractTimestampFromImageName(name);
@ -115,10 +125,9 @@ const resultsSlice = createSlice({
created: timestamp,
width: result.width, // TODO: add these dimensions
height: result.height,
mode: result.mode,
invokeai: {
session_id: graph_execution_state_id,
invocation,
...(node ? { node } : {}),
},
},
};
@ -126,6 +135,30 @@ const resultsSlice = createSlice({
resultsAdapter.addOne(state, image);
}
});
builder.addCase(imageReceived.fulfilled, (state, action) => {
const { imagePath } = action.payload;
const { imageName } = action.meta.arg;
resultsAdapter.updateOne(state, {
id: imageName,
changes: {
url: imagePath,
},
});
});
builder.addCase(thumbnailReceived.fulfilled, (state, action) => {
const { thumbnailPath } = action.payload;
const { imageName } = action.meta.arg;
resultsAdapter.updateOne(state, {
id: imageName,
changes: {
thumbnail: thumbnailPath,
},
});
});
},
});
@ -139,6 +172,6 @@ export const {
selectTotal: selectResultsTotal,
} = resultsAdapter.getSelectors<RootState>((state) => state.results);
export const { resultAdded } = resultsSlice.actions;
export const { resultAdded, setShouldFetchImages } = resultsSlice.actions;
export default resultsSlice.reducer;
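The imageReceived and thumbnailReceived thunks are imported from services/thunks/image but not shown in this diff. A sketch of the shape the reducers above depend on (a payload carrying imagePath/thumbnailPath and meta.arg carrying imageName); the URL scheme here is assumed for illustration, not taken from the actual module:

import { createAsyncThunk } from '@reduxjs/toolkit';

// Sketch only: resolve a fetchable path for a named result image.
export const imageReceived = createAsyncThunk(
  'api/imageReceived',
  async (arg: { imageName: string; imageType: string }) => {
    const imagePath = `/api/v1/images/${arg.imageType}/${arg.imageName}`; // assumed route
    return { imagePath }; // read by the reducer as action.payload.imagePath
  }
);
// action.meta.arg.imageName is supplied automatically from the thunk argument,
// which is how updateOne above knows which entity to patch.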

View File

@ -66,16 +66,8 @@ const uploadsSlice = createSlice({
*/
builder.addCase(imageUploaded.fulfilled, (state, action) => {
const { location, response } = action.payload;
const { image_name, image_url, image_type, metadata, thumbnail_url } =
response;
const uploadedImage: Image = {
name: image_name,
url: image_url,
thumbnail: thumbnail_url,
type: 'uploads',
metadata,
};
const uploadedImage = deserializeImageResponse(response);
uploadsAdapter.addOne(state, uploadedImage);
});

View File

@ -1,7 +1,7 @@
import { v4 as uuidv4 } from 'uuid';
import 'reactflow/dist/style.css';
import { useCallback } from 'react';
import { memo, useCallback } from 'react';
import {
Tooltip,
Menu,
@ -10,7 +10,7 @@ import {
MenuItem,
IconButton,
} from '@chakra-ui/react';
import { FaPlus } from 'react-icons/fa';
import { FaEllipsisV, FaPlus } from 'react-icons/fa';
import { useAppDispatch, useAppSelector } from 'app/storeHooks';
import { nodeAdded } from '../store/nodesSlice';
import { cloneDeep, map } from 'lodash';
@ -18,8 +18,10 @@ import { RootState } from 'app/store';
import { useBuildInvocation } from '../hooks/useBuildInvocation';
import { addToast } from 'features/system/store/systemSlice';
import { makeToast } from 'features/system/hooks/useToastWatcher';
import { IAIIconButton } from 'exports';
import { AnyInvocationType } from 'services/events/types';
export const AddNodeMenu = () => {
const AddNodeMenu = () => {
const dispatch = useAppDispatch();
const invocationTemplates = useAppSelector(
@ -29,7 +31,7 @@ export const AddNodeMenu = () => {
const buildInvocation = useBuildInvocation();
const addNode = useCallback(
(nodeType: string) => {
(nodeType: AnyInvocationType) => {
const invocation = buildInvocation(nodeType);
if (!invocation) {
@ -47,9 +49,13 @@ export const AddNodeMenu = () => {
);
return (
<Menu>
<MenuButton as={IconButton} aria-label="Add Node" icon={<FaPlus />} />
<MenuList>
<Menu isLazy>
<MenuButton
as={IAIIconButton}
aria-label="Add Node"
icon={<FaEllipsisV />}
/>
<MenuList overflowY="scroll" height={400}>
{map(invocationTemplates, ({ title, description, type }, key) => {
return (
<Tooltip key={key} label={description} placement="end" hasArrow>
@ -61,3 +67,5 @@ export const AddNodeMenu = () => {
</Menu>
);
};
export default memo(AddNodeMenu);

View File

@ -1,5 +1,5 @@
import { Tooltip } from '@chakra-ui/react';
import { CSSProperties, useMemo } from 'react';
import { CSSProperties, memo, useMemo } from 'react';
import {
Handle,
Position,
@ -19,11 +19,11 @@ const handleBaseStyles: CSSProperties = {
};
const inputHandleStyles: CSSProperties = {
left: '-1.7rem',
left: '-1rem',
};
const outputHandleStyles: CSSProperties = {
right: '-1.7rem',
right: '-0.5rem',
};
const requiredConnectionStyles: CSSProperties = {
@ -38,13 +38,12 @@ type FieldHandleProps = {
styles?: CSSProperties;
};
export const FieldHandle = (props: FieldHandleProps) => {
const FieldHandle = (props: FieldHandleProps) => {
const { nodeId, field, isValidConnection, handleType, styles } = props;
const { name, title, type, description } = field;
return (
<Tooltip
key={name}
label={type}
placement={handleType === 'target' ? 'start' : 'end'}
hasArrow
@@ -67,3 +66,5 @@ export const FieldHandle = (props: FieldHandleProps) => {
</Tooltip>
);
};
export default memo(FieldHandle);

View File

@@ -1,18 +1,25 @@
import 'reactflow/dist/style.css';
import { Tooltip, Badge, HStack } from '@chakra-ui/react';
import { Tooltip, Badge, Flex } from '@chakra-ui/react';
import { map } from 'lodash';
import { FIELDS } from '../types/constants';
import { memo } from 'react';
export const FieldTypeLegend = () => {
const FieldTypeLegend = () => {
return (
<HStack>
<Flex gap={2} flexDirection={{ base: 'column', xl: 'row' }}>
{map(FIELDS, ({ title, description, color }, key) => (
<Tooltip key={key} label={description}>
<Badge colorScheme={color} sx={{ userSelect: 'none' }}>
<Badge
colorScheme={color}
sx={{ userSelect: 'none' }}
textAlign="center"
>
{title}
</Badge>
</Tooltip>
))}
</HStack>
</Flex>
);
};
export default memo(FieldTypeLegend);

View File

@@ -1,15 +1,11 @@
import {
Background,
Controls,
MiniMap,
OnConnect,
OnEdgesChange,
OnNodesChange,
ReactFlow,
ConnectionLineType,
OnConnectStart,
OnConnectEnd,
Panel,
} from 'reactflow';
import { useAppDispatch, useAppSelector } from 'app/storeHooks';
import { RootState } from 'app/store';
@@ -22,10 +18,12 @@ import {
} from '../store/nodesSlice';
import { useCallback } from 'react';
import { InvocationComponent } from './InvocationComponent';
import { AddNodeMenu } from './AddNodeMenu';
import { FieldTypeLegend } from './FieldTypeLegend';
import { Button } from '@chakra-ui/react';
import { nodesGraphBuilt } from 'services/thunks/session';
import TopLeftPanel from './panels/TopLeftPanel';
import TopRightPanel from './panels/TopRightPanel';
import TopCenterPanel from './panels/TopCenterPanel';
import BottomLeftPanel from './panels/BottomLeftPanel.tsx';
import MinimapPanel from './panels/MinimapPanel';
import NodeSearch from './search/NodeSearch';
const nodeTypes = { invocation: InvocationComponent };
@@ -62,15 +60,8 @@ export const Flow = () => {
[dispatch]
);
const onConnectEnd: OnConnectEnd = useCallback(
(event) => {
dispatch(connectionEnded());
},
[dispatch]
);
const handleInvoke = useCallback(() => {
dispatch(nodesGraphBuilt());
const onConnectEnd: OnConnectEnd = useCallback(() => {
dispatch(connectionEnded());
}, [dispatch]);
return (
@@ -87,18 +78,13 @@
style: { strokeWidth: 2 },
}}
>
<Panel position="top-left">
<AddNodeMenu />
</Panel>
<Panel position="top-center">
<Button onClick={handleInvoke}>Will it blend?</Button>
</Panel>
<Panel position="top-right">
<FieldTypeLegend />
</Panel>
<NodeSearch />
{/* <TopLeftPanel /> */}
<TopCenterPanel />
<TopRightPanel />
<BottomLeftPanel />
<Background />
<Controls />
<MiniMap nodeStrokeWidth={3} zoomable pannable />
<MinimapPanel />
</ReactFlow>
);
};

View File

@@ -0,0 +1,39 @@
import { Flex, Heading, Tooltip, Icon } from '@chakra-ui/react';
import { InvocationTemplate } from 'features/nodes/types/types';
import { memo, MutableRefObject } from 'react';
import { FaInfoCircle } from 'react-icons/fa';
interface IAINodeHeaderProps {
nodeId: string;
template: InvocationTemplate;
}
const IAINodeHeader = (props: IAINodeHeaderProps) => {
const { nodeId, template } = props;
return (
<Flex
borderTopRadius="md"
justifyContent="space-between"
background="base.700"
px={2}
py={1}
alignItems="center"
>
<Tooltip label={nodeId}>
<Heading size="xs" fontWeight={600} color="base.100">
{template.title}
</Heading>
</Tooltip>
<Tooltip
label={template.description}
placement="top"
hasArrow
shouldWrapChildren
>
<Icon color="base.300" as={FaInfoCircle} h="min-content" />
</Tooltip>
</Flex>
);
};
export default memo(IAINodeHeader);

View File

@@ -0,0 +1,148 @@
import {
InputFieldTemplate,
InputFieldValue,
InvocationTemplate,
} from 'features/nodes/types/types';
import { memo, ReactNode, useCallback } from 'react';
import { map } from 'lodash';
import { useAppSelector } from 'app/storeHooks';
import { RootState } from 'app/store';
import {
Box,
Flex,
FormControl,
FormLabel,
HStack,
Tooltip,
Divider,
} from '@chakra-ui/react';
import FieldHandle from '../FieldHandle';
import { useIsValidConnection } from 'features/nodes/hooks/useIsValidConnection';
import InputFieldComponent from '../InputFieldComponent';
import { HANDLE_TOOLTIP_OPEN_DELAY } from 'features/nodes/types/constants';
interface IAINodeInputProps {
nodeId: string;
input: InputFieldValue;
template?: InputFieldTemplate | undefined;
connected: boolean;
}
function IAINodeInput(props: IAINodeInputProps) {
const { nodeId, input, template, connected } = props;
const isValidConnection = useIsValidConnection();
return (
<Box
position="relative"
borderColor={
!template
? 'error.400'
: !connected &&
['always', 'connectionOnly'].includes(
String(template?.inputRequirement)
) &&
input.value === undefined
? 'warning.400'
: undefined
}
>
<FormControl isDisabled={!template ? true : connected} pl={2}>
{!template ? (
<HStack justifyContent="space-between" alignItems="center">
<FormLabel>Unknown input: {input.name}</FormLabel>
</HStack>
) : (
<>
<HStack justifyContent="space-between" alignItems="center">
<HStack>
<Tooltip
label={template?.description}
placement="top"
hasArrow
shouldWrapChildren
openDelay={HANDLE_TOOLTIP_OPEN_DELAY}
>
<FormLabel>{template?.title}</FormLabel>
</Tooltip>
</HStack>
<InputFieldComponent
nodeId={nodeId}
field={input}
template={template}
/>
</HStack>
{!['never', 'directOnly'].includes(
template?.inputRequirement ?? ''
) && (
<FieldHandle
nodeId={nodeId}
field={template}
isValidConnection={isValidConnection}
handleType="target"
/>
)}
</>
)}
</FormControl>
</Box>
);
}
interface IAINodeInputsProps {
nodeId: string;
template: InvocationTemplate;
inputs: Record<string, InputFieldValue>;
}
const IAINodeInputs = (props: IAINodeInputsProps) => {
const { nodeId, template, inputs } = props;
const edges = useAppSelector((state: RootState) => state.nodes.edges);
const renderIAINodeInputs = useCallback(() => {
const IAINodeInputsToRender: ReactNode[] = [];
const inputSockets = map(inputs);
inputSockets.forEach((inputSocket, index) => {
const inputTemplate = template.inputs[inputSocket.name];
const isConnected = Boolean(
edges.filter((connectedInput) => {
return (
connectedInput.target === nodeId &&
connectedInput.targetHandle === inputSocket.name
);
}).length
);
if (index < inputSockets.length) {
IAINodeInputsToRender.push(
<Divider key={`${inputSocket.id}.divider`} />
);
}
IAINodeInputsToRender.push(
<IAINodeInput
key={inputSocket.id}
nodeId={nodeId}
input={inputSocket}
template={inputTemplate}
connected={isConnected}
/>
);
});
return (
<Flex flexDir="column" gap={2} p={2}>
{IAINodeInputsToRender}
</Flex>
);
}, [edges, inputs, nodeId, template.inputs]);
return renderIAINodeInputs();
};
export default memo(IAINodeInputs);

View File

@@ -0,0 +1,97 @@
import {
InvocationTemplate,
OutputFieldTemplate,
OutputFieldValue,
} from 'features/nodes/types/types';
import { memo, ReactNode, useCallback } from 'react';
import { map } from 'lodash';
import { useAppSelector } from 'app/storeHooks';
import { RootState } from 'app/store';
import { Box, Flex, FormControl, FormLabel, HStack } from '@chakra-ui/react';
import FieldHandle from '../FieldHandle';
import { useIsValidConnection } from 'features/nodes/hooks/useIsValidConnection';
interface IAINodeOutputProps {
nodeId: string;
output: OutputFieldValue;
template?: OutputFieldTemplate | undefined;
connected: boolean;
}
function IAINodeOutput(props: IAINodeOutputProps) {
const { nodeId, output, template, connected } = props;
const isValidConnection = useIsValidConnection();
return (
<Box position="relative">
<FormControl isDisabled={!template ? true : connected} paddingRight={3}>
{!template ? (
<HStack justifyContent="space-between" alignItems="center">
<FormLabel color="error.400">
Unknown Output: {output.name}
</FormLabel>
</HStack>
) : (
<>
<FormLabel textAlign="end" padding={1}>
{template?.title}
</FormLabel>
<FieldHandle
key={output.id}
nodeId={nodeId}
field={template}
isValidConnection={isValidConnection}
handleType="source"
/>
</>
)}
</FormControl>
</Box>
);
}
interface IAINodeOutputsProps {
nodeId: string;
template: InvocationTemplate;
outputs: Record<string, OutputFieldValue>;
}
const IAINodeOutputs = (props: IAINodeOutputsProps) => {
const { nodeId, template, outputs } = props;
const edges = useAppSelector((state: RootState) => state.nodes.edges);
const renderIAINodeOutputs = useCallback(() => {
const IAINodeOutputsToRender: ReactNode[] = [];
const outputSockets = map(outputs);
outputSockets.forEach((outputSocket) => {
const outputTemplate = template.outputs[outputSocket.name];
const isConnected = Boolean(
edges.filter((connectedInput) => {
return (
connectedInput.source === nodeId &&
connectedInput.sourceHandle === outputSocket.name
);
}).length
);
IAINodeOutputsToRender.push(
<IAINodeOutput
key={outputSocket.id}
nodeId={nodeId}
output={outputSocket}
template={outputTemplate}
connected={isConnected}
/>
);
});
return <Flex flexDir="column">{IAINodeOutputsToRender}</Flex>;
}, [edges, nodeId, outputs, template.outputs]);
return renderIAINodeOutputs();
};
export default memo(IAINodeOutputs);

View File

@@ -0,0 +1,24 @@
import { NODE_MIN_WIDTH } from 'app/constants';
import { memo } from 'react';
import { NodeResizeControl, NodeResizerProps } from 'reactflow';
const IAINodeResizer = (props: NodeResizerProps) => {
const { ...rest } = props;
return (
<NodeResizeControl
style={{
position: 'absolute',
border: 'none',
background: 'transparent',
width: 15,
height: 15,
bottom: 0,
right: 0,
}}
minWidth={NODE_MIN_WIDTH}
{...rest}
></NodeResizeControl>
);
};
export default memo(IAINodeResizer);

View File

@@ -1,13 +1,14 @@
import { Box } from '@chakra-ui/react';
import { memo } from 'react';
import { InputFieldTemplate, InputFieldValue } from '../types/types';
import { ArrayInputFieldComponent } from './fields/ArrayInputField.tsx';
import { BooleanInputFieldComponent } from './fields/BooleanInputFieldComponent';
import { EnumInputFieldComponent } from './fields/EnumInputFieldComponent';
import { ImageInputFieldComponent } from './fields/ImageInputFieldComponent';
import { LatentsInputFieldComponent } from './fields/LatentsInputFieldComponent';
import { ModelInputFieldComponent } from './fields/ModelInputFieldComponent';
import { NumberInputFieldComponent } from './fields/NumberInputFieldComponent';
import { StringInputFieldComponent } from './fields/StringInputFieldComponent';
import ArrayInputFieldComponent from './fields/ArrayInputFieldComponent';
import BooleanInputFieldComponent from './fields/BooleanInputFieldComponent';
import EnumInputFieldComponent from './fields/EnumInputFieldComponent';
import ImageInputFieldComponent from './fields/ImageInputFieldComponent';
import LatentsInputFieldComponent from './fields/LatentsInputFieldComponent';
import ModelInputFieldComponent from './fields/ModelInputFieldComponent';
import NumberInputFieldComponent from './fields/NumberInputFieldComponent';
import StringInputFieldComponent from './fields/StringInputFieldComponent';
type InputFieldComponentProps = {
nodeId: string;
@@ -16,7 +17,7 @@ type InputFieldComponentProps = {
};
// build an individual input element based on the schema
export const InputFieldComponent = (props: InputFieldComponentProps) => {
const InputFieldComponent = (props: InputFieldComponentProps) => {
const { nodeId, field, template } = props;
const { type, value } = field;
@@ -105,3 +106,5 @@ export const InputFieldComponent = (props: InputFieldComponentProps) => {
return <Box p={2}>Unknown field type: {type}</Box>;
};
export default memo(InputFieldComponent);

View File

@@ -1,242 +1,100 @@
import { NodeProps, useReactFlow } from 'reactflow';
import {
Box,
Flex,
FormControl,
FormLabel,
Heading,
HStack,
Tooltip,
Icon,
Code,
Text,
} from '@chakra-ui/react';
import { FaExclamationCircle, FaInfoCircle } from 'react-icons/fa';
import { InvocationValue } from '../types/types';
import { InputFieldComponent } from './InputFieldComponent';
import { FieldHandle } from './FieldHandle';
import { isEqual, map, size } from 'lodash';
import { memo, useMemo, useRef } from 'react';
import { useIsValidConnection } from '../hooks/useIsValidConnection';
import { createSelector } from '@reduxjs/toolkit';
import { RootState } from 'app/store';
import { useAppSelector } from 'app/storeHooks';
import { useGetInvocationTemplate } from '../hooks/useInvocationTemplate';
import { NodeProps } from 'reactflow';
import { Box, Flex, Icon, useToken } from '@chakra-ui/react';
import { FaExclamationCircle } from 'react-icons/fa';
import { InvocationTemplate, InvocationValue } from '../types/types';
const connectedInputFieldsSelector = createSelector(
[(state: RootState) => state.nodes.edges],
(edges) => {
// return edges.map((e) => e.targetHandle);
return edges;
},
{
memoizeOptions: {
resultEqualityCheck: isEqual,
import { memo, PropsWithChildren, useMemo } from 'react';
import IAINodeOutputs from './IAINode/IAINodeOutputs';
import IAINodeInputs from './IAINode/IAINodeInputs';
import IAINodeHeader from './IAINode/IAINodeHeader';
import IAINodeResizer from './IAINode/IAINodeResizer';
import { RootState } from 'app/store';
import { AnyInvocationType } from 'services/events/types';
import { createSelector } from '@reduxjs/toolkit';
import { useAppSelector } from 'app/storeHooks';
import { NODE_MIN_WIDTH } from 'app/constants';
type InvocationComponentWrapperProps = PropsWithChildren & {
selected: boolean;
};
const InvocationComponentWrapper = (props: InvocationComponentWrapperProps) => {
const [nodeSelectedOutline, nodeShadow] = useToken('shadows', [
'nodeSelectedOutline',
'dark-lg',
]);
return (
<Box
sx={{
position: 'relative',
borderRadius: 'md',
minWidth: NODE_MIN_WIDTH,
boxShadow: props.selected
? `${nodeSelectedOutline}, ${nodeShadow}`
: `${nodeShadow}`,
}}
>
{props.children}
</Box>
);
};
const makeTemplateSelector = (type: AnyInvocationType) =>
createSelector(
[(state: RootState) => state.nodes],
(nodes) => {
const template = nodes.invocationTemplates[type];
if (!template) {
return;
}
return template;
},
}
);
{
memoizeOptions: {
resultEqualityCheck: (
a: InvocationTemplate | undefined,
b: InvocationTemplate | undefined
) => a !== undefined && b !== undefined && a.type === b.type,
},
}
);
export const InvocationComponent = memo((props: NodeProps<InvocationValue>) => {
const { id: nodeId, data, selected } = props;
const { type, inputs, outputs } = data;
const isValidConnection = useIsValidConnection();
const templateSelector = useMemo(() => makeTemplateSelector(type), [type]);
const connectedInputs = useAppSelector(connectedInputFieldsSelector);
const getInvocationTemplate = useGetInvocationTemplate();
// TODO: determine if a field/handle is connected and disable the input if so
const template = useAppSelector(templateSelector);
const template = useRef(getInvocationTemplate(type));
if (!template.current) {
if (!template) {
return (
<Box
sx={{
padding: 4,
bg: 'base.800',
borderRadius: 'md',
boxShadow: 'dark-lg',
borderWidth: 2,
borderColor: selected ? 'base.400' : 'transparent',
}}
>
<InvocationComponentWrapper selected={selected}>
<Flex sx={{ alignItems: 'center', justifyContent: 'center' }}>
<Icon color="base.400" boxSize={32} as={FaExclamationCircle}></Icon>
<IAINodeResizer />
</Flex>
</Box>
</InvocationComponentWrapper>
);
}
return (
<Box
sx={{
padding: 4,
bg: 'base.800',
borderRadius: 'md',
boxShadow: 'dark-lg',
borderWidth: 2,
borderColor: selected ? 'base.400' : 'transparent',
}}
>
<Flex flexDirection="column" gap={2}>
<>
<Code>{nodeId}</Code>
<HStack justifyContent="space-between">
<Heading size="sm" fontWeight={500} color="base.100">
{template.current.title}
</Heading>
<Tooltip
label={template.current.description}
placement="top"
hasArrow
shouldWrapChildren
>
<Icon color="base.300" as={FaInfoCircle} />
</Tooltip>
</HStack>
{map(inputs, (input, i) => {
const { id: fieldId } = input;
const inputTemplate = template.current?.inputs[input.name];
if (!inputTemplate) {
return (
<Box
key={fieldId}
position="relative"
p={2}
borderWidth={1}
borderRadius="md"
sx={{
borderColor: 'error.400',
}}
>
<FormControl isDisabled={true}>
<HStack justifyContent="space-between" alignItems="center">
<FormLabel>Unknown input: {input.name}</FormLabel>
</HStack>
</FormControl>
</Box>
);
}
const isConnected = Boolean(
connectedInputs.filter((connectedInput) => {
return (
connectedInput.target === nodeId &&
connectedInput.targetHandle === input.name
);
}).length
);
return (
<Box
key={fieldId}
position="relative"
p={2}
borderWidth={1}
borderRadius="md"
sx={{
borderColor:
!isConnected &&
['always', 'connectionOnly'].includes(
String(inputTemplate?.inputRequirement)
) &&
input.value === undefined
? 'warning.400'
: undefined,
}}
>
<FormControl isDisabled={isConnected}>
<HStack justifyContent="space-between" alignItems="center">
<FormLabel>{inputTemplate?.title}</FormLabel>
<Tooltip
label={inputTemplate?.description}
placement="top"
hasArrow
shouldWrapChildren
>
<Icon color="base.400" as={FaInfoCircle} />
</Tooltip>
</HStack>
<InputFieldComponent
nodeId={nodeId}
field={input}
template={inputTemplate}
/>
</FormControl>
{!['never', 'directOnly'].includes(
inputTemplate?.inputRequirement ?? ''
) && (
<FieldHandle
nodeId={nodeId}
field={inputTemplate}
isValidConnection={isValidConnection}
handleType="target"
/>
)}
</Box>
);
})}
{map(outputs).map((output, i) => {
const outputTemplate = template.current?.outputs[output.name];
const isConnected = Boolean(
connectedInputs.filter((connectedInput) => {
return (
connectedInput.source === nodeId &&
connectedInput.sourceHandle === output.name
);
}).length
);
if (!outputTemplate) {
return (
<Box
key={output.id}
position="relative"
p={2}
borderWidth={1}
borderRadius="md"
sx={{
borderColor: 'error.400',
}}
>
<FormControl isDisabled={true}>
<HStack justifyContent="space-between" alignItems="center">
<FormLabel>Unknown output: {output.name}</FormLabel>
</HStack>
</FormControl>
</Box>
);
}
return (
<Box
key={output.id}
position="relative"
p={2}
borderWidth={1}
borderRadius="md"
>
<FormControl isDisabled={isConnected}>
<FormLabel textAlign="end">
{outputTemplate?.title} Output
</FormLabel>
</FormControl>
<FieldHandle
key={output.id}
nodeId={nodeId}
field={outputTemplate}
isValidConnection={isValidConnection}
handleType="source"
/>
</Box>
);
})}
</>
<InvocationComponentWrapper selected={selected}>
<IAINodeHeader nodeId={nodeId} template={template} />
<Flex
sx={{
flexDirection: 'column',
borderBottomRadius: 'md',
bg: 'base.800',
py: 2,
}}
>
<IAINodeOutputs nodeId={nodeId} outputs={outputs} template={template} />
<IAINodeInputs nodeId={nodeId} inputs={inputs} template={template} />
</Flex>
<Flex></Flex>
</Box>
<IAINodeResizer />
</InvocationComponentWrapper>
);
});

View File

@@ -3,21 +3,15 @@ import { Box } from '@chakra-ui/react';
import { ReactFlowProvider } from 'reactflow';
import { Flow } from './Flow';
import { useAppSelector } from 'app/storeHooks';
import { RootState } from 'app/store';
import { buildNodesGraph } from '../util/nodesGraphBuilder/buildNodesGraph';
import { memo } from 'react';
const NodeEditor = () => {
const state = useAppSelector((state: RootState) => state);
const graph = buildNodesGraph(state);
return (
<Box
sx={{
position: 'relative',
width: 'full',
height: 'full',
height: { base: '100vh', xl: 'full' },
borderRadius: 'md',
bg: 'base.850',
}}
@@ -25,22 +19,8 @@ const NodeEditor = () => {
<ReactFlowProvider>
<Flow />
</ReactFlowProvider>
<Box
as="pre"
fontFamily="monospace"
position="absolute"
top={2}
left={2}
width="full"
height="full"
userSelect="none"
pointerEvents="none"
opacity={0.7}
>
<Box w="50%">{JSON.stringify(graph, null, 2)}</Box>
</Box>
</Box>
);
};
export default NodeEditor;
export default memo(NodeEditor);

View File

@@ -0,0 +1,30 @@
import { Box } from '@chakra-ui/react';
import { RootState } from 'app/store';
import { useAppSelector } from 'app/storeHooks';
import { memo } from 'react';
import { buildNodesGraph } from '../util/nodesGraphBuilder/buildNodesGraph';
const NodeGraphOverlay = () => {
const state = useAppSelector((state: RootState) => state);
const graph = buildNodesGraph(state);
return (
<Box
as="pre"
fontFamily="monospace"
position="absolute"
top={10}
right={2}
opacity={0.7}
background="base.800"
p={2}
maxHeight={500}
overflowY="scroll"
borderRadius="md"
>
{JSON.stringify(graph, null, 2)}
</Box>
);
};
export default memo(NodeGraphOverlay);

View File

@@ -0,0 +1,59 @@
import { ButtonGroup } from '@chakra-ui/react';
import { useAppDispatch, useAppSelector } from 'app/storeHooks';
import { IAIIconButton } from 'exports';
import { memo, useCallback } from 'react';
import { FaCode, FaExpand, FaMinus, FaPlus } from 'react-icons/fa';
import { useReactFlow } from 'reactflow';
import { shouldShowGraphOverlayChanged } from '../store/nodesSlice';
const ViewportControls = () => {
const { zoomIn, zoomOut, fitView } = useReactFlow();
const dispatch = useAppDispatch();
const shouldShowGraphOverlay = useAppSelector(
(state) => state.nodes.shouldShowGraphOverlay
);
const handleClickedZoomIn = useCallback(() => {
zoomIn();
}, [zoomIn]);
const handleClickedZoomOut = useCallback(() => {
zoomOut();
}, [zoomOut]);
const handleClickedFitView = useCallback(() => {
fitView();
}, [fitView]);
const handleClickedToggleGraphOverlay = useCallback(() => {
dispatch(shouldShowGraphOverlayChanged(!shouldShowGraphOverlay));
}, [shouldShowGraphOverlay, dispatch]);
return (
<ButtonGroup isAttached orientation="vertical">
<IAIIconButton
onClick={handleClickedZoomIn}
aria-label="Zoom In"
icon={<FaPlus />}
/>
<IAIIconButton
onClick={handleClickedZoomOut}
aria-label="Zoom Out"
icon={<FaMinus />}
/>
<IAIIconButton
onClick={handleClickedFitView}
aria-label="Fit to Viewport"
icon={<FaExpand />}
/>
<IAIIconButton
isChecked={shouldShowGraphOverlay}
onClick={handleClickedToggleGraphOverlay}
aria-label="Show/Hide Graph"
icon={<FaCode />}
/>
</ButtonGroup>
);
};
export default memo(ViewportControls);

View File

@@ -1,14 +1,17 @@
import {
ArrayInputFieldTemplate,
ArrayInputFieldValue,
} from 'features/nodes/types';
import { FaImage, FaList } from 'react-icons/fa';
} from 'features/nodes/types/types';
import { memo } from 'react';
import { FaList } from 'react-icons/fa';
import { FieldComponentProps } from './types';
export const ArrayInputFieldComponent = (
const ArrayInputFieldComponent = (
props: FieldComponentProps<ArrayInputFieldValue, ArrayInputFieldTemplate>
) => {
const { nodeId, field } = props;
return <FaList />;
};
export default memo(ArrayInputFieldComponent);

View File

@@ -4,11 +4,11 @@ import { fieldValueChanged } from 'features/nodes/store/nodesSlice';
import {
BooleanInputFieldTemplate,
BooleanInputFieldValue,
} from 'features/nodes/types';
import { ChangeEvent } from 'react';
} from 'features/nodes/types/types';
import { ChangeEvent, memo } from 'react';
import { FieldComponentProps } from './types';
export const BooleanInputFieldComponent = (
const BooleanInputFieldComponent = (
props: FieldComponentProps<BooleanInputFieldValue, BooleanInputFieldTemplate>
) => {
const { nodeId, field } = props;
@@ -29,3 +29,5 @@ export const BooleanInputFieldComponent = (
<Switch onChange={handleValueChanged} isChecked={field.value}></Switch>
);
};
export default memo(BooleanInputFieldComponent);

View File

@@ -4,11 +4,11 @@ import { fieldValueChanged } from 'features/nodes/store/nodesSlice';
import {
EnumInputFieldTemplate,
EnumInputFieldValue,
} from 'features/nodes/types';
import { ChangeEvent } from 'react';
} from 'features/nodes/types/types';
import { ChangeEvent, memo } from 'react';
import { FieldComponentProps } from './types';
export const EnumInputFieldComponent = (
const EnumInputFieldComponent = (
props: FieldComponentProps<EnumInputFieldValue, EnumInputFieldTemplate>
) => {
const { nodeId, field, template } = props;
@@ -33,3 +33,5 @@ export const EnumInputFieldComponent = (
</Select>
);
};
export default memo(EnumInputFieldComponent);

View File

@@ -8,13 +8,13 @@ import { fieldValueChanged } from 'features/nodes/store/nodesSlice';
import {
ImageInputFieldTemplate,
ImageInputFieldValue,
} from 'features/nodes/types';
import { DragEvent, useCallback, useState } from 'react';
} from 'features/nodes/types/types';
import { DragEvent, memo, useCallback, useState } from 'react';
import { FaImage } from 'react-icons/fa';
import { ImageType } from 'services/api';
import { FieldComponentProps } from './types';
export const ImageInputFieldComponent = (
const ImageInputFieldComponent = (
props: FieldComponentProps<ImageInputFieldValue, ImageInputFieldTemplate>
) => {
const { nodeId, field } = props;
@@ -62,3 +62,5 @@ export const ImageInputFieldComponent = (
</Box>
);
};
export default memo(ImageInputFieldComponent);

View File

@@ -1,13 +1,16 @@
import {
LatentsInputFieldTemplate,
LatentsInputFieldValue,
} from 'features/nodes/types';
} from 'features/nodes/types/types';
import { memo } from 'react';
import { FieldComponentProps } from './types';
export const LatentsInputFieldComponent = (
const LatentsInputFieldComponent = (
props: FieldComponentProps<LatentsInputFieldValue, LatentsInputFieldTemplate>
) => {
const { nodeId, field } = props;
return null;
};
export default memo(LatentsInputFieldComponent);

View File

@@ -6,13 +6,13 @@ import { fieldValueChanged } from 'features/nodes/store/nodesSlice';
import {
ModelInputFieldTemplate,
ModelInputFieldValue,
} from 'features/nodes/types';
} from 'features/nodes/types/types';
import {
selectModelsById,
selectModelsIds,
} from 'features/system/store/modelSlice';
import { isEqual, map } from 'lodash';
import { ChangeEvent } from 'react';
import { ChangeEvent, memo } from 'react';
import { FieldComponentProps } from './types';
const availableModelsSelector = createSelector(
@@ -28,7 +28,7 @@ const availableModelsSelector = createSelector(
}
);
export const ModelInputFieldComponent = (
const ModelInputFieldComponent = (
props: FieldComponentProps<ModelInputFieldValue, ModelInputFieldTemplate>
) => {
const { nodeId, field } = props;
@@ -55,3 +55,5 @@ export const ModelInputFieldComponent = (
</Select>
);
};
export default memo(ModelInputFieldComponent);

View File

@@ -12,10 +12,11 @@ import {
FloatInputFieldValue,
IntegerInputFieldTemplate,
IntegerInputFieldValue,
} from 'features/nodes/types';
} from 'features/nodes/types/types';
import { memo } from 'react';
import { FieldComponentProps } from './types';
export const NumberInputFieldComponent = (
const NumberInputFieldComponent = (
props: FieldComponentProps<
IntegerInputFieldValue | FloatInputFieldValue,
IntegerInputFieldTemplate | FloatInputFieldTemplate
@@ -39,3 +40,5 @@ export const NumberInputFieldComponent = (
</NumberInput>
);
};
export default memo(NumberInputFieldComponent);

View File

@@ -4,11 +4,11 @@ import { fieldValueChanged } from 'features/nodes/store/nodesSlice';
import {
StringInputFieldTemplate,
StringInputFieldValue,
} from 'features/nodes/types';
import { ChangeEvent } from 'react';
} from 'features/nodes/types/types';
import { ChangeEvent, memo } from 'react';
import { FieldComponentProps } from './types';
export const StringInputFieldComponent = (
const StringInputFieldComponent = (
props: FieldComponentProps<StringInputFieldValue, StringInputFieldTemplate>
) => {
const { nodeId, field } = props;
@@ -27,3 +27,5 @@
return <Input onChange={handleValueChanged} value={field.value}></Input>;
};
export default memo(StringInputFieldComponent);

View File

@@ -0,0 +1,11 @@
import { memo } from 'react';
import { Panel } from 'reactflow';
import ViewportControls from '../ViewportControls';
const BottomLeftPanel = () => (
<Panel position="bottom-left">
<ViewportControls />
</Panel>
);
export default memo(BottomLeftPanel);

View File

@@ -0,0 +1,34 @@
import { RootState } from 'app/store';
import { useAppSelector } from 'app/storeHooks';
import { CSSProperties, memo } from 'react';
import { MiniMap } from 'reactflow';
const MinimapStyle: CSSProperties = {
background: 'var(--invokeai-colors-base-500)',
};
const MinimapPanel = () => {
const currentTheme = useAppSelector(
(state: RootState) => state.ui.currentTheme
);
return (
<MiniMap
nodeStrokeWidth={3}
pannable
zoomable
nodeBorderRadius={30}
style={MinimapStyle}
nodeColor={
currentTheme === 'light'
? 'var(--invokeai-colors-accent-700)'
: currentTheme === 'green'
? 'var(--invokeai-colors-accent-600)'
: 'var(--invokeai-colors-accent-700)'
}
maskColor="var(--invokeai-colors-base-700)"
/>
);
};
export default memo(MinimapPanel);

View File

@@ -0,0 +1,32 @@
import { HStack } from '@chakra-ui/react';
import { useAppDispatch } from 'app/storeHooks';
import IAIButton from 'common/components/IAIButton';
import { memo, useCallback } from 'react';
import { Panel } from 'reactflow';
import { receivedOpenAPISchema } from 'services/thunks/schema';
import { nodesGraphBuilt } from 'services/thunks/session';
const TopCenterPanel = () => {
const dispatch = useAppDispatch();
const handleInvoke = useCallback(() => {
dispatch(nodesGraphBuilt());
}, [dispatch]);
const handleReloadSchema = useCallback(() => {
dispatch(receivedOpenAPISchema());
}, [dispatch]);
return (
<Panel position="top-center">
<HStack>
<IAIButton colorScheme="accent" onClick={handleInvoke}>
Will it blend?
</IAIButton>
<IAIButton onClick={handleReloadSchema}>Reload Schema</IAIButton>
</HStack>
</Panel>
);
};
export default memo(TopCenterPanel);

View File

@@ -0,0 +1,11 @@
import { memo } from 'react';
import { Panel } from 'reactflow';
import AddNodeMenu from '../AddNodeMenu';
const TopLeftPanel = () => (
<Panel position="top-left">
<AddNodeMenu />
</Panel>
);
export default memo(TopLeftPanel);

View File

@@ -0,0 +1,21 @@
import { RootState } from 'app/store';
import { useAppSelector } from 'app/storeHooks';
import { memo } from 'react';
import { Panel } from 'reactflow';
import FieldTypeLegend from '../FieldTypeLegend';
import NodeGraphOverlay from '../NodeGraphOverlay';
const TopRightPanel = () => {
const shouldShowGraphOverlay = useAppSelector(
(state: RootState) => state.nodes.shouldShowGraphOverlay
);
return (
<Panel position="top-right">
<FieldTypeLegend />
{shouldShowGraphOverlay && <NodeGraphOverlay />}
</Panel>
);
};
export default memo(TopRightPanel);

View File

@@ -0,0 +1,211 @@
import { Box, Flex } from '@chakra-ui/layout';
import { RootState } from 'app/store';
import { useAppDispatch, useAppSelector } from 'app/storeHooks';
import IAIInput from 'common/components/IAIInput';
import { Panel } from 'reactflow';
import { map } from 'lodash';
import {
ChangeEvent,
FocusEvent,
KeyboardEvent,
memo,
ReactNode,
useCallback,
useRef,
useState,
} from 'react';
import { Tooltip } from '@chakra-ui/tooltip';
import { AnyInvocationType } from 'services/events/types';
import { useBuildInvocation } from 'features/nodes/hooks/useBuildInvocation';
import { makeToast } from 'features/system/hooks/useToastWatcher';
import { addToast } from 'features/system/store/systemSlice';
import { nodeAdded } from '../../store/nodesSlice';
import Fuse from 'fuse.js';
import { InvocationTemplate } from 'features/nodes/types/types';
interface NodeListItemProps {
title: string;
description: string;
type: AnyInvocationType;
isSelected: boolean;
addNode: (nodeType: AnyInvocationType) => void;
}
const NodeListItem = (props: NodeListItemProps) => {
const { title, description, type, isSelected, addNode } = props;
return (
<Tooltip label={description} placement="end" hasArrow>
<Box
px={4}
onClick={() => addNode(type)}
background={isSelected ? 'base.600' : 'none'}
_hover={{
background: 'base.600',
cursor: 'pointer',
}}
>
{title}
</Box>
</Tooltip>
);
};
NodeListItem.displayName = 'NodeListItem';
const NodeSearch = () => {
const invocationTemplates = useAppSelector(
(state: RootState) => state.nodes.invocationTemplates
);
const nodes = map(invocationTemplates);
const [filteredNodes, setFilteredNodes] = useState<
Fuse.FuseResult<InvocationTemplate>[]
>([]);
const buildInvocation = useBuildInvocation();
const dispatch = useAppDispatch();
const [searchText, setSearchText] = useState<string>('');
const [showNodeList, setShowNodeList] = useState<boolean>(false);
const [focusedIndex, setFocusedIndex] = useState<number>(-1);
const nodeSearchRef = useRef<HTMLDivElement>(null);
const fuseOptions = {
findAllMatches: true,
threshold: 0,
ignoreLocation: true,
keys: ['title', 'type', 'tags'],
};
const fuse = new Fuse(nodes, fuseOptions);
const findNode = (e: ChangeEvent<HTMLInputElement>) => {
setSearchText(e.target.value);
setFilteredNodes(fuse.search(e.target.value));
setShowNodeList(true);
};
const addNode = useCallback(
(nodeType: AnyInvocationType) => {
const invocation = buildInvocation(nodeType);
if (!invocation) {
const toast = makeToast({
status: 'error',
title: `Unknown Invocation type ${nodeType}`,
});
dispatch(addToast(toast));
return;
}
dispatch(nodeAdded(invocation));
},
[dispatch, buildInvocation]
);
const renderNodeList = () => {
const nodeListToRender: ReactNode[] = [];
if (searchText.length > 0) {
filteredNodes.forEach(({ item }, index) => {
const { title, description, type } = item;
if (title.toLowerCase().includes(searchText)) {
nodeListToRender.push(
<NodeListItem
key={index}
title={title}
description={description}
type={type}
isSelected={focusedIndex === index}
addNode={addNode}
/>
);
}
});
} else {
nodes.forEach(({ title, description, type }, index) => {
nodeListToRender.push(
<NodeListItem
key={index}
title={title}
description={description}
type={type}
isSelected={focusedIndex === index}
addNode={addNode}
/>
);
});
}
return (
<Flex flexDirection="column" background="base.900" borderRadius={6}>
{nodeListToRender}
</Flex>
);
};
const searchKeyHandler = (e: KeyboardEvent<HTMLDivElement>) => {
const { key } = e;
let nextIndex = 0;
if (key === 'ArrowDown') {
setShowNodeList(true);
if (searchText.length > 0) {
nextIndex = (focusedIndex + 1) % filteredNodes.length;
} else {
nextIndex = (focusedIndex + 1) % nodes.length;
}
}
if (key === 'ArrowUp') {
setShowNodeList(true);
if (searchText.length > 0) {
nextIndex =
(focusedIndex + filteredNodes.length - 1) % filteredNodes.length;
} else {
nextIndex = (focusedIndex + nodes.length - 1) % nodes.length;
}
}
// # TODO Handle Blur
// if (key === 'Escape') {
// }
if (key === 'Enter') {
let selectedNodeType: AnyInvocationType;
if (searchText.length > 0) {
selectedNodeType = filteredNodes[focusedIndex].item.type;
} else {
selectedNodeType = nodes[focusedIndex].type;
}
addNode(selectedNodeType);
setShowNodeList(false);
}
setFocusedIndex(nextIndex);
};
const searchInputBlurHandler = (e: FocusEvent<HTMLDivElement>) => {
if (!e.currentTarget.contains(e.relatedTarget)) setShowNodeList(false);
};
return (
<Panel position="top-left">
<Flex
flexDirection="column"
tabIndex={1}
onKeyDown={searchKeyHandler}
onFocus={() => setShowNodeList(true)}
onBlur={searchInputBlurHandler}
ref={nodeSearchRef}
>
<IAIInput value={searchText} onChange={findNode} />
{showNodeList && renderNodeList()}
</Flex>
</Panel>
);
};
export default memo(NodeSearch);
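NodeSearch delegates the matching to Fuse.js; a standalone sketch of the same configuration (the options and keys mirror the component, the two sample templates are invented):

import Fuse from 'fuse.js';

// Minimal stand-ins for InvocationTemplate entries.
const templates = [
  { title: 'Text To Image', type: 'txt2img', tags: ['image'] },
  { title: 'Image To Image', type: 'img2img', tags: ['image'] },
];

// Same options as NodeSearch: exact substring matching anywhere in the field.
const fuse = new Fuse(templates, {
  findAllMatches: true,
  threshold: 0,
  ignoreLocation: true,
  keys: ['title', 'type', 'tags'],
});

// search() wraps each hit as { item, refIndex, ... }, which is what
// filteredNodes stores and renderNodeList destructures.
fuse.search('image').forEach(({ item }) => console.log(item.type));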

View File

@@ -1,7 +1,9 @@
import { createSelector } from '@reduxjs/toolkit';
import { RootState } from 'app/store';
import { useAppSelector } from 'app/storeHooks';
import { reduce } from 'lodash';
import { Node } from 'reactflow';
import { useCallback } from 'react';
import { Node, useReactFlow } from 'reactflow';
import { AnyInvocationType } from 'services/events/types';
import { v4 as uuidv4 } from 'uuid';
import {
@@ -11,68 +13,82 @@
} from '../types/types';
import { buildInputFieldValue } from '../util/fieldValueBuilders';
const templatesSelector = createSelector(
[(state: RootState) => state.nodes],
(nodes) => nodes.invocationTemplates,
{ memoizeOptions: { resultEqualityCheck: (a, b) => true } }
);
export const useBuildInvocation = () => {
const invocationTemplates = useAppSelector(
(state: RootState) => state.nodes.invocationTemplates
);
const invocationTemplates = useAppSelector(templatesSelector);
return (type: AnyInvocationType) => {
const template = invocationTemplates[type];
const flow = useReactFlow();
if (template === undefined) {
console.error(`Unable to find template ${type}.`);
return;
}
return useCallback(
(type: AnyInvocationType) => {
const template = invocationTemplates[type];
const nodeId = uuidv4();
if (template === undefined) {
console.error(`Unable to find template ${type}.`);
return;
}
const inputs = reduce(
template.inputs,
(inputsAccumulator, inputTemplate, inputName) => {
const fieldId = uuidv4();
const nodeId = uuidv4();
const inputFieldValue: InputFieldValue = buildInputFieldValue(
fieldId,
inputTemplate
);
const inputs = reduce(
template.inputs,
(inputsAccumulator, inputTemplate, inputName) => {
const fieldId = uuidv4();
inputsAccumulator[inputName] = inputFieldValue;
const inputFieldValue: InputFieldValue = buildInputFieldValue(
fieldId,
inputTemplate
);
return inputsAccumulator;
},
{} as Record<string, InputFieldValue>
);
inputsAccumulator[inputName] = inputFieldValue;
const outputs = reduce(
template.outputs,
(outputsAccumulator, outputTemplate, outputName) => {
const fieldId = uuidv4();
return inputsAccumulator;
},
{} as Record<string, InputFieldValue>
);
const outputFieldValue: OutputFieldValue = {
id: fieldId,
name: outputName,
type: outputTemplate.type,
};
const outputs = reduce(
template.outputs,
(outputsAccumulator, outputTemplate, outputName) => {
const fieldId = uuidv4();
outputsAccumulator[outputName] = outputFieldValue;
const outputFieldValue: OutputFieldValue = {
id: fieldId,
name: outputName,
type: outputTemplate.type,
};
return outputsAccumulator;
},
{} as Record<string, OutputFieldValue>
);
outputsAccumulator[outputName] = outputFieldValue;
const invocation: Node<InvocationValue> = {
id: nodeId,
type: 'invocation',
position: { x: 0, y: 0 },
data: {
return outputsAccumulator;
},
{} as Record<string, OutputFieldValue>
);
const { x, y } = flow.project({
x: window.innerWidth / 2.5,
y: window.innerHeight / 8,
});
const invocation: Node<InvocationValue> = {
id: nodeId,
type,
inputs,
outputs,
},
};
type: 'invocation',
position: { x: x, y: y },
data: {
id: nodeId,
type,
inputs,
outputs,
},
};
return invocation;
};
return invocation;
},
[invocationTemplates, flow]
);
};
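Usage is unchanged by the rework; condensed from the NodeSearch consumer shown above:

const buildInvocation = useBuildInvocation();
const dispatch = useAppDispatch();

const addNode = useCallback(
  (nodeType: AnyInvocationType) => {
    // buildInvocation returns undefined when no template is registered for the type.
    const invocation = buildInvocation(nodeType);
    if (!invocation) {
      dispatch(
        addToast(
          makeToast({ status: 'error', title: `Unknown Invocation type ${nodeType}` })
        )
      );
      return;
    }
    dispatch(nodeAdded(invocation));
  },
  [buildInvocation, dispatch]
);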

View File

@@ -1,16 +0,0 @@
import { useAppSelector } from 'app/storeHooks';
import { invocationTemplatesSelector } from '../store/selectors/invocationTemplatesSelector';
export const useGetInvocationTemplate = () => {
const invocationTemplates = useAppSelector(invocationTemplatesSelector);
return (invocationType: string) => {
const template = invocationTemplates[invocationType];
if (!template) {
return;
}
return template;
};
};

View File

@@ -24,6 +24,7 @@ export type NodesState = {
invocationTemplates: Record<string, InvocationTemplate>;
connectionStartParams: OnConnectStartParams | null;
lastGraph: Graph | null;
shouldShowGraphOverlay: boolean;
};
export const initialNodesState: NodesState = {
@@ -33,6 +34,7 @@ export const initialNodesState: NodesState = {
invocationTemplates: {},
connectionStartParams: null,
lastGraph: null,
shouldShowGraphOverlay: false,
};
const nodesSlice = createSlice({
@@ -77,6 +79,9 @@ const nodesSlice = createSlice({
state.nodes[nodeIndex].data.inputs[fieldName].value = value;
}
},
shouldShowGraphOverlayChanged: (state, action: PayloadAction<boolean>) => {
state.shouldShowGraphOverlay = action.payload;
},
},
extraReducers(builder) {
builder.addCase(receivedOpenAPISchema.fulfilled, (state, action) => {
@@ -98,6 +103,7 @@ export const {
connectionMade,
connectionStarted,
connectionEnded,
shouldShowGraphOverlayChanged,
} = nodesSlice.actions;
export default nodesSlice.reducer;

View File

@@ -22,46 +22,55 @@ const getColorTokenCssVariable = (color: string) =>
export const FIELDS: Record<FieldType, FieldUIConfig> = {
integer: {
color: 'red',
colorCssVar: getColorTokenCssVariable('red'),
title: 'Integer',
description: 'Integers are whole numbers, without a decimal point.',
},
float: {
color: 'orange',
colorCssVar: getColorTokenCssVariable('orange'),
title: 'Float',
description: 'Floats are numbers with a decimal point.',
},
string: {
color: 'yellow',
colorCssVar: getColorTokenCssVariable('yellow'),
title: 'String',
description: 'Strings are text.',
},
boolean: {
color: 'green',
colorCssVar: getColorTokenCssVariable('green'),
title: 'Boolean',
description: 'Booleans are true or false.',
},
enum: {
color: 'blue',
colorCssVar: getColorTokenCssVariable('blue'),
title: 'Enum',
description: 'Enums are values that may be one of a number of options.',
},
image: {
color: 'purple',
colorCssVar: getColorTokenCssVariable('purple'),
title: 'Image',
description: 'Images may be passed between nodes.',
},
latents: {
color: 'pink',
colorCssVar: getColorTokenCssVariable('pink'),
title: 'Latents',
description: 'Latents may be passed between nodes.',
},
model: {
color: 'teal',
colorCssVar: getColorTokenCssVariable('teal'),
title: 'Model',
description: 'Models are models.',
},
array: {
color: 'gray',
colorCssVar: getColorTokenCssVariable('gray'),
title: 'Array',
description: 'TODO: Array type description.',
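The getColorTokenCssVariable helper itself is outside this hunk; judging from the theme tokens used elsewhere (for example var(--invokeai-colors-base-500) in MinimapPanel), it plausibly resolves a color scheme name to a CSS custom property, something like:

// Assumption: maps a Chakra color scheme name to the theme's CSS variable;
// the -500 shade suffix is a guess based on MinimapPanel's tokens.
const getColorTokenCssVariable = (color: string) =>
  `var(--invokeai-colors-${color}-500)`;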

View File

@@ -39,6 +39,7 @@ export type InvocationTemplate = {
};
export type FieldUIConfig = {
color: string;
colorCssVar: string;
title: string;
description: string;

View File

@@ -76,7 +76,7 @@ const PromptInput = () => {
onKeyDown={handleKeyDown}
resize="vertical"
ref={promptRef}
minH={40}
minH={{ base: 20, lg: 40 }}
/>
</FormControl>
</Box>

View File

@@ -10,19 +10,28 @@ const InvokeAILogoComponent = () => {
return (
<Flex alignItems="center" gap={3} ps={1}>
<Image src={InvokeAILogoImage} alt="invoke-ai-logo" w="32px" h="32px" />
<Text fontSize="xl">
invoke <strong>ai</strong>
</Text>
<Text
sx={{
fontWeight: 300,
marginTop: 1,
}}
variant="subtext"
>
{appVersion}
</Text>
<Image
src={InvokeAILogoImage}
alt="invoke-ai-logo"
w="32px"
h="32px"
minW="32px"
minH="32px"
/>
<Flex gap={3} display={{ base: 'inherit', sm: 'none', md: 'inherit' }}>
<Text fontSize="xl">
invoke <strong>ai</strong>
</Text>
<Text
sx={{
fontWeight: 300,
marginTop: 1,
}}
variant="subtext"
>
{appVersion}
</Text>
</Flex>
</Flex>
);
};

View File

@@ -1,126 +1,68 @@
import { Flex, Grid, Link } from '@chakra-ui/react';
import { FaBug, FaCube, FaDiscord, FaGithub, FaKeyboard } from 'react-icons/fa';
import IAIIconButton from 'common/components/IAIIconButton';
import HotkeysModal from './HotkeysModal/HotkeysModal';
import ModelManagerModal from './ModelManager/ModelManagerModal';
import { Flex, Grid } from '@chakra-ui/react';
import { useState } from 'react';
import ModelSelect from './ModelSelect';
import SettingsModal from './SettingsModal/SettingsModal';
import StatusIndicator from './StatusIndicator';
import ThemeChanger from './ThemeChanger';
import LanguagePicker from './LanguagePicker';
import { useTranslation } from 'react-i18next';
import { MdSettings } from 'react-icons/md';
import InvokeAILogoComponent from './InvokeAILogoComponent';
import SiteHeaderMenu from './SiteHeaderMenu';
import useResolution from 'common/hooks/useResolution';
import { FaBars } from 'react-icons/fa';
import { IAIIconButton } from 'exports';
import { useTranslation } from 'react-i18next';
/**
* Header, includes color mode toggle, settings button, status message.
*/
const SiteHeader = () => {
const [menuOpened, setMenuOpened] = useState(false);
const resolution = useResolution();
const { t } = useTranslation();
return (
<Grid gridTemplateColumns="auto max-content">
<InvokeAILogoComponent />
<Flex alignItems="center" gap={2}>
<Grid
gridTemplateColumns={{ base: 'auto', sm: 'auto max-content' }}
paddingRight={{ base: 10, xl: 0 }}
gap={2}
>
<Flex justifyContent={{ base: 'center', sm: 'start' }}>
<InvokeAILogoComponent />
</Flex>
<Flex
alignItems="center"
gap={2}
justifyContent={{ base: 'center', sm: 'start' }}
>
<StatusIndicator />
<ModelSelect />
<ModelManagerModal>
{resolution === 'desktop' ? (
<SiteHeaderMenu />
) : (
<IAIIconButton
aria-label={t('modelManager.modelManager')}
tooltip={t('modelManager.modelManager')}
size="sm"
variant="link"
data-variant="link"
fontSize={20}
icon={<FaCube />}
/>
</ModelManagerModal>
<HotkeysModal>
<IAIIconButton
aria-label={t('common.hotkeysLabel')}
tooltip={t('common.hotkeysLabel')}
size="sm"
variant="link"
data-variant="link"
fontSize={20}
icon={<FaKeyboard />}
/>
</HotkeysModal>
<ThemeChanger />
<LanguagePicker />
<Link
isExternal
href="http://github.com/invoke-ai/InvokeAI/issues"
marginBottom="-0.25rem"
>
<IAIIconButton
aria-label={t('common.reportBugLabel')}
tooltip={t('common.reportBugLabel')}
variant="link"
data-variant="link"
fontSize={20}
size="sm"
icon={<FaBug />}
/>
</Link>
<Link
isExternal
href="http://github.com/invoke-ai/InvokeAI"
marginBottom="-0.25rem"
>
<IAIIconButton
aria-label={t('common.githubLabel')}
tooltip={t('common.githubLabel')}
variant="link"
data-variant="link"
fontSize={20}
size="sm"
icon={<FaGithub />}
/>
</Link>
<Link
isExternal
href="https://discord.gg/ZmtBAhwWhy"
marginBottom="-0.25rem"
>
<IAIIconButton
aria-label={t('common.discordLabel')}
tooltip={t('common.discordLabel')}
variant="link"
data-variant="link"
fontSize={20}
size="sm"
icon={<FaDiscord />}
/>
</Link>
<SettingsModal>
<IAIIconButton
aria-label={t('common.settingsLabel')}
tooltip={t('common.settingsLabel')}
variant="link"
data-variant="link"
fontSize={22}
size="sm"
icon={<MdSettings />}
/>
</SettingsModal>
icon={<FaBars />}
aria-label={t('accessibility.menu')}
background={menuOpened ? 'base.800' : 'none'}
_hover={{ background: menuOpened ? 'base.800' : 'none' }}
onClick={() => setMenuOpened(!menuOpened)}
p={0}
></IAIIconButton>
)}
</Flex>
{resolution !== 'desktop' && menuOpened && (
<Flex
position="absolute"
right={6}
top={{ base: 28, sm: 16 }}
backgroundColor="base.800"
padding={4}
borderRadius={4}
zIndex={{ base: 99, xl: 0 }}
>
<SiteHeaderMenu />
</Flex>
)}
</Grid>
);
};

View File

@@ -0,0 +1,113 @@
import { Flex, Link } from '@chakra-ui/react';
import { useTranslation } from 'react-i18next';
import { FaCube, FaKeyboard, FaBug, FaGithub, FaDiscord } from 'react-icons/fa';
import { MdSettings } from 'react-icons/md';
import HotkeysModal from './HotkeysModal/HotkeysModal';
import LanguagePicker from './LanguagePicker';
import ModelManagerModal from './ModelManager/ModelManagerModal';
import SettingsModal from './SettingsModal/SettingsModal';
import ThemeChanger from './ThemeChanger';
import IAIIconButton from 'common/components/IAIIconButton';
const SiteHeaderMenu = () => {
const { t } = useTranslation();
return (
<Flex
alignItems="center"
flexDirection={{ base: 'column', xl: 'row' }}
gap={{ base: 4, xl: 1 }}
>
<ModelManagerModal>
<IAIIconButton
aria-label={t('modelManager.modelManager')}
tooltip={t('modelManager.modelManager')}
size="sm"
variant="link"
data-variant="link"
fontSize={20}
icon={<FaCube />}
/>
</ModelManagerModal>
<HotkeysModal>
<IAIIconButton
aria-label={t('common.hotkeysLabel')}
tooltip={t('common.hotkeysLabel')}
size="sm"
variant="link"
data-variant="link"
fontSize={20}
icon={<FaKeyboard />}
/>
</HotkeysModal>
<ThemeChanger />
<LanguagePicker />
<Link
isExternal
href="http://github.com/invoke-ai/InvokeAI/issues"
marginBottom="-0.25rem"
>
<IAIIconButton
aria-label={t('common.reportBugLabel')}
tooltip={t('common.reportBugLabel')}
variant="link"
data-variant="link"
fontSize={20}
size="sm"
icon={<FaBug />}
/>
</Link>
<Link
isExternal
href="http://github.com/invoke-ai/InvokeAI"
marginBottom="-0.25rem"
>
<IAIIconButton
aria-label={t('common.githubLabel')}
tooltip={t('common.githubLabel')}
variant="link"
data-variant="link"
fontSize={20}
size="sm"
icon={<FaGithub />}
/>
</Link>
<Link
isExternal
href="https://discord.gg/ZmtBAhwWhy"
marginBottom="-0.25rem"
>
<IAIIconButton
aria-label={t('common.discordLabel')}
tooltip={t('common.discordLabel')}
variant="link"
data-variant="link"
fontSize={20}
size="sm"
icon={<FaDiscord />}
/>
</Link>
<SettingsModal>
<IAIIconButton
aria-label={t('common.settingsLabel')}
tooltip={t('common.settingsLabel')}
variant="link"
data-variant="link"
fontSize={22}
size="sm"
icon={<MdSettings />}
/>
</SettingsModal>
</Flex>
);
};
SiteHeaderMenu.displayName = 'SiteHeaderMenu';
export default SiteHeaderMenu;

Some files were not shown because too many files have changed in this diff.