# InvokeAI Web UI

The UI is a fairly straightforward TypeScript React app.
## Core Libraries

InvokeAI's UI is made possible by a number of excellent open-source libraries. The most heavily-used are listed below, but there are many others.
### Redux Toolkit

Redux Toolkit is used for state management and fetching/caching:

- RTK-Query for data fetching and caching
- `createAsyncThunk` for a couple other HTTP requests
- `createEntityAdapter` to normalize things like images and models
- `createListenerMiddleware` for async workflows
We use redux-remember for persistence.
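A minimal sketch of how a couple of these pieces fit together, using an RTK Query endpoint and the listener middleware. The endpoint, base URL, and effect below are illustrative assumptions, not the app's actual code:

```ts
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';
import { createListenerMiddleware } from '@reduxjs/toolkit';

// Hypothetical RTK Query API slice; the endpoint and base URL are illustrative.
export const api = createApi({
  reducerPath: 'api',
  baseQuery: fetchBaseQuery({ baseUrl: '/api/v1/' }),
  endpoints: (build) => ({
    listModels: build.query<unknown, void>({ query: () => 'models/' }),
  }),
});

// The listener middleware runs async workflows in response to dispatched actions.
export const listenerMiddleware = createListenerMiddleware();
listenerMiddleware.startListening({
  matcher: api.endpoints.listModels.matchFulfilled,
  effect: async (action, listenerApi) => {
    // react to the fulfilled request here, e.g. dispatch follow-up actions
  },
});
```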
### Socket.IO

Socket.IO is used for server-to-client events, like generation progress and queue state changes.
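For illustration, a client-side subscription might look like the sketch below; the socket path and event names are assumptions, not the app's actual event contract:

```ts
import { io } from 'socket.io-client';

// Connect to the InvokeAI server; the path and event names here are illustrative.
const socket = io('http://127.0.0.1:9090', { path: '/ws/socket.io' });

socket.on('connect', () => {
  console.log('socket connected', socket.id);
});

// Server-to-client events push generation progress and queue state changes,
// typically dispatched into Redux so the UI can react.
socket.on('invocation_progress', (payload: unknown) => {
  console.log('progress event', payload);
});
```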
### Chakra UI

Chakra UI is our primary UI library, but we also use a few components from Mantine v6.
### KonvaJS

KonvaJS powers the canvas. In the future, we'd like to explore PixiJS or WebGPU.
### Vite

Vite is our bundler.
### i18next & Weblate

We use i18next for localization, but translation to languages other than English happens on our Weblate project. Only the English source strings should be changed in this repo.
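In components, strings are read through i18next. A typical usage, assuming the standard react-i18next bindings (the component and translation key below are illustrative):

```tsx
import { useTranslation } from 'react-i18next';

// Only the English source strings in this repo should be edited;
// other locales are translated on Weblate. The key below is illustrative.
export const InvokeButtonLabel = () => {
  const { t } = useTranslation();
  return <span>{t('parameters.invoke')}</span>;
};
```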
### openapi-typescript

openapi-typescript is used to generate types from the server's OpenAPI schema. See TYPES_CODEGEN.md.
### reactflow

reactflow powers the Workflow Editor.
### zod

zod schemas are used to model data structures and provide runtime validation.
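A minimal sketch of the pattern, with an illustrative schema and field names (not the app's actual schemas):

```ts
import { z } from 'zod';

// Illustrative schema; the app defines its own schemas for its data structures.
const zImageField = z.object({
  image_name: z.string(),
  width: z.number().int().positive(),
  height: z.number().int().positive(),
});

// The static type is inferred from the schema...
type ImageField = z.infer<typeof zImageField>;

// ...and the same schema validates unknown data at runtime.
const result = zImageField.safeParse({ image_name: 'foo.png', width: 512, height: 512 });
if (result.success) {
  const image: ImageField = result.data;
  console.log(image.image_name);
} else {
  console.error(result.error.format());
}
```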
## Client Types Generation

We use openapi-typescript to generate types from the app's OpenAPI schema.
The generated types are written to `invokeai/frontend/web/src/services/api/schema.d.ts`. This file is committed to the repo.
The server must be started and available at http://127.0.0.1:9090.

```sh
# from the repo root, start the server
python scripts/invokeai-web.py

# from invokeai/frontend/web/, run the script
pnpm typegen
```
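Once generated, the `paths` and `components` interfaces in `schema.d.ts` can be used to derive request and response types. The import path, schema name, and endpoint path below are assumptions for illustration; the actual names depend on the server's OpenAPI schema and the app's path aliases:

```ts
// Import path assumes an alias to invokeai/frontend/web/src/.
import type { components, paths } from 'services/api/schema';

// Illustrative aliases; actual names come from the generated schema.
type ImageDTO = components['schemas']['ImageDTO'];
type ListImagesResponse =
  paths['/api/v1/images/']['get']['responses']['200']['content']['application/json'];
```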
## Package Scripts
See `package.json` for all scripts. Run them with `pnpm <script name>`.
- `dev`: run the frontend in dev mode, enabling hot reloading
- `build`: run all checks (madge, eslint, prettier, tsc) and then build the frontend
- `typegen`: generate types from the OpenAPI schema (see Client Types Generation)
- `lint:madge`: check frontend for circular dependencies
- `lint:eslint`: check frontend for code quality
- `lint:prettier`: check frontend for code formatting
- `lint:tsc`: check frontend for type issues
- `lint`: run all checks concurrently
- `fix`: run `eslint` and `prettier`, fixing fixable issues
## Contributing
Thanks for your interest in contributing to the InvokeAI Web UI!
We encourage you to ping @psychedelicious and @blessedcoolant on Discord if you want to contribute, just to touch base and ensure your work doesn't conflict with anything else going on. The project is very active.
### Dev Environment
From `invokeai/frontend/web/`, run `pnpm i` to get everything set up.
Start everything in dev mode:
- Start the dev server: `pnpm dev`
- Start the InvokeAI Nodes backend (from the repo root): `python scripts/invokeai-web.py`
- Point your browser to the dev server address, e.g. http://localhost:5173/
#### VSCode Remote Dev

We've noticed an intermittent issue with VSCode Remote Dev port forwarding: if you use this feature, you may occasionally click the Invoke button and get nothing back until the request times out. We suggest disabling the IDE's port forwarding and forwarding the ports manually via SSH:

```sh
ssh -L 9090:localhost:9090 -L 5173:localhost:5173 user@host
```
### Production builds
For a number of technical and logistical reasons, we need to commit UI build artefacts to the repo.
If you submit a PR, there is a good chance we will ask you to include a separate commit with a build of the app.
To build for production, run `pnpm build`.