InvokeAI Web UI

The UI is a fairly straightforward TypeScript React app.

Core Libraries

InvokeAI's UI is made possible by a number of excellent open-source libraries. The most heavily-used are listed below, but there are many others.

Redux Toolkit

Redux Toolkit is used for state management and fetching/caching:

  • RTK-Query for data fetching and caching
  • createAsyncThunk for a couple of other HTTP requests
  • createEntityAdapter to normalize things like images and models
  • createListenerMiddleware for async workflows

We use redux-remember for persistence.
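
For example, async workflows built with createListenerMiddleware follow a pattern roughly like this (a minimal sketch with a hypothetical action, not the app's actual code):

import { createAction, createListenerMiddleware } from '@reduxjs/toolkit';

// Hypothetical action for illustration; the real app defines many of these.
const sessionInvoked = createAction<{ sessionId: string }>('session/invoked');

export const listenerMiddleware = createListenerMiddleware();

// React to the action asynchronously, outside of any reducer.
listenerMiddleware.startListening({
  actionCreator: sessionInvoked,
  effect: async (action, listenerApi) => {
    // e.g. fire follow-up requests, wait for other actions, update state, etc.
    console.log('session invoked', action.payload.sessionId);
  },
});

The middleware is then attached to the store via configureStore's middleware option so these effects run alongside the reducers.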

Socket.IO

Socket.IO is used for server-to-client events, like generation progress and queue state changes.
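
On the client, this boils down to subscribing to events on a socket.io-client connection, roughly like this (the URL and event name are illustrative, not InvokeAI's actual event schema):

import { io } from 'socket.io-client';

// Connect to the backend (address is illustrative).
const socket = io('http://127.0.0.1:9090');

// Handle a server-to-client event; in the app, handlers typically dispatch Redux actions.
socket.on('queue_item_status_changed', (payload) => {
  console.log('queue item status changed', payload);
});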

Chakra UI

Chakra UI is our primary UI library, but we also use a few components from Mantine v6.

KonvaJS

KonvaJS powers the canvas. In the future, we'd like to explore PixiJS or WebGPU.

Vite

Vite is our bundler.

i18next & Weblate

We use i18next for localization, but translation to languages other than English happens on our Weblate project. Only the English source strings should be changed in this repo.
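
In components, strings are looked up by key via react-i18next, roughly like this (the key name is illustrative):

import { useTranslation } from 'react-i18next';

export const InvokeButtonLabel = () => {
  const { t } = useTranslation();
  // The English source string lives in this repo; other languages come from Weblate.
  return <span>{t('common.invoke')}</span>;
};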

openapi-typescript

openapi-typescript is used to generate types from the server's OpenAPI schema. See TYPES_CODEGEN.md.

reactflow

reactflow powers the Workflow Editor.
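
A bare-bones reactflow setup looks roughly like this (the node and edge data are placeholders, not the Workflow Editor's actual types):

import ReactFlow, { Background } from 'reactflow';
import 'reactflow/dist/style.css';

const nodes = [{ id: '1', position: { x: 0, y: 0 }, data: { label: 'Node' } }];
const edges: { id: string; source: string; target: string }[] = [];

export const Flow = () => (
  <ReactFlow nodes={nodes} edges={edges} fitView>
    <Background />
  </ReactFlow>
);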

zod

zod schemas are used to model data structures and provide runtime validation.
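
For example, a single zod schema describes a structure, yields its static type, and validates unknown data at runtime (field names are illustrative):

import { z } from 'zod';

// Describe the shape once...
const zImageField = z.object({
  image_name: z.string(),
});

// ...derive the static type from it...
type ImageField = z.infer<typeof zImageField>;

// ...and validate unknown data at runtime.
const result = zImageField.safeParse(JSON.parse('{"image_name":"foo.png"}'));
if (result.success) {
  const image: ImageField = result.data;
}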

Client Types Generation

We use openapi-typescript to generate types from the app's OpenAPI schema.

The generated types are written to invokeai/frontend/web/src/services/api/schema.d.ts. This file is committed to the repo.

The server must be running and available at http://127.0.0.1:9090.

# from the repo root, start the server
python scripts/invokeai-web.py
# from invokeai/frontend/web/, run the script
yarn typegen
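
The generated file exports the schema as plain TypeScript types (paths and components), which can then be imported wherever the API is used; roughly like this (the schema name and import path are illustrative):

import type { components } from 'services/api/schema';

// A model type derived from the server's OpenAPI schema (name is illustrative).
type ImageDTO = components['schemas']['ImageDTO'];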

Package Scripts

See package.json for all scripts.

Run with yarn <script name>.

  • dev: run the frontend in dev mode, enabling hot reloading
  • build: run all checks (madge, eslint, prettier, tsc) and then build the frontend
  • typegen: generate types from the OpenAPI schema (see Client Types Generation)
  • lint:madge: check frontend for circular dependencies
  • lint:eslint: check frontend for code quality
  • lint:prettier: check frontend for code formatting
  • lint:tsc: check frontend for type issues
  • lint: run all checks concurrently
  • fix: run eslint and prettier, fixing fixable issues

Contributing

Thanks for your interest in contributing to the InvokeAI Web UI!

We encourage you to ping @psychedelicious and @blessedcoolant on Discord if you want to contribute, just to touch base and make sure your work doesn't conflict with anything else going on. The project is very active.

Dev Environment

Install Node.js and Yarn Classic.

From invokeai/frontend/web/, run yarn install to get everything set up.

Start everything in dev mode:

  1. Start the dev server: yarn dev
  2. Start the InvokeAI Nodes backend: python scripts/invokeai-web.py # run from the repo root
  3. Point your browser to the dev server address, e.g. http://localhost:5173/

VSCode Remote Dev

We've noticed an intermittent issue with VSCode Remote Dev port forwarding. If you use this feature, you may occasionally click the Invoke button and get no response until the request times out. We suggest disabling the IDE's port forwarding and forwarding the ports manually via SSH:

ssh -L 9090:localhost:9090 -L 5173:localhost:5173 user@host

Production builds

For a number of technical and logistical reasons, we need to commit UI build artefacts to the repo.

If you submit a PR, there is a good chance we will ask you to include a separate commit with a build of the app.

To build for production, run yarn build.