Mirror of https://github.com/invoke-ai/InvokeAI, synced 2025-07-25 21:05:37 +00:00
Update workflows handling for Workflow Library.

**Updated Workflow Storage**

"Embedded Workflows" are workflows associated with images and are now stored only in the image files. "Library Workflows" are not associated with images and are stored only in the DB.

This works out nicely. We have always saved workflows to files, but recently began saving them to the DB in addition to the image files. When that happened, we stopped reading workflows from files, so all workflows that existed only in images became inaccessible. With this change, access to those workflows is restored and no workflows are lost.

**Updated Workflow Handling in Nodes**

Prior to this change, workflows were embedded in images by passing the whole workflow JSON to a special workflow field on a node. In the node's `invoke()` function, the node could access this workflow and save it with the image. This (inaccurately) models workflows as a property of an image and is technically awkward. A workflow is now a property of a batch/session queue item. It is available in the `InvocationContext` and therefore available to all nodes during `invoke()`.

**Database Migrations**

Added a `SQLiteMigrator` class to handle database migrations. Migrations were needed to accommodate the DB-related changes in this PR; see the code for details. The `images`, `workflows` and `session_queue` tables required migrations for this PR and use the new migrator. Other tables/services still create tables themselves; a follow-up PR will adapt them to use the migrator.

**Other/Support Changes**

- Add a `has_workflow` column to the `images` table to indicate that the image has an embedded workflow.
- Add handling for retrieving the workflow from an image in Python. The image file is fetched, the workflow extracted, and then sent to the client, so the browser never needs to parse the image file itself. With the `has_workflow` column, the UI knows whether there is a workflow to fetch, and only fetches when the user requests to load it.
- Add a route to get the workflow from an image.
- Add a CRUD service/routes for the library workflows.
- Remove the `workflow_images` table and services (no longer needed now that embedded workflows are not stored in the DB).
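A migrator like the one described above could follow the pattern in this minimal sketch. The class name `SQLiteMigrator` comes from the PR, but the structure below (ordered migration callables tracked via `PRAGMA user_version`) is an assumption for illustration, not InvokeAI's actual implementation:

```python
import sqlite3
from typing import Callable, Sequence

# Hypothetical sketch of a migration runner in the spirit of the
# SQLiteMigrator described above -- the design here is an assumption,
# not the PR's actual implementation.
Migration = Callable[[sqlite3.Cursor], None]


def run_migrations(conn: sqlite3.Connection, migrations: Sequence[Migration]) -> None:
    """Apply each pending migration in order, recording progress in user_version."""
    current = conn.execute("PRAGMA user_version").fetchone()[0]
    for version, migrate in enumerate(migrations, start=1):
        if version <= current:
            continue  # already applied on a previous run
        migrate(conn.cursor())
        conn.execute(f"PRAGMA user_version = {version}")  # version is an int, safe to interpolate
        conn.commit()
```

With this shape, re-running the runner is a no-op for already-applied migrations, so startup code can call it unconditionally.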
107 lines
3.7 KiB
Python
import sqlite3


def v0(cursor: sqlite3.Cursor) -> None:
    """
    Migration for `session_queue` table v0
    https://github.com/invoke-ai/InvokeAI/pull/4502

    Creates the `session_queue` table, indices and triggers for the session_queue service.
    """
    cursor.execute(
        """--sql
        CREATE TABLE IF NOT EXISTS session_queue (
            item_id INTEGER PRIMARY KEY AUTOINCREMENT, -- used for ordering, cursor pagination
            batch_id TEXT NOT NULL, -- identifier of the batch this queue item belongs to
            queue_id TEXT NOT NULL, -- identifier of the queue this queue item belongs to
            session_id TEXT NOT NULL UNIQUE, -- duplicated data from the session column, for ease of access
            field_values TEXT, -- NULL if no values are associated with this queue item
            session TEXT NOT NULL, -- the session to be executed
            status TEXT NOT NULL DEFAULT 'pending', -- the status of the queue item, one of 'pending', 'in_progress', 'completed', 'failed', 'canceled'
            priority INTEGER NOT NULL DEFAULT 0, -- the priority, higher is more important
            error TEXT, -- any errors associated with this queue item
            created_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')),
            updated_at DATETIME NOT NULL DEFAULT(STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')), -- updated via trigger
            started_at DATETIME, -- updated via trigger
            completed_at DATETIME -- updated via trigger, completed items are cleaned up on application startup
            -- Ideally this is a FK, but graph_executions uses INSERT OR REPLACE, and REPLACE triggers the ON DELETE CASCADE...
            -- FOREIGN KEY (session_id) REFERENCES graph_executions (id) ON DELETE CASCADE
        );
        """
    )

    cursor.execute(
        """--sql
        CREATE UNIQUE INDEX IF NOT EXISTS idx_session_queue_item_id ON session_queue(item_id);
        """
    )

    cursor.execute(
        """--sql
        CREATE UNIQUE INDEX IF NOT EXISTS idx_session_queue_session_id ON session_queue(session_id);
        """
    )

    cursor.execute(
        """--sql
        CREATE INDEX IF NOT EXISTS idx_session_queue_batch_id ON session_queue(batch_id);
        """
    )

    cursor.execute(
        """--sql
        CREATE INDEX IF NOT EXISTS idx_session_queue_created_priority ON session_queue(priority);
        """
    )

    cursor.execute(
        """--sql
        CREATE INDEX IF NOT EXISTS idx_session_queue_created_status ON session_queue(status);
        """
    )

    cursor.execute(
        """--sql
        CREATE TRIGGER IF NOT EXISTS tg_session_queue_completed_at
        AFTER UPDATE OF status ON session_queue
        FOR EACH ROW
        WHEN
            NEW.status = 'completed'
            OR NEW.status = 'failed'
            OR NEW.status = 'canceled'
        BEGIN
            UPDATE session_queue
            SET completed_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
            WHERE item_id = NEW.item_id;
        END;
        """
    )

    cursor.execute(
        """--sql
        CREATE TRIGGER IF NOT EXISTS tg_session_queue_started_at
        AFTER UPDATE OF status ON session_queue
        FOR EACH ROW
        WHEN
            NEW.status = 'in_progress'
        BEGIN
            UPDATE session_queue
            SET started_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
            WHERE item_id = NEW.item_id;
        END;
        """
    )

    cursor.execute(
        """--sql
        CREATE TRIGGER IF NOT EXISTS tg_session_queue_updated_at
        AFTER UPDATE
        ON session_queue FOR EACH ROW
        BEGIN
            UPDATE session_queue
            SET updated_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
            WHERE item_id = old.item_id;
        END;
        """
    )
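As a sanity check of the trigger logic in the migration, the `completed_at` trigger can be exercised against a throwaway in-memory database. The snippet below is a cut-down reproduction, not the migration itself: the table is reduced to just the columns the trigger touches.

```python
import sqlite3

# Cut-down reproduction of the completed_at trigger from the migration
# above, run against an in-memory database. Only the relevant columns
# are kept.
conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE session_queue (
        item_id INTEGER PRIMARY KEY AUTOINCREMENT,
        status TEXT NOT NULL DEFAULT 'pending',
        completed_at DATETIME
    );
    CREATE TRIGGER tg_session_queue_completed_at
    AFTER UPDATE OF status ON session_queue
    FOR EACH ROW
    WHEN
        NEW.status = 'completed'
        OR NEW.status = 'failed'
        OR NEW.status = 'canceled'
    BEGIN
        UPDATE session_queue
        SET completed_at = STRFTIME('%Y-%m-%d %H:%M:%f', 'NOW')
        WHERE item_id = NEW.item_id;
    END;
    """
)
conn.execute("INSERT INTO session_queue DEFAULT VALUES")
conn.execute("UPDATE session_queue SET status = 'completed' WHERE item_id = 1")
completed_at = conn.execute(
    "SELECT completed_at FROM session_queue WHERE item_id = 1"
).fetchone()[0]
assert completed_at is not None  # the trigger stamped a timestamp
```

Moving a row to `'in_progress'` does not fire this trigger, which is why the migration defines a separate `tg_session_queue_started_at` for that transition.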