Mirror of https://github.com/invoke-ai/InvokeAI (synced 2024-08-30 20:32:17 +00:00)

Commit 88ae19a768
This improves the overall responsiveness of the system substantially, but does make each iteration *slightly* slower, distributing the up-front cost across the batch.

Two main changes:

1. Create BatchSessions immediately, but do not create a whole graph execution state until the batch is executed. BatchSessions are created with a `session_id` that does not yet exist in the sessions database. The default state is changed to `"uninitialized"` to better represent this. Results: time to create 5000 batches reduced from over 30s to 2.5s.
2. Use `executemany()` to retrieve lists of created sessions (see the sketch below). Results: time to create 5000 batches reduced from 2.5s to under 0.5s.

Other changes:

- set BatchSession state to `"in_progress"` just before `invoke()` is called
- rename a few methods to accommodate the new behaviour
- remove unused `BatchProcessStorage.get_created_sessions()` method
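For illustration, here is a minimal sketch of how deferred session creation combined with a single bulk `executemany()` call might look. The helper name `create_sessions()`, the `batch_session` table, and its column layout are assumptions made for this example; they do not mirror the actual InvokeAI schema or `BatchProcessStorage` API.

```python
# Sketch only: bulk-create placeholder BatchSession rows in one statement,
# deferring the expensive graph execution state until the batch actually runs.
import sqlite3
import uuid


def create_sessions(conn: sqlite3.Connection, batch_id: str, count: int) -> list[str]:
    """Create `count` placeholder BatchSession rows with a single executemany().

    Each row gets a fresh `session_id` that does not yet exist in the sessions
    table and starts in the default "uninitialized" state. (Hypothetical table
    and column names, for illustration only.)
    """
    session_ids = [uuid.uuid4().hex for _ in range(count)]
    conn.executemany(
        "INSERT INTO batch_session (batch_id, session_id, state) VALUES (?, ?, ?)",
        [(batch_id, sid, "uninitialized") for sid in session_ids],
    )
    conn.commit()
    return session_ids


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE batch_session (batch_id TEXT, session_id TEXT PRIMARY KEY, state TEXT)"
    )
    ids = create_sessions(conn, batch_id="batch-1", count=5000)
    print(len(ids), "sessions created")
```

The point of the single `executemany()` call is to avoid the per-row round trip of issuing thousands of individual `execute()` statements, which is where the bulk of the batch-creation time went.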
Directory listing:

- api
- assets/images
- cli
- invocations
- models
- services
- util
- api_app.py
- cli_app.py