Mirror of https://github.com/invoke-ai/InvokeAI (synced 2024-08-30 20:32:17 +00:00)

Commit 922716d2ab
The previous super-minimal implementation had a major issue: the saved workflow didn't take batched field values into account. When generating with multiple iterations or dynamic prompts, the same workflow, with the first prompt, seed, etc., was stored in each image. As a result, when the batch produced multiple queue items, only one of the images had the correct workflow; the others were mismatched.

To work around this, we can store the _graph_ in the image metadata (alongside the workflow, if the image was generated via the workflow editor). When loading a workflow from an image, we can choose to load either the workflow or the graph, preferring the workflow.

Internally, we need to update the images router and image-saving services. The changes are minimal. To avoid pydantic errors when deserializing the graph, we leave it as stringified JSON when we extract it from the image and let the frontend's more sophisticated and flexible parsing handle it. The workflow is also changed to be returned as stringified JSON, so the API is consistent.
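A minimal sketch of the loading behavior described above, in TypeScript: given image metadata that may carry a stringified workflow and/or graph, prefer the workflow and fall back to the graph. The field names (`workflow`, `graph`), types, and function are assumptions for illustration, not the actual InvokeAI metadata schema or API.

```typescript
// Hypothetical shape: both values are stored as stringified JSON so the
// backend never has to deserialize them; the frontend's parser handles it.
type ImageMetadata = {
  workflow?: string;
  graph?: string;
};

type LoadResult =
  | { kind: 'workflow'; data: unknown }
  | { kind: 'graph'; data: unknown }
  | { kind: 'none' };

// Prefer the workflow (authored in the workflow editor) over the graph
// (the executed batch item); fall back to the graph when no workflow exists.
function loadWorkflowOrGraph(metadata: ImageMetadata): LoadResult {
  if (metadata.workflow) {
    try {
      return { kind: 'workflow', data: JSON.parse(metadata.workflow) };
    } catch {
      // Malformed JSON: fall through and try the graph instead.
    }
  }
  if (metadata.graph) {
    try {
      return { kind: 'graph', data: JSON.parse(metadata.graph) };
    } catch {
      // Malformed JSON: nothing usable was stored.
    }
  }
  return { kind: 'none' };
}
```

Because the stored values stay as strings until this point, any schema validation or version migration can live entirely in the frontend parser, which is the flexibility the commit message refers to.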
Contents at this commit:

- app
- assets
- backend
- configs
- frontend
- invocation_api
- version
- __init__.py