mirror of
https://github.com/invoke-ai/InvokeAI
synced 2024-08-30 20:32:17 +00:00
feat(nodes): depth-first execution
There was an issue where, for graphs w/ iterations, your images were output all at once, at the very end of processing. So if you canceled halfway through an execution of 10 nodes, you wouldn't get any images - even though you'd completed 5 images' worth of inference.

## Cause

Because graphs executed breadth-first (i.e. depth-by-depth), leaf nodes were necessarily processed last. For image generation graphs, your `LatentsToImage` nodes will be the leaf nodes, and so they are the last depth to be executed. For example, a `TextToLatents` graph w/ 3 iterations would execute all 3 `TextToLatents` nodes fully before moving to the next depth, where the `LatentsToImage` nodes produce the output images, resulting in a node execution order like this:

1. TextToLatents
2. TextToLatents
3. TextToLatents
4. LatentsToImage
5. LatentsToImage
6. LatentsToImage

## Solution

This PR makes two changes to graph execution so that it executes as deeply as it can along each branch of the graph.

### Eager node preparation

We now prepare as many nodes as possible, instead of just a single node at a time. We also need to change the conditions under which nodes are prepared. Previously, nodes were prepared only when all of their direct ancestors were executed. The updated logic prepares nodes that:

- are *not* `Iterate` nodes whose inputs have *not* been executed
- do *not* have any unexecuted `Iterate` ancestor nodes

This results in graphs always being maximally prepared.

### Always execute the deepest prepared node

We now choose the next node to execute by traversing from the bottom of the graph instead of the top, choosing the first node whose inputs are all executed. This means we always execute the deepest node possible.

## Result

Graphs now execute depth-first, so instead of an execution order like this:

1. TextToLatents
2. TextToLatents
3. TextToLatents
4. LatentsToImage
5. LatentsToImage
6. LatentsToImage

... we get an execution order like this:

1. TextToLatents
2. LatentsToImage
3. TextToLatents
4. LatentsToImage
5. TextToLatents
6. LatentsToImage

Immediately after inference, the image is decoded and sent to the gallery.

fixes #3400
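The scheduling rule described above — scan a reversed topological order and run the first unexecuted node whose inputs are all executed — can be simulated on a toy iteration graph. This is a dependency-free sketch, not InvokeAI's actual API; the node names (`Prompt`, `TextToLatents-i`, `LatentsToImage-i`) and the `deps` structure are illustrative assumptions.

```python
# Sketch of depth-first node selection (not InvokeAI's actual API).
# A 3-iteration graph: one shared Prompt node feeding three
# TextToLatents -> LatentsToImage chains.
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

deps = {"Prompt": set()}  # node -> set of its direct inputs
for i in range(3):
    deps[f"TextToLatents-{i}"] = {"Prompt"}
    deps[f"LatentsToImage-{i}"] = {f"TextToLatents-{i}"}

topo = list(TopologicalSorter(deps).static_order())

def next_deepest(executed):
    """First unexecuted node, scanning bottom-up, whose inputs are all executed."""
    for n in reversed(topo):
        if n not in executed and deps[n] <= executed:
            return n
    return None

executed, order = set(), []
while (n := next_deepest(executed)) is not None:
    executed.add(n)
    order.append(n)

print(order)
```

Whichever iteration branch the scan happens to descend first, each `LatentsToImage-i` lands immediately after its `TextToLatents-i`, which is exactly the interleaving that lets images reach the gallery as soon as their inference finishes.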
This commit is contained in:
parent 3f45294c61
commit 69539a0472
```diff
@@ -859,11 +859,9 @@ class GraphExecutionState(BaseModel):
         if next_node is None:
             prepared_id = self._prepare()
 
-            # TODO: prepare multiple nodes at once?
-            # while prepared_id is not None and not isinstance(self.graph.nodes[prepared_id], IterateInvocation):
-            #     prepared_id = self._prepare()
-
-            if prepared_id is not None:
-                next_node = self._get_next_node()
+            # Prepare as many nodes as we can
+            while prepared_id is not None:
+                prepared_id = self._prepare()
+            next_node = self._get_next_node()
 
         # Get values from edges
@@ -1010,14 +1008,30 @@ class GraphExecutionState(BaseModel):
         # Get flattened source graph
         g = self.graph.nx_graph_flat()
 
-        # Find next unprepared node where all source nodes are executed
+        # Find next node that:
+        # - was not already prepared
+        # - is not an iterate node whose inputs have not been executed
+        # - does not have an unexecuted iterate ancestor
         sorted_nodes = nx.topological_sort(g)
         next_node_id = next(
             (
                 n
                 for n in sorted_nodes
+                # exclude nodes that have already been prepared
                 if n not in self.source_prepared_mapping
-                and all((e[0] in self.executed for e in g.in_edges(n)))
+                # exclude iterate nodes whose inputs have not been executed
+                and not (
+                    isinstance(self.graph.get_node(n), IterateInvocation)  # `n` is an iterate node...
+                    and not all((e[0] in self.executed for e in g.in_edges(n)))  # ...that has unexecuted inputs
+                )
+                # exclude nodes who have unexecuted iterate ancestors
+                and not any(
+                    (
+                        isinstance(self.graph.get_node(a), IterateInvocation)  # `a` is an iterate ancestor of `n`...
+                        and a not in self.executed  # ...that is not executed
+                        for a in nx.ancestors(g, n)  # for all ancestors `a` of node `n`
+                    )
+                )
             ),
             None,
         )
@@ -1114,9 +1128,22 @@ class GraphExecutionState(BaseModel):
         )
 
     def _get_next_node(self) -> Optional[BaseInvocation]:
+        """Gets the deepest node that is ready to be executed"""
         g = self.execution_graph.nx_graph()
-        sorted_nodes = nx.topological_sort(g)
-        next_node = next((n for n in sorted_nodes if n not in self.executed), None)
+
+        # we need to traverse the graph from the bottom up
+        reversed_sorted_nodes = reversed(list(nx.topological_sort(g)))
+
+        next_node = next(
+            (
+                n
+                for n in reversed_sorted_nodes
+                if n not in self.executed  # the node must not already be executed...
+                and all((e[0] in self.executed for e in g.in_edges(n)))  # ...and all its inputs must be executed
+            ),
+            None,
+        )
 
         if next_node is None:
             return None
```
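The eager-preparation conditions in the `_prepare` hunk can also be sketched in isolation. This is a simplified, dependency-free illustration under assumed names (`deps`, `iterate_nodes`, `can_prepare` are hypothetical, not InvokeAI code): a node is eligible for preparation when it has not been prepared yet, it is not an `Iterate` node with unexecuted inputs, and no unexecuted `Iterate` node sits among its ancestors.

```python
# Illustrative sketch of the _prepare eligibility rules (assumed names, not
# InvokeAI's API). A linear chain with one Iterate node in the middle.
deps = {  # node -> set of its direct inputs
    "RangeOfSize": set(),
    "Iterate": {"RangeOfSize"},
    "TextToLatents": {"Iterate"},
    "LatentsToImage": {"TextToLatents"},
}
iterate_nodes = {"Iterate"}

def ancestors(n, seen=None):
    """All transitive predecessors of n."""
    seen = set() if seen is None else seen
    for p in deps[n]:
        if p not in seen:
            seen.add(p)
            ancestors(p, seen)
    return seen

def can_prepare(n, prepared, executed):
    if n in prepared:
        return False  # already prepared
    if n in iterate_nodes and not deps[n] <= executed:
        return False  # an iterate node whose inputs have not been executed
    if any(a in iterate_nodes and a not in executed for a in ancestors(n)):
        return False  # has an unexecuted iterate ancestor
    return True

# Before anything runs, only nodes upstream of the Iterate are preparable:
assert can_prepare("RangeOfSize", set(), set())
assert not can_prepare("Iterate", set(), set())        # inputs not executed yet
assert not can_prepare("TextToLatents", set(), set())  # unexecuted Iterate ancestor
# Once the Iterate has executed, the downstream chain becomes preparable:
assert can_prepare("TextToLatents", set(), {"RangeOfSize", "Iterate"})
```

Note how everything below an `Iterate` node stays unprepared until the iterate itself executes — that is what keeps iteration fan-out correct while still letting the rest of the graph be maximally prepared.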