I have a system with an ISP, a GPU, and a DSP, and a graph with nodes A->B->C. My requirement is runtime scheduling of this graph for every frame, across the different targets, based on load. For example:
Frame0 : A-> ISP , B-> GPU, C->DSP
Frame1 : A-> GPU, B-> ISP, C->DSP
Frame2 : A-> DSP , B-> ISP, C->GPU
…
… and so on
The OpenVX Pipelining extension specifies how the user can schedule a graph for each new frame before previously scheduled frames have completed. How the implementation maps the nodes in the graph to compute targets is implementation dependent.

Your example assumes that at least node A is implemented on the ISP, the GPU, and the DSP. If that is true in your implementation, the user can say which node should execute on which core by using the vxSetNodeTarget API, but that pins the node to the specified target for every execution of the graph. If the user does not specify a node target, the implementation is free to schedule the node on whatever target it wants.

Some implementations may have a default static assignment (assuming the user can set a static schedule if they want something different), other implementations may implement the kind of load-balancing scheme you describe, and still others may choose something in between. The API doesn't restrict this and leaves the decision to the implementation. To know whether your implementation supports this kind of automatic load balancing/scheduling, you would need to check with the vendor of your OpenVX implementation.
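As a sketch of the static-assignment path described above: vxSetNodeTarget() is called once per node, before vxVerifyGraph(). VX_TARGET_STRING comes from the OpenVX spec, but the target name strings ("isp", "gpu", "dsp") and the node names here are hypothetical; real target names are vendor-specific, so check your implementation's documentation.

```c
#include <VX/vx.h>

/* Sketch: pin each node of an A->B->C graph to a fixed target.
 * The target strings below are placeholders -- the set of valid
 * names is defined by the vendor's OpenVX implementation. */
vx_status pin_targets(vx_graph graph,
                      vx_node nodeA, vx_node nodeB, vx_node nodeC)
{
    vx_status status;

    /* Target assignment must happen before vxVerifyGraph();
     * it is then fixed for every subsequent graph execution. */
    status = vxSetNodeTarget(nodeA, VX_TARGET_STRING, "isp");
    if (status != VX_SUCCESS) return status;

    status = vxSetNodeTarget(nodeB, VX_TARGET_STRING, "gpu");
    if (status != VX_SUCCESS) return status;

    status = vxSetNodeTarget(nodeC, VX_TARGET_STRING, "dsp");
    if (status != VX_SUCCESS) return status;

    /* Verification locks in the schedule for the graph. */
    return vxVerifyGraph(graph);
}
```

A node whose target is left unset (or set to VX_TARGET_ANY) stays free for the implementation to place as it sees fit, which is the only portable route to the per-frame load balancing asked about above.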
As Jesse mentioned, an implementation can take care of using its compute resources efficiently when the pipelining extension is used. It may or may not use the same target for every frame.
That said, I have a question about the example you used. Are you thinking of explicitly specifying targets for every frame? The only available API for specifying targets is vxSetNodeTarget(), and it has to be called before vxVerifyGraph(); it cannot be called between two runs of a graph.