Best way to process camera frames

Hi,
I work at a graphics company, and my job is to come up with sample application code using OpenVX (and OpenGL, etc.) for our driver team.
I am looking for the right set of OpenVX API calls for processing camera frames that arrive in quick succession.


    vx_context context = vxCreateContext();
    vx_graph graph = vxCreateGraph(context);
    const vx_uint32 width = 640, height = 480;
    const vx_df_image format = VX_DF_IMAGE_U8; // hard-coding plane_index to 0 everywhere
    vx_image srcImg = vxCreateImage(context, width, height, format);
    vx_image outImg = vxCreateImage(context, width, height, format);
    vx_convolution filter = vxCreateConvolution(context, 5, 5);     // set the 5x5 coefficients (vxWriteConvolutionCoefficients)
    vx_node node = vxConvolveNode(graph, srcImg, filter, outImg);
//    vxAssignNodeCallback(node, callback);// here I access the output image
    vx_imagepatch_addressing_t addressing = { width, height, 1, width, VX_SCALE_UNITY, VX_SCALE_UNITY, 1, 1 };
    vx_uint8 *srcImgData;
    vx_rectangle_t patch = {0, 0, width, height};
    vx_size dataSize = vxComputeImagePatchSize(srcImg, &patch, 0);
    srcImgData = malloc(dataSize);
 
    for (;;)
    {
        void* frameData = getNextCameraFrame(); // Camera data in U8 format
        // if no next frame, free all temporary memory and break loop!!
 
#if 0 // CASE 1 - commit new data to srcImage directly
        vxAccessImagePatch(srcImg, &patch, 0, &addressing, (void **)&srcImgData, VX_WRITE_ONLY); // probably not needed every time, or is it?
        memcpy(srcImgData, frameData, dataSize);
        vxCommitImagePatch(srcImg, &patch, 0, &addressing, srcImgData);
        vxProcessGraph(graph);
#else // CASE 2 - create new image and change parameter
        memcpy(srcImgData, frameData, dataSize);
        srcImg = vxCreateImageFromHandle(context, format, &addressing, (void **)&srcImgData, VX_IMPORT_TYPE_HOST); // the previous srcImg should be released to avoid leaking a reference
        vxSetParameterByIndex(node, 0, (vx_reference) srcImg);
        vxScheduleGraph(graph);
#endif       
    }
 
    vxReleaseContext(&context);

Now, here are the questions I have:

  1. Is it legal to call vxSetParameter* on built-in nodes like the Convolve node? Is it compliant with the OpenVX specification? I am asking because I can’t find the answer anywhere. The spec doesn’t say anything about it, so my guess is that it’s vendor-dependent. Is that right? Though I can see why it’s not a good idea to support this for the 40 built-in kernels, given that calls like vxConvolveNode don’t make clear which index refers to which parameter. If the answer is no, assume for the sake of case 2 that the node is created from a custom kernel (vxCreateGenericNode, vxAddParameterToKernel, vxFinalizeKernel); a sketch of that is below, after question 2.

  2. Which path should I take, case 1 or case 2? Both have their pros and cons.
    While case 1 saves me graph re-validation, image-initialization overhead, etc., case 2 serves me better in that, by creating a new image, I don’t need to block my camera capture on graph execution. In other words, if I try using vxScheduleGraph in case 1, I run the risk of overwriting the source image’s data with the next frame before the previous frame has been processed. So case 1 forces my app to be (sort of) single-threaded. The sketch below shows the overlap I have in mind.
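For concreteness, here is roughly what I have in mind if the answer to question 1 is "no" and I go the custom-kernel route, combined with the capture/execute overlap from question 2. This is only a sketch written against OpenVX 1.1 naming (vxAddUserKernel, VX_MEMORY_TYPE_HOST); VX_KERNEL_MY_FILTER, my_filter_func, and my_filter_validate are placeholder names, getNextCameraFrame is the same placeholder as above, and context, graph, outImg, format, and addressing come from the code above (the generic node stands in for the vxConvolveNode call):

    // At file scope (assumes #include <VX/vx.h>):
    enum { VX_KERNEL_MY_FILTER = 0x1000 };   // placeholder ID; a real app derives it from VX_KERNEL_BASE()

    static vx_status VX_CALLBACK my_filter_func(vx_node node, const vx_reference params[], vx_uint32 num)
    {
        (void)node; (void)num;
        vx_image in  = (vx_image)params[0];
        vx_image out = (vx_image)params[1];
        (void)in; (void)out;                 // per-frame processing would go here
        return VX_SUCCESS;
    }

    static vx_status VX_CALLBACK my_filter_validate(vx_node node, const vx_reference params[],
                                                    vx_uint32 num, vx_meta_format metas[])
    {
        (void)node; (void)params; (void)num;
        // Describe the output (parameter 1) so graph verification can check it.
        vx_df_image fmt = VX_DF_IMAGE_U8;
        vx_uint32 w = 640, h = 480;          // matches the images created above
        vxSetMetaFormatAttribute(metas[1], VX_IMAGE_FORMAT, &fmt, sizeof(fmt));
        vxSetMetaFormatAttribute(metas[1], VX_IMAGE_WIDTH,  &w,   sizeof(w));
        vxSetMetaFormatAttribute(metas[1], VX_IMAGE_HEIGHT, &h,   sizeof(h));
        return VX_SUCCESS;
    }

    // In the application code, instead of vxConvolveNode():
    vx_kernel myKernel = vxAddUserKernel(context, "com.example.my_filter", VX_KERNEL_MY_FILTER,
                                         my_filter_func, 2, my_filter_validate, NULL, NULL);
    vxAddParameterToKernel(myKernel, 0, VX_INPUT,  VX_TYPE_IMAGE, VX_PARAMETER_STATE_REQUIRED);
    vxAddParameterToKernel(myKernel, 1, VX_OUTPUT, VX_TYPE_IMAGE, VX_PARAMETER_STATE_REQUIRED);
    vxFinalizeKernel(myKernel);

    vx_node myNode = vxCreateGenericNode(graph, myKernel);
    vxSetParameterByIndex(myNode, 1, (vx_reference)outImg);

    void *frameData = getNextCameraFrame();                  // assumes each call returns a distinct buffer
    while (frameData != NULL)
    {
        vx_image frameImg = vxCreateImageFromHandle(context, format, &addressing,
                                                    &frameData, VX_MEMORY_TYPE_HOST);
        vxSetParameterByIndex(myNode, 0, (vx_reference)frameImg);  // may force re-verification
        vxScheduleGraph(graph);               // graph starts on this frame asynchronously
        frameData = getNextCameraFrame();     // meanwhile, capture the next frame
        vxWaitGraph(graph);                   // block until the scheduled execution finishes
        vxReleaseImage(&frameImg);            // safe: the graph is done with this frame
    }

The point of the schedule/capture/wait ordering is that the next frame is captured while the graph is still working on the current one, which is exactly what case 1 can't do without risking an overwrite.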

Any ideas/suggestions are welcome. Feel free to suggest new API calls or a ring-buffer-based approach.
Thanks.

The application can double-buffer the images coming from the camera and keep capturing while the graph is executing. Here are the steps (a minimal sketch follows the list):
[S1]. allocate two buffers for camera capture (e.g., using malloc)
[S2]. create the input image with vxCreateImageFromHandle, using one of the buffers from S1
[S3]. build the graph with the vx_image object created in S2 and call vxVerifyGraph
[S4]. call vxSwapImageHandle with the buffer holding the newest camera data prior to each vxProcessGraph call
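A minimal sketch of S1-S4, assuming OpenVX 1.1 (vxSwapImageHandle and VX_MEMORY_TYPE_HOST are 1.1 APIs), the same 640x480 U8 layout as in your code, and your existing context and graph objects; fillFromCamera() is a placeholder for however the capture side delivers a frame:

    vx_imagepatch_addressing_t addr = { 640, 480, 1, 640, VX_SCALE_UNITY, VX_SCALE_UNITY, 1, 1 };
    vx_size frameSize = addr.stride_y * addr.dim_y;             // bytes per U8 frame
    void *buffers[2] = { malloc(frameSize), malloc(frameSize) };           // S1

    void *initial[1] = { buffers[0] };
    vx_image camImg = vxCreateImageFromHandle(context, VX_DF_IMAGE_U8,
                                              &addr, initial,
                                              VX_MEMORY_TYPE_HOST);         // S2

    // S3: build the graph with camImg as its input node parameter, then:
    vxVerifyGraph(graph);

    vx_uint32 idx = 0;
    for (;;)
    {
        fillFromCamera(buffers[idx]);                // capture into the idle buffer; in a real
                                                     // app this runs while the graph executes
        void *new_ptrs[1] = { buffers[idx] };
        void *old_ptrs[1] = { NULL };                // previous handle returned here (unused)
        vxSwapImageHandle(camImg, new_ptrs, old_ptrs, 1);                   // S4
        vxProcessGraph(graph);
        idx ^= 1;                                    // alternate between the two buffers
    }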

You could also double-buffer the output.

Another approach is to use vx_delay object with two delay elements for double buffering.
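Roughly, and assuming (as the vxAgeDelay documentation describes) that node parameters referencing delay slots are updated automatically when the delay is aged, that variant looks like this; context and graph are your existing objects, and the per-frame copy would be done as in case 1 of your code:

    vx_image exemplar = vxCreateImage(context, 640, 480, VX_DF_IMAGE_U8);
    vx_delay delay = vxCreateDelay(context, (vx_reference)exemplar, 2);   // two slots: 0 and -1
    vxReleaseImage(&exemplar);                       // the delay keeps its own internal images

    vx_image graphInput = (vx_image)vxGetReferenceFromDelay(delay, 0);
    // build the graph with graphInput as the input parameter, call vxVerifyGraph, then:
    for (;;)
    {
        vx_image fillSlot = (vx_image)vxGetReferenceFromDelay(delay, -1);
        // copy the newest camera frame into fillSlot (vxAccessImagePatch / vxCommitImagePatch
        // as in case 1), then rotate the delay and run the graph:
        vxAgeDelay(delay);        // slot -1 rotates into slot 0, the graph's input
        vxProcessGraph(graph);
    }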

Hope that helps.