Grabbing image from OpenGL via OpenCL

Hi!

I'm a beginner in the OpenCL world and I need some help copying data from the GPU to the CPU. The problem is that this data is a 2D texture created by OpenGL (off-screen rendering with [i]FBO[/i] + [i]GLSL[/i]). The context initialization seems good and returns no error; registering the texture with CL via [i]clCreateFromGLTexture2D[/i] and acquiring it ([i]clEnqueueAcquireGLObjects[/i]) return nothing but [i]CL_SUCCESS[/i] too.

Then comes the moment when I want to copy the texture back from GPU memory to RAM. I tried [i]clEnqueueReadBuffer[/i] and [i]clEnqueueReadImage[/i], and I even tried to write something to it with [i]clEnqueueWriteBuffer[/i] and [i]clEnqueueWriteImage[/i], but each time it fails with the code [i]CL_INVALID_VALUE[/i], which doesn't mean the same thing for all these functions. I verified some points, and the application shows that the texture is known ([i]clGetGLTextureInfo[/i] and [i]clGetGLObjectInfo[/i] return the right texture ID and the right type: [i]GL_TEXTURE_2D[/i]).

I think I'm wrong about the kind of memory the OpenCL handle to the texture refers to…

Does someone have a sample code or any idea about this?

Thank you!

Could you show us the code that is returning CL_INVALID_VALUE? Say for instance when you use clEnqueueReadImage.

Ok, so let’s go for some code :

My initialization first (just in case…)


        // Get the number of platforms
        cl_uint numPlatforms;
        errcode = clGetPlatformIDs(0, NULL, &numPlatforms);
        clErrorCodes(errcode);

        // The code will run on the same machine that does the compilation, I'm sure there will always be a single GPU

        // Get platform info
        platforms = new cl_platform_id[numPlatforms];
        errcode = clGetPlatformIDs(numPlatforms, platforms, NULL);
        clErrorCodes(errcode);

        if( NULL == (*platforms))
        {
            std::cout << "Failure!" << std::endl;
        }

        // Create a context to run OpenCL on our CUDA-enabled NVIDIA GPU
        cl_context_properties props[] =
        {
            CL_GL_CONTEXT_KHR, (cl_context_properties)wglGetCurrentContext(),
            CL_WGL_HDC_KHR, (cl_context_properties)wglGetCurrentDC(),
            CL_CONTEXT_PLATFORM, (cl_context_properties)(*platforms),
            0
        };

        theContext = clCreateContextFromType(props,targetDevice,NULL, NULL, &errcode);

        clErrorCodes(errcode);

        // Get the list of GPU devices associated with this context
        size_t ParmDataBytes;
        clGetContextInfo(theContext, CL_CONTEXT_DEVICES, 0, NULL, &ParmDataBytes);
        theDevice = new cl_device_id[ParmDataBytes / sizeof(cl_device_id)];
        clGetContextInfo(theContext, CL_CONTEXT_DEVICES, ParmDataBytes, theDevice, NULL);

        // Create a command-queue on the first GPU device
        theCmdQueue = clCreateCommandQueue(theContext, theDevice[0], 0, &errcode);
        clErrorCodes(errcode);

Then the code I’m using to read a valid OpenGL texture :


// prepare it first (where glObject->getID() returns the correct GLuint ID)
         buffer = clCreateFromGLTexture2D(HdlOCL::context(), CL_MEM_READ_WRITE, GL_TEXTURE_2D, 0, glObject->getID(), HdlOCL::errorFB());
         // returns CL_SUCCESS

//grab the texture
         glFlush();
         HdlOCL::errorFB( clEnqueueAcquireGLObjects( HdlOCL::queue(), 1, &buffer, 0, 0, 0) );
         // returns CL_SUCCESS

//Try to read it or write it
         HdlOCL::errorFB( clEnqueueReadBuffer( HdlOCL::queue(), buffer, CL_FALSE, 0, 1920*1080*3, ptr, 0, NULL, NULL) );
         //or HdlOCL::errorFB(clEnqueueWriteBuffer( HdlOCL::queue(), buffer, CL_FALSE, 0, 1920*1080*3, ptr, 0, NULL, NULL) );
         /* or 
         size_t origin[] = {0, 0, 0};
         size_t region[] = {1920, 1080, 1};
         HdlOCL::errorFB( clEnqueueReadImage(HdlOCL::queue(), buffer, CL_TRUE, origin, region, 0, 0, ptr, 0, 0, 0) );
         */
         /* or
         size_t origin[] = {0, 0, 0};
         size_t region[] = {1920, 1080, 1};
         HdlOCL::errorFB( clEnqueueWriteImage(HdlOCL::queue(), buffer, CL_TRUE, origin, region, 0, 0, ptr, 0, 0, 0) );
          */
          // each returns CL_INVALID_VALUE

// Release it
         HdlOCL::errorFB( clEnqueueReleaseGLObjects( HdlOCL::queue(), 1, &buffer, 0, 0, 0) );
         clFinish(HdlOCL::queue());
         //returns CL_SUCCESS

I also verified the information with the CL queries clGetGLTextureInfo and clGetGLObjectInfo, and I get the right values…

Thanks for the help!

I tried with an FBO (Texture -> FBO -> Texture) and the result is a bit different.

When trying to read from the FBO, clEnqueueReadBuffer returns CL_SUCCESS but writes only zeros into the host buffer (which was previously filled with a non-zero value!). And clEnqueueReadImage still fails with the same error code: CL_INVALID_VALUE.

:?

How did you create the GL texture? Is it possible that the dimensions of the texture are less than 1920x1080 or that the texel format is not RGB888? Try calling clEnqueueReadImage() passing a region of {1,1,1}.

By the way, clEnqueue{Read,Write}Buffer() require a buffer object and you are passing an image. Stick to clEnqueue{Read,Write}Image() instead.

Thanks for the answer. I tried clEnqueueReadImage() with {1, 1, 1}, and I also verified the texture information… But everything seems correct, and I still get the same error, CL_INVALID_VALUE.

I found someone with the same problem on the NVIDIA forums… it could be a driver problem… :? but I'm still not sure.

I guess it’s a driver bug. You should post a bug report.