
Thread: write_imagef and write_imageui


  1. #1
    Junior Member
    Join Date
    Apr 2013

    write_imagef and write_imageui

    Hi there,

    I was making a texture interoperability example, but I ran into some problems when
    reading the texture data inside a kernel. First, I'll explain what I'm doing.

    I just created a kernel that reads the data from an image created from a texture buffer
    and copies it to another image. Both textures, let's call them texGLIn and texGLOut,
    have this definition:

    Code :
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 2, 2, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    Then I created the image objects as follows:

    Code :
    texCLIn = clCreateFromGLTexture(context, CL_MEM_READ_ONLY, GL_TEXTURE_2D, 0, texGLIn, &err);
    texCLOut = clCreateFromGLTexture(context, CL_MEM_WRITE_ONLY, GL_TEXTURE_2D, 0, texGLOut, &err);

    The last thing was to write the data to texCLIn and run the kernel:

    Code :
    unsigned char tex_color_data[] = {
            255, 255, 255, 255,
            255,   0,   0, 255,
              0, 255,   0, 255,
              0,   0, 255, 255
    };
    clEnqueueWriteImage(queue, texCLIn, CL_TRUE, origin, region, 0, 0, tex_color_data, 0, NULL, NULL);

    As shown above, I used unsigned bytes for the texture, so in the kernel I used the
    functions read_imageui and write_imageui:

    Code :
    uint4 pixel = read_imageui(texIn, sampler, coords);
    write_imageui(texOut, coords, pixel);
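
    For reference, a complete kernel around those two calls might look like the sketch
    below (the kernel name and sampler flags are assumptions; note that integer reads
    must use CLK_FILTER_NEAREST):

    ```c
    // Sampler: unnormalized integer coordinates, no filtering (required for read_imageui).
    __constant sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                               CLK_ADDRESS_CLAMP_TO_EDGE |
                               CLK_FILTER_NEAREST;

    __kernel void copy_image(read_only image2d_t texIn,
                             write_only image2d_t texOut)
    {
        // One work-item per pixel.
        int2 coords = (int2)(get_global_id(0), get_global_id(1));
        uint4 pixel = read_imageui(texIn, smp, coords);
        write_imageui(texOut, coords, pixel);
    }
    ```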

    This kernel didn't work, but when I used read_imagef and write_imagef everything worked.
    I don't know why this happened =/ I tested this code on a GeForce 210 and a 650M with
    OpenCL 1.1; both only worked when I used the float functions. However, when I tested it
    on a CPU with OpenCL 1.2 support, the unsigned int functions worked. Does anybody know
    why this happens?

    I looked at the OpenCL 1.1 specification, but didn't find anything about only using the
    float functions with textures. Could this be a device problem? Does an AMD GPU with
    OpenCL 1.1 show the same behavior as the NVIDIA ones?

  2. #2
    Senior Member
    Join Date
    Oct 2012
    This is described in the OpenCL v1.1 specification, in the section on sharing OpenCL and OpenGL objects:

    The internal format of the OpenGL texture must be GL_RGBA8UI in order to be mapped to an OpenCL channel CL_UNSIGNED_INT8.

  3. #3
    Junior Member
    Join Date
    Apr 2013
    Thanks for your reply, I really didn't see that =P But making only this change in the
    glTexImage2D call, from GL_RGBA to GL_RGBA8UI, didn't work on my GPU, nor on my CPU.
    Actually, I got an error when creating the CL image with clCreateFromGLTexture2D(): it
    threw an "invalid GL object" error. Do you know why?

  4. #4
    Senior Member
    Join Date
    Oct 2012
    Internal format must be GL_RGBA8UI and format must be GL_RGBA_INTEGER.
    Check with glGetError() that glTexImage2D() has succeeded.

    Also don't use NULL for data of the output texture. Neither OpenGL nor OpenCL will allocate memory for you, so you'll get random or invalid data after your kernel has executed if your textures have an unspecified content.
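
    Putting both fixes together, the texture creation might be sketched like this
    (variable names are hypothetical; zero-initialized data is supplied so the output
    texture has defined contents):

    ```c
    /* Internal format GL_RGBA8UI must be paired with format GL_RGBA_INTEGER. */
    GLubyte zeros[2 * 2 * 4] = {0};  /* defined initial contents instead of NULL */

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, 2, 2, 0,
                 GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, zeros);

    GLenum glErr = glGetError();
    if (glErr != GL_NO_ERROR) {
        /* handle the error before calling clCreateFromGLTexture() */
    }
    ```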
    Last edited by utnapishtim; 09-02-2013 at 04:11 AM.

  5. #5
    Junior Member
    Join Date
    Apr 2013
    Thanks, those changes made write_imageui work as I was expecting.
    I read back the OpenCL image object and it returned the correct values. But
    there is still a problem: the texture data is not being used by the shader.

    I made a test using only OpenGL, calling glTexImage2D() with the parameters you
    suggested, but the window is all black. Analyzing the fragment shader, it seems
    that texture2D() is returning zero in all components. I searched for this, but
    didn't find anything about it with GL_RGBA8UI and GL_RGBA_INTEGER.

    Any idea?

  6. #6
    Senior Member
    Join Date
    Oct 2012
    I have tested your case on a GeForce 650M and it works fine. Have you checked every return value for possible errors?

    Create your texture with all pixels set to zero. Then make a test with a simple kernel that just writes 1s into all channels of every pixel. Call glGetTexImage() after that to get the result and check that it contains 1s.

    More precisely:

    - Create your texture with glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, 2, 2, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, startpixels) where startpixels contains 16 bytes set to zero
    - Create the OpenCL wrapper image with clCreateFromGLTexture2D(oclContext, CL_MEM_WRITE_ONLY, GL_TEXTURE_2D, 0, texid, &result);
    - Build your kernel, set its arguments.
    - Call glFinish() to synchronize OpenGL with OpenCL.
    - Call clEnqueueAcquireGLObjects() to lock the texture.
    - Run your kernel.
    - Call clEnqueueReleaseGLObjects() to unlock the texture.
    - Call clFinish() to synchronize OpenCL with OpenGL.
    - Call glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, pixels) to get the result.
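
    The sequence above can be sketched in host code as follows (a sketch only: it assumes
    oclContext, queue, kernel, texid and the wrapper image texCL already exist, and error
    checking is abbreviated):

    ```c
    cl_int err;
    unsigned char pixels[2 * 2 * 4];  /* 2x2 RGBA8UI readback buffer */

    glFinish();  /* make sure GL is done before CL touches the texture */

    err = clEnqueueAcquireGLObjects(queue, 1, &texCL, 0, NULL, NULL);

    size_t global[2] = {2, 2};  /* one work-item per pixel */
    err = clEnqueueNDRangeKernel(queue, kernel, 2, NULL, global, NULL, 0, NULL, NULL);

    err = clEnqueueReleaseGLObjects(queue, 1, &texCL, 0, NULL, NULL);
    clFinish(queue);  /* make sure CL is done before GL reads back */

    glBindTexture(GL_TEXTURE_2D, texid);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, pixels);
    ```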

    You can use a dumb test kernel such as this one:

    __kernel void test(write_only image2d_t outimg)
    {
        int column = get_global_id(0);
        int line = get_global_id(1);
        uint val = (uint)(7 * line + column + 1);

        write_imageui(outimg, (int2)(column, line), (uint4)val);
    }
    Last edited by utnapishtim; 09-03-2013 at 02:27 AM.
