write_imagef and write_imageui

Hi there,

I was making a texture interoperability example, but I ran into some problems when
reading the texture data inside a kernel. First, I'll explain what I'm doing.

I created a kernel that reads the data from an image created from a texture and
copies it to another image. Both textures, let's call them texGLIn and texGLOut,
have this definition:


glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 2, 2, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

Then I created the image objects as follows:


texCLIn = clCreateFromGLTexture(context, CL_MEM_READ_ONLY, GL_TEXTURE_2D, 0, texGLIn, &err);
texCLOut = clCreateFromGLTexture(context, CL_MEM_WRITE_ONLY, GL_TEXTURE_2D, 0, texGLOut, &err);

The last thing was to write the data to texCLIn and run the kernel:


size_t origin[3] = {0, 0, 0};   /* write the full 2x2 image */
size_t region[3] = {2, 2, 1};
unsigned char tex_color_data[] = {
        255, 255, 255, 255,
        255,   0,   0, 255,
          0, 255,   0, 255,
          0,   0, 255, 255
};
clEnqueueWriteImage(queue, texCLIn, CL_TRUE, origin, region, 0, 0, tex_color_data, 0, NULL, NULL);

As shown above, I used unsigned bytes for the texture data, so in the kernel I used the
functions read_imageui and write_imageui:


uint4 pixel = read_imageui(texIn, sampler, coords);
write_imageui(texOut, coords, pixel);
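
For completeness, the whole kernel looks roughly like this (kernel and argument names are simplified here; the sampler has to use unnormalized coordinates and nearest filtering, since read_imageui with integer coordinates requires that):

__constant sampler_t sampler = CLK_NORMALIZED_COORDS_FALSE |
                               CLK_ADDRESS_CLAMP_TO_EDGE |
                               CLK_FILTER_NEAREST;

__kernel void copy_image(read_only image2d_t texIn, write_only image2d_t texOut)
{
    /* one work-item per pixel */
    int2 coords = (int2)(get_global_id(0), get_global_id(1));

    uint4 pixel = read_imageui(texIn, sampler, coords);
    write_imageui(texOut, coords, pixel);
}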

This kernel didn't work, but when I used read_imagef and write_imagef everything worked. I don't
know why this happened =/ I tested this code on a GeForce 210 and a 650M with OpenCL 1.1, and both
only worked when I used the float functions. However, when I tested it on a CPU with OpenCL 1.2
support, the unsigned int functions worked. Does anybody know why?

I looked at the OpenCL 1.1 specification but didn't find anything about having to use the float
functions with textures. Could this be a device problem? Does an AMD GPU with OpenCL 1.1 show the
same behavior as the NVIDIA ones?

This is described in the OpenCL specification v1.1, section 9.8.3.1.

The internal format of the OpenGL texture must be GL_RGBA8UI in order for it to be mapped to the OpenCL channel data type CL_UNSIGNED_INT8.

Thanks for your reply, I really didn't see that =P, but only changing GL_RGBA to GL_RGBA8UI in
glTexImage2D() didn't work on my GPU, nor on my CPU. Actually, I got an error when creating the
CL image with clCreateFromGLTexture2D(); it threw an "invalid GL object" error. Do you know why?

Internal format must be GL_RGBA8UI and format must be GL_RGBA_INTEGER.
Check with glGetError() that glTexImage2D() has succeeded.

Also, don't pass NULL as the data pointer for the output texture. Its contents will be left unspecified, so any pixel your kernel doesn't write will contain random or invalid data after the kernel has executed.
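
Concretely, the texture setup could look something like this (a minimal sketch; the filter settings are an addition here, since integer textures can only be sampled with nearest filtering):

unsigned char zeros[2 * 2 * 4] = {0};   /* initial contents instead of NULL */

glBindTexture(GL_TEXTURE_2D, texGLOut);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, 2, 2, 0,
             GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, zeros);

if (glGetError() != GL_NO_ERROR) {
        /* handle the error */
}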

Thanks, these changes made write_imageui work as I expected.
I read back from the OpenCL image object and it returned the correct values. But
there is still a problem: the texture data is not being used by the shader.

I made a test using only OpenGL, calling glTexImage2D() with the parameters you
suggested, but the window is all black. Looking at the fragment shader, it seems
that whatever texture2D() reads is zero in all components. I searched around but
didn't find anything about this happening with GL_RGBA8UI and GL_RGBA_INTEGER.

Any idea?

I have tested your case on a GeForce 650M and it works fine. Have you checked every return value for possible errors?

Create your texture with all pixels set to zero. Then make a test with a simple kernel that just writes 1s into all channels of every pixel. Call glGetTexImage() after that to get the result and check that it contains 1s.

More precisely:

  • Create your texture with glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, 2, 2, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, startpixels) where startpixels contains 16 bytes set to zero
  • Create the OpenCL wrapper image with clCreateFromGLTexture2D(oclContext, CL_MEM_WRITE_ONLY, GL_TEXTURE_2D, 0, texid, &result);
  • Build your kernel, set its arguments.
  • Call glFinish() to synchronize OpenGL with OpenCL.
  • Call clEnqueueAcquireGLObjects() to lock the texture.
  • Run your kernel.
  • Call clEnqueueReleaseGLObjects() to unlock the texture.
  • Call clFinish() to synchronize OpenCL with OpenGL.
  • Call glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, pixels) to get the result.

You can use a dumb test kernel such as this one:

__kernel void test(write_only image2d_t outimg)
{
    int column = get_global_id(0);
    int line = get_global_id(1);
    uint val = (uint)(7 * line + column + 1);

    write_imageui(outimg, (int2)(column, line), (uint4)val);
}
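
And the host side, following the list above, would be roughly this (a sketch with error checking omitted; the kernel and program variables are assumed to exist):

glFinish();                                            /* let OpenGL finish with the texture */
clEnqueueAcquireGLObjects(queue, 1, &texCLOut, 0, NULL, NULL);

size_t gws[2] = {2, 2};                                /* one work-item per pixel */
clSetKernelArg(kernel, 0, sizeof(cl_mem), &texCLOut);
clEnqueueNDRangeKernel(queue, kernel, 2, NULL, gws, NULL, 0, NULL, NULL);

clEnqueueReleaseGLObjects(queue, 1, &texCLOut, 0, NULL, NULL);
clFinish(queue);                                       /* make the result visible to OpenGL */

unsigned char pixels[2 * 2 * 4];
glBindTexture(GL_TEXTURE_2D, texGLOut);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, pixels);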

Yeah! Everything is working, but the texture is not being displayed =/
When I call glGetTexImage(), it returns the correct data, but when I
try to render with the texture, everything is black. It seems that
texture2D() in the fragment shader always returns vec4(0,0,0,1) =/

It depends on the version of GLSL you use, but starting from GLSL 1.30 you should probably use texture() or texelFetch() with a usampler2D.
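
For example, a GLSL 1.30 fragment shader for an RGBA8UI texture could look roughly like this (the variable names are only illustrative):

#version 130

uniform usampler2D tex;      // integer sampler for the GL_RGBA8UI texture
in vec2 uv;
out vec4 fragColor;

void main()
{
    uvec4 texel = texture(tex, uv);   // or texelFetch(tex, ivec2(coords), 0)
    fragColor = vec4(texel) / 255.0;  // map 0..255 back to 0..1 for display
}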

Thanks for all your help, it's working =D. When I changed texture2D() to texture(),
everything worked. The sad thing is that it only worked on my Linux machine; when I
tested it on Mac OS X, it didn't work.

I looked into that and saw that the OpenGL version on my Mac OS X is 2.1 with
GLSL 1.20, so it doesn't support texture() =/. Anyway, thank you very much for your help =D

With GLSL 1.20, you have to use the EXT_gpu_shader4 extension (if it's available, of course).

Include "#extension GL_EXT_gpu_shader4 : require" in your shader. The type usampler2D then becomes available, but it must be used with texture2D() (instead of texture()).
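
Something along these lines (a sketch; again the variable names are illustrative):

#version 120
#extension GL_EXT_gpu_shader4 : require

uniform usampler2D tex;      // available thanks to the extension

void main()
{
    uvec4 texel = texture2D(tex, gl_TexCoord[0].st);  // extension overload returning uvec4
    gl_FragColor = vec4(texel) / 255.0;
}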