Hi there,

I was making a texture interoperability example, but I ran into some problems when
reading the texture data inside a kernel. First, I'll explain what I'm doing.

I just created a kernel that reads the data from an image created from a texture and
copies it to another image. Both textures, let's call them texGLIn and texGLOut,
have this definition:

Code :

Then I created the image objects as follows:

Code :
texCLIn = clCreateFromGLTexture(context, CL_MEM_READ_ONLY, GL_TEXTURE_2D, 0, texGLIn, &err);
texCLOut = clCreateFromGLTexture(context, CL_MEM_WRITE_ONLY, GL_TEXTURE_2D, 0, texGLOut, &err);
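One detail worth mentioning: per the cl_khr_gl_sharing extension, a GL object has to be acquired before any OpenCL command uses it, and released afterwards. A minimal sketch of that pattern (assuming `queue` is the command queue already created on the shared context):

```c
/* Make sure GL is done with the textures before CL touches them. */
glFinish();
clEnqueueAcquireGLObjects(queue, 1, &texCLIn, 0, NULL, NULL);
clEnqueueAcquireGLObjects(queue, 1, &texCLOut, 0, NULL, NULL);

/* ... enqueue the write and the kernel here ... */

clEnqueueReleaseGLObjects(queue, 1, &texCLIn, 0, NULL, NULL);
clEnqueueReleaseGLObjects(queue, 1, &texCLOut, 0, NULL, NULL);
clFinish(queue);
```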

The last thing was to write the data to texCLIn and run the kernel:

Code :
unsigned char tex_color_data[] = {
        255, 255, 255, 255,   /* white */
        255,   0,   0, 255,   /* red   */
          0, 255,   0, 255,   /* green */
          0,   0, 255, 255    /* blue  */
};
size_t origin[3] = {0, 0, 0};
size_t region[3] = {2, 2, 1};  /* 4 RGBA pixels, written as a 2x2 image */
clEnqueueWriteImage(queue, texCLIn, CL_TRUE, origin, region, 0, 0, tex_color_data, 0, NULL, NULL);

As shown above, I used unsigned bytes for the texture data, so in the kernel I used the
functions read_imageui and write_imageui:

Code :
uint4 pixel = read_imageui(texIn, sampler, coords);
write_imageui(texOut, coords, pixel);
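For context, the complete kernel is essentially the following sketch (the kernel name, signature, and sampler flags here are illustrative; only the two read/write lines are verbatim):

```c
__constant sampler_t sampler = CLK_NORMALIZED_COORDS_FALSE |
                               CLK_ADDRESS_CLAMP_TO_EDGE   |
                               CLK_FILTER_NEAREST;

/* Copies one pixel per work-item from texIn to texOut. */
__kernel void copy_tex(__read_only image2d_t texIn,
                       __write_only image2d_t texOut)
{
    int2 coords = (int2)(get_global_id(0), get_global_id(1));
    uint4 pixel = read_imageui(texIn, sampler, coords);
    write_imageui(texOut, coords, pixel);
}
```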

This kernel didn't work, but when I used read_imagef and write_imagef everything worked. I don't
know why this happened =/ I tested this code on a GeForce 210 and a 650M with OpenCL 1.1, and both
only worked when I used the float functions. However, when I tested it on a CPU with OpenCL 1.2
support, the unsigned int functions worked. Does anybody know why this happens?

I looked at the OpenCL 1.1 specification, but didn't find anything that says only the float
functions can be used with textures. Could this be a device problem? Does an AMD GPU with OpenCL 1.1
show the same behavior as the NVIDIA ones?