Strange results when using image objects

Hi

This is a follow-up to my recent thread. The kernel is the same (it should just copy one image to another):

const sampler_t sampler = CLK_NORMALIZED_COORDS_FALSE |
                          CLK_ADDRESS_CLAMP_TO_EDGE |
                          CLK_FILTER_NEAREST;

__kernel void copy(__read_only image2d_t src, __write_only image2d_t dst)
{
    int2 pos;
    uint4 input;
    int x = get_global_id(0);
    int y = get_global_id(1);

    pos.x = x;
    pos.y = y;

    input = read_imageui(src, sampler, pos);
    write_imageui(dst, pos, input);
}

The output is a distorted picture, as if the resolution had been reduced, with different scaling on the x-axis and the y-axis.

I’m quite sure the problem is in the kernel, since if I just load the picture into an image object and read it back out again, I get the right result.

I don’t understand what could go wrong since the kernel is so simple. Any ideas?

-hnyk

The only things I can think of are to make sure the global sizes are right and that the images are created with the right size. Make sure you are not using CL_MEM_USE_HOST_PTR, since that adds the additional complication of making sure you allocate the right amount of memory. But those issues should all have been caught by the test you mentioned.
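
Roughly what I mean on the host side, as a sketch only — ctx, queue, kernel, width, height and host_pixels stand in for whatever you already have, and the CL_RGBA / CL_UNSIGNED_INT8 format is just an example that matches read_imageui:

/* Create the images with exactly the kernel's dimensions and upload the data
   explicitly instead of using CL_MEM_USE_HOST_PTR. All names are placeholders. */
cl_image_format fmt = { CL_RGBA, CL_UNSIGNED_INT8 };  /* example format for read_imageui */
cl_int err;

cl_mem src = clCreateImage2D(ctx, CL_MEM_READ_ONLY,  &fmt, width, height, 0, NULL, &err);
cl_mem dst = clCreateImage2D(ctx, CL_MEM_WRITE_ONLY, &fmt, width, height, 0, NULL, &err);

size_t origin[3] = { 0, 0, 0 };
size_t region[3] = { width, height, 1 };
clEnqueueWriteImage(queue, src, CL_TRUE, origin, region, 0, 0, host_pixels, 0, NULL, NULL);

clSetKernelArg(kernel, 0, sizeof(cl_mem), &src);
clSetKernelArg(kernel, 1, sizeof(cl_mem), &dst);

/* One work-item per pixel: the global size must be exactly {width, height},
   matching get_global_id(0) / get_global_id(1) in the kernel. */
size_t global[2] = { width, height };
clEnqueueNDRangeKernel(queue, kernel, 2, NULL, global, NULL, 0, NULL, NULL);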

What size images are you looking at? What platform are you on? Are the images >16 bits/pixel and > 8k wide?

Thanks for the answer.

The images are 512x512 with 8 bits/pixel depth. What do you mean by >8k wide?

I’m using the Nvidia SDK and a GF9500GT.

Have you verified that the format you are using for the image is supported by the card and that you are using the correct read_image* for the format?
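
You can ask the implementation directly with clGetSupportedImageFormats; something along these lines (ctx being your context — just a sketch) will list every 2D image format the device accepts for reading:

/* Sketch: dump the 2D image formats this context actually supports. */
cl_uint n = 0;
clGetSupportedImageFormats(ctx, CL_MEM_READ_ONLY, CL_MEM_OBJECT_IMAGE2D, 0, NULL, &n);

cl_image_format *formats = (cl_image_format *)malloc(n * sizeof(cl_image_format));
clGetSupportedImageFormats(ctx, CL_MEM_READ_ONLY, CL_MEM_OBJECT_IMAGE2D, n, formats, NULL);

for (cl_uint i = 0; i < n; ++i)
    printf("channel order 0x%04X, channel data type 0x%04X\n",
           (unsigned)formats[i].image_channel_order,
           (unsigned)formats[i].image_channel_data_type);
free(formats);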

Thanks, it works now.

I was using an unsupported image format.

Kind of strange though that it didn’t give me the CL_IMAGE_FORMAT_NOT_SUPPORTED error :confused:

And on top of that, it worked fine when I just copied the image into an image object and back out again. Very strange indeed.

An in/out copy will probably work fine, since the bits are just being moved across untouched.
When you use read_image in the GPU kernel, it uses hardware to do the image access, and if the format is unsupported you’ll get garbage.
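
In other words, a round trip like this (just a sketch; img, origin, region, host_in and host_out are placeholders) never goes near that hardware, so it can look fine even with a format the device can’t actually sample:

/* Hypothetical round trip: the pixel bytes are copied across verbatim,
   so an unsupported format is never noticed on this path. */
clEnqueueWriteImage(queue, img, CL_TRUE, origin, region, 0, 0, host_in, 0, NULL, NULL);
clEnqueueReadImage(queue, img, CL_TRUE, origin, region, 0, 0, host_out, 0, NULL, NULL);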

Unfortunately, testing for error conditions is really hard, so I don’t think the spec enforces them. I would definitely file a bug with Nvidia, since this seems like a bug in the Nvidia OpenCL driver. I know on Mac OS X you’ll get that error if you try to use an image type that is not supported on a given device.