Read Image via OpenCL commands

Dear all,

I am working on image processing applications on the GPU. I previously used OpenCV as an interface to read an image, convert it to float, and then use it via OpenCL commands. Now I want to use OpenCL features such as the image2d_t data type and the read_imagef() function to load an image and use it in my program for further operations. Is it possible to do so? If so, kindly give me a small example. I want to load a JPEG image and convert it to float, and later convert it from float back to JPEG. Kindly help me in this regard.

Thanks in advance.

Kind regards.

I want to do the above operation before the image is passed into the kernel. I want to pass the image to the kernel as a float.

For reading/writing JPEG images you can use libjpeg. It’s very widely used and mature, so it’s easy to find examples of how to use it.

One issue you will find is that some versions of libjpeg only output 24-bit RGB, instead of 32-bit RGBA. This is a bit inconvenient because OpenCL does not support 24-bit RGB. This means you may have to decode the JPEG image as 24-bit RGB and then expand manually to 32-bit RGBA. This is easy to do with a couple of loops.
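
For example, something along these lines should do for the expansion (just a sketch; rgbData, rgbaData, width and height are placeholder names for your own buffers and dimensions):

// Expand tightly packed 24-bit RGB (width*height*3 bytes) to
// 32-bit RGBA (width*height*4 bytes), setting alpha to fully opaque.
for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
        int src = (y * width + x) * 3;
        int dst = (y * width + x) * 4;
        rgbaData[dst + 0] = rgbData[src + 0]; // R
        rgbaData[dst + 1] = rgbData[src + 1]; // G
        rgbaData[dst + 2] = rgbData[src + 2]; // B
        rgbaData[dst + 3] = 255;              // A
    }
}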

Once the input data is in 32-bit RGBA, you can create an image as usual with clCreateImage2D. As you know, one of the arguments you have to pass to clCreateImage2D is the image format. You can use something like this for 32-bit RGBA:


cl_image_format format;
format.image_channel_order = CL_RGBA;
format.image_channel_data_type = CL_UNORM_INT8;
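
For completeness, the creation call itself could look roughly like this (context, width, height and rgbaData are placeholders for whatever your host code already has):

cl_int err;
cl_mem inputImage = clCreateImage2D(context,
                                    CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                    &format,
                                    width, height,
                                    0,         // row pitch: 0 lets OpenCL compute it
                                    rgbaData,  // host buffer holding the 32-bit RGBA pixels
                                    &err);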

You are done! Reading the data as float in your kernels is as simple as calling:


// It's important to declare the sampler at program scope, not inside a kernel
__constant sampler_t mySampler = CLK_FILTER_NEAREST | CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP_TO_EDGE;

__kernel void myKernel(__read_only image2d_t myImage)
{
    // Select the X and Y coordinates of the pixel you want to read from;
    // here each work-item reads one pixel.
    int2 coord = (int2)(get_global_id(0), get_global_id(1));

    float4 pixel = read_imagef(myImage, mySampler, coord);
}


Other useful functions you can use in your kernel are get_image_width() and get_image_height().
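
For example, you could use them to skip work-items that fall outside the image when the global work size gets rounded up (just a sketch, assuming the same program-scope mySampler declaration as above):

__kernel void myKernel(__read_only image2d_t myImage)
{
    int2 coord = (int2)(get_global_id(0), get_global_id(1));

    // Ignore work-items outside the image bounds.
    if (coord.x >= get_image_width(myImage) || coord.y >= get_image_height(myImage))
        return;

    float4 pixel = read_imagef(myImage, mySampler, coord);
    // ... process pixel here ...
}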

Are you sure you want to pass the image as float? Probably all you want is to read the image data as float, which is what I described above. Converting to float when you read is faster than reading from a float image directly, and the values that are returned are exactly the same.

Thanks for your suggestion. It is not compulsory to pass a float to the kernel, but the operations to be performed are at float precision. I will try this and give you a reply.

Once again, thanks for your continuous support with OpenCL, as I am a beginner in this.

I just have a small doubt. I have downloaded libjpeg v7 and found it to be stable, but I see that it is an image compression and decompression library. I am not able to find any sample programs that read an image and load it into an image2d_t variable.

Can you help me in this regard? Can you give me any links where I can find sample code, examples, or a list of the functions available in libjpeg, so that I can use them in my OpenCL program?

Thanks in advance.

There’s a nice example of how to use libjpeg to read a JPEG image here: http://www.aaronmr.com/en/2010/03/test/ (look at what he does in read_jpeg_file()).
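
In case it helps while you look at that page, here is a rough sketch of the decompression side using the standard libjpeg API (the function name and buffer names are mine; error handling is left to libjpeg’s default error manager, and a color JPEG with 3 components is assumed):

#include <stdio.h>
#include <stdlib.h>
#include <jpeglib.h>

// Decode a JPEG file into a tightly packed 24-bit RGB buffer.
// The caller is responsible for free()ing the returned pointer.
unsigned char *load_jpeg_rgb(const char *filename, int *width, int *height)
{
    struct jpeg_decompress_struct cinfo;
    struct jpeg_error_mgr jerr;
    FILE *infile = fopen(filename, "rb");
    if (!infile)
        return NULL;

    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_decompress(&cinfo);
    jpeg_stdio_src(&cinfo, infile);
    jpeg_read_header(&cinfo, TRUE);
    jpeg_start_decompress(&cinfo);

    *width  = cinfo.output_width;
    *height = cinfo.output_height;
    int row_stride = cinfo.output_width * cinfo.output_components;

    unsigned char *rgbData = malloc(row_stride * cinfo.output_height);

    // Read one scanline at a time into the RGB buffer.
    while (cinfo.output_scanline < cinfo.output_height) {
        unsigned char *row = rgbData + cinfo.output_scanline * row_stride;
        jpeg_read_scanlines(&cinfo, &row, 1);
    }

    jpeg_finish_decompress(&cinfo);
    jpeg_destroy_decompress(&cinfo);
    fclose(infile);
    return rgbData;
}

From there you can expand the RGB buffer to RGBA as described earlier and pass it to clCreateImage2D.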

Hi David,

How do I send an image to the kernel? I have used clCreateImage2D to allocate a buffer for the image, which is now a float array.

I mean, how do I send the image directly via clSetKernelArg without converting it to a float array…?