Image Processing

Hi

Firstly, I’m a total novice with respect to ‘modern’ OpenGL! I have played with GL 1.0 quite a bit, though.

I’m writing an image processing application - basically a camera raw converter + photo-related processing - and I’m not sure whether to use the CPU for all processing or offload some of it to the GPU.

Camera raw data is 12 bits per channel, which will be expanded to 48 or 64 bits per pixel (16 bits per channel). Then the usual processing will be performed - levels, curves, colour manipulation, sharpening, etc. - using a layering concept similar to Photoshop et al.
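
(For clarity, by ‘expanded’ I mean widening each 12-bit sample to 16 bits, e.g. by bit replication - just a sketch of one way to do it:)

    /* Widen a 12-bit sample to 16 bits by replicating the top bits,
       so 0x000 maps to 0x0000 and 0xFFF maps to 0xFFFF. */
    unsigned short expand12to16(unsigned short v12)
    {
        return (unsigned short)((v12 << 4) | (v12 >> 8));
    }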

Would the GPU offer performance advantages over the CPU?
Do most cards (including my FX5200) support 16-bit-per-channel images?

If the answer to the above is yes, can someone give me some pointers as to what I should be doing to support 16 bpc images (textures!) and anything else you think is relevant?

Many thanks
Mark

Current hardware does not support rendering to integer RGBA formats with more than 8 bits per channel (32 bits per pixel), and textures with integer formats above 32 bits per pixel are rare.
But there are floating point formats like RGBA16F and RGBA32F.
FP16 formats have only 10 bits of mantissa precision, so you might not want to use them for 12- or 16-bit input data. FP16 is an otherwise nice format because it is identical to the half-precision format used by OpenEXR, an image format often used for high-dynamic-range (HDR) effects.
FP32 would be the most precise format, but it has some rendering caveats on this hardware generation (no texture filtering, no blending); you can, however, implement both yourself in fragment programs.
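
As a concrete starting point, here’s a minimal sketch of uploading 16-bit-per-channel data into an FP32 texture. It assumes the driver exposes ARB_texture_float (NV3x parts like the 5200 may only offer the NV_float_buffer rectangle-texture path instead) and uses GLEW for extension loading:

    /* Minimal sketch: upload 16-bit-per-channel RGBA data into an
       FP32 texture. GL normalizes the unsigned shorts to [0,1] floats
       during pixel transfer. Assumes ARB_texture_float and GLEW. */
    #include <GL/glew.h>

    GLuint create_float_texture(int w, int h, const unsigned short *pixels16)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        /* FP32 textures cannot be filtered on this generation of
           hardware, so use GL_NEAREST; GL_RGBA16F_ARB allows linear
           filtering on GeForce 6 and later. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_SHORT, pixels16);
        return tex;
    }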

Since you have an NVIDIA board, start delving into the various examples in the NVSDK to be found on http://developer.nvidia.com/page/home.html
Make sure you read the GPU Programming Guide there.
Be warned, a GF5200 is rather mediocre in fragment shading performance.
Well, for this kind of per-pixel processing a GPU normally beats a CPU.
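
To make the fragment-program route concrete, here’s a rough sketch of a levels adjustment as an ARB fragment program, which the FX 5200 does support. The black/white parameter convention and the GLEW usage are my own assumptions, not something from the SDK:

    /* Sketch: a simple levels adjustment as an ARB fragment program.
       The FX 5200 supports ARB_fragment_program; GLEW is assumed
       for the extension entry points. */
    #include <GL/glew.h>
    #include <string.h>

    static const char *levels_fp =
        "!!ARBfp1.0\n"
        "PARAM black = program.local[0];\n" /* black point per channel */
        "PARAM scale = program.local[1];\n" /* 1 / (white - black)     */
        "TEMP c;\n"
        "TEX c, fragment.texcoord[0], texture[0], 2D;\n"
        "SUB c, c, black;\n"
        "MUL result.color, c, scale;\n"
        "END\n";

    GLuint load_levels_program(void)
    {
        GLuint prog;
        glGenProgramsARB(1, &prog);
        glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
        glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB,
                           GL_PROGRAM_FORMAT_ASCII_ARB,
                           (GLsizei)strlen(levels_fp), levels_fp);
        return prog;
    }

    /* Usage: bind the program, set its parameters, then draw a
       screen-sized textured quad with the source image bound. */
    void apply_levels(GLuint prog, float black, float white)
    {
        float s = 1.0f / (white - black);
        glEnable(GL_FRAGMENT_PROGRAM_ARB);
        glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
        glProgramLocalParameter4fARB(GL_FRAGMENT_PROGRAM_ARB, 0,
                                     black, black, black, 0.0f);
        glProgramLocalParameter4fARB(GL_FRAGMENT_PROGRAM_ARB, 1,
                                     s, s, s, 1.0f);
        /* ... draw fullscreen quad here ... */
        glDisable(GL_FRAGMENT_PROGRAM_ARB);
    }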

Also check http://www.gpgpu.org/ for general-purpose processing on the GPU (GPGPU). Simple image filter examples should be there as well.
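
For a taste of what such a filter looks like, here’s a rough, untested sketch of a 3x3 sharpen kernel as an ARB fragment program string; loading program.local[0] with the texel size is my own convention here, and you’d load and apply it the same way as the levels program above:

    /* Sketch: 3x3 sharpen (out = 5*centre - left - right - up - down)
       as an ARB fragment program. program.local[0] is assumed to hold
       (1/width, 1/height, 0, 0) for deriving neighbour texcoords. */
    static const char *sharpen_fp =
        "!!ARBfp1.0\n"
        "PARAM texel = program.local[0];\n"
        "PARAM five  = 5.0;\n"
        "TEMP tc, c, n, sum;\n"
        "TEX c, fragment.texcoord[0], texture[0], 2D;\n"
        "MUL sum, c, five;\n"
        "ADD tc, fragment.texcoord[0], texel.xwww;\n" /* right */
        "TEX n, tc, texture[0], 2D;\n"
        "SUB sum, sum, n;\n"
        "SUB tc, fragment.texcoord[0], texel.xwww;\n" /* left */
        "TEX n, tc, texture[0], 2D;\n"
        "SUB sum, sum, n;\n"
        "ADD tc, fragment.texcoord[0], texel.wyww;\n" /* up */
        "TEX n, tc, texture[0], 2D;\n"
        "SUB sum, sum, n;\n"
        "SUB tc, fragment.texcoord[0], texel.wyww;\n" /* down */
        "TEX n, tc, texture[0], 2D;\n"
        "SUB sum, sum, n;\n"
        "MOV result.color, sum;\n"
        "END\n";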