12 bits per color channel

I am working on an image processing algorithm that works across different codecs and uses OpenGL for rendering. I have come across a hardware decoder that supports 12-bit RGB (interleaved, with no padding bits for alignment) as its optimal decoding format.

Now I know that OpenGL supports a GL_RGB12 internal format, but I don’t know what “type” of data will handle this non-aligned layout (i.e. what to pass to glTexImage and the other texture functions).

Can anyone guide me on how to handle this texture format optimally using OpenGL?

Thanks.

Probably almost nothing handles it directly. But if the hardware you’re targeting supports integer textures, you can unpack it with a shader. The general workflow is:

Transfer the data as raw bytes, with an internal GL format that will persuade the driver not to spend time swizzling it. That’s generally BGRA for 8-bit data, or luminance only; maybe 16-bit RGBA or floating-point RGBA.
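Something along these lines works for the upload step. This is only a sketch: it assumes an OpenGL 3.0+ context so integer textures (GL_R8UI here) are available, and it assumes the decoder emits rows with no extra padding; the function name and the packed-byte pointer are placeholders.

```c
/* Sketch: upload the raw 12-bit packed stream untouched, as an 8-bit
 * unsigned-integer texture.  Three 12-bit components = 4.5 bytes per
 * pixel, so two source pixels occupy exactly 9 bytes. */
#include <GL/glew.h>   /* or whatever GL loader you already use */

GLuint upload_packed_12bit(const GLubyte *packed_bytes,
                           int src_width, int src_height)
{
    /* Bytes per row, assuming the stream has no row padding. */
    int bytes_per_row = (src_width * 36 + 7) / 8;

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* Integer textures must use NEAREST filtering, and the driver
     * stores the bytes verbatim: no filtering, no swizzling. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    /* One texel per source byte. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R8UI,
                 bytes_per_row, src_height, 0,
                 GL_RED_INTEGER, GL_UNSIGNED_BYTE, packed_bytes);
    return tex;
}
```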

Make a shader that samples that raw data and uses bit operations to pull the actual pixels out, writing them into a 16-bit integer or floating-point buffer.
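Here is a rough sketch of that unpacking pass, written against the byte texture from the previous snippet and rendered into your 16-bit or float FBO. The bit layout (MSB-first, two pixels per 9 bytes) is an assumption about how the decoder packs its output, so swap the byte and nibble order if yours differs.

```c
/* Fragment shader (GLSL 1.30+) that reconstructs one 12-bit RGB pixel
 * per output fragment from the GL_R8UI byte texture uploaded above. */
static const char *unpack_fs =
    "#version 130\n"
    "uniform usampler2D raw_bytes;   /* one source byte per texel */\n"
    "out vec4 color;\n"
    "\n"
    "uint byte_at(int i, int row) {\n"
    "    return texelFetch(raw_bytes, ivec2(i, row), 0).r;\n"
    "}\n"
    "\n"
    "void main() {\n"
    "    int x    = int(gl_FragCoord.x);\n"
    "    int row  = int(gl_FragCoord.y);\n"
    "    int base = (x / 2) * 9;          /* two pixels per 9 bytes */\n"
    "    uint r, g, b;\n"
    "    if ((x & 1) == 0) {\n"
    "        r = (byte_at(base + 0, row) << 4) | (byte_at(base + 1, row) >> 4);\n"
    "        g = ((byte_at(base + 1, row) & 0xFu) << 8) | byte_at(base + 2, row);\n"
    "        b = (byte_at(base + 3, row) << 4) | (byte_at(base + 4, row) >> 4);\n"
    "    } else {\n"
    "        r = ((byte_at(base + 4, row) & 0xFu) << 8) | byte_at(base + 5, row);\n"
    "        g = (byte_at(base + 6, row) << 4) | (byte_at(base + 7, row) >> 4);\n"
    "        b = ((byte_at(base + 7, row) & 0xFu) << 8) | byte_at(base + 8, row);\n"
    "    }\n"
    "    /* Normalise the 12-bit values to [0, 1]. */\n"
    "    color = vec4(vec3(r, g, b) / 4095.0, 1.0);\n"
    "}\n";
```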

Search the forums for v210 for more info - it’s a 10-bit format that has to be handled like this.

Bruce

Now I know that OpenGL supports a GL_RGB12 internal format, but I don’t know what “type” of data will handle this non-aligned layout (i.e. what to pass to glTexImage and the other texture functions).

Send your data as though it were RGB16 (i.e. with a type of GL_UNSIGNED_SHORT): shift each 12-bit component left by 4 bits and replicate its top 4 bits into the new low bits.
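A minimal CPU-side sketch of that expansion, assuming the same MSB-first packing as in the earlier post (two pixels per 9 bytes, no row padding); the function names are made up, and it assumes a texture is already bound. The final glTexImage2D call is the answer to the original “type” question: internal format GL_RGB12, data format GL_RGB, type GL_UNSIGNED_SHORT.

```c
#include <stdint.h>
#include <stdlib.h>
#include <GL/glew.h>   /* or your own loader */

/* Widen a 12-bit value to 16 bits by replicating its top 4 bits. */
static uint16_t widen12(uint32_t v)
{
    return (uint16_t)((v << 4) | (v >> 8));
}

void upload_as_rgb16(const uint8_t *src, int width, int height)
{
    uint16_t *dst = malloc((size_t)width * height * 3 * sizeof *dst);
    int bytes_per_row = (width * 36 + 7) / 8;

    for (int y = 0; y < height; ++y) {
        const uint8_t *row = src + (size_t)y * bytes_per_row;
        for (int x = 0; x < width; ++x) {
            const uint8_t *p = row + (x / 2) * 9;  /* 9 bytes = 2 pixels */
            uint32_t r, g, b;
            if ((x & 1) == 0) {
                r = (p[0] << 4) | (p[1] >> 4);
                g = ((p[1] & 0xF) << 8) | p[2];
                b = (p[3] << 4) | (p[4] >> 4);
            } else {
                r = ((p[4] & 0xF) << 8) | p[5];
                g = (p[6] << 4) | (p[7] >> 4);
                b = ((p[7] & 0xF) << 8) | p[8];
            }
            uint16_t *out = dst + ((size_t)y * width + x) * 3;
            out[0] = widen12(r);
            out[1] = widen12(g);
            out[2] = widen12(b);
        }
    }

    glPixelStorei(GL_UNPACK_ALIGNMENT, 2);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB12, width, height, 0,
                 GL_RGB, GL_UNSIGNED_SHORT, dst);
    free(dst);
}
```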

Also, I wouldn’t expect GL_RGB12 to actually be stored as a 12-bit per-component format. GL_RGB12 is not among the required OpenGL internal formats, so odds are it’s implemented as GL_RGB16 under the hood. Well, that and the fact that GL_RGB12 is horribly unaligned.
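If you want to confirm that, you can ask the driver what it actually allocated for the texture:

```c
#include <stdio.h>
#include <GL/glew.h>

/* Allocate a small GL_RGB12 texture and query the stored red-component
 * size.  If it comes back as 16, the format was promoted to RGB16. */
void report_rgb12_storage(void)
{
    GLuint tex;
    GLint red_bits = 0;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB12, 64, 64, 0,
                 GL_RGB, GL_UNSIGNED_SHORT, NULL);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE, &red_bits);

    printf("GL_RGB12 red component is stored with %d bits\n", red_bits);
    glDeleteTextures(1, &tex);
}
```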