32-bit textures on a 16-bit framebuffer

I have an MBX Lite chip mounted on an iMX31. The display is 16bpp, so window surfaces and contexts can only be created in 565 (without an alpha channel). If I create a 32-bit texture with glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 128, 128, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels32), the driver seems to upload the pixels into its dedicated texture RAM in a 4444 format, resulting (when the texture is used) in ugly, low-color block artifacts.

If I upload the pixels in 5551 format, like this: glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 128, 128, 0, GL_RGBA, GL_UNSIGNED_SHORT_5_5_5_1, pixels16), the driver seems to keep the pixels as they are, without downsampling them to an internal 4444 format as in the case above.
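
For reference, a minimal sketch of how 8-bit channels map onto the GL_UNSIGNED_SHORT_5_5_5_1 layout (the helper name here is only illustrative, not part of any API):

// Pack one RGBA8888 pixel into the GL_UNSIGNED_SHORT_5_5_5_1 layout:
// R in bits 15-11, G in bits 10-6, B in bits 5-1, A in bit 0.
static unsigned short pack_rgba5551(unsigned char r, unsigned char g,
                                    unsigned char b, unsigned char a)
{
    return (unsigned short)(((r >> 3) << 11) |
                            ((g >> 3) << 6)  |
                            ((b >> 3) << 1)  |
                            (a >> 7));
}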

How can I make the OpenGL|ES driver use a full 32-bit texture? I think the screen-format conversion only needs to happen when a pixel is actually written, so if I use a 32-bit texture, the conversion to the framebuffer's 565 format should be done only in that final stage, not when the texture is loaded… am I right?

In addition, the driver reports an extension called GL_IMG_texture_format_BGRA8888, which does not seem to be documented. Is that extension the thing I'm missing to make the chip use internal 32-bit textures/calculations? Can someone tell me how that extension is used? Does it refer to an additional texture format (like, for example, GL_RGBA)? In that case, can someone give me the correct enumerated value corresponding to that format?

Thanks in advance.

GL_IMG_texture_format_BGRA8888 defines a new token, GL_BGRA, as 0x80E1. (That's the same value as in GL_EXT_bgra, btw.) This token can be used as the <internalformat> and <format> arguments to glTexImage2D/glTexSubImage2D in combination with GL_UNSIGNED_BYTE.
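
Before relying on the token it's a good idea to confirm at run time that the driver actually advertises the extension; a minimal sketch (the function name is just illustrative):

#include <string.h>
#include <GLES/gl.h>

// Returns non-zero if the driver advertises GL_IMG_texture_format_BGRA8888.
static int has_img_bgra8888(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, "GL_IMG_texture_format_BGRA8888") != NULL;
}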

In your code that would be:

#ifndef GL_BGRA
#define GL_BGRA 0x80E1
#endif

// convert pixels32 from RGBA to BGRA

glTexImage2D(GL_TEXTURE_2D, 0, GL_BGRA, 128, 128, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels32);
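
The conversion mentioned in the comment could look roughly like this, assuming pixels32 is a tightly packed array of 8-bit R, G, B, A bytes (a minimal sketch, not driver-specific code):

// Swap the R and B bytes in place to turn RGBA data into BGRA data.
// Assumes 'pixels' points to width*height tightly packed 4-byte pixels.
static void rgba_to_bgra(unsigned char *pixels, int width, int height)
{
    int i;
    for (i = 0; i < width * height * 4; i += 4)
    {
        unsigned char tmp = pixels[i];   // save R
        pixels[i]     = pixels[i + 2];   // move B into the first byte
        pixels[i + 2] = tmp;             // move R into the third byte
    }
}

So, for the 128x128 texture above, call rgba_to_bgra(pixels32, 128, 128) before the glTexImage2D call.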

Thank you very much; it seems to work perfectly.
Anyway, I was wondering why, with the GL_RGBA format, the driver uploads the given 32-bit pixels into its texture RAM in 4444 format.

Common sense would say that if GL_BGRA (0x80E1) works, its counterpart GL_RGBA (which is in the OpenGL|ES 1.x baseline, so it's official and more common for developers and applications) must work too, at least with the same behavior with respect to internal format storage, byte ordering aside.

On Apple's iPhone SDK, GL_RGBA must be used as the internalformat when passing in BGRA data. So an example texture upload becomes:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 128, 128, 0, GL_BGRA, GL_UNSIGNED_BYTE, pixels32);

This might help other iPhone developers who stumble onto this page via a search, since there is no other documentation of this extension.
