On Windows, on an NVidia GPU, I’m seeing:
GLint format[2], type[2];
glGetInternalformativ( GL_TEXTURE_2D, GL_RGBA8, GL_TEXTURE_IMAGE_FORMAT, 1, &format[0] );  // GL_RGBA
glGetInternalformativ( GL_TEXTURE_2D, GL_RGBA8, GL_TEXTURE_IMAGE_TYPE,   1, &type[0]   );  // GL_UNSIGNED_INT_8_8_8_8_REV
glGetInternalformativ( GL_TEXTURE_2D, GL_RGBA8, GL_READ_PIXELS_FORMAT,   1, &format[1] );  // GL_RGBA
glGetInternalformativ( GL_TEXTURE_2D, GL_RGBA8, GL_READ_PIXELS_TYPE,     1, &type[1]   );  // GL_UNSIGNED_INT_8_8_8_8_REV
That is, for GL_RGBA8, the driver recommends uploads and downloads using GL_RGBA / GL_UNSIGNED_INT_8_8_8_8_REV.
Does that make sense to anyone?
I recall folks repeatedly benchmarking and finding that GL_BGRA is the external format to use on Windows, either with GL_UNSIGNED_BYTE or possibly GL_UNSIGNED_INT_8_8_8_8_REV (for instance, in mhagain’s post here). Not to mention that NVidia recommends this in their developer literature too.
Correct me if I’m wrong, but the NVidia driver’s recommendation (via glGetInternalformativ()) doesn’t match that:
ORDER      FORMAT     TYPE
--------   --------   ----------------
RGBA       RGBA       UNSIGNED_BYTE
RGBA       RGBA       UINT_8_8_8_8_REV   <-- NV driver recommends
BGRA       BGRA       UNSIGNED_BYTE      <-- Often recommended
BGRA       BGRA       UINT_8_8_8_8_REV   <-- Sometimes recommended
ABGR       RGBA       UINT_8_8_8_8
ARGB       BGRA       UINT_8_8_8_8
Note here that “ORDER” is the memory order on a little-endian machine like x86 (with GL_{PACK,UNPACK}_SWAP_BYTES = FALSE).