Grey-scale image > 8 bits on iOS

Hi,

My issue is the following:
I have an image that contains grey values of more than 8 bits (e.g. 10, 12, or 16 bits) that I want to store in a texture on iOS (OpenGL ES 2.0) and then use a shader to map the contained values into the range 0…255.
The shader is no issue at all, but uploading the array into the texture with this call
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, (int)pixelSizeToUseForTexture.width, (int)pixelSizeToUseForTexture.height, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, imageData);
seems to cap all values above 255 by design.

Can I maintain the full range of the values in the texture, and if so, how?
What I currently do is walk through the complete array, map all values manually into 0…255, and write the result into the texture (I know this is clumsy and does not leverage the shader’s capabilities).
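Roughly, my current workaround looks like this (only a sketch; the function and buffer names are invented, and maxValue would be the largest grey value occurring in the image):

#include <stddef.h>
#include <stdint.h>

/* CPU-side fallback: compress every 16-bit grey value into 0..255 so
   the result fits a plain GL_UNSIGNED_BYTE texture. */
void mapTo8Bit(const uint16_t *src, uint8_t *dst,
               size_t count, uint16_t maxValue)
{
    for (size_t i = 0; i < count; i++) {
        dst[i] = (uint8_t)(((uint32_t)src[i] * 255u) / maxValue);
    }
}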

Could somebody point me in the right direction?

Thanks in advance.

Lars

Lars,
The best way to do this is to tell glTexImage2D() that your texture is GL_RGB, even though it contains no real color. Then you can write a fragment shader that reads your pixels from the RGB channels and scales and assembles the values however you need. For example:

vec4 texel = texture2D(Texture, TexCoord);   // fetch once instead of three times
float color_low  = texel.r;                  // first byte of the texel
float color_high = texel.g;                  // second byte of the texel
// texel.b is unused in this example

float color_final = (color_low * 0.25) + (color_high * 0.75);

This technique is often used in fragment shaders that perform color-space conversions (such as YCbCr to RGB), because the conversion runs much faster as shader code than on the CPU.
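On the upload side, the call would look something like this (a sketch only; width, height and imageData stand in for whatever your code already uses, and you have to pick the width so that your 2-byte values line up with the channels):

/* Upload the raw grey bytes under a "pretend" RGB format. OpenGL ES
   does not interpret them; each byte simply becomes one 0..1 channel
   value that the fragment shader can pick apart again. */
glTexImage2D(GL_TEXTURE_2D,
             0,                  /* mip level                       */
             GL_RGB,             /* internal format, no real color  */
             width, height,
             0,                  /* border: must be 0 in ES 2.0     */
             GL_RGB,             /* pixel format                    */
             GL_UNSIGNED_BYTE,   /* one byte per channel            */
             imageData);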

Good Luck, Clay

Clay,

thanks for the input, that was very helpful, but I am still struggling:

I am starting from a 16-bit array that contains grey values plus alpha (alpha is always 65,535), each value taking 2 bytes.
Then I do
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)pixelSizeToUseForTexture.width, (int)pixelSizeToUseForTexture.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);

as you suggested, except with GL_RGBA in place of my original GL_LUMINANCE_ALPHA.
If I got you right, the 2 bytes of the grey value end up in R and G and the 2 alpha bytes in B and A. B and A look OK, and R and G give the correct shape in the rendered result, BUT all values > 255 come out with a darker intensity than expected.

My expectation was that for a pixel value of, say, 300, the R component would store 255 and the G component 45, but my texture never contains 1.0 when the original pixel value is > 255.

How is the mapping done between the 2 bytes and the two components R and G?

In my shader I need the original value of the 2-byte grey value, which I currently reconstruct like this:
highp float color_low = texture2D(inputImageTexture, textureCoordinate).r;
highp float color_high = texture2D(inputImageTexture, textureCoordinate).g;
highp float unused = texture2D(inputImageTexture, textureCoordinate).b;

highp float color_final = color_high + color_low;

color_low, however, comes out much too low, and I don’t know exactly how the two components are constructed.

Am I on the right path?

Thanks

Lars

Lars,

You have the right idea, but your math is wrong. You cannot split the bytes with subtraction; convert to hex instead. 300 in hex is 0x012C, so the two bytes are 0x01 and 0x2C (1 and 44 in decimal), which arrive in the shader as 1/255 and 44/255 (about 0.00392 and 0.17255). Those are the two component values (R and G) for 300, and neither of them is anywhere near 1.0.

Also, you should not add color_high and color_low together until you have applied different scaling factors to them: color_high has to carry 256 times the magnitude of color_low. So perhaps:

float color_final = (color_low + (color_high * 256.0)) / 256.0;
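If you want to sanity-check the arithmetic on the CPU, a tiny stand-alone C program like the one below does the round trip. (One assumption on my part: iOS devices are little-endian, so the first byte of each 16-bit value in memory, which OpenGL hands to the R channel, is the low byte. That matches your color_low = .r.)

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    uint16_t grey = 300;   /* an original grey value above 255 */

    /* Reinterpret the 16-bit value as two bytes, exactly as
       glTexImage2D(..., GL_UNSIGNED_BYTE, ...) does: the first byte
       in memory feeds R, the second feeds G. */
    uint8_t bytes[2];
    memcpy(bytes, &grey, 2);

    float color_low  = bytes[0] / 255.0f;  /* what the shader sees in .r */
    float color_high = bytes[1] / 255.0f;  /* what the shader sees in .g */

    /* Recombine with the scaling from above, then undo the
       normalisation to recover the original integer value. */
    float color_final = (color_low + (color_high * 256.0f)) / 256.0f;
    float recovered   = color_final * 255.0f * 256.0f;

    printf("bytes: low=%u high=%u\n", (unsigned)bytes[0], (unsigned)bytes[1]);
    printf("color_final=%f recovered=%.1f\n", color_final, recovered);
    return 0;
}

For grey = 300 this prints low=44, high=1 and recovers 300.0.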

Regards, Clay

Clay,

Thanks so much for helping; after some trouble with Objective-C it works and is also quite speedy.

Thanks again and best

Lars
