I found that the origin (0, 0, 0) lies at the center of the screen. Points on each of the x, y, and z axes take fractional values between -1 and +1.
To test the "byte coordinates" path of OpenGL ES, the coordinates should be specified in byte format. I understand this to mean that each of x, y, and z should be one byte in size.
How do I map the fractional coordinate values to byte format?
For example:
GLfloat vertices[] = { // each element is a fraction in [-1, +1]
0.1, 0.1, -0.6,
-0.3, -0.1, 0.0,
-0.1, -0.3, 0.0,
0.1, -0.3, 0.0,
0.3, -0.1, 0.0,
-0.1, 0.3, 0.0,
};
glVertexPointer(3, GL_FLOAT, sizeof(GLfloat) * 3, vertices); // stride could also be 0 for tightly packed data
Should each element of the fractional array be mapped to a corresponding byte value?
No, byte coordinates correspond to the integer part of the coordinate. This means, as you seem to have understood, that if they are just passed through the pipeline unchanged, everything will be way too big. So you have to scale the values down somehow. Luckily, you can do this quite easily with the modelview matrix, by applying the appropriate scale there.
I can now give integer values for the vertex coordinates. I also see that the scaling factor needs to be adjusted with respect to the magnitude of the integer coordinates.
Please find below an outline of the code, which passes a byte array as the vertex pointer. Since I scale it by a factor of 0.01f, the primitive is rendered correctly.
The texture, however, looks very different: it appears as if the image has been shrunk heavily and repeated to fill the primitive.
So I thought the texels were also being scaled by a factor of 0.01f. The glTexCoordPointer line of code was not changed while testing the byte coordinates; it worked fine with GLfloat coordinates and no scaling.
If you want to use bytes for texture coordinates, you need to scale the texture coordinates too, using the texture matrix instead of the modelview matrix.