Query matrix

I am trying to validate the glQueryMatrixxOES API.

To validate, I initially load an identity matrix into the modelview matrix and then multiply it with an arbitrary 4 × 4 matrix “m”.

Then I get the mantissa and exponent by using the above API.

Now, to validate the obtained values, I compute component[i] = mantissa[i] * 2 ^ exponent[i]. Since the original matrix was the identity, I expect component[i] to be the same as m[i]. But they are not equal.

Is this the right way to validate the values obtained by glQueryMatrixxOES?

Make sure you treat the mantissa as a 16.16 fixed-point number (in other words, subtract 16 from the exponent), and that the values in m are representable as 16.16 fixed-point numbers, as implementations are allowed to use this format internally. For the same reason you shouldn’t do an exact comparison with floating-point values; allow a certain small difference.

Thank you very much for the reply.

But I am still confused…

GLfixed matrix[16] = {
    0.1, 0.2, 0.3, 0.4,
    0.5, 0.6, 0.7, 0.8,
    0.9, 0.1, 0.2, 0.3,
    0.4, 0.5, 0.6, 0.7,
};
GLfixed mantissa[16] = { 0x00 };
GLint exponent[16] = { 0x00 };

glLoadIdentity();

status = glQueryMatrixxOES(mantissa, exponent);
glMultMatrixf(matrix); // here the matrix elements somehow become zero!!
status = glQueryMatrixxOES(mantissa, exponent);

for (i = 0x00; i < 16; i++) {
    GLfixed x = ((mantissa[i]) * (2 ^ (exponent[i] - 16))); // since matrix is zero, mantissa and exponent are zero
    if (matrix[i] != x) // this I would modify to allow some deviation
        err++;
}

By declaring “matrix” as GLfloat, the elements are valid during glMultMatrixf, but still no good result!!

You cannot initialize a GLfixed from a float literal: since GLfixed is just a typedef for GLint, the compiler doesn’t know that converting a number to GLfixed requires multiplying by 65536 first. Similarly, you can’t pass an array of GLfixed to glMultMatrixf; that is what glMultMatrixx (note the x suffix) is for.

More importantly, in C the ^ symbol is the bitwise XOR operator, not exponentiation. To construct a floating-point value from a mantissa and an exponent you can use the function ldexp from math.h.

#include <math.h>

GLfloat matrix[16] = { ... };

for (i = 0; i < 16; ++i) {
  float x = (float)ldexp(mantissa[i], exponent[i] - 16);
  if (fabsf(matrix[i] - x) > 0.0001f) err++; /* fabsf, not abs: abs takes an int */
}
