Hi,
I’m a software engineer on an OpenGL software verification project. We have been testing a set of GL functions that will run on an embedded PowerPC platform that has never been tested before. While writing test steps for the GL functions that take floating-point parameters, we observed that the values we read back are not the same as the values we set.
For example:
// In the code below, red is set to 0.5 (i.e. #800000), then the current color is read back
GLdouble afColorValues[4];
glClear(GL_COLOR_BUFFER_BIT);
glColor4d(0.5, 0.0, 0.0, 0.0);
glGetDoublev(GL_CURRENT_COLOR, afColorValues);
printf("%.16f\n", afColorValues[0]); // print the red component
When I read back the red component of the current color with glGetDoublev, the value was
0.5019608139991760
That is a difference of 0.0019608139991760 between the value set and the value obtained. I cannot mark this step as either passed or failed, because I have no official epsilon value defining the acceptable precision error range. I wonder whether there are any epsilon requirements that define the acceptable precision for these GL functions.
Such a requirement might read, for example:
“glColor4d shall set color values with a precision of epsilon = 0.00001”
Is there any requirement like that? If so, how can I get those epsilon values?