Originally Posted by **david.garcia**
As far as I can tell, those decimal representations map to the same double-precision floating-point value, so both are equally correct.

The decimal representation of CL_M_PI was probably obtained by computing the double-precision floating-point value closest to the true value of pi and then converting that value back to decimal. glibc's M_PI, on the other hand, appears to come directly from the decimal expansion of pi itself. Since both decimal strings round to the same double-precision value when parsed, the difference doesn't matter.

Does that make sense?