Do you mean something related to the approximation of floating point numbers when stored in binary representation?
I think that in CE6 a double is stored as an IEEE-754 32-bit float (on Win 10 it is the usual 64-bit double), but it seems to me that even for 20,000,000 the approximation error would be at most 1.0.
Do you mean something different?



