Detailed explanation of the double-precision multiplication accuracy problem in .NET
In C#, the expression double i = 10 * 0.69; assigns i the value 6.8999999999999995 rather than the expected 6.9. This raises the question: is something wrong with double-precision multiplication in .NET?
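The behavior can be reproduced with a short console program. The snippet below is a minimal sketch (class and variable names are illustrative) that prints the value with the round-trip "G17" format to expose the stored approximation.

using System;

class DoubleDemo
{
    static void Main()
    {
        double i = 10 * 0.69;

        // Default formatting: .NET Core 3.0 and later print the shortest
        // round-trip form (6.8999999999999995); older runtimes may print 6.9.
        Console.WriteLine(i);

        // "G17" always exposes the full stored approximation.
        Console.WriteLine(i.ToString("G17")); // 6.8999999999999995

        // The stored value is not exactly equal to the literal 6.9.
        Console.WriteLine(i == 6.9); // False
    }
}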
Binary representation and floating-point operations
To understand this behavior, one needs to understand the subtleties of binary representation and floating-point arithmetic. Floating-point formats represent real numbers approximately as finite binary values, which imposes precision limits.
The seemingly simple 0.69 cannot be represented exactly in binary: its binary expansion is an infinitely repeating fraction. The value stored in a double is therefore only an approximation of 0.69.
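To see that approximation directly, the sketch below (names are illustrative) prints the raw IEEE 754 bit pattern of 0.69 via BitConverter.DoubleToInt64Bits, along with its value at 17 significant digits.

using System;

class BitsDemo
{
    static void Main()
    {
        double d = 0.69;

        // Raw IEEE 754 bit pattern actually stored for 0.69.
        long bits = BitConverter.DoubleToInt64Bits(d);
        Console.WriteLine(Convert.ToString(bits, 2).PadLeft(64, '0'));

        // 17 significant digits reveal the stored approximation.
        Console.WriteLine(d.ToString("G17")); // 0.68999999999999995
    }
}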
Compiler optimization and stored constants
In the code above, both operands are known at compile time, so the compiler folds the multiplication and stores the result as a constant in the executable. That stored constant is a floating-point approximation of 6.9 that is slightly smaller than the exact value.
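The sketch below (names are illustrative) contrasts the constant-folded expression with the same product computed at run time from variables; under IEEE 754 arithmetic both typically round to the same approximation.

using System;

class FoldingDemo
{
    static void Main()
    {
        // Both operands are compile-time constants, so the compiler folds
        // 10 * 0.69 into a single constant embedded in the assembly.
        double folded = 10 * 0.69;

        // With variables, the multiplication happens at run time; IEEE 754
        // arithmetic rounds it to the same approximation.
        double a = 10;
        double b = 0.69;
        double computed = a * b;

        Console.WriteLine(folded.ToString("G17"));   // 6.8999999999999995
        Console.WriteLine(computed.ToString("G17")); // 6.8999999999999995
        Console.WriteLine(folded == computed);       // typically True
    }
}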
Workarounds and alternative data types
To solve this problem and obtain exact results for such values, use the decimal data type, which stores numbers in base 10 and can therefore represent terminating decimal fractions such as 0.69 exactly. The expression decimal i = 10m * 0.69m; yields exactly 6.9 (displayed as 6.90, since decimal preserves the combined scale of its operands).
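A minimal sketch of the decimal workaround (the class name is illustrative):

using System;

class DecimalDemo
{
    static void Main()
    {
        decimal i = 10m * 0.69m;

        Console.WriteLine(i);         // 6.90 (combined scale of the operands)
        Console.WriteLine(i == 6.9m); // True: no binary rounding error
    }
}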