Floating-Point Precision Issues in C# Multiplication
Let's examine the C# code snippet:
<code class="language-csharp">double i = 10 * 0.69;</code>
While you'd anticipate <code>i</code> to equal 6.9, the actual stored value prints as 6.8999999999999995. This discrepancy arises from the inherent limitations of binary floating-point arithmetic, which .NET's <code>double</code> type implements per the IEEE 754 standard.
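A minimal console sketch to reproduce this: the <code>"R"</code> (round-trip) format shows the shortest string that parses back to the exact double, and <code>"G17"</code> forces 17 significant digits, revealing that the literal 0.69 is itself already an approximation:

```csharp
using System;

class Program
{
    static void Main()
    {
        double i = 10 * 0.69;

        // Round-trip format: shortest string that reconstructs this exact double.
        Console.WriteLine(i.ToString("R"));      // 6.8999999999999995

        // G17 shows the full stored approximation of the literal 0.69.
        Console.WriteLine(0.69.ToString("G17")); // 0.68999999999999995
    }
}
```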
Floating-point numbers are represented in binary. Many values that terminate neatly in decimal, such as 0.69, have an infinitely repeating expansion in binary and therefore cannot be stored exactly. Instead, they are rounded to the nearest representable <code>double</code>.
The C# compiler folds the constant expression <code>10 * 0.69</code> at compile time rather than computing it at runtime, but this makes no difference to the result: constant folding uses the same IEEE 754 double arithmetic, so the stored value is the nearest double to the product of the (already approximated) operands, which is not exactly 6.9.
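A practical consequence is that a direct equality test against 6.9 fails, whether the product is folded at compile time or computed at runtime. The usual workaround is a tolerance-based comparison; the 1e-9 threshold below is an arbitrary illustrative choice, not a universal constant:

```csharp
using System;

class Program
{
    static void Main()
    {
        double folded = 10 * 0.69; // folded by the compiler at compile time
        double x = 0.69;
        double runtime = 10 * x;   // multiplied at runtime

        Console.WriteLine(folded == runtime);              // True: same IEEE 754 rounding either way
        Console.WriteLine(folded == 6.9);                  // False
        Console.WriteLine(Math.Abs(folded - 6.9) < 1e-9);  // True: tolerance-based comparison
    }
}
```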
To achieve greater accuracy, particularly in financial applications, use the <code>decimal</code> data type instead. <code>decimal</code> employs a base-10 representation, so values like 0.69 are stored exactly, eliminating the rounding errors inherent in binary floating-point arithmetic (at the cost of a smaller range and slower arithmetic).
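A short sketch contrasting the two types; note the <code>m</code> suffix required on <code>decimal</code> literals:

```csharp
using System;

class Program
{
    static void Main()
    {
        double d1 = 10 * 0.69;    // binary floating point: inexact
        decimal d2 = 10m * 0.69m; // base-10: 0.69 is stored exactly

        Console.WriteLine(d1); // 6.8999999999999995 (on .NET Core 3.0 and later)
        Console.WriteLine(d2); // 6.90 (decimal preserves the scale of its operands)
    }
}
```

The trailing zero in <code>6.90</code> is deliberate: <code>decimal</code> tracks the number of decimal places (the scale) of its operands, and the value still compares equal to <code>6.9m</code>.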