C# Floating-Point Arithmetic: Why Precision Matters
Floating-point numbers in C#, while commonly used, have inherent limitations that can produce unexpected results. Let's examine a case illustrating this imprecision.
Consider this C# code snippet:
```csharp
using System;

class Program
{
    static void Main(string[] args)
    {
        float f1 = 0.09f * 100f;
        float f2 = 0.09f * 99.999999f;
        Console.WriteLine(f1 > f2); // Surprisingly outputs "False"
    }
}
```
The program surprisingly outputs "False," even though f1 (0.09 × 100) should logically be larger than f2 (0.09 × 99.999999). This is due to the inherent limitations of floating-point representation.
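To see what is actually stored, the following minimal sketch prints both products with the "G9" round-trip format (which shows enough digits to uniquely identify the underlying float value) and tests the literal directly. The class and variable names are illustrative, not part of the original example:

```csharp
using System;

class Inspect
{
    static void Main()
    {
        float f1 = 0.09f * 100f;
        float f2 = 0.09f * 99.999999f;

        // "G9" is the round-trip format for float: it prints enough
        // digits to distinguish the stored value from its neighbors.
        Console.WriteLine(f1.ToString("G9")); // both lines print the
        Console.WriteLine(f2.ToString("G9")); // same digits

        // The root cause: 99.999999 has no exact float representation,
        // and the nearest representable value is exactly 100.0f.
        Console.WriteLine(99.999999f == 100f); // True
    }
}
```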
Floating-point types like float (32 bits in C#) have limited precision: they can represent only about 7 significant decimal digits. Here, the literal 99.999999f cannot be stored exactly. Near 100, adjacent float values are roughly 0.0000076 apart, so 99.999999 rounds to the nearest representable value, which is exactly 100.0f. Both expressions therefore multiply 0.09f by the same number, f1 and f2 end up holding identical values, and f1 > f2 evaluates to false.
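When code needs the ordering that the decimal math suggests, two common workarounds are comparing floats with an explicit tolerance, or switching to decimal, which stores base-10 literals such as 99.999999 exactly. A minimal sketch of both approaches follows; the epsilon value is an illustrative choice, not a universal constant:

```csharp
using System;

class Workarounds
{
    static void Main()
    {
        // Option 1: demand a clear margin instead of relying on the
        // exact ordering of two nearly equal float results.
        float a = 0.09f * 100f;
        float b = 0.09f * 99.999999f;
        const float epsilon = 1e-5f; // illustrative tolerance
        Console.WriteLine(a - b > epsilon); // False: too close to rank

        // Option 2: use decimal when exact base-10 behavior matters.
        // decimal represents 99.999999 exactly, so the expected
        // ordering holds.
        decimal d1 = 0.09m * 100m;       // 9.00
        decimal d2 = 0.09m * 99.999999m; // 8.99999991
        Console.WriteLine(d1 > d2);      // True
    }
}
```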
For a comprehensive understanding of floating-point precision and its implications, consult the definitive resource: "What Every Computer Scientist Should Know About Floating-Point Arithmetic."