Comparison of Doubles and Floats: Why Unexpected Results?
Floating-point numbers like doubles and floats play a crucial role in numerical computation. However, comparing these types can lead to puzzling results, as illustrated by the code snippet below:
<code class="python">float f = 1.1 double d = 1.1 if (f == d): # returns false!</code>
This unexpected behavior stems from two fundamental factors: precision and rounding.
Precision:
Floating-point numbers have finite precision, which limits the number of significant digits they can represent. A 32-bit float carries roughly 7 significant decimal digits, while a 64-bit double carries roughly 15 to 16; storing more digits simply requires more memory.
For instance, the fraction 1/3 (0.33333... in decimal) cannot be represented exactly in either type. A float stores approximately 0.33333334, while a double stores approximately 0.3333333333333333; both are close, but neither is exact, and they are not equal to each other.
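To see the precision gap directly, one option (a small illustrative C++ sketch, not part of the original snippet) is to print the same fraction stored in a float and in a double with extra digits; the exact trailing digits can vary by platform, but the pattern is typical:
<code class="cpp">#include <cstdio>

int main() {
    float  f = 1.0f / 3.0f;  // nearest float to 1/3 (~7 significant digits)
    double d = 1.0  / 3.0;   // nearest double to 1/3 (~15-16 significant digits)
    std::printf("float : %.17f\n", f);  // e.g. 0.33333334326744080
    std::printf("double: %.17f\n", d);  // e.g. 0.33333333333333331
    return 0;
}</code>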
Rounding:
Binary and decimal representations differ in which fractions they can express exactly. A fraction that is simple in decimal (e.g., 1/10 as 0.1) has an infinitely repeating representation in binary (0.0001100110011...).
Because a float or double holds only a finite number of bits, such values are rounded to the nearest representable value when stored. In the example code, the literal 1.1 is rounded one way when stored in the 32-bit float and another way when stored in the 64-bit double, so the two stored values are not exactly equal and the == comparison fails.
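Printing the stored values with extra digits makes the mismatch visible (again an illustrative C++ sketch; the exact digits may vary slightly by platform):
<code class="cpp">#include <cstdio>

int main() {
    float  f = 1.1f;  // nearest float to 1.1
    double d = 1.1;   // nearest double to 1.1
    std::printf("f stored as: %.17f\n", f);  // e.g. 1.10000002384185791
    std::printf("d stored as: %.17f\n", d);  // e.g. 1.10000000000000009
    std::printf("f == d     : %s\n", (f == d) ? "true" : "false");  // false
    return 0;
}</code>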
Conclusion:
Due to precision and rounding issues, comparing doubles and floats with equality (==) is unreliable. A more robust approach is to compare their absolute difference against a small epsilon value, treating the numbers as equal when the difference falls within that tolerance.
<code class="python">if abs(f - d) < epsilon: # epsilon is a small threshold</code>
This approach tolerates the small representation differences described above, so the comparison behaves predictably instead of hinging on exact bit patterns.
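As a minimal sketch of the idea (the helper name nearlyEqual and the default epsilon of 1e-6 are illustrative choices, not a standard API), a tolerance-based comparison could look like this:
<code class="cpp">#include <cmath>
#include <cstdio>

// Treat a and b as equal when their absolute difference is within epsilon.
// The right epsilon depends on the magnitude of the values being compared.
bool nearlyEqual(double a, double b, double epsilon = 1e-6) {
    return std::fabs(a - b) < epsilon;
}

int main() {
    float  f = 1.1f;
    double d = 1.1;
    std::printf("f == d          : %s\n", (f == d) ? "true" : "false");           // false
    std::printf("nearlyEqual(f,d): %s\n", nearlyEqual(f, d) ? "true" : "false");  // true
    return 0;
}</code>
For values of very different magnitudes, a relative tolerance (scaling epsilon by the size of the operands) is usually a better fit than a single absolute threshold.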