In many programming languages, including C#, floating-point numbers are used to represent decimal values. However, because a computer can represent floating-point numbers only with limited precision, these numbers suffer from accuracy problems.
Consider the following program:
<code class="language-csharp">class Program
{
    static void Main(string[] args)
    {
        float f1 = 0.09f * 100f;
        float f2 = 0.09f * 99.999999f;
        Console.WriteLine(f1 > f2);
    }
}</code>

This program prints False, which seems to contradict the fact that, mathematically, 0.09 × 100 > 0.09 × 99.999999.

Understanding the precision problem
The cause of this behavior is the way floating-point numbers are represented in memory. Each value is stored as a combination of a sign bit, an exponent (a power of the base), and a mantissa (the fractional part). The precision of the mantissa is limited.
In C#, a single-precision floating-point number (float) has a 23-bit mantissa, and a double-precision floating-point number (double) has a 52-bit mantissa. When a value cannot be represented exactly in those bits, it is rounded to the nearest representable number. In the example above, 0.09 itself cannot be stored exactly: the nearest float is about 0.0900000036, so the exact product 0.09f × 100f is about 9.0000004, which rounds to exactly 9.0 as a 32-bit float. Similarly, 99.999999 cannot be represented as a float; the nearest float is exactly 100.0, so 0.09f × 99.999999f also rounds to 9.0. The two results are therefore equal, and the program prints False.
To reduce the impact of this limited precision, floating-point comparisons are usually done with a tolerance rather than with a strict equality or ordering test. Note that C#'s float.Epsilon is the smallest positive subnormal float, about 1.401e-45; it is not the "machine epsilon" of IEEE 754 (the gap between 1.0 and the next representable float, about 1.19e-7 for float) and is far too small to be useful as a comparison tolerance. Instead, compare the difference between the two floating-point numbers against a tolerance chosen for the magnitudes involved: if the difference is smaller than the tolerance, the values can be treated as equal within the precision of the type.
The above is the detailed content of "Why Do Floating Point Comparisons in C# Sometimes Yield Unexpected Results?". For more information, see the related articles on the PHP Chinese website.