Why is the expression (double.MinValue == double.MinValue + 1) true?


The following C# code:

    using System;

    public class Program
    {
        public static void Main(string[] args)
        {
            // Your code goes here
            Console.WriteLine(double.MinValue == double.MinValue + 1);
            Console.WriteLine(int.MinValue == int.MinValue + 1);
        }
    }

has as output:

    True
    False

My question is: why does the first line print True? To me it should print False, since I'm comparing a number with itself plus 1.

asked by anonymous 21.09.2017 / 00:25

1 answer


First read Float, Double, and Decimal: what is the correct way to use them?

There it is explained that not every number can be represented exactly in binary format, so some numbers are stored only as approximations. Note the magnitude involved here: double.MinValue is -1.79769313486232E+308, a number with over 300 digits when written out.

Because this data type is inexact, when a value has an extreme magnitude (very low or very high), adding 1 does not actually change it: at such a scale the binary representation cannot distinguish one number from the next, so the bits of both are identical.
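You can verify the "identical bits" claim directly with `BitConverter.DoubleToInt64Bits`, which exposes the raw IEEE 754 bit pattern of a double (the class and variable names below are just illustrative):

```csharp
using System;

public class BitsDemo
{
    public static void Main()
    {
        // At this magnitude, the gap between adjacent representable doubles
        // is far larger than 1, so adding 1 rounds back to the same value.
        double min = double.MinValue;
        double minPlusOne = double.MinValue + 1;

        Console.WriteLine(min == minPlusOne);                          // True

        // Not just equal: the two values share the exact same bit pattern.
        Console.WriteLine(
            BitConverter.DoubleToInt64Bits(min) ==
            BitConverter.DoubleToInt64Bits(minPlusOne));               // True
    }
}
```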

But wait: long has the same size as a double, so how can the latter represent far more integers, plus an absurd quantity of fractional numbers, with the same number of bits? Simple: it does not represent all those numbers, only approximations of them. Very close numbers end up with the same binary representation, without even a single bit of difference between them. That is how the type reaches enormous magnitudes without being able to hold every exact value; it is as if it were only sampling the number line.
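A sketch of that "sampling" idea: the distance between consecutive representable doubles grows with magnitude. `Math.BitIncrement` (available since .NET Core 3.0) returns the next representable double, which lets us measure the gap:

```csharp
using System;

public class GapDemo
{
    public static void Main()
    {
        // Near 1.0 the gap between adjacent doubles is tiny (about 2.2E-16)...
        Console.WriteLine(Math.BitIncrement(1.0) - 1.0);

        // ...but near 1e17 the gap is already 16, larger than 1.
        Console.WriteLine(Math.BitIncrement(1e17) - 1e17);   // 16

        // So the integer 10^17 + 1 has no double of its own:
        // the addition rounds back to 1e17.
        Console.WriteLine(1e17 + 1 == 1e17);                 // True
    }
}
```

With a 52-bit fraction, the gap doubles every power of two; by the time you reach double.MinValue's magnitude (around 2^1024) the gap between neighbors is astronomically larger than 1.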

This is why I always say that if accuracy is important, do not use a binary encoded floating-point number.
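For instance, C#'s decimal type uses a base-10 encoding, so values like 0.1 that have no exact binary representation stay exact (a minimal illustration, not part of the original answer):

```csharp
using System;

public class DecimalDemo
{
    public static void Main()
    {
        // double: 0.1, 0.2, and 0.3 are all binary approximations,
        // and the rounding errors do not cancel out.
        Console.WriteLine(0.1 + 0.2 == 0.3);      // False

        // decimal: base-10 encoding represents these values exactly.
        Console.WriteLine(0.1m + 0.2m == 0.3m);   // True
    }
}
```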

21.09.2017 / 00:38