Note that there are two "maximum numbers". One is the largest representable floating-point value, which the other answers correctly identify (Number.MAX_VALUE, about 1.79E+308).
The other is the largest integer that can be represented exactly and compared unambiguously: Number.MAX_SAFE_INTEGER, which is 2^53 - 1, a 16-digit value. This matters for programs that need exact integer arithmetic, such as financial systems.
Above this threshold, adding small integers to a large number no longer gives exact results:
> Number.MAX_SAFE_INTEGER
9007199254740991
> Number.MAX_SAFE_INTEGER + 2
9007199254740992
> Number.MAX_SAFE_INTEGER + 3
9007199254740994
> Number.MAX_SAFE_INTEGER + 4
9007199254740996
> Number.MAX_SAFE_INTEGER + 5
9007199254740996
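The + 4 and + 5 results collide because, above 2^53, only every second integer has an exact double representation, and each sum is rounded to the nearest one. If a program has to keep counting exactly past that point (ledger amounts in cents, large IDs, and so on), two common options are to check the range with Number.isSafeInteger or to switch to BigInt, which holds integers of arbitrary size. A minimal sketch; the addCents helper is just an illustrative name:

// Refuse to continue once exact integer arithmetic can no longer be trusted.
function addCents(a, b) {
  const sum = a + b;
  if (!Number.isSafeInteger(sum)) {
    throw new RangeError("sum exceeds Number.MAX_SAFE_INTEGER");
  }
  return sum;
}

// BigInt (note the n suffix) stays exact beyond 2^53 - 1.
BigInt(Number.MAX_SAFE_INTEGER) + 2n   // 9007199254740993n (the float version above rounded to ...992)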
Another way to understand this: a double keeps only about 15 to 17 significant decimal digits, so a sum stays exact only if the two operands differ in magnitude by no more than roughly 15 orders of magnitude; beyond that, the smaller operand is partly or entirely lost to rounding.
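The same effect is easy to see with round powers of ten:

> 1e15 + 1
1000000000000001
> 1e16 + 1
10000000000000000
> 1e16 + 1 === 1e16
true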