What data type (double, float or decimal) should I use to represent currency in .NET with C#?

18

Although I am aware of what would be best to use, I am asking this question for teaching purposes, since I see several examples of people using double in C#. But I have had problems with double in currency calculations, and I have read in several places that the type loses accuracy in some cases.
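To give an idea, this is the kind of problem I mean (an illustrative sketch; the exact text printed for the double sum may vary by .NET version, but the comparison result does not):

```csharp
using System;

class CurrencyAccuracy
{
    static void Main()
    {
        // Summing 0.1 ten times with double does not give exactly 1.0,
        // because 0.1 has no exact binary representation
        double totalDouble = 0.0;
        for (int i = 0; i < 10; i++)
            totalDouble += 0.1;

        Console.WriteLine(totalDouble == 1.0); // False (the sum is 0.9999999999999999...)

        // The same sum with decimal (note the m suffix) is exact
        decimal totalDecimal = 0.0m;
        for (int i = 0; i < 10; i++)
            totalDecimal += 0.1m;

        Console.WriteLine(totalDecimal == 1.0m); // True
    }
}
```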

So, in which situations would it be best to use each type?

asked by anonymous 01.04.2014 / 22:30

2 answers

27

Decimal is the ideal type for monetary calculations. It has a huge range (-79,228,162,514,264,337,593,543,950,335 to 79,228,162,514,264,337,593,543,950,335) and one of the smallest error margins for rounding.
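For instance, a simple monetary calculation with decimal (the m suffix marks decimal literals; the values are only illustrative):

```csharp
using System;

class DecimalExample
{
    static void Main()
    {
        decimal price = 19.99m;
        decimal quantity = 3m;
        decimal total = price * quantity;

        Console.WriteLine(total);            // 59.97, exact
        Console.WriteLine(decimal.MinValue); // -79228162514264337593543950335
        Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335
    }
}
```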

Double is more suitable for general scientific calculations, where the margin of error is not negligible but is tolerable. Despite its larger range of values, the binary representation of the mantissa and exponent produces well-known rounding problems.
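The classic symptom of that binary rounding, just to illustrate the comparison:

```csharp
using System;

class DoubleRounding
{
    static void Main()
    {
        // Binary floating point cannot represent 0.1 or 0.2 exactly
        Console.WriteLine(0.1 + 0.2 == 0.3);    // False
        Console.WriteLine(0.1m + 0.2m == 0.3m); // True (decimal)
    }
}
```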

Float is a Double with fewer bytes in its representation, and therefore has a smaller range of values and precision and rounding problems similar to Double's.
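A sketch of how the smaller precision of float (roughly 7 significant digits) already loses the cents of a larger amount; the stored values below are what the IEEE 754 single and double precision formats actually hold:

```csharp
using System;

class FloatPrecision
{
    static void Main()
    {
        float priceFloat = 1234567.89f;  // needs 9 significant digits
        double priceDouble = 1234567.89;

        Console.WriteLine((double)priceFloat); // 1234567.875 -> the cents were rounded away
        Console.WriteLine(priceDouble);        // 1234567.89  -> double still keeps them
    }
}
```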

01.04.2014 / 22:33
12

The most suitable type is decimal, according to MSDN itself:

"Compared to floating-point types, the decimal type has more precision and a smaller range, which makes it appropriate for financial and monetary calculations."

It has 28-29 digits of precision, enough not to affect the cents in calculations.

The double has 15-16 digits, and the float only 7 digits of precision.
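A quick way to see those digits of precision (the output assumes the round-trip formatting of recent .NET; older versions may print fewer digits):

```csharp
using System;

class PrecisionDigits
{
    static void Main()
    {
        Console.WriteLine(1f / 3f);   // 0.33333334                     (~7 digits)
        Console.WriteLine(1.0 / 3.0); // 0.3333333333333333             (~16 digits)
        Console.WriteLine(1m / 3m);   // 0.3333333333333333333333333333 (28 digits)
    }
}
```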

One observation: if you do not perform precise calculations in the system, float could be used, since it occupies only 32 bits (against decimal's 128 bits). But in practice this is not a problem unless you have a large array of decimals, for example.

In that case, you should decide whether you prefer the better use of space or the consistency of having all monetary variables in your code typed as decimal (a good programming practice).
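A rough sketch of that space trade-off (sizeof for these built-in types is a compile-time constant in C#):

```csharp
using System;

class StorageSize
{
    static void Main()
    {
        Console.WriteLine(sizeof(float));   // 4 bytes
        Console.WriteLine(sizeof(double));  // 8 bytes
        Console.WriteLine(sizeof(decimal)); // 16 bytes

        // One million monetary values: ~4 MB as float versus ~16 MB as decimal
        decimal[] prices = new decimal[1000000];
        Console.WriteLine(prices.Length * sizeof(decimal)); // 16000000
    }
}
```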

01.04.2014 / 22:34