Performance differences and appropriate uses of numeric types

7

I would like to know the difference between the Long , Double , Float , Decimal and Int types, taking into consideration when each is best used in real cases. E.g.: "use float for interest rates because it is ...". And also the difference in performance between them. E.g.: "int has better performance than long because ..."

I think the question is quite valid, considering that we often get this wrong in our day-to-day work, and even for the smallest performance difference it is always worth thinking about.

    
asked by anonymous 13.03.2014 / 15:23

2 answers

8

In .NET (as in many other languages), there is a separation between integer types and rational types, so I will split my explanation along these two categories.

Integer Types

There are several integer types, and they too can be separated into two groups: those that accept negative values and those that do not.

Accept negatives (signed): Int32 is the most common, but there are others: Int16 , Int64 (there is no Long type, but C# has the alias long , which is the same as Int64 ) and SByte .

│ Type  │ Bits │          Minimum           │          Maximum          │ C# alias │ Literal │
├───────┼──────┼────────────────────────────┼───────────────────────────┼──────────┼─────────┤
│ SByte │   8  │ -128                       │ 127                       │  sbyte   │         │
│ Int16 │  16  │ -32,768                    │ 32,767                    │  short   │         │
│ Int32 │  32  │ -2,147,483,648             │ 2,147,483,647             │  int     │    0    │
│ Int64 │  64  │ -9,223,372,036,854,775,808 │ 9,223,372,036,854,775,807 │  long    │    0L   │
└───────┴──────┴────────────────────────────┴───────────────────────────┴──────────┴─────────┘

The letters that appear in the literals can be uppercase or lowercase: 0L is the same as 0l .

Accept only positive values (unsigned):

│ Type   │ Bits │ Minimum │           Maximum          │ C# alias │ Literal │
├────────┼──────┼─────────┼────────────────────────────┼──────────┼─────────┤
│ Byte   │   8  │    0    │ 255                        │  byte    │         │
│ UInt16 │  16  │    0    │ 65,535                     │  ushort  │         │
│ UInt32 │  32  │    0    │ 4,294,967,295              │  uint    │   0U    │
│ UInt64 │  64  │    0    │ 18,446,744,073,709,551,615 │  ulong   │   0UL   │
└────────┴──────┴─────────┴────────────────────────────┴──────────┴─────────┘

The letters that appear in the literals can be uppercase or lowercase: 0UL is the same as 0ul .

In terms of usage, there is no major difference in performance between these types when they are used as local variables or method parameters. Generally, the type Int32 ( int ) is used for these purposes, unless the expected values exceed the limits of int , in which case Int64 ( long ) is used.
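For instance, a minimal sketch (with illustrative values only) of what happens when a counter outgrows int and why long is the usual next step:

    using System;

    class IntVsLong
    {
        static void Main()
        {
            int smallCounter = int.MaxValue;
            // int arithmetic is unchecked by default and wraps around silently.
            Console.WriteLine(unchecked(smallCounter + 1));   // -2147483648

            long bigCounter = int.MaxValue;
            Console.WriteLine(bigCounter + 1);                // 2147483648

            // With "checked", the overflow throws instead of wrapping.
            try
            {
                checked { smallCounter += 1; }
            }
            catch (OverflowException)
            {
                Console.WriteLine("int overflowed; long covers this range.");
            }
        }
    }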

Byte ( byte ), in turn, is really only used for working with binary data; I have never seen it used in any other kind of code.
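A small illustrative sketch of that typical use of byte (the file name here is hypothetical):

    using System;
    using System.IO;

    class ByteUsage
    {
        static void Main()
        {
            // Raw file/stream contents are naturally represented as byte[].
            byte[] data = File.ReadAllBytes("example.bin");  // hypothetical file
            Console.WriteLine($"Read {data.Length} bytes");
        }
    }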

The other types are usually only used inside struct or class data structures with many fields, or in arrays, so that they occupy less memory... but this only really makes sense for heavily used structures, or for very large arrays on the order of millions or even billions of items.
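A rough back-of-the-envelope sketch of that memory argument, using a made-up element count:

    using System;

    class ArrayMemory
    {
        static void Main()
        {
            const long N = 100_000_000;  // hypothetical: 100 million readings

            // Same element count, half the memory when short is enough:
            Console.WriteLine(N * sizeof(short) / (1024 * 1024) + " MB as short"); // ~190 MB
            Console.WriteLine(N * sizeof(int)   / (1024 * 1024) + " MB as int");   // ~381 MB
        }
    }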

Beyond all that, I see the unsigned types (those that accept only positive values) being used in bit-level operations (commonly called bitwise operations), because it is easier to work with all of their bits, which is more awkward when there is a sign bit.
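A minimal sketch of that kind of bit manipulation, where uint keeps the right shift purely logical (the flag values are arbitrary):

    using System;

    class BitwiseSketch
    {
        static void Main()
        {
            uint flags = 0b10100001u;                 // arbitrary flag bits

            bool bit7Set = (flags & (1u << 7)) != 0;  // test a bit
            Console.WriteLine(bit7Set);               // True
            flags |= 1u << 1;                         // set bit 1
            flags &= ~(1u << 0);                      // clear bit 0
            Console.WriteLine(Convert.ToString(flags, 2).PadLeft(8, '0')); // 10100010

            // With the sign bit set, a signed shift propagates 1s; unsigned does not.
            int signed = unchecked((int)0x80000000);
            Console.WriteLine((signed >> 4).ToString("X8"));        // F8000000
            Console.WriteLine((0x80000000u >> 4).ToString("X8"));   // 08000000
        }
    }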

Rational Types

There are only 3 of these in .NET: Single , Double and Decimal .

│ Type      │     Single    │          Double          │                 Decimal                 │
├───────────┼───────────────┼──────────────────────────┼─────────────────────────────────────────┤
│ C# alias  │     float     │          double          │                 decimal                 │
│ Minimum   │ -3.402823e+38 │ -1.7976931348623157e+308 │ -79,228,162,514,264,337,593,543,950,335 │
│ Maximum   │  3.402823e+38 │  1.7976931348623157e+308 │  79,228,162,514,264,337,593,543,950,335 │
│ Literal   │      0f       │        0.0 or 0d         │                   0m                    │
│ Exp. base │       2       │            2             │                   10                    │
└───────────┴───────────────┴──────────────────────────┴─────────────────────────────────────────┘

The letters that appear in the literals can be uppercase or lowercase: 0M is the same as 0m .

The base-2 types ( Single and Double ) are handled by instructions of the processor itself, in a unit called the floating-point unit (FPU)... which in current processors is so optimized that floating-point math operations can be as fast as operations on integer types.

The types Single and Double are used when the values do not need to correspond exactly to decimal fractions. Examples: calculations involving physics, used in engineering or in simulations made in games, use these types.
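A small illustrative sketch of that kind of calculation, with made-up numbers (ideal projectile range, ignoring air resistance):

    using System;

    class Physics
    {
        static void Main()
        {
            double g = 9.80665;                    // gravitational acceleration, m/s²
            double v0 = 25.0;                      // hypothetical initial speed, m/s
            double angle = 40.0 * Math.PI / 180.0; // launch angle in radians

            // Range of an ideal projectile: R = v0² · sin(2θ) / g
            double range = v0 * v0 * Math.Sin(2.0 * angle) / g;
            Console.WriteLine($"Range ≈ {range:F2} m");
        }
    }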

In terms of performance, the types Single and Double are the same on current machines, because the FPU converts both internally to 80 bits. So the only real advantage of using Single is in terms of memory usage.

The type Decimal exists to support operations that must match the decimal fractions of the real world, as when working with monetary values... incidentally, I think the M literal comes from money (but here I am speculating).
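A minimal sketch of why that matters for money: 0.1 has no exact base-2 representation, but it is exact in decimal :

    using System;

    class MoneySketch
    {
        static void Main()
        {
            double d = 0.0;
            decimal m = 0.0m;

            // Add ten cents, ten times.
            for (int i = 0; i < 10; i++)
            {
                d += 0.1;
                m += 0.1m;
            }

            Console.WriteLine(d == 1.0);    // False: binary rounding error accumulated
            Console.WriteLine(m == 1.0m);   // True: 0.1m is exact in base 10
        }
    }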

The type Decimal has 128 bits, of which 96 are used to represent the internal value, called the mantissa, and the rest are used to indicate a base-10 divider... it is practically an exponent, just like the one that exists for base 2, except that it is in base 10 and only negative. Therefore, Decimal is not able to represent numbers as large as Single and Double (since those two accept positive exponents). In compensation, the Decimal type has an absurdly greater precision.
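A small sketch of that internal layout, using decimal.GetBits (the example value is arbitrary):

    using System;

    class DecimalLayout
    {
        static void Main()
        {
            int[] parts = decimal.GetBits(123.45m);

            // parts[0..2] hold the 96-bit mantissa (low/mid/high 32 bits);
            // parts[3] holds the sign bit and the power-of-ten scale (0..28).
            long mantissaLow = (uint)parts[0] | ((long)(uint)parts[1] << 32);
            int scale = (parts[3] >> 16) & 0xFF;
            bool negative = (parts[3] & unchecked((int)0x80000000)) != 0;

            Console.WriteLine(mantissaLow);  // 12345
            Console.WriteLine(scale);        // 2  -> 12345 / 10^2 = 123.45
            Console.WriteLine(negative);     // False
        }
    }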

In terms of performance, the type Decimal is very poor compared to the base-2 types, since all of its mathematical operations are done in the arithmetic logic unit (ALU) and are therefore subdivided into several calculation steps, regardless of the operation being performed.
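A rough, non-rigorous timing sketch of that difference (the iteration count and constants are arbitrary; exact ratios vary by machine and runtime):

    using System;
    using System.Diagnostics;

    class DecimalVsDouble
    {
        static void Main()
        {
            const int N = 1_000_000;

            var sw = Stopwatch.StartNew();
            double d = 1.0;
            for (int i = 0; i < N; i++) d = d * 1.000001 + 0.5;
            sw.Stop();
            Console.WriteLine($"double : {sw.ElapsedMilliseconds} ms (result {d})");

            sw.Restart();
            decimal m = 1.0m;
            for (int i = 0; i < N; i++) m = m * 1.000001m + 0.5m;
            sw.Stop();
            Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms (result {m})");
        }
    }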

    
15.03.2014 / 00:39
2

Basically precision is the big difference between them:

Float : 7 Digits (32 bit)

Double : 15-16 Digits (64 bit)

Decimal : 28-29 Digits (128 bit)

Decimal ( decimal ) has much more precision than either of the others and is used in almost all financial applications, which require a high degree of precision. On the other hand, Decimal is much slower ( up to 20x ) than double / float .
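A quick sketch of those digit counts, assigning the same long literal to each type (the printed output may vary slightly between runtimes):

    using System;

    class PrecisionSketch
    {
        static void Main()
        {
            float   f = 1.234567890123456789012345678f;
            double  d = 1.234567890123456789012345678;
            decimal m = 1.234567890123456789012345678m;

            Console.WriteLine(f);  // roughly 7 significant digits survive
            Console.WriteLine(d);  // roughly 15-16 significant digits survive
            Console.WriteLine(m);  // roughly 28-29 significant digits survive
        }
    }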

    
13.03.2014 / 15:32