Type Double and type Decimal

7

I found on SO a question with an answer well accepted by the community #

But I have never had problems with double for monetary calculations. I did have them with float, which has already made my hair stand on end. So, starting from that answer stating that Decimal is the correct choice (or is that only for C# and has nothing to do with MySQL?), I began testing it, and it left me a little insecure. For example, saying that I will always store 2 decimal digits: there are cases where the user will "need" (an exception) to register the value with 4 digits, and the database in turn will only record the first two. For this reason I feel safer with double.
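
For example, roughly what I mean (just a sketch, the table and column names are invented by me):

CREATE TABLE pagamento (
  valor DECIMAL(10,2) NOT NULL
);

INSERT INTO pagamento (valor) VALUES (12.3412);
-- only the declared two decimal places are kept: stored as 12.34

SELECT valor FROM pagamento;  -- 12.34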

Question - Is there example source code that shows me this double "rounding error" so I can see it, or a reference book or trusted site that backs this recommendation of using Decimal to record monetary values in MySQL?

Note 1 - The data types I refer to are MySQL's, because I have also seen a quote from MSDN #

Note 2 - There is a link from MySQL itself. My English is not very good, so tell me if I am getting it wrong. It says:

  

maximum number of digits for DECIMAL is 65

And right after that it says:

  

Before MySQL 5.0.3, the maximum range of DECIMAL values is the same as for DOUBLE

That is, the largest DECIMAL is equal to the largest DOUBLE.

Continuing with the site's text:

  

but the current range for a given DECIMAL column can be constrained by the precision or scale for a given column

From that part, what I understand is that DECIMAL is limited by the declared precision. Or am I wrong?
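
If I understood it right, that means something like this (just sketching; the table name is made up by me):

CREATE TABLE limites (
  v DECIMAL(5,2) NOT NULL
);
-- a DECIMAL(5,2) column can only hold values from -999.99 to 999.99

INSERT INTO limites (v) VALUES (1234.50);
-- in strict mode this fails with an "Out of range value" error;
-- otherwise the value is clipped to 999.99 with a warning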

I am going to put a test here; I hope someone can show me this "error" in a similar way.

CREATE TABLE `teste` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `numero` double NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8;

INSERT INTO `test`.`teste` (`numero`) VALUES ('0.1');
INSERT INTO `test`.`teste` (`numero`) VALUES ('99.9');

SELECT sum(numero) FROM teste;

mysql>     SELECT sum(numero) FROM teste;
+-------------+
| sum(numero) |
+-------------+
|         100 |
+-------------+
1 row in set (0.00 sec)
    
asked by anonymous 17.04.2015 / 20:14

1 answer

11

I doubt you never had problems; I think you just never noticed them. Most cases go wrong by 1 cent. Of course, 1-cent bugs can turn into thousands of dollars when multiplied.
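
A typical way a cent goes missing can be sketched directly in MySQL (in the query below, 2.675e0 is a DOUBLE literal and 2.675 without the exponent is a DECIMAL literal):

SELECT
  ROUND(2.675e0, 2) AS com_double,   -- usually 2.67: the double closest to 2.675
                                     -- is slightly below it, so it rounds down
  ROUND(2.675,   2) AS com_decimal;  -- 2.68: exact-value rounding works as expected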

The reason so many people use the wrong type is precisely that the error is not so obvious, and in the cases where it becomes a huge problem the person does not know how to check for it.

In accounting, 1 cent of error ruins the work. In other activities it may only cause losses, often small ones, or it may lead to lawsuits from business partners or the government. I will not go into detail because I am not a lawyer.

You have a trusted site that says this, and it is Stack Overflow. You have multiple references there.

In my answer I gather several links to other questions and answers on the subject. If you follow all the links there, you will have all the information you need. You will learn that the problem occurs because the binary representation of numbers prevents all numbers from being represented exactly.

There is also a reference in the MySQL documentation, and a test running on SQLFiddle.
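
The kind of test those links run boils down to something like this (a sketch; literals written with e0 are DOUBLE, the others are DECIMAL):

SELECT
  0.1e0 + 0.2e0 AS soma_double,   -- something like 0.30000000000000004,
                                  -- depending on how the version prints doubles
  0.1   + 0.2   AS soma_decimal;  -- exactly 0.3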

I imagine that everyone who participated in those answers, even if only by voting, agrees that the DECIMAL type should be used. I think that already gives several sources, and that variety, obtained in a place where what is posted is validated by peers, provides good reliability.

In the link posted in the comment, a person says that the DECIMAL and NUMERIC types are obsolete. Ask them to cite a source. They probably did not understand the question or did not know what they were talking about. Being a moderator does not mean understanding the subject matter of the site. To learn more about these types, there is a question here on the site.

Some examples of inaccuracy can be found in the Wikipedia article.

A mere 0.1 is already problematic. I will show it in C# because it is the mainstream language I work with best, but the "error" comes from the processor, not the language; the same applies to PHP, JS, SQL, etc. Adding 0.1 to a double 1000 times should give 100, right? So why does it give 99.9999999999986?

See the dotNetFiddle result.

I have also done it in PHP.
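
The same experiment can be run straight in MySQL (a sketch; it assumes MySQL 8.0+ for the recursive CTE, and that the default cte_max_recursion_depth of 1000 is enough, otherwise raise it):

WITH RECURSIVE seq AS (
  SELECT 1 AS n
  UNION ALL
  SELECT n + 1 FROM seq WHERE n < 1000
)
SELECT
  SUM(0.1e0) AS soma_double,   -- accumulated as DOUBLE: roughly 99.9999999999986
  SUM(0.1)   AS soma_decimal   -- accumulated as DECIMAL: exactly 100.0
FROM seq;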

Some things were added later to the question.

If you think Microsoft's site has unreliable information, it is up to you to prove that something there is wrong. If no one else has complained, it is probably right. If the problem is that the site belongs to MS and the whole company is wrong, then I will not be able to convince you of anything, because then you must believe in the Saci Pererê as well.

The quotes from the MySQL documentation have nothing to do with the problem. But at least they show that the types mentioned above are not obsolete. The DECIMAL type has no imprecision: the number you put into it is the number that will be used, as opposed to DOUBLE, which uses a number that only approximates what you tried to store in it. Of course, the DECIMAL will have as many decimal places as you declare. Of course half a cent cannot be stored in a DECIMAL with only two decimal places. But that is the programmer's decision. The DOUBLE changes the value, and that happens regardless of the programmer's will.
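
A small sketch of that difference (the table and column names are made up): the DECIMAL column keeps exactly the value you gave it, within its declared scale, while the DOUBLE column keeps the nearest binary approximation, which shows up as soon as you do arithmetic with it:

CREATE TABLE comparacao (
  exato DECIMAL(10,4) NOT NULL,
  aprox DOUBLE        NOT NULL
);

INSERT INTO comparacao (exato, aprox) VALUES (0.1, 0.1);

SELECT
  exato * 3 = 0.3 AS decimal_confere,  -- 1: 0.1000 * 3 is exactly 0.3000
  aprox * 3 = 0.3 AS double_confere    -- 0: the stored 0.1 is only an approximation
FROM comparacao;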

Understand the problem by reading everything I have pointed you to, and you will realize that the statement "That is, the largest DECIMAL is equal to the largest DOUBLE" is completely false. As I said, those quotes do not talk about the "rounding problem" caused by binary representation.

MySQL's DOUBLE may even hide the error a little better, but at some point it will show up, and making it show up is not that difficult.

This may also be useful: What is the correct way to use the float, double and decimal types?

    
17.04.2015 / 20:41