I'm having problems summing decimal values in my application. The differences that appear are small, but since I'm dealing with total amounts of money, I need the sums not to come out with repeating decimals.
I went looking for information on why this happens, and in some articles in English I found the term Floating-Point Arithmetic, with an explanation that I translated as follows:
Why do numbers like 0.1 + 0.2 not add up to a nice round 0.3, and instead I get a strange result like 0.30000000000000004 or 0.29999999999999999?
Because internally, computers use a format (binary floating-point) that cannot accurately represent numbers such as 0.1, 0.2, or 0.3 at all.
When the code is compiled or interpreted, "0.1" is already rounded to the nearest number in that format, which results in a small rounding error even before the calculation happens.
PS: I summarized the quote to make it easier to understand, knowing that the original explanation is more complex.
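To make the behavior from the quote concrete, here is a minimal sketch (assuming Java, since I talk about double and float below) that reproduces the rounding error:

```java
public class FloatingPointDemo {
    public static void main(String[] args) {
        // 0.1 and 0.2 are each rounded to the nearest binary floating-point
        // value, so their sum is not exactly 0.3.
        double sum = 0.1 + 0.2;
        System.out.println(sum);          // 0.30000000000000004
        System.out.println(sum == 0.3);   // false
    }
}
```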
I used double for summing my values and ran into several rounding errors; after that I switched to float, where I seemed to get better precision. With that in mind, I would like to confirm whether there is a type for working with money that is more accurate than float.
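In case it helps to illustrate the kind of sum I mean, here is a hypothetical sketch (again assuming Java) of repeatedly adding a monetary amount with double; the total drifts away from the exact value:

```java
public class MoneySumDemo {
    public static void main(String[] args) {
        // Adding an amount of 0.10 ten times with double
        // does not give exactly 1.00.
        double total = 0.0;
        for (int i = 0; i < 10; i++) {
            total += 0.10;
        }
        System.out.println(total);        // 0.9999999999999999
        System.out.println(total == 1.0); // false
    }
}
```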
For those who want to delve deeper into the question, here is a link to an article that explains it in a very complete (and complex) way: