Why, in Python, does 0.03 % 0.01 give 0.009999999999999998 and not 0?

>>> 0.03 % 0.01
0.009999999999999998

Why does it give this result, when the remainder of the division should be 0?

And also, instead of 3, why does it give:

>>> 0.03 // 0.01
2.0
    
asked by anonymous 26.06.2018 / 16:28

2 answers


Exactly for the same reason that 0.1 + 0.7 is 0.7999999999999999 and not 0.8.

Which, in short, comes down to: IEEE 754.
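You can check it directly in the interpreter:

>>> 0.1 + 0.7
0.7999999999999999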

To work around the problem, you need to use the decimal module:

from decimal import Decimal

a = Decimal('0.03')
b = Decimal('0.01')

print(a % b)  # 0.00
print(a // b)  # 3
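If what you need is exact rational arithmetic rather than fixed decimal places, the fractions module is another option; a minimal sketch along the same lines (not part of the original answer):

from fractions import Fraction

a = Fraction('0.03')  # stored exactly as 3/100
b = Fraction('0.01')  # stored exactly as 1/100

print(a % b)   # 0
print(a // b)  # 3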
    
26.06.2018 / 16:44
  

Short answer: Precision problems in floating-point operations.

Decimal numbers are represented on the computer as binary fractions. For example, 0.125 (base 10) is 0.001 (base 2). Nothing new, but if it is, take a look at this short summary on Wikipedia about floating point.

The problem arises when we enter numbers that cannot be described exactly by a finite binary fraction, because they result in an infinite expansion, just as 1/3 = 0.33(3) does in the decimal system.
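A quick way to see the difference (a small illustration, not from the original answer): 0.125 has an exact binary representation, while 0.1 does not, so repeated sums behave differently:

>>> 0.125 + 0.125 + 0.125 == 0.375
True
>>> 0.1 + 0.1 + 0.1 == 0.3
False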

Computers do not have an infinite number of bits, so they use approximate representations of the numbers they want to store. These representations are very close, but they are still not exactly the number in question. For example, the true decimal value of the approximation stored for 0.1 is:

>>> 0.1
0.1000000000000000055511151231257827021181583404541015625
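One way to inspect this stored value yourself (a small addition, using the decimal module mentioned in the other answer) is to build a Decimal directly from the float:

>>> from decimal import Decimal
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')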

Python (among other languages) can detect these cases and present the user with a "rounded" representation of the value, which corresponds to what would be expected:

>>> 1 / 10
0.1

However, this does not change the value that is in memory. As such, doing 0.03 % 0.01 is not really taking the remainder of dividing 0.03 by 0.01, but rather the remainder of dividing the memory representation of 0.03 by the memory representation of 0.01, which results in the value you see:

>>> 0.03 % 0.01
0.009999999999999998
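To make the mechanics concrete (a sketch not present in the original answer): the value stored for 0.01 is slightly above 0.01, so three of them do not fit inside the value stored for 0.03; the floor division therefore gives 2 and the remainder is almost a whole 0.01:

>>> from decimal import Decimal
>>> Decimal(0.03) < 3 * Decimal(0.01)
True
>>> 0.03 // 0.01
2.0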

Source: The Python Tutorial - Floating Point Arithmetic: Issues and Limitations

    
26.06.2018 / 16:44