Floating-point problem Python 3

I'm writing an algorithm where I increment float variables by 0.2. However, after a few increments, instead of going from 2.2 to 2.4, for example, the program produces 2.4000000000000004.

I've read about this Python bug (I know it's not really a bug, but I couldn't find a better word), but I cannot find a reference to study for a solution. Here's the algorithm:

J1, J2, J3, I = 1.0, 2.0, 3.0, 0

while I <= 2:
    print('I={} J={}'.format(I, J1))
    print('I={} J={}'.format(I, J2))
    print('I={} J={}'.format(I, J3))
    J1 += 0.2
    J2 += 0.2
    J3 += 0.2
    I += 0.2

After a few iterations, instead of clean values like 2.4, the output starts showing values like 2.4000000000000004.
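The surprising value can be reproduced in isolation in the interpreter, without the loop:

>>> 2.2 + 0.2
2.4000000000000004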
asked by anonymous 06.02.2018 / 23:48

3 answers

Although @Pedro's answer was a great help, I found the answer, with examples and everything, on the English-language Stack Overflow. Here is the link.

For anyone interested, my code, now working perfectly, ended up like this:

from decimal import Decimal as D

# 1.0, 2.0, 3.0 and 0 are exactly representable, so passing them as floats
# is safe here; in general, building Decimals from strings (D("1.0")) is safer.
J1, J2, J3, I = D(1.0), D(2.0), D(3.0), D(0)

while I <= 2:
    print('I={} J={}'.format(I, J1))
    print('I={} J={}'.format(I, J2))
    print('I={} J={}'.format(I, J3))
    # the increment must be built from a string, since 0.2 has no exact
    # binary floating-point representation
    J1 += D("0.2")
    J2 += D("0.2")
    J3 += D("0.2")
    I += D("0.2")
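With Decimal the increments come out exact; the case from the question, for instance:

>>> from decimal import Decimal as D
>>> D("2.2") + D("0.2")
Decimal('2.4')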
08.02.2018 / 15:40
It's not a Python bug, but an inherent limitation of how computers represent floating-point numbers.

>>> 0.1 + 0.3
0.4
>>> 0.1 + 0.2
0.30000000000000004
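A way to see what is actually stored is to print the float with more digits than the default representation shows:

>>> format(0.1, '.20f')
'0.10000000000000000555'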

Fortunately, Python gives us a simple way to solve the problem: the decimal module.

>>> from decimal import Decimal
>>> Decimal('0.1') + Decimal('0.2')
Decimal('0.3')

It is important to note that, when using Decimal, we should create the number from a string. Otherwise, the argument is interpreted as a floating-point number and we have the same problem:

>>> Decimal(0.1) + Decimal(0.2)
Decimal('0.3000000000000000166533453694')
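That long value appears because converting a float to Decimal is exact, so it exposes what was really stored for 0.1:

>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')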

This page shows how several languages react to the same problem.

07.02.2018 / 01:37
As Pedro said, it's not a Python bug.

If you know that the increment will always be 0.2 (or any fixed number), you can simply store the values multiplied by 5 (the multiplicative inverse of the increment) as integers and divide by 5.0 whenever you need the value as a float. This avoids accumulating rounding errors from repeated additions.

As requested, a simple example:

J1, J2, J3, I = 5, 10, 15, 0  # values scaled by 5 and stored as exact integers

while I <= 10:  # 10 in the scaled representation corresponds to 2.0
    print( 'I={} J={}'.format(I/5.0, J1/5.0) )  # divide by 5.0 only for display
    print( 'I={} J={}'.format(I/5.0, J2/5.0) )
    print( 'I={} J={}'.format(I/5.0, J3/5.0) )
    J1 += 1  # an increment of 1 here is an increment of 0.2 in the original scale
    J2 += 1
    J3 += 1
    I += 1

With a result of:

I=0.0 J=1.0
I=0.0 J=2.0
I=0.0 J=3.0
I=0.2 J=1.2
I=0.2 J=2.2
I=0.2 J=3.2
I=0.4 J=1.4
I=0.4 J=2.4
I=0.4 J=3.4
I=0.6 J=1.6
I=0.6 J=2.6
I=0.6 J=3.6
I=0.8 J=1.8
I=0.8 J=2.8
I=0.8 J=3.8
I=1.0 J=2.0
I=1.0 J=3.0
I=1.0 J=4.0
I=1.2 J=2.2
I=1.2 J=3.2
I=1.2 J=4.2
I=1.4 J=2.4
I=1.4 J=3.4
I=1.4 J=4.4
I=1.6 J=2.6
I=1.6 J=3.6
I=1.6 J=4.6
I=1.8 J=2.8
I=1.8 J=3.8
I=1.8 J=4.8
I=2.0 J=3.0
I=2.0 J=4.0
I=2.0 J=5.0

It would probably be best to encapsulate the multiplication and division inside a class, but I don't know Python well enough to give a good example of this.
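A minimal sketch of that idea, assuming a fixed step of 0.2 and using a hypothetical class name ScaledValue (not from any library):

class ScaledValue:
    """Holds a value as an exact integer count of 0.2 steps (hypothetical helper)."""
    SCALE = 5  # multiplicative inverse of the 0.2 step

    def __init__(self, value):
        # round() protects against inputs that are not exact multiples of 0.2
        self.units = round(value * self.SCALE)

    def increment(self):
        self.units += 1  # advance by exactly one 0.2 step

    def as_float(self):
        return self.units / self.SCALE

j = ScaledValue(2.2)
j.increment()
print(j.as_float())  # prints 2.4, with no accumulated error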

I've also heard of classes that store fractions instead of floating-point numbers, so they can represent (almost) any rational number without losing accuracy.
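In Python this already exists in the standard library as fractions.Fraction; for example:

>>> from fractions import Fraction
>>> Fraction('2.2') + Fraction('0.2')
Fraction(12, 5)
>>> float(Fraction(12, 5))
2.4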

I think Python's Decimal class already does something similar to what I'm describing: it stores an integer and keeps a power of ten to do the division (or multiplication). So I think Decimal is similar to a floating-point type, but with powers of ten instead of powers of two.
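That representation can be inspected with as_tuple(), which exposes the stored digits and the power-of-ten exponent:

>>> from decimal import Decimal
>>> Decimal('2.4').as_tuple()
DecimalTuple(sign=0, digits=(2, 4), exponent=-1)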

08.02.2018 / 17:19