Question

I am writing a function to extract decimals from a number. Ignore the exception and its syntax; I am working on Python 2.5.2 (the default Leopard version). My function does not yet handle 0s. My issue is that the function produces incorrect results for certain numbers, and I don't understand the reason. I will post the output after the code.


Function:

def extractDecimals(num):
    try:
        if num > int(num):
            # Strip the integer part, then shift digits left one at a
            # time until nothing remains after the decimal point.
            decimals = num - int(num)
            while decimals > int(decimals):
                print 'decimal: ' + str(decimals)
                print 'int: ' + str(int(decimals))
                decimals *= 10
            decimals = int(decimals)
            return decimals
        else:
            raise DecimalError(num)
    except DecimalError, e:
        e.printErrorMessage()


Exception Class:

class DecimalError(Exception):
    def __init__(self, value):
        self.value = value

    def printErrorMessage(self):
        print 'The number, ' + str(self.value) + ', is not a decimal.'


Here is the output when I input the number 1.988 (the last line is the returned value):
decimal: 0.988
int: 0
decimal: 9.88
int: 9
decimal: 98.8
int: 98
decimal: 988.0
int: 987
decimal: 9880.0
int: 9879
decimal: 98800.0
int: 98799
decimal: 988000.0
int: 987999
decimal: 9880000.0
int: 9879999
decimal: 98800000.0
int: 98799999
decimal: 988000000.0
int: 987999999
decimal: 9880000000.0
int: 9879999999
decimal: 98800000000.0
int: 98799999999
decimal: 988000000000.0
int: 987999999999
decimal: 9.88e+12
int: 9879999999999
decimal: 9.88e+13
int: 98799999999999
decimal: 9.88e+14
int: 987999999999999
9879999999999998



I do not know why this error is popping up. Hopefully you guys can help me out.


Solution

The problem is that most decimal fractions aren't precisely representable as (binary) floating point numbers. See Why can't decimal numbers be represented exactly in binary? for more information.
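You can see this directly in the interpreter (a minimal sketch; repr shows more digits than print, and the exact digits shown can vary by platform):

num = 1.988
# print would round this to 0.988; repr reveals the stored approximation.
print repr(num - int(num))   # something like 0.98799999999999999, not 0.988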

OTHER TIPS

As Ned Batchelder said, not all decimals are exactly representable as floats. A float is represented by a certain number of binary digits which are used to approximate the decimal as closely as possible. You can never assume a float is exactly equal to a decimal.

In [49]: num
Out[49]: 1.988

In [50]: decimals=num - int(num)

In [51]: decimals
Out[51]: 0.98799999999999999

In [52]: print decimals   # Notice that print rounds the result, masking the inaccuracy.
0.988

See http://en.wikipedia.org/wiki/Floating_point for more info on the binary representation of floats.
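A common workaround (a standard sketch, not something from the original answer) is to compare floats within a small tolerance rather than for exact equality:

def nearly_equal(a, b, eps=1e-9):
    # True when a and b differ by less than eps, absorbing
    # the rounding noise inherent in binary floats.
    return abs(a - b) < eps

print nearly_equal(0.1 + 0.2, 0.3)   # True, although 0.1 + 0.2 != 0.3 exactly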

There are other ways to achieve your goal. Here is one way, using string operations:

def extractDecimals(num):
    try:
        numstr = str(num)
        # str() of a number with no fractional part contains no '.',
        # so index() raises ValueError and we fall through to the
        # message below. (find() would return -1 and silently slice
        # the whole string instead.)
        return int(numstr[numstr.index('.') + 1:])
    except ValueError, e:
        print 'The number, %s, is not a decimal.' % num
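For example:

print extractDecimals(1.988)   # 988
extractDecimals(5)             # prints: The number, 5, is not a decimal.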

As others have already pointed out, the issue you are seeing is due to the inexact representation of floating point numbers.

Try your program with Python's Decimal type, which stores decimal fractions exactly:

from decimal import Decimal
extractDecimals(Decimal("0.988"))
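For instance, here is a minimal sketch of the question's loop run on a Decimal (the exception handling is trimmed for brevity; this is an illustration, not the original function):

from decimal import Decimal

def extract_decimals(num):
    # Decimal arithmetic is exact for decimal literals, so the loop
    # terminates as soon as the fractional digits are exhausted.
    decimals = num - int(num)
    while decimals > int(decimals):
        decimals *= 10
    return int(decimals)

print extract_decimals(Decimal("1.988"))   # 988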

As has already been said, floating point numbers are not exactly equal to decimals. You can see this by using the modulus operator like so:

>>> 0.988 % 1
0.98799999999999999
>>> 9.88 % 1
0.88000000000000078
>>> 98.8 % 1
0.79999999999999716

This gives the remainder of division by 1, i.e. the fractional part, and exposes the stored approximation.
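The standard library can also do this split for you: math.modf returns the fractional and integer parts of a float (both as floats), subject to the same representation caveat:

import math

frac, whole = math.modf(1.988)
print frac    # 0.988 (str rounds; the stored value is slightly off)
print whole   # 1.0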

As others have said in their answers, arithmetic with floats doesn't always result in what you expect due to rounding errors. In this case, perhaps converting the float into a string and back is better?

In [1]: num = 1.988

In [2]: num_str = str(num)

In [3]: decimal = num_str.split('.')[1]

In [4]: decimal = int(decimal)

In [5]: decimal
Out[5]: 988
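Wrapped up as a function (a sketch; the explicit '.' check is an added guard for inputs whose str() has no fractional part):

def extract_decimals(num):
    num_str = str(num)
    if '.' not in num_str:
        # Guard against ints and whole-number strings, which
        # would make split('.')[1] raise an IndexError.
        raise ValueError('%s has no decimal part' % num)
    return int(num_str.split('.')[1])

print extract_decimals(1.988)   # 988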