Question

I seem to be losing a lot of precision with floats.

For example I need to solve a matrix:

4.0x - 2.0y + 1.0z = 11.0
1.0x + 5.0y - 3.0z = -6.0
2.0x + 2.0y + 5.0z = 7.0

This is the code I use to import the matrix from a text file:

f = open('gauss.dat')
lines = f.readlines()
f.close()

b = []
y = []
for line in lines:
    bits = line.split(',')
    s = []
    for i in range(len(bits) - 1):
        s.append(float(bits[i]))
    b.append(s)
    y.append(float(bits[-1]))

I need to solve using gauss-seidel so I need to rearrange the equations for x, y, and z:

x = (11 + 2y - 1z)/4
y = (-6 - x + 3z)/5
z = (7 - 2x - 2y)/5
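For reference, this rearranged form drops straight into a Gauss-Seidel sweep. A minimal standalone sketch of the iteration it sets up (not the asker's actual code; the function name and iteration count are illustrative):

```python
def gauss_seidel(b, y, iterations=50):
    """Solve b*x = y iteratively; b must be diagonally dominant."""
    n = len(b)
    x = [0.0] * n  # initial guess
    for _ in range(iterations):
        for i in range(n):
            # Use the newest available values of the other unknowns.
            s = sum(b[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (y[i] - s) / b[i][i]
    return x

b = [[4.0, -2.0, 1.0],
     [1.0, 5.0, -3.0],
     [2.0, 2.0, 5.0]]
y = [11.0, -6.0, 7.0]
print(gauss_seidel(b, y))  # converges toward the exact solution x=2, y=-1, z=1
```

The matrix is diagonally dominant (each |b[i][i]| exceeds the sum of the other entries in its row), so the iteration converges.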

Here is the code I use to rearrange the equations. b is a matrix of coefficients and y is the answer vector:

def equations(b, y):
    eqn = []
    for i in range(len(b)):
        row = []
        for j in range(len(b)):
            if i == j:
                row.append(y[i] / b[i][i])
            else:
                row.append(-b[i][j] / b[i][i])
        eqn.append(row)
    return eqn
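Feeding the second row through that rearrangement reproduces the coefficients the asker sees; recomputed inline here (so the snippet is self-contained), with Decimal used to reveal the double actually stored:

```python
from decimal import Decimal

# Second equation: 1.0x + 5.0y - 3.0z = -6.0, rearranged for y.
b_row, rhs = [1.0, 5.0, -3.0], -6.0
coeffs = [-b_row[0] / b_row[1],  # x coefficient
          rhs / b_row[1],        # constant term
          -b_row[2] / b_row[1]]  # z coefficient
print(coeffs)  # [-0.2, -1.2, 0.6] -- Python 3's repr hides the error
print(Decimal(coeffs[0]))  # the exact double stored in place of -0.2
```

Modern Python prints the shortest decimal that round-trips, so the long digits only show up in older interpreters or when you ask for the exact stored value.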

However, the coefficients I get back aren't exact.

For example, upon rearranging the second equation from above, I should get:

y=-1.2-.2x+.6z

What I get is:

y=-1.2-0.20000000000000001x+0.59999999999999998z

This might not seem like a big issue, but when you raise the number to a very high power the error becomes quite large. Is there a way around this? I tried the Decimal class, but it did not work well with powers (e.g., Decimal(x) ** 2).

Any ideas?


Solution

I'm not familiar enough with the Decimal class to help you out, but your problem stems from the fact that many decimal fractions cannot be represented exactly in binary, so what you're seeing is the closest possible binary approximation. There's no way to avoid this without using a special class (like Decimal, probably).

EDIT: What about the decimal class isn't working properly for you? As long as I start with a string, rather than a float, powers seem to work fine.

>>> import decimal
>>> print(decimal.Decimal("1.2") ** 2)
1.44

The module documentation explains the need for and usage of decimal.Decimal pretty clearly; you should check it out if you haven't yet.

OTHER TIPS

IEEE floating point is binary, not decimal. There is no fixed length binary fraction that is exactly 0.1, or any multiple thereof. It is a repeating fraction, like 1/3 in decimal.
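A quick way to see that repeating binary fraction is to ask Decimal for the exact value of the stored double (a standard-library demonstration, not part of the original answer):

```python
from decimal import Decimal

# Decimal(float) captures the double's exact value, digit for digit.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
print(0.1 + 0.2 == 0.3)  # False: each term carries its own rounding error
```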

Please read What Every Computer Scientist Should Know About Floating-Point Arithmetic

Other options besides a Decimal class are

  • using Common Lisp or Python 2.6 or another language with exact rationals

  • converting the doubles to close rationals using, e.g., frap

First, your input can be simplified a lot. You don't need to read and parse a file. You can just declare your objects in Python notation. Eval the file.

b = [
    [4.0, -2.0,  1.0],
    [1.0, +5.0, -3.0],
    [2.0, +2.0, +5.0],
]
y = [ 11.0, -6.0, 7.0 ]

Second, y=-1.2-0.20000000000000001x+0.59999999999999998z isn't unusual. There's no exact binary representation for 0.2 or 0.6, so the values displayed are decimal approximations of the stored binary values, not exact representations. This is true for just about every kind of floating-point processor there is.

You can try the Python 2.6 fractions module. There's also an older rational package that might help.
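The fractions module (Python 2.6+ and all of Python 3) handles exactly this case; a small sketch using the asker's coefficients:

```python
from fractions import Fraction

# Build the coefficients as exact rationals instead of floats.
x_coef = Fraction(-1, 5)   # exactly -0.2
z_coef = Fraction(3, 5)    # exactly 0.6
total = x_coef + z_coef
print(total)         # 2/5
print(float(total))  # 0.4 -- rounding happens only at the final conversion
```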

Yes, raising floating-point numbers to powers amplifies the errors. Consequently, avoid relying on the right-most digits of a floating-point number, since those bits are mostly noise.

When displaying floating-point numbers, you have to appropriately round them to avoid seeing the noise bits.

>>> a
0.20000000000000001
>>> "%.4f" % (a,)
'0.2000'

I'd caution against the decimal module for tasks like this. Its purpose is really more about dealing with real-world decimal numbers with finite precision (e.g. matching human bookkeeping practices), not performing exact-precision math. There are numbers not exactly representable in decimal just as there are in binary, and decimal arithmetic is also much slower than the alternatives.

Instead, if you want exact results you should use rational arithmetic. A rational type represents each number as a numerator/denominator pair, so it can exactly represent every rational number. As long as you stick to addition, subtraction, multiplication, and division (rather than operations like square roots, which can produce irrational results), you will never lose precision.
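For instance, powers of an exact rational never accumulate error, unlike repeated float multiplication. Using the standard-library Fraction as a stand-in for any rational type:

```python
from fractions import Fraction

r = Fraction(1, 5)  # exactly 0.2, unlike the float literal 0.2
exact = r ** 30
print(exact)                          # 1/931322574615478515625
print(exact == Fraction(1, 5 ** 30))  # True: no precision was lost
```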

As others have mentioned, Python 2.6 will have a built-in rational type, though note that it isn't a particularly high-performance implementation; for speed you're better off using a library like gmpy. Just replace your calls to float() with gmpy.mpq() and your code should give exact results (though you may want to format the results as floats for display purposes).

Here's a slightly tidied version of your code to load a matrix that will use gmpy rationals instead:

import gmpy

def read_matrix(f):
    b, y = [], []
    for line in f:
        bits = line.split(",")
        b.append([gmpy.mpq(x) for x in bits[:-1]])
        y.append(gmpy.mpq(bits[-1]))
    return b, y

It is not an answer to your question, but related:

#!/usr/bin/env python
from numpy import abs, dot, loadtxt, max
from numpy.linalg import solve

data = loadtxt('gauss.dat', delimiter=',')
a, b = data[:,:-1], data[:,-1:]
x = solve(a, b) # here you may use any method you like instead of `solve`
print(x)
print(max(abs((dot(a, x) - b) / b))) # check solution

Example:

$ cat gauss.dat
4.0, 2.0, 1.0, 11.0
1.0, 5.0, 3.0, 6.0 
2.0, 2.0, 5.0, 7.0

$ python loadtxt_example.py
[[ 2.4]
 [ 0.6]
 [ 0.2]]
0.0

Also see What is a simple example of floating point error here on SO, which has some answers. The one I give actually uses Python as the example language.

Licensed under: CC-BY-SA with attribution