Question

I have a lot of 10 by 10 (0,1)-matrices and I would like to determine which have determinant exactly 0 (that is which are singular). Using scipy.linalg.det I get a floating point number which I have to test to see if it is close to zero. Is it possible to do the calculation exactly so I can be sure I am not finding false positives?

On the other hand, maybe there is some guarantee about the smallest eigenvalue which can be used to make sure the floating point method never makes a false positive?


Solution 5

As the entries in the matrices are either 0 or 1, the determinant is an integer, so the smallest non-zero absolute value a determinant can take is 1. There is therefore no need to fear a true non-zero value that is merely very close to 0.

Alternatively, one can use sympy to get an exact answer.
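
For the exact route, here is a minimal sketch using sympy (sympy.Matrix and its det() method compute over exact integers; the helper name is mine):

from sympy import Matrix

def is_singular_exact(rows):
    # rows: a nested list of 0/1 integers, e.g. ten rows of ten entries.
    # sympy evaluates the determinant over the integers, so the
    # comparison with 0 involves no rounding at all.
    return Matrix(rows).det() == 0

print(is_singular_exact([[1, 1], [1, 1]]))  # True: rows are dependent
print(is_singular_exact([[1, 0], [0, 1]]))  # False: identity matrix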

OTHER TIPS

As long as you are using floats, you cannot guarantee you will get exactly zero. I would use this:

numpy.allclose(det, 0)

You can control the tolerance with the rtol and atol keyword arguments.
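
As a sketch of what that might look like in practice (the atol value of 0.4 is my own assumption, chosen because a true non-zero determinant of a 0/1 matrix has absolute value at least 1):

import numpy as np
from scipy import linalg

def looks_singular(m, atol=0.4):
    # When the comparison value is 0, the relative tolerance contributes
    # nothing, so atol alone controls the cutoff.
    return np.allclose(linalg.det(m), 0.0, atol=atol)

m = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1]], dtype=float)  # duplicate rows -> singular
print(looks_singular(m))  # True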


In your case (10x10 matrices with 0,1 elements) you shouldn't have to worry about false positives.

I don't have a proof for this, it's just geometric intuition: a set of 10-vectors with 0/1 elements cannot be "nearly" linearly dependent in the way that would be necessary for you to get a false positive using floats. As vectors, their "directions" are necessarily discrete if the elements are restricted to 0 and 1.

Think of the 3D case and generalise your thoughts to 10-dimensional space ;)

You can use Gaussian elimination to bring the matrix to triangular form. Since your elements are all integers (0 or 1), the elimination can be carried out in exact integer arithmetic (fraction-free, Bareiss-style elimination), so no rounding error enters the calculation. Note that plain floating-point elimination is not automatically exact: after a few steps the intermediate values are no longer just -1, 0 and 1, and the multipliers need not be exactly representable.

The determinant is then 0 if any diagonal element of the triangular form is zero, and nonzero otherwise.

So with this specific algorithm (integer Gaussian elimination), calculation of the determinant is exact.

This algorithm should also be pretty efficient, and working over the integers makes it obvious that the problem is exactly solvable, as the sketch below shows.
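
A minimal sketch of such an integer-only (fraction-free, Bareiss-style) elimination; the function name and pivoting details are my own rather than taken from any library:

def bareiss_det(matrix):
    # Exact determinant of an integer matrix via fraction-free (Bareiss)
    # elimination; every intermediate value stays an integer.
    a = [row[:] for row in matrix]   # work on a copy
    n = len(a)
    sign, prev_pivot = 1, 1
    for k in range(n - 1):
        if a[k][k] == 0:
            # Pivot: swap in a lower row with a non-zero entry in column k.
            for r in range(k + 1, n):
                if a[r][k] != 0:
                    a[k], a[r] = a[r], a[k]
                    sign = -sign
                    break
            else:
                return 0             # the whole column is zero -> singular
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Bareiss update; this integer division is always exact.
                a[i][j] = (a[i][j] * a[k][k] - a[i][k] * a[k][j]) // prev_pivot
            a[i][k] = 0
        prev_pivot = a[k][k]
    return sign * a[-1][-1]

print(bareiss_det([[1, 1], [1, 0]]))   # -1
print(bareiss_det([[1, 1], [1, 1]]))   # 0 -> singular

The singularity test is then simply bareiss_det(m) == 0, an exact integer comparison.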

EDIT: the point is that an algorithm operating on the 0/1 matrix can be exact; it depends on the algorithm. It is worth checking how det() is implemented: perhaps there is no issue with numerical noise, and you could simply test det(M) == 0.0 and get neither false negatives nor false positives.

I think you should consider examining the condition number rather than the determinant. In Python you'd want

numpy.linalg.cond(x, p=None)

Reference http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.cond.html.

That was the advice from the applied maths professor on the Coursera course on scientific computing. Essentially, the condition number gives you the best indication of numerical instability for operations like inverting a matrix, which is probably what you're interested in. See this answer on the scicomp Stack Exchange for full details.
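
As a sketch of how such a check might look (the 1e12 cutoff is my own assumption, a tunable threshold rather than anything prescribed by numpy):

import numpy as np

def probably_singular(m, cutoff=1e12):
    # With the default 2-norm, np.linalg.cond typically returns inf for an
    # exactly singular matrix and a very large value for a nearly singular one.
    return np.linalg.cond(m) > cutoff

rng = np.random.default_rng(0)
m = rng.integers(0, 2, size=(10, 10)).astype(float)  # a random 0/1 matrix
print(np.linalg.cond(m), probably_singular(m))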

How about testing batches while playing with the tolerance argument, then deciding on the maximum acceptable tolerance? Rinse and repeat: http://docs.scipy.org/doc/numpy/reference/generated/numpy.allclose.html

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow