Question

I can test the rank of a matrix using np.linalg.matrix_rank(A). But how can I efficiently test whether all the rows of A are orthogonal?

I could take all pairs of rows and compute the inner product between them, but is there a better way?

My matrix has fewer rows than columns and the rows are not unit vectors.


Solution 2

It seems that this will do:

import numpy as np

product = np.dot(A, A.T)      # Gram matrix: entry (i, j) is the inner product of rows i and j
np.fill_diagonal(product, 0)  # zero out each row's inner product with itself
if not product.any():         # every off-diagonal entry is zero -> the rows are orthogonal
    print("rows are orthogonal")
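
With floating-point data the off-diagonal inner products will rarely be exactly zero, so a tolerance-based variant is usually safer. A minimal sketch, assuming A is a float array:

product = A @ A.T                            # Gram matrix of the rows
np.fill_diagonal(product, 0.0)               # discard the squared row norms on the diagonal
rows_orthogonal = np.allclose(product, 0.0)  # True if every off-diagonal entry is ~0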

OTHER TIPS

This answer basically summarizes the approaches mentioned in the question and the comments, and adds some comparisons and insights about them.


Approach #1 -- checking all row-pairs

As you suggested, you can iterate over all row pairs and compute the inner products. If A.shape == (N, M), i.e. you have N rows of size M each, you end up with O(M*N^2) complexity.
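
A minimal sketch of that pairwise check with early exit (the tolerance tol is an assumption; pick one that matches the scale of your data):

import numpy as np

def rows_orthogonal_pairwise(A, tol=1e-10):
    # Test every pair of rows; stop at the first non-orthogonal pair found.
    n = A.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if abs(np.dot(A[i], A[j])) > tol:
                return False  # short circuit
    return True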

Approach #2 -- matrix multiplication

As suggested in the comments by @JoeKington, you can compute the product A.dot(A.T) and check all the off-diagonal elements (Solution 2 above implements exactly this). Depending on the algorithm used for matrix multiplication, this can be faster than the naive O(M*N^2) approach, but only asymptotically; unless your matrices are big, the asymptotically better algorithms are slower in practice.


The advantages of approach #1:

  • You can "short circuit" -- quit the check as soon as you find the first non-orthogonal pair
  • It requires less memory. In #2, you create a temporary NxN matrix.

The advantages of approach #2:

  • The multiplication is fast, as it is implemented in a heavily-optimized linear-algebra library (BLAS or ATLAS). I believe those libraries choose the right algorithm according to input size (i.e., they won't use the fancy algorithms on small matrices, because those are slower for small matrices; there's a big constant hidden behind that O-notation).
  • Less code to write.

My bet is that for small matrices, approach #2 would prove faster, because the LA libraries are heavily optimized, even though they compute the entire product and cannot stop early after hitting the first pair of non-orthogonal rows.
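
If performance matters for your sizes, the bet is easy to test. A rough benchmarking sketch (the shape and repetition count are arbitrary assumptions; it reuses the rows_orthogonal_pairwise sketch from approach #1):

import timeit
import numpy as np

def rows_orthogonal_matmul(A):
    # Approach #2: one matrix product, then inspect the off-diagonal entries.
    product = np.dot(A, A.T)
    np.fill_diagonal(product, 0)
    return not product.any()

A = np.random.rand(50, 200)  # N=50 rows of length M=200; adjust to your real shape

print("pairwise:", timeit.timeit(lambda: rows_orthogonal_pairwise(A), number=100))
print("matmul:  ", timeit.timeit(lambda: rows_orthogonal_matmul(A), number=100))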

Approach #3 -- computing the QR decomposition of Aᵀ

In general, to find an orthogonal basis of the range space of some matrix X, one can compute the QR decomposition of this matrix (using Givens rotations or Householder reflectors). Q is an orthogonal matrix and R is upper triangular. The columns of Q corresponding to non-zero diagonal entries of R form an orthonormal basis of the range space.

If the columns of X = Aᵀ, i.e., the rows of A, are already orthogonal, then the QR decomposition necessarily has a diagonal R factor, whose diagonal entries are plus or minus the lengths of the columns of X, i.e., of the rows of A.

Common folklore has it that this approach is numerically better behaved than the computation of the product A*Aᵀ = Rᵀ*R. This may only matter for larger matrices. The computation is not as straightforward as the matrix product; however, the number of operations is of the same order.
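
A sketch of this check in NumPy (np.linalg.qr with mode='r' returns only the R factor; the tolerance is an assumption):

import numpy as np

def rows_orthogonal_qr(A, tol=1e-10):
    # QR-decompose A.T; the rows of A are orthogonal iff R is (close to) diagonal.
    R = np.linalg.qr(A.T, mode='r')
    off_diagonal = R - np.diag(np.diag(R))
    return np.allclose(off_diagonal, 0.0, atol=tol)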

(U.T @ U == np.eye(U.shape[0])).all()

This will give 'True' if the matrix 'U' is orthogonal and 'False' otherwise. Here 'all()' converts the matrix of boolean values that we get from 'U.T @ U == np.eye(U.shape[0])' into a single boolean value.

If you want to check that the matrix is approximately orthonormal (by this I mean that the product 'U.T @ U' is nearly equal to the identity matrix), then use 'np.allclose()' like this:

np.allclose(U.T @ U, np.eye(U.shape[0]))

Note: '@' is used for matrix multiplication.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow