Question

Consider the following simple piece of code:

import numpy as np

A = np.array([[0,1,1,0],[1,0,0,1],[1,0,0,1],[0,1,1,0]], dtype=float)
eye4 = np.eye(4, dtype=float)  # 4x4 identity

H1 = np.kron(A,eye4)
w1,v1 = np.linalg.eig(H1)
H1copy = np.dot(np.dot(v1,np.diag(w1)),np.transpose(v1)) # reconstructing from eigvals and eigvecs

H2 = np.kron(eye4,A)
w2,v2 = np.linalg.eig(H2)
H2copy = np.dot(np.dot(v2,np.diag(w2)),np.transpose(v2))

print(np.sum((H1-H1copy)**2))  # sum of squares of elements
print(np.sum((H2-H2copy)**2))

It produces the output

1.06656622138
8.7514256673e-30

This is very perplexing. These two matrices differ only in the order of the Kronecker product, and yet the accuracy is so low for just one of them. Moreover, a squared-norm error > 1.066 is unacceptable for my purposes. What is going wrong here? And what is the best way to work around this issue, given that the eigenvalue decomposition is a small part of a code that has to be run many (>100) times?


Solution

Your matrices are symmetric. Use eigh instead of eig.

If you use eig, the transpose of v1 is not necessarily equal to the inverse of v1: even for a symmetric input, eig makes no orthogonality guarantee, and with repeated eigenvalues (as here) the eigenvectors it returns within a degenerate eigenspace need not be orthogonal. That is why the reconstruction v1 @ np.diag(w1) @ v1.T fails for H1. eigh is designed for symmetric/Hermitian matrices and always returns an orthonormal eigenbasis, so that reconstruction is exact up to rounding.
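
A minimal sketch of the fix, using the same matrices as above (the printed error magnitudes are indicative of a typical run, not exact values):

import numpy as np

A = np.array([[0,1,1,0],[1,0,0,1],[1,0,0,1],[0,1,1,0]], dtype=float)
eye4 = np.eye(4, dtype=float)

H1 = np.kron(A, eye4)
w1, v1 = np.linalg.eigh(H1)       # eigh: for symmetric/Hermitian matrices
H1copy = v1 @ np.diag(w1) @ v1.T  # v1 is orthogonal, so v1.T equals inv(v1)
print(np.sum((H1 - H1copy)**2))   # at machine precision, ~1e-28

H2 = np.kron(eye4, A)
w2, v2 = np.linalg.eigh(H2)
H2copy = v2 @ np.diag(w2) @ v2.T
print(np.sum((H2 - H2copy)**2))   # likewise ~1e-28

If you ever do need eig (e.g. for a non-symmetric matrix), reconstruct with the inverse rather than the transpose:

wg, vg = np.linalg.eig(H1)
H1copy = vg @ np.diag(wg) @ np.linalg.inv(vg)  # correct, but slower and less stable than eigh

Since the decomposition runs many (>100) times, eigh also helps on speed: it calls a symmetric-specific LAPACK routine that is faster than the general-purpose one behind eig.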

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow