Question

I'm using numpy to get the eigenvalues/eigenvectors of a matrix. My matrix is symmetric and positive.

> mat

  matrix([[ 1.,  1.,  0.,  0.,  0.,  0.,  0.],
          [ 1.,  2.,  0.,  0.,  0.,  0.,  0.],
          [ 0.,  0.,  1.,  0.,  0.,  0.,  0.],
          [ 0.,  0.,  0.,  2.,  1.,  1.,  0.],
          [ 0.,  0.,  0.,  1.,  2.,  1.,  0.],
          [ 0.,  0.,  0.,  1.,  1.,  1.,  0.],
          [ 0.,  0.,  0.,  0.,  0.,  0.,  1.]])

I use numpy.linalg.eigh because my matrix is symmetric.

> import numpy.linalg as la
> la.eigh(mat)

  (array([ 0.27,  0.38,  1.  ,  1.  ,  1.  ,  2.62,  3.73]),
   matrix([[ 0.  , -0.85, -0.  ,  0.  ,  0.  ,  0.53,  0.  ],
          [ 0.  ,  0.53, -0.  ,  0.  ,  0.  ,  0.85,  0.  ],
          [ 0.  ,  0.  , -0.  ,  1.  ,  0.  ,  0.  ,  0.  ],
          [-0.33, -0.  , -0.71, -0.  , -0.  , -0.  , -0.63],
          [-0.33, -0.  ,  0.71, -0.  , -0.  , -0.  , -0.63],
          [ 0.89, -0.  , -0.  , -0.  , -0.  , -0.  , -0.46],
          [-0.  , -0.  , -0.  , -0.  ,  1.  , -0.  , -0.  ]]))

My problem is that many of these values have the wrong sign. In particular, the principal eigenvector (the rightmost column in the matrix) is all negative when it should be positive. I have checked this against both MATLAB and Octave. Is this just a precision error, or is there something I'm missing?

If it is an error, is there some way to check for such an error and correct it?

EDIT: This calculation is part of Hubs and Authorities and the matrix above is A*A^T. It is a result of the original paper (see p.9, p.10) that the hub scores converge to the principal eigenvector of A*A^T. Ultimately, we want to compare these hub scores against each other, so the sign is actually important.

On page 10, the paper also says, "Also (as a corollary), if M has only non-negative entries, then the principal eigenvector of M has only non-negative entries." This is why I asked the question.
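For context, the hub vector the paper describes is what power iteration on A*A^T converges to. A minimal illustrative sketch (the variable M, the starting vector, and the iteration count are choices for this example, not from the paper) shows that starting from a positive vector keeps every iterate non-negative:

```python
import numpy as np

# M = A*A^T, the matrix from the question, with non-negative entries.
M = np.array([[1., 1., 0., 0., 0., 0., 0.],
              [1., 2., 0., 0., 0., 0., 0.],
              [0., 0., 1., 0., 0., 0., 0.],
              [0., 0., 0., 2., 1., 1., 0.],
              [0., 0., 0., 1., 2., 1., 0.],
              [0., 0., 0., 1., 1., 1., 0.],
              [0., 0., 0., 0., 0., 0., 1.]])

x = np.ones(M.shape[0])           # positive starting vector
for _ in range(100):              # power iteration
    x = M @ x
    x = x / np.linalg.norm(x)     # re-normalize each step

print(x)  # non-negative approximation of the principal eigenvector
```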


The solution

The sign of an eigenvector is arbitrary. As far as I know, there is no right or wrong answer in that regard.
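If you want the non-negative representative (e.g. so the hub scores are directly comparable), you can flip the sign yourself after the fact. A minimal sketch, assuming mat is the matrix from the question:

```python
import numpy as np
import numpy.linalg as la

vals, vecs = la.eigh(np.asarray(mat))  # eigenvalues in ascending order
principal = vecs[:, -1]                # eigenvector of the largest eigenvalue

# For a non-negative matrix the paper's corollary guarantees a non-negative
# representative exists, so flip the sign if eigh returned the negative one.
if principal.sum() < 0:
    principal = -principal

print(principal)  # all entries now >= 0
```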

Other tips

You can easily check that the eigendecomposition of your symmetric matrix MM is correct by exploiting MM == QQ*DD*QQ.T, where DD = diag([lam_1, lam_2, ...]) is the diagonal matrix of eigenvalues and QQ is the matrix whose columns are the eigenvectors:

import numpy as np
import numpy.linalg as la

lam, QQ = la.eigh(MM)   # eigenvalues (ascending) and eigenvectors as columns

# Check the result: MM should equal QQ * DD * QQ.T
DD = np.diag(lam)                       # eigenvalues on the diagonal
MM2 = np.dot(np.dot(QQ, DD), QQ.T)
print(MM - MM2)                         # should be (numerically) zero

which is in your case correct to 14 digits.

Note that eigenvectors may be multiplied by any non-zero constant: the defining equation MM*x == lambda*x implies MM*(c*x) == lambda*(c*x) for any non-zero constant c. Which c you get depends on the numerics; NumPy simply normalizes each eigenvector to unit length.
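To see that the scaling really does not change anything, you can plug a sign-flipped copy back into the defining equation. A small check, reusing lam, QQ, and MM from the snippet above:

```python
import numpy as np

M = np.asarray(MM)                    # work with a plain ndarray
for i in range(len(lam)):
    x = np.asarray(QQ)[:, i]
    assert np.allclose(M @ x, lam[i] * x)        # eigenpair as returned by eigh
    assert np.allclose(M @ (-x), lam[i] * (-x))  # a sign-flipped copy is just as valid
```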
