Question

Given the following Markov Matrix:

import numpy, scipy.linalg
A = numpy.array([[0.9, 0.1],[0.15, 0.85]])

The stationary probability exists and is equal to [.6, .4]. This is easy to verify by taking a large power of the matrix:

B = A.copy()
for _ in range(10): B = numpy.dot(B, B)

Here B[0] = [0.6, 0.4]. So far, so good. According to Wikipedia:

A stationary probability vector is defined as a vector that does not change under application of the transition matrix; that is, it is defined as a left eigenvector of the probability matrix, associated with eigenvalue 1:

So I should be able to calculate the left eigenvector of A with eigenvalue of 1, and this should also give me the stationary probability. Scipy's implementation of eig has a left keyword:

scipy.linalg.eig(A,left=True,right=False)

Gives:

(array([ 1.00+0.j,  0.75+0.j]),
 array([[ 0.83205029, -0.70710678],
        [ 0.5547002 ,  0.70710678]]))

Which says that the dominant left eigenvector is: [0.83205029, 0.5547002]. Am I reading this incorrectly? How do I get the [0.6, 0.4] using the eigenvalue decomposition?


Solution

The [0.83205029, 0.5547002] is just [0.6, 0.4] multiplied by ~1.39.

Although from a "physical" point of view you need the eigenvector whose components sum to 1, scaling an eigenvector by some factor does not change its "eigenness":

If vA = λv, then obviously (αv)A = λ(αv)
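A quick numerical check makes this concrete (a sketch using the matrix from the question): the normalized stationary vector [0.6, 0.4] still satisfies the left-eigenvector relation after being multiplied by arbitrary scalars.

```python
import numpy as np

A = np.array([[0.9, 0.1], [0.15, 0.85]])
v = np.array([0.6, 0.4])  # stationary vector, components sum to 1

for alpha in (1.0, 1.39, -2.5):  # arbitrary scale factors
    w = alpha * v
    # left-eigenvector relation for eigenvalue 1: w @ A == 1 * w
    assert np.allclose(w @ A, 1.0 * w)
```

Every scaled copy passes the check, which is why eig is free to return any normalization it likes.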

So, to get [0.6, 0.4] you should do:

>>> v = scipy.linalg.eig(A,left=True,right=False)[1][:,0]
>>> v
array([ 0.83205029,  0.5547002 ])
>>> v / sum(v)
array([ 0.6,  0.4])

Other tips

The eig function returns unit vectors as far as eigenvectors are concerned.

So, if we take v = [0.6, 0.4], its length is l = np.sqrt(np.square(v).sum()), or equivalently l = np.linalg.norm(v), so the normalized vector (as returned from scipy.linalg.eig) is:

>>> import numpy as np
>>> v = np.array([.6, .4])
>>> l = np.sqrt(np.square(v).sum())
>>> v / l
array([0.83205029, 0.5547002 ])

So, if you need the vector to be a stochastic (probability) vector, as in a Markov chain, simply scale it so it sums to 1.0.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow