Question

I have managed to estimate most of the parameters of a particular Hidden Markov Model (HMM) from the training dataset: the emission probabilities of the hidden states and the transition matrix $P$ of the Markov chain. I used Gibbs sampling for the learning. One set of parameters is still missing: the initial probabilities $\pi$ (the probability distribution over the state the chain starts in), and I want to deduce it from the learned parameters. How can I do that?

Also, is it true that $\pi$ is the same as the stationary probability distribution of $P$?

Solution

The easiest way to achieve this is to use a special [start] token. You then know that it is always the first token, and the transitions from the [start] token to the other states are learned as part of the model; that row of the transition matrix plays the role of $\pi$.
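
A minimal sketch of that idea in Python/NumPy, assuming the hidden-state sequences produced during Gibbs sampling are available as lists of integer labels in {0, ..., n_states-1} (the function name and data layout are illustrative, not from the original answer):

```python
import numpy as np

def transition_matrix_with_start(state_sequences, n_states):
    """Estimate an (n_states+1) x (n_states+1) transition matrix whose
    row 0 is a synthetic [start] state; its outgoing probabilities play
    the role of the initial distribution pi."""
    counts = np.zeros((n_states + 1, n_states + 1))
    for seq in state_sequences:
        prev = 0                      # every sequence begins in [start]
        for s in seq:
            counts[prev, s + 1] += 1  # real states are shifted by one
            prev = s + 1
    counts += 1e-12                   # avoid division by zero in empty rows
    return counts / counts.sum(axis=1, keepdims=True)

# pi is row 0 of the augmented matrix, restricted to the real states.
P_aug = transition_matrix_with_start([[0, 1, 1], [1, 0, 0]], n_states=2)
pi = P_aug[0, 1:]
print(pi)                             # -> [0.5, 0.5]
```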

The stationary distribution of the Markov chain is the long-run marginal distribution over states implied by $P$, i.e. the row vector $\pi$ satisfying $\pi = \pi P$. It coincides with the initial distribution only if you assume the chain is started at stationarity.
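
Concretely, that vector is the left eigenvector of $P$ with eigenvalue 1, normalized to sum to 1. A short sketch, assuming the chain is irreducible and aperiodic so the eigenvector is unique:

```python
import numpy as np

def stationary_distribution(P):
    # Right eigenvectors of P.T are left eigenvectors of P.
    eigvals, eigvecs = np.linalg.eig(P.T)
    idx = np.argmin(np.abs(eigvals - 1.0))  # eigenvalue closest to 1
    v = np.real(eigvecs[:, idx])
    return v / v.sum()                      # normalize to a distribution

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
print(stationary_distribution(P))           # -> [0.8, 0.2]
```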

Other tips

I have been puzzled by the same question for the past few days. As far as I can tell from the papers I have surveyed, $\pi$ depends on the first output in the sampled sequence. In other words, no matter which derivation you use, the answer will differ depending on the sampled sequence. I therefore believe this is more of a frequentist problem: you must perform actual experiments and average your answers over the data obtained from them.
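
A small sketch of that averaging in Python/NumPy, with `sampled_sequences` as a hypothetical stand-in for whatever your sampler produces:

```python
import numpy as np

def estimate_pi(sampled_sequences, n_states):
    # Empirical frequency of the first hidden state across sequences.
    first_states = np.array([seq[0] for seq in sampled_sequences])
    counts = np.bincount(first_states, minlength=n_states)
    return counts / counts.sum()

sampled_sequences = [[0, 2, 1], [1, 0, 0], [0, 1, 2], [0, 2, 2]]
print(estimate_pi(sampled_sequences, n_states=3))  # -> [0.75, 0.25, 0.]
```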
