I'm going to make a leap and assume that `metroStep` is one MCMC Metropolis-Hastings iteration. The problem is that you want the MH steps to be Markovian, but simply sharing `RandomGen` state, which is exactly what `replicateM n metroStep` does, is insufficient: it only ensures that each step is based on independent random variables. For contrast, if the `RandomGen` state weren't shared, then immutability would guarantee that every `metroStep` produced an identical result.
So what you really need is something that has both `RandomGen` state, to provide a chain of pseudorandom numbers for generating independent samples, and a second piece of state carrying the previous point, so that at each step you can sample from `P(x_i | theta, x_(i-1))`. We build a monad transformer stack to do this. I'll use the `mtl` library and `random-fu`, because I just wrote an MCMC sampler using those libraries a few days ago.
    metroStep :: (MonadRandom m, MonadState StateSpace m) => m StateSpace

where `StateSpace` is a point in state space, including both observed and unobserved variables; it's every parameter on the right side of your likelihood function. Now

    replicateM n metroStep :: (MonadRandom m, MonadState StateSpace m) => m [StateSpace]

is a list of Markov-sequential `StateSpace` points.
Then we "run" a concrete version of this monad stack like this:

    do steps <- (`runRVar` StdRandom) . (`evalStateT` ss0) $ replicateM n metroStep
       mapM_ print steps

where `ss0` is the initial point of the chain.
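To make the two-layer shape concrete without depending on `random-fu`, here's a self-contained toy sketch using only base and `mtl`. A hand-rolled linear congruential generator stands in for `RVar`/`MonadRandom`, the target density, the `StateSpace` type, and all names are hypothetical; the point is the stack itself, one state layer for the chain position and one for the generator.

```haskell
import Control.Monad (replicateM, when)
import Control.Monad.State (State, StateT, evalState, evalStateT, get, put)
import Control.Monad.Trans.Class (lift)

-- hypothetical one-dimensional state space
type StateSpace = Double

type Seed = Word

-- toy uniform draw in [0,1]; stands in for MonadRandom / RVar
uniform01 :: State Seed Double
uniform01 = do
  s <- get
  let s' = 6364136223846793005 * s + 1442695040888963407
  put s'
  pure (fromIntegral s' / fromIntegral (maxBound :: Word))

-- unnormalized log-density of a standard normal target (an assumption,
-- just to have something concrete to sample)
logDensity :: StateSpace -> Double
logDensity x = -0.5 * x * x

-- one Metropolis-Hastings step: chain position in the outer StateT,
-- generator state in the inner State
metroStep :: StateT StateSpace (State Seed) StateSpace
metroStep = do
  x  <- get
  u1 <- lift uniform01                  -- drives the proposal
  u2 <- lift uniform01                  -- drives the accept/reject coin
  let proposal = x + (u1 - 0.5)         -- symmetric random-walk proposal
  when (log u2 < logDensity proposal - logDensity x) (put proposal)
  get

main :: IO ()
main = do
  let steps = evalState (evalStateT (replicateM 5000 metroStep) 0) 42
  print (sum steps / fromIntegral (length steps))  -- sample mean of the chain
```

The design choice mirrors the answer above: `evalStateT` discharges the chain-state layer, and the remaining monad is the source of randomness, which here is run with `evalState` and a seed rather than with `runRVar` and `StdRandom`.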