I'm looking into discrete-time Markov chains (DTMCs) for analyzing a probabilistic consensus protocol. One basic thing I haven't been able to figure out is how to model a set of independent processes. Consider $N$ processes that concurrently execute the same sequence of instructions, labeled $0, 1, 2, \dots$, with every process starting at instruction $0$.

Without probability, modeling this is simple: it's a state machine that branches nondeterministically from the start state into $N$ different states, where in each of those $N$ states a different process was the first to execute instruction $0$. What do we do when probability is involved? Do we do the same thing, with $N$ states branching from the start state and a transition probability of $\frac{1}{N}$ to each? That is, is it uniformly random which process was the first to execute instruction $0$?
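To make the setup concrete, here's how I'd write the first step of that chain (a minimal sketch: $N$ is a toy size, a uniform scheduler is assumed, and a state is just the tuple of program counters):

```python
from fractions import Fraction

N = 4  # number of processes (toy size for illustration)

# Start state: every process is at instruction 0.
start = (0,) * N

def one_step(state):
    """One DTMC step under an assumed uniform scheduler: with
    probability 1/N, process i is the one that executes next,
    advancing its program counter by one instruction."""
    p = Fraction(1, N)
    dist = {}
    for i in range(N):
        nxt = list(state)
        nxt[i] += 1
        dist[tuple(nxt)] = p
    return dist

dist = one_step(start)
print(dist)                # N successor states, each with probability 1/N
print(sum(dist.values()))  # probabilities sum to 1
```

So the nondeterministic branching of the plain state machine becomes $N$ probabilistic transitions of weight $\frac{1}{N}$ each.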

Is this like taking the product of the state machines of each process?
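Here's my attempt at spelling out what that product would look like (a sketch under assumptions: each process runs a finite program of $L$ instructions, the scheduler picks uniformly among processes that haven't finished, and the all-finished state is made absorbing — the sizes are toy values):

```python
from itertools import product

N, L = 2, 2  # 2 processes, each running instructions 0..L-1 (toy sizes)

# Product state space: one program counter per process (value L = "finished").
states = list(product(range(L + 1), repeat=N))

def successors(state):
    """Uniform-scheduler product chain: each unfinished process is
    chosen with equal probability and advances one instruction."""
    active = [i for i in range(N) if state[i] < L]
    if not active:
        return {state: 1.0}  # all processes done: absorbing state
    p = 1.0 / len(active)
    out = {}
    for i in active:
        nxt = list(state)
        nxt[i] += 1
        out[tuple(nxt)] = out.get(tuple(nxt), 0.0) + p
    return out

for s in states:
    print(s, successors(s))
```

If I understand it right, this is the product of the per-process machines with the scheduling probabilities layered on top of the interleaving.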

I'm using a DTMC here; would I gain anything by moving to a CTMC if I don't care about anything beyond the global order of execution?
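My intuition for why it might not matter: if each process's instruction time were exponential with rate $\lambda_i$ (an assumption purely for illustration), the probability that process $i$ fires first in the race is $\lambda_i / \sum_j \lambda_j$, so with equal rates the embedded jump chain makes exactly the uniform choices of the DTMC. A Monte Carlo check of that race probability (the rates are arbitrary toy values):

```python
import random

# Assumed illustration: in a CTMC, process i waits an Exp(rates[i])
# time before its next instruction; the minimum waiting time "wins".
rates = [1.0, 2.0, 3.0]
rng = random.Random(42)
wins = [0] * len(rates)
trials = 60000
for _ in range(trials):
    times = [rng.expovariate(r) for r in rates]
    wins[times.index(min(times))] += 1

total = sum(rates)
for i, w in enumerate(wins):
    # Empirical win frequency vs. the closed form lambda_i / sum(lambda).
    print(i, round(w / trials, 3), "expected", round(rates[i] / total, 3))
```

So as far as I can tell, the CTMC would only buy me timing information, not a different order distribution.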

Bonus question: assigning probabilities to whichever action is taken first seems like a generalization of the non-probabilistic notion of fairness; if it is, what is the formal definition of this generalized notion of probabilistic fairness?
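What I'd expect such a definition to imply, informally: under the uniform scheduler, each process's long-run scheduling frequency converges to $\frac{1}{N}$, so every process executes infinitely often with probability $1$ — which looks like strong fairness holding almost surely. A quick simulation of that frequency claim (parameters are toy values I chose):

```python
import random

N = 4
rng = random.Random(1)
steps = 40000
counts = [0] * N
for _ in range(steps):
    counts[rng.randrange(N)] += 1  # uniform scheduler picks one process per step

# By the strong law of large numbers, each frequency tends to 1/N,
# so no process is starved on almost every execution.
print([round(c / steps, 3) for c in counts])
```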

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange