Absorbing Markov chain on a computer. Does "almost every" become "always" in computer executions?


Let $\{X_n\}_{n=0}^\infty$ be an absorbing Markov chain. It is well-known that $$ P(\text{chain gets absorbed}|X_0=i)=1. $$ My question is how this is interpreted in practice. We have that for almost every "experiment" $\omega$, the chain gets absorbed. I am not sure how to interpret this "almost sure" sense. If I simulate the chain in a computer, will I always observe convergence to an absorbing state? Or could there be some execution $\omega$ in the computer in which the chain does not converge (because $\omega$ does not form part of "almost every $\omega$")? In practice in the computer, does "almost every" play any role?

Edit: Convergence means that, from any starting state $i$, the random walk in the computer reaches an absorbing state in a finite (but not fixed) number of steps.
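For concreteness, here is a minimal sketch of the kind of simulation I mean (the chain itself is just a toy example I made up: states $\{0,1,2,3\}$, where $3$ is absorbing and every other state moves to a neighbour with equal probability):

```python
import random

# Toy absorbing chain on states {0, 1, 2, 3}; state 3 is absorbing.
# Every non-absorbing state k moves to max(k - 1, 0) or k + 1 with
# equal probability.

def simulate(start, rng):
    """Run the chain until absorption; return the number of steps taken."""
    state, steps = start, 0
    while state != 3:  # 3 is the absorbing state
        state = rng.choice([max(state - 1, 0), state + 1])
        steps += 1
    return steps

rng = random.Random(0)
steps = [simulate(0, rng) for _ in range(10_000)]
# Every run above terminated, i.e. absorption occurred in finitely many
# steps each time.
print(max(steps))
```

In every run I have tried, the loop terminates. My question is whether this is guaranteed, or only an "almost sure" artifact.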


Answer 1:

I think your bigger issue is not so much the computer (as pointed out in the comments), but convergence, which is a statement about "large $n$".

In particular, how are you measuring convergence? Are you just checking at some time $n$ whether the chain has been absorbed? In that case, there is a positive probability that it has not yet been absorbed. (Think of a simple chain that starts from state 1, stays at 1 with probability 0.00001, and otherwise moves to state 0, which is absorbing.) For any finite $n$, the probability that you are still in state 1 is $(0.00001)^n$, which is indeed greater than 0.
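To make this concrete, the probability of still being unabsorbed after $n$ steps in the chain just described is simply $p^n$ with $p = 0.00001$ (a value chosen above purely for illustration), so it is positive for every finite $n$ but shrinks extremely fast:

```python
# Survival probability for the toy chain above: stay in state 1 with
# probability p, otherwise move to the absorbing state 0.  After n steps,
# P(still in state 1) = p ** n: positive for every finite n, never zero.
p = 0.00001
for n in (1, 2, 3):
    prob_unabsorbed = p ** n
    print(n, prob_unabsorbed)
    assert prob_unabsorbed > 0  # positive for any finite n
```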

Answer 2:

If you're talking about a simulation where you keep choosing between the possible next transitions using the output of a PRNG (pseudo-random number generator), then there have to be exceptions.

We can even construct fairly small Markov chains that are mathematically impeccably absorbing, but where the PRNG will never reach the absorbing state no matter how you seed it.

Suppose your PRNG has an internal state of $n$ bits. Then let our Markov chain have states $\{0,1,2,\ldots,n+1\}$, where state $n+1$ is absorbing and every other state $k$ moves to $0$ or $k+1$ with equal probability.

Then, starting in state $0$, your simulation can only reach the absorbing state if the PRNG produces $n+1$ "go right" draws in a row. That is still possible, though.

However, note that there are at most $2^n$ different sequences of "left" and "right" that can possibly be the outcome of the next $n+1$ draws of the PRNG -- because that's how many different internal states it has. So, if we know the PRNG, we can choose the description of the Markov chain such that the sequence it needs to get to the absorbing state is one of the at least $2^{n+1}-2^n$ sequences that the PRNG cannot produce.

And then, no matter how long you keep simulating the chain, it will never reach the absorbing state.
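The construction can be demonstrated at toy scale (the particular 3-bit LCG below is my own choice, not a real-world PRNG; its output bit happens to alternate $1,0,1,0,\ldots$ over its full period, so any chain that needs two consecutive "go right" draws can never absorb, whatever the seed):

```python
# Toy 3-bit PRNG: a linear congruential generator mod 8 with full period.
# Its output bit alternates, so the sequence "right, right" is impossible.

def lcg_bit(state):
    """Advance the 3-bit LCG and return (new_state, output_bit)."""
    state = (5 * state + 3) % 8
    return state, state & 1

# Exhaustive check: from every possible internal state, the next two
# output bits are never both 1.
for seed in range(8):
    s, b1 = lcg_bit(seed)
    _, b2 = lcg_bit(s)
    assert not (b1 == 1 and b2 == 1)

# Simulate the chain {0, 1, 2} (state 2 absorbing, "bit 1" = go right,
# "bit 0" = reset to 0) for a long time: absorption never happens.
state, chain = 0, 0
for _ in range(10_000):
    state, bit = lcg_bit(state)
    chain = chain + 1 if bit == 1 else 0
    assert chain != 2  # the absorbing state is never reached
print("never absorbed in 10000 steps")
```

Real PRNGs have vastly larger state (e.g. the Mersenne Twister has 19937 bits), so the adversarial chain would be impractically large -- but the counting argument is the same.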

But mathematically, the chain always has probability at least $2^{-(n+1)}$ of ending up in the absorbing state within the next $n+1$ moves, so this ought to happen eventually with probability $1$.