Let $\{X_n\}_{n=0}^\infty$ be an absorbing Markov chain. It is well-known that $$ P(\text{chain gets absorbed}|X_0=i)=1. $$ My question is how this is interpreted in practice. We have that for almost every "experiment" $\omega$, the chain gets absorbed. I am not sure how to interpret this "almost sure" sense. If I simulate the chain in a computer, will I always observe convergence to an absorbing state? Or could there be some execution $\omega$ in the computer in which the chain does not converge (because $\omega$ does not form part of "almost every $\omega$")? In practice in the computer, does "almost every" play any role?
Edit: Convergence means that, from any starting state $i$, the random walk in the computer reaches an absorbing state in a finite number (not fixed) of steps.
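To make the question concrete, here is a minimal simulation sketch (the 3-state transition matrix and the names `P`, `ABSORBING`, `steps_to_absorption` are made up for illustration, not from any particular chain):

```python
import random

# Hypothetical transition matrix for a small absorbing chain.
# State 2 is absorbing: its row is a point mass on itself.
P = [
    [0.5, 0.3, 0.2],
    [0.2, 0.5, 0.3],
    [0.0, 0.0, 1.0],
]
ABSORBING = {2}

def steps_to_absorption(start, rng=random):
    """Run the chain from `start` until it hits an absorbing state;
    return the number of steps taken (random, but almost surely finite)."""
    state, n = start, 0
    while state not in ABSORBING:
        state = rng.choices(range(len(P)), weights=P[state])[0]
        n += 1
    return n

# Each run terminates with probability 1, but the step count varies run to run.
print([steps_to_absorption(0) for _ in range(5)])
```

Is the right interpretation that every such run terminates in finite time with probability 1, even though no fixed bound on the number of steps exists?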
I think your bigger issue is not so much the computer (as pointed out in the comments), but convergence, which is a statement about "large $n$".
In particular, how are you measuring convergence? Are you just checking at some fixed time $n$ whether the chain has been absorbed? In that case, there is a positive probability that it has not yet been absorbed by time $n$. (Think of a simple chain that starts in state 1, stays in state 1 with probability $0.00001$, and otherwise moves to state 0, which is absorbing.) For any finite $n$, the probability that you are still in state 1 is $(0.00001)^n$, which is strictly greater than $0$.
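The numbers in that toy example can be checked directly (this is just the two-state chain described above, nothing more):

```python
p = 0.00001  # probability of remaining in state 1 at each step

# Probability of NOT yet being absorbed after n steps is p**n:
for n in [1, 2, 10]:
    print(n, p ** n)  # strictly positive for every finite n

# ...yet p**n -> 0 as n -> infinity, which is exactly the
# "absorbed with probability 1" (almost sure) statement.
```

So at any finite cutoff you can be unlucky; the probability of that shrinks to zero but never equals zero at a fixed $n$.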