Probability of Markov chain often returning to the initial state


Suppose $a(t)$ is a discrete-time Markov chain with $n$ states, transition matrix $A = (p_{ij})_{0 \leq i,j < n}$, and initial state $0$. Suppose $\alpha \in (0,1)$. How can one find the asymptotics of $P(t) = P(|\{0 \leq \tau \leq t \mid a(\tau) = 0\}| \leq \alpha t)$ as $t \to \infty$?

Personally, I think there exists a constant $c = c(A, \alpha)$ such that $P(t)$ is asymptotically equivalent to $c^t$. However, I know neither how to prove this conjecture nor how to calculate the explicit value of $c$ for a given $A$ and $\alpha$.
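For what it's worth, the conjecture can be probed numerically: simulate many paths of a small chain, estimate $P(t)$ for a moderate $t$, and read off a crude $c \approx P(t)^{1/t}$. The sketch below uses a toy $2$-state transition matrix of my own (not the general $A$ above); this chain spends about $0.8$ of its time in state $0$ in the long run, and $\alpha = 0.7$ lies below that, so the event becomes rare as $t$ grows.

```python
import random

# Crude numerical probe of the conjecture P(t) ~ c^t.  The 2-state
# transition matrix is a toy example of my own, not the general A
# from the question; this chain spends 0.8 of its time in state 0
# in the long run, and alpha = 0.7 lies below that, so the event
# {#visits to 0 <= alpha * t} becomes rare as t grows.
P = [[0.9, 0.1],
     [0.4, 0.6]]
ALPHA, T, RUNS = 0.7, 100, 4000

def visits_to_zero(t, rng):
    """Number of 0 <= tau <= t with a(tau) = 0, starting from state 0."""
    state, visits = 0, 1          # tau = 0 already counts
    for _ in range(t):
        state = 0 if rng.random() < P[state][0] else 1
        visits += state == 0
    return visits

rng = random.Random(1)
hits = sum(visits_to_zero(T, rng) <= ALPHA * T for _ in range(RUNS))
p_hat = hits / RUNS               # Monte Carlo estimate of P(T)
c_hat = p_hat ** (1 / T)          # crude estimate of the conjectured c
print(p_hat, c_hat)
```

Repeating this for several $t$ and checking whether $P(t)^{1/t}$ stabilizes would give (weak) numerical evidence for an exponential rate; naive Monte Carlo breaks down quickly, though, since $P(t)$ itself becomes tiny.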

BEST ANSWER

To find the asymptotics you can use the concept of ergodicity, the property that the temporal average of a system equals its spatial average. In good introductions to Markov chain theory (for instance, the start of this book), a theorem is proved which states that irreducible and aperiodic Markov chains are ergodic. Moreover, any Markov chain can be decomposed into ergodic components that do not interact with each other.

I will assume that your Markov chain has this property. Consider the function $f(t)$ that returns $1$ if $a(t)=0$, and $0$ otherwise. Then the temporal average of $f$ equals $$ X_t=\frac{\#\{0\leq \tau\leq t\colon a(\tau)=0\}}{t+1}, $$ whereas the spatial average of $f$ is its average with respect to the stationary distribution $\pi(x)$ of the Markov chain, which evaluates simply to $\pi(0)$.
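To make $\pi(0)$ concrete, here is a minimal Python sketch that computes the stationary distribution by power iteration; the $2\times 2$ transition matrix is a toy example of my own, not the matrix from the question.

```python
# Minimal sketch: computing pi(0) by power iteration for a toy
# 2-state chain (my own example, not the matrix from the question).

def stationary(P, iters=10_000):
    """Stationary distribution of row-stochastic P, obtained by
    repeatedly applying the chain to the uniform distribution."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

P = [[0.9, 0.1],
     [0.4, 0.6]]
pi = stationary(P)
print(pi[0])  # ~ 0.8: for a 2-state chain, pi(0) = p10 / (p01 + p10)
```

For this chain the closed form gives $\pi(0) = 0.4 / (0.1 + 0.4) = 0.8$, which the iteration reproduces.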

Thus, the ergodic theorem says that, with probability $1$, the sequence of random variables $X_t$ converges to the deterministic number $\pi(0)$. Consequently, the probability $P(t)$ you have written converges to $1$ if $\alpha>\pi(0)$ and to $0$ if $\alpha<\pi(0)$. (The boundary case $\alpha=\pi(0)$ requires a finer analysis.)
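This dichotomy is easy to check by simulation. The sketch below again uses a toy $2$-state chain of my own (not from the question) with $\pi(0) = 0.8$, and estimates $P(t)$ once for $\alpha$ below $\pi(0)$ and once for $\alpha$ above it:

```python
import random

# Simulation sketch of the dichotomy above, using a toy 2-state
# chain of my own (not from the question) with pi(0) = 0.8.
P = [[0.9, 0.1],
     [0.4, 0.6]]

def count_visits(t, rng):
    """Number of 0 <= tau <= t with a(tau) = 0, starting from state 0."""
    state, visits = 0, 1          # tau = 0 already counts
    for _ in range(t):
        state = 0 if rng.random() < P[state][0] else 1
        visits += state == 0
    return visits

def estimate_P(t, alpha, runs=1000, seed=0):
    """Monte Carlo estimate of P(t) = P(#visits to 0 <= alpha * t)."""
    rng = random.Random(seed)
    return sum(count_visits(t, rng) <= alpha * t for _ in range(runs)) / runs

p_low = estimate_P(2000, alpha=0.5)    # alpha < pi(0): should be near 0
p_high = estimate_P(2000, alpha=0.95)  # alpha > pi(0): should be near 1
print(p_low, p_high)
```

For $t = 2000$ the occupation fraction is already tightly concentrated near $0.8$, so the two estimates sit essentially at $0$ and $1$.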