Consider the attached Markov Chain. I need to calculate the E[number of visits to State 2 | the system starts from 2 and gets absorbed to State 1]. More generally, I am interested in calculating the expected time to absorption given that the system is finally absorbed to a specific recurrent state. I can simulate the system, but does anyone have ideas on how I can calculate that analytically? Thanks!
What is the expected time to absorption in a Markov Chain given that the system is absorbed to a specific recurrent state?
5.9k Views. Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail). There are 3 solutions below.
Given that the system is absorbed in state 1 and starts in state 2, you can throw out state 3, as it can never be visited. In this conditioned environment the transition from 4 to 2 occurs with probability 1, and the transitions out of 2 must be renormalized by the absorption probabilities ($h$-transform): writing $h(x)$ for the probability of absorption at 1 starting from $x$, we have $h(2)=\frac{5}{14}$ and $h(4)=\frac{1}{7}$, so the conditioned chain moves $2\to 1$ with probability $0.25\cdot\frac{1}{h(2)}=0.7$ and $2\to 4$ with probability $0.75\cdot\frac{h(4)}{h(2)}=0.3$. Let $C_k(x)$ be the probability that this conditioned chain first hits $x$ at time $k$. Then:
$$C_{k}(1)=0.7\cdot C_{k-1}(2),\qquad C_{k}(2)=C_{k-1}(4),\qquad C_{k}(4)=0.3\cdot C_{k-1}(2).$$
You can easily solve this by looking at the last two equations: parity requires $C_k(2)$ to be zero when $k$ is odd, and iterating from $C_0(2)=1$ gives $C_{k}(2)=0.3^{k/2}$ when $k$ is even. This gives $C_k(1)=0.7\cdot 0.3^{(k-1)/2}$ when $k$ is odd and 0 otherwise. Now just sum over the odd indices to get your expectation:
$$\sum_{k=1}^\infty (2k-1)\,C_{2k-1}(1).$$
Can you finish it from here?
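As a numerical sanity check, here is a short sketch (assuming the transition probabilities used in these answers: $2\to1$ w.p. $0.25$, $2\to4$ w.p. $0.75$, $4\to2$ w.p. $0.4$) that finishes the computation by truncating the series:

```python
# Expected time to absorption at state 1, starting from state 2,
# conditioned on absorption at 1.
p21, p24, p42 = 0.25, 0.75, 0.4

# Joint probability of absorbing at 1 at time 2m+1 (after m round trips
# 2 -> 4 -> 2):  P(T = 2m+1, absorb at 1) = (p24*p42)**m * p21
r = p24 * p42                      # probability of one 2 -> 4 -> 2 round trip
p_absorb = p21 / (1 - r)           # P(absorb at 1 | start at 2) = 0.25/0.7

# E[T | absorb at 1]: truncate the (geometrically convergent) series
expected_T = sum((2 * m + 1) * r**m * p21 for m in range(200)) / p_absorb
print(expected_T)                  # 13/7 ≈ 1.8571
```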
Let us find the conditional distribution of the number $N$ of visits to state $2$, given that absorption at state $1$ occurs, for the chain starting at state $2$. Denote by $A$ the event $$ A=\{\exists n: X_n=1\}=\{\text{absorption at state $1$ occurs}\} $$ where $X_n$, $n\ge 0$, is the MC. We need to find $$ P_2(N=k \mid A) = \dfrac{P_2(N=k,\ A)}{P_2(A)} $$ where the index $2$ indicates that the chain starts from state $2$. To calculate the absorption probability $P_2(A)$, write and solve a pair of relations. Let $P_4(A)$ be the probability of absorption at $1$ for the chain starting at $4$. To absorb at state $1$ starting at $2$, one can either move to $1$ w.p. $0.25$ and stay there, or move to $4$ w.p. $0.75$ and then absorb at $1$ w.p. $P_4(A)$. To absorb at state $1$ starting at $4$, one must move to $2$ w.p. $0.4$ and then absorb at $1$ w.p. $P_2(A)$. Therefore $$P_2(A)=0.75P_4(A)+0.25,\quad P_4(A)=0.4P_2(A).$$ Solving these relations gives $$P_2(A)=\dfrac{0.25}{1-0.75\cdot0.4}.$$
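These two relations can also be checked mechanically; a minimal sketch (variable names are mine) that solves the $2\times2$ linear system with `numpy.linalg.solve`:

```python
import numpy as np

# Solve  P2 = 0.75*P4 + 0.25  and  P4 = 0.4*P2  as  A x = c.
A = np.array([[1.0, -0.75],
              [-0.4, 1.0]])
c = np.array([0.25, 0.0])
P2, P4 = np.linalg.solve(A, c)
print(P2)   # 0.25 / (1 - 0.75*0.4) = 5/14 ≈ 0.3571
```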
Next, find $P_2(N=k,\ A)$. The chain starts at $2$. To have exactly $k$ visits to $2$ and then absorb at $1$, the chain must make $k-1$ round trips from $2$ to $4$ and back, and then move to $1$: $$ P_2(N=k,\ A) = 0.25\cdot(0.75\cdot0.4)^{k-1}. $$
Finally, $$ P_2(N=k \mid A) = \dfrac{P_2(N=k,\ A)}{P_2(A)} = (0.75\cdot0.4)^{k-1}(1-0.75\cdot0.4). $$ This is a geometric distribution with mean $\dfrac{1}{1-0.75\cdot0.4}.$
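A quick Monte Carlo cross-check of this geometric law (my own sketch; the probabilities $0.25$, $0.75$, $0.4$ and the absorbing states $1$ and $3$ are taken from the chain described in this answer):

```python
import random

# Simulate the chain from state 2; count visits to 2, keep only
# runs that end in absorption at state 1.
random.seed(0)
visits = []
for _ in range(200_000):
    state, n = 2, 0
    while state in (2, 4):
        if state == 2:
            n += 1
            state = 1 if random.random() < 0.25 else 4
        else:  # state 4
            state = 2 if random.random() < 0.4 else 3
    if state == 1:
        visits.append(n)

print(sum(visits) / len(visits))   # ≈ 1/(1 - 0.75*0.4) = 1/0.7 ≈ 1.4286
```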

Here, I am generalizing NCH's answer to this question. Consider a Markov chain with state space $\Omega$. I use $A$ to denote the set of absorbing states and $A^c$ to denote the set of transient states ($A\cup A^c = \Omega$). I am interested in calculating $E(V_{ij}\mid B_{ib})$, where the random variable $V_{ij}$ is the number of visits to State $j \in A^c$ given that the system starts from State $i \in A^c$, and $B_{ib}$ denotes the event of absorption at State $b \in A$ for the system starting from State $i \in A^c$. We know:
$$ \Pr(V_{ij}=k|B_{ib}) = \frac{\Pr(V_{ij}=k,B_{ib}) }{\Pr(B_{ib})}. $$ The probability $\Pr(B_{ib})$ can be calculated as shown in this Wikipedia article (Subsection Absorbing Probabilities).
Let's use $T_{ij}$ to denote the event of visiting State $j$, starting from State $i$, before any absorption (not just absorption at $b$). For $k \ge 1$, the event $\{V_{ij}=k\} \cap B_{ib}$ consists of: one first passage from $i$ to $j$, $k-1$ returns from $j$ to $j$, and finally absorption at $b$ without revisiting $j$. By the strong Markov property, $\Pr(B_{jb}) = \Pr(T_{jj})\Pr(B_{jb}) + \Pr(B_{jb},\ \text{no return to } j)$, so this last piece has probability exactly $\Pr(B_{jb})(1-\Pr(T_{jj}))$. That is: $$ \Pr(V_{ij}=k,B_{ib}) = \Pr(T_{ij}) \Pr(T_{jj})^{k-1} [\Pr(B_{jb})(1-\Pr(T_{jj}))] . $$ To calculate $\Pr(T_{ij})$, I will use the result in the Transient Probabilities subsection of this Wikipedia article. So: $$ \begin{align} E(V_{ij}|B_{ib}) &= \sum_{k=1}^\infty k \Pr(V_{ij}=k|B_{ib}) \\ &= \sum_{k=1}^\infty k \frac{\Pr(T_{ij}) \Pr(T_{jj})^{k-1} [\Pr(B_{jb})(1-\Pr(T_{jj}))]}{\Pr(B_{ib})} \\ &= \frac{\Pr(T_{ij}) [\Pr(B_{jb})(1-\Pr(T_{jj}))]}{\Pr(B_{ib})} \sum_{k=1}^\infty k \Pr(T_{jj})^{k-1} \\ &= \frac{\Pr(T_{ij}) [\Pr(B_{jb})(1-\Pr(T_{jj}))]}{\Pr(B_{ib}) (1-\Pr(T_{jj}))^2} \\ & = \frac{\Pr(T_{ij}) \Pr(B_{jb})}{\Pr(B_{ib}) (1-\Pr(T_{jj}))}, \qquad \forall\, i \ne j \in A^c,\ b\in A. \end{align} $$
If $i = j$: $$ E(V_{ii}\mid B_{ib}) = \frac{1}{1-\Pr(T_{ii})}, \qquad \forall\, i \in A^c,\ b\in A. $$
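This recipe can be implemented in a few lines with the fundamental matrix $N=(I-Q)^{-1}$; the visiting probabilities are $\Pr(T_{ij})=(N_{ij}-\delta_{ij})/N_{jj}$ and the absorption probabilities are $B=NR$, as in the Wikipedia article cited above. A sketch (the names `Q`, `R`, and `conditional_visits` are mine), checked on the chain from the question with transient states $\{2,4\}$ and absorbing states $\{1,3\}$:

```python
import numpy as np

def conditional_visits(Q, R, i, j, b):
    """E[visits to transient state j | start at transient state i,
    absorption at absorbing state b].  Q: transient->transient,
    R: transient->absorbing blocks of the transition matrix."""
    n = Q.shape[0]
    N = np.linalg.inv(np.eye(n) - Q)        # fundamental matrix
    B = N @ R                                # absorption probabilities
    # Pr(T_ij): probability of ever visiting j from i before absorption
    T = lambda a, c: (N[a, c] - (a == c)) / N[c, c]
    if i == j:
        return 1 / (1 - T(j, j))
    return T(i, j) * B[j, b] / (B[i, b] * (1 - T(j, j)))

# Example chain: transient states {2, 4} -> row indices {0, 1},
# absorbing states {1, 3} -> column indices {0, 1}.
Q = np.array([[0.0, 0.75],
              [0.4, 0.0]])
R = np.array([[0.25, 0.0],
              [0.0, 0.6]])
print(conditional_visits(Q, R, 0, 0, 0))    # 1/(1 - 0.75*0.4) ≈ 1.4286
```

The same call with $j=4$ (`conditional_visits(Q, R, 0, 1, 0)`) gives $3/7$, consistent with the fact that, conditioned on absorption at 1, every visit to 4 is sandwiched between two visits to 2.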