Finite Markov Chain: $ X_n = X_{n-1}+B_{n} $ with two recurrent states.


I have a simple and rather basic question regarding Markov chains:

Let $B_{n}$ be an i.i.d. process taking the values $\{-1,1\}$, each with probability $\frac{1}{2}$.

We define the following Markov chain on the state space $\{0,1,...,K-1,K\}$:

$X_{n}=\begin{cases} X_{n-1}+B_{n}, & 0<X_{n-1}<K\\ X_{n-1}, & X_{n-1}\in\{0,K\} \end{cases}$

The state diagram (for $p=\frac{1}{2}$):

(state diagram image: the chain on $\{0,\dots,K\}$ with absorbing endpoints $0$ and $K$)

I'm interested in finding the hitting probability:

$\rho_{ji}=P\{\exists n>0:X_{n}=i|X_{0}=j\}$ for $i,j\in\{1,2,...,K-1\}$

which is the probability that the process $X_{n}$ reaches state $i$ at some time $n$, given that the initial state was $j$.

First, I was able to calculate: $\pi_{k}=\rho_{k0}=P\{\exists n>0:X_{n}=0|X_{0}=k\}=1-\frac{k}{K}$ for $0<k<K$,

which is the probability of reaching state $0$ given that the initial state was $k$, with $0<k<K$.

This was done by assuming a general solution of the form $\pi_{k}=a\lambda^{k}+b$ and solving the recurrence relating $\pi_{k},\pi_{k-1},\pi_{k+1}$, which gives $\pi_{k}=1-\frac{1-\lambda^{k}}{1-\lambda^{K}}$, where $\lambda=\frac{1-p}{p}$.

Taking the limit $\lambda\rightarrow1$, which suits our case (in which $p=\frac{1}{2}$), gives the result for $\pi_{k}$ written above.
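The limiting result can be sanity-checked numerically. The sketch below (a hypothetical helper, not part of the original problem) solves the boundary-value recurrence $\pi_{k}=\frac{1}{2}(\pi_{k-1}+\pi_{k+1})$ with $\pi_{0}=1$, $\pi_{K}=0$ by simple Gauss-Seidel sweeps and recovers $\pi_{k}=1-\frac{k}{K}$:

```python
def ruin_prob(K, sweeps=2000):
    """Solve pi_k = (pi_{k-1} + pi_{k+1}) / 2 with pi_0 = 1, pi_K = 0
    by Gauss-Seidel iteration (hypothetical helper for a sanity check)."""
    pi = [0.0] * (K + 1)
    pi[0] = 1.0  # absorption at 0 is certain if we start there
    for _ in range(sweeps):
        for k in range(1, K):
            pi[k] = 0.5 * (pi[k - 1] + pi[k + 1])
    return pi

# For K = 6 this converges to 1 - k/6 at every interior state.
```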

I want to use $\pi_{k}$ to find an expression for $\rho_{ji}$ for $i,j\in\{1,2,...,K-1\}$.

I argue that, in the case $j>i$:

$\rho_{ji}=P\{\exists n>0:X_{n}=i|X_{0}=j\}=P\{\exists n>0:X_{n}=0|X_{0}=j-i\}=\pi_{j-i}$, by symmetry,

which means that moving (to the left) from state $j$ to state $i$ (where $i,j\in\{1,2,...,K-1\}$) is equivalent

to moving from state $j-i$ to state $0$.

Thus, by using the previous solution for $\pi_{k}$, one can find $\rho_{ji}$ in the case $j>i$:

$\rho_{ji}=P\{\exists n>0:X_{n}=i|X_{0}=j\}=P\{\exists n>0:X_{n}=0|X_{0}=j-i\}=\begin{cases} 1, & j-i=k=0\\ 1-\frac{k}{K}, & j-i=k\in\{1,...,K-1\}\\ 0, & j-i=k=K \end{cases}$

I somehow also concluded that $\rho_{ji}$ for $j<i$ should equal the expression I got for the $j>i$ case, since intuitively that sounded right to me.

But it turns out, according to an official solution (which wasn't very detailed), that the final answer for the case $j<i$ (moving to the right) should be:

$\rho_{ji}=\frac{j}{i}$

I can't figure out why, and I would be glad for some enlightenment.

Note: it's possible that there is a mistake in the official solution, and I hope you can help me verify that.



Edit:

Thanks to Misha's answer, I figured out that the reason for replacing $K$ with $K-i$ is the following:

For the case $i<j$:

We define a "new" Markov chain according to the distance we want to travel from $j$ down to $i$ (from right to left).

For example, if $K=6$ and we want to move from state $j=5$ to state $i=2$, it is equivalent

to moving from state $3$ to state $0$ in a "smaller" Markov chain that has the state $K-i$ at its far right instead of $K$ (a state we are never interested in visiting, since our goal is to reach the target state, which lies to the left).

In other words, the Markov chain $\{0,...,K-i\}$ contains all the possible traveling distances to state $0$ starting from state $K-i-1$, and the largest such distance is moving $K-i-1$ states down to state $0$, which matches the largest traveling distance from a starting state $j$ down to state $i$, namely $j-i$.
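This shrinking argument can be checked by simulation. The sketch below (hypothetical helper names, plain Monte Carlo) estimates the probability of reaching a target state before absorption, and compares the full chain $\{0,\dots,6\}$ started at $5$ with target $2$ against the shrunken chain $\{0,\dots,4\}$ started at $3$ with target $0$; both should come out near $\frac{1}{4}$:

```python
import random

def hit_prob(K, start, target, trials=200_000, seed=0):
    """Monte Carlo estimate of P(reach `target` before absorption at 0 or K
    | X_0 = start) for the symmetric walk (illustrative sketch)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = start
        while x != target and 0 < x < K:
            x += rng.choice((-1, 1))
        hits += (x == target)
    return hits / trials

# Moving from j = 5 to i = 2 in the chain {0,...,6} is equivalent to
# moving from 3 to 0 in the shrunken chain {0,...,4}.
full = hit_prob(6, 5, 2)
shrunk = hit_prob(4, 3, 0)
```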

Accepted Answer:

So there's one modification that should be made to your answer, which is that when you "shift the problem over" to ask about $\rho_{k0}$ instead of $\rho_{ji}$, you should shift the last state over from $K$ to $K-i$.

Once that's done: the reason that the answer for $j < i$ is different is that to reduce it to your case, you should replace $j$ by $K-j$ and $i$ by $K-i$. Your formula is equivalent to $\rho_{ji} = \frac{K-j}{K-i}$; if you take the mirror image of the $j < i$ case to reduce it to the case you've handled, you get $\rho_{ji} = \frac{j}{i}$.


Here's another approach.

Note that before we reach one of the endpoint states, the difference $X_n - X_{n-1}$ is $1$ with probability $\frac12$ and $-1$ with probability $\frac12$; in expectation, $\mathbb E[X_n - X_{n-1}] = 0$. We start at $X_0 = j$; therefore for any $n$, \begin{align} \mathbb E[X_n] &= \mathbb E[X_n - X_{n-1}] + \mathbb E[X_{n-1} - X_{n-2}] + \dots + \mathbb E[X_1 - X_0] + \mathbb E[X_0] \\ &= 0 + 0 + \dots + 0 + j = j. \end{align}
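That the zero-drift property survives absorption at the endpoints (the chain simply freezes there, so $\mathbb E[X_n]=j$ for every $n$) can be checked empirically with a short simulation sketch (hypothetical helper; the parameters $K=6$, $j=4$ are illustrative):

```python
import random

def mean_after_steps(K, j, n, trials=200_000, seed=1):
    """Empirical E[X_n] for the symmetric walk started at j, frozen at 0 and K.
    The martingale property predicts the answer is exactly j."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x = j
        for _ in range(n):
            if 0 < x < K:  # the chain stays put once absorbed
                x += rng.choice((-1, 1))
        total += x
    return total / trials
```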

We can extend this (formally, by using Wald's identity) to a stopping time. Let $T$ be the time at which we either reach state $i$ (as we want) or reach one of the states $0$ or $K$. Then we have $\mathbb E[X_T] = j$.

On the other hand, we can find $\mathbb E[X_T]$ in terms of $\rho_{ji}$. We have:

  • If $i < j$, then we either reach state $i$ or state $K$, so $$j = \mathbb E[X_T] = i \cdot \rho_{ji} + K \cdot (1 - \rho_{ji})$$ giving us $\rho_{ji} = \frac{K-j}{K-i}$.
  • If $i > j$, then we either reach state $i$ or state $0$, so $$j = \mathbb E[X_T] = i \cdot \rho_{ji} + 0 \cdot (1 - \rho_{ji})$$ giving us $\rho_{ji} = \frac{0-j}{0-i} = \frac ji$.
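Both closed forms can be verified against the linear system for the hitting probabilities, $h_x = \frac{1}{2}(h_{x-1}+h_{x+1})$ with $h_i = 1$ and $h_0 = h_K = 0$. The sketch below (a hypothetical helper) solves that system iteratively and compares with $\frac{K-j}{K-i}$ and $\frac{j}{i}$ for $K=6$:

```python
def reach_prob(K, start, target):
    """Exact P(hit `target` before {0, K} | X_0 = start) for the symmetric walk,
    via Gauss-Seidel on h_x = (h_{x-1} + h_{x+1}) / 2, h_target = 1, h_0 = h_K = 0.
    (Hypothetical helper for checking the two closed forms.)"""
    h = [0.0] * (K + 1)
    h[target] = 1.0
    for _ in range(2000):
        for x in range(1, K):
            if x != target:
                h[x] = 0.5 * (h[x - 1] + h[x + 1])
    return h[start]

left = reach_prob(6, 5, 2)   # j > i case: expect (6-5)/(6-2) = 1/4
right = reach_prob(6, 2, 5)  # j < i case: expect 2/5
```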

By the way, this also works for $p \ne \frac12$, except that instead of computing $\mathbb E[X_T]$ (which is no longer equal to $X_0 = j$ in the asymmetric case) we should compute $\mathbb E[\lambda^{X_T}]$, where $\lambda = \frac{1-p}{p}$ makes $\lambda^{X_n}$ a martingale.
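To illustrate the asymmetric case, here is a sketch (hypothetical helper names, illustrative parameters) that computes $\rho_{ji}$ from the optional-stopping identity $\lambda^{j}=\mathbb E[\lambda^{X_T}]$ and compares it against direct simulation:

```python
import random

def rho_martingale(K, j, i, p):
    """Hitting probability of i from j via optional stopping on lam**X_n,
    lam = (1-p)/p (sketch; assumes i, j in {1,...,K-1}, i != j, p != 1/2)."""
    lam = (1 - p) / p
    other = K if i < j else 0   # the absorbing endpoint competing with i
    # lam**j = rho * lam**i + (1 - rho) * lam**other
    return (lam**j - lam**other) / (lam**i - lam**other)

def rho_montecarlo(K, j, i, p, trials=200_000, seed=2):
    """Direct simulation of the same probability, for comparison."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = j
        while x != i and 0 < x < K:
            x += 1 if rng.random() < p else -1
        hits += (x == i)
    return hits / trials
```

Setting $p=\frac{1}{2}+\varepsilon$ and letting $\varepsilon\to0$ recovers the two formulas above, in the same way the $\lambda\to1$ limit recovered $\pi_k = 1-\frac{k}{K}$.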