Stationary distribution of a Markov chain in G. Bianchi's paper


[Markov chain diagram omitted; see Fig. 3 of the linked paper]

The above Markov chain diagram is from a paper (Fig. 3) that is linked below: http://www.icg.isy.liu.se/courses/tsin01/material/Bianchi1998.pdf

Although the paper deals with quite a few technical details of Wi-Fi, I am sure my question can be answered by anyone comfortable with Markov chains.

In the Markov chain above, each state is denoted as $(i,k)$, and $s(t)$ and $b(t)$ denote the stochastic processes tracking $i$ and $k$, respectively. Given these notations, the paper states:

Let $b_{i,k} = \lim_{t \rightarrow \infty} P\{s(t)=i, b(t)=k\}$ where $i \in (0,m), k\in (0,W_i-1)$ be the stationary distribution of the chain. Owing to the chain regularities, the following relations hold: \begin{align} b_{i,0} &= p^i b_{0,0}\\ b_{m,0} &= \frac{p^m}{1-p}b_{0,0}\\ b_{i,k} &= \frac{W_i - k}{W_i}b_{i,0} \end{align}

My Question: Does anybody know how these three equations were derived? I especially have no clue where the fraction ${W_i - k}/{W_i}$ came from in the third equation. Any help from stat experts will be really appreciated!

There are 2 answers below.

Answer 1:

For $1 \le i\le m-1$ and $0\le k\le W_i-2$, the only way state $(i,k)$ can be reached is from the states $(i-1,0)$ and $(i,k+1)$, from which the chain jumps to it with probabilities $\frac{p}{W_i}$ and $1$, respectively. Therefore
$$ b_{i,k} = \frac{p}{W_i}b_{i-1,0} + b_{i,k+1} $$
for those values of $i$ and $k$. By induction on $k$, these equations give
$$ b_{i,k}=\left(W_i - k-1\right)\frac{p}{W_i}b_{i-1,0}+ b_{i,W_i-1}\ . $$

Since the only way state $\left(i, W_i-1\right)$, for $1\le i\le m-1$, can be reached is from the state $(i-1,0)$, from which the chain jumps to it with probability $\frac{p}{W_i}$, we have $b_{i,W_i-1}=\frac{p}{W_i}b_{i-1,0}$. Substituting for $b_{i,W_i-1}$ in the equation above, we therefore get
$$ b_{i,k}=p\left(\frac{W_i - k}{W_i}\right) b_{i-1,0} $$
for $1\le i\le m-1$ and $0\le k\le W_i-1$. In particular, we have
$$ b_{i,0} = p\, b_{i-1,0} $$
for $1\le i\le m-1$, and therefore, by induction on $i$,
\begin{align} b_{i,0} &= p^i b_{0,0}\\ b_{i,k} &= \left(\frac{W_i - k}{W_i}\right)p^i b_{0,0} = \left(\frac{W_i - k}{W_i}\right)b_{i,0} \end{align}
for those values of $i$.
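The backward induction above can be checked numerically. Here is a minimal sketch for a single stage $i$, with illustrative values $p=0.3$, $W_i=8$, and $b_{i-1,0}$ normalized to $1$ (these parameter values are assumptions chosen for the example, not taken from the paper):

```python
# Numeric check of the backward induction for one stage i (illustrative values).
p, Wi = 0.3, 8
b_prev0 = 1.0                      # take b_{i-1,0} = 1 as the reference scale

b = [0.0] * Wi
b[Wi - 1] = (p / Wi) * b_prev0     # boundary: b_{i,W_i-1} = (p/W_i) b_{i-1,0}
for k in range(Wi - 2, -1, -1):    # recursion: b_{i,k} = (p/W_i) b_{i-1,0} + b_{i,k+1}
    b[k] = (p / Wi) * b_prev0 + b[k + 1]

# Compare with the closed form b_{i,k} = p (W_i - k)/W_i * b_{i-1,0}
for k in range(Wi):
    assert abs(b[k] - p * (Wi - k) / Wi * b_prev0) < 1e-12
print("closed form b_{i,k} = p (W_i - k)/W_i * b_{i-1,0} confirmed")
```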

Next, each of the states $(m, k)$, for $0\le k\le W_m-2$, can only be reached from one of the states $(m,k+1)$, $(m-1,0)$, or $(m,0)$, from which the chain jumps to it with probabilities $1$, $\frac{p}{W_m}$, and $\frac{p}{W_m}$, respectively. Therefore
$$ b_{m,k}=b_{m,k+1} + \frac{p}{W_m}\left(b_{m-1,0}+b_{m,0}\right)\ , $$
and by the same argument as before, we end up with
$$ b_{m,k} = p\left(\frac{W_m-k}{W_m}\right)\left(b_{m-1,0}+b_{m,0}\right)\ . $$
Putting $k=0$ in this equation and solving for $b_{m,0}$ gives
$$ b_{m,0}=\frac{p\,b_{m-1,0}}{1-p}=\frac{p^m b_{0,0}}{1-p}\ , $$
and then, substituting for $b_{m,0}$ back in the preceding equation,
$$ b_{m,k} = \left(\frac{W_m-k}{W_m}\right)\frac{p^m b_{0,0}}{1-p}= \left(\frac{W_m-k}{W_m}\right)b_{m,0}\ . $$

Now each of the states $(0,k)$, for $0\le k\le W_0-2$, can only be reached from one of the states $(0,k+1)$ or $(i,0)$, for $0\le i\le m$, from which the chain jumps to it with probabilities $1$ and $\frac{1-p}{W_0}$, respectively. So, similarly to the previous two cases, we get
\begin{eqnarray} b_{0,k} &=& (1-p)\left(\frac{W_0-k}{W_0}\right)\sum_{i=0}^m b_{i,0}\\ &=& (1-p)\left(\frac{W_0-k}{W_0}\right)\left(\frac{p^m b_{0,0}}{1-p} +\sum_{i=0}^{m-1}p^i b_{0,0}\right)\\ &=& \left(\frac{W_0-k}{W_0}\right)b_{0,0}\ , \end{eqnarray}
where the last step uses $\sum_{i=0}^{m-1}p^i = \frac{1-p^m}{1-p}$. This completes the derivation of the three equations for all values of $i$ and $k$.
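As a sanity check of all three relations at once, one can build the full transition matrix for small assumed parameters (here $p=0.3$, $m=3$, $W_0=4$, with $W_i = 2^i W_0$ as in the paper's binary exponential backoff setting) and compute the stationary distribution directly:

```python
import numpy as np

p = 0.3          # conditional collision probability (assumed value)
m = 3            # maximum backoff stage
W0 = 4           # minimum contention window
W = [W0 * 2**i for i in range(m + 1)]   # W_i = 2^i * W_0

# Enumerate states (i, k) with k in 0..W_i-1
states = [(i, k) for i in range(m + 1) for k in range(W[i])]
idx = {s: n for n, s in enumerate(states)}
N = len(states)

P = np.zeros((N, N))
for (i, k) in states:
    r = idx[(i, k)]
    if k > 0:
        P[r, idx[(i, k - 1)]] = 1.0          # backoff counter decrements
    else:
        for k2 in range(W[0]):               # success: restart at stage 0
            P[r, idx[(0, k2)]] += (1 - p) / W[0]
        j = min(i + 1, m)                    # collision: next stage, capped at m
        for k2 in range(W[j]):
            P[r, idx[(j, k2)]] += p / W[j]

# Stationary distribution: left eigenvector of P for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
b = np.real(vecs[:, np.argmax(np.real(vals))])
b /= b.sum()

b00 = b[idx[(0, 0)]]
for i in range(m):
    assert np.isclose(b[idx[(i, 0)]], p**i * b00)        # b_{i,0} = p^i b_{0,0}
assert np.isclose(b[idx[(m, 0)]], p**m / (1 - p) * b00)  # b_{m,0} = p^m/(1-p) b_{0,0}
for (i, k) in states:
    assert np.isclose(b[idx[(i, k)]], (W[i] - k) / W[i] * b[idx[(i, 0)]])
print("all three relations verified")
```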

Answer 2:

It's easiest to solve for the stationary distribution in a different way. Pick any state $s$; then the stationary distribution $b$ satisfies:

  • $\frac{1}{b_s}$ is the expected number of steps it takes to return to $s$, starting from $s$.
  • for a state $x \ne s$, $\frac{b_x}{b_s}$ is the expected number of times state $x$ is visited on an excursion starting from and ending at $s$.

Thus, if we want to solve for all probabilities in terms of $b_{00}$, we want to set $s=(0,0)$ and use the second formula above.

For an arbitrary state $(i,k)$, with $i<m$, we can only visit that state once before returning to $(0,0)$, so the expected number of visits is equal to the probability of a visit. That probability is the product of

  • $p^i$, the probability that we take $i$ steps going a level down before we end up on the path back to $(0,0)$, and
  • $\frac{W_i-k}{W_i}$, the probability that when going from the $(i-1)^{\text{th}}$ to the $i^{\text{th}}$ level, we go to one of $(i,k), (i,k+1), \dots, (i, W_i-1)$. Any of these will eventually get to $(i,k)$.

Notably, this also works when $i=0$; in that case, the probability that we go back to some state on the $0^{\text{th}}$ level eventually is $p^0 = 1$, and then $\frac{W_0-k}{W_0}$ tells us the chance of going to one of $(0,k), (0,k+1), \dots, (0,W_0-1)$ when we do so.

For a state $(m,k)$, the same reasoning gives an initial probability of visiting that state, but once we go to $(m,0)$, we get a second chance. The expected number of visits to $(m,k)$ is $$ p^m \cdot \frac{W_m-k}{W_m} + p^{m+1} \cdot \frac{W_m-k}{W_m} + p^{m+2} \cdot \frac{W_m-k}{W_m} + \dots = \frac{p^m}{1-p} \cdot \frac{W_m-k}{W_m} $$ where the term with $p^{m+j}$ corresponds to reaching the $m^{\text{th}}$ layer and looping around it $j$ times. The sum simplifies by the formula for a geometric series.
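This excursion interpretation can also be tested by simulation. Below is a rough Monte Carlo sketch with small illustrative parameters ($p=0.3$, $m=2$, $W_0=2$, and $W_i = 2^i W_0$ assumed), comparing the empirical number of visits per excursion from $(0,0)$ back to $(0,0)$ with the predicted ratios $b_{i,k}/b_{0,0}$:

```python
import random

# Monte Carlo check of the excursion interpretation (illustrative parameters).
p, m, W0 = 0.3, 2, 2
W = [W0 * 2**i for i in range(m + 1)]

random.seed(0)
n_excursions = 200_000
visits = {}  # total visits per state over all excursions from (0,0) back to (0,0)

for _ in range(n_excursions):
    i, k = 0, 0
    while True:
        if k > 0:
            k -= 1                             # backoff counter decrements
        elif random.random() < p:
            i = min(i + 1, m)                  # collision: next stage, capped at m
            k = random.randrange(W[i])
        else:
            i, k = 0, random.randrange(W[0])   # success: restart at stage 0
        if (i, k) == (0, 0):
            break                              # excursion ends at first return
        visits[(i, k)] = visits.get((i, k), 0) + 1

# Expected visits per excursion should equal b_{i,k}/b_{0,0}
for i in range(m + 1):
    for k in range(W[i]):
        if (i, k) == (0, 0):
            continue
        predicted = p**i * (W[i] - k) / W[i]
        if i == m:
            predicted = p**m / (1 - p) * (W[m] - k) / W[m]
        empirical = visits.get((i, k), 0) / n_excursions
        assert abs(empirical - predicted) < 0.02, (i, k, empirical, predicted)
print("excursion visit counts match b_{i,k}/b_{0,0}")
```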