Mean time spent in transient states/Markov chain


I don't get this in my book:

For transient states $i$ and $j$, let $s_{ij}$ denote the expected number of time periods that the Markov chain is in state $j$, given that it starts in state $i$. Let $\delta_{i,j} = 1$ when $i = j$ and $0$ otherwise. Condition on the initial transition to obtain:

$s_{ij} = \delta_{i,j} + \sum_k P_{ik}s_{kj} = \delta_{i,j} + \sum\limits_{k=1}^{t} P_{ik}s_{kj}$
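In matrix form over the transient states, this identity reads $S = I + P_T S$, i.e. $S = (I - P_T)^{-1}$, the fundamental matrix. A quick numerical sketch with NumPy (the 4-state chain here is my own made-up example, not from the book):

```python
import numpy as np

# Hypothetical 4-state chain: states 0, 1 are transient; 2, 3 are absorbing.
P = np.array([
    [0.2, 0.5, 0.3, 0.0],
    [0.4, 0.1, 0.0, 0.5],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

PT = P[:2, :2]                      # restriction to the transient states
S = np.linalg.inv(np.eye(2) - PT)   # fundamental matrix: entries are s_ij

# Verify the conditioning identity s_ij = delta_ij + sum_k P_ik s_kj
assert np.allclose(S, np.eye(2) + PT @ S)
print(S)
```

Here the sum over $k$ only ranges over the transient states, since $s_{kj} = 0$ whenever $k$ is recurrent.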

Can someone show me how conditioning on the initial transition yields the above equation, or explain how to interpret it?

If you start in $j$ (that is, $i = j$), you automatically get one count of being in the target state. Then condition on the first transition: take a weighted average of the expected number of visits to $j$ from each possible next state $k$, weighting by the probability $P_{ik}$ of moving to $k$ first.

Let $S_{j}$ be a random variable that counts the number of times one visits state $j$.

$$ S_{j} = \sum_{n=1}^\infty 1_{X_n = j} $$
$$ s_{ij} = E[S_j | X_1 = i] = \sum_{n=1}^\infty E[1_{X_n = j} | X_1 = i] $$
$$ = \delta_{ij} + \sum_{n=2}^\infty E[1_{X_n = j} | X_1 = i] $$
$$ = \delta_{ij} + \sum_{n=2}^\infty \sum_k P_{ik} E[1_{X_n = j} | X_1 = i, X_2 = k] $$

Above we used what's sometimes called the law of total expectation. Now use the Markov property and switch the order of summation:

$$ = \delta_{ij} + \sum_k P_{ik} E\left[\sum_{n=2}^\infty 1_{X_n = j} \,\Big|\, X_2=k\right] $$
The sum inside the expectation above is essentially the one in the definition of $s_{kj}$, except that the time index starts at $2$ instead of $1$; by the Markov property that shift doesn't affect the expectation.
$$ = \delta_{ij} + \sum_k P_{ik} s_{kj}$$
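The conditioning argument can also be checked by brute force: simulate runs of a chain started in $i$, count the time periods spent in $j$ (the start counts, per the $\delta_{ij}$ term), and compare the average against $S = (I - P_T)^{-1}$. The small chain below is my own illustrative example:

```python
import random

# Hypothetical chain: states 0 and 1 are transient, state 2 is absorbing.
P = {0: [(0, 0.3), (1, 0.3), (2, 0.4)],
     1: [(0, 0.2), (1, 0.3), (2, 0.5)]}

def visits(i, j, rng):
    """Count time periods spent in j for one run started in i (the start counts)."""
    count, state = 0, i
    while state != 2:                       # run until absorption
        count += (state == j)
        nxt, wts = zip(*P[state])
        state = rng.choices(nxt, wts)[0]
    return count

rng = random.Random(0)
n = 100_000
est = sum(visits(0, 0, rng) for _ in range(n)) / n
# Analytic value from the fundamental matrix: s_00 = 0.7 / 0.43 ≈ 1.628
print(est)
```

With $P_T = \begin{pmatrix} 0.3 & 0.3 \\ 0.2 & 0.3 \end{pmatrix}$, inverting $I - P_T$ gives $s_{00} = 0.7/0.43 \approx 1.628$, and the Monte Carlo estimate lands close to that.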