Mean time spent in transient states / inverse of $(I-P)$


For transient states $i$ and $j$, let $s_{ij}$ denote the expected number of time periods that the Markov chain is in state $j$, given that it starts in state $i$. Let

$\delta_{i,j}= \begin{cases} 1,&\text{if } i=j,\\ 0,& \text{otherwise.} \end{cases}$

Conditioning on the initial transition gives:

$s_{ij}=\delta_{i,j}+\sum_{k=1}^{t} P_{ik}s_{kj}$

Let $S$ denote the matrix of values $s_{ij},\text{for }i,j=1,2,\cdots,t$

In matrix notation, $S=I+PS$, hence $S=(I-P)^{-1}$, where $P$ specifies the transition probabilities from transient states into transient states.
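As a concrete numerical check of $S=(I-P)^{-1}$ and $S=I+PS$, consider an illustrative chain (an assumption for the example, not from the question): a symmetric gambler's-ruin walk on $\{0,1,2,3,4\}$ with absorbing barriers $0$ and $4$, restricted to its transient states $\{1,2,3\}$.

```python
import numpy as np

# Illustrative chain (an assumption, not from the question): symmetric
# gambler's ruin on {0,1,2,3,4}; states 0 and 4 are absorbing, so the
# transient states are {1,2,3}.  P is the transition matrix restricted
# to those transient states.
P = np.array([
    [0.0, 0.5, 0.0],
    [0.5, 0.0, 0.5],
    [0.0, 0.5, 0.0],
])

# Fundamental matrix S = (I - P)^{-1}; S[i, j] is the expected number of
# periods spent in transient state j when starting from transient state i.
S = np.linalg.inv(np.eye(3) - P)

# S satisfies the matrix form of the conditioning identity, S = I + P S.
assert np.allclose(S, np.eye(3) + P @ S)
print(S)
```

For instance, starting from the middle state, the walk spends on average `S[1, 1] = 2` periods in that state before absorption (the initial period is counted via $\delta_{i,j}$).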

Can someone show me how to prove the existence of the inverse of $(I-P)$?

Best answer:

Suppose $(\mathbf{I-P})\mathbf{x}=\mathbf{0}$, which implies $\mathbf{x}-\mathbf{Px}=\mathbf{0}\Rightarrow\mathbf{x}=\mathbf{Px}$. Iterating this equation (substituting $\mathbf{Px}$ for $\mathbf{x}$ repeatedly) gives $$\mathbf{x}=\mathbf{P}^n\mathbf{x}\quad\text{for every }n\geq 1.\tag{1}$$

Since $\mathbf{P}$ contains only the transitions among transient states, the chain eventually leaves them, so $\mathbf{P}^n\to\mathbf{0}$ as $n\to\infty$, and hence $\mathbf{P}^n\mathbf{x}\to\mathbf{0}$. Letting $n\to\infty$ in $(1)$ gives $\mathbf{x}=\mathbf{0}$. Thus $(\mathbf{I-P})\mathbf{x}=\mathbf{0}$ has only the trivial solution, so $\mathbf{I-P}$ is invertible.
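The key fact $\mathbf{P}^n\to\mathbf{0}$ can be observed numerically. The transient-state matrix below is an illustrative choice (a symmetric gambler's ruin on five states restricted to its three transient states; an assumption, not from the question):

```python
import numpy as np

# Illustrative transient-state matrix (an assumption for the example):
# symmetric gambler's ruin on {0,...,4} restricted to states {1,2,3}.
P = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])

# The largest eigenvalue of P in modulus is cos(pi/4) ~ 0.707 < 1, so the
# powers of P shrink geometrically toward the zero matrix.
Pn = np.linalg.matrix_power(P, 50)
assert np.abs(Pn).max() < 1e-6
print(np.abs(Pn).max())
```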

It is also worth adding that $$(\mathbf{I-P})(\mathbf{I+P}+\mathbf{P}^2+\cdots+\mathbf{P}^n)=\mathbf{I}-\mathbf{P}^{n+1}$$ and, since $\mathbf{P}^{n+1}\to\mathbf{0}$, letting $n\to\infty$ gives $$(\mathbf{I-P})(\mathbf{I+P}+\mathbf{P}^2+\cdots)=\mathbf{I}.$$ Hence, $$(\mathbf{I-P})^{-1}=\mathbf{I+P}+\mathbf{P}^2+\cdots$$
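The convergence of this Neumann series to $(\mathbf{I-P})^{-1}$ can also be checked numerically; the transient-state matrix below is again just an illustrative choice (an assumption, not from the question):

```python
import numpy as np

# Illustrative transient-state matrix (an assumption for the example).
P = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
I = np.eye(3)

# Accumulate the partial sums I + P + P^2 + ... + P^k.
series = I.copy()
term = I.copy()
for _ in range(200):
    term = term @ P        # term now holds the next power of P
    series += term

# The partial sums converge to (I - P)^{-1}.
assert np.allclose(series, np.linalg.inv(I - P))
print(series)
```

The series form also explains the probabilistic meaning of $S$: the $(i,j)$ entry of $\mathbf{P}^n$ is the probability of being in $j$ at time $n$ starting from $i$, so summing over $n$ gives exactly the expected number of periods spent in $j$.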