Repair chain example problem in find the limiting distribution of a Markov Chain


I am working through Durrett's book Essentials of Stochastic Processes, which is quite good. There is a problem in the first chapter on Markov chains, and I had trouble understanding how he obtained a particular matrix used to solve it.

The problem is a repair chain problem. Here is the text of the problem from the book:

BEGIN QUOTE FROM BOOK

A machine has three critical parts that are subject to failure, but can function as long as two of these parts are working. When two are broken, they are replaced and the machine is back to working order the next day. Declaring the state space to be the parts that are broken $\{0, 1, 2, 3, 12, 13, 23\}$, we arrived at the following transition matrix:

$$ \begin{matrix} & 0 & 1 & 2 & 3 & 12 & 13 & 23 \\ 0 & 0.93 & 0.01 & 0.02 & 0.04 & 0 & 0 & 0 \\ 1 & 0 & 0.94 & 0 & 0 & 0.02 & 0.04 & 0 \\ 2 & 0 & 0 & 0.95 & 0 & 0.01 & 0 & 0.04 \\ 3 & 0 & 0 & 0 & 0.97 & 0 & 0.01 & 0.02 \\ 12 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 13 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 23 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \end{matrix} $$

and we asked: If we are going to operate the machine for 1,800 days (about 5 years) then how many parts of types 1,2, and 3 will we use?

To find the stationary distribution we look at the last row of

$$ \begin{bmatrix} -0.07 & 0.01 & 0.02 & 0.04 & 0 & 0 & 1 \\ 0 & -0.06 & 0 & 0 & 0.02 & 0.04 & 1 \\ 0 & 0 & -0.05 & 0 & 0.01 & 0 & 1 \\ 0 & 0 & 0 & -0.03 & 0 & 0.01 & 1 \\ 1 & 0 & 0 & 0 & -1 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & -1 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}^{-1} $$

END QUOTE FROM BOOK

So I was trying to understand where Durrett obtains this second matrix that he inverts. I checked, and it is not the transition matrix raised to the 1800th power. It seems that the second matrix is formed by replacing each diagonal entry $p(i,i)$ with $p(i,i) - 1$, except that the last column becomes all 1s. It did not make sense to me that these negative values could represent transition probabilities.
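(My check was a quick NumPy computation along the following lines; `A` is the transition matrix above, and the variable names are my own.)

```python
import numpy as np

# Transition matrix from the book, states ordered 0, 1, 2, 3, 12, 13, 23
A = np.array([
    [0.93, 0.01, 0.02, 0.04, 0.00, 0.00, 0.00],
    [0.00, 0.94, 0.00, 0.00, 0.02, 0.04, 0.00],
    [0.00, 0.00, 0.95, 0.00, 0.01, 0.00, 0.04],
    [0.00, 0.00, 0.00, 0.97, 0.00, 0.01, 0.02],
    [1.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00],
    [1.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00],
    [1.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00],
])

A_1800 = np.linalg.matrix_power(A, 1800)
# By day 1800 every row has converged to the same probability vector
# (the limiting distribution), so A^1800 looks nothing like the matrix
# with negative diagonal entries that Durrett inverts.
print(A_1800[0])
```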

As a hint he does suggest that the solution to this problem illustrated a theorem:

$$ \text{Suppose a Markov chain is irreducible with stationary distribution } \pi, \text{ and } \sum_x|f(x)|\pi(x) < \infty. \ \text{Then} \\ \frac{1}{n}\sum_{m=1}^n f(X_m) \rightarrow \sum_x f(x)\pi(x). $$
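The theorem says that long-run time averages converge to averages against $\pi$, which is presumably why the stationary distribution answers an "1,800 days" question. A rough simulation sketch of that convergence; the state ordering and the choice $f = \mathbf{1}_{\{12,13\}}$ (counting days on which part 1 is replaced) are my own:

```python
import numpy as np

rng = np.random.default_rng(0)

# Transition matrix from the book, states ordered 0, 1, 2, 3, 12, 13, 23
P = np.array([
    [0.93, 0.01, 0.02, 0.04, 0.00, 0.00, 0.00],
    [0.00, 0.94, 0.00, 0.00, 0.02, 0.04, 0.00],
    [0.00, 0.00, 0.95, 0.00, 0.01, 0.00, 0.04],
    [0.00, 0.00, 0.00, 0.97, 0.00, 0.01, 0.02],
    [1.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00],
    [1.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00],
    [1.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00],
])

# f(x) = 1 on states 12 and 13, i.e. on days when part 1 gets replaced
f = np.array([0, 0, 0, 0, 1, 1, 0])

x, total, steps = 0, 0, 100_000
for _ in range(steps):
    x = rng.choice(7, p=P[x])   # take one step of the chain
    total += f[x]

avg = total / steps   # ergodic average (1/n) * sum_m f(X_m)
print(avg)            # should be close to pi(12) + pi(13)
```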

BEST ANSWER

Let $A$ be the transition matrix and let $e_n$ be the all-ones vector of dimension $n$. Since each row of a transition matrix sums to one, one property that every transition matrix satisfies is $Ae_n = e_n$, that is, $(A-I)e_n = 0$.

Write $A-I = [B, b]$, where $b$ is the last column and $B$ consists of the first $n-1$ columns. Then $(A-I)e_n = 0$ reads $$Be_{n-1} + b = 0.\tag{1}$$

A stationary distribution $\pi \neq 0$ satisfies $$\pi^TA = \pi^T,$$

that is $$\pi^T(A-I) = 0$$

or $$\pi^T[B, b]=0$$

From $(1)$, $b = -Be_{n-1}$, so $\pi^TB=0$ already implies $\pi^Tb = -\pi^TBe_{n-1} = 0$; the last column of $A-I$ carries no extra information and can be replaced.

As a probability distribution, $\pi$ also has to satisfy

$$\pi^Te_n = 1.$$

We can write the two requirements compactly as $$\pi^T [B, e_n] = \begin{bmatrix} 0 & \cdots & 0 & 1\end{bmatrix}.$$

Hence $$\pi^T = \begin{bmatrix} 0 & \cdots & 0 & 1\end{bmatrix}[B, e_n]^{-1},$$

that is, $\pi^T$ is the last row of $[B, e_n]^{-1}$.
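This recipe is easy to check numerically on the matrix from the question. A minimal NumPy sketch (variable names are mine); the last lines also answer the book's original question, since part $i$ is replaced on each visit to a two-broken state containing $i$:

```python
import numpy as np

# Transition matrix A from the question, states ordered 0, 1, 2, 3, 12, 13, 23
A = np.array([
    [0.93, 0.01, 0.02, 0.04, 0.00, 0.00, 0.00],
    [0.00, 0.94, 0.00, 0.00, 0.02, 0.04, 0.00],
    [0.00, 0.00, 0.95, 0.00, 0.01, 0.00, 0.04],
    [0.00, 0.00, 0.00, 0.97, 0.00, 0.01, 0.02],
    [1.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00],
    [1.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00],
    [1.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00],
])

n = A.shape[0]
M = A - np.eye(n)           # A - I; its last column is b
M[:, -1] = 1.0              # replace b by the all-ones vector, giving [B, e]
pi = np.linalg.inv(M)[-1]   # last row of [B, e]^{-1} is pi^T

assert np.allclose(pi @ A, pi)    # pi is stationary
assert np.isclose(pi.sum(), 1.0)  # pi is a probability distribution

# Expected part usage over 1800 days: by the ergodic theorem, part i is
# used about 1800 * (probability of a state containing i) times.
days = 1800
usage = [days * (pi[4] + pi[5]),   # part 1: states 12, 13
         days * (pi[4] + pi[6]),   # part 2: states 12, 23
         days * (pi[5] + pi[6])]   # part 3: states 13, 23
print(usage)                       # ~ [16.57, 30.30, 37.98]
```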