Computation of balance equation example in Markov model


I am studying some examples of balance equations for Markov models. I am presented with the following example:

$$\mathcal{P} = \begin{bmatrix} 0.2 & 0.3 & 0.5 \\ 0.1 & 0 & 0.9 \\ 0.55 & 0 & 0.45 \end{bmatrix}$$

[Dropping the $i$ subscript by writing $\pi_j$ for $\pi_{ij}$.]

The balance equations are

$$\begin{align} &\pi_1 = 0.2 \pi_1 + 0.1 \pi_2 + 0.55 \pi_3 \tag{a} \\ &\pi_2 = 0.3 \pi_1 \tag{b} \\ &\pi_3 = 0.5 \pi_1 + 0.9 \pi_2 + 0.45 \pi_3 \tag{c} \end{align}$$

Since, also, $\pi_1 + \pi_2 + \pi_3 = 1$, the unique solution is

$$\pi_1 = \frac{10}{27} \approx 0.37037, \ \ \ \pi_2 = \frac{1}{9} \approx 0.11111, \ \ \ \pi_3 = \frac{14}{27} \approx 0.51852$$

How do we solve this for the values $\pi_1, \pi_2, \pi_3$? Is there a way to solve this using matrix computations? The difficulty here, as I see it, is that we have a constraint $\pi_1 + \pi_2 + \pi_3 = 1$ that must hold, so I'm unsure of how this is done.

I would greatly appreciate it if someone would please take the time to show this.

There are 2 answers below.

BEST ANSWER

$\pi_2 = 0.3\pi_1 \Rightarrow$

$\pi_1 = 0.2\pi_1 + 0.1\cdot 0.3\pi_1 + 0.55\pi_3 \Leftrightarrow 0.77\pi_1 = 0.55\pi_3 \Rightarrow \pi_3 = \frac{7}{5}\pi_1$

$\pi_3 = 0.5\pi_1 + 0.9\cdot 0.3\pi_1 + 0.45\pi_3 \Leftrightarrow 0.55\pi_3 = 0.77\pi_1 \Rightarrow \pi_3 = \frac{7}{5}\pi_1$

This means we have only two independent equations in three unknowns, so the linear system alone has no unique solution: any multiple of a solution is again a solution, and $\pi_1$ is left as a free parameter.

Now, we have to use the condition $\pi_1+\pi_2+\pi_3 = 1$ in order to find a unique solution.

$\pi_1+\pi_2+\pi_3 = 1 \Rightarrow \pi_1 + 0.3\pi_1 + \frac{7}{5}\pi_1 = \frac{27}{10}\pi_1 = 1 \Rightarrow$

$\pi_1 = \frac{10}{27} \approx 0.37037$

$\pi_2 = 0.3\cdot\frac{10}{27} = \frac{1}{9} \approx 0.11111$

$\pi_3 = \frac{7}{5}\cdot\frac{10}{27} = \frac{14}{27} \approx 0.51852$
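To answer the matrix-computation part of the question: the same elimination can be done numerically. Since one balance equation is redundant, a common trick is to replace it with the normalization constraint $\pi_1+\pi_2+\pi_3=1$ and solve the resulting square system. A minimal NumPy sketch (variable names are my own):

```python
import numpy as np

# Transition matrix from the question.
P = np.array([[0.20, 0.3, 0.50],
              [0.10, 0.0, 0.90],
              [0.55, 0.0, 0.45]])

# Balance equations: pi = pi P, i.e. (I - P)^T pi = 0.
# One of these rows is redundant, so overwrite the last row
# with the normalization constraint pi_1 + pi_2 + pi_3 = 1.
A = (np.eye(3) - P).T
A[-1, :] = 1.0
b = np.array([0.0, 0.0, 1.0])

pi = np.linalg.solve(A, b)
print(pi)  # [10/27, 1/9, 14/27] ≈ [0.37037, 0.11111, 0.51852]
```

Overwriting a redundant row (rather than appending the constraint) keeps the system square, so a plain `solve` works instead of a least-squares fit.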

ANSWER

The standard eigenvalue equation is $$P^Tv=\lambda v.$$ You're seeking the eigenvector corresponding to $\lambda=1$.

Fortunately, in the case of a Markov (aka stochastic) matrix, this happens to be the largest eigenvalue and therefore can be computed via the power iteration as $$\eqalign{ v_{k+1} &= \frac{P^Tv_k}{\|P^Tv_k\|_{\tt1}} }$$ or, if you prefer, the transposed equation using row vectors $$\eqalign{ v_{k+1}^T &= \frac{v_k^TP}{\|v_k^TP\|_{\tt1}} }$$ If you start with a stochastic vector, then $P$ preserves the stochastic property and scaling each step is unnecessary. This simplifies the iteration to $$v_{k+1}^T = v_k^TP \quad\implies v_n^T = v_0^TP^n$$ For the matrix in your example, this iteration yields $$\eqalign{ v_{0}^T &= \;\big(&0.333333 \quad&0.333333 \quad&0.333333 \;\big) \\ v_{5}^T &= \;\big(&0.372054 \quad&0.109744 \quad&0.518201 \;\big) \\ v_{10}^T &= \;\big(&0.370358 \quad&0.111112 \quad&0.518529 \;\big) \\ v_{15}^T &= \;\big(&0.37037 \quad&0.111111 \quad&0.518518\;\big) \\ v_{20}^T &= \;\big(&0.37037 \quad&0.111111 \quad&0.518519\;\big) \\ }$$
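The simplified iteration $v_{k+1}^T = v_k^T P$ is a one-liner in code. A small NumPy sketch reproducing the table above (names assumed):

```python
import numpy as np

P = np.array([[0.20, 0.3, 0.50],
              [0.10, 0.0, 0.90],
              [0.55, 0.0, 0.45]])

# Start from the uniform stochastic vector; since P is row-stochastic,
# each product v @ P is again stochastic, so no rescaling is needed.
v = np.full(3, 1.0 / 3.0)
for k in range(20):
    v = v @ P

print(v)  # close to [0.37037, 0.11111, 0.51852]
```

After 20 iterations the result agrees with the exact answer $(10/27,\,1/9,\,14/27)$ to about six decimal places, matching the $v_{20}^T$ row in the table.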