Find the stationary distribution of a CTMC


Let $\{ X_t : t \geq 0 \}$ be a CTMC on the state space $\Omega = \{ 1,2,3 \}$ with the following transition rates: $$ \lambda_{12} = 1, \; \; \lambda _{21} = 10 $$ $$ \lambda_{23} = 1, \; \; \lambda_{32} = 5 $$ and all other rates zero.

Find the stationary distribution $\pi_i$

The generator of this chain is:

$$G = \begin{bmatrix} -1 & 1 & 0 \\ 10 & -11 & 1 \\ 0 & 5 & -5 \\ \end{bmatrix} $$

In class we had a similar example where we looked at $G^2$ and determined $G^n$ from it, but here this gives:

$$ G^2 = \begin{bmatrix} 11 & -12 & 1 \\ -120 & 136 & -16 \\ 50 & -80 & 30 \\ \end{bmatrix} $$

which is weird, and I don't see any constant $c$ to divide out in order to apply the technique

$$G^n = (-c)^{n-1} G$$

After this we compute $P_t$, which is pretty simple to find

$$ P_t = \sum_{n=0}^{\infty} \frac{t^n G^n}{n!} = I + \sum_{n=1}^{\infty} \frac{t^n (-c)^{n-1}}{n!} G $$

and take $ t \to \infty$ to determine $\pi_i$


BEST ANSWER

First of all, it should be noted that there are simpler ways to compute the stationary distribution $\pi$.

Your matrix $G$ is diagonalisable. That helps when computing matrix powers directly, but it is useful here for another reason as well: you are computing a matrix exponential.

$$ P_t = e^{t G} = \sum_{n=0}^\infty \frac{t^n G^n}{n!} $$

In the wikipedia article on the matrix exponential there is a section on computation, which shows that the diagonalisable case is relatively simple. It does, however, appear to involve some work to obtain the stationary distribution in this way.

Calculations using Mathematica

Note that you can follow along with the computations even if you don't have Mathematica, by using the Wolfram Programming Lab.

Here I describe how we can use the theory above to calculate the $\pi_i$ in a similar way to what you describe, using Mathematica to fill in the gaps and to verify the results.

We associate qM with $G$ and initialise it accordingly.

qM = {{-1, 1, 0}, {10, -11, 1}, {0, 5, -5}}; 

I will first show how we can find the $\pi_i$ more directly. Mathematica can immediately find the stationary distribution as follows (no output shown, so as not to spoil the fun).

StationaryDistribution@ContinuousMarkovProcess[{1, 0, 0}, qM]

This can also be obtained using the following, which corresponds to the technique on wikipedia, i.e. the simpler way.

{pi1, pi2, pi3} /. 
  Solve[{pi1, pi2, pi3}. qM == 0 && pi1 + pi2 + pi3 == 1, {pi1, pi2, 
    pi3}] // First
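If you want to follow this step without Mathematica, the same linear system can also be set up in Python with NumPy. This is a sketch of my own (including the variable names), not part of the original answer: we solve $\pi G = 0$ together with the normalisation $\sum_i \pi_i = 1$ as one stacked system.

```python
import numpy as np

# generator of the chain, the same matrix as qM above
G = np.array([[-1.0, 1.0, 0.0],
              [10.0, -11.0, 1.0],
              [0.0, 5.0, -5.0]])

# solve pi @ G == 0 together with sum(pi) == 1:
# stack G^T with a row of ones and solve the (overdetermined) system
A = np.vstack([G.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)  # approximately [0.8929, 0.0893, 0.0179], i.e. (25/28, 5/56, 1/56)
```

Since this system has a unique exact solution, the least-squares residual is zero and `pi` is exactly the stationary distribution.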

Now we return to the problem of using your path to find the stationary distribution. Note that Mathematica can also immediately find the matrix exponential of $t G$, without any help. This could be done as follows

MatrixExp[t qM]
(*output: big matrix*)

In addition Mathematica can find the limit of this expression directly.

The more manual approach is to use the fact that $G$ is diagonalisable. We find the eigenvalues (stored in values) and the eigenvectors (stored in vectors)

{values, vectors} = Eigensystem[qM];

These satisfy

qM. Transpose@vectors == Transpose[vectors].DiagonalMatrix@values
(*output: True*)
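If you want to mirror this step outside Mathematica, here is a rough NumPy equivalent (a sketch of my own, not part of the original answer). Note one convention difference: NumPy returns the eigenvectors as the *columns* of a matrix, so that matrix already plays the role of Transpose[vectors] above.

```python
import numpy as np

# generator of the chain, the same matrix as qM above
G = np.array([[-1.0, 1.0, 0.0],
              [10.0, -11.0, 1.0],
              [0.0, 5.0, -5.0]])

# np.linalg.eig returns eigenvalues and a matrix U whose columns are
# eigenvectors, so U corresponds to Transpose[vectors] in the Mathematica code
values, U = np.linalg.eig(G)

# the defining relation: G U == U D
assert np.allclose(G @ U, U @ np.diag(values))
```

The eigenvalues come out as $0$ and $\frac{1}{2}(-17 \pm \sqrt{65})$, which will reappear below in the exponents of $e^{tG}$.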

In correspondence with the wikipedia article, once we have these vectors and values, we can compute the matrix exponential. One key fact in the article is that the matrix exponential of a diagonal matrix is easy to calculate: we simply exponentiate the diagonal entries entrywise.

diagExp = DiagonalMatrix[Exp /@ (t values)];

We can verify that this is correct as follows

MatrixExp[t DiagonalMatrix@values] == 
  diagExp // Simplify
(*output: True*)

Now we use the second key fact from the wikipedia article. This fact is that if $A$ is diagonalisable, i.e. $A = U D U^{-1}$ for some diagonal matrix $D$, then $e^A = U e^D U^{-1}$. So we calculate $e^{tG}$, which we shall call mExp, as follows

vecInv = Inverse[Transpose@vectors];
mExp = Transpose[vectors].diagExp.vecInv;

Again we can verify our result

mExp == MatrixExp[t qM] // FullSimplify
(*output: True*)
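The same diagonalisation trick carries over directly to NumPy. Below is a sketch of my own (not part of the original answer) that builds $e^{tG} = U e^{tD} U^{-1}$ and sanity-checks it against a truncated Taylor series of $\sum_n t^n G^n / n!$, since plain NumPy has no built-in matrix exponential.

```python
import numpy as np

# generator of the chain, the same matrix as qM above
G = np.array([[-1.0, 1.0, 0.0],
              [10.0, -11.0, 1.0],
              [0.0, 5.0, -5.0]])

# eigendecomposition: columns of U are eigenvectors, so G = U D U^{-1}
values, U = np.linalg.eig(G)

t = 0.7  # an arbitrary fixed time

# exp(tG) = U exp(tD) U^{-1}, exponentiating the diagonal entrywise
m_exp = U @ np.diag(np.exp(t * values)) @ np.linalg.inv(U)

# sanity check against a truncated Taylor series of exp(tG)
series = np.eye(3)
term = np.eye(3)
for n in range(1, 40):
    term = term @ (t * G) / n
    series = series + term
assert np.allclose(m_exp, series)

# each row of a transition matrix sums to 1
assert np.allclose(m_exp.sum(axis=1), np.ones(3))
```

The row-sum check works because $G$ has zero row sums, so $e^{tG}$ applied to the all-ones vector returns the all-ones vector.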

We still have to find the limit as $t \rightarrow \infty$. We could simply let Mathematica do the work. The following gives the stationary distribution.

First@Limit[mExp, t -> \[Infinity]]

To manually calculate this, it seems we should be able to take some shortcuts. However, I will simply show how one entry can be calculated directly. We see that the entry at position $(1,1)$ in our matrix is

mExp[[1, 1]] // TeXForm

$$ -\frac{1}{112} \sqrt{\frac{5}{13}} \left(\frac{\sqrt{\frac{13}{5}}}{2}-\frac{17}{10}\right) \left(5+\sqrt{65}\right) e^{\frac{1}{2} \left(-17-\sqrt{65}\right) t}-\frac{1}{112} \sqrt{\frac{5}{13}} \left(\frac{17}{10}+\frac{\sqrt{\frac{13}{5}}}{2}\right) \left(5-\sqrt{65}\right) e^{\frac{1}{2} \left(\sqrt{65}-17\right) t}+\frac{25}{28} $$

Both terms containing exponentials go to $0$, as both $-17-\sqrt{65} < 0$ and $\sqrt{65}-17 < 0$. Note that $\frac{1}{2}(-17\pm\sqrt{65})$ are also the nonzero eigenvalues of $G$, so there is probably some additional structure that I am missing. Anyway, we see that the limit must be $\pi_1 = \frac{25}{28}$. This corresponds to the alternative methods of computing the stationary distribution.
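The limit can also be checked numerically. In this sketch of my own (again NumPy, not part of the original answer), evaluating $e^{tG}$ at a large $t$ shows every row close to the stationary distribution $\left(\frac{25}{28}, \frac{5}{56}, \frac{1}{56}\right)$, because the terms with negative eigenvalues have decayed away.

```python
import numpy as np

# generator of the chain, the same matrix as qM above
G = np.array([[-1.0, 1.0, 0.0],
              [10.0, -11.0, 1.0],
              [0.0, 5.0, -5.0]])

values, U = np.linalg.eig(G)

# for large t the exponentials of the negative eigenvalues vanish,
# and every row of exp(tG) tends to the stationary distribution
t = 50.0
P_t = U @ np.diag(np.exp(t * values)) @ np.linalg.inv(U)

pi = np.array([25 / 28, 5 / 56, 1 / 56])
assert np.allclose(P_t, np.vstack([pi, pi, pi]))
```

The slowest-decaying mode is $e^{\frac{1}{2}(\sqrt{65}-17)t} \approx e^{-4.47 t}$, so even moderate values of $t$ are already far inside the limit numerically.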

ANOTHER ANSWER

Note that for all $t>0$, $P(t)=e^{tG}$ satisfies the differential equations \begin{align}P'(t)&=P(t)G\\ P'(t) &= GP(t) \end{align} (known as Kolmogorov's forward and backward equations). If $\pi$ is a stationary distribution, then $\pi P(t)=\pi$ for all $t$. Differentiating and setting $t=0$ yields $\pi G=0$. Conversely, if $\pi G=0$, multiplying by $P(t)$ yields $\pi G P(t) = \pi P'(t)=0$, so $$\frac{\mathsf d}{\mathsf dt}[\pi P(t)]=0, $$ which means that $\pi P(t)$ is constant in $t$ and hence $\pi P(t) = \pi P(0) = \pi$, so that $\pi$ is a stationary distribution.

From this result, we may compute the stationary distribution by solving the system of equations \begin{align} \pi G &= 0\\ \sum_i \pi_i &= 1. \end{align}
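As a numerical cross-check of this system (a Python/NumPy sketch of my own, not part of the original answer): $\pi G = 0$ says that $\pi$ spans the null space of $G^{\mathsf T}$, which can be extracted from an SVD and then normalised to sum to $1$.

```python
import numpy as np

# generator of the chain
G = np.array([[-1.0, 1.0, 0.0],
              [10.0, -11.0, 1.0],
              [0.0, 5.0, -5.0]])

# the null space of G^T is spanned by the right-singular vector
# belonging to the (single) zero singular value, i.e. the last row of Vt
_, s, Vt = np.linalg.svd(G.T)
pi = Vt[-1]
pi = pi / pi.sum()  # normalise so the entries sum to 1 (also fixes the sign)

print(pi)  # approximately [0.8929, 0.0893, 0.0179], i.e. (25/28, 5/56, 1/56)
```

This agrees with the value $\pi_1 = \frac{25}{28}$ obtained via the matrix exponential in the other answer.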