Markov models: Proving that an occupation law is a stationary law


I am currently studying Markov models. I am presented with the following definition, theorem, and proof:

Definition 3. If

$$\pi^*_{ij} := \lim_{n \to \infty} m_{ij}(n)/n$$

exists for all $i, j \in S$ and $\sum_{j \in S} \pi^*_{ij} \equiv 1$, then $\vec{\pi}^*_i = (\pi^*_{ij} : j \in S)$ is called an occupation law. In matrix form,

$$\Pi^* = [\pi^*_{ij}] = \lim_{n \to \infty} n^{-1} (\mathcal{I} + \mathcal{P} + \dots + \mathcal{P}^n). \tag{4}$$
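Equation (4) can be checked numerically. The following sketch (with an illustrative two-state transition matrix $\mathcal{P}$, not taken from the text) approximates $\Pi^*$ by averaging powers of $\mathcal{P}$:

```python
import numpy as np

# Illustrative 2-state transition matrix (an assumption for this sketch)
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

def occupation_matrix(P, n):
    """Approximate Pi* = (1/n)(I + P + ... + P^n), as in (4)."""
    power = np.eye(P.shape[0])   # running power P^k, starting at I
    total = np.zeros_like(P)
    for _ in range(n + 1):       # k = 0, 1, ..., n
        total += power
        power = power @ P
    return total / n

Pi_star = occupation_matrix(P, 5000)
print(Pi_star)  # each row is close to (0.8, 0.2), the stationary law of P
```

For this chain the balance equations give the stationary law $(0.8, 0.2)$, and every row of the computed average approaches it.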

Theorem 5. An occupation law is a stationary law.

Proof. We are assuming that the defining limits exist, so the right-hand side of (4) equals

$$\lim_{n \to \infty} (\mathcal{I}/n) + \lim_{n \to \infty} n^{-1}(\mathcal{P} + \dots + \mathcal{P}^n) = [0] + \lim_{n \to \infty} n^{-1}(\mathcal{I} + \dots + \mathcal{P}^{n - 1})\mathcal{P} = \Pi^* \mathcal{P},$$

i.e., $\Pi^* = \Pi^* \mathcal{P}$. The rows of this matrix identity, together with $\sum_{j \in S} \pi^*_{ij} = 1$, have the form (3), i.e., each row $\vec{\pi}^*_i$ is stationary.

The definition for the stationary laws is as follows:

Stationary laws

Definition 2. A stationary law is any non-negative solution $\vec{\pi}$ of the balance plus mass equations, i.e.,

$$\vec{\pi} = \vec{\pi}\mathcal{P} \ \ \ \ \ \text{&} \ \ \ \ \ \sum_{j \in S} \pi_j = 1. \tag{3}$$

Note that if $\vec{\pi}\mathcal{P} = \vec{\pi}$, then $\vec{\pi} \mathcal{P}^n = \vec{\pi}$ for all $n$. This follows by applying the first result iteratively.
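This iterative claim is easy to verify directly. A small sketch, assuming an illustrative $\mathcal{P}$ and its stationary law (neither is from the text):

```python
import numpy as np

# Illustrative transition matrix and its stationary law (assumed for this sketch)
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
pi = np.array([0.8, 0.2])   # solves pi = pi P with sum(pi) = 1

assert np.allclose(pi @ P, pi)   # balance plus mass equations (3) hold
# Applying pi -> pi P repeatedly never moves pi, so pi P^n = pi for all n:
for n in [1, 5, 50]:
    assert np.allclose(pi @ np.linalg.matrix_power(P, n), pi)
print("pi P^n = pi verified for n = 1, 5, 50")
```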

And a related theorem and proof is as follows:

Definition 1. If the limit

$$\pi_{ij} = \lim_{n \to \infty} p^{(n)}_{ij}$$

exists for all $i, j \in S$, and if

$$\sum_{j \in S} \pi_{ij} = 1, \ \ \ (i \in S), \tag{1}$$

then for each $i$ we say that the row vector $\vec{\pi}_i = (\pi_{ij} : j \in S)$ is a limit law (or limiting distribution).

If $\vec{\pi}_i$ is a limit law, then $\vec{\pi}_i = \vec{\pi}_i \mathcal{P}$, i.e.,

$$\pi_{ij} = \sum_{k \in S} \pi_{ik} p_{kj}, \ \ \ (i, j \in S). \tag{2}$$

Proof: Define the square matrix $\Pi = [\pi_{ij}]$, which exists by assumption. Definition 1 can be restated as $\Pi = \lim_{n \to \infty} \mathcal{P}^n$. But $\mathcal{P}^n = \mathcal{P}^{n - 1}\mathcal{P}$, so

$$\Pi = \lim_{n \to \infty} \mathcal{P}^n = \lim_{n \to \infty} \mathcal{P}^{n - 1} \mathcal{P} = \Pi \mathcal{P}.$$
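For an aperiodic chain the limit in Definition 1 exists, and the identity $\Pi = \Pi \mathcal{P}$ can be observed numerically. A sketch with an illustrative $\mathcal{P}$ (an assumption, not from the text):

```python
import numpy as np

# Illustrative aperiodic 2-state chain (assumed for this sketch)
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

Pi = np.linalg.matrix_power(P, 200)   # P^n for large n approximates Pi
print(Pi)                             # every row is close to (0.8, 0.2)
assert np.allclose(Pi, Pi @ P)        # Pi = Pi P, as in the theorem
```

Here the second eigenvalue of $\mathcal{P}$ is $0.5$, so $\mathcal{P}^n$ converges rapidly and each row of $\Pi$ is the same limit law.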

I am having difficulty understanding this part of the first proof:

$$[0] + \lim_{n \to \infty} n^{-1}(\mathcal{I} + \dots + \mathcal{P}^{n - 1})\mathcal{P} = \Pi^* \mathcal{P}$$

Specifically, I do not understand why

$$\lim_{n \to \infty} n^{-1}(\mathcal{I} + \dots + \mathcal{P}^{n - 1})\mathcal{P} = \Pi^* \mathcal{P}.$$

I would greatly appreciate it if people would please take the time to clarify this.

Best answer:

This is really not particularly tied to Markov chains. It is simply the following calculation, which makes sense for any matrix $A$ with complex entries: if $L:=\lim_{n \to \infty} \frac{\sum_{k=0}^n A^k}{n}$ exists, then

$$L=\lim_{n \to \infty} \frac{I}{n} + \lim_{n \to \infty} \frac{\sum_{k=1}^n A^k}{n} = 0 + \left ( \lim_{n \to \infty} \frac{\sum_{k=0}^{n-1} A^k}{n} \right ) A.$$

This follows by linearity of the limit operation (first step) and continuity of linear transformations (second step). Now that inner limit is again $L$, because $\frac{\sum_{k=0}^{n-1} A^k}{n} = \frac{\sum_{k=0}^{n-1} A^k}{n-1} \frac{n-1}{n}$ and $\frac{n-1}{n} \to 1$. So $L=LA$. You could also have obtained $L=AL$, but that is not particularly useful in the Markov chain context, in which you want the rows of $L$ to be invariant under $p \mapsto pA$.
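The calculation is worth seeing on a matrix for which $\lim_{n} A^n$ fails to exist but the Cesàro limit $L$ does. A sketch using the two-state periodic chain (an illustrative choice, not from the text):

```python
import numpy as np

A = np.array([[0., 1.],    # period-2 chain: A^n alternates between A and I,
              [1., 0.]])   # so lim A^n does not exist

def cesaro(A, n):
    """(1/n) * sum_{k=0}^{n} A^k, the average in the answer's calculation."""
    power, total = np.eye(A.shape[0]), np.zeros_like(A)
    for _ in range(n + 1):
        total += power
        power = power @ A
    return total / n

L = cesaro(A, 10000)
print(L)                                  # close to [[0.5, 0.5], [0.5, 0.5]]
assert np.allclose(L, L @ A, atol=1e-3)   # L = L A even though A^n diverges
```

Even though the powers oscillate, their average settles down, and the resulting $L$ satisfies $L = LA$ as the answer shows.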