Computing the invariant distribution of a two-step Markov chain


I'm having trouble with this problem.

Let $X=(X_n)$ be a Markov chain on a countable state space $S$ with transition matrix $P$ and invariant distribution $\pi$. Let $Y=(Y_n)$ be defined by $Y_n = (X_n, X_{n+1})$. I want to compute the invariant distribution $\nu$ of $Y$.

It was easy to show that $Y$ is in fact a Markov chain. For the invariant distribution, these are my thoughts so far:

If $Q$ is the transition matrix of $Y$, then we need $\nu Q = \nu$ to hold, i.e. $$\nu(r,s) = \sum_{(i,j)\in S \times S} \nu(i,j)\, q\big((i,j),(r,s)\big).$$ By definition of $Y$ we also have $$\mathbb P\big(Y_{n+1}=(r,s) \mid Y_n=(i,j)\big) =\mathbb P\big(X_{n+1}=r, X_{n+2}=s \mid X_n=i, X_{n+1}=j\big) =\mathbb P\big(X_{n+2}=s \mid X_{n+1}=j\big).$$ So we have $q\big((i,j),(r,s)\big) = p(j,s)$, and I need to find a distribution $\nu$ such that $$\nu(r,s) = \sum_{(i,j)\in S \times S} \nu(i,j)\, p(j,s).$$
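As a concrete sanity check, $Q$ can be built explicitly for a small chain. A minimal sketch in Python (the two-state matrix `P` below is a made-up example); note that, strictly, the event $X_{n+1}=r$ is only compatible with the conditioning $X_{n+1}=j$ when $r=j$, so the code keeps that constraint explicit when filling in $Q$:

```python
import numpy as np

# Hypothetical 2-state chain; any stochastic matrix works here.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
n = P.shape[0]

# Transition matrix of Y_n = (X_n, X_{n+1}) on S x S,
# with state (i, j) encoded as row/column index i*n + j.
# A move (i, j) -> (r, s) is possible only when r == j,
# and then has probability p(j, s).
Q = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        for r in range(n):
            for s in range(n):
                if r == j:
                    Q[i * n + j, r * n + s] = P[j, s]

# Every row of Q should sum to 1, i.e. Q is stochastic.
print(np.allclose(Q.sum(axis=1), 1.0))  # True
```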

From here I lacked a systematic approach because of that nasty double sum. But since $Y$ is defined in terms of $X$ and we know the invariant distribution $\pi$ of $X$, it makes sense that $\nu$ should be given in terms of $\pi$ (and $P$, as I figured after some trying).

So I basically just tried to choose $\nu(r,s)$ in a way that gets rid of those sums. It turns out that $\nu(r,s) = \pi(r) p(r,s)$ does the trick, and I guess it also makes some intuitive sense. However, plugging it into the definition of an invariant distribution yields
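The guess $\nu(r,s) = \pi(r) p(r,s)$ can also be checked numerically. A minimal sketch, reusing a small made-up `P`: compute $\pi$ as the left eigenvector of $P$ for eigenvalue $1$, set $\nu(r,s) = \pi(r) p(r,s)$, and verify $\nu Q = \nu$ against a $Q$ built from the definition of $Y$ (where a move $(i,j)\to(r,s)$ requires $r=j$):

```python
import numpy as np

# Hypothetical 2-state chain, same made-up example as above.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
n = P.shape[0]

# Invariant distribution of X: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

# Candidate invariant distribution of Y: nu(r, s) = pi(r) * p(r, s),
# flattened so state (r, s) gets index r*n + s.
nu = (pi[:, None] * P).flatten()

# Transition matrix of Y: (i, j) -> (r, s) needs r == j, probability p(j, s).
Q = np.zeros((n * n, n * n))
for j in range(n):
    for s in range(n):
        Q[np.arange(n) * n + j, j * n + s] = P[j, s]

print(np.isclose(nu.sum(), 1.0))  # True: nu is a distribution
print(np.allclose(nu @ Q, nu))    # True: nu is invariant for Q
```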

$$\sum_{i,j \in S} \pi(i) p(i,j) p(j,s) =\sum_j \sum_i \pi(i) p(i,j) p(j,s) =\sum_j \pi(j) p(j,s) =\pi(s).$$

But here I'm confused: this term should give back the stationary distribution $\pi(r) p(r,s)$, shouldn't it?