In my studies of Markov chains, I came across this tough problem:
Let $ \{ X_n \}_{n=0}^{\infty} $ be a homogeneous Markov chain whose transition probabilities satisfy $ p_{ij}=0 $ whenever $ | i-j | > 1 $ and $ p_{ij} > 0 $ whenever $ | i-j | \leq 1 $. Is $ Y_n = X_n - X_{n-1} $ also a Markov chain?
I know that without the condition on the transition probabilities this is not necessarily true, but given these conditions, is it Markovian? I can neither prove it nor find a counterexample. I would certainly appreciate any help.
The chain $ Y_n = X_n - X_{n-1} $ is a Markov chain when $ p_{ij} = \alpha ( j-i ) $ for some probability mass function $ \alpha $ on the integers. Otherwise, there is no reason for it to be a Markov chain.
Before going into the calculations, let us look at the intuitive reason. From the definition of the transition probabilities, $ Y_n := X_n - X_{n-1} $ takes values in the state space $ \{ -1, 0, 1 \} $. If $ Y_{n+1} = 1 $, then $ X_{n+1} = X_n + 1 $. Thus this probability is a weighted sum of the $ p_{ i, i+1 } $, where the weights are the probabilities $ \mathbb{P} ( X_n = i ) $. If $ p_{ i, i+1 } $ is independent of $ i $, i.e., $ p_{ i, i+1 } = p $, then regardless of the distribution of $ X_n $, the conditional probability must be $ p $ (since the weights sum to $1$). The same argument applies to every value that $ Y_{n+1} $ can take, i.e., we need $ p_{i, i-1} = q $, $ p_{i,i} = r $, $ p_{i,i+1} = p $ with $ p+q+r = 1 $. In fact, a little more thought shows that the same argument holds whenever the transition probabilities depend only on the increment. In that case, the original Markov chain is a random walk (with steps in a general set), and the difference $ Y_{n+1} $ is simply the step at time $n+1$. Therefore, $ \{ Y_n : n \geq 1 \} $ is a sequence of i.i.d. random variables, hence of course a Markov chain.
On the other hand, if the transition probabilities do not depend only on the increment, there is no reason to expect that the weighted sums will magically coincide for every $n$. A short calculation of conditional probabilities will provide a counterexample.
Assuming $ p_{i,j} = \alpha (j-i) $ for $ i, j \in \mathbb{Z} $ and $ \epsilon_1, \epsilon_2, \dotsc, \epsilon_n \in \mathbb{Z} $, we have \begin{align*} & \mathbb{P} \bigl( Y_{n} = \epsilon_{n} , Y_{n-1} = \epsilon_{n-1}, \dotsc, Y_2 = \epsilon_2, Y_1 = \epsilon_1 \bigr) \\ & = \sum_{ x_0 \in \mathbb{Z} } \mathbb{P} \bigl( Y_{n} = \epsilon_{n} , Y_{n-1} = \epsilon_{n-1}, \dotsc, Y_2 = \epsilon_2, Y_1 = \epsilon_1 , X_{0} = x_0 \bigr) \\ & = \sum_{ x_0 \in \mathbb{Z} } \mathbb{P} \bigl( X_{n} = X_{n-1}+ \epsilon_{n} , X_{n-1} = X_{n-2} + \epsilon_{n-1}, \dotsc, X_2 = X_1 +\epsilon_2, X_1 = X_0 + \epsilon_1 , X_{0} = x_0 \bigr) \\ & = \sum_{ x_0 \in \mathbb{Z} } \mathbb{P} \bigl( X_{n} = x_{n} , X_{n-1} = x_{n-1}, \dotsc, X_2 = x_2, X_1 = x_1 , X_{0} = x_0 \bigr), \end{align*} where we have set $ x_k = x_{k-1} + \epsilon_k $ for $ k \geq 1 $ (so each $ x_k $ depends on $ x_0 $). Now, we note that \begin{align*} & \mathbb{P} \bigl( X_{n} = x_{n} , X_{n-1} = x_{n-1}, \dotsc, X_2 = x_2, X_1 = x_1 , X_{0} = x_0 \bigr) \\ & = \mathbb{P} \bigl( X_{0} = x_0 \bigr) \mathbb{P} \bigl( X_1 = x_1 \mid X_{0} = x_0 \bigr) \mathbb{P} \bigl( X_2 = x_2 \mid X_{1} = x_1 \bigr) \dotsm \mathbb{P} \bigl( X_n = x_n \mid X_{n-1} = x_{n-1} \bigr) \\ & = \mathbb{P} \bigl( X_{0} = x_0 \bigr) p_{x_0, x_1} p_{ x_1, x_2} \dotsm p_{ x_{n-1}, x_n } \\ & = \mathbb{P} \bigl( X_{0} = x_0 \bigr) \alpha( x_1 - x_0) \alpha( x_2 - x_1) \dotsm \alpha ( x_n - x_{n-1}) \\ & = \mathbb{P} \bigl( X_{0} = x_0 \bigr) \alpha( \epsilon_1) \alpha( \epsilon_2) \dotsm \alpha ( \epsilon_n) . \end{align*}
Thus, we have \begin{align*} & \mathbb{P} \bigl( Y_{n} = \epsilon_{n} , Y_{n-1} = \epsilon_{n-1}, \dotsc, Y_2 = \epsilon_2, Y_1 = \epsilon_1 \bigr) \\ & = \sum_{ x_0 \in \mathbb{Z} } \mathbb{P} \bigl( X_{0} = x_0 \bigr) \alpha( \epsilon_1) \alpha( \epsilon_2) \dotsm \alpha ( \epsilon_n) \\ & = \alpha( \epsilon_1) \alpha( \epsilon_2) \dotsm \alpha ( \epsilon_n) \sum_{ x_0 \in \mathbb{Z} } \mathbb{P} \bigl( X_{0} = x_0 \bigr) \\ & = \alpha( \epsilon_1) \alpha( \epsilon_2) \dotsm \alpha ( \epsilon_n) . \end{align*}
This clearly shows that $ \{ Y_n : n \geq 1 \} $ is a sequence of i.i.d. random variables with $ \mathbb{P} ( Y_{n} = i ) = \alpha (i) $ for $ i \in \mathbb{Z} $.
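As a sanity check on this conclusion (this is not part of the derivation above, and the particular step distribution $ \alpha $ below is an arbitrary illustrative choice), one can enumerate all length-$3$ increment paths of the corresponding random walk in Python, using exact rational arithmetic, and verify that the joint law of $ (Y_1, Y_2, Y_3) $ factorizes as $ \alpha(\epsilon_1) \alpha(\epsilon_2) \alpha(\epsilon_3) $, regardless of the starting point:

```python
from fractions import Fraction
from itertools import product

# An illustrative step distribution alpha on {-1, 0, 1}
# (q = 1/4, r = 1/4, p = 1/2 in the notation above)
alpha = {-1: Fraction(1, 4), 0: Fraction(1, 4), 1: Fraction(1, 2)}

def p(i, j):
    # spatially homogeneous transition kernel: p_{i,j} = alpha(j - i)
    return alpha.get(j - i, Fraction(0))

x0 = 5  # any deterministic starting point; the result does not depend on it
joint = {}
for eps in product([-1, 0, 1], repeat=3):
    # the increments determine the whole path x0 -> x1 -> x2 -> x3
    xs = [x0]
    for e in eps:
        xs.append(xs[-1] + e)
    joint[eps] = p(xs[0], xs[1]) * p(xs[1], xs[2]) * p(xs[2], xs[3])

# The joint law factorizes, so (Y_1, Y_2, Y_3) are i.i.d. with law alpha.
assert sum(joint.values()) == 1
assert all(joint[eps] == alpha[eps[0]] * alpha[eps[1]] * alpha[eps[2]] for eps in joint)
```

Changing `x0` or the values in `alpha` (as long as they sum to $1$) leaves both assertions intact, mirroring the fact that the factorization holds for any starting point and any step distribution.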
Now, to come up with a counterexample for the other case, take the following transition matrix on the state space $ \{ 0, 1, 2, \dotsc \} $, where \begin{equation*} \mathbf{P} = \begin{bmatrix} 1/2 & 1/2 & 0 & 0 & 0 & \cdots \\ 0 & 1/2 & 1/2 & 0 & 0 & \cdots \\ 0 & 0 & 1/3 & 2/3 & 0 & \cdots \\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{bmatrix}. \end{equation*} The transition probabilities $ p_{i,j} $ for $ i > 2 $ will not matter. Suppose that $ X_0 = 0 $ with probability $1$. By the choice of the transition matrix, we have $ Y_n \in \{ 0, 1 \} $. Now we can compute some elementary probabilities (using $ X_0 = 0 $): \begin{align*} \mathbb{P} \bigl( Y_2 = 1, Y_1 = 1 \bigr) & = \mathbb{P} \bigl(X_2 = 2, X_1 = 1, X_0 = 0 \bigr) \\ & = \mathbb{P} \bigl(X_0 = 0 \bigr) p_{0,1} p_{1,2} \\ & = (1/2) \times (1/2) = 1/4 , \\ \mathbb{P} \bigl( Y_2 = 1, Y_1 = 0 \bigr) & = \mathbb{P} \bigl(X_2 = 1, X_1 = 0, X_0 = 0 \bigr) \\ & = \mathbb{P} \bigl(X_0 = 0 \bigr) p_{0,0} p_{0,1} \\ & = (1/2) \times (1/2) = 1/4 , \\ \mathbb{P} \bigl( Y_3 = 1, Y_2 = 1, Y_1 = 1 \bigr) & = \mathbb{P} \bigl(X_3 = 3, X_2 = 2, X_1 = 1, X_0 = 0 \bigr) \\ & = \mathbb{P} \bigl(X_0 = 0 \bigr) p_{0,1} p_{1,2} p_{2,3} \\ & = (1/2) \times (1/2) \times (2/3) = 1/6 , \\ \mathbb{P} \bigl( Y_3 = 1, Y_2 = 1, Y_1 = 0 \bigr) & = \mathbb{P} \bigl(X_3 = 2, X_2 = 1, X_1 = 0, X_0 = 0 \bigr) \\ & = \mathbb{P} \bigl(X_0 = 0 \bigr) p_{0,0} p_{0,1} p_{1,2} \\ & = (1/2) \times (1/2) \times (1/2) = 1/8 . \end{align*}
Hence, we have \begin{align*} \mathbb{P} \bigl( Y_2 = 1 \bigr) & = \mathbb{P} \bigl( Y_2 = 1, Y_1 = 1 \bigr) \\ & \qquad + \mathbb{P} \bigl( Y_2 = 1, Y_1 = 0 \bigr) \\ & = 1/4 + 1/4 = 1/2 , \\ \mathbb{P} \bigl( Y_3 = 1, Y_2 = 1 \bigr) & = \mathbb{P} \bigl( Y_3 = 1, Y_2 = 1, Y_1 = 1 \bigr) \\ & \qquad + \mathbb{P} \bigl( Y_3 = 1, Y_2 = 1, Y_1 = 0 \bigr) \\ & = 1/6 + 1/8 = 7/24. \end{align*} Therefore, we have \begin{equation*} \mathbb{P} \bigl( Y_3 = 1 \mid Y_2 = 1 \bigr) = \frac{ \mathbb{P} \bigl( Y_3 = 1, Y_2 = 1 \bigr) }{ \mathbb{P} \bigl( Y_2 = 1 \bigr) } = 7/12 \end{equation*} and \begin{equation*} \mathbb{P} \bigl( Y_3 = 1 \mid Y_2 = 1 , Y_1 = 1 \bigr) = \frac{ \mathbb{P} \bigl( Y_3 = 1, Y_2 = 1 , Y_1 = 1\bigr) }{ \mathbb{P} \bigl( Y_2 = 1, Y_1 = 1 \bigr) } = 2/3. \end{equation*} Since these two conditional probabilities differ, the Markov property fails, and $ \{ Y_n \} $ is not a Markov chain.
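For what it's worth, these conditional probabilities can also be double-checked by brute force. The following Python sketch (not part of the argument above; the helper name `marg` is just illustrative) enumerates all increment paths from $ X_0 = 0 $, builds the exact joint law of $ (Y_1, Y_2, Y_3) $ with `fractions.Fraction`, and confirms the two values $7/12$ and $2/3$:

```python
from fractions import Fraction
from itertools import product

# Rows 0, 1, 2 of the counterexample matrix (rows beyond 2 never enter the computation)
P = {
    0: {0: Fraction(1, 2), 1: Fraction(1, 2)},
    1: {1: Fraction(1, 2), 2: Fraction(1, 2)},
    2: {2: Fraction(1, 3), 3: Fraction(2, 3)},
}

# Exact joint law of (Y_1, Y_2, Y_3) starting from X_0 = 0:
# each increment path determines the whole trajectory of X.
joint = {}
for eps in product([0, 1], repeat=3):
    xs = [0]
    for e in eps:
        xs.append(xs[-1] + e)
    prob = Fraction(1)
    for a, b in zip(xs, xs[1:]):
        prob *= P[a].get(b, Fraction(0))
    joint[eps] = prob

def marg(constraints):
    # P(Y_k = v for every (k, v) in constraints), with 1-based index k
    return sum(p for eps, p in joint.items()
               if all(eps[k - 1] == v for k, v in constraints.items()))

p_y2 = marg({2: 1})                 # P(Y_2 = 1) = 1/2
p_y32 = marg({3: 1, 2: 1})          # P(Y_3 = 1, Y_2 = 1) = 7/24
p_y21 = marg({2: 1, 1: 1})          # P(Y_2 = 1, Y_1 = 1) = 1/4
p_y321 = marg({3: 1, 2: 1, 1: 1})   # P(Y_3 = 1, Y_2 = 1, Y_1 = 1) = 1/6

assert p_y32 / p_y2 == Fraction(7, 12)   # P(Y_3 = 1 | Y_2 = 1)
assert p_y321 / p_y21 == Fraction(2, 3)  # P(Y_3 = 1 | Y_2 = 1, Y_1 = 1)
```

The two assertions reproduce exactly the discrepancy computed by hand: conditioning additionally on $ Y_1 = 1 $ changes the conditional law of $ Y_3 $.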