I've learned that the Markov property says the following:
$$P(X_{n+1} = i \mid X_n = j, X_{n-1}= j_1,\dots, X_0 = j_n) = P(X_{n+1}= i \mid X_n = j)$$
It is not clear to me how one can derive the following conclusion:
$$P(X_{n+m} = i \mid X_n = j, X_0 = j_n) = P(X_{n+m} = i \mid X_n = j),$$
where $m > 1$. How can we derive this mathematically?
By the definition of conditional probability,
$$\mathbb{P}(X_{n+m} = i \mid X_n = j, X_0 = j_n) = \frac{\mathbb{P}(X_{n+m}=i, X_n = j, X_0 = j_n)}{\mathbb{P}(X_n = j, X_0 = j_n)}. \tag{1} $$
Now, by the Markov property (MP)
$$\begin{align*}& \mathbb{P}(X_{n+m}=i, X_n = j, X_0 = j_n) \\ &= \sum \mathbb{P}(X_{n+m} = i, X_{n+m-1} = i_1,\ldots,X_{n+1} = i_{m-1}, X_n = j, X_{n-1} = j_1,\ldots,X_0 = j_n) \\ &\stackrel{\text{(MP)}}{=} \sum \mathbb{P}(X_{n+m} = i \mid X_{n+m-1} = i_1)\, \mathbb{P}(X_{n+m-1} = i_1,\ldots,X_{n+1} = i_{m-1}, X_n = j, X_{n-1} = j_1,\ldots,X_0 = j_n) \\ &= \sum_{i_1} \mathbb{P}(X_{n+m} = i \mid X_{n+m-1} = i_1)\, \mathbb{P}(X_{n+m-1} = i_1, X_n = j, X_0 = j_n) \tag{2} \end{align*}$$
(Here, the sum $\sum$ is over the intermediate states $i_1,\ldots,i_{m-1}$ and $j_1,\ldots,j_{n-1}$; note that the positions $n+1,\ldots,n+m-1$ account for $m-1$ states.) Iterating this procedure gives
$$\begin{align*}& \mathbb{P}(X_{n+m}=i, X_n = j, X_0 = j_n) \\&= \sum \mathbb{P}(X_{n+m} = i \mid X_{n+m-1} = i_1) \cdots \mathbb{P}(X_{n+1} = i_{m-1} \mid X_n = j)\, \mathbb{P}(X_n = j, X_0 = j_n), \tag{3} \end{align*}$$
where the remaining sum runs over $i_1,\ldots,i_{m-1}$ only.
Hence, by $(1)$
$$\begin{align*} &\quad \mathbb{P}(X_{n+m} = i \mid X_n = j, X_0 = j_n) \\ &= \sum \mathbb{P}(X_{n+m} = i \mid X_{n+m-1} = i_1) \cdots \mathbb{P}(X_{n+1} = i_{m-1} \mid X_n = j). \tag{4} \end{align*}$$
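For instance, in the case $m = 2$ there is a single intermediate state, and $(4)$ reduces to
$$\mathbb{P}(X_{n+2} = i \mid X_n = j, X_0 = j_n) = \sum_{i_1} \mathbb{P}(X_{n+2} = i \mid X_{n+1} = i_1)\, \mathbb{P}(X_{n+1} = i_1 \mid X_n = j).$$
Note that the right-hand side no longer involves $j_n$; the remaining steps make this observation precise for general $m$.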
If we sum $(3)$ over all $j_n$, then, since the product of transition probabilities does not depend on $j_n$, we get
$$\begin{align*} &\quad \mathbb{P}(X_{n+m} = i, X_n = j) \\ &= \mathbb{P}(X_n = j) \sum \mathbb{P}(X_{n+m} = i \mid X_{n+m-1} = i_1) \cdots \mathbb{P}(X_{n+1} = i_{m-1} \mid X_n = j). \end{align*}$$
Dividing by $\mathbb{P}(X_n = j)$ and comparing with $(4)$ yields
$$\mathbb{P}(X_{n+m} = i \mid X_n = j, X_0 = j_n) = \frac{\mathbb{P}(X_{n+m} = i, X_n = j)}{\mathbb{P}(X_n = j)} = \mathbb{P}(X_{n+m} = i \mid X_n = j).$$
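As a numerical sanity check (not part of the derivation), one can verify the identity by brute force on a small chain. The 3-state transition matrix `P` and initial distribution `mu` below are made up for illustration, and the chain is assumed time-homogeneous so that the $m$-step conditional probability also equals the matrix-power entry $(P^m)_{j,i}$:

```python
import numpy as np
from itertools import product

# Hypothetical 3-state time-homogeneous chain: transition matrix P
# and initial distribution mu (both made up for illustration).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
mu = np.array([0.2, 0.5, 0.3])
S = range(3)
n, m = 2, 3  # condition at time n, predict time n + m

def path_prob(path):
    """Exact probability of one full trajectory (x_0, ..., x_{n+m})."""
    p = mu[path[0]]
    for a, b in zip(path, path[1:]):
        p *= P[a, b]
    return p

def cond(i, j, j0=None):
    """P(X_{n+m}=i | X_n=j [, X_0=j0]) by enumerating all trajectories."""
    num = den = 0.0
    for path in product(S, repeat=n + m + 1):
        if path[n] != j or (j0 is not None and path[0] != j0):
            continue
        p = path_prob(path)
        den += p            # accumulates P(X_n=j [, X_0=j0])
        if path[n + m] == i:
            num += p        # accumulates P(X_{n+m}=i, X_n=j [, X_0=j0])
    return num / den

# Conditioning additionally on X_0 leaves the m-step prediction unchanged:
for j0 in S:
    assert abs(cond(2, 1, j0) - cond(2, 1)) < 1e-10
# ... and both agree with the m-step transition matrix entry (P^m)[j, i]:
assert abs(cond(2, 1) - np.linalg.matrix_power(P, m)[1, 2]) < 1e-10
```

The check mirrors the proof: `cond` computes the left-hand side of $(1)$ by summing path probabilities exactly as the sums over $i_1,\ldots,i_{m-1}$ and $j_1,\ldots,j_{n-1}$ do, and the answer is the same whether or not $X_0$ is included in the conditioning.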