My book defines a continuous time Markov chain like this.
Let $\{X_t\}_{t \in T}$ be a stochastic process on $(\Omega, \mathcal{A}, P)$ with a countable state space $S$. The process is a Markov chain if for all
$n>0$, $t_1<t_2<\ldots<t_n<t_{n+1} \in T$, and $i_1,i_2,\ldots,i_{n+1} \in S$
with
$P[X_{t_1}=i_1, \ldots, X_{t_n}=i_n]>0$, the following statement holds:
$P[X_{t_{n+1}}=i_{n+1} \mid X_{t_k}=i_k \;\forall k \le n]=P[X_{t_{n+1}}=i_{n+1} \mid X_{t_n}=i_n]$
The problem here is that the author only states the definition for a finite number of previous values, but in a proof he seems to use that it holds for a set that may contain countably infinitely many values. What is the usual definition: is it conditioned on finitely many previous values, countably infinitely many, or perhaps even uncountably many, like an interval?
If we only have this definition for arbitrary but finite $n$, and we condition on a countably infinite number of previous events, is it still true that we may as well have conditioned only on the last one, as is usual for Markov chains? Or could conditioning on infinitely many previous values give us more information?
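To make at least the finite-$n$ case concrete, here is a toy simulation (my own example, not from the book) of a 3-state discrete-time chain with a hypothetical transition matrix `P`. It empirically compares the one-step transition probability conditioned only on the last state against the same probability conditioned on the last two states; by the finite Markov property both should agree:

```python
import random

# Hypothetical transition matrix for a 3-state chain (illustration only).
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]

def step(state, rng):
    # Sample the next state according to row `state` of P.
    return rng.choices(range(3), weights=P[state])[0]

rng = random.Random(0)
path = [0]
for _ in range(200_000):
    path.append(step(path[-1], rng))

# Estimate P[X_{n+1}=0 | X_n=1] ...
num_short = sum(1 for a, b in zip(path, path[1:]) if a == 1 and b == 0)
den_short = sum(1 for a in path[:-1] if a == 1)
p_short = num_short / den_short

# ... versus P[X_{n+1}=0 | X_n=1, X_{n-1}=2], conditioning on one extra past value.
num_long = sum(1 for a, b, c in zip(path, path[1:], path[2:])
               if a == 2 and b == 1 and c == 0)
den_long = sum(1 for a, b in zip(path, path[1:]) if a == 2 and b == 1)
p_long = num_long / den_long

# Both estimates should be close to P[1][0] = 0.1, up to sampling noise.
print(p_short, p_long)
```

Of course, a simulation can only ever condition on finitely many past values, which is exactly why I am asking whether the countably infinite case needs a separate argument (e.g. a limiting/measure-theoretic one) rather than following directly from the definition.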