I am stuck with the following problem about Markov processes:
Let $(\Omega,\mathcal{F}, \mathbb{P}, \mathcal{F}_t)$ be a filtered probability space. If $X,Y:\mathbb{R}_{+}\times\Omega\to\mathbb{R}$ are Markov processes (with respect to $\mathcal{F}_t$) with the same 2-dimensional distributions, then they are equivalent (they have the same law).
What I want to prove is that they have the same $n$-dimensional distributions for all $n\in \mathbb{N}$, i.e. that for all $0\leq t_1 < \dots < t_n \in \mathbb{R},$ and for all $ B_{t_1},\dots, B_{t_n}\in \mathcal{B}(\mathbb{R}),$ the property $\mathbb{P}(X_{t_n}\in B_{t_n}, \dots, X_{t_1}\in B_{t_1})=\mathbb{P}(Y_{t_n}\in B_{t_n}, \dots , Y_{t_1}\in B_{t_1})$ holds true.
Well, now by induction (assuming every conditioning event has positive probability, so that the chain rule applies) \begin{align} &\mathbb{P}(X_{t_n}\in B_{t_n}, \dots, X_{t_1}\in B_{t_1})=\\ & =\mathbb{P}(X_{t_n}\in B_{t_n}|X_{t_{n-1}}\in B_{t_{n-1}}, \dots, X_{t_1}\in B_{t_1})\mathbb{P}(X_{t_{n-1}}\in B_{t_{n-1}}, \dots, X_{t_1}\in B_{t_1})= \\ & =\dots= \\ & =\mathbb{P}(X_{t_n}\in B_{t_n}|X_{t_{n-1}}\in B_{t_{n-1}}, \dots, X_{t_1}\in B_{t_1})\dots \mathbb{P}(X_{t_2}\in B_{t_2}|X_{t_1}\in B_{t_1})\mathbb{P}(X_{t_1}\in B_{t_1}). \end{align} By the Markov property I obtain \begin{equation} \mathbb{P}(X_{t_n}\in B_{t_n}|X_{t_{n-1}})=\mathbb{P}(X_{t_n}\in B_{t_n}|X_{t_{n-1}},\dots,X_{t_1})= \mathbb{P}(X_{t_n}\in B_{t_n}|\mathcal{F}_{t_{n-1}}), \end{equation} and this should tell me that every factor in the expression above is determined by the 2-dimensional distributions (e.g. $\mathbb{P}(X_{t_n}\in B_{t_n}|X_{t_{n-1}}\in B_{t_{n-1}}, \dots, X_{t_1}\in B_{t_1})=\mathbb{P}(X_{t_n}\in B_{t_n}|X_{t_{n-1}}\in B_{t_{n-1}})$), but sadly I can't formalize the passage from the conditional probability in the Markov property (which is defined as a random variable) to the conditional probability given events that appears in the final statement.
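To make the chain-rule factorization above concrete, here is a minimal numerical check for a hypothetical two-state discrete-time chain with $t_i = i$ and $n = 3$; the initial distribution `mu`, the transition matrix `P`, and the sets `B1, B2, B3` are all made up for illustration.

```python
import itertools

mu = [0.3, 0.7]               # initial distribution of X_1 (assumed)
P = [[0.9, 0.1],              # transition matrix p(x, y) (assumed)
     [0.4, 0.6]]

def joint(B1, B2, B3):
    """P(X_1 in B1, X_2 in B2, X_3 in B3), by summing over all paths."""
    return sum(mu[x1] * P[x1][x2] * P[x2][x3]
               for x1, x2, x3 in itertools.product([0, 1], repeat=3)
               if x1 in B1 and x2 in B2 and x3 in B3)

B1, B2, B3 = {0}, {0, 1}, {1}
S = {0, 1}                    # the whole state space

# Chain rule: P(A3, A2, A1) = P(A3 | A2, A1) * P(A2 | A1) * P(A1)
lhs = joint(B1, B2, B3)
rhs = (joint(B1, B2, B3) / joint(B1, B2, S)) \
    * (joint(B1, B2, S) / joint(B1, S, S)) \
    * joint(B1, S, S)
print(abs(lhs - rhs) < 1e-12)  # True
```

Of course this identity is pure algebra and holds for any process; the delicate step, as the question says, is replacing each conditional factor by a two-point quantity.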
The question thus boils down to: is the following implication true, and why? \begin{align} &\mathbb{P}(X_{t_n}\in B_{t_n}|X_{t_{n-1}},\dots,X_{t_1})=\mathbb{P}(X_{t_n}\in B_{t_n}|X_{t_{n-1}}) \\ \implies\; &\mathbb{P}(X_{t_n}\in B_{t_n}|X_{t_{n-1}}\in B_{t_{n-1}}, \dots, X_{t_1}\in B_{t_1})=\mathbb{P}(X_{t_n}\in B_{t_n}|X_{t_{n-1}}\in B_{t_{n-1}}) \end{align} From a symbolic point of view it seems reasonable, but I can't connect the two notions. This should also help me build some intuition about conditional probability as a random variable. Thanks to everybody.
The statement that the Markov property implies $$ \mathbb{P}(X_{t_n} \in B_{t_n} | X_{t_{n-1}} \in B_{t_{n-1}}, \dots, B_{t_1} \in B_{t_1}) = \mathbb{P}(X_{t_n} \in B_{t_n} | X_{t_{n-1}} \in B_{t_{n-1}}) $$ is hopelessly false, as the example $B_{t_{n-1}} = \mathbb{R}$ shows. The Markov property says nothing about conditioning on a "window" of the state space: the conditioning on $X_t$ that appears in it can be stated, in terms of regular conditional probabilities, as a property holding for each value $X_t = x_t$. This is why it is useful to think not of the single measure $\mathbb{P}$, but of the family of measures $\mathbb{P}_x$, indexed by the elements $x$ of your state space. In particular, a collection of paths $\{X_t, t\geq 0\}$ has positive probability under $\mathbb{P}_x$ only if it contains paths with $X_0 = x$.
For a more direct counterexample, consider the discrete-time Markov chain $\{X_i, i\geq 0\}$ on the state space $\{0, 1\}$ with transition kernel $$ p(0, 1) = 1, \quad p(1, 0) = 1. $$ Thus the chain is deterministic: it goes to $1$ from $0$ and vice versa. Let $\mu = (1/2, 1/2)$ be the initial distribution of $X_0$, so the entire path of $\{X_i, i \geq 0\}$ is determined by $X_0 = x_0$. The above statement fails for $t_i = i$, since \begin{align*} \frac{1}{2}&= \mathbb{P}_\mu(X_2 \in \{1\} | X_1 \in \{0, 1 \}) \\ &\neq \mathbb{P}_\mu(X_2 \in \{1\} | X_1 \in \{0, 1\}, X_0 \in \{0 \}) \\ &= 0. \end{align*}
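The two conditional probabilities in the counterexample can be checked by brute-force enumeration of the (at most eight) possible paths $(x_0, x_1, x_2)$; this is just a sanity check of the arithmetic above, not part of the argument.

```python
import itertools

mu = [0.5, 0.5]               # initial distribution (1/2, 1/2)
P = [[0.0, 1.0],              # p(0, 1) = 1
     [1.0, 0.0]]              # p(1, 0) = 1

def joint(B0, B1, B2):
    """P(X_0 in B0, X_1 in B1, X_2 in B2) by enumerating all paths."""
    return sum(mu[x0] * P[x0][x1] * P[x1][x2]
               for x0, x1, x2 in itertools.product([0, 1], repeat=3)
               if x0 in B0 and x1 in B1 and x2 in B2)

S = {0, 1}

# P(X_2 in {1} | X_1 in {0, 1}) = 1/2 ...
lhs = joint(S, S, {1}) / joint(S, S, S)
# ... but P(X_2 in {1} | X_1 in {0, 1}, X_0 in {0}) = 0.
rhs = joint({0}, S, {1}) / joint({0}, S, S)
print(lhs, rhs)  # 0.5 0.0
```

The only paths with positive probability are $(0,1,0)$ and $(1,0,1)$, each of mass $1/2$, which is exactly why conditioning on $X_0 \in \{0\}$ kills the event $X_2 = 1$.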
As to your question, you don't need to use semigroups (as hinted in the comments). It depends on what your definition of "equivalent" is, and how you construct a Markov process in the first place.