In my notes on Markov processes there are three apparently equivalent formulations of the Markov property for a Markov process (continuous time $\mathbb{R}_+$, Polish state space $(S,d)$ with Borel $\sigma$-algebra $\mathcal{B}(S)$). Let $(\Omega,\mathcal{F},(\mathcal{F}_t)_{t\geq 0},\mathbb{P})$ be a filtered probability space.
The first is the definition: $X = (X_t)_{t\geq 0}\colon\Omega\to S^{\mathbb{R}_+}$ is a Markov process if for all $s,t\geq 0$ and all $A\in\mathcal{B}(S)$: $\mathbb{P}(X_{s+t}\in A\mid\mathcal{F}_s) = \mathbb{P}(X_{s+t}\in A\mid X_s)$ almost surely.
The second is: for $s,t$ and $A$ as above, $\mu$ some initial distribution, and $f:S\to\mathbb{R}$ bounded and measurable: $\mathbb{E}_\mu(f(X_{s+t})\mid\mathcal{F}_s) = \mathbb{E}_{X_s}(f(X_t))$ $\mathbb{P}_\mu$-a.s.
The third is: $\mathrm{Law}_\mu((X_{s+t})_{t\geq 0}\mid\mathcal{F}_s) = \mathrm{Law}_{X_s}((X_t)_{t\geq 0})$ $\mathbb{P}_\mu$-a.s.
I know how to show the equivalence of the first and second versions, but I am struggling to show that they imply the third. My approach was to use the fact that the law of a Markov process is characterised by its finite-dimensional distributions, so it suffices to show: for all $0=t_0<t_1<\ldots<t_n$, $n\in\mathbb{N}$, and all $A_0,\ldots,A_n\in\mathcal{B}(S)$: $\mathbb{P}_\mu(X_{s+t_0}\in A_0,\ldots,X_{s+t_n}\in A_n\mid\mathcal{F}_s) = \mathbb{P}_{X_s}(X_0\in A_0,X_{t_1}\in A_1,\ldots,X_{t_n}\in A_n)$ $\mathbb{P}_\mu$-a.s.
I tried to write the left-hand conditional probability as a conditional expectation and then iteratively use the tower property together with the second version of the Markov property, but I don't get there: I don't see how to remove the conditioning so as to end up with something like the right-hand side.
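For the record, here is a sketch of how the induction over $n$ might go, assuming in addition time homogeneity (so that the second formulation holds at every conditioning time $u\geq 0$, with the same family of laws $\mathbb{P}_x$). In the induction step one conditions first on $\mathcal{F}_{s+t_{n-1}}$ and applies the second formulation with $f=\mathbf{1}_{A_n}$ and time increment $t_n-t_{n-1}$:

```latex
\begin{align*}
\mathbb{E}_\mu\Bigl[\prod_{k=0}^{n}\mathbf{1}_{A_k}(X_{s+t_k})\,\Big|\,\mathcal{F}_s\Bigr]
&= \mathbb{E}_\mu\Bigl[\prod_{k=0}^{n-1}\mathbf{1}_{A_k}(X_{s+t_k})\,
   \mathbb{E}_\mu\bigl[\mathbf{1}_{A_n}(X_{s+t_n})\,\big|\,\mathcal{F}_{s+t_{n-1}}\bigr]\,\Big|\,\mathcal{F}_s\Bigr] \\
&= \mathbb{E}_\mu\Bigl[\prod_{k=0}^{n-1}\mathbf{1}_{A_k}(X_{s+t_k})\;g(X_{s+t_{n-1}})\,\Big|\,\mathcal{F}_s\Bigr],
\qquad g(x) := \mathbb{P}_x\bigl(X_{t_n-t_{n-1}}\in A_n\bigr).
\end{align*}
```

The product $\mathbf{1}_{A_{n-1}}\cdot g$ is again bounded and measurable, so the induction hypothesis applies to the $n-1$ time points $t_0<\ldots<t_{n-1}$ with $\mathbf{1}_{A_{n-1}}$ replaced by $\mathbf{1}_{A_{n-1}}\cdot g$, and the same computation carried out under $\mathbb{P}_{X_s}$ reassembles the right-hand side. (Notation like $g$ here is mine, introduced only for this sketch.)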
I hope somebody can enlighten me :)
SOLUTION: I actually found an answer to the question in Achim Klenke's book on probability theory, Theorem 17.9. I guess what was missing in the above is that the formulation in terms of laws requires the homogeneous Markov property rather than just the plain one.
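To spell out where homogeneity enters, as I understand it: the second formulation identifies the conditional expectation at time $s$ with an expectation started afresh at time $0$,

```latex
\mathbb{E}_\mu\bigl(f(X_{s+t})\,\big|\,\mathcal{F}_s\bigr)
= \mathbb{E}_{X_s}\bigl(f(X_t)\bigr),
```

with the same family of laws $(\mathbb{P}_x)_{x\in S}$ for every $s\geq 0$. Iterating this at the successive times $s+t_1,\ldots,s+t_{n-1}$ in the finite-dimensional computation requires these same laws to reappear at each step, which is exactly the homogeneous Markov property.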