Transition functions and Markov processes


I am wondering whether there is a one-to-one correspondence between transition functions and homogeneous Markov processes?

We say that $(X_t,\mathcal{F}_t)_{t\geq 0}$ is a Markov process if $\mathbb{P}(A\vert \mathcal{F}_t)=\mathbb{P}(A\vert X_t)$ for all $A\in\sigma(X_s, s\geq t)$.

On the other hand, a transition function $[0,\infty)\times E\times \mathcal{E}\ni(s,x,A)\mapsto P_s(x,A)$ satisfies

(i) $x\mapsto P_s(x,A)$ is measurable for all fixed $s,A$;

(ii) $P_s(x,\cdot)$ is a probability measure (or merely a sub-probability measure, i.e. $P_s(x,E)\leq 1$, in which case we adjoin a cemetery point $\Delta$ to the state space $E$ so that it becomes a probability measure); and

(iii) the Chapman-Kolmogorov property: $P_{s+t}(x,A)=\int_E P_t(y,A)\,P_s(x,dy)$ for all $s,t\geq 0$, $x\in E$, $A\in\mathcal{E}$.
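For a concrete finite-state instance of (i)-(iii): for a discrete-time homogeneous chain with one-step transition matrix $P$, the transition function is $P_s(x,\{y\})=(P^s)_{xy}$, and Chapman-Kolmogorov is just $P^{s+t}=P^sP^t$. A minimal numerical sketch (the matrix below is a hypothetical example, not from the question):

```python
import numpy as np

# Hypothetical one-step transition matrix on E = {0, 1}; rows sum to 1.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

def P_s(s):
    """Transition function at integer time s: P_s(x, {y}) = (P^s)[x, y]."""
    return np.linalg.matrix_power(P, s)

# (ii) each P_s(x, .) is a probability measure: every row sums to 1.
assert np.allclose(P_s(5).sum(axis=1), 1.0)

# (iii) Chapman-Kolmogorov: P_{s+t} = P_s P_t.
s, t = 3, 4
assert np.allclose(P_s(s + t), P_s(s) @ P_s(t))

# For this chain P_0 is the identity, i.e. P_0(x, {x}) = 1 (no branching).
assert np.allclose(P_s(0), np.eye(2))
```

Note that for this kind of chain $P_0$ is automatically the identity kernel, which is exactly the normality that the branching-point examples below fail to have.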

It is clear that a transition function satisfying (i)-(iii) induces a (homogeneous) Markov process.

Conversely, if $(X_t,\mathcal{F}_t)$ is a homogeneous Markov process, the following is well defined:

$P_s(x,A):=\mathbb{P}[X_s\in A\vert X_0=x]$.

Now, this should satisfy (i), which can be shown using the Radon-Nikodym theorem; (ii) and (iii) should also be satisfied.
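To spell out the Radon-Nikodym step (a sketch, writing $\mu$ for the law of $X_0$): for fixed $s$ and $A$, the finite measure $B\mapsto\mathbb{P}(X_s\in A,\,X_0\in B)$ on $\mathcal{E}$ is absolutely continuous with respect to $\mu$, so one may take

$$P_s(\cdot,A):=\frac{d\,\mathbb{P}(X_s\in A,\,X_0\in\cdot\,)}{d\mu},$$

which is a measurable function of $x$ satisfying $\mathbb{P}(X_s\in A,\,X_0\in B)=\int_B P_s(x,A)\,\mu(dx)$. Note, however, that this only pins down $P_s(\cdot,A)$ up to $\mu$-null sets.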

However, I am confused as there are Markov processes $X$ which have branching points, i.e., $P_0(x,\{x\})\neq 1$. But if $X$ is a Markov process, isn't $\mathbb{P}(X_0\in A\vert X_0=x)=\delta_x(A)$?