If $(X, \mathcal{F})$ is a measurable space, we may define a Markov transition function $P: X \times \mathcal{F} \to \mathbb{R}$ as a function such that for each fixed $x,$ *$P(x, \cdot)$ is a probability measure on $\mathcal{F}$*, and for each fixed $A,$ $P(x,A)$ is a measurable function of $x.$ However, the definition says nothing about how we move from one state to the next, so I'm trying to figure that out.
My first guess: if $A$ is fixed, then starting at $x_0$ we move to $P(x_0, A),$ then to $P(P(x_0,A), A),$ and so on. This is clearly deterministic and does not even make sense when $X \ne \mathbb{R},$ so it's wrong. My second guess: $P(x_0, A)$ represents the probability that $x_0$ moves to a point in $A.$ But how do we know that $P,$ merely by being defined, doesn't contradict itself? If $X = \mathbb{R}$ and we somehow arrive at $P(0, [-1,0]) = 1/5,$ $P(0, [1,2]) = 2/5,$ and $P(0, [-1,0] \cup [1,2]) = 4/5,$ then $P$ is clearly inconsistent.
If $P(x_0, \cdot)$ is $\sigma$-additive as a function $\mathcal{F} \to \mathbb{R},$ then consistency is guaranteed, and the probabilities actually mean something. Luckily, this is exactly the requirement that $P(x_0, \cdot)$ be a probability measure, which is built into the definition. But how do we actually calculate where $x_0$ goes? If $X = \mathbb{R},$ we should be able to extract a probability distribution. What would that distribution be in terms of $P$? Certainly, if $F_n(x)$ is the distribution that assigns mass $P(x, [i/n, (i+1)/n])$ to $[i/n, (i+1)/n]$ for each $i \in \mathbb{Z},$ then $F(x) = \lim\limits_n F_n(x)$ is the distribution we seek. But there has to be a simpler way, right?
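As a sanity check on this idea, here is a minimal Python sketch, assuming a specific kernel (the uniform one from the problem below) and assuming we may evaluate $P(x_0, \cdot)$ on half-lines directly; `cdf_next` is a name I made up for illustration:

```python
# Sketch: for a kernel we can evaluate on half-lines, the CDF of the
# next state starting from x0 is t |-> P(x0, (-inf, t]).
# Here P(x, A) = lambda([x - 1/2, x + 1/2] ∩ A), i.e. Uniform[x-1/2, x+1/2].

def cdf_next(x0: float, t: float) -> float:
    # lambda([x0 - 1/2, x0 + 1/2] ∩ (-inf, t]) for the uniform kernel:
    # the overlap length is t - (x0 - 1/2), clamped to [0, 1].
    return min(max(t - (x0 - 0.5), 0.0), 1.0)

print(cdf_next(0.0, 0.0))  # 0.5: half the mass of Uniform[-1/2, 1/2] lies below 0
```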
I thought of these issues while solving the problem below. It is easy, but I suspect that if someone hands me a more complicated kernel in the future and asks me to calculate a different probability, I would be hard pressed to jump through the same hoops as in my solution.
Consider a Markov chain whose state space is $\mathbb{R}.$ Let $P(x,A),$ $x \in \mathbb{R},$ $A \in \mathcal{B}(\mathbb{R}),$ be the following Markov transition function, $$P(x,A) = \lambda([x - 1/2, x + 1/2] \cap A),$$ where $\lambda$ is the Lebesgue measure. Assuming that the initial distribution is concentrated at the origin, find $P(|\omega_2| \le 1/4).$
Solution: $P$ clearly models $x$ moving to a point selected uniformly at random from $[x-\frac{1}{2}, x+\frac{1}{2}].$ Thus $\omega_1$ is uniformly distributed on $[-1/2, 1/2],$ and $$P(|\omega_2| \le 1/4) = \int_{-1/2}^{1/2} P(|\omega_2| \le 1/4 \mid \omega_1 = x) \, dx = \int_{-1/2}^{1/2} \lambda([x - 1/2, x + 1/2] \cap [-1/4, 1/4]) \, dx = 2\left(\int_0^{1/4} \tfrac{1}{2} \, dx + \int_{1/4}^{1/2} \left(\tfrac{3}{4} - x\right) dx\right) = 2\left(\tfrac{1}{8} + \tfrac{3}{32}\right) = \tfrac{7}{16}.$$
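The two-step computation above can be sanity-checked by simulation; this is a minimal Monte Carlo sketch, assuming the interpretation that each step draws uniformly from the interval of length 1 centered at the current state:

```python
# Monte Carlo check of P(|w2| <= 1/4) = 7/16 = 0.4375, starting at 0,
# where each step moves to Uniform[x - 1/2, x + 1/2].
import random

random.seed(0)
n = 200_000
hits = 0
for _ in range(n):
    w1 = random.uniform(-0.5, 0.5)           # first step, from the origin
    w2 = random.uniform(w1 - 0.5, w1 + 0.5)  # second step, from w1
    if abs(w2) <= 0.25:
        hits += 1
print(hits / n)  # ≈ 0.4375
```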
My solution relies on recognizing that $P$ is much simpler than it sounds. But what if $P$ admits no such interpretation and you're just handed a bizarre expression? Would there be some brute-force way to proceed with the problem and reduce it to mere integration?