I'm trying to understand the Markov property for Brownian motion in full generality. The textbook I'm following states it like this:
> Recall that we have a family of measures $P_x, x \in \mathbb{R}$ on $(C,\mathcal{C})$ so that under $P_x, B_t(\omega) = \omega(t)$ is a Brownian motion starting at $x$. For $s \geq 0$, we define the shift transformation $\theta_s: C \to C$ by $$(\theta_s \omega)(t) = \omega(s+t) \quad \textrm{for} \quad t \geq 0$$ Theorem 8.2.1: If $s \geq 0$ and $Y$ is bounded and $\mathcal{C}$-measurable, then for all $x \in \mathbb{R}$ $$E_x(Y \circ \theta_s | \mathcal{F}_s^+) = E_{B_s}Y$$ where the right-hand side is the function $\phi(x) = E_xY$ evaluated at $x = B_s$.
I think I understand the general idea, but there are still two gaps that make me think I'm fundamentally missing something:
1) I'm confused by the $x$ subscript on $E_x$ on the left-hand side. I understand that this is expectation with respect to $P_x$, but then shouldn't $x$ also appear somewhere on the right-hand side?
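To test my reading of the notation, here is the one concrete instance I tried to work out myself (taking $Y(\omega) = \omega(t)$, so that $(Y \circ \theta_s)(\omega) = \omega(s+t) = B_{s+t}(\omega)$, and using that a Brownian motion started at $x$ has mean $x$ at every time):

$$\phi(x) = E_x Y = E_x B_t = x, \qquad \text{so the theorem would read} \qquad E_x(B_{s+t} \mid \mathcal{F}_s^+) = \phi(B_s) = B_s.$$

If that computation is right, then $x$ enters the right-hand side only through the law of $B_s$ under $P_x$, but I'd appreciate confirmation that this is the intended reading.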
2) I'm trying to figure out what function $Y: C[0,\infty) \to \mathbb{R}$ (as in the theorem statement above) would let me state this Markov property in terms of a simpler function $f: \mathbb{R} \to \mathbb{R}$, involving for example $f(B_{t+s} - B_s)$. I think this is something like $Y(\omega) = \omega(t)$, but I'm pretty shaky on this.
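For what it's worth, here is a quick Monte Carlo sanity check I wrote for the special case $f(y) = y^2$ (this is my own sketch, not from the book; it only checks the unconditional expectations, using the fact that $B_{s+t} - B_s \sim N(0, t)$ regardless of the starting point $x$):

```python
import numpy as np

# Sanity check: for f(y) = y**2, taking expectations on both sides of the
# Markov property suggests E_x[ f(B_{s+t} - B_s) ] = E_0[ f(B_t) ] = t,
# independent of the starting point x and the shift s.
rng = np.random.default_rng(0)
n_paths, s, t, x = 200_000, 0.7, 1.3, 5.0

# Sample B_s and B_{s+t} for paths started at x via independent
# Gaussian increments with variances s and t.
B_s = x + rng.normal(0.0, np.sqrt(s), n_paths)
B_st = B_s + rng.normal(0.0, np.sqrt(t), n_paths)

lhs = np.mean((B_st - B_s) ** 2)  # Monte Carlo estimate of E_x[(B_{s+t} - B_s)^2]
rhs = t                           # E_0[B_t^2] = Var(B_t) = t
print(lhs, rhs)                   # agree up to Monte Carlo error
```

The numbers agree, which makes me believe the $f(B_{t+s} - B_s)$ formulation is a genuine special case, but I can't see how to produce it from a $\mathcal{C}$-measurable $Y$ as in the theorem.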