On Wikipedia, there are two definitions of the Markov property. The first is:
Let $(\Omega ,{\mathcal {F}},P)$ be a probability space with a filtration $({\mathcal {F}}_{s},\ s\in I)$, for some (totally ordered) index set $I$, and let $(S,\mathcal{S})$ be a measurable space. An $(S,\mathcal{S})$-valued stochastic process $X=\{X_{t}:\Omega \to S\}_{t\in I}$ adapted to the filtration is said to possess the Markov property if, for each $A\in {\mathcal {S}}$ and each $s,t\in I$ with $s<t$, $$P(X_t\in A \mid \mathcal{F}_s)=P(X_t\in A \mid X_s).$$
The second definition states that $$E[f(X_t) \mid \mathcal{F}_s]=E[f(X_t) \mid \sigma(X_s)]$$ for all $t\geq s\geq 0$ and all bounded, measurable $f:S\rightarrow {\mathbb{R}}$.
I am trying to show that the first definition implies the second. I know that $P(X_t\in A \mid \mathcal{F}_s)=E[\mathbb{1}_{\{X_t\in A\}} \mid \mathcal{F}_s]$, and that a bounded, measurable function $f$ can be approximated by a sequence of simple functions $f_n$. How do I connect the $f_n$ to the indicators $\mathbb{1}_{\{X_t\in A\}}$?
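For reference, here is the standard fact I am trying to use (a sketch, not a full argument): each simple function on $(S,\mathcal{S})$ is a finite linear combination of indicators of measurable sets,
$$f_n=\sum_{k=1}^{m} c_k\,\mathbb{1}_{A_k},\qquad A_k\in\mathcal{S},\ c_k\in\mathbb{R},$$
so composing with $X_t$ gives
$$f_n(X_t)=\sum_{k=1}^{m} c_k\,\mathbb{1}_{\{X_t\in A_k\}},$$
and by linearity of conditional expectation,
$$E[f_n(X_t)\mid\mathcal{F}_s]=\sum_{k=1}^{m} c_k\,P(X_t\in A_k\mid\mathcal{F}_s).$$
My difficulty is in justifying the passage from the $f_n$ to a general bounded, measurable $f$ on both sides of the identity.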