The Markov property is usually defined as something like
$$ P(X_n\in A|X_{n-1}=x_{n-1},X_{n-2}=x_{n-2}, \dots, X_0=x_0)=P(X_n\in A|X_{n-1}=x_{n-1}). $$
My question is: what do the two sides of this equation mean? If they are interpreted as elementary conditional probabilities ($P(A\mid B)=P(A,B)/P(B)$) and the distribution of the process is continuous, the denominator is zero.
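To make the difficulty concrete (a toy example of my own, not taken from any particular text): for a Gaussian AR(1) chain
$$ X_n=\rho X_{n-1}+\varepsilon_n,\qquad \varepsilon_n\ \text{i.i.d.}\ N(0,1), $$
every $X_{n-1}$ has a density, hence $P(X_{n-1}=x_{n-1})=0$ for every $x_{n-1}$, and the ratio $P(X_n\in A,\,X_{n-1}=x_{n-1})/P(X_{n-1}=x_{n-1})$ reads $0/0$.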
The formulation of the Markov property you quote works for processes defined on discrete state spaces. In the general case, for example for Markov chains defined on continuous state spaces, one must turn to the "true" formulation, of which the one you recall is a special case, namely the property that, for every measurable set $A$, $$ P(X_n\in A\mid X_{n-1},X_{n-2}, \dots, X_0)=P(X_n\in A\mid X_{n-1}) $$ or, equivalently, that, for every bounded measurable function $f$, $$ E(f(X_n)\mid X_{n-1},X_{n-2}, \dots, X_0)=E(f(X_n)\mid X_{n-1}). $$ (Taking $f=\mathbf 1_A$ recovers the first identity.) Both sides of these identities are conditional expectations given sigma-algebras, hence they are random variables, not numbers.
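To connect the two formulations (a standard verification, spelled out here for completeness): when the state space is countable and the conditioning event has positive probability, a conditional expectation given $(X_0,\dots,X_{n-1})$ is constant on each atom and equals the elementary conditional probability there, that is,
$$ P(X_n\in A\mid X_{n-1},\dots,X_0)=\frac{P(X_n\in A,\ X_{n-1}=x_{n-1},\ \dots,\ X_0=x_0)}{P(X_{n-1}=x_{n-1},\ \dots,\ X_0=x_0)} $$
on the event $\{X_{n-1}=x_{n-1},\dots,X_0=x_0\}$. The identity above then says that this ratio depends on $x_{n-1}$ alone, which is exactly the pointwise formula you quote.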
Recall that when the sigma-algebra is generated by a random variable $\eta$, one defines $E(\xi\mid\eta)$ as the random variable $g(\eta)$, where the measurable function $g$ is such that, for every bounded measurable $h$, $$E(\xi\, h(\eta))=E(g(\eta)\, h(\eta)).$$ Thus, the Markov property asks that, for every bounded measurable $f$, the conditional expectation $$E(f(X_{n+1})\mid X_0,X_1,\ldots,X_n)$$ can be written as $g(X_n)$ for some measurable function $g$, that is, that, for every bounded measurable $h$, $$E(f(X_{n+1})\,h(X_0,X_1,\ldots,X_n))=E(g(X_n)\,h(X_0,X_1,\ldots,X_n)).$$
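A minimal numerical sketch of this defining identity, not part of the argument above: it assumes a Gaussian AR(1) chain (my choice of example), takes $f(x)=x$ so that $g(x)=\rho x$, and uses one arbitrary bounded test function $h$; the two Monte Carlo averages should agree up to sampling error.

```python
# Check E(f(X_{n+1}) h(X_0,...,X_n)) = E(g(X_n) h(X_0,...,X_n)) by simulation,
# for the AR(1) chain X_{k+1} = rho * X_k + eps_{k+1}, eps ~ N(0,1), where
# E(f(X_{n+1}) | X_0,...,X_n) = rho * X_n when f(x) = x, i.e. g(x) = rho * x.
import numpy as np

rng = np.random.default_rng(0)
rho, n, num_paths = 0.8, 5, 1_000_000

# Simulate num_paths independent trajectories X_0, ..., X_{n+1}.
X = np.zeros((num_paths, n + 2))
X[:, 0] = rng.standard_normal(num_paths)          # X_0 ~ N(0, 1)
for k in range(n + 1):
    X[:, k + 1] = rho * X[:, k] + rng.standard_normal(num_paths)

f = lambda x: x                                   # the measurable f
g = lambda x: rho * x                             # g(X_n) = E(f(X_{n+1}) | X_n)
h = np.tanh(X[:, 0] + X[:, n - 1] * X[:, n])      # a bounded h(X_0, ..., X_n)

lhs = np.mean(f(X[:, n + 1]) * h)   # E(f(X_{n+1}) h(X_0, ..., X_n))
rhs = np.mean(g(X[:, n]) * h)       # E(g(X_n)     h(X_0, ..., X_n))
print(f"lhs = {lhs:.4f}, rhs = {rhs:.4f}")        # agree up to Monte Carlo error
```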