Let $X,Y$ be continuous random variables. For any $A\subseteq\mathbb{R}$ and any $y\in\mathbb{R}$ such that $f_{Y}(y)\neq 0$, I want to show that $$\mathbb{P}(X\in A|Y=y)=\int_{A}f_{X|Y}(x|y)dx,$$ where the conditional density is defined by $$f_{X|Y}(x|y):=\dfrac{f_{X,Y}(x,y)}{f_{Y}(y)}.$$
I have almost figured it out but got stuck at the last step.
Let $\epsilon>0$ and consider $\mathbb{P}(X\in A| Y\in[y,y+\epsilon])$. By the definition of conditional probability, we have $$\mathbb{P}(X\in A| Y\in[y,y+\epsilon])=\dfrac{\mathbb{P}(X\in A, Y\in [y,y+\epsilon])}{\mathbb{P}(Y\in [y,y+\epsilon])}.$$ Using the joint density $f_{X,Y}$ and the marginal density $f_{Y}$, we can express the numerator and denominator of the above identity as integrals: $$\mathbb{P}(X\in A, Y\in [y,y+\epsilon])=\int_{y}^{y+\epsilon}\int_{A}f_{X,Y}(x,t)\,dx\,dt,$$ $$\mathbb{P}(Y\in [y,y+\epsilon])=\int_{y}^{y+\epsilon}f_{Y}(t)\,dt.$$ Now divide both the numerator and the denominator by $\epsilon$ and let $\epsilon\rightarrow 0$. For the numerator, set $f(t):=\int_{A}f_{X,Y}(x,t)\,dx$ and let $F$ be an antiderivative of $f$. Then, assuming $f$ is continuous at $y$, $$\lim_{\epsilon\rightarrow 0}\dfrac{1}{\epsilon}\int_{y}^{y+\epsilon}\int_{A}f_{X,Y}(x,t)\,dx\,dt=\lim_{\epsilon\rightarrow 0}\dfrac{1}{\epsilon}\int_{y}^{y+\epsilon}f(t)\,dt=\lim_{\epsilon\rightarrow 0}\dfrac{F(y+\epsilon)-F(y)}{\epsilon}=F'(y)=f(y).$$ Hence, in the limit, the numerator is exactly $$\int_{A}f_{X,Y}(x,y)\,dx.$$ Similarly, assuming $f_{Y}$ is continuous at $y$, the denominator satisfies $$\lim_{\epsilon\rightarrow 0}\dfrac{1}{\epsilon}\int_{y}^{y+\epsilon}f_{Y}(t)\,dt=\lim_{\epsilon\rightarrow 0}\dfrac{F_{Y}(y+\epsilon)-F_{Y}(y)}{\epsilon}=F_{Y}'(y)=f_{Y}(y).$$ Combining these limits, we have $$\lim_{\epsilon\rightarrow 0}\mathbb{P}(X\in A|Y\in[y,y+\epsilon])=\dfrac{\int_{A}f_{X,Y}(x,y)\,dx}{f_{Y}(y)}=\int_{A}\dfrac{f_{X,Y}(x,y)}{f_{Y}(y)}\,dx=\int_{A}f_{X|Y}(x|y)\,dx.$$
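As a numerical sanity check (not part of the proof), the limiting argument can be verified for a concrete joint density. The sketch below assumes a standard bivariate normal with correlation $\rho=0.6$, $A=[0,1]$, and $y=0.5$ (all my own choices for illustration), and compares the ratio $\mathbb{P}(X\in A,\,Y\in[y,y+\epsilon])/\mathbb{P}(Y\in[y,y+\epsilon])$ with the exact conditional probability $\int_A f_{X|Y}(x|y)\,dx$ as $\epsilon$ shrinks:

```python
import numpy as np
from math import erf, sqrt, pi

# Illustrative example: standard bivariate normal with correlation rho
rho = 0.6
sd = sqrt(1 - rho**2)          # conditional std. dev. of X given Y = y
y0, a, b = 0.5, 0.0, 1.0       # condition on Y near y0; A = [a, b]

def f_xy(x, y):
    # joint density of a standard bivariate normal with correlation rho
    q = (x**2 - 2*rho*x*y + y**2) / (2*(1 - rho**2))
    return np.exp(-q) / (2*pi*sd)

def Phi(z):
    # standard normal CDF
    return 0.5*(1 + erf(z/sqrt(2)))

# exact value of P(X in A | Y = y0), since X | Y = y0 ~ N(rho*y0, 1 - rho^2)
exact = Phi((b - rho*y0)/sd) - Phi((a - rho*y0)/sd)

def ratio(eps, nx=400, nt=50):
    # P(X in A, Y in [y0, y0+eps]) / P(Y in [y0, y0+eps]) via midpoint rule
    dx, dt = (b - a)/nx, eps/nt
    xs = a + (np.arange(nx) + 0.5)*dx
    ts = y0 + (np.arange(nt) + 0.5)*dt
    X, T = np.meshgrid(xs, ts)
    num = f_xy(X, T).sum() * dx * dt
    den = (np.exp(-ts**2/2)/sqrt(2*pi)).sum() * dt   # marginal f_Y is N(0,1)
    return num / den

for eps in (0.5, 0.1, 0.01):
    print(f"eps={eps}: ratio={ratio(eps):.5f}, exact={exact:.5f}")
```

As $\epsilon$ decreases, the ratio approaches the exact conditional probability, matching the differentiation argument above.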
However, how can one prove that $$\lim_{\epsilon\rightarrow 0}\mathbb{P}(X\in A|Y\in[y,y+\epsilon])=\mathbb{P}(X\in A|Y=y)?$$ In other words, how can one show that the conditional probability is continuous in $y$? You may use any measure theory you like, but I could not find a direct result from measure theory that settles this either.
Please feel free to give an alternative proof if it bypasses this problem. The only thing I assume is the definition of the conditional density.
The claim holds even if $X$ is ${\mathbb R}^m$-valued and $Y$ is ${\mathbb R}^n$-valued, with joint density $f(x,y)$ and conditional density $f_{X\mid Y}(x\mid y)$ as you've defined (and $0$ when $f_Y(y)= 0$). The measure-theoretic result is:
Claim: Let $A$ be a Borel set in ${\mathbb R}^m$. Put $$g(y):=\int_A f_{X\mid Y}(x\mid y)\,dx.\tag{$\ast$}$$ Then $g(Y)$ is a version of the measure-theoretic conditional probability $P(X\in A\mid Y)$.
Remark: This result justifies the common practice of writing the LHS of ($\ast$) as $P(X\in A\mid Y=y)$.
Proof: By definition $P(X\in A\mid Y)=E(I(X\in A)\mid Y)$. Since $g(Y)$ is clearly $\sigma(Y)$-measurable, to prove that $g(Y)$ equals $E(I(X\in A)\mid Y)$ it remains to show that for all $B\in\sigma(Y)$, $$E [I(X\in A)I_B(Y)]=E[g(Y)I_B(Y)].$$ Indeed, $$\begin{aligned} E [I(X\in A)I_B(Y)]&=P((X,Y)\in A\times B)\\ &=\iint_{A\times B} f(x,y)\,dxdy\\ &\stackrel{(1)}=\iint_{A\times B} f_{X\mid Y}(x\mid y) f_Y(y)\,dxdy\\ &\stackrel{(2)}=\int_B\left(\int_A f_{X\mid Y}(x\mid y)\,dx\right)f_Y(y)\,dy\\ &\stackrel{(\ast)}=\int_B g(y)f_Y(y)\,dy\\ &=E[g(Y)I_B(Y)]. \end{aligned} $$ Step ($1$) is the definition of $f_{X\mid Y}$; step ($2$) is Fubini's theorem; step ($\ast$) is the definition of $g$.
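As an illustrative sanity check (again, not part of the proof), the defining property $E[I(X\in A)I_B(Y)]=E[g(Y)I_B(Y)]$ can be tested by Monte Carlo for a hypothetical concrete case: a standard bivariate normal with correlation $\rho=0.6$, $A=[0,1]$, and $B=[0,\infty)$, where $g$ is available in closed form:

```python
import numpy as np
from math import erf, sqrt

# Hypothetical concrete case: (X, Y) standard bivariate normal, correlation rho
rng = np.random.default_rng(0)
rho, n = 0.6, 200_000
sd = sqrt(1 - rho**2)

y = rng.standard_normal(n)
x = rho*y + sd*rng.standard_normal(n)   # X | Y = y ~ N(rho*y, 1 - rho^2)

Phi = np.vectorize(lambda z: 0.5*(1 + erf(z/sqrt(2))))  # standard normal CDF

# g(y) = integral over A = [0, 1] of f_{X|Y}(x|y) dx, in closed form here
g = Phi((1 - rho*y)/sd) - Phi((0 - rho*y)/sd)

in_A = (x >= 0) & (x <= 1)   # I(X in A)
in_B = (y >= 0)              # I_B(Y) with B = [0, infinity)

lhs = np.mean(in_A & in_B)   # estimates E[I(X in A) I_B(Y)] = P((X,Y) in A x B)
rhs = np.mean(g * in_B)      # estimates E[g(Y) I_B(Y)]
print(lhs, rhs)              # the two estimates agree up to Monte Carlo error
```

Since $\sigma(Y)$ is generated by sets of the form $\{Y\in B\}$ for Borel $B$, agreement of these two expectations over such $B$ is exactly what the proof establishes.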