Let $(\Omega, \mathcal F, \mathbb P)$ be a probability space, and let $L_2(\Omega, \mathcal F, \mathbb R)$ be the Lebesgue space of square-integrable random variables from $(\Omega, \mathcal F, \mathbb P)$ to $(\mathbb R, \mathcal B(\mathbb R))$.
Then we define an inner product $\langle \cdot, \cdot \rangle_{L_2}$ by $$\langle X, Y \rangle_{L_2} \triangleq \int_\Omega X (\omega) Y (\omega) \mathrm d\mathbb P, \quad X, Y \in L_2(\Omega, \mathcal F, \mathbb R).$$
It is well known that $L_2(\Omega, \mathcal F, \mathbb R)$ together with $\langle \cdot, \cdot \rangle_{L_2}$ is a Hilbert space. For $X, Y \in L_2(\Omega, \mathcal F, \mathbb R)$, the conditional expectation $\mathbb E[Y|X]$ of $Y$ given $X$ is defined as the orthogonal projection of $Y$ onto the closed subspace $L_2(\Omega, \sigma(X), \mathbb R)$ of $L_2(\Omega, \mathcal F, \mathbb R)$.
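On a finite probability space this projection can be computed explicitly. The sketch below (a hypothetical six-point space, not from the question itself) computes $\mathbb E[Y|X]$ as the level-set average of $Y$ and verifies the defining orthogonality property of the projection:

```python
import numpy as np

# Hypothetical finite probability space: six outcomes with given probabilities.
p = np.array([0.1, 0.2, 0.1, 0.25, 0.15, 0.2])
X = np.array([0, 0, 1, 1, 2, 2], dtype=float)   # X groups the outcomes
Y = np.array([1.0, 3.0, 2.0, 4.0, 5.0, 7.0])

# E[Y|X]: on each level set {X = x}, average Y with the conditional weights.
cond = np.empty_like(Y)
for x in np.unique(X):
    mask = X == x
    cond[mask] = np.sum(Y[mask] * p[mask]) / np.sum(p[mask])

# Orthogonality: Y - E[Y|X] is orthogonal to every sigma(X)-measurable
# random variable, e.g. to g(X) for g(x) = x**2 + 1.
g_of_X = X**2 + 1
inner = np.sum((Y - cond) * g_of_X * p)
print(inner)  # ≈ 0
```

Within each level set, the residual $Y - \mathbb E[Y|X]$ has zero conditional mean, so its inner product with any $\sigma(X)$-measurable random variable vanishes.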
Now we fix $X, Y \in L_2(\Omega, \mathcal F, \mathbb R)$ and define a function $f:\mathbb R \to \mathbb R$ by $f(x) \triangleq \mathbb E[Y \mid X=x]$. By this I mean $f(x) = \mathbb E[Y]$ if $\mathbb P[X=x] = 0$ and $f(x) = \frac{\mathbb E[Y 1_{\{X=x\}}]}{\mathbb P[X=x]}$ otherwise. Clearly, $\mathbb E[Y|X] \in L_2(\Omega, \mathcal F, \mathbb R)$. I would like to ask whether $f$ is square-integrable w.r.t. Lebesgue measure in this case.
In fact, the function $x\mapsto E[Y\mid X=x]$ is not defined in that way. First, we fix a version of the conditional expectation $E[Y\mid X]$. Recall that the random variable $E[Y\mid X]$ is $\sigma(X)$-measurable, so by the Doob–Dynkin lemma there exists a Borel function $f:\mathbb{R}\rightarrow\mathbb{R}$ such that $E[Y\mid X]=f(X)$. In general, such an $f$ is not unique. However, if $f_{1}$ satisfies $E[Y\mid X]=f_{1}(X)$, then for any Borel function $f_{2}$, $f_{2}(X)$ is a version of the conditional expectation $E[Y\mid X]$ iff $f_{1}=f_{2}$ $\mu_{X}$-a.e., where $\mu_{X}$ is the distribution of $X$ defined by $\mu_{X}(B)=P(X^{-1}(B))$, $B\in\mathcal{B}(\mathbb{R})$. In short, the notation $E[Y\mid X=x]$ simply denotes $f(x)$, and $f$ is determined only up to $\mu_{X}$-a.e. equality.
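This non-uniqueness is easy to see numerically (a minimal sketch with made-up values): when $X$ takes only the values $0$ and $1$, two Borel functions that agree on $\{0, 1\}$ but differ elsewhere factorize the same random variable.

```python
import numpy as np

# Hypothetical two-point example: X takes only the values 0 and 1.
p = np.array([0.5, 0.5])
X = np.array([0.0, 1.0])

# Two Borel functions that agree on {0, 1} (the support of mu_X)
# but differ at x = 5, a mu_X-null point.
f1 = lambda x: np.where(x == 0.0, 2.0, 3.0)
f2 = lambda x: np.where(x == 0.0, 2.0, np.where(x == 1.0, 3.0, -99.0))

print(np.array_equal(f1(X), f2(X)))   # True: f1(X) = f2(X) on all of Omega
print(bool(f1(5.0) == f2(5.0)))       # False: f1 and f2 differ at x = 5
```

Both `f1(X)` and `f2(X)` are versions of the same conditional expectation, yet evaluating them at a point outside the support of $\mu_X$ gives different answers.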
By Jensen's inequality, $E[Y\mid X]^{2}\leq E[Y^{2}\mid X]$ (a.e.). Therefore, \begin{eqnarray*} & & \int f^{2}(x)\,d\mu_{X}(x)\\ & = & \int f^{2}(X)\,dP\\ & = & \int E[Y\mid X]^{2}\,dP\\ & \leq & \int E[Y^{2}\mid X]\,dP\\ & = & \int Y^{2}\,dP\\ & < & \infty. \end{eqnarray*} (The first equality is the change-of-variables formula for the image measure $\mu_{X}$; the second-to-last is the tower property.) That is, $f$ is $\mu_{X}$-square integrable. Note that square integrability with respect to Lebesgue measure need not hold and is not even well posed: $f$ may be modified arbitrarily on a $\mu_{X}$-null set, which can have positive (even infinite) Lebesgue measure.
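The chain of (in)equalities can be checked numerically on a small discrete example (a sketch with made-up values, not from the question):

```python
import numpy as np

# Hypothetical four-point probability space with two level sets of X.
p = np.array([0.3, 0.2, 0.2, 0.3])
X = np.array([0.0, 0.0, 1.0, 1.0])
Y = np.array([-1.0, 2.0, 0.5, 3.0])

# f(x) = E[Y | X = x] on the atoms of mu_X
f = {}
for x in np.unique(X):
    mask = X == x
    f[x] = np.sum(Y[mask] * p[mask]) / np.sum(p[mask])

mu_X = {x: np.sum(p[X == x]) for x in f}      # distribution of X
lhs = sum(f[x]**2 * mu_X[x] for x in f)       # ∫ f² dμ_X
rhs = np.sum(Y**2 * p)                        # ∫ Y² dP
print(lhs <= rhs)  # True, as Jensen's inequality guarantees
```

Here the gap $\int Y^{2}\,dP - \int f^{2}\,d\mu_{X}$ is exactly the squared norm of the residual $Y - E[Y\mid X]$, by the Pythagorean theorem for orthogonal projections.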
Remark: Intuitively, the definition of $f(x)$ employed by the OP also does not make sense. Consider the case where $X\sim N(0,1)$ and $Y=X$. Note that $P(X=x)=0$ for all $x$. Therefore, if we follow the OP's definition, we get $f(x)=E[Y]=0$. However, intuitively, if we know that $X=x$, then $Y=x$ (because $Y=X$). Hence, $E[Y\mid X=x]=x$; that is, $f(x)=x$.
If we work with the definition above, we notice that $E[Y\mid X]=Y=X=\mathrm{id}(X)$, so $f(x)=x$. However, I emphasize again that $f$ is not unique and is determined only up to $\mu_{X}$-a.e. equality. Therefore, in general, it is a mistake to talk about the value $f(x_{0})$ for a particular $x_{0}\in\mathbb{R}$.
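A Monte Carlo sketch of this remark (sample size and bin width are arbitrary choices): for $X\sim N(0,1)$ and $Y=X$, the binned conditional means of $Y$ given $X$ trace out the identity, in line with $E[Y\mid X]=X$.

```python
import numpy as np

# X ~ N(0,1) and Y = X, so E[Y | X = x] = x (mu_X-a.e.).
rng = np.random.default_rng(0)
X = rng.standard_normal(100_000)
Y = X

edges = np.linspace(-2.0, 2.0, 17)            # bins of width 0.25
centers = (edges[:-1] + edges[1:]) / 2
f_hat = np.empty_like(centers)
for i in range(len(centers)):
    mask = (X >= edges[i]) & (X < edges[i + 1])
    f_hat[i] = Y[mask].mean()                 # average of Y on {X in bin}

# Each binned mean lies inside its bin, hence within 0.125 of the center.
print(np.max(np.abs(f_hat - centers)))
```

Note that this recovers $f$ only up to the bin width, and only on the support of $\mu_X$; it says nothing about the value of $f$ at any single point, consistent with the caveat above.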