Consider some quantity $x$ that is to be estimated using the data vector $y$. In Steven M. Kay's book on Estimation Theory, the classical MSE is written as (eq. 10.3, p. 311) \begin{equation} \int(x-\hat x)^2 \ p(y\vert x)\ dy \end{equation} whereas the Bayesian MSE is (eq. 10.2) \begin{equation} \iint(x-\hat x)^2 \ p(y, x)\ dy\ dx \end{equation}
Now I can make sense of the Bayesian MSE, but where does the integral for the classical MSE come from? What is a natural interpretation of it? In particular, in the classical setting $x$ is not random, so what does it mean to integrate only over $y$?
As a follow-up question, which of these would $\mathbb{E}\left[(x-\hat x)^2\right]$ refer to? Does it depend on the context?
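To check my own understanding, here is a quick Monte Carlo sketch of how I currently read the two expressions (the Gaussian model, the $N(0,1)$ prior, and the sample-mean estimator are my own toy assumptions, not from the book): for the classical MSE, $x$ is held at a fixed value and only the data $y$ are resampled; for the Bayesian MSE, $x$ is also redrawn from a prior on each trial, so the average runs over the joint $p(y, x)$.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10            # samples per data vector y
trials = 100_000  # Monte Carlo trials
sigma = 1.0       # noise standard deviation

# Classical MSE: x is a fixed (unknown) constant; average over p(y|x) only.
x_fixed = 2.0
y = x_fixed + sigma * rng.standard_normal((trials, N))
x_hat = y.mean(axis=1)                           # sample-mean estimator (toy choice)
classical_mse = np.mean((x_fixed - x_hat) ** 2)  # should be near sigma**2 / N = 0.1

# Bayesian MSE: x is random with a prior; average over the joint p(y, x).
x_prior = rng.standard_normal(trials)            # assumed prior x ~ N(0, 1)
y = x_prior[:, None] + sigma * rng.standard_normal((trials, N))
x_hat = y.mean(axis=1)
bayesian_mse = np.mean((x_prior - x_hat) ** 2)

print(classical_mse, bayesian_mse)
```

With the sample mean the two numbers happen to coincide here (its classical MSE is $\sigma^2/N$ for every $x$, so averaging over the prior changes nothing), which is part of why I find the distinction confusing.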