I'm confused by a statement in the textbook (A. Borovkov, "Mathematical Statistics"): "the Bayesian estimator of the distribution parameter is the best in the sense of the root-mean-square approach". The book gives the following definition:
"The estimator $\theta^*_{Q}$ defined by the formulas $$ q(t/x) = \frac{f_t(x)q(t)}{f(x)}, \qquad f(x)=\int f_t(x)q(t)\,\lambda(dt), $$ $$ \theta^*_{Q} = E(\theta\mid X) = \int t\,q(t/X)\,\lambda(dt) $$ is called the Bayesian estimator of the parameter $\theta$ corresponding to the a priori distribution $Q$ with density $q(t)$."
Note that $$ E(\theta^*-\theta)^2 = E\,E\bigl((\theta^*-\theta)^2\mid\theta\bigr) = E\,E_{\theta}(\theta^*-\theta)^2 = \int E_{t}(\theta^*-t)^2\,q(t)\,\lambda(dt) $$ takes the smallest possible value. The author then says that the Bayesian estimator minimizes the average value of $E_t(\theta^*-\theta)^2$; in other words, "the Bayesian estimator of the distribution parameter is the best in the sense of the root-mean-square approach".
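To make the middle step explicit (this is just my rewriting of the same chain via the tower property of conditional expectation, where $E_t$ denotes expectation when the sample $X$ has density $f_t$):

```latex
E(\theta^* - \theta)^2
  = E\Bigl[\,E\bigl((\theta^* - \theta)^2 \,\big|\, \theta\bigr)\Bigr]
  = \int E\bigl((\theta^* - t)^2 \,\big|\, \theta = t\bigr)\, q(t)\,\lambda(dt)
  = \int E_t(\theta^* - t)^2\, q(t)\,\lambda(dt),
```

so the unconditional mean-square error is exactly the prior-weighted average of the fixed-parameter risks $E_t(\theta^*-t)^2$.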
My question is: why is the quoted claim true? That is, why does $\theta^*_{Q} = E(\theta\mid X)$ give the smallest possible value of $E(\theta^*-\theta)^2$ among all estimators $\theta^*$?
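To convince myself the claim is at least numerically plausible, I ran a small Monte Carlo sketch (my own illustration, not from the book): a Beta(1,1), i.e. uniform, prior with $X\mid\theta \sim \mathrm{Binomial}(n,\theta)$, comparing the prior-averaged squared error of the posterior mean $(x+1)/(n+2)$ against the MLE $x/n$. The prior, sample size, and competitor estimator are my choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 200_000  # sample size per experiment, number of prior draws

# Prior Q = Uniform(0, 1), i.e. Beta(1, 1); then X | theta ~ Binomial(n, theta).
theta = rng.uniform(0.0, 1.0, trials)
x = rng.binomial(n, theta)

# The posterior under a Beta(1, 1) prior is Beta(x + 1, n - x + 1),
# so the Bayesian estimator E(theta | X) equals (x + 1) / (n + 2).
bayes = (x + 1) / (n + 2)
mle = x / n  # a competing estimator: the maximum-likelihood estimator

# Prior-averaged mean-square errors, i.e. Monte Carlo estimates of
# the integral  ∫ E_t(θ* − t)² q(t) λ(dt)  for each estimator.
mse_bayes = np.mean((bayes - theta) ** 2)
mse_mle = np.mean((mle - theta) ** 2)
print(mse_bayes < mse_mle)  # True: the posterior mean has smaller averaged MSE
```

The averaged MSE of the posterior mean comes out smaller, as the textbook claims it must for any competitor.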