Let $\hat{\theta}_{1}$, $\hat{\theta}_{2}$, $\hat{\theta}_{3}$ be three estimators of the parameter $\theta$, with E($\hat{\theta}_{1}$) = E($\hat{\theta}_{2}$) = $\theta$, E($\hat{\theta}_{3}$) $\ne$ $\theta$, V($\hat{\theta}_{1}$) = 12, V($\hat{\theta}_{2}$) = 10, and E($\hat{\theta}_{3} - \theta)^2 = 6$. Which of these estimators do you prefer? Why?
Starting from $$ E(\hat{\theta}_{3} - \theta)^2 = 6 $$ I expand the square to get $$ E(\hat{\theta}_{3}^2 - 2\theta\hat{\theta}_{3} + \theta^2) = 6 $$ and then, by linearity of expectation, $$ E(\hat{\theta}_{3}^2) - E(2\theta\hat{\theta}_{3}) + E(\theta^2) = 6 $$
At this point, I don't know what to do.
Assuming we are in the context of classical parametric estimation, the parameter $\theta$ is a constant, while the estimator $\hat{\theta}$ is a random variable (it depends on the sample). Keeping that in mind, and knowing that for a constant $k$ we have $E(k)=k$ and $E(kX)=kE(X)$, the manipulation is simple: $$E(\hat{\theta}^2) - E(2\theta\hat{\theta}) + E(\theta^2)=E(\hat{\theta}^2) - 2 \theta E(\hat{\theta}) + \theta^2.$$ Adding and subtracting $\bigl(E(\hat{\theta})\bigr)^2$ and completing the square turns this into the usual bias–variance decomposition of the mean squared error: $$E(\hat{\theta} - \theta)^2 = \underbrace{E(\hat{\theta}^2) - \bigl(E(\hat{\theta})\bigr)^2}_{V(\hat{\theta})} + \underbrace{\bigl(E(\hat{\theta}) - \theta\bigr)^2}_{\text{bias}^2}.$$ If the estimator is unbiased, the second term vanishes and the MSE reduces to the variance; if it is biased, the squared bias remains as an extra term.
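To see the decomposition numerically, here is a small simulation sketch (my own illustrative example, not from the problem above): for a normal sample, the MLE of the variance (dividing by $n$) is biased, while the sample variance (dividing by $n-1$) is unbiased, yet the biased one can have the smaller MSE. The code estimates bias, variance, and MSE by Monte Carlo and checks that MSE $\approx$ variance $+$ bias$^2$ for both.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 4.0        # true population variance (the parameter being estimated)
n = 10             # sample size
reps = 200_000     # number of simulated samples

samples = rng.normal(0.0, np.sqrt(theta), size=(reps, n))

# Unbiased estimator: sum of squared deviations divided by n - 1
s2_unbiased = samples.var(axis=1, ddof=1)
# Biased (MLE) estimator: divided by n instead
s2_biased = samples.var(axis=1, ddof=0)

for name, est in [("unbiased (n-1)", s2_unbiased), ("biased (n)", s2_biased)]:
    bias = est.mean() - theta
    var = est.var()
    mse = np.mean((est - theta) ** 2)
    # MSE should match variance + squared bias up to Monte Carlo error
    print(f"{name}: bias={bias:.3f}  var={var:.3f}  "
          f"mse={mse:.3f}  var+bias^2={var + bias**2:.3f}")
```

Running this shows the biased estimator trading a small bias for a lower variance, ending up with the smaller MSE overall, which is exactly why comparing estimators by $E(\hat{\theta}-\theta)^2$ rather than by variance alone matters.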