I have a question: how do you go from the expected risk function
$$E_\theta (\theta - \hat\theta )^2 = \int (\theta - \hat\theta )^2 f(\textbf{x};\theta) \, d\textbf{x} $$
to here: $$ \text{bias} = E_\theta(\hat\theta)-\theta $$
Estimation is by maximum likelihood. In this parametric setting the unknown parameter $\theta$ is to be estimated from the set $\Theta$, with objective function $L(\theta, \hat\theta)=(\theta - \hat\theta )^2$, where $\hat\theta(\textbf{x})$ is an estimate of $\theta$ from the data $\textbf{x}$.
See page 112 in Python for Probability, Statistics, and Machine Learning by José Unpingco.
An estimator's bias is defined to be the average difference between the estimator and the true value. So your equation $\mathrm{bias}=E_\theta(\hat\theta)-\theta$ is simply the definition of bias, nothing more; as such it makes no sense to ask how to derive it.
What is quite important is the relationship between the mean squared error $E((\theta-\hat\theta)^2)$ and the bias, namely
$$\begin{align}
E((\theta-\hat\theta)^2) &= \theta^2-2\theta E(\hat\theta)+E(\hat\theta^2)\\
&=\theta^2-2\theta E(\hat\theta)+E(\hat\theta)^2+\left(E(\hat\theta^2)-E(\hat\theta)^2\right)\\
&=\left(E(\hat\theta)-\theta\right)^2+\left(E(\hat\theta^2)-E(\hat\theta)^2\right)\\
&=\operatorname{bias}(\hat\theta)^2+\operatorname{Var}(\hat\theta),
\end{align}$$
where the second line adds and subtracts $E(\hat\theta)^2$ so the first three terms factor into the squared bias.
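You can verify this decomposition numerically. A minimal Monte Carlo sketch (the choice of estimator, sample size, and replication count are my own for illustration): estimate the variance of a normal distribution by the MLE, which divides by $n$ rather than $n-1$ and is therefore biased, and check that the empirical MSE matches $\operatorname{bias}^2 + \operatorname{Var}$.

```python
import random

random.seed(0)
theta = 4.0       # true parameter: the variance sigma^2
n = 10            # sample size per replication
reps = 200_000    # Monte Carlo replications

estimates = []
for _ in range(reps):
    xs = [random.gauss(0.0, theta ** 0.5) for _ in range(n)]
    m = sum(xs) / n
    # MLE of the variance divides by n, not n-1, hence the bias
    estimates.append(sum((x - m) ** 2 for x in xs) / n)

mean_est = sum(estimates) / reps
mse = sum((t - theta) ** 2 for t in estimates) / reps    # E((theta - theta_hat)^2)
bias = mean_est - theta                                  # E(theta_hat) - theta
var = sum((t - mean_est) ** 2 for t in estimates) / reps # Var(theta_hat)

# The two quantities agree (exactly, for the empirical moments)
print(mse, bias ** 2 + var)
```

Note that for the empirical moments the identity holds exactly, since the cross term $2(\bar t - \theta)\sum(t_i - \bar t)$ vanishes; the Monte Carlo error only affects how close the empirical bias is to the theoretical value $-\theta/n$.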