[From PRML, Bishop, p. 48]
I do not understand how the cross term vanishes in the integration. I have tried writing this out, but it does not really make sense to me.
The same step appears in "Gaussian Processes for Machine Learning" by Rasmussen and Williams on p. 152, but it is not explained there either.
The definition of the expected value here is $\mathbb{E}[t \mid \mathbf{x}] = \mathbb{E}_t[t \mid \mathbf{x}] = \int t\, p(t \mid \mathbf{x}) \, dt$, and $y(\mathbf{x})$ is just any function.
(As a sidenote, it says "integral over $t$", but shouldn't the integral be over $x$?).

By definition you have
$$ \mathbb{E}[L] = \iint L(t, y(x))\, p(x, t) \, dx \, dt. $$
Inserting the cross term back into this integral yields
$$ 2 \iint \bigl( y(x)\,\mathbb{E}[t \mid x] - \mathbb{E}[t \mid x]^2 - y(x)\,t + \mathbb{E}[t \mid x]\,t \bigr)\, p(x, t) \, dx \, dt. $$
Now change the order of integration and integrate over $t$ first. Using
$$ \mathbb{E}[t \mid x] = \frac{1}{p(x)} \int t\, p(x, t) \, dt, $$
the four terms become
$$ \begin{aligned} \iint y(x)\,\mathbb{E}[t \mid x]\, p(x, t) \, dt \, dx &= \int y(x)\,\mathbb{E}[t \mid x]\, p(x) \, dx, \\ \iint \mathbb{E}[t \mid x]^2\, p(x, t) \, dt \, dx &= \int \mathbb{E}[t \mid x]^2\, p(x) \, dx, \\ \iint y(x)\,t\, p(x, t) \, dt \, dx &= \int y(x)\,\mathbb{E}[t \mid x]\, p(x) \, dx, \\ \iint \mathbb{E}[t \mid x]\,t\, p(x, t) \, dt \, dx &= \int \mathbb{E}[t \mid x]^2\, p(x) \, dx. \end{aligned} $$
With the signs from the expansion ($+$, $-$, $-$, $+$), the first term cancels the third and the second cancels the fourth, so the entire cross term vanishes.
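You can also check this numerically. The sketch below (all names, grids, and the choice of $y(x) = \sin 3x$ are my own illustrative assumptions, not from Bishop) builds a small discrete joint distribution $p(x,t)$, computes $\mathbb{E}[t \mid x]$ from it, and evaluates the cross term $2\sum_{x,t} \bigl(y(x) - \mathbb{E}[t \mid x]\bigr)\bigl(\mathbb{E}[t \mid x] - t\bigr) p(x,t)$ directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete joint distribution p(x, t) on a small grid.
xs = np.linspace(-1.0, 1.0, 5)            # grid of x values
ts = np.linspace(-2.0, 2.0, 7)            # grid of t values
p_joint = rng.random((len(xs), len(ts)))
p_joint /= p_joint.sum()                  # normalize so probabilities sum to 1

p_x = p_joint.sum(axis=1)                 # marginal p(x)
# conditional mean: E[t|x] = (1/p(x)) * sum_t t p(x, t)
e_t_given_x = (p_joint @ ts) / p_x

y = np.sin(3 * xs)                        # any function y(x)

# cross term: 2 * sum_{x,t} (y(x) - E[t|x]) * (E[t|x] - t) * p(x, t)
cross = 2 * np.sum(
    (y - e_t_given_x)[:, None]
    * (e_t_given_x[:, None] - ts[None, :])
    * p_joint
)
print(cross)  # vanishes up to floating-point error
```

The key point the code makes concrete: for each fixed $x$, summing $(\mathbb{E}[t \mid x] - t)\, p(x,t)$ over $t$ gives $\mathbb{E}[t \mid x]\, p(x) - \mathbb{E}[t \mid x]\, p(x) = 0$, so the result is zero no matter what $y(x)$ is.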