I'm reading about inverse problems and the Bayesian approach, and I'm having trouble understanding how the following equations were obtained.
We consider the following equation initially.
$y=G(u)+\eta$
$y$ is a set of measured data, $G$ is a mathematical model, and $u$ is the set of model parameters. The noise present in the observed data is $\eta$ (zero-mean noise).
The authors then go on to describe the Bayesian approach, and they say that the likelihood function, that is, the probability of $y$ given $u$, is given by:
$\rho(y|u)=\rho(y-G(u))$
How did they derive this relation above?
Note that since $G$ is known, once $u$ is given, so is $G(u)$. Assuming in addition that the noise $\eta$ is independent of $u$ (which justifies dropping the conditioning in the last step below), we have:
$$\begin{align} P(y = y_0|u = u_0) &= P(y = y_0|G(u) = G(u_0)) \\ &= P(G(u) + \eta = y_0 | G(u) = G(u_0)) \\ &= P(\eta = y_0 - G(u)| G(u) = G(u_0)) \\ &= P(\eta = y_0 - G(u_0)) \end{align}$$
Setting ${\eta}_0 = y_0 - G(u_0)$ we get that $P(y = y_0 | u = u_0) = P(\eta = {\eta}_0)$ or in other words: $$P(y|u) = P(\eta)$$
Substituting $\eta = y - G(u)$ from the first equation yields the authors' claim.
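As a concrete sketch, here is the identity in code for the common case of Gaussian noise, $\eta \sim N(0, \sigma^2)$. The forward model `G` and the noise level `sigma` below are my own illustrative assumptions, not from the text; the point is only that evaluating the likelihood $\rho(y|u)$ is the same as evaluating the noise density at the residual $y - G(u)$:

```python
import numpy as np

def G(u):
    # Hypothetical forward model (an assumption for illustration only)
    return np.sin(u) + 0.5 * u

sigma = 0.1  # assumed known noise standard deviation

def noise_density(eta):
    # Density of zero-mean Gaussian noise: eta ~ N(0, sigma^2)
    return np.exp(-eta**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

def likelihood(y, u):
    # rho(y | u) = rho_eta(y - G(u)), the relation derived above
    return noise_density(y - G(u))

# Generate one observation y0 = G(u0) + eta0 and check that the
# likelihood at (y0, u0) equals the noise density at eta0.
u0, eta0 = 1.2, 0.05
y0 = G(u0) + eta0
print(np.isclose(likelihood(y0, u0), noise_density(eta0)))
```

Nothing here depends on the noise being Gaussian; any density for $\eta$ would work, since the derivation only uses $\eta = y - G(u)$ and independence of $\eta$ from $u$.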