Statistical Inversion Problem $F = Ku + \mathcal{E}$ derive conditional probability density $p(f | u)$


Consider the inverse problem $f = Ku + \varepsilon$, where $f \in \mathbb{R}^{m}$, $u \in \mathbb{R}^{n}$, $K \in \mathbb{R}^{m \times n}$, and $\varepsilon$ is additive Gaussian noise. In the Bayesian approach to inverse problems, where one does not rely on explicit regularizers, the problem is modeled as $F = Ku + \mathcal{E}$, where $F$ and $\mathcal{E}$ are random variables and $\mathcal{E} \sim \mathcal{N}(0, \Sigma_{\varepsilon})$. One can then determine the conditional probability density of $F$ given $u$ as

$$ p(f \mid u) \propto \exp\left(-\frac{1}{2}\|f - Ku\|^2_{\Sigma_{\varepsilon}^{-1}}\right) $$

where $\|y\|^2_{A} := y^{T}Ay$. How is this derived?

Best Answer

In general, one can use the definition of conditional densities, $p(f \mid u) = p(f, u) / p(u)$.

In this case, however, nobody would do that; it is simpler to use basic properties of Gaussians. The distribution of a Gaussian random variable is completely determined by two things: its mean and its covariance matrix. Moreover, if $X$ is a Gaussian random vector, then $AX + b$ is Gaussian for any deterministic matrix $A$ and vector $b$.

Applying this to your situation, given $u$, $Ku$ is a deterministic vector. Hence $Ku + \mathcal{E}$ is Gaussian with mean $\mathbb{E}[Ku + \mathcal{E}] = Ku$ and covariance $\mathbb{E}[(Ku + \mathcal{E} - Ku)(Ku + \mathcal{E}- Ku)^T ] = \mathbb{E}[\mathcal{E}\mathcal{E}^T]=\Sigma_\varepsilon$.
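The mean and covariance computation above can be checked numerically by Monte Carlo. A minimal sketch (all names and dimensions are illustrative, not from the original post): fix $K$, $u$, and an SPD noise covariance $\Sigma_{\varepsilon}$, draw many samples of $F = Ku + \mathcal{E}$, and compare the empirical mean and covariance to $Ku$ and $\Sigma_{\varepsilon}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative small problem: K, u, and an SPD noise covariance Sigma_eps
m, n = 3, 2
K = rng.standard_normal((m, n))
u = rng.standard_normal(n)
A = rng.standard_normal((m, m))
Sigma_eps = A @ A.T + m * np.eye(m)  # guaranteed symmetric positive definite

# Draw many noise samples and form F = K u + eps
N = 200_000
eps = rng.multivariate_normal(np.zeros(m), Sigma_eps, size=N)
F = K @ u + eps

# Empirical mean should approach K u; empirical covariance should approach Sigma_eps
print(np.max(np.abs(F.mean(axis=0) - K @ u)))   # small sampling error
print(np.max(np.abs(np.cov(F.T) - Sigma_eps)))  # small sampling error
```

The residuals shrink like $1/\sqrt{N}$, consistent with $F \mid u \sim \mathcal{N}(Ku, \Sigma_{\varepsilon})$.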

Knowing this, and the fact that the density $p(z)$ of a Gaussian with mean $\mu$ and covariance $\Sigma$ is simply $$p(z) \propto \exp\left(-\frac{1}{2}\|z - \mu\|^2_{\Sigma^{-1}}\right),$$ you can write down the conditional density above, simply by treating $u$ (and hence $Ku$) as being "fixed".
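As a sanity check on the proportionality, one can verify that the unnormalized expression $\exp\!\left(-\tfrac{1}{2}\|f - Ku\|^2_{\Sigma_{\varepsilon}^{-1}}\right)$ agrees with the full Gaussian pdf $\mathcal{N}(f; Ku, \Sigma_{\varepsilon})$ up to a constant: log-density *differences* between any two points must match, since the normalizing constant cancels. A sketch under hypothetical values (the helper name `log_unnormalized` is mine, not from the post):

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
m, n = 3, 2
K = rng.standard_normal((m, n))
u = rng.standard_normal(n)
A = rng.standard_normal((m, m))
Sigma_eps = A @ A.T + m * np.eye(m)
Sigma_inv = np.linalg.inv(Sigma_eps)

def log_unnormalized(f):
    # log of exp(-0.5 * ||f - K u||^2_{Sigma_eps^{-1}}), i.e. -0.5 r^T Sigma^{-1} r
    r = f - K @ u
    return -0.5 * r @ Sigma_inv @ r

# Differences of log-densities are free of the normalizing constant,
# so they must match the reference implementation exactly.
f1, f2 = rng.standard_normal(m), rng.standard_normal(m)
lhs = log_unnormalized(f1) - log_unnormalized(f2)
rhs = (multivariate_normal.logpdf(f1, mean=K @ u, cov=Sigma_eps)
       - multivariate_normal.logpdf(f2, mean=K @ u, cov=Sigma_eps))
print(np.isclose(lhs, rhs))  # → True
```

This is exactly what "$\propto$" means in the answer: the quadratic form determines the density up to the constant $\left((2\pi)^{m}\det\Sigma_{\varepsilon}\right)^{-1/2}$, which does not depend on $f$ or $u$.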