About consistency in an inverse problem formulation


I'm a beginner with inverse problems and I was reading about regularization techniques.

Consider the problem: $$\hat{d}=Kf_{\text{true}},$$ where $K$ is a linear operator and $\hat{d}$ is the exact (noise-free) data. The observed data vector is $d=\hat{d}+\eta$, where $\eta$ is the noise and $\delta=\|\eta\|$ is the noise level.

In regularization by filtering (linear problems), Vogel (Computational Methods for Inverse Problems) defines the error as the error due to truncation, plus the error due to noise:

$$e_{\alpha} = e^{\text{trunc}}_{\alpha} + e^{\text{noise}}_{\alpha}$$
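If I understand the setup correctly, this split comes straight from linearity: writing the filtered reconstruction as $f_{\alpha}=R_{\alpha}d$ for some regularized inverse $R_{\alpha}$ (my notation, not necessarily Vogel's), a short derivation is

$$e_{\alpha} = R_{\alpha}d - f_{\text{true}} = R_{\alpha}(\hat{d}+\eta) - f_{\text{true}} = \underbrace{(R_{\alpha}K - I)f_{\text{true}}}_{e^{\text{trunc}}_{\alpha}} + \underbrace{R_{\alpha}\eta}_{e^{\text{noise}}_{\alpha}},$$

so the truncation term depends only on the exact solution and the noise term only on $\eta$.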

and in a model for an inverse problem, it is important to choose $\alpha$ such that $$e^{\text{trunc}}_{\alpha} \to 0, \ \ \ \ \ e^{\text{noise}}_{\alpha} \to 0 \ \ \ \ (\star)$$ as $\delta\to 0$. This is a kind of consistency of the method.
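To see the two error terms behave this way in practice, here is a minimal numerical sketch with a Tikhonov filter $w_{\alpha}(\sigma)=\sigma^2/(\sigma^2+\alpha)$ on a synthetic ill-conditioned problem; the test matrix, the exact solution, and the a priori rule $\alpha=\delta$ are all my own illustrative choices, not taken from Vogel:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ill-posed problem: K built from an SVD with rapidly
# decaying singular values (illustrative choice, not from the book).
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** (-np.linspace(0, 6, n))   # singular values from 1 down to 1e-6
K = U @ np.diag(s) @ V.T

f_true = V @ np.sqrt(s)               # an exact solution with decaying coefficients
d_exact = K @ f_true

def tikhonov_errors(alpha, eta):
    """Return (e_trunc, e_noise) for the Tikhonov filter
    w_alpha(s) = s^2 / (s^2 + alpha)."""
    w = s**2 / (s**2 + alpha)
    def R(d):                          # filtered regularized inverse R_alpha
        return V @ (w * (U.T @ d) / s)
    e_trunc = R(d_exact) - f_true      # error from filtering the exact data
    e_noise = R(eta)                   # error from the noise alone
    return e_trunc, e_noise

# As the noise level delta shrinks, choose alpha = delta (a priori rule);
# both error norms should shrink, illustrating condition (star).
for delta in [1e-2, 1e-4, 1e-6]:
    eta = rng.standard_normal(n)
    eta *= delta / np.linalg.norm(eta)  # enforce ||eta|| = delta
    e_t, e_n = tikhonov_errors(delta, eta)
    print(delta, np.linalg.norm(e_t), np.linalg.norm(e_n))
```

By linearity, the full error $f_{\alpha}-f_{\text{true}}$ is exactly the sum of the two printed pieces; the point of $(\star)$ is that the parameter choice rule must balance them, since shrinking $\alpha$ reduces the truncation term but inflates the noise term.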

A real problem does not satisfy condition $(\star)$ exactly, but the model of the problem should satisfy it, at least according to my interpretation.

Now, here is my question: why is it important that a classical regularization model of an inverse problem satisfies $(\star)$?

For example, I've heard that in statistics cross-validation does not satisfy $(\star)$, and some statisticians have said that this is not really important, because a real problem does not satisfy $(\star)$ anyway.

My intention is not to start a debate, but rather to obtain a proper justification of why consistency is, or is not, necessary when modeling an inverse problem.

Thank you very much.