Why should I overdetermine my inverse problem?


It is often said that inverse problems, say the reconstruction of an image from remote-sensing measurements, benefit from overdetermination, i.e. taking more measurements than unknowns. This matches intuition, but apart from the simple special case of white-noise reduction by signal averaging, I have never seen a mathematical explanation of why.

I assume this can be framed in terms of the condition number

$$ \kappa(A) = \frac{\sigma_{max}(A)}{\sigma_{min}(A)} $$

where $A$ is the observation matrix and $\sigma_{max}(A)$ and $\sigma_{min}(A)$ are its largest and smallest singular values; presumably overdetermination tends to lower this ratio.
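As a quick numerical check of that presumption, here is a small NumPy sketch (the sizes, the factor of 10, and the random seed are arbitrary choices of mine, not anything from a specific application): it compares the condition number of a just-determined random system with that of an overdetermined one built from the same distribution.

```python
import numpy as np

# Toy comparison (my own choice of sizes and seed): condition number of
# a square random system versus an overdetermined one.
rng = np.random.default_rng(0)
N = 20

A_square = rng.standard_normal((N, N))      # M = N (just determined)
A_tall = rng.standard_normal((10 * N, N))   # M = 10N (overdetermined)

# np.linalg.cond defaults to the 2-norm ratio sigma_max / sigma_min
kappa_square = np.linalg.cond(A_square)
kappa_tall = np.linalg.cond(A_tall)
print(f"square: {kappa_square:.1f}, tall: {kappa_tall:.1f}")
```

For i.i.d. Gaussian entries the tall matrix is reliably far better conditioned, because its singular values concentrate around $\sqrt{M}$ as $M$ grows.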

The only explanation I can see is that, for a tall $M \times N$ matrix with $M > N$, the singular values of $A$ come from the eigenvalues of the $N \times N$ matrix $A^{T}A$, each entry of which is a sum of $M$ products rather than $N$; that summation is in effect the same mechanism as noise-power reduction by signal averaging.
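The averaging analogy can be made concrete in the simplest possible inverse problem (a toy model of my own: one unknown scalar, $M$ repeated noisy measurements $y_i = x + n_i$). Here the least-squares solution of the $M \times 1$ system is exactly the sample mean, so its error variance is $\sigma^2/M$, which is precisely the $1/M$ noise-power reduction of signal averaging:

```python
import numpy as np

# Toy model (sizes and seed are my choice): estimate a scalar x from
# M repeated noisy measurements.  The least-squares solution of the
# M x 1 system [1,...,1]^T x = y is the sample mean of y, so its
# mean-squared error scales as sigma^2 / M.
rng = np.random.default_rng(1)
x_true, sigma, trials = 3.0, 1.0, 500

def ls_mse(M):
    errs = []
    for _ in range(trials):
        y = x_true + sigma * rng.standard_normal(M)
        x_hat, *_ = np.linalg.lstsq(np.ones((M, 1)), y, rcond=None)
        errs.append((x_hat[0] - x_true) ** 2)
    return float(np.mean(errs))

mse_1, mse_100 = ls_mse(1), ls_mse(100)
print(f"MSE with M=1: {mse_1:.3f}, with M=100: {mse_100:.5f}")
```

The empirical mean-squared error drops by roughly the factor of 100 that the $\sigma^2/M$ formula predicts.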

A good discussion of the condition number is given in [1], which describes a large condition number as evidence of collinearity among the columns of $A$. So perhaps the benefit of overdetermination is better described as strongly reducing the possibility of collinearity?
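The collinearity framing can also be probed numerically (again a toy experiment of my own design, with arbitrary threshold and trial counts): with only $M = 2$ rows, two independent random columns are fairly often nearly collinear by accident, while with many rows their sample geometry concentrates and accidental near-collinearity essentially disappears.

```python
import numpy as np

# Toy experiment (my own setup): estimate how often a random M x 2
# design matrix is ill-conditioned, i.e. its two columns happen to be
# nearly collinear, as a function of the number of rows M.
rng = np.random.default_rng(2)

def frac_ill_conditioned(M, trials=300, threshold=10.0):
    # Fraction of random designs with condition number above threshold
    return float(np.mean([np.linalg.cond(rng.standard_normal((M, 2))) > threshold
                          for _ in range(trials)]))

frac_short = frac_ill_conditioned(2)
frac_tall = frac_ill_conditioned(2000)
print(f"P(kappa > 10): M=2: {frac_short:.2f}, M=2000: {frac_tall:.2f}")
```

With $M = 2$ a noticeable fraction of trials exceed the threshold, while with $M = 2000$ essentially none do, which matches the reading of overdetermination as insurance against collinearity.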

[1] Belsley, David A., Edwin Kuh, and Roy E. Welsch. Regression diagnostics: Identifying influential data and sources of collinearity. Vol. 571. John Wiley & Sons, 2005.