Regularization and Optimization


What is the difference between regularization and optimization? I keep reading these terms in various papers on solutions of inverse problems but none of them describe what these terms physically mean.


Suppose you are trying to solve for a variable $x$ in an equation such as $Ax = y$, where $A$ is some operator. Suppose the equation has several solutions. You need to pick one of them. Which one, and how?

You will probably end up minimizing something like $||Ax-y||$. Regularization adds an extra term to this objective; the simplest example is $\alpha||x||$, where $\alpha$ is a positive number, typically chosen heuristically. This biases your minimization toward solutions with small norm, and will often give you uniqueness of solutions as well.
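As a minimal numerical sketch of this idea: the matrix $A$, the data $y$, and the value of $\alpha$ below are illustrative assumptions, and the penalty is the squared (Tikhonov) form $\alpha^2||x||^2$ so that the regularized problem has a closed-form solve via the normal equations.

```python
import numpy as np

# Illustrative ill-conditioned system: the columns of A are nearly
# dependent, so many x come close to satisfying Ax = y.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
y = np.array([2.0, 2.0001])

# Plain least squares: minimize ||Ax - y||^2.
x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)

# Tikhonov regularization: minimize ||Ax - y||^2 + alpha^2 ||x||^2,
# solved via the normal equations (A^T A + alpha^2 I) x = A^T y.
alpha = 0.1  # heuristic choice, as in the text
x_reg = np.linalg.solve(A.T @ A + alpha**2 * np.eye(2), A.T @ y)

print(x_ls)   # minimizes the residual
print(x_reg)  # slightly larger residual, but biased toward small norm
```

The regularized solution trades a little data fidelity for a smaller norm, which is exactly the bias described above.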

Inverse problems are typically ill-posed, which means that they are sensitive to noise, measurement errors, modelling errors, and so on. As a result, even if an inverse problem has a unique solution in theory, in practice this breaks down, and a convergent algorithm might produce inappropriate results, such as high oscillations in a material parameter you are trying to recover. Regularization improves the behaviour of the algorithm and lets you influence the kind of solutions you get; for example, if you know that the solution should be piecewise constant, you can encourage the algorithm to produce such solutions by using a suitable regularization.

The price you pay is that you put less weight on the actual data you have. Ideally, regularization discards the noise and keeps the important parts, but how can you be certain that this is happening?


I don't know of a special meaning of optimization in the field of inverse problems.