Relationship between Lagrange multiplier and constraint


I know there is a one-to-one relationship between $\lambda$ and $t$ in the following two equivalent optimization formulations. But what is the exact relationship?

A) $$ \sum_i(y_i - \sum_k \beta_k x_{ik})^2 + \lambda\sum_k \beta_k^2 $$

B)
$$ \sum_i(y_i - \sum_k \beta_k x_{ik})^2 $$ subject to: $$ \sum_k \beta_k^2 \le t $$
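The correspondence can be checked numerically: for a given $\lambda$, solve form (A) in closed form, then set $t$ equal to the squared norm of that solution; solving form (B) with this $t$ recovers the same $\beta$. Here is a minimal sketch, assuming a small synthetic dataset (all names and data are illustrative, not from the question):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical synthetic regression problem.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(50)

lam = 2.0  # penalty weight lambda in form (A)

# Form (A) has the closed-form minimizer (X^T X + lam I)^{-1} X^T y.
beta_A = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

# The matching constraint level for form (B) is the squared norm
# of the penalized solution.
t = beta_A @ beta_A

# Solve form (B) directly: minimize the residual sum of squares
# subject to sum_k beta_k^2 <= t.
rss = lambda b: np.sum((y - X @ b) ** 2)
res = minimize(rss, np.zeros(3), method="SLSQP",
               constraints={"type": "ineq", "fun": lambda b: t - b @ b})
beta_B = res.x

# The two solutions coincide up to solver tolerance, because for
# lam > 0 the constraint is active at exactly beta_A.
print(np.max(np.abs(beta_A - beta_B)))
```

This also illustrates the direction of the map: $t(\lambda) = \|\hat\beta(\lambda)\|^2$ is data-dependent, which is why no closed-form relationship between $\lambda$ and $t$ exists independent of $(X, y)$.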



Best answer:

So far, I have never come across any reference that gives an explicit relationship between $\lambda$ and $t$.

The first form of the optimization problem that you have is generally known as the Tikhonov regularization form and the second is known as the Ivanov regularization form$^1$. I have not seen any particular reference that gives a unified view of these regularizations, thereby connecting the $\lambda$ and $t$ parameters.

However, there are Bayesian interpretations of these regularization forms (or at least you can find references for a Bayesian interpretation of Tikhonov regularization). If you model $Y|X$ as a normal distribution whose mean is parameterized by $\beta$ and with unit variance, then the maximum likelihood estimate under this model reduces to the unregularized version of your problem. Now what happens if, instead of maximizing the likelihood, you compute the MAP estimate? Depending on the choice of prior, you will get either the Tikhonov regularization (a Gaussian prior on $\beta$) or the Ivanov regularization (a uniform prior on the ball $\sum_k \beta_k^2 \le t$).
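The Gaussian-prior case can be made concrete (a standard derivation, not from the answer itself). With $y_i \mid \beta \sim \mathcal{N}\!\left(\sum_k \beta_k x_{ik},\, \sigma^2\right)$ and an independent prior $\beta_k \sim \mathcal{N}(0, \tau^2)$, the negative log-posterior is

```latex
-\log p(\beta \mid y)
= \frac{1}{2\sigma^2} \sum_i \Bigl(y_i - \sum_k \beta_k x_{ik}\Bigr)^2
+ \frac{1}{2\tau^2} \sum_k \beta_k^2
+ \text{const},
```

so the MAP estimate solves form (A) with $\lambda = \sigma^2 / \tau^2$. Replacing the Gaussian prior with a uniform prior supported on $\{\beta : \sum_k \beta_k^2 \le t\}$ turns the prior term into a hard feasibility condition, which gives form (B).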

So far, this is the only connection I have been able to figure out between these two forms of regularization.


$^1$ There is also a third one called Morozov/Phillips Regularization form where the variable norm is in the objective function and the residual is in the constraint.