How is this trade-off calculated for regularized least-squares in Boyd's Convex Optimization book?


I am reading this topic in Boyd's Convex Optimization book, but the following step, i.e., the trade-off between the least-squares term and the $\ell_2$-norm term, is difficult for me to understand. Could someone kindly explain how this equation is derived? Your guidance will be appreciated. Regards,

[Images from the book, showing the regularized least-squares trade-off equation, are not reproduced here.]


There are 2 answers below.


Solution for least-squares regularization

My apologies, please ignore my question: I have solved the problem. I am posting the answer in case someone needs it. If the question is too irrelevant here, it can be deleted. Regards
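For readers who land here: since the images are unavailable, I am assuming the equation in question is the standard scalarized regularized least-squares problem from Boyd's book. A sketch of the usual calculation:

$$\min_x \; \|Ax-b\|_2^2 + \lambda \|x\|_2^2, \qquad \lambda > 0.$$

The objective is convex and differentiable, so setting its gradient to zero gives
$$2A^\top(Ax-b) + 2\lambda x = 0 \;\Longrightarrow\; (A^\top A + \lambda I)\,x = A^\top b,$$
and since $A^\top A + \lambda I \succ 0$ is invertible for $\lambda > 0$,
$$x^\star = (A^\top A + \lambda I)^{-1} A^\top b.$$

Varying $\lambda$ traces out the optimal trade-off curve between the two objectives.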


The solution to this "multi-objective" task is known to be $A^\dagger \,b$ with the Moore-Penrose pseudoinverse $A^\dagger$. If $A$ is injective, we have $$A^\dagger b = (A^* A)^{-1} A^* b.$$

In case $A$ is surjective, we have $$A^\dagger b = A^* (A\, A^*)^{-1} b$$
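As a quick numerical sanity check of the two closed forms (a sketch; the matrices and shapes below are illustrative, not from the original post):

```python
import numpy as np

rng = np.random.default_rng(1)

# Injective case: tall matrix, full column rank almost surely.
A_tall = rng.standard_normal((6, 3))
b_tall = rng.standard_normal(6)
# (A* A)^{-1} A* b via a linear solve rather than an explicit inverse.
x_inj = np.linalg.solve(A_tall.T @ A_tall, A_tall.T @ b_tall)
print(np.allclose(x_inj, np.linalg.pinv(A_tall) @ b_tall))  # True

# Surjective case: wide matrix, full row rank almost surely.
A_wide = rng.standard_normal((3, 6))
b_wide = rng.standard_normal(3)
# A* (A A*)^{-1} b, again via a linear solve.
x_surj = A_wide.T @ np.linalg.solve(A_wide @ A_wide.T, b_wide)
print(np.allclose(x_surj, np.linalg.pinv(A_wide) @ b_wide))  # True
```

Both formulas agree with the SVD-based pseudoinverse `np.linalg.pinv` in their respective regimes.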

In the general case, one may apply Tikhonov regularization:

$$A^\dagger \,b = \lim_{\lambda \searrow 0} (A^*A + \lambda I)^{-1} A^* b = \lim_{\lambda \searrow 0} A^* (A\, A^* + \lambda I)^{-1} b.$$
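The Tikhonov limit can also be checked numerically on a rank-deficient matrix (a sketch; the example data and the choice of $\lambda = 10^{-8}$ are my own assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
# Rank-deficient 5x4 matrix (rank at most 3), so A is neither injective
# nor surjective and only the Tikhonov/pseudoinverse route applies.
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))
b = rng.standard_normal(5)

# Moore-Penrose solution A^+ b, computed via SVD.
x_pinv = np.linalg.pinv(A) @ b

# Tikhonov-regularized solution with a small ridge parameter lambda.
lam = 1e-8
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

print(np.allclose(x_pinv, x_tik, atol=1e-4))  # True for small lambda
```

As $\lambda \searrow 0$ the regularized solution converges to $A^\dagger b$, which is exactly the limit stated above.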
