Ridge and Lasso Regression


I understand how these algorithms work, but there is one thing I don't get. Ridge and lasso regression apply a penalty to the independent variables' coefficients — I understand that. But how does this actually happen? How is the penalty applied to the coefficients?


A good resource is sklearn. To answer your question:

As with all ML models, you are trying to minimise/maximise an objective function. For ridge it is $$ L_{\text{ridge}} = ||y - Xw||^2_2 + \alpha \, ||w||^2_2 $$ and for lasso $$ L_{\text{lasso}} = \frac{1}{2 n_{\text{samples}}} ||y - Xw||^2_2 + \alpha \, ||w||_1. $$ You can generally write these as $$ L = ||y - Xw||^2_2 + \text{Penalty}, $$ and it is the penalty term that restricts the coefficients: any candidate $w$ with large entries pays a cost on top of its squared error, so the minimiser is pulled towards smaller coefficients.
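To make the objectives concrete, here is a minimal sketch that evaluates both penalised losses by hand for one candidate coefficient vector. The data and the value of `alpha` are made up for illustration; they are not from the answer above.

```python
import numpy as np

# Toy data: 5 samples, 2 features (illustrative values only).
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0], [5.0, 5.0]])
y = np.array([3.0, 3.0, 7.0, 7.0, 10.0])
w = np.array([1.0, 1.1])   # a candidate coefficient vector
alpha = 0.5                # penalty strength

residual = y - X @ w

# Ridge objective: ||y - Xw||_2^2 + alpha * ||w||_2^2
L_ridge = residual @ residual + alpha * (w @ w)

# Lasso objective: (1 / (2 * n_samples)) * ||y - Xw||_2^2 + alpha * ||w||_1
L_lasso = residual @ residual / (2 * len(y)) + alpha * np.abs(w).sum()

print(L_ridge, L_lasso)
```

Fitting the model means searching for the `w` that minimises one of these totals — the penalty term is simply added to the data-fit term, which is the whole mechanism.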

If you are asking why do it like this: the penalty is a way to deal with large coefficients (which correspond to over-relying on a particular feature). It helps the model generalise rather than fitting the training data too closely.
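You can see the shrinkage effect directly with the closed-form ridge solution $w = (X^\top X + \alpha I)^{-1} X^\top y$: as $\alpha$ grows, the coefficient norm drops. The synthetic data below is an assumption for illustration.

```python
import numpy as np

# Synthetic regression data (illustrative, not from the answer).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([5.0, -3.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

def ridge_coef(X, y, alpha):
    """Closed-form ridge solution: (X^T X + alpha * I)^{-1} X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

# Larger alpha -> the penalty dominates -> smaller coefficients.
for alpha in (0.0, 10.0, 1000.0):
    print(alpha, np.linalg.norm(ridge_coef(X, y, alpha)))
```

At `alpha = 0` this is just ordinary least squares; increasing `alpha` monotonically shrinks the coefficient norm towards zero, which is exactly what "applying a penalty to the coefficients" means in practice.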

You can also look into the Bayesian approach to deriving ridge/lasso here