Ridge Regression and The Lasso


I'm trying to compare the penalties of ridge regression and the Lasso. I know the difference between the two methods: ridge regression shrinks the regression coefficients towards zero, while the Lasso sets some of them exactly to zero, which is why it acts as a kind of subset selection procedure. My question is: what exactly should I compare in the penalties, and is it a mathematical comparison?

My second question is: I obtained the form of the ridge regression estimates when the regressors are orthogonal. How should I obtain the corresponding form for the Lasso when the regressors are orthogonal, given that there is no closed form for the Lasso estimates in general?


Ridge regression uses a quadratic penalty. This penalty is very small for small coefficients, so shrinking them further yields almost no regularisation benefit, which is why ridge regression typically fails to force any of them fully to zero.
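To make this concrete, here is a quick sketch (plain Python, with the tuning parameter fixed at λ = 1 for simplicity) of how much penalty each method saves by moving a single coefficient all the way to zero:

```python
# Penalty saved by setting a coefficient b to zero, with lambda = 1:
# ridge saves lambda * b^2, the lasso saves lambda * |b|.
for b in (1.0, 0.1, 0.01):
    print(f"b = {b}: ridge saves {b**2:g}, lasso saves {abs(b):g}")
```

For small coefficients the quadratic saving (b² = 0.0001 at b = 0.01) vanishes much faster than the absolute saving (|b| = 0.01), so ridge has almost no incentive to push a small coefficient the rest of the way to zero.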

The Lasso uses an absolute-value penalty, so reducing a small coefficient to zero can give the same regularisation benefit as reducing a large coefficient by the same amount. This frequently reduces the number of variables actively used in the final model.
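On the orthonormal case specifically: when the design satisfies X^T X = I, both estimators do have closed forms in terms of the OLS estimate — ridge scales every coefficient by 1/(1 + λ), and the Lasso applies the soft-thresholding operator sign(β̂)(|β̂| − λ)₊. A minimal NumPy sketch (the function names are mine, chosen for illustration):

```python
import numpy as np

def ridge_orthonormal(beta_ols, lam):
    # With orthonormal regressors, ridge shrinks every OLS
    # coefficient by the same factor 1 / (1 + lambda) -- none hit zero.
    return beta_ols / (1.0 + lam)

def lasso_orthonormal(beta_ols, lam):
    # With orthonormal regressors, the lasso solution is the
    # soft-thresholding operator: move each coefficient toward
    # zero by lambda, clipping at zero.
    return np.sign(beta_ols) * np.maximum(np.abs(beta_ols) - lam, 0.0)

beta = np.array([3.0, 0.5, -0.2])
print(ridge_orthonormal(beta, 1.0))  # [ 1.5   0.25 -0.1 ]  shrunk, never zero
print(lasso_orthonormal(beta, 1.0))  # [ 2.  0. -0.]        small ones zeroed
```

The two outputs show the comparison you are after: ridge shrinks every coefficient proportionally, while soft-thresholding kills the coefficients smaller than λ outright.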

For your second question, it might also help to demonstrate the Lasso on actual data and look at which coefficients it zeroes out.