Maximum and minimum penalty in the lasso regression family


I am trying to tune the penalty parameter, lambda, in group lasso regression, but I have no idea how to choose it. Just to clarify, group lasso regression is a kind of multiple linear regression that puts a penalty on the estimated coefficients to keep them small. It also encourages variables in the same group to be selected or dropped from the model together.

I am wondering whether there is any theory or rule for the maximum and minimum values of lambda based on the input x and the response y. I suspect the rule for lambda in the plain lasso carries over to the group lasso as well, so that would also be helpful.

I need an automatic procedure to determine the minimum and maximum values of the penalty because I have more than ten thousand response variables, each regressed on more than 500 independent variables. I would appreciate any help; I am new to the regression field.

Thanks.

1 Answer

You need to perform cross-validation on $\lambda$.
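To make that concrete, here is a minimal, self-contained sketch of cross-validating $\lambda$ with a toy coordinate-descent lasso solver. The solver, the $\tfrac{1}{2n}$ objective scaling, and the synthetic data are my assumptions for illustration, not anything from the thread; in practice a library solver (e.g. glmnet or scikit-learn) would replace `lasso_cd`.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of the l1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    """Coordinate descent for (1/(2n))||y - Xb||^2 + lam * ||b||_1.
    Assumes the columns of X are standardized and y is centered."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding feature j's contribution.
            r_j = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r_j / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
    return b

def cv_select_lambda(X, y, lambdas, k=5, seed=0):
    """Pick the lambda with the lowest k-fold cross-validated MSE."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    errs = np.zeros(len(lambdas))
    for i, lam in enumerate(lambdas):
        for fold in folds:
            train = np.setdiff1d(idx, fold)
            b = lasso_cd(X[train], y[train], lam)
            errs[i] += np.mean((y[fold] - X[fold] @ b) ** 2)
    return lambdas[int(np.argmin(errs))]

# Synthetic demo: 3 true nonzero coefficients out of 10.
rng = np.random.default_rng(1)
n, p = 100, 10
X = rng.standard_normal((n, p))
X = (X - X.mean(axis=0)) / X.std(axis=0)
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.3 * rng.standard_normal(n)
y = y - y.mean()

# Log-spaced grid from lambda_max (all coefficients zero) downward.
lam_max = np.max(np.abs(X.T @ y)) / n
lambdas = np.logspace(np.log10(lam_max), np.log10(1e-3 * lam_max), 20)
best = cv_select_lambda(X, y, lambdas)
```

The same loop structure works for a group lasso solver; only the inner fitting routine changes.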

You could also approximate the model using forward or backward subset selection.

Theory-wise, $\lambda$ plays the role of a Lagrange multiplier: it is the shadow price of the $\ell_1$ constraint in the equivalent constrained form of the problem, and it comes out of the dual optimization problem. Sensitivity analysis on the dual can therefore help, and some optimization packages report these dual quantities as part of their output. Again, though, cross-validation will take care of this.
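For the maximum value the question asks about, the KKT conditions give a closed form: the smallest $\lambda$ at which every coefficient (or every group) is exactly zero, which is the natural upper end of the grid. A sketch, assuming centered/standardized data, the $\tfrac{1}{2n}$-scaled least-squares objective, and $\sqrt{p_g}$ group weights (the Yuan and Lin convention); the $\varepsilon$ lower-end rule follows glmnet's convention:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 6
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# Most solvers assume centered y and standardized columns of X.
y = y - y.mean()
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Plain lasso: smallest lambda at which every coefficient is zero,
# from the KKT conditions of (1/(2n))||y - Xb||^2 + lambda * ||b||_1.
lam_max_lasso = np.max(np.abs(X.T @ y)) / n

# Group lasso with sqrt(group-size) weights: smallest lambda at which
# every group of coefficients is zeroed out simultaneously.
groups = [[0, 1, 2], [3, 4, 5]]
lam_max_group = max(
    np.linalg.norm(X[:, g].T @ y) / (n * np.sqrt(len(g))) for g in groups
)

# Common convention for the minimum: a log-spaced grid down to
# lam_min = eps * lam_max, with eps around 1e-3.
eps = 1e-3
lambda_grid = np.logspace(np.log10(lam_max_lasso),
                          np.log10(eps * lam_max_lasso), 100)
```

This is cheap enough to run per response, which matters when there are thousands of them as in the question.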

Here's a decent paper on cross-validation:

http://www.stats.ox.ac.uk/~doucet/bornn_doucet_gottardo_crossvalidation.pdf

As for a text on optimization methods, the canonical one is Convex Optimization by Boyd and Vandenberghe:

https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf