Nonnegative least squares relation to unconstrained least squares


I have a dataset that I was originally fitting with a standard (unconstrained) least squares fit. I wanted to switch to a constrained fit in which the parameters must be nonnegative. I'm currently using MATLAB's function 'lsqnonneg' to solve the nonnegative least squares problem, and it's working just fine. I've noticed, though, that in every case where the unconstrained fit gave me a negative parameter, that parameter is now exactly zero. I don't know what MATLAB's function is doing under the hood, but from the examples I've done so far, it looks like I could just do a standard least squares fit, set any negative parameters to zero, and then redo the fit with the remaining parameters using the standard method again.

Will this always be the case -- that parameters which come out negative in the unconstrained solution will simply be zero in the nonnegative constrained solution? I haven't found any mention of this in the sources I've looked at, but I also can't manage to cook up a counterexample where it fails. Either an explanation of why this must happen or a counterexample would be really helpful!
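The clamp-and-refit procedure described above could be sketched as follows. This is a Python/NumPy paraphrase of the idea, not the poster's actual MATLAB code; the function name `clamp_and_refit` is made up for illustration.

```python
import numpy as np

def clamp_and_refit(A, b):
    """One pass of the heuristic: unconstrained fit, zero out any
    negative coefficients, then refit only the remaining ones."""
    # Step 1: ordinary unconstrained least squares.
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Step 2: freeze negative coefficients at zero.
    keep = coef >= 0
    refit = np.zeros_like(coef)
    # Step 3: refit the surviving columns with ordinary least squares.
    if keep.any():
        refit[keep], *_ = np.linalg.lstsq(A[:, keep], b, rcond=None)
    return refit
```

Whether this one-pass heuristic always reproduces the true nonnegative least squares solution is exactly what the question asks.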


Best answer

It is not true that if a variable is negative in the solution to the unconstrained least-squares problem, then it will be zero in the solution to the nonnegatively constrained version of the same problem. Here is a counterexample, a least-squares problem with residuals

$$\begin{align} y+1 &= 0, \\ 1000(2x-y+1) &= 0. \end{align}$$

The unconstrained solution is $x=-1$, $y=-1$. In the nonnegatively constrained problem, the heavily weighted second residual forces $2x-y+1\approx 0$; with $x$ pinned at $0$, this pushes $y$ up to about $1$. The constrained solution is therefore $x=0$, $y\approx 1$: the sign of $y$ flips from negative to positive instead of going to zero.
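The counterexample can be checked numerically. The sketch below uses SciPy's `scipy.optimize.nnls` as a stand-in for MATLAB's `lsqnonneg` (both solve the same nonnegative least squares problem); the matrix encodes the two residuals above as a linear system.

```python
import numpy as np
from scipy.optimize import nnls

# Residuals: r1 = y + 1, r2 = 1000*(2x - y + 1),
# written as the least-squares system A @ [x, y] ~= b.
A = np.array([[0.0, 1.0],
              [2000.0, -1000.0]])
b = np.array([-1.0, -1000.0])

# Unconstrained least squares: exactly x = -1, y = -1 (both negative).
unconstrained, *_ = np.linalg.lstsq(A, b, rcond=None)

# Nonnegative least squares: x = 0, but y lands near +1, not 0.
constrained, residual_norm = nnls(A, b)

print(unconstrained)  # approximately [-1., -1.]
print(constrained)    # approximately [0., 1.]
```

So the parameter $y$, negative in the unconstrained fit, is positive (not zero) once the nonnegativity constraint is imposed, which is exactly the behavior the question was probing for.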