$L_{1}$ Regression as LP


In solving the problem:

$$\underset{x}{\text{minimize}} \quad ||Ax - b||_{1}$$

We can formulate it as an LP by noting that:

$$||Ax-b||_{1} = \sum_{i} |a_{i}^{T}x - b_{i}|$$

In Boyd's book, they rewrite it as:

$$\underset{x}{\text{minimize}} \quad 1^{T} \mathbf{\delta}$$ $$\text{subject to} \quad -1^{T} \mathbf{\delta} \leq Ax-b \leq 1^{T}\mathbf{\delta}$$

But shouldn't $\mathbf{\delta} \geq 0$? In other words, shouldn't the LP be:

$$\underset{x}{\text{minimize}} \quad 1^{T} \mathbf{\delta}$$ $$\text{subject to} \quad -1^{T} \mathbf{\delta} \leq Ax-b \leq 1^{T}\mathbf{\delta}, \quad \mathbf{\delta} \geq 0$$

Or is that distinction somehow implicit? Or does it even matter?


You mention that Boyd's book states: $$ -1^T\delta \le Ax-b \le 1^T\delta$$

I think that should be: $$ -\delta \le Ax-b \le \delta$$

From this "sandwich" inequality we can see that $-\delta \le \delta$ must hold componentwise, i.e. $-\delta_i \le \delta_i$ for every $i$, which implies $\delta \ge 0$.

So, $\delta \ge 0$ holds automatically.
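As a sanity check, the LP can be solved numerically without ever imposing $\delta \ge 0$, and the optimal $\delta$ still comes out nonnegative. The sketch below uses `scipy.optimize.linprog` with the stacked variable $z = (x, \delta)$; the data `A` and `b` are random placeholders, not from any particular source.

```python
import numpy as np
from scipy.optimize import linprog

# Minimize ||Ax - b||_1 via the LP
#   minimize 1^T delta   subject to   -delta <= Ax - b <= delta
# Stacked variable z = [x; delta], with x in R^n and delta in R^m.
rng = np.random.default_rng(0)
m, n = 20, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Objective: zero weight on x, one on each delta_i.
c = np.concatenate([np.zeros(n), np.ones(m)])

# Rewrite the sandwich constraint in A_ub z <= b_ub form:
#    Ax - b <= delta   ->   A x - delta <=  b
#  -(Ax - b) <= delta   ->  -A x - delta <= -b
I = np.eye(m)
A_ub = np.block([[A, -I], [-A, -I]])
b_ub = np.concatenate([b, -b])

# All variables are left free: no explicit delta >= 0 bound is given,
# since the constraints force it automatically.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m))
x_opt, d_opt = res.x[:n], res.x[n:]

print("optimal 1-norm:", res.fun)
print("min delta_i:", d_opt.min())  # nonnegative despite no explicit bound
```

At the optimum, `res.fun` equals `np.abs(A @ x_opt - b).sum()`, confirming that the LP value is the minimal 1-norm and that each $\delta_i$ tightens to $|a_i^T x - b_i| \ge 0$.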