Dual of a differentiable $\ell_1$-norm approximation


I'm considering Problem 6.4 of Boyd's Convex Optimization.

In the problem, we use $\phi(u) = (u^2 + \epsilon)^{1/2}$ (with $\epsilon > 0$) to form the smooth approximation $$ \min_x \sum_{i=1}^m \phi(a_i^Tx-b_i) $$ of the true $\ell_1$ minimization problem $\inf_{x \in \mathbb{R}^n} \| Ax - b \|_1$, where $a_i^T$ denotes the $i$-th row of $A \in \mathbb{R}^{m \times n}$.

The solution formulates the dual of the approximation problem as $$ \max \sum_{i=1}^m -b_i \lambda_i $$ subject to the constraints $|\lambda_i| \leq 1$ for all $i$ and $\sum_{i=1}^m \lambda_i a_i = 0$ (i.e., $A^T\lambda = 0$). I'm having trouble seeing why this is the case.

What I've tried is to rewrite the approximation problem as:

$$ \min \sum_{i=1}^m \phi(r_i) $$ subject to $Ax - b = r$. Then I can write:

$$ L(x, r, \nu) = \sum_{i=1}^m \phi(r_i) + \nu^T(Ax - b - r), $$ and, minimizing first over $x$ (which forces $A^T\nu = 0$, since otherwise the infimum over $x$ is $-\infty$), the dual function $$ g(\nu) = \inf_{r} \sum_{i=1}^m \big(\phi(r_i) - \nu_i r_i\big) - \nu^T b, \qquad A^T\nu = 0. $$
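The infimum over $r$ separates across coordinates, so each term is the negative conjugate $-\phi^*(\nu_i)$. A sketch of that one-dimensional calculation (assuming $|\nu_i| < 1$; for $|\nu_i| > 1$ the infimum is $-\infty$, which is where the constraint $|\nu_i| \leq 1$ comes from):

$$ \begin{aligned} 0 &= \frac{d}{dr_i}\Big[(r_i^2+\epsilon)^{1/2} - \nu_i r_i\Big] = \frac{r_i}{(r_i^2+\epsilon)^{1/2}} - \nu_i \quad\Longrightarrow\quad r_i^\star = \frac{\nu_i\,\epsilon^{1/2}}{(1-\nu_i^2)^{1/2}}, \\ \inf_{r_i}\ \phi(r_i) - \nu_i r_i &= \frac{\epsilon^{1/2}}{(1-\nu_i^2)^{1/2}} - \frac{\nu_i^2\,\epsilon^{1/2}}{(1-\nu_i^2)^{1/2}} = \epsilon^{1/2}\,(1-\nu_i^2)^{1/2}. \end{aligned} $$

Note that each such term lies in $[0, \epsilon^{1/2}]$ and vanishes as $\epsilon \to 0$, which would recover the stated $\ell_1$-style dual objective $-b^T\nu$.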

It seems like the solution is using $\lambda$ in place of $\nu$ in my formulation above. Reverse-engineering from the solution also suggests that somehow minimizing over $r_i$ should make the summation $0$, but I'm not able to see why that's the case.
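As a numerical sanity check of the claimed dual, here is a small sketch (using NumPy/SciPy; the random instance, tolerances, and variable names are my own choices, not from the book): it solves the smoothed problem with a generic quasi-Newton method, builds a dual candidate $\lambda_i = r_i/(r_i^2+\epsilon)^{1/2}$ from the stationarity condition, and verifies weak duality and approximate dual feasibility.

```python
import numpy as np
from scipy.optimize import minimize

# Random instance (sizes, seed, and epsilon are arbitrary illustration choices).
rng = np.random.default_rng(0)
m, n, eps = 20, 5, 1e-2
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

def phi(u):
    return np.sqrt(u**2 + eps)

# Solve the smoothed primal: min_x sum_i phi(a_i^T x - b_i).
res = minimize(lambda x: phi(A @ x - b).sum(), np.zeros(n), method="BFGS")
p_star = res.fun

# Dual candidate from stationarity: lam_i = r_i / (r_i^2 + eps)^{1/2}.
# Then |lam_i| < 1 automatically, and A^T lam equals the primal gradient,
# so it is approximately 0 at the optimum.
r = A @ res.x - b
lam = r / np.sqrt(r**2 + eps)

dual_val = -b @ lam
print(p_star, dual_val)           # weak duality: dual_val <= p_star
print(np.linalg.norm(A.T @ lam))  # approximate feasibility: A^T lam ~ 0
```

The gap $p^\star - (-b^T\lambda)$ observed here should be at most $m\,\epsilon^{1/2}$, consistent with the extra $\epsilon^{1/2}(1-\lambda_i^2)^{1/2}$ terms in the smoothed dual.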