In section 6.3 of *Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers* (Boyd et al.) there is a method for minimizing a loss function with $\ell_1$ regularization, i.e.,
minimize $\ell(\mathbf{x}) + \lambda\|\mathbf{x}\|_1$
How can I add the equality constraint
$\sum_i x_i = 1$ to such a problem and perform the optimization?
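In other words, the full problem is
$$\begin{array}{ll}\text{minimize} & \ell(\mathbf{x}) + \lambda\|\mathbf{x}\|_1\\ \text{subject to} & \mathbf{1}^\top\mathbf{x} = 1.\end{array}$$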
One approach could be to rewrite the constraint as $$\sum_i x_i - 1 = 0,$$ which motivates adding the penalty term $$\lambda_S\left\|\sum_i x_i - 1\right\|_k = \lambda_S\|\mathbf{S}\mathbf{x} - \mathbf{1}\|_k$$ for some suitable $k$, where $\mathbf{S} = \mathbf{1}^\top$ is the sum operator written as a matrix: $\mathbf{S}\mathbf{x}$ is just the dot product of $\mathbf{x}$ with a vector of ones. The penalty is zero exactly when the constraint holds. Since $\mathbf{S}\mathbf{x} - 1$ is a scalar here, the choice of $k$ amounts to choosing a penalty function such as $|\mathbf{1}^\top\mathbf{x} - 1|$ or $(\mathbf{1}^\top\mathbf{x} - 1)^2$, and that choice, together with the size of the weight $\lambda_S$, determines how strongly the constraint is enforced: the absolute-value penalty ($k = 1$) is exact, in the sense that the constraint holds exactly once $\lambda_S$ is large enough, while the squared penalty is smooth but only satisfies the constraint approximately for any finite $\lambda_S$.
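As a concrete illustration, here is a minimal sketch of the squared-penalty variant, assuming a least-squares loss $\ell(\mathbf{x}) = \tfrac{1}{2}\|\mathbf{A}\mathbf{x} - \mathbf{b}\|_2^2$. Folding $\tfrac{\lambda_S}{2}(\mathbf{1}^\top\mathbf{x} - 1)^2$ into the smooth part of the objective means the $x$-update of the ADMM iteration from section 6.3 remains a single linear solve, and the soft-thresholding $z$-update is unchanged. The names `admm_lasso_sum_penalty`, `lam`, and `lam_S` are my own, not from the paper.

```python
import numpy as np

def soft_threshold(v, kappa):
    """Elementwise soft-thresholding, the prox operator of kappa * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_lasso_sum_penalty(A, b, lam, lam_S, rho=1.0, iters=500):
    """Minimize (1/2)||Ax - b||^2 + lam*||x||_1 + (lam_S/2)*(sum(x) - 1)^2.

    The sum-to-one constraint is enforced softly through the squared
    penalty, folded into the smooth part of the objective so the
    x-update is still a single linear solve.
    """
    n = A.shape[1]
    ones = np.ones(n)
    # The squared penalty contributes lam_S * (ones ones^T) to the x-update system.
    M = np.linalg.inv(A.T @ A + lam_S * np.outer(ones, ones) + rho * np.eye(n))
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        x = M @ (A.T @ b + lam_S * ones + rho * (z - u))  # x-update: linear solve
        z = soft_threshold(x + u, lam / rho)              # z-update: soft threshold
        u = u + x - z                                     # scaled dual update
    return z

# Toy usage: the larger lam_S is, the closer sum(x) gets to 1.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
x = admm_lasso_sum_penalty(A, b, lam=0.1, lam_S=100.0)
print(x.sum())  # close to, but not exactly, 1
```

With the squared penalty, $\mathbf{1}^\top\mathbf{x}$ only approaches $1$ as $\lambda_S$ grows; if the constraint must hold exactly, the equality $\mathbf{1}^\top\mathbf{x} = 1$ can instead be kept as a hard constraint in the ADMM formulation, at the cost of a more involved $z$-update.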