Is it true that the solution to a linearly constrained convex minimization problem can only be placed on the boundary of the constraint set, for any nonlinear convex objective, e.g.
$$ \min_x f(x) \quad \text{s.t.} \quad Ax \le b $$
when $f$ is convex?
If yes, why?
Update: I guess the answer is no. For example, $\min_x (x-2)^2$ s.t. $x \le 100$ attains its minimum at the interior point $x = 2$.
But then why does adding an $L_1$ norm to a convex objective lead to exact sparsity?
By exact sparsity I mean that many elements of the solution vector are exactly zero (as opposed to $L_2$ norm regularization, which only shrinks the values of the solution). The reason I heard was that the solution is guaranteed to lie on a vertex (or low-dimensional face) of the $L_1$ ball, and at those points most elements are exactly zero.
No, this is not correct; you are confused. A convex function can attain its minimum in the interior of the feasible set, as your own example shows. I am not sure I understood your last question regarding the $L_1$ norm, but coming back to the original problem, the opposite statement is actually true: a linear (or concave) objective over a convex set always attains its optimum on the boundary.
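As for the $L_1$ question: the standard one-dimensional intuition is soft-thresholding. Here is a minimal sketch (the values of $c$ and $\lambda$ are illustrative assumptions, and the brute-force grid search is just for demonstration, not how lasso solvers actually work) comparing $\min_x \tfrac12 (x-c)^2 + \lambda |x|$ with its $L_2$ counterpart $\min_x \tfrac12 (x-c)^2 + \tfrac{\lambda}{2} x^2$:

```python
# 1-D sketch of why L1 regularization gives exact zeros while L2 only shrinks.
# We minimize both penalized objectives by brute-force grid search.

def argmin_on_grid(objective, lo=-5.0, hi=5.0, steps=200001):
    """Return the grid point on [lo, hi] minimizing `objective`."""
    best_x, best_val = lo, objective(lo)
    for i in range(1, steps):
        x = lo + (hi - lo) * i / (steps - 1)
        v = objective(x)
        if v < best_val:
            best_x, best_val = x, v
    return best_x

c, lam = 0.3, 1.0  # illustrative data point and regularization strength

# Lasso-type: 0.5*(x - c)^2 + lam*|x|
lasso_x = argmin_on_grid(lambda x: 0.5 * (x - c) ** 2 + lam * abs(x))
# Ridge-type: 0.5*(x - c)^2 + 0.5*lam*x^2
ridge_x = argmin_on_grid(lambda x: 0.5 * (x - c) ** 2 + 0.5 * lam * x ** 2)

# Closed forms: the L1 solution is sign(c)*max(|c| - lam, 0), which is
# exactly 0 here since |c| = 0.3 < lam = 1; the L2 solution is
# c/(1 + lam) = 0.15, which is shrunk but never exactly zero.
print(lasso_x, ridge_x)
```

The kink of $|x|$ at the origin is what pins the minimizer exactly at zero whenever $|c| \le \lambda$; the smooth $x^2$ penalty has zero slope at the origin, so it can only shrink the solution, never zero it out.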