How could one derive the Orthogonal Projection Operator of the $ {L}_{1} $ Ball with Box Inequality constraints?
The convex optimization problem is formulated by:
$$\begin{align*} \arg \min_{x} \quad & \frac{1}{2} {\left\| x - u \right\|}_{2}^{2} \\ \text{subject to} \quad & {\left\| x \right\|}_{1} \leq \lambda \\ & -1 \leq {x}_{i} \leq 1 \quad \forall i = 1, 2, \ldots, n \end{align*}$$
This is an extension of @Alex's answer to arbitrary box constraints, with a somewhat easier method for finding the solution.
Reformulating the problem with arbitrary boundaries (reserving $ \lambda $ for the Lagrange multiplier):
$$\begin{align*} \arg \min_{x} \quad & \frac{1}{2} {\left\| x - y \right\|}_{2}^{2} \\ \text{subject to} \quad & {\left\| x \right\|}_{1} \leq r \\ & {l}_{i} \leq {x}_{i} \leq {u}_{i} \; \forall i = 1, 2, \ldots, n \end{align*}$$
The Lagrangian is given by (with the box constraints kept implicit):
$$ L \left( x, \lambda \right) = \frac{1}{2} {\left\| x - y \right\|}_{2}^{2} + \lambda \left( {\left\| x \right\|}_{1} - r \right) = \sum_{i = 1}^{n} \left[ \frac{1}{2} {\left( {x}_{i} - {y}_{i} \right)}^{2} + \lambda \left| {x}_{i} \right| \right] - \lambda r $$
Namely, the problem decomposes component-wise.
The dual function is given by:
$$ q \left( \lambda \right) = \inf_{ {l}_{i} \leq {x}_{i} \leq {u}_{i} } L \left( x, \lambda \right) = \inf_{ {l}_{i} \leq {x}_{i} \leq {u}_{i} } \left\{ \sum_{i = 1}^{n} \left[ \frac{1}{2} {\left( {x}_{i} - {y}_{i} \right)}^{2} + \lambda \left| {x}_{i} \right| \right] - \lambda r \right\} $$
Now, taking advantage of the component-wise form, one can solve $ n $ scalar problems of the form:
$$ {x}^{\ast}_{i} = \arg \min_{ {l}_{i} \leq {x}_{i} \leq {u}_{i} } \frac{1}{2} {\left( {x}_{i} - {y}_{i} \right)}^{2} + \lambda \left| {x}_{i} \right| $$
The key to easily solving this problem is knowing the sign of the optimal solution $ {x}^{\ast}_{i} $.
This is because, in general, the solution is given by (just the first-order condition of the objective above):
$$ {x}^{\ast}_{i} = \operatorname{Proj}_{\left[ {l}^{\ast}_{i}, {u}^{\ast}_{i} \right]} \left( {y}_{i} - \lambda \operatorname{sign} \left( {x}^{\ast}_{i} \right) \right) $$
Pay attention to $ {l}^{\ast}_{i} $ and $ {u}^{\ast}_{i} $: we can't use $ {l}_{i} $ and $ {u}_{i} $ directly, as the effective boundaries change according to $ \operatorname{sign} \left( {x}^{\ast}_{i} \right) $.
In general, if the boundaries don't interfere with the sign of the optimal solution, the optimal solution has the same sign as $ {y}_{i} $. This is easy to validate: if the signs differed, $ {x}^{\ast}_{i} = 0 $ would attain a better objective value, since it both lowers the squared term and zeros out the absolute-value term. Yet there are cases where the boundaries enforce a different sign.
The intuition above yields 3 mutually exclusive cases, together with the optimal solution of each (the effective boundaries $ {l}^{\ast}_{i}, {u}^{\ast}_{i} $ follow from the sign):

1. $ {u}_{i} \leq 0 $: then $ {x}^{\ast}_{i} \leq 0 $, so $ \left| {x}_{i} \right| = -{x}_{i} $ and $ {x}^{\ast}_{i} = \operatorname{Proj}_{\left[ {l}_{i}, {u}_{i} \right]} \left( {y}_{i} + \lambda \right) $.
2. $ {l}_{i} \geq 0 $: then $ {x}^{\ast}_{i} \geq 0 $, so $ \left| {x}_{i} \right| = {x}_{i} $ and $ {x}^{\ast}_{i} = \operatorname{Proj}_{\left[ {l}_{i}, {u}_{i} \right]} \left( {y}_{i} - \lambda \right) $.
3. $ {l}_{i} < 0 < {u}_{i} $: then $ \operatorname{sign} \left( {x}^{\ast}_{i} \right) = \operatorname{sign} \left( {y}_{i} \right) $, which gives the clipped soft threshold $ {x}^{\ast}_{i} = \operatorname{Proj}_{\left[ {l}_{i}, {u}_{i} \right]} \left( \operatorname{sign} \left( {y}_{i} \right) \max \left\{ \left| {y}_{i} \right| - \lambda, 0 \right\} \right) $.
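To make the case analysis concrete, here is a minimal Python sketch of the scalar solver (the function name `prox_component` is mine, not taken from the referenced MATLAB code):

```python
def prox_component(y, lam, l, u):
    """Solve min_{l <= x <= u} 0.5 * (x - y)**2 + lam * |x| for a scalar y."""
    if u <= 0:
        # The box forces x <= 0, so |x| = -x and the unconstrained minimizer is y + lam
        x = y + lam
    elif l >= 0:
        # The box forces x >= 0, so |x| = x and the unconstrained minimizer is y - lam
        x = y - lam
    else:
        # 0 is interior to the box: plain soft thresholding, sign(x*) = sign(y)
        x = (1.0 if y > 0 else -1.0) * max(abs(y) - lam, 0.0)
    return min(max(x, l), u)  # project onto the box
```

For example, with the original box $ \left[ -1, 1 \right] $, `prox_component(2.0, 0.5, -1.0, 1.0)` soft-thresholds to `1.5` and then clips to `1.0`.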
So now, for each value of $ \lambda $ we have a way to evaluate $ q \left( \lambda \right) $ efficiently by plugging in the optimal value of each $ {x}^{\ast}_{i} $.
Since the problem is convex, the dual function is guaranteed to be concave, hence its maximum can easily be found by any one-variable maximization method.
Once we find the optimal value of $ \lambda $ it is easy to derive the optimal value of $ x $ from the above.
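Instead of a generic one-variable maximizer, one can also exploit that $ g \left( \lambda \right) = {\left\| {x}^{\ast} \left( \lambda \right) \right\|}_{1} - r $ is nonincreasing (it is a subgradient of $ q $) and locate its root by bisection. Here is a Python sketch under the simplifying assumption $ {l}_{i} \leq 0 \leq {u}_{i} $, so the clipped soft threshold covers every component (this is not the general three-case code, and the function name is mine):

```python
import numpy as np

def project_l1_box(y, r, l, u, tol=1e-9):
    """Project y onto {x : ||x||_1 <= r, l <= x <= u}, assuming l <= 0 <= u."""
    def x_of(lam):
        # Component-wise clipped soft threshold (valid since 0 lies in each box)
        return np.clip(np.sign(y) * np.maximum(np.abs(y) - lam, 0.0), l, u)

    if np.abs(x_of(0.0)).sum() <= r:
        return x_of(0.0)  # the L1 constraint is inactive, lambda* = 0

    lo, hi = 0.0, np.abs(y).max()  # at hi the soft threshold yields x = 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.abs(x_of(mid)).sum() > r:
            lo = mid  # ||x||_1 still too large -> increase lambda
        else:
            hi = mid
    return x_of(hi)
```

For instance, projecting $ y = \left( 2, -3, 0.5 \right) $ onto the unit $ {L}_{1} $ ball intersected with $ {\left[ -1, 1 \right]}^{3} $ yields approximately $ \left( 0, -1, 0 \right) $.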
I wrote MATLAB code which implements the solution in my Mathematics StackExchange Question 2824418 GitHub repository (see ProjectL1BallDual.m). The code validates the result against a reference computed by CVX.