Is there an efficient algorithm to find the $L_1$-projection of a vector $\mathbf{b}$ onto the range of a matrix $\mathbf{A}$? In other words, how can one find a vector $\mathbf{x}$ that minimizes the expression below? $$ \begin{align} \mathcal{L}(\mathbf{x}) &= ||\mathbf{Ax} - \mathbf{b}||_1 \\ \mathbf{x}^* &= \arg\min_\mathbf{x} \mathcal{L}(\mathbf{x}) \end{align} $$ for a given matrix $\mathbf{A}$ and vector $\mathbf{b}$.
For the $L_2$ projection, the minimizer can be found by setting the gradient of the loss function to zero. For the $L_1$ projection, however, the gradient is not defined everywhere, so one cannot simply solve for a point with zero gradient.
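Concretely, in the $L_2$ case setting the gradient of the squared loss to zero gives the normal equations: $$ \nabla_\mathbf{x} \, ||\mathbf{Ax} - \mathbf{b}||_2^2 = 2\mathbf{A}^\top(\mathbf{Ax} - \mathbf{b}) = \mathbf{0} \quad\Longrightarrow\quad \mathbf{A}^\top\mathbf{A}\,\mathbf{x} = \mathbf{A}^\top\mathbf{b}. $$ It is this step that has no direct analogue for the $L_1$ loss.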
Depending on the size and sparsity of $\mathbf{A}$, you might want to use a first-order method such as the one mentioned in xei's answer.
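As one concrete first-order method (not necessarily the one xei describes), here is a sketch of plain subgradient descent on $||\mathbf{Ax} - \mathbf{b}||_1$; the step size and iteration count are arbitrary illustrative choices:

```python
import numpy as np

def l1_subgradient_descent(A, b, n_iter=2000, step=0.1):
    """Minimize ||Ax - b||_1 by subgradient descent with a
    diminishing step size step/sqrt(k)."""
    x = np.zeros(A.shape[1])
    best_x = x.copy()
    best_loss = np.sum(np.abs(A @ x - b))
    for k in range(1, n_iter + 1):
        r = A @ x - b
        g = A.T @ np.sign(r)            # a subgradient of the loss at x
        x = x - (step / np.sqrt(k)) * g
        loss = np.sum(np.abs(A @ x - b))
        if loss < best_loss:            # the iterates need not be monotone,
            best_x, best_loss = x.copy(), loss  # so keep the best one seen
    return best_x
```

Because the objective is nonsmooth, the iterates oscillate near the solution; tracking the best iterate is standard practice for subgradient methods.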
An alternative, which can be fast if you can solve least-squares problems quickly, is Iteratively Reweighted Least Squares (IRLS).
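A minimal IRLS sketch in Python/NumPy: each iteration solves a weighted least-squares problem with weights $w_i = 1/|r_i|$, which makes the weighted $L_2$ loss mimic the $L_1$ loss at the current residuals. The clipping constant `eps` and iteration count are arbitrary choices, not part of the algorithm itself:

```python
import numpy as np

def irls_l1(A, b, n_iter=50, eps=1e-8):
    """Approximate argmin_x ||Ax - b||_1 by iteratively
    reweighted least squares."""
    # Initialize with the ordinary least-squares solution.
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        r = A @ x - b
        # Weights 1/|r_i|, with small residuals clipped so the
        # weights stay finite.
        w = 1.0 / np.maximum(np.abs(r), eps)
        sw = np.sqrt(w)
        # Weighted least squares: min_x || diag(sw) (Ax - b) ||_2^2,
        # solved by scaling the rows of A and b.
        x = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)[0]
    return x
```

On problems with gross outliers in $\mathbf{b}$, the IRLS iterate typically achieves a much smaller $L_1$ loss than the plain least-squares solution.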
This problem can also be formulated as a linear program and handed to an LP solver. If you already have access to a good LP solver and want a solution that is easy to program, the LP formulation is probably the simplest route.
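The standard LP formulation introduces auxiliary variables $\mathbf{t}$ with $-\mathbf{t} \le \mathbf{Ax} - \mathbf{b} \le \mathbf{t}$ and minimizes $\sum_i t_i$. A sketch using SciPy's `linprog` (assuming SciPy is available; the `"highs"` method is its default modern backend):

```python
import numpy as np
from scipy.optimize import linprog

def l1_regression_lp(A, b):
    """Solve min_x ||Ax - b||_1 as the LP
       min sum(t)  s.t.  -t <= Ax - b <= t."""
    m, n = A.shape
    # Decision variables z = [x; t]: zero cost on x, unit cost on t.
    c = np.concatenate([np.zeros(n), np.ones(m)])
    # The two-sided bound becomes  Ax - t <= b  and  -Ax - t <= -b.
    G = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    h = np.concatenate([b, -b])
    # x is free; t >= 0.
    bounds = [(None, None)] * n + [(0, None)] * m
    res = linprog(c, A_ub=G, b_ub=h, bounds=bounds, method="highs")
    return res.x[:n]
```

At the optimum each $t_i = |(\mathbf{Ax} - \mathbf{b})_i|$, so the LP objective equals the $L_1$ loss.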