I want to solve the following minimization problem, where $G(x)=\lambda||Ax+b||_1$: $$ prox_{\tau G}(x^k) = \arg\min_z \frac{1}{2\tau} ||z-x^k||_2^2 + \lambda||Az + b||_1 $$
I know how to calculate it in the case of the L1-norm with a translation, i.e. $G(x)=\lambda ||x-g||_1$ for a constant vector $g$: $$ prox_{\tau G}(x^k) = \arg\min_z \frac{1}{2\tau} ||z-x^k||_2^2 + \lambda||z-g||_1 $$
Let $y = z-g$; then $$ y^{k+1} = \arg\min_y \frac{1}{2\tau} ||y-(x^k-g)||_2^2 + \lambda||y||_1 = shrink(x^k-g,\tau\lambda), $$
where $$ shrink(y,\alpha)=sgn(y)\max(|y|-\alpha,0)=\begin{cases}y-\alpha & y\in(\alpha,\infty)\\ 0&y\in[-\alpha,\alpha]\\ y+\alpha & y\in(-\infty,-\alpha)\end{cases} $$
So $$ x^{k+1} := z^{k+1} = g + y^{k+1} = g + shrink(x^k-g,\tau\lambda). $$
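For reference, here is that case in code (a small numpy sketch; the function names are my own):

```python
import numpy as np

def shrink(y, alpha):
    # Elementwise soft-thresholding: sgn(y) * max(|y| - alpha, 0).
    return np.sign(y) * np.maximum(np.abs(y) - alpha, 0.0)

def prox_translated_l1(xk, g, tau, lam):
    # prox_{tau*G}(xk) for G(x) = lam * ||x - g||_1:  g + shrink(xk - g, tau*lam)
    return g + shrink(xk - g, tau * lam)
```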
But I don't know how to solve it when $G(x) = \lambda ||Ax+b||_1$.
There is no known closed-form expression for this (unless $A$ has some special structure, such as orthogonality). If there were, it would be extremely useful in convex optimization.
To evaluate this prox-operator, you would have to solve the optimization problem using some numerical method.
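For example, a simple (if not the most efficient) option is to hand the subproblem to a general-purpose solver; a rough sketch with CVXPY (the naming is mine):

```python
import cvxpy as cp
import numpy as np

def prox_numeric(xk, A, b, tau, lam):
    # Numerically evaluate prox_{tau*G}(xk) for G(x) = lam * ||A x + b||_1.
    z = cp.Variable(xk.shape[0])
    obj = cp.Minimize(cp.sum_squares(z - xk) / (2 * tau) + lam * cp.norm1(A @ z + b))
    cp.Problem(obj).solve()
    return z.value
```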
But why do you need to evaluate this prox-operator? You can use proximal algorithms in a way that decouples $ A$ from the $1$-norm.
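For illustration, here is what such a decoupling can look like with a primal-dual (Chambolle-Pock type) iteration for a generic problem $\min_x f(x) + \lambda||Ax+b||_1$: the $1$-norm only ever enters through an elementwise clipping, and $A$ only through matrix-vector products. This is a rough, untuned sketch with my own naming:

```python
import numpy as np

def chambolle_pock(prox_f, A, b, lam, x0, n_iter=500):
    # Primal-dual sketch for  min_x f(x) + lam * ||A x + b||_1.
    # prox_f(v, t) must return argmin_x (1/(2t))*||x - v||^2 + f(x).
    L = np.linalg.norm(A, 2)        # spectral norm of A
    tau = sigma = 1.0 / L           # step sizes satisfying tau*sigma*L**2 <= 1
    x = x0.copy()
    x_bar = x0.copy()
    y = np.zeros(A.shape[0])        # dual variable
    for _ in range(n_iter):
        # Dual update: the 1-norm appears only through clipping (prox of its conjugate).
        y = np.clip(y + sigma * (A @ x_bar + b), -lam, lam)
        # Primal update: only prox_f and a multiplication by A.T are needed.
        x_new = prox_f(x - tau * (A.T @ y), tau)
        x_bar = 2.0 * x_new - x     # extrapolation (theta = 1)
        x = x_new
    return x
```

In particular, if $f$ happens to be the quadratic $\frac{1}{2\tau}||x - x^k||_2^2$ from your prox, then `prox_f(v, t)` is simply the weighted average $(\tau v + t\,x^k)/(\tau + t)$, so every ingredient of the iteration is cheap.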
Edit: One important example of special structure in $A$ is when $A A^T = \alpha I$ for some constant $\alpha > 0$ (using $\alpha$ rather than $\lambda$, which is already taken by the weight on the $1$-norm). In this case, the prox-operator can be evaluated using the technique on slide 8-8 of Vandenberghe's 236C notes: http://www.seas.ucla.edu/~vandenbe/236C/lectures/proxop.pdf
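Concretely, if $AA^T = \alpha I$, the composition-with-affine-mapping rule from those slides gives (as far as I can tell; do check against the notes) $$ prox_{\tau G}(x^k) = x^k + \frac{1}{\alpha} A^T\big(shrink(Ax^k+b,\ \alpha\tau\lambda) - (Ax^k+b)\big), $$ which costs only two matrix-vector products and one shrink. A sketch:

```python
import numpy as np

def shrink(y, alpha):
    return np.sign(y) * np.maximum(np.abs(y) - alpha, 0.0)

def prox_affine_l1(xk, A, b, tau, lam, alpha):
    # Closed form valid only under the assumption A @ A.T == alpha * I.
    v = A @ xk + b
    return xk + A.T @ (shrink(v, alpha * tau * lam) - v) / alpha
```

With $A = I$, $b = 0$, $\alpha = 1$ this reduces to the plain shrink from the question, as it should.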