I have an optimization problem of the form:
\begin{align*} \text{minimize} &\quad f(x) + g(x) \\ \text{such that} & \quad Ax = b \end{align*}
Where $f,g$ are both convex (but not differentiable), and I know the proximal operators for $f,g$ separately.
From reading Parikh and Boyd, this problem looks almost exactly like the kind solved by the Alternating Direction Method of Multipliers (ADMM, which is closely related to Douglas-Rachford splitting). Indeed, my problem almost exactly matches section 4.4 in this text, except that in the text it is unconstrained. It is just:
\begin{align*} \text{minimize} &\quad f(x) + g(x) \end{align*}
In fact, in equation 4.9, they show that ADMM can be derived from an augmented Lagrangian, which they obtain by splitting the variable and adding a consensus constraint:
\begin{align*} \text{minimize} &\quad f(x) + g(z) \\ \text{such that} & \quad x = z \end{align*}
But I would like to add my own constraint, and I do not know how to do that.
I believe this is possible, because there is a popular work called Robust PCA, where they modify the augmented Lagrangian to add their own constraint. But their case seems quite different from mine, as they are optimizing over two variables (which nicely match up with the $x$ and $z$ in the augmented Lagrangian formulation), but I am not.
How can I use ADMM to solve my problem? If it is not possible, is there a more suitable method?
If you are using ADMM, then you should read this paper by Boyd et al.
The first form the paper deals with is: $$\begin{align*} \text{minimize} &\quad f(x) + g(z) \\ \text{subject to} & \quad Ax + Bz = c. \end{align*}$$ Your problem can easily be written in this form: $$\begin{align*} \text{minimize} &\quad f(x) + g(z) \\ \text{subject to} & \quad \begin{bmatrix}A \\ I\end{bmatrix}x + \begin{bmatrix}0 \\ -I\end{bmatrix}z = \begin{bmatrix}b \\ 0\end{bmatrix}. \end{align*}$$
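With this splitting, the scaled-form ADMM iterations (Boyd et al., section 3.1.1) become
$$\begin{align*} x^{k+1} &= \operatorname*{argmin}_x \; f(x) + \tfrac{\rho}{2}\|Ax - b + u_1^k\|_2^2 + \tfrac{\rho}{2}\|x - z^k + u_2^k\|_2^2, \\ z^{k+1} &= \operatorname{prox}_{g/\rho}\!\left(x^{k+1} + u_2^k\right), \\ u_1^{k+1} &= u_1^k + A x^{k+1} - b, \\ u_2^{k+1} &= u_2^k + x^{k+1} - z^{k+1}. \end{align*}$$
Note that the $z$-update is exactly the prox of $g$ you already have, but the $x$-update is *not* the prox of $f$ alone: it is $f$ plus a quadratic coupled through $A$, so in general it is a small subproblem of its own (a linear solve when $f$ is quadratic, a lasso-type problem when $f = \|\cdot\|_1$, and so on).

Here is a minimal numerical sketch of these iterations. It is not your exact problem: for concreteness I take $f(x)=\tfrac12\|x-c\|_2^2$ so that the $x$-update is a closed-form linear solve, and $g=\|\cdot\|_1$, whose prox is soft-thresholding. All names, the penalty $\rho$, and the fixed iteration count are illustrative choices:

```python
import numpy as np

def soft_threshold(v, t):
    """Prox of t*||.||_1: element-wise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Toy instance: minimize 0.5*||x - c||^2 + ||z||_1  s.t.  Ax = b, x = z.
rng = np.random.default_rng(0)
m, n = 3, 6
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n)       # built from a point, so Ax = b is feasible
c = rng.standard_normal(n)
rho = 1.0

x, z = np.zeros(n), np.zeros(n)
u1, u2 = np.zeros(m), np.zeros(n)    # scaled duals for Ax = b and x = z

# Setting the gradient of the x-subproblem to zero gives the normal matrix
# (I + rho*A^T A + rho*I), which we factor once outside the loop.
M = (1.0 + rho) * np.eye(n) + rho * (A.T @ A)

for _ in range(1000):
    # x-update: linear solve (closed form because f is quadratic here)
    x = np.linalg.solve(M, c + rho * A.T @ (b - u1) + rho * (z - u2))
    # z-update: prox of g at x + u2
    z = soft_threshold(x + u2, 1.0 / rho)
    # scaled dual updates, one per block of the stacked constraint
    u1 += A @ x - b
    u2 += x - z

print(np.linalg.norm(A @ x - b), np.linalg.norm(x - z))
```

At convergence both printed residuals should be near zero, i.e. the iterate satisfies $Ax = b$ and $x = z$, so the two copies agree on a feasible point. With a general nonsmooth $f$ you would replace the `np.linalg.solve` line with whatever inner method solves your $x$-subproblem.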