Advantage of Proximal Gradient Descent Over Alternating Minimization Approach?


I have a convex function $f(x)$ which I decompose into a sum of two convex functions, $f(x) = g(x) + k(x)$, where $g(x)$ is smooth and differentiable and $k(x)$ is non-smooth and non-differentiable. I can compute the proximal operator of $k(x)$, so I can solve the problem with the proximal gradient descent algorithm. However, I have seen other people solve it with an alternating minimization approach instead. I was wondering: what are the obvious advantages of the alternating minimization approach over proximal gradient descent?