Suppose the minimization problem is
$$\operatorname*{arg\,min}_x\; f(x) + g(x)$$
where $f$ is nonconvex and $g$ is convex. Suppose we solve it with ADMM via the splitting
$$\operatorname*{arg\,min}_{x_1,x_2}\; f(x_1) + g(x_2) \quad \text{s.t.}\quad x_1 = x_2,$$
where the $x_2$ subproblem is solved exactly in the standard way, while the $x_1$ subproblem is solved heuristically, so that the new value of $f(x_1)$ plus its penalty term is never larger than the value from the previous iteration.
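For concreteness, here is a toy sketch of the scheme I have in mind (all concrete choices here, i.e. $f$, $g$, $\rho$, and using a backtracking gradient step as the heuristic, are just illustrative):

```python
# Scaled-dual ADMM on min f(x1) + g(x2) s.t. x1 = x2,
# with a nonconvex f, a convex g, and a descent heuristic for x1.
# All concrete choices below are illustrative only.

def f(x):                      # nonconvex term
    return x**4 - 3 * x**2

def df(x):                     # derivative of f
    return 4 * x**3 - 6 * x

def g(x):                      # convex term
    return (x - 1) ** 2

rho = 2.0                      # penalty parameter
x1 = x2 = u = 0.5              # u is the scaled dual variable

def x1_step(x1, x2, u):
    """Backtracking gradient step on h(x) = f(x) + (rho/2)(x - x2 + u)^2,
    accepted only if h does not increase (the 'heuristic' x1 update)."""
    h = lambda x: f(x) + 0.5 * rho * (x - x2 + u) ** 2
    grad = df(x1) + rho * (x1 - x2 + u)
    t = 1.0
    while t > 1e-12 and h(x1 - t * grad) > h(x1):
        t *= 0.5
    cand = x1 - t * grad
    return cand if h(cand) <= h(x1) else x1

for _ in range(200):
    x1 = x1_step(x1, x2, u)
    # exact x2 subproblem: argmin_x2 g(x2) + (rho/2)(x1 - x2 + u)^2,
    # which has the closed form below for this quadratic g
    x2 = (2 + rho * (x1 + u)) / (2 + rho)
    u += x1 - x2
```

On this toy instance the consensus gap $|x_1 - x_2|$ shrinks and both iterates approach the same point, but that is exactly the behavior I'd like to see justified in general.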
Would this produce at least a sub-optimal result that is close to the optimal one?
Has anyone done this kind of analysis? If yes, can someone point me to a link so I can read up?
Thanks.
$f(x_1)+g(x_2)$ will converge to a local optimum, as shown in http://arxiv.org/abs/1409.8033. Note that the iterates $x_1$ and $x_2$ themselves might not converge (cf. Section 3.2.1 of Boyd et al., "Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers").