ADMM on non-convex problem


Suppose the minimization problem is

$$\operatorname*{arg\,min}_{x} \; f(x) + g(x)$$

where function $f$ is not convex but $g$ is. If we solve it using ADMM

$$\operatorname*{arg\,min}_{x_1,x_2} \; f(x_1) + g(x_2) \quad \text{s.t.} \quad x_1 = x_2,$$

the $x_2$-subproblem is solved in the standard (exact) way, while the $x_1$-subproblem is solved heuristically: the update only guarantees that the new value of $f(x_1)$ plus the penalty term is no larger than its value at the previous iterate.
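To make the setup concrete, here is a minimal sketch of the scheme described above, under assumptions of my own choosing: a toy nonconvex $f(x) = x^4 - x^2$, a convex $g(x) = (x-1)^2$, scaled-form ADMM, and a backtracked gradient step as the "heuristic" $x_1$-update (it only enforces decrease of $f(x_1)$ plus the penalty, as in the question). None of this is taken from a specific paper; it is just one way to instantiate the idea.

```python
# Scaled-form ADMM on  min_x f(x) + g(x)  via the split
#   min_{x1,x2} f(x1) + g(x2)  s.t.  x1 = x2.
# f is nonconvex, g is convex; both choices below are illustrative.

def f(x):              # nonconvex term (double-well)
    return x**4 - x**2

def df(x):             # its derivative
    return 4 * x**3 - 2 * x

rho = 4.0              # penalty parameter (assumed; any rho > 0 here)
x1, x2, u = 0.5, 0.5, 0.0

for k in range(200):
    # x1-subproblem, heuristic: one backtracked gradient step that is
    # only required to NOT increase f(x1) + (rho/2)(x1 - x2 + u)^2.
    def aug(x):
        return f(x) + 0.5 * rho * (x - x2 + u) ** 2
    grad = df(x1) + rho * (x1 - x2 + u)
    t, old = 0.1, aug(x1)
    while aug(x1 - t * grad) > old and t > 1e-12:
        t *= 0.5       # halve the step until the objective decreases
    x1 = x1 - t * grad

    # x2-subproblem, exact: argmin (x-1)^2 + (rho/2)(x1 - x + u)^2
    # is a quadratic with the closed-form minimizer below.
    x2 = (2.0 + rho * (x1 + u)) / (2.0 + rho)

    # dual (scaled multiplier) update
    u += x1 - x2

print(x1, x2, f(x1) + (x2 - 1) ** 2)
```

With these choices the iterates settle near a stationary point of $x^4 - x^2 + (x-1)^2$ (around $x \approx 0.794$, where $4x^3 = 2$), and the gap $|x_1 - x_2|$ shrinks; whether that behavior holds in general is exactly the question being asked.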

Would this produce at least a suboptimal result that is close to the optimum?

Has anyone done this kind of analysis? If so, could someone point me to a reference so I can read up?

Thanks.


Best answer:

$f(x_1)+g(x_2)$ will converge to a local optimum, as shown in http://arxiv.org/abs/1409.8033. Note that the iterates $x_1$ and $x_2$ themselves might not converge (cf. Section 3.2.1 of Boyd et al., "Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers").

Another answer:

A recent paper at ftp://ftp.math.ucla.edu/pub/camreport/cam15-62.pdf analyzes ADMM with nonconvex and nonsmooth functions. It establishes a set of conditions under which the point sequence converges to a stationary point of the augmented Lagrangian. For your problem, the conditions allow $f(x_1)$ to be the sum of a Lipschitz continuous function and a nonconvex nonsmooth function (such as the $\ell_{1/2}$ quasi-norm or a piecewise linear function) and $g(x_2)$ to be a Lipschitz continuous function. None of these functions needs to be convex.