Does the concave-convex procedure (CCCP) avoid local minima?


I'm learning convex optimization and have just learnt about the concave-convex procedure (CCCP) for minimizing objective functions that can be written as a convex function plus a concave function. I studied an example and found that CCCP can descend into a local minimum. Here's the example:

$$\mbox{Minimize }\,f(x) = ax-bx^2+cx^4,$$

where $\,a=0.5,\,b=2,\,c=0.25\,$ are all positive. The convex part is $\,ax+cx^4$ and the concave part is $-bx^2$. CCCP finds the minimum iteratively starting from an initial guess $\,x_0$. To obtain $\,x_{n+1}\,$ from $\,x_n$, the concave part is linearized around $\,x_n\,$ to obtain

$$f_{x_n}(x)=ax+cx^4-bx_n^2-2bx_n(x-x_n).$$
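A quick numerical sanity check of this step (a sketch, with the constants from the example): because $-bx^2$ is concave, its tangent at $x_n$ lies above it, so $f_{x_n}(x)\ge f(x)$ everywhere with equality at $x=x_n$, i.e. $f_{x_n}$ majorizes $f$. Indeed, $f_{x_n}(x)-f(x)=b(x-x_n)^2\ge 0$.

```python
# Check that the CCCP surrogate majorizes f for the example
# f(x) = a*x - b*x**2 + c*x**4 with a=0.5, b=2, c=0.25.
a, b, c = 0.5, 2.0, 0.25

def f(x):
    return a * x - b * x**2 + c * x**4

def f_lin(x, xn):
    # convex part kept, concave part -b*x**2 linearized at xn
    return a * x + c * x**4 - b * xn**2 - 2 * b * xn * (x - xn)

xn = 1.0
xs = [i / 10.0 - 5.0 for i in range(101)]  # grid on [-5, 5]
assert all(f_lin(x, xn) >= f(x) - 1e-12 for x in xs)  # majorization
assert abs(f_lin(xn, xn) - f(xn)) < 1e-12             # touches f at xn
```

This majorization is what guarantees the CCCP iterates never increase $f$: minimizing the surrogate can only move downhill on the original objective.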

One then minimizes the convex function $f_{x_n}(x)$ to find

$$x_{n+1}=\left(\frac{2bx_n-a}{4c}\right)^{1/3}.$$

The iteration converges to the global minimum at $x=-2.05979\,$ if $\,x_0<x_c=0.125494$ (the local maximum of $f$), and to the local minimum at $x=1.934298\,$ if $\,x_0>x_c$. Also, CCCP converges linearly (first order, geometric error decay), slower than Newton's method, which converges quadratically.
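The iteration is easy to run directly (a minimal sketch; note the update requires a real cube root, and its argument $(2bx_n-a)/(4c)$ can be negative, so a sign-aware cube root is needed rather than a fractional power):

```python
import math

# CCCP update for f(x) = a*x - b*x**2 + c*x**4
a, b, c = 0.5, 2.0, 0.25

def cbrt(v):
    # real cube root, valid for negative v (v**(1/3) would be complex)
    return math.copysign(abs(v) ** (1.0 / 3.0), v)

def cccp(x0, iters=200):
    x = x0
    for _ in range(iters):
        x = cbrt((2 * b * x - a) / (4 * c))
    return x

# Starting below x_c ≈ 0.125494 reaches the global minimum,
# starting above it reaches the local minimum.
print(cccp(0.0))  # ≈ -2.05979
print(cccp(1.0))  # ≈  1.934298
```

The linear rate is visible here too: near either fixed point the iteration map $g(x)=\bigl((2bx-a)/(4c)\bigr)^{1/3}$ has $|g'| \approx 0.3$, so the error shrinks by a roughly constant factor per step.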

Am I understanding CCCP correctly? In what sense is difference-of-convex (DC) programming easier to solve, then, compared with a general non-convex optimization problem?