What point will ADMM converge to? A feasible point, a stationary point, a local optimum, or a global optimum?


In Section 3.2.1 of Boyd's great ADMM paper, ''Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers'', it says that as the iteration index $k\to \infty$, when solving problem P1 (stated below), the ADMM iterates satisfy

  • residual convergence, i.e., $r^k\to0$, where $r^k=Ax^k+Bz^k-c$,
  • objective convergence, i.e., $f(x^k)+g(z^k)\to p^*$, where $f$ and $g$ are two convex functions and $p^*$ is the optimal value of problem P1.

Boyd also says that $x^k$ and $z^k$ need not converge to optimal values; that is, even when ADMM converges in the two senses above, $x^k$ and $z^k$ may not converge to an optimal solution of problem P1.

This statement is very difficult for me to understand. When $f(x^k)+g(z^k)\to p^*$ and $r^k\to0$, why may $x^k$ and $z^k$ not converge to an optimal solution? My understanding is: (1) when $r^k\to0$, $(x^k,z^k)$ approaches a feasible solution of P1; (2) when $f(x^k)+g(z^k)\to p^*$, that feasible solution $(x^k,z^k)$ is optimal for P1.

Can anyone help me understand the convergence of ADMM?

$$P1:\quad \min_{x,z}\ f(x)+g(z)\quad \text{s.t.}\quad Ax+Bz=c$$
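To see the two convergence modes concretely, here is a minimal scaled-form ADMM sketch (my own illustration, not code from the paper) on a toy instance of P1 with quadratic terms $f(x)=\tfrac12(x-a)^2$, $g(z)=\tfrac12(z-b)^2$, $A=1$, $B=-1$, $c=0$, so both subproblem updates have closed forms and the residual is $r^k=x^k-z^k$:

```python
# Toy scaled-form ADMM: minimize 0.5*(x-a)^2 + 0.5*(z-b)^2  s.t.  x - z = 0.
# The optimum is x = z = (a+b)/2 with optimal value p* = (a-b)^2 / 4.
a, b, rho = 1.0, 3.0, 1.0
x = z = u = 0.0  # u is the scaled dual variable
for k in range(100):
    # x-update: argmin_x 0.5*(x-a)^2 + (rho/2)*(x - z + u)^2
    x = (a + rho * (z - u)) / (1.0 + rho)
    # z-update: argmin_z 0.5*(z-b)^2 + (rho/2)*(x - z + u)^2
    z = (b + rho * (x + u)) / (1.0 + rho)
    # scaled dual update
    u = u + x - z

p_star = 0.25 * (a - b) ** 2
print("residual |x - z|:", abs(x - z))                        # tends to 0
print("objective gap:", 0.5*(x-a)**2 + 0.5*(z-b)**2 - p_star)  # tends to 0
```

On this strongly convex instance the iterates themselves also converge to the optimum; Boyd's caveat concerns the general case, where residual and objective convergence alone do not force the sequence $(x^k,z^k)$ to have a limit.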

You should be a bit more careful with your notation; then the answer would likely be clearer to you.

''why $x^k$ and $z^k$ may not be the optimal solution''

Usually methods don't converge in finitely many iterations, so the iterates themselves are never exact solutions; a limiting process is involved. In particular, the issue is that $x^k$ and $z^k$ might not converge at all: even though the residuals and objective values converge, the sequence of iterates need not have a limit in your vector space.