Convex optimisation using ADMM


In the consensus ADMM presentation by Boyd (p. 37 of https://web.stanford.edu/~boyd/papers/pdf/admm_slides.pdf), it is stated that:

The summation $\sum\limits_{i = 1}^{N}y_i^k = 0$ holds for the dual variables $y_i$ of the consensus equality constraints at every step $k$. Why is that the case? I would appreciate an explanation and a proof.

Thanks

The identity $\sum_{i=1}^N y_i^k = 0$ actually follows from the stationarity condition $\nabla L(x,y,z) = 0$, where $L(x,y,z)$ is the Lagrangian of the original problem.
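To spell out that stationarity argument (a sketch, using the plain Lagrangian of the consensus problem in the notation of the slides): the only term coupling $z$ to the rest is the linear one, so differentiating in $z$ gives
\begin{equation}
L(x, z, y) = \sum_{i=1}^{N} \Big( f_i(x_i) + y_i^{\top}(x_i - z) \Big),
\qquad
\nabla_z L = -\sum_{i=1}^{N} y_i = 0
\;\Longrightarrow\;
\sum_{i=1}^{N} y_i = 0.
\end{equation}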

But we can also observe this result directly from the ADMM iterations. The updates are \begin{equation} \begin{aligned} x_{i}^{k+1} &:=\underset{x_{i}}{\operatorname{argmin}}\left(f_{i}\left(x_{i}\right)+y_{i}^{k T}\left(x_{i}-z^{k}\right)+(\rho / 2)\left\|x_{i}-z^{k}\right\|_{2}^{2}\right), \\ z^{k+1} &:=\frac{1}{N} \sum_{i=1}^{N}\left(x_{i}^{k+1}+(1 / \rho) y_{i}^{k}\right) ,\\ y_{i}^{k+1} &:=y_{i}^{k}+\rho\left(x_{i}^{k+1}-z^{k+1}\right). \end{aligned} \end{equation} Summing the $y$-update over $i$ gives \begin{equation} \sum_{i=1}^N y_i^{k+1} = \sum_{i=1}^N y_i^k + \rho \left(\sum_{i=1}^N x_i^{k+1}- \sum_{i=1}^{N} z^{k+1} \right), \end{equation} while the $z$-update gives \begin{equation} \sum_{i=1}^N z^{k+1} = N \cdot z^{k+1} = \sum_{i=1}^N\left(x_{i}^{k+1}+(1 / \rho) y_{i}^{k}\right). \end{equation} Substituting the second identity into the first, \begin{equation} \sum_{i=1}^N y_i^{k+1} = \sum_{i=1}^N y_i^k + \rho \left(\sum_{i=1}^N x_i^{k+1}- \sum_{i=1}^{N}\left(x_{i}^{k+1}+(1 / \rho) y_{i}^{k}\right) \right) = \sum_{i=1}^N y_i^k - \sum_{i=1}^N y_i^k = 0. \end{equation} So the dual variables sum to zero after every iteration, regardless of their initialisation.
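The cancellation above is easy to check numerically. Below is a minimal sketch on a toy consensus problem with $f_i(x_i) = \tfrac{1}{2}\|x_i - a_i\|_2^2$ (the quadratic $f_i$ and the data $a_i$ are my own assumptions, chosen so the $x$-update has a closed form: $x_i^{k+1} = (a_i + \rho z^k - y_i^k)/(1+\rho)$); the assertion verifies $\sum_i y_i^k = 0$ after every iteration.

```python
import numpy as np

# Toy consensus problem (assumed for illustration):
#   minimize sum_i 0.5 * ||x_i - a_i||^2  subject to  x_i = z for all i.
rng = np.random.default_rng(0)
N, d, rho = 5, 3, 1.0
a = rng.normal(size=(N, d))   # hypothetical per-agent data
x = np.zeros((N, d))
y = np.zeros((N, d))          # dual variables y_i
z = np.zeros(d)               # consensus variable

for k in range(20):
    # x-update: argmin_x f_i(x) + y_i^T (x - z) + (rho/2)||x - z||^2,
    # closed form for the quadratic f_i above
    x = (a + rho * z - y) / (1 + rho)
    # z-update: average of x_i + (1/rho) y_i
    z = (x + y / rho).mean(axis=0)
    # y-update
    y = y + rho * (x - z)
    # the duals sum to (numerically) zero after every update
    assert np.allclose(y.sum(axis=0), 0.0)

print(np.abs(y.sum(axis=0)).max())  # on the order of machine precision
```

The same cancellation holds for any $f_i$, since the argument only uses the $z$- and $y$-updates, not the form of the $x$-update.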