I came across this question about ADMM.
Consider the linearly constrained minimisation problem: \begin{equation} \begin{array}{cl} {\min} & {\frac{1}{2} \langle \mathbf{x},Q \mathbf{x} \rangle + \langle \mathbf{c},\mathbf{y} \rangle + \delta_{\mathbb{R}_+^m}(\mathbf{z})} \\ {\mathrm{s.t.}} & {A \mathbf{x} + \mathbf{z}=\mathbf{b},} \\ {} & {B\mathbf{x} + \overline{B}\mathbf{y} = \overline{\mathbf{b}}} \end{array} \end{equation}
a) What is the (augmented) Lagrangian function?
b) How do I use a two-block ADMM to solve this problem? I am not sure how to find the next iterates.
I would appreciate any help. Thank you
Let \begin{equation} \mathbf{u} = \begin{pmatrix} \mathbf{y} \\ \mathbf{z} \end{pmatrix}, \quad f(\mathbf{x})=\frac{1}{2} \langle \mathbf{x}, Q\mathbf{x} \rangle, \quad g(\mathbf{u})=g(\mathbf{y},\mathbf{z}) = \langle \mathbf{c},\mathbf{y} \rangle + \delta_{\mathbb{R}^m_+}(\mathbf{z}) \end{equation} and \begin{equation} A' = \begin{pmatrix} A \\ B \end{pmatrix}, \quad B' = \begin{pmatrix} 0 & I \\ \bar{B} & 0 \end{pmatrix}, \quad \mathbf{b}' = \begin{pmatrix} \mathbf{b} \\ \bar{\mathbf{b}} \end{pmatrix}, \end{equation} so that $B'\mathbf{u} = \begin{pmatrix} \mathbf{z} \\ \bar{B}\mathbf{y} \end{pmatrix}$ reproduces the two constraints. Then we can rewrite the primal problem as \begin{equation} \begin{array}{cl} {\min} & {f(\mathbf{x})+g(\mathbf{u})} \\ {\mathrm{s.t.}} & {A' \mathbf{x} + B' \mathbf{u} = \mathbf{b}'} \end{array} \end{equation} Its corresponding augmented Lagrangian function is \begin{equation} \begin{aligned} \mathcal{L}_{\sigma}(\mathbf{x},\mathbf{y},\mathbf{z};\lambda, \mu)= & \frac{1}{2}\langle\mathbf{x}, Q \mathbf{x}\rangle+\langle\mathbf{c}, \mathbf{y}\rangle+\delta_{\mathbb{R}_{+}^{m}}(\mathbf{z}) \\ &+ \langle \lambda, A\mathbf{x}+\mathbf{z}-\mathbf{b} \rangle + \frac{\sigma}{2}\|A\mathbf{x}+\mathbf{z}-\mathbf{b} \|^2 \\ &+ \langle \mu, B\mathbf{x}+\bar{B}\mathbf{y}-\bar{\mathbf{b}} \rangle + \frac{\sigma}{2}\|B\mathbf{x}+\bar{B}\mathbf{y}-\bar{\mathbf{b}} \|^2 \\ \end{aligned} \end{equation}
With this splitting, the ADMM iterates are \begin{equation} \begin{array}{l} {\mathbf{x}^{(k+1)} = \arg\min_{\mathbf{x}} \mathcal{L}_{\sigma}(\mathbf{x},\mathbf{y}^{(k)},\mathbf{z}^{(k)};\lambda^{(k)}, \mu^{(k)})} \\ {\mathbf{u}^{(k+1)} = \arg\min_{\mathbf{y}, \mathbf{z}} \mathcal{L}_{\sigma}(\mathbf{x}^{(k+1)},\mathbf{y},\mathbf{z};\lambda^{(k)}, \mu^{(k)})} \\ {\lambda^{(k+1)} = \lambda^{(k)} + \sigma (A\mathbf{x}^{(k+1)}+\mathbf{z}^{(k+1)}-\mathbf{b})} \\ {\mu^{(k+1)} = \mu^{(k)} + \sigma (B\mathbf{x}^{(k+1)}+\bar{B}\mathbf{y}^{(k+1)}-\bar{\mathbf{b}})} \end{array} \end{equation} Note that the dual steps are gradient ascent on the multipliers, so the residuals enter with a plus sign, consistent with the $+\langle \lambda, \cdot \rangle$ and $+\langle \mu, \cdot \rangle$ terms in $\mathcal{L}_{\sigma}$. Since $\mathbf{y}$ and $\mathbf{z}$ appear in separate penalty terms of $\mathcal{L}_{\sigma}$, the update of $\mathbf{u}$ decouples into \begin{equation} \begin{array}{l} {\mathbf{y}^{(k+1)} = \arg\min_{\mathbf{y}} \mathcal{L}_{\sigma}(\mathbf{x}^{(k+1)},\mathbf{y},\mathbf{z}^{(k)};\lambda^{(k)}, \mu^{(k)})} \\ {\mathbf{z}^{(k+1)} = \arg\min_{\mathbf{z}} \mathcal{L}_{\sigma}(\mathbf{x}^{(k+1)},\mathbf{y}^{(k+1)},\mathbf{z};\lambda^{(k)}, \mu^{(k)})} \end{array} \end{equation} and the two subproblems can even be solved in parallel.
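For concreteness, here is my own derivation of the subproblem solutions from the augmented Lagrangian above (assuming $Q + \sigma(A^{\top}A + B^{\top}B)$ and $\bar{B}^{\top}\bar{B}$ are invertible): the $\mathbf{x}$- and $\mathbf{y}$-steps reduce to linear systems, and the $\mathbf{z}$-step is a projection onto $\mathbb{R}^m_+$: \begin{equation} \begin{aligned} \mathbf{x}^{(k+1)} &= \big(Q + \sigma A^{\top}A + \sigma B^{\top}B\big)^{-1}\Big(A^{\top}\big(\sigma(\mathbf{b}-\mathbf{z}^{(k)})-\lambda^{(k)}\big) + B^{\top}\big(\sigma(\bar{\mathbf{b}}-\bar{B}\mathbf{y}^{(k)})-\mu^{(k)}\big)\Big), \\ \mathbf{y}^{(k+1)} &= \big(\sigma \bar{B}^{\top}\bar{B}\big)^{-1}\Big(\bar{B}^{\top}\big(\sigma(\bar{\mathbf{b}}-B\mathbf{x}^{(k+1)})-\mu^{(k)}\big)-\mathbf{c}\Big), \\ \mathbf{z}^{(k+1)} &= \Pi_{\mathbb{R}^m_+}\Big(\mathbf{b}-A\mathbf{x}^{(k+1)}-\tfrac{1}{\sigma}\lambda^{(k)}\Big), \end{aligned} \end{equation} where $\Pi_{\mathbb{R}^m_+}$ takes the componentwise maximum with $0$.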
Actually, this problem is a quadratic program with equality and inequality constraints: eliminating the slack variable $\mathbf{z}$ gives \begin{equation} \begin{array}{cl} {\min} & {\frac{1}{2}\langle\mathbf{x}, Q \mathbf{x}\rangle+\langle\mathbf{c}, \mathbf{y}\rangle} \\ {\text{s.t.}} & A \mathbf{x} \leq \mathbf{b} \\ {} & {B \mathbf{x}+\bar{B} \mathbf{y}=\overline{\mathbf{b}}} \\ {} & {\mathbf{x} \in \mathbb{R}^{n}, \mathbf{y} \in \mathbb{R}^{n}} \end{array} \end{equation} One can refer to the paper ``Distributed Convex Optimization with Many Non-Linear Constraints'' for an alternative method.
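To check that the scheme works, here is a minimal numerical sketch in Python/NumPy (my own illustration, not from the question): it runs the two-block ADMM on a small random instance, using closed-form subproblem solutions and assuming $Q + \sigma(A^{\top}A + B^{\top}B)$ and $\bar{B}^{\top}\bar{B}$ are invertible.

```python
# Two-block ADMM sketch for
#   min 0.5<x,Qx> + <c,y>  s.t.  Ax + z = b, z >= 0,  Bx + Bbar*y = bbar
# on a small random instance (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 4, 3, 2                    # dims of x, of z (inequality block), of y
Q = np.eye(n)                        # positive definite Q
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, n))
Bbar = rng.standard_normal((p, p))   # square, generically invertible
c = rng.standard_normal(p)
b = rng.standard_normal(m) + 1.0
bbar = rng.standard_normal(p)

sigma = 1.0
x, y, z = np.zeros(n), np.zeros(p), np.zeros(m)
lam, mu = np.zeros(m), np.zeros(p)

Hx = Q + sigma * (A.T @ A + B.T @ B)   # x-step: fixed linear system matrix
Hy = sigma * Bbar.T @ Bbar             # y-step: fixed linear system matrix

for k in range(5000):
    # x-update: stationarity of the augmented Lagrangian in x
    rhs = A.T @ (sigma * (b - z) - lam) + B.T @ (sigma * (bbar - Bbar @ y) - mu)
    x = np.linalg.solve(Hx, rhs)
    # y-update (independent of z)
    y = np.linalg.solve(Hy, Bbar.T @ (sigma * (bbar - B @ x) - mu) - c)
    # z-update: projection onto the nonnegative orthant
    z = np.maximum(0.0, b - A @ x - lam / sigma)
    # dual ascent on both multipliers
    lam = lam + sigma * (A @ x + z - b)
    mu = mu + sigma * (B @ x + Bbar @ y - bbar)

print("primal residuals:",
      np.linalg.norm(A @ x + z - b),
      np.linalg.norm(B @ x + Bbar @ y - bbar))
```

Both primal residuals should shrink toward zero, and $\mathbf{z}$ stays nonnegative by construction of the projection step.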