I am currently reading *Atomic Norm Denoising with Applications to Line Spectral Estimation* by Bhaskar et al. In Appendix E, an ADMM algorithm is presented to solve the SDP \begin{equation*} \min_{t, u, x, Z} \frac{1}{2} \| x - y \|_2^2 + \frac{\tau}{2}(t + u_1) \quad \text{s.t.} \quad Z = \begin{bmatrix} T(u) & x \\ x^{\mathsf{H}} & t \end{bmatrix}, \ Z \succeq 0, \end{equation*} where $\tau > 0$ is a regularisation parameter and $T(u)$ is the Hermitian Toeplitz matrix whose first row is $u$.
According to the paper, the augmented Lagrangian is \begin{equation*} L_{\rho}(t, u, x, Z, \Lambda) := \frac{1}{2} \| x - y \|_2^2 + \frac{\tau}{2}(t + u_1) + \left\langle \Lambda, Z - \begin{bmatrix} T(u) & x \\ x^{\mathsf{H}} & t \end{bmatrix} \right\rangle_F + \frac{\rho}{2} \left\| Z - \begin{bmatrix} T(u) & x \\ x^{\mathsf{H}} & t \end{bmatrix} \right\|_F^2, \end{equation*} where $\rho > 0$ is a penalty parameter.
The ADMM algorithm consists of the update steps \begin{align*} (t^{k + 1}, u^{k + 1}, x^{k + 1}) & \leftarrow \text{argmin}_{t, u, x} L_{\rho}(t, u, x, Z^k, \Lambda^k) \\ Z^{k + 1} & \leftarrow \text{argmin}_{Z \succeq 0} L_{\rho}(t^{k + 1}, u^{k + 1}, x^{k + 1}, Z, \Lambda^k) \\ \Lambda^{k + 1} & \leftarrow \Lambda^k + \rho\left( Z^{k + 1} - \begin{bmatrix} T(u^{k + 1}) & x^{k + 1} \\ (x^{k + 1})^{\mathsf{H}} & t^{k + 1} \end{bmatrix}\right). \end{align*} These updates have a closed form: \begin{gather*} t^{k + 1} = Z_{n + 1, n + 1}^{k} + \frac{1}{\rho} \left( \Lambda_{n + 1, n + 1}^{k} - \frac{\tau}{2}\right) \\ x^{k + 1} = \frac{1}{2 \rho + 1}\left(y + 2 \rho z_1^k + 2\lambda_1^k\right) \\ u^{k + 1} = W\left(T^*\left (Z_0^k + \frac{1}{\rho} \Lambda_0^k\right) - \frac{\tau}{2 \rho} e_1\right), \end{gather*} where $W$ is a diagonal $n \times n$ matrix with the entries \begin{equation*} W_{i i} := \begin{cases} \frac{1}{n}, & i = 1, \\ \frac{1}{2(n - i + 1)}, & i > 1. \end{cases} \end{equation*} We partition each $Z$ as \begin{equation} Z = \begin{bmatrix} Z_0 & z_1 \\ z_1^{\mathsf{H}} & Z_{n + 1, n + 1} \end{bmatrix} \tag{1} \end{equation} and $\Lambda$ in the same manner.
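As an aside, the $Z$-update also has a well-known closed form: minimizing $L_\rho$ over $Z \succeq 0$ amounts to projecting $M - \Lambda/\rho$ onto the PSD cone, where $M$ is the assembled constraint matrix, and this projection is computed by zeroing the negative eigenvalues. A minimal NumPy sketch (function and variable names are mine, not from the paper):

```python
import numpy as np

def psd_project(A):
    """Frobenius-norm projection of a Hermitian matrix onto the
    PSD cone: eigendecompose and zero the negative eigenvalues."""
    w, V = np.linalg.eigh(A)
    return (V * np.maximum(w, 0.0)) @ V.conj().T

def z_update(T_u, x, t, Lam, rho):
    """Closed-form Z-update: Z = Pi_{>=0}(M - Lam / rho), where
    M = [[T(u), x], [x^H, t]] is the assembled constraint matrix."""
    n = x.size
    M = np.empty((n + 1, n + 1), dtype=complex)
    M[:n, :n] = T_u
    M[:n, n] = x
    M[n, :n] = x.conj()
    M[n, n] = t
    return psd_project(M - Lam / rho)
```

This follows by completing the square in the $Z$-subproblem: $\langle \Lambda, Z - M \rangle_F + \frac{\rho}{2}\|Z - M\|_F^2 = \frac{\rho}{2}\|Z - (M - \Lambda/\rho)\|_F^2 + \text{const}$.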
My question: I cannot reproduce the closed form for the $x$-update.
What I've tried: Dropping all terms independent of $x$, I got $$ \frac{\partial}{\partial x} \left(\frac{1}{2} \| x - y \|_2^2 + \frac{\tau}{2}(t + u_1)\right) = x - y $$ and, using the bilinearity of the inner product and the linearity of the trace, \begin{align*} \frac{\partial}{\partial x} \left\| Z - \begin{bmatrix} T(u) & x \\ x^{\mathsf{H}} & t \end{bmatrix} \right\|_F^2 & = \frac{\partial}{\partial x} \left(\text{Tr}\left(\begin{bmatrix} T(u) & x \\ x^{\mathsf{H}} & t \end{bmatrix}^{\mathsf{H}} \begin{bmatrix} T(u) & x \\ x^{\mathsf{H}} & t \end{bmatrix}\right) - 2 \Re\left( \text{Tr}\left(\begin{bmatrix} T(u) & x \\ x^{\mathsf{H}} & t \end{bmatrix}^{\mathsf{H}} Z \right) \right)\right) \\ & = \frac{\partial}{\partial x} \left(\text{Tr}\left(\begin{bmatrix} T(\bar{u}) & x^{\mathsf{H}} \\ x & t \end{bmatrix} \begin{bmatrix} T(u) & x \\ x^{\mathsf{H}} & t \end{bmatrix}\right) - 2 \Re\left( \text{Tr}\left(\begin{bmatrix} T(\bar{u}) & x^{\mathsf{H}} \\ x & t \end{bmatrix} Z \right) \right)\right) \\ & = \frac{\partial}{\partial x} \left(\sum_{k = 1}^{n} \left(\overline{x_k}^2 + x_k^2\right) - 2 \Re\left( 2 \sum_{k = 1}^{n} z_{n + 1, k} \Re(x_k) \right)\right) \\ & = \frac{\partial}{\partial x} \left(\sum_{k = 1}^{n} \left(\overline{x_k}^2 + x_k^2\right) \right) - 4 \frac{\partial}{\partial x} \left(\sum_{k = 1}^{n} \Re(z_{n + 1, k}) \Re(x_k) \right). \end{align*} Using Wirtinger calculus as described here, I got $\frac{\partial}{\partial x_k} \left(x_k^2 + \overline{x_k}^2\right) = x_k$ and $\frac{\partial}{\partial x_k} \Re(x_k) = \frac{1}{2}$, and thus the above expression reduces to \begin{align} x - 2 z_{1}. 
\end{align} In conclusion, \begin{align} \frac{\partial}{\partial x} L_{\rho}(t, u, x, Z, \Lambda) = x - y - 2 \lambda_1 + \frac{\rho}{2} \cdot (x - 2 z_1), \end{align} and setting this to zero yields \begin{align} \rho z_1 + y + 2 \lambda_1 = \left(1 + \frac{\rho}{2}\right) x, \end{align} i.e. \begin{align} x = \frac{2}{\rho + 2}\left(\rho z_1 + y + 2 \lambda_1\right), \end{align} which differs from the $x$-update in the paper. Where have I gone wrong?
The error is in the derivative of the last term: it should be $\frac{\rho}{2} \cdot 4(x - z_1) = 2\rho(x - z_1)$.
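Indeed, with this corrected term, setting the gradient to zero recovers the paper's update:

```latex
0 = x - y - 2\lambda_1 + 2\rho\,(x - z_1)
\iff (1 + 2\rho)\, x = y + 2\rho\, z_1 + 2\lambda_1
\iff x = \frac{1}{2\rho + 1}\left(y + 2\rho\, z_1 + 2\lambda_1\right).
```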
The easiest way to see this is to write the Frobenius norm as the sum of the squared entries (squared moduli in the complex case). With the partition (1), the only places where $x$ occurs are the last column, which contains $z_1 - x$, and the last row, which contains $z_1^{\mathsf{H}} - x^{\mathsf{H}}$; each contributes $\| x - z_1 \|_2^2$, so the $x$-dependent part of the squared norm is $2 \| x - z_1 \|_2^2$, from which the result follows.
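As a sanity check, one can verify numerically that the paper's closed-form $x$-update minimizes $L_\rho$ over $x$ for random data. A quick sketch (all names and the test data are mine, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho, tau, t = 5, 0.7, 0.5, 1.3

def herm(A):
    """Hermitian part of a square matrix."""
    return (A + A.conj().T) / 2

def toeplitz_herm(u):
    """Hermitian Toeplitz matrix with first row u (u[0] real)."""
    m = u.size
    M = np.empty((m, m), dtype=complex)
    for i in range(m):
        for j in range(m):
            M[i, j] = u[j - i] if j >= i else np.conj(u[i - j])
    return M

# random problem data
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
u[0] = u[0].real  # first entry real so that T(u) is Hermitian
Z = herm(rng.standard_normal((n + 1, n + 1)) + 1j * rng.standard_normal((n + 1, n + 1)))
Lam = herm(rng.standard_normal((n + 1, n + 1)) + 1j * rng.standard_normal((n + 1, n + 1)))

def L_of_x(x):
    """Augmented Lagrangian L_rho, viewed as a function of x only."""
    M = np.empty((n + 1, n + 1), dtype=complex)
    M[:n, :n] = toeplitz_herm(u)
    M[:n, n] = x
    M[n, :n] = x.conj()
    M[n, n] = t
    D = Z - M
    return (0.5 * np.linalg.norm(x - y) ** 2
            + tau / 2 * (t + u[0].real)
            + np.real(np.trace(Lam.conj().T @ D))
            + rho / 2 * np.linalg.norm(D, 'fro') ** 2)

# closed-form x-update from the paper, with z_1, lambda_1 as in (1)
z1, lam1 = Z[:n, n], Lam[:n, n]
x_star = (y + 2 * rho * z1 + 2 * lam1) / (2 * rho + 1)

# x_star should beat every perturbed point (L_rho is strictly convex in x)
for _ in range(20):
    d = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    assert L_of_x(x_star) <= L_of_x(x_star + 0.01 * d)
```

Since $L_\rho$ is strictly convex in $x$, beating random perturbations in every direction is strong evidence that $x^\star$ is the minimizer.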