Let $X \in \mathbb{R}^{m \times n}$ be a matrix that is assumed to be low rank. According to,
- Recht, Benjamin; Fazel, Maryam; Parrilo, Pablo A., Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Rev. 52, No. 3, 471-501 (2010). ZBL1198.90321.
if we have a set of linear equality constraints on the entries of $X$, then we can solve the following convex optimization program to recover it:
\begin{equation} \begin{aligned} & \underset{X}{\text{minimize}} & & \Vert X \Vert_* \\ & \text{subject to} & & \mathcal{A}(X) = b \end{aligned} \end{equation}
where $\mathcal{A} :\mathbb{R}^{m \times n} \to \mathbb{R}^p$ and $b \in \mathbb{R}^p$. I am confused about how to express my linear equations in the form of this linear map $\mathcal{A}$. One way is to define, for each constraint $k$, a matrix $A^{(k)} \in \mathbb{R}^{m \times n}$ whose $(i,j)$ entry is the coefficient of $X_{ij}$ in that constraint (and zero wherever $X_{ij}$ does not appear); then the $k$-th constraint can be expressed as
\begin{equation} [\mathcal{A}(X)]_k = \langle A^{(k)}, X \rangle = \sum_{ij} A^{(k)}_{ij} X_{ij} = b_k \quad \forall k = 1, \ldots, p \end{equation}
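For concreteness, here is a small numpy sketch of this construction (the matrices $A^{(k)}$ below are random placeholders, not data from an actual problem). It also shows the equivalent "vectorized" view, in which the rows $\operatorname{vec}(A^{(k)})^\top$ are stacked into a single matrix $M \in \mathbb{R}^{p \times mn}$ so that $\mathcal{A}(X) = M \operatorname{vec}(X)$:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, p = 4, 5, 3

# Hypothetical constraint matrices A^(k); in practice these encode your equations.
A_mats = [rng.standard_normal((m, n)) for _ in range(p)]

# Vectorized view: stack vec(A^(k)) as the rows of M in R^{p x mn},
# so that A(X) = M @ vec(X).
M = np.stack([Ak.ravel() for Ak in A_mats])  # shape (p, m*n)

X = rng.standard_normal((m, n))
b_entrywise = np.array([np.sum(Ak * X) for Ak in A_mats])  # [A(X)]_k = <A^(k), X>
b_vectorized = M @ X.ravel()

# The two formulations agree entry by entry.
assert np.allclose(b_entrywise, b_vectorized)
```

The vectorized form is convenient when $p$ and $mn$ are small; for large problems one usually keeps $\mathcal{A}$ as an operator and never forms $M$ explicitly.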
Now I am not sure how to define the adjoint $\mathcal{A}^*$, which is needed to solve the problem with the accelerated proximal gradient method of the following paper.
- Toh, Kim-Chuan; Yun, Sangwoon, An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems, Pac. J. Optim. 6, No. 3, 615-640 (2010). ZBL1205.90218.
Is there any other way to define this mapping and its adjoint?
The adjoint operator is simply the linear combination of the matrices $A^{(k)}$ weighted by the entries of $y$:
\begin{equation} \mathcal{A}^*(y) = y_1 A^{(1)} + \ldots + y_p A^{(p)} = \sum_{k=1}^{p} y_k A^{(k)}. \end{equation}
This follows from the defining identity $\langle \mathcal{A}(X), y \rangle = \langle X, \mathcal{A}^*(y) \rangle$: expanding the left-hand side gives $\sum_k y_k \langle A^{(k)}, X \rangle = \big\langle \sum_k y_k A^{(k)}, X \big\rangle$, so $\mathcal{A}^*(y) = \sum_k y_k A^{(k)}$.
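As a sanity check, the forward map and this adjoint can be implemented in a few lines of numpy and tested against the defining identity $\langle \mathcal{A}(X), y \rangle = \langle X, \mathcal{A}^*(y) \rangle$ on random inputs (the $A^{(k)}$ here are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 4, 5, 3

# Stack of constraint matrices A^(k), shape (p, m, n); random here for illustration.
A = rng.standard_normal((p, m, n))

def A_op(X):
    """Forward map: [A(X)]_k = <A^(k), X> = sum_ij A^(k)_ij X_ij."""
    return np.tensordot(A, X, axes=([1, 2], [0, 1]))  # shape (p,)

def A_adj(y):
    """Adjoint map: A*(y) = sum_k y_k A^(k), an m-by-n matrix."""
    return np.tensordot(y, A, axes=(0, 0))

# Check the adjoint identity <A(X), y> = <X, A*(y)>.
X = rng.standard_normal((m, n))
y = rng.standard_normal(p)
lhs = A_op(X) @ y            # <A(X), y> in R^p
rhs = np.sum(X * A_adj(y))   # <X, A*(y)> with the trace inner product
assert np.isclose(lhs, rhs)
```

If the identity holds for random $X$ and $y$, the adjoint is implemented consistently with the forward map, which is exactly what a proximal gradient solver needs.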