Fast approximation for minimizing the nuclear norm of a weighted sum of matrices


Hi, I am trying to find a rank-1 matrix within the affine set generated by known matrices:

$$ \boldsymbol{A} = \boldsymbol{A_0} + \sum_{m = 1}^{M} x_m \boldsymbol{B}_m $$

where $\boldsymbol{A_0}$ and $\boldsymbol{B}_m$, for all $m$, are Hermitian matrices of size $N\times N$, and $\boldsymbol{x} = [x_1, \ldots, x_M]^T\in\mathbb{R}^{M\times 1}$ is a real vector. I have read many papers and posts related to this, and the best approach I know of so far is NNM, i.e., Nuclear Norm Minimization (related post here), which is formulated as:

$$ \min_{\boldsymbol{x}} \left\|\boldsymbol{A_0} + \sum_{m = 1}^{M} x_m \boldsymbol{B}_m \right\|_{*} $$

I used CVX to solve this convex problem, and it works very well. However, it takes too much time when $M$ and $N$ grow larger, e.g., $N = M = 64$ or more. In my scenario, runtime is critical, so I need a faster algorithm, and some performance degradation is acceptable in exchange.
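
For concreteness, here is a minimal CVXPY sketch of this formulation (CVXPY is the Python analogue of CVX; the random Hermitian test data and the sizes below are illustrative placeholders, not my actual setup):

```python
# Minimal CVXPY sketch of the NNM problem above.
# The random Hermitian test data are illustrative only.
import cvxpy as cp
import numpy as np

N, M = 16, 16                                # small sizes for illustration
rng = np.random.default_rng(0)

def random_hermitian():
    Z = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    return (Z + Z.conj().T) / 2              # Hermitian by construction

A0 = random_hermitian()
B = [random_hermitian() for _ in range(M)]

x = cp.Variable(M)                           # real coefficients x_1, ..., x_M
A = A0 + sum(x[m] * B[m] for m in range(M))
prob = cp.Problem(cp.Minimize(cp.normNuc(A)))
prob.solve()
print("optimal nuclear norm:", prob.value)
```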

What I have tried to solve this problem faster:

  1. subgradient descent (see here for the subgradient of the nuclear norm); a minimal sketch of this approach follows the list.
  2. proximal minimization (see here for the proximal operator of the nuclear norm).
  3. an SDP reformulation of the NNM, solved with ADMM (NNM as SDP; ADMM for SDP).
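
For reference, here is a minimal sketch of approach 1 (subgradient descent), using the standard subgradient $\boldsymbol{U}\boldsymbol{V}^H$ from the SVD $\boldsymbol{A} = \boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^H$; the step-size schedule and iteration count are illustrative placeholders, not tuned values:

```python
# Minimal subgradient-descent sketch for  min_x ||A0 + sum_m x_m B_m||_*.
# A0: (N, N) Hermitian ndarray; B: list of M Hermitian (N, N) ndarrays.
import numpy as np

def nnm_subgradient(A0, B, n_iters=500, step0=1e-2):
    M = len(B)
    x = np.zeros(M)
    for k in range(1, n_iters + 1):
        A = A0 + sum(x[m] * B[m] for m in range(M))
        U, s, Vh = np.linalg.svd(A)     # full SVD every iteration: the bottleneck
        G = U @ Vh                      # a subgradient of ||.||_* at A
        # Chain rule: d||A(x)||_* / dx_m = Re trace(B_m^H G)
        g = np.array([np.vdot(B[m], G).real for m in range(M)])
        x -= (step0 / np.sqrt(k)) * g   # diminishing step size
    return x
```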

These three methods achieve performance similar to CVX's, but they are still too slow; the main culprit is the full SVD of an $N\times N$ matrix computed in every iteration.
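
To make that bottleneck concrete: the proximal and ADMM variants both reduce to a singular-value-thresholding (SVT) step each iteration. One hedged workaround I am aware of is to replace the full SVD in that step with a truncated SVD computing only the top-$k$ singular triplets (here via SciPy's `svds`; treating $k$ as a tuning parameter is my own assumption):

```python
# Approximate SVT prox of tau*||.||_* using only the top-k singular triplets.
# Assumes the iterates become (near) low-rank, so truncation loses little.
import numpy as np
from scipy.sparse.linalg import svds

def svt_truncated(A, tau, k):
    U, s, Vh = svds(A, k=k)              # partial SVD; requires k < N
    s_shrunk = np.maximum(s - tau, 0.0)  # soft-threshold the singular values
    keep = s_shrunk > 0
    return (U[:, keep] * s_shrunk[keep]) @ Vh[keep, :]
```

If the thresholded rank stays well below $N$, each iteration drops from roughly $O(N^3)$ to roughly $O(kN^2)$.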

My questions:

  1. Is there a faster approach to solving the NNM problem that avoids the SVD, or at least avoids computing it in every iteration? Some performance degradation is acceptable in exchange for speed.
  2. Alternatively, for large $N$ and $M$, are there better methods than NNM for minimizing the rank of $\boldsymbol{A}$?