Is it possible to do rank reduction with constraints?


I have a real $n\times m$ matrix $\mathbf{A}=U\Sigma V^T$ s.t. $\operatorname{rank}(\mathbf{A})> 1$. Let $\mathbf{B}\in \mathbb{R}^{n\times m}$ be the best rank-1 approximation of $\mathbf{A}$, obtained from the singular vectors of $\mathbf{A}$:

\begin{equation} \mathbf{B}=\sigma_1\mathbf{u}_1\mathbf{v}_1^{T} \end{equation}

My question is:

Is it possible to find $\mathbf{B'}\in \mathbb{R}^{n\times m}$, defined as $\mathbf{B'}=\mathbf{x}\mathbf{y}^T$, with the following constraints on $\mathbf{x}$ and $\mathbf{y}$:

  1. $\mathbf{x}\in\mathbb{R}^{n\times 1}$ and the last $(n-n_1)$ elements of $\mathbf{x}$ satisfy $0<x_i<\epsilon \hspace{2mm}\forall i=n_1+1,\dots,n$
  2. $\mathbf{y}\in\mathbb{R}^{m\times 1}$ and the last $(m-m_1)$ elements of $\mathbf{y}$ satisfy $0<y_j<\epsilon \hspace{2mm}\forall j=m_1+1,\dots,m$

$${\bf x} = \left.\left( \begin{array}{c} x_1\\ \vdots\\ x_{n_1}\\ x_{n_1+1}\\ \vdots\\ x_n \end{array} \right) \right\}n\hspace{2mm}, {\bf y} = \left.\left( \begin{array}{c} y_1\\ \vdots\\ y_{m_1}\\ y_{m_1+1}\\ \vdots\\ y_m \end{array} \right) \right\}m\hspace{2mm} $$

If yes, then what kind of optimization problem is this, convex or not? And how do I write the objective function?

The optimization cost is: $\min_{\mathbf{x},\mathbf{y}}\|\mathbf{B}-\mathbf{B'}\|_2$
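For concreteness, the setup above can be sketched in NumPy. The dimensions and the tolerance $\epsilon$ below are illustrative choices, not values from the question, and the candidate $(\mathbf{x}, \mathbf{y})$ is just one arbitrary feasible pair:

```python
# Sketch of the setup: compute the best rank-1 approximation B of A from its
# SVD, then evaluate the cost ||B - B'||_2 for a feasible candidate B' = x y^T.
import numpy as np

rng = np.random.default_rng(1)
n, m, n1, m1, eps = 5, 4, 3, 2, 1e-3   # illustrative sizes (assumed)
A = rng.standard_normal((n, m))

U, S, Vt = np.linalg.svd(A)
B = S[0] * np.outer(U[:, 0], Vt[0])    # best rank-1 approximation of A

# A feasible pair (x, y): free leading entries, tiny positive tail in (0, eps).
x = np.concatenate([rng.standard_normal(n1), rng.uniform(eps / 2, eps, n - n1)])
y = np.concatenate([rng.standard_normal(m1), rng.uniform(eps / 2, eps, m - m1)])
B_prime = np.outer(x, y)

cost = np.linalg.norm(B - B_prime, 2)  # spectral norm of the difference
```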


Some thoughts, which may not answer the question definitively.

Edit: I originally thought that we are given an arbitrary matrix $\mathbf{A}$ and we are looking for sparse (scaled) singular vectors $\mathbf{x}$ and $\mathbf{y}$, where a specific subset of entries is restricted to be in $(0, \epsilon)$. I now realize that the problem was different:

  • Let $\mathbf{A}$ be a given arbitrary matrix and let $\mathbf{B} = \sigma_{1}\mathbf{u}_{1}\mathbf{v}_{1}^{\top}$ be its best rank-$1$ approximation. We seek $\mathbf{x}$ and $\mathbf{y}$ with specific restrictions, such that the matrix $\mathbf{B}^{\prime} = \mathbf{x}\mathbf{y}^{\top}$ is the best approximation of $\mathbf{B}$ (and not of $\mathbf{A}$).

Note that $\mathbf{B}^{\prime}$ being the best approximation of $\mathbf{B}$ is not the same as being the best (constrained) approximation of $\mathbf{A}$.

Observations:

  1. First assume that we are willing to force the small entries to be equal to zero.

    • The problem of approximating $\mathbf{B}$ with $\mathbf{B}^{\prime}$ should be easy. Let $\overline{\mathbf{x}}$ and $\overline{\mathbf{y}}$ be unit $\ell_{2}$-norm vectors satisfying the desired constraints. If you expand the objective function, you should see that the goal is to maximize the inner products $\overline{\mathbf{x}}^{\top}\mathbf{u}_{1}$ and $\overline{\mathbf{y}}^{\top}\mathbf{v}_{1}$, which can be done easily. Then you need to scale the product $\overline{\mathbf{x}}\,\overline{\mathbf{y}}^{\top}$ appropriately, but the scaling factor can again be determined easily.

    • Under this assumption, even the problem of approximating $\mathbf{A}$ with $\mathbf{B}^{\prime}$ is easy: just focus on the upper-left $n_{1} \times m_{1}$ block of $\mathbf{A}$ and compute its top singular vectors. Padding those with zeros will give you the desired $\mathbf{x}$ and $\mathbf{y}$.

  2. The problem of approximating a general matrix $\mathbf{A}$ with a matrix $\mathbf{B}^{\prime}$ satisfying these restrictions is generally NP-hard: for $n_{1}=0$, $m_{1}=0$, and very large $\epsilon$, the problem is equivalent to seeking the best nonnegative rank-1 approximation of $\mathbf{A}$, which is NP-hard unless $\mathbf{A}$ is a nonnegative matrix. What this means is that if $\mathbf{A}$ is nonnegative, then it is easy to compute the top pair of nonnegative singular vectors (and hence the reduction does not work); it does not mean that your restricted problem is easy.
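A minimal sketch of the two "easy" cases in observation 1, under the simplifying assumption (as in the observation) that the small entries are forced to exactly zero rather than lying in $(0, \epsilon)$. The truncate-and-renormalize step for $\overline{\mathbf{x}}, \overline{\mathbf{y}}$ and the closed-form scale $s = \sigma_1(\overline{\mathbf{x}}^{\top}\mathbf{u}_1)(\overline{\mathbf{y}}^{\top}\mathbf{v}_1)$ follow from expanding $\|\mathbf{B} - s\,\overline{\mathbf{x}}\,\overline{\mathbf{y}}^{\top}\|_F^2$; the dimensions are illustrative:

```python
# Sketch: both easy cases with the small entries forced to exactly zero.
import numpy as np

rng = np.random.default_rng(0)
n, m, n1, m1 = 6, 5, 3, 2              # illustrative sizes (assumed)
A = rng.standard_normal((n, m))

# Best rank-1 approximation B = sigma_1 * u1 * v1^T of A.
U, S, Vt = np.linalg.svd(A)
u1, v1, s1 = U[:, 0], Vt[0], S[0]
B = s1 * np.outer(u1, v1)

# Case 1: approximate B by s * x_bar y_bar^T with supports {1..n1}, {1..m1}.
# Maximizing x_bar^T u1 (resp. y_bar^T v1) over unit-norm vectors with that
# support just truncates u1 (resp. v1) and renormalizes.
x_bar = np.zeros(n); x_bar[:n1] = u1[:n1]; x_bar /= np.linalg.norm(x_bar)
y_bar = np.zeros(m); y_bar[:m1] = v1[:m1]; y_bar /= np.linalg.norm(y_bar)
# Optimal scale s minimizing ||B - s * x_bar y_bar^T||_F.
s = s1 * (x_bar @ u1) * (y_bar @ v1)
B_prime = s * np.outer(x_bar, y_bar)

# Case 2: approximate A itself: top singular pair of the upper-left
# n1 x m1 block of A, zero-padded to full length.
Ub, Sb, Vbt = np.linalg.svd(A[:n1, :m1])
x = np.concatenate([np.sqrt(Sb[0]) * Ub[:, 0], np.zeros(n - n1)])
y = np.concatenate([np.sqrt(Sb[0]) * Vbt[0], np.zeros(m - m1)])
B_prime_A = np.outer(x, y)
```

Since $\|\mathbf{B} - s\,\overline{\mathbf{x}}\,\overline{\mathbf{y}}^{\top}\|_F^2 = \sigma_1^2 - s^2$ at the optimal scale, the constrained approximation is never worse than the zero matrix.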