How to find a sparse basis of the null space of a large sparse matrix using QR decomposition


Suppose that we have a large sparse matrix $A\in{\mathbb{C}}^{m\times n}$, $m<n$, with full row rank. Let $V$ be the solution set of $Ax=0$; then $V$ is a linear space with $\dim(V)=n-m$. When $n\gg m$, $\dim(V)=n-m$ is large. How can one find a sparse basis of $V$? The singular value decomposition can be used to compute an orthonormal basis of the null space of a dense matrix, but my matrix is sparse, so the SVD is not practical here. I want to use the QR decomposition instead, $$A=QR,~Q\textrm{ a unitary matrix and } R \textrm{ an upper triangular matrix}.$$ How can this decomposition be used to find a sparse basis of the null space of $A$? Thank you!
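For reference, a minimal NumPy sketch of the SVD route mentioned above, on a small dense stand-in matrix (the dimensions and values are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2, 5
A = rng.standard_normal((m, n))   # stand-in for a dense matrix with full row rank

# SVD route: the right singular vectors belonging to the zero
# singular values span the null space of A.
U, s, Vh = np.linalg.svd(A)
N = Vh[m:].conj().T               # n x (n - m) orthonormal null-space basis

assert np.allclose(A @ N, 0, atol=1e-10)
```

The resulting basis is orthonormal but generally dense, which is exactly the issue the question raises for sparse $A$.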

2 Answers


The QR decomposition sounds like a red herring to me. If you want a sparse basis, it may be better to solve $Ax=0$ with a sparse fitting procedure such as LASSO. If you insist on using QR, you can continue the Gram–Schmidt process, starting from the columns of $Q$, with a new random vector (one that is not in the range of $A$; if it is, redraw a new random vector). But the result will likely not be sparse.
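A minimal sketch of this completion idea, assuming (as in the update below this is made explicit) that random vectors are orthogonalized against an orthonormal basis of $\operatorname{range}(A^*)$, obtained here from a QR factorization of $A^T$ in the real case; dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 8
A = rng.standard_normal((m, n))   # stand-in full-row-rank matrix

# Orthonormal basis of range(A^*) via a reduced QR of A^T (real case)
Q, _ = np.linalg.qr(A.T)          # Q: n x m

# Complete to a basis of R^n: project random vectors off range(A^*)
# and off the null-space vectors found so far (Gram-Schmidt).
null_vecs = []
while len(null_vecs) < n - m:
    v = rng.standard_normal(n)
    v -= Q @ (Q.T @ v)            # remove the range(A^*) component
    for u in null_vecs:
        v -= u * (u @ v)          # orthogonalize against earlier picks
    nrm = np.linalg.norm(v)
    if nrm > 1e-10:               # redraw if v was (numerically) in range(A^*)
        null_vecs.append(v / nrm)

N = np.column_stack(null_vecs)    # n x (n - m), with A N = 0
assert np.allclose(A @ N, 0, atol=1e-8)
```

As the answer notes, the columns of `N` are orthonormal but will generally be dense.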


Update (23-06-11): After the comment by littleO, I've given some thought to this problem, and it seems it is not possible to use LASSO here (or at least, I haven't found a way to do it).

Going back to the QR decomposition approach: instead of decomposing $A$, you should decompose its conjugate transpose, $$ A^* = QR, $$ so that $$ AQ = R^{*}. $$ Now suppose you use the columns of $Q$ to continue the orthogonalization on a set of random vectors, obtaining an orthonormal null-space basis $P_\emptyset\in\mathbb{C}^{n\times(n-m)}$. You can then diagonalize the upper $(n-m)\times(n-m)$ block of $P_\emptyset$ by column operations, so that $$ P_\emptyset = \begin{pmatrix}D\\P\end{pmatrix}, $$ where $D$ is a diagonal matrix with $n-m$ elements and $P\in\mathbb{C}^{m\times(n-m)}$ is a dense matrix, resulting in at most $(m+1)\times(n-m)$ nonzeros.
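A sketch of this construction (real case, illustrative dimensions), assuming the top $(n-m)\times(n-m)$ block of $P_\emptyset$ is nonsingular; in general a row permutation may be needed first, and here the diagonal block comes out as the identity:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 8
A = rng.standard_normal((m, n))   # stand-in full-row-rank matrix
k = n - m

# Full QR of A^*: since A Q = R^* and the last n - m columns of R^*
# are zero, the last n - m columns of Q span null(A).
Q, R = np.linalg.qr(A.T, mode="complete")  # Q: n x n
P0 = Q[:, m:]                              # orthonormal null-space basis

# "Diagonalize" the top k x k block by column operations:
# right-multiplying by its inverse does not change the column span.
T = P0[:k, :]
B = P0 @ np.linalg.inv(T)                  # top block becomes the identity

assert np.allclose(A @ B, 0, atol=1e-8)
assert np.allclose(B[:k, :], np.eye(k), atol=1e-8)
```

The basis `B` has the claimed $\begin{pmatrix}D\\P\end{pmatrix}$ structure with at most $(m+1)(n-m)$ nonzeros.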

While this approach is interesting, it starts with the formation of a very large dense matrix $P_\emptyset$, which may make it impractical. It might be better to embed the diagonalization inside the orthogonalization-completion process, though having $(m+1)\times(n-m)$ nonzeros doesn't sound so good either. (The original matrix $A$ was $m\times n$, but sparse.)

A random vector $x$ can be projected onto the null space via $$ x_\emptyset = x - A^{*}(AA^{*})^{-1}Ax, $$ so I'm wondering whether an optimization problem can be formulated in terms of $x_\emptyset$. I haven't found an answer yet.
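This projection can be computed with sparse solves instead of an explicit inverse; a sketch with SciPy (the matrix here is a random stand-in, assumed to have full row rank so that $AA^*$ is invertible):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(3)
m, n = 3, 8
# Stand-in for a sparse full-row-rank matrix
A = sp.csr_matrix(rng.standard_normal((m, n)))

x = rng.standard_normal(n)

# x_0 = x - A^* (A A^*)^{-1} A x, with a sparse solve for the
# small m x m Gram matrix rather than forming its inverse.
G = (A @ A.conj().T).tocsc()       # m x m, Hermitian positive definite
y = spla.spsolve(G, A @ x)
x0 = x - A.conj().T @ y

assert np.allclose(A @ x0, 0, atol=1e-8)
```

For ill-conditioned or very large $AA^*$, an iterative solver (e.g. `scipy.sparse.linalg.cg`) could replace `spsolve`.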


The computation of sparse null bases is an active research area due to their applications in the solution of saddle point problems, which arise often in large-scale applications with sparse matrices. Unfortunately, there is no easy explicit representation of such bases for general sparse matrices, and computing them is very difficult: a sparse null basis may not even exist, and when one does, finding a null basis with optimal sparsity has been shown to be NP-hard. There is a survey of numerical solution techniques for saddle point problems whose Section 6 covers null space methods, with many references to work on constructing sparse null bases.