Fast algorithms for long sequences of sparse matrix products multiplying a vector?


Context: I have been developing algorithms involving huge linear least squares systems with sparse matrices. So far I have mostly constructed these huge sparse matrices explicitly and then solved the systems using Krylov subspace methods. For reasons of flexibility and memory optimization, I am now investigating the possibility of keeping the sparse matrices more loosely coupled and computing matrix-vector products without accumulating one big sparse matrix.

So to the question: given a sequence of sparse matrices $${\bf M}_1,\cdots, {\bf M}_k,$$ I want to calculate the matrix-vector product $$\left(\prod_{i=1}^{k} {\bf M}_i\right){\bf v} = {\bf M}_1 \cdot {\bf M}_2 \cdots {\bf M}_k \cdot {\bf v}$$ in one go, i.e. without matrix-matrix multiplications and ideally without having to store the intermediate results in vectors (as memory reads and writes are often slow for computers compared to arithmetic).
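For concreteness, here is a minimal sketch (using scipy with made-up random matrices) of the straightforward alternative being compared against: applying the chain right to left with only sparse matrix-vector products, never forming the matrix product.

```python
import numpy as np
from scipy.sparse import random as sprandom

# Hypothetical setup: k random sparse n-by-n matrices and a dense vector.
rng = np.random.default_rng(0)
n, k = 1000, 5
Ms = [sprandom(n, n, density=0.01, random_state=rng, format="csr")
      for _ in range(k)]
v = rng.standard_normal(n)

# Apply the chain right to left: each step is one sparse mat-vec,
# costing O(nnz(M)) operations; no matrix-matrix product is formed.
result = v
for M in reversed(Ms):
    result = M @ result
```

Note that this still writes one intermediate vector of length $n$ per factor; avoiding even those intermediate writes is exactly what the question asks about.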


If you keep all the initial matrices, assuming a sparsity factor $\sigma$ for each, iterating the $k$ matrix/vector multiplies will cost about $k\sigma n^2$ operations in total. Compare this to the cost with the accumulated "huge matrix" of sparsity factor $\Sigma$, namely $\Sigma n^2$; the key quantity is thus the degradation of sparsity, $\Sigma$ versus $k\sigma$.

Also note that the numbers of coefficients to be read are in the same ratio ($\Sigma n^2$ versus $k\sigma n^2$), and this can seriously impact the running time due to cache or virtual-memory effects. (On the other hand, reading/writing the $n$ vector elements $k$ times should not be significant.)
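The sparsity degradation $\sigma \to \Sigma$ can be measured directly by comparing nonzero counts. A small illustration with random scipy matrices (random sparsity patterns degrade quickly; structured matrices may behave much better, so the numbers here are only indicative):

```python
import numpy as np
from scipy.sparse import random as sprandom

rng = np.random.default_rng(1)
n, k = 2000, 4
sigma = 0.005  # sparsity factor of each individual matrix

Ms = [sprandom(n, n, density=sigma, random_state=rng, format="csr")
      for _ in range(k)]

# Coefficients read when iterating the k mat-vec products:
# the sum of the individual nonzero counts, roughly k * sigma * n^2.
iterated_reads = sum(M.nnz for M in Ms)

# Coefficients in the accumulated "huge matrix": the nonzero count
# of the product, roughly Sigma * n^2, which shows the fill-in.
P = Ms[0]
for M in Ms[1:]:
    P = P @ M
accumulated_reads = P.nnz
```

For random patterns like these, `accumulated_reads` ends up far larger than `iterated_reads`, i.e. $\Sigma \gg k\sigma$, which is the regime where keeping the factors separate wins.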