I have the following matrix product: $$C=X(X^TAX)^{-1}X^T$$ where $A$ is an $m \times m$ sparse symmetric positive definite matrix and $X$ is a sparse rectangular $m \times n$ matrix. In case it matters, $X$ is the nullspace of another matrix used to constrain the solutions. Everything is real-valued. $X$ has the property that $X^TX=I_n$.
Given how many seemingly beneficial properties my matrices have, is there any matrix manipulation that simplifies this expression? I was hopeful that some of the $X$ factors could be eliminated or pulled out of the inverse, but that does not appear to be the case. Typically, I do not need to form this product explicitly and can instead treat it as an operator acting on some vector $f$: $$r=X(X^TAX)^{-1}X^Tf$$ My usual procedure is:
- $g=X^Tf$
- Form $X^TAX$ explicitly using MKL's `mkl_sparse_sypr` routine. Forming this matrix product is much faster than performing three sparse matrix-vector products inside an iterative solver; I have no idea how it achieves this performance, but it beats all of my "clever" alternatives.
- Use conjugate gradient (CG) to compute $z=(X^TAX)^{-1}g$. My preconditioned CG loop uses MKL sparse BLAS routines throughout.
- $r=Xz$
For linear algebra, I am using C++ and am reasonably adept with Eigen (whenever possible), SuiteSparse (mostly for their sparse QR and rectangular LU decompositions), and Intel MKL (for my CG loop).
Edit: narrowed the question to focus on my primary concern.