I've read that you should avoid computing a matrix inverse, as you generally don't need to, but I don't know the best way to avoid it. I need to compute:
$$x = \mathbf v \mathbf A^{-1}\mathbf v^\top$$
where $x$ is a scalar, $\mathbf v$ is a row vector, $\mathbf A$ is a symmetric positive definite matrix (but perhaps with eigenvalues close to $0$) and ${}^\top$ means transpose.
I'm using numpy/scipy so feel free to express an answer using their functions.
EDIT:
Are there any pros/cons of the least-squares approach versus doing an eigendecomposition?
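For reference, here is what the eigendecomposition route looks like: since $\mathbf A$ is symmetric, numpy.linalg.eigh gives $\mathbf A = \mathbf Q\,\mathrm{diag}(w)\,\mathbf Q^\top$, so $x = \sum_i (\mathbf Q^\top \mathbf v)_i^2 / w_i$. This is just an illustrative sketch (the matrix and vector below are made-up test data, not from the question):

```python
import numpy as np

# Illustrative SPD matrix and vector (assumed test data).
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B @ B.T + 1e-3 * np.eye(4)   # symmetric positive definite
v = rng.standard_normal(4)

# Eigendecomposition route: A = Q diag(w) Q^T, hence
# v A^{-1} v^T = sum_i (Q^T v)_i^2 / w_i.
w, Q = np.linalg.eigh(A)
c = Q.T @ v
x_eig = np.sum(c**2 / w)

# Direct linear solve for comparison.
x_solve = v @ np.linalg.solve(A, v)
```

One practical difference: the eigendecomposition exposes the small eigenvalues explicitly, which makes it easy to inspect or truncate them when $\mathbf A$ is nearly singular, at roughly the cost of a more expensive factorization.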
v*inv(A) is the same as v/A in MATLAB notation, which uses a linear (least-squares) solver rather than computing the inverse explicitly.
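To see that the solver route gives the same scalar as the explicit inverse, here is a small sketch (the data is made up for illustration):

```python
import numpy as np

# Illustrative SPD matrix and vector (assumed test data).
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = B @ B.T + 1e-3 * np.eye(5)
v = rng.standard_normal(5)

# Explicit inverse: the thing to avoid.
x_inv = v @ np.linalg.inv(A) @ v

# Linear solve: solve A y = v^T once, then x = v y.
# (A is symmetric, so A^{-1} v^T and (v A^{-1})^T are the same vector.)
y = np.linalg.solve(A, v)
x_solve = v @ y

print(x_inv, x_solve)
```

The two agree to rounding error on a well-conditioned matrix; the solve version is cheaper and numerically better behaved as the small eigenvalues make inv(A) inaccurate.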
So I guess I would use:
    x = numpy.dot(v, numpy.linalg.lstsq(A, numpy.transpose(v))[0])

I'm not sure if the row versus column orientation will matter to the software, but since A is symmetric you can freely transpose things without changing anything important (so if it cannot detect that you passed the wrong kind of vector, use numpy.transpose).

If you have many different v for a single A, then you can use a Cholesky factorization to get a better solver. I didn't see such a solver built into numpy, but things like MATLAB (and hopefully Octave) will automatically switch to a faster solver for the SPD case.
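scipy does ship an SPD solver that fits the many-v case (this is an addition beyond the original answer): factor A once with scipy.linalg.cho_factor, then reuse the factor with cho_solve for each right-hand side. A sketch with made-up data:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Illustrative SPD matrix and a batch of vectors (assumed test data).
rng = np.random.default_rng(2)
B = rng.standard_normal((6, 6))
A = B @ B.T + 1e-3 * np.eye(6)      # symmetric positive definite
vs = rng.standard_normal((10, 6))   # ten different row vectors v

# O(n^3) Cholesky factorization, done once.
c_and_low = cho_factor(A)

# O(n^2) triangular solves per vector, reusing the factor.
xs = [v @ cho_solve(c_and_low, v) for v in vs]
```

This amortizes the cubic-cost factorization across all the vectors, whereas calling lstsq (or solve) per vector refactors A every time.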