I want to numerically solve $$\min_x \frac{x^TAx}{x^TBx} \quad \mathrm{s.t.}\quad Cx=0,$$ where $A$ and $B$ are large sparse matrices, $A$ is positive semi-definite, $B$ is positive-definite, and $C$ is sparse.
One approach would be to find a basis $N$ for the nullspace of $C$, write $x=Ny$, and then reduce the above to an unconstrained eigenvector problem. But if I try to compute $N$ (using e.g. a QR decomposition of $C$) it ends up intractably dense.
So instead I want to incorporate constraint projection into the power iteration for finding $x$: more specifically, I start with a random vector $x^0$, then iterate:
1. Solve $A\tilde x^{i+1} = Bx^i$ for $\tilde x^{i+1}$.
2. Solve $CC^T\lambda = C\tilde{x}^{i+1}$ for $\lambda$.
3. Compute $x^{i+1} = \frac{\tilde{x}^{i+1}-C^T\lambda}{\|\tilde{x}^{i+1}-C^T\lambda\|_B}.$
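In code, the projected iteration above looks roughly like this (a minimal NumPy sketch on small dense toy matrices; to sidestep the issue below for the moment, the toy $A$ is made positive-definite so that step 1 is always solvable):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Toy stand-ins for the sparse matrices (assumption: small and dense here;
# A is made positive-definite so that step 1 has a solution).
A = rng.standard_normal((n, n)); A = A @ A.T + np.eye(n)
B = rng.standard_normal((n, n)); B = B @ B.T + np.eye(n)
C = rng.standard_normal((2, n))                            # full row rank

x = rng.standard_normal(n)
for _ in range(200):
    xt = np.linalg.solve(A, B @ x)                         # step 1: A x~ = B x
    lam = np.linalg.lstsq(C @ C.T, C @ xt, rcond=None)[0]  # step 2: C C^T lam = C x~
    x = xt - C.T @ lam                                     # step 3: project onto null(C)...
    x = x / np.sqrt(x @ B @ x)                             # ...and B-normalize

constraint_residual = np.linalg.norm(C @ x)                # ~0 by construction of the projection
rayleigh = (x @ A @ x) / (x @ B @ x)
```

Each iterate is exactly feasible (up to floating point), since step 3 is the orthogonal projection onto $\operatorname{null}(C)$: $C(\tilde x - C^T\lambda) = C\tilde x - CC^T\lambda = 0$.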
However, I'm running into two problems: (A) the matrix $A$ is singular, so the linear system in step (1) need not have a solution (and never has a unique one); (B) the matrix $CC^T$ may be singular.
Concern B is less serious than concern A, since the linear system in step (2) is guaranteed to have at least one solution: the right-hand side $C\tilde{x}^{i+1}$ lies in $\operatorname{range}(C) = \operatorname{range}(CC^T)$. So I can solve step (2) with a robust method for rank-deficient least squares, e.g. a rank-revealing QR decomposition. What should I do to fix step (1), though? Of course, there is a strong temptation to "fix" $A$ by adding a small multiple of the identity; is this fix mathematically sound?
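To illustrate why concern B is benign: even with a deliberately rank-deficient toy $C$ (a duplicated row below, a hypothetical stand-in for the sparse case), a least-squares solve of the singular normal system still yields an exact projection, because the system is consistent:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6

# Toy constraint matrix with a duplicated row, so C C^T is singular
# (assumption: a small dense stand-in for the sparse C).
C = rng.standard_normal((3, n))
C[1] = C[0]

xt = rng.standard_normal(n)                      # stand-in for the iterate x~^{i+1}

# C C^T lam = C x~ is consistent: the right-hand side lies in
# range(C) = range(C C^T), so lstsq returns an exact (minimum-norm) solution.
lam, *_ = np.linalg.lstsq(C @ C.T, C @ xt, rcond=None)

residual = np.linalg.norm(C @ (xt - C.T @ lam))  # ~0 despite the singular matrix
```

In the large sparse setting, one could equivalently solve $\min_\lambda \|C^T\lambda - \tilde x^{i+1}\|_2$ with an iterative method such as LSQR, which computes the same projection without forming $CC^T$.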
Answering my own question as I've found a rather satisfactory solution to this problem.
From the KKT conditions of the original problem one sees that the minimizer is the $x$-part of the eigenvector with smallest eigenvalue in the generalized eigenvalue problem $$\begin{bmatrix} A & C^T \\ C & 0\end{bmatrix}\begin{bmatrix}x\\\mu\end{bmatrix} = \lambda \begin{bmatrix} B & 0\\0 & 0\end{bmatrix} \begin{bmatrix} x \\ \mu \end{bmatrix}.$$ It follows that the following power iteration method will find $x$:
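Concretely, each step solves the saddle-point system with right-hand side $(Bx^i, 0)$ and $B$-normalizes the $x$-part; this is inverse power iteration on the pencil above. The key point is that the KKT matrix can be nonsingular even when $A$ is singular (it suffices that $A$ be positive-definite on $\operatorname{null}(C)$ and $C$ have full row rank). A minimal dense sketch on hypothetical toy matrices, with a genuinely singular $A$, checked against the nullspace reduction:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 2

# Toy stand-ins (assumption: small dense matrices). A is PSD and *singular*.
G = rng.standard_normal((n, n - 2))
A = G @ G.T                                        # PSD, rank n-2
B = rng.standard_normal((n, n)); B = B @ B.T + np.eye(n)
C = rng.standard_normal((m, n))                    # full row rank

# KKT matrix: nonsingular here even though A alone is singular.
K = np.block([[A, C.T], [C, np.zeros((m, m))]])

x = rng.standard_normal(n)
for _ in range(3000):
    # Inverse iteration on the pencil: solve K [x~; mu] = [B x; 0].
    z = np.linalg.solve(K, np.concatenate([B @ x, np.zeros(m)]))
    x = z[:n] / np.sqrt(z[:n] @ B @ z[:n])         # keep the x-part, B-normalized

rayleigh = (x @ A @ x) / (x @ B @ x)

# Reference: reduce onto an SVD nullspace basis of C (fine at toy scale)
# and take the smallest eigenvalue of the pencil (N^T A N, N^T B N).
_, _, Vt = np.linalg.svd(C)
N = Vt[m:].T                                       # orthonormal basis of null(C)
L = np.linalg.cholesky(N.T @ B @ N)
M = np.linalg.solve(L, np.linalg.solve(L, N.T @ A @ N).T).T
lam_min = np.linalg.eigvalsh(M)[0]
```

Note that the second block row of each solve enforces $Cx^{i+1} = 0$ exactly (up to floating point), so no separate projection step is needed. In the large sparse setting one would replace `np.linalg.solve` with a sparse factorization of the (symmetric indefinite) KKT matrix, computed once and reused across iterations.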