Estimate overlap of trial state $x$ with the (unknown) eigenvector $x_0$ of a real symmetric, sparse matrix H


Suppose we have a very large, sparse, real symmetric matrix $H$ of dimension $n$ ($n > 10^7$). Since it is real symmetric, it has a complete orthonormal set of eigenpairs $(\mathbf{x}_i, \lambda_i)$. The problem is the following: given a trial vector-value pair $(\mathbf{x}, \lambda)$, what are some known ways to lower bound (and upper bound, if possible) the dot product $\mathbf{x}_0 \cdot \mathbf{x}$ -- "the overlap", or "ground state support" -- without computing the spectrum of $H$ directly?

Because the matrix is so large, assume that it is impractical to compute the spectrum directly, even with the best available methods (QR algorithm, etc.). However, since $H$ is sparse, assume that certain matrix operations involving $H$ are allowed (e.g. acting with $H$ on the trial vector, taking powers of $H$ up to some reasonable number).
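To make the allowed operations concrete, here is a minimal sketch (Python/SciPy; the function name `moments` is my own) of computing the quantities $\mathbf{x}^t \cdot H^k \cdot \mathbf{x}$ with $k$ sparse matrix-vector products, which is all that the properties listed below require:

```python
import numpy as np
import scipy.sparse as sp

def moments(H, x, kmax):
    """Return [x.Hx, x.H^2 x, ..., x.H^kmax x] for unit-normalized x,
    using exactly kmax sparse matrix-vector products."""
    x = x / np.linalg.norm(x)
    out, v = [], x.copy()
    for _ in range(kmax):
        v = H @ v                 # one sparse matvec per power
        out.append(float(x @ v))  # moment <x, H^k x>
    return out
```

For example, with $H = \mathrm{diag}(1, 2, 3)$ and $\mathbf{x} = \mathbf{e}_2$, this returns the moments $[2, 4, 8]$; each additional power costs only one more matvec, so moments up to modest $k$ remain affordable even at $n > 10^7$.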

What I tried:

I wondered whether one could exploit the fact that eigenpairs of $H$ must obey certain properties. Some of the relevant properties I identified include

  • The variance of $H$ in the trial state, $\mathbf{x}^t \cdot H^2 \cdot \mathbf{x} - (\mathbf{x}^t \cdot H \cdot \mathbf{x})^2$ (for unit $\mathbf{x}$), vanishes exactly when $\mathbf{x}$ is an eigenvector (Weinstein's eigenvalue bound could be restated in a form involving the overlap). In practice this looks like a very loose bound.
  • Eigenvectors of $H$ are fixed points of the map $F: \mathbf{x} \rightarrow H\mathbf{x}$ up to rescaling, i.e. fixed directions,
  • The Krylov subspace-generating sequence $\mathbf{x}, H\mathbf{x}, H^2\mathbf{x}, \ldots$ collapses to a one-dimensional span when $\mathbf{x}$ is an eigenvector -- could "how far it is from terminating" serve as a metric?
  • For a high enough power $k$ of $H$, the norm $||H^k\cdot\mathbf{x} - ( (H^k \cdot \mathbf{x}) \cdot \mathbf{x} )\, \mathbf{x}||$ preferentially measures the distance of the trial vector from the eigenvector of $H$ with the largest-magnitude eigenvalue (à la the power method).
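One concrete way to turn the first bullet into an overlap bound, sketched below under the assumption that a lower bound on the spectral gap around the target eigenvalue is available, is the classical residual bound $\sin\theta \le ||H\mathbf{x} - \tilde\lambda\mathbf{x}|| / \delta$, where $\tilde\lambda$ is the Rayleigh quotient, $\theta$ is the angle between $\mathbf{x}$ and $\mathbf{x}_0$, and $\delta$ is the distance from $\tilde\lambda$ to every eigenvalue other than $\lambda_0$ (see e.g. Parlett, *The Symmetric Eigenvalue Problem*). The function name `overlap_lower_bound` and the synthetic test matrix are my own; the exact spectrum is computed only to verify the bound, which one could not do in the large-$n$ setting:

```python
import numpy as np
import scipy.sparse as sp

def overlap_lower_bound(H, x, gap):
    """Lower-bound |x0 . x| from a single sparse matvec, given `gap`, a
    lower bound on the distance from the Rayleigh quotient of x to every
    eigenvalue of H other than the target one.  Uses the classical
    residual bound sin(theta) <= ||Hx - lam*x|| / gap."""
    x = x / np.linalg.norm(x)
    Hx = H @ x                        # the only H-dependent operation
    lam = float(x @ Hx)               # Rayleigh quotient
    var = float(Hx @ Hx) - lam**2     # residual norm^2 = variance (unit x)
    sin2 = min(var / gap**2, 1.0)
    return float(np.sqrt(max(1.0 - sin2, 0.0)))

# Synthetic check: a sparse matrix with a well-separated lowest eigenvalue,
# so the exact overlap and gap are known for comparison.
rng = np.random.default_rng(0)
n = 200
d = np.concatenate([[0.0], np.linspace(1.0, 2.0, n - 1)])
A = sp.random(n, n, density=0.05, random_state=rng)
H = (sp.diags(d) + 0.01 * (A + A.T)).tocsr()
evals, evecs = np.linalg.eigh(H.toarray())   # for verification only
x0 = evecs[:, 0]                             # target ("ground state")
x = x0 + 0.01 * rng.standard_normal(n)       # noisy trial vector
x /= np.linalg.norm(x)
lam = float(x @ (H @ x))
gap = float(np.min(np.abs(evals[1:] - lam)))
bound = overlap_lower_bound(H, x, gap)
exact = abs(float(x0 @ x))
```

The catch, of course, is that $\delta$ is generally not known a priori; the bound degrades to the trivial $0$ whenever the residual norm exceeds the assumed gap, which is one way the looseness mentioned in the first bullet manifests.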

The most relevant literature I found refers to this problem as "eigenvector verification" or "eigenvector error bounds": Krawczyk's method, Moore's ansatz using Brouwer's fixed point theorem, and others are well described in the introduction of this SIAM publication (paywall). However, the literature reviewed there is sparse and a bit dated, and much of it has not been translated into English (unfortunately the only language accessible to me).

Moreover, the algorithms described in some of this literature (paywalled, see an earlier similar paper that is open-access) require inversion of a matrix that includes $H$ as a submatrix, which does not seem any less complex than diagonalizing $H$ directly. And the only examples given are for matrices of size 3 by 3, which raises doubts about whether this would work for a very large matrix.

Note: I checked the seemingly related question posed here. However, since I am not actually interested in finding the eigenpair, only in knowing how bad my guess is, I hope there is a different approach that scales better than the inverse iteration method mentioned in that link.