Estimate eigenvectors of symmetric matrix with almost vanishing diagonal


Is there a way to approximate the eigenvectors of a symmetric matrix with almost vanishing diagonal elements, i.e. with the block matrix form \begin{equation} M=\left( \begin{array}{cc} \alpha\epsilon _1 & A \\ A ^T & \alpha\epsilon _2 \end{array} \right) \end{equation} with $\alpha \ll 1$? I'd like the result to first order in $\alpha$. I thought that maybe I could first bring it to block-diagonal form using an ansatz for the unitary transformation, as I've seen done in a similar case (the seesaw mechanism for studying neutrino masses in particle physics), but it didn't lead anywhere.
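(For orientation, here is a quick numerical sketch of the setup: for small $\alpha$, the eigenvalues of $M$ shift by $O(\alpha)$ and the eigenvectors stay close to those of the $\alpha = 0$ matrix. The sizes and random matrices below are arbitrary choices for illustration.)

```python
import numpy as np

# Arbitrary random example of the block matrix M with small diagonal blocks
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
eps1 = np.diag(rng.standard_normal(n))  # symmetric epsilon_1
eps2 = np.diag(rng.standard_normal(n))  # symmetric epsilon_2

def M(alpha):
    return np.block([[alpha * eps1, A], [A.T, alpha * eps2]])

alpha = 1e-4
w0, V0 = np.linalg.eigh(M(0.0))    # unperturbed eigenpairs
wa, Va = np.linalg.eigh(M(alpha))  # perturbed eigenpairs

# Eigenvalues shift at order alpha; eigenvectors rotate only slightly
print(np.max(np.abs(wa - w0)))
overlaps = np.abs(np.sum(V0 * Va, axis=0))  # |<v0_i, va_i>|, sign-insensitive
print(np.min(overlaps))
```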


There are 2 answers below.

Best answer:

Note that below I replaced $A^{T}$ by $A^{\ast }$, the adjoint of $A$. In case $A$ is real they coincide, but it has advantages to use a complex formalism. The problem is a standard perturbation situation in quantum mechanics and is treated in nearly all textbooks. Thus we have a Hamiltonian \begin{equation*} H_{\alpha }=\left( \begin{array}{cc} \alpha \varepsilon _{1} & A \\ A^{\ast } & \alpha \varepsilon _{2} \end{array} \right) =H_{0}+\alpha V,\quad H_{0}=\left( \begin{array}{cc} 0 & A \\ A^{\ast } & 0 \end{array} \right) ,\quad V=\left( \begin{array}{cc} \varepsilon _{1} & 0 \\ 0 & \varepsilon _{2} \end{array} \right) , \end{equation*} where $V$ is small. The parameter $\alpha $ is introduced for convenience in tracking orders in a perturbation expansion. The idea is to express various quantities in terms of the eigenvalues and eigenvectors of $H_{0}$, which are usually known. Suppose that $\mathbf{f}$ is an eigenvector of $H_{0}$ at the eigenvalue $\lambda $, \begin{equation*} H_{0}\mathbf{f}=\lambda \mathbf{f}, \end{equation*} or \begin{equation*} \left( \begin{array}{cc} 0 & A \\ A^{\ast } & 0 \end{array} \right) \left( \begin{array}{c} f_{1} \\ f_{2} \end{array} \right) =\lambda \left( \begin{array}{c} f_{1} \\ f_{2} \end{array} \right) . \end{equation*} Applying $H_{0}$ a second time shows that \begin{eqnarray*} AA^{\ast }f_{1} &=&\lambda ^{2}f_{1}, \\ A^{\ast }Af_{2} &=&\lambda ^{2}f_{2}. \end{eqnarray*} Note that some of the eigenvalues may be degenerate (have multiplicity $>1$). Then we denote the eigenprojector corresponding to $\lambda $ by $P_{\lambda }$.
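In concrete computations the eigenpairs of $H_0$ can be read off from a singular value decomposition $A = U\Sigma W^{\ast}$: the vectors $(u_i, \pm w_i)/\sqrt{2}$ are eigenvectors of $H_0$ at the eigenvalues $\pm\sigma_i$, which is consistent with $AA^{\ast}f_1 = \lambda^2 f_1$ above. A small numerical check (real $A$, so $A^{\ast} = A^T$; matrix and size are arbitrary choices):

```python
import numpy as np

# Eigenpairs of H0 = [[0, A], [A^T, 0]] from the SVD A = U @ diag(s) @ Wt:
# the vectors (u_i, ±w_i)/sqrt(2) satisfy H0 f = ±s_i f.
rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))  # real A, so the adjoint is A.T
H0 = np.block([[np.zeros((n, n)), A], [A.T, np.zeros((n, n))]])

U, s, Wt = np.linalg.svd(A)
for i in range(n):
    for sign in (1, -1):
        f = np.concatenate([U[:, i], sign * Wt[i]]) / np.sqrt(2)
        assert np.allclose(H0 @ f, sign * s[i] * f)
        # and f1 = u_i indeed satisfies A A^T f1 = lambda^2 f1
        assert np.allclose(A @ A.T @ f[:n], s[i] ** 2 * f[:n])
print("all eigenpairs verified")
```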

An elegant method to treat the perturbation problem is by Kato (T. Kato, Perturbation Theory for Linear Operators). Thus let \begin{equation*} R_{\alpha }(z)=[z-H_{\alpha }]^{-1} \end{equation*} be the resolvent of $H_{\alpha }$ and \begin{equation*} R_{0}(z)=[z-H_{0}]^{-1}. \end{equation*} The resolvent is analytic outside the spectrum of $H_{\alpha }$, which consists of the finite set of eigenvalues of $H_{\alpha }$ (which are real). We can write \begin{equation*} R_{\alpha }(z)=R_{0}(z)[1-\alpha VR_{0}(z)]^{-1}=R_{0}(z)[1+\alpha VR_{0}(z)+\alpha ^{2}\{VR_{0}(z)\}^{2}+\alpha ^{3}\{VR_{0}(z)\}^{3}+\cdots ], \end{equation*} which is a series expansion in $\alpha $. Let now $\Gamma $ be a contour that wraps around the eigenvalue $\lambda $ of $H_{0}$ (with associated eigenprojector $P_{\lambda }$) but avoids its other eigenvalues. Then \begin{equation*} P_{\alpha }=\frac{1}{2\pi i}\int_{\Gamma }dz\,R_{\alpha }(z) \end{equation*} is a projector and equals the sum of all eigenprojectors of $H_{\alpha }$ inside $\Gamma $, \begin{equation*} P_{\alpha }=\sum_{n}P_{\alpha }^{(n)}, \end{equation*} whereas ($\lambda _{\alpha }^{(n)}$ is the eigenvalue associated with $P_{\alpha }^{(n)}$) \begin{equation*} \frac{1}{2\pi i}\int_{\Gamma }dz\,zR_{\alpha }(z)=\sum_{n}\lambda _{\alpha }^{(n)}P_{\alpha }^{(n)}. \end{equation*} Introducing the perturbation expansion above, we have \begin{eqnarray*} P_{\alpha } &=&\frac{1}{2\pi i}\int_{\Gamma }dz\,\{R_{0}(z)+\alpha R_{0}(z)VR_{0}(z)\}+\mathcal{O}(\alpha ^{2}) \\ &=&P_{\lambda }+\alpha \frac{1}{2\pi i}\int_{\Gamma }dz\,R_{0}(z)VR_{0}(z)+\mathcal{O}(\alpha ^{2}) \\ &=&P_{\lambda }+\alpha P^{(1)}+\mathcal{O}(\alpha ^{2}). \end{eqnarray*} In case the eigenvalue problem for $H_{0}$ is solvable we can then calculate $P^{(1)}$ and, to first order in $\alpha $, the $\lambda _{\alpha }^{(n)}$. Note that the original eigenvalue $\lambda $ can split up. Note further that truncation after the first order in $\alpha $ can lead to a result that is no longer a projector.
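For a nondegenerate eigenvalue $\lambda$ of $H_0$ with normalized eigenvector $\mathbf{f}$, this machinery reproduces the familiar first-order shift $\lambda_{\alpha} \approx \lambda + \alpha\, \mathbf{f}^{\ast}V\mathbf{f}$, which is easy to verify numerically (a sketch with arbitrary real test matrices; the diagonal $V$ below simply stands in for $\mathrm{diag}(\varepsilon_1, \varepsilon_2)$):

```python
import numpy as np

# First-order perturbation check: lambda_alpha ~ lambda + alpha * f.V f
rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n))
V = np.diag(rng.standard_normal(2 * n))  # stands in for diag(eps1, eps2)
H0 = np.block([[np.zeros((n, n)), A], [A.T, np.zeros((n, n))]])
alpha = 1e-4

lam0, F = np.linalg.eigh(H0)  # columns of F are the eigenvectors f
lam_exact = np.linalg.eigvalsh(H0 + alpha * V)
# f_n^T V f_n for each eigenvector, all at once
lam_first = lam0 + alpha * np.einsum('in,ij,jn->n', F, V, F)

# The first-order error is O(alpha^2), far below the O(alpha) shift itself
err_first = np.max(np.abs(lam_exact - lam_first))
err_zeroth = np.max(np.abs(lam_exact - lam0))
print(err_first, err_zeroth)
```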

A full exposition, including the mathematical details, can be found in Kato's book.


Here is an intuitive approach:

Note that $AA^T$ and $A^TA$ are symmetric, positive semidefinite, and have the same nonzero eigenvalues, so $A$ has singular pairs: a $\lambda \geq 0$ and vectors $v, w \neq 0$ with $\|v\| = \|w\| = 1$ such that $A w = \lambda v$ and $A^T v = \lambda w$ (equivalently, $AA^T v = \lambda^2 v$ and $A^T A w = \lambda^2 w$, matching the relations in the first answer). We then have $$\begin{pmatrix} \epsilon_1 & A \\ A ^T & \epsilon_2 \end{pmatrix}\begin{pmatrix} v \\ w \end{pmatrix} =\begin{pmatrix} \epsilon_1 v+ A w \\ A^T v+ \epsilon_2 w \end{pmatrix} =\begin{pmatrix} \epsilon_1 v+ \lambda v \\ \lambda w+ \epsilon_2 w \end{pmatrix}.$$ Now let $\mu_{i}$ be the largest eigenvalue of $\epsilon_i$ in absolute value and assume $\mu_i \ll \lambda$; then for every $u$ we have $$\|\epsilon_i u \| \leq \mu_{i} \|u\| \ll \lambda \|u\|.$$ It follows that for any unit vector $x$, $$\|(\epsilon_i x + \lambda x) - \lambda x\| = \|\epsilon_i x\| \leq \mu_{i} \ll \lambda,$$ and thus it seems reasonable to take $$\epsilon_1 v+ \lambda v \approx \lambda v, \quad \text{ and } \quad \lambda w+ \epsilon_2 w \approx \lambda w.$$ From which follows that $$\begin{pmatrix} \epsilon_1 & A \\ A ^T & \epsilon_2 \end{pmatrix}\begin{pmatrix} v \\ w \end{pmatrix} \approx \lambda \begin{pmatrix} v \\ w \end{pmatrix}.$$ (Here the factor $\alpha$ of the question is absorbed into the $\epsilon_i$.)
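A quick numerical check of this heuristic, taking $v, w$ as the top singular pair of $A$ (random matrices and the $10^{-3}$ scale of the $\epsilon_i$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))
eps1 = 1e-3 * np.diag(rng.standard_normal(n))  # mu_1 << lambda
eps2 = 1e-3 * np.diag(rng.standard_normal(n))  # mu_2 << lambda
M = np.block([[eps1, A], [A.T, eps2]])

# Singular pair: A w = lam v and A^T v = lam w, lam the largest singular value
U, s, Wt = np.linalg.svd(A)
lam, v, w = s[0], U[:, 0], Wt[0]
x = np.concatenate([v, w])

# Relative residual of the approximate eigenpair (lam, x): of order mu_i / lam
res = np.linalg.norm(M @ x - lam * x) / (lam * np.linalg.norm(x))
print(res)
```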