I was trying to numerically compute the eigenvalues and eigenvectors of a matrix in Python with NumPy. The problem is that the inverse similarity transformation built from the eigenvalue and eigenvector matrices does not accurately reproduce the original matrix: the element-wise error is of the same order as the entries of the original matrix. For other, more "well-behaved" matrices, I found the error to be of order 1e-15, i.e. at floating-point precision. All elements of the original matrix are of the same order. Concluding that the matrix is non-diagonalizable, I tried instead to compute the Jordan form of the same matrix in Mathematica, but what I got was a diagonal matrix with the same "inaccurate eigenvalues" on its diagonal, and exactly the same set of eigenvectors. The result from SVD, however, reproduces the original matrix up to floating-point precision.
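For comparison, here is a minimal sketch of what I mean by a "well-behaved" matrix, using a random symmetric matrix (my choice purely for illustration); there, the same v @ diag(w) @ v.T reconstruction agrees with the original matrix to machine precision:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A = A + A.transpose()  # symmetric, hence normal; eigenvectors are orthogonal

w, v = np.linalg.eig(A)
# Reconstruction using the transpose of the eigenvector matrix
err = np.max(np.abs(A - v @ np.diag(w) @ v.transpose()))
print(err)  # error at machine-precision level
```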
To be specific, the matrix in question is \begin{equation} \mathbf{M}_2^{-1}\mathbf{M}_1 \end{equation} with \begin{equation} \begin{split} \left(\mathbf{M}_{1}\right)_{ab} &= -2J_{ab} +\left(2\sum_c J_{ac}+\sum_c K_{ca}\right)\delta_{ab}\\ \left(\mathbf{M}_{2}\right)_{ab} &= -K_{ab}. \end{split} \end{equation} $\mathbf{J}$ and $\mathbf{K}$ are drawn from the Wishart ensemble. So far I've seen that the dimension of the matrix doesn't matter: the element-wise error stays comparable to the entries of the original matrix regardless of its size.
Here is the code that reproduces the problem:
import numpy as np

rng = np.random.default_rng(1)
N = 2
lamb = 0.02

def wishart(N, lamb):
    # Wishart-ensemble sample: N x N matrix built from M = N/lamb Gaussian columns
    M = int(N / lamb)
    X = rng.standard_normal(size=(N, M))
    return X @ X.transpose() / M

J = wishart(N, lamb)
K = wishart(N, lamb)
M1 = -2*J + np.diag(np.sum(2*J, axis=0) + np.sum(K, axis=0))
M2 = -K
MM = np.linalg.inv(M2) @ M1
w, v = np.linalg.eig(MM)
U, S, V = np.linalg.svd(MM)
print(MM)
print(np.abs(MM - v @ np.diag(w) @ v.transpose()))  # element-wise absolute error from NumPy eigenvalue routine
print(np.abs(MM - U @ np.diag(S) @ V))  # element-wise absolute error from NumPy SVD routine
With this seed, I got
\begin{equation}
\mathbf{M}_2^{-1}\mathbf{M}_1 =
\begin{pmatrix}
-1.06211474 & 0.06211474\\
0.06946018 & -1.06946018
\end{pmatrix}.
\end{equation}
The element-wise error from Numpy eigenvalue routine is
\begin{equation}
\begin{pmatrix}
0.05930343 & 0.00015695\\
0.00718848 & 0.05930343
\end{pmatrix},
\end{equation}
whose maximum value is of the same order as some elements of the original matrix. SVD instead gives a maximum element-wise error of order 1e-16. Mathematica's JordanDecomposition[] function yields approximately the same eigenvalue matrix.
The linear algebra routines of both NumPy and Mathematica give accurate results when applied to ordinary matrices. Is there a numerical remedy, or a classification of this kind of problematic matrix?
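For reference, here is a small diagnostic sketch (my addition, rebuilding MM exactly as above) that quantifies how far MM is from being normal, i.e. from commuting with its own transpose, and how well-conditioned its eigenvector matrix is:

```python
import numpy as np

rng = np.random.default_rng(1)
N, lamb = 2, 0.02

def wishart(N, lamb):
    M = int(N / lamb)
    X = rng.standard_normal(size=(N, M))
    return X @ X.transpose() / M

J = wishart(N, lamb)
K = wishart(N, lamb)
M1 = -2*J + np.diag(np.sum(2*J, axis=0) + np.sum(K, axis=0))
MM = np.linalg.inv(-K) @ M1

# A matrix is normal iff it commutes with its transpose; only then must
# its eigenvector matrix be orthogonal.
departure = np.max(np.abs(MM @ MM.transpose() - MM.transpose() @ MM))

w, v = np.linalg.eig(MM)
cond_v = np.linalg.cond(v)  # conditioning of the eigenvector basis

print(departure, cond_v)
```

A nonzero departure from normality together with a condition number of v noticeably above 1 would indicate a non-orthogonal eigenvector basis.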