Diverging determinant of numpy eigenvalues


I am trying to solve the equation

$u''=\lambda u $

via a discretized matrix scheme. Therefore, as a first step, I need to compute the eigenvalues $\lambda$ using the numpy.linalg.eigvals function. If I now insert these eigenvalues into

$\det(T-\lambda\,I)$,

where $T$ is the discretized matrix, the determinant diverges. But the eigenvalues $\lambda$ were computed precisely so that this determinant should be 0. Is this a floating-point problem, and how can I improve on it?
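For reference, a minimal sketch that reproduces the effect. The matrix construction is an assumption on my part (a standard second-difference discretization of $u''=\lambda u$ on $[0,1]$ with Dirichlet boundary conditions), since the question does not state it:

```python
import numpy as np

# Assumed setup (not stated in the question): second-difference
# discretization of u'' = lambda*u on [0, 1] with Dirichlet BCs.
N = 50
h = 1.0 / (N + 1)
T = (np.diag(-2.0 * np.ones(N))
     + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2

# T is symmetric, so its eigenvalues are real.
lams = np.linalg.eigvals(T).real

# Insert the smallest-magnitude eigenvalue back into det(T - lam*I).
# All eigenvalues are negative, so argmax picks the one closest to 0.
lam = lams[np.argmax(lams)]
d = np.linalg.det(T - lam * np.eye(N))
print(abs(d))   # huge in magnitude, nowhere near 0
```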

I am grateful for any advice on how to avoid this difficulty.

Best jw

BEST ANSWER

Remember that the eigenvalues of the continuous boundary value problem form a quadratically growing sequence. In the discretization, the eigenvalues of the matrix increase similarly, even though only roughly the lower third of them is close to the continuous eigenvalues. Since the determinant has the value $\prod_{k=1}^N(\lambda_k-\lambda)$, if the smallest factor is not exactly zero, the other factors combined provide a rapidly growing magnification of that residual.
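This magnification can be made concrete. The following sketch (again assuming a second-difference discretization on $[0,1]$ with Dirichlet conditions) computes the product of all factors except the smallest one:

```python
import numpy as np

# Assumed discretization (second differences, Dirichlet BCs on [0, 1]).
N = 30
h = 1.0 / (N + 1)
T = (np.diag(-2.0 * np.ones(N))
     + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2
lams = np.sort(np.linalg.eigvals(T).real)   # ascending; all negative

lam = lams[-1]                         # eigenvalue closest to zero
factors = np.abs(lams[:-1] - lam)      # the remaining N-1 factors
magnification = np.prod(factors)
print(magnification)                   # astronomically large
```

So even an eigenvalue accurate to machine precision leaves a tiny nonzero smallest factor, and this product blows it up to a huge determinant value.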

So yes, it is a problem of the accuracy of the eigenvalue approximation, compounded by inaccuracies of the floating-point number type.

The determinant is simply not a good measure of eigenvalue accuracy. At the very least, divide by the determinant of $T$ to remove the bulk of the growth. A better measure would be to apply the transformation matrix to the original system matrix and check how close to zero the lower triangle is.

Or perform a few inverse power iterations with $A=T-\lambda I$ and check how large the resulting offset for $\lambda$ is.