I have a large, sparse, non-symmetric matrix $M$, and I need accurate eigenvalues $\lambda_i$ and eigenvectors $\vec{s}_i$ for all $i$. (An example system is below; it's based on a finite-difference problem.)
I know that $\lambda_i$ will be of the form
$$\lambda_i = e^{i\omega_i\Delta t}$$
where both $\omega_i$ and $\Delta t$ are real. In other words, $\lambda_i$ is complex and $\left|\lambda_i\right|=1$. When I just tell scipy to numerically find $\lambda_i$ and $\vec{s}_i$, scipy spits out $\lambda_i$ which are close to magnitude 1, but some are far enough from 1 that I question the accuracy of $\vec{s}_i$. (The $\vec{s}_i$ represent the vibrations of a system, and I can examine them graphically. Some of the results look good, but some look very fishy. I need accurate $\vec{s}_i$ to characterize some properties of the vibrations.)
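Schematically, what I'm doing now looks something like the following (a minimal sketch: the matrix here is just a random sparse stand-in so the snippet runs, and the number of requested eigenpairs is arbitrary):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Random sparse stand-in so the snippet runs; the real M is the
# non-symmetric update matrix from the discretization described below.
M = sp.random(300, 300, density=0.02, format='csr', random_state=0)

# ARPACK (a Krylov subspace method) returns a few eigenpairs of a
# large, sparse, non-symmetric matrix without forming it densely.
vals, vecs = spla.eigs(M, k=12, which='LM')

# For the real M, every eigenvalue should satisfy |lambda| = 1; this
# measures how far the computed ones drift off the unit circle.
print("max | |lambda| - 1 | =", np.max(np.abs(np.abs(vals) - 1.0)))
```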
My basic question is: can I somehow use the condition that $\left|\lambda_i\right|=1$ to get more accurate numerical solutions to $\lambda_i$ and $\vec{s}_i$?
(I should note that I can recast the system into another, somewhat smaller, but still non-symmetric matrix $N$ whose eigenvalues have the form $\lambda_i=2\left(\cos\left(\omega_i\Delta t\right)-1\right)$, where both $\omega_i$ and $\Delta t$ are again real. This imposes the condition that $-4<\lambda_i<0$, and in particular that the $\lambda_i$ are real. If I simply use scipy on this system, the resulting $\lambda_i$ are again good but maybe not good enough; they have imaginary components that are sometimes large enough to cause problems.)
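Once those $\lambda_i$ are in hand, recovering $\omega_i$ is just a matter of inverting that relation. A sketch, with placeholder values standing in for the computed eigenvalues of $N$ and the timestep:

```python
import numpy as np

dt = 1.0e-3                                                     # placeholder timestep
lam_N = np.array([-3.7 + 1e-9j, -2.0 - 2e-9j, -0.3 + 5e-10j])   # placeholder eigenvalues of N

# The lambda_i should be real and lie in (-4, 0); check how large the
# spurious imaginary parts are before discarding them.
print("max |Im(lambda)| =", np.max(np.abs(lam_N.imag)))
lam = np.clip(lam_N.real, -4.0, 0.0)

# lambda = 2*(cos(omega*dt) - 1)  =>  omega = arccos(1 + lambda/2) / dt
omega = np.arccos(1.0 + lam / 2.0) / dt
print(omega)
```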
Below is an example system to illustrate what's going on. However, it may be somewhat misleading: I'm giving a 1D example that yields a (nearly) tridiagonal matrix, but in the system I actually care about, the matrix is no longer tridiagonal because it is in 2D. I realize that the problem I am about to describe can be approached in other ways that may be easier to solve, but please trust me that I have good reasons for setting up the problem this way.
Say I have the following coupled PDEs:
$$\frac{\partial v}{\partial t}=m\frac{\partial\tau}{\partial x}$$ $$\frac{\partial\tau}{\partial t}=\mu\frac{\partial v}{\partial x}$$
Note that these can be combined to form $\frac{\partial^2v}{\partial t^2}=m\mu\frac{\partial^2v}{\partial x^2}$, which is a wave equation with oscillatory solutions of the form $v\left(x,t\right)=v_0e^{-i\left(\omega t - kx\right)}$.
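Explicitly, assuming $m$ and $\mu$ are constant, the combination comes from differentiating the first equation in time and substituting the second:
$$\frac{\partial^2 v}{\partial t^2}=m\,\frac{\partial}{\partial t}\frac{\partial\tau}{\partial x}=m\,\frac{\partial}{\partial x}\frac{\partial\tau}{\partial t}=m\,\frac{\partial}{\partial x}\left(\mu\frac{\partial v}{\partial x}\right)=m\mu\,\frac{\partial^2 v}{\partial x^2}.$$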
I want to use a finite-difference method to study the system, so I discretize $v$ and $\tau$ to create a staggered grid with alternating $v$ and $\tau$ elements. Say I have three of each with periodic boundary conditions, so the elements on my grid are ordered $v_0,\ \tau_0,\ v_1,\ \tau_1,\ v_2,\ \tau_2$.
That is, the $v$ values are defined on the red circles and the $\tau$ values are defined on the blue squares. Say the distance between adjacent elements is $\Delta x/2$, that $v^{l}_i$ represents $v_i$ at timestep $l$, and that each timestep takes $\Delta t$.
When I replace the derivatives with centered finite differences (and use a leapfrog time scheme), I get equations like:
$$\frac{v^{l+\frac{1}{2}}_0-v^{l-\frac{1}{2}}_0}{\Delta t} = m \frac{\tau^l_0-\tau^l_2}{\Delta x},$$
which can be rewritten
$$v^{l+\frac{1}{2}}_0=v^{l-\frac{1}{2}}_0 + m \frac{\Delta t}{\Delta x}\left(\tau^l_0-\tau^l_2\right).$$
If I put everything in a matrix, I get:
$$\left( {\begin{array}{cccccc} 1 & m\frac{\Delta t}{\Delta x} & 0 & 0 & 0 & -m\frac{\Delta t}{\Delta x} \\ -\mu\frac{\Delta t}{\Delta x} & 1 & \mu \frac{\Delta t}{\Delta x} & 0 & 0 & 0 \\ 0 & -m\frac{\Delta t}{\Delta x} & 1 & m\frac{\Delta t}{\Delta x} & 0 & 0 \\ 0 & 0 & -\mu\frac{\Delta t}{\Delta x} & 1 & \mu \frac{\Delta t}{\Delta x} & 0 \\ 0 & 0 & 0 & -m\frac{\Delta t}{\Delta x} & 1 & m\frac{\Delta t}{\Delta x} \\ \mu \frac{\Delta t}{\Delta x} & 0 & 0 & 0 & -\mu\frac{\Delta t}{\Delta x} & 1 \\ \end{array} } \right) \left( {\begin{array}{c} v^{l-\frac{1}{2}}_0\\ \tau^{l-1}_0\\ v^{l-\frac{1}{2}}_1\\ \tau^{l-1}_1\\ v^{l-\frac{1}{2}}_2\\ \tau^{l-1}_2\\ \end{array} } \right) =\left( {\begin{array}{c} v^{l+\frac{1}{2}}_0\\ \tau^{l}_0\\ v^{l+\frac{1}{2}}_1\\ \tau^{l}_1\\ v^{l+\frac{1}{2}}_2\\ \tau^{l}_2\\ \end{array} } \right)$$
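To make this concrete, here is a small sketch that builds this $6\times6$ update matrix and looks at the magnitudes of its computed eigenvalues (the values of $m$, $\mu$, $\Delta t$, and $\Delta x$ are arbitrary placeholders):

```python
import numpy as np

# Arbitrary placeholder parameters, chosen only for illustration
m, mu, dt, dx = 1.0, 1.0, 0.01, 0.1
a = m * dt / dx    # coefficient in the v-update rows
b = mu * dt / dx   # coefficient in the tau-update rows

# The 6x6 update matrix above, acting on (v0, tau0, v1, tau1, v2, tau2)
A = np.array([
    [ 1,  a,  0,  0,  0, -a],
    [-b,  1,  b,  0,  0,  0],
    [ 0, -a,  1,  a,  0,  0],
    [ 0,  0, -b,  1,  b,  0],
    [ 0,  0,  0, -a,  1,  a],
    [ b,  0,  0,  0, -b,  1],
])

# Dense eigendecomposition is fine at this size; the question is how
# close the eigenvalue magnitudes come to 1.
evals, evecs = np.linalg.eig(A)
print(np.abs(evals))
```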
This matrix is not symmetric, and while it is (mostly) tridiagonal here, it will no longer be tridiagonal for a 2D grid. In fact, for the problem I actually care about, my grid has five different values defined at four different types of grid points, so the resulting matrix is very non-tridiagonal. (Maybe there is some higher-dimensional analog to tridiagonal?)
Since this system has oscillatory solutions, if I have an eigenvector of this system, then $v^{l+\frac{1}{2}}_i = e^{i\omega\Delta t}v^{l-\frac{1}{2}}_i$ and $\tau^{l}_i = e^{i\omega\Delta t}\tau^{l-1}_i$, so the eigenvalues should be of the form $e^{i\omega\Delta t}$.
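Given trustworthy eigenvalues, the frequencies then follow from the complex argument. A self-contained round-trip sketch with made-up values:

```python
import numpy as np

dt = 0.01                                # placeholder timestep
true_omega = np.array([0.3, -0.3, 1.1])  # made-up frequencies
evals = np.exp(1j * true_omega * dt)     # eigenvalues of the ideal form

# lambda_i = exp(i*omega_i*dt), so omega_i is the complex argument over dt
omega = np.angle(evals) / dt
print(omega)                             # recovers [0.3, -0.3, 1.1]
```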
Regarding the fishy solutions: when doing the discretization, it can be important to regularize the solution, or very strange things can happen when you try to solve the matrix-vector equation.
As I have not investigated many real problems in physics, but have mostly mimicked these types of equations for applications in other fields, I can't really say which type of regularization would be most suitable here. In any case, a very popular approach to this kind of sparse matrix problem is to use Krylov subspace methods such as conjugate gradient. Libraries for these exist in MATLAB, Octave, and probably scipy too. What can become a bit confusing is keeping track of the data points when building the matrices and going to more than one dimension, but if you are used to working with vectorization and Kronecker products, you can calculate all the indices of the non-zero values in a very systematic way.
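As a rough illustration of that bookkeeping (with a made-up periodic 1D difference operator, not the staggered-grid scheme from the question), the Kronecker-product assembly and a Krylov eigensolver in scipy look roughly like this:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 20                                    # placeholder grid size per dimension

# Made-up periodic 1D centered-difference operator (illustrative only)
e = np.ones(n - 1)
D1 = sp.diags([-e, e], [-1, 1], shape=(n, n), format='lil')
D1[0, n - 1] = -1.0                       # periodic wrap-around terms
D1[n - 1, 0] = 1.0
D1 = D1.tocsr()

# Kronecker products assemble the 2D operators: the 1D difference acts
# on one index while the identity acts on the other.
I = sp.identity(n, format='csr')
Dx = sp.kron(I, D1, format='csr')
Dy = sp.kron(D1, I, format='csr')

# A few eigenpairs of the resulting sparse non-symmetric operator via
# ARPACK, one of scipy's Krylov subspace eigensolvers.
vals, vecs = spla.eigs(Dx + Dy, k=6, which='LM')
print(vals)
```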