How do I efficiently solve a linear differential equation with purely imaginary coefficients: $ \frac{dy}{dt} = i A y $ (A real, symmetric, sparse)?


I would like to solve a system of differential equations of the form $ \frac{dy}{dt} = i A y $. Here, $ A \in \mathbb{R}^{n \times n} $ is a large, sparse, symmetric matrix and $i$ is the imaginary unit.

In the case where $A$ is diagonal, the solutions are simply complex exponentials. As the magnitude of the off-diagonal coefficients increases, the solutions remain oscillatory, but interactions between the different components of $y$ cause the amplitudes to change over time (beating patterns).
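To see the beating concretely, here is a minimal two-component illustration in Python (the frequencies and the coupling strength 0.1 are arbitrary, chosen only for demonstration):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 2x2 symmetric A: diagonal entries are the natural
# frequencies, the off-diagonal entry is a weak coupling.
A = np.array([[1.0, 0.1],
              [0.1, 1.2]])
y0 = np.array([1.0, 0.0], dtype=complex)

# Propagate y(t) = exp(iAt) y(0) and track the amplitude of component 1.
ts = np.linspace(0.0, 200.0, 2001)
amp1 = np.array([abs((expm(1j * A * t) @ y0)[0]) for t in ts])

# With zero coupling, |y_1(t)| would stay exactly 1; with coupling it
# oscillates slowly between a maximum and a minimum -- the beating pattern.
print(amp1.min(), amp1.max())
```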

I am interested in finding an efficient numerical method to accurately find $y(T)$ when $y(0)$ is given, where $T$ is a time span much larger than the typical period of the oscillations, and $y(t)$ is a very large vector (more than 5000 elements).

I have tried solving this problem using matrix exponentials and ODE solvers. Matrix exponentials (MATLAB implementation using Padé approximants) do not scale well to large systems and take a prohibitively long time to run. ODE solvers tend to select very small step sizes (due to the oscillations), and take even more time to run (I tried stiff and non-stiff solvers).

Is there a way to solve this system more efficiently for large vector sizes by exploiting the structure of the equation? What is the relevant literature for this particular problem?

I note that there are related questions (1, 2) on this site, but I am hoping to find a solution that would work well specifically for the problem described above.

There are 2 best solutions below


If $u_j$ are orthonormal eigenvectors of $A$ corresponding to eigenvalues $\lambda_j$, the solution for initial value $y(0)$ is $$\begin{aligned} y(T) &= \sum_j c_j \exp(iT \lambda_j)\, u_j \\ c_j &= u_j^T y(0) \end{aligned}$$ Unfortunately, the matrix formed by the eigenvectors will not be sparse, so there is a large amount of data to compute; but if you are lucky, only relatively few of the $c_j$ are large and everything else is small enough to ignore. Matlab (for example) should be able to handle eigenvalues and eigenvectors of a $5000 \times 5000$ matrix.
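The recipe above can be sketched in code (here in Python/NumPy rather than Matlab, on a small random symmetric matrix chosen only for illustration):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Hypothetical small symmetric A and initial vector, for illustration only.
n = 50
B = rng.standard_normal((n, n))
A = (B + B.T) / 2
y0 = rng.standard_normal(n).astype(complex)
T = 2.0

# A is real symmetric, so eigh returns real eigenvalues lam and an
# orthonormal eigenvector matrix U (hence U^{-1} = U^T).
lam, U = np.linalg.eigh(A)

c = U.T @ y0                          # c_j = u_j^T y(0)
yT = U @ (np.exp(1j * T * lam) * c)   # y(T) = sum_j c_j exp(i T lam_j) u_j

# Cross-check against the dense propagator e^{iAT} y(0).
yT_ref = expm(1j * T * A) @ y0
print(np.linalg.norm(yT - yT_ref))    # round-off level
```

Once the decomposition is in hand, each additional time $T$ or initial vector $y(0)$ costs only two dense matrix-vector products.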


Based on the comments made here and elsewhere on this site, I decided to try a number of different algorithms and compare their performance. Beware: these results might not generalize to other kinds of data.

The algorithms I tried are:

  • MATLAB's expm function (which works by scaling and squaring)
  • Eigenvalue decomposition ($ e^{i A T} = V e^{i D T} V^{-1}$, where the exponential in $e^{i D T}$ acts only on the diagonal; since $A$ is symmetric, $V^{-1} = V^T$)
  • A toolbox called Expokit, which has a sparse method to directly calculate the solution $y(T)$ without forming the exponential matrix
  • Another such sparse method from the literature (code here, paper reference 2 below, denoted "amh" in the table below)
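For readers outside MATLAB: SciPy ships an implementation of the Al-Mohy–Higham algorithm (reference 2 below) as scipy.sparse.linalg.expm_multiply. A minimal sketch on a hypothetical random sparse symmetric matrix:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

rng = np.random.default_rng(1)

# Hypothetical sparse symmetric A, for illustration only.
n = 400
M = sp.random(n, n, density=0.01, format="csr", random_state=1)
A = (M + M.T) / 2
y0 = rng.standard_normal(n).astype(complex)
T = 10.0

# Computes y(T) = e^{iAT} y(0) without ever forming the (dense) matrix
# exponential; the cost per step is a sparse matrix-vector product, so it
# scales with the number of nonzeros of A.
yT = expm_multiply(1j * T * A, y0)

# Since A is symmetric, e^{iAT} is unitary, so the norm of y is conserved.
print(np.linalg.norm(yT), np.linalg.norm(y0))
```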

The results in seconds are as follows, for matrices $A$ of size $n$ by $n$ with $N$ nonzero elements.

+---------+-------+-------+--------+--------+--------+---------+
| n       |  1126 |  1762 |   2526 |   3446 |   4490 |    5678 |
| N       | 43778 | 85718 | 145394 | 231730 | 342258 |  487946 |
+---------+-------+-------+--------+--------+--------+---------+
| expm    |  3.95 | 18.92 |  68.48 | 201.16 | 488.22 | 1059.60 |
| eig     |  0.35 |  1.22 |   3.49 |   8.94 |  18.97 |   38.05 |
| amh     |  0.14 |  0.24 |   0.34 |   0.56 |   0.68 |    1.15 |
| expokit |  0.18 |  0.28 |   0.56 |   0.76 |   1.02 |    1.52 |
+---------+-------+-------+--------+--------+--------+---------+

Fitting a power-law profile reveals that the runtime for expm grows as $n^{3.3}$ and for the eigenvalue method as $n^{2.9}$, while the runtimes of the two other methods grow approximately linearly with the number of nonzero elements $N$. They also grow linearly with the number of vectors to transform (not shown above). All methods agree to within $10^{-12}$ on my data. I did not include ODE solvers in the comparison, because they take far too long to run on my machine at these problem sizes.
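The power-law exponents can be recovered from the table by a least-squares fit in log-log space (a power law $t = c\,n^p$ is a straight line of slope $p$ on log-log axes); the exact exponent depends somewhat on which points are included in the fit:

```python
import numpy as np

# Timing data from the table above.
n     = np.array([1126, 1762, 2526, 3446, 4490, 5678])
expm_ = np.array([3.95, 18.92, 68.48, 201.16, 488.22, 1059.60])
eig_  = np.array([0.35, 1.22, 3.49, 8.94, 18.97, 38.05])

# Slope of log(t) vs log(n) is the power-law exponent p.
p_expm = np.polyfit(np.log(n), np.log(expm_), 1)[0]
p_eig  = np.polyfit(np.log(n), np.log(eig_), 1)[0]
print(p_expm, p_eig)
```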

So, the answer is this: for a problem that is very large but sparse, use one of the latter two methods listed above, which compute the product of a matrix exponential with a vector without ever forming that matrix.

References:

  1. C. Moler and C. Van Loan, "Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later," SIAM Rev. 45, 3–49 (2003).
  2. A. H. Al-Mohy and N. J. Higham, "Computing the Action of the Matrix Exponential, with an Application to Exponential Integrators," SIAM J. Sci. Comput. 33, 488–511 (2011).