Accuracy of numerical methods in finding eigenvalues


I have a problem with assessing the accuracy of my numerical calculation. I have a 2nd-order ODE. It is an eigenvalue problem of the form:

$ y'' + ay' + \lambda^2y = 0 $

and the boundary conditions are:

$ y(0) = y(1) = 0 $

This equation describes a vibrating string, clamped at x=0 and x=1, with a certain mass distribution. I want to be able to calculate the eigenvalues of this equation numerically. I do this by using the Runge-Kutta method to find solutions to the equation with initial conditions:

$ y(0) = 0, y'(0) = 1 $

with different values of $\lambda$ and then looking for the ones that are zero at x=1.

I terminate my search for eigenvalues when I find a function that has

$|y(1)| < \delta$

where $\delta$ is a predefined constant.

Now the problem is that I'd like to know how accurate my calculations are, i.e. how do I choose the values of $\delta$ and the stepsize used in computing the values of the potential eigenfunction, so that the error in the eigenvalue is less than, say, $ 10^{-3}$?
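For concreteness, such a shooting search might be sketched as below. This is my own sketch, not the asker's code: it assumes a constant coefficient $a$, uses SciPy's adaptive RK45, and replaces the $|y(1)| < \delta$ test with a bracketed root find, which takes $\delta$ out of the error budget entirely:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def boundary_value(lam, a=0.0):
    """Integrate y'' + a y' + lam^2 y = 0 with y(0) = 0, y'(0) = 1,
    then return y(1), which should vanish at an eigenvalue."""
    def rhs(x, u):
        y, yp = u
        return [yp, -a * yp - lam**2 * y]
    sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# For a = 0 the exact eigenvalues are lam = n*pi, n = 1, 2, ...
# Bracket a sign change of y(1) and refine it with a root finder.
lam1 = brentq(boundary_value, 2.0, 4.0)  # close to pi when a = 0
```

With a sign-change bracket the eigenvalue is pinned down to the root finder's tolerance, so only the integration error remains.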

There are 3 answers below.

Best answer:

The exact solution as well as the numerical solution for fixed stepsize $h$ depends analytically on $λ$; at simple eigenvalues there is locally a near-linear relationship between $λ$ and the right boundary value. The numerical error is not exactly smooth in the step size $h$, but its leading term can be identified with the solution of a linear system whose right-hand side is the local truncation error. In total this allows one to say that $$ y_{λ,h}(1)=a(λ-λ_*)+bh^p+... $$ so that if $|y_{λ,h}(1)|<δ$ then $$|λ-λ_*|<\frac{δ+|b|h^p}{|a|}.$$

Thus for a method of order $p$, the numerical eigenvalue for a given stepsize $h$ will lead to a right boundary value of size $O(δ+h^p)$ in the exact solution and thus a distance of the same magnitude to the eigenvalue of the exact solution.

Now you can employ the error estimates of the Richardson extrapolation method or similar to verify the correctness of your numerical result.


Note that the higher oscillation for larger $λ$ brings the requirements of the sampling theorem into play, i.e., the sampling density has to be high enough. In other words, for large $λ$ the problem becomes stiff; the error term could be modified to $O(e^{c|λ|}h^p)$ to reflect that.

This should be no problem for the lowest eigenvalue: find the solution within error $δ=O(h^{p+1})$ for $h=0.1$ and for $h=0.2$, and use the extrapolation formula $$ \text{error} = \frac{|λ_{0.2}-λ_{0.1}|}{2^p-1} $$ to estimate the error of $λ_{0.1}$.
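As a sketch of this check (my code, assuming $a = 0$ and a hand-rolled classical RK4 so that the stepsize really is fixed; the bracket $[2, 4]$ targets the lowest eigenvalue $\pi$):

```python
import numpy as np
from scipy.optimize import brentq

def y_at_one(lam, h):
    """Classical RK4 with fixed step h for y'' + lam^2 y = 0,
    y(0) = 0, y'(0) = 1; returns y(1)."""
    def f(u):
        return np.array([u[1], -lam**2 * u[0]])
    u = np.array([0.0, 1.0])
    for _ in range(round(1.0 / h)):
        k1 = f(u)
        k2 = f(u + 0.5 * h * k1)
        k3 = f(u + 0.5 * h * k2)
        k4 = f(u + h * k3)
        u = u + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return u[0]

p = 4  # order of classical RK4
lam_coarse = brentq(lambda l: y_at_one(l, 0.2), 2.0, 4.0)
lam_fine = brentq(lambda l: y_at_one(l, 0.1), 2.0, 4.0)
# Richardson estimate of the error in lam_fine
err_est = abs(lam_coarse - lam_fine) / (2 ** p - 1)
```

Here the root find drives $|y(1)|$ essentially to zero, so the remaining eigenvalue error is the $O(h^p)$ discretization part that the extrapolation formula estimates.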


Rewrite the problem as $$ \text{find }y,\rho\text{ such that }Ly=\rho y\text{ and }y\left(0\right)=y\left(1\right)=0. $$ It is implicit in the notation that $y$ is smooth enough. You impose $y^{\prime}\left(0\right)=1$, though this seems arbitrary; I will ignore this.

You are interested in numerical solutions, so you will eventually "discretize" your operator $L$. Let's call this discretization $L_{N}\in\mathbb{R}^{N\times N}$. Then, your problem becomes $$ \text{find }\mathbf{y},\rho\text{ such that }L_{N}\mathbf{y}=\rho\mathbf{y}. $$ The first and last rows of $L_{N}$ should incorporate the boundary conditions. In this form, the problem is simply one of finding eigenvalues and eigenvectors of $L_{N}$, for which there exist good numerical methods. This might also make your analysis regarding the error easier to handle.
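A minimal sketch of this approach with second-order central differences (the uniform grid, $N$, and a constant coefficient $a$ are my assumptions; here I eliminate the boundary nodes instead of keeping boundary rows, and the eigenvalues of $L_N$ are $\rho = -\lambda^2$):

```python
import numpy as np

def eigenvalues_fd(N, a=0.0):
    """Discretize L y = y'' + a y' on the N interior nodes of (0, 1)
    with central differences; y(0) = y(1) = 0 is built in by dropping
    the boundary nodes.  Returns the lambdas, smallest first."""
    h = 1.0 / (N + 1)
    main = (-2.0 / h**2) * np.ones(N)
    upper = (1.0 / h**2 + a / (2.0 * h)) * np.ones(N - 1)
    lower = (1.0 / h**2 - a / (2.0 * h)) * np.ones(N - 1)
    L = np.diag(main) + np.diag(upper, 1) + np.diag(lower, -1)
    mu = np.linalg.eigvals(L).real   # mu = -lambda^2; real for this L
    return np.sqrt(-np.sort(mu)[::-1])

lams = eigenvalues_fd(200)  # for a = 0, lams[0] approximates pi
```

The matrix route gives all the low eigenvalues at once, with the familiar $O(h^2)$ accuracy of the central-difference stencil.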


Based on your comment stating that you don't want a formal approach, the following engineering approach occurred to me.

I assume that you do not have the actual value of $\lambda$, so we need a way around that. An alternative way to assess the "goodness" of your eigenvalue is the residual of the ODE, that is, how well the discretised ODE is satisfied at each node $x_i$.

To compute it, evaluate the residual at each node, $$ r_i = D^2[y(x_i)] + a(x_i) D[y(x_i)] + \lambda^2 y(x_i), $$ where $D^2$ and $D$ are a spatial discretisation of your derivative operators (which is different from your RK algorithm, since you already have the values of the function $y(x)$ at each node). Then the residual of the whole ODE can be built as $$ r(\lambda) = \frac{1}{N}\sum_i^N \left|r_i\right|, $$ or any similar sum that aggregates the error at each node.
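As a sketch, with central differences for $D$ and $D^2$ on a uniform grid (the grid and a constant $a$ are my assumptions):

```python
import numpy as np

def ode_residual(y, x, lam, a=0.0):
    """Mean absolute residual of y'' + a y' + lam^2 y = 0 at the
    interior nodes of a uniform grid x, using central differences."""
    h = x[1] - x[0]
    d2 = (y[2:] - 2.0 * y[1:-1] + y[:-2]) / h**2  # D^2[y]
    d1 = (y[2:] - y[:-2]) / (2.0 * h)             # D[y]
    r = d2 + a * d1 + lam**2 * y[1:-1]
    return np.mean(np.abs(r))

# Sanity check: the exact eigenfunction sin(pi x) with lam = pi
# should give a residual of the order of the truncation error h^2.
x = np.linspace(0.0, 1.0, 101)
res = ode_residual(np.sin(np.pi * x), x, np.pi)
```

A wrong $\lambda$ inflates the $\lambda^2 y$ term, so the residual grows well above the $O(h^2)$ floor of the stencil.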

Now the algorithm would be:

  1. Define multiple values of $H = [h_0, h_1, \ldots, h_N]$ and $\Delta = [\delta_0, \delta_1, \ldots, \delta_M]$.

  2. Perform a parametric sweep in the previous sets and find for each tuple $(h_i, \delta_j)$ a value of $\lambda$.

(Note that you should have a nested loop, so that for constant values of each parameter you change the other one).

  3. Compute for each $\lambda$ the value of the ODE residual $r(\lambda)$.

From this point onwards, realise that each value of $\lambda$ and the tuple $(h_i, \delta_j)$ are interchangeable, since the latter leads you to the former. Therefore, we have a two-dimensional function $r(h,\delta)$.

You can slice it by planes of constant $h$ or constant $\delta$ to obtain one-dimensional curves (iso-lines). From these plots you can draw conclusions about how the values of $h$ and $\delta$ influence the accuracy of $\lambda$, as they show how the residual behaves for different values of $h$ and $\delta$ (you might see a plateau beyond certain values, or linear behaviour on a log scale).

As a closing remark, the spatial discretisation of the derivative operators $D$ and $D^2$ introduces an error in the residual of the ODE, but since you will be benchmarking all of the $(h, \delta)$ combinations against the same discretisation, I think it is fair to say that it should not have an impact on your conclusions.

Sketch of the desired figure : https://i.stack.imgur.com/bvgnE.png
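Putting the pieces together, the sweep might be sketched as follows (entirely my own stand-in: $a = 0$, fixed-step RK4, and a crude $\lambda$-scan over $[2, 4]$ playing the role of the asker's search; `find_lambda` and its scan step are my own choices):

```python
import numpy as np

def rk4_solution(lam, h):
    """Fixed-step RK4 for y'' + lam^2 y = 0, y(0) = 0, y'(0) = 1."""
    f = lambda u: np.array([u[1], -lam**2 * u[0]])
    n = round(1.0 / h)
    xs = np.linspace(0.0, 1.0, n + 1)
    ys = np.zeros(n + 1)
    u = np.array([0.0, 1.0])
    for i in range(n):
        k1 = f(u); k2 = f(u + 0.5*h*k1); k3 = f(u + 0.5*h*k2); k4 = f(u + h*k3)
        u = u + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
        ys[i + 1] = u[0]
    return xs, ys

def find_lambda(h, delta):
    """Scan lam until |y(1)| < delta -- a crude version of the search."""
    for lam in np.arange(2.0, 4.0, delta / 4.0):
        if abs(rk4_solution(lam, h)[1][-1]) < delta:
            return lam
    return np.nan

def residual(xs, ys, lam):
    """Central-difference residual of y'' + lam^2 y = 0."""
    h = xs[1] - xs[0]
    r = (ys[2:] - 2*ys[1:-1] + ys[:-2]) / h**2 + lam**2 * ys[1:-1]
    return np.mean(np.abs(r))

# Tabulate r(h, delta); slices of this table give the iso-line plots.
for h in (0.1, 0.05, 0.025):
    for delta in (1e-1, 1e-2, 1e-3):
        lam = find_lambda(h, delta)
        xs, ys = rk4_solution(lam, h)
        print(f"h={h:<6} delta={delta:<6} lambda={lam:.4f} "
              f"r={residual(xs, ys, lam):.3e}")
```

The printed table is exactly the $r(h, \delta)$ grid described above; plotting its rows and columns produces the iso-line figure the answer sketches.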