How to numerically find zeros of a system of first-order differential equations (Airy function)?


To numerically approximate the Airy function y = Ai(x), which satisfies the equation $$ y'' - xy = 0, $$ I converted this second-order differential equation into a pair of first-order equations and solved them using the Runge-Kutta method. However, my goal is to find its zeros. Should I go about this by using e.g. Newton's method, or is there some better way?
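For concreteness, here is a minimal sketch of the setup described above: the Airy equation rewritten as a first-order system and integrated with the classical RK4 method. The function names are illustrative, and the initial values Ai(0) and Ai'(0) are taken from standard references.

```python
# The Airy equation y'' = x*y as the first-order system (y0, y1)' = (y1, x*y0).

def airy_rhs(x, y):
    return (y[1], x * y[0])

def rk4_step(f, x, y, h):
    # One classical fourth-order Runge-Kutta step of size h.
    k1 = f(x, y)
    k2 = f(x + h/2, tuple(yi + h/2*ki for yi, ki in zip(y, k1)))
    k3 = f(x + h/2, tuple(yi + h/2*ki for yi, ki in zip(y, k2)))
    k4 = f(x + h, tuple(yi + h*ki for yi, ki in zip(y, k3)))
    return tuple(yi + h/6*(a + 2*b + 2*c + d)
                 for yi, a, b, c, d in zip(y, k1, k2, k3, k4))

AI0 = 0.3550280538878172    # Ai(0)  = 3**(-2/3) / Gamma(2/3)
AIP0 = -0.2588194037928068  # Ai'(0) = -3**(-1/3) / Gamma(1/3)

def integrate(x_end, n_steps=4000):
    """Integrate from x = 0 to x_end; return the grid of (x, (Ai, Ai'))."""
    h = x_end / n_steps
    x, y = 0.0, (AI0, AIP0)
    path = [(x, y)]
    for _ in range(n_steps):
        y = rk4_step(airy_rhs, x, y, h)
        x += h
        path.append((x, y))
    return path
```

Scanning the returned grid for sign changes in the first component already brackets each zero of Ai to within one step size, which is the natural starting point for any root refinement.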


BEST ANSWER

You are engaged in a special case of event location, i.e. given a vector valued initial value problem $$y'(t) = f(t,y(t)), \quad y(t_0) = y_0$$ as well as a real valued event function $g$, you are searching for solutions $t$ of the event equation $$g(y(t)) = 0.$$

Let $v_n$ denote your approximation of $y_n = y(t_n)$. Let us assume that you are using a one-step method with fixed step size $h$, i.e. $$ v_{n+1} = \phi(t_n,v_n,h).$$ The standard Runge-Kutta methods are all of this type. Your position is ideal, because you can track and detect changes in the sign of $g(v_n)$. Specifically, if $g(v_n)$ and $g(v_{n+1})$ have different signs, then $g \circ y$ has a zero in the interval between $t_n$ and $t_{n+1}$. (I am quietly assuming that 1) you can trust the computed sign of your approximations and 2) your approximations have the same sign as their targets.)

Having established such a bracket around a root, you then proceed to find a zero of the auxiliary function $$G(\rho) = g(\phi(t_n,v_n,\rho h)), \quad \rho \in [0,1].$$ By design, $G(0) = g(v_n)$ and $G(1) = g(v_{n+1})$, so you can apply, say, the bisection algorithm to the problem of solving $$G(\rho) = 0$$ for $\rho \in (0,1)$. If speed is of the essence, then a hybrid of the bisection method and the secant method is a good choice: you get the superlinear speed of the secant method while maintaining a shrinking bracket around the root at all times. It is certainly possible to apply Newton's method, but the extra effort involved in computing the derivatives may not be worthwhile, since the driving function $\phi$ need not be simple.
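The procedure above can be sketched as follows, using one classical RK4 step as the one-step map $\phi$ and plain bisection on $G(\rho)$. This is a minimal illustration, not a production event locator; the function names are my own, and the event function is taken to be $g(v) = v_0$ (zeros of the first solution component), which matches the Airy problem in the question.

```python
def rk4_step(f, t, v, h):
    # One-step map phi(t, v, h): a classical RK4 step.
    k1 = f(t, v)
    k2 = f(t + h/2, [vi + h/2*ki for vi, ki in zip(v, k1)])
    k3 = f(t + h/2, [vi + h/2*ki for vi, ki in zip(v, k2)])
    k4 = f(t + h, [vi + h*ki for vi, ki in zip(v, k3)])
    return [vi + h/6*(a + 2*b + 2*c + d)
            for vi, a, b, c, d in zip(v, k1, k2, k3, k4)]

def locate_events(f, g, t0, v0, h, n_steps, tol=1e-12):
    """Fixed-step integration; whenever g changes sign across a step,
    bisect on rho in (0, 1) applied to G(rho) = g(phi(t_n, v_n, rho*h))."""
    events = []
    t, v = t0, v0
    for _ in range(n_steps):
        v_next = rk4_step(f, t, v, h)
        if g(v) * g(v_next) < 0:            # bracket detected: G(0)*G(1) < 0
            lo, hi = 0.0, 1.0
            while hi - lo > tol:
                mid = (lo + hi) / 2
                if g(v) * g(rk4_step(f, t, v, mid*h)) < 0:
                    hi = mid                # root lies in (lo, mid)
                else:
                    lo = mid                # root lies in (mid, hi)
            events.append(t + (lo + hi)/2 * h)
        # continue integrating from (t_{n+1}, v_{n+1}) regardless
        t, v = t + h, v_next
    return events

# Usage on the Airy system y' = (y[1], t*y[0]), integrating toward -10:
f = lambda t, v: [v[1], t * v[0]]
zeros = locate_events(f, lambda v: v[0], 0.0,
                      [0.3550280538878172, -0.2588194037928068], -0.01, 1000)
```

Note that the bisection only ever evaluates $\phi$ from the already-accepted point $(t_n, v_n)$, so the main integration grid is left untouched, exactly as described above.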

Once you have found (all) $\rho^*$ such that $G(\rho^*) = 0$, you record $t^* = t_n + \rho^* h$ and continue integrating the differential equation from the point $(t_{n+1},v_{n+1})$. You will very nearly have $g(y(t^*)) = 0$.

It is possible to obtain error estimates for $t^*$, but that is a question for another day.

ANSWER

Newton's method is a very efficient way of finding zeros: it converges quadratically, so the number of correct digits roughly doubles with every iteration. However, it only converges locally (under some conditions), so you won't know whether there are zeros other than the one you found without additional arguments, such as monotonicity.
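For reference, a minimal sketch of the Newton iteration being recommended; the function names are illustrative, and the terminating condition (step size below a tolerance) is one common choice among several.

```python
def newton(f, fprime, x0, tol=1e-14, max_iter=50):
    """Iterate x <- x - f(x)/f'(x) until the step is smaller than tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Example: the positive root of x^2 - 2, i.e. sqrt(2), from the start x0 = 1
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Note that the result depends on the starting point: a different $x_0$ may converge to a different root, or not converge at all, which is exactly the local-convergence caveat mentioned above.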