How to implement adaptive Gaussian (Kronrod?) quadrature (technicalities)


I am trying to figure out the best way to implement an adaptive quadrature scheme that preferentially uses Gaussian quadrature. Gaussian quadrature doesn't give an easily computable error estimate and doesn't permit node nesting, but some authors suggest doubling the number of nodes and stopping when the relative difference between two subsequent estimates falls below a certain tolerance. This seems to work reasonably well for me. What I actually do, which I'm not sure is entirely legitimate, is start with 5 nodes and keep adding 5 nodes until I meet the accuracy requirement. Doubling seems to "work" a bit less well, i.e. it sometimes fails to converge in cases where adding 5 points at a time did. With both methods, I noticed that the relative difference between successive refinements sometimes grows, which makes me question both. My guess is that without enough points the function isn't sampled well enough, some of its variation is missed, and the algorithm occasionally underestimates the error (which is of course not a true error estimate). In addition, or instead, maybe my function is not well approximated by a polynomial, so that more points can sometimes give a less accurate result (which would explain why it converges when adding 5 points but not when doubling). Is this still a legitimate way to work?
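For concreteness, here is a minimal Python sketch of the add-5-nodes scheme I mean (the names `gauss_increasing`, `rtol`, etc. are just for illustration, and `f` is assumed to vectorize over an array of nodes; my actual code is organized differently):

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    """n-point Gauss-Legendre estimate of the integral of f over [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    xm, xr = 0.5 * (a + b), 0.5 * (b - a)       # affine map [-1, 1] -> [a, b]
    return xr * np.sum(w * f(xm + xr * x))

def gauss_increasing(f, a, b, rtol=1e-6, n0=5, dn=5, nmax=200):
    """Whole-interval Gauss rule, adding `dn` nodes per pass until the
    relative change between successive estimates drops below `rtol`.
    The stopping test is a heuristic, not a true error bound."""
    old = gauss_legendre(f, a, b, n0)
    n = n0 + dn
    while n <= nmax:
        new = gauss_legendre(f, a, b, n)
        if abs(new - old) <= rtol * abs(new):
            return new
        old, n = new, n + dn
    raise RuntimeError("did not converge within nmax nodes")
```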

To be more specific, I am trying to compute a certain mean quantity: $$\overline{f(x)}=\frac{\int_a^b f(x,x_0)\,dx_0}{b-a},$$ where $f(x,x_0)$ is the solution of an integral equation. I solve the integral equation with a variable-step scheme, marching in time and halving or doubling the time steps as needed. Halving time steps of course means I have to meet a per-step error tolerance, which can be considerably smaller than the tolerance of the whole integral. All this means I need an integration method that is as fast and precise as possible, both for the integrals involved in solving the integral equation and for computing the mean. I said my Gaussian method works reasonably well: for the solution of the integral equation I can reach a tolerance of $10^{-3}$, and $10^{-2}$ for the mean. That's not exactly extraordinary; I'd prefer something along the lines of $10^{-6}$. The thing is, I need to compute this mean at several points $x_i$. Some of them work just fine, because the functions involved don't vary too strongly. At other points the functions vary much faster (no oscillations, at worst a growth followed by a decay), and that's where I have problems. So I am wondering, knowing that my error estimates are bad: is it possible that I actually overestimate the error and have trouble converging because of that? Then again, the relative difference should still diminish as I refine. Or maybe the method I use is just too slow.
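The outer mean is just the same quadrature wrapped around the solver; schematically (reusing `gauss_increasing` from the sketch above, with `solve_f` standing in for my integral-equation solver, assumed to return $f(x,x_0)$ for an array of $x_0$ values):

```python
def mean_over_x0(solve_f, x, a, b, rtol=1e-6):
    """Mean of f(x, x0) over x0 in [a, b], i.e. the integral divided by (b - a).
    Each call to `solve_f` is expensive (a full integral-equation solve),
    which is why the number of quadrature nodes matters so much."""
    integral = gauss_increasing(lambda x0: solve_f(x, x0), a, b, rtol=rtol)
    return integral / (b - a)
```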

I was thinking that maybe I should try Gauss-Kronrod quadrature. From what I understand, an adaptive scheme works by first considering the whole interval, then bisecting the subintervals where the error tolerance (the difference between the two quadratures) isn't met. What is less clear to me is how many nodes I should use; would it be possible to change that on the fly? For a lot of the $\overline{f(x_i)}$, 5 points of Gaussian quadrature work just fine, so using 15 points like MATLAB's quadgk would really slow things down. Or maybe I shouldn't divide the interval at all but keep adding points, as I already do for plain Gaussian quadrature, and stop when I meet my error criterion. Or use as few points as possible but divide the interval. On the other hand, that would mean shrinking the per-subinterval error tolerance appreciably...
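To check that I understand the structure, here is the adaptive bisection driver I have in mind (Python, reusing `gauss_legendre` from the first sketch). Since I don't have Kronrod weights at hand, the per-panel error estimate below compares an $n$-point and a $(2n+1)$-point Gauss rule; a true Gauss-Kronrod pair like quadgk's G7-K15 would instead reuse the $n$ Gauss nodes inside the higher-order rule, but the subdivision logic would be the same:

```python
def adaptive_gauss(f, a, b, tol=1e-6, n=5, max_depth=30):
    """Adaptive bisection: accept a panel when the difference between a
    low-order and a higher-order rule is below the panel's share of `tol`,
    otherwise split the panel in two and give each half half the tolerance."""
    def recurse(a, b, tol, depth):
        coarse = gauss_legendre(f, a, b, n)
        fine = gauss_legendre(f, a, b, 2 * n + 1)  # stand-in for the Kronrod extension
        if abs(fine - coarse) <= tol or depth >= max_depth:
            return fine
        m = 0.5 * (a + b)
        return (recurse(a, m, 0.5 * tol, depth + 1)
                + recurse(m, b, 0.5 * tol, depth + 1))
    return recurse(a, b, tol, 0)
```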