What is the fastest converging method which can be used to approximate definite integrals?


What is the fastest-converging method for approximating definite integrals? I have come across various methods, such as finding antiderivatives, Simpson's rule, and the trapezoidal rule. But which method converges the fastest?


There are 2 best solutions below

On BEST ANSWER

It all depends on the class of functions you want to integrate (oscillating, polynomial, ...), so there are several possible answers to this question. Perhaps the simplest case is when you have a positive weight function that lets you introduce a scalar product: $$ \langle f,g\rangle = \int_a^b f(x)g(x)w(x)\,dx $$

If you want to compute $$ \int_a^b f(x)w(x)\,dx \approx \sum_{i=1}^n w_i f(x_i) $$ then Gaussian quadrature rules are optimal, in the sense that an $n$-point rule integrates exactly every polynomial of degree $2n-1$. It is easy to see that we cannot do better: a polynomial of degree $2n-1$ is defined by $2n$ coefficients, and the quadrature rule also has exactly $2n$ free parameters ($n$ weights and $n$ nodes).
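This exactness property is easy to check numerically. A minimal sketch with NumPy's built-in Gauss–Legendre routine (weight $w(x)=1$ on $[-1,1]$); the test polynomial is my own choice of example:

```python
import numpy as np

# Gauss-Legendre quadrature with n nodes is exact for polynomials
# of degree up to 2n - 1 (weight w(x) = 1 on [-1, 1]).
n = 3
nodes, weights = np.polynomial.legendre.leggauss(n)

# A degree-5 (= 2n - 1) polynomial: the integral of x^5 + x^2
# over [-1, 1] is exactly 2/3 (the odd part integrates to 0).
f = lambda x: x**5 + x**2
approx = np.sum(weights * f(nodes))

print(approx)   # agrees with 2/3 to machine precision
```

With only 3 function evaluations the rule reproduces the exact value of a degree-5 integral, which no Newton–Cotes rule on 3 fixed nodes can do.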

In practice $f$ is not a polynomial and we get only an approximate result. An estimate of the error can be obtained by using a Gaussian quadrature with $n$ nodes and then re-evaluating the integral with a Gaussian quadrature of higher order (with $n+1$ nodes, for instance). The error is then estimated by $$ E_n\approx |G_{n+1}-G_n| $$ and the interval is split if the estimated error is too high. New numerical integrations and error estimates are then performed as before on these smaller intervals, and so on.
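A minimal sketch of this adaptive scheme (the function names and the default $n=5$ are my own choices, not a reference implementation):

```python
import numpy as np

def gauss(f, a, b, n):
    """Gauss-Legendre approximation of the integral of f over [a, b] with n nodes."""
    x, w = np.polynomial.legendre.leggauss(n)
    # Map the nodes from [-1, 1] to [a, b] and scale the weights.
    return 0.5 * (b - a) * np.sum(w * f(0.5 * (b - a) * x + 0.5 * (b + a)))

def adaptive_gauss(f, a, b, tol=1e-10, n=5):
    """Bisect [a, b] until |G_{n+1} - G_n| falls below tol on each piece."""
    g_n = gauss(f, a, b, n)
    g_n1 = gauss(f, a, b, n + 1)
    if abs(g_n1 - g_n) < tol:
        return g_n1
    mid = 0.5 * (a + b)
    return (adaptive_gauss(f, a, mid, tol / 2, n)
            + adaptive_gauss(f, mid, b, tol / 2, n))

# sqrt is smooth away from 0 but has an unbounded derivative at 0,
# so the recursion subdivides near the singularity; exact value is 2/3.
result = adaptive_gauss(np.sqrt, 0.0, 1.0)
```

Note how the subdivision concentrates work near $x=0$, where the integrand is hard, and leaves the smooth part to a single high-accuracy panel.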

However, the problem with Gaussian quadrature is that the sequences of nodes $x_i$ for rules of order $n$ and $n+1$ are not nested, hence $f$ must be re-evaluated at all $n+1$ nodes of the higher-order rule.

A cure for this problem is the Gauss-Kronrod rule. It is a variant of Gaussian quadrature that reuses the nodes of the Gaussian rule (hence you do not have to re-evaluate $f$ at all of the nodes of the higher-order rule).
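You can see this machinery in action without implementing it: SciPy's `scipy.integrate.quad` wraps the QUADPACK library, whose adaptive scheme is built on Gauss–Kronrod pairs. A short illustration (the integrand is my own example):

```python
import numpy as np
from scipy import integrate

# quad returns both the value and an error estimate; internally the
# estimate comes from comparing a Gauss rule with its Kronrod extension,
# which shares the Gauss nodes and so only costs the extra Kronrod points.
value, abserr = integrate.quad(np.exp, 0.0, 1.0)

print(value)    # close to e - 1
print(abserr)   # the estimated absolute error
```

For this smooth integrand a single Gauss–Kronrod panel already hits near machine precision, so no subdivision is needed.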

This is typically how some "real" numerical algorithms work.

Another "generic" state-of-the-art method is Clenshaw–Curtis quadrature. The idea here is different: we approximate the function with Chebyshev polynomials. One property of the Chebyshev polynomials $T_n$ is that, among polynomials whose absolute value on the interval $[-1,1]$ is bounded by 1, they have the largest possible leading coefficient. This explains why they are relevant for a wide variety of approximation procedures (for instance, if your error term is polynomial, you want it to be a Chebyshev polynomial, as its absolute value will then be minimal).
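The essence of Clenshaw–Curtis is "interpolate $f$ in a Chebyshev basis, then integrate the interpolant exactly." A close relative of the classical rule can be sketched in a few lines with NumPy (NumPy's `Chebyshev.interpolate` uses Chebyshev points of the first kind, whereas textbook Clenshaw–Curtis uses the second kind, so treat this as illustrative rather than canonical):

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Interpolate f by a Chebyshev series on the default domain [-1, 1],
# then integrate the interpolant exactly via its antiderivative.
f = np.cos
p = Chebyshev.interpolate(f, deg=16)   # degree-16 Chebyshev interpolant
P = p.integ()                          # exact antiderivative of the interpolant
approx = P(1.0) - P(-1.0)

exact = np.sin(1.0) - np.sin(-1.0)     # = 2 sin(1)
print(abs(approx - exact))             # spectral accuracy for smooth f
```

For analytic integrands the error decays geometrically in the degree, which is why Clenshaw–Curtis-type methods are competitive with Gaussian quadrature in practice.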


Update: as a complement, concerning one-dimensional singular integrals that converge in the sense of the Cauchy principal value, I just saw this recent and easily readable arXiv paper, Simple Method for Evaluating Singular Integrals.


As @saulspatz said, there is no one answer to this. There are many interesting numerical integration algorithms, some of which are tailored toward specific types of functions. MATLAB, for instance, uses adaptive quadrature.

If you think about it, the left-hand sum breaks the domain into segments and approximates the area on each segment with a degree-0 polynomial. The trapezoid rule uses a degree-1 polynomial. Simpson's rule fits a degree-2 polynomial (and happens to be exact for polynomials up to degree 3), and Boole's rule fits degree 4 (exact up to degree 5). This can be generalized to whatever degree you want, which is what the Newton-Cotes formulas are.
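The rising interpolation degree shows up directly in the error. A minimal comparison of the first few composite rules on a test integrand of my own choosing ($\int_0^1 e^x\,dx = e-1$):

```python
import numpy as np

# Composite Newton-Cotes rules on n equal subintervals of [0, 1].
f, exact = np.exp, np.e - 1.0
n = 64                                # must be even for Simpson's rule
x = np.linspace(0.0, 1.0, n + 1)
h = 1.0 / n
y = f(x)

left = h * np.sum(y[:-1])                                   # degree-0 pieces
trap = h * (0.5 * y[0] + np.sum(y[1:-1]) + 0.5 * y[-1])     # degree-1 pieces
simp = h / 3 * (y[0] + 4 * np.sum(y[1:-1:2])                # degree-2 pieces
                + 2 * np.sum(y[2:-1:2]) + y[-1])

for name, val in [("left", left), ("trapezoid", trap), ("Simpson", simp)]:
    print(name, abs(val - exact))
```

The errors shrink like $O(h)$, $O(h^2)$, and $O(h^4)$ respectively, so each step up in degree buys orders of magnitude of accuracy for the same number of function evaluations.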