Error in numerical integration technique


I am reading about numerical integration techniques and the error in the approximations found. My notes give an example using the constant rule (I have read elsewhere it is called the rectangular method/rule) where the function is considered to be constant over some interval. Using the Taylor expansion of the function, the error is found to be $O(\Delta x ^2)$, which I get.
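To convince myself of this part, I tried a quick numerical sketch (not from my notes; I am assuming $f(x) = e^x$ on $[0, \Delta x]$, approximated by a single rectangle of height $f(0)$). If the error is $O(\Delta x^2)$, halving $\Delta x$ should roughly quarter it, which is what I see:

```python
import math

def rect_error(dx):
    """Error of the constant (rectangular) rule on [0, dx] for f(x) = e^x.

    The rule approximates the integral by f(0) * dx = 1 * dx;
    the exact value is e^dx - 1.
    """
    exact = math.exp(dx) - 1.0
    approx = 1.0 * dx
    return abs(exact - approx)

# Halving dx should roughly quarter the error, consistent with O(dx^2).
for dx in (0.1, 0.05, 0.025):
    print(dx, rect_error(dx))
```

The successive error ratios come out close to 4, matching the $O(\Delta x^2)$ claim.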

However, the notes then consider breaking the integral up into $N$ sections and applying the constant rule/rectangular method to each of these. With regard to the error in the result, they simply say that

Because the number of increments $N \propto \frac{1}{\Delta x}$, the total error in the estimate is $O(\Delta x)$.

Now I don't know if I'm missing something completely obvious here, but I can't convince myself as to why we get an error of $O(\Delta x)$. I'm trying to motivate this with the following reasoning:

  • The error in using the constant rule on each subinterval is $O(\Delta x' ^2) = O( (\frac{\Delta x }{N})^2)$, where $\Delta x$ is the whole interval in question and $\Delta x' = \frac{\Delta x }{N}$ is the little interval to which we apply the constant rule.

  • Then summing over $N$ subintervals is like multiplying by $N$, giving an error of order $O( \frac{\Delta x ^2 }{N}) = O((\Delta x)(\Delta x'))$, which isn't looking at all promising to me... Any help would be much appreciated!

As an aside, I am also struggling to comprehend how the error can now be of a smaller order in the interval $\Delta x$, or rather, the special significance of some cut-off like the number 1. I agree that, for the series to be convergent, the coefficients must decrease, and that when the interval is small, the higher the order of the error, the smaller the error is (i.e. we think an error of order $\Delta x ^2$ is better than one of order $\Delta x$). But here, when we approximate the integral better, we get an error one order lower, which is supposedly 'better'? The 'special significance of a cut-off' I am referring to is that this seems almost like 'squaring a number greater than 1 makes it bigger, and squaring a number less than 1 makes it smaller', yet I cannot see how, whatever the interval size may be, the error could somehow be smaller by using larger intervals. I think I definitely have a major conceptual confusion somewhere!


Consider an integral $\int_a^b f(x)\; dx$. If you use $N$ intervals of length $\Delta x = (b-a)/N$, the error in each interval is $O((\Delta x)^2)$, i.e. bounded by $K (\Delta x)^2 = K (b-a)^2/N^2$ for some constant $K$. On $N$ intervals, the total error could be as much as $N$ times this, or $K (b-a)^2/N$, and thus $O(\Delta x)$.
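A quick numerical sketch illustrates this bound (this is not a general proof, just a check assuming $f(x) = e^x$ on $[0, 1]$ and the left-endpoint version of the constant rule): doubling $N$, i.e. halving $\Delta x$, roughly halves the total error.

```python
import math

def composite_rect(f, a, b, n):
    """Composite constant (left-endpoint rectangle) rule on [a, b] with n subintervals."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

exact = math.e - 1.0  # integral of e^x over [0, 1]

# Doubling N halves dx; the total error should roughly halve, i.e. O(dx).
for n in (10, 20, 40):
    err = abs(composite_rect(math.exp, 0.0, 1.0, n) - exact)
    print(n, err)
```

Each subinterval still contributes an $O((\Delta x)^2)$ error, but there are $N = (b-a)/\Delta x$ of them, so one power of $\Delta x$ is "spent" on the count of terms, leaving $O(\Delta x)$ overall.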