Why can the error term in the sum definition of an integral be reduced to O(h)?


$$ \lim_{N \to \infty} \left[ \sum_{n = 0}^{N - 1} f(x_n) \, \Delta x + N \, \mathcal{O}(\Delta x^2) \right] = \lim_{N \to \infty} \left[ \sum_{n = 0}^{N - 1} f(x_n) \, \Delta x + \mathcal{O}(\Delta x) \right] = \int_{a}^{b} f(x) \ \text{d}x. $$

Looking at any pictorial representation of integration, I understand that the total error is $N \cdot \mathcal{O}(h^2)$ (writing $h = \Delta x$), but I cannot see how this reduces to $\mathcal{O}(h)$.

On BEST ANSWER

The interval $[a,b]$ is divided into $N$ subintervals of length $h$, so $Nh = b - a$. Hence $N \cdot \mathcal{O}(h^2) = \mathcal{O}(Nh^2) = \mathcal{O}((b-a)h) = \mathcal{O}(h)$, since $b - a$ is a constant.

(More formally, there is some constant $c$ such that the error term is eventually bounded by $cNh^2$, and since $Nh = b - a$ this equals $c(b-a)h$, a constant times $h$.)
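A quick numerical check makes the scaling visible. This sketch (my own illustration, not part of the original answer) computes the left Riemann sum of $f(x) = x^2$ on $[0,1]$ and shows that the error divided by $h$ stays roughly constant, i.e. the error is $\mathcal{O}(h)$, not $\mathcal{O}(h^2)$:

```python
def left_riemann(f, a, b, N):
    """Left Riemann sum with N subintervals of width h = (b - a) / N."""
    h = (b - a) / N
    return sum(f(a + n * h) for n in range(N)) * h

exact = 1.0 / 3.0  # integral of x^2 over [0, 1]
for N in (10, 100, 1000):
    h = 1.0 / N
    err = abs(left_riemann(lambda x: x * x, 0.0, 1.0, N) - exact)
    # err / h should hover around a constant as h shrinks,
    # confirming that the N * O(h^2) terms sum to O(h).
    print(f"N={N:5d}  h={h:.4f}  error={err:.6f}  error/h={err / h:.4f}")
```

Running this, the `error/h` column settles near a fixed value (about $0.5$ for this $f$), while the error itself shrinks by a factor of 10 each time $h$ does.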