Understanding max error from Riemann sums.


I'm trying to understand how the maximum error when using Riemann sums is equal to the difference between the overestimate and underestimate. I'm presented with the following equation:

\begin{align} |\text{overestimate} - \text{underestimate}| &= \left|\sum_{i=1}^{n} f(x_{i})\,\Delta x - \sum_{i=0}^{n-1} f(x_{i})\,\Delta x\right| \\ &= |f(x_{n}) - f(x_{0})|\,\Delta x \end{align}

I get that I can factor out the $\Delta x$ from both summations, but how is it possible that the difference of the two sums reduces to just $f(x_{n}) - f(x_{0})$? This equation assumes that I have a fixed $n$.
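A quick numerical check illustrates the identity the question asks about. The sketch below uses a hypothetical choice of $f(x) = x^2$ on $[0, 1]$ with $n = 10$ (none of these specifics come from the question) and compares the gap between the right and left Riemann sums against $|f(x_n) - f(x_0)|\,\Delta x$:

```python
# Hypothetical example: f(x) = x^2 on [0, 1] with a fixed n = 10.
def f(x):
    return x * x

a, b, n = 0.0, 1.0, 10
dx = (b - a) / n
xs = [a + i * dx for i in range(n + 1)]  # partition points x_0, ..., x_n

# Right-endpoint sum (overestimate for increasing f) uses i = 1..n;
# left-endpoint sum (underestimate) uses i = 0..n-1.
right_sum = sum(f(xs[i]) for i in range(1, n + 1)) * dx
left_sum = sum(f(xs[i]) for i in range(n)) * dx

gap = abs(right_sum - left_sum)
predicted = abs(f(xs[n]) - f(xs[0])) * dx
print(gap, predicted)  # the two agree (up to floating-point rounding)
```

Here both values come out to $(1^2 - 0^2) \cdot 0.1 = 0.1$, matching the boxed equation.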


There is 1 best solution below

On BEST ANSWER

If you factor out the $\Delta x$, the difference of the two sums becomes $\sum_{i=1}^{n} \left[ f(x_{i}) - f(x_{i-1}) \right]$, which telescopes: every intermediate term cancels, leaving only $f(x_{n}) - f(x_{0})$. Writing out a small case makes the cancellation visible, e.g. for $n = 3$: $$[f(x_1)-f(x_0)] + [f(x_2)-f(x_1)] + [f(x_3)-f(x_2)] = f(x_3) - f(x_0).$$
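The telescoping cancellation can also be checked numerically. The toy values below for $f(x_0), \dots, f(x_4)$ are arbitrary (not from the question); summing the consecutive differences leaves only the endpoint values:

```python
# Telescoping check with arbitrary (hypothetical) values of f at the
# partition points x_0, ..., x_4.
fx = [3.0, 1.5, 4.0, 2.5, 6.0]

# Sum of consecutive differences f(x_i) - f(x_{i-1}) for i = 1..n.
telescoped = sum(fx[i] - fx[i - 1] for i in range(1, len(fx)))

print(telescoped, fx[-1] - fx[0])  # → 3.0 3.0
```

Every intermediate $f(x_i)$ appears once with a plus sign and once with a minus sign, so only $f(x_n) - f(x_0)$ survives, regardless of what $f$ is.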