As an engineer, I'm wondering about the treatment of errors when dealing with limits. Take integrals, for instance, which are traditionally introduced by dividing the area between $x_1$ and $x_2$ above (or under) a curve $f(x)$ into $n$ bars of width $w=\frac{x_2-x_1}{n}$ and then applying $\lim_{n\to \infty}$ (cf. the usual bar-chart visualization of Riemann sums).
My question is: How do we know that neglecting the error between the area of the bars and the area under the curve is actually permissible? I do understand that the deviation for a single bar approaches zero as $w$ goes to $0$, but on the other hand $n$ goes to $\infty$, so to me it is a "$0\times\infty$" situation, and the result does not seem obvious. Is there a theorem dealing with this issue? (Note that integrals are just an example; the same principle is applied in many other problems, e.g. with stress tensors in mechanics.)
It is an easy matter to estimate the error. Assuming $f$ is increasing, the true area of every slice lies between the areas of two rectangles of width $w$ and heights $f(x)$ and $f(x+w)$ respectively.
For example, with $f(x)=\ln(x)$, integrating between $x=2$ and $x=2+nw=3$,
$$\sum_{k=0}^{n-1}w\ln\left(2+kw\right)<I<\sum_{k=1}^{n}w\ln\left(2+kw\right),$$ and after cancelling the terms common to both sums, the width of the interval of uncertainty is
$$w\bigl(\ln(2+nw)-\ln(2)\bigr)=w\bigl(\ln 3-\ln 2\bigr)=\frac{\ln(1.5)}n.$$
(This is just the difference between the areas of the first and last slices.) Hence the error clearly vanishes as $n\to\infty$.
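As a quick numerical check, here is a small sketch (function and variable names are my own) that computes the lower and upper Riemann sums for $\ln x$ on $[2,3]$ and verifies that their gap is exactly $\ln(1.5)/n$, bracketing the exact value $\int_2^3 \ln x\,dx = 3\ln 3 - 2\ln 2 - 1$:

```python
import math

def riemann_bounds(f, a, b, n):
    """Lower and upper Riemann sums for an increasing f on [a, b]."""
    w = (b - a) / n
    lower = sum(w * f(a + k * w) for k in range(n))         # left endpoints
    upper = sum(w * f(a + k * w) for k in range(1, n + 1))  # right endpoints
    return lower, upper

# Exact value via the antiderivative x*ln(x) - x evaluated from 2 to 3.
exact = 3 * math.log(3) - 2 * math.log(2) - 1

for n in (10, 100, 1000):
    lo, hi = riemann_bounds(math.log, 2, 3, n)
    # The bracket always contains the exact value, and its width is ln(1.5)/n.
    print(n, lo <= exact <= hi, hi - lo, math.log(1.5) / n)
```

Running this shows the bracket width shrinking in proportion to $1/n$, matching the estimate above.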