In a book I'm really enjoying, The Irrationals by Julian Havil, in Chapter 2 he mentions that John Wallis often used Cavalieri's Principle, which he explains.
However, he then states, without any justification, that "in modern terms" it can be written in two dimensions as $$\int_0^1 f(x)\,dx = \lim_{N\to\infty}\frac{\sum_{r=0}^N f(r)}{M_N(N+1)},$$ where $M_N = \max\{ f(x) \mid x\in [0, N]\}$ and $f$ is a positive, real-valued continuous function defined on the positive real axis.
I can see that this is an incredibly useful idea, but why on earth is it true?! Why would the values of $f$ attained arbitrarily far away from the interval $[0,1]$ (e.g. in the sum $\sum_{r=0}^N f(r)$ as $N$ grows large) have any effect on the area under $f(x)$ on that interval?
Isky, the equation, as others have said, is simply wrong; there's no way of getting around it (e.g. test it for (absolute) convergence). It is most likely a typo, but having read a review of the book:
'The book has a fair number of proofreading errors. Some sentences don't make any sense. There are some formatting errors in the equations. There are some occasional minor errors in the proofs, but they are generally easy to correct. The proof that pi is transcendental contains a serious error, and it took some work for me to modify it to something correct.'
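A quick numerical sanity check makes the failure concrete (a sketch: $M_N$ is approximated by the maximum over the integer sample points $0,\dots,N$, which is exact for monotone $f$). For $f(x)=x^2$ the ratio does converge to $\int_0^1 x^2\,dx = 1/3$, but for the decaying function $f(x)=e^{-x}$ the numerator converges (a geometric series) while the denominator grows like $N$, so the ratio tends to $0$ instead of $1 - 1/e \approx 0.632$:

```python
import math

def wallis_ratio(f, N):
    """sum_{r=0}^N f(r) / (M_N * (N+1)), with M_N approximated by the
    maximum of f over the integer points 0..N (exact for monotone f)."""
    S = sum(f(r) for r in range(N + 1))
    M = max(f(r) for r in range(N + 1))
    return S / (M * (N + 1))

N = 10_000

# f(x) = x^2: ratio -> 1/3, matching the integral over [0, 1]
print(wallis_ratio(lambda x: x * x, N))

# f(x) = e^{-x}: ratio -> 0, NOT the integral 1 - 1/e over [0, 1]
print(wallis_ratio(lambda x: math.exp(-x), N))
```

So the stated identity holds for the power functions Wallis actually worked with, but not for an arbitrary positive continuous $f$.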
Daniele