Recall the definition of big O notation:
$f(x) \in O(g(x))$ if there exist a positive real $c$ and an $x_0$ such that $|f(x)| \leq c\,g(x)$ for all $x \geq x_0$.
My question has to do with using integrals to approximate big O.
For example what is the big-O of:
$$T(n) = \sum_{k=0}^{n} k$$
We can compute that directly and the result is $n(n+1)/2$ so it's $O(n^2)$.
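As a quick numerical sanity check (a sketch, not part of the argument), the closed form and a witness constant $c = 1$ can be confirmed directly:

```python
# Check T(n) = sum_{k=0}^{n} k against the closed form n(n+1)/2,
# and against the O(n^2) bound with witness constant c = 1.
for n in [1, 10, 100, 1000]:
    T = sum(range(n + 1))
    assert T == n * (n + 1) // 2   # exact closed form
    assert T <= 1 * n ** 2         # |T(n)| <= c * n^2 with c = 1, for n >= 1
```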
But for something like the following $S(n)$ we don't have a clean way to get a closed form of the sum:
$$S(n) = \sum_{k=0}^{n} \sqrt{k}$$
But we can approximate it with an integral
$$\int \sqrt{k}~dk = (2/3)k^{3/2} + c_0$$
and here we say that $S(n) \in O(n^{3/2})$.
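Empirically (not a proof), $S(n)$ does stay within a constant multiple of $n^{3/2}$, and the ratio $S(n)/n^{3/2}$ approaches the $2/3$ coming from the antiderivative:

```python
import math

# Empirical check: S(n) = sum_{k=0}^{n} sqrt(k) is bounded by c * n^{3/2}
# (here c = 1 works), and S(n) / n^{3/2} tends toward 2/3.
for n in [10, 100, 1000, 10000]:
    S = sum(math.sqrt(k) for k in range(n + 1))
    assert S <= n ** 1.5           # witness constant c = 1
    print(n, S / n ** 1.5)         # ratio drifts toward 2/3
```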
We could have done the same thing with $T(n)$, too:
$$\int k~dk = (1/2)k^2 + c_0$$
which suggests $O(n^2)$, matching the direct computation above.
But what's the proof that we can always get the big-O by taking an integral, rather than computing the exact sum?
More specifically:
How do I know that it is even valid to use an integral rather than the explicit sum?
How do I know the result when using an integral will have the same big-O as the result when performing the sum?
Draw the sum $\sum_i c_i$ as a collection of rectangles, with the $i$th rectangle having base $[i, i+1]$ on the $x$-axis and height $c_i$. Compare the total area of these rectangles with the integral of the associated function from $0$ to $n+1$. By sliding the rectangles left or right by one unit, you can typically establish both an upper and a lower bound.
For this to work, the following are sufficient:

- The numbers $c_i$ are monotone increasing or decreasing (which is typically true), or at least have this property for all $i > N$ for some fixed $N$.
- The function $f$ whose values at the integers are the $c_i$ (in your examples, $f(x) = x$ and $f(x) = \sqrt{x}$) is monotone, so that for $i < x < i+1$ you have $c_i < f(x) < c_{i+1}$. (Or the less-thans might be greater-thans; the key thing is that it's one or the other for all $i$.)
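Concretely, for an increasing $f$ this rectangle argument gives the two-sided bound $\int_0^n f(x)\,dx \le \sum_{k=0}^n f(k) \le \int_0^{n+1} f(x)\,dx$, so the sum and the integral share the same big-O. Here is a numerical spot-check of that sandwich for $f(x) = \sqrt{x}$, using the antiderivative $(2/3)x^{3/2}$ from the question:

```python
import math

# Sandwich check for the increasing function f(x) = sqrt(x):
#   integral_0^n sqrt(x) dx  <=  sum_{k=0}^{n} sqrt(k)  <=  integral_0^{n+1} sqrt(x) dx
def F(x):
    """Antiderivative of sqrt(x), with F(0) = 0."""
    return (2.0 / 3.0) * x ** 1.5

for n in range(1, 500):
    S = sum(math.sqrt(k) for k in range(n + 1))
    assert F(n) <= S <= F(n + 1)
```

Both ends of the sandwich are $\Theta(n^{3/2})$, which is why the integral approximation recovers the correct big-O.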