This problem is an example in a Discrete Math textbook. How can big-O notation be used to estimate the sum of the first n positive integers?
Solution: Because each of the integers in the sum of the first n positive integers does not exceed n, it follows that $$1+2+\cdots+n \le n+n+\cdots+n=n^2$$ From this inequality it follows that $1+2+\cdots+n$ is $O(n^2)$.
I do not understand how the sum of $n$'s equals $n^2$. Can anyone explain the step the book used to get this?
There are $n$ terms in that sum, each equal to $n$. Adding $n$ copies of $n$ gives $n \cdot n$ by the definition of multiplication: $$\underbrace{n+n+\cdots+n}_{n \text{ terms}} = n \cdot n = n^2.$$
Also, big-O notation is used for bounding, not estimation. Because it discards constant factors and asserts only an asymptotic upper bound, it does not pin down the actual value of the sum, so it is not a useful estimator.
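If it helps to see the inequality concretely, here is a small Python sketch (the function name `sum_first_n` is just illustrative) that checks $1+2+\cdots+n \le n^2$ for a range of values of $n$:

```python
def sum_first_n(n: int) -> int:
    """Sum of the first n positive integers, computed directly."""
    return sum(range(1, n + 1))

# Each term 1, 2, ..., n is at most n, so the sum of n terms
# is at most n * n = n^2. Check this for n = 1 through 100.
for n in range(1, 101):
    assert sum_first_n(n) <= n * n
```

Of course, a finite check proves nothing by itself; the textbook's argument (each term is at most $n$, and there are $n$ terms) is what establishes the bound for every $n$.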