I'm reading a book on number theory at the moment that assumes familiarity with big-Oh notation... and while I think I do understand the notation, I cannot understand the point of it.
For instance, let $\tau$ be the divisor function; then the book is able to give an approximation for a sum of $\tau(i)$ as $\sum_{i=1}^{n}\tau(i) = n\ln(n) + (2\gamma - 1)n + O(n^{\frac{1}{2}})$, with $\gamma$ being Euler's constant.
My understanding of this is that the absolute value of the difference between the actual sum and the approximation $n\ln(n) + (2\gamma - 1)n$ cannot exceed $kn^{\frac{1}{2}}$, where $k$ is some constant (at least for all sufficiently large $n$). Is this correct?
If so, what is the point? In practice $k$ could take any positive value, so one is not actually given any information about the amount of error involved. There doesn't appear to be any way to find the exact value of this $k$, because just comparing one answer to the expected answer obviously wouldn't do it.
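To make my confusion concrete, here is a quick brute-force check I tried myself (the code and the numerical value of $\gamma$ are my own, not from the book); it computes the actual sum, the approximation, and the error scaled by $n^{\frac{1}{2}}$:

```python
# Compare the actual divisor sum with the approximation
# n*ln(n) + (2*gamma - 1)*n, and look at the error divided by sqrt(n).

from math import log, sqrt

GAMMA = 0.5772156649015329  # Euler's constant (Euler-Mascheroni)

def divisor_sum(n):
    """Sum of tau(i) for i = 1..n, computed via the identity
    sum_{i<=n} tau(i) = sum_{d=1}^{n} floor(n/d)."""
    return sum(n // d for d in range(1, n + 1))

for n in (10**3, 10**4, 10**5, 10**6):
    actual = divisor_sum(n)
    approx = n * log(n) + (2 * GAMMA - 1) * n
    error = abs(actual - approx)
    print(f"n={n:>8}  error={error:10.1f}  error/sqrt(n)={error / sqrt(n):6.3f}")
```

The scaled error does seem to stay small in experiments like this, but that still doesn't tell me what $k$ actually is.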
So what is the point of big Oh notation? What does one actually gain by expressing a function as above?
For context, I've only been introduced to first-year undergraduate material (first-year UK analysis classes).
Edit: If one could find a bound for $k$, say $k = x$, then why not simply express it as $\text{SUM} = \text{APPROXIMATION} + kn^{\frac{1}{2}}$ and note that $k < x$?
The $O$-notation doesn't mean anything for a fixed $n$, say $n:=1000$. But it means something in the limiting process $n\to\infty$.
We have this $\tau$-sum which we try to understand, or estimate. And there is the simple and exact term $g(n):=n(\log n+2\gamma -1)$ that we understand. Saying that the difference between this as yet mysterious sum and the term $g(n)$ is $O(n^{1/2})$ as $n\to\infty$ is a formidable result. A priori this difference ("error") could be as large as $e^n$ as $n\to\infty$, but now we are told that it is essentially (by orders of magnitude) smaller than the size of the explicitly written-down approximation $g(n)$.
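To spell out the "order of magnitude" comparison: the guaranteed error is at most $kn^{1/2}$ for some constant $k$, while the main term grows like $n\log n$, so
$$\frac{kn^{1/2}}{n\log n}=\frac{k}{n^{1/2}\log n}\;\longrightarrow\;0\qquad(n\to\infty).$$
Whatever the unknown $k$ is, the relative error tends to $0$; that is exactly the information the $O(n^{1/2})$ term carries.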