Two days ago I felt very uncomfortable with Big O notation. I've already asked two questions:
- Why to calculate "Big O" if we can just calculate number of steps?
- The main idea behind Big O notation
And now almost everything has become clear. But there are a few things that I still can't understand:
Suppose we have an algorithm that runs in $1000n$ steps. Why do people say that the coefficient $1000$ becomes insignificant when $n$ gets really large (and that therefore we can throw it away)? This really confuses me, because no matter how large $n$ is, $1000n$ is still $1000$ times bigger than $n$, and that seems very significant (in my head). Any examples of why it is considered insignificant as $n$ tends to infinity would be appreciated.
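To make the "insignificant constant" intuition concrete, here is a small Python sketch (the functions `f` and `g` are made up for illustration): a linear cost with a huge constant, $1000n$, is eventually dwarfed by a quadratic cost with constant $1$, $n^2$. The constant only moves the crossover point; it cannot change which growth class wins in the end.

```python
# f(n) = 1000*n : linear, but with a large constant factor
# g(n) = n*n    : quadratic, with constant factor 1
def f(n):
    return 1000 * n

def g(n):
    return n * n

for n in [10, 1_000, 10_000, 1_000_000]:
    # Past the crossover at n = 1000, g(n) overtakes f(n) for good.
    print(f"n={n:>9}  1000n={f(n):>13}  n^2={g(n):>13}  n^2 larger? {g(n) > f(n)}")
```

So "insignificant" is a statement about comparing growth classes ($n$ vs $n^2$), not about comparing $1000n$ with $n$ itself, where the factor of course never goes away.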
Why is Big O said to estimate the running time in the worst case? Given a running time of $O(n)$, how is that worst-case behavior? I mean, in this case we think that our algorithm is no slower than $n$, right? But in reality the actual running time could be $1000n$, which is indeed slower than $n$.
According to the definition, Big O gives us a scaled upper bound on $f$ as $n \to +\infty$, where $f$ is our function of time. But how do we interpret it? I mean, given an algorithm running in $O(n)$, we will never be able to calculate the actual number of steps it takes. We just know that if we double the size of the input, we double the computation time as well, right? But if that $O(n)$ algorithm really takes $1000n$ steps, then we would also need to multiply the size of the input by $1000$ to be able to visualise how it grows, because $1000n$ and $n$ have very different slopes. Thus, in this case, if you just double the computation time for the doubled input size, you're going to get the wrong idea about how the running time grows, right? So how then do you visualise its growth rate?
I want to add that I know the definition of Big O and how to calculate it, so there is no need to explain that. I've also already googled all of these questions tons of times, with no luck. I'm learning calculus at the moment, so I hope I've asked this question in the right place. Thank you in advance!
You are right: $O$ notation only tells you how big $f$ is up to a constant factor. If you want something more accurate than that, do not use $O$. The reason $O$ notation is so widely known and widely used is that it has been found useful over the years. Of course, in some settings other estimates may be more useful than $O$.
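To see "the same $O$, different constant" in action, here is a contrived Python sketch (both functions and their operation counters are invented for illustration): two routines that sum a list, one in a single pass and one that wastefully makes three passes. Both are $O(n)$, and both double their work when $n$ doubles, yet their hidden constants differ by a factor of $3$, which $O$ notation deliberately ignores.

```python
def sum_once(xs):
    # One pass: roughly n additions.
    ops = 0
    total = 0
    for x in xs:
        total += x
        ops += 1
    return total, ops

def sum_thrice(xs):
    # Three redundant passes: roughly 3n additions, still O(n).
    ops = 0
    total = 0
    for _ in range(3):
        t = 0
        for x in xs:
            t += x
            ops += 1
        total = t
    return total, ops

xs = list(range(1000))
_, a = sum_once(xs)
_, b = sum_thrice(xs)
print(a, b)  # same asymptotic class, constants 1 vs 3
```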