I came across a question asking the following:
Why is the O-notation for a constant running time always given as O(1)?
I have been thinking about it but I can't make sense of it. Could anyone shed any light on the question at hand?
Intuitively, you can think of "$f(n)$ is $O(g(n))$" as meaning "$f$ grows (asymptotically) at most as fast as $g$". A constant function certainly grows at most as fast as the constant function $g(n) = 1$; thus if $f$ is constant, $f$ is $O(1)$.
Formally, "$f(n)$ is $O(g(n))$" means there exist constants $c > 0$ and $N > 0$ such that $|f(n)| \le c\,|g(n)|$ whenever $n \ge N$. If $f(n) = k$ for all $n$, take $c = \max(|k|, 1)$ (the $\max$ handles $k = 0$, since $c$ must be positive); then $|f(n)| = |k| \le c \cdot |1|$ for every $n$, so $f$ is $O(1)$.
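As a concrete instance of the definition (using an arbitrary constant, here $7$, for illustration):

```latex
% Worked example: f(n) = 7 is O(1) with g(n) = 1.
% Choose the witnesses c = 7 and N = 1; then for all n >= N:
|f(n)| = 7 \le 7 \cdot |1| = c\,|g(n)|,
% so the defining inequality holds and f is O(1).
```

The same choice of witnesses works for any nonzero constant $k$ (with $c = |k|$), which is why every constant running time is written as $O(1)$.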