Questions about Big-O notation


I understand the formal definition of Big-O: $f(n)$ is $O(g(n))$ if there exist constants $N$ and $c$ such that $f(n) \leq c\cdot g(n)$ for all $n>N$. However, the problem is that, by this definition, $2n+1$ is $O(n^2)$, even though it is more precise to say that it is $O(n)$.
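(To make that concrete, the witnessing constants can be written out explicitly, e.g. with $c = 3$ and $N = 1$:)

```latex
% 2n+1 is O(n^2): take c = 3, N = 1.
% For n > 1 we have 1 \le n, hence
2n + 1 \;\le\; 2n + n \;=\; 3n \;\le\; 3n^2 \qquad (n > 1).
```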

So, to resolve this, is it possible to modify the definition as follows: $f(n)$ is precisely $O(g(n))$ if there exist $N$ and $k$ such that $k^{-1} g(n) \leq f(n) \leq k\cdot g(n)$ for all $n>N$? Are there any possible issues with this definition?
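(As a sanity check on the proposed definition, not part of the original question: $2n+1$ does satisfy the two-sided condition with $g(n) = n$, taking $k = 3$ and $N = 1$, and fails it for $g(n) = n^2$ because no lower bound $k^{-1} n^2 \leq 2n+1$ can hold for all large $n$. This two-sided condition is, up to allowing separate constants for the two bounds, the standard Big-Theta notation $f(n) = \Theta(g(n))$.)

```latex
% Proposed definition holds for g(n) = n with k = 3, N = 1:
% Upper bound:  2n + 1 \le 3n          for n > 1
% Lower bound:  2n + 1 \ge 2n \ge n/3  for n > 1
\tfrac{1}{3}\,n \;\le\; 2n + 1 \;\le\; 3n \qquad (n > 1).
```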


There is 1 answer below.

Best Answer

Big-O notation lets you state a (possibly loose) upper bound on a function's growth. That makes it easy to prove results quickly, and it also lets you group functions together and draw simple demarcations between classes of functions.

This is what lets you do things like note the divide between polynomially-bounded functions (i.e. ones that are $O(x^n)$ for some $n \in \mathbb{N}$) and those that aren't.
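(For example, a standard fact sketched here to illustrate that divide: $2^x$ sits on the other side of it, since exponentials eventually dominate every fixed polynomial.)

```latex
% For any fixed n \in \mathbb{N}:
\lim_{x \to \infty} \frac{2^x}{x^n} = \infty,
% so no constants c, N give 2^x \le c\,x^n for all x > N;
% that is, 2^x is not O(x^n) for any n.
```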

And often, that's all you need. You're not trying to put tight asymptotic bounds on the function; you're just trying to get enough of an idea of its behaviour to work with in some fashion. Sometimes establishing a stricter bound takes a lot of extra work, and that work isn't always worth the effort.