When is the $O(x)$ of $f(x)$ "useful"?


As I was reading Justin Abrahms' article "Understanding the formal definition of Big-O" to better understand Big-O notation, I came across something at the end that caught my attention:

You might notice that this definition means that the $O(45n)$ function is also $O(n^3)$ because it will also never cross. This is true, but not particularly useful.

(emphasis mine)

This fits the definition given in Rosen's "Discrete Mathematics and Its Applications".

So then, at which point does Big-O become useful (or not)? It has something to do with the witnesses you choose, right?


There are 4 best solutions below

BEST ANSWER

At the risk of being too vague...

In practice you often have a large number of error terms, like

$$ f(x) = g(x) + \epsilon_1(x) + \epsilon_2(x) + \cdots + \epsilon_n(x), $$

where $g(x)$ is a useful approximation to $f(x)$ and each $\epsilon_k(x)$ is small by comparison. Say you're interested in how good the approximation $f(x) \approx g(x)$ is.

Rather than spending lots of effort estimating each $\epsilon_k(x)$ precisely, you can quickly estimate them each roughly, and you'll often end up with different big-O bounds on each of them.

Note that it might be EXTREMELY difficult to estimate each of them precisely. Further, it might be the case that finding reasonable big-O bounds is orders of magnitude easier than finding their respective big-$\Theta$ classes. Another possibility is that making a distinction between $O(\cdots)$ and $\Theta(\cdots)$ just doesn't make a difference for your particular purpose.

Then you use the "not particularly useful" fact you're asking about to combine all of these big-O bounds into one big-O, the biggest one, and you end up with a simple expression like

$$f(x) = g(x) + O(h(x)).$$
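As a concrete (hypothetical) instance of this pattern: suppose the rough estimates give

$$ \epsilon_1(x) = O(\sqrt{x}), \qquad \epsilon_2(x) = O(\log x), \qquad \epsilon_3(x) = O(1). $$

Since $\log x = O(\sqrt{x})$ and $1 = O(\sqrt{x})$, the "not particularly useful" fact lets the three bounds collapse into the largest one:

$$ f(x) = g(x) + O(\sqrt{x}). $$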


Yes, it has something to do with the witnesses.

The usefulness of a big-O analysis is to show that something doesn't grow too fast, often so that one can justify using a particular algorithm for a computation. So you want your witness to grow as slowly as you can manage, as long as it still grows at least as fast as the function you care about. So $$ 1000x^2 + 2000\log(x) = O(e^x) $$ is true but useless, while $$ 1000x^2 + 2000\log(x) = O(x^2) $$ is informative.
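To make the witness idea concrete, here is a quick numeric sanity check; the constants $C = 1001$ and $x_0 = 100$ are witnesses chosen for illustration, not the only possible ones:

```python
import math

def f(x):
    return 1000 * x**2 + 2000 * math.log(x)

# Witnesses C = 1001, x0 = 100 for the informative bound f(x) = O(x^2):
# for every x >= x0 we have f(x) <= C * x^2.
C, x0 = 1001, 100
assert all(f(x) <= C * x**2 for x in range(x0, 10_000))

# The bound f(x) = O(e^x) also holds (with witnesses C = 1, x0 = 20),
# but it is uselessly loose: the ratio f(x) / e^x shrinks toward 0.
assert all(f(x) <= math.exp(x) for x in range(20, 200))
assert f(200) / math.exp(200) < 1e-60
```

Checking over a finite range is of course not a proof, but it is a handy way to test whether a candidate pair of witnesses is plausible before proving the inequality.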


The situation is similar to the concept of upper bounds of a set of numbers. Any number greater than an upper bound is also an upper bound, but in a complete number system there is a unique least upper bound. With big-O upper bounds the situation is similar, except that in general there is no least big-O bound; it is still useful to have one that is as small as possible, taking into account asymptotic equivalence and little-o bounds.


You might be interested in $\Theta$ (big Theta): $f(n)=\Theta(g(n))$ if there exist constants $n_0$ and $k_1, k_2 > 0$ such that $n>n_0$ implies $$ k_1g(n)\leq f(n)\leq k_2g(n). $$ Thus $f(n)=\Theta(g(n))$ if $f$ is bounded asymptotically both below and above by $g$. This gives much more information than big-O. For example, $x=O(x^3)$ but $x\neq \Theta(x^3)$.
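A small numeric sketch of the difference; the witnesses $k_1 = 3$, $k_2 = 4$ are illustrative choices, and a finite check is not a proof:

```python
# Check Theta witnesses over a finite range (illustrative, not a proof).
def theta_witnesses_hold(f, g, k1, k2, n0, n_max=10_000):
    return all(k1 * g(n) <= f(n) <= k2 * g(n) for n in range(n0, n_max))

f = lambda n: 3 * n**2 + n                      # f(n) = Theta(n^2)
assert theta_witnesses_hold(f, lambda n: n**2, k1=3, k2=4, n0=1)

# x = O(x^3): the upper half of the sandwich holds ...
assert all(n <= n**3 for n in range(1, 10_000))
# ... but x != Theta(x^3): any fixed k1 > 0 fails the lower half,
# since k1 * n^3 eventually exceeds n.
assert any(0.001 * n**3 > n for n in range(1, 10_000))
```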