What's the difference between "worst/best case big-O()" and omega()/theta()?


In formal discrete math and computer science we talk about "big-θ," "big-O," and "big-Ω" notation, which give tight, upper, and lower bounds (respectively) on the asymptotic growth of a function, such as an algorithm's running time as its input size grows.

I have never seen θ() or Ω() referred to outside of an academic context, though. Usually texts geared towards application rather than theory refer to "worst-case big-O", "best-case big-O", and "average/amortized big-O," such as these Wikipedia articles.

The two sets of terms seem like equivalent concepts to me. Am I misunderstanding, or are they just two different ways of referring to the same thing?

Accepted answer:

There are two different concepts at play here. On one hand, you have three different ways of giving bounds on any given function: tight bounds ($\Theta$), upper bounds ($O$), and lower bounds ($\Omega$).
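For reference, the standard definitions (stated here in their usual form; the answer itself does not spell them out):

$$f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 \text{ such that } f(n) \le c\,g(n) \text{ for all } n \ge n_0,$$

$$f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 \text{ such that } f(n) \ge c\,g(n) \text{ for all } n \ge n_0,$$

$$f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n)).$$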

On the other hand, you have three different ways of defining the function for the running time of an algorithm: a function that gives you the worst-case running time, a function that gives you the best-case running time, and a function that gives you the average running time.

In practice, as you say, we are mostly interested in upper bounds (though tight bounds are even better if we can get them) and in worst-case running times. However, nothing prevents us from speaking about, say, a lower bound on the best-case running time.
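To make the two-axis distinction concrete, here is a small sketch (linear search is my example, not one from the thread) that counts comparisons, so you can see the best-case and worst-case running times as separate functions, each of which could then be bounded with $O$, $\Omega$, or $\Theta$:

```python
def linear_search(xs, target):
    """Return (index, comparisons); counting comparisons exposes the running time."""
    comparisons = 0
    for i, x in enumerate(xs):
        comparisons += 1
        if x == target:
            return i, comparisons
    return -1, comparisons

data = list(range(1000))

# Best case: the target is the first element, so only one comparison is made.
# The best-case running time is Theta(1).
_, best = linear_search(data, 0)

# Worst case: the target is absent, so every element is compared.
# The worst-case running time is Theta(n).
_, worst = linear_search(data, -1)
```

Here `best` is 1 and `worst` is `len(data)`: the worst-case running time is $\Theta(n)$ (and hence also $O(n)$ and $\Omega(n)$), while the best-case running time is $\Theta(1)$, illustrating that the choice of case (best/worst/average) is independent of the choice of bound ($O$/$\Omega$/$\Theta$).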