Assume that you are designing a sorting algorithm that uses a comparison of $x$ and $y$ with 3 possible results:
• $x < y$
• $x = y$
• $x > y$
Can your algorithm do better than $\Omega(n \log n)$?
For simplicity I will assume that we are only talking about the worst-case scenario. In practice, though, that shouldn't be taken for granted as the sole indicator of a good algorithm: there are also the best-case and average-case scenarios (together these $3$ make up the algorithm's time complexity), as well as its space complexity (how much memory the algorithm uses), and so on.
For comparison sorts (Merge Sort, Quicksort, Shell Sort, Insertion Sort, Heapsort, etc.) the answer is no. Any such algorithm can be modeled as a decision tree that must distinguish all $n!$ possible orderings of the input; with three possible outcomes per comparison, a tree of depth $d$ has at most $3^d$ leaves, so $3^d \geq n!$ and hence $d \geq \log_3(n!) = \Theta(n\log{}n)$. The third outcome only improves the constant factor over an ordinary two-way comparison.
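To see how small that constant-factor gain is, here is a rough illustration in Python (the function `min_comparisons` is my own name for the decision-tree bound, not a standard library call):

```python
import math

def min_comparisons(n, outcomes):
    # A decision tree with `outcomes`-way branching and depth d can
    # distinguish at most outcomes**d orderings, so sorting n items
    # needs depth >= log_outcomes(n!) in the worst case.
    return math.ceil(math.log(math.factorial(n), outcomes))

for n in (10, 100, 1000):
    two = min_comparisons(n, 2)    # binary comparisons
    three = min_comparisons(n, 3)  # ternary comparisons
    print(n, two, three, round(two / three, 2))
```

The ratio of the two bounds tends to $\log 3/\log 2 \approx 1.585$, a constant, so asymptotically nothing changes.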
However, non-comparison sorts like Counting Sort, LSD Radix Sort, MSD Radix Sort, Bucket Sort, etc. can be a lot quicker.
The worst case for these algorithms usually doesn't depend solely on the input being sorted, but also on other parameters. For LSD Radix Sort, for example, the worst case is determined by the number of keys (the size of the input) together with the length of the keys.
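As a minimal sketch of the idea, here is Counting Sort for integer keys in a known range $[0, k)$, which runs in $\mathcal{O}(n + k)$ time with no comparisons at all (the code is my own illustration, assuming non-negative integer keys):

```python
def counting_sort(keys, k):
    # Sort non-negative integer keys in the range [0, k) without
    # comparing keys to each other: tally occurrences, then emit
    # each value in order. O(n + k) time, O(k) extra space.
    counts = [0] * k
    for key in keys:
        counts[key] += 1
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)
    return out

print(counting_sort([3, 1, 4, 1, 5, 2], 6))  # [1, 1, 2, 3, 4, 5]
```

Note the extra parameter $k$: if the key range is huge relative to $n$, the $\mathcal{O}(n + k)$ bound is no bargain, which is exactly the sense in which the worst case depends on more than the input size.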
For most practical purposes, though, we use Quicksort, because it has an average time complexity of $\mathcal{O}(n\log{}n)$ (even though its worst case is $\mathcal{O}(n^2)$) and a space complexity of $\mathcal{O}(\log{}n)$.
Also remember that Big O notation hides constant factors; Quicksort has one of the lowest constants among comparison sorts, which is an added bonus on smaller inputs.
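For completeness, here is a sketch of how Quicksort achieves that $\mathcal{O}(\log{}n)$ space bound: recurse only into the smaller partition and loop on the larger one, so the stack depth stays logarithmic even when the partitions are unbalanced (this is my own illustrative implementation, using a randomized Lomuto partition):

```python
import random

def quicksort(a, lo=0, hi=None):
    # In-place quicksort. Recursing only into the smaller partition
    # and iterating on the larger keeps the stack depth O(log n);
    # the random pivot makes O(n log n) the expected running time.
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        p = partition(a, lo, hi)
        if p - lo < hi - p:
            quicksort(a, lo, p - 1)  # smaller side: recurse
            lo = p + 1               # larger side: loop
        else:
            quicksort(a, p + 1, hi)
            hi = p - 1

def partition(a, lo, hi):
    # Lomuto partition around a randomly chosen pivot.
    r = random.randint(lo, hi)
    a[r], a[hi] = a[hi], a[r]
    pivot, i = a[hi], lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

data = [5, 2, 9, 1, 5, 6]
quicksort(data)
print(data)  # [1, 2, 5, 5, 6, 9]
```

Production implementations add further constant-factor tricks on top of this, such as falling back to Insertion Sort on tiny subarrays.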