Question regarding the $\mathcal{O}$ notation used on constant functions


I'm having difficulty understanding the following passage from the CLRS book, where the counting sort algorithm is introduced:

Counting sort assumes that each of the elements is an integer in the range $1$ to $k$, for some integer $k$. When $k = \mathcal{O}(n)$, the Counting-Sort runs in $\mathcal{O}(n)$ time.

Remark: $n$ is the length of the input array
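For concreteness, here is a minimal sketch of such a counting sort (my own illustration, not the book's pseudocode); the two loop bounds are where the $n$ and the $k$ in the running time come from:

```python
# A minimal sketch of counting sort for integers in 1..k, assuming the
# variant quoted above (returns a new sorted list, not in-place).
def counting_sort(a, k):
    count = [0] * (k + 1)            # O(k) time and space
    for x in a:                      # O(n): tally each input value
        count[x] += 1
    out = []
    for v in range(1, k + 1):        # O(k): emit each value count[v] times
        out.extend([v] * count[v])   # total O(n) across all iterations
    return out                       # overall O(n + k)
```

The total work is $\Theta(n + k)$, which collapses to $\mathcal{O}(n)$ exactly when $k = \mathcal{O}(n)$.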

I don't understand the mathematical background of the requirement $k = \mathcal{O}(n)$. If I'm not mistaken, the claim $k = \mathcal{O}(n)$ always holds, because choosing $C := k$ gives $k \leq C \cdot n = k \cdot n$ for every $n \geq 1$. But the fact that the quoted paragraph stresses this requirement makes me think that I'm overlooking something.

I'd be thankful if you could help me with this.


There is 1 answer below.

Best answer:

The subtlety is that $k$ is not a fixed constant: it is determined by the input, so it should be read as a function of $n$, and the constant $C$ in $k \leq C \cdot n$ must be independent of $n$. Your choice $C := k$ is therefore not allowed, because it changes with the input. For a concrete counterexample, take an array whose maximum element is $a(n) = n^2$; then $k = n^2$, and no fixed $C$ satisfies $n^2 \leq C \cdot n$ for all sufficiently large $n$, so $k$ is not $\mathcal{O}(n)$.
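Spelled out against the definition of $\mathcal{O}$ (a standard restatement, added here for clarity and not part of the original answer):

$$k = \mathcal{O}(n) \iff \exists\, C > 0 \;\exists\, n_0 \;\forall\, n \geq n_0: \; k(n) \leq C \cdot n.$$

With $k(n) = n^2$ this would require $n \leq C$ for all $n \geq n_0$, which fails for every fixed $C$; the quantifier order is the whole point, since $C$ must be chosen before $n$ and cannot depend on it.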