Inefficient sorting: proving the running time


Suppose we have an algorithm for sorting that works like this:

  • Go through the starting list and pick the minimum item
  • Add the item to the output list
  • Remove the item from the input list
  • Repeat until no more elements are in the input list
  • Return output list
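The steps above can be sketched in Python (a minimal sketch; the names are mine, not from the question):

```python
def selection_sort(items):
    """Sort by repeatedly extracting the minimum, as described above."""
    input_list = list(items)  # work on a copy so the caller's list is untouched
    output = []
    while input_list:                    # repeat until the input list is empty
        smallest = min(input_list)       # go through the list, pick the minimum
        output.append(smallest)          # add it to the output list
        input_list.remove(smallest)      # remove it from the input list
    return output
```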

So, two questions here:

  • Every iteration compares $n$ elements, then $n-1$, then $n-2$, and so on. Does that mean the running time is $$ \sum_{i=0}^n (n-i) \; ? $$

  • I'm pretty sure this running time is worse than $n \log n$, but I have no idea how to prove that

$$ \sum_{i=0}^n (n-i) > n \log n $$
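The sum in the question can be checked numerically against $n \log n$ (a quick sanity check, not a proof; `total_comparisons` is my name for the sum as written):

```python
import math

def total_comparisons(n):
    # the sum from the question: (n-0) + (n-1) + ... + (n-n)
    return sum(n - i for i in range(0, n + 1))  # equals n(n+1)/2

# compare the sum with n*log2(n) for a few sizes
for n in (10, 100, 1000):
    print(n, total_comparisons(n), round(n * math.log2(n)))
```

For every size tried, the sum dominates $n \log_2 n$, which is what the inequality claims for large $n$.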

Best answer:

Your algorithm is called selection sort. Each iteration makes one comparison fewer than your count (to find the least of $n$ elements you only need $n-1$ comparisons), but that doesn't change the asymptotics.

Thus the total number of comparisons your algorithm makes is $$ (n-1)+(n-2)+(n-3)+\cdots+3+2+1+0. $$ This is the $(n-1)$-th triangular number, and there's a well-known formula for it: $$ (n-1)+(n-2)+(n-3)+\cdots+3+2+1+0 = \frac{n(n-1)}{2} = \tfrac12 n^2 - \tfrac12 n, $$ which I hope you can convert into big-O notation yourself.
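The closed form can be checked against the direct sum (a minimal sketch; the function names are mine):

```python
def triangular_count(n):
    # direct sum: (n-1) + (n-2) + ... + 1 + 0
    return sum(range(n))

def closed_form(n):
    # the well-known formula n(n-1)/2
    return n * (n - 1) // 2

# the two agree for every n tried
assert all(triangular_count(n) == closed_form(n) for n in range(200))
```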

(One way to see the formula is to add the triangular number $x$ to itself, but with the terms of the second copy given in the opposite order: $$ 2x = (n-1+0)+(n-2+1)+(n-3+2)+\cdots+(2+n-3)+(1+n-2)+(0+n-1), $$ where each of the $n$ bracketed terms equals $n-1$, so $2x = n(n-1)$.)
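As a concrete instance of this pairing trick, take $n = 5$: the sum is $x = 4+3+2+1+0 = 10$, and pairing the two copies gives $$ 2x = (4+0)+(3+1)+(2+2)+(1+3)+(0+4) = 5 \cdot 4 = 20, $$ so $x = \tfrac{5 \cdot 4}{2} = 10$, matching the formula.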