To prove a recursive algorithm correct, we should use mathematical induction. Given the following algorithm (which sorts the subarray A[l..r], of size r − l + 1), I found that the base cases are arrays of size 0 and 1, because an empty or one-element array is already sorted. How can I prove mathematically that the algorithm's behaviour holds for larger array sizes?
MYSTERY(A, l, r)
    range := r − l + 1
    subrange := ⌈2 · range/3⌉   // I assumed the "d…e" brackets denote the ceiling function ⌈·⌉
    if range = 2 and A[l] > A[r] then
        swap A[l] ↔ A[r]
    else if range ≥ 3 then
        MYSTERY(A, l, l + subrange − 1)
        MYSTERY(A, r − (subrange − 1), r)
        MYSTERY(A, l, l + subrange − 1)
    end if
end
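For experimenting with the base cases and the recursion, here is a direct Python transcription of the pseudocode (a sketch: it assumes the "d…e" brackets denote the ceiling function, and the name `mystery` simply mirrors the pseudocode):

```python
import math

def mystery(A, l, r):
    """Direct transcription of the pseudocode; assumes the d...e
    brackets are the ceiling function, so subrange = ceil(2*range/3)."""
    rng = r - l + 1
    subrange = math.ceil(2 * rng / 3)
    if rng == 2 and A[l] > A[r]:
        A[l], A[r] = A[r], A[l]          # swap A[l] and A[r]
    elif rng >= 3:
        mystery(A, l, l + subrange - 1)  # first subrange elements
        mystery(A, r - (subrange - 1), r)  # last subrange elements
        mystery(A, l, l + subrange - 1)  # first subrange elements again

# Example: apply the routine to a small array in place.
data = [5, 2, 4, 1, 3]
mystery(data, 0, len(data) - 1)
print(data)  # → [1, 2, 3, 4, 5]
```

Running it on a few shuffled arrays is a cheap way to build confidence in the base cases before attempting the induction proof.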
This seems to have a number of problems.
First of all, what are the "d" and "e" around 2 · range/3? They are not defined anywhere; presumably they are a garbled rendering of the ceiling brackets ⌈ and ⌉, so subrange = ⌈2 · range/3⌉, as you assumed.
When range = 2, the routine sorts a two-element array. So it looks like the routine is meant to sort its input.
Otherwise you process the first subrange elements, then the last subrange elements, then the first subrange elements again. If subrange were less than half the size of the array, there would be a middle part that never gets processed; here, however, subrange = ⌈2 · range/3⌉ is always more than half of range, so the two segments overlap and no element is skipped.
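The coverage question can be checked mechanically: assuming ⌈·⌉ is the ceiling function, 2 · ⌈2n/3⌉ ≥ n for every n ≥ 2, so the first and last segments together always cover the whole range:

```python
import math

# Verify that the two recursively processed segments, each of length
# subrange = ceil(2 * range / 3), together cover the full range,
# i.e. that no middle part is ever left unprocessed.
for n in range(2, 10_000):
    subrange = math.ceil(2 * n / 3)
    assert 2 * subrange >= n, f"uncovered middle part at range {n}"
print("segments overlap for every range in 2..9999")
```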
Disregarding all this, you have to state what the induction hypothesis is. Here it would seem to be: for every array of size less than n, this routine sorts the array.
Assuming this holds, and assuming the input array has size n, you then prove that the routine sorts that array.
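Spelled out, the skeleton of such a proof might look like this (a sketch under my reading of the pseudocode; the predicate name P(n) is mine, not from the original):

```latex
% Claim P(n): for every segment A[l..r] with r - l + 1 = n,
% MYSTERY(A, l, r) leaves A[l..r] sorted.
%
% Base cases: P(0) and P(1) hold trivially; P(2) holds because the
% routine swaps the two elements when they are out of order.
%
% Inductive step (strong induction): assume P(k) for all k < n, n >= 3.
% Each recursive call acts on a segment of size
%   m = \lceil 2n/3 \rceil < n \qquad (n \ge 3),
% so by the hypothesis each call sorts its own segment. It then remains
% to argue that the three passes (first m, last m, first m again) leave
% the whole of A[l..r] sorted.
```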
However, it does not seem to me that this routine will do that.