We are to determine a recurrence relation for a recursive algorithm.
Let us use the Josephus Problem for this: Given n people standing in a circle, every kth person is killed until one person remains.
The code is given as:
```python
def josephus(n, k):
    if n == 1:
        return 1
    else:
        return ((josephus(n - 1, k) + k - 1) % n) + 1
```
So I am assuming that the expression after the final `return` is the recurrent portion, because that is what defines the problem recursively. So:
T(n,k) = [(T(n-1,k) + k - 1) mod n] + 1
But my professor gives T(n) = T(n-1) + C as the recurrence relation for the factorial n!. Using that idea, I get the recurrence relation:
T(n)=T(n-1)+O(1)
With the O(1) accounting for the constant work done per call (the `if n == 1` test and the arithmetic).
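One way to check this reading of T(n) empirically (a small sketch of my own, with a hypothetical `counter` argument added to the original function) is to count the recursive calls: the recurrence T(n) = T(n-1) + O(1) predicts exactly n calls for input n.

```python
def josephus_counted(n, k, counter):
    # counter[0] tallies the number of calls, i.e. the T(n) being discussed:
    # T(n) = T(n-1) + O(1) predicts exactly n calls.
    counter[0] += 1
    if n == 1:
        return 1
    return ((josephus_counted(n - 1, k, counter) + k - 1) % n) + 1

calls = [0]
survivor = josephus_counted(10, 2, calls)
# survivor is 5, and calls[0] is exactly 10, one call per value of n
```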
And then I find the following in many math textbooks:
T(2n)=2T(n)-1
T(2n+1)=2T(n)+1
But it seems this is only valid when k = 2.
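This can be checked numerically; here is a small sketch (the helper name `josephus2` is mine, not from the textbooks) comparing the doubling recurrence against the general one — they agree for k = 2 but not for other k:

```python
def josephus(n, k):
    # general recurrence: J(1) = 1, J(n) = ((J(n-1) + k - 1) mod n) + 1
    if n == 1:
        return 1
    return ((josephus(n - 1, k) + k - 1) % n) + 1

def josephus2(n):
    # textbook doubling recurrence: J(2n) = 2 J(n) - 1, J(2n+1) = 2 J(n) + 1
    if n == 1:
        return 1
    if n % 2 == 0:
        return 2 * josephus2(n // 2) - 1
    return 2 * josephus2(n // 2) + 1

print(all(josephus(n, 2) == josephus2(n) for n in range(1, 200)))  # agrees for k = 2
print(josephus(5, 3), josephus2(5))  # differs for k = 3
```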
This is where I am confused. I don't know which of these is the recurrence relation, and what T(n) is supposed to mean. What is a recurrence relation T(n)? Just by looking at the code, it is obvious that this algorithm has a linear upper bound.
For your Josephus problem, the recursion makes one function call for each of $n, n-1, n-2, \dots, 1$, so that $T(n)=O(n)$ is the expected outcome.
The $T(2n)=c\cdot T(n)+O(1)$ behavior typically occurs for divide-and-conquer algorithms like efficient integer powers, Karatsuba multiplication and other fast multiplication algorithms, or the FFT. There you can consider the numbers $c^{-n}T(2^n)$ and find that they most often tend to a constant, so that $T(2^n)=O(c^n)$, or $T(N)=O(N^{\log_2 c})$. For example, in Karatsuba multiplication you replace one full multiplication by 3 half-length multiplications, so $c=3$.
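To make the $c=3$ concrete, here is a minimal Karatsuba sketch (splitting in base 10 for readability; real implementations split in binary): each call issues three recursive multiplications of roughly half the length, which is what yields the $O(N^{\log_2 3})$ bound.

```python
def karatsuba(x, y):
    # Minimal sketch: multiply non-negative integers with 3 half-length
    # recursive multiplications instead of 4, so c = 3 in the recurrence.
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)                              # low * low
    z2 = karatsuba(high_x, high_y)                            # high * high
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2  # cross terms
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0
```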
The problem with the formulation of your question is that you used $T(n)$ for the iteration function. But in the context of the complexity of algorithms, $T(n)$ is almost exclusively reserved for some count of the run time for input (of size) $n$, so better use $J(n)$ or $F(n)$ for the iteration function. So yes, if $F(2n)$ and $F(2n+1)$ are computed from $F(n)$, then $T(2n)=T(n)+c$ and $T(n)=O(\log_2 n)$.
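To illustrate that logarithmic run time (a sketch of my own for the k = 2 case only, again with a call counter added): since $n$ is halved at every call, the call count is the $T(n)=O(\log_2 n)$ just described.

```python
def josephus_fast(n, counter):
    # k = 2 only: F(1) = 1, F(2m) = 2 F(m) - 1, F(2m+1) = 2 F(m) + 1.
    # n halves each call, so counter[0] ends up at floor(log2 n) + 1.
    counter[0] += 1
    if n == 1:
        return 1
    half = josephus_fast(n // 2, counter)
    return 2 * half - 1 if n % 2 == 0 else 2 * half + 1

calls = [0]
josephus_fast(10 ** 6, calls)
# calls[0] is 20 here, versus 10**6 calls for the linear recursion
```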