I have the following algorithm:
```python
def pow1(n):
    """Return 2 ** n, where n is a non-negative integer."""
    if n == 0:
        return 1
    x = pow1(n // 2)
    if n % 2 == 0:
        return x * x
    return 2 * x * x
```
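As a quick sanity check that the algorithm is correct:

```python
# pow1 should agree with the built-in exponentiation for small inputs.
assert all(pow1(n) == 2 ** n for n in range(64))
```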
Applying the Master Theorem, I've come to the following conclusion:
The algorithm has one subproblem ($a = 1$), the subproblem size halves with each call ($b = 2$), and the work done outside the recursion is constant ($f(n) = 1$). (In reality it is some constant time depending on, e.g., the implementation of division/modulo.) Looking at the expression $n^{\log_b a}$ and substituting, we have that
$n^{\log_b a} = n^{\log_2 1} = n^0 = 1$
and since this is (asymptotically) equal to $f(n)$, the final runtime, according to the theorem (case 2), is
$\Theta(n^{\log_2 1} \log n) = \Theta(\log n)$
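The count of arithmetic operations really is logarithmic, and one way to see what the theorem is counting is to instrument the function. A minimal sketch (pow1_counted and the one-element counter list are my own naming, not part of the original code):

```python
def pow1_counted(n, counter):
    """Like pow1, but tallies big-integer multiplications in counter[0]."""
    if n == 0:
        return 1
    x = pow1_counted(n // 2, counter)
    if n % 2 == 0:
        counter[0] += 1   # one multiplication: x * x
        return x * x
    counter[0] += 2       # two multiplications: 2 * x * x
    return 2 * x * x

for n in (10, 10**3, 10**6):
    counter = [0]
    pow1_counted(n, counter)
    print(f"n = {n:>7}: {counter[0]} multiplications")  # grows ~ log2(n)
```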
But when I run benchmarks on this with $n$ ranging from $10$ up to $10^7$, I see a more or less linear increase in running time (a minimal version of the benchmark follows the questions). So my questions are:
Is there something I'm missing when applying the Master Theorem that would imply that the complexity is in fact linear?
Is there something in the algorithm itself that is linear that I'm missing?
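For reference, here is a minimal version of the kind of benchmark I mean (it reuses pow1 from above; absolute timings will vary by machine, only the trend matters):

```python
import time

# Time pow1(n) for growing n; with logarithmic complexity the times would
# barely move, while with linear complexity they grow roughly tenfold per step.
for n in (10**4, 10**5, 10**6, 10**7):
    start = time.perf_counter()
    pow1(n)
    elapsed = time.perf_counter() - start
    print(f"n = {n:>8}: {elapsed:.4f} s")
```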
EDIT:
I realise now that the main point of confusion for me was mixing up what $n$ means. When discussing the time complexity of multiplication, $n$ usually signifies the size of the multiplied numbers, not the numbers themselves. For example, Python uses a multiplication algorithm for small numbers that has time complexity $O(n^2)$, but that does not mean that if you make the numbers twice as large, the time increases by a factor of 4. The increase in the number of digits is what determines the time complexity. Multiplying two $n$-digit numbers (in base 2) produces a number of up to $2n$ digits, which should make it obvious that if we instead let $n$ signify the size of the number, the time complexity is $O(n)$.
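A quick way to see this size growth in practice is to look at the bit length of the results; a sketch (int.bit_length is Python's built-in method, and the chosen values of $n$ are arbitrary):

```python
# The result of pow1(n) is 2 ** n, which has exactly n + 1 bits,
# so the size of the output grows linearly in n.
for n in (10, 100, 1000, 10000):
    print(f"n = {n:>5}: pow1(n) has {pow1(n).bit_length()} bits")
```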
Since the result has $O(n)$ digits, merely writing the return value to memory takes $O(n)$ time. Indeed, the `x * x` step takes $O(\log x) = O(n)$ time. Therefore, you want the last term in the Master Theorem to be $O(\log x) = O(n)$, not $O(\log n)$.
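To spell out the corrected application of the theorem (adopting the assumption above that each multiplication of the $\Theta(n)$-bit operands costs time linear in their size), the recurrence becomes

$$T(n) = T(n/2) + \Theta(n).$$

Here $f(n) = \Theta(n)$ polynomially dominates $n^{\log_b a} = n^{\log_2 1} = 1$, and the regularity condition $a \, f(n/b) = f(n/2) \le \tfrac{1}{2} f(n)$ holds, so case 3 of the Master Theorem yields

$$T(n) = \Theta(f(n)) = \Theta(n),$$

which matches the linear growth observed in the benchmarks.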