I am not sure whether this question is a good fit for this forum. If it is not, perhaps it should be moved to a more CS-related place like Stack Overflow.
I am currently taking an intro-level algorithms course. When discussing exponential complexity, the professor provides the following C++ snippet to compute $2^n$.
__int64 power2BF_I(int n)   // brute-force computation of 2^n
{
    __int64 pow = 1;        // will hold 2^n
    while (0 < n--)         // the loop body runs exactly n times
        pow <<= 1;          // double pow on each iteration
    return pow;
}
Then he claims that this algorithm has complexity $O(2^r)$, where $r=1+\lfloor \log_2 n\rfloor$ is the number of binary digits of the input $n$, and that it is therefore intractable because of this exponential complexity.
However, while I believe his analysis is correct, I could equally well say that the algorithm has complexity $O(n)$, which would make the problem tractable.
I am not sure how to resolve this apparent contradiction. Could someone provide some pointers? Thank you in advance.
Complexity is typically measured in terms of the size of the input, where the input is assumed to be presented economically. Here, it would not be reasonable to encode $n$ in more than $\lceil \log_2n\rceil$ bits, so that is the measure of input to use.
When we do complexity analysis of an everyday problem like sorting a list, it is assumed that all the list items are encoded in some fixed number of bits, so that the size of the input is proportional to the number of items in the list.
When we want to analyze a primality-testing algorithm, we consider the number of bits in the input. If we have an algorithm that works well for $200$-bit numbers, and we want to know how it scales to $300$-bit numbers, we consider that the input size has increased by a factor of $1.5$, even though the value of the input may have grown by a factor of up to $2^{100} \approx 10^{30}$.