Why is computing powers in modular arithmetic $O(2^k)$?


[Screenshot of the lecture notes, not reproduced: the claim in question is that the naive method of computing powers in modular arithmetic takes $O(2^k)$, i.e. is not polynomial time.]

How do they get the exponential-time claim? Is this a mistake in my lecture notes? I am told that multiplying two $k$-bit integers takes roughly $\le 2k^2$ bit operations; shouldn't $b-1$ multiplications therefore take $\le (b-1) \cdot 2k^2$ bit operations $= O(k^2)$?

There are 2 answers below.

Accepted answer:

Your question is a fair one. Based on your subsequent comments, I don't think you're grossly misunderstanding big-O notation; rather, the book is eliding a couple of ideas into one brief sentence, which is perhaps a bit unfair in an introductory textbook, though it would be well understood in the context of a journal paper.

Based on the structure of the sentence, which ends with "not polynomial time", it would be reasonable for a reader to assume that the $O(2^k)$ also refers to time complexity, but as you point out the actual time complexity is $O(k^2 \cdot 2^k) \ne O(2^k)$. As an experienced reader, I could easily imagine the author making the following chain of thoughts while writing this sentence:

  • Because of the range of $b$, the best we can say about the number of multiplications is that it's $O(2^k)$, no smaller complexity class would be true.
  • In order to achieve polynomial time, we certainly need to perform at most a polynomial number of multiplications.
  • Since the number of multiplications is (in general, when $b$ is not unusually small for its allowed range of $k$ bits) exponential, this has no hope of being polynomial time.

IMO you are absolutely right to be a bit confused by this, since generally math textbooks are written in precise language. Of course, authors and editors are still very much human and you can't assume there are no mistakes or slightly imprecise wordings such as this one.
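For contrast, the standard way to make modular exponentiation polynomial in $k$ is square-and-multiply, which uses $O(k)$ modular multiplications instead of $O(2^k)$. A minimal sketch (my own illustration, not from the notes; the function name is arbitrary):

```python
def pow_mod(a, b, n):
    """Square-and-multiply: O(k) multiplications for a k-bit exponent b >= 0."""
    result = 1
    base = a % n
    while b > 0:
        if b & 1:                    # this bit of the exponent is set
            result = (result * base) % n
        base = (base * base) % n     # square once per bit
        b >>= 1                      # k iterations total for a k-bit b
    return result
```

Since every operand is reduced mod $n$, each of the $O(k)$ multiplications involves $k$-bit numbers, giving a polynomial $O(k^3)$ bit-operation bound overall.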

Another answer:

The exponent $b$ is a $k$-bit number with a value up to $2^k-1$. Therefore the naive method performs $O(2^k)$ multiplications.
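The point above can be made concrete by counting the multiplications the naive method actually performs. A hedged sketch (my own illustration; the function name and the returned multiplication counter are mine):

```python
def naive_pow_mod(a, b, n):
    """Naive repeated multiplication for a^b mod n, b >= 1.

    Performs exactly b - 1 modular multiplications; since a k-bit
    exponent b can be as large as 2^k - 1, this is O(2^k) in k.
    Returns (result, number_of_multiplications).
    """
    result = a % n
    mults = 0
    for _ in range(b - 1):
        result = (result * a) % n
        mults += 1
    return result, mults
```

For example, `naive_pow_mod(3, 5, 7)` does $b - 1 = 4$ multiplications, so doubling the bit-length of $b$ squares the work rather than merely doubling it.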