The rationale behind the modular exponentiation algorithm from the book "Introduction to Algorithms"


I saw this pseudocode in the book Introduction to Algorithms (Chapter 31, page 957) for implementing modular exponentiation.

    MODULAR-EXPONENTIATION(a, b, n)
        c = 0
        d = 1
        let <b_k, b_{k-1}, ..., b_0> be the binary representation of b
        for i = k downto 0
            c = 2c
            d = (d * d) mod n
            if b_i == 1
                c = c + 1
                d = (d * a) mod n
        return d
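For reference, the pseudocode translates almost line for line into Python (a sketch; the function name is mine, and `c` is kept only because the book uses it to state the loop invariant $d = a^c \bmod n$):

```python
def modular_exponentiation(a, b, n):
    """Square-and-multiply, scanning the bits of b from most
    significant to least significant, as in the book's pseudocode."""
    c = 0  # exponent accumulated so far; invariant: d = a^c mod n
    d = 1
    for bit in bin(b)[2:]:  # binary digits b_k ... b_0, MSB first
        c = 2 * c
        d = (d * d) % n     # squaring doubles the exponent c
        if bit == '1':
            c = c + 1
            d = (d * a) % n  # a 1 bit adds 1 to the exponent c
    return d
```

The variable `c` is never used to compute the answer; it only records which power of `a` the current `d` is congruent to.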

I was a bit confused by the format they used. The following is what I know about solving modular exponentiation:

Example to solve $4^{10} \quad \% \quad 7$

A

$4^2 \quad \% \quad 7 \quad = \quad 16 \quad \% \quad 7 \quad = \quad 2 $

B

$4^4 \quad \% \quad 7 \quad = \quad (4^2)^2 \quad \% \quad 7 \quad = \quad 2^2 \quad \% \quad 7 \quad = \quad 4 $ (since we know from A that $4^2 \,\%\, 7$ is 2)

C

$4^8 \quad \% \quad 7 \quad = \quad (4^4)^2 \quad \% \quad 7 \quad = \quad 4^2 \quad \% \quad 7 \quad = \quad 2 $

Then I take the binary representation of 10, which is 1010. The 1 bits correspond to the values 2 and 8 (that is, $2^1$ and $2^3$).

So I will have

$(4^8 \quad \% \quad 7) \times (4^2 \quad \% \quad 7)$

But from A and C we know that

$4^8 \quad \% \quad 7 = 2 $

and

$4^2 \quad \% \quad 7 = 2 $

So therefore we have

(2 * 2) % 7 = 4 % 7 = 4
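This method of mine can be written as a small program (my own sketch, not from the book; it squares the base repeatedly and folds in a factor for each 1 bit of the exponent, least-significant bit first):

```python
def modexp_rtl(a, b, n):
    """Right-to-left binary exponentiation: keep a^(2^i) mod n
    and multiply it into the result whenever bit i of b is 1."""
    result = 1
    power = a % n            # a^(2^i) mod n for the current bit i
    while b > 0:
        if b & 1:            # bit i of b is set
            result = (result * power) % n
        power = (power * power) % n   # a^(2^i) -> a^(2^(i+1))
        b >>= 1
    return result
```

For $4^{10} \bmod 7$ this multiplies in exactly the two factors $4^2 \bmod 7$ and $4^8 \bmod 7$ from steps A and C.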

But from the modular exponentiation algorithm above, I can see that they are using a different format. They first find the binary representation of the exponent (10) which is 1010.

Then they follow these steps to find the result (with $a = 4$, $b = 10$ and $n = 7$):

d = 1 (initial value)

Loop 1 (bit = 1)

d = d * d = 1 * 1 = 1; 1 mod 7 = 1

d = d * a = 1 * 4 = 4; 4 mod 7 = 4

Loop 2 (bit = 0)

d = d * d = 4 * 4 = 16; 16 mod 7 = 2

Loop 3 (bit = 1)

d = d * d = 2 * 2 = 4; 4 mod 7 = 4

d = d * a = 4 * 4 = 16; 16 mod 7 = 2

Loop 4 (bit = 0)

d = d * d = 2 * 2 = 4; 4 mod 7 = 4

Therefore the final answer is 4.
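A small script (my own sketch, not from the book) reproduces this trace, one loop iteration per bit of 1010:

```python
def modexp_traced(a, b, n):
    """Same left-to-right algorithm, printing d after every update."""
    d = 1
    for i, bit in enumerate(bin(b)[2:], start=1):
        d = (d * d) % n
        print(f"Loop {i}: d = d*d mod {n} = {d}")
        if bit == '1':
            d = (d * a) % n
            print(f"        d = d*a mod {n} = {d}")
    return d
```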

My question is: why does their approach work, and why is it simpler? What rationale or theorem underlies it?



The key idea is to use the rules of exponents together with the binary expansion of $b$ to speed things up. Consider $a^{b}$ and write $b$ in binary: $$b = \sum_{i=0}^{\ell} b_{i}2^{i},$$

where $\ell = \lfloor \log_{2}(b) \rfloor$ (basically, our stopping point) and each $b_{i} \in \{0,1\}$. So:

$$a^{b} = \prod_{i=0}^{\ell} a^{b_{i}2^{i}}.$$

We need only evaluate the factors where $b_{i} \neq 0$. Furthermore, since each factor is $a$ raised to a power of $2$, we can square the previously stored result to get the next factor rather than re-running the same computation.
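This product formula can be checked numerically on the question's own example (a sketch; the comprehension collects the factors $a^{2^i} \bmod n$ for the set bits of $b$):

```python
# For a = 4, b = 10 = 0b1010, n = 7: the set bits of b select the
# factors a^2 and a^8; their product mod n equals a^b mod n.
a, b, n = 4, 10, 7
factors = [pow(a, 2 ** i, n) for i in range(b.bit_length()) if (b >> i) & 1]
product = 1
for f in factors:
    product = (product * f) % n
```

Here `factors` comes out as `[2, 2]`, matching steps A and C above, and `product` is 4.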


There are two obvious methods. One: you calculate $x^1$, $x^2$, $x^4$, $x^8$, $x^{16}$, etc., and then multiply together the powers that you need. Two: you calculate in turn $x^k$, where $k$ is the number formed by the highest 1, 2, 3, 4, ... bits of the desired exponent; at each step you square the running result and then optionally multiply by $x$.

You were using the first method. The second one has important advantages. First, the Miller-Rabin primality test requires the second method. Second, if the exponents are large, then instead of squaring and then multiplying in about 50% of the steps, you can pre-calculate $x^2$ and $x^3$ (or everything up to $x^7$), then square three times followed by at most one multiplication, saving a significant number of multiplications modulo $p$.
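That windowed variant might look like this in Python (my sketch of the fixed-window idea, assuming a window width of $w = 3$ with the table $x^0, \ldots, x^7$ precomputed; not taken from the book):

```python
def modexp_window(x, e, n, w=3):
    """Fixed-window (2^w-ary) exponentiation: precompute x^0 .. x^(2^w - 1),
    then for each w-bit chunk of e (MSB first) square w times and do at
    most one table multiplication."""
    table = [1] * (1 << w)
    for i in range(1, 1 << w):
        table[i] = (table[i - 1] * x) % n   # table[i] = x^i mod n
    result = 1
    bits = bin(e)[2:]
    bits = '0' * (-len(bits) % w) + bits    # pad to a multiple of w bits
    for j in range(0, len(bits), w):
        chunk = int(bits[j:j + w], 2)       # next w bits of the exponent
        for _ in range(w):
            result = (result * result) % n  # w squarings per chunk
        if chunk:
            result = (result * table[chunk]) % n  # one multiply per chunk
    return result
```

Per 3 exponent bits this does three squarings and at most one multiplication, versus up to three multiplications for plain square-and-multiply.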