Why does the binary GCD algorithm have $O(n^2)$ complexity?

I don't have Donald Knuth's book, so I'm simply wondering: why does the binary GCD algorithm have $O(n^2)$ complexity?


The idea behind this modification of the standard Euclidean algorithm is that instead of doing ordinary modulo operations, we strip out factors of two with cheap shift operations. First, remove all common powers of two from $x$ and $y$ (these contribute directly to the gcd). Without loss of generality, assume $x$ is even and $y$ is odd. We then remove all remaining powers of two from $x$ (in other words, remove all trailing zeros in its binary representation), so $x$ and $y$ are both odd. Next, we go from $(x, y)$ to $(x - y, y)$; since the difference of two odd numbers is even, we again have an even and an odd number, and the next shift step strips at least one bit. This way, each subtraction is followed by at least one halving, so the total number of bits $\log_2(x) + \log_2(y)$ bounds the number of steps.
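The procedure described above can be sketched in Python. This is a minimal illustration of the shift-and-subtract idea, not Knuth's exact formulation; the function name `binary_gcd` is my own:

```python
def binary_gcd(x: int, y: int) -> int:
    """Compute gcd(x, y) for non-negative x, y using only shifts and subtractions."""
    if x == 0:
        return y
    if y == 0:
        return x
    # Factor out the common powers of two: gcd(2a, 2b) = 2 * gcd(a, b).
    shift = 0
    while x % 2 == 0 and y % 2 == 0:
        x //= 2
        y //= 2
        shift += 1
    # Strip remaining factors of two from x; they are not shared with y.
    while x % 2 == 0:
        x //= 2
    # Invariant: x is odd at the top of the loop.
    while y != 0:
        while y % 2 == 0:
            y //= 2
        # Both x and y are odd here; their difference is even, so the
        # inner loop above strips at least one bit next iteration.
        if x > y:
            x, y = y, x
        y -= x
    # Restore the common powers of two removed at the start.
    return x << shift
```

For example, `binary_gcd(48, 18)` removes one common factor of two, then alternates subtractions and shifts to arrive at 6.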

Let $n = \log_2(\max(x,y))$ (the maximum possible number of bits). The algorithm performs $O(n)$ steps, and each step (a subtraction or a shift of an $n$-bit number) takes $\Theta(n)$ time, so the overall worst-case complexity is indeed $O(n^2)$.