Halving input but not at each step of a process


I am reading about an algorithm that takes $2$ integers as input and computes their GCD as follows:
If both are even, then shift both numbers right (= division by $2$) and repeat
If exactly one is even, then shift right (= division by $2$) the even number only and repeat
If both are odd, then call the process with $(x - y), y$ (assuming $x > y$)
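For concreteness, the three rules above can be sketched in code. This is a minimal sketch of the described algorithm (usually known as binary GCD); the function name and the `shift` bookkeeping that restores shared factors of $2$ are my own additions:

```python
def binary_gcd(x: int, y: int) -> int:
    """Sketch of the binary GCD using the three rules above."""
    if x == 0:
        return y
    if y == 0:
        return x
    shift = 0
    # Rule 1: both even -> halve both, remembering the shared factor of 2.
    while x % 2 == 0 and y % 2 == 0:
        x //= 2
        y //= 2
        shift += 1
    # Make x odd once; from here on x stays odd.
    while x % 2 == 0:
        x //= 2
    while y != 0:
        # Rule 2: one number even -> halve the even one (here always y).
        while y % 2 == 0:
            y //= 2
        # Rule 3: both odd -> replace the larger by the difference.
        if x > y:
            x, y = y, x
        y -= x          # y - x is even, so it gets halved on the next pass
    return x << shift   # restore the common factors of 2
```

Note that $x - y$ is always even when both $x$ and $y$ are odd, which is why the subtraction case always feeds back into a halving case.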

Now if both input numbers are powers of $2$, this process keeps halving the numbers at each step, so the complexity is $\log|x| + \log|y|$ (where I write $\log|x|$ for the number of bits in $x$)
But in all other cases we can end up alternating between even and odd at each call, i.e.
If we have $2$ even numbers, a right shift (i.e. division by $2$) can give $2$ odd numbers.
This happens when the $2$ even numbers are of the form $2(2n + 1)$ and $2(2k + 1)$.
If we have $2$ odd numbers, we do the subtraction and end up with $1$ even and $1$ odd number.
This means that, in the worst case, each number is halved only at every second step.
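This alternation can be watched directly by tracing the cases on a small pair such as $(6, 10) = (2(2\cdot 1+1),\ 2(2\cdot 2+1))$. The following is a hypothetical trace helper (names are mine), applying one rule of the process per step:

```python
def one_step(x, y):
    """Apply a single case of the algorithm from the question."""
    if x % 2 == 0 and y % 2 == 0:
        return x // 2, y // 2      # both even: halve both
    if x % 2 == 0:
        return x // 2, y           # only x even: halve x
    if y % 2 == 0:
        return x, y // 2           # only y even: halve y
    if x < y:                      # both odd: subtract smaller from larger
        x, y = y, x
    return x - y, y

x, y = 6, 10                       # = 2*(2*1+1), 2*(2*2+1)
trace = [(x, y)]
while x != y:
    x, y = one_step(x, y)
    trace.append((x, y))
# trace: (6,10) -> (3,5) -> (2,3) -> (1,3) -> (2,1) -> (1,1)
```

The parities run even/even, odd/odd, even/odd, odd/odd, even/odd: every subtraction step is followed by a halving step, so halvings occur only every other step.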

So what is the complexity in this case? Is it still $\log|x| + \log|y|$ even though each number is halved only every other step?

There are 2 best solutions below

Answer 1:

It depends on what you mean by "What is the complexity?"

In order to answer that question, you need a model for what sort of operations count as "one step". If your model is a CPU with a built-in $\text{gcd}$ operation (using your algorithm), then your algorithm has time complexity exactly $1$. (This is probably not the model you should use.)

More practically, you haven't specified what size your integers are. On an x86 CPU (now somewhat dated), addition of 32-bit integers might take one step…but addition of 64-bit integers might take two. (An add for the lower-order 32 bits, followed by an add-with-carry for the upper-order 32 bits.) So your GCD algorithm might take twice as long for 64- (as opposed to 32-) bit integers.

For this reason, we usually drop constant prefactors when analyzing the complexity of an algorithm. (This is the mathematical definition of $O$, if you've seen that.) If an algorithm takes twice as long as you want, then just wait 6 months and Moore's law will fix your problem. If it scales like $O(2^n)$, then Moore's law will never help you.

Change the "time unit" of your computation to be "two CPU operations." Call this a "double-CPU operation." Then your algorithm consumes at least one bit of its input on each double-CPU operation; thus, it takes at most $(\log{|x|}+\log{|y|})$-many double-steps. Converting back to the usual units, your algorithm takes at most $2(\log{|x|}+\log{|y|})$-many steps.

Since your algorithm takes at most $2\cdot(\text{logs})$ time, it is indeed $O(\text{logs})$.
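As a sanity check on this bound, one can count the steps empirically. The following is my own sketch (not from any reference), reading $\log|x|$ as the bit length of $x$ and counting one step per halving or subtraction:

```python
import random

def gcd_steps(x, y):
    """Count the halving/subtraction steps of the question's algorithm."""
    steps = 0
    while x != y:
        if x % 2 == 0 and y % 2 == 0:
            x, y = x // 2, y // 2  # both even: halve both
        elif x % 2 == 0:
            x //= 2                # only x even
        elif y % 2 == 0:
            y //= 2                # only y even
        else:                      # both odd: subtract
            if x < y:
                x, y = y, x
            x -= y                 # x - y is even, so the next step halves it
        steps += 1
    return steps

# Check the bound steps <= 2*(log|x| + log|y|) on random positive inputs.
random.seed(0)
for _ in range(1000):
    x = random.randrange(1, 1 << 20)
    y = random.randrange(1, 1 << 20)
    assert gcd_steps(x, y) <= 2 * (x.bit_length() + y.bit_length())
```

Intuitively, the check passes because a subtraction step never increases the total bit count, and the halving step that must follow it removes at least one bit, so at most two steps are spent per bit of input.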

Answer 2:

Hmm, I believe this is known as Stein's algorithm, also called the binary GCD or binary Euclidean algorithm.

You can read about its complexity here; that page also has references to the complexities of other GCD algorithms, which might help.

I'm not too sure, but I think your reasoning is on the right track, and Wikipedia's mention of the number of steps being on the order of $\log(\max(u, v))$ seems to match your logic.