Euclid's algorithm and Gaussian elimination: most computationally efficient approach


I think this is more a mathematics question than a computer science one - but as it is about computational complexity it could be either, so apologies if you think it is in the wrong place...

The question is this: when using Euclid's algorithm to simplify a rational generated through Gaussian elimination, is it more or less computationally efficient to apply the algorithm to each rational number generated or to the final rational alone, or does it make no difference?

To explain further: I have built a software simulator of a parallel computing device. Unfortunately the simulator runs very slowly, so speeding up the test case - Gaussian elimination - might help.

Each major step in the elimination can be thought of as a calculation of the form $\frac{A}{B} = \frac{a}{b} - \frac{c}{d}\frac{e}{f} = \frac{adf - ceb}{bdf}$, and the question is whether there is a computational advantage in applying Euclid's algorithm to the product $\frac{ce}{df}$ as a first stage, or whether it is best to wait until we get $\frac{A}{B}$ and apply the algorithm then.
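As a quick sanity check, both orders of reduction give the same lowest-terms result. A minimal Python sketch (the variable names mirror the formula above; the numeric values are just illustrative):

```python
from fractions import Fraction
from math import gcd

# One elimination step: A/B = a/b - (c/d)(e/f) = (a*d*f - c*e*b) / (b*d*f).
# Illustrative operand values.
a, b, c, d, e, f = 3, 4, 5, 6, 7, 8

# Reduce-late: form the raw numerator and denominator, one gcd at the end.
A, B = a*d*f - c*e*b, b*d*f
g = gcd(A, B)
late = (A // g, B // g)

# Reduce-early: first cancel within the product c*e/(d*f), then combine
# and cancel again.
g1 = gcd(c*e, d*f)
n, m = (c*e) // g1, (d*f) // g1
A2, B2 = a*m - n*b, b*m
g2 = gcd(A2, B2)
early = (A2 // g2, B2 // g2)

# Both strategies agree with exact rational arithmetic.
assert late == early == (Fraction(a, b) - Fraction(c, d) * Fraction(e, f)).as_integer_ratio()
print(late)
```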

Update

The full algorithm has been requested, so here goes (apologies for imprecision in naming etc. - I hope the intention/algorithm is clear though):

Let the "baseline" fraction be $\frac{a}{b}$, the constant multiplier be $\frac{c}{d}$, and the "current" fraction be $\frac{e}{f}$.

(i.e. ultimately what we are trying to find is the fraction $\frac{e}{f} - \frac{a}{b}\frac{c}{d}$ expressed as a rational in lowest terms.)

Steps:

  1. Calculate $\frac{a}{b}\frac{c}{d}$

  2. Calculate the GCF of the numerator and denominator from step 1 using Euclid's algorithm - let us call this $g^\prime$

  3. Divide numerator and denominator through by this GCF to get a reduced fraction, which we write $\frac{N}{g^\prime}$ (reusing $g^\prime$ here for the reduced denominator)

  4. Calculate $\frac{eg^\prime - fN}{fg^\prime} = \frac{O}{P}$

  5. Once again apply Euclid's algorithm to $\frac{O}{P}$ to get a new GCF, which we call $g^{\prime\prime}$

  6. Divide through again to get the final form $\frac{Q}{g^{\prime\prime}}$ (again reusing $g^{\prime\prime}$ for the reduced denominator)

(For Euclid's algorithm we divide repeatedly using remainders until we have no remainder left)
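The six steps above can be sketched in Python as follows (the function names `euclid_gcd` and `eliminate_step` are my own, and I keep the post's convention of reusing $g^\prime$ and $g^{\prime\prime}$ for the reduced denominators):

```python
def euclid_gcd(x, y):
    # Euclid's algorithm: divide repeatedly using remainders
    # until no remainder is left (steps 2 and 5).
    x, y = abs(x), abs(y)
    while y:
        x, y = y, x % y
    return x

def eliminate_step(a, b, c, d, e, f):
    # Steps 1-3: form the product (a/b)(c/d) and reduce it to N/g'.
    num, den = a * c, b * d
    g = euclid_gcd(num, den)
    N, gp = num // g, den // g        # the reduced fraction N/g'
    # Step 4: O/P = e/f - N/g' = (e*g' - f*N) / (f*g').
    O, P = e * gp - f * N, f * gp
    # Steps 5-6: reduce O/P to the final form Q/g''.
    g2 = euclid_gcd(O, P)
    return O // g2, P // g2

# Example: 3/4 - (1/2)(1/3) = 3/4 - 1/6 = 7/12.
print(eliminate_step(1, 2, 1, 3, 3, 4))
```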

I appreciate that for any given set of rationals there may be a different answer - so I am merely looking for guidance as to which approach is most likely to be best: should I include the intermediate reduction in steps 2 and 3 or not?

Best Answer

The computational complexity is going to be the same either way (in the usual big-O sense). Whenever you are talking about doing something once or twice (or a fixed number $n$ of times), or on double versus single length input, you are only changing the constant factor of the big-O complexity, not its argument.

So, what you are talking about here is a fine-tuning issue, and that typically comes down to hardware and/or fine-tuned assembly code versus your own code.

Euclid's algorithm, which is your bottleneck if I understand your algorithm correctly, takes a number of steps proportional to the length (number of digits) of the smaller input, multiplied by the time required for each modular reduction.
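That bound is easy to see by counting reductions directly. A small sketch (the helper `gcd_steps` is mine):

```python
def gcd_steps(x, y):
    # Count the modular reductions Euclid's algorithm performs on (x, y).
    steps = 0
    while y:
        x, y = y, x % y
        steps += 1
    return steps

# A small second operand bounds the work, however large the first one is.
print(gcd_steps(10**18, 6))

# Consecutive Fibonacci numbers are the classic worst case: the step count
# grows with the length of the smaller operand.
fibs = [1, 1]
while len(fibs) < 40:
    fibs.append(fibs[-1] + fibs[-2])
print(gcd_steps(fibs[-1], fibs[-2]))
```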

Breaking one $\gcd$ problem into multiple does not usually help with input size. In particular, the first reduction may find no common factor, in which case you end up reducing everything at the end anyway. In the best case two reductions can win on input size, but in most situations they increase the total input size.
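For a concrete instance (numbers mine): with the product $\frac{ce}{df} = \frac{35}{48}$, the early pass finds nothing to cancel, so it is pure overhead:

```python
from math import gcd

# c*e = 35 and d*f = 48 are coprime, so an early reduction pass
# does no useful work; the full reduction still happens at the end.
c_e, d_f = 5 * 7, 6 * 8
print(gcd(c_e, d_f))   # 1
```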

An example of two operations reducing total input size: in the first reduction the first number is a small divisor of the second, and in the second reduction the second number is small; two operations keep you bounded by the two small numbers, whereas one operation binds you to a large number. This would probably be pretty rare, and you would typically have a smaller total input with one operation.

Where it gets trickier is the time for each modular reduction. This can increase significantly if you cross barriers in your hardware. On a 64-bit machine, if you cross the 64-bit mark by waiting until the end, you end up doing 128-bit arithmetic, which will be a lot slower. This slowdown will probably drown out everything else - not to mention the programming complexity introduced by going beyond the maximum integer size your implementation supports.
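One way to estimate whether the deferred intermediates would cross that barrier is to check their bit lengths. A Python sketch (Python's integers are arbitrary precision, so here the check is only diagnostic - in a fixed-width language you would compare against the type's limits instead):

```python
def fits_in_signed_64(*values):
    # A signed 64-bit word holds magnitudes of at most 2**63 - 1.
    return all(abs(v).bit_length() <= 63 for v in values)

# With six operands near 2**31, the unreduced denominator b*d*f alone
# needs roughly 93 bits, so deferring every reduction forces
# multi-word (128-bit) arithmetic.
a, b, c, d, e, f = (2**31 - k for k in range(1, 7))
print(fits_in_signed_64(a*d*f - c*e*b, b*d*f))
```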

So, in short, do the gcd once, unless you can identify a significant probability that this will force you to cross barriers in the hardware (for example, by requiring 128 bit arithmetic). In that case, it may be worth doing it twice. The only way to be sure is to actually write your code both ways and measure. However, if you know you will not be crossing such hardware barriers, you are very likely to do better with one final reduction.
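The "write it both ways and measure" step might look like this (a hypothetical micro-benchmark; the operand range and iteration counts are arbitrary choices of mine):

```python
import random
import timeit
from math import gcd

def step_late(a, b, c, d, e, f):
    # One reduction, applied only to the final result.
    A, B = a*d*f - c*e*b, b*d*f
    g = gcd(A, B)
    return A // g, B // g

def step_early(a, b, c, d, e, f):
    # Reduce the product first, then reduce the final result as well.
    g1 = gcd(c*e, d*f)
    n, m = (c*e) // g1, (d*f) // g1
    A, B = a*m - n*b, b*m
    g2 = gcd(A, B)
    return A // g2, B // g2

random.seed(0)
data = [tuple(random.randrange(1, 10**9) for _ in range(6)) for _ in range(5000)]

for name, fn in (("late", step_late), ("early", step_early)):
    t = timeit.timeit(lambda: [fn(*row) for row in data], number=5)
    print(f"{name:5s} {t:.3f}s")
```

Both variants return identical reduced fractions, so only the timing differs.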

Three reminders you might find helpful:

  1. Your programming language probably supports types beyond the native integer type supported by your hardware. A common situation is a language supporting 128-bit types on a machine that natively supports 64-bit types.

  2. A fine-tuned Euclidean algorithm provided with your programming language will almost certainly be a lot faster than one you write by hand (no matter how well you write it - if it is provided for you, it was probably hand-written in assembly language).

  3. Trust your compiler (if you are implementing in a compiled language) to handle fine optimizations. For example, if you need a 128-bit type to be sure of avoiding overflow, you probably won't pay much more than you would for 64-bit modular reductions unless the actual arguments exceed 64 bits.