How to determine computational efficiency of different formulas?


I am not sure whether the speed at which computers can compute a formula depends heavily on the calculating program. My question assumes there is a significant difference, and asks accordingly. Do let me know if this assumption is unnecessary.

If I have several formulas that give the same answer for the same input(s), how do I determine:

  1. Which is generally computed the fastest (averaging across different calculating programs)?
  2. For any (or most) given program(s), which formula is computed the fastest?

I realize there may not be all that many general principles to answer the second question with. If so, I'll remove it.


There is algorithmic complexity, which is independent of the programming language and the hardware it is implemented on (assuming a specific model of computation, e.g. a Turing machine model, a quantum computing machine model, etc.).

Since there can be variation in software (such as programming language, compiler used, optimizations applied, third-party libraries used, etc.) and in hardware implementations (register word size, clock speed, etc.), Big-O notation is used to abstract away these implementation details and facilitate comparison of different algorithms that do the same thing (e.g. multiplication, sorting, searching, etc.).

If you have different formulas that achieve the same thing in different ways, it is possible they have different algorithmic complexity, expressed in terms of the base operations of the chosen computing model. Example: multiplication of two integers may be done using

  • textbook long multiplication
  • Karatsuba's multiplication

Both achieve the same result, but Karatsuba's multiplication uses a different number of base additions and multiplications of digits. The assumption behind Karatsuba's method is that addition is simpler and faster to implement than multiplication, so trading multiplications for additions pays off.
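To make the comparison concrete, here is a minimal Python sketch of Karatsuba's method for non-negative integers (the function name and the base-10 splitting are my own choices; real implementations split on binary digits and switch to textbook multiplication below a tuned threshold):

```python
def karatsuba(x, y):
    """Multiply non-negative integers x and y using Karatsuba's method.

    Replaces the four sub-multiplications of the textbook split
    x*y = ac*p^2 + (ad + bc)*p + bd  with only three, at the cost
    of a few extra additions and subtractions.
    """
    if x < 10 or y < 10:          # base case: single-digit operand
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    p = 10 ** half
    a, b = divmod(x, p)           # x = a*p + b
    c, d = divmod(y, p)           # y = c*p + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # (a+b)(c+d) - ac - bd == ad + bc, computed with ONE multiplication
    mid = karatsuba(a + b, c + d) - ac - bd
    return ac * p * p + mid * p + bd
```

For example, `karatsuba(1234, 5678)` returns the same value as `1234 * 5678`, but the recursion performs three half-size multiplications per level instead of four, giving roughly $O(n^{1.585})$ digit multiplications instead of the textbook $O(n^2)$.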

To answer your specific questions:

  1. Which is generally computed the fastest (averaging across different calculating programs)?

  2. For any (or most) given program(s), which formula is computed the fastest?

The answer to both questions is: the algorithm with the lower asymptotic complexity (e.g. as measured in Big-O notation) is theoretically faster for sufficiently large inputs.

Big-O notation ignores constant factors, but the constant factor may impact program speed (i.e., real wall-clock time). E.g. $O(n^2) \sim O(2n^2)$: a single nested loop performs roughly $n^2$ operations, while two such nested loops in sequence perform roughly $2n^2$, yet both are $O(n^2)$ and asymptotically equivalent. In practice you would see a difference depending on the actual input and the constant factor. Asymptotic behavior is by definition behavior at the limit, i.e., as the input size tends to $\infty$.
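The nested-loop example above can be sketched as follows (the function names are illustrative; the loop bodies just count iterations so the operation counts are explicit):

```python
def pairs(n):
    """One nested loop: performs n*n basic operations -> O(n^2)."""
    count = 0
    for i in range(n):
        for j in range(n):
            count += 1
    return count

def pairs_twice(n):
    """Two nested loops in sequence: 2*n*n operations, still O(n^2)."""
    count = 0
    for i in range(n):
        for j in range(n):
            count += 1
    for i in range(n):
        for j in range(n):
            count += 1
    return count
```

Here `pairs_twice` does twice the work of `pairs` for every input, so its wall-clock time is roughly double, yet both are classified as $O(n^2)$ because the constant factor 2 is discarded.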

In reality, program speed (i.e., wall-clock time) on specific inputs depends on both the algorithmic complexity and the implementation.
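To tie this back to the original question about formulas: two formulas that give the same answer can have very different complexity, and you can measure the wall-clock difference directly. A small sketch using Python's standard `timeit` module (the function names are mine), comparing an $O(n)$ loop against the $O(1)$ closed form $n(n+1)/2$ for summing $1 + 2 + \dots + n$:

```python
import timeit

def sum_loop(n):
    """O(n): add the integers 1..n one at a time."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    """O(1): Gauss's closed form n*(n+1)/2."""
    return n * (n + 1) // 2

# Both formulas agree on every input...
assert sum_loop(100) == sum_formula(100) == 5050

# ...but their running times scale completely differently.
t_loop = timeit.timeit(lambda: sum_loop(10_000), number=1_000)
t_formula = timeit.timeit(lambda: sum_formula(10_000), number=1_000)
print(f"loop: {t_loop:.4f}s  formula: {t_formula:.4f}s")
```

The exact timings depend on the interpreter and hardware (the implementation details Big-O abstracts away), but the asymptotic gap between $O(n)$ and $O(1)$ shows up on any of them once $n$ is large enough.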