Square Root Yields Different Result After Unit Conversion

This is boggling my mind.

  • The square root of 100 seconds is 10 seconds
  • The square root of 100000 milliseconds is approximately 316.23 milliseconds
  • 10 seconds does not approximately equal 316.23 milliseconds

Do I have a fundamental problem with my unit conversions or my ability to punch numbers on a calculator, or am I just not thinking through the problem with the right mindset?

The reasoning behind the question is that I made an incorrect prediction for the run time of an algorithm after I reduced its complexity. Previously, the algorithm had n operations and ran for 100 seconds, where n is the size of the input. Now, my algorithm has sqrt(n) operations. I predicted that the algorithm would run in 10 seconds, and was surprised when it ran in 0.375 seconds.

Note: For answering purposes, assume that the overhead time associated with running my program is negligible compared to the variable time it takes to run my algorithm. In other words, assume that the run-time is almost entirely dependent on the size of n, for any n of sufficient size.


BEST ANSWER

No, the square root of $100\,\mathrm s$ is $10\,\mathrm s^{1/2}$, not $10\,\mathrm s$.

One $\mathrm s^{1/2}$ equals approximately $31.623\,\mathrm{ms}^{1/2}$.

Just like a square meter is not the same unit as a meter (and a square kilometer is certainly not 1000 square meters), a $\sqrt{\text{second}}$ is not the same as a second.
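A quick numerical sketch of that bookkeeping, with plain Python floats standing in for the dimensioned quantities:

```python
import math

seconds = 100.0
millis = seconds * 1000.0     # 100 s = 100000 ms

sqrt_s = math.sqrt(seconds)   # 10, carrying units of s^(1/2)
sqrt_ms = math.sqrt(millis)   # ~316.23, carrying units of ms^(1/2)

# The two results describe the same quantity: since
# 1 s^(1/2) = sqrt(1000) ms^(1/2) ~= 31.623 ms^(1/2),
# converting the *root* units reconciles them.
assert abs(sqrt_s * math.sqrt(1000.0) - sqrt_ms) < 1e-9
```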


In the case of running time, the speedup from $n$ to $\sqrt n$ operations depends on what $n$ is; just knowing that the total comes to $100$ seconds is not in itself helpful without knowing how much time one operation takes.

Also, what you have are probably asymptotic bounds, $O(n)$ and $O(\sqrt n)$, so there are unknown constant factors involved anyway, which means you have been given far too little information to predict the running time of the new algorithm.
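Setting the asymptotic caveat aside, one can still back out what $n$ and the per-operation time must have been from the two measured runtimes, under the (hypothetical) assumption that each operation costs the same in both algorithms. A quick Python sketch:

```python
# Assuming the per-operation time t is identical for both algorithms:
#   n * t       = 100 s      (old O(n) algorithm)
#   sqrt(n) * t = 0.375 s    (new O(sqrt n) algorithm)
# Dividing the two equations gives sqrt(n) = 100 / 0.375.

old_time = 100.0    # seconds
new_time = 0.375    # seconds

sqrt_n = old_time / new_time   # ~266.67
n = sqrt_n ** 2                # ~71111 operations
t = old_time / n               # ~0.0014 s, i.e. ~1.4 ms per operation

# Sanity check: the new algorithm's predicted runtime matches the observation.
assert abs(sqrt_n * t - new_time) < 1e-9
```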

ANSWER

To expand a bit on Henning's answer: the $n$ that shows up in asymptotic analysis is generally not a dimensioned value at all, which is a good thing, because otherwise you wouldn't be able to (for instance) take its logarithm for a $\Theta(n\log n)$ algorithm like, say, quicksort; the logarithm of a dimensioned quantity isn't meaningful.

Instead, $n$ is a so-called dimensionless parameter that's (to simplify things a bit) manipulated into a value for operation count — also dimensionless. This operation count value is (again, simplifying) what's multiplied by the cost per operation to get the total runtime of your program.

To see how this affects your results, let's take your example in two distinct cases: where your $100\,\mathrm{s}$ runtime is the result of $n=100$ one-second operations, and where it's the result of $n=10000$ $10\,\mathrm{ms}$ operations. For simplicity (though this is unlikely to be the case), I'll also assume that a single operation in your $O(\sqrt{n})$ algorithm takes exactly as long as a single operation in the $O(n)$ algorithm. Then:

  • In the first case, since $n=100$ we know (as mentioned above) that an operation takes $1$ second. Going to a $\sqrt{n}$ algorithm will then result in $\sqrt{100} = 10$ operations of $1$ second each, yielding a total runtime of $10\times 1\,\mathrm{s}=10\,\mathrm{s}$.
  • In the second case, by the same logic we know that a single operation takes $100\,\mathrm{s}/10000 = 0.01\,\mathrm{s} = 10\,\mathrm{ms}$. Going to the $\sqrt{n}$ algorithm then results in $\sqrt{10000}=100$ operations, each of which takes $10\,\mathrm{ms}$, yielding a total runtime of $100\times 10\,\mathrm{ms} = 1000\,\mathrm{ms} = 1\,\mathrm{s}$.
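The two cases above can be checked with a small sketch; the `new_runtime` helper is hypothetical and simply encodes the equal-cost-per-operation assumption:

```python
import math

def new_runtime(n, old_total=100.0):
    """Runtime (in seconds) of the O(sqrt n) algorithm, assuming each
    operation costs the same as in the O(n) algorithm: old_total / n."""
    per_op = old_total / n       # seconds per operation
    return math.sqrt(n) * per_op

print(new_runtime(100))      # 10.0  (100 one-second operations)
print(new_runtime(10_000))   # 1.0   (10000 ten-millisecond operations)
```

Same old runtime, same asymptotic improvement, yet a tenfold difference in the new runtime, entirely because of the different operation counts hiding behind the $100\,\mathrm{s}$ total.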

Hopefully this helps illustrate the issue a bit — in short, don't confuse your input size with your runtime, but rather understand exactly how the latter is calculated from the former.