A quicker algorithm.


Consider an algorithm which essentially counts to a certain number then halts.

Counting Algorithm C

Given any $n \in \Bbb N$

Step 1

$x = 0$

Step 2

$\text{if } (x = n)\ \text{then halt}$

Step 3

$x = x + 1$

Step 4

$\text{go to Step 2}$

Forgive my poor notation for the algorithm; I am not very familiar with the methods and notations used in computer science, and I did not want to simply use C-style syntax or something similar.
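For concreteness, here is one way the four steps above could be sketched in code (the function name `count_to` is my own; the original post gives only the numbered steps):

```python
def count_to(n):
    """Algorithm C: count up from 0 one step at a time until reaching n."""
    x = 0            # Step 1
    while x != n:    # Step 2: halt once x equals n
        x = x + 1    # Step 3; the loop plays the role of Step 4
    return x

count_to(10)  # returns 10 after 10 increments
```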

Now I ask the question this post is really about: "Is there a quicker way to compute $C$ than the obvious one?"

My questions are these. Does this question even make sense? And if it does, is it, in a very informal way (I hope you can see what I mean), analogous to asking whether there is a way to determine if a given natural number is prime that is quicker than the obvious way?


Answer

The notion of how much time an algorithm takes should be defined to be somewhat independent of the machine; otherwise a machine that executes instructions faster would beat a slow machine running the exact same instructions. To do that we usually use Big-O notation, which you can find on Wikipedia. We also need a specific model of what instructions are allowed: typically Turing machines for purely theoretical results, or the RAM model for many real-world applications. All of this is covered in a proper computational theory course.
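One way to make this machine independence concrete is to count abstract operations instead of measuring wall-clock time. A small sketch (the instrumentation is mine, not part of the original algorithm): in the RAM model, where a comparison and an increment each cost one unit, algorithm $C$ performs $n+1$ comparisons and $n$ increments, i.e. $2n+1$ steps, which is $O(n)$.

```python
def count_to_instrumented(n):
    """Run algorithm C while tallying unit-cost operations
    (one per comparison, one per increment) in the RAM model."""
    steps = 0
    x = 0
    while True:
        steps += 1       # Step 2: one comparison
        if x == n:
            return steps
        x += 1           # Step 3: one increment
        steps += 1
```

The count $2n+1$ is the same no matter how fast the underlying hardware is, which is exactly the point of an abstract cost model.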

Just as an example: on a Turing machine it is cumbersome and slow even to increment a number stored on the tape by $1$. If the tape alphabet has at least two symbols (including the blank), then integers can be encoded in essentially binary, and a single increment or comparison takes $O(\log(n))$ steps as $n \to \infty$. Your counting algorithm $C$ performs $n$ such iterations, so it would take $O(n\log(n))$ steps on a Turing machine. In the real world, however, we use computers that are closer to RAM machines, where addition and comparison on 64-bit words always take the same amount of time. Of course this already assumes that $n$ fits in $64$ bits, so strictly speaking one cannot say anything about asymptotic time complexity there; one must choose the model as appropriate. For example, it is often said that a balanced binary tree gives an $O(\log(n))$ traversal from the root to any leaf, but that requires $O(1)$ pointer arithmetic, so it is the RAM model that is suitable rather than a Turing machine model, since a Turing machine would take linear time just to access each node.
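To illustrate the cost of incrementing on a tape, here is a sketch (my own toy model, not a full Turing machine simulation) that stores a number as a little-endian bit list and charges one unit per bit it touches while propagating the carry:

```python
def increment_cost(bits):
    """Increment a little-endian bit list in place; return the number of
    bit writes performed, mimicking a head walking along the tape."""
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0          # carry propagates past a run of 1s
        i += 1
    if i == len(bits):
        bits.append(1)       # number grew by one bit
    else:
        bits[i] = 1
    return i + 1             # bits written, including the final 1

def total_increment_cost(n):
    """Total bit writes when counting 0, 1, ..., n."""
    bits = [0]
    return sum(increment_cost(bits) for _ in range(n))
```

A single increment can cost $O(\log n)$ in the worst case (a long carry chain, e.g. $0111\ldots1 \to 1000\ldots0$), although the total over all $n$ increments is only about $2n$, since long carries are rare. The $O(n\log n)$ bound for algorithm $C$ comes from the comparisons against $n$, each of which genuinely costs $\Theta(\log n)$.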