How can an algorithm be "constant" or O(1)


I just started learning about algorithms. I've got this great book Practical Algorithms and Data Structures.

The first algorithm I wrote computes the sum of the first n numbers. I started with this simple version:

def sum_to_n(n):
    total = 0
    for i in range(n + 1):
        total += i
    return total

The time complexity was said to be O(n). Since then, I created a new one based on this premise:

1+6=7
2+5=7
3+4=7
7*3=21
(1+6)*(6/2)=21

Here it is:

def arithmetic_sum(n):
    return (1 + n) * (n / 2)
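
(One detail worth noting, not from the book: in Python 3, `/` performs true division and returns a float, so `arithmetic_sum(6)` yields `21.0` rather than `21`. An integer-exact variant can multiply first and then use floor division, since `(1 + n) * n` is always even:)

```python
def arithmetic_sum_int(n):
    # (1 + n) * n is always even, so floor division by 2 is exact
    return (1 + n) * n // 2

print(arithmetic_sum_int(6))  # 21
```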

The book then goes on to say this:

We describe sum_to_n as “linear” or O(n), and arithmetic_sum as “constant” or O(1). Hopefully you can start to see why. Irrespective of the exact times that these functions take to execute, we can spot a general trend, that the execution time for sum_to_n grows in proportion to n, whereas the execution time for arithmetic_sum remains constant. All else being equal, arithmetic_sum is the better algorithm, for this reason.

I don't understand how that could be true. Why does the execution time for arithmetic_sum remain constant? Wouldn't supplying a large enough number for n take longer? I mean, the computer has to perform this mathematical operation, so wouldn't it be "harder" for it to perform one with a larger number? I know it would take longer for me. Maybe the issue is that I don't understand how a computer does math.

There are 2 answers below.

I can add a bit more clarity:

The function arithmetic_sum(n) is called O(1) under the assumption that addition, multiplication, and division by 2 each take a constant amount of time, combined with the fact that it performs only a fixed number of such operations. If the argument n fits in a native machine word (for example, a 32-bit integer), this is effectively true, since each operation is performed directly by the hardware. (This is partly because the hardware has no knowledge of how many leading zeros each operand has, and so operates on all 32 bits regardless.)
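
A minimal sketch makes the "fixed number of operations" point concrete (the instrumented counters are my own additions): the loop's operation count grows with n, while the closed form always performs the same three arithmetic operations:

```python
def sum_to_n_counted(n):
    # counts one addition per loop iteration
    ops = 0
    total = 0
    for i in range(n + 1):
        total += i
        ops += 1
    return total, ops

def arithmetic_sum_counted(n):
    # one addition, one multiplication, one division: always 3 operations
    return (1 + n) * n // 2, 3

for n in (10, 1000, 100000):
    total, loop_ops = sum_to_n_counted(n)
    _, const_ops = arithmetic_sum_counted(n)
    print(n, loop_ops, const_ops)
```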

On the other hand, n could be an arbitrary-precision number (i.e., it could in principle have any number of digits). If so, your intuition that the function cannot be O(1) is correct: the running time then depends on the number of digits, and is dominated by the multiplication, since the other operations can be performed in time linear in the number of digits.
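
To illustrate (the example values are my own): Python's built-in int is arbitrary precision, so n can exceed any machine word, and the cost of operating on it then scales with its length in digits (or, internally, machine words) rather than staying constant:

```python
# A 101-digit integer: far larger than any 64-bit machine word.
n = 10 ** 100
# Still computes exactly, but no longer in a few word-sized hardware steps.
s = (1 + n) * n // 2
print(len(str(n)), len(str(s)))  # 101 200
```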


Wouldn't supplying a large enough number for n take longer?

Note that the computer needs to store each number in memory, and if an integer type is, say, 32 bits wide, the machine uses exactly the same amount of memory for small numbers (e.g. x = 0) and big numbers (e.g. x = 10000). For instance, in Python you can check the largest value that fits in a native machine word with

import sys
sys.maxsize  # 9223372036854775807 on a 64-bit machine

so the machine does not really care whether you're performing arithmetic operations on small or big numbers: all of them occupy the same amount of memory, and small numbers are simply padded with leading zeros in their binary representation. (Note that Python's own int type is arbitrary precision and can grow beyond sys.maxsize; the fixed-width picture applies to native machine integers.)
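
A small sketch of the fixed-width idea, using Python's struct module to encode values as 32-bit integers (the specific values are arbitrary):

```python
import struct

# Pack two very different values as 32-bit little-endian integers.
small = struct.pack("<i", 0)
big = struct.pack("<i", 10000)

# Both occupy exactly 4 bytes, regardless of magnitude.
print(len(small), len(big))  # 4 4
```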

You might also want to look at the machine's word size to see how numbers are stored in memory.