How is the phrase "$n$-bit number" related to the big $O$ notation?


When it comes to algorithms, you frequently have to evaluate problems like this:

Let $x$ be an $n$-bit integer. For each of the following questions, give your answer as a function of $n$.

Or a question like this:

[given algorithm] Assume that the subtraction takes $O(n)$ time on an $n$-bit number.

What does "let $x$ be an $n$-bit integer" really mean? Is it just the number of bits reserved for an arbitrary int variable $x$? And how does an $n$-bit number relate to big-$O$-of-$n$ notation?

There are 3 answers below.

Answer (5 votes):

It asks how the space/time of the algorithm grows with $n$. $O(n)$ means linear dependence, $O(n^2)$ quadratic dependence, and $O(e^n)$ exponential dependence.

Answer (1 vote):

$n$ is used to scale things in terms of bits. If you didn't know the size of $x$, many complexities would have to be expressed with a base-2 logarithm, since you can't tell how many bits $x$ has unless it is stated this way. Consider the question of how many bit additions have to be done to add $a$ and $b$ in base 2, even though the numbers are written in base 10. First you want to know how long each number is in base 2; call these lengths $n_a$ and $n_b$. The result is then at most the maximum of these values plus one, since even when the numbers are the same length there may be a final carry bit to account for.
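As a minimal sketch of the point above, here is how one could measure the base-2 length of the operands and check that the sum of $a$ and $b$ needs at most $\max(n_a, n_b) + 1$ bits (the helper name `bit_length` is just for illustration):

```python
def bit_length(x: int) -> int:
    """Number of bits needed to write x in base 2 (0 counted as 1 bit for simplicity)."""
    n = 0
    while x > 0:
        x >>= 1
        n += 1
    return max(n, 1)

a, b = 13, 10                       # 1101 and 1010 in base 2
n_a, n_b = bit_length(a), bit_length(b)
print(n_a, n_b)                     # 4 4
print(bit_length(a + b))            # 23 = 10111 in base 2 -> 5 bits = max(n_a, n_b) + 1
```

The carry is what makes the extra bit necessary: $13 + 10 = 23$ spills over from 4 bits into 5.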


The intention of big $O$ is to understand how, in the worst case, the complexity grows relative to the size of the input, often denoted $n$. Sorting is one of the classic problems that has been studied at great length in terms of the number of comparisons required to sort a list.

Just to give a couple of examples, first consider a BubbleSort variant: compare adjacent pairs of elements and, whenever a pair is out of order, swap it and start over at the beginning. For example, consider this list of numbers to be sorted in descending order:

2 5 3 7

Comparing 2 and 5, these aren't in the proper order so we reverse these 2 elements and start again.

5 2 3 7

Comparing 5 and 2, these are in the proper order. Next, we compare 2 and 3 which aren't in the right order and have to be reversed and then we start again.

5 3 2 7

Comparing 5 and 3, these are in the proper order. Next, we compare 3 and 2 which are also in the proper order. Next, 2 and 7 are compared and have to be reversed.

5 3 7 2

Comparing 5 and 3, these are in the right order. Next, 3 and 7 are compared and have to be reversed.

5 7 3 2

Comparing 5 and 7, these aren't in the right order and so we reverse these now.

7 5 3 2

Now all the comparisons work out and the list is sorted. Notice how many comparisons had to be done: in the worst case, when the list starts in reverse order, a quadratic number of comparisons is needed, so this is $O(n^2)$.
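The walkthrough above can be sketched as code. This is the restart-on-swap variant described here, not the textbook BubbleSort that makes repeated full passes:

```python
def bubble_sort_desc(lst):
    """Sort into descending order: scan adjacent pairs; on a swap, restart from the start."""
    lst = list(lst)                  # work on a copy
    i = 0
    while i < len(lst) - 1:
        if lst[i] < lst[i + 1]:      # out of order for a descending list
            lst[i], lst[i + 1] = lst[i + 1], lst[i]
            i = 0                    # start all over again at the beginning
        else:
            i += 1
    return lst

print(bubble_sort_desc([2, 5, 3, 7]))  # [7, 5, 3, 2]
```

Tracing this on `[2, 5, 3, 7]` reproduces exactly the intermediate lists shown above.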

In contrast, take that list and apply MergeSort:

2 5 3 7

After the initial split, sort each half (the idea here is divide and conquer):

5 2 7 3

Now, merge the lists:

7 5 3 2

MergeSort has better worst-case complexity: the divisions reduce the comparisons to $O(n \log n)$, which is asymptotically better.
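A minimal descending MergeSort, matching the split-then-merge steps shown above:

```python
def merge_sort_desc(lst):
    """Divide and conquer: split in half, recursively sort, then merge descending."""
    if len(lst) <= 1:
        return list(lst)
    mid = len(lst) // 2
    left = merge_sort_desc(lst[:mid])
    right = merge_sort_desc(lst[mid:])
    # Merge two descending lists into one descending list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] >= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort_desc([2, 5, 3, 7]))  # [7, 5, 3, 2]
```

Each level of recursion does $O(n)$ comparison work during the merges, and there are $O(\log n)$ levels, giving the $O(n \log n)$ total.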


In your example, if the number of repetitions is constant, the big $O$ stays the same. However, if the algorithm were a loop over each bit, inside a loop over each bit, inside a loop over each bit, the complexity would likely be $O(n^3)$, unless the loop iterator increments in a non-linear fashion.
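A quick sketch of that triple-nested-loop case: counting the iterations directly shows the cubic growth (the function name is just for illustration):

```python
def count_iterations(n: int) -> int:
    """Three nested loops over n bits each -> n**3 iterations, i.e. O(n^3)."""
    count = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                count += 1
    return count

print(count_iterations(8))   # 512 = 8**3
print(count_iterations(16))  # 4096 = 16**3: doubling n multiplies the work by 8
```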

Answer (2 votes):

We usually represent numbers as a finite sequence of digits. In base $2$ each digit is called a bit and has value $0$ or $1$. So we write each number $a$ as $\sum_{i=0}^{n-1} a_i 2^i$, where the $a_i$ are digits; assuming $a_{n-1}\neq0$, the number $a$ is called an $n$-bit number. Notice that if $a$ is an $n$-bit number then $2^{n-1}\leq a<2^n$, so the number of bits of $a$ is $O(\log a)$.
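These two facts, $2^{n-1}\leq a<2^n$ and $n=\lfloor\log_2 a\rfloor+1$, are easy to check numerically (using Python's built-in `int.bit_length`):

```python
import math

# An n-bit number a satisfies 2**(n-1) <= a < 2**n,
# so n = floor(log2(a)) + 1: the bit count is O(log a).
for a in [1, 5, 16, 1000]:
    n = a.bit_length()
    assert 2 ** (n - 1) <= a < 2 ** n
    assert n == math.floor(math.log2(a)) + 1
    print(a, "has", n, "bits")
```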

If an algorithm works on individual digits, then its complexity depends on the length of the given number (by length I mean the number of bits). For instance, how does the addition algorithm work? To add $a$ and $b$, you first add $a_0$ and $b_0$, write down the result, carry a $1$ if necessary, continue with the next bit, and so on. As a result you do $n$ bit additions, so adding two $n$-bit numbers has complexity $O(n)$.
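The schoolbook procedure just described can be sketched directly on bit lists (least significant bit first; the representation is an illustrative choice, not the only one):

```python
def add_bits(a_bits, b_bits):
    """Add two numbers given as lists of bits, least significant bit first.
    One pass over the bits with a carry -> O(n) for n-bit inputs."""
    n = max(len(a_bits), len(b_bits))
    result, carry = [], 0
    for i in range(n):
        ai = a_bits[i] if i < len(a_bits) else 0
        bi = b_bits[i] if i < len(b_bits) else 0
        s = ai + bi + carry
        result.append(s % 2)         # write down the result bit
        carry = s // 2               # carry the 1 if necessary
    if carry:
        result.append(carry)
    return result

# 13 = 1101 in base 2 -> [1, 0, 1, 1] LSB-first; 10 = 1010 -> [0, 1, 0, 1]
print(add_bits([1, 0, 1, 1], [0, 1, 0, 1]))  # [1, 1, 1, 0, 1] = 23
```

The single loop over the $n$ bit positions is exactly where the $O(n)$ comes from.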

So basically, saying that an algorithm takes time $O(n)$ on an $n$-bit number means it is linear with respect to the length of the given number, and equivalently that it takes time $O(\log x)$ with respect to the value $x$ itself.