Algorithm for adding n 1-bit numbers


Suppose that adding two numbers, where the first has $a$ bits and the second has $b$ bits, can be done in $O(\max\{a,b\})$ time. We want to add $n$ one-bit numbers (i.e., add $n$ values, each $0$ or $1$). Clearly the cost varies from one input to another. What are the best and worst cases of this algorithm?

I ran into this problem on an old quiz from a computer science course. Any nice hint or idea would be highly appreciated.

We have two solutions:

1- Best case and worst case both in O(n)

2- Best case in O(n) and worst case in O(n lg n)

Any idea or pseudocode for (1) and (2)?


BEST ANSWER

Note: Here we take the unary representation of a number $n$ to be $n$ copies of $1$. So, $5$ in unary would be $11111$.

When parsing the input, ignore the numbers that are zero. What remains is a unary representation of the sum; call it $n$. We want to convert it to binary.

The idea is to create $\log n$ numbers $(n, \lfloor n/2 \rfloor, \lfloor n/4 \rfloor, \ldots, 1)$ in unary representation. The total complexity to compute and store these numbers in unary is $n \sum_i (1/2)^i \le 2n = O(n)$. The binary representation of $n$ is then given by the parities of these numbers in reverse order (that is, the parity of the unary representation of $n$ is the least significant digit, and so on). For example, for $n = 5$ the unary numbers are $11111$, $11$, $1$; their parities $1, 0, 1$, read from last to first, give $101$, which is $5$ in binary.

To create the unary representation of $\lfloor n/2^i \rfloor$ from that of $\lfloor n/2^{i-1} \rfloor$, we simply sweep over the latter, and append a $1$ for every pair of elements we sweep.

Thus, it works in $O(n)$. In pseudocode:

Let unary = [1] * n (that is, an array consisting of n 1s)
binary = []
while unary is not empty:
  new_unary = []

  bit_counter = 0
  for element in unary:
    if bit_counter == 1:
      add [1] to new_unary
      bit_counter = 0
    else:
      bit_counter = 1

  add [bit_counter] to binary
  unary = new_unary

reverse binary, and return it.
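The pseudocode above translates directly into a runnable sketch. The function name `sum_of_bits` and the list-based representation are my own choices, not part of the original answer:

```python
def sum_of_bits(bits):
    """Sum n one-bit numbers in O(n) total work via the unary-halving
    idea above. `bits` is a list of 0s and 1s; returns the sum as a
    list of binary digits, most significant first."""
    # Drop the zeroes; what remains is a unary representation of the sum.
    unary = [1] * bits.count(1)
    binary = []  # collected least-significant bit first
    while unary:
        new_unary = []
        bit_counter = 0
        for _ in unary:
            if bit_counter == 1:
                new_unary.append(1)  # one 1 per completed pair: halving
                bit_counter = 0
            else:
                bit_counter = 1
        binary.append(bit_counter)  # parity of the current unary length
        unary = new_unary
    return binary[::-1] or [0]  # reverse to most-significant-first
```

Each pass halves the unary list, so the total work is $n + n/2 + n/4 + \ldots \le 2n = O(n)$, independent of the input permutation.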

The second algorithm is straightforward: start with a sum equal to zero and just keep adding the numbers to it as you process them. Since the maximum possible value of the sum is $n$, and this can be represented in $1+\lfloor\lg n\rfloor$ bits, each operation is going to take $O(\lg n)$, bringing the total complexity to $O(n\lg n)$. In pseudocode:

input: values
sum = 0
for each number x in values:
    sum := sum + x
output: sum
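To make the cost model visible, here is a Python sketch of this running-sum approach in which numbers are kept as little-endian bit lists, so that each addition explicitly costs $O(\max\{a,b\})$ bit steps. The helper names `add_bitlists` and `naive_sum` are my own:

```python
def add_bitlists(a, b):
    """Ripple-carry addition of two little-endian bit lists.
    Cost is O(max(len(a), len(b))), matching the question's model."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        x = a[i] if i < len(a) else 0
        y = b[i] if i < len(b) else 0
        total = x + y + carry
        result.append(total % 2)
        carry = total // 2
    if carry:
        result.append(carry)
    return result

def naive_sum(bits):
    """Running-sum algorithm: n additions, each on a running total up
    to about lg n bits wide, hence O(n lg n) bit operations overall."""
    total = [0]
    for b in bits:
        total = add_bitlists(total, [b])
    return total
```

The worst case is an input of all ones, which keeps the running total as wide as possible on every step.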

The better time complexity can be reached using the divide-and-conquer trick. One starts by adding pairs of consecutive numbers, producing roughly $\frac{n}{2}$ pair-sums. Each of these additions has cost $1$, since we're only adding one-bit numbers. Then the same process can be repeated to produce $\frac{n}{4}$ numbers, with each operation costing at most $2$, since the numbers will be between zero and two. Another iteration yields a list of roughly $\frac{n}{8}$ numbers at cost $3$ each, and so on, until we are left with a single number, which is the result. The total cost of the additions is thus roughly $\left(1\times \frac{n}{2}\right)+\left(2\times \frac{n}{4}\right)+\left(3\times \frac{n}{8}\right)+\ldots \leq 2n$.
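The $2n$ bound follows from the standard series identity $\sum_{i\ge 1} i/2^i = 2$: level $i$ performs roughly $n/2^i$ additions costing $i$ each, so

$$\sum_{i\ge 1} i\cdot\frac{n}{2^{i}} \;=\; n\sum_{i\ge 1}\frac{i}{2^{i}} \;=\; 2n.$$

The finite sum appearing in the algorithm is at most this infinite one.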

Of course, the devil can be in the details, and we didn't say what happens if there is an odd number of elements to be added. A simple trick is to append one extra element whose value doesn't change the result: a zero. As it turns out, this doesn't have too bad an effect on the total time complexity; in fact, we could pad the whole sequence with zeros until its length becomes a power of $2$ (which would make all the halving steps work perfectly, without any remainders) and the total time would still stay $O(n)$, since the length would at worst double and $O(n)$ easily hides that multiplicative constant.

In pseudocode:

input: values
while |values| > 1 do
    if |values| is odd
        append 0 to values
    create new list new_values
    for each consecutive pair (a,b) in values:
        append (a+b) to new_values
    values := new_values
output: first (and only) element of values
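The pairing scheme above can be sketched in Python. Here I use machine integers for the values but explicitly charge each addition $\max\{a,b\}$ (at least $1$) in "bit cost", so the returned total can be checked against the $2n$ bound; the function name and cost accounting are my own additions:

```python
def pairwise_sum(bits):
    """Divide-and-conquer summation of one-bit numbers.
    Returns (sum, cost), where adding an a-bit and a b-bit number is
    charged max(a, b, 1), following the question's cost model."""
    values = list(bits)
    cost = 0
    while len(values) > 1:
        if len(values) % 2 == 1:
            values.append(0)  # pad so the pairing always comes out even
        new_values = []
        for a, b in zip(values[0::2], values[1::2]):
            cost += max(a.bit_length(), b.bit_length(), 1)
            new_values.append(a + b)
        values = new_values
    return (values[0] if values else 0), cost
```

On an input of length $n$ the padding at most doubles the list, so the accumulated cost stays within the $O(n)$ bound derived above.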