Suppose that adding two numbers, where the first has $a$ bits and the second has $b$ bits, can be done in $O(\max\{a,b\})$ time. We want to add $n$ numbers, each a single bit (i.e., add $n$ values that are 0 or 1). Clearly the cost varies from one input permutation to another. What are the best and worst cases of this algorithm?
I ran into this problem as an old quiz in a computer science course. Any nice hint or idea would be highly appreciated.
We have two candidate answers:

1- Best case and worst case both in $O(n)$

2- Best case in $O(n)$ and worst case in $O(n \lg n)$

Any idea or pseudocode for (1) and (2)?
Note: Here we let the unary representation of a number $n$ to be $n$ copies of 1. So, $5$ in unary would be $11111$.
When parsing the input, ignore the numbers that are zero. We are then left with a unary representation of the sum $n$ (the count of 1s), and we want to convert it to binary.
The idea is to create $\log n$ numbers $(n, \lfloor n/2 \rfloor, \lfloor n/4 \rfloor, \ldots, 1)$ in unary representation. The total complexity to compute and store these numbers in unary is $n \sum_i (1/2)^i \le 2n = O(n)$. The binary representation of $n$ is then the parity of these numbers in reverse order (that is, the parity of the unary representation of $n$ is the least significant digit, and so on).
To create a unary representation of $n/2^i$ from that of $n/2^{i-1}$, we simply sweep over the unary representation of $n/2^{i-1}$, and append a $1$ for every pair of elements we sweep.
Thus, it works in $O(n)$. In pseudocode:
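Here is a minimal Python sketch of the idea above (the function name and list-based unary encoding are my own choices, not part of the original statement): repeatedly halve the unary string, recording the parity at each step as the next binary digit.

```python
def unary_to_binary(ones):
    """Convert a unary representation (a list of 1s) to a binary digit list.

    ones: e.g. [1, 1, 1, 1, 1] represents n = 5; input zeros are assumed
    to have been discarded already while parsing.
    """
    bits = []            # collected least-significant-bit first
    current = list(ones)
    while current:
        bits.append(len(current) % 2)     # parity = next binary digit
        # Halve: keep one 1 for every full pair swept over.
        current = [1] * (len(current) // 2)
    return bits[::-1]    # reverse so the most significant bit comes first

print(unary_to_binary([1] * 5))  # prints [1, 0, 1], the binary of 5
```

Each halving pass touches half as many elements as the previous one, so the total work is $n + n/2 + n/4 + \cdots \le 2n = O(n)$, matching the analysis above.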