Trying to get my head around the fundamentals behind proofs of asymptotic complexity in comp. sci. One quantity that appears no matter what one does is (n - 1). Here's a salient quote from *Algorithms* by S. Dasgupta, C.H. Papadimitriou, and U.V. Vazirani, page 24:
        1 1 0 1
      × 1 0 1 1
      ---------
        1 1 0 1        (1101 times 1)
      1 1 0 1          (1101 times 1, shifted once)
    0 0 0 0            (1101 times 0, shifted twice)
+ 1 1 0 1              (1101 times 1, shifted thrice)
---------------
1 0 0 0 1 1 1 1        (binary 143)
If x and y (1101 and 1011 here) are both n bits, then there are n intermediate rows, with lengths of up to 2n bits (taking the shifting into account). The total time taken to add up these rows, doing two numbers at a time, is O(n) + O(n) + ... + O(n) -- which happens (n - 1) times -- quadratic in the size of the inputs.
Where does n - 1 come from? And are they not playing fast and loose with at least one of the quantities? Those rows are certainly no longer than 2n bits, but by the same token they're shorter than kn bits for any k greater than 1. Also, can you really just keep adding two numbers at a time once the running sum grows past the length of a single row? Could someone possibly clarify, at least the n - 1 part?
When you add $n$ numbers (rows) together, there are $n-1$ plus signs: the first addition combines two rows, and each subsequent addition folds in one more row. The running total never exceeds $2n$ bits (it is bounded by the final product), so each of the $n-1$ additions takes $O(n)$ time, giving $O(n^2)$ overall.
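To see the $n-1$ concretely, here is a minimal sketch (not the book's pseudocode; the function name and structure are my own) that builds the $n$ intermediate rows and sums them two at a time, counting how many additions are performed:

```python
def multiply(x: int, y: int, n: int):
    """Grade-school multiplication of two n-bit numbers.

    Builds the n intermediate rows (x shifted left by each bit
    position of y, or 0 where that bit of y is 0), then folds them
    into a running total two at a time, counting the additions.
    """
    rows = [(x << i) if (y >> i) & 1 else 0 for i in range(n)]
    total, additions = rows[0], 0
    for row in rows[1:]:      # loops n - 1 times
        total += row          # each addition handles O(n)-bit numbers
        additions += 1
    return total, additions

# 1101 (13) times 1011 (11), both 4-bit:
print(multiply(0b1101, 0b1011, 4))   # -> (143, 3)
```

With $n = 4$ rows the loop runs exactly $4 - 1 = 3$ times, matching the three plus signs you would write in the worked example above.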