Doubt about the complexity of multiplying two binary numbers.


In one of my lectures, it was stated that when we multiply two binary numbers, say $n$ and $m$, where $n$ has $k$ bits and $m$ has $l$ bits, the multiplication takes at most $kl$ bit operations.

I don't believe this claim. When we multiply the two binary numbers $n$ and $m$, we are essentially performing at most $l-1$ additions (if we multiply $n \times m$ in the schoolbook fashion, we get exactly $l$ partial-product lines, which correspond to $l-1$ additions) of numbers that never have more than $k+(l-1)$ bits: the first line contains a number with $k$ bits (or $0$), and so on, until the last line, which may contain a number with $k+(l-1)$ bits. Therefore, in the worst case, we are essentially performing $l-1$ additions of numbers with $k+(l-1)$ bits. Since adding two numbers of $k+(l-1)$ bits takes at most $k+l$ bit operations, it follows that our multiplication takes at most $(l-1)\times(k+l)$ bit operations.
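To make the comparison concrete, here is a small sketch (my own, not from the lecture) of schoolbook shift-and-add multiplication that counts bit operations as it goes. The counting convention is an assumption: I charge one bit operation per full-adder step, so adding two numbers of at most $b$ bits costs at most $b+1$ operations, matching the reasoning above.

```python
def mul_count(n: int, m: int):
    """Multiply n by m via schoolbook shift-and-add,
    returning (product, bit_ops), where bit_ops charges
    b+1 operations for each addition of b-bit numbers."""
    bit_ops = 0
    product = 0
    l = m.bit_length()
    for i in range(l):
        if (m >> i) & 1:
            # partial-product line i: n shifted left by i
            addend = n << i
            # adding two numbers of at most b bits costs at most b+1 steps
            bit_ops += max(product.bit_length(), addend.bit_length()) + 1
            product += addend
    return product, bit_ops


# example: n = 13 (k = 4 bits), m = 11 (l = 4 bits)
p, ops = mul_count(13, 11)
print(p, ops)  # 143 19 -- more than kl = 16, but within (l-1)(k+l) = 24
```

Under this convention the example already exceeds $kl = 16$ while staying within $(l-1)(k+l) = 24$, which is exactly the discrepancy the question is about; of course, the lecture may be counting bit operations differently.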

This doesn't look like $kl$ in every possible scenario.

Where am I going wrong here?