Way of simplifying binary multiplication


Is there a way to simplify multiplication of binary numbers, regardless of the number of digits? Or do we always have to fall back on base-10 multiplication? Since computers perform multiplication, there should be a simple way to do it directly in binary, right?

Edit: So, I mean like $11 \times 10$, $111 \times 100$ etc.


BEST ANSWER

You can set up long multiplication just like you learned in base $10$. Say we want to multiply $1101_2 \times 101_2 = 13 \times 5 = 65$. We can just do:$$\begin{array}{r} 1101 \\ \times\ 101 \\ \hline 1101 \\ 0\phantom{0} \\ 1101\phantom{00} \\ \hline 1000001 \end{array}$$

Both the partial products (each is just a shifted copy of the top number, or a row of zeros) and the final addition are rather easier than in base $10$.
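The shift-and-add scheme above translates directly into code. Here is a minimal sketch in Python (the function name is my own choice, not from the answer): for each $1$ bit of the second factor, add a suitably shifted copy of the first factor.

```python
def binary_multiply(a: int, b: int) -> int:
    """Schoolbook binary multiplication: for every 1-digit of b,
    add a copy of a shifted into that digit's position."""
    result = 0
    shift = 0
    while b:
        if b & 1:                # this binary digit of b is 1
            result += a << shift  # partial product: a, shifted left
        b >>= 1                  # move on to the next digit of b
        shift += 1
    return result
```

Note that no single-digit multiplications are ever performed; the only operations are shifts and additions, which is exactly why the binary tableau is so easy to fill in by hand.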


The answer depends quite a bit on what you mean by simplify. As people have noted in comments, the same tabular scheme you used in school works well in binary, except that none of the single-digit-by-long-number multiplications actually have to be performed, because the products are trivial: each is either the full number (suitably shifted) or zero (in which case it can simply be ignored). In fact, there's a version of this binary-style multiplication that works directly on ordinary decimal numbers; it goes by numerous names but is perhaps best known as Russian Peasant Multiplication. It repeatedly halves one number and doubles the other, then adds up the doubled values that sit next to odd halves.
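The halve-and-double procedure can be sketched in a few lines of Python (the function name is illustrative). Whenever the number being halved is odd, the current doubled value contributes to the total; this is just binary multiplication in disguise, since the odd steps pick out the $1$ bits of the first factor.

```python
def peasant_multiply(a: int, b: int) -> int:
    """Russian Peasant Multiplication: halve a, double b,
    and accumulate b whenever a is odd."""
    total = 0
    while a > 0:
        if a % 2 == 1:   # a is odd: this row contributes
            total += b
        a //= 2          # halve (discarding any remainder)
        b *= 2           # double
    return total
```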

On the other hand, these methods - just like standard base-10 multiplication - still take $O(n^2)$ single-digit operations to multiply two $n$-digit numbers together. In that sense, while the operations are being made simpler (at the cost of there being more of them - $n$-digit numbers in decimal correspond roughly to $3n$-digit numbers in binary), the order of growth still goes up quadratically as the number of digits gets bigger; if it takes 10 milliseconds to multiply thousand-digit numbers, it will take roughly 40 milliseconds to multiply two-thousand-digit numbers. In this sense, the binary methods aren't really simpler at all.
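The quadratic scaling above is easy to see by counting operations: the schoolbook method pairs each of the $n$ digits of one factor with each of the $n$ digits of the other. A toy illustration (the function is purely illustrative, not a benchmark):

```python
def schoolbook_ops(n_digits: int) -> int:
    """Single-digit operations in schoolbook multiplication of two
    n-digit numbers: every digit of one factor meets every digit
    of the other."""
    return n_digits ** 2

# Doubling the number of digits quadruples the work,
# hence 10 ms for 1000 digits suggests ~40 ms for 2000 digits.
ratio = schoolbook_ops(2000) / schoolbook_ops(1000)
```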

So are there faster methods? There are, but they're much more complicated. The simplest(!) of the bunch works by noting that $(aX+b)(cX+d) = acX^2+(ad+bc)X+bd$; this looks like it takes four multiplications, but in fact the computation can be done with just three, since $ad+bc = (a+b)(c+d)-ac-bd$, and we've already computed the last two products. If we're multiplying $2n$-digit numbers, then we can take $X=2^n$, break our numbers in half in the middle, and note (since the additions come almost 'for free' by comparison) that multiplying $2n$-digit numbers takes roughly $3$ times as long as multiplying $n$-digit numbers with this method; by applying the same algorithm recursively we get a method that takes roughly $3^k$ operations to multiply numbers of length $2^k$ (as opposed to the $\left(2^k\right)^2=4^k$ that the naive method uses), or in other words $n^{\log_23} \approx n^{1.6}$ operations to multiply numbers of length $n$.

This method has a fair bit of overhead, though - several more additions have to be done - so it's only a practical alternative for multiplying numbers that are hundreds or thousands of digits long. (For more details, see http://en.wikipedia.org/wiki/Karatsuba_algorithm )
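The three-multiplication trick described above can be sketched recursively in Python. This is a bare-bones illustration of the technique (Karatsuba's algorithm), not a tuned implementation; the base-case threshold of $16$ is an arbitrary choice.

```python
def karatsuba(x: int, y: int) -> int:
    """Karatsuba multiplication: three recursive products
    instead of four, via (a+b)(c+d) - ac - bd = ad + bc."""
    if x < 16 or y < 16:                 # small base case: multiply directly
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2  # split point, X = 2**n
    a, b = x >> n, x & ((1 << n) - 1)    # x = a*2**n + b
    c, d = y >> n, y & ((1 << n) - 1)    # y = c*2**n + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    ad_bc = karatsuba(a + b, c + d) - ac - bd   # the third product
    return (ac << (2 * n)) + (ad_bc << n) + bd
```

Note the single recursive call on $(a+b)$ and $(c+d)$: the extra additions and subtractions are exactly the overhead the paragraph mentions, which is why this only pays off for very long numbers.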

There are other methods that improve on this idea even further, all the way down to $O(n\log n\log \log n)$ time - in other words, very close to linear growth. But whether it's possible to multiply numbers in linear time (on a 'standard' machine model) is an open question, so right now it's fair to say that no one knows how fast we can actually multiply!