Can any characteristic property give a solid account of why multiplication is a harder computational operation than addition?


For humans, in general, it's far easier to do an addition than a multiplication. One might argue this is just an effect of the particular way brains are wired.

But from what I find, multiplication is also more expensive on computers, where one could certainly arrange chips in an arbitrary way, or at least without any pre-established biological constraint.

But from a group theory point of view, addition is not more fundamental than multiplication, is it?

Are there some characteristics, like associativity or commutativity that explain that difference in terms of computational complexity?

The answer might of course rely on other mathematical fields, like topology, or even be related to some physical constraint that holds for both brains and common CPUs but is not strictly mathematical.

There are 5 answers below.

Answer (3 votes):

Each digit (in whatever base is used) of one factor interacts with (i.e., multiplies) every digit of the other factor. Therefore, if each factor has $m$ digits, this requires $m^2$ interactions.

For addition, by contrast, each digit interacts with only one digit of the other term (plus a carry).
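The digit-count argument above can be made concrete with a small sketch (my own illustration, counting digit-level "interactions" in the grade-school algorithms for two $m$-digit base-10 numbers; function names are mine):

```python
def add_digits(a_digits, b_digits):
    """Schoolbook addition, least-significant digit first.
    Each position touches one digit pair plus a carry: m interactions."""
    result, carry, interactions = [], 0, 0
    for a, b in zip(a_digits, b_digits):
        interactions += 1
        carry, d = divmod(a + b + carry, 10)
        result.append(d)
    if carry:
        result.append(carry)
    return result, interactions

def mul_digits(a_digits, b_digits):
    """Schoolbook multiplication: every digit of one factor multiplies
    every digit of the other, so m * m = m^2 interactions."""
    m = len(a_digits)
    partial = [0] * (2 * m)
    interactions = 0
    for i, a in enumerate(a_digits):
        for j, b in enumerate(b_digits):
            interactions += 1
            partial[i + j] += a * b
    # propagate carries through the partial sums
    for k in range(len(partial) - 1):
        partial[k + 1] += partial[k] // 10
        partial[k] %= 10
    return partial, interactions
```

Running both on the 3-digit pair $125$ and $081$ gives 3 interactions for the sum and 9 for the product, matching $m$ versus $m^2$.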

Answer (11 votes):

It's not entirely true that multiplication is harder than addition.

When we want to write numbers, we usually use a writing system that's based on addition: a notation like "$125$" means "$100 + 20 + 5$," and likewise, "$81$" means "$80 + 1$." If we want to add two numbers that are written using this system, then we're basically calculating a sum of two sums, which, thanks to associativity and commutativity, is pretty straightforward. Multiplication, on the other hand, means calculating a product of two sums, and that's more complex; it involves using the distributive property, which gets cumbersome for large numbers.

However, this writing system is not the only possible writing system! We could write our numbers as, for example, $5 \cdot 5 \cdot 5$ and $3 \cdot 3 \cdot 3 \cdot 3$. With this system, multiplication is very easy: you just combine the numbers, giving $3 \cdot 3 \cdot 3 \cdot 3 \cdot 5 \cdot 5 \cdot 5$. On the other hand, in this writing system, addition is extremely difficult.
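The trade-off described above can be sketched in a few lines (my own illustration; the helper names are mine, not from the answer). In a factor-list representation, multiplication is concatenation, while addition forces you to evaluate both products and re-factor the result:

```python
from math import prod

def multiply(x_factors, y_factors):
    # Multiplication is just concatenation of the factor lists.
    return sorted(x_factors + y_factors)

def factorize(n):
    # Trial division -- the expensive step that addition forces on us.
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def add(x_factors, y_factors):
    # Addition must leave the representation entirely: evaluate both
    # products as integers, add, then (expensively) re-factor.
    return factorize(prod(x_factors) + prod(y_factors))
```

For example, `multiply([5, 5, 5], [3, 3, 3, 3])` is immediate, whereas `add([5, 5, 5], [3, 3, 3, 3])` has to factor $206 = 2 \cdot 103$ from scratch.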

So, in theory, whether addition or multiplication is harder depends on what number writing system you're using. In practice, the standard system is overwhelmingly popular, so you'll usually find addition easier than multiplication.

Answer (1 vote):

One way to view your question is through the lens of computational complexity: why does the computational complexity of addition appear to be less than the computational complexity of multiplication? In particular, why can two $n$-bit integers be added with $\Theta(n)$ time complexity, but multiplication appears to require more than $\Theta(n)$ time complexity, as far as we can currently tell?

As it turns out, we don't know whether multiplication is actually significantly harder, at least by this metric. We do not have any proof that multiplication cannot be done with $\Theta(n)$ time complexity. It is known that multiplication can be done in $O(n \log n)$ time complexity -- which is asymptotically not that much slower than addition. Moreover, there is no known proof that multiplication requires $\Omega(n \log n)$ time, or even a proof that it requires $\omega(n)$ time.
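One concrete illustration of why the schoolbook $\Theta(n^2)$ bound is not the last word is Karatsuba's algorithm, the first to beat $O(n^2)$; the answer above only cites asymptotic bounds, so this sketch is my own addition. It replaces four half-size multiplications with three, giving $O(n^{\log_2 3}) \approx O(n^{1.585})$:

```python
def karatsuba(x, y):
    """Multiply non-negative integers with three recursive half-size
    multiplications instead of four (Karatsuba's trick)."""
    if x < 10 or y < 10:                  # base case: single digits
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    shift = 10 ** m
    x_hi, x_lo = divmod(x, shift)
    y_hi, y_lo = divmod(y, shift)
    hi = karatsuba(x_hi, y_hi)            # 1st recursive multiplication
    lo = karatsuba(x_lo, y_lo)            # 2nd
    # (x_hi + x_lo)(y_hi + y_lo) - hi - lo = x_hi*y_lo + x_lo*y_hi
    mid = karatsuba(x_hi + x_lo, y_hi + y_lo) - hi - lo   # 3rd
    return hi * shift * shift + mid * shift + lo
```

The $O(n \log n)$ result mentioned above (Harvey and van der Hoeven's bound) pushes this idea much further with FFT-based methods.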

Where you can read more: https://cs.stackexchange.com/q/77703/755, https://cs.stackexchange.com/q/126725/755.

Answer (0 votes):

Since you mentioned computers: computers work "natively" with integers as $n$-bit binary words, such as 64-bit values on a 64-bit processor. While the result of an addition fits in $n+1$ bits (one word plus a carry bit, like the carry flag), a multiplication result requires up to $2n$ bits, i.e., two words to store. Binary arithmetic is simply what is easiest to implement in silicon; engineers also experimented with decimal digits (what we normally use), binary-coded decimal, and even ternary. Old processors didn't have the transistors to spare for hardware multiplication, but nowadays a large area of silicon is dedicated to fast multipliers because multiplication is so important.
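The bit-width claim is easy to check directly (a quick verification of my own, using Python's arbitrary-precision integers):

```python
# For n-bit unsigned operands, the sum needs at most n+1 bits,
# while the product can need the full 2n bits.
n = 64
m = (1 << n) - 1                         # largest n-bit value, all ones
assert (m + m).bit_length() == n + 1     # sum: one extra carry bit
assert (m * m).bit_length() == 2 * n     # product: two full words
```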

If you work with factoring large numbers, such as in the quadratic sieve, it is worth storing a number $x$ as a vector $[e_1, e_2, \cdots]$ where $x = 2^{e_1} 3^{e_2} \cdots p_k ^{e_k} $ is the prime factorization handling primes up to $p_k$. The vector can be arbitrarily long for arbitrarily large $x$. Then multiplication of two numbers is simply elementwise addition of the lists, and in fact you can check if a number is square-free just by seeing if all the exponents are less than 2.
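The exponent-vector representation above can be sketched as follows (my own illustration of the idea; names and the choice $p_k = 13$ are mine):

```python
PRIMES = [2, 3, 5, 7, 11, 13]   # primes up to p_k = 13, for illustration

def to_vector(x):
    """Exponents [e1, e2, ...] of x over PRIMES; x must factor over them."""
    exps = []
    for p in PRIMES:
        e = 0
        while x % p == 0:
            x //= p
            e += 1
        exps.append(e)
    assert x == 1, "x has a prime factor larger than p_k"
    return exps

def mul_vec(u, v):
    # Multiplication is elementwise addition of exponent vectors.
    return [a + b for a, b in zip(u, v)]

def is_squarefree(u):
    # Square-free iff every exponent is less than 2.
    return all(e < 2 for e in u)
```

For instance, $12 \cdot 35 = 420$ becomes `mul_vec(to_vector(12), to_vector(35)) == to_vector(420)`, with no multiplication performed on the vectors at all.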

Another interesting representation is factorial base, where the digit places, starting from the least significant, have place values $0!, 1!, 2!, \dots$ If you extend the system with fractional place values $1/0!, 1/1!, 1/2!, \dots$ you can represent certain constants that have nice Taylor series in a nice way; for example, the fractional part of $e$ is $e - 2 = 0.0111\dots$, since $e = 2 + 1/2! + 1/3! + \cdots$
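The factorial-base claim can be verified numerically (my own check; with fractional place values $1/0!, 1/1!, 1/2!, \dots$ the digit string $0, 0, 1, 1, 1, \dots$ sums to $e - 2$):

```python
import math

# Digits for the places 1/0!, 1/1!, 1/2!, ... -- zeros then all ones.
digits = [0, 0] + [1] * 18
value = sum(d / math.factorial(k) for k, d in enumerate(digits))

# The truncated sum 1/2! + 1/3! + ... + 1/19! agrees with e - 2
# to within the tail, which is below 1/20! ~ 4e-19.
assert math.isclose(value + 2, math.e)
```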

Answer (1 vote):

A Finite State Machine can compute addition, but it cannot compute multiplication. See Computation: Finite and Infinite Machines, by Marvin Minsky. (Section 2.3 example 4 for addition, and Theorem 2.6 for multiplication.)
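Minsky's point about addition can be sketched as a tiny machine (my own illustration, not his exact construction): reading pairs of bits least-significant first, the only state an adder needs to remember is the single carry bit. No analogous fixed-memory machine suffices for multiplication, whose intermediate carries can grow with the input length.

```python
def fsm_add(a_bits, b_bits):
    """Finite-state binary adder: bits are fed least-significant first,
    and the machine's entire state is the carry bit (0 or 1)."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):   # one state transition per bit pair
        total = a + b + carry
        out.append(total % 2)          # output the sum bit
        carry = total // 2             # next state
    if carry:
        out.append(carry)
    return out
```

For example, $5 + 6 = 11$: feeding $101_2$ and $110_2$ as `[1, 0, 1]` and `[0, 1, 1]` yields `[1, 1, 0, 1]`, i.e. $1011_2$.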