How is addition different from multiplication?


Is there a fundamental difference in the things we call multiplication and those we call addition?

In a field, both binary operations obey exactly the same rules (commutativity, associativity, identity element, and inverse elements [though for multiplication, every element except $0$ has an inverse]). In a ring, some of the multiplicative rules of a field are relaxed, but it seems we could just as easily have relaxed the additive rules.
It seems that even the distributive law could be defined so that addition distributes over multiplication (the opposite of the usual convention), and thus it is a distinguishing property -- when it even applies -- only because we've decided it should be.

Other than the fact that often we require both operations on whatever set of "numbers" we're considering, is there some property of addition that is never shared by multiplication (and vice versa), no matter which generalization of each we choose? That is, can we define the necessary and sufficient conditions for a binary operation to be called "addition" or "multiplication"?


There are 5 best solutions below

On BEST ANSWER

Claude Shannon's master's thesis, a seminal contribution to Boolean algebra and electrical engineering, used the notation of addition and multiplication for the two operations that we now think of as AND (multiplication) and OR (addition), applied to the elements 0 and 1. In this case, not only does multiplication distribute over addition, $x(y+z)=xy+xz$, but addition also distributes over multiplication, $x+yz=(x+y)(x+z)$. (These are Shannon's equations 3a and 3b.)

Boolean algebra is completely symmetric in the two operations -- it doesn't matter which one you call addition and which one you call multiplication.
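This symmetry can be checked exhaustively; here is a minimal Python sketch (not from the original answer) verifying both distributive laws over the elements $\{0, 1\}$:

```python
# Exhaustively verify both distributive laws on the Boolean algebra {0, 1},
# with multiplication as AND and addition as OR (Shannon's convention).
AND = lambda x, y: x & y
OR = lambda x, y: x | y

for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            # multiplication distributes over addition: x(y+z) = xy + xz
            assert AND(x, OR(y, z)) == OR(AND(x, y), AND(x, z))
            # addition distributes over multiplication: x + yz = (x+y)(x+z)
            assert OR(x, AND(y, z)) == AND(OR(x, y), OR(x, z))
print("both distributive laws hold on {0, 1}")
```

Swapping which operation we call "addition" and which "multiplication" leaves both assertions intact, which is exactly the symmetry claimed above.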

The usage of the terms "addition" and "multiplication" is like many issues of notation: Authors use whatever seems most natural to them, and as long as it's defined clearly, readers will deal with it.

On

Sometimes multiplication isn't commutative (e.g. matrix multiplication), but it's a stretch to call something addition if it isn't commutative.
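For a concrete instance of the matrix example, here is a short sketch (illustrative, not from the original answer) with two $2\times 2$ matrices that fail to commute:

```python
# Two 2x2 matrices that do not commute under matrix multiplication.
def matmul(A, B):
    """Multiply two 2x2 matrices represented as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]

print(matmul(A, B))  # [[2, 1], [1, 1]]
print(matmul(B, A))  # [[1, 1], [1, 2]]
```

Since `matmul(A, B) != matmul(B, A)`, this "multiplication" is genuinely order-sensitive in a way we never tolerate for something called addition.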

Edit: Below, my comment that I don't think ordinal addition should be called addition seems to have been misinterpreted; my apologies for the confusion. I meant no offense, and in particular I didn't mean to imply that ordinal addition is not interesting or not worthy of study. I just don't think it should be called addition, in the same way that I don't think the free product should be called a product.

One basic intuitive model for where addition comes from is that it abstracts the properties of the coproduct in some category; for example, addition of natural numbers corresponds to the coproduct in $\text{FinSet}$ or even to the coproduct in $\text{FinVect}$. A distinguishing feature of the coproduct is that it treats its arguments symmetrically, and is in particular commutative, but that's just a way of stating a more fundamental property, which is that the coproduct, like any commutative and associative operation, takes as input a multiset of operands rather than an ordered list. Ordinal addition, of course, doesn't have this property; in particular it is not the coproduct in the category of ordinals, and it treats its inputs asymmetrically.

There's reason to believe that category theory has a special place in its heart for commutative and associative operations; see, for example, this blog post. I think it's valuable to use additive notation and terminology to refer to this cluster of ideas - e.g. when discussing additive categories and so forth - and that ordinal addition genuinely belongs to a different cluster of ideas which doesn't have a good name that I'm aware of.

On

It is very standard to use additive notation only when the operation is commutative, whereas multiplicative notation may also denote noncommutative operations, as is standard in group theory. The multiplication of a ring or an algebra is often noncommutative in important cases (unless you restrict to commutative rings and algebras, of course), e.g. the canonical commutation relations of quantum mechanics or the Weyl algebra. No one uses additive notation for a noncommutative operation, which suggests that we always consider addition to be commutative.

On

The property that distinguishes addition from multiplication is the distributive law. This is part of the convention of calling the operations addition and multiplication: if the distributive law didn't hold, we wouldn't call the operations by those names.

The fact that multiplication distributes over addition forces $0\cdot a = 0$ for all $a$, where $0$ is the additive identity. There is no analogous constraint on addition. This is one property that falls out of imposing the distributive law.
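To fill in the step, the identity follows in two lines from the distributive law together with additive inverses:
$$0\cdot a = (0+0)\cdot a = 0\cdot a + 0\cdot a,$$
and adding $-(0\cdot a)$ to both sides gives $0 = 0\cdot a$.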

Another necessary property: if there is a multiplicative identity $1$ and every element has an additive inverse, then addition must be commutative. To see this, expand $(1+1)(a+b)$ in two ways using the distributive law: $$\begin{align}(1+1)(a+b) &= 1(a+b) + 1(a+b) = a+b+a+b,\\ (1+1)(a+b) &= (1+1)a + (1+1)b = a+a+b+b.\end{align}$$ Equating the two results and cancelling $a$ on the left and $b$ on the right (using additive inverses) leaves $$b+a = a+b.$$

Besides this, the addition and multiplication operations on a set $S$ are simply functions: $$\begin{align} + &\;:\; S\times S \to S\\ \cdot &\;:\; S\times S \to S \end{align}$$

They can be any mapping we choose so long as the distributive law is upheld.
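This view can be made concrete: treat candidate operations on a small set as arbitrary tables and test whether the distributive law holds. A minimal Python sketch (the function names are illustrative, not from the answer):

```python
from itertools import product

def distributes(mul, add, S):
    """Check a*(b+c) == a*b + a*c and (b+c)*a == b*a + c*a for all of S."""
    return all(
        mul[a][add[b][c]] == add[mul[a][b]][mul[a][c]]
        and mul[add[b][c]][a] == add[mul[b][a]][mul[c][a]]
        for a, b, c in product(S, repeat=3)
    )

S = range(3)  # elements 0, 1, 2 with arithmetic mod 3
add = [[(a + b) % 3 for b in S] for a in S]
mul = [[(a * b) % 3 for b in S] for a in S]

print(distributes(mul, add, S))  # True: mod-3 multiplication distributes over addition
print(distributes(add, mul, S))  # False: mod-3 addition does not distribute over multiplication
```

Here the familiar mod-3 operations pass in the usual direction and fail in the reversed one, illustrating that for ordinary arithmetic the distributive law really does single out which operation plays the role of multiplication.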

The commutativity of multiplication can be relaxed, but as explained, commutativity of addition is required if every element has an additive inverse.

On

All the answers being good, I would like to give another perspective here.

Multiplication is just "higher-order Addition"

Explaining:

Assume initially that $a, b \in \mathbb{N}$:

$a \times b = a+a+\dots+a$ ($b$ times)

or equivalently

$a \times b = b+b+\dots+b$ ($a$ times)

So in this sense, multiplication is an addition where the number of times the addition is performed (or the number of arguments, if you like) is variable, instead of fixed as in $a+c$ (where the number of arguments is a constant $2$: $a$ and $c$).
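The repeated-addition view above can be sketched directly in Python (a toy illustration for natural numbers, not from the original answer):

```python
# Multiplication on the natural numbers defined as iterated addition.
def times(a, b):
    """Return a * b by adding a to itself b times."""
    total = 0
    for _ in range(b):
        total += a
    return total

print(times(7, 6))  # 42
print(times(6, 7))  # 42: either factor can play the role of the counter
```

Note that commutativity of `times` is no longer obvious from the definition, since the two arguments play different roles; it has to be proved, which is part of what makes this "higher-order" view interesting.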

From there one can generalise this higher-order addition (i.e. multiplication) to other fields, e.g. $\mathbb{R}$, $\mathbb{C}$, etc.

The distributive law is then just a way to combine the two operations when they appear in the same expression. I would say that other distributive laws are also possible exactly because of this higher-order connection.

The above argument also becomes clear when one uses exponentials or logarithms to transform between addition and multiplication (a kind of operator duality, if you like).
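The duality mentioned above is just the identity $\log(ab) = \log a + \log b$ for positive reals; a small Python check (illustrative only):

```python
import math

# Logarithms turn multiplication into addition: log(a*b) = log(a) + log(b),
# so for positive reals a*b can be recovered as exp(log(a) + log(b)).
a, b = 3.0, 5.0
product_via_addition = math.exp(math.log(a) + math.log(b))
print(product_via_addition)  # ≈ 15.0, up to floating-point rounding
```

This is precisely the principle behind slide rules and logarithm tables: reduce a multiplication to an addition, then map back.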

This is an interesting subject: reducing all arithmetic to a single operator (i.e. addition) and seeing the connections with the other operations in a new light (e.g. a computational or functional one).

Note that an investigation like this may be important to physics, for example. Physics uses mathematical formulae to describe physical reality, and such formulae contain various operators, the simplest of which are addition and multiplication. But granting that one can physically add, say, two sticks by placing one after the other (so that the compound length is the sum of the lengths), how can they be multiplied? What would be the physical counterpart of a multiplication operation? (This is related to "physical computation".)