Is $\gcd(0, 0) = 0$ a wrong belief in mathematics, or is it true by convention?


I'm sorry to ask this question, but it is important for me to learn more about number theory.

I'm confused: $0$ is not divisible by itself, and yet Wolfram Alpha says $\gcd(0, 0) = 0$.

My question here is: is $\gcd(0, 0) = 0$ a wrong belief in mathematics, or is it true by convention?

Thank you for any help.


On BEST ANSWER

It's not a belief. It's not a convention. It can be proved...

Negative numbers would just be an unnecessary distraction, so we will assume that we are talking about numbers in $\mathbb Z^+ =\{0, 1, 2, 3, \dots\}$ (note that, for this answer, $\mathbb Z^+$ includes $0$).

There are two conditions that $g$ must meet to be called the greatest common divisor $(\gcd)$ of $x$ and $y$.

\begin{array}{lll} 1. &g|x \text{ and }g|y &\text{(g is a common divisor of x and y)}\\ 2. &\text{If }z|x \text{ and } z|y \text{, then } z|g &\text{(g is the greatest common divisor of x and y)} \end{array}

As an example, consider the numbers $180$ and $216$. The common divisors of these two numbers are $D_{180,216} = \{1,\,2,\,3,\,4,\,6,\,9,\,12,\,18,\,36\}$. The elements of $D_{180,216}$ are all of the numbers that satisfy condition $(1).$

Here is where things get different from what you might expect. Your immediate reaction might be to say that $36$ is the greatest common divisor because all of the other numbers in $D_{180,216}$ are less than $36$. No, that's wrong. Look more closely at condition $(2)$. The reason that $36$ is the greatest common divisor is that all of the other numbers in $D_{180,216}$ are divisors of $36$. This is a much more subtle and sophisticated thing:

Every list of the common divisors of two numbers contains exactly one number that all of the other numbers in that list divide into. That number is the greatest common divisor.
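As a quick sanity check, the example above can be verified with a short Python sketch (the helper name `divisors` is mine, and trial division is of course only suitable for small inputs):

```python
def divisors(n):
    # positive divisors of a positive integer n, by trial division
    return {d for d in range(1, n + 1) if n % d == 0}

common = divisors(180) & divisors(216)
print(sorted(common))                     # [1, 2, 3, 4, 6, 9, 12, 18, 36]
# condition (2): every common divisor divides 36
print(all(36 % d == 0 for d in common))   # True
```

So $36$ is not just the numerically largest element of the list; it is the one element that every other common divisor divides.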

So how do we prove that $\gcd(0,0) = 0$?

It turns out that there are two definitions of $a|b$.

DEFINITION A. Let $a,b \in \mathbb Z^+.$ Then $a|b$ if and only if there exists an $n \in \mathbb Z^+$ such that $b = an$.

DEFINITION B. Let $a,b \in \mathbb Z^+.$ Then $a|b$ if and only if there exists an $n \in \mathbb Z^+$ such that $\dfrac ba = n$.

The two definitions are exactly the same except, by definition A, $0|0$ is true, and by definition B, $0|0$ is at best undefined. Defining $0|0$ to be true makes the two definitions equivalent to each other.
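The contrast between the two definitions can be made concrete. Here is a rough Python sketch (the function names `divides_a` and `divides_b` are mine, purely for illustration):

```python
def divides_a(a, b):
    # Definition A: a | b iff b == a * n for some nonnegative integer n
    return b == 0 if a == 0 else b % a == 0

def divides_b(a, b):
    # Definition B: a | b iff b / a is a nonnegative integer;
    # the quotient b / a simply does not exist when a == 0
    if a == 0:
        raise ZeroDivisionError("b / 0 is undefined")
    return b % a == 0

print(divides_a(0, 0))   # True: 0 == 0 * n for any n
print(divides_a(4, 0))   # True: 0 == 4 * 0
# divides_b(0, 0) raises ZeroDivisionError -- "at best undefined"
```

The two functions agree everywhere except at $a = 0$, exactly as described above.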

I found that Wolfram's MathWorld uses definition B. Wikipedia lists both definitions. Every algebra and number theory book I own uses definition A.

Personally, I am appalled to see definition B, because it is definition A that generalizes the concept of "divides" to principal ideal domains. One other misconception should be addressed here. There seems to be a belief that $a|b$ implies that $a \le b$. That isn't always true. According to either definition, $4|0$ (since $\dfrac 04 = 0$ and $0 = 4 \cdot 0$), but $4 \nleq 0$.

Since $0 = x \cdot 0$ for all $x \in \mathbb Z^+$, the divisors of $0$ are $D =\{0,1,2,3,4,\dots\}$. So the common divisors of $0$ and $0$ are $D_{0,0}=D =\{0,1,2,3,4,\dots\}$. The one number in that list that all of the other numbers divide into is $0$.

Hence $0 = \gcd(0,0)$.

I suppose the answer to your question is that it depends. If you accept definition A, then $\gcd(0,0) = 0$ can be proved. If you accept definition B, then you have to define $\gcd(0,0) = 0$ so that it agrees with the consequences of definition A.
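For what it's worth, Python's standard library follows the definition-A convention: `math.gcd(0, 0)` returns `0`, and that value satisfies both gcd conditions. A sketch (condition (2) can only be spot-checked on a finite range, since every number divides $0$):

```python
import math

g = math.gcd(0, 0)
print(g)   # 0

def divides(a, b):
    # definition A on nonnegative integers
    return b == 0 if a == 0 else b % a == 0

# condition (1): g is a common divisor of 0 and 0
print(divides(g, 0))   # True
# condition (2), spot-checked: every common divisor z of 0 and 0 divides g
print(all(divides(z, g) for z in range(1000)
          if divides(z, 0) and divides(z, 0)))   # True
```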

On

Well it depends.

When we define the gcd of two integers, the definition makes sense as long as at least one of the integers is not zero. And whenever you deal with divisibility problems, $\gcd(0, 0)$ doesn't make sense.

Now, if you know a bit of algebra, there is another way of characterizing (i.e. identifying) the GCD of two integers:

The ideal generated in $\mathbb Z$ by $a$ and $b$ is a principal ideal, generated by $\gcd(a, b)$ or by $-\gcd(a,b)$. Now, this characterization is equivalent to the original definition for $(a, b) \neq (0, 0)$ and can be extended naturally to the case $a = 0, b = 0$. With this definition we get $\gcd(0, 0) = 0$.

When using results/tools from Ring theory, one might want to use this convention. This way, one can get slightly more general results, and there is no need to worry at every step if we are in the case $(0, 0)$.
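The ideal characterization can be spot-checked numerically: within a finite window, the set $\{ax + by\}$ coincides with the multiples of $\gcd(a,b)$, and in the degenerate case $a = b = 0$ both sides collapse to $\{0\}$, matching $\gcd(0,0) = 0$. A rough Python sketch (the window size and brute-force coefficient range are my own arbitrary choices, adequate only for small generators):

```python
from math import gcd

def ideal_window(a, b, N):
    # elements a*x + b*y that land in [-N, N], with coefficients
    # x, y brute-forced over [-N, N] (enough for small a, b)
    return {a * x + b * y
            for x in range(-N, N + 1)
            for y in range(-N, N + 1)
            if -N <= a * x + b * y <= N}

N = 30
# (12, 18) generates exactly the multiples of gcd(12, 18) = 6
print(ideal_window(12, 18, N) ==
      {m for m in range(-N, N + 1) if m % 6 == 0})   # True
print(ideal_window(0, 0, N))   # {0}, the ideal generated by gcd(0, 0) = 0
```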

On

You're right to be reluctant to accept the validity of $\gcd(0, 0) = 0$. A lot of times, questions about 0 lead to graph discontinuities and existential quandaries.

Given nonzero integers $a$ and $b$, $$\frac{a}{\gcd(a, b)}$$ is a divisor of $a$ and $$\frac{b}{\gcd(a, b)}$$ is a divisor of $b$. Now, think for a moment about $\gcd(a, 0)$ when $a \neq 0$. ProofWiki tells us that $\gcd(a, 0) = |a|$. For example, with $a = -3$ we have $\frac{-3}{3} = -1$, which is certainly a divisor of $-3$, and $\frac{0}{3} = 0$, which we can choose to accept as a divisor of 0 (more on that choice later).

So far we have not triggered the division by zero error. But if we then ask what the divisors of 0 are, then we run into that problem. And then we also have to re-examine the definition of divisor, a definition we take for granted when dealing with positive as well as with negative integers. Given nonzero integers $a$ and $b$, we say $b$ is a divisor of $a$ if $\frac{a}{b}$ is an integer. For example, $-3$ is a divisor of $-27$ since $\frac{-27}{-3} = 9$. But $-6$ is not a divisor of $-27$ because $\frac{-27}{-6} = \frac{9}{2}$, which is not an integer (though it is a rational number).

Furthermore, each positive integer is one of its own divisors, and its list of divisors is a finite sequence. Thus, if $a > 0$, then $\gcd(a, a) = a$. With minor adjustments we can say the same thing for each negative integer. Indeed, if $a \neq 0$, then $\frac{a}{a} = 1$. But as you already know, $\frac{0}{0}$ is undefined or invalid.

So we reach the seemingly absurd conclusion that every positive integer and every negative integer is a divisor of 0, but 0 itself is not its own divisor. Someone commented earlier that this means that $\gcd(0, 0) = \infty$. Or one could say that $\gcd(0, 0) = -\infty$ for the pure heck of it. But neither $\infty$ nor $-\infty$ is a specific integer.

No matter how large two positive integers might be, they each still have a finite list of divisors. The intersection of two finite sets is also finite, and can therefore (at least theoretically, in the case of large integers beyond our practical computational ability) be sorted to find a divisor in common that is greater than all the other common divisors.

But 0 has an infinite list of divisors. Even if we accepted 0 as being its own divisor (which we don't), it's still smaller than, say, $5^{4^{3^2}}$. And then there's $5^{4^{3^2}} + 1$, $5^{4^{3^2}} + 2$, etc., ad infinitum. Therefore $\gcd(0, 0)$ is undefined.

This is all fine and dandy, but if you're programming a calculator or a computer algebra system, do you really want to throw an error message for every little undefined thing that comes your way? You also have to consider that programmers like to build functions on top of other functions. If you have to program GCD, do you implement the Euclidean algorithm (which might not always be the most efficient, e.g., two consecutive Fibonacci numbers) or do you build it on top of FactorInteger?

Try computing $\gcd(8, 13)$. The quickest way is to factorize $8 = 2^3$ and note that 13 is prime. Thus $\gcd(8, 13) = 1$. Now try this in Wolfram Alpha: FactorInteger[0]. Is it any wonder that Wolfram Alpha gives you 0 for GCD[0, 0]? Also try

  • Table[GCD[n, n], {n, -10, 10}]
  • Table[GCD[n, n + 2], {n, -10, 10}]

(That second one gives a $-2$ in the middle that sticks out like a sore thumb).
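Whatever one thinks $\gcd(0, 0)$ ought to be, it is easy to see why remainder-based software reports 0: the loop's exit test `b == 0` fires before any division is ever attempted. A minimal Python sketch of such an implementation:

```python
def gcd_euclid(a, b):
    # iterative Euclidean algorithm on remainders; for (0, 0) the
    # loop body never runs, so 0 comes back without dividing by zero
    a, b = abs(a), abs(b)
    while b != 0:
        a, b = b, a % b
    return a

print(gcd_euclid(8, 13))   # 1
print(gcd_euclid(-3, 0))   # 3, matching gcd(a, 0) = |a|
print(gcd_euclid(0, 0))    # 0 -- no error is ever triggered
```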

In conclusion, $\gcd(0, 0) = 0$ is wrong, though I don't know if a lot of people believe it. I think that if you choose to treat it as if it were true, it would be for convenience, not convention.

On

For those who find my previous answer TLDR, here's what it boils down to:

A nonzero integer $a$ has a finite set of divisors, and so does a different nonzero integer $b$. The common divisors of $a$ and $b$ are the intersection of the divisors of $a$ and the divisors of $b$, and this intersection is also finite, which means there is a number among the common divisors that is greater than all the other common divisors: that's the greatest common divisor. If $a = b$, then we only need to look at one set of divisors, which is still finite.

But every integer is a divisor of 0 (except 0 itself). Whatever positive integer we look at, no matter how large and unreachable it may seem to us, is still smaller than the next higher positive integer. Even if we were to accept 0 as its own divisor, it is still smaller than all the positive integers, and therefore it can't be its own greatest common divisor.

Therefore $\gcd(0, 0) = 0$ is wrong. As for why a given computer program would give you that answer, that depends on how the program was written (I go a little more in detail on that in my longer answer).

On

This is one of those questions that start religious wars. ;-)

The greatest common divisor was considered first when $0$ was not listed among numbers and it was defined as the largest (under the usual ordering) common divisor. But things change over the centuries.

In what follows, number will mean “natural number” (zero included).

Some people say that every nonzero number $n$ divides $0$, because $n\cdot 0=0$, but they don't accept $0\mid 0$. On the other hand, the definition for $a\mid b$ expressed as “there exists a number $c$ such that $ac=b$” clearly includes $0\mid 0$. One could add “unique”, so in this case $0$ would not divide $0$. As you see, it's just a matter of definitions.

Let's consider the usual ordering of numbers:

$a\le b$ if and only if there exists $c$ such that $a+c=b$

If we compare to the above definition of divisibility

$a\mid b$ if and only if there exists $c$ such that $ac=b$

we surely see an interesting similarity. More interesting is that this defines another order relation on numbers: divisibility is reflexive, antisymmetric and transitive. It's not a total order, because $2\nmid 3$ and $3\nmid 2$, but it's a very interesting ordering nonetheless. Indeed, the ordered set of numbers under divisibility is a lattice: any two elements have a greatest lower bound and a least upper bound.

The greatest lower bound of $a$ and $b$ is a number $d$ such that

  1. $d\mid a$ and $d\mid b$ ($d$ is a lower bound)
  2. for all $c$, if $c\mid a$ and $c\mid b$, then $c\mid d$ ($d$ is the greatest lower bound)

Of course, the greatest lower bound of a single element is the element itself.

What's this greatest lower bound? Of course it is the greatest common divisor in all cases where at least one of $a$ and $b$ is nonzero. And there's no point in excluding the element $0$, which in this ordering is the maximum element, because $n\mid 0$ for all $n$ (and $1$ is the minimum).

Once we note that this relation on the natural numbers reflects the partial ordering of ideals in the ring $\mathbb{Z}$ of integers with respect to reverse inclusion, this ordering becomes even more interesting, because the greatest lower bound of two ideals is their sum (and the least upper bound is their intersection), which immediately provides the Bézout relation for the greatest common divisor.
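These lattice claims are easy to spot-check in Python (`math.lcm` requires Python 3.9+): gcd is the meet, lcm the join, $0$ the top and $1$ the bottom of the divisibility order.

```python
from math import gcd, lcm  # math.lcm requires Python 3.9+

# 0 is the top of the divisibility order: gcd(n, 0) == n for all n,
# and 1 is the bottom: lcm(n, 1) == n
print(all(gcd(n, 0) == n for n in range(50)))   # True
print(all(lcm(n, 1) == n for n in range(50)))   # True

# the absorption laws that any lattice must satisfy, spot-checked
print(all(gcd(a, lcm(a, b)) == a and lcm(a, gcd(a, b)) == a
          for a in range(20) for b in range(20)))   # True
```

Note that the checks include $a = 0$ and $b = 0$: nothing breaks, precisely because $\gcd(0,0) = 0$ is the meet of the top element with itself.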

Let's get bold. There are (commutative) rings where the notion of "largest" cannot be given a sensible meaning (think of the Gaussian integers), but the notion of greatest common divisor makes sense and is very important. We lose uniqueness, but for any two elements $a$ and $b$ we can find one element $d$ satisfying properties 1 and 2 above.

So, in view of this generalization, the "modern" definition of gcd is more useful, and $\gcd(0,0)=0$ poses no problem. From a computational point of view, restricting ourselves to the natural numbers, perhaps the older definition is better and asking about $\gcd(0,0)$ is rather pointless.

Decide what your definition is and act accordingly. That's all. Allowing $\gcd(0,0)$ requires making exceptions to the rule that $a/\gcd(a,b)$ is a divisor of $a$; on the other hand, not defining it requires making different exceptions. I don't think there are better or worse exceptions: it's a matter of conventions. In some areas one convention may be more useful than another. And, in the end, $\gcd(0,0)$ is not a big deal.

On

We will use
$\mathbb N = \{1, 2, 3, 4, \ldots\}$ (the set of natural numbers) and
$\mathbb W = \{0, 1, 2, 3, \ldots\}$ (the set of whole numbers).

First we will define divides and gcd over $\mathbb N$. Then we will look at some of the consequences of those definitions, and finally we will look for a mathematically sound way to extend the definitions of divides and gcd to $\mathbb W.$


DIVIDES and GCD over $\mathbb N$


DEFINITION $1.1.$ For all $x, y \in \mathbb N,$ we say that $x$ divides $y$, written $x|y$, if $\dfrac yx$ is in $\mathbb N.$

LEMMA $1.2.$ For all $x, y \in \mathbb N,$ $x|y$ if and only if there exists an $n \in \mathbb N$ such that $y = nx.$

COROLLARY $1.2.A.$ For all $x, y \in \mathbb N,$ if $x|y$ then $x \le y.$

So, for example $3|15$ because $\dfrac{15}{3}$ is a natural number and because $15=3 \cdot 5.$

DEFINITION $1.3.$ For any $n \in \mathbb N,$ we define the divisors of $n$ to be the set $D_n=\{x \in \mathbb N: x|n\}.$

THEOREM $1.4.$ For each $x,y \in \mathbb N,$ there exists a unique $g \in \mathbb N$ such that $D_x \cap D_y = D_g.$

Proof. Consider the prime decompositions $x=\prod_{i=1}^{\infty}p_i^{\alpha_i}$ and $y=\prod_{i=1}^{\infty}p_i^{\beta_i}$ where there exists some $N \ge 1$ such that $\alpha_i = \beta_i = 0$ for all $i \ge N.$ It isn't hard to verify that $g = \prod_{i=1}^{\infty} p_i^{\gamma_i}$, where $\gamma_i = \min(\alpha_i, \beta_i)$ for all $i\ge 1$, satisfies $D_x \cap D_y = D_g.$

DEFINITION $1.5.$ For each $x,y \in \mathbb N, \gcd(x,y)$ is the unique $g \in \mathbb N$ such that $D_x \cap D_y = D_g.$

COROLLARY $1.5.A.$ For each $x,y \in \mathbb N, \gcd(x,y) = g$ if and only if \begin{array}{ll} (1.) & g|x \text{ and } g|y\\ (2.) & \text{If } z|x \text{ and } z|y \text{, then } z \le g \end{array}

COROLLARY $1.5.B.$ For each $x,y \in \mathbb N, \gcd(x,y) = g$ if and only if \begin{array}{ll} (1.) & g|x \text{ and } g|y\\ (2.) & \text{If } z|x \text{ and } z|y \text{, then } z|g \end{array}
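The proof of Theorem 1.4 translates directly into code. A sketch, assuming small inputs (trial-division factoring; the function names are mine):

```python
from collections import Counter

def factor(n):
    # prime factorization of n >= 1 by trial division,
    # returned as a Counter of prime -> exponent
    f, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            f[d] += 1
            n //= d
        d += 1
    if n > 1:
        f[n] += 1
    return f

def gcd_min_exponents(x, y):
    # gamma_i = min(alpha_i, beta_i), as in the proof of Theorem 1.4
    fx, fy = factor(x), factor(y)
    g = 1
    for p in fx.keys() & fy.keys():
        g *= p ** min(fx[p], fy[p])
    return g

print(gcd_min_exponents(180, 216))   # 36 = 2^2 * 3^2
```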


How much trouble does trying to handle $0$ cause?


DEFINITION $1.1$ Almost still works with $0$ thrown in. We can claim, for example, that $4|0$ since $\dfrac 04 = 0.$ But, since $\dfrac 00$ is undefined, we can't decide if $0|0$ is True or False.

LEMMA $1.2$ Is the key to the problem. If we convert this lemma into a definition, then we haven't changed how "divides" works for the natural numbers, and now we can demonstrate that $0|0.$

COROLLARY $1.2.A$ is now False, since $4|0$ yet $4 \nleq 0.$ But "divides" is a partial order and, with respect to that order, $4|0$ indicates that $0$ is greater than $4$.

DEFINITION $1.3$ Would be fine, except it might seem odd that $D_0 = \mathbb W.$

THEOREM $1.4$ is still true.

DEFINITION $1.5$ Will still make sense.

COROLLARY $1.5.A$ would no longer be true.

COROLLARY $1.5.B$ would still work if we claimed that $0|0$


DIVIDES and GCD over $\mathbb W$


So, how are we to handle $0?$ What follows is essentially the modern approach. In this approach, $0|0$ and $\gcd(0,0) = 0.$

DEFINITION $1.1.$ For all $x, y \in \mathbb W,$ $x|y$ if and only if there exists an $n \in \mathbb W$ such that $y = nx.$

So $0|0$ because $0 = 0 \cdot 0.$

DEFINITION $1.3.$ For any $n \in \mathbb W,$ we define the divisors of $n$ to be the set $D_n=\{x \in \mathbb W: x|n\}.$

In particular, $D_0 = \mathbb W.$

THEOREM $1.4.$ For each $x,y \in \mathbb W,$ there exists a unique $g \in \mathbb W$ such that $D_x \cap D_y = D_g.$

In particular, $D_0 \cap D_0 = D_0.$

Proof. Use the old proof for $x, y \gt 0.$ The special cases are easy to handle.

DEFINITION $1.5.$ For each $x,y \in \mathbb W, \gcd(x,y)$ is the unique $g \in \mathbb W$ such that $D_x \cap D_y = D_g.$

It follows that $\gcd(0,0) = 0.$

COROLLARY $1.5.B.$ For each $x,y \in \mathbb W, \gcd(x,y) = g$ if and only if \begin{array}{ll} (1.) & g|x \text{ and } g|y\\ (2.) & \text{If } z|x \text{ and } z|y \text{, then } z|g \end{array}

Note that one way to read condition (2) is that it says that $g$ is the "greatest" divisor of $x$ and $y$ with respect to the partial order "divides".

On

What is the gcd? It is the greatest common divisor, and a divisor is something that divides evenly. It is defined for natural numbers, and $0$ isn't a natural number. Even if you extend it to the integers, $0$ cannot divide a number without the result being undefined, and moreover $0$ divided by any number is $0$. Thus, in my opinion, $\gcd(0,0)=\infty$.

On

A lightning rod answer: since the Euclidean algorithm is regarded so highly, and not just for boring old $\mathbb{Z}$, if $\gcd(0, 0)$ is valid, then it must be computable using the Euclidean algorithm.

Going by http://mathworld.wolfram.com/EuclideanAlgorithm.html, to start off the computation of $\gcd(a, b)$ we set $$q_1 = \big\lfloor \frac{a}{b} \big\rfloor.$$ But $\big\lfloor \frac{0}{0} \big\rfloor$ is undefined, and therefore $\gcd(0, 0)$ must also be undefined.
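In code form the objection is immediate: the very first step of that formulation divides by zero. A Python sketch (the function name `first_quotient` is mine):

```python
def first_quotient(a, b):
    # q1 = floor(a / b), the first step of the MathWorld formulation
    return a // b

print(first_quotient(24, 10))   # 2
try:
    first_quotient(0, 0)
except ZeroDivisionError:
    print("floor(0/0) is undefined")
```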

On

One of the very first difficulties in trying to use the Euclidean algorithm to validate $\gcd(0, 0)$ is the problem of division by zero. As a way around this difficulty, I suggest the trial-and-error version of the algorithm.

Instead of trying to compute $\frac ab$ to determine $q_1$ for $a = q_1 b + r$, you try setting $q_1 = 1$. If then $r > b$, you increment $q_1$ until $r \leq b$. E.g., for $\gcd(24, 10)$ you get $24 = 1 \times 10 + 14$, and since $14 > 10$, you try $24 = 2 \times 10 + 4$.

So for $\gcd(0, 0)$ you get $0 = 1 \times 0 + 0$, concluding that $\gcd(0, 0) = 0$ as desired.


This implementation of the algorithm is of course inefficient for dealing with positive $a$ and $b$. A slight optimization would be this change: "If then $r \geq b$, you increment $q_1$ until $r < b$." But if I had said that from the get-go, it would have created yet another difficulty.
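The trial-and-error scheme above can be sketched in Python. I assume nonnegative inputs and add an initial swap so that $a \ge b$ (my addition, to keep the subtraction loop valid):

```python
def gcd_trial(a, b):
    # trial-and-error Euclidean division: start with q1 = 1
    # (r = a - b) and keep incrementing q1 (subtracting b)
    # while the remainder still exceeds b
    a, b = max(a, b), min(a, b)   # assumes nonnegative inputs
    while b != 0:
        r = a - b                 # q1 = 1
        while r > b:              # increment q1 until r <= b
            r -= b
        a, b = b, r
    return a

print(gcd_trial(24, 10))   # 2
print(gcd_trial(0, 0))     # 0 -- no division is ever attempted
```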

On

This question is strongly related to the question Why doesn't $0$ being a prime ideal in $\mathbb Z$ imply that $0$ is a prime number?. Let me quote a sentence from Pete L. Clark's accepted answer to that question.

Contemporary algebraists often think about factorization as a property of monoids rather than integral domains per se.

I fully agree with this point of view. Thus here we consider $\mathbb{N}$ as a commutative monoid under multiplication. Here is another quote from the same answer.

... Thus we can think of factorization questions as taking place in the cancellative monoid ...

That is, the usual definition considers the cancellative monoid $\mathbb{N} - \{0\}$ still under multiplication. In other words, when dealing with the notions of factorization, prime element, irreducible element, lcm, gcd and so forth on natural numbers, we simply ignore $0$.

Now, this question insists on using $0$, doesn't it? At the level of monoids, a more general question can be asked.

Are there any coherent notions of factorization, prime element, irreducible element, lcm, gcd that work for non-cancellative commutative monoids?

The answer is yes, but it requires carefully defining a few notions. Let $M$ be a commutative monoid with identity $1$. For convenience, I will denote by $\preccurlyeq$ the divisibility relation (that is, $a \preccurlyeq b$ if there exists $x \in M$ such that $ax = b$). I will also assume, for simplicity, that $\preccurlyeq$ is a partial order, that is, $a \preccurlyeq b$ and $b \preccurlyeq a$ imply $a = b$. This is certainly the case for $\mathbb{N}$.

Then the gcd of a finite family $(x_i)_{i \in I}$ of elements of $M$ can be defined as the infimum (or greatest lower bound) of the set $\{x_i \mid i \in I\}$ with respect to $\preccurlyeq$. Of course, it does not always exist, but in our case the gcd of $0$ and $0$ exists and is equal to $0$, since $0 \preccurlyeq 0$ and $n \preccurlyeq 0$ for every $n$. This answers the question, but let me go beyond this rather trivial example.

We first need a clean definition of irreducible and prime elements in a commutative monoid:

Definition. An element $x$ is irreducible if, for each finite family $(x_i)_{i \in I}$ such that $x = \prod_{i \in I}x_i$, there exists $i \in I$ such that $x_i = x$.

Definition. An element $x$ is prime if, for each finite family $(x_i)_{i \in I}$ such that $x \preccurlyeq \prod_{i \in I}x_i$, there exists $i \in I$ such that $x_i \preccurlyeq x$.

It is easy to see that every prime element is irreducible, but the converse is not true in general.

Definition. A decomposition of $x$ is a finite family $D = (x_i)_{i \in I}$ of irreducible elements such that $x = \prod_{i \in I}x_i$.

Decompositions of a given element can be compared as follows. Given $D = (x_i)_{i \in I}$ and $D' = (x'_i)_{i \in I'}$ two decompositions of $x$, we say that $D$ is more reduced than $D'$ (notation $D \leqslant D'$) if there exists an injection $\sigma: I \to I'$ such that, for all $i \in I$, $x_i \preccurlyeq x'_{\sigma(i)}$. One can show that this defines a preorder on the set of all decompositions of a given element. A minimal element of this set is called a reduced decomposition of $x$.

Now comes the main definition.

Definition. A commutative monoid is said to be factorial if each of its elements admits a reduced decomposition.

Note that in a factorial monoid, an element can have several decompositions, but only one reduced decomposition (up to the order of the factors). Here is a simple example.

Let $M = \{1, p, q, r, 0\}$, where $p$, $q$ and $r$ are idempotent, $qr = rq = q$, and $pr = rp = pq = qp = 0$. Then $1 \preccurlyeq p \preccurlyeq 0$ and $1 \preccurlyeq r \preccurlyeq q \preccurlyeq 0$; $p$ and $q$ are primes, while $r$ is irreducible but not prime. The reduced decompositions of $1$ and $0$ are $1 = \prod_{i \in \emptyset}x_i$ (yes, the empty product is allowed!) and $0 = pr$. $0 = pq$ is another decomposition of $0$, but not a reduced one.

One can show that lcm's always exist in a factorial monoid, but gcd's do not necessarily exist.

On

Let $D$ be a principal ideal domain; that is, for any ideal $I$ there exists some element $a\in I$ such that $I=(a)=\{ba \mid b\in D\}$. It follows, by definition, that $\mathbb{D}=(D\backslash \{0\}, \cdot, 1)$ is a commutative cancellative monoid. We define the divisibility relation: $a|b$ if and only if $ac=b$ for some $c\in D$. Then $$1|a \text{ and } a|0$$ for any $a\in D$. Define $(D/\sim, |)$ with $a\sim b$ if and only if $a|b$ and $b|a$. Observe that $$a|b \text{ if and only if } (a)\supseteq (b)$$

That is, $[0]$ is the maximum of the ordered set $(D/\sim, |)$ (because $0$ is in every ideal).

Now just apply the definition of "greatest common divisor".
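The correspondence between divisibility and ideal containment in $\mathbb Z$ can be spot-checked on a finite window (the truncation is my own device; it is faithful for $|a|, |b| \le N$):

```python
def divides(a, b):
    # a | b iff a*c == b for some integer c
    return b == 0 if a == 0 else b % a == 0

def ideal(a, N):
    # the multiples of a lying in the window [-N, N]
    return {a * k for k in range(-N, N + 1) if -N <= a * k <= N}

N = 10
print(all(divides(a, b) == (ideal(b, N) <= ideal(a, N))
          for a in range(N + 1) for b in range(N + 1)))   # True
# (0) = {0} is contained in every ideal, so [0] is the maximum
# of the divisibility order, and gcd(0, 0) = 0
print(ideal(0, N))   # {0}
```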