Is $\gcd(0, 0) = 0$ a mistaken belief in mathematics, or is it true by convention?
Solution 1:
It's not a belief. It's not a convention. It can be proved...
Negative numbers would just be an unnecessary distraction, so we will assume that we are talking about numbers in $\mathbb Z^+ =\{0, 1, 2, 3, \dots\}$ (here $\mathbb Z^+$ denotes the nonnegative integers).
There are two conditions that $g$ must meet to be called the greatest common divisor $(\gcd)$ of $x$ and $y$.
\begin{array}{lll} 1. &g|x \text{ and }g|y &\text{(g is a common divisor of x and y)}\\ 2. &\text{If }z|x \text{ and } z|y \text{, then } z|g &\text{(g is the greatest common divisor of x and y)} \end{array}
As an example, consider the numbers $180$ and $216$. The common divisors of these two numbers are $D_{180,216} = \{1,\,2,\,3,\,4,\,6,\,9,\,12,\,18,\,36\}.$ The elements of $D_{180,216}$ are all of the numbers that satisfy condition $(1).$
Here is where things may differ from what you expect. Your immediate reaction might be to say that $36$ is the greatest common divisor because all of the other numbers in $D_{180,216}$ are less than $36$. No, that's wrong. Look more closely at condition $(2)$. The reason that $36$ is the greatest common divisor is that all of the other numbers in $D_{180,216}$ are divisors of $36$. This is a much more subtle and sophisticated thing:
Every list of the common divisors of two numbers contains exactly one number that all of the other numbers in that list divide into. That number is the greatest common divisor.
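These two conditions are easy to check by brute force. A minimal Python sketch for the example above (the helper names are my own, not from the answer):

```python
# Brute-force check of the two gcd conditions for 180 and 216.

def common_divisors(x, y):
    """All positive common divisors of x and y (assumes x, y > 0)."""
    return [d for d in range(1, min(x, y) + 1) if x % d == 0 and y % d == 0]

D = common_divisors(180, 216)
print(D)  # condition (1): each element divides both 180 and 216

# Condition (2): the gcd is the unique element of D that every other
# element of D divides, not merely the numerically largest one.
g = [d for d in D if all(d % z == 0 for z in D)]
print(g)  # [36]
```

Running this reproduces the list $D_{180,216}$ above and singles out $36$ as the one element divisible by every other common divisor.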
So how do we prove that $\gcd(0,0) = 0$?
It turns out that there are two definitions of $a|b$.
DEFINITION A. Let $a,b \in \mathbb Z^+.$ Then $a|b$ if and only if there exists an $n \in \mathbb Z^+$ such that $b = an$.
DEFINITION B. Let $a,b \in \mathbb Z^+.$ Then $a|b$ if and only if there exists an $n \in \mathbb Z^+$ such that $\dfrac ba = n$.
The two definitions are exactly the same except, by definition A, $0|0$ is true, and by definition B, $0|0$ is at best undefined. Defining $0|0$ to be true makes the two definitions equivalent to each other.
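The contrast between the two definitions can be made concrete. A hedged Python sketch (the function names and the search bound are illustrative assumptions, not anything from the answer):

```python
# Definition A vs. definition B over the nonnegative integers.

def divides_a(a, b, limit=1000):
    """a | b per definition A: there exists n with b == a * n.
    The search is bounded only to keep this sketch finite."""
    return any(b == a * n for n in range(limit + 1))

def divides_b(a, b):
    """a | b per definition B: b / a must be a nonnegative integer.
    Undefined (here: an error) when a == 0."""
    if a == 0:
        raise ZeroDivisionError("b / a is undefined when a == 0")
    return b % a == 0

print(divides_a(0, 0))  # True: 0 == 0 * n for any n
print(divides_a(4, 0))  # True: 0 == 4 * 0
```

Under definition A, `divides_a(0, 0)` is `True`; under definition B, `divides_b(0, 0)` has no value at all.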
I found that Wolfram's MathWorld uses definition B. Wikipedia lists both definitions. Every algebra and number theory book I own uses definition A.
Personally, I am appalled to see definition B because definition A is necessary to generalize the concept of "divides" to principal ideal domains. One other misconception should be addressed here. There seems to be a belief that $a|b$ implies that $a \le b$. That isn't always true. According to either definition, $4|0$ (since $\dfrac 04 = 0$ and $0 = 4 \cdot 0$), but $4 \nleq 0$.
Since $0 = x \cdot 0$ for all $x \in \mathbb Z^+$, the divisors of $0$ are $D =\{0,1,2,3,4,\dots\}.$ So the common divisors of $0$ and $0$ are $D_{0,0}=D =\{0,1,2,3,4,\dots\}$. The one number in that list that all of the other numbers divide into is $0$.
Hence $0 = \gcd(0,0)$.
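This argument can be verified mechanically on any finite window of $D_{0,0}$. A small Python sketch, assuming definition A (the window size `N` is an arbitrary choice):

```python
# Within any finite window {0, 1, ..., N} of the common divisors of 0 and 0,
# the one element that all the others divide is 0.

def divides(a, b):
    """a | b per definition A: some n exists with b == a * n."""
    return b == 0 if a == 0 else b % a == 0

N = 50
D_00 = [d for d in range(N + 1) if divides(d, 0)]   # every d divides 0
g = [d for d in D_00 if all(divides(z, d) for z in D_00)]
print(g)  # [0]
```

Only $0$ survives the filter, since $z \mid 0$ for every $z$ but $0 \nmid d$ for any nonzero $d$.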
I suppose the answer to your question is that it depends. If you accept definition (A), then $\gcd(0,0) = 0$ can be proved. If you accept definition (B), then you have to define $\gcd(0,0) = 0$ so that it agrees with the consequences of definition (A).
Solution 2:
Well it depends.
When we define the GCD of two integers, the definition makes sense as long as at least one of the integers is not zero. And when you deal with divisibility problems, $\gcd(0, 0)$ doesn't make sense.
Now, if you know a bit of algebra, there is another way of characterizing (i.e. identifying) the GCD of two integers:
The ideal generated in $\mathbb Z$ by $a$ and $b$ is a principal ideal. This ideal is generated by $\gcd(a, b)$ or by $-\gcd(a,b)$. Now, this definition is equivalent to the original one for $(a, b) \neq (0, 0)$ and can be extended naturally to the case $a = 0$, $b = 0$. With this definition we get $\gcd(0, 0) = 0$.
When using results/tools from Ring theory, one might want to use this convention. This way, one can get slightly more general results, and there is no need to worry at every step if we are in the case $(0, 0)$.
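For a concrete check of this characterization, here is a brute-force Python sketch (the window size `k` is an arbitrary choice of mine). It confirms that every element $ax + by$ in a finite window is a multiple of $\gcd(a, b)$, that $\gcd(a, b)$ itself is attained, and that the degenerate case works out:

```python
import math

# The ideal aZ + bZ consists of multiples of gcd(a, b), and in the
# degenerate case a = b = 0 the ideal is {0}, generated by gcd(0, 0) == 0.

def ideal_elements(a, b, k=20):
    """A finite window of the ideal aZ + bZ."""
    return {a * x + b * y for x in range(-k, k + 1) for y in range(-k, k + 1)}

a, b = 12, 18
g = math.gcd(a, b)                  # 6
I = ideal_elements(a, b)
assert all(e % g == 0 for e in I)   # every element is a multiple of g
assert g in I                       # g itself is attained (Bezout)
print(math.gcd(0, 0))               # 0: the ideal {0} is generated by 0
```

Note that Python's `math.gcd` already adopts this convention and returns `0` for `math.gcd(0, 0)`.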
Solution 3:
This is one of those questions that raise religious wars. ;-)
The greatest common divisor was considered first when $0$ was not listed among numbers and it was defined as the largest (under the usual ordering) common divisor. But things change over the centuries.
In what follows, number will mean “natural number” (zero included).
Some people say that every nonzero number $n$ divides $0$, because $n\cdot 0=0$, but they don't accept $0\mid 0$. On the other hand, the definition for $a\mid b$ expressed as “there exists a number $c$ such that $ac=b$” clearly includes $0\mid 0$. One could add “unique”, so in this case $0$ would not divide $0$. As you see, it's just a matter of definitions.
Let's consider the usual ordering of numbers:
$a\le b$ if and only if there exists $c$ such that $a+c=b$
If we compare to the above definition of divisibility
$a\mid b$ if and only if there exists $c$ such that $ac=b$
we surely see an interesting similarity. More interesting is that this defines another order relation on numbers: divisibility is reflexive, antisymmetric and transitive. It's not a total order, because $2\nmid 3$ and $3\nmid 2$, but it's a very interesting ordering nonetheless. Indeed, the ordered set of numbers under divisibility is a lattice: any two elements have a greatest lower bound and a least upper bound.
The greatest lower bound of $a$ and $b$ is a number $d$ such that
- $d\mid a$ and $d\mid b$ ($d$ is a lower bound)
- for all $c$, if $c\mid a$ and $c\mid b$, then $c\mid d$ ($d$ is the greatest lower bound)
Of course, the greatest lower bound of a single element is the element itself.
What is this greatest lower bound? Of course it is the greatest common divisor in all cases when one among $a$ and $b$ is nonzero. And there's no point in excluding the element $0$, which in this ordering is the maximum element, because $n\mid 0$ for all $n$ (and $1$ is the minimum).
Once we note that this relation on the natural numbers reflects the partial ordering of ideals in the ring $\mathbb{Z}$ of integers with respect to reverse inclusion, this ordering becomes even more interesting, because the greatest lower bound of two ideals is their sum (and the least upper bound is their intersection), which immediately provides the Bézout relation for the greatest common divisor.
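The Bézout relation mentioned here can be computed with the extended Euclidean algorithm. A standard iterative sketch, not taken from this answer:

```python
# Extended Euclidean algorithm: the Bezout relation g = a*x + b*y,
# which witnesses that the ideal aZ + bZ is generated by gcd(a, b).

def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and g == a*x + b*y."""
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t

g, x, y = extended_gcd(180, 216)
print(g)                    # 36, and 180*x + 216*y == 36
assert 180 * x + 216 * y == g

print(extended_gcd(0, 0))   # (0, 1, 0): even here, 0 == 0*1 + 0*0
```

The degenerate case falls out for free: with $a = b = 0$ the loop never runs and the algorithm returns $\gcd(0,0) = 0$ with trivial Bézout coefficients.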
Let's get bold. There are (commutative) rings where the notion of “largest” cannot be given a sensible meaning (think of the Gaussian integers), but the notion of greatest common divisor makes sense and is very important. We lose uniqueness, but for any two elements $a$ and $b$ we can find one element $d$ satisfying properties 1 and 2 above.
So, in view of this generalization, the “modern” definition of gcd is more useful, and $\gcd(0,0)=0$ poses no problem. From a computational point of view, restricting ourselves to the natural numbers, perhaps the older definition is better and asking about $\gcd(0,0)$ is rather pointless.
Decide what your definition is and act accordingly. That's all. Allowing $\gcd(0,0)$ requires making exceptions to the rule that $a/\gcd(a,b)$ is a divisor of $a$; on the other hand, not defining it requires making different exceptions. I don't think there are better or worse exceptions: it's a matter of conventions. In some areas a convention may be more useful than another. And, at the end of the day, $\gcd(0,0)$ is not a big deal.
Solution 4:
You're right to be reluctant to accept the validity of $\gcd(0, 0) = 0$. A lot of times, questions about 0 lead to graph discontinuities and existential quandaries.
Given nonzero integers $a$ and $b$, $$\frac{a}{\gcd(a, b)}$$ is a divisor of $a$ and $$\frac{b}{\gcd(a, b)}$$ is a divisor of $b$. Now, think for a moment about $\gcd(a, 0)$ when $a \neq 0$. ProofWiki tells us that $\gcd(a, 0) = |a|$. For example, with $a = -3$ we have $\frac{-3}{3} = -1$, which is certainly a divisor of $-3$, and $\frac{0}{3} = 0$, which we can choose to accept as a divisor of 0 (more on that choice later).
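As a side note (my observation, not the answer's): Python's `math.gcd` follows the same convention as ProofWiki here, so the worked example can be replayed directly:

```python
import math

# math.gcd matches the convention gcd(a, 0) == |a|,
# and returns 0 for gcd(0, 0) without raising an error.

print(math.gcd(-3, 0))        # 3
print(-3 // math.gcd(-3, 0))  # -1, a divisor of -3
print(0 // math.gcd(-3, 0))   # 0
print(math.gcd(0, 0))         # 0
```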
So far we have not triggered the division by zero error. But if we then ask what the divisors of 0 are, then we run into that problem. And then we also have to re-examine the definition of divisor, a definition we take for granted when dealing with positive as well as with negative integers. Given nonzero integers $a$ and $b$, we say $b$ is a divisor of $a$ if $\frac{a}{b}$ is an integer. For example, $-3$ is a divisor of $-27$ since $\frac{-27}{-3} = 9$. But $-6$ is not a divisor of $-27$ because $\frac{-27}{-6} = \frac{9}{2}$, which is not an integer (though it is a rational number).
Furthermore, each positive integer is one of its own divisors, and its list of divisors is a finite sequence. Thus, if $a > 0$, then $\gcd(a, a) = a$. With minor adjustments we can say the same thing for each negative integer. Indeed, if $a \neq 0$, then $\frac{a}{a} = 1$. But as you already know, $\frac{0}{0}$ is undefined or invalid.
So we reach the seemingly absurd conclusion that every positive integer and every negative integer is a divisor of 0, but 0 itself is not its own divisor. Someone commented earlier that this means that $\gcd(0, 0) = \infty$. Or one could say that $\gcd(0, 0) = -\infty$ for the pure heck of it. But neither $\infty$ nor $-\infty$ is a specific integer.
No matter how large two positive integers might be, they each still have a finite list of divisors. The intersection of two finite sets is also finite, and can therefore (at least theoretically, in the case of large integers beyond our practical computational ability) be sorted to find a divisor in common that is greater than all the other common divisors.
But 0 has an infinite list of divisors. Even if we accepted 0 as being its own divisor (which we don't), it's still smaller than, say, $5^{4^{3^2}}$. And then there's $5^{4^{3^2}} + 1$, $5^{4^{3^2}} + 2$, etc., ad infinitum. Therefore $\gcd(0, 0)$ is undefined.
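The "no numerical maximum" point can be illustrated in Python (the bound `N` is an arbitrary choice of mine; whatever maximum the search finds just grows with it):

```python
# Every positive integer up to any bound N divides 0, so the "largest
# common divisor" of 0 and 0 runs away with the bound.

def divides(a, b):
    return b == 0 if a == 0 else b % a == 0

N = 100
divisors_of_zero = [d for d in range(1, N + 1) if divides(d, 0)]
print(max(divisors_of_zero))  # 100: grows without bound as N grows
```

Under the order-theoretic ("largest") reading there is thus no answer; only the divisibility reading of Solutions 1 and 3 pins down a value.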
This is all fine and dandy, but if you're programming a calculator or a computer algebra system, do you really want to throw an error message for every little undefined thing that comes your way? You also have to consider that programmers like to build functions on top of other functions. If you have to program `GCD`, do you implement the Euclidean algorithm (which might not always be the most efficient, e.g., for two consecutive Fibonacci numbers) or do you build it on top of `FactorInteger`?
Try computing $\gcd(8, 13)$. The quickest way is to factorize $8 = 2^3$ and note that $13$ is prime. Thus $\gcd(8, 13) = 1$. Now try this in Wolfram Alpha: `FactorInteger[0]`. Is it any wonder that Wolfram Alpha gives you $0$ for `GCD[0, 0]`? Also try
`Table[GCD[n, n], {n, -10, 10}]`
`Table[GCD[n, n + 2], {n, -10, 10}]`
(That second one gives a $-2$ in the middle that sticks out like a sore thumb).
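For comparison (my addition, not part of the answer): the same two experiments in Python, whose `math.gcd` always returns a nonnegative value:

```python
import math

# Python's math.gcd takes absolute values, so gcd(0, 0) lands quietly at 0
# in the middle of the first list.

print([math.gcd(n, n) for n in range(-10, 11)])
print([math.gcd(n, n + 2) for n in range(-10, 11)])
```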
In conclusion, $\gcd(0, 0) = 0$ is wrong, though I don't know if a lot of people believe it. I think that if you choose to treat it as if it were true, it would be for convenience, not convention.