Are there contradictions in math?
Solution 1:
Someone told me that math has a lot of contradictions.
Correct mathematics does not, as far as we know. However, mathematicians amuse themselves with little "proofs" whose conclusion is absurd. The game is to identify the error. It's important because the "proofs" usually rely on errors that people often make by accident. Finding the error helps mathematicians avoid making the same error themselves.
Probably the simplest such game is a "proof" that $1 = 0$:
Let $x = 1$ and $y = 0$. Then $x \cdot y = y \cdot y$. Dividing both sides by $y$, $x = y$.
The error, of course, is that you cannot divide both sides by zero and keep the equality. The lesson is not to divide by something that is, or might be, $0$. In a more complex proof it might take some work to prove that the thing you want to divide by is actually never $0$, or, if it could be $0$, to treat the case where it is zero separately from the case where it isn't.
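For completeness, here is how the same starting point plays out if you handle the zero case properly (just the advice above written as one worked line):
$$x \cdot y = y \cdot y \;\Longrightarrow\; y\,(x - y) = 0 \;\Longrightarrow\; y = 0 \ \text{ or } \ x = y,$$
and since $y = 0$ here, the first alternative is the one that holds; nothing forces $x = y$.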
Sometimes this game becomes more serious. There was a thing that is now called "naive set theory" that basically said, "any collection of sets that you can describe is a set". This allows us to consider such things as "the set of all sets", and "the set of all sets that are elements of themselves". Of course, "the set of all sets" is an element of itself. The empty set is not an element of itself (no set is an element of the empty set; that's what empty means). So, what if I define $S$ to be "the set of all sets that are not elements of themselves"?
Oh dear. Now we have a contradiction. Is $S$ an element of $S$ or not? If it is, then it must satisfy the definition. But according to the definition, anything in $S$ is not an element of itself. And "itself" is $S$. That is to say, if it is an element then it isn't. And if it isn't an element of $S$, then by the definition it is an element of itself. So if it isn't an element then it is.
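Written out in symbols (this is just the argument above, with $\in$ meaning "is an element of"):
$$S = \{\, x \mid x \notin x \,\} \quad\Longrightarrow\quad \bigl(S \in S \iff S \notin S\bigr),$$
and no statement can be equivalent to its own negation, so something in the setup has to give.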
What this told mathematicians is that what is now called "naive set theory", just defining any old thing you like, isn't good enough. This is called Russell's Paradox, because Bertrand Russell published it first. Ernst Zermelo had previously discovered it but had not published it. Both Russell and Zermelo set about constructing systems that:
- allowed everything that mathematicians needed to do with sets (actually Russell worked on "type theory" rather than "set theory", but I don't think the difference matters to this explanation).
- prevented the paradox from occurring by limiting how you're allowed to define sets.
You could reasonably argue that prior to this activity, mathematics did contain contradictions. Fortunately not in a way that mattered, since as it happened there weren't any really important results that couldn't be brought onto more secure foundations. But for this and other reasons we cannot say for sure there are no contradictions, only that there are none we know about and haven't dealt with.
This might sound like a terrible crisis in mathematics, finding "math" to be contradictory. In some sense it was a crisis, in that it required a lot of re-checking of some basic assumptions and intuitions people had. It wasn't a disaster, since set theory as such had existed for less than 50 years at the time, and all the paradox said was that set theory wasn't quite right and needed improvement. Most of the mathematics done up to that time had been done without any particular appeal to this flawed set theory.
You can also think of it as an extended and surprising "proof by contradiction". A proof by contradiction says:
- suppose X is true
- deduce a contradiction
- conclude that X is not true
Set theoreticians had:
- constructed a theory of sets
- deduced a contradiction
- concluded that this theory of sets was no good
So mathematics does "involve" a lot of contradictions in the sense that a lot of proofs you see will end with one. Doesn't quite mean that it "contains" them :-)
He said that a lot of things are not well defined.
This is arguable at a stretch. In mathematics pretty much everything is defined, but one interesting question is what it's defined in terms of. Foundations of mathematics is a large subject within mathematics.
$1+2+3+4+\ldots = -\frac{1}{12}$
Not true, but you can make it look true in a variety of ways. This is one of those amusements I mentioned above.
Given a divergent series, you can perform some incorrect but plausible-looking manipulations that result in it appearing to have any total you please. AFAIK the original historical reason for choosing $-\frac{1}{12}$ in particular is that $1+2+3+4+\ldots$ is the divergent Dirichlet series for $-1$, and $-\frac{1}{12}$ is the value of the Riemann zeta function at $-1$. Now, when the Dirichlet series converges for a particular value, it is equal to the zeta function. In fact the zeta function can be defined as the "analytic continuation" of those convergent Dirichlet series. That means it's the only function that is equal to the convergent Dirichlet series and also has another special property called "holomorphy".
But the Dirichlet series for $-1$ doesn't converge; as I said to begin with, it's divergent. So $-\frac{1}{12}$ is the value of an important and interesting function that coincides with some Dirichlet series, but not this one, which doesn't have a value. So you can call it "the value of that Dirichlet series", but really it isn't. There is no contradiction.
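For reference, the objects mentioned above can be written down compactly (standard definitions, nothing specific to this answer): the zeta function is given by the Dirichlet series
$$\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}, \qquad \text{which converges for } \operatorname{Re}(s) > 1,$$
and is extended to other values of $s$ by analytic continuation. Formally setting $s = -1$ in the series produces $1 + 2 + 3 + 4 + \ldots$, which diverges, while the continued function satisfies $\zeta(-1) = -\frac{1}{12}$. That is the precise sense in which $-\frac{1}{12}$ is attached to the series without being its sum.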
what is infinity ∞?
Since you're not mathematically trained you can't be expected to know what infinity "is" mathematically. Quite aside from the fact that the concept of "infinity" is somewhat mysterious in English, your friend has asked you an unfair question of mathematical general knowledge. He might as well have asked you what the Riemann zeta function is, or the definition of a metric space, and if you've never studied those things then you just don't know.
In mathematics you deal with the concept of infinity by defining very strict rules for how to handle it, and then following them. In different contexts mathematicians will define "infinity" differently. So you should think of ∞ as just being a symbol used to mean some specific thing that's defined somewhere else (hopefully the person who uses it can tell you where). It doesn't necessarily mean the same thing every time it's used in different places.
Back in that subject, "foundations of mathematics", there's plenty of interesting work on how to think about the infinity of the natural numbers 0, 1, 2, 3, ... Further, there's interesting work on whether mathematicians "should" be treating infinity as a quantity at all. Since it's foundational, this work can in principle affect all of mathematics; in practice, most areas of mathematics can just pick the version of "infinity" they need (if any) and stick with it.
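To give one concrete example of such rules (the standard conventions for the "extended real line", which comes up again in another answer below): one adds two symbols $+\infty$ and $-\infty$ to the real numbers and decrees, for every real $a$,
$$a + \infty = \infty, \qquad a - \infty = -\infty, \qquad \frac{a}{\pm\infty} = 0,$$
while expressions like $\infty - \infty$ and $\frac{\infty}{\infty}$ are simply left undefined. The symbol only means what the rules say it means.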
How to disprove the previous two?
I can think of several possibilities:
- your friend has genuine questions about the foundations of mathematics. This is fine, but you, as a non-math-specialist, are not going to resolve his concerns. Refer him to a mathematician who will press him to be clear about exactly what his concerns are: what does he feel is not well defined? How has he derived his contradictions?
- your friend has encountered something he doesn't fully understand, and concluded that it is ill-defined and contradictory. That's a natural response, but in the case of mathematics there's an antidote, which is to ask genuine questions about the foundations of mathematics. See above.
- your friend is messing with you. Since he has offered you no proof of what he says, there is no need to find a particular error. It's as if he said, "English contains a lot of contradictions. For example, the word 'red' means 'orange'". Well, firstly, no it doesn't; it means something different, and you can just ignore him (and the sum of $1+2+3+4+\ldots$ is not $-\frac{1}{12}$; that series doesn't have a sum). But secondly, OK, in certain contexts it actually does mean that, for interesting historical reasons. In fact the word "orange" entered the English language long after certain orange things had already been named "red", such as redheads, red kites, and robins' red breasts. But this is not a contradiction (and in the context of Dirichlet series and the Riemann zeta function, $-\frac{1}{12}$ is in a strange sense what the sum of $1+2+3+4+\ldots$ "would be if it converged").
Solution 2:
There are no known contradictions in mathematics. That does not mean there aren't any; it just means we haven't found one. Considering that thousands of mathematicians are creating new mathematics daily, and not a single one has ever encountered a contradiction, there is quite overwhelming circumstantial evidence that there are no contradictions. However, it is impossible to prove mathematically that a contradiction does not exist. There is an important theorem due to Gödel which shows that if mathematics (formalized in a sufficiently strong axiom system) is free of contradictions, then that fact cannot be proved within the system itself.
That is just the state of affairs. Most mathematicians won't give this a second thought since the circumstantial evidence for lack of contradictions suffices to put any serious doubts to rest. Moreover, since we choose the axioms we work with, if a contradiction with the currently (more or less) accepted choice of axioms is found, we'll simply change the axioms so the found contradiction disappears. It is unlikely that will ever happen, but if it does it (probably) won't be a big deal and most of mathematics will survive intact.
As for your friend's confusion with the two results you mention, (s)he is just taking you about 300 years back in time, when people had all sorts of weird ideas about infinity and before rigorous definitions for dealing with infinite series were laid down. The apparent contradictions you see are nothing but the result of carelessly playing with undefined concepts. These problems were immediately solved with the advent of rigorous calculus. (The series you give simply does not converge, not to any number. As for the concept of infinity, it can be defined rigorously in many different ways and can be manipulated without causing any contradiction, as long as one understands the context.)
Solution 3:
There is already a great general answer by Ittay Weiss, so I will try a different approach. In fact, I will try to explain a bit about the infinite sum you stated. As for infinity, one could write a lot about it (mainly because there are multiple infinities, each with different properties), and I couldn't even hope to fit it here. Unfortunately, I don't know of any sources or texts about infinity that I would recommend.
Consider the following infinite sum
$$G(x) = x^0 + x^1 + x^2 + x^3 + \ldots = \sum_{i = 0}^{\infty}x^i.$$
We have no idea what such a sum might mean (without any proper definition, which wasn't stated). However, suppose that we could manipulate it in a way similar to numbers. Let's start with multiplying by $x$
$$x\cdot G(x) = x^1 + x^2 + x^3 + x^4 + \ldots = \sum_{i = 0}^{\infty}x\cdot x^{i},$$
and then add $x^0 = 1$ (we assume that $x \neq 0$)
$$1 + x\cdot G(x) = x^0 + x^1 + x^2 + x^3 + x^4 + \ldots = x^0 + \sum_{i = 1}^{\infty}x^{i} = \sum_{i = 0}^{\infty}x^i.$$
We got the same expression! In other words, we don't know what $G(x)$ means yet, but if it does mean anything similar to a number, then this number has to satisfy
$$G(x) = 1+x\cdot G(x),\quad \text{ that is, }\quad G(x) = \frac{1}{1-x}.$$
Indeed, there is a way to specify what $G(x)$ means for $|x| < 1$ and then, e.g. $$G\left(\frac{1}{2}\right) = 1+\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \ldots = \frac{1}{1-\frac{1}{2}} = 2.$$
On the other hand, for other values of $x$ we get peculiar results, like
$$G(2) = 1 + 2 + 4 + 8 + \ldots = \frac{1}{1-2} = -1,$$
which would mean that a sum of positive numbers came out negative. However, this is because we haven't defined what $G(2)$ would mean; we only said we could do it for $|x| < 1$. Nevertheless, there are ways of dealing with this (e.g. see $p$-adic numbers), but for $x=1$ we get an even stranger result, that is,
$$G(1) = 1 + 1 + 1 + \ldots = \frac{1}{1-1} = \frac{1}{0} =\ ?!$$
A similar situation happens with your sum. Consider
\begin{align} S(x) &= 0\cdot x^0 + 1\cdot x^1 + 2\cdot x^2 + 3\cdot x^3 + \ldots = \sum_{i = 0}^{\infty} i \cdot x^i \\ x\cdot S(x) &= \color{white}{0 \cdot x^{?}} + 0 \cdot x^1 + 1 \cdot x^2 + 2 \cdot x^3 + \ldots = \sum_{i=0}^{\infty} i\cdot x^{i+1} \\ x\cdot G(x) + x\cdot S(x) &= 0\cdot x^0 + 1\cdot x^1 + 2\cdot x^2 + 3\cdot x^3 + \ldots = \sum_{i = 0}^{\infty} i \cdot x^i \\ S(x) &= x\cdot G(x) + x \cdot S(x) \\ S(x) &= \frac{x \cdot G(x)}{1-x} = \frac{x}{(1-x)^2} \end{align}
which is true for $|x| < 1$ (i.e. we can define the infinite sum in a way that would make the above equality true with an appropriate $x$), but with $x = 1$ we get
$$ S(1) = \frac{1}{(1-1)^2} = \frac{1}{0} = \ ?! $$
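If you want to see the boundary between "defined" and "undefined" numerically, here is a small sketch (the function names are mine, just for illustration); it compares partial sums of $\sum_i i\cdot x^i$ with the closed form $\frac{x}{(1-x)^2}$ for $|x| < 1$, and shows the partial sums at $x = 1$ just growing without bound:

```python
# Minimal numerical sketch (illustrative only; names are made up for this example).

def partial_sum_S(x, n):
    """Partial sum of 0*x^0 + 1*x^1 + ... + n*x^n."""
    return sum(i * x**i for i in range(n + 1))

def closed_form_S(x):
    """The closed form x / (1 - x)^2, valid when |x| < 1."""
    return x / (1 - x) ** 2

# For |x| < 1 the partial sums approach the closed form:
for x in (0.5, -0.9):
    print(x, partial_sum_S(x, 200), closed_form_S(x))

# At x = 1 the closed form divides by zero, and the partial sums
# 1 + 2 + 3 + ... simply keep growing:
print(partial_sum_S(1, 10), partial_sum_S(1, 100))   # 55, 5050
```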
On the other hand, observe that
\begin{align} S(1) &= 1 + 2 + 3 + 4 + 5 + 6 + \ldots \\ 4S(1) &= \color{white}{0} + 4 + \color{white}{0} + 8 + \color{white}{0} + 12 + \ldots \\ -3S(1) &= 1 - 2 + 3 - 4 + 5 - 6 + \ldots \\ -S(-1) &= 1 - 2 + 3 - 4 + 5 - 6 + \ldots = -\frac{-1}{\big(1-(-1)\big)^2} = \frac{1}{4} \\ -3S(1) &= -S(-1) = \frac{1}{4} \\ S(1) &= -\frac{1}{12} \end{align}
which means that, if there is any number-like interpretation of $S(1)$ (and of $S(-1)$, which we have used), then it has to equal $-\frac{1}{12}$. However, note that we didn't argue whether there actually is such an interpretation. Nevertheless, those interpretations of $S(x)$ which do allow $S(1)$ also explain why the sum of positive values might be considered negative (i.e. given a particular interpretation it stops being strange). If you would like to know more about it, you can start here.
I hope this helps $\ddot\smile$
Solution 4:
The most common appearance of contradictions in mathematics is when one inserts their own ideas about a mathematical concept or object that aren't actually true.
For example, inserting the idea that $1 + 2 + 3 + 4 + \ldots$ is supposed to represent the kind of infinite summation one learns about in introductory calculus.
There are a number of other summation methods for series of numbers. One that apparently gets a lot of use in analytic number theory and theoretical physics is zeta regularization; when viewed as a zeta regularized sum rather than an introductory calculus sum, the series $1 + 2 + 3 + 4 + \ldots$ does indeed sum to $-\frac{1}{12}$.
There are unfortunately too many things mathematicians want to calculate for there to be an unambiguous notation for all of them.
The hardest part of $\infty$ is that lay people seem to have the idea that "Infinity" is a proper noun referring to one specific notion.
In reality, there are many different sorts of mathematical structures that work in different ways. For example, the extended real line has two points called $+\infty$ and $-\infty$, that lie on the endpoints of the number line.
However, the projective real line has just one extra point, and calls it $\infty$, but it lies on both endpoints of the number line. (think of the number line being looped around like a circle, with $\infty$ being the place where the 'ends' meet. Or think like old-school video games where you go off one end of the map and reappear on the other side)
And the notions of ordinal numbers and cardinal numbers one talks about in set theory include many infinite numbers (none of them called "infinity"), and have essentially nothing to do with the numbers referred to above as $+\infty$, $-\infty$, or $\infty$!
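To make "many infinite numbers" concrete (standard examples, added here for illustration): the ordinals continue counting past the natural numbers,
$$0,\ 1,\ 2,\ \ldots,\ \omega,\ \omega+1,\ \omega+2,\ \ldots,\ \omega\cdot 2,\ \ldots$$
while the cardinals measure sizes of sets, starting with $\aleph_0$ (the size of the set of natural numbers) and continuing through strictly larger infinite cardinals such as $2^{\aleph_0}$ (the size of the set of real numbers).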
This may be unfamiliar to the lay person, who has learned of new mathematical structures as extensions of previous ones: one learns about the natural numbers (and then about zero, if you used the convention that natural numbers are positive rather than nonnegative), then about negative numbers, rational numbers, real numbers and complex numbers.
The lay person naturally builds up the idea there's "one true universe" of numbers and the numbers they learn in school just make up more and more of this universe.
But unfortunately, that's really a very incorrect view of mathematics. Mathematicians invent new structures all the time to quantify or otherwise describe the kinds of things they're studying, and being motivated by and used for different purposes, they don't often fit together.
For example, the extended real line and the projective real line are incompatible with each other. And both are quite incompatible with other contexts that put more emphasis on the algebraic laws than on the geometric picture. (e.g. $x + x = x$ has three solutions in the extended real numbers)
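Spelling out that parenthetical (nothing here beyond the claim itself): in the extended reals,
$$0 + 0 = 0, \qquad (+\infty) + (+\infty) = +\infty, \qquad (-\infty) + (-\infty) = -\infty,$$
so $x + x = x$ has the three solutions $x \in \{0, +\infty, -\infty\}$, whereas in the ordinary real numbers only $x = 0$ works. That is the kind of algebraic difference that keeps these structures from fitting together.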
Solution 5:
"Are there contradictions in maths?" It's a great question. Before Russell Paradox It was generally believed that there were not contradictions. Better said, the matter hadn't been spotted yet. After Bertrand Russell the first crisis in maths appeared. And, from my point of view, the issue wasn't solved and it is not solved yet. I know there is the Sets and Classes Theory, all supported by some set of axioms, from which we can deduce all known proved theorems. But for me that didn't solved the issue. The axioms were chosen precisely to be able to derive all known maths to that time. So..... the only thing we made was to separate the contradictory results from the not contradictory ones, defining not-contradictory = derivable-from-axioms. Thus we still don't know exactly what is a contradiction in maths or why they appear. We only can do maths that derives from the axioms, so contradictions are impossible by definition at the axioms range. And that is, in my opinion, awful, cause we have killed Mathatics. We cannot go beyond axioms, and those are not perfect (there are known writings that point out that we lack axioms to prove Goldbach Conjecture, to find Prime List general term, o get the Well Ordered formula for Real Numbers). I think we are in a real Maths crisis we cannot face because we aren't aware of it, or don't want to be aware of. I haven't found writings on this subject and would love someone could give some clue were to find or at least look for it.