A non-mathematician’s (programmer’s) question on infinity?

Solution 1:

It is debatable whether any application of mathematics to real life (e.g. solving differential equations) actually implies that real life is described by the infinitary objects we use to model it (e.g. the real numbers). However, it tends to be extremely convenient to use infinitary objects over finitary ones because when we pass to the infinite limit, error terms go away and the main term of various approximations becomes exact. (For example, it is debatable whether derivatives "exist" in any meaningful sense. However, it is much easier to work with derivatives than their finitary analogues.) So using infinite objects is a good way to systematically ignore certain details in order to better focus on the relevant ones.
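
As an aside (my illustration, not part of the original answer), here is a minimal numerical sketch of that point about error terms: the finite-difference quotient of a function carries an error term that shrinks as the step $h$ does, and the derivative is exactly the idealized object you get by letting that error vanish. The function and the point are arbitrary choices.

```python
# Finite differences vs. the derivative: the error term shrinks as h -> 0.
# Illustration only; the function f and the point x0 are arbitrary choices.

def f(x):
    return x ** 3               # example function; its exact derivative is 3*x**2

def f_prime(x):
    return 3 * x ** 2           # the "infinitary" object: the derivative

x0 = 2.0
for h in (1e-1, 1e-2, 1e-3, 1e-4):
    quotient = (f(x0 + h) - f(x0)) / h   # the finitary analogue of f'(x0)
    error = quotient - f_prime(x0)       # error term; here it equals 3*x0*h + h**2
    print(f"h = {h:.0e}   difference quotient = {quotient:.6f}   error = {error:.6f}")
```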

A basic example is the central limit theorem. Its statement and proof require infinitary objects, but what comes out of it is a finitary understanding of how a large number of i.i.d. random variables behave together, and we get the bell curve, which allows us to understand a lot of natural phenomena.
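
To make that concrete, here is a quick simulation (my addition, not part of the original answer) of the finitary content of the central limit theorem: standardized sums of many i.i.d. uniform variables behave approximately like a standard normal, which is exactly the bell-curve behaviour the theorem predicts. The sample sizes are arbitrary demo values.

```python
# Empirical illustration of the central limit theorem: the standardized sum of
# many i.i.d. uniform(0, 1) variables is approximately standard normal.
import math
import random
import statistics

n = 1000          # number of i.i.d. variables in each sum
trials = 10_000   # number of sums we sample

mu, sigma = 0.5, math.sqrt(1 / 12)   # mean and standard deviation of uniform(0, 1)

standardized = []
for _ in range(trials):
    s = sum(random.random() for _ in range(n))
    standardized.append((s - n * mu) / (sigma * math.sqrt(n)))

# The bell curve predicts mean ~ 0, standard deviation ~ 1, and roughly 68%
# of the standardized sums within one standard deviation of the mean.
print("mean   ~", round(statistics.mean(standardized), 3))
print("stdev  ~", round(statistics.stdev(standardized), 3))
print("within one sd ~", sum(abs(z) < 1 for z in standardized) / trials)
```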

Solution 2:

Let me point out a few reasons why, in my opinion, we are often interested in the infinite.

  1. Often, dealing with the infinite case is easier or, at least, leads to simpler and more understandable statements. Picking a well-known and celebrated example, the prime number theorem (PNT) says that the number of primes less than a given number $x$ is approximately $x/\log x$. Consider what this is really saying. Writing $\pi(x)$ for the number of primes less than $x$, the PNT says that the ratio $\pi(x)/(x/\log x)$ tends to one as $x$ tends to infinity. This is written as $$ \pi(x)\sim \frac{x}{\log x}\qquad\qquad{\rm(1)} $$ where the '$\sim$' means that the ratio of the two sides tends to one as $x$ tends to infinity. Precisely, this means the following: if you pick any positive number $\epsilon$, no matter how small, then the ratio of the two sides of (1) will be within a distance $\epsilon$ of 1 when $x$ is large enough. We can express this by saying that there is a number $N_\epsilon$ (depending on your choice of $\epsilon$) such that $1-\epsilon < \pi(x)/(x/\log x) < 1+\epsilon$ for all numbers $x$ greater than $N_\epsilon$. You might ask: what does this say about the number of primes less than any given $x$? Say, the number of primes less than $2^{100}$? The answer is: absolutely nothing! Without explicit bounds on the ratio $\pi(x)/(x/\log x)$ for a given number $x$, all we have is a statement about the infinite. Of course, bounds can be given, and you will need them to apply the PNT to some actual fixed value of $x$. That is beside the point, though. The PNT as given above is a simply stated and enlightening theorem, much more so than some complicated but accurate expression for all values of $x$ up to some huge number. There would also be many such approximations of varying simplicity and accuracy, whereas the PNT is a single elegant statement which every mathematician knows. (A small numerical sketch of the ratio $\pi(x)/(x/\log x)$ follows after this list.)

  2. More than the raw statements of theorems, what often interests mathematicians is the insight that they provide. Saying that something holds for every natural number -- and proving it -- is much more interesting, and provides more insight into the properties of numbers, than just saying that you have checked that it is true for the first billion numbers. It has been known for some time that the Riemann hypothesis holds for a great many zeros of the zeta function. I'm not sure what the latest count is, but at least 10 trillion zeros have been checked, and they do indeed lie on the 'critical line' in accordance with the hypothesis. If, however, just one zero were found off the critical line, that would go against many of the properties people expect of such functions, and against many of the expected properties of the prime numbers. If the Riemann hypothesis were found to be false, a lot of other conjectures would be thrown into doubt too.

  3. When we prove mathematical theorems, they will usually be used to derive further results, often in rather involved ways. Even if you prove a theorem up to 'reasonably high' values of $x$, someone will want to use it as a stepping stone in proving further results, and it will likely be needed for unspecified values of $x$. Coupled with techniques such as induction, even huge values of $x$ could become important. As mathematical proofs are frequently built up out of a large number of small steps, it is imperative that each individual part is watertight; otherwise, you could quickly end up with nonsense. So it is important that results can be stated and proven for all cases within certain precisely specified parameters. Just checking a finite number of cases would be of limited use.
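
To make point 1 above a little more concrete, here is the promised numerical sketch (my addition, not part of the original answer): a plain sieve computes $\pi(x)$, and the ratio $\pi(x)/(x/\log x)$ slowly drifts towards 1. Of course, no finite table of values like this proves the theorem; that is exactly the point.

```python
# Watching pi(x) / (x / log x) approach 1, as in the prime number theorem.
# A basic sieve of Eratosthenes; the cutoffs below are arbitrary demo values.
import math

def prime_count(limit):
    """Return pi(limit), the number of primes <= limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return sum(is_prime)

for x in (10**3, 10**4, 10**5, 10**6):
    pi_x = prime_count(x)
    ratio = pi_x / (x / math.log(x))
    print(f"x = {x:>9,}   pi(x) = {pi_x:>7,}   pi(x)/(x/log x) = {ratio:.4f}")
```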

Solution 3:

You write (I'm paraphrasing): "Why does mathematics need and require proof for the infinite cases as well, instead of just being satisfied by knowing it for sufficiently large finite numbers?"

I'm going to put this mostly in terms of number theory, since I think it'll be easiest to understand in that case. Often we generalize to the infinite because it's very convenient, and it's a natural abstraction to make. Say that we want to understand why a certain pattern in mathematics holds. Some patterns are purely "coincidental": for example, we can have identities that hold only for a very small number of cases.

Such patterns are often not that interesting, mostly because they don't carve out any real structure that's comprehensible to us. On the other hand, patterns that hold for all numbers sometimes tell us something deep about how "numbers work". For a very easy example, look at the fact that for every $n \in \mathbb{Z}$, $2n$ is even. This tells us a lot about how the integers that we love behave. If we could only say "this holds for all numbers less than ...", it would not be nearly as impressive.
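
As a side note (my addition, not from the original answer), this is exactly the kind of "for every $n$" statement that a proof assistant can verify in a single line; a minimal sketch, assuming Lean 4 with Mathlib, where `Even m` means $\exists r,\ m = r + r$:

```lean
import Mathlib

-- "For every integer n, 2n is even": one proof covering all n at once,
-- rather than a check of finitely many cases. The lemma two_mul n : 2 * n = n + n
-- supplies the witness r = n required by Even (2 * n).
theorem two_mul_is_even (n : ℤ) : Even (2 * n) :=
  ⟨n, two_mul n⟩
```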

On the other hand, we sometimes have conjectures that seem to be true from numerical evidence, but we don't know for sure. What if there's a counterexample at a number much, much greater than anything we could imagine? An example of a conjecture that was eventually disproved by explicit computation is the Pólya conjecture, which asserted that, for every $n \ge 2$, at least half of the positive integers up to $n$ have an odd number of prime factors (counted with multiplicity). Had it been true for all $n$, it would have revealed genuine structure in the integers; since it isn't true, it's not so easy to say just what the observed pattern means.
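
For the curious, here is a small sketch (my addition; the numerical facts in the comments are reported values, far beyond what this loop can reach) of the quantity the Pólya conjecture is about, the summatory Liouville function $L(n)$:

```python
# The Pólya conjecture in computable form: L(n) = sum of lambda(k) for k <= n,
# where lambda(k) = (-1)^(number of prime factors of k, counted with multiplicity).
# The conjecture asserted L(n) <= 0 for all n >= 2. It survives every range a
# quick loop like this can reach; the smallest counterexample is reported to be
# n = 906,150,257, which is exactly why finite checking can be so misleading.

def liouville(k):
    """lambda(k): +1 if k has an even number of prime factors, else -1."""
    count = 0
    d = 2
    while d * d <= k:
        while k % d == 0:
            k //= d
            count += 1
        d += 1
    if k > 1:
        count += 1
    return 1 if count % 2 == 0 else -1

L = 0
for n in range(1, 10**5 + 1):        # a modest demo range
    L += liouville(n)
    if n >= 2 and L > 0:
        print("counterexample at n =", n)
        break
else:
    print("L(n) <= 0 for every 2 <= n <= 100000, just as the conjecture predicted")
```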

We could surely verify that the Riemann hypothesis holds for an enormous number of zeros, but without a proof that it holds in the general, infinite case, it's just a curious pattern.

Other branches of mathematics deal with the "infinite" in different ways. For example, it is sometimes natural to introduce the "extended real numbers", which are just the real numbers together with $+\infty$ and $-\infty$. This gives us a convenient setting in which to apply conceptually more advanced tools to the usual real numbers.
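
A programmer's analogy (my addition, not part of the original answer): IEEE-754 floating-point numbers already come with such a pair of extra elements, and they are convenient for much the same reason, e.g. as a starting value that any finite number will beat.

```python
# The extended reals in miniature: floats include +inf and -inf, which bound
# every finite value and interact with arithmetic in the expected ways.
import math

inf = math.inf

print(min(inf, 42.0))                 # 42.0 -- +inf acts as an identity for min
print(1.0 / inf)                      # 0.0
print(inf + 1e308)                    # inf
print(-inf < -1e308 < 1e308 < inf)    # True

# Typical use: start a running minimum "at infinity" so any real value replaces it.
best = inf
for value in (3.5, 2.1, 7.9):
    best = min(best, value)
print(best)                           # 2.1
```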