Math without infinity
Does math require a concept of infinity?
For instance if I wanted to take the limit of $f(x)$ as $x \rightarrow \infty$, I could use the substitution $x=1/y$ and take the limit as $y\rightarrow 0^+$.
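For instance, taking $f(x)=\frac{2x+1}{x+3}$ as an illustrative example, the substitution gives
$$\lim_{x\to\infty}\frac{2x+1}{x+3}=\lim_{y\to 0^+}\frac{2/y+1}{1/y+3}=\lim_{y\to 0^+}\frac{2+y}{1+3y}=2.$$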
Is there a statement that can be stated without the use of any concept of infinity but which unavoidably requires it to be proved?
Solution 1:
Surprisingly, infinity proves necessary even for finite combinatorial mathematics. For a nice explanation of why there cannot be any such thing as a comprehensive, self-contained discipline of finite combinatorial mathematics, see Stephen G. Simpson's writeup of his expository talk Unprovable Theorems and Fast-Growing Functions, Contemporary Math. 65 (1987), 359-394.
Simpson gives a detailed discussion of three theorems about finite objects whose proofs necessarily require the use of infinite sets. The three theorems concern colorings of finite sets (the modified finite Ramsey theorem), embeddings of finite trees (Friedman's finite form of Kruskal's theorem), and iterated exponential notation for integers (Goodstein's theorem); a small computational sketch of Goodstein sequences follows the excerpt below.
Below is an excerpt from the introduction.
$\quad$The purpose of the talk is to exposit some recent results (1977 and later) in which mathematical logic has impinged upon finite combinatorics. Like most good research in mathematical logic, the results which I am going to discuss had their origin in philosophical problems concerning the foundations of mathematics. Specifically, the results discussed here were inspired by the following philosophical question. Could there be such a thing as a comprehensive, self-contained discipline of finite combinatorial mathematics?
$\quad$ It is well known that a great deal of reasoning about finite combinatorial structures can be carried out in a self-contained finitary way, i.e. with no reference whatsoever to infinite sets or structures. I have in mind whole branches of mathematics such as finite graph theory, finite lattice theory, finite geometries, block designs, large parts of finite group theory (excluding character theory, in which use is made of the field of complex numbers), and large parts of number theory (including the elementary parts but excluding analytical techniques such as contour integrals). One could easily imagine comprehensive textbooks of these subjects in which infinite sets are never mentioned, even tangentially. All of the reasoning in such textbooks would be concerned exclusively with finite sets and structures.
$\quad$ Consequently, there is a strong naive impression that the answer to our above-mentioned philosophical question is "yes."
$\quad$ However, naive impressions can be misleading. I am going to discuss three recent results from mathematical logic which point to an answer of "no." Namely, I shall present three examples of combinatorial theorems which are finitistic in their statements but not in their proofs. Each of the three theorems is simple and elegant and refers only to finite structures. Each of the three theorems has a simple and elegant proof. The only trouble is that each of the proofs uses an infinite set at some crucial point. Moreover, deep logical investigations have shown that the infinite sets are in fact indispensable. Any proof of one of these finite combinatorial theorems must involve a detour through the infinite. Thus, in a strong relative sense, the three theorems are "unprovable" -- they cannot be proved by means of the finite combinatorial considerations in terms of which they are stated.
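To make the third of these theorems concrete, here is a small Python sketch (mine, not Simpson's) of how a Goodstein sequence is generated: write $n$ in hereditary base-$b$ notation, replace every occurrence of $b$ by $b+1$, subtract $1$, and repeat with the next base. Goodstein's theorem says every such sequence eventually reaches $0$, yet this cannot be proved in Peano arithmetic.

```python
def hereditary_rebase(n, base, new_base):
    """Write n in hereditary base-`base` notation and replace every
    occurrence of `base` (including inside the exponents) by `new_base`."""
    if n == 0:
        return 0
    total, exponent = 0, 0
    while n > 0:
        digit = n % base
        if digit:
            total += digit * new_base ** hereditary_rebase(exponent, base, new_base)
        n //= base
        exponent += 1
    return total


def goodstein(n, steps):
    """Yield up to `steps` terms of the Goodstein sequence starting at n."""
    base = 2
    for _ in range(steps):
        yield n
        if n == 0:
            return
        n = hereditary_rebase(n, base, base + 1) - 1
        base += 1


print(list(goodstein(3, 10)))  # [3, 3, 3, 2, 1, 0] -- reaches 0 quickly
print(list(goodstein(4, 6)))   # [4, 26, 41, 60, 83, 109] -- keeps growing for an
                               # astronomically long time before finally reaching 0
```

The point Simpson makes is that no argument confined to finite objects like these can establish, in general, that the sequences terminate; any proof must detour through the infinite.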
Solution 2:
Does math require an $\infty$? The question assumes that all of math is governed by a single set of universally agreed-upon rules that would settle, for example, whether infinity is a necessary concept. This is not the case.
I might claim that math does not require anything, even though a mathematician requires many things (such as coffee and paper to turn into theorems, and so on). But that is a sharp (like a sharp inequality) distinction, and I don't want to run the conversation off a valuable road.
So instead I will claim the following: there are branches of math that rely on infinity and branches that do not, but most of them do. In this sense, I think that most of the mathematics practiced each day rests on a system of logic and a set of axioms that involve infinities in various ways.
Perhaps a different question that is easier to answer is - "Why does math have the concept of infinity?" To this, I have a really quick answer - because $\infty$ is useful. It lets you take more limits, allows more general rules to be set down, and allows greater play for fields like Topology and Analysis.
And by the way - in your question you distinguish between $\lim _{x \to \infty} f(x)$ and $\lim _{y \to 0^+} f(\frac{1}{y})$. Just because we hide behind a thin curtain, i.e. pretend that $\lim_{y \to 0^+} \frac{1}{y}$ is just another name for infinity, does not mean that we are actually avoiding a conceptual infinity.
So to conclude, I say that math does not require $\infty$. If somehow no one imagined how big things get 'over there', or considered questions like "How many functions are there from the integers to such-and-such a set?", math would still go on. But infinity is useful, and there's little reason to ignore its existence.
Solution 3:
There are many different notions of "infinity" in math, and you haven't defined what you mean by infinity, so your question doesn't have a well-defined answer. But let me try to interpret your question the way I think you meant it, and try to clear up some confusion.
Let me first state that in most of mainstream mathematics, the symbol $\infty$ is merely notation, and not an actual object.
For instance, when we say that the size of the set $\mathbb R$ of real numbers is "infinite", we simply mean that it is not finite, that is, there is no nonnegative integer $n$ such that it contains exactly $n$ elements. Nothing more magical than that. We don't mean that it contains exactly some number $\infty$ of elements.
For another example, when we say that $\lim_{x\to\infty}f(x)=\infty$, we do not mean that the limit equals some number $\infty$ when $x$ approaches that same number $\infty$. The notation $\lim_{x\to\infty}f(x)=\infty$ is actually a special case and needs its own definition, different from the definition of $\lim_{x\to a}f(x)=b$ where $a,b$ are real numbers. By definition, $\lim_{x\to\infty}f(x)=\infty$ means that we can make $f(x)$ larger than any given number $N$ by letting $x$ be larger than some number $M$ depending on $N$. (The formal definition is: $\lim_{x\to\infty}f(x)=\infty$ if and only if $\forall N>0\ \exists M>0\ \forall x: x>M\implies f(x)>N$.) In this definition, there is no mention of any object called $\infty$.

This is an important distinction. When calculus was invented by Newton and Leibniz, they worked with infinitely small and infinitely large "numbers" in their derivations, which usually happened to give correct answers, but there are cases where doing so leads to paradoxes. Therefore a lot of effort went into reformulating calculus without using infinities, for example via the limit definition above. (For more about this, you can read the Wikipedia section on the foundations of calculus.)
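As a quick illustration of how this definition is used (with $f(x)=x^2$ as an arbitrary example): given any $N>0$, take $M=\sqrt N$; then
$$x>M \implies f(x)=x^2>M^2=N,$$
so $\lim_{x\to\infty}x^2=\infty$, and at no point did we treat $\infty$ as an object.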
For this reason, I would claim that $\infty$ is not "used" or "required" in most of mathematics, because it is just a notation, and not an object by itself.
However, one can construct objects to represent some kind of "infinity", and use these as tools in mathematics. Let me tell you how to do this with the two examples above.
One can assign a "number" $|S|$ to any set $S$, called a cardinal number, which represents the size of that set. For any finite set this is just an integer, the number of elements in the set. But infinite sets also get a size, and we can view these sizes as infinite numbers. In this setting there exist many different infinite numbers, not just one. And fascinatingly, one can show that $|\mathbb Z|=|\mathbb Q|<|\mathbb R|$: the number of integers is the same as the number of rational numbers (!), but the number of real numbers is strictly greater than the number of integers (or rationals)!
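As a small illustration of the first equality, here is an illustrative Python sketch (my own, restricted to the positive rationals) that lists every positive rational exactly once by walking the diagonals of the numerator/denominator grid; being able to put them all in one list is precisely what having the same cardinality as the positive integers means:

```python
from fractions import Fraction
from itertools import islice


def positive_rationals():
    """Enumerate every positive rational exactly once by walking the
    diagonals numerator + denominator = 2, 3, 4, ... and skipping
    fractions whose reduced form has already appeared."""
    seen = set()
    diagonal = 2
    while True:
        for numerator in range(1, diagonal):
            q = Fraction(numerator, diagonal - numerator)
            if q not in seen:
                seen.add(q)
                yield q
        diagonal += 1


print([str(q) for q in islice(positive_rationals(), 10)])
# ['1', '1/2', '2', '1/3', '3', '1/4', '2/3', '3/2', '4', '1/5']
```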
In calculus, instead of working with the number set $\mathbb R$, one can work with the extended reals $\bar{\mathbb R}=\mathbb R\cup\{-\infty,+\infty\}$, which consist of all real numbers together with two new objects that we denote by $-\infty$ and $+\infty$. These two new objects are formally just symbols, and so far don't have any meaning.

We can then introduce a notion of "neighborhoods". A neighborhood of a real number is any set which contains an open interval around that real number. A neighborhood of $+\infty$ is any set which contains $\{x:x>a\}\cup\{+\infty\}$ for some real number $a$. Now we can define the limit $\lim_{x\to a}f(x)=b$ as follows: for every neighborhood $N$ of $b$, there exists a neighborhood $M$ of $a$ (depending on $N$) such that if $x\in M$ and $x\neq a$, then $f(x)\in N$. This definition also works for $a=+\infty$ and $b=+\infty$! We have therefore managed to define the limit $\lim_{x\to+\infty}f(x)=+\infty$ actually in terms of an object $+\infty$.
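As a sanity check (my own unwinding of the definitions), taking $a=b=+\infty$ and using the basic neighborhoods $\{x:x>c\}\cup\{+\infty\}$ recovers exactly the earlier formulation in terms of $N$ and $M$:
$$\lim_{x\to+\infty}f(x)=+\infty \iff \forall N>0\ \exists M>0\ \forall x:\; x>M\implies f(x)>N.$$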
To end my comment, there are certain areas of math where some concepts are most naturally expressed in terms of infinities, and some people explicitly study infinities just because they find them interesting.