Do functions all have an infinite number of limits?
I originally understood limits to be where functions run towards $\pm\infty$ as they approach some specific $x$ value, or where they run towards (but never touch) some specific value (like $0$) as $x$ approaches infinity, thus making that value impossible to reach (a limit the function can't cross, in a Zeno's-paradox-like way).
Now that I'm beginning to actually study calculus, I'm seeing that limits are somehow broader. Specifically, I now see that limits are always referred to in relation to some stated $x$ value being approached (as indicated by the conventional notation: $\lim_{x\to p} f(x)$). But this makes it seem to me like you can pick any value (any $p$) you want, and that the limit is simply whatever value the function approaches as $x$ approaches whatever value you decided to pick.
- Wouldn't that mean functions have an infinite number of limits? (You can find an infinite number of points on a line/curve, after all.)
- If so, what's so limiting about "limits" then?
- Also, wouldn't this make limits the most stupidly obvious things? For example: $f(x)=x^2$ will obviously approach $4$ as you pick $x$ values arbitrarily closer and closer to $2$ ($1.9, 1.99, 1.999, 1.9999, 1.99999,$ etc.)?
- If functions don't have an infinite number of limits, then how do you recognize which values for $x$ to approach make sense?
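(As a numerical sanity check of the $x^2$ example above — this little script is just my own sketch, not anything from a textbook:)

```python
# Evaluate f(x) = x**2 at x values creeping up on 2 from below.
def f(x):
    return x ** 2

xs = [2 - 10 ** -k for k in range(1, 6)]   # 1.9, 1.99, 1.999, 1.9999, 1.99999
values = [f(x) for x in xs]

# The outputs crowd toward 4 but never equal it, since x never equals 2.
print(values)
```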
Obviously, preconceived notions can screw with actually learning how a thing works because it can frame the information you're trying to integrate within a meaningless perspective, but figuring out how to shed those preconceived notions can be hard when you don't understand where you've gone wrong in the first place. ...oh, god, someone help me. I'm stuck in a loop.
Solution 1:
First, congratulations on being an inquisitive mathematics student! These are exactly the sorts of questions you should be asking yourself and others. Questions such as "why is this important?" and "why is it called that?" are precisely what mathematics is about - i.e. it's not a set of arbitrary rules to annoy students; all of these things were created for a reason!
Are there infinitely many boring limits?
You are correct, most "nice" functions defined on an interval or on $\mathbb R$ have infinitely many limits (see the comments section for counterexamples) and yes, they are "stupidly obvious" for continuous functions, i.e. $$\lim_{x\to a} f(x) = f(a)$$ which is often the definition of continuity, too.
But then there are many interesting cases, for example, try to figure out what's happening for $\lim_{x\to 0} \sin(1/x)$ and $\lim_{x\to 0} x\sin(1/x)$. Even if this is beyond your level, just thinking about the functions and looking at their graphs will give you some intuition about how interesting limits can be. One of them is continuous at zero. Which one? Why? What happens to the other one?
$$\sin(1/x)$$
$$x\sin(1/x)$$
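If you want to poke at these two functions numerically before you have the theory, here is a quick sketch (my own, not part of the original answer; the names `g` and `h` are just labels I chose):

```python
import math

def g(x):
    return math.sin(1 / x)        # oscillates forever between -1 and 1

def h(x):
    return x * math.sin(1 / x)    # squeezed: |h(x)| <= |x|, so it dies out

xs = [10 ** -k for k in range(1, 7)]   # x shrinking toward 0
g_vals = [g(x) for x in xs]   # bounces around with no trend toward anything
h_vals = [h(x) for x in xs]   # shrinks toward 0 along with x
```

The squeeze $|x\sin(1/x)| \le |x|$ is what forces the second limit to be $0$, while the first function has no limit at $0$ at all.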
As a sidenote, there are also functions that are discontinuous everywhere. Those are usually rather hard to understand, though (although Dirichlet's function is quite accessible).
Even "boring" limits are useful
But even for functions that are not that interesting, like $\text{sgn}(x)$, which gives you the sign of $x$, i.e. it is $-1$ for negative numbers, $1$ for positive numbers and $0$ for zero, limits are a useful concept. The one-sided limits at zero (coming from the left and the right) are $-1$ and $1$, respectively, whereas the function value is $0$. This intuitively makes complete sense (draw it!), and thus having a rigorous mathematical object, the limit, to support this intuition is useful.
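The one-sided behavior is easy to see numerically (a minimal sketch, not from the original answer):

```python
def sgn(x):
    # sign function: -1 for negatives, 1 for positives, 0 at zero
    if x < 0:
        return -1
    if x > 0:
        return 1
    return 0

# Approach 0 from the right and from the left.
right = [sgn(10 ** -k) for k in range(1, 5)]    # all 1
left = [sgn(-10 ** -k) for k in range(1, 5)]    # all -1

# The two one-sided limits disagree with each other and with sgn(0) == 0.
```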
Why "limit"?
I do not know the proper etymology of the mathematical term "limit", but the English word comes from the word for "frontier" or "boundary". This makes sense even for a "boring" limit like $\lim_{x\to 2} x^2$ - as you yourself suggested, you can approach $2$ through the sequence $1.9, 1.99, 1.999, \dots$, that is, you come closer and closer to the boundary point $2$ but never quite touch it, even after infinitely many steps. In that sense, $2$ would be your "limit", or the "boundary", which you never quite attain.
Lastly, note that you never actually touch the limit point while you're approaching the limit. This is important - what if the function were undefined at that point! Answers by other users stress this point more; make sure you read them.
Solution 2:
For typically used functions at typically considered values, limits and evaluation are the same thing.
More precisely, "typical" functions are continuous at "typical" values, and we have a theorem (or definition)
If $f$ is continuous at $a$, then $\lim_{x \to a} f(x) = f(a) $
Limits generalize this notion; e.g. if I define a function $f$ that is undefined at $x=1$ but satisfies $f(x) = x+1$ whenever $x \neq 1$, then the graph looks like the line $y = x + 1$ with a hole at the point $(1, 2)$.
The hole is easily filled in to make a continuous graph; the limit is a systematic way to determine precisely what value is needed to fill in the gap: here,
$$ \lim_{x \to 1} f(x) = 2 $$
We could then define a function $\bar{f}$ that is a continuous extension of $f$ by filling in the hole:
$$ \bar{f}(x) = \begin{cases} f(x) & x \neq 1 \\ 2 & x = 1 \end{cases} $$
then limits are once again just evaluation: e.g.
$$ \lim_{x \to 1} \bar{f}(x) = \bar{f}(1) $$
Incidentally, a sample formula for the function $f$ is
$$ f(x) = \frac{x^2-1}{x-1} = \frac{(x-1)(x+1)}{x-1} $$
and a formula for the continuous extension is
$$\bar{f}(x) = x+1 $$
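The hole-filling story above translates directly into a small numeric sketch (my own illustration of the answer's example, using the hypothetical names `f` and `f_bar`):

```python
# f is undefined at x = 1 (division by zero); f_bar fills the hole with 2.
def f(x):
    return (x ** 2 - 1) / (x - 1)      # raises ZeroDivisionError at x = 1

def f_bar(x):
    return 2 if x == 1 else f(x)       # the continuous extension

# Values of f near 1 crowd around 2, the value f_bar assigns at the hole.
near = [f(1 + 10 ** -k) for k in range(1, 6)]
```

Algebraically $f(1+h) = 2 + h$ for $h \neq 0$, which is why the sampled values sit just above $2$.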
The same is true for limits at infinity; an important concept in analysis often not taught in introductory classes is the extended number line whose points are called extended real numbers. It is formed by adding two new points $+\infty$ and $-\infty$ at the 'ends' of the ordinary number line.
There is a topological definition of limit that can be applied for functions on the extended number line or that take values in the extended real numbers (or both); the notion of limit is once again that of filling in holes. And again we are interested in doing continuous extensions; e.g. to define $\arctan(+\infty) = \pi/2$ and $\arctan(-\infty) = -\pi/2$.
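Numerically you can watch $\arctan$ flatten out toward $\pm\pi/2$, which is exactly why those are the natural values to assign at $\pm\infty$ (a quick sketch of mine, not part of the original answer):

```python
import math

big = [10 ** k for k in range(1, 7)]
upper = [math.atan(x) for x in big]     # climbs toward  pi/2
lower = [math.atan(-x) for x in big]    # falls toward  -pi/2

# Python's math.atan already accepts infinity and returns pi/2 there,
# i.e. the library has baked in this continuous extension.
print(math.atan(float("inf")))
```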
The "filling in holes" idea can be made precise using topology — in particular, in terms of the closure of the graph of a function.
Solution 3:
I think you have the notion that limits are useful for studying the behavior of a function at some troublesome point, like, say, the behavior of $f(x) = 1/x$ at the troublesome point $0$. This is a good start, and mathematicians actually carried the idea further: limits are used to study the behavior of a function not only at troublesome points but also at normal points where there is no trouble.
More specifically, when we deal with the limit of a function $f$ at a point $c$, we are not at all interested in the question "What is the value of $f$ at $c$?"; rather, we are interested in studying the values of $f$ at all points near $c$. Thus as long as a function $f$ is defined at all points near $c$, it is OK to talk about its limit at $c$. Hence, depending on the function, it is possible to talk about its limit at many points. For example, if $f(x) = x^{2}$ then we can talk about its limit at any point $c$ without any problem. Thus, to use your phrase, "functions can have an infinite number of limits".
Now you ask: what is limiting about limits? I think the "limiting part" comes from the fact that the limit of $f$ at $c$ is dependent on values of $f$ at all points near $c$ but not dependent at all on its value at $c$. This is sometimes expressed by the fact that the values of the variable (say $x$) which we are dealing with are limited (or say restricted) so as not to reach point $c$. On the other hand some students mistakenly think that if $\lim_{x \to c}f(x) = L$ then $f(x)$ cannot reach $L$. This is wrong (just take any function $f$ which is constant). The limiting part is related to the variable $x$ and not to the values of function $f(x)$. However there can be cases where $f(x)$ is also restricted not to reach its limit $L$ but it is not necessary to be so.
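The point that the limit ignores the value *at* $c$ can be made vivid with a deliberately sabotaged function (a hypothetical example of mine, not from the original answer):

```python
# g agrees with x + 1 everywhere except at x = 1, where it is set to 100.
def g(x):
    return 100 if x == 1 else x + 1

# Approaching 1 only ever samples g *near* 1, never g(1) itself,
# so the sampled values still crowd around 2, not 100.
approx = [g(1 + 10 ** -k) for k in range(1, 6)]
```

The limit at $1$ is still $2$; the outlandish value $g(1) = 100$ is simply invisible to it.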
Wouldn't this make limits the most stupidly obvious thing? You are now talking about functions $f$ and points $c$ such that $f$ has no troublesome behavior near $c$. Technically we say in this case that $f$ is continuous at $c$, and $\lim_{x \to c}f(x) = f(c)$. There are many functions whose limit at a point is the same as their value at that point. Then why bother to study the limit of such functions? Well, such functions possess many nice properties which are not too difficult but perhaps not obvious.
For example, let $f$ be a function which is continuous at $c$ (think $f(x) = x^{2}$ at $x = 2$) and also assume that $f(c) > 0$. Then it can be observed with a little effort that $f$ is positive at all points near $c$. If $f(c) < 0$ then $f$ would be negative at all points near $c$. Thus continuous functions preserve signs near the point of continuity. Now consider that the function $f$ is continuous at all points of an interval $[a, b]$. Then the magic happens: one can prove (though the proof is not easy) that if $f(x) \neq 0$ for all $x \in [a, b]$ then $f$ maintains a constant sign on the whole interval $[a, b]$. The fact mentioned in the last sentence is not obvious and may not hold for discontinuous functions. There are many further properties of continuous functions which emphasize the need to study such stupidly obvious limits.
Continuous functions ensure that their values change only slightly when their argument changes slightly. Thus there are no surprises. On the other hand, consider the function $f(x) = 1/x$ near $0$. If $x$ is positive and near $0$ (say $x = 0.00001$) then $f$ has a large positive value. Change the value of $x$ by a little to make $x = -0.00001$, and the value of $f$ is suddenly a big negative number. This kind of small change in the value of $x$ leading to a very big change in the value of $f$ is a trouble / surprise for us (think of stories of a king becoming a pauper the next day; who would want that!). So continuity is a desirable property and is worth studying.
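The sudden flip described above, in numbers (a trivial sketch of mine):

```python
def f(x):
    return 1 / x

a = f(0.00001)     # a huge positive value, about  100000
b = f(-0.00001)    # a huge negative value, about -100000

# A tiny nudge of the input (2e-5 wide) swings the output by about 200000.
```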
Lastly, you ask: how do you recognize which values for $x$ to approach make sense? The answer is simple. We can talk about $\lim_{x \to c}f(x)$ provided the function $f$ is defined in some neighborhood of $c$ (except possibly at $c$ itself). The points $c$ which it makes sense for $x$ to approach are precisely those near which the function is defined.
It is better to give examples. Let $f(x) = 1/x$. Although $0$ is a troublesome point (because $f$ is not defined there), it makes sense to ask what happens when $x$ approaches $0$, because apart from $0$ itself, $f$ is defined at all nearby points. And since $0$ is the only troublesome point and the rest of the points are fine, for this function it also makes sense to ask what happens when $x$ approaches any non-zero point.
Now consider $f(x) = 1/\sin(1/x)$. Here the function is undefined at $0$ and at the points $x = 1/(n\pi)$ for nonzero integers $n$, and you can see that it makes sense to ask what happens when $x$ approaches any one of the points $1/(n\pi)$. But it does not make sense to ask what happens when $x$ approaches $0$, because every neighborhood of $0$ contains infinitely many of the exceptional points $x = 1/(n\pi)$ where $f$ is not defined. So for this function we can't talk about $\lim_{x \to 0}f(x)$.
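You can exhibit those exceptional points piling up near $0$ directly (a sketch of mine; the helper `bad_points_below` is a hypothetical name):

```python
import math

# List a few points x = 1/(n*pi) lying inside (0, eps).  These are points
# where sin(1/x) = 0, so f(x) = 1/sin(1/x) is undefined there.  No matter
# how small eps is, such points exist, so no neighborhood of 0 avoids them.
def bad_points_below(eps, how_many=5):
    n0 = math.ceil(1 / (eps * math.pi))          # smallest n with 1/(n*pi) < eps
    return [1 / (n * math.pi) for n in range(n0, n0 + how_many)]

pts = bad_points_below(0.001)
```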
Solution 4:
Aren't Limits Stupidly Obvious?
To add on to Dahn Jahn's answer, limits seem obvious now, but they weren't always. Going back to you saying you are learning calculus: calculus was invented before limits were formalized; they were only implied. In the 17th century, when Newton and Leibniz were developing modern calculus, they were working with the idea of infinitesimals. They dealt with what could be considered very, very small numbers. For fairly obvious reasons, this wasn't very rigorous or well defined. Later, in the 19th century, the concept of limits was explored and the epsilon-delta definition came about. This helped make calculus more rigorous and extended limits to other fields of mathematics.