Understanding limits and how to interpret the meaning of "arbitrarily close"

Let me be very frank here. For a beginner who is studying limits for the first time (at around age 16), the terms "sufficiently close" and "arbitrarily close" are very difficult to handle.

This is primarily because such a student, at this stage of learning, is acquainted mainly with algebraic simplifications/manipulations, and the main focus in algebra is on the operations $+,-,\times,/,=$. Thus most of their mathematical study is based on establishing equality between two expressions (via the rules of the common arithmetic operations).

In calculus the symbols $+,-,\times,/,=$ take a back seat and the focus shifts almost entirely to inequalities (a point that is rarely emphasized in calculus textbooks). Calculus or analysis is fundamentally based on the order relations $<, >$ rather than on the arithmetical operations. Here we are not so much concerned with whether $a = b$ or not, but rather with how near/close $a$ is to $b$ when we already know that $a \neq b$. A measure of this nearness/closeness is given by the expression $|a - b|$ (which is something easily handled by students trained in algebra).

The next issue is that students fail to comprehend the significance of the fact that there is no smallest positive rational / real number (although students know this fact and can supply the proof very easily). Because of this fact we know that if $a \neq b$ then the expression $|a - b|$ can take arbitrarily small positive values, depending on the specific choice of $a, b$. Thus we can choose two distinct numbers $a$ and $b$ which are as close to each other as we please.
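As a one-line reminder of why there is no smallest positive number: given any candidate $r > 0$, halving it already produces a smaller positive number, $$0 < \frac{r}{2} < r,$$ so no positive number can be the smallest one.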

Calculus / analysis builds on phrases such as "as close to ... as we please" and introduces terms like sufficiently close and arbitrarily close, and for this purpose the very powerful notion of functional dependence is used. Thus let the numbers $a, b$ in the previous paragraph have a functional dependence on some other variable. To simplify things, let $a$ depend on another number $x$ via the functional relation $a = f(x)$ and let us keep $b$ fixed. Thus we have a way to choose different values of $a$ by changing the value of $x$.

And then we pose the question: how close is the value $a = f(x)$ to $b$ when the value of $x$ is close to some specific fixed number $c$? Thus we are interested in figuring out how small the difference $|f(x) - b|$ is based on the difference $|x - c|$. If $|f(x) - b|$ is small whenever $|x - c|$ is small, then we say that the limit of $f(x)$ as $x \to c$ is $b$.
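To see this dependence concretely, here is a minimal numerical sketch; the choices $f(x) = x^2$, $c = 2$ and $b = 4$ are my own illustrative assumptions, not part of the discussion above.

```python
# Numerical sketch: watch |f(x) - b| shrink as |x - c| shrinks.
# The choices f(x) = x**2, c = 2, b = 4 are illustrative assumptions.

def f(x):
    return x * x

c, b = 2.0, 4.0

for k in range(1, 6):
    dx = 10.0 ** (-k)                      # the distance |x - c|
    x = c + dx
    print(f"|x - c| = {dx:.0e}   |f(x) - b| = {abs(f(x) - b):.3e}")
```

Each time $|x - c|$ is shrunk by a factor of $10$, the printed value of $|f(x) - b|$ shrinks by roughly the same factor.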

However, to make things precise the smallness of $|f(x) - b|$ and $|x - c|$ needs to be quantified properly, and when defining the concept of limit it is essential that it should be possible to make the quantity $|f(x) - b|$ as small as we please by choosing $|x - c|$ to be as small as needed. Thus the goal is to make $|f(x) - b|$ as small as we please, and making $|x - c|$ as small as needed is a means to achieve that goal. Since the goal is based purely on our wish ("as small as we please") we say that $|f(x) - b|$ should be arbitrarily small (because our wishes are arbitrary and there is no end to the supply of numbers as small as we please; remember there is no smallest positive number). And then, once we have fixed our goal (say with some arbitrarily small number $\epsilon$), we need to choose $|x - c|$ small enough (or, as we say, sufficiently small, quantified with another small number $\delta$) to fulfill that goal.

And the next step is the formalism with Greek symbols: a function $f$ defined in a certain neighborhood of $c$ (but not necessarily at $c$) is said to have the limit $b$ as $x$ tends to $c$, written symbolically as $\lim\limits_{x \to c}f(x) = b$, if for any arbitrarily chosen number $\epsilon > 0$ we can find a number $\delta > 0$ such that $$|f(x) - b| < \epsilon$$ whenever $0 < |x - c| < \delta$.
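As a concrete sketch of how one plays this $\epsilon$–$\delta$ game, consider the (hypothetical) example $f(x) = 2x + 1$, $c = 1$, $b = 3$: since $|f(x) - 3| = 2|x - 1|$, the choice $\delta = \epsilon/2$ always works. The small Python check below merely samples a few points to illustrate this; it is not a proof.

```python
# Sketch: checking the epsilon-delta definition numerically for the
# illustrative example f(x) = 2x + 1, c = 1, b = 3.  The rule
# delta = eps / 2 works because |f(x) - 3| = 2|x - 1| < 2*delta = eps.

def f(x):
    return 2 * x + 1

c, b = 1.0, 3.0

def delta_for(eps):
    return eps / 2                 # our "sufficiently small" delta

for eps in (0.5, 0.01, 1e-6):      # three arbitrary "goals"
    delta = delta_for(eps)
    # a few sample points x with 0 < |x - c| < delta
    samples = [c + s * delta for s in (-0.9, -0.5, -0.1, 0.1, 0.5, 0.9)]
    assert all(abs(f(x) - b) < eps for x in samples)
    print(f"eps = {eps:g}: delta = {delta:g} keeps |f(x) - b| < eps at the samples")
```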


Let's see if we can make the idea of "arbitrarily close" precise; the best way to do that is by using open intervals in $\mathbb{R}$.

To really understand this, you have to understand the precise definition of a function on its domain. Let $A$ and $B$ be nonempty sets. A function $f$ from $A$ into $B$ is a nonempty subset of the Cartesian product $A \times B = \{(a,b) = \{\{a\},\{a,b\}\} \mid a \in A,\ b \in B\}$ in which no two different ordered pairs have the same first member. The set of all first members of the pairs in $f$ is called the domain of $f$, and the set of all second members is called the range of $f$. Therefore, a function $f$ is not defined at a point $a \in A$ iff there is no ordered pair in $f$ whose first member is $a$.
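As an illustration (the sets and values here are my own, purely for demonstration), a function can be represented literally as a set of ordered pairs, and the "no two pairs share a first member" condition can be checked directly:

```python
# Sketch (my own illustration): a function represented literally as a set
# of ordered pairs, together with the "no two different pairs have the
# same first member" test.

def is_function(pairs):
    """True iff no two different ordered pairs share the same first member."""
    firsts = [a for (a, _) in pairs]
    return len(firsts) == len(set(firsts))

f = {(1, 1), (2, 4), (3, 9)}   # a genuine function: the pairs (a, a**2)
g = {(1, 1), (1, 2), (3, 9)}   # not a function: first member 1 appears twice

domain_f = {a for (a, _) in f}
range_f = {b for (_, b) in f}

print(is_function(f), is_function(g))   # True False
print(domain_f, range_f)                # {1, 2, 3} and {1, 4, 9}

# f is "not defined" at 5: no ordered pair in f has first member 5.
print(any(a == 5 for (a, _) in f))      # False
```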

A limit of a real-valued function, however, is described in terms of open intervals: $\lim_{x \to p} f(x) = L$ means that for every open interval $(c,d)$ of the range side containing $L$ there is an open interval $(a,b)$ of the domain side containing $p$ such that $f$ maps $(a,b) \setminus \{p\}$ into $(c,d)$. Equivalently, for every positive real number $r'$ there exists a positive real number $r$ such that $|f(x) - L| < r'$ whenever $0 < |x - p| < r$. This can hold even if $L$ is not in the range of $f$, and even if $f$ is not defined at $p$ itself.

If you understand this, then applying it to the limit of the difference quotient at a specific point $x$ in $\mathbb{R}$ is pretty straightforward even without the geometric definition of the derivative. If a line $L$ is tangent to the graph at the given point $x$, then it's not hard to see that all the points "near" $x$ on $L$ lie within some open ball in the plane, or some open interval of the real line.
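For instance (the function $f(x) = x^2$ and the point $x = 3$ below are my own illustrative choices), the difference quotient can be watched numerically as the increment shrinks; the values crowd into ever smaller open intervals around the derivative value $6$:

```python
# Sketch: the difference quotient of the illustrative function f(x) = x**2
# at the point x = 3.  As h shrinks, (f(x + h) - f(x)) / h closes in on the
# derivative value 6, staying inside ever smaller intervals around it.

def f(x):
    return x * x

x = 3.0
for k in range(1, 7):
    h = 10.0 ** (-k)
    q = (f(x + h) - f(x)) / h
    print(f"h = {h:.0e}   difference quotient = {q:.7f}")
```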


Let me be very frank here. The reason this appears complicated in the other answers is that they are working with long-winded paraphrases of shorter and clearer infinitesimal definitions.

The dirty secret here is that the limit of $f(x)$ as $x$ tends to $c$ is the standard part of $f(x)$ when $x$ is infinitely close to (but not equal to) $c$. For example, a formula like $f'(x)=\lim_{\Delta x\rightarrow 0}\frac{f(x+\Delta x)-f(x)}{\Delta x}$ means that one takes the standard part of the ratio $\frac{f(x+\Delta x)-f(x)}{\Delta x}$ when $\Delta x$ is infinitesimal. To be more specific, if $y=f(x)=x^2$ then one forms the ratio $\Delta y/\Delta x$, resulting in $2x+\Delta x$. Then one takes the standard part, obtaining $f'(x)=\text{st}(2x+\Delta x)=2x$. That's all there is to it.
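Here is a small SymPy sketch of that computation. SymPy has no hyperreal standard-part operation, so the limit as $\Delta x \to 0$ is used in its place at the final step; that substitution is my own, not Keisler's.

```python
import sympy as sp

# Sketch of the computation for y = f(x) = x**2 from the paragraph above.
# SymPy has no hyperreal "standard part", so the limit as dx -> 0 stands
# in for st(2*x + dx) at the last step.

x, dx = sp.symbols('x dx')
f = x**2

ratio = sp.simplify((f.subs(x, x + dx) - f) / dx)   # Delta y / Delta x
print(ratio)                                        # 2*x + dx (printed as dx + 2*x)

derivative = sp.limit(ratio, dx, 0)                 # plays the role of st(...)
print(derivative)                                   # 2*x
```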

For more details see Keisler's Elementary Calculus.