What is the purpose of the limit?

Since you have studied physics, I will take an example from it. The average speed of an object is:

$$ v = \frac{\Delta x}{\Delta t} $$

But in many cases you want the speed at a particular moment in time, i.e. the object's instantaneous velocity. To get it, you have to make $\Delta t$ very, very small; so small that it approaches $0$. This is where the limit comes in. The instantaneous velocity is defined as a derivative:

$$ v = \frac{dx}{dt} $$

If $f(t)$ gives the position at the moment $t$, then the velocity at $t = 0$ equals:

$$ v = \lim_{t \to 0} \frac{f(t) - f(0)}{t-0} $$

This is a basic application of limits. In fact, limits are the foundation of calculus. They are rarely used directly in practice (by practice I mean other subjects, such as physics), but the concepts defined by means of them, which is to say pretty much all of calculus, are used everywhere.
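To make the limit tangible, here is a small numerical sketch. The free-fall position $f(t)=4.9t^2$ is an assumed example (not from the question): the difference quotients, i.e. average velocities over shrinking intervals, approach the instantaneous velocity $f'(1)=9.8$.

```python
# Assumed example: position f(t) = 4.9 * t**2 (free fall, metres),
# so the instantaneous velocity at t = 1 is f'(1) = 9.8 m/s.
# The difference quotients approach that limit as h -> 0.

def f(t):
    return 4.9 * t ** 2

for h in [1.0, 0.1, 0.01, 0.001]:
    quotient = (f(1 + h) - f(1)) / h   # average velocity over [1, 1 + h]
    print(h, quotient)
```

The printed quotients ($14.7$, $10.29$, $9.849$, $9.8049$) close in on $9.8$, which is exactly the limit the derivative captures.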


Among many other uses, limits allow us to formalize the idea that we can compute certain expressions by instead computing a sequence of approximations that, when iterated or refined, bring us closer to the one we actually desire. The computation of instantaneous velocities mentioned in another answer is an example of that.

Another simple but very useful example is Newton's method for approximating solutions to equations of the form $f(x)=0$. The general method requires the notion of derivative, but you can already see its usefulness in a particular case: to compute $\sqrt k$, for an integer $k$ which is not already a square, start with a guess, call it $x_0$. The method allows you to refine your guess, finding new approximations, call them $x_1,x_2,x_3$, and so on, by the formula $$ x_{n+1}= \frac{x_n}2+\frac{k}{2x_n}. $$

For example, say $k=2$, so we want to compute $\sqrt2$. We start with a guess: $\sqrt2$ is a bit bigger than $1$, so let $x_0=1$. The method then gives us $$ \begin{array}{rl} x_1&=1.5\\ x_2&\approx1.41667\\ x_3&\approx1.41422\\ x_4&\approx1.41421, \end{array} $$ which is already correct up to the number of digits displayed.

The method is actually extremely efficient. We cannot "compute" $\sqrt2$ exactly, since it is an irrational number, but this process allows us to recover as many digits as we may want, with minimum effort. The approximations we obtain get closer and closer to $\sqrt2$, and "in the limit" we would have recovered its exact value.
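A minimal sketch of this iteration (the helper name `sqrt_iterates` is my own) reproduces the table above:

```python
# Sketch of the iteration from the answer: x_{n+1} = x_n/2 + k/(2*x_n),
# here for k = 2 starting from the guess x_0 = 1.

def sqrt_iterates(k, x0, steps):
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x / 2 + k / (2 * x))
    return xs

for n, x in enumerate(sqrt_iterates(2, 1.0, 4)):
    print(n, x)   # 1, 1.5, 1.4166..., 1.41421..., 1.41421...
```

Four steps already agree with $\sqrt2$ to about twelve decimal digits, which is why the answer calls the method extremely efficient.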

A key example, of historical significance, is the computation of areas. This was one of the problems that led to the development of calculus. The way Archimedes tried to compute areas was by the method of exhaustion. To compute, for example, the area of a circle (equivalently, to find the value of $\pi$), we approximate the circle with an inscribed and a circumscribed polygon. The inner one has smaller area than the circle and the outer one has larger area. By increasing the number of sides, we can get closer and closer to the true area, both from below and from above. The true value of the area is the limit of this process.

(Archimedes actually doubled the number of sides each time, starting with hexagons. His work shows that $$ 3\frac{10}{71}\approx3.1408<\pi<3\frac17\approx 3.1429. $$ By increasing the number of sides beyond what Archimedes did, we can get as close to $\pi$ as we may want. Of course, nowadays we have many other methods for computing $\pi$, all based one way or another on the concept of limits.)
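The squeeze can be reproduced numerically. This sketch cheats by using `math.sin` and `math.tan` (which presuppose knowing the angles, something Archimedes avoided with a side-doubling recurrence), but it shows the two half-perimeters of regular $n$-gons closing in on $\pi$:

```python
import math

# Half-perimeters of regular n-gons inscribed in and circumscribed about a
# unit circle: n*sin(pi/n) < pi < n*tan(pi/n). Note: using math.sin/math.tan
# here is a shortcut; Archimedes derived these numbers by side-doubling.
n = 6
while n <= 96:
    lower = n * math.sin(math.pi / n)   # inscribed polygon (below pi)
    upper = n * math.tan(math.pi / n)   # circumscribed polygon (above pi)
    print(n, lower, upper)
    n *= 2
```

At $n=96$ (Archimedes' stopping point) the bounds are roughly $3.1410 < \pi < 3.1428$, matching his $3\frac{10}{71} < \pi < 3\frac17$.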

The computation of areas and volumes of arbitrary figures requires limits in an essential way. Actually, we do not even need to look at strange or complicated figures. The familiar formula for the volume of a pyramid, $$ V=\frac13bh, $$ where $b$ is the area of the base and $h$ is the height, cannot be established precisely without a limit process. (This is a bit surprising; to compute the area of a polygon, no matter how complicated, we can cut it into small pieces and reassemble them into a square. Nothing of the sort is possible for a general pyramid. The use of limits is unavoidable here.)
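The limit process behind the pyramid formula can be sketched directly. Below is a Riemann-sum approximation (the sample values $b=3$, $h=2$ are my own): stack $n$ thin horizontal slabs, where the cross-section at height $z$ has area $b(1-z/h)^2$, and watch the slab sums tend to $\frac13 bh$.

```python
# Riemann-sum sketch (assumed setup): slice a pyramid of base area b and
# height h into n horizontal slabs. The cross-section at height z has area
# b * (1 - z/h)**2, so the slab sums tend to b*h/3 as n grows -- a limit.

def pyramid_volume(b, h, n):
    dz = h / n
    return sum(b * (1 - (i * dz) / h) ** 2 * dz for i in range(n))

for n in [10, 100, 1000]:
    print(n, pyramid_volume(3.0, 2.0, n))   # exact limit: (1/3) * 3 * 2 = 2
```

No finite number of slabs gives the exact answer; only the limit does, which is the point of the paragraph above.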


Limits are useful when objects make sharp transitions. For instance, suppose that at the moment someone hits a baseball, at time $t$, it changes shape; one could ask, "What shape did it have right before getting hit?" We can't look at the shape at time $t$ because it has already changed, so we have to look at times arbitrarily close to the transition. This is the limit.

Now, this is an oversimplified model of life, but it gives some intuition.


Conceptually, a limit captures the spirit of scientific measurement.

In the physical sciences, we conceded long ago that perfect measurements are not possible. Every ruler has some defect. But given any ruler, we can always make a more accurate one.

Equality in mathematics is an extremely rigid condition. And because of what we said just above, we can't possibly hope to make good use of equality in physical contexts.

Limits give us the next best thing.

We might set out, not to measure something exactly, but to measure it to a certain degree of accuracy. We give ourselves a certain error tolerance (the ε).

If our error tolerance gives us a very slim margin of error (for very small ε), we can't measure with just any old ruler; we need a very good ruler. So we design a ruler with a certain level of precision (the δ). We also have the luxury, as experimenters, that if we are asked by our peers to measure within a certain margin of error, we may custom-build the ruler for this purpose. (That is, δ may be defined in terms of ε.)

Using this accurate ruler, we are able to take measurements accurate within the desired margin of error. And the notion of a limit is that we are always able to do this, no matter how small our margin.

This gives us the idea behind the so-called ε-δ definitions of limits, continuity, and derivatives. It is also related to the limit of a sequence or a series (these are discrete cases, where δ is a positive integer instead of a positive real).
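The ruler analogy can be checked mechanically. In this toy sketch (my own example, using $f(x)=2x+1$ near $a=1$, where the limit is $b=3$), the custom-built ruler is $δ=ε/2$:

```python
# Toy epsilon-delta check (assumed example): f(x) = 2*x + 1 has limit b = 3
# as x -> a = 1. Given a tolerance eps, the custom-built "ruler" is
# delta = eps / 2: whenever 0 < |x - 1| < delta, we get |f(x) - 3| < eps.

def f(x):
    return 2 * x + 1

for eps in [0.1, 0.01, 0.001]:
    delta = eps / 2
    # sample points strictly inside the punctured interval around a = 1
    for k in range(1, 100):
        for x in (1 - delta * k / 100, 1 + delta * k / 100):
            assert 0 < abs(x - 1) < delta
            assert abs(f(x) - 3) < eps

print("every sampled x within delta of 1 kept f(x) within eps of 3")
```

The point is that $δ$ is built from $ε$, exactly as the ruler is built to meet the requested margin of error.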


Sequences:

Approaching a real number $a$:

$\forall ε>0$, $\exists N\in \Bbb N:|x_n-a|<ε$, $\forall n\geq N$.

Here we have a discrete set $\{x_n\}$. After the index $N$, the terms $x_n$ come very close to $a$: so close that, for every small number $ε>0$, the difference satisfies $|x_n-a|<ε$.

Example: $x_n=\frac {1}{n}$. Then $x_n\to 0$. $\{x_n\}=\{1,1/2,1/3,\dots,1/1000,\dots\}$ and, as you see, it gets closer and closer to $0$.
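For this example we can even compute, for each tolerance $ε$, an index $N$ that works (a small sketch; the helper name `first_N` is mine):

```python
# For x_n = 1/n with limit 0: given eps > 0, we search for the first index N
# such that 1/N < eps; then |x_n - 0| = 1/n <= 1/N < eps for all n >= N.

def first_N(eps):
    N = 1
    while 1 / N >= eps:
        N += 1
    return N

for eps in [0.1, 0.01, 0.001]:
    print(eps, first_N(eps))
```

Smaller tolerances simply demand larger indices; the definition asks only that such an $N$ always exists.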

Approaching infinity:

$\forall M>0$, $\exists N\in \Bbb N:x_n>M$, $\forall n\geq N$.

This means that $x_n$ gets arbitrarily large: for every bound $M$, if you find a suitable $N$, then $x_n>M$, $\forall n\geq N$.

Example: $x_n=n$. Then $\{x_n\}=\{1,2,3,\dots,10000000,\dots\}$.

Functions:

Approaching a real number $a$ with limit equal to a real number $b$:

$\lim_{x\to a} f(x)=b\iff\forall ε>0$, $\exists δ>0:0<|x-a|<δ\implies|f(x)-b|<ε$.

This means that, in order to arrive at $a$, we need a bridge of other numbers close to $a$; and the interesting part is that $a$ itself doesn't matter. We don't need it; we only need the bridge. That's why we require $0<|x-a|$.

With the bridge in mind, we reason much as we did for sequences, and we are done. Here things are not necessarily discrete.

Approaching a real number $a$ with limit equal to infinity:

Here we approach $a$ as before, but near $a$ the value of $f(x)$ becomes very large.

Example: $f(x)=\frac {1}{x}$. Near $0$ (from the right) we get very large numbers ($\lim_{x\to 0^+} f(x)=+\infty$).

Approaching infinity with limit equal to a real number $b$:

Same thinking as with the sequences.

Approaching infinity with limit equal to infinity:

Same thinking as with the sequences.

In the real world, think of examples like these:

Sequences:

Repeatedly fold in half a square sheet of paper with side length $1$. Then you can build the sequence $x_n=\frac {1}{2^n}$, which gives the area of the rectangle at each step. Do we have $x_n\to 0$?
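A quick sketch of the folding sequence (assuming each fold simply halves the area):

```python
# Paper-folding sequence: start with a unit square (area 1) and halve the
# area with each fold, giving x_n = 1 / 2**n.

area = 1.0
areas = []
for n in range(1, 21):
    area /= 2          # one more fold halves the area
    areas.append(area)

print(areas[:4])       # [0.5, 0.25, 0.125, 0.0625]
print(areas[-1])       # 1/2**20, already below one millionth
```

After only twenty folds the area is under $10^{-6}$, so the sequence indeed heads to $0$ in the limit.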

Functions:

Well, every sequence is a function: $x_n\sim x(n)$.