Why does L'Hôpital's rule work?
Can anyone tell me why L'Hôpital's rule works for evaluating limits of the form $\frac{0}{0}$ and $\frac{\infty}{\infty}$?
What I understand about limits is that when you divide a really small quantity (one that is $\to 0$) by another really small quantity, you can get a finite value that may not be small at all.
So how does differentiating the numerator and denominator help us find the limit of a function?
This is far from rigorous, but the way I like to think about L'Hôpital's rule is this:
If I have a fraction whose numerator and denominator are both going to, say, infinity, then I can't say much about the limit of the fraction. The limit could be anything.
It's possible, though, that the numerator goes slowly to infinity and the denominator goes quickly to infinity. That would be good information to know, because then I would know that the denominator's behavior is the one that really swings the limit of the fraction overall.
So, how can I get information about the rate of change of a function? This is precisely the kind of thing a derivative can tell you. Thus, instead of comparing the numerator and denominator directly, I can compare the rate of change (i.e. the derivative) of the numerator to the rate of change (i.e. the derivative) of the denominator to determine the limit of the fraction overall. This is L'Hôpital's rule.
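For example, compare $f(x) = \ln x$ with $g(x) = x$ as $x \to \infty$. Both go to infinity, but their rates of change are $1/x$ and $1$, so the denominator grows much faster, and
$$\lim_{x\to\infty}\frac{\ln x}{x} = \lim_{x\to\infty}\frac{1/x}{1} = 0.$$
The derivative comparison makes precise the feeling that $x$ "wins the race" to infinity.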
The answer by Jackson Walters gives a proof of why it works, but if you are looking for intuition, consider this:
Consider the curve in the plane whose $x$-coordinate is given by $g(t)$ and whose $y$-coordinate is given by $f(t)$, i.e. $$\large t\mapsto [g(t),f(t)]. $$ Suppose $f(c) = g(c) = 0$. The limit of the ratio $\large \frac {f(t)}{g(t)}$ as $t \to c$ is the slope of the tangent to the curve at the point $[0, 0]$. The tangent direction to the curve at parameter $t$ is given by $[g'(t), f'(t)]$. L'Hôpital's rule then states that the slope of the curve at $t = c$ is the limit of the slopes of the tangents as $t$ approaches $c$.
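For a concrete instance of this picture, take $f(t) = e^t - 1$ and $g(t) = \sin t$ with $c = 0$. The curve $t \mapsto [\sin t,\, e^t - 1]$ passes through $[0,0]$ at $t = 0$ with tangent direction $[g'(0), f'(0)] = [1, 1]$, i.e. slope $1$, and indeed
$$\lim_{t \to 0}\frac{e^t - 1}{\sin t} = \frac{f'(0)}{g'(0)} = 1.$$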
Points to assume (credits: thanks to Hans Lundmark for pointing out what I missed and to Srivatsan for improving my formatting):
Assume that the functions $f$ and $g$ have well-defined Taylor expansions at $a$.
Proof:
Another way you can think of this is to use the idea of the derivative: a function $f(x)$ is differentiable at $x=a$ if $f(x)$ is very close to its tangent line $y = f'(a) \cdot (x-a) + f(a)$ near $x = a$. Specifically,
$$f(x) = f(a) + f'(a) \cdot (x-a) + E_{1}(x)$$
where $E_{1}(x)$ is an error term which goes to $0$ as $x$ goes to $a$. In fact, $E_{1}(x)$ must approach $0$ so fast that
$$\lim_{x\to a}\frac{E_1(x)}{x-a}=0$$
because $\dfrac{E_{1}(x)}{x-a} = \dfrac{ f(x)-f(a) }{x-a} - f'(a) $
and we know from the definition of derivative that this quantity has the limit $0$ at $a$.
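For instance, if $f(x) = x^2$ and $a = 1$, then $f(x) = 1 + 2(x-1) + (x-1)^2$, so $E_{1}(x) = (x-1)^2$ and
$$\frac{E_{1}(x)}{x-1} = x-1 \to 0 \quad \text{as } x \to 1,$$
exactly as required.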
Similarly, if $g$ is differentiable at $x = a$,
$$g(x) = g(a) + g'(a) \cdot (x-a) + E_{2}(x)$$
where $E_{2}(x)$ is another error term which goes to $0$ as $x \to a$. If you're computing the limit of $f(x)/g(x)$ as $x \to a$ and if $g(a)$ is not equal to $0$, then as $x \to a$, the numerator becomes indistinguishable from $f(a)$ and the denominator from $g(a)$, so the limit is
$$\lim_{x \to a} \frac{f(x)}{g(x)}=\frac{f(a)}{g(a)} .$$
If both $f(a)$ and $g(a)$ are $0$, then we must use the tangent approximations to say that
$$\frac{f(x)}{g(x)} = \frac{f(a) + f'(a) \cdot (x-a) + E_{1}(x) }{ g(a) + g'(a) \cdot (x-a) + E_{2}(x) }$$
$$=\frac{f'(a) \cdot (x-a) + E_{1}(x)}{g'(a) \cdot (x-a) + E_{2}(x) }$$
$$ =\frac{f'(a) + [E_{1}(x)/(x-a)] }{g'(a) + [E_{2}(x)/(x-a)]}$$
and we have seen that the error terms $E_{1}(x)/(x-a)$ and $E_{2}(x)/(x-a)$ become negligible as $x \to a$.
In other words, when both function values approach $0$ as $x \to a$, the ratio of the function values reduces to the ratio of the slopes of the tangent lines, because both functions are very close to their tangent lines.
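As a quick worked example, consider $\lim_{x\to 0} \frac{\ln(1+x)}{x}$. Near $0$ the numerator hugs its tangent line $y = x$, and the denominator is its own tangent line $y = x$, so the ratio of tangent slopes gives
$$\lim_{x \to 0}\frac{\ln(1+x)}{x} = \frac{1}{1} = 1.$$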
I hope this helps. Iyengar.
For the case $\frac{0}{0}$, we need only use the definition of the derivative as a difference quotient. Suppose $f,g:\mathbb{R} \rightarrow \mathbb{R}$ with $f(a)=g(a)=0$, $f$ and $g$ continuously differentiable, and $g'(a) \ne 0$. Then
$$\begin{eqnarray*} \lim_{x \rightarrow a}\frac{f(x)}{g(x)} &=& \lim_{x \rightarrow a}\frac{f(x)-0}{g(x)-0} \\ &=&\lim_{x \rightarrow a}\frac{f(x)-f(a)}{g(x)-g(a)}\\ &=&\lim_{x \rightarrow a}\frac{\frac{f(x)-f(a)}{x-a}}{\frac{g(x)-g(a)}{x-a}}\\ &=&\frac{\lim_{x \rightarrow a}\frac{f(x)-f(a)}{x-a}}{\lim_{x \rightarrow a}\frac{g(x)-g(a)}{x-a}}\\ &=&\frac{f'(a)}{g'(a)} \end{eqnarray*}$$
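Applying this to the classic example $f(x) = \sin x$, $g(x) = x$, $a = 0$ (all the hypotheses above hold):
$$\lim_{x \to 0}\frac{\sin x}{x} = \frac{f'(0)}{g'(0)} = \frac{\cos 0}{1} = 1.$$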
This blog post is also interesting: http://csclub.uwaterloo.ca/~jy2wong/jenerated/blog/2012-10-07.lhopitals_rule.html
At the heart of it, though, L'Hôpital's rule is a marriage of two ideas: that differentiable functions are pretty darn close to their linear approximations as long as you don't stray too far from the point of tangency, and that for a continuous function, a small movement in the domain means a small movement in the value of the function.
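If you want to see that linear-approximation idea in action numerically, here is a minimal Python sketch (the pair $f(x) = e^x - 1$, $g(x) = \sin x$ is just my choice of a $\frac{0}{0}$ form, the same one used in the parametric example above); it evaluates the raw ratio at points shrinking toward $0$ and compares it with the derivative ratio that L'Hôpital's rule predicts.

```python
import math

# f(x) = e^x - 1 and g(x) = sin(x) both vanish at a = 0,
# so f(x)/g(x) is a 0/0 form there.
f = lambda x: math.exp(x) - 1.0
g = lambda x: math.sin(x)

# L'Hopital's prediction: f'(0)/g'(0) = e^0 / cos(0) = 1.
predicted = math.exp(0.0) / math.cos(0.0)

# Evaluate the raw ratio at points approaching 0.
for k in range(1, 8):
    x = 10.0 ** (-k)
    print(f"x = {x:.0e}   f(x)/g(x) = {f(x) / g(x):.10f}")

print(f"derivative ratio f'(0)/g'(0) = {predicted:.10f}")
```

The printed ratios settle on $1.0000\ldots$, the value $f'(0)/g'(0)$ predicts, because near $0$ both functions are essentially their tangent lines.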