Does L'Hôpital's work the other way?

As stated on Wikipedia (see the conditions specified there), L'Hôpital's rule says,

$$ \lim_{x\to c}\frac{f(x)}{g(x)}=\lim_{x\to c}\frac{f'(x)}{g'(x)} $$

As

$$ \lim_{x\to c}\frac{f'(x)}{g'(x)}= \lim_{x\to c}\frac{\int f'(x)\ dx}{\int g'(x)\ dx} $$

Just out of curiosity, can you integrate instead of taking a derivative? Does

$$ \lim_{x\to c}\frac{f(x)}{g(x)}= \lim_{x\to c}\frac{\int f(x)\ dx}{\int g(x)\ dx} $$

work? (given the conditions from Wikipedia, adapted the other way around: the functions must be integrable by some method, etc.) When? Would it have any practical use? I hope this doesn't sound stupid; it just occurred to me, and I can't find the answer myself.


## Edit

(In response to the comments and answers.)

Take two functions $f$ and $g$. When is

$$ \lim_{x\to c}\frac{f(x)}{g(x)}= \lim_{x\to c}\frac{\int_x^c f(a)\ da}{\int_x^c g(a)\ da} $$

true?

I'm not saying that it always works; however, it may sometimes help. (One can sometimes apply L'Hôpital's rule even when an indeterminate form isn't reached.) Maybe this only works in exceptional cases.

Most functions are simplified by taking their derivative, but it may happen under integration as well (say $\int \frac1{x^2}\ dx=-\frac1x+C$, which is simpler). In some of those cases, integrating both the numerator and the denominator may simplify the quotient.

What properties would those (hypothetical) functions need for this to work? And even in those cases, is it ever useful? How? Why/why not?


With L'Hospital's rule your limit must be of the form $\dfrac 00$, so your antiderivatives must take the value $0$ at $c$. In this case you have $$\lim_{x \to c} \frac{ \int_c^x f(t) \, dt}{\int_c^x g(t) \, dt} = \lim_{x \to c} \frac{f(x)}{g(x)}$$ provided $g$ satisfies the usual hypothesis that $g(x) \not= 0$ in a deleted neighborhood of $c$.
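To see this identity in action, here is a quick numerical sketch (my own illustration, not part of the answer above), taking $f(t) = \sin t$, $g(t) = t$, and $c = 0$, so that both antiderivatives vanish at $c$ and have closed forms:

```python
import math

# Numerical check of the identity above for f(t) = sin t, g(t) = t, c = 0.
# The antiderivatives F(x) = int_0^x sin t dt = 1 - cos x and
# G(x) = int_0^x t dt = x^2 / 2 both vanish at c = 0, as required.
def F(x):
    return 1.0 - math.cos(x)

def G(x):
    return x * x / 2.0

for x in (0.1, 0.01, 0.001):
    # Both ratios approach the same limit, 1, as x -> 0.
    print(x, math.sin(x) / x, F(x) / G(x))
```

As $x$ shrinks, the two printed ratios agree to more and more digits.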


I recently came across a situation where it was useful to go through exactly this process, so (although I'm certainly late to the party) here's an application of L'Hôpital's rule in reverse:

We have a list of distinct real numbers $\{x_0,\dots, x_n\}$. We define the $(n+1)$th nodal polynomial as $$ \omega_{n+1}(x) = (x-x_0)(x-x_1)\cdots(x-x_n) $$ Similarly, the $n$th nodal polynomial is $$ \omega_n(x) = (x-x_0)\cdots (x-x_{n-1}) $$ Now, suppose we want to calculate $\omega_{n+1}'(x_i)/\omega_{n}'(x_i)$ for $0 \leq i \leq n-1$. We could calculate $\omega_{n}'(x_i)$ and $\omega_{n+1}'(x_i)$ explicitly and go through some tedious algebra, or we could note that because these derivatives are non-zero, we have $$ \frac{\omega_{n+1}'(x_i)}{\omega_{n}'(x_i)} = \lim_{x\to x_i} \frac{\omega_{n+1}'(x)}{\omega_{n}'(x)} = \lim_{x\to x_i} \frac{\omega_{n+1}(x)}{\omega_{n}(x)} = \lim_{x\to x_i} (x-x_{n}) = x_i-x_{n} $$ It is important that both $\omega_{n+1}$ and $\omega_n$ are zero at $x_i$, so that in applying L'Hôpital's rule in reverse, we intentionally produce an indeterminate form. It should be clear, though, that doing so allowed us to cancel factors and thus (perhaps surprisingly) saved us some work in the end.

So would this method have practical use? It certainly did for me!
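As a sanity check, here is a small numerical sketch of that identity (the node values are hypothetical, chosen only for illustration). It uses the fact that the derivative of $\prod_j (x - x_j)$ at a node $x_i$ reduces to $\prod_{j\neq i}(x_i - x_j)$:

```python
from math import prod

# Derivative of omega(x) = prod_j (x - x_j) evaluated at the node x_i:
# every product-rule term containing the factor (x_i - x_i) vanishes,
# leaving prod_{j != i} (x_i - x_j).
def nodal_derivative(nodes, i):
    return prod(nodes[i] - xj for j, xj in enumerate(nodes) if j != i)

nodes = [0.0, 1.0, 2.5, 4.0]   # hypothetical nodes x_0, ..., x_n with n = 3
i = 1                          # any index with 0 <= i <= n - 1 works

ratio = nodal_derivative(nodes, i) / nodal_derivative(nodes[:-1], i)
print(ratio, nodes[i] - nodes[-1])   # both equal x_i - x_n
```

The two printed values coincide, matching the limit computed above.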


PS: If anyone is wondering, this was a handy step in proving a recursive formula involving Newton's divided differences.


No, it does not in general. For example, consider the limit $\lim_{x\to 0}\frac{\sin x}{x}$: L'Hôpital's rule gives $1$, but reverse L'Hôpital's, with antiderivatives $-\cos x$ and $\frac{x^2}{2}$ (without introducing extra constants), goes to $-\infty$.
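A quick numerical sketch of that divergence (my own illustration): with antiderivatives $-\cos x$ and $x^2/2$ and zero constants of integration, the candidate ratio blows up as $x \to 0$:

```python
import math

# Reverse L'Hôpital on sin x / x with zero constants of integration:
# the candidate ratio is -cos(x) / (x^2 / 2), which tends to -infinity,
# while the true limit of sin(x) / x is 1.
def reverse_ratio(x):
    return -math.cos(x) / (x * x / 2.0)

for x in (0.1, 0.01, 0.001):
    print(x, math.sin(x) / x, reverse_ratio(x))
```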


Recall that the indefinite integral of a function includes an arbitrary constant. So, if $f(x) = x$, then $\int f(x)\ dx = \frac12 x^2 + C$.

Then, assuming your rule holds, $$\lim_{x\to 0} \frac{x}{x} = \lim_{x \to 0} \frac{\int x\ dx}{\int x\ dx} = \lim_{x \to 0} \frac{\frac12 x^2 + C_f}{\frac12 x^2 + C_g} = \frac{C_f}{C_g}$$ (provided $C_g \neq 0$).

This means you can get literally any value you wish.
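To make this concrete numerically (a small sketch with arbitrarily chosen constants $C_f = 2$ and $C_g = 5$):

```python
# With f(x) = g(x) = x, pick different constants of integration and watch
# the "reverse" ratio converge to C_f / C_g instead of the true limit 1.
C_f, C_g = 2.0, 5.0   # arbitrary, hypothetical choices

def reverse_ratio(x):
    return (0.5 * x * x + C_f) / (0.5 * x * x + C_g)

for x in (1.0, 0.1, 0.001):
    print(x, reverse_ratio(x))   # tends to C_f / C_g = 0.4, not 1
```

Different constant pairs give different "limits", which is exactly the problem.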


If the limit of the ratio of those functions exists, I suspect that it may be possible for a certain ratio of definite integrals to have the same limit. Something like:

$$\lim_{x\to \infty}\frac{\int_x^{x+1}f(t)\ dt}{\int_x^{x+1}g(t)\ dt}$$

The idea is that we grow $x$ and compare the ratio of definite integrals over windows of equal, fixed width near $x$.

For instance, imagine the special case where the functions separately converge to constant values, say $4$ and $3$. If we take $x$ far enough out, we are basically just dividing $4$ by $3$: a width-$1$ definite integral of a function that converges to $4$ has a value that is approximately $4$ for large enough $x$.
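A numerical sketch of this special case (my own illustration, with hypothetical functions $f(t) = 4 + 1/t$ and $g(t) = 3 + 1/t$, whose window integrals have closed forms):

```python
import math

# Width-1 window integrals of f(t) = 4 + 1/t and g(t) = 3 + 1/t:
# int_x^{x+1} (4 + 1/t) dt = 4 + ln((x + 1) / x), and similarly for g.
def window_ratio(x):
    tail = math.log((x + 1.0) / x)
    return (4.0 + tail) / (3.0 + tail)

for x in (10.0, 1000.0, 1e6):
    print(x, window_ratio(x))   # approaches 4 / 3 as x grows
```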

But this is like a derivative in disguise. If we divide a definite integral by the interval length and then shrink the interval, we are basically differentiating, recovering the original integrand. Except we aren't shrinking the interval, since we don't have to: the fixed width $1$ becomes smaller and smaller relative to the value of $x$, so it is a "quasi-infinitesimal", so to speak.

The real problem with this, even if it works, is that integrals don't seem to offer any advantage. Even if a function is integrable, it may not have an elementary antiderivative, or it may be hard to integrate symbolically. Integration generally produces something more complex than the integrand.

The advantage of L'Hôpital's rule is that we can reduce the power of the functions. If they are polynomials, they lose a degree under differentiation, which is helpful: it gives us a basis for instantly gauging the limit of the ratio of two polynomials of equal degree simply by looking at the ratio of their highest-degree coefficients. And certain common functions, like $\sin x$, $\cos x$, and $e^x$, at least don't grow any additional hair under differentiation.
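For instance, the leading-coefficient claim can be checked with two applications of the rule on a (hypothetical) pair of quadratics:

$$\lim_{x\to\infty}\frac{3x^2+x-1}{5x^2+7} = \lim_{x\to\infty}\frac{6x+1}{10x} = \lim_{x\to\infty}\frac{6}{10} = \frac{3}{5},$$

the ratio of the highest-degree coefficients.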