Curiosity: Wouldn't the definition of the derivative always give 1 if the derivative exists?

I'm pretty sure I have the wrong intuition here, but I'm slightly confused about the way we calculate the derivative at a certain point using (one of) the definition(s) of the derivative. See the example below:

$$\frac{df(x)}{dx}= \lim_{h\to0}\frac{f(x+h)-f(x)}{h}$$

Let's look at the case of $f(x) = \sqrt{5x+10}$:

$$\frac{df(x)}{dx}=\lim_{h\to0}\frac{\sqrt{5h+5x+10}-\sqrt{5x+10}}{h}$$

If we want to calculate $f'(5)$:

$$\left.\frac{df(x)}{dx}\right\rvert_{x=5}=\lim_{h\to0}\frac{\sqrt{5h+35}-\sqrt{35}}{h}$$

If we try to find the limit as $h\to0^+$:

  • The numerator would be only slightly greater than $0$

  • The denominator would be only slightly greater than $0$

$$\frac{\text{very small number above zero}} {\text{very small number above zero}}\approx 1$$

It should be the same for $h\to 0^-$.

Hence: $f'(5)= 1$?

N.B.: I know this result is wrong; I just want to know where the logic I used is faulty.


Solution 1:

The intuitive approach to evaluating a limit, along the lines of "slightly more than zero" or "slightly less than zero", is just that: an intuitive approach. That is to say, it's a good rule of thumb that often gets you close to the right answer, but it's not actually correct. The issue is that when you have multiple expressions in play, how they vary together matters.

To take an extremely simple example, consider $\lim_{x \to 0}\frac{2x}{x}$. Both $2x$ and $x$ are "very small numbers" when $x$ is very small, but $x$ is always only half as large as $2x$. For every nonzero $x$, $\frac{2x}{x}$ is in fact exactly $2$, so the limit is just $2$.

The key idea here is that the definition of the limit is what drives everything. The definition states that $\lim_{x \to a}f(x) = L$ if and only if for every $\epsilon > 0$ there is a $\delta > 0$ such that whenever $0 < |x - a| < \delta$, we have $|f(x) - L| < \epsilon$. What your example demonstrates is that the intuitive idea of replacing pieces of $f$ with "very small positive numbers" is not an accurate reflection of this definition.
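As a quick check of how the definition settles the toy example above: for every $\epsilon > 0$, any $\delta > 0$ works, since for $0 < |x - 0| < \delta$

$$\left|\frac{2x}{x} - 2\right| = |2 - 2| = 0 < \epsilon,$$

so $\lim_{x\to 0}\frac{2x}{x} = 2$ exactly, regardless of how small both numerator and denominator are.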

Solution 2:

Let's take your example:

$$\frac{\sqrt{5h+35}-\sqrt{35}}{h}$$

and make $h$ a very small number, say $0.0000001$, so we have

$$\frac{\sqrt{35.0000005}-\sqrt{35}}{0.0000001}$$

which is about

$$\frac{0.0000000422577}{0.0000001},$$ i.e. about $0.422577$ rather than $1$, even though it is a very small number divided by a very small number. It is in fact close to $\dfrac{\sqrt{35}}{14}$, which is what calculus gives as the exact derivative: $f'(x)=\frac{5}{2}(5x+10)^{-1/2}$, so $f'(5)=\frac{5}{2\sqrt{35}}=\frac{\sqrt{35}}{14}$.
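For completeness, the algebraic route to the same number: multiplying by the conjugate resolves the $\frac{0}{0}$ form,

$$\lim_{h\to0}\frac{\sqrt{5h+35}-\sqrt{35}}{h} = \lim_{h\to0}\frac{(5h+35)-35}{h\left(\sqrt{5h+35}+\sqrt{35}\right)} = \lim_{h\to0}\frac{5}{\sqrt{5h+35}+\sqrt{35}} = \frac{5}{2\sqrt{35}} = \frac{\sqrt{35}}{14}\approx 0.422577.$$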

Solution 3:

The formal error is easy to explain, but perhaps the question reflects a more general misunderstanding of what is being done in this kind of problem, so I will give a more "meta" answer.

This is the very idea of analysis: we do not know exactly the quantities we are dealing with, so we allow ourselves to change the problem a little in order to be able to solve it, hoping the modified problem will shed light on the original one. We can handle only very few problems in analysis exactly, those with specific objects and strong hypotheses, but understanding those simpler cases sometimes leads to a better understanding of the original objects (see, in particular, the problems coming from physics).

Yet that does not mean doing whatever you think is reasonable and intuitively defensible. Indeed, Dieudonné said that analysis is only about "bounding from below, bounding from above, approximating", and we cannot forget the main idea behind this: we have to control the remainder, simplifying while knowing what we are doing, never forgetting what we changed and assumed; otherwise we simply change the problem and will never be able to come back. This idea, I think, must remain one of the great lighthouses for understanding and doing analysis.

You propose that those two quantities have a quotient tending to one, but you have to really compare them (that is the whole point of taking a quotient); saying "small positive number" is just blinding yourself. It is as if, in order to treat basic number theory with small numbers, you hiked up to a vantage point like $10^{80}$ and said: "well, there is no problem, all those numbers look like $1$, hence all the equations are trivial".

A simple example of why this reasoning is wrong: when $x$ is a very small positive number, then so is $2x$; however, you will agree that you do not want to conclude:

$$2 = \frac{2x}{x} \;\longrightarrow\; 1 \quad (x\to 0)$$

This is the very point of what you learn in basic calculus: equivalents, expansions, and "indeterminate forms", that is, evaluating limits of the type $\frac{0}{0}$ or $\frac{\infty}{\infty}$.
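For instance, a first-order expansion of the square root resolves the $\frac{0}{0}$ form in the original question: since $\sqrt{1+u} = 1 + \frac{u}{2} + O(u^2)$,

$$\sqrt{5h+35} = \sqrt{35}\,\sqrt{1+\tfrac{h}{7}} = \sqrt{35}\left(1 + \frac{h}{14} + O(h^2)\right),$$

so

$$\frac{\sqrt{5h+35}-\sqrt{35}}{h} = \frac{\sqrt{35}}{14} + O(h) \;\longrightarrow\; \frac{\sqrt{35}}{14} \approx 0.4226 \quad (h \to 0),$$

which is exactly $f'(5)$, and visibly not $1$.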

Both intuition and formalism are powerful in mathematics, and you have to maintain a perpetual dialogue between them: intuition sometimes guides you where formalism would not, and formalism is the method of walking correctly to finish your hike, avoiding errors and getting lost, and controlling validity. Sometimes it is formalism that allows you to advance even when intuition is blind, with intuition acting as the warden of validity. But if you remove one of these two pillars, as you have just done by relying only on your intuition, everything quickly collapses.

Solution 4:

$\lim_{h\to0}$ doesn't really make any statement like “$h$ is a particular very small number”. Rather, it considers its argument as a function of a general $h$, determines how small $h$ must be for that function to become practically constant, and then yields that constant. In principle this doesn't require evaluating at any very small values at all; for example, with $$ f(x) = \begin{cases}1&\text{for }x<1\\x & \text{else}\end{cases} $$ we have $$ \lim_{x\to0}f(x) = f(0.0001) = f(0.5) = f(0) = 1 $$ because this function really is constant as long as $x$ is smaller than $1$. In practice, limits aren't usually of functions that are constant on a whole region, but they are typically of functions continuous near the point, which guarantees that you can make the deviation from constancy arbitrarily small by going to sufficiently small values; but the arguments are still always proper real numbers: no “this doesn't behave like numbers otherwise do”!

And for an ordinary “somewhat small” $h$, say $h=0.5$, you'd certainly agree that $\frac{\sqrt{5\cdot h + 35} - \sqrt{35}}{h}\neq 1$. In fact any pocket calculator will tell you that it is $\approx0.415$. If you then make $h$ yet smaller, the following will happen (on a computer with IEEE 754 double-precision arithmetic): $$\begin{align} h &&& \text{difference quotient} \\ 10^{-1} &&& 0.4210786080687612 \\ 10^{-2} &&& 0.4224263146657137 \\ 10^{-3} &&& 0.4225620364017857 \\ 10^{-4} &&& 0.422575618168608 \\ 10^{-5} &&& 0.42257697643321984 \\ 10^{-6} &&& 0.42257711196924674 \\ 10^{-7} &&& 0.4225771199628525 \\ 10^{-8} &&& 0.42257708443571573 \\ 10^{-9} &&& 0.4225766403465059 \\ 10^{-10} &&& 0.4225775285249256 \\ 10^{-11} &&& 0.4225952920933196 \\ 10^{-12} &&& 0.4227729277772596 \\ 10^{-13} &&& 0.41744385725905886 \\ 10^{-14} &&& 0.4440892098500626 \\ 10^{-15} &&& 0.8881784197001251 \\ 10^{-16} &&& 0.0 \\ 10^{-17} &&& 0.0 \\ 10^{-18} &&& 0.0 \\ 10^{-19} &&& 0.0 \\ 10^{-20} &&& 0.0 \end{align}$$
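A short Python sketch that reproduces this experiment (the digits in the unstable tail may differ slightly between platforms, but the overall pattern will not):

```python
import math

# Difference quotient (sqrt(5h + 35) - sqrt(35)) / h from above,
# evaluated in IEEE 754 double precision for shrinking h.
exact = math.sqrt(35) / 14  # the exact derivative f'(5)
print("exact f'(5):", exact)

for k in range(1, 21):
    h = 10.0 ** -k
    quotient = (math.sqrt(5 * h + 35) - math.sqrt(35)) / h
    print(f"h = 1e-{k:02d}: {quotient}")
```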

Notice how the “moderately small” arguments give a very consistent value of $0.4225\ldots$; this corresponds to the actual exact derivative. But extremely small arguments suddenly give complete nonsense. This is similar to your question: with extremely small numbers, the computer can't really calculate anymore (it basically runs out of digits to store the deviations in), so you end up in a $0 \stackrel{?}{=} \frac00 \stackrel{?}{=} 1$ kind of situation.
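One can verify the “runs out of digits” claim directly; a minimal check of the $h=10^{-16}$ row:

```python
import math

h = 1e-16
# 5*h = 5e-16 is smaller than half the gap between 35.0 and the next
# representable double (that gap is about 7.1e-15), so 5*h + 35 rounds
# to exactly 35.0 and the numerator vanishes entirely:
print(math.sqrt(5 * h + 35) == math.sqrt(35))  # True
print(math.sqrt(5 * h + 35) - math.sqrt(35))   # 0.0
```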

Well, one could say that this is only an artifact of the floating-point processor. But IMO it gets to the very heart of how analysis works: it exploits the fact that certain functions behave very predictably in a certain regime (often close to a singularity of utter indeterminacy!), so they can be well approximated by something simpler. That approximation can then be used for further calculations which would otherwise have been infeasible. While it can be mathematically useful to describe the deviations as infinitesimally small, especially for physical applications it is more appropriate to say: just small enough that we don't have to worry about higher-order effects, but not so small that the signal is lost in the noise floor.