The limit of the reciprocal is the reciprocal of the limit

While I was reading the book *Exploratory Examples for Real Analysis*, I came across this:

To show that: $$\lim_{x\rightarrow x_0}\frac{1}{h(x)} = \frac{1}{\lim_{x\rightarrow x_0}{h(x)}} = \frac{1}{M}.$$

Discussion: Let $\epsilon \gt 0$. We want to find $\delta \gt 0$ such that

if $0 \lt |x-x_0| \lt \delta$, then $\Big|\frac{1}{h(x)}-\frac{1}{M}\Big|\lt \epsilon$.

In order to get an expression involving $|h(x)-M|$, we first find the common denominator: $\Big|\frac{1}{h(x)}-\frac{1}{M}\Big|=\Big|\frac{M-h(x)}{M\,h(x)}\Big|=\frac{|h(x)-M|}{|M|\,|h(x)|}$.

Since we have to deal with the variable term $|h(x)|$ in the denominator, we can construct a bound for it. Applying the definition of the limit with tolerance $\frac{|M|}{2}$: since $\lim_{x\to x_0}h(x)=M$, there exists $\delta_1\gt 0$ such that

if $0 \lt |x-x_0| \lt \delta_1$, then $|h(x)-M|\lt \frac{|M|}{2}$.

As you study the steps below, identify the step where the fact that $h(x)\neq 0$ for all $0 \lt |x-x_0| \lt \delta_1$ plays a critical role:

$$
\begin{aligned}
|M| &= |M-h(x)+h(x)| \\
&\leq |M-h(x)|+|h(x)| \\
&= |h(x)-M|+|h(x)| \\
&\lt \frac{|M|}{2}+|h(x)|
\end{aligned}
$$

$\implies \frac{|M|}{2} \lt |h(x)| \implies \frac{1}{|h(x)|}\lt \frac{2}{|M|}$, where $0 \lt |x-x_0| \lt \delta_1$.

Now that we have a bound for $\frac{1}{|h(x)|}$, we can apply the definition of the limit to the term $|h(x)-M|$.

Since $\lim_{x\rightarrow x_0}h(x)=M$, we can find $\delta_2 \gt 0$ such that,

if $0 \lt |x-x_0| \lt \delta_2$, then $|h(x)-M|\lt \frac{|M|^2 \epsilon }{2}$.

What I can't understand is: for what purpose did the author work with such a complicated expression involving $\epsilon$ at the end of the discussion?


Solution 1:

Assuming $h(x)$ has a nonzero limit $M$ at $x_0$, then, informally, the values of $h(x)$ "get close and remain close" to $M$ so long as the values of $x$ are "chosen close to" $x_0$. In particular, if you choose values of $x$ within a certain distance of $x_0$, the nonzero limit at $x_0$ guarantees that the corresponding values $h(x)$ are also not zero.

That means the line just before the line beginning with "Now that we have a bound ..." makes sense. Why? Because, since $h(x)$ has a nonzero limit $M$ at $x_0$, if a challenger wants your function values to be no farther from $M$ than some $\epsilon >0$ (your allowed displacement between $y$-values for calculus functions), you can tell him he needs to input values of $x$ that are no farther than some $\delta>0$ from $x_0$.
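In symbols, this challenge-response game is just the standard $\epsilon$-$\delta$ definition of $\lim_{x\to x_0}h(x)=M$:

$$\forall\,\epsilon>0\ \ \exists\,\delta>0:\qquad 0<|x-x_0|<\delta \implies |h(x)-M|<\epsilon.$$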

In particular, if you don't want your function values to be any farther from $M$ than $|M|/2$, simply choose $x$ values to input into $h(x)$ that are no farther from $x_0$ than the distance $\delta_1$. Since $|M|$ is twice that allowed deviation, every such value satisfies $|h(x)| > |M|/2 > 0$, so your function can never take the value zero for any such input. Dividing by $h(x)$ in the step I mentioned above therefore never divides by zero, meaning that step always makes sense!
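To make this concrete, here is a small hypothetical illustration (not from the book): take $h(x)=x$ with $x_0=2$, so $M=2$ and $\frac{|M|}{2}=1$. Choosing $\delta_1=1$, every $x$ with $0<|x-2|<1$ satisfies

$$|h(x)-2|<1=\frac{|M|}{2}\quad\Longrightarrow\quad |h(x)|>\frac{|M|}{2}=1>0,$$

so $h$ never vanishes on that punctured neighborhood, and $\frac{1}{|h(x)|}<\frac{2}{|M|}=1$.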

To understand the last step, divide both sides of the second inequality, $|h(x)-M|\lt \frac{|M|^2 \epsilon}{2}$, by $|M\,h(x)|$. Note that choosing the lesser of $\delta_1$ and $\delta_2$ guarantees both that this quantity is not zero and that the two bounds hold simultaneously; the inequality then becomes $$\frac{|h(x)-M|}{|M|\,|h(x)|}=\Big|\frac{1}{M}-\frac{1}{h(x)}\Big|<\frac{|M|^2 \epsilon}{2\,|M|\,|h(x)|}=\frac{1}{|h(x)|}\cdot\frac{|M|\,\epsilon}{2} < \frac{2}{|M|}\cdot\frac{|M|\,\epsilon}{2}=\epsilon$$
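Reading that chain backwards also answers the question of where the "complicated" expression comes from: the threshold is reverse-engineered so that the two available bounds combine to give exactly $\epsilon$. We want $$\frac{|h(x)-M|}{|M|\,|h(x)|}<\epsilon,$$ and we already know $\frac{1}{|h(x)|}<\frac{2}{|M|}$, so it suffices to demand $$|h(x)-M|\cdot\frac{1}{|M|}\cdot\frac{2}{|M|}<\epsilon \iff |h(x)-M|<\frac{|M|^2\epsilon}{2},$$ which is precisely the condition the author attaches to $\delta_2$.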

This answers the original question: if a challenger gives us any $\epsilon>0$, we've found a corresponding $\delta=\min(\delta_1,\delta_2)$ so that for any value of $x$ no farther than $\delta$ from $x_0$ (but different from $x_0$), we are guaranteed that $\frac{1}{h(x)}$ is no farther from the value $\frac{1}{M}$ than $\epsilon$.
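If it helps to see actual numbers, continue the hypothetical $h(x)=x$, $x_0=2$ example from above and suppose the challenger picks $\epsilon=0.1$. Then $\frac{|M|^2\epsilon}{2}=0.2$, so $\delta_2=0.2$ works, and $\delta=\min(\delta_1,\delta_2)=\min(1,\,0.2)=0.2$. Indeed, for $0<|x-2|<0.2$ we have $|x|>1.8$, hence

$$\Big|\frac{1}{x}-\frac{1}{2}\Big|=\frac{|x-2|}{2|x|}<\frac{0.2}{2\cdot 1.8}\approx 0.056<0.1=\epsilon.$$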