Existence of the limit $\lim_{h\to0} \frac{b^h-1}h$ without knowing $b^x$ is differentiable

When trying to derive, from first principles, the fact that exponential functions $a^x$ (where $a>1$ is real) are differentiable, we easily see that $$ \lim_{h\to0} \frac{a^{x+h}-a^x}h = a^x \lim_{h\to0} \frac{a^h-1}h, $$ provided the latter limit exists. It's even pretty easy to see that $$ \lim_{h\to0} \frac{a^h-1}h = ( \log_b a ) \lim_{h\to0} \frac{b^h-1}h $$ for any other real $b>1$, provided the latter limit exists. (And then one can define $e$ to be the number such that $\lim_{h\to0} \frac{e^h-1}h = 1$ and continue.)
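(For what it's worth, the change-of-base relation is easy to sanity-check numerically before proving anything; this is a throwaway sketch, and the helper name `diff_quot` is purely illustrative.)

```python
import math

# Hedged numerical check (not a proof): the identity
#   (a**h - 1)/h == log_b(a) * (b**h - 1)/h   up to O(h) error.
def diff_quot(base, h):
    return (base ** h - 1) / h

a, b, h = 5.0, 2.0, 1e-6
lhs = diff_quot(a, h)
rhs = math.log(a, b) * diff_quot(b, h)
print(lhs, rhs)  # both approach ln(5) as h -> 0
```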

So my question, which doesn't seem to have an answer on this site (though I'd be happy to be proved wrong) nor in the textbooks I've consulted: how can one justify the existence of any limit of the form $\lim_{h\to0} \frac{b^h-1}h$ $(b>1)$, without using the as-yet-underived fact that $b^x$ is differentiable? (Edited to add: I also want to avoid infinite series.)


This is just to address some comments by Greg Martin. I place it here because it is too long for the comment section.

  • Convexity alone will imply differentiability except on a countable exceptional set.

It is easy to check that convexity of a function $\varphi$ is equivalent to the chain of inequalities $$ \begin{align} \frac{\varphi(u)-\varphi(x)}{u-x}\leq\frac{\varphi(y)-\varphi(x)}{y-x}\leq \frac{\varphi(y)-\varphi(u)}{y-u},\qquad x<u<y.\tag{1}\label{convex-equiv} \end{align} $$ For fixed $a<x<b$, inequalities $\eqref{convex-equiv}$ show that the map $u\mapsto \tfrac{\varphi(u)-\varphi(x)}{u-x}$ decreases as $u\searrow x$ and increases as $u\nearrow x$. Consequently,
the maps $$ \begin{align} \alpha(x):=\sup_{a<u<x}\frac{\varphi(u)-\varphi(x)}{u-x}; \qquad \beta(x):=\inf_{x<v<b}\frac{\varphi(v)-\varphi(x)}{v-x}\tag{2}\label{convex-derivative} \end{align} $$ satisfy $$\begin{align} \alpha(x)\leq\beta(x)\leq\alpha(y),\quad a<x<y<b\tag{3}\label{leftrightderivative} \end{align} $$

Lemma: The functions $\alpha$ and $\beta$ are monotone increasing; $\alpha$ is left continuous and $\beta$ is right continuous. Furthermore, $\alpha(x+)=\beta(x)$ and $\alpha(x)=\beta(x-)$.

Proof: Let $x\in(a,b)$ be fixed, and consider the sequence $x_n=x+\tfrac{1}{n}$. From $\eqref{leftrightderivative}$, it follows that $\beta(x)\leq\alpha(x+\tfrac1n)\leq \beta(x+\tfrac1n)\leq n(\varphi(x+\tfrac2n)-\varphi(x+\tfrac1n))$. Letting $n\nearrow\infty$ (and using the continuity of $\varphi$, see Comment 1 below, to pass to the limit in the last quotient), we obtain $\beta(x)\leq\alpha(x+)\leq\beta(x+)\leq\beta(x)$. The corresponding statement for left limits follows by using $x_n=x-\tfrac1n$ instead.

Since the functions $\alpha$ and $\beta$ are nondecreasing, we conclude that, except for a countable set of common discontinuities where jumps are equal, $\alpha=\beta$ on $(a,b)$.
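A quick numerical illustration of $\eqref{convex-derivative}$ and $\eqref{leftrightderivative}$ (a sketch only; `alpha` and `beta` below are crude one-sided difference quotients standing in for the exact sup/inf): for the convex function $\varphi(x)=|x|$, the two one-sided slopes disagree only at the single kink $x=0$.

```python
# Sketch: one-sided difference quotients approximating alpha (left) and
# beta (right) of (2) for the convex function phi(x) = |x|, which is
# differentiable everywhere except at the kink x = 0.
phi = abs
EPS = 1e-9

def alpha(x):
    # left quotient close to x; for convex phi the sup in (2) is approached as u -> x-
    return (phi(x - EPS) - phi(x)) / (-EPS)

def beta(x):
    # right quotient close to x; the inf in (2) is approached as v -> x+
    return (phi(x + EPS) - phi(x)) / EPS

print(alpha(0.0), beta(0.0))  # roughly -1 and 1: alpha(0) < beta(0) at the kink
print(alpha(1.0), beta(1.0))  # both roughly 1: differentiable at x = 1
```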

Theorem: If $\varphi:(a,b)\rightarrow\mathbb{R}$ is convex, then $\varphi$ is continuous; moreover, $\varphi$ is differentiable everywhere except on a countable set, and

$$\begin{aligned} \varphi(y)-\varphi(x)=\int^y_x\beta(t)\,dt=\int^y_x\alpha(t)\,dt \end{aligned}$$ for all $a<x<y<b$.

Proof: Suppose $a<x<y<b$ and let $x=x_0<\ldots<x_n=y$ be any partition. Then $$ \beta(x_{m-1})(x_m-x_{m-1})\leq\varphi(x_m)-\varphi(x_{m-1}) \leq \alpha(x_m)(x_m-x_{m-1}). $$ Summing over $m$ gives $$ \sum^n_{m=1}\beta(x_{m-1})(x_m-x_{m-1})\leq\varphi(y)-\varphi(x) \leq \sum^n_{m=1}\alpha(x_m)(x_m-x_{m-1}). $$ Since $\alpha$ and $\beta$ are monotone (hence Riemann integrable) and agree outside a countable set, both sums converge to the same integral as the mesh tends to $0$; consequently, $\varphi(y)-\varphi(x)=\int^y_x\beta(t)\,dt=\int^y_x\alpha(t)\,dt$. Hence $\varphi$ is continuous on any closed subinterval, and differentiable everywhere except on the countable set $N$ of discontinuities of $\beta$.
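As a numerical illustration of the identity $\varphi(y)-\varphi(x)=\int_x^y\beta(t)\,dt$ (a sketch under the sample choice $\varphi(t)=t^2$, for which $\beta(t)=2t$):

```python
# Sketch: Riemann-sum check of phi(y) - phi(x) = integral of beta over [x, y]
# for the convex function phi(t) = t**2, whose derivative is beta(t) = 2*t.
phi = lambda t: t * t
beta = lambda t: 2 * t

x, y, n = 0.5, 2.0, 100000
dt = (y - x) / n
integral = sum(beta(x + (m + 0.5) * dt) for m in range(n)) * dt  # midpoint rule
print(integral, phi(y) - phi(x))  # both close to 3.75
```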

Comment 1: There is no need to appeal to integral calculus to show continuity of $\varphi$. I am sure the OP knows many ways to achieve this.

Comment 2: Using the fact that the one-sided derivatives $\alpha$ and $\beta$ are monotone, along with the left/right continuity relations between them, one can conclude that $\varphi$ is differentiable at every point except on a countable set where $\alpha$ and $\beta$ have jump discontinuities. All this, I believe, makes the arguments suitable for a course in differential calculus prior to the introduction of Riemann integration.


  • For the exponential function, if convexity can be proven, then differentiability at every point will follow immediately:

Suppose $\varphi(x)=a^x$ is differentiable at some point $x_0$ (such an $x_0$ exists by the discussion above). From the existence of $\lim_{h\rightarrow0}\frac{\varphi(x_0+h)-\varphi(x_0)}{h}=\lim_{h\rightarrow0}\varphi(x_0)\frac{\varphi(h)-1}{h}$, the existence of $\lim_{h\rightarrow0}\frac{\varphi(h)-1}{h}$ follows, and with it differentiability at every point.
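Numerically, the factorization of the difference quotient is easy to observe: dividing the quotient of $a^x$ at $x_0$ by $a^{x_0}$ gives the same value at every $x_0$ (a sketch; the helper name is ours):

```python
# Sketch: for phi(x) = a**x, the difference quotient at x0, divided by
# a**x0, is independent of x0 -- it equals (a**h - 1)/h for every x0.
a, h = 3.0, 1e-7

def normalized_quotient(x0):
    return (a ** (x0 + h) - a ** x0) / h / a ** x0

vals = [normalized_quotient(x0) for x0 in (-2.0, 0.0, 1.5, 10.0)]
print(vals)  # all approximately (a**h - 1)/h, i.e. close to ln(3)
```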


Alternative method:

I dusted off a couple of my old Soviet textbooks (Kudriavtsev, L. D., Curso de Análisis Matemático, Vol. 1, and Nikolsky, S. M., A Course of Mathematical Analysis, Vol. I), and this is more or less how the derivative of the exponential function is presented without defining the logarithm as an integral:

  1. Assume that the exponential function $\phi_a(x)=a^x$ has been introduced and that its continuity and strict monotonicity have been established (starting from exponentials at rational numbers, extending to irrational ones, etc.).
  2. The existence of $\lim_{h\rightarrow0}\big(1+h\big)^{1/h}=e$ with $2<e<3$ is established (starting from $\lim_{n\rightarrow\infty}\Big(1+\tfrac1n\Big)^n$ and then passing to $\lim_{h\rightarrow0}(1+h)^{1/h}$ using standard tricks).

then, for $a>1$

  1. the function $\log_a:(0,\infty)\rightarrow\mathbb{R}$, being the inverse of the strictly increasing continuous function $\phi_a$, is itself continuous and strictly increasing.

  2. $\lim_{x\rightarrow0}\frac{\log_a(x+1)}{x}=\lim_{x\rightarrow0}\log_a\Big(\big(1+x\big)^{1/x}\Big)=\log_ae$.

  3. The punch line: To compute $\lim_{h\rightarrow0}\frac{e^h-1}{h}$, let $t=e^h-1$ so that $h=\ln(t+1)$, $t>-1$. Then $h\rightarrow0$ is equivalent to $t\rightarrow0$. From this, $$\lim_{h\rightarrow0}\frac{e^h-1}{h}=\lim_{t\rightarrow0}\frac{t}{\ln(1+t)}=1$$
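Both limits in this outline are easy to sanity-check numerically (a throwaway sketch, not part of either textbook's argument):

```python
import math

# Sketch: (1 + 1/n)**n climbs toward e with 2 < e < 3, and the substitution
# t = e**h - 1 turns (e**h - 1)/h into t/ln(1 + t); both quotients tend to 1.
seq = [(1 + 1 / n) ** n for n in (1, 10, 1000, 1000000)]
print(seq)  # increasing, approaching e

h = 1e-6
t = math.e ** h - 1
print((math.e ** h - 1) / h, t / math.log(1 + t))  # both close to 1
```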


As already mentioned, we define the exponential function on $\Bbb R_{\ge 0}$ as $f(x)=\lim\limits_{n\to\infty}\left(1+\frac{x}n\right)^n\quad (*)$.

For $x>0$ and $n\in\Bbb N,$ the following holds:

$$1\le\underbrace{\frac{\left(1+\frac{x}n\right)^n-1}x=\frac1n\sum_{k=0}^{n-1}\left(1+\frac{x}n\right)^{n-1-k}}_{\text{difference of powers}}\le\frac1n\cdot n\left(1+\frac{x}n\right)^{n-1}=\left(1+\frac{x}n\right)^{n-1}\le\left(1+\frac{x}n\right)^n\tag 1$$
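The difference-of-powers identity underlying $(1)$ can be checked directly for sample values (a throwaway numerical sketch):

```python
# Sketch: verify ((1 + x/n)**n - 1)/x == (1/n) * sum_{k=0}^{n-1} (1 + x/n)**(n-1-k)
# for sample x > 0 and n, as used in inequality (1).
x, n = 0.7, 25
lhs = ((1 + x / n) ** n - 1) / x
rhs = sum((1 + x / n) ** (n - 1 - k) for k in range(n)) / n
print(lhs, rhs)  # equal up to floating-point rounding
```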

By letting $n\to+\infty$ in $(1)$, we obtain: $$1\le\frac{f(x)-1}x\le f(x)\tag 2$$

For $x<0$, replacing $x$ by $-x$ in $(2)$ and dividing by $f(-x)>0$, we obtain:

$$\frac1{f(-x)}\le\frac{\frac1{f(-x)}-1}x\le 1\tag 3$$

Now, from $(2)$ and $(3)$ and the definition $(*)$ of the exponential function, for all $x\ne 0$ we have

$$0<\min\{1,e^x\}\le\frac{e^x-1}x\le\max\{1,e^x\}\tag 4$$

If $|x|<1,$ then $\max\{1,e^x\}\le e^{|x|}<e,$ so $(4)$ gives $$0<|e^x-1|\le e|x|,$$ and by squeezing we get $\lim\limits_{x\to 0}e^x=1$. Now, let's write:

$$\min\{1,e^x\}=\frac{1+e^x-|1-e^x|}2\\\max\{1,e^x\}=\frac{1+e^x+|1-e^x|}2\tag 5$$ We see from $(5)$ that $\lim\limits_{x\to 0}\min\{1,e^x\}=1$ and $\lim\limits_{x\to 0}\max\{1,e^x\}=1$ and we apply the squeeze theorem to $(4)$.
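The squeeze in $(4)$ can also be observed numerically (a sketch with sample values of $x$):

```python
import math

# Sketch: the two-sided bound (4), min(1, e**x) <= (e**x - 1)/x <= max(1, e**x),
# for sample nonzero x; both bounds tend to 1 as x -> 0.
for x in (-0.5, -1e-3, 1e-3, 0.5):
    q = (math.e ** x - 1) / x
    lo, hi = min(1.0, math.e ** x), max(1.0, math.e ** x)
    assert lo <= q <= hi
    print(x, lo, q, hi)
```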

And $b^h$ can be written as $e^{h\ln(b)},$ so we end up with the limit: $$\lim_{h\to 0}\frac{b^h-1}h=\lim_{h\to 0}\frac{e^{h\ln(b)}-1}{h\ln(b)}\ln(b)$$
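Putting it together, one can check numerically that $\lim_{h\to 0}\frac{b^h-1}h=\ln b$ (a sketch with a sample base):

```python
import math

# Sketch: (b**h - 1)/h approaches ln(b) as h -> 0, consistent with writing
# b**h = e**(h*ln(b)) and using (e**u - 1)/u -> 1.
b = 7.0
quotients = [(b ** h - 1) / h for h in (1e-2, 1e-4, 1e-6)]
print(quotients, math.log(b))  # quotients approach ln(7)
```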