What are Different Approaches to Introduce the Elementary Functions?

Motivation

We all become familiar with the elementary functions in high school or college. However, since the curriculum is not very well integrated, we learn them in different ways, and the connections between these ways are mostly not clarified by teachers. When I read the calculus book by Apostol, I found that one can define these functions purely analytically in a systematic way. The approach used in the book, with some minor changes, is as follows:

$1.$ Firstly, introduce the natural logarithm function by $\ln(x)=\int_{1}^{x}\frac{1}{t}dt$ for $x>0$. Accordingly, one defines the logarithm function by $\log_{b}x=\frac{\ln(x)}{\ln(b)}$ for $b>0$, $b \ne 1$ and $x>0$.

$2.$ Then introduce the natural exponential function as the inverse of the natural logarithm, $\exp(x)=\ln^{-1}(x)$. Afterwards, introduce the exponential function $a^x=\exp(x\ln(a))$ for $a>0$ and real $x$. Interchanging $x$ and $a$, one can introduce the power function $x^a=\exp(a\ln(x))$ for $x > 0$ and real $a$.
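For instance (a small check I am adding here, not part of Apostol's text), these definitions reproduce the familiar law of exponents, because $\ln(uv)=\ln u+\ln v$ makes $\exp$ turn sums into products:

$$a^{x+y}=\exp\bigl((x+y)\ln a\bigr)=\exp(x\ln a)\,\exp(y\ln a)=a^{x}\,a^{y}.$$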

$3.$ Next, define the hyperbolic functions $\cosh(x)$ and $\sinh(x)$ using the exponential function

$$\cosh(x)=\frac{\exp(x)+\exp(-x)}{2}, \qquad \sinh(x)=\frac{\exp(x)-\exp(-x)}{2}$$

and then define the other hyperbolic functions. Consequently, one can define the inverse hyperbolic functions.
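As an example of this last step (my own sketch, not from the book), one can solve $y=\sinh(x)$ explicitly for $x$: writing $u=\exp(x)$ gives a quadratic in $u$, and taking the positive root yields

$$u^{2}-2yu-1=0 \;\Longrightarrow\; \exp(x)=u=y+\sqrt{y^{2}+1} \;\Longrightarrow\; \sinh^{-1}(y)=\ln\!\left(y+\sqrt{y^{2}+1}\right).$$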

$4.$ Finally, the author gives three ways for introducing the trigonometric functions.

$\qquad 4.1-$ Introducing the $\sin x$ and $\cos x$ functions by the following properties (a sample consequence is worked out just after this list)

\begin{align*} \text{(a)}\,\,& \text{The domain of $\sin x$ and $\cos x$ is $\mathbb R$} \\ \text{(b)}\,\,& \cos 0 = \sin \frac{\pi}{2}=1,\, \cos \pi=-1 \\ \text{(c)}\,\,& \cos (y-x)= \cos y \cos x + \sin y \sin x \\ \text{(d)}\,\,& \text{For $0 < x < \frac{\pi}{2}$ we have $0 < \cos x < \frac{\sin x}{x} < \frac{1}{\cos x}$} \end{align*}

$\qquad 4.2-$ Using formal geometric definitions employing the unit circle.

$\qquad 4.3-$ Introducing $\sin x$ and $\cos x$ functions by their Taylor series.

and then defines the other trigonometric functions and the inverse trigonometric functions.
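As a quick illustration of how the properties in $4.1$ determine the functions (a worked line I am adding, not part of the book's list): setting $y=x$ in (c) and using $\cos 0=1$ from (b) already yields the Pythagorean identity,

$$\cos 0=\cos^{2}x+\sin^{2}x \;\Longrightarrow\; \sin^{2}x+\cos^{2}x=1.$$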

From my point of view, the approach is good, but it seems a little disconnected: the relation between the trigonometric and exponential functions is not illustrated, since the author insists on staying in the real domain when introducing these functions. Also, the exponential and power functions are defined only for positive real $a$ and $x$, while they can be extended to negative values.


Questions

$1.$ How many other approaches are there for this purpose? Are there many or just a few? Is there a list of them somewhere?

$2.$ Would you please explain just one of the other heuristic ways to introduce the elementary functions analytically with appropriate details?


Notes

  • Historical remarks are welcome as they provide a good motivation.

  • Answers which connect more advanced (not too elementary) mathematical concepts to the development of elementary functions are really welcome. A nice example of this is the answer by Aloizio Macedo given below.

  • It is hard to choose the best among these nice answers, so I decided to accept none. I just gave the bounties to the ones that are most compatible with high-school studies. However, please feel free to add new answers with your own ideas or anything you find interesting, so that we can have a valuable list of different approaches recorded here. This can serve as a nice guide for future readers.


Useful Links

  • Here is a link to a paper by W. F. Eberlein suggested in the comments. The paper deals with introducing the trigonometric functions in a systematic way.

  • There are six PDFs created by Paramanand Singh, who has an answer below. They discuss some approaches for introducing the logarithmic, exponential and circular functions. I have combined them all into one PDF, which can be downloaded from here. I am sure it will be useful.


Solution 1:

There are two canonical group structures associated with $\mathbb{R}$: $(\mathbb{R},+)$ and $(\mathbb{R}_{>0}, \cdot)$.

We search for the isomorphisms between the structures.

The identity is an automorphism on $(\mathbb{R},+)$ and the exponential is an isomorphism from $(\mathbb{R},+)$ to $(\mathbb{R}_{>0}, \cdot)$.

Furthermore, they are the only continuous such isomorphisms once you fix the value at $1$.

So, we get:

The identity $id$ is the only continuous automorphism on $(\mathbb{R},+)$ such that $id(1)=1$ and the exponential $\exp$ is the only continuous isomorphism from $(\mathbb{R},+)$ to $(\mathbb{R}_{>0}, \cdot)$ such that $\exp(1)=e$.
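A sketch of why (my summary of the standard argument, not part of the original answer): the homomorphism property already forces the values of $\exp$ on the rationals, and continuity together with the density of $\mathbb{Q}$ in $\mathbb{R}$ then determines it everywhere:

$$\exp\!\left(\tfrac{p}{q}\right)^{q}=\exp(p)=\exp(1)^{p}=e^{p}\;\Longrightarrow\;\exp\!\left(\tfrac{p}{q}\right)=e^{p/q}\qquad(p\in\mathbb{Z},\ q\in\mathbb{N}).$$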

From these, all other elementary functions follow.


Summarizing: in order to obtain the elementary functions, you only need the algebraically (and analytically, since we must assume continuity) interesting ones.


Expanding a bit: if you do not want to allow exponentiation of complex numbers, reaching $\sin$ and $\cos$ from $\exp$ and the identity may be troublesome. I will therefore provide another way of introducing $\sin$ and $\cos$. Ironically, it involves "complex" ideas.

Consider $C^{\infty}(\mathbb{R})$, and $X: C^{\infty}(\mathbb{R}) \rightarrow C^{\infty}(\mathbb{R})$ given by $$f \mapsto f'.$$ Consider also the identity function $I$ on $C^{\infty}(\mathbb{R})$. We have that $e^{x}$ and $e^{-x}$ are the two "moral" solutions (more precisely, they form a basis for the solutions) of $$X^2-I=0.$$ It is natural to search for the solutions of $$X^2+I=0.$$ (Seems familiar?) We then have that the solutions with appropriate initial conditions are $\sin$ and $\cos$.
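To make the last sentence concrete (a supplementary check I am adding, not from the original answer): write $s$ and $c$ for the solutions of $f''+f=0$ with $s(0)=0,\ s'(0)=1$ and $c(0)=1,\ c'(0)=0$; these are the functions one then calls $\sin$ and $\cos$. Their Wronskian is constant and equals $1$, so they are linearly independent and form a basis of the two-dimensional solution space:

$$W=c\,s'-s\,c',\qquad W'=c\,s''-s\,c''=c(-s)-s(-c)=0,\qquad W(0)=c(0)\,s'(0)-s(0)\,c'(0)=1.$$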

Solution 2:

$1.$ Napier got approximate logarithms by using repeated squaring to compute, for example, that $(1.000001)^{693147}$ is about $2$. So $\log_{1.000001}2$ is about $693147$. He would "normalize" logs to base $1+1/n$ by dividing them by $n$. The number we call $e$ kept showing up with a normalized log of approximately $1$. Thus the motivation for defining

$$\exp (x)=\lim_{n\to \infty}\left(1+\frac{x}{n}\right)^n$$

which is valid for all complex $x$.
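A small worked check (added here for illustration) connects Napier's normalization with this limit: the normalized base-$(1+1/n)$ logarithm of $x$ tends to the natural logarithm, since

$$\frac{1}{n}\log_{1+1/n}(x)=\frac{\ln x}{n\ln\!\left(1+\frac{1}{n}\right)}\;\longrightarrow\;\ln x\qquad\text{because } n\ln\!\left(1+\tfrac{1}{n}\right)\to 1.$$

In particular, with $n=10^{6}$ the normalized log of $2$ is about $693147/10^{6}\approx 0.693\approx\ln 2$, and the number whose normalized log is approximately $1$ is $(1+1/n)^{n}\approx e$.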

$2.$ I have a fondness for defining $\log x=\int_1^x t^{-1}\,dt$ because it is so easy, by a linear change of variable, to show that $\log(ab)=\log a+\log b$.
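The change of variable alluded to is worth writing out (a routine computation, added for completeness): splitting the integral at $a$ and substituting $t=au$ in the second piece gives

$$\log(ab)=\int_{1}^{ab}\frac{dt}{t}=\int_{1}^{a}\frac{dt}{t}+\int_{a}^{ab}\frac{dt}{t}=\log a+\int_{1}^{b}\frac{a\,du}{au}=\log a+\log b.$$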

$3.$ H. Dörrie, in $100$ Great Problems of Elementary Mathematics, gives a short and simple deduction of the power series for $\sin$ and $\cos$ (given only $\sin'=\cos$ and $\cos'=-\sin$, that $x>\sin x$ for $x>0$, and that $\cos 0=1,\ \sin 0=0$) that requires no background in the general theory of power series, not even "finite power series plus remainder term."

Solution 3:

You can define $\sin$ and $\cos$ as solutions to the equation

$$f''=-f$$

The function $\sin$ is the unique solution satisfying $f(0) = 0, f'(0) = 1$, and $\cos$ is the unique solution satisfying $f(0) = 1, f'(0) = 0$. In other words, $\sin$ and $\cos$ are the functions describing the orbits of simple harmonic oscillators. We can then define $\pi$ as the half-period of $\sin$ (once we prove that it's periodic). In other words, $\pi$ is the time taken for a harmonic oscillator to go from one extreme value to the other (and therefore really has nothing to do with circles).
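A sketch of the uniqueness claim (a standard argument I am adding here): if $f''=-f$ with $f(0)=f'(0)=0$, then the "energy" $E=f^{2}+(f')^{2}$ is constant and starts at $0$, so $f\equiv 0$; applying this to the difference of two solutions with the same initial data shows that they coincide.

$$E'=2ff'+2f'f''=2f'(f+f'')=0,\qquad E(0)=0\;\Longrightarrow\;E\equiv 0\;\Longrightarrow\;f\equiv 0.$$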

Now let $(x(t), y(t))$ be the coordinates of a particle moving counterclockwise around the unit circle at uniform speed $1$. Since the distance of the particle to the origin is constant, the velocity vector $(x(t), y(t))'$ must be orthogonal to $(x(t), y(t))$, and it is of unit length since the particle has unit speed. Therefore $(x(t), y(t))'=(-y(t), x(t))$.

From this equation we deduce $x''=(-y)'=-x$ and $y''=x'=-y$, and the initial conditions are fixed by our assumptions about the nature of the circular motion. In other words, we've shown that a particle moving uniformly in a circle is a simple harmonic oscillator. Therefore, taking the motion to start at $(1,0)$, we get $x=\cos$ and $y=\sin$ (and the time taken for one rotation is $2\pi$, whence the perimeter formula).

Solution 4:

I quite like the approach taken in some Russian and Bulgarian books, e.g. Fundamentals of Mathematical Analysis by V. A. Ilyin and E. G. Poznyak and Mathematical Analysis by Ilin, Sadovnichi and Sendov. The benefit of the approach is that it uses continuity, monotonicity and (mostly) elementary concepts, which students should know from high school.

We start with the exponential function. For $a > 0$ and $x = \frac{p}{q} \in \mathbb{Q}$ we know what $a^x = a^{\frac{p}{q}}$ is (of course, we should already have proven the existence of $n$-th roots). Now prove that this function (for now defined only on the rational numbers) is monotonic. Finally, for $x \in \mathbb{R}$ define $a^x$ as the unique number $y$ with the following property: for all $\alpha, \beta \in \mathbb{Q}$ with $\alpha < x < \beta$ we have $a^\alpha \leq y \leq a^\beta$ (here $a > 1$; for $0 < a < 1$ the inequalities are reversed). In other words, $a^x$ is defined by "extending via monotonicity".
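As a purely illustrative numerical sketch of this "extension via monotonicity" (the code and the helper name `rational_power` below are my own, not taken from the books cited), one can sandwich $2^{\sqrt 2}$ between $2^{\alpha}$ and $2^{\beta}$ for rational $\alpha<\sqrt 2<\beta$ and watch the interval shrink:

```python
from fractions import Fraction

def rational_power(a, r):
    """a**(p/q) for a > 0 and a rational exponent r, via integer power and q-th root."""
    p, q = r.numerator, r.denominator
    return (a ** p) ** (1.0 / q)  # a float q-th root is enough for this illustration

# Rational lower/upper approximations of sqrt(2) = 1.41421356...
bounds = [(Fraction(14, 10),     Fraction(15, 10)),
          (Fraction(141, 100),   Fraction(142, 100)),
          (Fraction(1414, 1000), Fraction(1415, 1000))]

for alpha, beta in bounds:
    lo, hi = rational_power(2, alpha), rational_power(2, beta)
    print(f"2^{alpha} = {lo:.6f}  <=  2^sqrt(2)  <=  2^{beta} = {hi:.6f}")
```

The intervals narrow around $2^{\sqrt 2}\approx 2.665144$, the unique number caught between all such rational-exponent bounds.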

Now we can define $\log_a (x)$ as the inverse function of $a^x$, that is, the number $t$ such that $a^t = x$. This is exactly the definition students should have been given in high school, so it should come as no surprise.

Next come the trigonometric functions: in high school they are often defined in a similar informal way. To make the definition rigorous we can use functional equations, in a manner similar to what the OP wrote. A student should already know the $\sin(\alpha + \beta)$ and $\cos(\alpha + \beta)$ formulae and $\sin^2 x + \cos^2 x = 1$, so it should be fairly easy to comprehend that these properties essentially define $\sin$ and $\cos$. The definition is: there exists a unique pair of functions $f$ and $g$, defined over the real numbers and satisfying the following conditions:

$1.$ $f(\alpha + \beta) = f(\alpha)g(\beta) + f(\beta) g(\alpha)$
$2.$ $g(\alpha + \beta) = g(\alpha)g(\beta) - f(\alpha)f(\beta) $
$3.$ $f^2(x) + g^2(x) = 1$
$4.$ $f(0) = 0 , g(0) = 1, f(\frac{\pi}{2}) = 1, g(\frac{\pi}{2}) = 0$

We define $\sin(x) = f(x)$ and $\cos(x) = g(x)$. After that, we can establish the known properties of the trigonometric functions and find their Taylor series. At the end, one notices the relation $e^{ix} = \cos(x) + i \sin(x)$.
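As a small example of how conditions $1.$–$4.$ already produce the familiar identities (my own addition, not from the book): taking $\beta=\frac{\pi}{2}$ in conditions $1.$ and $2.$ and using $4.$ gives the quarter-period shifts, and applying them four times in a row shows that $f$ and $g$ are $2\pi$-periodic:

$$f\!\left(x+\tfrac{\pi}{2}\right)=f(x)\,g\!\left(\tfrac{\pi}{2}\right)+f\!\left(\tfrac{\pi}{2}\right)g(x)=g(x),\qquad g\!\left(x+\tfrac{\pi}{2}\right)=g(x)\,g\!\left(\tfrac{\pi}{2}\right)-f(x)\,f\!\left(\tfrac{\pi}{2}\right)=-f(x).$$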

The number $e$

We define the number $e:= \lim_{n\to \infty} (1 + \frac{1}{n})^n$. After the definition of $a^x$ for real $x$ we can show that $\lim _{h \to 0} (1 + h)^{\frac{1}{h}} = e$. When we try to find the derivative of $\log_a(x)$ we get: $$[\log_a(x)]' = \lim_{h \to 0} \frac{1}{x} \log_a \left( 1 + \frac{h}{x}\right)^\frac{x}{h}$$ By the continuity of the logarithm and the above limit we get $[\log_a(x)]' = \frac{\log_a (e)}{x}$. Thus, the natural base for the logarithm is $e$. Because $a^x$ is the inverse of $\log_a(x)$, it is a simple calculation to show that $(a^x)' = a^x \log_e (a)$, and therefore the natural choice of the base $a$ is $e$.
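The "simple calculation" can be spelled out (added for completeness): since $a^{x}$ is the inverse of $\log_{a}$, the inverse-function rule together with $\log_{a}e=\frac{1}{\log_{e}a}$ gives

$$(a^{x})'=\frac{1}{[\log_{a}]'(a^{x})}=\frac{a^{x}}{\log_{a}e}=a^{x}\log_{e}a.$$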

Remarks: The above definitions use only continuity and monotonicity; no derivatives or integrals. For this reason, they are (arguably) more natural than definitions via differential equations: I highly doubt there is a student who has good intuition for the differential equation $f' = f$ but has no idea what $a^x$ is. The main disadvantage of this approach is its length: it takes around $15$ pages without the proof of the existence of $\sin$ and $\cos$, and that proof itself is around $10$ pages more.