Solution 1:

Here are four different references to peruse.

  • Book: Eli Maor, "e: The Story of a Number"
  • Website: "The Number e"
  • Paper: "e: The Exponential - the Magic Number of Growth"
  • Journal: J. L. Coolidge, "The number e", Amer. Math. Monthly 57 (1950), 591-602

Solution 2:

I am quoting here from an ancient USENET post of Matthew P. Wiener, who points out that explanations involving $\sum\frac1{n!}$ and $\left(1+\frac 1n\right)^n$ were not actually the first appearances or calculations of $e$:

Napier, who invented logarithms, more or less worked out a table of logarithms to base $\frac1e$. The basic idea behind a table of logarithms is as follows:

0 1 2 3  4  5  6   7   8   9   10 ...
1 2 4 8 16 32 64 128 256 512 1024 ...

The arithmetic progression in the first row is matched by a geometric progression in the second row. If, by luck, you happen to wish to multiply 16 by 32, both of which happen to be in the bottom row, you can look up their "logs" in the first row, add 4+5 to get 9, and conclude that 16·32=512.

For most practical purposes, this is useless. Napier realized that what one needs for general multiplication is a base of the form $1+\epsilon$ with $\epsilon$ small, so that the intermediate values in the table are much more closely spaced. For example, with base 1.01, we get:

  0 1.00   1 1.01   2 1.02   3 1.03   4 1.04   5 1.05
  6 1.06   7 1.07   8 1.08   9 1.09  10 1.10  11 1.12
 12 1.13  13 1.14  14 1.15  15 1.16  16 1.17  17 1.18
 18 1.20  19 1.21  20 1.22  21 1.23  22 1.24  23 1.26
 24 1.27  25 1.28  26 1.30  27 1.31  28 1.32  29 1.33
 30 1.35  31 1.36  32 1.37  33 1.39  34 1.40  35 1.42
[...]
 50 1.64  51 1.66  52 1.68  53 1.69  54 1.71  55 1.73
[...]
 94 2.55  95 2.57  96 2.60  97 2.63  98 2.65  99 2.68
100 2.70 101 2.73 102 2.76 103 2.79 104 2.81 105 2.84
[...]

So if you need to multiply 1.27 by 1.33, say, just look up their logs, in this case 24 and 29, add them to get 53, and read off 1.27·1.33≈1.69. For two- or three-digit arithmetic, the table only needs entries up to 9.99.

Note that $e$ is almost there, as the antilogarithm of 100. The natural logarithm of a number can be read off from the above table as just $\frac1{100}$ of the corresponding exponent.
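As a quick sanity check (just an illustrative sketch in Python, not anything Napier did; values rounded to two decimals as in the table above), one can rebuild the base-1.01 table and verify both the multiplication trick and the appearance of $e$ at entry 100:

```python
import math

# Rebuild the table above: entry k holds 1.01**k.
table = {k: 1.01 ** k for k in range(1000)}

# Multiplication by adding "logs": 1.27 * 1.33 via entries 24 and 29.
print(round(table[24], 2), round(table[29], 2))  # 1.27 1.33
print(round(table[24 + 29], 2))                  # 1.69  (exact product is 1.6891)

# e is "almost there" as the antilogarithm of 100.
print(table[100], math.e)                        # 2.7048... vs 2.7182...

# The natural log of an entry is roughly its exponent divided by 100.
print(math.log(table[53]), 53 / 100)             # 0.5274 vs 0.53
```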

What Napier actually did was work with base .9999999. He spent 20 years computing powers of .9999999 by hand, producing a grand version of the above. That's it. No deep understanding of anything, no calculus, and $e$ pops up anyway; in Napier's case, $\frac1e$ was the ten-millionth entry. (To be pedantic, Napier did not actually use decimal points, that being a newfangled notion at the time.)
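A one-line numerical check of that claim (with floating-point arithmetic standing in for Napier's twenty years of hand computation):

```python
import math

# The ten-millionth power of Napier's base 0.9999999 is very nearly 1/e.
print((1 - 1e-7) ** 10_000_000)  # 0.3678794...
print(1 / math.e)                # 0.3678794...
```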

Later, in his historic meeting with Briggs, two changes were made: a switch to a base $> 1$, so that logarithms would scale in the same direction as the numbers, and a rescaling of the logarithm side so that $\log(10)=1$. Together, these two changes amount to dividing the old logarithms by $-\log_e(10)$.
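In modern notation, this is just the change-of-base identity (not anything Napier or Briggs would have written down):
$$\log_{10} x \;=\; \frac{\log_{1/e} x}{\log_{1/e} 10} \;=\; \frac{\log_{1/e} x}{-\log_e 10}.$$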

In other words, $e$ made its first appearance rather implicitly.

The calculus connection came later. Fermat had successfully solved the quadrature problem for $y=x^n$ for $n\ne -1$, but not for $y=\frac1x$. Fermat's method was to use geometrically spaced intervals on the $x$ axis and add up the resulting areas. It took a bit of time for a contemporary to notice that this method produces arithmetically spaced areas under the hyperbola, i.e. that a logarithm is at work.
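To spell out what that contemporary noticed, in modern notation rather than Fermat's: under $y=\frac1x$, scaling an interval by a factor $r$ leaves the area unchanged,
$$\int_{ra}^{rb}\frac{dx}{x}=\int_{a}^{b}\frac{r\,du}{ru}=\int_{a}^{b}\frac{du}{u}\qquad(x=ru),$$
so the geometrically spaced points $a, ar, ar^2, \dots$ cut off equal areas, and the accumulated area grows arithmetically, which is exactly the behaviour of a logarithm.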

Solution 3:

My favorite explanation involves adding interest to an investment.

Let's say you invest some money and the bank gives you interest yearly. At the end of the first year you would have $I(1+r)$, where $I$ is the investment and $r$ is the interest rate as a decimal, e.g. 5% = 0.05.

Now let's say you're impatient and you want interest more often. The bank might offer to apply half the interest rate twice in a year. So instead of 5% once, you get 2.5% twice: $I(1+\tfrac{r}{2})^2$.

Now let's say you're still not happy and you want interest even more often. The bank might offer to apply a quarter of the interest rate four times in a year. So instead of 5% once, you get 1.25% four times: $I(1+\tfrac{r}{4})^4$.

If the bank applies the interest $n$ times in a year then you would have $I(1+\tfrac{r}{n})^n$ after a year.

What happens as $n$ gets larger and larger? Say the bank starts paying interest not monthly, not weekly, not daily, not even hourly. What if the bank pays continuously compounded interest? Well:

$$\lim_{n \to \infty}\left(1+\frac{r}{n}\right)^n = \operatorname{e}^r$$
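A quick numerical illustration of this limit (a throwaway sketch in Python; the principal of 1000 and rate $r=0.05$ are arbitrary choices):

```python
import math

I, r = 1000.0, 0.05  # principal and annual rate (arbitrary example values)

# Compounding more and more often approaches continuous compounding, I * e^r.
for n in (1, 2, 4, 12, 365, 10_000):
    print(n, I * (1 + r / n) ** n)
print("continuous:", I * math.exp(r))  # 1051.27...
```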

This is why the exponential turns up in population dynamics and nuclear decay. Someone is being born almost every instant, which gives a tiny continuous increase to the population, just like the bank adding tiny amounts of interest continuously.

If you want to work out $\operatorname{e}$ then just note that $$\lim_{n \to \infty}\left(1+\frac{1}{n}\right)^n = \operatorname{e}$$

For a very good approximation, just pick a very large $n$. For example:

$$\left(1+\frac{1}{10,000}\right)^{10,000} \approx 2.71815$$

Of course, a better way to find an approximation is to use the power series: $$\operatorname{e}^x = 1 + x + \tfrac{x^2}{2!} + \tfrac{x^3}{3!} + \cdots$$ This is much easier to calculate, and it gets very close very quickly: $$1 + 1 + \tfrac{1}{2!} + \cdots +\tfrac{1}{10!} \approx 2.71828$$
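For the curious, here are both approximations side by side (a small sketch using Python's standard math module):

```python
import math

# (1 + 1/n)^n for a large n: converges to e, but slowly.
n = 10_000
print((1 + 1 / n) ** n)                               # 2.7181459...

# Truncated power series at x = 1: 1 + 1 + 1/2! + ... + 1/10!
print(sum(1 / math.factorial(k) for k in range(11)))  # 2.7182818...

print(math.e)                                         # 2.7182818...
```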