$e$ to 50 billion decimal places

There are several reasons why people compute $\pi$ and $e$ to so many digits.

One is simply that it is a way to test hardware: the values are known (or can be verified independently), so you can check whether your hardware is both fast and accurate on these computations.

Another is that there are many questions about the decimal expansions of $\pi$ and $e$ (and other numbers) to which we simply don't know the answer yet. For example: is $\pi$ normal in base 10? That is, does any particular sequence of digits occur in its decimal expansion with about the frequency you would expect? More precisely, given a specific sequence of $n$ digits, will $$\frac{\text{number of times that the specific sequence occurs in the first $m$ digits of $\pi$}}{m}$$ approach $1/10^n$ as $m\to\infty$? (There are $10^n$ different sequences of $n$ digits, so this is what you would expect if the digits were completely random.) The answer cannot be settled just by computing digits of $\pi$, but knowing more and more digits at least lets us check whether the conjecture looks plausible so far. We can also run other tests of randomness on the digits of $\pi$ (or $e$) and see whether the digits seem to pass them.
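
For what it's worth, here is a minimal sketch of the frequency test just described, for sequences of length $n=1$, assuming the Python library mpmath is available to supply the digits; it simply counts how often each digit appears among the first $m$ decimals of $\pi$ and compares each frequency to $1/10$.

```python
# A minimal sketch of the n = 1 frequency test, assuming mpmath is installed.
from collections import Counter
from mpmath import mp

m = 100_000                          # how many decimal digits of pi to examine
mp.dps = m + 10                      # working precision, with guard digits
pi_str = mp.nstr(+mp.pi, m + 5)      # "3.14159..." as a decimal string
digits = pi_str.split(".")[1][:m]    # first m digits after the decimal point

counts = Counter(digits)
for d in "0123456789":
    # for a number normal in base 10, each frequency should approach 1/10
    print(d, counts[d], round(counts[d] / m, 5))
```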

As to your question about $e$: yes, in principle we can do that. But the process gets more and more complex as $n$ gets larger. Normally, a computer only knows how to represent numbers up to a certain size (depending on how many bytes it uses per number), so you need to "explain" to your computer how to handle big numbers like $n!$ for large $n$. The storage space needed to perform the computations also keeps growing. And that idea only works if you start from a known point. So while in theory we could compute $e$ to as many digits as we want this way, in practice, if we want the computations to finish sometime before the Sun runs out of hydrogen, we can't really go that far. There are other known algorithms for computing decimals of $\pi$ or $e$ that are faster, or that don't require you to know the previous digits in order to figure out the next one.
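
For instance, here is a minimal sketch in Python (whose built-in integers are already arbitrary precision, so that part of the "explaining" has been done for us) of the direct approach: summing $\sum_k 1/k!$ in scaled integer (fixed-point) arithmetic. It works fine for a few thousand digits, but the cost of each big-integer division grows with the precision, which is exactly the problem described above.

```python
# A minimal sketch of summing e = sum 1/k! in fixed-point (scaled integer)
# arithmetic; not how record computations are actually done.
def e_digits_naive(digits: int) -> str:
    guard = 10                        # extra digits to absorb truncation error
    scale = 10 ** (digits + guard)
    total = 0
    term = scale                      # scaled value of 1/0! = 1
    k = 0
    while term > 0:                   # stop once 1/k! underflows the scale
        total += term
        k += 1
        term //= k                    # 1/k! = (1/(k-1)!)/k, truncated
    s = str(total // 10 ** guard)     # drop the guard digits
    return s[0] + "." + s[1:]

print(e_digits_naive(50))  # 2.71828182845904523536028747135266249775724709369995
```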

And that leads to yet another thing that people who compute so many digits of $\pi$ and $e$ may be doing: testing algorithms for large-number computation, for floating-point accuracy, or for parallel computing (we are not very good at parallel computing, and figuring out how to do it effectively is a Very Big Deal; computations such as "compute $\pi$ to the $n$ millionth digit" are a convenient way to test ideas for doing parallel computing).

That leads to a further interest in these computations: coming up with mathematical ideas that can zero in quickly on desired digits of a decimal expansion; not because we are particularly interested in the digits of $\pi$ as such, but because we are often interested in particular digits of other numbers. $\pi$ and $e$ are just very convenient benchmarks to test such ideas on.

Why did Euler compute the numbers? Because he wanted to show off some of the uses of Taylor series; up to that time, the methods for approximating the value of $\pi$ were much more onerous. Ludolph van Ceulen famously used the method of Archimedes (approximating a circle by inscribed and circumscribed polygons) to find the value of $\pi$ to 20 decimals (and, after many more years of effort, to 35 places); this was in the late 16th and early 17th centuries, before it was known that $\pi$ is transcendental (indeed, before the notion of transcendence was even clearly formulated), so trying to compute $\pi$ precisely actually had a point. He was so proud of the accomplishment (given all the hard work it had entailed) that he had the value of $\pi$ he computed engraved on his tombstone. Euler showed that Taylor series could be used to obtain the same results with far less effort and with more precision.
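
For concreteness, one standard illustration of the Taylor-series approach (a Machin-type formula, not necessarily the exact series Euler himself used) is
$$\arctan x = x - \frac{x^3}{3} + \frac{x^5}{5} - \cdots, \qquad \frac{\pi}{4} = 4\arctan\frac{1}{5} - \arctan\frac{1}{239}.$$
Because $1/5$ and $1/239$ are small, both series converge rapidly, and a few dozen easily computed terms already yield more correct digits than years of polygon doubling.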


I'd just like to give two quotes from the book The SIAM 100-Digit Challenge: A Study in High-Accuracy Numerical Computing that might help explain motivation. Here is the one from their chapter on computing constants to 10,000 digits:

While such an exercise might seem frivolous, the fact is we learned a lot from the continual refinement of our algorithms to work efficiently at ultrahigh precision. The reward is a deeper understanding of the theory, and often a better algorithm for low-precision cases.

and here is something from the foreword written by David Bailey, one of the pioneers of experimental mathematics:

Some may question why anyone would care about such prodigious precision, when in the “real” physical world, hardly any quantities are known to an accuracy beyond about 12 decimal digits. For instance, a value of π correct to 20 decimal digits would suffice to calculate the circumference of a circle around the sun at the orbit of the earth to within the width of an atom. So why should anyone care about finding any answers to 10,000 digit accuracy?

In fact, recent work in experimental mathematics has provided an important venue where numerical results are needed to very high numerical precision, in some cases to thousands of decimal digit accuracy. In particular, precision of this scale is often required when applying integer relation algorithms to discover new mathematical identities. An integer relation algorithm is an algorithm that, given $n$ real numbers $x_i$, $1\leq i\leq n$, in the form of high-precision floating-point numerical values, produces $n$ integers $a_1,\dots,a_n$, not all zero, such that $a_1x_1+a_2x_2+\cdots+a_n x_n=0$.

The best known example of this sort is the discovery in 1995 of a new formula for π:

$$\pi=\sum_{k=0}^{\infty}\frac1{16^k}\left(\frac{4}{8k+1}-\frac{2}{8k+4}-\frac1{8k+5}-\frac1{8k+6}\right)$$

This formula was found by a computer program implementing the PSLQ integer relation algorithm, using (in this case) a numerical precision of approximately 200 digits. This computation also required, as an input real vector, more than 25 mathematical constants, each computed to 200-digit accuracy. The mathematical significance of this particular formula is that it permits one to directly calculate binary or hexadecimal digits of π beginning at any arbitrary position, using an algorithm that is very simple, requires almost no memory, and does not require multiple-precision arithmetic.
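
To make that last point concrete, here is a rough Python sketch (not the authors' own code) of the digit-extraction idea: it computes a few hexadecimal digits of $\pi$ starting at an arbitrary position, using only modular exponentiation and ordinary double-precision floats, so it needs almost no memory and no multiple-precision arithmetic.

```python
# A rough sketch of BBP-style hexadecimal digit extraction for pi, using only
# ordinary double-precision floats; not the authors' own implementation.
def bbp_hex_digits(d: int, count: int = 8) -> str:
    """Return `count` hex digits of pi starting at position d + 1 after the
    hexadecimal point (d = 0 gives the first hex digit)."""
    def series(j: int) -> float:
        # fractional part of sum_k 16^(d - k) / (8k + j)
        s = 0.0
        for k in range(d + 1):                      # terms with 16^(d-k) >= 1
            s = (s + pow(16, d - k, 8 * k + j) / (8 * k + j)) % 1.0
        t, k = 0.0, d + 1
        while True:                                 # rapidly vanishing tail
            term = 16.0 ** (d - k) / (8 * k + j)
            if term < 1e-17:
                break
            t += term
            k += 1
        return (s + t) % 1.0

    frac = (4 * series(1) - 2 * series(4) - series(5) - series(6)) % 1.0
    out = ""
    for _ in range(count):
        frac *= 16
        out += "0123456789ABCDEF"[int(frac)]
        frac %= 1.0
    # rounding error slowly accumulates with d, so only the leading returned
    # digits should be trusted at very large positions
    return out

print(bbp_hex_digits(0))   # the hex expansion of pi begins 3.243F6A88...
```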


The answer to your last question is no, for a certain value of "no." The problem with your idea is that as $n$ gets bigger it gets harder to calculate $n!$, so it gets harder to tell what the digits of $\frac{1}{n!}$ are. In other words, if you actually tried to carry out your plan, you would quickly find that it is computationally infeasible.

So instead one has to resort to a smarter algorithm. Thus being able to compute constants to high precision is a measure of how smart our algorithms are. If your algorithm is smarter than mine, the most concrete way to prove it is to use it to compute, in a reasonable amount of time, more digits than I can. So while it's probably true that nobody is actually going to use these digits for anything (except possibly to verify certain conjectures), the fact that we know them is a measure of our knowledge, both about algorithms and about $e$. (It is also a measure of how good our hardware is, but I'm trying to emphasize the math here.)
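
As an illustration of what "smarter" can mean here, the following is a rough Python sketch (one common technique, not necessarily what any particular record computation used) of binary splitting applied to $e=\sum_k 1/k!$: it builds the exact fraction $P/Q$ for the partial sum by recursively combining halves, so the big-integer work stays balanced and only one large division happens at the end.

```python
# A rough sketch of binary splitting for e = sum 1/k!; illustrative only.
from math import lgamma, log

def _split(a: int, b: int):
    """Return (P, Q) with sum_{k=a}^{b-1} (a-1)!/k! = P/Q and Q = a*(a+1)*...*(b-1)."""
    if b - a == 1:
        return 1, a
    m = (a + b) // 2
    p_left, q_left = _split(a, m)
    p_right, q_right = _split(m, b)
    return p_left * q_right + p_right, q_left * q_right

def e_digits_binsplit(digits: int) -> str:
    guard = 10
    # pick N with N! >= 10**(digits + guard) so the neglected tail is small enough
    N = 2
    while lgamma(N + 1) < (digits + guard) * log(10):
        N += 1
    p, q = _split(1, N)                              # sum_{k=1}^{N-1} 1/k! = p/q
    scaled = (q + p) * 10 ** (digits + guard) // q   # 1 + p/q, in fixed point
    s = str(scaled // 10 ** guard)                   # drop the guard digits
    return s[0] + "." + s[1:]

print(e_digits_binsplit(50))  # 2.71828182845904523536028747135266249775724709369995
```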

Also, Kondo has published an announcement about a more recent version of this computation (for $\pi$), which gives, among other things, his reasons for doing it.