Is there a significance to the asymptotic probability of at least one occurrence of an event in n attempts?

Let's look at the set of the probabilities of having at least one occurrence of an event if we make $n$ attempts, where the probability of the event occurring is $1/n$.

For example, if we are looking at the probability of the occurrence of rolling a $4$ on a die ($1/6$ chance), we will actually make $6$ attempts. The probability of success in that case (at least one occurrence of a $4$ in the $6$ attempts) is $1-(5/6)^{6} = 0.6651$ (I have rounded the results.) If we make $15$ attempts at something with a probability of its happening (on each separate occasion) being $1/15$, then the probability of its occurring at least once in those $15$ attempts is $1-(14/15)^{15} = 0.6447$. If we use $n=1,000,000$ then we get $0.63212$.
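
For reference, the values above can be reproduced with a few lines of Python, using the same formula $1-(1-1/n)^n$:

```python
# Probability of at least one success in n attempts,
# when each attempt succeeds with probability 1/n.
def p_at_least_one(n: int) -> float:
    return 1 - (1 - 1 / n) ** n

for n in (6, 15, 1_000_000):
    print(f"n = {n:>9}: {p_at_least_one(n):.5f}")
# n =         6: 0.66510
# n =        15: 0.64476
# n =   1000000: 0.63212
```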

So, we see that we have an asymptotic situation. I have always wondered if there was any other known mathematical significance to this specific approximate number/asymptote. Thanks in advance for reading and for all responses!


Roll an $n$-sided die numbered $1$ through $n$. Roll it $n$ times. Your question is equivalent to the following:

What is the probability that you roll at least one $1$, over $n$ throws of this $n$-sided die?

This is equal to $1$ minus the probability that you get no $1$s during the $n$ throws. The probability of not getting a $1$ on any given throw is $(n-1)/n$, so this gives your desired probability as:

$$1 - \left(\frac{n-1}{n}\right)^n$$

which can be rewritten as

$$1 - \left(1- \frac{1}{n}\right)^n$$
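
If you want to sanity-check this formula empirically, here is a rough Monte Carlo sketch of the die-rolling experiment (the trial count and the use of Python's `random` module are my own choices):

```python
import random

def simulate(n: int, trials: int = 100_000) -> float:
    """Estimate P(at least one 1) when an n-sided die is rolled n times."""
    hits = 0
    for _ in range(trials):
        # One experiment: n rolls of a fair n-sided die.
        if any(random.randint(1, n) == 1 for _ in range(n)):
            hits += 1
    return hits / trials

for n in (6, 15, 100):
    exact = 1 - (1 - 1 / n) ** n
    print(f"n = {n:>3}: simulated {simulate(n):.4f}, exact {exact:.4f}")
```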


Look at number $1$ in this list of characterizations of the exponential function:

Define $e^x$ by the limit

$$ e^x = \lim_{n\to\infty} \left(1 + \frac{x}{n}\right)^n $$


This definition applies to your problem, when we let $x=-1$.

In the limit as $n$ goes to infinity, we can write $$\lim_{n\to\infty} \left[\,1 - \left(1- \frac{1}{n}\right)^n\,\,\right] \,\,=\,\, 1 - \,\lim_{n\to\infty} \,\left(1+\frac{-1}{n}\right)^n\,\,=\,\, \boxed{1 - \frac{1}{e}\,} \,\,\approx\,\, 0.6321$$


The point is that $$\lim_{n \to \infty} \left(1 - \frac{1}{n}\right)^n = \frac{1}{e}$$ where $e$ is the base of natural logarithms. This is approximately $0.3678794412$.
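
A quick numerical sanity check in Python:

```python
import math

n = 1_000_000
print((1 - 1 / n) ** n)  # ~0.367879..., approaching 1/e
print(math.exp(-1))      # 0.36787944117144233
```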


Others have already pointed out that the probability of at least one success in $n$ independent trials, each succeeding with probability $1/n$, tends to $1-1/e \approx 0.63212$ as $n$ increases. However, it might be instructive to work out how you could determine this limit yourself.

You already know that the exact probability of observing at least one success in $n$ trials equals $1$ minus the probability of observing no success in $n$ trials, which in turn equals the probability of a single trial failing raised to the $n$-th power. Given the assumption that a single trial succeeds with probability $1/n$, it fails with probability $1 - 1/n$, and thus the probability $p_n$ of observing at least one success in $n$ trials is: $$p_n = 1 - \left( 1 - \frac1n \right)^n.$$

In this formula, $n$ appears twice: once in the denominator and once in the exponent. You've already experimentally observed that, as $n$ increases, $p_n$ seems to tend to a finite limit $0 < p_\infty < 1$ that is independent of $n$. Thus we can expect that, in the limit, those $n$'s should somehow cancel out. Our task, then, is to somehow manipulate the expression for $p_\infty = \lim_{n\to\infty} p_n$ into a form where we can algebraically cancel out the $n$'s and obtain a closed-form expression for $p_\infty$.

To do that, we somehow need to bring the $n$'s together. We can't get the $n$ in the exponent down, but we can raise the other $n$ into the exponent too, by replacing $1 - 1/n$ with the equivalent expression $e^{\log_e(1 - 1/n)}$ and applying the rule $(x^a)^b = x^{ab}$ to get: $$p_n = 1 - e^{\textstyle n \log_e(1 - 1/n)}.$$

(Strictly speaking, this reformulation is only valid for $n > 1$, since the expression $1 - 1/n$ needs to be positive for its logarithm to be a well-defined real number. But since we're concerned with the limit $n \to \infty$, we can safely ignore any special cases involving small values of $n$.)

Of course, we could've just as well picked any other base for the logarithm instead of $e$ above and obtained an equally valid expression for $p_n$. However, what's special about the natural logarithm $\log_e$ is that the derivative of $\log_e(x)$ at $x=1$ equals $1$. In particular, this means that, for values of $x$ close to $0$, we have the approximation $\log_e(1 + x) = x + o(x)$, where $o(x)$ denotes additional higher-order terms such that: $$\lim_{x\to0} \frac{o(x)}{x} = 0.$$

Letting $x = -1/n$, we get $\log_e(1-1/n) = -1/n + o(1/n)$, which we can substitute into the expression for $p_n$ above to get: $$p_n = 1 - e^{\textstyle n\left(-\tfrac1n + o\left(\tfrac1n\right)\right)} = 1 - e^{\textstyle -1 + n \cdot o\left(\tfrac1n\right)}.$$

As $n$ increases, the $n \cdot o(1/n)$ term in the exponent tends to zero, and therefore: $$p_\infty = \lim_{n\to\infty} p_n = 1 - e^{\textstyle -1} = 1 - \frac1e.$$
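
If you'd rather not do the expansion by hand, a computer algebra system will confirm both the limit of the exponent and the limit of $p_n$ itself (a minimal sketch using SymPy):

```python
import sympy as sp

n = sp.symbols("n", positive=True)

# The exponent n * log(1 - 1/n) tends to -1 as n -> oo ...
print(sp.limit(n * sp.log(1 - 1 / n), n, sp.oo))   # -1
# ... so p_n = 1 - (1 - 1/n)**n tends to 1 - exp(-1).
print(sp.limit(1 - (1 - 1 / n) ** n, n, sp.oo))    # 1 - exp(-1)
```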


The general trick here is that, for $x$ close to $0$, $\log_e(1+x) \approx x$ (and, conversely, $e^x \approx 1+x$). Thus, in particular, $(1+x)^n \approx (e^x)^n = e^{nx}$ when $x \approx 0$.

If $nx$ is also small, we can even further approximate $(1+x)^n \approx 1+nx$. This has an intuitive probabilistic interpretation: if an experiment succeeds with a very small probability $p \approx 0$, then repeating it $n$ times yields a success probability of $1-(1-p)^n \approx 1-e^{-np} \approx np$, i.e. approximately $n$ times higher than for a single experiment.

For larger $n$, the second approximation breaks down as the likelihood of multiple successes becomes non-negligible, but the first approximation $1-(1-p)^n \approx 1-e^{-np}$, which only depends on $p$ being small, still holds. In particular, if $n$ and $p$ are inversely proportional, such that $np = c$, then the probability of at least one success tends to $1-e^{-c}$ as $n$ gets larger and $p$ gets smaller.

Another way of looking at this is that we're effectively approximating the binomial distribution of the number of successful trials out of $n$ using a Poisson distribution with the event rate parameter $\lambda = np$. This approximation, which gets better as $n$ gets larger and $p$ gets smaller, then yields a probability of $1 - e^{-\lambda}$ of at least one event occurring.
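
As a rough illustration of how good this approximation is, here is a small comparison for the case $\lambda = np = 1$ considered in the question (the exact binomial probability of at least one success versus the Poisson approximation):

```python
import math

c = 1.0  # expected number of successes, lambda = n * p, held fixed

for n in (6, 15, 100, 1_000_000):
    p = c / n
    exact = 1 - (1 - p) ** n      # exact binomial P(at least one success)
    poisson = 1 - math.exp(-c)    # Poisson approximation with lambda = c
    print(f"n = {n:>9}: exact {exact:.6f}, Poisson approx {poisson:.6f}")
```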


$$1- \left(\frac{n-1}{n}\right)^n=1-\left(1 - \frac{1}{n} \right)^n$$

Taking the limit as $n \to \infty$,

$$\lim_{n \to \infty}\left[1-\left(1 - \frac{1}{n} \right)^n\right] = 1-\exp(-1)$$

Hence the numbers you computed approximate $1-\exp(-1)$.