Can every divergent series be regularized?

The following reflects my (elementary) understanding of divergent series. We first define the sum of an infinite series as follows:

$L = \sum_{n=0}^{\infty}a_n \Leftrightarrow L = \lim_{k \rightarrow \infty} S_k.$

where $S_k = \sum_{n=0}^{k} a_n$ is the $k$th partial sum of the series. A series for which this limit exists is said to be convergent; otherwise, it is called divergent.

By this definition, series like

$1-1+1-\ldots$ and $1+2+3+\ldots$ are divergent.
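As a quick numerical illustration (my own sketch, not part of the question), the partial sums of these two series never settle down to a limit:

```python
# Partial sums of the Grandi series 1 - 1 + 1 - ... oscillate between 1 and 0,
# while those of 1 + 2 + 3 + ... grow without bound: neither limit exists.

def partial_sums(terms, k):
    """Return the first k partial sums S_0, ..., S_{k-1} of the series."""
    sums, total = [], 0
    for n in range(k):
        total += terms(n)
        sums.append(total)
    return sums

grandi = partial_sums(lambda n: (-1) ** n, 8)   # [1, 0, 1, 0, 1, 0, 1, 0]
natural = partial_sums(lambda n: n + 1, 8)      # [1, 3, 6, 10, 15, 21, 28, 36]
print(grandi)
print(natural)
```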

Then we have the notion of a regularized sum, where we look for a new definition of the infinite sum that allows us to assign real values to some divergent series. The new definition should also be consistent with the old one: any series that converges under the standard definition $L = \sum_{n=0}^{\infty}a_n \Leftrightarrow L = \lim_{k \rightarrow \infty} S_k$ should also be summable under the new definition, and the two definitions should yield the same value $L$. Although I'm not sure of the following, it seems that different summation methods always assign the same value to a given divergent series (when they assign one at all), so that $1-1+1-\ldots=1/2$ under Cesàro summation, Abel summation, and any other method that assigns a value to this series.
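To see Cesàro summation in action on the Grandi series, one can average the partial sums; this is a minimal numerical sketch of my own, not part of the question:

```python
# Cesàro summation: average the first k partial sums. For 1 - 1 + 1 - ...
# the partial sums oscillate between 1 and 0, but their averages tend to 1/2.

def cesaro_mean(terms, k):
    """Average of the first k partial sums of the series sum terms(n)."""
    total, running = 0, 0
    for n in range(k):
        total += terms(n)     # current partial sum S_n
        running += total      # accumulate S_0 + ... + S_n
    return running / k

grandi = lambda n: (-1) ** n
print(cesaro_mean(grandi, 10))      # exactly 0.5 for even k
print(cesaro_mean(grandi, 100001))  # close to 0.5 for large odd k
```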

In addition, there are series like $1+2+3+\ldots$ that are neither Cesàro nor Abel summable, but can be summed by other methods such as zeta-function regularization. This implies that a series that is not summable under one summation method (say Cesàro's) can be summable under another (like zeta regularization).
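By contrast, averaging partial sums does not tame $1+2+3+\ldots$: the Cesàro means (averages of the partial sums) themselves diverge, so the series is not Cesàro summable. A short sketch of my own:

```python
# For 1 + 2 + 3 + ... the partial sums are S_n = (n+1)(n+2)/2, and their
# averages grow like k^2 / 6, so the Cesaro means diverge as well.

def cesaro_mean(terms, k):
    """Average of the first k partial sums of the series sum terms(n)."""
    total, running = 0, 0
    for n in range(k):
        total += terms(n)   # current partial sum S_n
        running += total    # accumulate S_0 + ... + S_n
    return running / k

for k in (10, 100, 1000):
    print(k, cesaro_mean(lambda n: n + 1, k))  # grows without bound
```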

This last fact leads me to my question:

-Can every divergent series be regularized? That is, for every series that is not summable under some given summation method, can we find a new summation method that sums it?

-If the answer to the last question is yes, does there exist a single summation method that can sum (regularize) every divergent series?


In the most general sense, a summation method is a partial function from the set of summand sequences to $\mathbb R$ (or $\mathbb C$). This sounds like we could assign more or less arbitrary values, and if we want, we really can. However, certain properties of summation methods are usually preferred to hold, such as

  • Regularity: the summation method should be an extension of the standard convergent-sequence-of-partial-sums method
  • Linearity: if $\sum a_n$ and $\sum b_n$ are defined, then $\sum(ca_n+b_n)$ is also defined and $\sum(ca_n+b_n)=c\sum a_n+\sum b_n$
  • Stability: $\sum a_n$ is defined if and only if $\sum a_{n+1}$ is defined, and then $\sum a_n=a_0+\sum a_{n+1}$

To repeat: not all summation methods (not even all methods in practical use) obey all three criteria. But if we concentrate on methods obeying all three, then indeed we often find that certain (classically) divergent series are assigned the same value under every such summation method. For example, $\sum x^n=\frac1{1-x}$ follows for all $x\ne 1$ for which the sum is defined, merely by playing around with stability and linearity.
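As a worked instance of that claim (an illustrative derivation, filled in here rather than taken from the answer): write $S(x)=\sum_{n\ge 0} x^n$ and suppose it is assigned by some linear, stable method. Then

```latex
S(x) = x^0 + \sum_{n\ge 0} x^{n+1}      % stability
     = 1 + x \sum_{n\ge 0} x^{n}        % linearity (factor out x)
     = 1 + x\,S(x),
\qquad\text{hence}\qquad
S(x) = \frac{1}{1-x} \quad (x \ne 1).
```

So any linear, stable method that sums the geometric series at some $x\ne 1$ must assign it this same value.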

So how high can we aim? We can use Zorn's lemma to find a maximal regular, linear, stable summation method. But will "maximal" imply "total", i.e., that all series become summable? And will the summation thus obtained be well-defined? Unfortunately, the answer to both is no. This can already be exemplified with $\sum 1$, which would have to satisfy $\sum 1 = 1+\sum 1$ by stability, and no value does. (Then again, you may have read that regularization can assign $1+1+1+\ldots =-\frac12$; apparently those methods are not linear or not stable ...)


A new method for summing series is available with the hyperreal numbers, but it depends on the assumption that $(-1)^\omega = 0$, where $\omega$ is the hyperreal unit infinity. Essentially, the steps are:

  1. Take the series representation and derive a formula for the $k$th partial sum. This is done using discrete calculus or a standard summation method.
  2. Substitute the infinite unit $\omega$ for $k$.
  3. Reduce the result using appropriate rules, including $(-1)^\omega = 0$.
  4. Optionally, round to whichever "level of infinity" you desire (i.e., take the standard part, or leave it with infinities/infinitesimals, etc.).

Example:

Let's say you wanted to know the sum of the natural numbers: $1 + 2 +3 + \ldots$. This is a simple arithmetic series. The formula for the $k$th partial sum of an arithmetic series is $$\sum_{n = 1}^k a + (n - 1)d = \frac{k^2d}{2} + \frac{k(2a - d)}{2}$$
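That partial-sum formula can be checked against direct summation; a quick sanity check of my own, not part of the answer:

```python
# Check the arithmetic-series partial-sum formula
#   sum_{n=1}^{k} (a + (n-1) d) = k^2 d / 2 + k (2a - d) / 2
# against direct summation for a few choices of a, d, k.

def direct_sum(a, d, k):
    return sum(a + (n - 1) * d for n in range(1, k + 1))

def formula(a, d, k):
    return k * k * d / 2 + k * (2 * a - d) / 2

for a, d, k in [(1, 1, 10), (1, 2, 7), (3, 5, 100)]:
    assert direct_sum(a, d, k) == formula(a, d, k)
print("formula matches direct summation")
```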

So, since we are going to infinity (i.e., $\omega$), and $a = 1$ and $d = 1$, the result will be $\frac{\omega^2}{2} +\frac{\omega}{2} \simeq \frac{\omega^2}{2}$ (the $\simeq$ indicates that the principal part of the result is $\frac{\omega^2}{2}$; once you square an infinity, the next infinite power down plays essentially no role in the value).

Now, let's look at the series $1 + 3 + 5 + \ldots$. Here, $a = 1$ and $d = 2$, so the result will be simply $\omega^2$.

Using this method, you can do things like divide series by each other. So, we can say: $$\frac{(1 + 2 + 3 + \ldots)}{(1 + 3 + 5 + \ldots)} = \frac{\frac{\omega^2}{2}}{\omega^2} = \frac{1}{2}$$
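The same ratio already shows up in the finite partial sums: $\frac{k(k+1)/2}{k^2} \to \frac12$ as $k$ grows, which a short computation confirms (my own illustrative sketch):

```python
# The ratio of k-term partial sums of 1+2+3+... and 1+3+5+... tends to 1/2,
# mirroring the hyperreal ratio (omega^2 / 2) / omega^2 = 1/2.

def ratio(k):
    naturals = sum(range(1, k + 1))                  # k(k+1)/2
    odds = sum(2 * n - 1 for n in range(1, k + 1))   # k^2
    return naturals / odds

for k in (10, 100, 10000):
    print(k, ratio(k))  # approaches 0.5
```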

Using this method, many divergent series become better behaved. In particular, $(1 + 3 + 5 + \ldots)$ is not the same series as $(1 + 0 + 3 + 0 + 5 + \ldots)$, even though they look like they might be; the latter has a value of $\frac{\omega^2}{4}$.
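The distinction is visible in the finite partial sums, under the assumption (mine, for illustration) that the hyperreal value tracks the leading term of the $k$th partial sum: after $k$ terms, $1+3+5+\ldots$ has summed $k$ odd numbers, giving $k^2$, while the zero-padded version has summed only $k/2$ of them, giving $(k/2)^2 = k^2/4$:

```python
# After k terms, 1+3+5+... reaches k^2, while the zero-padded version
# 1+0+3+0+5+... reaches (k/2)^2 = k^2/4 (for even k): leading coefficients
# 1 versus 1/4, matching omega^2 versus omega^2/4.

def odd_partial(k):
    """Sum of the first k terms of 1 + 3 + 5 + ..."""
    return sum(2 * n + 1 for n in range(k))

def padded_partial(k):
    """Sum of the first k terms of 1 + 0 + 3 + 0 + 5 + ..."""
    return sum(n + 1 if n % 2 == 0 else 0 for n in range(k))

print(odd_partial(100))      # 10000 = 100^2
print(padded_partial(100))   # 2500  = 100^2 / 4
```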

This method winds up yielding consistent values over a wide range of divergent series.

You can see the introduction to the method, and an application of it to the question of Ramanujan summation here.