Why do we stop at exponentiation stage in arithmetic of natural numbers?

On the natural numbers, the unary successor operator $S$ is the most natural function: it maps each number to the next one. We may then consider the binary operation $+$ as an iteration of $S$; likewise, $\times$ is an iteration of $+$, and $\exp$ is an iteration of $\times$, i.e.

$\forall m,n\geq 1$

$m+n:=\underbrace{S(S(S(\cdots S}_{n\ \text{times}}(m))))$

$m\times n:=\underbrace{m+m+m+\cdots+m}_{n\ \text{times}}$

$m^n:=\underbrace{m\times m\times m\times \cdots\times m}_{n\ \text{times}}$

We build the rich arithmetic of the natural numbers from these three natural operators, and they exhibit many intricate relations with one another. But why do we stop here in the arithmetic of natural numbers and not go on iterating operators again and again? That is:

$m*n:=\underbrace{m^{m^{m^{.^{.^{.^{m}}}}}}}_{n\ \text{times}}$

$m\circledast n:=\underbrace{m*m*m*\cdots*m}_{n\ \text{times}}$

$m\circledcirc n:=\underbrace{m\circledast m\circledast m\circledast \cdots\circledast m}_{n\ \text{times}}$

$\vdots$
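For concreteness, the tower in $m*n$ is evaluated from the top down, so $2*3=2^{(2^2)}=16$ while $2*4=2^{(2^{(2^2)})}=65536$. Below is a minimal sketch of this whole ladder in Python; the name `hyper` and the indexing by level $k$ are my own choices here, not standard notation.

```python
def hyper(k: int, m: int, n: int) -> int:
    """Level k of the ladder above: k = 1 is +, k = 2 is ×, k = 3 is
    exponentiation, k = 4 is the tower operator *, and so on.
    Each level iterates the previous one, nesting from the right.
    """
    if k == 1:
        return m + n          # n applications of the successor S to m
    if n == 1:
        return m              # a single copy of m at every level above +
    return hyper(k - 1, m, hyper(k, m, n - 1))
```

For example, `hyper(3, 2, 3)` returns $2^3 = 8$ and `hyper(4, 2, 3)` returns $2*3 = 16$.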

The point is that there may be rich interactions between these new natural operators and the ordinary arithmetic operators of the natural numbers. Such interactions might reveal deep aspects of long-standing open questions in number theory and, one hopes, even lead to solutions.

Question: Why do we stop at the exponentiation stage in the arithmetic of natural numbers? Is there any mathematical or philosophical problem with defining such generalized operators and working with them alongside successor, sum, multiplication, and exponentiation? Are they "unnatural" in any sense? If so, what does this "unnatural" essence mean? Has this extended family of operators on the natural numbers appeared in any text before? If so, please provide references.


Solution 1:

Why do we stop at the exponentiation stage in the arithmetic of natural numbers?

We don't actually stop there. Knuth's up-arrow notation generalizes these operations to any level you want, and Conway's chained-arrow notation goes even further. Note, though, that all these generalizations apply only to $n\in\mathbb{N}$.
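To make the correspondence concrete, here is a hedged usage sketch reusing the hypothetical `hyper` function sketched in the question above; Knuth's $a \uparrow^{j} b$ is level $j + 2$ of that ladder.

```python
print(hyper(3, 3, 3))   # 3 ↑ 3  = 3**3      = 27
print(hyper(4, 3, 3))   # 3 ↑↑ 3 = 3**(3**3) = 7625597484987
# 3 ↑↑↑ 3 = hyper(5, 3, 3) would be a tower of 7625597484987 threes:
# far too large to evaluate, which is why these notations serve mainly
# as names for enormous numbers.
```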

The inability to generalize, and hence the stop at exponentiation, becomes a problem only when you try to extend the definitions to real $n$. That is an entirely different problem and opens a rather big can of worms; see the Wikipedia article on tetration for details.

Solution 2:

Others have done a good job of explaining how exponentiation can be generalized. I'm going to address the question of why mathematicians are not interested in these generalizations.

I think the main reason why Knuth's up-arrow notation has not taken off in the way that, say, exponentiation has is that some of the very nice properties enjoyed by addition and multiplication start to break down upon further iteration. Some of these nice properties are:

  1. Associativity: $(a+b)+c=a+(b+c)$ and $(ab)c=a(bc)$.
  2. Commutativity: $a+b = b+a$ and $ab=ba$.
  3. Existence of an identity: $a + 0 = a$ and $a\cdot 1 = a$.
  4. Existence of an inverse: $a - a = 0$ and $a\cdot \frac 1a=1$.
  5. Distributivity: $a(b+c) = ab + ac$ and $(a+b)c = ac + bc$.

Mathematicians have found it very fruitful to abstract out these properties and study other "number systems" that obey these rules, in addition to a few others. This train of thought leads to commutative algebra and the study of commutative rings, among other fertile branches of mathematics. We therefore have a good "moral" reason to be interested in binary operations that satisfy some subset of the five properties above: there's a lot to say about them.

Let's see how exponentiation stacks up: is exponentiation a "nice" binary operation?

  1. Associativity: $(a^b)^c\neq a^{(b^c)}$ in general.
  2. Commutativity: $a^b\neq b^a$ in general.
  3. Existence of an identity: $a^1 = a$ but there is no $x$ such that $x^a=a$ for all $a$. So exponentiation only has a partial identity.
  4. Existence of an inverse: it doesn't really make sense to talk about an inverse because there's only a partial identity. It is possible to solve the equation $x^b=c$ by taking the $b$th root of $c$, at least when $c$ is nonnegative. It is also possible to solve the equation $b^x = c$ by using the logarithm function, at least when $b$ and $c$ are positive and not equal to $1$.
  5. Distributivity: $a^{bc} \neq a^ba^c$, although $(bc)^a = b^ac^a$. So exponentiation is only partially distributive.

You can already see that exponentiation is a lot more complicated than addition and multiplication. There are really two exponentiation functions: $x\mapsto x^a$ and $x\mapsto a^x$, which behave very differently. The first is inverted by taking roots and the second by taking logarithms; both also start to behave badly if the arguments are negative. As you can imagine, the higher iterates of exponentiation are even more complicated.
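A quick numeric check in plain Python (the values $2, 3, 4$ are arbitrary) makes the contrast concrete:

```python
a, b, c = 2, 3, 4

print((a ** b) ** c)    # 4096
print(a ** (b ** c))    # 2417851639229258349412352: not associative

print(a ** b, b ** a)   # 8 9: not commutative

print(a ** (b * c), (a ** b) * (a ** c))   # 4096 128: no left distributivity
print((b * c) ** a, (b ** a) * (c ** a))   # 144 144: but (bc)^a = b^a c^a holds
```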

I don't want to suggest that mathematicians never study objects that lack "nice" behavior; there are many counterexamples to such a claim. But we tend to study badly behaved objects only when there is some clear use for them, and Knuth's up-arrow doesn't seem to be much more than a good way of talking about really large numbers.

Solution 3:

Large numbers do appear moderately often in the research literature in combinatorics. Examples of notations for them that I can think of are Knuth's Up-Arrow Notation, Conway's Chained-Arrow Notation, Hyperoperators, Tetration, Cutler's Bar Notation, and Steinhaus-Moser Notation.

A good starting point for the study of such operators would be this article on Wikipedia: Knuth's Up-Arrow Notation.

Also, if you are interested in large numbers, you might want to check this out: Googol.

You could also read The Lore of Large Numbers by Philip J. Davis, published in the New Mathematical Library series.

Solution 4:

Instead of thinking of multiplication as repeated addition, we can define it as the unique commutative, associative operator that distributes over addition and has $1$ as its identity. This suggests another approach to generalizing beyond multiplication: ask what operator distributes over multiplication. Define the operator
$$ a *_n b = \exp^n\bigl(\log^n(a) + \log^n(b)\bigr), $$
where $\exp^n$ denotes $n$-fold application of the $\exp$ function, and similarly for $\log^n$. The first few cases are
$$ a *_0 b = a + b, \qquad a *_1 b = ab, \qquad a *_2 b = a^{\log(b)} = b^{\log(a)}. $$
These are known as Bennett's commutative hyperoperations. The operator $*_2$ distributes over multiplication, in the sense that
$$ a *_2 (bc) = (a *_2 b)(a *_2 c), $$
and in general the operator $*_n$ distributes over $*_{n-1}$:
$$ a *_n (b *_{n-1} c) = (a *_n b) *_{n-1} (a *_n c). $$
In contrast to exponentiation and tetration, these operators are all commutative, associative, and can be applied to complex numbers. We can even extend the sequence in the other direction by asking what addition distributes over. The answer is the operator $*_{-1}$, defined by
$$ a *_{-1} b = \log\bigl(\exp(a) + \exp(b)\bigr), $$
which satisfies
$$ a + (b *_{-1} c) = (a + b) *_{-1} (a + c). $$
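As a minimal numerical sketch of these operations (the function name `bennett` and its signature are my own choices; it assumes real arguments for which the iterated logarithms are defined, e.g. $a, b > 1$ for $n = 2$):

```python
import math

def bennett(a: float, b: float, n: int) -> float:
    """Bennett's commutative hyperoperation a *_n b = exp^n(log^n(a) + log^n(b)).

    n = 0 is addition, n = 1 is multiplication, n = 2 gives a**log(b);
    n = -1 is log(exp(a) + exp(b)), the operation addition distributes over.
    """
    x, y = a, b
    for _ in range(abs(n)):          # apply log n times (or exp, for n < 0)
        x, y = (math.log(x), math.log(y)) if n > 0 else (math.exp(x), math.exp(y))
    s = x + y                        # add in the transformed coordinates
    for _ in range(abs(n)):          # transform back
        s = math.exp(s) if n > 0 else math.log(s)
    return s

# Sanity checks: *_1 is multiplication, *_2 equals a**log(b), and *_2
# distributes over multiplication (all up to floating-point rounding).
print(bennett(2.0, 3.0, 1))                            # 6.0
print(bennett(4.0, 9.0, 2), 4.0 ** math.log(9.0))      # equal
print(bennett(2.0, 3.0 * 5.0, 2),
      bennett(2.0, 3.0, 2) * bennett(2.0, 5.0, 2))     # equal
```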