Ultrafinitism and the denial of the existence of $\lfloor e^{e^{e^{79}}} \rfloor$

I was reading about ultrafinitism and ultrafinitists' denial of the existence of $\lfloor e^{e^{e^{79}}} \rfloor$.

I am wondering: if they deny the existence of $\lfloor e^{e^{e^{79}}} \rfloor$, shouldn't they actually deny the very existence of $e$ in the first place, let alone of $e^{e^{e^{79}}}$? Since $e$ is itself defined/obtained as a limit, if ultrafinitists deny the existence of large numbers, then surely the concept of a limit does not exist for them either. Am I right?


Note that ultrafinitism is not a single philosophy; it has various flavors, and these are often not defined rigorously. Most are ideas and criticisms of the state of affairs in the classical view of mathematics. The term ultrafinitism is used to refer to several of these positions, all of which are concerned with the feasibility of mathematical objects.

Classical real numbers (à la Dedekind or à la Cauchy) are already not finite objects, so if you take the classical view of real numbers as infinite objects, then as a finitist you should not accept $e$.

However, there is a way around this. One can view $e$ as referring to a particular algorithm that computes approximations to $e$; then one can consider it a legitimate finitist object.
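For concreteness, here is a minimal sketch of such an algorithm (in Python; the function names are mine, for illustration only). It approximates $e$ by partial sums of the series $e = \sum_{k \ge 0} 1/k!$, using only finite rational arithmetic:

```python
from fractions import Fraction

def e_approx(n_terms: int) -> Fraction:
    """Partial sum of e = sum_{k>=0} 1/k!, a finite rational object."""
    term = Fraction(1)   # 1/0! = 1
    total = Fraction(1)
    for k in range(1, n_terms):
        term /= k        # term is now 1/k!
        total += term
    return total

def e_digits(n_digits: int) -> str:
    """First n_digits decimal digits of e.

    A margin of 10 extra terms is plenty: the series tail after
    n_digits + 10 terms is below 2/(n_digits + 10)!, far smaller
    than 10**(-n_digits), so the truncated digits are correct
    (barring an astronomically long run of 9s at the cutoff).
    """
    approx = e_approx(n_digits + 10)
    scaled = approx.numerator * 10**n_digits // approx.denominator
    s = str(scaled)
    return s[0] + "." + s[1:]

print(e_digits(30))  # 2.718281828459045235360287471352
```

On this reading, the finite program itself, rather than an infinite Dedekind cut or Cauchy sequence, is the object named $e$.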

If you followed the explanation above, the situation for ultrafinitism is similar. Since the algorithm for computing approximations to $e$ is quite short and efficient, an ultrafinitist can accept its existence. The problem then lies not with $e$ but with the exponentiation function, which cannot be computed efficiently: there is no efficient algorithm to compute the bits of the exponential of an efficiently given real number. The floor of the exponential of a given real number is a finite object, so a finitist can consider it an actual mathematical object; however, there does not seem to be any feasible way of obtaining its bits.
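A rough back-of-the-envelope computation makes the infeasibility concrete (a Python sketch; it uses only the identity $\log_{10} N = e^{e^{79}} \log_{10} e$ for $N = e^{e^{e^{79}}}$ and drops lower-order terms):

```python
import math

# Size of N = floor(e^(e^(e^79))) in positional notation:
#   digits(N) ~ log10(N) = e^(e^79) * log10(e),
# so the digit count itself satisfies
#   log10(digits(N)) ~ e^79 * log10(e).
e79 = math.exp(79)                            # ~ 2.038e34
log10_digit_count = e79 * math.log10(math.e)  # ~ 8.85e33
print(f"e^79      ~ {e79:.3e}")
print(f"digits(N) ~ 10^({log10_digit_count:.3e})")
# Merely writing down *how many digits N has* would take about
# 8.9e33 symbols, far beyond any feasible amount of storage.
```

So $\lfloor e^{e^{e^{79}}} \rfloor$ is a perfectly good finite number classically, yet no explicit positional representation of it can ever be produced.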

I would say there are two issues here with $\lfloor e^{e^{e^{79}}} \rfloor$, because it is not given explicitly:

  • the size of the representation of a mathematical object, and
  • the efficiency of obtaining essential information from that representation (in this case the bits of the number).

What we care about is whether the object can be given explicitly; here it is given only implicitly, by algorithms. If both conditions hold, then we can consider it a legitimate mathematical object from the ultrafinitist perspective, since it can be turned into an explicitly given object of feasible size. Otherwise it is not clear that it is a legitimate feasible object.

To explain this further, we can think of it in the following way:

efficient algorithms of feasible size can be used to represent feasible objects

since such implicitly given objects can be turned into explicitly given objects of feasible size, and this can be done using a feasible amount of resources. You can think of "explicitly given" as meaning given in normal form; for natural numbers this means representations like $SSSSS0$ (though decimal notation would also be fine, at least according to some ultrafinitist views, such as Nelson's Predicative Arithmetic). Exponentiation, however, is not acceptable; beyond our intuition about its infeasibility there are other reasons to regard it so; for his arguments, see Nelson's Predicative Arithmetic.
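As a toy illustration of the explicit/implicit distinction (a sketch in Python, with illustrative names of my own), compare the unary normal form above with an implicit description such as $2^{100}$:

```python
def to_normal_form(n: int) -> str:
    """Explicit unary numeral: n copies of 'S' in front of '0'."""
    return "S" * n + "0"

print(to_normal_form(5))   # SSSSS0 -- feasible
# An implicit description like 2**100 is only a few symbols long,
# but its normal form would contain 2**100 copies of 'S': turning
# the implicit object into an explicit one is exactly where
# feasibility breaks down.
# to_normal_form(2**100)   # would need ~1.27e30 characters
```

The description $2^{100}$ is of feasible size, but no feasible amount of resources converts it into its normal form; this is the sense in which exponentiation takes us outside the feasible objects.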

(I think another argument, from the classical perspective, can be based on the non-uniqueness of the definition of exponentiation in non-standard models of arithmetic; and one can find further arguments for why exponentiation should not be considered a feasible operation.)