How to evaluate fractional tetrations?
Ah yes, a favorite topic of mine. Basically, there is no universally agreed-upon way to do this. The problem is that, in general, there isn't a unique way to interpolate the values of tetration at integer "height" (the "height" being the number of exponents in the "tower"). So in theory, you could define it to be anything.
In the case of exponentiation, one has the useful identity $a^{n + m} = a^n a^m$, which allows for a "natural" extension to non-integer values of the exponent. Namely, you can see, for example, that $a^1 = a^{1/2 + 1/2} = (a^{1/2})^2$, from which we can say that we need to define $a^{1/2} = \sqrt{a}$ if we want that identity to hold in the extended exponentiation. No such identities exist for tetration.
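To make the point concrete (this is just the same argument spelled out, nothing beyond it): the identity forces every rational exponent, since
$$a^1 = a^{1/n + \cdots + 1/n} = \left(a^{1/n}\right)^n \;\Longrightarrow\; a^{1/n} = \sqrt[n]{a}, \qquad a^{m/n} = \left(a^{1/n}\right)^m = \left(\sqrt[n]{a}\right)^m.$$
It is the absence of any analogous functional equation for tetration that makes the interpolation non-canonical.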
You may also want to look at Qiaochu Yuan's answer here, where he explores some of this from a viewpoint of higher math:
https://math.stackexchange.com/a/56710/11172
One could, perhaps, compare this problem to the question of interpolating the factorial $n!$ to non-integer values of $n$. There is, in general, no simple identity that provides a natural extension for this, either. But, when an extension is desired, the usual choice is to use what is called the "Gamma function", defined by
$$\Gamma(x) = \int_{0}^{\infty} e^{-t}\, t^{x-1}\, dt.$$
Then, you can extend the factorial to non-integer $x$ by setting $x! = \Gamma(x+1)$. However, usually one does not use $x!$ for non-integer factorials, but rather the Gamma function notation.
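To see the interpolation at work, here is a small Pari/GP check (my own illustration; `myGamma` and `fact` are names I introduce here, not standard functions):

myGamma(x) = intnum(t = 0, [oo, 1], exp(-t) * t^(x-1))   \\ Gamma(x) by direct integration of the formula above
fact(x)    = myGamma(x + 1)                               \\ x! := Gamma(x+1)
fact(5)       \\ 120.000...   matches the integer factorial 5!
fact(1/2)     \\ 0.886226...  = sqrt(Pi)/2, a non-integer "factorial"
gamma(3/2)    \\ the same value from Pari's built-in gamma function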
One can give a uniqueness theorem involving some simple analytical conditions; it is called the Bohr-Mollerup theorem. In addition, the gamma function has various nice number-theoretic and analytic properties, and turns up in a number of different areas of math.
But in the case of tetration, there are no nice integral representations known. Henryk Trappmann and some others recently proved a theorem that gives a simple uniqueness criterion for the inverse of tetration (with respect to the "height"), assuming an extension not just to the real numbers but to the complex numbers:
http://www.ils.uec.ac.jp/~dima/PAPERS/2009uniabel.pdf
The solution that satisfies the condition is one that was developed by Hellmuth Kneser in the 1940s. I call it "Kneser's tetrational function" or simply "Kneser's function". It defies simple description.
On this site:
http://math.eretrandre.org/tetrationforum/index.php
an algorithm was posted to compute the Kneser solution for various bases of tetration (though I'm not sure whether it has been proven to converge to it). Using this solution, the answer to your question would be
$$^{4/3} 3_\mathrm{Kneser} = 4.834730793026332...$$
Other interpolations for tetration have been proposed, some of which give different results. But this is the only one that seems to satisfy "nice" properties like analyticity and has a simple uniqueness theorem via its inverse. Yet as I said in the beginning, I don't believe that it's universally agreed by the general mathematical community that this is "the" answer.
Here is a quick-and-dirty implementation in Pari/GP to get some intuition about what is going on at all. The "Kneser method" is much more involved, but there seems to be a good possibility that the simple method below (I call it the "polynomial method") approximates the Kneser method as the size of the matrices is increased without bound.
n = 32                       \\ size of the matrix and the truncated power series
default(realprecision, 800)  \\ for n=48 choose realprecision of at least 2000;
                             \\ for n=64 Pari/GP needs many more digits (and time)
default(format, "g0.12")     \\ display only 12 significant digits

[b = 3, bl = log(b)]         \\ we do exponentiation/tetration to base b = 3

Bb = matrix(n, n, r, c, (bl*(c-1))^(r-1)/(r-1)!);  \\ Carleman matrix of the iterable map z1 = 3^z0

tmpM = mateigen(Bb);               \\ diagonalize Bb so that ...
tmpW = tmpM^-1;                    \\ ... fractional powers of ...
tmpA = tmpW * Bb * tmpM;           \\ ... the matrix Bb are possible
tmpD = vector(n, i, tmpA[i, i]);   \\ extract the diagonal (eigenvalues)
\\ ==============================================================================
h = 4/3                      \\ the tetration "height"; may be fractional,
                             \\ and is best kept in the interval 0..1

coeffs = tmpM * vectorv(n, r, tmpW[r, 2] * tmpD[r]^h)   \\ coefficients of the power series for height h = 4/3

z0 = 1.0                     \\ the usual starting value for tetration is z0 = 1
z1 = sum(k = 0, n-1, z0^k * coeffs[1+k])   \\ z0 tetrated to height 4/3 with base 3

\\ results:
\\ 4.8347111352647465948     \\ with matrix size n = 32
\\ 4.8347252436478228906     \\ with matrix size n = 48
\\                           \\ n = 64: expected to approach the Kneser value
\\                           \\ as the matrix size is increased further
\\ 4.834730793026332...      \\ reference value by the Kneser method, as shown by @Mike4ty4
Here is another possible answer.
Let $f(x)=\log_{10}(x+1)$ and $g(x)=10^x-1$ (inverse function of $f$).
Then let $f^n(x) = f(f(\cdots(f(x))\cdots))$ with $n$ copies of $f$, and similarly for $g$.
$$^{x+2}10\approx\lim_{n\to \infty} g^n(f^n(10^{10})\cdot(\ln10)^x)$$
The values behave fairly well for positive x-values. With $x=0.5$ and $n=40$, I got $^{2.5}10\approx4.106483157\times10^{294}$. With $x=1$ and $n=40$, I got $^310\approx9.881444237\times10^{9,999,999,999}$ (not exact because I only did 40 iterations).
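As a rough numeric sketch of this limit (my own rewrite in Pari/GP, to stay consistent with the code earlier in the thread; the name `tet10`, the iteration count, and the working precision are my choices):

default(realprecision, 100)
f(x) = log(x + 1)/log(10)          \\ f(x) = log_10(x + 1)
g(x) = 10^x - 1                    \\ inverse of f
tet10(x, n = 40) =
{
  my(v = 10.0^10);
  for(k = 1, n, v = f(v));         \\ v = f^n(10^10)
  v *= log(10)^x;                  \\ multiply by (ln 10)^x
  for(k = 1, n, v = g(v));         \\ apply g^n
  return(v);
}
tet10(0.5)    \\ approximates ^(2.5) 10; should come out near 4.1e294, as quoted above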
Disclaimer: My proofs of the following facts may be faulty, but numerical computations suggest otherwise.
The factorial can be uniquely extended by enforcing the condition that it grows at a certain asymptotic rate, namely that $(n+x)!\sim n^x\cdot n!$ as $n\to\infty$, a condition obtained by induction from the fact that $(n+1)!\sim n\cdot n!$.
Similarly, tetration can be uniquely extended by enforcing a condition on its asymptotic rate of growth when it converges monotonically. With the help of some calculus, one can show that for $1<a<e^{1/e}$ we have
$$^{n+1}a-{}^\infty a\sim({}^na-{}^\infty a)\ln({}^\infty a)$$
and hence the natural condition is
$${}^{n+x}a-{}^\infty a\sim({}^na-{}^\infty a)\ln({}^\infty a)^x$$
which, when rearranged in terms of ${}^xa$, gives us the definition:
$$^xa=\lim_{n\to\infty}\log_a^{\circ n}({}^\infty a+({}^na-{}^\infty a)\ln({}^\infty a)^x)\tag{$\star$}$$
where $\log^{\circ n}$ is the $n$-fold logarithm e.g. $\log^{\circ2}(x)=\log(\log(x))$.
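For intuition, here is a small Pari/GP sketch of $(\star)$ in the convergent range (the base $a=\sqrt2$, for which ${}^\infty a=2$, as well as the truncation $n$ and the precision, are my own choices):

default(realprecision, 100)
a = sqrt(2)
L = 1.0; for(k = 1, 2000, L = a^L)      \\ L = {}^infty a, the limit of the tower (here L = 2)
tet(x, n = 60) =
{
  my(t = 1.0, y);
  for(k = 1, n, t = a^t);               \\ t = {}^n a
  y = L + (t - L) * log(L)^x;           \\ the inner expression of (star)
  for(k = 1, n, y = log(y)/log(a));     \\ apply log_a  n times
  return(y);
}
tet(0)      \\ = 1           (checks  ^0 a = 1)
tet(1)      \\ = 1.41421...  (checks  ^1 a = a)
tet(1/2)    \\ a value strictly between 1 and sqrt(2)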
A chain of posts discusses this definition, including proofs that this does in fact give a tetration, as it satisfies the basic properties:
$^0a=1$
$^{x+1}a=a^{({}^xa)}$
as well as having the additional advantages that it is unique, satisfies nice asymptotics, and is analytic. Due to the last property, it is possible to analytically continue this definition outside of the initial interval, though the computation is difficult, mainly for two reasons:
The definition itself requires large amounts of precision, since $^na-{}^\infty a$ involves severe cancellation and loss of significant figures. This can be somewhat mitigated, however, with the use of acceleration techniques such as Aitken's delta-squared method (see the sketch after this list).
Analytic continuation is a fairly difficult task to perform numerically, and I've yet to find an alternative representation of this tetration that converges further out. This can be somewhat circumvented using modern techniques, such as conformal mappings (e.g. expanding ${}^x(a+t/(1-t))$ about $t=0$), to allow for convergence even on the entire interval $[1,\infty)$.
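To make the acceleration concrete, here is a self-contained Pari/GP sketch (the base $a=\sqrt2$ and the sequence length are my own choices) applying Aitken's delta-squared to the tower sequence ${}^na$, whose slow linear convergence to ${}^\infty a=2$ is exactly the source of the cancellation mentioned in the first item:

default(realprecision, 60)
a = sqrt(2)
s = vector(15); s[1] = a; for(n = 2, 15, s[n] = a^s[n-1])   \\ s[n] = {}^n a, tending to {}^infty a = 2
aitken(s) = vector(#s - 2, k, s[k+2] - (s[k+2] - s[k+1])^2 / (s[k+2] - 2*s[k+1] + s[k]))
acc = aitken(s)   \\ acc[k] is markedly closer to 2 than s[k+2] is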
Some attempts at defining a tower of $s$ copies of $e$, where $s$ is a real number rather than just a whole number, involve adding a sort of placeholder variable to turn the problem into finding continuous iterates of $\exp(x)$.
We want to find a unique function $\exp_s(x)$ of a single variable $x$, where $s$ is a parameter for the number of iterations, with
$$\exp_1(x) = \exp(x)$$
and, for $s$ and $t$ real,
$$\exp_s(\exp_t(x)) = \exp_{s + t}(x).$$
Then $\exp_s(1)$ will answer your question.
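As an illustration (this is just a consequence of the addition rule above, not an additional assumption): once $\exp_s$ is known for $0\le s<1$, every real iteration count reduces to a fractional one, e.g.
$$\exp_{4/3}(1)=\exp_1\bigl(\exp_{1/3}(1)\bigr)=\exp\bigl(\exp_{1/3}(1)\bigr),$$
so a "tower of $4/3$ copies of $e$" comes down to evaluating a single fractional iterate $\exp_{1/3}$ at $1$.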
It's hard to find a continuous iterate of a function that doesn't have a fixed point, like $\exp(x)$ on the real line. H. Kneser's method involves going into the complex plane to find a fixed point. See the links other people here have provided.
The trouble is that there is more than one fixed point in the complex plane, which leads to singularities and non-uniqueness. There are ways of dealing with this but they don’t convince everybody.
George Szekeres (1911 - 2005) tackled the problem solely in the real domain. His method is explained, and some serious gaps in his argument patched up, in the article:
“The Fourth Operation” http://ariwatch.com/VS/Algorithms/TheFourthOperation.htm