Best approximation of log 3?
For $0 \lt x \leq 2$, I can approximate $\log{x}$ using its Taylor series about $1$. How can I do so when $x \gt 2$?
My attempt:
- approximate $\log{c}$, where $1\lt c\lt 2$.
- find $n \in \mathbb{N}$ such that $c^{n} \leq x \lt c^{n+1}$.
- then, $n\log{c} \leq \log{x}\lt (n+1)\log{c}$ !!!
However, for good accuracy at $x = 3$, $c$ should be close to $1$ (so that $\log{c}$ is small), but then $n$ is large and the calculation becomes impractical by hand.
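For illustration, here is a minimal sketch of this bracketing idea (the code and the helper name `bracket_log` are mine, not part of the question); it shows how wide the bracket $[n\log c,\,(n+1)\log c)$ is even for a fairly large $c$:

    import math  # only used to check the result

    def bracket_log(x, c, log_c):
        """Return (n*log_c, (n+1)*log_c) where c**n <= x < c**(n+1)."""
        n, power = 0, 1.0
        while power * c <= x:
            power *= c
            n += 1
        return n * log_c, (n + 1) * log_c

    # approximate log(1.5) with a few Taylor terms about 1, then bracket log(3)
    c = 1.5
    log_c = sum((-1)**(k + 1) * (c - 1)**k / k for k in range(1, 10))
    lo, hi = bracket_log(3.0, c, log_c)
    print(lo, hi, math.log(3))   # log 3 is only pinned down to within log(1.5)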
One way to use the series you already have is to observe that $\log x = -\log \frac{1}{x}$.
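Concretely (a worked line added here, not part of the original answer), applying the series about $1$ to $1/x$ with $x = 3$ gives
$$\ln 3 = -\ln\tfrac{1}{3} = -\left[\left(\tfrac{1}{3}-1\right)-\tfrac{1}{2}\left(\tfrac{1}{3}-1\right)^2+\tfrac{1}{3}\left(\tfrac{1}{3}-1\right)^3-\cdots\right] = \sum_{n=1}^{\infty}\frac{(2/3)^n}{n},$$
which converges because $1/3$ lies in the interval $(0,2]$ where the series is valid.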
Given any positive number $x$ we can define
$u=(x-1)/(x+1)$
so that $x=(1+u)/(1-u)$. Then
$\log x = \log (1+u)-\log(1-u)$
where the Taylor series converges for both terms on the right side because $-1<u<+1$.
One advantage of this method is that, for a given value of $x$, the series converges faster than using $\log x = -\log(1/x)$, because the argument of the series is smaller in magnitude. For example, with $\log 3$ we have $u = 1/2$, whereas for $\log(1/3)$ we would have to use the series to compute $\log(1 - 2/3)$, with argument $-2/3$.
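A short numerical sketch of this (my own code, not from the answer; it uses the standard simplification that adding the two series cancels the even powers, leaving $\log x = 2\sum_{k\ge 0} \frac{u^{2k+1}}{2k+1}$):

    import math

    def ln_via_u(x, terms=20):
        """log(x) = log(1+u) - log(1-u) with u = (x-1)/(x+1),
        i.e. 2*(u + u**3/3 + u**5/5 + ...)."""
        u = (x - 1) / (x + 1)
        return 2 * sum(u**(2*k + 1) / (2*k + 1) for k in range(terms))

    print(ln_via_u(3), math.log(3))   # u = 1/2, so successive terms shrink like 1/4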
If you are expanding a Taylor series to cover $(0,2)$, you are expanding around $1$ and hand computations will give better results for inputs near $1$. ("Better" here means requires fewer terms of the series to obtain a prescribed error.) Taking logs to the base $b$, \begin{align*} \log_b 3 &= \log_b \left( b \cdot \frac{3}{b} \right) \\ &= 1 + \log_b \frac{3}{b} \text{.} \end{align*} This last can be awkward by hand, especially for $b = \mathrm{e}$, so use Mark Bennet's answer to continue \begin{align*} \log_b 3 &= 1 - \log_b \frac{b}{3} \end{align*} (This idea, dividing by integers is easy and dividing by non-integers is hard, is why we have the process of rationalizing denominators.)
Let's see how this goes for $b = \mathrm{e}$. $\ln 3 = 1 - \ln(\mathrm{e}/3)$. If we have $\mathrm{e} = 2.718281828 \dots$, which is relatively easy to remember, dividing by $3$ is fast. Just keeping a few digits, $\mathrm{e}/3 = 0.906094\dots$, which is near $1$. (In subsequent computations, I have only kept this $6$-digit precision; no guard digits, no arbitrary precision anything.) The Taylor series of $\ln$ expanded around $1$ is $$(x-1) - \frac{1}{2}(x-1)^2 + \frac{1}{3}(x-1)^3 - \frac{1}{4}(x-1)^4 + \cdots \text{.} $$ Putting $x = 0.906094$ in and subtracting the partial sums from $1$, we get the sequence $$ 1.09391, 1.09832, 1.09859, 1.09861 $$ The target value is $\ln 3 = 1.098612289 \dots$, with which we agree to the precision displayed after only four terms in the series and only computing $x$ to six places.
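Those partial sums are easy to reproduce; here is a quick check in code (mine, not the answer's, and it keeps full float precision rather than only six digits):

    import math

    x = math.e / 3                                   # about 0.906094, close to 1
    partial = 1.0                                    # ln 3 = 1 - ln(e/3)
    for k in range(1, 5):
        partial -= (-1)**(k + 1) * (x - 1)**k / k    # subtract the k-th series term of ln(e/3)
        print(partial)                               # rounds to 1.09391, 1.09832, 1.09859, 1.09861
    print(math.log(3))                               # target: 1.0986122886681098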
More generally, say you know $c \approx b^p$. Then $c/b^p$ is close to $1$ and therefore the power series step will rapidly approach the value you need. Then we use \begin{align*} \log_b c &= \log_b \left( b^p \cdot \frac{c}{b^p} \right) \\ &= p + \log_b \frac{c}{b^p} \text{ or } \\ &= p - \log_b \frac{b^p}{c} \text{.} \end{align*} Of the last two lines we pick whichever division is easier.
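A minimal sketch of this recipe with an integer exponent (my own code; the helper names are made up): find $p$ with $b^p \le c < b^{p+1}$ by repeated multiplication, then hand $c/b^p$ to the series.

    import math

    def integer_exponent(c, b):
        """Largest integer p with b**p <= c, together with b**p (assumes c >= 1, b > 1)."""
        p, power = 0, 1.0
        while power * b <= c:
            power *= b
            p += 1
        return p, power

    def ln_shifted(c, terms=25):
        """ln(c) = p + ln(c / e**p), the second term from the Taylor series about 1."""
        p, power = integer_exponent(c, math.e)
        y = c / power - 1                    # small, so few terms are needed
        return p + sum((-1)**(k + 1) * y**k / k for k in range(1, terms + 1))

    print(ln_shifted(3.0), math.log(3))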
One way to find a $p$ that is better than just an integer is to have a table of power-of-$2$ powers of $b$, $$ \dots, b^{1/8}, b^{1/4}, b^{1/2}, b^1, b^2, b^4, b^8, \dots$$ (You can't have an infinite concrete list. Either use a finite list, or generate these powers on the fly.) If $c$ is on the list, great, done. Otherwise, $c$ will be between two members of this list, $b^{2^a} < c < b^{2^{a+1}}$ and closer to one or the other. (For maximum goodness, we don't want linearly closer, we want logarithmically closer. But if you could calculate this logarithm, you wouldn't need this method, so use linearly closer.) Divide $c$ by that list member and call the quotient $c_1$. Repeat for successive quotients. We find $$c = b^{2^{a_1}} \cdot b^{2^{a_2}} \cdot \cdots \cdot b^{2^{a_k}} \cdot c_k$$ for some decreasing sequence of $a_i$s and some eventual quotient $c_k$. The $a_i$ are the binary decomposition of the power of $b$ to use above. That is, the $a_i$ are the positions of the "$1$" bits in the binary expansion of $p$ in the above. (If the $a_i$ are $3, 1, -2$, then $p = 1010.01_2 = 10.25_{10}$.) And $c_k$, which we have arranged to be as close to $1$ as your table of powers of $b$ and the precision of our calculations will permit, is $c/b^p$, which you drop into your Taylor series.
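Here is a rough sketch of that table-driven idea (my own code, under slightly different choices than the answer describes: it greedily divides by the largest table entry not exceeding the remaining quotient, rather than by the nearer of the two brackets, and the table bounds are arbitrary):

    import math

    def ln_power_table(c, max_exp=4, min_exp=-6, terms=15):
        """Peel factors e**(2**a) off c, accumulating the exponent p,
        then finish with the Taylor series for ln about 1 on the near-1 remainder."""
        p = 0.0
        for a in range(max_exp, min_exp - 1, -1):   # largest table powers first
            entry = math.e ** (2.0 ** a)
            while c >= entry:                       # divide while the quotient stays >= 1
                c /= entry
                p += 2.0 ** a
        y = c - 1                                   # remaining quotient is close to 1
        return p + sum((-1)**(k + 1) * y**k / k for k in range(1, terms + 1))

    print(ln_power_table(3.0), math.log(3))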
Expanding on the answer by @Mark_Bennet, we have the following Maclaurin series: $$-\ln(1-x)=\sum_{n=1}^\infty\frac{x^n}{n},\quad |x|<1$$
And since $\ln(3)=-\ln(1-\frac{2}{3})$, we get
$$\ln(3)=\sum_{n=1}^\infty\frac{(2/3)^n}{n}$$
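As a quick sanity check (a few lines of my own, anticipating the term count quoted below), we can sum the series until the partial sum rounds to $\ln(3)\approx 1.099$ at $3$ decimal places:

    import math

    target = math.log(3)                      # 1.0986122886681098
    s = 0.0
    for n in range(1, 31):
        s += (2/3)**n / n
        if round(s, 3) == round(target, 3):   # first partial sum that rounds to 1.099
            print(n, s)                       # prints n = 17
            break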
Accelerating Convergence
But, as @J... points out, this is a pretty slowly converging series. In fact, it takes $17$ terms for only $3$ decimal places. So we'd want to accelerate the convergence of the series. One thing we can do is group together pairs of terms as follows:
$$\begin{aligned}\ln(3)&=\left(\frac{2}{3}+\frac{2^2}{2\cdot3^2}\right)+\left(\frac{2^3}{3\cdot3^3}+\frac{2^4}{4\cdot3^4}\right)+\ldots\\ &=\frac{2}{3}\left(\frac{5-1}{3(2-1)}\right)+\frac{2^3}{3^3}\left(\frac{5\cdot2-1}{3\cdot2(2\cdot2-1)}\right)+\frac{2^5}{3^5}\left(\frac{5\cdot3-1}{3\cdot3(2\cdot3-1)}\right)+\ldots\\ &=\sum_{n=1}^\infty\left(\frac{2}{3}\right)^{2n-1}\frac{5n-1}{3n(2n-1)}\\ &=\frac{1}{2}\sum_{n=1}^\infty\left(\frac{2}{3}\right)^{2n}\frac{5n-1}{n(2n-1)} \end{aligned}$$
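A quick numerical check of this regrouped series (my own lines, not part of the answer):

    import math

    target = math.log(3)
    S = 0.0
    for n in range(1, 10):
        S += 0.5 * (2/3)**(2*n) * (5*n - 1) / (n * (2*n - 1))
        print(n, S, round(S, 3) == round(target, 3))   # True first at n = 9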
This series gives us $3$ decimal places in only $9$ terms, so it's a bit quicker than before. But we can go further: let's label the partial sum up to $N$ as $S_N$ and examine the ratios of successive differences, $\frac{S_N-S_{N-1}}{S_{N-1}-S_{N-2}}$ (with $S_0=0$ for the first entry).
$$\begin{array}{|c|c|c|} \hline N & S_N & \frac{S_N-S_{N-1}}{S_{N-1}-S_{N-2}}\\ \hline 1 & 0.888\,889& - \\ \hline 2 & 1.037\,037& 0.167\\ \hline 3 & 1.078\,006& 0.277\\ \hline 4 & 1.091\,245& 0.323\\ \hline 5 & 1.095\,869& 0.349\\ \hline 6 & 1.097\,562& 0.366\\ \hline \end{array}$$
Looking at the column on the right, the ratio of differences is approaching a value of about $0.4$. In fact, this trend continues for at least the next $40$ terms (but we wouldn't know that if we were really doing this by hand, so shhh!). The ratio of differences in the first series I gave is also fairly constant, at around $0.6$.
So, let's say $\frac{S_N-S_{N-1}}{S_{N-1}-S_{N-2}}\approx0.4$. With a bit of algebra, we can see: $$\lim_{N\to\infty}S_N\approx S_N+\frac{0.4}{1-0.4}(S_N-S_{N-1})$$
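To spell out that bit of algebra (under the assumption that the differences keep shrinking by roughly the same factor, i.e. $S_{N+k}-S_{N+k-1}\approx 0.4^{\,k}\,(S_N-S_{N-1})$ for $k\ge 1$):
$$\lim_{M\to\infty}S_M-S_N=\sum_{k=1}^{\infty}\left(S_{N+k}-S_{N+k-1}\right)\approx(S_N-S_{N-1})\sum_{k=1}^{\infty}0.4^{\,k}=\frac{0.4}{1-0.4}\,(S_N-S_{N-1}).$$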
Conclusion
This extrapolation gives us about $1$ more decimal place. So we have the approximation: $$\ln(3)\approx\left[\frac{1}{2}\sum_{n=1}^N\left(\frac{2}{3}\right)^{2n}\frac{5n-1}{n(2n-1)}\right]+\frac{1}{3}\left(\frac{2}{3}\right)^{2N}\frac{5N-1}{N(2N-1)}$$
When we compute this for each $N$ and simplify, we get the sequence of approximations:
$$\begin{array}{|c|c|l|} \hline N & \text{Approximation}, X & \text{Error}= \ln(3)-X\\ \hline 1 & ^{40}/_{27}& -0.383 \\ \hline 2 & ^{92}/_{81} & -0.037\\ \hline 3 & ^{7\,252}/_{6\,561} & -0.006\,707\\ \hline 4 & ^{757\,844}/_{688\,905} & -0.001\,458\\ \hline 5 & ^{20\,440\,988}/_{18\,600\,435} & -0.000\,340\\ \hline \end{array}$$
We can see we're down to needing just $5$ terms to get $3$ decimal places, which is quite an improvement from $17$.
Some of the fractions are quite unwieldy before simplification. E.g. for $N=2$, the numerator and denominator each have $8$ digits, and by $N=5$ that grows to $26$ digits. It would be pretty trivial to find the $\gcd$ in each case and simplify, but I can't imagine how boring that would be by hand.
To sum up, I think this is a decent approximation and well within the remit of someone doing it by hand (as long as they're patient enough!). If anyone sees a mistake I've made or has better techniques for convergence acceleration, please feel free to leave a comment.
Here's the python3 code I used to compute the terms.
    def a(n):  # numerator in series
        if n == 1:
            return 16
        else:
            return 2**(2*n)*(5*n-1)

    def b(n):  # denominator in series
        if n == 1:
            return 9
        else:
            return 3**(2*n)*n*(2*n-1)

    def X(N):  # partial sum and extrapolation (for N>1)
        A = a(1)
        B = b(1)
        for n in range(2, N+1):  # partial sum up to N (ignores 1/2 term)
            A = b(n)*A + a(n)*B  # running numerator of the sum of a(i)/b(i)
            B = B*b(n)           # running denominator
        C = A*3*b(N) + 2*B*a(N)  # extrapolation and incorporation of 1/2 and 1/3 terms
        D = 2*B*3*b(N)
        return (C, D)            # unreduced fraction C/D approximating ln(3)
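As a usage note (not part of the original post): `X(N)` returns an unreduced numerator/denominator pair, so `fractions.Fraction`, which reduces automatically, is a convenient way to reproduce the table of approximations above:

    import math
    from fractions import Fraction

    for N in range(1, 6):
        C, D = X(N)
        approx = Fraction(C, D)                        # reduces, e.g. 40/27 for N = 1
        print(N, approx, math.log(3) - float(approx))  # matches the error column above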