An alternative way to calculate $\log(x)$
Solution 1:
Contrary to popular belief, you can do better than power series. The trick is in the use of continued fractions and the related Padé approximants.
One continued fraction for the logarithm (due to Khovanskiĭ) goes a bit like this:
$$\log(1+z)=\cfrac{2z}{z+2-\cfrac{z^2}{3(z+2)-\cfrac{4z^2}{5(z+2)-\cfrac{9z^2}{7(z+2)-\cdots}}}}$$
The beauty of this is that it has a wider domain of applicability: it is valid as long as $|\arg(1+z)| < \pi$.
One can use the Lentz-Thompson-Barnett method on this CF, of course, but one could also choose to exploit argument reduction here, by suitably exploiting the identity $\log(ab)=\log\,a+\log\,b$. If you take that route, you can be justified in just using a truncation of the continued fraction. That truncation is what's called a Padé approximant.
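For concreteness, here is a minimal Python sketch that evaluates such a truncation bottom-up (the function name and the fixed depth are my own choices; for $z$ far from $0$ you would combine it with the argument reduction just described):

```python
import math

def log1p_cf(z, depth=30):
    """Approximate log(1+z) by truncating Khovanskii's continued
    fraction at a fixed depth and evaluating it bottom-up."""
    # The k-th level has partial denominator (2k-1)*(z+2), and the
    # partial numerator (k*z)**2 links it to level k+1.
    d = (2 * depth + 1) * (z + 2)          # tail of the truncation
    for k in range(depth, 0, -1):
        d = (2 * k - 1) * (z + 2) - (k * z) ** 2 / d
    return 2 * z / d

print(log1p_cf(0.5), math.log1p(0.5))      # should agree closely
```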
Solution 2:
In the interest of demonstrating that there is more than one way to skin a cat, I display here Borchardt's algorithm for computing $\log\,x$, which is a modification of the more conventional arithmetic-geometric mean iteration:
$a_0=\dfrac{1+x}{2};\quad b_0=\sqrt{x};$
$\text{repeat}$
$a_{k+1}=\dfrac{a_k+b_k}{2}$
$b_{k+1}=\sqrt{a_{k+1}b_k}$
$k=k+1$
$\text{until }|a_k-b_k| < \varepsilon$
$\log\,x\approx 2\dfrac{x-1}{a_k+b_k}$
This of course presumes that you have a square root function available, but that remains doable even with your restrictions.
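For reference, here is a minimal Python sketch of the iteration above (the tolerance and function name are my own choices):

```python
import math

def borchardt_log(x, eps=1e-15):
    """Approximate log(x), x > 0, by Borchardt's algorithm;
    uses only +, /, and sqrt."""
    a = (1.0 + x) / 2.0
    b = math.sqrt(x)
    while abs(a - b) > eps:
        a = (a + b) / 2.0          # a_{k+1} = (a_k + b_k)/2
        b = math.sqrt(a * b)       # b_{k+1} = sqrt(a_{k+1} * b_k)
    return 2.0 * (x - 1.0) / (a + b)

print(borchardt_log(2.0), math.log(2.0))
```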
If you find that the convergence rate is too slow for your taste, Carlson shows in the article I linked to that Richardson extrapolation can be used to speed up the convergence. From my own experiments, the convergence rate of the unmodified Borchardt algorithm is already pretty decent; so unless you really need the speed-up (and remembering that Richardson extrapolation requires an auxiliary array for its implementation, which adds to the storage cost), vanilla Borchardt is fine as it is.
Solution 3:
Curiously, nobody has proposed the CORDIC algorithm, which was very useful back when the 'price' of multiplication and division was high and/or the CPU was limited.
The trick is to use a precomputed table of logarithms (say $\ln(10), \ln(2), \ln(1.1), \ln(1.01), \dots, \ln(1.000001)$) and compute any logarithm using only addition/subtraction and shift operations; a sketch of the idea follows below.
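Here is a rough Python sketch, using the binary factors $1+2^{-k}$ rather than the decimal table above so that the multiplications really are shift-and-add; the table size and function name are my own choices, and floating point stands in for fixed point:

```python
import math

# Precomputed table: ln(1 + 2**-k). In a real fixed-point CORDIC these
# would be integer constants baked into ROM.
LN_TABLE = [math.log(1.0 + 2.0 ** -k) for k in range(1, 33)]
LN2 = math.log(2.0)

def cordic_log(x):
    """Approximate ln(x), x > 0, by shift-and-add style
    multiplicative normalization."""
    if x <= 0.0:
        raise ValueError("x must be positive")
    # Range reduction: pull out powers of two so that 1 <= x < 2.
    e = 0
    while x >= 2.0:
        x *= 0.5            # x >> 1 in fixed point
        e += 1
    while x < 1.0:
        x *= 2.0            # x << 1 in fixed point
        e -= 1
    # Drive z up toward x by factors (1 + 2**-k); each multiply is
    # z + (z >> k) in fixed point, and we accumulate the table entries.
    z, result = 1.0, 0.0
    for k in range(1, 33):
        f = 1.0 + 2.0 ** -k
        while z * f <= x:
            z *= f
            result += LN_TABLE[k - 1]
    return e * LN2 + result

print(cordic_log(53.0), math.log(53.0))
```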
Solution 4:
If a number $N$ (base 10) has $n$ digits, then
$$n-1 \leq \log_{10}(N) < n$$
The logarithm can then be approximated by
$$\log_{10}(N) \approx n-1 + \frac{N}{10^{n} - 10^{n-1}}$$
For example, $\log_{10}(53) = 1.72427587\ldots$
Here $n=2$ and $N=53$, so
$$\log_{10}(53) \approx 2-1+\frac{53}{100-10} = 1.5888\ldots$$
The logarithm maps the numbers from 10 to 100 into the range 1 to 2, so the log of numbers near 50 is about 1.5. But this is only a linear approximation: good for mental calculation and toy projects, not for serious work.
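For the curious, a tiny Python version of the estimate (the function name is mine; it assumes $N$ is a positive integer):

```python
import math

def log10_digit_estimate(N):
    """Crude log10 estimate from the digit count, as in the text:
    (n-1) + N / (10**n - 10**(n-1)) for an n-digit N."""
    n = len(str(N))                # number of decimal digits
    return (n - 1) + N / (10**n - 10**(n - 1))

print(log10_digit_estimate(53), math.log10(53))   # 1.5888..., 1.7242...
```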
Solution 5:
The Wikipedia article Generalized continued fraction gives a Khovanskiĭ-based expansion that differs only in substituting $x/y$ for $z$, and in showing an intermediate step:
$$ \log \left( 1+\frac{x}{y} \right) = \cfrac{x} {y+\cfrac{1x} {2+\cfrac{1x} {3y+\cfrac{2x} {2+\cfrac{2x} {5y+\cfrac{3x} {2+\ddots}}}}}} = \cfrac{2x} {2y+x-\cfrac{(1x)^2} {3(2y+x)-\cfrac{(2x)^2} {5(2y+x)-\cfrac{(3x)^2} {7(2y+x)-\ddots}}}} $$
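If you want to experiment with this form, here is a small Python sketch (names and truncation depth are my own choices) that evaluates the right-hand fraction exactly over the rationals before converting to float:

```python
from fractions import Fraction
import math

def log_ratio_cf(x, y, depth=12):
    """Approximate log(1 + x/y) from the right-hand continued
    fraction, evaluated bottom-up in exact rational arithmetic."""
    x, y = Fraction(x), Fraction(y)
    b = 2 * y + x                      # the repeated block 2y + x
    d = (2 * depth + 1) * b            # tail of the truncation
    for k in range(depth, 0, -1):
        d = (2 * k - 1) * b - (k * x) ** 2 / d
    return float(2 * x / d)

print(log_ratio_cf(1, 1), math.log(2))   # log(1 + 1/1) = log 2
```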