Calculating Hyperbolic Sin faster than using a standard power series
Note that $$\sinh x=\frac{e^x-e^{-x}}2$$ so all you need is a fast way to calculate the exponential $e^x$. You could use the regular Taylor series, but that's slow. Instead, use the definition $$e^x=\lim_{n\to\infty}\left(1+\frac xn\right)^n$$ For calculation purposes, take $n$ to be a power of $2$, $n=2^k$: first calculate $y=1+\frac x{2^k}$, then repeat the $y=y\cdot y$ operation $k$ times. I got the idea for this fast exponential from this article.
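For concreteness, here is a minimal Python sketch of this squaring scheme (the choice $k=27$ is my own, tuned for double precision near $|x|\approx 1$):

```python
import math

def exp_limit(x: float, k: int = 27) -> float:
    """Approximate e^x as (1 + x/2^k)^(2^k) via k repeated squarings."""
    y = 1.0 + x / 2.0**k
    for _ in range(k):        # y <- y*y, repeated k times
        y = y * y
    return y

def sinh_limit(x: float, k: int = 27) -> float:
    """sinh(x) = (e^x - e^-x)/2 using the fast exponential above."""
    return (exp_limit(x, k) - exp_limit(-x, k)) / 2

print(sinh_limit(1.0), math.sinh(1.0))   # agree to about 8 digits
```

One caveat: in floating point, the rounding error made when forming $1+x/2^k$ is amplified by the $k$ squarings, so in double precision this method tops out at roughly eight correct digits no matter how $k$ is chosen.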
Let me consider the problem from a computing point of view, assuming that you do not know how to compute $e^x$.
The infinite series is $$\sinh(x)=\sum_{n=0}^\infty \frac{x^{2n+1}}{(2n+1)!}$$ If you compute each term independently of the others, it is certainly expensive, since you have to compute each power of $x$ as well as each factorial.
But suppose that you write instead $$\sinh(x)=\sum_{n=0}^\infty T_n \qquad \text{where} \qquad T_n=\frac{x^{2n+1}}{(2n+1)!}\qquad \text{and} \qquad T_0=x$$ then $$T_{n+1}= \frac {t\,\, T_n}{m(m+1)}\qquad \text{where} \qquad t=x^2\qquad \text{and} \qquad m=2n+2$$ This is much less expensive in terms of the number of basic operations required.
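A direct translation into Python (a sketch; the stopping rule and `tol` are my additions, assuming double precision):

```python
def sinh_series(x: float, tol: float = 1e-17) -> float:
    """Sum sinh(x) = T_0 + T_1 + ... using T_{n+1} = T_n * t / (m*(m+1)),
    with t = x^2 and m = 2n+2, so each new term costs only one
    multiplication and one division."""
    t = x * x
    term = x                               # T_0 = x
    total = x
    m = 2
    while abs(term) > tol * abs(total):    # stop once terms stop mattering
        term = term * t / (m * (m + 1))    # T_{n+1} from T_n
        total += term
        m += 2
    return total
```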
You could use the same trick for most functions expressed as infinite series.
A much better option than Andrei's answer is to use the identity $\exp(x) = \exp(x·2^{-k})^{2^k}$ for a judicious choice of $k$, and use the Taylor series to approximate $\exp(x·2^{-k})$ to sufficient precision.
Suppose we want to compute $\exp(x)$ using the above identity to $p$-bit precision, meaning a relative error of $2^{-p}$. We shall perform each arithmetic operation with $q$-bit precision, where $q=p+k+m$, and $k,m$ are positive integers that we will determine later. To compute $\exp(x·2^{-k})^{2^k}$ with relative error $2^{-p}$ we need to compute $\exp(x·2^{-k})$ with relative error at most about $r_0 := 2^{-p-k-1}$, because on each squaring the error $r_n$ changes to about $r_{n+1} ≤ 2r_n+2^{-q}$, giving $r_k+2^{-q} ≤ (r_0+2^{-q})·2^k$ and hence $r_k ≤ 2^{-p}$.
Therefore we need to use enough terms of the Taylor expansion of $\exp(x·2^{-k})$ so that our error is at most $|\exp(x·2^{-k})|·2^{-p-k-1}$. If $k = \max(0,\log_2 |x|) + c$ for some positive integer $c$, then $|x·2^{-k}|<2^{-c}$ and so $|\exp(x·2^{-k})| > \exp(-1/2) > 1/2$, and thus it suffices to have our error less than $2^{-p-k-1}/2$. We split this error margin into two halves, one for the Taylor remainder and one for the error in our arithmetic operations.

Letting $z := x·2^{-k}$, we have $\sum_{i=n}^∞ |z^i/i!| ≤ |z|^n/n! · \sum_{i=0}^∞ (|z|/n)^i ≤ 2·|z|^n/n! ≤ 2^{1-c·n}$ for any $n ≥ 1$ (the geometric sum is at most $2$ because $|z|/n ≤ 1/2$), so we only need to compute $\sum_{i=0}^{n-1} z^i/i!$ where $n ≥ 1$ and $2^{1-c·n} ≤ 2^{-p-k-1}/4$, both of which hold if $c·n ≥ p+k+4$. Each term requires one addition, one multiplication and one division, via the trivial identity $z^{n+1}/(n+1)! = z^n/n! · z/(n+1)$, and so if we start with $z$ at $q$-bit precision then the cumulative relative error is at most about $2n·2^{-q}$, since each "$·\,z/(n+1)$" step introduces an extra error factor of about $(1+2^{-q})^2$. Since we want $2n·2^{-q} < 2^{-p-k-1}/4$, equivalently $n < 2^{m-4}$, it is enough to take $m = \lfloor\log_2 n\rfloor + 5$.
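Here is a rough fixed-point Python sketch of the whole scheme, storing reals as integers scaled by $2^q$; the parameter choices follow the analysis above but are illustrative, not tuned (and the float-to-fixed conversion assumes $q$ stays small enough not to overflow a double):

```python
import math

def exp_bits(x: float, p: int) -> float:
    """Sketch: exp(x) to roughly p-bit relative precision via argument
    reduction, a truncated Taylor sum, and k repeated squarings, with
    all arithmetic in q-bit fixed point (integers scaled by 2^q)."""
    c = max(1, math.isqrt(p))                  # c ~ sqrt(p), as in the text
    k = (int(math.log2(abs(x))) if abs(x) >= 1 else 0) + c
    n = -(-(p + k + 4) // c)                   # smallest n with c*n >= p+k+4
    m = n.bit_length() + 5                     # guarantees n < 2^(m-4)
    q = p + k + m                              # working precision in bits
    one = 1 << q
    z = int(x * 2.0 ** (q - k))                # z = x*2^-k, scaled by 2^q
    total, term = one, one                     # running sum and z^i/i!, i = 0
    for i in range(1, n):
        term = (term * z >> q) // i            # term <- term * z / i
        total += term
    for _ in range(k):                         # exp(z)^(2^k) = exp(x)
        total = total * total >> q
    return total / one

print(exp_bits(1.0, 60), math.exp(1.0))
```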
Even if we use schoolbook multiplication, where multiplying at $q$-bit precision takes $O(q^2)$ time, the above method yields a relatively efficient algorithm for an appropriate choice of $k$. The Taylor phase takes $O(n·q^2)$ time, and the exponentiation phase takes $O(k·q^2)$ time. If we choose $c = \lceil\sqrt{p}\,\rceil$ then $n = \lceil\sqrt{p}\,\rceil+k$ suffices, which gives $k,n ∈ O( \sqrt{p} + \log |x| )$ and $q ∈ O( p + \log |x| )$. Thus for $x$ within a bounded domain, the whole algorithm takes $O(p^{2.5})$ time.
The above is based on purely elementary techniques. A more careful analysis using the same techniques yields an even better algorithm (see Efficient Multiple-Precision Evaluation of Elementary Functions).
There are ways to do much better still, essentially by using the AGM (arithmetic-geometric mean) iteration to compute $\ln$, and then Newton-Raphson inversion to obtain $\exp$ (see The Arithmetic-Geometric Mean and Fast Computation of Elementary Functions).
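For flavor, here is a compact and deliberately naive Python sketch of that route, using `mpmath` only for the arbitrary-precision arithmetic: $\ln$ via the Salamin–Brent asymptotic formula $\ln s \approx \pi/(2\,\mathrm{AGM}(1, 4/s))$ for large $s$, then $\exp$ by the Newton iteration $y \leftarrow y(1 + x - \ln y)$. The scaling parameter $j$ and the iteration counts are my own rough choices, not from the paper:

```python
import math
from mpmath import mp, mpf, sqrt, pi

def agm(a, b):
    """Arithmetic-geometric mean; the iteration converges quadratically."""
    eps = mpf(2) ** (8 - mp.prec)
    while abs(a - b) > eps:
        a, b = (a + b) / 2, sqrt(a * b)
    return (a + b) / 2

def ln_agm(x):
    """ln(x) via ln(s) ~ pi/(2*AGM(1, 4/s)), valid for large s = x*2^j."""
    j = mp.prec // 2 + 10                                       # make s large
    lnxj = pi / (2 * agm(mpf(1), mpf(4) / (x * mpf(2) ** j)))   # ~ ln(x) + j*ln 2
    ln2 = pi / (2 * agm(mpf(1), mpf(2) ** (2 - j))) / j         # ~ ln 2
    return lnxj - j * ln2

def exp_agm(x):
    """exp(x) by Newton on ln(y) = x: y <- y*(1 + x - ln(y))."""
    y = mpf(math.exp(float(x)))              # 53-bit starting guess, moderate x
    for _ in range(mp.prec.bit_length()):    # each step roughly doubles accuracy
        y = y * (1 + x - ln_agm(y))
    return y

mp.prec = 200
x = mpf(3) / 7
print((exp_agm(x) - 1 / exp_agm(x)) / 2)     # sinh(x) to ~200 bits
```

A real implementation would raise the working precision gradually across the Newton steps and obtain $\pi$ itself from the AGM; `mpmath`'s built-in `exp` already does all of this far better, so the sketch is only meant to show the shape of the method.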