How are the Taylor Series derived?
I know that Taylor series are infinite sums that represent certain functions, such as $\sin(x)$, but I have always wondered how they are derived. How is something like $$\sin(x)=\sum\limits_{n=0}^\infty \dfrac{x^{2n+1}}{(2n+1)!}\cdot(-1)^n = x-\dfrac{x^3}{3!}+\dfrac{x^5}{5!}-\dfrac{x^7}{7!}\pm\dots$$ derived, and how is it used? Thanks in advance for your answer.
Solution 1:
$\newcommand{\+}{^{\dagger}} \newcommand{\angles}[1]{\left\langle #1 \right\rangle} \newcommand{\braces}[1]{\left\lbrace #1 \right\rbrace} \newcommand{\bracks}[1]{\left\lbrack #1 \right\rbrack} \newcommand{\ceil}[1]{\,\left\lceil #1 \right\rceil\,} \newcommand{\dd}{{\rm d}} \newcommand{\down}{\downarrow} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,{\rm e}^{#1}\,} \newcommand{\fermi}{\,{\rm f}} \newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{{\rm i}} \newcommand{\iff}{\Longleftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\isdiv}{\,\left.\right\vert\,} \newcommand{\ket}[1]{\left\vert #1\right\rangle} \newcommand{\ol}[1]{\overline{#1}} \newcommand{\pars}[1]{\left( #1 \right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\pp}{{\cal P}} \newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,} \newcommand{\sech}{\,{\rm sech}} \newcommand{\sgn}{\,{\rm sgn}} \newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}} \newcommand{\ul}[1]{\underline{#1}} \newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert} \newcommand{\wt}[1]{\widetilde{#1}}$ Note that $$ \fermi\pars{x} = \fermi\pars{0} + \int_{0}^{x} \fermi'\pars{t}\,\dd t \,\,\,\stackrel{t\ \mapsto\ x - t}{=}\,\,\, \fermi\pars{0} + \int_{0}^{x}\fermi'\pars{x - t}\,\dd t $$
Integrating by parts: \begin{align} \color{#00f}{\fermi\pars{x}}&= \fermi\pars{0} + \fermi'\pars{0}x + \int_{0}^{x}t\fermi''\pars{x - t}\,\dd t \\[5mm] & = \fermi\pars{0} + \fermi'\pars{0}x + \half\,\fermi''\pars{0}x^{2} +\half\int_{0}^{x}t^{2}\fermi'''\pars{x - t}\,\dd t \\[8mm]& = \cdots = \color{#00f}{\fermi\pars{0} + \fermi'\pars{0}x + \half\,\fermi''\pars{0}x^{2} + \cdots + {\fermi^{{\rm\pars{n}}}\pars{0} \over n!}\,x^{n}} \\[2mm] & + \color{#f00}{{1 \over n!}\int_{0}^{x}t^{n} \fermi^{\rm\pars{n + 1}}\pars{x - t}\,\dd t} \end{align}
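If it helps to see this identity in action, here is a quick numerical sanity check (a Python sketch of my own, not part of the answer, with $f = \sin$ and the helper names being my choice): the degree-$n$ Taylor polynomial at $0$ plus the integral-form remainder should reproduce $f(x)$ up to quadrature error.

```python
import math

def sin_deriv(k, t):
    # k-th derivative of sin, via the identity sin^(k)(t) = sin(t + k*pi/2)
    return math.sin(t + k * math.pi / 2)

def taylor_poly(x, n):
    # f(0) + f'(0) x + ... + f^(n)(0) x^n / n!  for f = sin
    return sum(sin_deriv(k, 0.0) * x**k / math.factorial(k) for k in range(n + 1))

def remainder(x, n, steps=200_000):
    # (1/n!) * integral_0^x  t^n f^(n+1)(x - t) dt, approximated by the midpoint rule
    h = x / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += t**n * sin_deriv(n + 1, x - t)
    return total * h / math.factorial(n)

x, n = 1.3, 4
print(taylor_poly(x, n) + remainder(x, n), math.sin(x))  # the two should agree closely
```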
Solution 2:
This is the general formula for the Taylor series:
$$\begin{align} &f(x) \\ &= f(a) + f'(a) (x-a) + \frac{f''(a)}{2!} (x - a)^2 + \frac{f^{(3)}(a)}{3!} (x - a)^3 + \dots + \frac{f^{(n)}(a)}{n!} (x - a)^n + \cdots \end{align}$$
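To see the general formula in action numerically, here is a small Python sketch (my own illustration, not part of any proof): it evaluates the partial sum given a list of derivative values at $a$. The example uses $f = e^x$ centered at $a = 1$, where every derivative at $1$ equals $e$.

```python
import math

def taylor_sum(derivs_at_a, a, x):
    # sum_n f^(n)(a)/n! * (x - a)^n, given the derivative values f^(n)(a)
    return sum(d * (x - a)**n / math.factorial(n)
               for n, d in enumerate(derivs_at_a))

# Example: f = exp centered at a = 1; every derivative of exp at 1 equals e
a, x = 1.0, 1.5
print(taylor_sum([math.e] * 12, a, x), math.exp(x))  # should match closely
```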
Proofs are sketched elsewhere on this page (Solutions 1 and 3 derive it).
The series you mentioned for $\sin(x)$ is a special case of the Taylor series, called the Maclaurin series, which is centered at $a=0$.
The Taylor series is an extremely powerful tool because it shows that every suitably well-behaved function can be represented as an infinite polynomial (with a few disclaimers, such as the interval of convergence)! This means that we can differentiate such a function as easily as we can differentiate a polynomial, and we can compare functions by comparing their series expansions.
For instance, we know that the Maclaurin series expansion of $\cos(x)$ is $1-\frac{x^2}{2!}+\frac{x^4}{4!}-\dots$ and that the expansion of $\sin(x)$ is $x-\dfrac{x^3}{3!}+\dfrac{x^5}{5!}-\dfrac{x^7}{7!}+\dots$. Differentiating the series for $\sin(x)$ term by term, we can confirm directly that its derivative is $\cos(x)$.
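That term-by-term differentiation is easy to check mechanically. A small Python sketch (the helper names are my own): shifting the coefficient list of the sine series and multiplying by $k+1$ reproduces the cosine coefficients.

```python
import math

def maclaurin_sin(n_terms):
    # Coefficients c_k = sin^(k)(0)/k!; derivatives of sin at 0 cycle 0, 1, 0, -1
    cycle = [0.0, 1.0, 0.0, -1.0]
    return [cycle[k % 4] / math.factorial(k) for k in range(n_terms)]

def maclaurin_cos(n_terms):
    # Coefficients of cos; its derivatives at 0 cycle 1, 0, -1, 0
    cycle = [1.0, 0.0, -1.0, 0.0]
    return [cycle[k % 4] / math.factorial(k) for k in range(n_terms)]

def differentiate(coeffs):
    # Term-by-term derivative: d/dx sum c_k x^k = sum (k+1) c_{k+1} x^k
    return [(k + 1) * c for k, c in enumerate(coeffs[1:])]

sin_prime = differentiate(maclaurin_sin(10))
print(max(abs(a - b) for a, b in zip(sin_prime, maclaurin_cos(9))))  # ~ 0
```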
We can also use the Maclaurin series to prove that $e^{i\theta}=\cos{\theta}+i\sin{\theta}$ and thus $e^{\pi i}+1=0$ by comparing their series:
$$\begin{align} e^{ix} &{}= 1 + ix + \frac{(ix)^2}{2!} + \frac{(ix)^3}{3!} + \frac{(ix)^4}{4!} + \frac{(ix)^5}{5!} + \frac{(ix)^6}{6!} + \frac{(ix)^7}{7!} + \frac{(ix)^8}{8!} + \cdots \\[8pt] &{}= 1 + ix - \frac{x^2}{2!} - \frac{ix^3}{3!} + \frac{x^4}{4!} + \frac{ix^5}{5!} - \frac{x^6}{6!} - \frac{ix^7}{7!} + \frac{x^8}{8!} + \cdots \\[8pt] &{}= \left( 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} + \frac{x^8}{8!} - \cdots \right) + i\left( x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots \right) \\[8pt] &{}= \cos x + i\sin x \ . \end{align}$$
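This comparison can also be checked numerically. Here is a short Python sketch (my own, not from the answer) that sums the exponential series with complex arithmetic and compares against $\cos x + i\sin x$.

```python
import math

def exp_series(z, n_terms=30):
    # Partial sum of e^z = sum_{n>=0} z^n / n!; works for complex z as well
    term, total = 1 + 0j, 0j
    for n in range(n_terms):
        total += term
        term *= z / (n + 1)
    return total

x = 0.9
print(exp_series(1j * x))                      # series value of e^{ix}
print(complex(math.cos(x), math.sin(x)))       # cos x + i sin x, for comparison
```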
Also, you can use the first few terms of the Taylor series expansion to approximate a function if the function is close to the value on which you centered your series. For instance, we use the approximation $\sin(\theta)\approx \theta$ often in differential equations for very small values of $\theta$ by taking the first term of the Maclaurin series for $\sin(x).$
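As a quick illustration of that small-angle approximation (a Python sketch of my own): since the sine series alternates with decreasing terms for small $\theta$, the error of $\sin\theta \approx \theta$ is bounded by the first omitted term, $\theta^3/3!$.

```python
import math

for theta in (0.1, 0.01, 0.001):
    err = abs(math.sin(theta) - theta)
    # Alternating series: the first omitted term theta^3/3! bounds the error
    print(f"theta={theta}: error={err:.2e}, bound={theta**3 / 6:.2e}")
```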
Solution 3:
Taylor's theorem can be proved using only the Fundamental Theorem of Calculus, basic algebraic and geometric facts about integration, and some combinatorics. Although it's a little long to write out, the basic ideas are pretty simple.
The FTOC gives us: $$f(x) = f(a) + \int_a^x f'(x_1)dx_1$$ $$f'(x_1) = f'(a) + \int_a^{x_1} f''(x_2)dx_2$$ $$f''(x_2) = f''(a) + \int_a^{x_2} f'''(x_3)dx_3$$ $$\ldots$$ $$f^{(m)}(x_m) = f^{(m)}(a) + \int_a^{x_{m}} f^{(m+1)}(x_{m+1}) dx_{m+1},$$ where $f^{(m)}$ denotes the $m$th derivative of $f$. Substituting the second, third, ... expressions successively into the first gives: $$f(x) = f(a) + \int_{a<x_1<x} f'(a) dx_1 +\iint_{a<x_2<x_1<x} f''(a)dx_2dx_1 + \ldots + {\int \ldots \int}_{a<x_{m}< \ldots < x_1 < x} f^{(m)}(a)dx_{m} \ldots dx_1 + {\int \ldots \int}_{a<x_{m+1}< \ldots < x_1 < x} f^{(m+1)}(x_{m+1})\,dx_{m+1} \ldots dx_1 $$ For all the multiple integrals except the last one, the integrand is constant and can be pulled outside the integral. This gives us terms of the form: $$f^{(m)}(a){\int \ldots \int}_{a<x_{m}< \ldots < x_1 < x} \,dx_{m} \ldots dx_1$$ The ordering of variables $a<x_{m}< \ldots < x_1 < x$ is one of $m!$ orderings of the variables $x_1,\ldots,x_m$. Each one of these orderings corresponds to a region in $m$-dimensional space. These regions are all disjoint; by symmetry (or a change of variables) they all have the same volume, and their union is an $m$-cube with volume $(x-a)^m$.
From this we conclude: $${\int \ldots \int}_{a<x_{m}< \ldots < x_1 < x} dx_{m} \ldots dx_1 = \frac{(x-a)^m}{m!}.$$ Hence we have $$f(x) = f(a) + f'(a)(x-a) + f''(a)\frac{(x-a)^2}{2!} + \ldots + f^{(m)}(a)\frac{(x-a)^m}{m!} + {\int \ldots \int}_{a<x_{m+1}< \ldots < x_1 < x} f^{(m+1)}(x_{m+1})dx_{m+1} \ldots dx_1 $$ As to the last integral, we have bounds on the integrand: $$ \min_{a<y<x} f^{(m+1)}(y) \le f^{(m+1)}(x_{m+1}) \le \max_{a<y<x} f^{(m+1)}(y),$$ which gives us: $$f(x) = f(a) + f'(a)(x-a) + f''(a)\frac{(x-a)^2}{2!} + \ldots + f^{(m)}(a)\frac{(x-a)^m}{m!} + R_{m+1} $$ where $$\left(\min_{a<y<x} f^{(m+1)}(y) \right) \frac{(x-a)^{m+1}}{(m+1)!} \le R_{m+1} \le \left(\max_{a<y<x} f^{(m+1)}(y) \right) \frac{(x-a)^{m+1}}{(m+1)!}.$$ Note that this proof does not even require that $f^{(m+1)}$ be continuous. If $f^{(m+1)}(y)$ is continuous on $a \le y \le x$, then the more conventional form of the remainder follows immediately from the intermediate value theorem.
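The counting argument behind the $\frac{(x-a)^m}{m!}$ volume can be checked by simulation. A Python sketch (entirely my own illustration): sample uniform points in the unit $m$-cube and measure the fraction landing in the ordered region, which should approach $1/m!$.

```python
import math
import random

def simplex_fraction(m, samples=200_000, seed=0):
    # Fraction of uniform points in [0,1]^m satisfying x_m < ... < x_1.
    # Since all m! orderings are equally likely, this should be close to 1/m!.
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        p = [rng.random() for _ in range(m)]
        if all(p[i] > p[i + 1] for i in range(m - 1)):
            hits += 1
    return hits / samples

print(simplex_fraction(3), 1 / math.factorial(3))  # both near 1/6
```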
Solution 4:
Well, what we really want to do is approximate a function $f(x)$ around a value $a$.

We will call our Taylor series $T(x)$. Naturally, we want our series to have the exact value of $f(x)$ when $x = a$. For this, we start our Taylor approximation with the constant term $f(a)$. We have $$T(x) = f(a)$$ as our first approximation, and it is a good one as long as the function doesn't change much near $a$.

We can obtain a much better approximation if our approximation also has the same slope (or derivative) as $f(x)$ at $x = a$; that is, we want $T'(a) = f'(a)$. The simplest way to accomplish this is to add the term $f'(a)(x-a)$ to our approximation. We now have $T(x) = f(a) + f'(a)(x-a)$. You can verify that $T(a) = f(a)$ and that $T'(a) = f'(a)$.

If we continue this process, we derive the complete Taylor series, where $T^{(n)}(a) = f^{(n)}(a)$ for every positive integer $n$.
This is where the series comes from. If you write it in summation notation you reach what Juan Sebastian Lozano Munoz posted.
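Here is how that successive-matching picture looks numerically (a Python sketch of my own, with $\sin$ as the example and using the identity $\sin^{(k)}(t) = \sin(t + k\pi/2)$): each coefficient added matches one more derivative at $a$, and the approximation error at a nearby point drops accordingly.

```python
import math

def taylor_coeff(k, a):
    # f^(k)(a) / k! for f = sin, using sin^(k)(t) = sin(t + k*pi/2)
    return math.sin(a + k * math.pi / 2) / math.factorial(k)

a, x = 0.0, 1.0
approx, errors = 0.0, []
for k in range(10):
    approx += taylor_coeff(k, a) * (x - a) ** k
    errors.append(abs(approx - math.sin(x)))
print(errors[0], errors[-1])  # the error falls sharply as derivatives are matched
```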
Solution 5:
Here is another application of Taylor series that I've always liked: using the definition of the derivative to show that $$\frac{d}{dx} e^x = e^x.$$
The definition is $$\lim \limits_{h \to 0} \frac{e^{x+h} - e^x}{h},$$
which is equal to
$$\lim \limits_{h \to 0} \frac{e^x(e^h - 1)}{h}.$$
If we can show that $\lim \limits_{h \to 0} \frac{e^h - 1}{h} = 1$, we'll be home free. This is where Taylor/Maclaurin series come in. We know that $e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \dots$, so we can substitute:
$$\lim \limits_{h \to 0} \frac{-1 + 1 + h + \frac{h^2}{2!} + \frac{h^3}{3!} + \dots}{h}$$
$$\lim \limits_{h \to 0} \frac{h + \frac{h^2}{2!} + \frac{h^3}{3!} + \dots}{h}$$
$$\lim \limits_{h \to 0} \left(1 + \frac{h}{2!} + \frac{h^2}{3!} + \dots\right) = 1.$$
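If you want to watch this limit numerically, here is a Python sketch of my own: evaluating the series form $\frac{e^h-1}{h} = 1 + \frac{h}{2!} + \frac{h^2}{3!} + \dots$ for shrinking $h$ sidesteps the floating-point cancellation that computing $e^h - 1$ directly would cause for very small $h$.

```python
import math

def g(h, n_terms=20):
    # (e^h - 1)/h computed from its series: 1 + h/2! + h^2/3! + ...
    return sum(h**k / math.factorial(k + 1) for k in range(n_terms))

for h in (0.1, 0.01, 0.001):
    print(h, g(h))  # tends to 1 as h -> 0
```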