Summation formulas: how to get the formula for exponents greater than 3
I'm studying in the summer for calculus 2 in the fall and I'm reading about summation. I'm given these formulas: \begin{align*} \sum_{i=1}^n 1 &= n, \\ \sum_{i=1}^n i &= \frac{n(n+1)}{2},\\ \sum_{i=1}^n i^2 &= \frac{n(n+1)(2n+1)}{6},\\ \sum_{i=1}^n i^3 &=\left[ \frac{n(n+1)}{2}\right]^2. \end{align*}
But how do I come up with the right formula for exponents greater than 3?
You can obtain these formulae recursively. Start from the telescoping sum
$$n^5 = \sum_{k=1}^n \left( k^5 - (k-1)^5\right) $$
Expand $(k-1)^5$ and cancel the $k^5$ terms inside the sum. Then apply the identities above and solve for $\sum_{k=1}^n k^4$. Continue in this fashion to get sums of higher powers.
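As a concrete sketch of this recursion (the function name and coefficient-list representation are my own), the expansion gives $n^{p+1} = \sum_{j=0}^{p} \binom{p+1}{j} (-1)^{p-j} \sum_{k=1}^n k^j$, which can be solved for the top sum with exact rational arithmetic:

```python
from fractions import Fraction
from functools import lru_cache
from math import comb

# Coefficients (c_0, c_1, ..., c_{p+1}) of the polynomial identity
# sum_{k=1}^n k^p = c_0 + c_1 n + ... + c_{p+1} n^{p+1},
# obtained from the telescoping recursion:
# n^{p+1} = sum_{j=0}^{p} C(p+1, j) (-1)^(p-j) * sum_{k=1}^n k^j.
@lru_cache(maxsize=None)
def power_sum_coeffs(p):
    if p == 0:
        return (Fraction(0), Fraction(1))          # sum_{k=1}^n 1 = n
    coeffs = [Fraction(0)] * (p + 2)
    coeffs[p + 1] = Fraction(1)                    # start from n^(p+1)
    for j in range(p):                             # move lower sums across
        c = Fraction(comb(p + 1, j) * (-1) ** (p - j))
        for i, a in enumerate(power_sum_coeffs(j)):
            coeffs[i] -= c * a
    return tuple(a / (p + 1) for a in coeffs)      # divide by leading (p+1)

# sum_{k=1}^n k^4 = -n/30 + n^3/3 + n^4/2 + n^5/5
print(power_sum_coeffs(4))
```

For example, `power_sum_coeffs(2)` recovers $\frac{n}{6} + \frac{n^2}{2} + \frac{n^3}{3} = \frac{n(n+1)(2n+1)}{6}$.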
General method for indefinite summation $\def\nn{\mathbb{N}}$ $\def\zz{\mathbb{Z}}$
Define the forward difference operator:
$D = ( \text{function $f$ on $\zz$} \mapsto ( \text{int $n$} \mapsto f(n+1) - f(n) ) )$
Namely for any function $f$ on $\zz$ and $n \in \zz$, $D(f)(n) = f(n+1) - f(n)$.
If you think of the functions as sequences (infinite in both directions), then taking the forward difference means replacing each term with the value of the next term minus itself. For example the following shows what happens when you repeatedly take the forward difference of the sequence of cubes:
...,-27,-8,-1, 0, 1, 8,27,...
..., 19, 7, 1, 1, 7,19,37,...
...,-12,-6, 0, 6,12,18,24,...
..., 6, 6, 6, 6, 6, 6, 6,...
..., 0, 0, 0, 0, 0, 0, 0,...
..., 0, 0, 0, 0, 0, 0, 0,...
Then we have:
$D\left( \text{int $n$} \mapsto \binom{n}{k+1} \right) = \left( \text{int $n$} \mapsto \binom{n}{k} \right)$ for any $k \in \zz$.
This is to be expected, because it follows directly from Pascal's rule $\binom{n+1}{k+1} = \binom{n}{k+1} + \binom{n}{k}$, especially if we define $\binom{n}{k}$ via the triangle.
This means that if we have any function $f$ on $\zz$ such that $f(n) = \sum_{k=0}^\infty a_k \binom{n}{k}$ for any $n \in \zz$, then we get:
$D(f)(n) = \sum_{k=0}^\infty a_{k+1} \binom{n}{k}$ for any $n \in \zz$.
From a high-level perspective, this is the discrete version of the Taylor series, and indeed for such a function we easily see that $f(n) = \sum_{k=0}^\infty D^k(f)(0) \binom{n}{k}$ for any $n \in \zz$, because $\binom{0}{0} = 1$ while $\binom{0}{k} = 0$ for any $k \in \nn^+$.
This works for any polynomial function $f$ on $\zz$, since $D^k(f)$ is the zero function once $k$ is greater than the degree of $f$. So we can use it to immediately find the series for $( \text{int $n$} \mapsto n^3 )$, and then take the anti-difference by shifting the coefficients of the series the other way. The undetermined constant that appears will drop out once we perform a definite sum, such as the sum of the first $m$ cubes.
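For instance (the helper name here is my own), the coefficients $D^k(f)(0)$ can be read off by repeatedly differencing the values $f(0), f(1), \ldots, f(d)$ and keeping the head of each row, exactly as in the cube table above:

```python
from math import comb

# Iterated forward differences of f(0), f(1), ..., f(d): the heads of the
# successive rows are the discrete Taylor coefficients D^k(f)(0).
def newton_coeffs(values):
    coeffs = []
    row = list(values)
    while row:
        coeffs.append(row[0])                        # D^k(f)(0)
        row = [b - a for a, b in zip(row, row[1:])]  # forward difference
    return coeffs

# Cubes: n^3 = 0*C(n,0) + 1*C(n,1) + 6*C(n,2) + 6*C(n,3)
a = newton_coeffs([0, 1, 8, 27])                     # -> [0, 1, 6, 6]
assert all(n ** 3 == sum(ak * comb(n, k) for k, ak in enumerate(a))
           for n in range(20))
```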
Sum of $p$ powers
For example, if we want $\sum_{k=0}^{n-1} k^2$ (equivalently $\sum_{k=1}^{n-1} k^2$, since the $k=0$ term vanishes), we first find the iterated forward differences of the sequence of squares $( n^2 )_{n \in \zz}$:
...,0,1,4,9,...
...,1,3,5,...
...,2,2,...
...,0,...
So we immediately get $n^2 = 0 \binom{n}{0} + 1 \binom{n}{1} + 2 \binom{n}{2}$ and hence $\sum_{k=0}^{n-1} k^2 = 0 \binom{n}{1} + 1 \binom{n}{2} + 2 \binom{n}{3}$. It is trivial to simplify this to $\sum_{k=0}^{n-1} k^2 = \frac{1}{6}(n-1)n(2n-1)$.
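A quick numeric check of this simplification (using only the shifted series from above):

```python
from math import comb

# sum_{k=0}^{n-1} k^2 via the shifted binomial series, checked against
# both the direct sum and the closed form (n-1)n(2n-1)/6.
def sum_squares(n):
    return 0 * comb(n, 1) + 1 * comb(n, 2) + 2 * comb(n, 3)

for n in range(1, 50):
    assert sum_squares(n) == sum(k * k for k in range(n))
    assert 6 * sum_squares(n) == (n - 1) * n * (2 * n - 1)
```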
Computation efficiency
This is far more efficient than the method given by ncmathsadist, because the series using binomial coefficients is easy to manipulate and easy to compute. For the sum of $p$-th powers we only need $O(p^2)$ arithmetic operations to find the forward differences, and then $O(p^2)$ more to simplify the series form into a standard polynomial form. In contrast, the other method requires $O(p^3)$ arithmetic operations.
Indefinite summation of non-polynomials
Also, for a wide class of non-polynomial functions, we can still compute the indefinite sum without using the series, by using the discrete analogue to integration by parts, here called summation by parts.
To derive it, simply check that $D(f \times g)(n) = f(n+1) g(n+1) - f(n) g(n) = f(n+1) D(g)(n) - D(f)(n) g(n)$ and so we get the product rule:
$D(f \times g) = R(f) \times D(g) + D(f) \times g$
where $R$ is the right-shift operator defined as:
$R = ( \text{function $f$ on $\zz$} \mapsto ( \text{int $n$} \mapsto f(n+1) ) )$
Namely for any function $f$ on $\zz$ and $n \in \zz$, $R(f)(n) = f(n+1)$.
For convenience we also define the summation operator:
$S = ( \text{function $f$ on $\zz$} \mapsto ( \text{int $n$} \mapsto \sum_{k=0}^{n-1} f(k) ) )$
Then we have the important property that $DS(f) = f$ for any function $f$ on $\zz$, analogous to the fundamental theorem of calculus.
Now by substituting $f$ with $S(f)$ into the product rule and taking summation on both sides we get summation by parts:
$S( f \times g ) = S(f) \times g - S( R(S(f)) \times D(g) ) + c$ for some constant function $c$ on $\zz$.
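As a sanity check (the choices of $f$ and $g$ below are arbitrary examples of mine), the identity can be verified numerically. Note that with this definite version of $S$ we have $S(f)(0) = 0$, so the constant $c$ works out to $0$:

```python
# Operators from the text, acting on functions of an int n >= 0.
def S(f):                     # summation: S(f)(n) = sum_{k=0}^{n-1} f(k)
    return lambda n: sum(f(k) for k in range(n))

def D(f):                     # forward difference: D(f)(n) = f(n+1) - f(n)
    return lambda n: f(n + 1) - f(n)

def R(f):                     # right shift: R(f)(n) = f(n+1)
    return lambda n: f(n + 1)

f = lambda n: n ** 2          # arbitrary example choices
g = lambda n: 3 ** n

lhs = S(lambda k: f(k) * g(k))
Sf = S(f)
rhs = lambda n: Sf(n) * g(n) - S(lambda k: R(Sf)(k) * D(g)(k))(n)

assert all(lhs(n) == rhs(n) for n in range(15))   # c = 0 here
```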
Example indefinite sum
Using this we can easily compute things like $\sum_{k=1}^n k^3 3^k$ by applying it three times, each time reducing the degree of the polynomial part. There are other ways to achieve this using differentiation, but this method is purely discrete.
You can find the formulas here (they get progressively messier); however, for Calculus 2 you won't need anything past what you've already got.
The way I like to do this is to consider that we're looking for a polynomial $P(x)$ of degree $n+1$ satisfying $$P(x)=x^n+P(x-1), \qquad P(0)=0.$$ Notice that when we expand, we get $$P(x)=x^n+P(x-1)=x^n + (x-1)^n + P(x-2) = \ldots = x^n + (x-1)^n + \ldots + 2^n + 1^n + 0,$$ which is what we desire. It's not too hard to show that such a polynomial always exists*, but once we know it does, this gives a way to find it.

So, for $n=1$ we want a polynomial of degree $2$, say $P(x)=ax^2+bx$ (noting that the constant term is obviously $0$), satisfying $$ax^2 + bx = x + a(x-1)^2 + b(x-1),$$ which expands as $$ax^2 + bx = ax^2 + (1+b-2a)x + (a-b).$$ Since the coefficient of $x$ and the constant term must be equal on both sides, we get $$b=1+b-2a, \qquad 0=a-b,$$ yielding $a=b=\frac{1}{2}$, so $$P(x)=\frac{1}{2}\left(x^2+x\right)$$ is the polynomial giving the sum $1+2+3+\ldots+x$.

We can repeat this procedure for all degrees of polynomials, and may indeed replace the $x^n$ term by any polynomial of degree $n$ (which is often helpful). It takes some work to do things this way, but it always works without too much cleverness.
(*My way to do this would be to consider the linear map on polynomials of degree at most $n+1$ taking $P$ to the pair $(Q,c)$, where $Q(x)=P(x)-P(x-1)$ and $c=P(0)$. It turns out that this map is invertible. Notice that for $Q$ to be zero, $P$ must be constant, and for $c$ to be zero too, $P$ must be the zero polynomial, meaning that the kernel of the map is trivial. Additionally, the degree of $Q$ is at most $n$, so the space of pairs $(Q,c)$ is $(n+2)$-dimensional, the same as the space of polynomials of degree at most $n+1$. We may therefore conclude that any system of equations of the form presented has a unique solution, which, by the same methods as used in the post, can be shown to mean that summing a polynomial yields another polynomial of degree one higher.)
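The undetermined-coefficients computation above can be mechanized; here is a sketch (the function name and the triangular solve are my own) using exact rational arithmetic. The key observation is that matching coefficients of $P(x)-P(x-1)=x^n$ gives an upper-triangular system with nonzero diagonal, so back-substitution suffices:

```python
from fractions import Fraction
from math import comb

# Solve for P(x) = a_1 x + ... + a_{n+1} x^{n+1} with
# P(x) - P(x-1) = x^n and P(0) = 0, by matching coefficients.
def power_sum_polynomial(n):
    d = n + 1
    # x^j - (x-1)^j = -sum_{i<j} C(j, i) (-1)^(j-i) x^i, so the system
    # M a = rhs is upper triangular with M[i][i] = i + 1 on the diagonal.
    M = [[Fraction(0)] * d for _ in range(d)]
    for j in range(1, d + 1):                 # unknown a_j -> column j-1
        for i in range(j):                    # contributes to x^i, i < j
            M[i][j - 1] = Fraction(-comb(j, i) * (-1) ** (j - i))
    rhs = [Fraction(0)] * d
    rhs[n] = Fraction(1)                      # right-hand side is x^n
    a = [Fraction(0)] * d
    for i in reversed(range(d)):              # back-substitution
        s = rhs[i] - sum(M[i][k] * a[k] for k in range(i + 1, d))
        a[i] = s / M[i][i]
    return [Fraction(0)] + a                  # [a_0 = 0, a_1, ..., a_{n+1}]

# n = 1 reproduces P(x) = (x^2 + x)/2 from the worked example above.
print(power_sum_polynomial(1))                # coefficients 0, 1/2, 1/2
```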