Apostol proof divides by zero?

The following is a proof from Apostol's linear algebra book that $\{1,t,t^{2},...\}$ is independent. To my eye, he's dividing by zero repeatedly. Is this really as huge an error as it seems, or are there missing details that would make this rigorous?

It suffices to show that for each $n$ the $n+1$ polynomials $1,t,...,t^{n}$ are independent. If some linear combination of these polynomials equals the zero polynomial, we have $$\sum_{k=0}^{n}c_{k}t^{k}=0$$ for all real $t$. Putting $t=0$ we see that $c_{0}=0$. Now divide by $t$ and put $t=0$ again to find $c_{1}=0$. Repeating the process we find every $c_{k}$ is $0$, so $1,t,...,t^{n}$ are independent.

Edit: I'm treating "polynomial" as a garden-variety function from $\mathbb{R}$ to $\mathbb{R}$, and it seems like that may be part of my problem.


Solution 1:

As David mentioned in his comment, the issue here is that continuity is used implicitly. Once one deduces that $c_0 = 0$, one is left with $$ t(c_1 + c_2 t + c_3 t^2 + \dots + c_n t^{n-1}) = 0, $$ which means that $c_1 + c_2 t + \dots + c_n t^{n-1} = 0$ for all $t \neq 0$. But polynomials are continuous functions over the reals, so $c_1 + c_2 t + \dots + c_n t^{n-1} = 0$ holds at $t = 0$ as well, where we find that $c_1 = 0$, and the process can continue.
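To see the continuity step concretely, here is a small numeric sketch in plain Python (the coefficients are made up for illustration): once $c_0 = 0$, the quotient $p(t)/t$ agrees with $c_1 + c_2 t + \dots$ for $t \neq 0$, and its values near $0$ approach $c_1$.

```python
# Sketch: after c0 = 0, the quotient p(t)/t approaches c1 as t -> 0.
# Illustrative coefficients: p(t) = 2t - 3t^2 + 5t^3, so c1 = 2.
coeffs = [0.0, 2.0, -3.0, 5.0]  # c0, c1, c2, c3 with c0 already shown to be 0

def p(t):
    return sum(c * t**k for k, c in enumerate(coeffs))

# p(t)/t is undefined at t = 0, but continuity fills the hole:
# the values at small nonzero t converge to c1 = 2.0.
for t in (1e-2, 1e-4, 1e-6):
    print(t, p(t) / t)

# So if p were the zero function, these quotients would all be 0,
# and the limit c1 would be 0 as well.
```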

This is where we use a very nice property of polynomial functions over the reals: the only polynomial with real coefficients that vanishes everywhere over the reals is the zero polynomial.
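A complementary, finite way to check this (distinct from the continuity argument): a polynomial of degree at most $n$ that vanishes at $n+1$ distinct points must be the zero polynomial, since the Vandermonde system determining its coefficients has a unique solution. A minimal sketch, with the sample points $0, 1, \dots, n$ chosen for illustration:

```python
from fractions import Fraction

# If a polynomial of degree <= n vanishes at the n+1 points 0, 1, ..., n,
# all its coefficients are 0, because the Vandermonde system below is
# uniquely solvable. Exact rational arithmetic avoids rounding issues.

def coeffs_from_samples(values):
    """Solve sum_k c_k * x^k = values[x] for x = 0..n by Gauss-Jordan elimination."""
    n = len(values) - 1
    A = [[Fraction(x) ** k for k in range(n + 1)] + [Fraction(values[x])]
         for x in range(n + 1)]
    for col in range(n + 1):
        piv = next(r for r in range(col, n + 1) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [a / A[col][col] for a in A[col]]
        for r in range(n + 1):
            if r != col:
                A[r] = [a - A[r][col] * b for a, b in zip(A[r], A[col])]
    return [row[-1] for row in A]

# Zero samples force zero coefficients:
print(coeffs_from_samples([0, 0, 0, 0]))  # four zero coefficients
```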

I mention this because I spent a summer studying polynomials with this "vanishing property", i.e., polynomials that evaluate to zero everywhere on their ground ring. Here the "ground ring" would be $\mathbb R$ (I wouldn't say "ground ring" is a standard term, but it works for me here). Replace $\mathbb R$ by, say, $\mathbb Z / p^n \mathbb Z$, and things get all messed up. Here's a nice result (just for show =D ) that I managed to find:

Theorem. Let $f(x) \in (\mathbb Z / p^n \mathbb Z)[x]$. Then the associated polynomial function $f : \mathbb Z / p^n \mathbb Z \to \mathbb Z / p^n \mathbb Z$ is identically zero if and only if, when we write $$ f(x) = \sum_{i=0}^n c_i (x)_i $$ where $(x)_i = x(x-1)\cdots(x-(i-1))$ with the convention $(x)_0 = 1$, we have $c_i \cdot i! \equiv 0 \mod p^n$ for every $i$.

Proof. ($\Longleftarrow$) Suppose $c_i \cdot i! \equiv 0 \mod p^n$ for every $i$. Then $$ f(0) = c_0 = c_0 \cdot 0! \equiv 0 \mod p^n, $$ and for $k \ge 1$, since $(k)_i = 0$ when $i > k$, $$ f(k) = \sum_{i=0}^k c_i (k)_i. $$ Now $(k)_i = i! \binom{k}{i}$, so $$ \sum_{i=0}^k c_i (k)_i = \sum_{i=0}^k c_i \, i! \binom{k}{i} \equiv \sum_{i=0}^k 0 \equiv 0 \mod p^n, $$ because binomial coefficients are integers.

($\Longrightarrow$) Suppose the function $f$ is identically zero $\mod p^n$. Since $(k)_i = i!\binom{k}{i}$, for every $k \ge 0$ we have $$ f(k) = \sum_{i=0}^k c_i \, i! \binom{k}{i}. $$ Computing $f(0)$ gives $0 \equiv f(0) = c_0 \cdot 0! \mod p^n$. By induction, if $c_i \cdot i! \equiv 0 \mod p^n$ for all $i < k$, then $$ 0 \equiv f(k) = \sum_{i=0}^k c_i \, i! \binom{k}{i} \equiv c_k \, k! \binom{k}{k} = c_k \, k! \mod p^n, $$ so $c_k \cdot k! \equiv 0 \mod p^n$ for every $k$.
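The criterion can be checked computationally. A hedged sketch: since $(k)_i = i!\binom{k}{i}$, the quantity $c_i \cdot i!$ equals the $i$-th forward difference $\Delta^i f(0)$, so the theorem says $f$ vanishes everywhere mod $p^n$ exactly when all forward differences of $f$ at $0$ are $\equiv 0 \mod p^n$. The values $p = 3$, $n = 2$, and the test polynomial below are choices for illustration.

```python
# Check: f is identically zero mod p^n  iff  every forward difference
# Delta^i f(0) (which equals c_i * i! in the falling-factorial basis)
# is congruent to 0 mod p^n.

def poly_eval(coeffs, x):
    """Evaluate an integer polynomial given by its monomial coefficients."""
    return sum(c * x**k for k, c in enumerate(coeffs))

def vanishes_everywhere(coeffs, m):
    return all(poly_eval(coeffs, x) % m == 0 for x in range(m))

def forward_diffs_at_zero(coeffs):
    """Delta^i f(0) for i = 0..deg f, via repeated differencing of f(0..deg f)."""
    vals = [poly_eval(coeffs, x) for x in range(len(coeffs))]
    diffs = []
    while vals:
        diffs.append(vals[0])
        vals = [b - a for a, b in zip(vals, vals[1:])]
    return diffs

p, n = 3, 2              # ground ring Z/9Z
m = p**n
# (x^3 - x)^2 = x^6 - 2x^4 + x^2 vanishes everywhere mod 9,
# since x^3 - x is divisible by 3 for every integer x.
f = [0, 0, 1, 0, -2, 0, 1]
assert vanishes_everywhere(f, m)
assert all(d % m == 0 for d in forward_diffs_at_zero(f))
```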

This gives you a wide class of rings over which there are plenty of nonzero polynomials that vanish everywhere. Note that these rings are "discrete" in some sense, i.e. there is no notion of continuity that works in a fashion similar to the reals, which is why we could hope that such polynomials exist. One familiar example is of course $x^p - x$ in $\mathbb Z / p \mathbb Z$, which vanishes everywhere by Fermat's little theorem (note that $x^{p-1} - 1$ alone does not work, since it equals $-1$ at $x = 0$), but there are many others, for instance $(x^p - x)^n q(x) \mod p^n$ where $q(x)$ is an arbitrary polynomial over $\mathbb Z / p^n \mathbb Z$: since $x^p - x \equiv 0 \mod p$ for every $x$, its $n$-th power is $\equiv 0 \mod p^n$. Note that the polynomials vanishing everywhere form an ideal, which can be interesting. =)
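As a sanity check, here is a quick computation (small $p$, $n$, and $q$ are choices for illustration) confirming that $x^p - x = x(x^{p-1} - 1)$ vanishes at every point of $\mathbb Z / p \mathbb Z$, and that $(x^p - x)^n q(x)$ vanishes at every point of $\mathbb Z / p^n \mathbb Z$:

```python
# Verify: x^p - x is 0 at every point of Z/pZ (Fermat's little theorem),
# hence (x^p - x)^n * q(x) is 0 at every point of Z/p^nZ.
p, n = 5, 3
m = p**n  # 125

# x^p - x is divisible by p for every x ...
assert all((x**p - x) % p == 0 for x in range(p))

def q(x):  # an arbitrary nonzero polynomial, chosen for the demo
    return 7 * x**2 + 3 * x + 1

# ... so its n-th power (times anything) is divisible by p^n.
assert all(((x**p - x) ** n * q(x)) % m == 0 for x in range(m))
print("all checks passed")
```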

Hope that was fun to read! And that it helps.

Solution 2:

No, no division by zero is involved. There are two processes at work here: manipulation of polynomials, and evaluation of polynomials. When we manipulate polynomials, we do so as formal expressions, where "$t$" is a symbol that doesn't take on a particular value. We can then evaluate any polynomial we create this way at some given point, but the fact that we've divided out $t$ along the way doesn't mean that we've suddenly divided by zero. As an example, consider what happens in the following case: if I have a polynomial $a + bt$, and then I learn that $a = 0$, all I'm doing by dividing out by $t$ is considering the related polynomial $b$.
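The formal manipulation can be made concrete by representing a polynomial as its coefficient list: "dividing out $t$" is then just dropping the constant coefficient once it is known to be zero, and no value of $t$ (let alone $t = 0$) is ever plugged into a denominator. A minimal sketch, with this representation chosen for illustration:

```python
# Represent c0 + c1*t + ... + cn*t^n as the list [c0, c1, ..., cn].
# Formally dividing by t (legitimate once the constant term is 0)
# is a shift of the coefficient list -- no evaluation is involved.

def divide_by_t(coeffs):
    assert coeffs[0] == 0, "constant term must vanish before dividing by t"
    return coeffs[1:]

def peel(coeffs):
    """Apostol's argument as a loop: read off the constant term (= p(0)),
    conclude it is 0, formally divide by t, repeat."""
    while coeffs:
        c0 = coeffs[0]          # this is p(0)
        coeffs = divide_by_t(coeffs)
        yield c0

# For the zero polynomial every peeled coefficient is 0:
print(list(peel([0, 0, 0, 0])))   # [0, 0, 0, 0]
```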