Interpreting the Cayley-Hamilton theorem
Solution 1:
It's simple to see that every $n\times n$ matrix has to be a zero of some nonzero polynomial of degree at most $n^2$, simply because the space of $n\times n$ matrices has dimension $n^2$, so the $n^2+1$ matrices $I$, $A$, ..., $A^{n^2}$ must be linearly dependent. The Cayley-Hamilton Theorem says that you can find such a polynomial of much smaller degree, namely the characteristic polynomial, which has degree $n$.
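If it helps to see this computationally, here is a minimal sympy sketch of the dimension-count argument (the $3\times 3$ matrix is an arbitrary example of my own): it finds a linear dependence among $I, A, \dots, A^{n^2}$ and turns it into a polynomial annihilating $A$.

```python
# The dimension-count argument: I, A, ..., A^(n^2) are n^2 + 1 "vectors" in the
# n^2-dimensional space of n x n matrices, hence linearly dependent; any
# dependence relation is a nonzero polynomial of degree <= n^2 killing A.
import sympy as sp

n = 3
A = sp.Matrix([[2, 1, 0], [0, 2, 0], [1, 0, 3]])   # arbitrary example matrix

powers = [A**k for k in range(n**2 + 1)]
# Flatten each power into a column of length n^2 and stack them side by side.
M = sp.Matrix.hstack(*[P.reshape(n**2, 1) for P in powers])

c = M.nullspace()[0]                    # coefficients c_0, ..., c_{n^2}
combo = sum((c[k] * powers[k] for k in range(n**2 + 1)), sp.zeros(n, n))
print(combo)                            # the zero matrix: p(A) = 0
```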
Another way to see this is as follows: for each fixed vector $v$, the $n+1$ vectors $v$, $Av$, ..., $A^n v$ cannot be linearly independent, and so there is a nonzero polynomial $p$ of degree at most $n$ such that $p(A)v=0$. However, this polynomial depends on $v$. The Cayley-Hamilton Theorem gives you a single polynomial of degree $n$, the characteristic polynomial, that works for all vectors $v$ at once.
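As a quick check of this per-vector argument (again with a made-up matrix and vector), one can compute a dependence relation among $v, Av, \dots, A^n v$ directly:

```python
# For a fixed v, the n + 1 vectors v, Av, ..., A^n v live in an n-dimensional
# space, so they are dependent; the dependence gives a polynomial p with
# p(A)v = 0 -- but p generally changes when v does.
import sympy as sp

n = 3
A = sp.Matrix([[2, 1, 0], [0, 2, 0], [1, 0, 3]])   # same arbitrary example
v = sp.Matrix([1, 0, 0])

K = sp.Matrix.hstack(*[A**k * v for k in range(n + 1)])
c = K.nullspace()[0]                    # coefficients of p, depending on v
pA = sum((c[k] * A**k for k in range(n + 1)), sp.zeros(n, n))
print(pA * v)                           # the zero vector, even if pA itself is not 0
```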
Finally, for applications, having a polynomial $p$ such that $p(A)=0$ allows you to compute all powers of $A$ as linear combinations of $I$, $A$, ..., $A^{n-1}$. Indeed, assuming $p$ is monic of degree $n$, you can write $p(X)=X^n+q(X)$ with $q$ of degree less than $n$. So $A^n = -q(A)$. Then $A^{n+1}=-A\,q(A)$; if $A^n$ appears in $A\,q(A)$, replace it by $-q(A)$. Do the same for $A^{n+2} = A \cdot A^{n+1}$, and so on. For a concrete example, see http://en.wikipedia.org/wiki/Cayley-Hamilton_theorem#Illustration_for_specific_dimensions_and_practical_applications.
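Here is a small sympy sketch of this reduction (with a $2\times 2$ example matrix of my own). Instead of substituting repeatedly, it does the equivalent thing in one step: reduce $X^m$ modulo the characteristic polynomial and evaluate the remainder at $A$.

```python
# Power reduction via Cayley-Hamilton: X^m mod charpoly(A) has degree < n, and
# evaluating that remainder at A gives A^m as a combination of I, ..., A^(n-1).
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[2, 1], [1, 1]])          # arbitrary 2 x 2 example
n = A.shape[0]

p = A.charpoly(x).as_expr()              # here: x**2 - 3*x + 1
m = 10
r = sp.rem(x**m, p, x)                   # remainder of X^m modulo p, degree < n
coeffs = sp.Poly(r, x).all_coeffs()[::-1]          # constant term first

Am = sum((c * A**k for k, c in enumerate(coeffs)), sp.zeros(n, n))
print(Am == A**m)                        # True: A^10 written in terms of I and A
```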
Solution 2:
First of all, the Cayley-Hamilton Theorem is a somewhat surprising result on its own. As for uses, it does not immediately give rise to many important statements, but it pops up in the proofs of other results occasionally. It is often involved in proving the Jordan canonical form, and I have also seen it used to prove the main result regarding Kummer extensions in Galois theory. One useful consequence: if a matrix $A$ satisfies a polynomial $p$, i.e. $p(A)=0$, then the eigenvalues of $A$ are a subset of the roots of $p$ (if $Av=\lambda v$ with $v\neq 0$, then $0=p(A)v=p(\lambda)v$, so $p(\lambda)=0$).
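Here is a minimal sympy check of that last claim (the matrix and polynomial are made up for illustration; $p$ deliberately has an extra root, $5$, that is not an eigenvalue, so the containment can be proper):

```python
# If p(A) = 0, every eigenvalue of A is a root of p, but not conversely.
import sympy as sp

x = sp.symbols('x')
A = sp.diag(1, 2)                        # eigenvalues 1 and 2
p = (x - 1) * (x - 2) * (x - 5)          # annihilates A, has an extra root

pA = (A - sp.eye(2)) * (A - 2 * sp.eye(2)) * (A - 5 * sp.eye(2))
print(pA)                                # the zero matrix
print(set(A.eigenvals()), sp.solve(p, x))   # {1, 2} versus [1, 2, 5]
```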
Solution 3:
Let $K$ be an algebraically closed field, let $V$ be a vector space over $K$ of dimension $n$, and let $u\colon V \rightarrow V$ be a $K$-linear map. Let $K[X]$ be the polynomial ring in one variable. $V$ can be regarded as a $K[X]$-module as follows: for $f(X) \in K[X]$ and $x \in V$, define $f(X)x = f(u)(x)$. It is easy to see that $V$ is a finitely generated torsion $K[X]$-module. Hence $V$ has a composition series $V = V_0 \supset V_1 \supset\cdots \supset V_{r-1} \supset V_r = 0$. Since each $V_i/V_{i+1}$ is a simple $K[X]$-module, it is isomorphic to $K[X]/(X - \alpha_i)$ for some $\alpha_i \in K$. It is easy to see that $f(X) = \prod_i (X - \alpha_i)$ is the characteristic polynomial of $u$ (note that $r = n$, since each composition factor is one-dimensional over $K$). Moreover $f(X)V = 0$: indeed, $(X - \alpha_i)$ maps $V_i$ into $V_{i+1}$, so applying the factors one after another sends $V = V_0$ into $V_r = 0$. This is exactly the Cayley-Hamilton theorem.
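To make the argument concrete, here is a small example of my own (a single $2\times 2$ Jordan block). Take $V = K^2$ and let $u$ act by the matrix $\begin{pmatrix} \alpha & 1 \\ 0 & \alpha \end{pmatrix}$. Then $V_1 = Ke_1$ is a $K[X]$-submodule, and $X$ acts as $\alpha$ both on $V_1$ and on $V/V_1$, so both composition factors are isomorphic to $K[X]/(X-\alpha)$. Hence $f(X) = (X-\alpha)^2$, which is indeed the characteristic polynomial, and $(X-\alpha)$ sends $V$ into $V_1$ and $V_1$ into $0$, so $f(X)V = 0$, i.e. $(u-\alpha)^2 = 0$.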
Now let's consider the following analogue of the above argument. Let $G$ be a finitely generated torsion $\mathbb{Z}$-module, i.e. a finite abelian group. $G$ has a composition series $G = G_0 \supset G_1 \supset\cdots \supset G_{r-1} \supset G_r = 0$. Each $G_i/G_{i+1}$ is isomorphic to $\mathbb{Z}/p_i\mathbb{Z}$, where $p_i$ is a prime number. It is easy to see that $|G|= \prod_i p_i$, and also that $|G|G = 0$.
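For instance (a small example of my own), take $G = \mathbb{Z}/4\mathbb{Z}$. A composition series is $G \supset 2G \supset 0$, with both factors isomorphic to $\mathbb{Z}/2\mathbb{Z}$, so $p_0 = p_1 = 2$, $|G| = 2 \cdot 2 = 4$, and indeed $4G = 0$.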
Therefore, in this analogy, $|G|$ corresponds to the characteristic polynomial, and the statement $|G|G = 0$ corresponds to the Cayley-Hamilton theorem.