Why is it that $\det(\phi-x\,\mathrm{id})=\sum_{i=0}^n (-1)^i c_{n-i}x^i$?

I'm trying to understand a certain formula for the determinant in a more general setting.

Say you have a free module $M$ of rank $n$ over a (commutative) ring $R$. Let $\phi\in\operatorname{End}(M)$, the $R$-module of endomorphisms of $M$, and denote by $c_i$ the trace of the map that $\phi$ induces on the $i$-th exterior power $\Lambda^i M$ (the exterior algebra is also known as the alternating or Grassmann algebra).

How come $$ \det(\phi-x\,\mathrm{id})=\sum_{i=0}^n (-1)^i c_{n-i}\,x^i\,? $$
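For instance, for $n=2$ one has $c_0=1$, $c_1=\operatorname{tr}\phi$ and $c_2=\det\phi$, and the identity reads
$$ \det(\phi-x\,\mathrm{id})=\det\phi-(\operatorname{tr}\phi)\,x+x^2 . $$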

This seems like it would follow easily by induction, but unless I'm missing something obvious, I don't see how the trace fits in. Can someone explain why this polynomial identity is true? Many thanks.


If $M$ is a module over $R$, any endomorphism $\phi:M\to M$ induces an endomorphism $\Lambda^r \phi:\Lambda^rM\to \Lambda^rM$.
If $M$ is free with basis $e_1,\dots,e_n$, then $\phi$ has a matrix $A=(a_{ij})$ in this basis.
The module $\Lambda^rM$ is also free, with basis $(e_H)_{H\in\mathcal H}$, where $\mathcal H$ is the set of strictly increasing sequences $H=(1\leq i_1\lt \dots\lt i_r\leq n)$ and $e_H=e_{i_1}\wedge \dots\wedge e_{i_r}$.
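For instance, if $n=3$ and $r=2$, then $\mathcal H=\{(1,2),(1,3),(2,3)\}$ and $\Lambda^2M$ is free of rank $3$ with basis
$$ e_1\wedge e_2,\qquad e_1\wedge e_3,\qquad e_2\wedge e_3 ; $$
in general $\Lambda^rM$ has rank $\binom{n}{r}$.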

The key point is that, with respect to this basis, the linear map $\Lambda^r \phi$ has a matrix $\Lambda^r A=B=(b_{H,K})$ whose entries $b_{H,K}$ can be computed explicitly:
the result is $b_{H,K}=\operatorname{det}(A_{H,K})$, the minor obtained by extracting from $A$ the rows numbered by $H$ and the columns numbered by $K$.
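For instance, if $n=3$, $r=2$, $H=(1,2)$ and $K=(1,3)$, then
$$ b_{H,K}=\det\begin{pmatrix}a_{11}&a_{13}\\ a_{21}&a_{23}\end{pmatrix}=a_{11}a_{23}-a_{13}a_{21}, $$
which is exactly the coefficient of $e_1\wedge e_2$ in $\Lambda^2\phi\,(e_1\wedge e_3)=\phi(e_1)\wedge\phi(e_3)$.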
Hence we have the formula at the heart of the answer to your question $$\operatorname{Tr}(\Lambda^r \phi) =\sum_{|H|=r} b_{H,H}=\sum_{|H|=r} \operatorname{det}(A_{H,H}),$$ where $H$ runs over the increasing sequences of length $r$. Combining this with the classical fact that the coefficient of $X^{n-r}$ in $\det(X\cdot 1_n-A)$ is $(-1)^r$ times the sum of the $r\times r$ principal minors of $A$ (expand the determinant multilinearly in the columns of $X\cdot 1_n-A$), we obtain the required formula for the characteristic polynomial of $\phi$: $$ \chi_\phi(X)= \operatorname{det}(X\cdot 1_n-A)=\sum_{r=0}^n (-1)^r \Big(\sum_{|H|=r} \operatorname{det}(A_{H,H})\Big)X^{n-r} =\sum_{r=0}^n (-1)^r \operatorname{Tr}(\Lambda^r\phi)\, X^{n-r}. $$
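If you want to check this trace/principal-minor identity on a concrete matrix, here is a small sketch using SymPy (the particular $4\times 4$ integer matrix is just an arbitrary test case):

```python
from itertools import combinations
import sympy as sp

n = 4
# An arbitrary integer test matrix; any square matrix would do.
A = sp.Matrix([[2, -1,  0,  3],
               [1,  4,  2, -2],
               [0,  5, -3,  1],
               [2,  0,  1,  1]])
X = sp.symbols('X')

# Characteristic polynomial det(X*1_n - A), expanded in X.
chi = (X * sp.eye(n) - A).det().expand()

for r in range(n + 1):
    if r == 0:
        trace_wedge = 1  # Lambda^0(phi) is the identity of R, so its trace is 1
    else:
        # Tr(Lambda^r A) = sum of the r x r principal minors det(A_{H,H})
        trace_wedge = sum(A.extract(list(H), list(H)).det()
                          for H in combinations(range(n), r))
    # Coefficient of X^(n-r) in chi should be (-1)^r * Tr(Lambda^r A).
    assert chi.coeff(X, n - r) == (-1) ** r * trace_wedge
```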

Remark
The formula $b_{H,K}=\det(A_{H,K})$ giving the matrix of the exterior power of an endomorphism is very useful, for example in the study of Plücker embeddings and Grassmannians, and is a modern avatar of the venerable Laplace expansion of determinants.
It is, in my opinion, underappreciated, and the very notion of the exterior power $\Lambda^r A$ of a matrix $A$ is rarely mentioned.
[Amusingly, the notion of the tensor product of matrices, a.k.a. the Kronecker product, seems to have made a comeback thanks to quantum computation and quantum information.]


This business about working over a commutative ring $R$ is a red herring. Ultimately this is a collection of $n+1$ polynomial identities (one for each coefficient) in $n^2$ variables $x_{ij}$ over the integers; that is, it suffices to prove this identity over $\mathbb{Z}[x_{ij}]$ as an equality of integer polynomials. But two integer polynomials are equal abstractly if and only if they're equal, say, when the $x_{ij}$ are set to arbitrary complex numbers. So it actually suffices to prove the identity over a specific algebraically closed field of characteristic zero such as $\mathbb{C}$ to prove it in general.

At this point you can take any proof you like that works over $\mathbb{C}$. Here's one:

  • The identity is obvious for diagonal matrices (the computation is spelled out just after this list). Since the identity is conjugation-invariant, it follows for all diagonalizable matrices.
  • The diagonalizable matrices are dense and polynomials are continuous (alternatively: the diagonalizable matrices are Zariski-dense and polynomials are Zariski-continuous), so the identity is true for all matrices.
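For the first point, write $A=\operatorname{diag}(\lambda_1,\dots,\lambda_n)$; then
$$ \det(x\cdot 1_n-A)=\prod_{i=1}^n(x-\lambda_i)=\sum_{r=0}^n(-1)^r e_r(\lambda_1,\dots,\lambda_n)\,x^{n-r}, $$
where $e_r$ is the $r$-th elementary symmetric polynomial, while $\Lambda^r A$ acts diagonally on the basis vectors $e_{i_1}\wedge\dots\wedge e_{i_r}$ with eigenvalues $\lambda_{i_1}\cdots\lambda_{i_r}$, so $\operatorname{Tr}(\Lambda^r A)=e_r(\lambda_1,\dots,\lambda_n)$ as well.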

You didn't just ask for a proof, though: you asked for an explanation. I addressed this question in another math.SE answer. Briefly, one can think of the RHS of your identity as the "trace," in an appropriate sense, of the action of a linear transformation on the exterior algebra, and then the result follows for diagonalizable matrices by an observation about how the exterior algebra functor behaves on direct sums.
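Explicitly, the exterior algebra takes direct sums to tensor products: $\Lambda^r(V\oplus W)\cong\bigoplus_{p+q=r}\Lambda^pV\otimes\Lambda^qW$. Decomposing $\mathbb{C}^n$ into eigenlines of a diagonalizable $\phi$ with eigenvalues $\lambda_1,\dots,\lambda_n$ then gives $\operatorname{Tr}(\Lambda^r\phi)=e_r(\lambda_1,\dots,\lambda_n)$, which is, up to the sign $(-1)^r$, the coefficient of $x^{n-r}$ in $\prod_i(x-\lambda_i)$.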