Generalization of Cayley-Hamilton
I'm having trouble following a proof of this generalization of the Cayley-Hamilton theorem:
Suppose that $M$ is an $A$-module generated by $n$ elements, and that $\varphi \in \operatorname{Hom}_A(M,M)$; let $I$ be an ideal of $A$ such that $\varphi(M) \subset IM$. Then there is a relation of the form $$(**)\quad \varphi^n + a_1 \varphi^{n-1} + \cdots + a_{n-1}\varphi + a_n = 0,$$ with $a_i \in I^i$ for $1 \leqslant i \leqslant n$ (where both sides are considered as endomorphisms of $M$).
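(For orientation: taking $I = A$ and $M = A^n$ recovers the classical Cayley-Hamilton theorem. Here is a minimal numerical sanity check of the statement — my own illustration, not part of the theorem — with $A = \mathbb{Z}$, $M = \mathbb{Z}^2$, $I = 2\mathbb{Z}$, and $\varphi$ represented by a matrix with all entries in $I$, so that $\varphi(M) \subset IM$:)

```python
# Sanity check: A = Z, M = Z^2, I = 2Z, phi given (in a basis w1, w2)
# by a matrix with entries in I. The example matrix below is arbitrary.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

phi = [[2, 4], [6, 8]]            # entries a_ij in I = 2Z

a1 = -(phi[0][0] + phi[1][1])     # -trace = -10, lies in I = 2Z
a2 = phi[0][0]*phi[1][1] - phi[0][1]*phi[1][0]   # det = -8, lies in I^2 = 4Z

# phi^2 + a1*phi + a2*Id should be the zero endomorphism, as in (**):
phi2 = matmul(phi, phi)
result = [[phi2[i][j] + a1*phi[i][j] + a2*(i == j) for j in range(2)]
          for i in range(2)]
print(result)   # [[0, 0], [0, 0]]
```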
This is theorem 2.1 in Matsumura's Commutative Ring Theory. The proof there goes as follows:
Let $M = A\omega_1 + \dots + A\omega_n$; by the assumption $\varphi(M) \subset IM$ there exist $a_{ij} \in I$ such that $\varphi(\omega_i) = \sum_{j=1}^n a_{ij}\omega_j$. This can be rewritten $$\sum_{j=1}^n(\varphi\delta_{ij} - a_{ij})\omega_j = 0 \quad (\text{for }\ 1\leqslant i \leqslant n),$$ where $\delta_{ij}$ is the Kronecker symbol. The coefficients of this system of linear equations can be viewed as a square matrix $(\varphi\delta_{ij} - a_{ij})$ of elements of $A'[\varphi]$, the commutative subring of the endomorphism ring $E$ of $M$ generated by the image $A'$ of $A$ together with $\varphi$; let $b_{ij}$ denote its $(i,j)$th cofactor, and $d$ its determinant. By multiplying the above equation through by $b_{ik}$ and summing over $i$, we get $d\omega_k = 0$ for $1 \leqslant k \leqslant n$. Hence $d\cdot M = 0$, so that $d = 0$ as an element of $E$. Expanding the determinant $d$ gives a relation of the form $(**)$.
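(In matrix notation, the displayed system of $n$ relations can be packaged into a single equation over $A'[\varphi]$, with $X$ denoting the matrix $(\varphi\delta_{ij} - a_{ij})$:)

```latex
X \begin{pmatrix} \omega_1 \\ \vdots \\ \omega_n \end{pmatrix} = 0,
\qquad X := \varphi I_n - (a_{ij}).
```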
I would like to understand this particular proof, but I'm having trouble concluding $d\omega_k = 0$ for $1 \leqslant k \leqslant n$ from multiplying the above equation by $b_{ik}$ and summing over $i$. It's clear that the right-hand side remains zero, but I'm having trouble expanding the left-hand side into a useful form. I'm pretty sure I have to use the fact that $d = \sum_{i=1}^n b_{ik} (\varphi\delta_{ik} - a_{ik})$, the cofactor expansion of the determinant along the $k$th column, but I can only seem to conclude $d\omega_k - d\omega_k = 0$, which is not very enlightening.
Thanks in advance.
Solution 1:
Now that I have thought about it longer, I can see it clearly. One thing that confused me is that the matrix of $\varphi$ is the transpose of $(a_{ij})$, rather than $(a_{ij})$ itself; this is evident from the identity $\varphi(\omega_i) = \sum_{j=1}^n a_{ij}\omega_j$.
Let $c_{ij} = \varphi\delta_{ij} - a_{ij}$ and put $v_i = \sum_{j=1}^n c_{ij}\omega_j$, so that $v_i = 0$ for each $i$. Proposition: $u_k := \sum_{i=1}^n b_{ik}v_i = d\omega_k$; since each $v_i = 0$, this gives $d\omega_k = 0$.

Proof: The coefficient of $\omega_j$ in $u_k$ is $\sum_{i=1}^n b_{ik} c_{ij}$, which equals $d$ when $j = k$ and zero otherwise. When $j = k$ it is exactly the cofactor expansion of the determinant along the $k$th column. When $j \neq k$ it is the cofactor expansion of the determinant of the matrix obtained by replacing the $k$th column with the $j$th; since that matrix has two equal columns, its determinant is zero. Hence $u_k = d\omega_k$.
In terms of the adjugate matrix this can be stated as $X\operatorname{adj}(X) = \operatorname{adj}(X)X = (\det X)I$, where $X = (c_{ij})$.
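(The adjugate identity is easy to check numerically. Here is a small illustration of my own for a $2\times 2$ integer matrix; in the proof the same identity applies with $X = (c_{ij})$ over the commutative ring $A'[\varphi]$, since the cofactor argument uses nothing beyond commutativity:)

```python
# Check adj(X)*X = X*adj(X) = (det X)*I for an arbitrary 2x2 example.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

X = [[3, 7], [2, 5]]
det = X[0][0]*X[1][1] - X[0][1]*X[1][0]            # 3*5 - 7*2 = 1
adj = [[X[1][1], -X[0][1]], [-X[1][0], X[0][0]]]   # adjugate of a 2x2 matrix

print(matmul(adj, X))   # [[1, 0], [0, 1]]  ==  det * I
print(matmul(X, adj))   # [[1, 0], [0, 1]]
```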