If a matrix commutes with a set of other matrices, what conclusions can be drawn?

I have a very specific example from a book on quantum mechanics by Schwabl, in which he states that an object which commutes with all four gamma matrices,

$$ \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & -1 & 0\\ 0 & 0 & 0 & -1\\ \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0\\ 0 & -1 & 0 & 0\\ -1 & 0 & 0 & 0\\ \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 & -i\\ 0 & 0 & i & 0\\ 0 & i & 0 & 0\\ -i & 0 & 0 & 0\\ \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 & 0\\ 0 & 0 & 0 & -1\\ -1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ \end{pmatrix}, $$ must be a multiple of the unit matrix. These matrices don't seem to span all $4 \times 4$ matrices, so why would this be the case? I have asked around, but no one seems to know the answer.


Solution 1:

Call your four matrices $A,B,C,D$ respectively. While they indeed don't span $M_4(\mathbb C)$, the point is that the algebra they generate is the whole matrix space. So, any matrix that commutes with $A,B,C,D$ must in turn commute with all members of $M_4(\mathbb C)$. In fact, if we put $X=\frac{B\,(AC-C)\,A}{2i}$ and $Y=\frac{B\,(AC+C)\,A}{2i}$, the canonical basis of $M_4(\mathbb C)$ can be obtained as polynomials in $A,B,C,D$: \begin{align*} E_{11}&=\frac12(X^2+X),&E_{14}&=E_{11}B,&E_{13}&=E_{11}D,\\ E_{22}&=\frac12(X^2-X),&E_{23}&=E_{22}B,&E_{24}&=-E_{22}D,\\ E_{33}&=\frac12(Y^2+Y),&E_{32}&=-E_{33}B,&E_{31}&=-E_{33}D,\\ E_{44}&=\frac12(Y^2-Y),&E_{41}&=-E_{44}B,&E_{42}&=E_{44}D,\\ E_{12}&=E_{13}E_{32},\\ E_{21}&=E_{24}E_{41},\\ E_{34}&=E_{31}E_{14},\\ E_{43}&=E_{42}E_{23}. \end{align*}

Solution 2:

Consider the matrix $$ A=\begin{bmatrix} a_{11}&a_{12}&a_{13}&a_{14}\\ a_{21}&a_{22}&a_{23}&a_{24}\\ a_{31}&a_{32}&a_{33}&a_{34}\\ a_{41}&a_{42}&a_{43}&a_{44} \end{bmatrix}. $$

If $A$ commutes with your first matrix, then $$ \begin{bmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&-1&0\\ 0&0&0&-1 \end{bmatrix}\begin{bmatrix} a_{11}&a_{12}&a_{13}&a_{14}\\ a_{21}&a_{22}&a_{23}&a_{24}\\ a_{31}&a_{32}&a_{33}&a_{34}\\ a_{41}&a_{42}&a_{43}&a_{44} \end{bmatrix}= \begin{bmatrix} a_{11}&a_{12}&a_{13}&a_{14}\\ a_{21}&a_{22}&a_{23}&a_{24}\\ a_{31}&a_{32}&a_{33}&a_{34}\\ a_{41}&a_{42}&a_{43}&a_{44} \end{bmatrix} \begin{bmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&-1&0\\ 0&0&0&-1 \end{bmatrix} $$ In other words, $$ \begin{bmatrix} a_{11}&a_{12}&a_{13}&a_{14}\\ a_{21}&a_{22}&a_{23}&a_{24}\\ -a_{31}&-a_{32}&-a_{33}&-a_{34}\\ -a_{41}&-a_{42}&-a_{43}&-a_{44} \end{bmatrix}= \begin{bmatrix} a_{11}&a_{12}&-a_{13}&-a_{14}\\ a_{21}&a_{22}&-a_{23}&-a_{24}\\ a_{31}&a_{32}&-a_{33}&-a_{34}\\ a_{41}&a_{42}&-a_{43}&-a_{44} \end{bmatrix} $$ This tells you that $a_{13}=a_{14}=a_{23}=a_{24}=a_{31}=a_{32}=a_{41}=a_{42}=0$. Already, $A$ is significantly simplified. The second matrix then gives $a_{11}=a_{44}$, $a_{12}=a_{43}$, $a_{21}=a_{34}$, and $a_{22}=a_{33}$. Then, keep going.

Solution 3:

For the example you gave, the conclusion follows from Schur's Lemma since the gamma matrices form an irreducible representation of the complexification of the Clifford algebra $Cl_{1,3}(\mathbf{R})_{\mathbf{C}}$. This comes up when considering irreducible representations of the Lorentz group (specifically the spin representation), which is central to discussions of spin in physics.

Solution 4:

For two matrices to commute, it is necessary that each preserves the eigenspaces of the other (that is, it cannot map part of one eigenspace into a different eigenspace). Multiples of the identity matrix have all of space as a single eigenspace, so they commute with every matrix.

If $A$ and $B$ commute, and $x$ is an eigenvector of $A$ with eigenvalue $\lambda$, then $$ ABx=BAx=B\lambda x=\lambda Bx $$ and thus $Bx$ must also be an eigenvector of $A$ with the same eigenvalue, or the zero vector.

Suppose that $v=(a,b,c,d)$ is an eigenvector of our matrix. Then we must find, in the same eigenspace, $(a,b,-c,-d)$, $(d,c,-b,-a)$, $(-d,c,b,-a)$, and $(c,-d,-a,b)$ (I have dropped the $i$ from the third matrix, since it is just an overall constant).

Adding and subtracting the middle two, we can also see that $(0,c,0,-a)$ must be in the eigenspace, as must $(d,0,-b,0)$. Since $v$ is non-zero, at least one of these is non-zero. Let's assume that $(0,c,0,-a)$ is non-zero.

Then we can also see that $(0,c,0,a)$ is in the eigenspace (apply the first matrix), and hence so are $(0,c,0,0)$ and $(0,0,0,a)$. Let's assume that $(0,0,0,a)$ is non-zero, so $(0,0,0,1)$ is in the eigenspace. Applying the second matrix to it gives $(1,0,0,0)$; applying the last matrix gives $(0,-1,0,0)$; and applying the last matrix to $(1,0,0,0)$ gives $(0,0,-1,0)$. Up to sign, all four standard basis vectors lie in the eigenspace.

A similar analysis works if you assume $b$, $c$, or $d$ is non-zero.

Therefore, the eigenspace is the whole 4D space, and every vector shares the same eigenvalue. This tells us that our matrix must be the identity matrix multiplied by that eigenvalue.