How to prove that the normalizer of diagonal matrices in $GL_n$ is the subgroup of generalized permutation matrices?
I'm trying to prove that the normalizer $N(T)$ of the subgroup $T\subset GL_n$ of diagonal matrices is the subgroup $P\subset GL_n$ of generalized permutation matrices. I guess my biggest problem is that I don't really know how diagonal and permutation matrices (fail to) commute. It is not true that $DM=MD$ when $D\in T$ and $M\in P$, since multiplying by a permutation matrix on one side permutes rows while on the other side it permutes columns, but sometimes it seems like you can do something close to it.
So far, I have proved that $P\subset N(T)$ in the following way. Let $M_\sigma\in P$. Then $M_\sigma=VS_\sigma$, with $V\in T$ and $S_\sigma$ a permutation matrix. So $M_\sigma D M_\sigma^{-1}=VS_\sigma D S_\sigma^T V^{-1}$, and since conjugating a diagonal matrix by the diagonal matrix $V$ leaves it diagonal, it suffices to prove that $S_\sigma D S_\sigma^T$ is diagonal. This is true since, writing matrices by their columns and letting $x_1,\dots,x_n$ be the diagonal entries of $D$, $S_\sigma D S_\sigma^T=(x_1 e_{\sigma(1)} \,\cdots\, x_n e_{\sigma(n)}) (e_{\sigma^{-1}(1)} \,\cdots\, e_{\sigma^{-1}(n)})=(x_{\sigma^{-1}(1)}e_1 \,\cdots\, x_{\sigma^{-1}(n)}e_n)$, where $e_i$ are the standard basis vectors. But even this is hopelessly cumbersome to write out. I'm trying to find a way to see what the product $S_\sigma D S_\sigma^T$ is without writing everything out in column vectors.
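One entrywise way to see it, assuming the convention $(S_\sigma)_{ij}=\delta_{i,\sigma(j)}$ (so that $S_\sigma e_j=e_{\sigma(j)}$, as in the column computation above), is
$$(S_\sigma D S_\sigma^T)_{ij}=\sum_{k,\ell}(S_\sigma)_{ik}\,D_{k\ell}\,(S_\sigma)_{j\ell}=\sum_{k}\delta_{i,\sigma(k)}\,x_k\,\delta_{j,\sigma(k)}=\delta_{ij}\,x_{\sigma^{-1}(i)},$$
which recovers $S_\sigma D S_\sigma^T=\operatorname{diag}(x_{\sigma^{-1}(1)},\dots,x_{\sigma^{-1}(n)})$ without any column bookkeeping.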
For the other way around I don't really know what to do. I'm having a hard time rewriting matrix products in a useful way. Perhaps there is a way of proving this using something completely different? Maybe you can prove it using $N(T)/T\simeq S_n$, but that isomorphism is actually what I want to use this result for, so I can't assume it. When I simply write down what I know about a matrix $M\in N(T)$, I just get a big system of equations that isn't really handy.
Solution 1:
Let $S\in N(T)$. Then $SDS^{-1}$ is diagonal for each diagonal matrix $D$. Now, conjugation preserves the spectrum, which for a diagonal matrix is exactly the multiset of diagonal entries. So the diagonal of $SDS^{-1}$ has to be the same as the diagonal of $D$ up to a permutation. From this it's not hard to set up the equations to see that $S$ has to have a unique nonzero entry per row and column, i.e. $S$ is a generalized permutation matrix.
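In more detail, for any invertible $S$ the characteristic polynomial is unchanged by conjugation:
$$\det\big(tI-SDS^{-1}\big)=\det\big(S(tI-D)S^{-1}\big)=\det(tI-D)=\prod_{i=1}^{n}(t-D_{ii}),$$
and for a diagonal matrix the roots of this polynomial are precisely the diagonal entries, so the diagonals of $D$ and $SDS^{-1}$ agree as multisets.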
Concretely, let us write $\{E_{kj}\}$ for the canonical matrix units (i.e. $E_{kj}$ is the matrix with a $1$ in the $(k,j)$ position and zeroes elsewhere). Let $D=E_{11}+2E_{22}+\cdots+n E_{nn}$ (any other diagonal matrix with pairwise distinct diagonal entries would do). From the first paragraph, we know that $SDS^{-1}$ equals $W=\sigma(1)E_{11}+\cdots+\sigma(n)E_{nn}$ for some permutation $\sigma$. Since $SD=WS$, we get that $$ S_{kj}(j-\sigma(k))=0. $$ For each $j\ne\sigma(k)$ we have $S_{kj}=0$, so the only entry of the $k^{\rm th}$ row of $S$ that can be nonzero is $S_{k,\sigma(k)}$; since $S$ is invertible, it is indeed nonzero. In other words, each row of $S$ contains a single nonzero entry, and because $\sigma$ is a bijection these entries lie in distinct columns, so $S$ is a generalized permutation matrix.
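Spelling out where the displayed equation comes from: comparing the $(k,j)$ entries of $SD$ and $WS$ gives
$$(SD)_{kj}=\sum_{\ell}S_{k\ell}D_{\ell j}=j\,S_{kj},\qquad (WS)_{kj}=\sum_{\ell}W_{k\ell}S_{\ell j}=\sigma(k)\,S_{kj},$$
and subtracting the two yields $S_{kj}\,(j-\sigma(k))=0$.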
Solution 2:
The eigenvectors common to all elements of $T$ are exactly the nonzero multiples of the standard basis vectors. Any element of $N(T)$ must permute these common eigenvectors among each other: if $P\in N(T)$ and $v$ is a common eigenvector of $T$, then so is $P\cdot v$, since $t(Pv)=P\,(P^{-1}tP)\,v$ for every $t\in T$. This means every column of $P$ must have a single nonzero entry, and of course these entries have to be in distinct rows as well. Hence $N(T)$ is contained in the set of generalised permutation matrices. The reverse inclusion follows from a simple computation showing that permutation matrices normalise $T$ (as of course do elements of $T$ itself). Alternatively, show it using the fact that $t\in T$ whenever all standard basis vectors are eigenvectors of $t$ (nearly the converse of the property used at the beginning).
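One way to justify the opening claim: pick any $D=\operatorname{diag}(d_1,\dots,d_n)\in T$ with pairwise distinct $d_i$ (for instance $\operatorname{diag}(1,2,\dots,n)$ over $\mathbb{R}$ or $\mathbb{C}$). If $Dv=\lambda v$ with $v=\sum_i v_i e_i$, then
$$(d_i-\lambda)\,v_i=0\quad\text{for every }i,$$
so at most one coordinate of $v$ is nonzero, i.e. $v$ is a multiple of some $e_i$; conversely, every nonzero multiple of a standard basis vector is clearly an eigenvector of every diagonal matrix.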
Solution 3:
If $P$ is the permutation matrix corresponding to permutation $\pi$, i.e. $P_{i,\pi(i)} = 1$ for each $i$, and $D$ is a diagonal matrix, then $(PDP^{-1})_{ij} = \sum_k \sum_\ell P_{ik} D_{k\ell} P^{-1}_{\ell j}$. For a term to be nonzero, you need $k = \pi(i)$, $k=\ell$ and $\ell = \pi(j)$, so $\ldots$
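Spelling out the rest of the hint: the three conditions force $k=\pi(i)$, $\ell=\pi(j)$ and $k=\ell$, so a nonzero term survives only when $\pi(i)=\pi(j)$, i.e. $i=j$. Hence
$$(PDP^{-1})_{ij}=\begin{cases}D_{\pi(i)\,\pi(i)}&\text{if }i=j,\\ 0&\text{otherwise,}\end{cases}$$
so $PDP^{-1}$ is again diagonal, with the diagonal entries of $D$ permuted, which shows that permutation matrices lie in $N(T)$.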