Symmetric powers and alternating powers are irreducible modules over the general linear Lie algebra $\mathfrak{gl}(n)$

Solution 1:

Fix a basis $(x_1,\dots,x_n)$ of $V$. A basis of $S^k V$ is given by the monomials $m_n=\prod x_i^{n_i}$ where $n=(n_i)$ ranges over $n$-tuples with $\sum n_i=k$. The $m_n$ have pairwise distinct weights (*) for the action of the group of diagonal matrices. In particular, every invariant subspace $W$ has a basis made up of such monomials, and whenever some element of $W$ has a nonzero coordinate on some monomial, that monomial belongs to $W$.
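For instance, with $n=2$ and $k=2$, the diagonal matrix $\operatorname{diag}(t_1,t_2)$ acts on $x_1^2$, $x_1x_2$, $x_2^2$ by $t_1^2$, $t_1t_2$, $t_2^2$ respectively; the corresponding weights $(2,0)$, $(1,1)$, $(0,2)$ are pairwise distinct.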

Now start with a monomial $m_n$ in $W$. If $n$ is not supported on a single index, then by applying a suitable unipotent element we can increase one of the exponents. Iterating, we see that $W$ contains a monomial of the form $x_i^k$, and using permutation matrices, it contains all such monomials.
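For instance, if $x_1x_2^2\in W$, applying the unipotent matrix sending $x_2$ to $x_2+x_1$ (and fixing the other basis vectors) gives $x_1(x_2+x_1)^2 = x_1x_2^2 + 2x_1^2x_2 + x_1^3 \in W$; since $W$ is spanned by monomials, $x_1^2x_2\in W$, and the exponent of $x_1$ has increased.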

Again using unipotent elements, we see that $W$ contains all $x_i^\ell x_j^{k-\ell}$, and continuing in this way we eventually get all monomials. (Argue by induction on $n$: by induction we have all monomials with $n_n=0$; then we get $m=\prod_{i<n}x_i^{n_i}\cdot x_n^s$ by induction on $s$: apply a unipotent matrix to the monomial $mx_1/x_n$, which lies in $W$ by the inductive hypothesis.)
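If it helps, here is a quick sympy sanity check of this spreading step in the smallest case $n=2$, $k=3$; the script is only an illustration and the variable names are my own.

```python
# Sanity check (sympy): the "spreading" step for n = 2, k = 3.
# Applying the unipotent substitution x1 -> x1 + x2 to the monomial x1^3 gives
# a polynomial in which every monomial x1^l * x2^(3-l) occurs, so once x1^3
# lies in an invariant subspace W spanned by monomials, all of S^3 V does.
from sympy import symbols, Poly

x1, x2 = symbols('x1 x2')

image = (x1 + x2)**3                       # image of x1^3 under x1 -> x1 + x2
exponents = Poly(image, x1, x2).monoms()   # exponent vectors of its monomials
print(exponents)                           # [(3, 0), (2, 1), (1, 2), (0, 3)]
```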

The case of $\Lambda^k V$ starts similarly and finishes even more easily: $W$ is generated by monomials (with the restriction $n_i\le 1$ now). Then given a single monomial, we get all others using permutation matrices.
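For instance, if $x_1\wedge x_2\in W\subset\Lambda^2V$, the permutation matrix exchanging $x_2$ and $x_3$ sends it to $x_1\wedge x_3$, and the one exchanging $x_1$ and $x_2$ then sends that to $x_2\wedge x_3$; in this way every monomial $x_i\wedge x_j$ is reached up to sign, which is enough since $W$ is a subspace.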


(*) If you're not familiar with weights, here's a way to avoid them (even if it's the same principle). Choose an algebraically free family $(t_1,\dots,t_n)$ and consider the diagonal matrix $d$ with diagonal $(t_1,t_2,\dots,t_n)$. Its action on $S^k V$ can be diagonalized, and $\prod x_i^{n_i}$ is an eigenvector for the eigenvalue $\prod_{i=1}^nt_i^{n_i}$. The freeness assumption implies that these eigenvalues are pairwise distinct (actually it would have been enough to assume that the $t_i$ are multiplicatively free, i.e., $\mathbf{Z}$-linearly independent in the multiplicative group, so picking $t_i$ to be the $i$-th prime number works too). The argument also works for $\Lambda^k V$, since the eigenvalues are the same, now with the restriction $n_i\le 1$. Since the action of $d$ is diagonal with eigenspaces of dimension 1, any $d$-invariant subspace is a sum of some of those eigenspaces.
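A tiny computational check of that parenthetical claim, with $t_i$ taken to be the first primes; the script and the choice $n=3$, $k=4$ are mine.

```python
# Check (sympy/python): with t_i the first n primes, the eigenvalues
# prod_i t_i^{n_i} on S^k V are pairwise distinct (unique factorization).
# The sizes n = 3, k = 4 are an arbitrary choice.
from itertools import combinations_with_replacement
from sympy import prime

n, k = 3, 4
t = [prime(i + 1) for i in range(n)]       # t = [2, 3, 5]

# a degree-k monomial <-> a multiset of k basis indices
eigenvalues = []
for indices in combinations_with_replacement(range(n), k):
    value = 1
    for i in indices:
        value *= t[i]
    eigenvalues.append(value)

assert len(set(eigenvalues)) == len(eigenvalues)    # pairwise distinct
print(len(eigenvalues), "eigenvalues, all distinct")
```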

Solution 2:

This is basically the same as YCor's answer, which I just read and learned from.

I want to express the key idea in a more detailed way, since I had to think about it for an hour. Maybe this will help others.

The main lemma is this:

If $W \subset S^k(V)$ is $GL(V)$-invariant, and $f = \sum_I a_I m_I \in W$, for $m_I$ distinct monomials and $a_I$ nonzero scalars, then each $m_I \in W$. (I think the same argument works for $\wedge^k V$. It isn't true for $T^k V$.)

Part I (Special case): This statement is reminiscent of the scaling trick one uses to show that $\mathbb{C}^*$-invariant subspaces $W$ of $R[x]$ are generated by their homogeneous parts, where $R$ is a $\mathbb{C}$-algebra.

The $\mathbb{C}^*$ action is by scaling the variable $x$. I will review that argument.

(Commentary: It will be convenient to have proven this for coefficients in a $\mathbb{C}$-algebra. I see no reason why $R$ has to be commutative, so this also works for the wedge and tensor products. I saw this argument originally for $R = \mathbb{C}$, when studying projective space. Perhaps this version is familiar.)

In that case, one observes that if $f(x) = f_0(x) + \ldots + f_d(x)$ is the decomposition of $f$ into homogeneous parts, then $f(\alpha x) = f_0 + \alpha f_1 + \ldots + \alpha^d f_d$. (More precisely, $f_s(x)= a_s x^s$, for $a_s \in R$.) Evaluating at $\alpha = 1, \alpha_1, \ldots, \alpha_d$, we can write the resulting polynomials as the application of a Vandermonde matrix:

$(f(x), f(\alpha_1 x), \ldots , f(\alpha_d x))^T = \begin{bmatrix}1&1&\ldots &1\\1&\alpha_1& \ldots & \alpha_1^d \\ \vdots & \vdots & & \vdots \\ 1 & \alpha_d & \ldots & \alpha_d^d \end{bmatrix} (f_0, \ldots, f_d)^T$.

Since the field is infinite, we can choose the $\alpha_i$ to be distinct (and different from $1$), in which case this matrix is invertible by the formula for the determinant of a Vandermonde matrix. By inverting it, we can write each $f_i$ as a $\mathbb{C}$-linear combination of the polynomials $f(x), f(\alpha_1 x), \ldots, f(\alpha_d x) \in W$.
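Here is a small sympy sketch of this extraction, under my own choice of example polynomial and scalars $\alpha_i$; it just inverts the Vandermonde system above to recover the pieces $f_j x^j$.

```python
# A sympy sketch of Part I: recover the pieces f_j * x^j of f from the sampled
# polynomials f(alpha * x) by inverting the Vandermonde matrix above.
# The example polynomial (coefficients in R = C[y]) and the alphas are
# choices of mine, not part of the original argument.
from sympy import symbols, Matrix, expand

x, y = symbols('x y')
f = 3 + y*x + (y**2 + 1)*x**2              # f_0 + f_1*x + f_2*x^2, f_j in C[y]
d = 2

alphas = [1, 2, 3]                         # d + 1 distinct scalars
samples = Matrix([expand(f.subs(x, a*x)) for a in alphas])

V = Matrix([[a**j for j in range(d + 1)] for a in alphas])  # Vandermonde rows
pieces = V.inv() * samples                 # solves for (f_0, f_1*x, f_2*x^2)

print([expand(p) for p in pieces])         # [3, x*y, x**2*y**2 + x**2]
```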

Part II: (Proof of Lemma)

For any $f \in W$, with $x_0, \ldots, x_n$ a basis for $V$, we write $R = \mathbb{C} [x_1, \ldots, x_n]$. Then $f = g_0 + x_0 g_1 + \ldots + x_0^k g_k$, with $g_i \in R$. We have seen that each $x_0^i g_i \in W$ by the argument in Part I, where the $\mathbb{C}^*$-action scales $x_0$ and fixes the other coordinates. Relabelling coordinates and arguing by induction, each monomial appearing in $x_0^i g_i$ is in $W$ (I'm skipping some details here -- the moral, I think, is that we next show $x_0^{k_0} x_1^{k_1} g_{k_0,k_1} \in W$, and continue pulling powers of variables out. We use commutativity or anti-commutativity at this step.) Thus each monomial appearing in $f$ is in $W$.
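For concreteness, a small example of the two-step separation: take $f = x_0 x_1^2 + x_0 x_2^2 + x_0^2 x_1 \in S^3(V)$. Scaling $x_0$ alone and applying Part I separates $f$ into the pieces $x_0(x_1^2+x_2^2)$ and $x_0^2 x_1$, both in $W$; scaling $x_1$ alone then splits $x_0(x_1^2+x_2^2)$ into $x_0 x_1^2$ and $x_0 x_2^2$. After finitely many such steps only single monomials remain.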

Rmk: Perhaps there is a multivariate version of the Vandermonde matrix that one can use. I think it is easier to argue by induction.

Now one can proceed essentially as in YCor's answer. For any invariant subspace $W$, the lemma above shows that $W$ is spanned by monomials. In particular, it contains a monomial $x_0^{k_0} \cdots x_n^{k_n}$ with $\sum k_i = k$. Apply the upper triangular matrix with all entries equal to one: expanding, we get a polynomial containing the term $x_0^k$ with nonzero coefficient, so $x_0^k \in W$ by the lemma. Now apply a transformation taking $x_0$ to $x_0 + \ldots + x_n$, so that $(x_0 + \ldots + x_n)^k \in W$. But the monomials appearing in $(x_0 + \ldots + x_n)^k$ are all the monomials of degree $k$, so by the lemma every monomial is in $W$. Thus $W = S^k(V)$.
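To spell out that last claim (in characteristic zero):

$(x_0 + \ldots + x_n)^k = \sum_{k_0+\cdots+k_n=k} \binom{k}{k_0,\ldots,k_n}\, x_0^{k_0}\cdots x_n^{k_n}$,

and every multinomial coefficient is a positive integer, so every degree-$k$ monomial appears with nonzero coefficient.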