Hermitian (?) representations of $su(2)$

I'm looking for the irreducible representations of $su(2)$. To be clear about my setting:

  • $su(2)$ is the real vector space of skew-hermitian traceless matrices.
  • I'm searching for the irreducible representations on a complex vector space.

The standard approach is to start from the angular momentum algebra $[J_i,J_j]=i\sum_k\epsilon_{ijk}J_k$.

Then one defines the ladder operators, considers an eigenvector of $\rho(J_3)$ with the largest eigenvalue, and so on.

Here are a few questions that are (I think) equivalent:

  • Why does $\rho(J_3)$ have at least one eigenvector with a real eigenvalue?
  • Why should $\rho(J_i)$ be hermitian? More precisely, I know that the "defining" representation is made up of hermitian matrices (the Pauli matrices). But why should every representation consist of hermitian matrices?
  • Can one build a scalar product on the representation space $V$ for which $\rho(J_i)$ is hermitian? I know that this can be done at the group level to make a group representation unitary. Is there an analogous result for Lie algebras?

If I can convince myself that $\rho(J_i)$ is hermitian, I'm done because of the spectral theorem for normal operators.

EDIT: If $(\rho,V)$ is a representation, we can choose a scalar product on $V$ such that the operators $\rho(J_i)$ are not hermitian. So my question becomes:

  • Is every representation of $su(2)$ equivalent to a hermitian one?
  • If a representation is given on $V$, can one define a scalar product on $V$ such that $\rho(J)$ is hermitian for every $J$ in the algebra?

Expected answer: yes. If $V$ is a representation space for the Lie algebra, it is also a representation space for the corresponding Lie group $G$. So, if the group is compact, we can define a scalar product on $V$ such that the group representation is unitary. For that scalar product, I expect the algebra representatives to be hermitian.

At the group level, see for example here on page 5.


Solution 1:

There is indeed a general statement that for each representation $\rho$ of a compact semisimple real Lie algebra $\mathfrak{g}$ on a finite-dimensional $\mathbb C$-vector space $V$, there exists a positive definite hermitian form on $V$ which is invariant with respect to the $\mathfrak g$-action; equivalently, with respect to such a form all matrices $\rho(x)$ ($x\in \mathfrak g$) are antihermitian. This is proven along the lines of your "Expected answer" e.g. in Bourbaki's volume on Lie Groups and Lie Algebras -- compare in particular vol. IX §1 no.1, where the statement is translated into the corresponding one about a $G$-invariant form for the corresponding Lie group $G$ (whose representing matrices are then unitary), and the existence of such a form, in turn, is proven by averaging over a Haar measure.
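To spell out the last step of that averaging argument (a sketch in my own notation, writing $\Pi$ for the group representation whose differential is $\rho$): starting from an arbitrary scalar product $\langle\cdot,\cdot\rangle$ on $V$, one sets

$$\langle u,v\rangle_G := \int_G \langle \Pi(g)u,\,\Pi(g)v\rangle \,dg,$$

which is $G$-invariant, i.e. every $\Pi(g)$ is unitary for it. Differentiating $\langle \Pi(e^{tx})u,\,\Pi(e^{tx})v\rangle_G = \langle u,v\rangle_G$ at $t=0$ gives $\langle \rho(x)u,v\rangle_G + \langle u,\rho(x)v\rangle_G = 0$, i.e. $\rho(x)$ is antihermitian with respect to $\langle\cdot,\cdot\rangle_G$ for every $x \in \mathfrak g$.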

Note that in physicists' notation, everything on the Lie algebra level is often multiplied through by the imaginary unit $i$, in which case one gets hermitian matrices instead. However, you say that for you, $su(2)$ consists of antihermitian matrices:

$$ su(2) = \lbrace \pmatrix{ai & b+ci\\-b+ci & -ai} : a,b,c \in \mathbb R \rbrace,$$

and since this already shows the statement for the defining representation on $V= \mathbb C^2$, we should stick with those. In the following I'd just like to point out that for this basic case $\mathfrak{g}=su(2)$, everything can be shown explicitly and more precisely.

Namely, the irreps of $su(2)$ you are interested in are in one-to-one correspondence with the irreps $\sigma$ of the complexification $su(2)\otimes \mathbb C$, which is $\simeq sl_2(\mathbb C)$, and the correspondence is given by just restricting such an irrep to $su(2) \subset sl_2(\mathbb C)$. Concretely, one commonly looks at the basis

$$ h=\pmatrix{1&0\\0&-1}, \quad x=\pmatrix{0&1\\0&0}, \quad y= \pmatrix{0&0\\-1&0}$$

of $sl_2(\mathbb C)$, works out how these act as matrices in the representation, and from those obtains e.g. the matrices corresponding to the basis of $su(2)$

$$ih =\pmatrix{i&0\\0&-i}, \quad x+y= \pmatrix{0&1\\-1&0}, \quad ix-iy=\pmatrix{0&i\\i&0}.$$
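As a quick sanity check (a minimal numpy sketch of my own, not part of the original argument), one can verify that these three matrices are traceless and antihermitian and that a sample commutator stays inside their real span:

```python
import numpy as np

# sl_2(C) basis as above
h = np.array([[1, 0], [0, -1]], dtype=complex)
x = np.array([[0, 1], [0, 0]], dtype=complex)
y = np.array([[0, 0], [-1, 0]], dtype=complex)

# corresponding basis of su(2): ih, x+y, ix-iy
su2_basis = [1j * h, x + y, 1j * (x - y)]

for m in su2_basis:
    assert abs(np.trace(m)) < 1e-12        # traceless
    assert np.allclose(m.conj().T, -m)     # antihermitian

# e.g. [ih, x+y] = 2 * (ix - iy), again in su(2)
comm = su2_basis[0] @ su2_basis[1] - su2_basis[1] @ su2_basis[0]
assert np.allclose(comm, 2 * su2_basis[2])
```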

Irreps of $sl_2(\mathbb C)$, in turn, are well known and should be listed in literally every set of notes or book about representations of Lie algebras: for each $n \ge 1$ there is, up to equivalence, exactly one such irrep $(\sigma_n, V_n \simeq \mathbb C^n)$ of dimension $n$; it is often given by explicitly defining operators $X, Y, H$ corresponding to $x,y,h$. These operators are rarely written down as matrices, but it's easy to do, and the whole point of the "weight decomposition" which these sources talk about is that there is a basis $v_1, ..., v_n$ of $V_n$ such that in this basis $h$ acts via (i.e. $\sigma_n(h)$ is given by)

$$H = \pmatrix{n-1&0&\cdots&0&0\\ 0&n-3&\cdots&0&0\\ 0&0&\ddots&0&0\\ 0&0&\cdots&3-n&0\\ 0&0&\cdots&0&1-n}.$$

In particular, when we restrict to $su(2)$, the matrix $iH$ (which is the one via which $ih \in su(2)$ acts) is already antihermitian:

$$iH = \pmatrix{(n-1) i&0&\cdots&0&0\\ 0&(n-3)i&\cdots&0&0\\ 0&0&\ddots&0&0\\ 0&0&\cdots&(3-n)i&0\\ 0&0&\cdots&0&(1-n)i}.$$

The other two operators $X$ and $Y$ may look different depending on the normalisation of the basis vectors. E.g. the way Bourbaki defines them (loc. cit. vol. VIII §1, first abstractly in no.2 and then with homogeneous polynomials in no.3), we have

$$X = \pmatrix{0&n-1&0&\cdots&0&0\\ 0&0&n-2&\cdots&0&0\\ 0&0&0&\cdots&0&0\\ 0&0&0&\ddots&2&0\\ 0&0&0&\cdots&0&1\\ 0&0&0&\cdots&0&0}, Y = \pmatrix{0&0&0&\cdots&0&0\\ -1&0&0&\cdots&0&0\\ 0&-2&0&\cdots&0&0\\ 0&0&0&\ddots&0&0\\ 0&0&0&\cdots&0&0\\ 0&0&0&\cdots&1-n&0}.$$
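For arbitrary $n$, these matrices are easy to build and test by machine. Here is a small numpy sketch (my own illustration, not part of Bourbaki's text) that constructs $H$, $X$, $Y$ in this normalisation and checks the relations $[H,X]=2X$, $[H,Y]=-2Y$, $[X,Y]=-H$ satisfied by the displayed matrices (the minus sign in the last relation reflects the sign chosen in $y$ above):

```python
import numpy as np

def bourbaki_irrep(n):
    """H, X, Y of the n-dimensional irrep of sl_2(C), normalised as displayed above."""
    H = np.diag([n - 1 - 2 * k for k in range(n)]).astype(complex)
    X = np.zeros((n, n), dtype=complex)
    Y = np.zeros((n, n), dtype=complex)
    for k in range(1, n):
        X[k - 1, k] = n - k   # superdiagonal: n-1, n-2, ..., 1
        Y[k, k - 1] = -k      # subdiagonal:  -1, -2, ..., 1-n
    return H, X, Y

def comm(a, b):
    return a @ b - b @ a

for n in range(2, 8):
    H, X, Y = bourbaki_irrep(n)
    assert np.allclose(comm(H, X), 2 * X)
    assert np.allclose(comm(H, Y), -2 * Y)
    assert np.allclose(comm(X, Y), -H)
    # iH is antihermitian, but X+Y (and iX-iY) are not for n >= 3
    if n >= 3:
        assert not np.allclose((X + Y).conj().T, -(X + Y))
```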

At first those matrices look disheartening because, even though they give back the original matrices for $n=2$, for all $n \ge 3$ the matrices $X+Y$ and $iX-iY$ are not yet antihermitian. However, now it's an exercise in linear algebra: for any $1 \le k \le n$ and $\lambda_k \in \mathbb C^*$, scaling the basis vector $e_k$ to $\lambda_k e_k$ will not change the matrix $H$, but it does change the matrices $X$ (whose $k$-th column gets multiplied by $\lambda_k$, and whose $(k+1)$-th column by $\lambda_k^{-1}$) and $Y$ (whose $(k-1)$-th column gets multiplied by $\lambda_k^{-1}$, and whose $k$-th column by $\lambda_k$). Now write down the equations and find that one can always choose $(\lambda_1, ..., \lambda_n)$ such that the new matrices $X$, $Y$ are real and negative transposes of each other, which makes $X+Y$ and $iX-iY$, and hence the entire representation, antihermitian with respect to the standard hermitian product on the new basis vectors $(\lambda_1 e_1, ..., \lambda_n e_n)$.
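Spelling out that exercise (a sketch of one way to do it, in notation of my own): the condition that the rescaled $X$ and $Y$ be negative transposes of each other reads $\frac{\lambda_{k+1}}{\lambda_k}(n-k) = \frac{\lambda_k}{\lambda_{k+1}}\,k$, so one may take $\lambda_1 = 1$ and $\lambda_{k+1} = \lambda_k\sqrt{k/(n-k)}$. In numpy:

```python
import numpy as np

def antihermitian_irrep(n):
    """Rescale the basis so that X = -Y^T; then iH, X+Y, iX-iY are all antihermitian."""
    # Bourbaki-normalised matrices as displayed above
    H = np.diag([n - 1 - 2 * k for k in range(n)]).astype(complex)
    X = np.zeros((n, n), dtype=complex)
    Y = np.zeros((n, n), dtype=complex)
    for k in range(1, n):
        X[k - 1, k] = n - k
        Y[k, k - 1] = -k
    # lambda_{k+1} = lambda_k * sqrt(k/(n-k)) solves the negative-transpose condition
    lam = np.ones(n)
    for k in range(1, n):
        lam[k] = lam[k - 1] * np.sqrt(k / (n - k))
    D, Dinv = np.diag(lam), np.diag(1 / lam)
    # the matrix of an operator A in the rescaled basis (lambda_k e_k) is D^{-1} A D
    return Dinv @ H @ D, Dinv @ X @ D, Dinv @ Y @ D, lam

for n in range(2, 8):
    H, X, Y, lam = antihermitian_irrep(n)
    assert np.allclose(X, -Y.T)                    # negative transposes of each other
    for M in (1j * H, X + Y, 1j * (X - Y)):        # images of the su(2) basis
        assert np.allclose(M.conj().T, -M)         # antihermitian

# n = 4 reproduces the example below: lam = [1, 1/sqrt(3), 1/sqrt(3), 1]
H4, X4, Y4, lam4 = antihermitian_irrep(4)
print(np.round(lam4, 4))       # [1.     0.5774 0.5774 1.    ]
print(np.round(X4.real, 2))    # superdiagonal sqrt(3), 2, sqrt(3)
```

For $n=2$ the rescaling is trivial ($\lambda_1=\lambda_2=1$) and one gets back the defining matrices, matching the remark above.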

Concretely, e.g. for $n=4$ I get $\lambda_1 = \lambda_4=1, \lambda_2 = \lambda_3 = \sqrt3^{-1}$ and thus

$$X = \pmatrix{0&\sqrt 3&0&0\\ 0&0&2&0\\ 0&0&0&\sqrt 3\\ 0&0&0&0}, Y = \pmatrix{0&0&0&0\\ -\sqrt3 &0&0&0\\ 0&-2&0&0\\ 0&0&-\sqrt3&0}$$

which makes

$$X+Y = \pmatrix{0&\sqrt 3&0&0\\ -\sqrt 3&0&2&0\\ 0&-2&0&\sqrt 3\\ 0&0&-\sqrt 3&0}, iX-iY = \pmatrix{0&i\sqrt 3&0&0\\ i\sqrt 3&0&2i&0\\ 0&2i&0&i\sqrt 3\\ 0&0&i\sqrt 3&0}$$

nicely antihermitian.

This kind of rescaling to make the operators obviously antihermitian is rarely ever done*; one reason is that, easy as it is, it uses the existence of square roots in $\mathbb R$, whereas the normalisation that Bourbaki uses works over any field, or actually, over $\mathbb Z$.

*ADDED: My claim that this is "rarely ever done" is wrong. This question made me look at the Wikipedia article on "Spin" in quantum mechanics, and in the section about "Higher Spins" I recognise exactly the scaled matrices I cooked up above (my example $n=4$ being exactly the case of spin $\frac32$), apart from the physicists' convention of multiplying everything through by the imaginary unit $i$ (there might be further minor sign flips due to $Y \mapsto -Y$ and/or $-iH$ instead of $iH$ or something, but the idea is definitely the same).