Relation of this antisymmetric matrix $r = \left(\begin{smallmatrix}0 &1\\-1&0\end{smallmatrix}\right)$ to $i$

I was reviewing some matrices and found something interesting:

if $r = \begin{pmatrix} 0&1\\ -1&0 \end{pmatrix}$, then $rr=-I$, and moreover $$\exp{(\theta r)} = \cos\theta \, I + \sin\theta \, r.$$ No wonder: the matrix $R(\theta) = e^{\theta r}$ is the 2d rotation matrix, just as $e^{i\theta}$ rotates a vector in the Argand plane. I have only a cursory knowledge of complex analysis, so I would like to know where I can find the details, i.e. what is the unifying theme, and in which literature can it be found?
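As a quick numerical sanity check of the identity, here is a plain-Python sketch (the helper names `mat_mul`, `mat_exp`, etc. are my own, just for illustration) that sums the matrix Taylor series for $\exp(\theta r)$ and compares it to $\cos\theta\, I + \sin\theta\, r$:

```python
import math

def mat_mul(A, B):
    # product of two 2x2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

I2 = [[1.0, 0.0], [0.0, 1.0]]  # identity matrix

def mat_exp(A, terms=30):
    # truncated Taylor series exp(A) = sum_n A^n / n!
    result, power, fact = I2, I2, 1.0
    for n in range(1, terms):
        power = mat_mul(power, A)
        fact *= n
        result = mat_add(result, mat_scale(1.0 / fact, power))
    return result

r = [[0.0, 1.0], [-1.0, 0.0]]
theta = 0.7
lhs = mat_exp(mat_scale(theta, r))             # exp(theta * r)
rhs = mat_add(mat_scale(math.cos(theta), I2),
              mat_scale(math.sin(theta), r))   # cos(theta) I + sin(theta) r
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-9
           for i in range(2) for j in range(2))
```

The two sides agree to floating-point precision, as expected from grouping the even powers of $\theta r$ (which are multiples of $I$) and the odd powers (multiples of $r$).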


joriki's answer is really nice and to the point as usual, but let me add my two cents and describe the relation to the matrix representation of complex numbers.

First notice that multiplication by $i$ corresponds to a (counterclockwise) $90^{\circ}$-rotation around the origin in the complex plane. Now we can consider $\mathbb{C}$ as a $2$-dimensional vector space over $\mathbb{R}$ with basis $1,i$, so that $z = a + bi = \begin{pmatrix}a\\b\end{pmatrix}$. Let me denote the $\mathbb{R}$-linear map $z \mapsto iz$ by $\mathbf{J}$. Note that for $z = a + bi = \begin{pmatrix}a\\b\end{pmatrix}$ we have $iz = ia - b = -b + ia = \begin{pmatrix}-b\\a\end{pmatrix}$, so we must have $\mathbf{J} = \begin{pmatrix}0&-1\\1&0\end{pmatrix}$. You can of course also see this by remembering that a rotation by the angle $\alpha$ has the matrix $\begin{pmatrix}\cos{\alpha}&-\sin{\alpha}\\\sin{\alpha}&\cos{\alpha}\end{pmatrix}$.
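A quick plain-Python check (the names `J` and `apply` are mine, purely for illustration) that applying $\mathbf{J}$ to the coordinate vector of $z = a+bi$ agrees with multiplying $z$ by $i$:

```python
# J represents multiplication by i in the real basis {1, i} of C
J = [[0, -1], [1, 0]]

def apply(M, v):
    # 2x2 matrix times the column vector (a, b)
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

a, b = 3, 4                 # z = 3 + 4i
w = 1j * complex(a, b)      # i*z = -4 + 3i
# J sends (a, b) to (-b, a), exactly the coordinates of i*z
assert apply(J, [a, b]) == [-b, a] == [w.real, w.imag]
```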

So far so good, but this is only the beginning of the story! Now clearly we have $\mathbf{J}^2 = - \mathbf{1}$, $\mathbf{J}^3 = -\mathbf{J}$ and $\mathbf{J}^4 = \mathbf{1}$, so $\mathbf{J}$ satisfies the same relations as the ones we're used to for $i$...

Given this, it is natural to try and look at matrices of the form $a\mathbf{1} + b\mathbf{J} = \begin{pmatrix}a&-b\\b&a\end{pmatrix}$.

Since we're working in a vector space of matrices, addition behaves in exactly the same way as usual, so let us look at multiplication. You should convince yourself that matrix multiplication gives $(a\mathbf{1} + b\mathbf{J})(c\mathbf{1}+d\mathbf{J}) = (ac-bd)\mathbf{1} + (ad+bc)\mathbf{J}$, giving us back the multiplication rule for complex numbers from matrix multiplication. Note also that complex conjugation simply corresponds to transposition, and that the determinant encodes the square of the absolute value: $\det(a\mathbf{1} + b\mathbf{J}) = a^2 + b^2 = |a+bi|^2$, as you can check easily.
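All three claims can be checked mechanically. Here is a short plain-Python sketch (the helper names `to_mat` and `mat_mul` are my own) verifying that the matrix representation turns complex multiplication into matrix multiplication, conjugation into transposition, and $|z|^2$ into the determinant:

```python
def to_mat(z):
    # represent z = a + bi as the real 2x2 matrix a*1 + b*J
    return [[z.real, -z.imag], [z.imag, z.real]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z, w = complex(2, 3), complex(-1, 4)

# matrix multiplication reproduces complex multiplication
assert mat_mul(to_mat(z), to_mat(w)) == to_mat(z * w)

# determinant = a^2 + b^2 = |z|^2
M = to_mat(z)
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
assert det == z.real**2 + z.imag**2

# transposition corresponds to complex conjugation
MT = [[M[j][i] for j in range(2)] for i in range(2)]
assert MT == to_mat(z.conjugate())
```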

If editing were not so painfully slow at the moment, I'd have loved to elaborate further by plugging in complex values and ending up with the quaternions and the Pauli matrices, but for the moment a simple Wikipedia link will have to do. See in particular the passage on matrix representations of the quaternions.


The connection is due to the fact that this matrix has eigenvalues $\mathrm i$ and $-\mathrm i$. Since the eigenvalues of the square of a matrix are the squares of its eigenvalues, the eigenvalues of the square are both $-1$; and since the matrix is diagonalizable (its two eigenvalues are distinct), the square must be $-I$. The same is true for any diagonalizable square matrix, of any dimension, whose eigenvalues are all $\pm\mathrm i$.
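The eigenvalue claim can be checked directly over the complex numbers; here is a small plain-Python sketch (the names `apply`, `v_plus`, `v_minus` are mine) exhibiting eigenvectors of $r$ for $\mathrm i$ and $-\mathrm i$ and confirming $r^2 = -I$:

```python
r = [[0, 1], [-1, 0]]

def apply(M, v):
    # matrix times vector, allowing complex entries
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

v_plus, v_minus = [1, 1j], [1, -1j]        # eigenvectors for i and -i
assert apply(r, v_plus) == [1j * x for x in v_plus]     # r v = i v
assert apply(r, v_minus) == [-1j * x for x in v_minus]  # r v = -i v

# hence r^2 acts as -1 on both eigenvectors; since they span C^2, r^2 = -I
r2 = [[sum(r[i][k] * r[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
assert r2 == [[-1, 0], [0, -1]]
```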