Why represent a complex number $a+ib$ as $[\begin{smallmatrix}a & -b\\ b & \hphantom{-}a\end{smallmatrix}]$? [duplicate]

I am reading through John Stillwell's *Naive Lie Theory*, and it is claimed that every complex number $a+ib$ can be represented by the $2\times 2$ matrix $\begin{bmatrix}a & -b\\ b & \hphantom{-}a\end{bmatrix}$.

But $a+ib$ looks quite different from $\begin{bmatrix}a & -b\\ b & \hphantom{-}a\end{bmatrix}$: the matrix form is clumsier to use, and I seldom see it in any applications I am aware of. Furthermore, it seems to complicate simple operations: after performing a matrix multiplication you have to take the extra step of extracting the complex number from the result.

Can someone explain what exactly is the difference (if there is any) between the two different representations? In what instances is a matrix representation advantageous?


This representation of $\,\Bbb C\,$ arises from viewing $\,\Bbb C\cong \Bbb R^2$ as a two-dimensional vector space over $\,\Bbb R,\,$ where multiplication by $\rm\:\alpha = a+b\,{\it i}\:$ is an $\:\Bbb R$-linear map $\rm\:x\mapsto \alpha\, x.\,$ Computing the coefficients of the matrix of $\,\alpha\,$ with respect to the basis $\,[1,\,{\it i}\,]^T\:$ we obtain

$$\rm (a+b\,{\it i}\,) \left[ \begin{array}{c} 1 \\ {\it i} \end{array} \right] \,=\, \left[\begin{array}{r}\rm a+b\,{\it i}\\\rm -b+a\,{\it i} \end{array} \right] \,=\, \left[\begin{array}{rr}\rm a &\rm b\\\rm -b &\rm a \end{array} \right] \left[\begin{array}{c} 1 \\ {\it i} \end{array} \right]$$

What is the point of such linear representations? By making explicit the innate linear structure we can apply the powerful techniques of linear algebra.
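For instance, one can check numerically that the representation turns complex arithmetic into matrix arithmetic (a small sketch in Python with NumPy; the helper name `to_matrix` is mine):

```python
import numpy as np

def to_matrix(z: complex) -> np.ndarray:
    """Represent a + bi as the real 2x2 matrix [[a, -b], [b, a]]."""
    a, b = z.real, z.imag
    return np.array([[a, -b], [b, a]])

z, w = 2 + 3j, -1 + 4j

# The representation is multiplicative: M(z) M(w) = M(zw) ...
assert np.allclose(to_matrix(z) @ to_matrix(w), to_matrix(z * w))
# ... and additive: M(z) + M(w) = M(z + w).
assert np.allclose(to_matrix(z) + to_matrix(w), to_matrix(z + w))
```

In other words, the map $a+bi\mapsto\begin{bmatrix}a&-b\\ b&a\end{bmatrix}$ is a ring homomorphism, so any computation with complex numbers can be carried out with these matrices instead.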

For example, let's look at some analogous linear algebra of Fibonacci numbers. Recall Binet's formula that $\, f_n = (\varphi^n - \bar \varphi^n)/\sqrt{5}\,$ where $\,\varphi,\bar \varphi = (1\pm\sqrt{5})/2\,$ are the roots of $\,x^2-x-1.\,$ Here it is natural to work in $\,\Bbb Q(\varphi) = \Bbb Q(\sqrt{5}) = \{a + b\sqrt{5}: a,b\in\Bbb Q\},\,$ a two-dimensional vector space over $\,\Bbb Q\,$ with basis $\,[\varphi,1].\,$ Here multiplication by $\,\varphi\,$ has the matrix $M$ displayed below

$$\rm {\it \varphi}\, \left[ \begin{array}{c} {\it \varphi} \\ 1 \end{array} \right] \,=\, \left[\begin{array}{r}\rm \varphi + 1\\\rm \varphi + 0 \end{array} \right] \,=\, \left[\begin{array}{rr}\rm 1 &\rm 1\\\rm 1 &\rm 0 \end{array} \right] \left[\begin{array}{c} \varphi \\ 1 \end{array} \right]$$

This leads to the following matrix representation of Fibonacci numbers.

$$\qquad M^n\ =\ \left[\begin{array}{ccc} \,1 & 1 \\\ 1 & 0 \end{array}\right]^n =\ \left[\begin{array}{ccc} F_{n+1} & F_n \\\ F_n & F_{n-1} \end{array}\right] $$

The above allows us to quickly compute Fibonacci numbers by computing the powers of $\,M\,$ by repeated squaring. Further, it yields an easy proof of the Fibonacci addition law
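Here is a sketch of that repeated-squaring computation in plain Python (the function names are mine):

```python
def mat_mult(A, B):
    """Multiply two 2x2 matrices given as nested tuples."""
    return (
        (A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]),
        (A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]),
    )

def fib(n: int) -> int:
    """Compute F_n as the top-right entry of M^n, where M = [[1,1],[1,0]],
    using O(log n) matrix multiplications (repeated squaring)."""
    result = ((1, 0), (0, 1))   # start from the identity matrix
    M = ((1, 1), (1, 0))
    while n:
        if n & 1:               # include this power of M if the bit is set
            result = mat_mult(result, M)
        M = mat_mult(M, M)      # square M at each step
        n >>= 1
    return result[0][1]

print([fib(k) for k in range(8)])  # → [0, 1, 1, 2, 3, 5, 8, 13]
```

Because the matrix entries are exact integers, this computes even very large $F_n$ quickly and without floating-point error, unlike a direct use of Binet's formula.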

$$\begin{eqnarray} M^{n+m} = M^n M^m &=&\, \left[\begin{array}{ccc} F_{n+1} & F_n \\\ F_n & F_{n-1} \end{array}\right]\ \left[\begin{array}{ccc} F_{m+1} & F_m \\\ F_m & F_{m-1} \end{array}\right] \\ \\ \Rightarrow\ \ \left[\begin{array}{ccc} F_{n+m+1} & F_{n+m} \\\ \color{#c00}{F_{n+m}} & F_{n+m-1} \end{array}\right]\! &=&\,\left[\begin{array}{ccc} F_{n+1}F_{m+1} + F_nF_m & F_{n+1}F_m + F_nF_{m-1} \\\ \color{#C00}{F_nF_{m+1} + F_{n-1}F_m} & F_{n}F_{m} + F_{n-1}F_{m-1} \end{array}\right]\end{eqnarray}$$

which contains the sought addition law.

$$\color{#c00}{F_{n+m} = F_nF_{m+1} + F_{n-1}F_m} $$
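A quick numerical sanity check of this addition law (a sketch in plain Python):

```python
def fibs(upto):
    """Return the list [F_0, F_1, ..., F_upto] by direct iteration."""
    F = [0, 1]
    for _ in range(upto - 1):
        F.append(F[-1] + F[-2])
    return F

F = fibs(40)

# F_{n+m} = F_n F_{m+1} + F_{n-1} F_m  for all n >= 1, m >= 0.
assert all(F[n + m] == F[n] * F[m + 1] + F[n - 1] * F[m]
           for n in range(1, 20) for m in range(0, 20))
```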

That is but a small glimpse of the power afforded by linear representations.


Simply notice that $$ \begin{bmatrix}a & -b\\ b & a\end{bmatrix}=aI+bJ $$

where $ I=\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix} $ and $ J=\begin{bmatrix}0 & -1\\ 1 & 0\end{bmatrix}\;. $ Now observe that $I$ and $J$ behave like $1$ and $i$ respectively (in fact $I$ is the identity for $2\times2$ matrices and $J^2=-I$), which are an $\Bbb R$-basis for $\Bbb C$.

Thus writing $\begin{bmatrix}a & -b\\ b & a\end{bmatrix}$ is equivalent to writing $a+ib$.
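These identities are easy to verify directly (a sketch in Python with NumPy):

```python
import numpy as np

I = np.eye(2)                         # plays the role of 1
J = np.array([[0., -1.], [1., 0.]])   # plays the role of i

# J squares to -I, just as i^2 = -1.
assert np.allclose(J @ J, -I)

# a*I + b*J reproduces the matrix form of a + ib.
a, b = 3.0, -2.0
assert np.allclose(a * I + b * J, np.array([[a, -b], [b, a]]))
```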


In analytic geometry the matrices $$ \left[ \begin{array}{cc} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta \end{array} \right] $$ are rotations, and among the complex numbers the numbers $e^{i\theta}$ do the same work on vectors, so this is a good reason to identify matrices of this form with their complex counterparts.
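For example, rotating a vector with the matrix agrees with multiplying the corresponding complex number by $e^{i\theta}$ (a numerical sketch with NumPy):

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([2.0, 3.0])        # the vector (2, 3), i.e. the number 2 + 3i
z = complex(v[0], v[1])

rotated = R @ v                 # rotate with the matrix
w = np.exp(1j * theta) * z      # rotate with e^{i*theta}

# Both give the same point in the plane.
assert np.allclose(rotated, [w.real, w.imag])
```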