The form of $2 \times 2$ unitary matrices
I've been working through "Groups and Symmetry" (Armstrong) and came across this problem in chapter 9 which I can't figure out. Any hints/help would be greatly appreciated!
Show that every $2\times 2$ unitary matrix has the form
$$ \left(\begin{array}{c c} w & z \\ -e^{i \theta} z^{*} & e^{i \theta} w^{*} \end{array}\right) $$
for some $\theta\in\mathbb{R}$ and $w,z\in\mathbb{C}$. (A matrix is said to be unitary if it is invertible with its adjoint as the inverse. The symbol "*" denotes complex conjugate.)
Start with the facts you know: you have a $2\times 2$ complex matrix $\begin{pmatrix}w&z\\c&d\end{pmatrix}$ such that multiplying it by its adjoint $\begin{pmatrix}w^*&c^*\\z^*&d^*\end{pmatrix}$ gives $\begin{pmatrix}1&0\\0&1\end{pmatrix}$. That means $ww^*+zz^* = 1$, $cc^*+dd^* = 1$ and $cw^*+dz^*=0$. Don't forget the product in the other order (the adjoint times the matrix is also the identity), which gives $ww^*+cc^* = 1$, $zz^*+dd^* = 1$ and $w^*z+c^*d=0$. Comparing these equations, you should notice something immediately about both $cc^*$ and $dd^*$. You can work from there.
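If it helps to see these relations concretely, here is a small numerical sanity check (just a sketch using NumPy; the particular sample matrix and the variable names are my own choice, not part of the hint):

```python
import numpy as np

# A sample unitary matrix (my own choice; any unitary matrix works here):
#   (1/sqrt(2)) * [[1, i], [i, 1]]
w, z = 1 / np.sqrt(2), 1j / np.sqrt(2)
c, d = 1j / np.sqrt(2), 1 / np.sqrt(2)
A = np.array([[w, z], [c, d]])

# Unitarity in both orders: A·A* = I and A*·A = I.
assert np.allclose(A @ A.conj().T, np.eye(2))
assert np.allclose(A.conj().T @ A, np.eye(2))

# The six scalar equations from the hint.
assert np.isclose(w * np.conj(w) + z * np.conj(z), 1)   # A·A*, entry (1,1)
assert np.isclose(c * np.conj(c) + d * np.conj(d), 1)   # A·A*, entry (2,2)
assert np.isclose(c * np.conj(w) + d * np.conj(z), 0)   # A·A*, entry (2,1)
assert np.isclose(w * np.conj(w) + c * np.conj(c), 1)   # A*·A, entry (1,1)
assert np.isclose(z * np.conj(z) + d * np.conj(d), 1)   # A*·A, entry (2,2)
assert np.isclose(np.conj(w) * z + np.conj(c) * d, 0)   # A*·A, entry (1,2)
```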
Adding a faster approach than the accepted answer since this post is the first result returned by Google.
For $A := \left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)$ with $a,b,c,d\in\mathbb{C}$, we instead compare the matrix entries in the defining equation $A^* = A^{-1}$.
Since $A^{-1} = \frac{1}{\det A} \left(\begin{smallmatrix}d&-b\\-c&a\end{smallmatrix}\right)$ and the adjoint $A^* = \left(\begin{smallmatrix}\overline{a}&\overline{c}\\\overline{b}&\overline{d}\end{smallmatrix}\right)$ is just the transpose with complex conjugation (which I will denote by $\overline{a}$) applied entrywise, comparing just three of the entries immediately gives $$d = \overline{a} \cdot \det A,\ \ a = \overline{d} \cdot \det A\ \ \text{and}\ \ c = -\overline{b} \cdot \det A.\quad \quad \quad (1)$$
The first two equations give $\overline{a} = d \cdot \overline{\det A}$, hence $d = \overline{a} \cdot \det A = d \cdot \overline{\det A} \cdot \det A = d \cdot |\det A|^2$, so for $d\neq 0$ this forces $|\det A|^2=1$. Since any complex number of unit modulus lies on the unit circle and is determined by an angle $\theta\in\mathbb{R}$, we can write the scaling factor as $\det A = e^{i\theta}$.
Now substituting $\det A = e^{i\theta}$ back into $(1)$ yields $$c = -e^{i\theta} \overline{b}\ \ \text{and}\ \ d = e^{i\theta} \overline{a},$$ hence $$ A = \begin{pmatrix} a & b\\ -e^{i\theta} \overline{b} & e^{i\theta} \overline{a} \end{pmatrix}. $$
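As a quick numerical illustration of this conclusion, here is a sketch (assuming NumPy; building a "generic" unitary matrix from a QR decomposition is simply my choice of construction): extract $\theta$ from the determinant and check that the bottom row is forced.

```python
import numpy as np

rng = np.random.default_rng(0)

# The Q factor of a QR decomposition of a random complex matrix is unitary.
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A, _ = np.linalg.qr(M)
assert np.allclose(A @ A.conj().T, np.eye(2))

# det A has unit modulus, so det A = e^{i*theta} for some real theta.
theta = np.angle(np.linalg.det(A))
a, b = A[0, 0], A[0, 1]

# Bottom row is forced: c = -e^{i*theta} * conj(b), d = e^{i*theta} * conj(a).
assert np.isclose(A[1, 0], -np.exp(1j * theta) * np.conj(b))
assert np.isclose(A[1, 1], np.exp(1j * theta) * np.conj(a))
```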
$\rule{19cm}{0.4pt}$
Zero-division technicality: For $A$ unitary we cannot have both $d = 0$ and $c = 0$, since that would give $\det A = ad-bc = 0$; but by definition of unitarity $A^{-1}=A^*$ exists, so $A$ is invertible and $\det A \neq 0$, a contradiction. Thus if $d=0$, then $c \neq 0$, and we can argue exactly as before, deriving $|\det A|=1$ in the same way from $c = -\overline{b} \cdot \det A$ and $b = -\overline{c} \cdot \det A$ (the remaining entry comparison in $A^* = A^{-1}$) instead of the first two equations of $(1)$.
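For a concrete instance of this edge case (again just a NumPy sketch of my own), the swap matrix has $d = 0$ yet still fits the claimed form, with $\theta = \pi$:

```python
import numpy as np

# The swap (permutation) matrix: unitary, with bottom-right entry d = 0.
A = np.array([[0, 1], [1, 0]], dtype=complex)
assert np.allclose(A @ A.conj().T, np.eye(2))

# det A = -1 = e^{i*pi}, so theta = pi, and the bottom row still matches the form.
theta = np.angle(np.linalg.det(A))          # theta == pi
a, b = A[0, 0], A[0, 1]                     # a = 0 plays the role of w, b = 1 of z
assert np.isclose(A[1, 0], -np.exp(1j * theta) * np.conj(b))   # 1 == -e^{i*pi} * 1
assert np.isclose(A[1, 1], np.exp(1j * theta) * np.conj(a))    # 0 == e^{i*pi} * 0
```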