Prove that the determinant of the complex conjugate of a matrix is the complex conjugate of the determinant
Solution 1:
This can also be shown by induction. Suppose the claim holds for $n\times n$ matrices. By cofactor expansion, the determinant of an $(n+1)\times(n+1)$ matrix is a signed sum of entries times $n\times n$ minors; since conjugation respects sums and products, applying the inductive hypothesis to each minor completes the step.
The base case (which also illustrates how the general step works):
$$\overline{\det\left(\begin{matrix}a & b\\ c& d\end{matrix}\right)} = \overline{a\det(d)-c\det(b)} = \overline{a}\overline d-\overline c\overline b = \overline a\det(\overline d)-\overline c\det(\overline b ) =\det\left(\begin{matrix}\overline a & \overline b\\ \overline c& \overline d\end{matrix}\right)$$
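As a quick numerical sanity check (not part of the proof; this uses NumPy, which the answer does not mention), one can verify $\overline{\det(A)}=\det(\overline A)$ on random complex matrices of several sizes:

```python
import numpy as np

# Sanity check: conj(det(A)) == det(conj(A)) for random complex matrices.
rng = np.random.default_rng(0)
for n in range(1, 6):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    lhs = np.conj(np.linalg.det(A))
    rhs = np.linalg.det(np.conj(A))
    assert np.isclose(lhs, rhs), (n, lhs, rhs)
print("conj(det(A)) == det(conj(A)) holds for all tested sizes")
```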
Solution 2:
It suffices to note that the determinant is a polynomial (with real coefficients) in the entries of the matrix. In general, we have
- $\overline{x + y} = \bar x + \bar y$
- $\overline{xy} = \bar x \bar y$
- $\overline{\alpha x} = \alpha \bar x \quad (\text{for } \alpha \in \Bbb R)$
From these facts it can be deduced (inductively, if you wish) that for an arbitrary polynomial $p(x_1,\dots,x_n)$ with real coefficients, $$ \overline{p(x_1,\dots,x_n)} = p(\overline{x_1},\dots,\overline{x_n}), $$ and applying this to the determinant polynomial in the $n^2$ entries gives the claim.
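One can watch this happen mechanically in SymPy (an illustration I'm adding, not part of the argument): SymPy's `conjugate` distributes over sums and products exactly as in the three rules above, so conjugating the $2\times2$ determinant polynomial $ad-bc$ gives the same result as conjugating the entries first:

```python
import sympy as sp

# The 2x2 determinant is the real-coefficient polynomial a*d - b*c
# in the matrix entries.
a, b, c, d = sp.symbols('a b c d', complex=True)
A = sp.Matrix([[a, b], [c, d]])

lhs = sp.conjugate(A.det())            # conjugate of the determinant
rhs = A.applyfunc(sp.conjugate).det()  # determinant of the conjugated matrix
print(sp.simplify(lhs - rhs))          # prints 0
```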
Solution 3:
From your definition of determinant it is immediate that the determinant is a (in general very complicated) expression built up of the matrix entries using multiplication, addition and subtraction. Any such expression has the property that applying complex conjugation to each of the entries leads to complex conjugation of the final value of the expression. This is proved in principle by induction on the structure of the expression, using the basic facts $\overline a+\overline b=\overline{a+b}$, $\overline a-\overline b=\overline{a-b}$, and $\overline a\,\overline b=\overline{ab}$; however this "percolation" of the operation is so easy to see that one would usually not prove this formally (and certainly not specifically for the determinant expression).
In fact, one should see this in even more generality: the fundamental fact used about complex conjugation is that it is a ring homomorphism (this just means it has the stated relation to the operations $+$, $-$, $\times$, and sends $1\mapsto 1$). So by the same proof, if $f:R\to S$ is any ring homomorphism between commutative rings $R,S$, then for any matrix $A\in M_n(R)$ (a square matrix with entries in $R$) one has $\det(f(A))=f(\det(A))$, where $f(A)\in M_n(S)$ designates the matrix obtained by applying $f$ to all entries of $A$, so that $\det(f(A))$ is computed using arithmetic operations in$~S$. Your result is just the special case $R=S=\Bbb C$ and $f:z\mapsto\overline z$.
This is very useful in practice, especially since computing a determinant in $S$ may be easier than in$~R$. For instance it allows you to use modular reduction (and hence the Chinese remainder theorem) for determinants with integer entries: since modular reduction $f:\Bbb Z\to\Bbb Z/n\Bbb Z$ is a ring homomorphism, the remainder modulo any$~n$ of the determinant can be computed by reducing all the entries modulo$~n$, and computing the resulting determinant in $\Bbb Z/n\Bbb Z$.

Probably the most important instance of the principle is usually considered so obvious that many don't even recognise there is something to prove: if a matrix has polynomial entries, and its determinant is computed in $\Bbb C[X]$ (say), then evaluating the resulting polynomial at $X=\lambda\in\Bbb C$ gives the same result as evaluating all entries at $X=\lambda$ and then computing the resulting numerical determinant (here one uses that $f:\Bbb C[X]\to\Bbb C$, defined by evaluation at $X=\lambda$, is a ring homomorphism). Thus for instance the roots$~\lambda$ of the characteristic polynomial of $A$ are indeed eigenvalues (i.e., values such that the matrix $A-\lambda I_n\in M_n(\Bbb C)$ has zero determinant).
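Both instances are easy to check computationally. The following sketch (using SymPy; my own illustration with made-up entries, not anything from the answer above) verifies the modular-reduction case and the characteristic-polynomial case on one small integer matrix:

```python
import sympy as sp

# Instance 1: modular reduction f: Z -> Z/nZ is a ring homomorphism,
# so reducing the entries mod n and then taking the determinant gives
# the same residue as taking the determinant and then reducing mod n.
A = sp.Matrix([[12, 5, 9], [3, 14, 8], [6, 2, 11]])
n = 7
reduce_then_det = A.applyfunc(lambda x: x % n).det() % n
det_then_reduce = A.det() % n
assert reduce_then_det == det_then_reduce

# Instance 2: evaluation at X = lam is a ring homomorphism C[X] -> C,
# so the characteristic polynomial det(A - X*I), evaluated at lam,
# equals the numerical determinant det(A - lam*I).
X = sp.symbols('X')
lam = 3
charpoly = (A - X * sp.eye(3)).det()   # determinant computed in Z[X]
assert charpoly.subs(X, lam) == (A - lam * sp.eye(3)).det()
print("both ring-homomorphism instances check out")
```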