Non-surjectivity of the exponential map to GL(2,R)

I was asked to show that the exponential map $\exp: \mathfrak{g} \to G$ is not surjective by proving that the matrix $\left(\matrix{-1 & 0 \\ 0 & -2}\right)\in \text{GL}(2,\mathbb{R})$ can't be the exponential of any matrix $A \in \mathfrak{gl}(2,\mathbb{R})$.


My proof (edited)

Lemma: A diagonal matrix $M \in \mathrm{GL}(2,\mathbb{R})$ is the exponential of a matrix $A\in\mathfrak{gl}(2,\mathbb{R})$ if it has positive eigenvalues or a repeated (double) negative eigenvalue.

For any diagonal matrix $A = \left(\matrix{a & 0 \\ 0 & d}\right)$, we have $\displaystyle\text{e}^A=\sum_{k=0}^\infty\dfrac{A^k}{k!} = \left(\matrix{\text{e}^a & 0 \\ 0 & \text{e}^d}\right)$. Then, $$\forall\; \lambda,\,\mu > 0, \qquad\left(\matrix{\lambda & 0 \\ 0 & \mu}\right) = \exp \left(\matrix{\ln\lambda & 0 \\ 0 & \ln\mu}\right)$$
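
(A quick numerical sanity check of this claim, using `scipy.linalg.expm`; the values of $\lambda$ and $\mu$ below are arbitrary choices:)

```python
import numpy as np
from scipy.linalg import expm

lam, mu = 3.0, 0.5                      # any positive eigenvalues
A = np.diag([np.log(lam), np.log(mu)])  # diag(ln λ, ln μ)

# exp(A) should reproduce diag(λ, μ)
assert np.allclose(expm(A), np.diag([lam, mu]))
```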

By the Cayley-Hamilton theorem, every square matrix satisfies its own characteristic polynomial. Thus I can express any power of $A$ as a linear combination of $A$ itself and the identity matrix. For instance, $A^2= tA-d\mathbb{I}\;$ and $A^3=(t^2-d)A-td\mathbb{I}\;$, where $t=\text{tr} A, \; d=\det A$.
It follows that $M=\text{e}^A=\alpha A+\beta\mathbb{I}\;$ for some real coefficients $\alpha,\beta$ (even if I don't know how to compute them explicitly).
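
(Side note, not needed for the argument: when $A$ has distinct eigenvalues $\lambda_1\neq\lambda_2$, Lagrange–Sylvester interpolation gives the coefficients explicitly, $$\text{e}^A=\underbrace{\frac{\text{e}^{\lambda_1}-\text{e}^{\lambda_2}}{\lambda_1-\lambda_2}}_{\alpha}\,A+\underbrace{\frac{\lambda_1\text{e}^{\lambda_2}-\lambda_2\text{e}^{\lambda_1}}{\lambda_1-\lambda_2}}_{\beta}\,\mathbb{I}\,,$$ and for a double eigenvalue $\lambda$ one gets $\alpha=\text{e}^\lambda$, $\beta=(1-\lambda)\text{e}^\lambda$.)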

  • If $M$ has distinct negative eigenvalues, then $\alpha \neq 0$: otherwise we'd have $M = \beta \mathbb{I}$, a contradiction. Since $A = \frac1\alpha(M-\beta\mathbb{I})$, the matrix $A$ must be diagonal too, but then it can't exponentiate to something with negative eigenvalues (see the conclusion below).

  • If $M$ has a double negative eigenvalue, it is a negative multiple of $\mathbb{I}$ and $\alpha$ must be $0$: if $\alpha\neq 0$, then $A=\frac1\alpha(M-\beta\mathbb{I})$ would be a scalar matrix, whose exponential is a *positive* multiple of $\mathbb{I}$. Observing that $\exp\left(\matrix{0 & \pi \\ -\pi & 0}\right)=\left(\matrix{-1 & 0 \\ 0 & -1}\right)=-\mathbb{I}$ and experimenting a bit, it is straightforward to prove that $$\forall\; \lambda>0,\ \forall\, m\ \text{odd}, \qquad -\lambda\mathbb{I}=\left(\matrix{-\lambda & 0 \\ 0 & -\lambda}\right)=\exp\left(\matrix{\ln\lambda & \pm\, m\pi \\ \mp\, m\pi & \ln\lambda}\right)$$
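
Again a quick numerical check (a sketch; $\lambda=2$ and $m=3$ are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

lam, m = 2.0, 3                       # any λ > 0 and odd integer m
A = np.array([[np.log(lam),  m * np.pi],
              [-m * np.pi,  np.log(lam)]])

# exp(A) should equal -λ·I
assert np.allclose(expm(A), -lam * np.eye(2))
```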

The conclusion is immediate: by the first bullet, a matrix $A$ with $\text{e}^A=\left(\matrix{-1 & 0 \\ 0 & -2}\right)$ would have to be diagonal, $A = \left(\matrix{a & 0 \\ 0 & d}\right)$ with $\text{e}^a=-1$ and $\text{e}^d=-2$, which is impossible since the real exponential takes only positive values.


Alternative approach, problems and questions

My efforts were aimed at exploiting the identity $\exp(PXP^{-1})=P(\exp X)P^{-1},\; \forall X\in \mathfrak{gl}(n,\mathbb{R}),\,\forall P\in\text{GL}(n,\mathbb{R})$. The problem is that if $\exp X$ is a diagonal non-scalar matrix, it doesn't lie in the center of the algebra and thus doesn't commute with $P$.
That's a pity, because I can (almost) always diagonalize $A=\left(\matrix{a & b \\ c & d}\right)$, writing $A=PXP^{-1}$, where $X$ is diagonal:

  1. If $\det A = 0$, then since $\text{tr}A \neq 0$ (indeed $\text{e}^{\text{tr}A}=\det \text{e}^A = 2$, so $\text{tr}A=\ln 2$), $A$ has two distinct eigenvalues ($0$ and $\text{tr}A$) and is diagonalizable.
  2. If $\det A \neq 0$, the characteristic equation is $\lambda^2-\text{tr}A\lambda +\det A = 0$ and

    • if its discriminant $\Delta \neq 0$, then again I get two distinct (possibly complex) eigenvalues and $A$ is diagonalizable (over $\mathbb{C}$).
    • if $\Delta = 0$, the only eigenvalue is $\lambda=\dfrac12 \text{tr}A$; unless $A$ is the scalar matrix $\lambda\mathbb{I}$ (which is already diagonal), its geometric multiplicity is only $1$, making $A$ not diagonalizable (its exponential is computed just below).
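
(For completeness: in the non-diagonalizable case the exponential can still be computed by hand, since the $2\times2$ Jordan block splits as $\lambda\mathbb{I}+N$ with $N^2=0$ and $[\lambda\mathbb{I},N]=0$: $$\exp\left(\matrix{\lambda & 1 \\ 0 & \lambda}\right)=\text{e}^\lambda(\mathbb{I}+N)=\text{e}^\lambda\left(\matrix{1 & 1 \\ 0 & 1}\right),$$ whose only eigenvalue $\text{e}^\lambda$ is positive for real $\lambda$.)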

This line of thought seems quite promising to me. I was wondering whether there is some way to conclude the proof of the "diagonalizable $A$" case from $P\left(\matrix{\text{e}^{\lambda_1} & 0 \\ 0 & \text{e}^{\lambda_2}}\right)P^{-1} = \left(\matrix{-1 & 0 \\ 0 & -2}\right)$ and then to handle the "non-diagonalizable $A$" case.
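
In the meantime, the conjugation identity this strategy hinges on is easy to confirm numerically (a minimal sketch with a random $X$ and a random, generically invertible $P$):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
X = rng.standard_normal((2, 2))
P = rng.standard_normal((2, 2))       # generically invertible
Pinv = np.linalg.inv(P)

# exp(P X P^{-1}) == P exp(X) P^{-1}
assert np.allclose(expm(P @ X @ Pinv), P @ expm(X) @ Pinv)
```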


Bonus question

I was also trying to work with the other property, $\exp(X+Y)=\exp X \exp Y \;$ whenever $\;[X,Y]=0$, exploiting some decomposition of $A$, such as $$A = \left(\matrix{a & b \\ c & d}\right) = A_{\text{sym}}+A_{\text{antisym}}\;, \quad\text{where}\quad A_{\text{sym}}=\frac12(A+A^T) \;,\;A_{\text{antisym}}=\frac12(A-A^T)$$ $$\text{or} \quad A = \left(\matrix{a & b \\ c & d}\right) = A_{\text{diag}}+A_{\text{antidiag}}\;, \quad\text{where}\quad A_{\text{diag}}=\left(\matrix{a & 0 \\ 0 & d}\right) \;,\;A_{\text{antidiag}}=\left(\matrix{0 & b \\ c & 0}\right)\;. $$

Unfortunately, neither decomposition helps, because in general $[A_{\text{sym}},A_{\text{antisym}}] \neq 0$ and $[A_{\text{diag}},A_{\text{antidiag}}] \neq 0$.
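
Here is a short symbolic check (a sketch using `sympy`) showing exactly when the two commutators vanish for a generic $2\times2$ matrix:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d', real=True)
A = sp.Matrix([[a, b], [c, d]])

A_sym,  A_antisym  = (A + A.T) / 2, (A - A.T) / 2
A_diag, A_antidiag = sp.diag(a, d), sp.Matrix([[0, b], [c, 0]])

comm = lambda X, Y: sp.simplify(X * Y - Y * X)
print(comm(A_sym, A_antisym))    # zero only if b = c, or a = d and b = -c
print(comm(A_diag, A_antidiag))  # zero only if a = d or b = c = 0
```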

Is there a chance that something like this could lead to a solution?

Update: thanks to Pink Elephants, who pointed out a flaw in my proof, I have modified the statement of the lemma and added some detail. I think it now works for my exercise.


I looked at the question linked by Marc van Leeuwen, but thought I would outline a different (though fairly standard) approach. It works for complex matrices. I'll write $\exp(A)$ for $e^{A}$ when $A$ is an $n \times n$ complex matrix. Note that $\exp(MAM^{-1}) = M\exp(A)M^{-1}$ for any invertible $n \times n$ matrix $M.$ Since $MAM^{-1}$ is in Jordan normal form for a suitable matrix $M,$ we first try to understand $\exp(A)$ when $A$ is in Jordan normal form. In that case, on considering the Jordan blocks of $A$ one at a time, we may write $A = D+N$, where $D$ is diagonal, $N$ is upper triangular with $0$s on the main diagonal, and $DN = ND.$ Then it is clear that $\exp(A) = \exp(D)\exp(N).$ Now $\exp(D)$ is diagonal, and $\exp(N)$ is upper triangular with $1$s on the main diagonal. It follows that if $R$ is the set of roots of the characteristic polynomial of $A,$ then $\exp(R)$ is the set of roots of the characteristic polynomial of $\exp(A),$ with algebraic multiplicities matching as expected. Since $MAM^{-1}$ and $M\exp(A)M^{-1}$ have the same characteristic polynomials as $A$ and $\exp(A)$ respectively, the same statement holds for any complex $n \times n$ matrix. This is all general theory.
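
The spectral-mapping statement above is also easy to confirm numerically (a sketch; the test matrix is random, and the two eigenvalue lists are compared after sorting):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

lhs = np.sort_complex(np.linalg.eigvals(expm(A)))   # roots of char. poly of exp(A)
rhs = np.sort_complex(np.exp(np.linalg.eigvals(A))) # exp of roots of char. poly of A
assert np.allclose(lhs, rhs)
```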

Now we can ask if there is a real $2 \times 2$ matrix $A$ such that $\exp(A) = \left( \begin{array}{clcr}-1 & 0\\0 & -2 \end{array} \right)$. If $A$ has eigenvalues $\alpha$ and $\beta$ (possibly equal), then $\exp(A)$ has eigenvalues $e^{\alpha}$ and $e^{\beta}.$ Hence we may suppose that $e^{\alpha} = -1$ and $e^{\beta} = -2,$ so $\alpha$ and $\beta$ are certainly non-real complex numbers. However, $A$ is a real matrix, so $\alpha$ and $\beta$ are roots of the same real quadratic equation, and hence must be complex conjugates. But then $e^{\alpha}$ and $e^{\beta}$ have the same absolute value, which is a contradiction.
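
To spell out the last step: writing $\alpha = x + iy$ with $x,y$ real, $$|e^{\alpha}| = e^{x}\,|\cos y + i\sin y| = e^{x},$$ so $|e^{\alpha}|$ depends only on $\operatorname{Re}\,\alpha$. Since $\beta = \bar{\alpha}$, this forces $|e^{\alpha}| = |e^{\beta}|$, while $|-1| = 1 \neq 2 = |-2|$.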