A weak converse of $AB=BA\implies e^Ae^B=e^Be^A$ from "Topics in Matrix Analysis" for matrices of algebraic numbers.

Solution 1:

The proof comes from Wermuth's "Two remarks on matrix exponential" (DOI 10.1016/0024-3795(89)90554-5).

It seems to me that it is enough to assume that no two distinct eigenvalues of $A$ (and likewise of $B$) differ by a nonzero integer multiple of $2\pi i$ (which is true if, but not only if, $\pi$ is transcendental over the field generated by their entries). This certainly holds if $A$ and $B$ have algebraic entries: their eigenvalues are then algebraic, while $2\pi i k$ is transcendental for every integer $k\neq 0$.
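To see why some hypothesis on the eigenvalues is needed, here is a quick numerical check (a NumPy/SciPy sketch of my own, not from the original post) of the standard counterexample: $A=\operatorname{diag}(2\pi i,\,0)$ has eigenvalues differing by exactly $2\pi i$, so $e^A=I$ commutes with every $e^B$, while $A$ itself need not commute with $B$.

```python
import numpy as np
from scipy.linalg import expm

# A has eigenvalues 2*pi*i and 0, which differ by 2*pi*i ...
A = np.diag([2j * np.pi, 0.0])
B = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)

# ... so e^A = I and the two exponentials trivially commute,
assert np.allclose(expm(A), np.eye(2))
assert np.allclose(expm(A) @ expm(B), expm(B) @ expm(A))

# but A and B themselves do not commute.
assert not np.allclose(A @ B, B @ A)
```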

The basic idea is to invert the exponential function: express $A$ and $B$ as power series (in fact, polynomials) in their exponentials.

Let $m(\lambda)=\prod_j (\lambda-\lambda_j)^{\mu_j}$ be the minimal polynomial of $A$. By assumption the values $e^{\lambda_j}$ are pairwise distinct, so by Hermite's interpolation theorem we can find a polynomial $f$ such that $g=f\circ \exp$ satisfies $g(\lambda_j)=\lambda_j$ for all $j$, and, whenever $\mu_j>1$, also $g'(\lambda_j)=1$ and $g^{(l)}(\lambda_j)=0$ for $2\leq l< \mu_j$.
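For a diagonalizable $A$ (all $\mu_j=1$) the derivative conditions are vacuous and $f$ is simply the Lagrange interpolant through the points $(e^{\lambda_j},\lambda_j)$. A small numerical sketch of that special case (NumPy/SciPy, my own illustration, not part of the original argument):

```python
import numpy as np
from scipy.linalg import expm

# A symmetric matrix: diagonalizable with real (here distinct) eigenvalues.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.5]])
lam = np.linalg.eigvalsh(A)

# Lagrange interpolation: f(e^{lambda_j}) = lambda_j for each eigenvalue.
coeffs = np.polyfit(np.exp(lam), lam, len(lam) - 1)

def matpoly(c, X):
    """Horner evaluation of a polynomial (highest-degree coefficient
    first) at a square matrix X."""
    Y = np.zeros_like(X)
    for ck in c:
        Y = Y @ X + ck * np.eye(X.shape[0])
    return Y

# f(e^A) recovers A.
assert np.allclose(matpoly(coeffs, expm(A)), A)
```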

Then $g(A)=A$: this is a standard consequence of the Jordan normal form, since a function applied to a Jordan block with eigenvalue $\lambda_j$ of size $\mu_j$ depends only on its derivatives of order $<\mu_j$ at $\lambda_j$, and those of $g$ were chosen to match the identity function. On the other hand $g(A)=f(e^A)$, so $A=f(e^A)$; similarly $B=h(e^B)$ for some polynomial $h$. Since $e^A$ and $e^B$ commute, so do any polynomials in them, hence $AB=f(e^A)h(e^B)=h(e^B)f(e^A)=BA$.
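A minimal illustration of the Jordan-block fact (my own sketch, not from the post): any $g(x)=x+c(x-\lambda)^m$ agrees with the identity function to order $m$ at $\lambda$, and therefore fixes a Jordan block of size $m$ with eigenvalue $\lambda$, even though $g$ is not the identity polynomial.

```python
import numpy as np

# 3x3 Jordan block with eigenvalue 1.
J = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
I = np.eye(3)

# g(x) = x + 2*(x - 1)^3 satisfies g(1) = 1, g'(1) = 1, g''(1) = 0,
# i.e. it matches the identity function to order 3 at the eigenvalue.
def g(X):
    return X + 2 * np.linalg.matrix_power(X - I, 3)

# (J - I)^3 = 0, so g(J) = J although g is not the identity polynomial.
assert np.allclose(g(J), J)
```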

Solution 2:

A broad-brush approach is as follows. We want to prove that if $e^A$ and $A$ have the same number of distinct eigenvalues, then $A$ is a polynomial in $e^A$. This is true for diagonal matrices (take the Lagrange interpolant through the points $(e^{\lambda_j},\lambda_j)$) and hence for diagonalizable matrices. Since the diagonalizable matrices are dense, it is true for all matrices.

We can avoid the density argument though. Any square matrix $A$ can be written in exactly one way as $A=D+N$ where $D$ is diagonalizable, $N$ is nilpotent, $DN=ND$, and both $D$ and $N$ are polynomials in $A$ (the Jordan–Chevalley decomposition). Since $D$ and $N$ commute and $N^k=0$ for some $k$, $$ e^A = e^D e^N = e^D\Bigl(I+N+\frac{N^2}{2!}+\cdots+\frac{N^{k-1}}{(k-1)!}\Bigr) = e^D + M $$ with $M=e^D(e^N-I)$. Note that $M^k=0$, because $e^N-I$ is nilpotent and commutes with $e^D$. As $e^D$ and $M$ are polynomials in $D$ and $N$ (for diagonalizable $D$, $e^D$ is a polynomial in $D$ by Lagrange interpolation on its eigenvalues), they are polynomials in $A$; since $e^D$ is diagonalizable, $M$ is nilpotent, and they commute, they are also the diagonalizable and nilpotent parts in the decomposition of $e^A$.
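Here is a quick numerical sanity check of this decomposition of $e^A$ (a NumPy/SciPy sketch of mine, not in the original answer), on a matrix already given in Jordan form so that $D$ and $N$ can be read off directly:

```python
import numpy as np
from scipy.linalg import expm

# A = D + N with D diagonalizable (here diagonal), N nilpotent, DN = ND.
D = 2.0 * np.eye(3)
N = np.diag([1.0, 1.0], k=1)  # strictly upper shift, N^3 = 0
A = D + N

# e^A = e^D e^N = e^D (I + N + N^2/2!), since N^3 = 0.
eN = np.eye(3) + N + (N @ N) / 2
assert np.allclose(expm(A), expm(D) @ eN)

# M = e^D (e^N - I) is the nilpotent part of e^A: M^3 = 0.
M = expm(D) @ (eN - np.eye(3))
assert np.allclose(np.linalg.matrix_power(M, 3), np.zeros((3, 3)))
assert np.allclose(expm(A), expm(D) + M)
```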

Now suppose $e^A$ and $e^B$ commute. Using the above we have $$ e^A = e^{D_A}+M_A,\quad e^B = e^{D_B}+M_B. $$ Then any two of the six matrices here commute (because the first three are polynomials in $e^A$ and the last three are polynomials in $e^B$), and in particular $e^{D_A}$ and $e^{D_B}$ commute. Since $e^A=e^{D_A}e^{N_A}$ etc., we see that $e^{N_A}$ and $e^{N_B}$ commute, which implies that $N_A$ and $N_B$ commute: each $N$ is recovered from $e^N$ by the terminating logarithm series $N=\sum_{k\geq 1}(-1)^{k+1}(e^N-I)^k/k$, a polynomial in $e^N$. So if $D_A$ and $D_B$ are polynomials in $e^{D_A}$ and $e^{D_B}$ respectively, then $AB=BA$. As $D_A$ and $D_B$ are diagonalizable, we are OK provided the number of distinct eigenvalues of $e^{D_A}$ is equal to the number of distinct eigenvalues of $D_A$ (and ditto for $e^{D_B}$ and $D_B$).
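A nilpotent matrix $N$ is itself a polynomial in $e^N$, via the terminating logarithm series $\log(I+X)=\sum_{k\geq 1}(-1)^{k+1}X^k/k$ with $X=e^N-I$ nilpotent; this is what lets commutativity of the exponentials transfer back to $N_A$ and $N_B$. A small numerical check (my sketch, NumPy/SciPy):

```python
import numpy as np
from scipy.linalg import expm

# Nilpotent N (4x4 shift): N^4 = 0.
N = np.diag([1.0, 1.0, 1.0], k=1)
E = expm(N)

# Terminating logarithm series: N = sum_{k=1}^{3} (-1)^{k+1} (E - I)^k / k,
# a polynomial in e^N (the series stops because E - I is nilpotent).
X = E - np.eye(4)
log_E = sum((-1) ** (k + 1) * np.linalg.matrix_power(X, k) / k
            for k in range(1, 4))
assert np.allclose(log_E, N)
```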