$\def\sp{\mathrm{Sym}^+}$Let $\sp \subset GL(n,\mathbb R)$ denote the manifold of positive-definite symmetric $n \times n$ matrices. I am interested in functions $A : \sp \to \sp$ that are equivariant under the natural conjugation action of $O(n)$; i.e. such that$$A(R^T X R) = R^T A(X) R$$ for all $X \in \sp, R \in O(n)$. By choosing $R \in O(n)$ to diagonalize $X$ and then letting $R$ range over reflection and permutation matrices, one can characterize these $A$ as exactly those of the form $$A(X) = \sum_{k=1}^n a(\lambda_k; \lambda_1, \ldots, \widehat{\lambda_k}, \ldots, \lambda_n)e_k \otimes e_k$$ where $\lambda_k>0$ are the eigenvalues of $X$ (listed with multiplicity) with corresponding orthonormal eigenvectors $e_k$, and $a : (0,\infty)^n \to (0,\infty)$ is symmetric in its last $n-1$ arguments. (The $\widehat \lambda_k$ denotes omission.)
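For concreteness, here is a minimal numerical sketch of this construction (the helper name `assemble_A` is mine, purely for illustration): it assembles $A(X)$ from the spectral decomposition of $X$ and checks the equivariance identity at a random point.

```python
import numpy as np

def assemble_A(a, X):
    """Assemble A(X) = sum_k a(lam_k; other eigenvalues) e_k e_k^T."""
    lam, E = np.linalg.eigh(X)           # eigenvalues ascending, orthonormal columns
    out = np.zeros_like(X)
    for k in range(len(lam)):
        others = np.delete(lam, k)       # the "hatted" (omitted) arguments
        out += a(lam[k], *others) * np.outer(E[:, k], E[:, k])
    return out

# An example a : (0, inf)^3 -> (0, inf), symmetric in its last 2 arguments:
a = lambda l, m1, m2: l / (1.0 + m1 * m2)

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
X = B @ B.T + 3 * np.eye(3)                          # a random point of Sym^+
R = np.linalg.qr(rng.standard_normal((3, 3)))[0]     # a random orthogonal matrix

lhs = assemble_A(a, R.T @ X @ R)
rhs = R.T @ assemble_A(a, X) @ R
print(np.max(np.abs(lhs - rhs)))                     # ~1e-15: equivariance holds
```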

Since $a(\lambda_1;\lambda_2,\ldots,\lambda_n) = A^{11}(\mathrm{diag}(\lambda_1,\ldots,\lambda_n))$, we know that $A \in C^\infty \implies a \in C^\infty$. My question is:

Does the converse hold? That is, if $a$ is smooth, can we conclude that $A$ is smooth?

In the analogous problem for $O(n)$-invariant maps $A : \sp \to \mathbb R$ (which reduce to symmetric functions $a : (0,\infty)^n \to \mathbb R$ of the eigenvalues), this can be solved using Glaeser's "differentiable Newton's theorem": a smooth symmetric function of the eigenvalues is a smooth function of the elementary symmetric polynomials of the eigenvalues, which are in turn smooth (indeed polynomial) functions of the matrix itself. However, I'm unsure how to transfer this kind of idea to the matrix-valued setting; all I can find are references about invariant scalars (e.g. Schwarz's theorem is a nice generalization of Glaeser's result, but still not obviously of use to me). My issue is that I don't know how to retain any regularity when "packing the eigenvalues back in", since the eigenspaces are not smooth functions of the matrix.
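To illustrate the scalar-case mechanism, here is a tiny check (reusing `X` from the sketch above) that a symmetric polynomial of the eigenvalues, here the power sum $\lambda_1^2+\lambda_2^2+\lambda_3^2$, is a polynomial in the characteristic-polynomial coefficients, which are themselves polynomial in the entries of $X$:

```python
# Newton's identity p_2 = e_1^2 - 2 e_2 relates the power sum of the
# eigenvalues to the characteristic-polynomial coefficients of X.
lam = np.linalg.eigvalsh(X)
coeffs = np.poly(X)                        # [1, -e1, e2, -e3] for a 3x3 matrix
e1, e2 = -coeffs[1], coeffs[2]
print(np.sum(lam**2) - (e1**2 - 2 * e2))   # ~0
```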

One way to think of this is as a generalization of functional calculus: if we restrict to $a$ that depends only on its first argument, then (from what I understand) the construction of $A$ from $a$ is exactly the functional calculus.
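In that special case the construction can be cross-checked against an off-the-shelf matrix function; for instance, with $a(\lambda_1;\lambda_2,\ldots,\lambda_n)=\log\lambda_1$ the assembled $A$ should agree with the matrix logarithm. A sketch, reusing `assemble_A` and `X` from above:

```python
from scipy.linalg import logm

f_only = lambda l, *rest: np.log(l)   # an a that ignores its last n-1 arguments
print(np.max(np.abs(assemble_A(f_only, X) - logm(X))))   # ~1e-14
```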

Some progress: I have managed to prove the polynomial version by finding a recurrence relation for equivariant matrices that induces Newton's identities on the eigenvalues:

If $a : (0,\infty)^n \to (0,\infty)$ is a polynomial symmetric in its last $n-1$ arguments, then the entries of the corresponding map $A : \sp \to \sp$ are polynomials in the entries of the input matrix.
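One concrete instance of this statement for $n=2$: taking $a(\lambda_1;\lambda_2)=\lambda_1^2\lambda_2$, each eigenvalue $\lambda_k$ of $M$ is mapped to $\lambda_k^2\lambda_{\text{other}}=\lambda_k\det(M)$, so $A(M)=\det(M)\,M$, whose entries are visibly polynomial in the entries of $M$. A quick numerical confirmation, again reusing `assemble_A`:

```python
a_poly = lambda l1, l2: l1**2 * l2
M = np.array([[2.0, 0.7], [0.7, 1.5]])   # an arbitrary point of Sym^+ (n = 2)
print(np.max(np.abs(assemble_A(a_poly, M) - np.linalg.det(M) * M)))  # ~1e-15
```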

However, in retrospect I'm not sure whether this helps at all in obtaining the smooth version. Any input from someone more familiar with this kind of stuff would be greatly appreciated; my representation/invariant/????? theory background is lacking.


Here is a partial answer: I give a simple sufficient condition for your implication when $n=2$. In that dimension, the general form of a matrix $M\in {\mathrm{Sym}}^+$ is

$$ M=\left(\begin{array}{cc} u & v \\ v & w \\ \end{array}\right)\tag{1} $$

with $u>0$, $w>0$, and $uw>v^2$. The characteristic polynomial of $M$ is $X^2-(u+w)X+uw-v^2$, and the eigenvalues are $\frac{u+w\pm s}{2}$, where $s=\sqrt{(u-w)^2+4v^2}$. Let us put $a_1=a(\frac{u+w- s}{2},\frac{u+w+ s}{2})$ and $a_2=a(\frac{u+w+ s}{2},\frac{u+w- s}{2})$. After some algebra (see the Appendix below), we find that

$$ A(M)=\frac{a_1+a_2}{2}I_2+\frac{a_2-a_1}{s}\left(\begin{array}{cc} \frac{u-w}{2} & v \\ v & \frac{w-u}{2} \\ \end{array}\right) \tag{2} $$
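Here is a quick numerical sanity check of (2) against the direct spectral construction (it reuses the `assemble_A` helper and the test matrix `M` from the sketches above; the choice of $a$ is arbitrary):

```python
def A_via_formula2(a, M):
    u, v, w = M[0, 0], M[0, 1], M[1, 1]
    c = (u + w) / 2
    s = np.hypot(u - w, 2 * v)            # sqrt((u-w)^2 + 4 v^2)
    a1, a2 = a(c - s / 2, c + s / 2), a(c + s / 2, c - s / 2)
    H, G = (a1 + a2) / 2, (a2 - a1) / s
    return H * np.eye(2) + G * (M - c * np.eye(2))   # formula (2)

a = lambda l1, l2: np.exp(l1) / (1 + l2)  # an arbitrary smooth positive a
print(np.max(np.abs(A_via_formula2(a, M) - assemble_A(a, M))))  # ~1e-15
```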

So it would suffice to show that the maps $G=\frac{a_2-a_1}{s}$ and $H=\frac{a_1+a_2}{2}$ are ${\cal C}^{\infty}$. Note that $G$ can be written in a denominator-free form: $G=\int_{0}^{1} g(t)\, dt$, where $g(t)=\frac{\partial a}{\partial \lambda_1}(p)-\frac{\partial a}{\partial \lambda_2}(p)$ with $p=(c+(t-\frac{1}{2})s,\ c-(t-\frac{1}{2})s)$ and $c=\frac{u+w}{2}$. This is just the fundamental theorem of calculus applied to $t\mapsto a(p(t))$, whose derivative is $s\,g(t)$ and whose values at $t=0$ and $t=1$ are $a_1$ and $a_2$.
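And a quadrature check of the denominator-free form, with the same $a$ and $M$ as above and the partial derivatives of $a$ written out in closed form:

```python
from scipy.integrate import quad

d1a = lambda l1, l2: np.exp(l1) / (1 + l2)          # da/d(lambda_1)
d2a = lambda l1, l2: -np.exp(l1) / (1 + l2) ** 2    # da/d(lambda_2)

u, v, w = M[0, 0], M[0, 1], M[1, 1]
c, s = (u + w) / 2, np.hypot(u - w, 2 * v)
a1, a2 = a(c - s / 2, c + s / 2), a(c + s / 2, c - s / 2)

g = lambda t: (d1a(c + (t - 0.5) * s, c - (t - 0.5) * s)
               - d2a(c + (t - 0.5) * s, c - (t - 0.5) * s))
print((a2 - a1) / s - quad(g, 0.0, 1.0)[0])         # ~1e-15: G = int_0^1 g
```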

Appendix: computation of $A(M)$

There is a matrix $R\in SO(2)$ such that
$$ M=R^{T}\left(\begin{array}{cc} \frac{u+w- s}{2} & 0 \\ 0 & \frac{u+w+ s}{2} \\ \end{array}\right)R= \frac{u+w}{2}I_2+\frac{s}{2}R^{T}\left(\begin{array}{cc} -1 & 0 \\ 0 & 1 \\ \end{array}\right)R \tag{3} $$ Applying the equivariance identity $A(R^T X R)=R^T A(X) R$ with $X$ the diagonal matrix above, we deduce $$ A(M) = R^T\left(\begin{array}{cc} a_1 & 0 \\ 0 & a_2 \\ \end{array}\right)R= \frac{a_1+a_2}{2}I_2+\frac{a_2-a_1}{2}R^{T}\left(\begin{array}{cc} -1 & 0 \\ 0 & 1 \\ \end{array}\right)R \tag{4} $$ Solving (3) for the conjugated reflection gives $$ R^{T}\left(\begin{array}{cc} -1 & 0 \\ 0 & 1 \\ \end{array}\right)R=\frac{2}{s}\left(M-\frac{u+w}{2}I_2\right) \tag{5} $$ Injecting (5) into (4), we obtain $$ A(M) = \frac{a_1+a_2}{2}I_2+\frac{a_2-a_1}{s}\left(M-\frac{u+w}{2}I_2\right) = \frac{a_1+a_2}{2}I_2+\frac{a_2-a_1}{s}\left(\begin{array}{cc} \frac{u-w}{2} & v \\ v & \frac{w-u}{2} \\ \end{array}\right) $$ which is exactly (2).