How to understand the spectral decomposition geometrically?

Let $A$ be a $k\times k$ positive definite symmetric matrix. By spectral decomposition, we have

$$A = \lambda_1e_1e_1'+ \cdots + \lambda_ke_ke_k'$$

and

$$A^{-1} = \sum_{i=1}^k\frac{1}{\lambda_i}e_ie_i'$$

How to understand spectral decomposition and the relationship between $A$ and $A^{-1}$ geometrically?


You require the matrix $A$ to be positive definite, but it need not be positive definite to be diagonalizable. In fact, over the complex numbers we only need $A$ to commute with its adjoint $A^\dagger$ (the transpose and complex conjugate of the original); $A$ is then called a normal matrix. That is, if $A\cdot A^\dagger=A^\dagger\cdot A$, then there is a basis of $\mathbb C^k$ formed by a complete set of linearly independent orthonormal eigenvectors $\lvert e_i\rangle$, corresponding to eigenvalues $\lambda_i$ (possibly repeated) of $A$, so that $$A=\sum_{i=1}^k \lambda_i\lvert e_i\rangle\langle e_i\rvert\;\; \Rightarrow\;\; A^{-1}=\sum_{i=1}^k \frac{1}{\lambda_i}\lvert e_i\rangle\langle e_i\rvert.$$ The $\lvert e_i\rangle\langle e_i\rvert$ are written in Dirac's notation and are your $e_ie'_i$.

Concretely, each $\lvert e_i\rangle\langle e_i\rvert$ is a projector onto the line spanned by $\lvert e_i\rangle$: writing the scalar product as $\langle e_i\,\vert\,v\rangle$, each projector annihilates all the components of $\lvert v\rangle$ except $v_i\lvert e_i\rangle$, by orthonormality of $\{\lvert e_j\rangle\}_{j=1}^k$; the number $\langle e_i\,\vert\,v\rangle$ is the (signed) length of the projection of $\lvert v\rangle$ onto the line spanned by the unit eigenvector $\lvert e_i\rangle$. This is the "spectral decomposition" of $A$: it gives the action of the matrix $A$ as a linear superposition of projectors.

In fact, since an eigenvalue $\lambda_i$ may appear more than once (with multiplicity $n_i$), you can collect the $n_i$ elementary projectors $\lvert e_j\rangle\langle e_j\rvert$ sharing the common eigenvalue $\lambda_i$ into one projector $P_i:=\sum_{j:\,Ae_j=\lambda_i e_j}\lvert e_j\rangle\langle e_j\rvert$, so that your matrix just expands over the $m\leq k$ distinct eigenvalues: $$A=\sum_{i=1}^m \lambda_i P_i\;\; \Rightarrow\;\; A^{-1}=\sum_{i=1}^m \frac{1}{\lambda_i}P_i.$$
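These two formulas are easy to check numerically. Here is a minimal sketch in Python/NumPy (not part of the original question; the matrix is just a randomly generated positive definite example) verifying that rebuilding $A$ and $A^{-1}$ from the eigenpairs reproduces the matrix and its inverse:

```python
import numpy as np

# Build a symmetric positive definite A (assumption: any SPD example will do).
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
A = B @ B.T + 3 * np.eye(3)

# eigh returns eigenvalues in ascending order; columns of E are orthonormal eigenvectors.
lam, E = np.linalg.eigh(A)

# Spectral decomposition: A = sum_i lambda_i e_i e_i'
A_rebuilt = sum(l * np.outer(e, e) for l, e in zip(lam, E.T))
# And the inverse: A^{-1} = sum_i (1/lambda_i) e_i e_i'
A_inv = sum((1 / l) * np.outer(e, e) for l, e in zip(lam, E.T))

assert np.allclose(A, A_rebuilt)
assert np.allclose(A_inv, np.linalg.inv(A))
```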

Thus, to interpret geometrically what $A$ and $A^{-1}$ do, we just have to understand the action of the projectors $P_i$. This is straightforward: $P_i$ is a sum of elementary projectors onto the lines spanned by the eigenvectors with eigenvalue $\lambda_i$, so $P_i\lvert v\rangle$ projects $\lvert v\rangle$ onto the linear subspace (a line, plane, hyperplane...) spanned by those eigenvectors; that is, $P_i\lvert v\rangle$ is the vector component of $\lvert v\rangle$ in that subspace. In particular $\lvert v\rangle=\sum_i P_i\lvert v\rangle$.
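To see the projectors at work, here is a small sketch (my own example, with a deliberately repeated eigenvalue) showing that collecting elementary projectors gives an eigenspace projector and that the $P_i$ resolve the identity:

```python
import numpy as np

# A symmetric matrix with eigenvalues 1, 3, 3 (the eigenvalue 3 has a 2-dim eigenspace).
A = np.array([[2., 1., 0.],
              [1., 2., 0.],
              [0., 0., 3.]])
lam, E = np.linalg.eigh(A)          # lam == [1., 3., 3.]

# Elementary projectors |e_i><e_i| onto each eigenline.
P = [np.outer(e, e) for e in E.T]

# Collect the two projectors sharing eigenvalue 3 into one eigenspace projector.
P3 = P[1] + P[2]
assert np.allclose(P3 @ P3, P3)             # still a projector: P^2 = P
assert np.allclose(P[0] + P3, np.eye(3))    # resolution of the identity

# Any vector is the sum of its components in the eigenspaces: v = sum_i P_i v.
v = np.array([1., -2., 0.5])
assert np.allclose(P[0] @ v + P3 @ v, v)
```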

Therefore we can interpret the spectral decomposition of $A$ geometrically like this: a normal $k\times k$ matrix acting on vectors of $\mathbb C^k$ corresponds to a linear map, an endomorphism of $\mathbb C^k$, which takes vectors and linearly gives new vectors; its action on a vector $\lvert v\rangle$ can be seen as acting separately on the components of the vector in each of the $m$ eigenspaces of $A$, which are linear subspaces of $\mathbb C^k$ of dimension $n_i$, one for each possible eigenvalue $\lambda_i$ of $A$. The eigenspaces are invariant subspaces: $A$ transforms vectors in them to new vectors in them, without mixing eigenspaces (e.g. it takes vectors of an eigenplane to vectors in the same plane).

Hence $\lambda_i P_i\lvert v\rangle$ takes the component of $\lvert v\rangle$ in the eigenspace spanned by the eigenvectors corresponding to the eigenvalue $\lambda_i$ and scales that component by $\lambda_i$, i.e. makes it bigger, equal, or smaller according as $|\lambda_i|>1$, $|\lambda_i|=1$, or $|\lambda_i|<1$ (and reverses its orientation if $\lambda_i <0$). Thus the action of $A$ on $\lvert v\rangle$ stretches its components on each of the eigenspaces by a factor $\lambda_i$, giving in the end a new vector with those new components. The action of $A^{-1}$ is analogous but with each stretching factor inverted (so what $A$ scales up, $A^{-1}$ scales down, and vice versa).

For example, if $A$ is a positive definite real symmetric matrix, as you say in the question, then its eigenvalues are all positive, since $\langle e_i\rvert A\lvert e_i\rangle=\lambda_i > 0$ by positivity and orthonormality of $\{\lvert e_i\rangle\}_{i=1}^k$. In this particular case, the action of $A$ on a vector does not change the orientation of its projections onto any of the eigenspaces of $A$; it just scales them (in contrast to the example of the picture below).
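This "stretch each eigenspace component" picture can be checked directly. A sketch with a concrete $2\times 2$ symmetric matrix of my own choosing (eigenvalues $4$ and $2$), verifying that $A$ scales the components and $A^{-1}$ undoes exactly those scalings:

```python
import numpy as np

A = np.array([[3., 1.],
              [1., 3.]])               # symmetric, eigenvalues 4 and 2
e1 = np.array([1., 1.]) / np.sqrt(2)   # eigenvector for lambda = 4
e2 = np.array([1., -1.]) / np.sqrt(2)  # eigenvector for lambda = 2

v = np.array([2., 0.])

# A stretches the e1-component of v by 4 and the e2-component by 2:
stretched = 4 * (e1 @ v) * e1 + 2 * (e2 @ v) * e2
assert np.allclose(A @ v, stretched)

# A^{-1} inverts each stretching factor (1/4 and 1/2):
shrunk = (1/4) * (e1 @ v) * e1 + (1/2) * (e2 @ v) * e2
assert np.allclose(np.linalg.inv(A) @ v, shrunk)
```

Since both eigenvalues are positive, neither component ever flips orientation, as claimed above for positive definite matrices.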

*[image: an example where a negative eigenvalue reverses the orientation of a component]*

ADDITION: A similar geometric interpretation can be attempted for the canonical Jordan form of a matrix. In that general case, besides the diagonal terms, cross terms of type $\lvert e_{i+1}\rangle\langle e_i\rvert$ (or $\lvert e_{i-1}\rangle\langle e_i\rvert$, depending on the ordering of the generalized eigenbasis) appear in the decomposition of $A$. Therefore, along with the scaling of the component in the direction of a generalized eigenvector, there is also a correction by that component in the direction of the previous (or next) generalized eigenvector of the basis. For finite dimensional vector spaces over any field, the matrices or linear maps having all their eigenvalues in that field (i.e. whose characteristic polynomial splits into linear factors over it) always have a Jordan canonical form; thus, in those cases (e.g. always over the complex numbers) all matrices/linear maps can be understood geometrically by their action on vectors, as a composition of scalings and displacements of their projections onto each generalized eigenspace. In this case, however, the inverse of a matrix in Jordan form is not as simple, and its geometric interpretation is not so easily related.