Understanding Spectral Theorem

Solution 1:

(1) If we take as known that $V$ has a basis of eigenvectors, then any vector in $V$ can be written as a linear combination of eigenvectors; since every eigenvector belongs to some $W_i$, the result follows.

(2) This is an assumption made in order to prove that the sum is direct.

(3) This holds because it's the inner product of two vectors, the second of which is zero (see (2)).

(4) $E_i$ is not a vector; it is the projection onto the eigenspace $W_i$. Since the sum of the eigenspaces is direct, taking a basis for each $W_i$ yields a basis of $V$. In this basis the matrix of $E_i$ is diagonal, with $1$'s at the entries corresponding to $W_i$ and $0$'s elsewhere, so the sum of these matrices is the identity matrix.
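To make (4) concrete, here is a made-up illustration (the dimensions are chosen only for the example): if $\dim W_1 = 2$ and $\dim W_2 = 1$, then in a basis assembled from bases of $W_1$ and $W_2$,

$$ [E_1] = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \quad [E_2] = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad [E_1] + [E_2] = I. $$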

Solution 2:

Here's an example:

$$ P = \begin{pmatrix} 0 & \frac1{\sqrt2} & \frac1{\sqrt2} \\ 0 & \frac1{\sqrt2} & -\frac1{\sqrt2} \\ 1 & 0 & 0 \end{pmatrix};\; T = P \begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 2 \end{pmatrix} P^T = \begin{pmatrix} \frac12 & -\frac32 & 0 \\ -\frac32 & \frac12 & 0 \\ 0 & 0 & -1 \end{pmatrix}. $$

There are two eigenvalues: $c_1 = -1$ and $c_2 = 2$. The orthogonal projections are

\begin{align} E_1 &= P \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}P^T = \begin{pmatrix} \frac12 & \frac12 & 0 \\ \frac12 & \frac12 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \\ E_2 &= P \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}P^T = \begin{pmatrix} \frac12 & -\frac12 & 0 \\ -\frac12 & \frac12 & 0 \\ 0 & 0 & 0 \end{pmatrix}. \end{align}

This works in general. To obtain $E_i$, write $T = PDP^T$ and replace each eigenvalue in $D$ by $1$ if it equals $c_i$ and by $0$ otherwise.
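As a sanity check, this recipe can be verified numerically; here is a short NumPy sketch (the variable names `P`, `D`, `E1`, `E2` are my own labels for the matrices above):

```python
import numpy as np

# The orthogonal matrix P and diagonal D from the example above.
s = 1 / np.sqrt(2)
P = np.array([[0,  s,  s],
              [0,  s, -s],
              [1,  0,  0]])
D = np.diag([-1.0, -1.0, 2.0])
T = P @ D @ P.T

# Recipe: to get E_i, replace each eigenvalue in D by 1 where it
# equals c_i and by 0 otherwise.
E1 = P @ np.diag([1.0, 1.0, 0.0]) @ P.T   # c_1 = -1
E2 = P @ np.diag([0.0, 0.0, 1.0]) @ P.T   # c_2 = 2

print(np.round(E1, 10))
print(np.round(E2, 10))
```

Running this reproduces the two projection matrices displayed above.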

You should convince yourself that

  1. $$ E_1 + E_2 + \dots + E_k = P \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix} P^T = I. $$

  2. $$ c_1 E_1 + c_2 E_2 + \dots + c_k E_k = PDP^T = T. $$

  3. $E_i$ projects onto the $c_i$-eigenspace. For a start: multiplying $P$ by the modified diagonal matrix zeroes out every column of $P$ except those corresponding to $c_i$; the surviving columns are a basis for the $c_i$-eigenspace, so in particular the column space (image) of $E_i$ is contained in the $c_i$-eigenspace.
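If it helps, all three claims can be checked numerically on the example; a minimal NumPy sketch (variable names are mine):

```python
import numpy as np

# Rebuild the example: T = P D P^T with eigenvalues -1, -1, 2.
s = 1 / np.sqrt(2)
P = np.array([[0, s, s], [0, s, -s], [1, 0, 0]])
T = P @ np.diag([-1.0, -1.0, 2.0]) @ P.T
E1 = P @ np.diag([1.0, 1.0, 0.0]) @ P.T
E2 = P @ np.diag([0.0, 0.0, 1.0]) @ P.T

assert np.allclose(E1 + E2, np.eye(3))       # 1. resolution of the identity
assert np.allclose(-1 * E1 + 2 * E2, T)      # 2. spectral decomposition of T
# 3. E_i are orthogonal projections: idempotent and symmetric.
assert np.allclose(E1 @ E1, E1) and np.allclose(E1.T, E1)
# E_2's image lies in the 2-eigenspace: T(E_2 x) = 2 (E_2 x) for any x.
x = np.array([1.0, 2.0, 3.0])
assert np.allclose(T @ (E2 @ x), 2 * (E2 @ x))
```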

Comments:

  1. This is all easy to see if you don't write the $P$'s and $P^T$'s and instead write the matrices in terms of the eigenbasis rather than the standard basis.

  2. This works for any diagonalizable matrix if you replace $P^T$ by $P^{-1}$. I'm only using a self-adjoint (i.e. symmetric) operator because that's what the Spectral Theorem applies to. For self-adjoint matrices the special facts are: 1) they are always diagonalizable, and 2) they can be diagonalized by an orthogonal matrix.
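As a sketch of the general diagonalizable (not necessarily symmetric) case, here is a small NumPy example; the matrix `A` is my own, chosen only for illustration. Note that the resulting projections are oblique rather than orthogonal:

```python
import numpy as np

# A diagonalizable but non-symmetric matrix, so we must use P^{-1}, not P^T.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])           # eigenvalues 2 and 3
evals, P = np.linalg.eig(A)          # columns of P are eigenvectors
Pinv = np.linalg.inv(P)

# Same recipe: replace eigenvalues by 1 (if equal to c_i) or 0 (otherwise).
E = [P @ np.diag(np.isclose(evals, c).astype(float)) @ Pinv
     for c in (2.0, 3.0)]

assert np.allclose(E[0] + E[1], np.eye(2))   # sum is the identity
assert np.allclose(2 * E[0] + 3 * E[1], A)   # weighted sum recovers A
assert not np.allclose(E[1].T, E[1])         # oblique: E_2 is not symmetric
```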