Eigenvectors of real symmetric matrices are orthogonal
Can someone point me to a paper, or show here, why symmetric matrices have orthogonal eigenvectors? In particular, I'd like to see a proof that for a real symmetric matrix $A$ there exists a decomposition $A = Q\Lambda Q^{-1} = Q\Lambda Q^{T}$ where $Q$ is orthogonal and $\Lambda$ is diagonal.
For any real matrix $A$ and any vectors $\mathbf{x}$ and $\mathbf{y}$, we have $$\langle A\mathbf{x},\mathbf{y}\rangle = \langle\mathbf{x},A^T\mathbf{y}\rangle.$$ Now assume that $A$ is symmetric, and $\mathbf{x}$ and $\mathbf{y}$ are eigenvectors of $A$ corresponding to distinct eigenvalues $\lambda$ and $\mu$. Then $$\lambda\langle\mathbf{x},\mathbf{y}\rangle = \langle\lambda\mathbf{x},\mathbf{y}\rangle = \langle A\mathbf{x},\mathbf{y}\rangle = \langle\mathbf{x},A^T\mathbf{y}\rangle = \langle\mathbf{x},A\mathbf{y}\rangle = \langle\mathbf{x},\mu\mathbf{y}\rangle = \mu\langle\mathbf{x},\mathbf{y}\rangle.$$ Therefore, $(\lambda-\mu)\langle\mathbf{x},\mathbf{y}\rangle = 0$. Since $\lambda-\mu\neq 0$, it follows that $\langle\mathbf{x},\mathbf{y}\rangle = 0$, i.e., $\mathbf{x}\perp\mathbf{y}$.
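A quick numerical illustration (not a proof): the sketch below, assuming NumPy, builds a random symmetric matrix and feeds it to the *general* eigensolver `np.linalg.eig`, which does not enforce orthogonality on its output, so any orthogonality we observe reflects the symmetry of $A$. The size and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2            # symmetrize: A is real symmetric

# General eigensolver: unlike np.linalg.eigh, it does not impose
# orthogonality on the eigenvectors it returns.
eigvals, eigvecs = np.linalg.eig(A)

# A random symmetric matrix almost surely has distinct eigenvalues,
# so by the argument above every pair of eigenvectors is orthogonal.
# The columns come back real and unit-norm, so the Gram matrix is I.
gram = eigvecs.T @ eigvecs
print(np.allclose(gram, np.eye(4)))   # True
```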
Now find an orthonormal basis for each eigenspace; since the eigenspaces are mutually orthogonal, these vectors together form an orthonormal set in $\mathbb{R}^n$. Finally, since symmetric matrices are diagonalizable, this set will be a basis (just count dimensions). The result you want now follows.
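Concretely, the decomposition can be assembled and checked numerically. A minimal sketch, again assuming NumPy; `np.linalg.eigh` is the solver for symmetric/Hermitian input and returns an orthonormal eigenbasis directly:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2                  # real symmetric test matrix

eigvals, Q = np.linalg.eigh(A)     # columns of Q: orthonormal eigenvectors
Lam = np.diag(eigvals)

print(np.allclose(Q.T @ Q, np.eye(5)))   # Q is orthogonal: Q^T Q = I
print(np.allclose(A, Q @ Lam @ Q.T))     # A = Q Lambda Q^T
```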
Since being symmetric is a property of an operator, not just of its associated matrix, let me use $\mathcal{A}$ for the linear operator whose associated matrix in the standard basis is $A$. Arturo and Will proved that a real symmetric operator $\mathcal{A}$ has real eigenvalues (and hence real eigenvectors) and that eigenvectors corresponding to distinct eigenvalues are orthogonal. One question still stands: how do we know that there are no generalized eigenvectors of rank greater than 1, that is, that all Jordan blocks are one-dimensional? Indeed, by invoking the theorem that any symmetric matrix is diagonalizable, Arturo effectively threw the baby out with the bathwater: showing that a matrix is diagonalizable is tautologically equivalent to showing that it has a full set of eigenvectors. Assuming this as a given dismisses half of the question: we were asked to show that $\Lambda$ is diagonal, and not just a generic Jordan form. Here I will untangle this bit of circular logic.
We proceed by induction on the number of eigenvectors: it turns out that finding an eigenvector of a symmetric matrix (and at least one exists for any matrix over $\mathbb{C}$, and it can be taken real here since the eigenvalues of a symmetric matrix are real) always allows us to generate another eigenvector. So we will run out of dimensions before we run out of eigenvectors, making the matrix diagonalizable.
Suppose $\lambda_1$ is an eigenvalue of $A$ and there exists at least one eigenvector $\boldsymbol{v}_1$ such that $A\boldsymbol{v}_1=\lambda_1 \boldsymbol{v}_1$. Choose an orthonormal basis $\boldsymbol{e}_i$ so that $\boldsymbol{e}_1=\boldsymbol{v}_1$. The change of basis is represented by an orthogonal matrix $V$. In this new basis the matrix associated with $\mathcal{A}$ is $$A_1=V^TAV.$$ It is easy to check that $\left(A_1\right)_{11}=\lambda_1$ and that the remaining entries $\left(A_1\right)_{1i}$ and $\left(A_1\right)_{i1}$ are zero: the first column of $A_1$ is $V^TA\boldsymbol{v}_1=\lambda_1 V^T\boldsymbol{v}_1=\lambda_1\boldsymbol{e}_1$ (since $\boldsymbol{v}_1$ is the first column of $V$), and the first row vanishes likewise because $A_1$ is symmetric. In other words, $A_1$ looks like this: $$\left( \begin{array}{c|ccc} \lambda_1 & \\ \hline & & \\ & & B_1 & \\ & & \end{array} \right)$$ Thus the operator $\mathcal{A}$ breaks down into a direct sum of two operators: $\lambda_1$ in the subspace $\mathcal{L}\left(\boldsymbol{v}_1\right)$ ($\mathcal{L}$ stands for linear span) and a symmetric operator $\mathcal{A}_1=\mathcal{A}\mid_{\mathcal{L}\left(\boldsymbol{v}_1\right)^{\bot}}$ whose associated $(n-1)\times (n-1)$ matrix is $B_1=\left(A_1\right)_{i > 1,j > 1}$. Being symmetric, $B_1$ has an eigenvector $\boldsymbol{v}_2$, which has to be orthogonal to $\boldsymbol{v}_1$ (it lies in $\mathcal{L}\left(\boldsymbol{v}_1\right)^{\bot}$), and the same procedure applies: change the basis again so that $\boldsymbol{e}_1=\boldsymbol{v}_1$ and $\boldsymbol{e}_2=\boldsymbol{v}_2$ and consider $\mathcal{A}_2=\mathcal{A}\mid_{\mathcal{L}\left(\boldsymbol{v}_1,\boldsymbol{v}_2\right)^{\bot}}$, etc. After $n$ steps we get a diagonal matrix $A_n$.
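One step of this deflation is easy to play out numerically. Below is a minimal sketch, assuming NumPy; completing $\boldsymbol{v}_1$ to an orthonormal basis via QR is just one convenient way of choosing the $\boldsymbol{e}_i$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                  # real symmetric

# One eigenpair (any method of finding a single eigenvector would do).
eigvals, eigvecs = np.linalg.eigh(A)
lam1, v1 = eigvals[0], eigvecs[:, 0]

# Complete v1 to an orthonormal basis: QR of a matrix whose first
# column is v1 yields an orthogonal V with V[:, 0] = +/- v1.
V, _ = np.linalg.qr(np.column_stack([v1, rng.standard_normal((n, n - 1))]))

A1 = V.T @ A @ V                   # change of basis
print(np.round(A1, 10))            # first row/column: (lam1, 0, ..., 0)

B1 = A1[1:, 1:]                    # the (n-1) x (n-1) trailing block
print(np.allclose(B1, B1.T))       # True: B1 is symmetric, so recurse
```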
There is a slightly more elegant proof that does not involve the associated matrices: let $\boldsymbol{v}_1$ be an eigenvector of $\mathcal{A}$ and $\boldsymbol{v}$ be any vector such that $\boldsymbol{v}_1\bot \boldsymbol{v}$. Then $$\left(\mathcal{A}\boldsymbol{v},\boldsymbol{v}_1\right)=\left(\boldsymbol{v},\mathcal{A}\boldsymbol{v}_1\right)=\lambda_1\left(\boldsymbol{v},\boldsymbol{v}_1\right)=0.$$ This means that $\mathcal{A}$ maps ${\mathcal{L}\left(\boldsymbol{v}_1\right)^{\bot}}$ into itself, so the restriction $\mathcal{A}_1=\mathcal{A}\mid_{\mathcal{L}\left(\boldsymbol{v}_1\right)^{\bot}}$ is a well-defined operator on this $(n-1)$-dimensional subspace. $\mathcal{A}_1$ is symmetric (it inherits $\left(\mathcal{A}_1\boldsymbol{x},\boldsymbol{y}\right)=\left(\boldsymbol{x},\mathcal{A}_1\boldsymbol{y}\right)$ from $\mathcal{A}$) and thus has an eigenvector $\boldsymbol{v}_2$, which will be orthogonal to $\boldsymbol{v}_1$.
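The invariance step can also be checked numerically: take an eigenvector $\boldsymbol{v}_1$, pick any $\boldsymbol{v}\perp\boldsymbol{v}_1$, and observe that $\mathcal{A}\boldsymbol{v}$ stays orthogonal to $\boldsymbol{v}_1$. A small sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                # real symmetric

eigvals, eigvecs = np.linalg.eigh(A)
v1 = eigvecs[:, 0]               # a (unit) eigenvector of A

# Any vector with its v1-component projected out is orthogonal to v1.
w = rng.standard_normal(4)
v = w - np.dot(w, v1) * v1

print(np.dot(v, v1))             # ~0 by construction
print(np.dot(A @ v, v1))         # ~0: A maps v1-perp into v1-perp
```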