Finding $P$ such that $P^TAP$ is a diagonal matrix
Hermite Reduction.
SEE ALSO:
Orthogonal basis for this indefinite symmetric bilinear form
Transforming quadratic forms, how is this theorem called?
What is the difference between using $PAP^{-1}$ and $PAP^{T}$ to diagonalize a matrix?
When you have a symmetric matrix of integers, you may use Hermite's method for diagonalizing. The order the question wants is $P^T A P = D,$ so I will need to take an inverse at the end.
Make a column vector $$ V = \left( \begin{array}{c} x \\ y \end{array} \right) $$ and write out $$ V^T A V = 2 x^2 + 6 xy + 4 y^2. $$ Next, we eliminate all terms involving $x$ by completing the square: $$ \left( x + \frac{3}{2} y \right)^2 = x^2 + 3 xy + \frac{9}{4} y^2, $$ and $$ 2 \left( x + \frac{3}{2} y \right)^2 = 2x^2 + 6 xy + \frac{9}{2} y^2. $$ As a result, $$ 2 \left( x + \frac{3}{2} y \right)^2 - \frac{1}{2} y^2 = 2 x^2 + 6 xy + 4 y^2 . $$
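As a quick sanity check on the algebra, here is a minimal sympy sketch (assuming sympy is available) that expands the completed square back into the original quadratic form:

```python
from sympy import Rational, expand, symbols

x, y = symbols('x y')
# Expand 2(x + 3/2 y)^2 - 1/2 y^2 and confirm it reproduces the form.
completed = 2*(x + Rational(3, 2)*y)**2 - Rational(1, 2)*y**2
print(expand(completed))  # 2*x**2 + 6*x*y + 4*y**2
```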
In matrix form, the direction I just computed is $$ \left( \begin{array}{cc} 1 & 0 \\ \frac{3}{2} & 1 \end{array} \right) \left( \begin{array}{cc} 2 & 0 \\ 0 & -\frac{1}{2} \end{array} \right) \left( \begin{array}{cc} 1 & \frac{3}{2} \\ 0 & 1 \end{array} \right) = \left( \begin{array}{cc} 2 & 3 \\ 3 & 4 \end{array} \right) $$
With $$ Q = \left( \begin{array}{cc} 1 & \frac{3}{2} \\ 0 & 1 \end{array} \right) $$ notice that the rows correspond exactly to the linear substitutions: the first row means $x + \frac{3}{2} y$ and the second row means $y.$
What I have done so far is in the order $Q^T D Q = A.$ All we need to do is take $P = Q^{-1},$ which is easier than usual because $\det Q = 1.$ The result is $$ \left( \begin{array}{cc} 1 & 0 \\ -\frac{3}{2} & 1 \end{array} \right) \left( \begin{array}{cc} 2 & 3 \\ 3 & 4 \end{array} \right) \left( \begin{array}{cc} 1 & -\frac{3}{2} \\ 0 & 1 \end{array} \right) = \left( \begin{array}{cc} 2 & 0 \\ 0 & -\frac{1}{2} \end{array} \right) $$
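If you want to verify $P^T A P = D$ with exact rational arithmetic, a short sympy sketch (again assuming sympy) looks like this:

```python
from sympy import Matrix, Rational

A = Matrix([[2, 3], [3, 4]])
Q = Matrix([[1, Rational(3, 2)], [0, 1]])
P = Q.inv()  # equals [[1, -3/2], [0, 1]] since det Q = 1
print(P.T * A * P)  # Matrix([[2, 0], [0, -1/2]])
```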
The second example in the question, with a 3 by 3 matrix, is $$ x^2 + 4 y^2 + 4 z^2 + 16 yz + 4 zx + 4 xy. $$ This is an example where an extra trick must be used: $$ (x+2y+2z)^2 = x^2 + 4 y^2 + 4 z^2 + 8 yz + 4 zx + 4 xy. $$ All that remains to construct is $8yz$, because we have already used up the $y^2$ and $z^2$ terms. The trick is that $(y+z)^2 - (y-z)^2 = 4yz,$ so $$ (x+2y+2z)^2 + 2 (y+z)^2 -2 (y-z)^2= x^2 + 4 y^2 + 4 z^2 + 16 yz + 4 zx + 4 xy. $$ Thus the diagonal matrix gets entries $1,2,-2$ and, in this direction,
$$ Q = \left( \begin{array}{ccc} 1 & 2 & 2 \\ 0 & 1 & 1 \\ 0 & 1 & -1 \end{array} \right) $$ and then $P = Q^{-1}$
$$ P = \left( \begin{array}{ccc} 1 & -2 & 0 \\ 0 & \frac{1}{2} & \frac{1}{2} \\ 0 & \frac{1}{2} & - \frac{1}{2} \end{array} \right) $$
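The same style of sympy check (again just a sketch) works for the 3 by 3 example; note that the matrix of the quadratic form has the cross-term coefficients halved on the off-diagonal:

```python
from sympy import Matrix, diag

# Matrix of x^2 + 4y^2 + 4z^2 + 16yz + 4zx + 4xy:
# diagonal entries are the square coefficients, off-diagonal
# entries are half the cross-term coefficients.
A = Matrix([[1, 2, 2],
            [2, 4, 8],
            [2, 8, 4]])
Q = Matrix([[1, 2, 2],
            [0, 1, 1],
            [0, 1, -1]])
P = Q.inv()
print(P.T * A * P == diag(1, 2, -2))  # True
```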
Generally, the easiest approach to diagonalization is to calculate the eigenvalues and corresponding eigenvectors and use them to form an orthonormal eigenbasis.
Such an orthogonal matrix is guaranteed to exist by the Spectral Theorem, since our matrix $A$ is real symmetric.
Step 1: calculate the eigenvalues
Compute the characteristic polynomial: $\det(A-\lambda I) = (2-\lambda)(4-\lambda) - 3\cdot 3 = \lambda^2 - 6\lambda + 8 - 9 = \lambda^2 - 6\lambda - 1$.
The eigenvalues are the roots of this polynomial; the quadratic formula gives $\lambda = \frac{6\pm\sqrt{36+4}}{2}=3\pm \sqrt{10}$.
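The eigenvalues can be confirmed symbolically with sympy (a sketch, assuming sympy is available):

```python
from sympy import Matrix

A = Matrix([[2, 3], [3, 4]])
# eigenvals() returns a dict {eigenvalue: algebraic multiplicity}.
print(A.eigenvals())  # {3 - sqrt(10): 1, 3 + sqrt(10): 1}
```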
Step 2: find the eigenvectors
An eigenvector for $\lambda_1=3+\sqrt{10}$ is a vector in the kernel of $A-\lambda_1 I$.
$\operatorname{rref}\left(\begin{bmatrix} 2-3-\sqrt{10}&3\\3&4-3-\sqrt{10}\end{bmatrix}\right) = \begin{bmatrix}1&\frac{1-\sqrt{10}}{3}\\0&0\end{bmatrix}$, so an eigenvector is $v_1 = \begin{bmatrix}\frac{-1+\sqrt{10}}{3}\\1\end{bmatrix}$.
Similarly, an eigenvector for $\lambda_2=3-\sqrt{10}$ is a vector in the kernel of $A-\lambda_2 I$:
$\operatorname{rref}\left(\begin{bmatrix} 2-3+\sqrt{10}&3\\3&4-3+\sqrt{10}\end{bmatrix}\right) = \begin{bmatrix}1&\frac{1+\sqrt{10}}{3}\\0&0\end{bmatrix}$, so the eigenvector is $v_2 = \begin{bmatrix}\frac{-1-\sqrt{10}}{3}\\1\end{bmatrix}$.
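The kernel computations can also be checked with sympy's nullspace (a sketch; the returned vectors may be scaled differently from $v_1$ and $v_2$ above):

```python
from sympy import Matrix, eye, radsimp, sqrt

A = Matrix([[2, 3], [3, 4]])
for lam in (3 + sqrt(10), 3 - sqrt(10)):
    # A vector spanning the kernel of A - lambda*I.
    v = (A - lam*eye(2)).nullspace()[0]
    # radsimp rationalizes the denominators.
    print(v.applyfunc(radsimp))
```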
Step 3: form an orthonormal basis for each eigenspace
Conveniently, thanks to the spectral theorem and the fact that our $A$ is real symmetric, vectors in different eigenspaces are already guaranteed to be orthogonal. Indeed, $\langle v_1, v_2\rangle = \left(\frac{-1+\sqrt{10}}{3}\right)\left(\frac{-1-\sqrt{10}}{3}\right)+1\cdot 1 = \frac{1-10}{9} + 1 = 0.$
If we had a repeated eigenvalue, we would need to apply the Gram–Schmidt process to the basis vectors of its corresponding eigenspace. In our case, each eigenvalue has multiplicity one, so we only need to normalize the vectors.
$u_1 = \frac{v_1}{\|v_1\|} = \frac{1}{\sqrt{1+\frac{1}{9}\left(\sqrt{10}-1\right)^2}}\begin{bmatrix} \frac{\sqrt{10}-1}{3}\\ 1\end{bmatrix}$, and similarly $u_2 = \frac{v_2}{\|v_2\|}$.
These numbers were not very pretty to work with... oh well.
You then have $A = PDP^T$ where $P=[u_1 \; u_2]$ and $D=\begin{bmatrix}\lambda_1&0\\0&\lambda_2\end{bmatrix}$. Since $P$ is an orthogonal matrix, $P^T=P^{-1}$, and so $P^T A P=D$.
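Numerically, the whole construction can be checked with numpy's symmetric eigensolver (a sketch, assuming numpy; note `eigh` returns the eigenvalues in ascending order, so $\lambda_2 = 3-\sqrt{10}$ comes first):

```python
import numpy as np

A = np.array([[2.0, 3.0], [3.0, 4.0]])
# eigh is specialized for symmetric matrices: it returns eigenvalues
# in ascending order and an orthogonal matrix of eigenvectors.
eigenvalues, P = np.linalg.eigh(A)
print(eigenvalues)  # approx [-0.1623, 6.1623], i.e. 3 -/+ sqrt(10)
print(np.allclose(P.T @ A @ P, np.diag(eigenvalues)))  # True
print(np.allclose(P.T @ P, np.eye(2)))  # True: P is orthogonal
```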