Maximizing a quadratic function subject to $\| x \|_2 \le 1$
This isn't pretty, but it's direct. We proceed by the usual method: find the critical points of the objective in the interior of the ball, then find the critical points on the boundary. The maximum occurs at one of these points.
To find the critical points of the objective function $f(x) = (1/2)\, x^T A x + b^T x$, differentiate (using the symmetry of $A$):
$$ df(x) = (1/2) (dx)^T A x + (1/2) x^T A dx + b^T dx = (dx)^T (Ax + b) $$
so the critical points are solutions to $Ax = -b$.
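For concreteness, here is a minimal numerical sketch of this interior step, assuming $A$ is symmetric and using NumPy; the name `interior_candidates` is just an illustrative choice, not part of the original problem.

```python
import numpy as np

def interior_candidates(A, b):
    """Critical points of f(x) = 0.5 x^T A x + b^T x lying in the unit ball.

    Solves A x = -b; if A is singular this simple sketch just reports no
    interior candidate (a fuller treatment would use a least-squares solve).
    """
    try:
        x = np.linalg.solve(A, -b)
    except np.linalg.LinAlgError:
        return []
    # Keep the critical point only if it satisfies the constraint ||x||_2 <= 1.
    return [x] if np.linalg.norm(x) <= 1 else []
```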
Using Lagrange multipliers to find the critical points on the boundary $x^T x = 1$ of your constraint shows that they occur at solutions $(x, \mu)$ of the system of equations
$$ (A + 2 \mu I) x = -b \qquad \qquad \qquad x^T x = 1$$
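One way to see where this system comes from: take the Lagrangian
$$ L(x, \mu) = (1/2)\, x^T A x + b^T x + \mu \left( x^T x - 1 \right) $$
and set $\nabla_x L = Ax + b + 2\mu x = 0$, which is exactly $(A + 2\mu I) x = -b$, together with the constraint $x^T x = 1$.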
Most solutions can be obtained by solving the equation
$$ \| (A + 2 \mu I)^{-1} b \|_2^2 = 1 $$
for $\mu$. However, we must consider separately the values of $\mu$ for which $-2\mu$ is an eigenvalue of $A$.
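Numerically, one rough way to carry out this step (a sketch only, assuming SciPy is available; `boundary_mu_roots` and `mu_grid` are hypothetical names) is to scan a grid of $\mu$ values for sign changes of $\| (A + 2\mu I)^{-1} b \|_2^2 - 1$ and refine each bracket with a scalar root finder:

```python
import numpy as np
from scipy.optimize import brentq

def boundary_mu_roots(A, b, mu_grid):
    """Approximate solutions mu of ||(A + 2 mu I)^{-1} b||_2^2 = 1.

    mu_grid is a user-supplied grid wide and fine enough to bracket the
    roots of interest; a coarse grid can miss roots or bracket a pole.
    """
    n = A.shape[0]

    def g(mu):
        return np.linalg.norm(np.linalg.solve(A + 2 * mu * np.eye(n), b)) ** 2 - 1

    samples = []
    for mu in mu_grid:
        try:
            samples.append((mu, g(mu)))
        except np.linalg.LinAlgError:
            continue  # -2*mu is an eigenvalue of A; that case is treated separately
    roots = []
    for (m0, g0), (m1, g1) in zip(samples, samples[1:]):
        if np.sign(g0) != np.sign(g1):  # sign change => a root of g in (m0, m1)
            roots.append(brentq(g, m0, m1))
    return roots
```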
For each value of $\mu$ for which $-2\mu$ is an eigenvalue of $A$, obtain the general solution to
$$ (A + 2 \mu I) x = -b $$
and search the solution space for unit vectors. Typically there won't be any solutions at all, since $-b$ will generally not lie in the range of the singular matrix $A + 2\mu I$. But if there are, searching the solution space for unit vectors is just a matter of solving a polynomial equation in $n$ variables, where $n$ is the multiplicity of the eigenvalue $-2\mu$.
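A sketch of how this exceptional case might be handled numerically (hypothetical helper, assuming $A$ symmetric): test whether $-b$ lies in the range of $A + 2\mu I$, take the minimum-norm particular solution, and stretch it out to the unit sphere along a null-space direction when possible.

```python
import numpy as np

def hard_case_candidates(A, b, mu, tol=1e-10):
    """Unit-norm solutions of (A + 2 mu I) x = -b when -2*mu is an eigenvalue of A."""
    n = A.shape[0]
    M = A + 2 * mu * np.eye(n)
    x_p = np.linalg.pinv(M) @ (-b)          # minimum-norm particular solution
    if np.linalg.norm(M @ x_p + b) > tol:
        return []   # -b is not in the range of M: no solutions at all
    if np.linalg.norm(x_p) > 1 + tol:
        return []   # even the shortest solution lies outside the unit ball
    # Null space of M = eigenvectors of A for the eigenvalue -2*mu.
    w, V = np.linalg.eigh(A)
    N = V[:, np.abs(w + 2 * mu) < tol]
    if N.shape[1] == 0:
        return [x_p] if abs(np.linalg.norm(x_p) - 1) < tol else []
    # x_p is orthogonal to the null space, so adding t times a unit null
    # vector gives ||x_p + t v||^2 = ||x_p||^2 + t^2; choose t to reach 1.
    t = np.sqrt(max(1.0 - np.linalg.norm(x_p) ** 2, 0.0))
    return [x_p + t * N[:, 0]]
```

(Only one representative unit vector is returned; when the multiplicity is larger than one there is a whole family of them.)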
I get essentially the same thing from the eigen-decomposition. Write everything in terms of an orthonormal basis of eigenvectors of $A$, so that $A$ becomes the diagonal matrix of its eigenvalues $\lambda_i$ and $x_i$, $b_i$ denote the coordinates of $x$ and $b$ in that basis. Then you are trying to maximize
$$ \sum_i \left( (1/2)\, \lambda_i x_i^2 + b_i x_i \right) $$
subject to the constraint
$$ \sum_i x_i^2 = 1 $$
The critical points of the objective function are the solutions to
$$ x_i \lambda_i + b_i = 0$$
and Lagrange multipliers gives us a system of equations
$$ \sum_i x_i^2 = 1 \qquad \qquad x_i( \lambda_i + 2 \mu) + b_i = 0 $$
and the method of solving this system is effectively the same as in the matrix version.
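Eliminating $x_i = -b_i / (\lambda_i + 2\mu)$ from the second equation (valid when no $\lambda_i + 2\mu$ vanishes) and substituting into the first recovers the secular equation in explicit form:
$$ \sum_i \frac{b_i^2}{(\lambda_i + 2 \mu)^2} = 1. $$
Finally, here is a rough end-to-end sketch that strings the hypothetical helpers above together, evaluates the objective at every candidate, and keeps the best one (again only an illustration, not a robust solver):

```python
import numpy as np

def f(A, b, x):
    """The objective f(x) = 0.5 x^T A x + b^T x."""
    return 0.5 * x @ A @ x + b @ x

def maximize_on_ball(A, b, mu_grid):
    """Collect interior, boundary, and hard-case candidates and return the best."""
    n = A.shape[0]
    candidates = interior_candidates(A, b)
    for mu in boundary_mu_roots(A, b, mu_grid):
        candidates.append(np.linalg.solve(A + 2 * mu * np.eye(n), -b))
    for lam in np.linalg.eigvalsh(A):
        candidates.extend(hard_case_candidates(A, b, -lam / 2.0))
    return max(candidates, key=lambda x: f(A, b, x)) if candidates else None
```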