I want to compute the maximum eigenvalue of a diagonal plus rank-one matrix. Are there fast algorithms for the computation of the largest eigenvalue? What is the computational complexity of those algorithms?


The numerical eigenvalue problem for diagonal-plus-rank-one (DPR1) matrices has been considered in the literature, often in a broader context of algorithms for generalized companion matrices.

Typical of these is the recent paper "Accurate eigenvalue decomposition of arrowhead matrices and applications," by N. Jakovčević Stor, I. Slapničar, and J. Barlow (arXiv.org, 2013); cf. Sec. 6.3, where the real symmetric DPR1 eigenvalue problem for $D + uu^T$ is treated (and, according to Remark 5, a more general case is the subject of a forthcoming paper).

Slightly older but perhaps more germane (treating the nonsymmetric case) is the paper "Fast and stable QR eigenvalue algorithms for generalized companion matrices and secular equations," by Bini, Gemignani, and Pan, Numer. Math. (2005), which details an $O(n^2)$-operation, $O(n)$-space method for finding all eigenvalues to high relative precision via a structure-preserving variant of QR steps.

As noted in my Comments above, a power iteration for determining a dominant eigenvalue can take advantage of the special structure of DPR1 matrices in forming matrix-vector products with $O(n)$ operation cost, a significant improvement over the usual $O(n^2)$ cost for matrix-vector products.
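To illustrate, here is a minimal sketch of such a power iteration in Python (the function names `dpr1_matvec` and `power_iteration` are my own, chosen for illustration; none of this is from the papers cited above):

```python
import numpy as np

def dpr1_matvec(d, u, v, x):
    """Product (diag(d) + u v^T) x in O(n) operations: the diagonal part
    scales x entrywise; the rank-one part needs one dot product and one
    scaled vector addition."""
    return d * x + u * np.dot(v, x)

def power_iteration(d, u, v, iters=1000, tol=1e-13, seed=0):
    """Estimate the dominant (largest-magnitude) eigenvalue of
    diag(d) + u v^T; each step costs O(n) instead of the usual O(n^2)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(len(d))
    x /= np.linalg.norm(x)
    lam = 0.0
    for _ in range(iters):
        y = dpr1_matvec(d, u, v, x)
        lam_new = np.dot(x, y)              # Rayleigh quotient estimate
        x = y / np.linalg.norm(y)
        if abs(lam_new - lam) < tol * max(1.0, abs(lam_new)):
            return lam_new
        lam = lam_new
    return lam

# Symmetric example: D + u u^T
d = np.array([1.0, 2.0, 3.0, 4.0])
u = np.array([0.5, 0.5, 0.5, 0.5])
print(power_iteration(d, u, u))
```

As usual, the convergence rate of plain power iteration is governed by the ratio of the two largest eigenvalue magnitudes, which is the motivation for the accelerations discussed next.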

This advantage carries over to more sophisticated algorithms which accelerate convergence (by shift-and-invert) or which improve stability via orthogonal transforms (as with QR; see above). Note that a shift of a DPR1 matrix is again DPR1, and by the Sherman–Morrison formula the inverse of an invertible DPR1 matrix is again DPR1.

Added: In the special case $D=I$ the eigenvalues/eigenvectors of $I + vu^T$ can be expressed explicitly, since this is just a spectrum shift of the rank-one matrix $vu^T$. That is, $(I+vu^T)v = (u\cdot v + 1)\, v$, and $(I+vu^T)w = w$ for any $w$ with $u\cdot w = 0$. The eigenvalues are therefore $u\cdot v + 1$ (with eigenvector $v$) and $1$ (with multiplicity $n-1$), so the largest eigenvalue $\lambda$ is $u\cdot v + 1$ or $1$ depending on the sign of $u\cdot v$.
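A quick numerical check of these identities (the vectors here are arbitrary illustrative values):

```python
import numpy as np

u = np.array([1.0, 2.0, -1.0])
v = np.array([0.5, 1.0, 3.0])
A = np.eye(3) + np.outer(v, u)      # I + v u^T

# v is an eigenvector with eigenvalue 1 + u.v
lam = 1.0 + np.dot(u, v)
print(np.allclose(A @ v, lam * v))

# any w orthogonal to u is fixed (eigenvalue 1)
w = np.array([2.0, -1.0, 0.0])      # u . w = 0
print(np.allclose(A @ w, w))
```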

Standard treatments of estimating the largest eigenvalue of a symmetric positive definite matrix often begin with a reduction to tridiagonal form, either by a full similarity transformation or by a Lanczos-type approximation of smaller dimension. Because a DPR1 matrix is already described by only $O(n)$ data, there is no need for tridiagonal reduction; instead one can immediately begin searching for the largest eigenvalue (at least in the symmetric case) through evaluations of $\det(\lambda I - (D+uu^T))$.

Provided $\lambda$ differs from every entry of the diagonal $D$, the matrix $\lambda I - D$ is invertible (has nonzero determinant) and can be factored out, leaving the evaluation of $\det(I - wu^T)$ where $w = (\lambda I - D)^{-1} u$. The material discussed in the first added paragraph above can be recast as an explicit characteristic polynomial; in particular, by the matrix determinant lemma,

$$ \det(I - wu^T) = 1 - w\cdot u $$

whose evaluation has cost $O(n)$.
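Putting this together for the symmetric case: the largest eigenvalue of $D + uu^T$ is the rightmost root of the secular function $f(\lambda) = 1 - u^T(\lambda I - D)^{-1}u$, which lies in the interval $(\max_i d_i,\ \max_i d_i + u\cdot u]$ and can be bracketed and found by bisection, each evaluation costing $O(n)$. A sketch (function name mine; it assumes the entry of $u$ corresponding to the largest $d_i$ is nonzero, so that $f$ has a pole at $\max_i d_i$):

```python
import numpy as np

def largest_eigenvalue(d, u, tol=1e-14):
    """Largest eigenvalue of the symmetric DPR1 matrix diag(d) + u u^T,
    by bisection on the secular function
        f(lam) = 1 - sum_i u_i^2 / (lam - d_i)  ( = det(I - w u^T) ),
    which increases monotonically from -inf to 1 on (max(d), inf), so its
    unique root there is the largest eigenvalue.  Assumes the entry of u
    corresponding to max(d) is nonzero."""
    dmax = d.max()
    lo = np.nextafter(dmax, np.inf)     # just right of the last pole
    hi = dmax + np.dot(u, u)            # f(hi) >= 0, so the root is bracketed
    f = lambda lam: 1.0 - np.sum(u * u / (lam - d))
    while hi - lo > tol * max(1.0, abs(hi)):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

d = np.array([1.0, 2.0, 3.0, 4.0])
u = np.array([0.5, 0.5, 0.5, 0.5])
print(largest_eigenvalue(d, u))
```

Total cost is $O(n)$ per evaluation times $O(\log(1/\text{tol}))$ bisection steps; Newton-type iterations on the secular equation, as in the references above, converge faster in practice.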