Using the Arnoldi Iteration to find the k largest eigenvalues of a matrix

I'm trying to obtain a general understanding of this algorithm which determines the k-largest eigenvalues of a matrix $A\in \mathbb{R}^{n\times n}$. How I see it:

power iteration:

  • take a random starting vector $b \in \mathbb{R}^{n}$
  • find $K_{n} = \begin{bmatrix}b & Ab & A^{2}b & \cdots & A^{n-1}b \end{bmatrix}.$
  • find an orthonormal basis $Q_n$ of $K_{n}$ using Gram–Schmidt (numerically unstable)
  • the $n$-th column vector of $Q_n$ approximates the $n$-th eigenvector of $A$, corresponding to the $n$-th largest eigenvalue $\lambda_n$ of $A$
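The steps above can be sketched in numpy (the symmetric test matrix and fixed seed are just illustration choices, not part of the algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A = A + A.T  # symmetric test matrix, so eigenvalues are real

# Krylov matrix K_n = [b, Ab, A^2 b, ..., A^{n-1} b]
b = rng.standard_normal(n)
K = np.empty((n, n))
K[:, 0] = b
for j in range(1, n):
    K[:, j] = A @ K[:, j - 1]

# Orthonormalize the columns; QR plays the role of Gram-Schmidt here
Q, R = np.linalg.qr(K)
```

Note that building $K_n$ explicitly like this is exactly what is numerically fragile: the columns $A^{j}b$ all align with the dominant eigenvector, so $K_n$ becomes extremely ill-conditioned as $n$ grows.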

Arnoldi Iteration:

This is a numerically stable way of building the same Krylov basis, without ever forming $K_n$ explicitly.

  • take a random starting vector $b \in \mathbb{R}^{n}$

  • find the first Arnoldi vectors $q_1, \dots, q_n$ to form $Q_n$

    • $Q_n$ is an orthonormal basis of $K_n$
    • a numerically stable (modified) Gram–Schmidt process is used
  • determine the Hessenberg matrix $H_n=Q_n^*AQ_n$
  • solve eig($H_n$), which is now cheap because $H_n$ is upper Hessenberg (upper triangular plus one subdiagonal), a form the QR algorithm handles efficiently
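A minimal numpy sketch of these steps, for comparison (the function name, breakdown tolerance, and test matrix are my own choices; in practice one runs only $k \ll n$ steps and takes the eigenvalues of $H_k$ as Ritz approximations):

```python
import numpy as np

def arnoldi(A, b, k):
    """k steps of Arnoldi: returns Q (n x (k+1)) and H ((k+1) x k)
    satisfying A @ Q[:, :k] == Q @ H."""
    n = A.shape[0]
    Q = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        v = A @ Q[:, j]
        for i in range(j + 1):           # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-12:          # breakdown: Krylov subspace is invariant
            return Q[:, : j + 1], H[: j + 1, : j + 1]
        Q[:, j + 1] = v / H[j + 1, j]
    return Q, H

rng = np.random.default_rng(1)
n, k = 50, 10
A = rng.standard_normal((n, n))
A = A + A.T                              # symmetric, so eigenvalues are real
Q, H = arnoldi(A, rng.standard_normal(n), k)

# Ritz values: eigenvalues of the small k x k Hessenberg block
ritz = np.sort(np.linalg.eigvals(H[:k, :k]).real)[::-1]
```

The extremal Ritz values (largest entries of `ritz`) converge to the extremal eigenvalues of $A$ as $k$ grows, which is why Arnoldi is used for the $k$ largest eigenvalues.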

Is this the general gist of it? A good, reliable link would already be great.

Any help would be greatly appreciated. Thank you.


Solution 1:

There is a really good exposition of the Arnoldi method given by Prof. Gilbert Strang in his video lectures on MIT OpenCourseWare.

Here is the link to the lecture where he describes Arnoldi method:

http://ocw.mit.edu/courses/mathematics/18-086-mathematical-methods-for-engineers-ii-spring-2006/video-lectures/lecture-18-krylov-methods-multigrid-continued/