How to compute the smallest eigenvalue using the power iteration algorithm?

If you know that $A$ is symmetric positive-definite, then the spectral shift $B = A-\lambda_\max I$ will work, where $\lambda_\max$ is the largest eigenvalue of $A$ (itself computable by the power method). Run the power method on $B$, then add $\lambda_\max$ back to the result to get the smallest eigenvalue of $A$.

The reason this shift works is that a positive-definite matrix has all positive eigenvalues. Therefore $B$ has all non-positive eigenvalues, with the smallest eigenvalue of $A$ now the largest-magnitude (most negative) eigenvalue of $B$. The power method will then find that eigenvalue.

The same approach works for negative-definite matrices, for the same reason.
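The shifted procedure above can be sketched as follows (a minimal NumPy sketch; the `power_iteration` helper and the example matrix are illustrative, not part of any library):

```python
import numpy as np

def power_iteration(M, iters=1000):
    # Iterate v <- Mv / ||Mv||; the Rayleigh quotient converges to the
    # largest-magnitude eigenvalue of M (assuming a generic start vector).
    v = np.random.default_rng(0).standard_normal(M.shape[0])
    for _ in range(iters):
        w = M @ v
        norm = np.linalg.norm(w)
        if norm == 0.0:   # v landed in the null space of M
            break
        v = w / norm
    return v @ (M @ v)

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])              # symmetric positive-definite
lam_max = power_iteration(A)            # step 1: largest eigenvalue of A
B = A - lam_max * np.eye(2)             # step 2: all eigenvalues of B are <= 0
lam_min = power_iteration(B) + lam_max  # step 3: undo the shift
```

Note that the convergence rate of the second power iteration is governed by the eigenvalue gaps of $B$, which can differ from those of $A$.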


Suppose $A$ is invertible and has eigenvalue $\lambda$. Then $A^{-1}$ has eigenvalue $\lambda^{-1}$: this follows straight from the eigenvector equation $$Av = \lambda v \Rightarrow \frac{v}{\lambda} = A^{-1}v.$$

Since the smallest-magnitude eigenvalue of $A$ corresponds to the largest-magnitude eigenvalue of $A^{-1}$, you can find it using power iteration on $A^{-1}$ (and take the reciprocal of the result at the end):

$$v_{i+1} = A^{-1} \frac{v_i}{\|v_i\|}.$$

Unfortunately you now have to perform a linear solve at each iteration (or compute a decomposition of $A$), instead of just taking matrix-vector products.
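A minimal sketch of this inverse iteration, replacing the explicit $A^{-1}v$ product with a linear solve per step (the helper name and example matrix are my own):

```python
import numpy as np

def inverse_iteration(A, iters=500):
    # Power iteration on A^{-1}: each step solves A w = v instead of
    # forming A^{-1} explicitly.
    v = np.random.default_rng(0).standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = np.linalg.solve(A, v)
        v = w / np.linalg.norm(w)
    # Rayleigh quotient with A itself directly gives the
    # smallest-magnitude eigenvalue of A (no reciprocal needed).
    return v @ (A @ v)

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
lam_small = inverse_iteration(A)
```

In practice you would factor $A$ once (e.g. an LU or Cholesky decomposition) and reuse the factorization at every step, rather than paying for a full solve each iteration.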


A more general answer to this question can be composed the following way:

1) Your symmetric/Hermitian matrix $H$ has a spectrum with both positive and negative eigenvalues. Assume you can compute the eigenvalue with maximum absolute value $\omega$ using the power method.

2) Shift the matrix by a constant $\lambda$ to target the part of the spectrum you are interested in: $H-\lambda I$. If you are interested in the smallest-magnitude eigenvalue, take $\lambda = 0$.

3) Square your matrix: $H'= (H-\lambda I)^2$. This makes the matrix positive semi-definite.

4) The desired eigenvalue of $H$ now corresponds to the eigenvalue of $H'$ closest to zero, while the largest eigenvalue of $H'$ can be computed trivially from $\omega$ (for $\lambda = 0$ it is simply $\omega^2$). Alternatively, you can use the fact that the spectrum of $H'$ is bounded by its Hilbert–Schmidt norm and skip step 1.

5) Shift by the maximum eigenvalue (or the bound): $H''=H'- \max(\operatorname{spec}(H'))\,I$. Now you have a negative semi-definite matrix in which the targeted eigenvalue $x'$ has the highest magnitude, so you can compute it with the power method.

6) Solve for the variable $x$: $x'=(x-\lambda)^2-\max(\operatorname{spec}(H'))$, i.e. $x = \lambda \pm \sqrt{x' + \max(\operatorname{spec}(H'))}$. The power method alone does not tell you which sign to take, but you can recover it by multiplying the original matrix $H$ into the computed eigenvector.

In case you have a complex Hermitian matrix, you have to use the inner product defined for complex Hilbert spaces throughout.
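The six steps above can be sketched as follows (a NumPy sketch under the stated assumptions, using the Hilbert–Schmidt/Frobenius bound in place of step 1; the helper and example matrix are illustrative):

```python
import numpy as np

def power_iteration(M, iters=2000):
    # Returns (Rayleigh quotient, eigenvector) for the
    # largest-magnitude eigenvalue of M.
    v = np.random.default_rng(0).standard_normal(M.shape[0])
    for _ in range(iters):
        w = M @ v
        v = w / np.linalg.norm(w)
    return v @ (M @ v), v

# Indefinite symmetric example; eigenvalues are (-1 +/- sqrt(29)) / 2
H = np.array([[2.0, 1.0],
              [1.0, -3.0]])
lam = 0.0                                          # step 2: target magnitude
Hp = (H - lam * np.eye(2)) @ (H - lam * np.eye(2)) # step 3: square
bound = np.linalg.norm(Hp, 'fro')                  # step 4: HS bound on spec(Hp)
xp, v = power_iteration(Hp - bound * np.eye(2))    # step 5: shift and iterate
# step 6: x' = (x - lam)^2 - bound; H v recovers the sign of the eigenvalue
x = lam + np.sign(v @ (H @ v)) * np.sqrt(xp + bound)
```

One caveat: squaring the matrix also squares the eigenvalue ratios, which can cluster the small eigenvalues of $H'$ and slow the power method down.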


If your matrix isn't too large, you could do linear regression on $\{I,A,A^2,\cdots,A^n\}$ (treating each power as a flattened vector) to find the coefficients of the minimal polynomial. The minimal polynomial has the same roots as the characteristic polynomial, possibly with smaller multiplicities, and by the Cayley–Hamilton theorem those roots over the scalar field are exactly the eigenvalues. Then you can solve numerically for the root closest to the origin, for instance by setting 0 as the first guess of your root-finding method.

This does, however, deviate a bit from the requirement of using power iteration, so maybe it does not answer your question; on the other hand it is more general, since it assumes no definiteness of the matrix.
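One way to sketch this idea, here using an SVD null vector in place of an explicit regression to find the annihilating polynomial, and `np.roots` in place of an iterative root finder (the example matrix is my own):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, -3.0]])
n = A.shape[0]

# Columns are the flattened powers I, A, ..., A^n
P = np.column_stack([np.linalg.matrix_power(A, k).ravel()
                     for k in range(n + 1)])

# A null vector of P gives coefficients c with c0*I + c1*A + ... + cn*A^n = 0,
# i.e. a polynomial annihilating A (the minimal polynomial divides it).
_, _, Vt = np.linalg.svd(P)
coeffs = Vt[-1]                    # c0, c1, ..., cn

roots = np.roots(coeffs[::-1])     # np.roots expects highest degree first
lam_closest = roots[np.argmin(np.abs(roots))]
```

For the example above the recovered polynomial is proportional to $x^2+x-7$, whose root closest to the origin is the smallest-magnitude eigenvalue of $A$.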