Inverse of a diagonal matrix plus a sum of rank-one matrices

Is there a formula for the inverse of a matrix that is diagonal plus a sum of rank-one matrices?

$$S=\alpha I + \sum_{i=1}^N u_iu_i^T$$

$$S^{-1} = ?$$

Is there a decomposition or special trick that would let me compute this inverse faster?


Solution 1:

Use the Woodbury matrix identity. You can express the sum of the rank-one matrices as $UU^T$, where $U=[u_1,\dots,u_N]$, so that $$S^{-1}=(\alpha I+UU^T)^{-1}=\frac{1}{\alpha}\left(I-U\left(\alpha I_N+U^TU\right)^{-1}U^T\right).$$ Only an $N\times N$ system has to be solved, which is cheap when $N$ is much smaller than the dimension of $S$.
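A minimal sketch of this computation, assuming NumPy (the helper name `woodbury_inverse` is made up for illustration):

```python
import numpy as np

def woodbury_inverse(alpha, U):
    """Invert S = alpha*I + U @ U.T via the Woodbury identity.

    Only the small N x N "capacitance" system alpha*I_N + U^T U
    is solved, which is cheap when N << n.
    """
    n, N = U.shape
    K = alpha * np.eye(N) + U.T @ U
    # S^{-1} = (1/alpha) * (I - U (alpha*I_N + U^T U)^{-1} U^T)
    return (np.eye(n) - U @ np.linalg.solve(K, U.T)) / alpha

# Quick check against a direct inverse
rng = np.random.default_rng(0)
U = rng.standard_normal((6, 3))
assert np.allclose(woodbury_inverse(2.0, U),
                   np.linalg.inv(2.0 * np.eye(6) + U @ U.T))
```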

EDIT: If $$ S_k = \alpha I + \sum_{i=1}^ku_iu_i^T, $$ you can also use the Woodbury identity (or, more precisely, the Sherman-Morrison formula) to update the inverse of $S_k$ from that of $S_{k-1}$ (because $S_k=S_{k-1}+u_ku_k^T$): $$ S_k^{-1} = S_{k-1}^{-1} - \frac{w_kw_k^T}{1+u_k^Tw_k}, \quad w_k=S_{k-1}^{-1}u_k. $$ If $\alpha>0$, there is no problem with the denominator: every $S_k$ is positive definite, so $1+u_k^Tw_k=1+u_k^TS_{k-1}^{-1}u_k>1$.
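As a sketch, the incremental Sherman-Morrison update could look like the following (NumPy assumed; the function name `sherman_morrison_inverses` is just for illustration):

```python
import numpy as np

def sherman_morrison_inverses(alpha, vectors):
    """Return S_N^{-1}, where S_k = alpha*I + sum_{i<=k} u_i u_i^T,
    updating the inverse one rank-one term at a time.

    Each update costs O(n^2), versus O(n^3) for a fresh inverse.
    """
    n = vectors[0].size
    S_inv = np.eye(n) / alpha          # S_0^{-1} = (alpha*I)^{-1}
    for u in vectors:
        w = S_inv @ u                  # w_k = S_{k-1}^{-1} u_k
        # Sherman-Morrison: subtract the rank-one correction
        S_inv -= np.outer(w, w) / (1.0 + u @ w)
    return S_inv

# Quick consistency check
rng = np.random.default_rng(0)
us = [rng.standard_normal(5) for _ in range(4)]
S = 0.5 * np.eye(5) + sum(np.outer(u, u) for u in us)
assert np.allclose(sherman_morrison_inverses(0.5, us), np.linalg.inv(S))
```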

EDIT: Another solution: it would probably be better to update a factorization of $S$ instead of its inverse. If $\alpha>0$, then $S$ (or any $S_k$) is SPD, so $S$ admits a Cholesky factorization $S=LL^T$, where $L$ is lower triangular. Now assume that $\tilde{S}=S+uu^T$. You can write $\tilde{S}$ in the form $$ \tilde{S}=\begin{bmatrix}L^T \\ u^T\end{bmatrix}^T\begin{bmatrix}L^T \\ u^T\end{bmatrix}. $$ The matrix $[L,u]^T$ is "almost" upper triangular except for the last row. You can apply a product of Givens rotations $Q=Q_1\cdots Q_n$ to zero out the nonzero entries in the row containing $u$, obtaining a QR factorization $$ Q^T\begin{bmatrix}L^T\\u^T\end{bmatrix}=\begin{bmatrix}R\\0\end{bmatrix}. $$ You can check that $$ \tilde{S}=\begin{bmatrix}L^T \\ u^T\end{bmatrix}^T\begin{bmatrix}L^T \\ u^T\end{bmatrix}=R^TR, $$ that is, $R^T$ is the Cholesky factor after the rank-one update. Note that you only need to update the matrix $L$; you never need to form the orthogonal matrix $Q$ explicitly, just apply the Givens rotations to the rows of $L^T$ (or, equivalently, the columns of $L$) and to $u$.
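A sketch of this rank-one Cholesky update in NumPy (the helper name `chol_rank1_update` is made up for illustration; each loop iteration applies one of the Givens rotations $Q_k$ described above):

```python
import numpy as np

def chol_rank1_update(L, u):
    """Given S = L @ L.T with L lower triangular and L[k, k] > 0,
    return the Cholesky factor of S + u @ u.T.

    Each iteration applies one Givens rotation to the pair
    (k-th column of L, u) so that u[k] is zeroed; after the loop,
    u has been rotated away and L holds the updated factor.
    Cost is O(n^2) per update.
    """
    L, u = L.copy(), u.copy()
    n = u.size
    for k in range(n):
        # Rotation that zeroes u[k] against the diagonal entry L[k, k]
        r = np.hypot(L[k, k], u[k])
        c, s = L[k, k] / r, u[k] / r
        L[k, k] = r
        # Apply the same rotation to the rest of column k and to u
        col = L[k + 1:, k].copy()
        L[k + 1:, k] = c * col + s * u[k + 1:]
        u[k + 1:] = c * u[k + 1:] - s * col
    return L

# Quick check against recomputing the factorization from scratch
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
S = A @ A.T + 5.0 * np.eye(5)          # SPD test matrix
u = rng.standard_normal(5)
L_new = chol_rank1_update(np.linalg.cholesky(S), u)
assert np.allclose(L_new @ L_new.T, S + np.outer(u, u))
```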

This approach could look like overkill; whether it pays off depends on the dimension and the number of vectors. In any case, it is numerically more stable than updating the inverse via the Sherman-Morrison formula.