What is the best way to compute the pseudoinverse of a matrix?

Mathematica gives the pseudo-inverse of a matrix almost instantaneously, so I suspect it is not computing the pseudo-inverse via the singular value decomposition. Since the pseudo-inverse of a matrix is unique, is there a good formula we can use to simplify the calculation, in place of the compact singular value decomposition?

I'm thinking about the property that $A^\dagger = (A^\ast A)^{-1} A^\ast$ (which at least requires $A^\ast A$ to be invertible, i.e. $A$ to have full column rank), and I think this should give the unique pseudo-inverse of $A$. But I'm not too sure. Thanks for any insight!
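For what it's worth, the full-column-rank formula $A^\dagger = (A^\ast A)^{-1} A^\ast$ can be checked numerically against a library pseudoinverse. A minimal NumPy sketch, with an arbitrary small example matrix:

```python
import numpy as np

# A tall matrix with full column rank (arbitrary example)
A = np.array([[1., 0.],
              [1., 1.],
              [0., 2.]])

# When A has full column rank, A+ = (A* A)^{-1} A*
A_pinv = np.linalg.inv(A.T @ A) @ A.T

assert np.allclose(A_pinv, np.linalg.pinv(A))
assert np.allclose(A_pinv @ A, np.eye(2))  # it is a left inverse of A
```

Note that this formula is a left inverse, so $A^\dagger A = I$ holds here, while $A A^\dagger$ is only a projection.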


Without being able to see your results, the following comments may lack full context.

The implementation of the pseudoinverse is modern and efficient, and will bypass the SVD when possible. Computation with numerical matrices is quite fast and is based on BLAS-level optimizations for your hardware.

Examples where $\mathbf{A}^{+}$ is constructed without the SVD are presented by user1551 in Find the pseudo-inverse of the matrix A without computing singular values of A.

Sequence of Jordan normal forms

These examples bring your points to life: the SVD takes much longer to compute than the pseudoinverse itself.

$$ \begin{array}{cc} % \mathbf{A} = \left( \begin{array}{cccc} 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right) % & \mathbf{A}^{+} = \left( \begin{array}{crcc} 1 & -1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right) \\ % \text{SVD time} = 0.006 \text{ s} & \mathbf{A}^{+}\text{ time} = 0.0003 \text{ s} % \end{array} $$


$$ \begin{array}{cc} % \mathbf{A} = \left( \begin{array}{cccc} 1 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right) % & \mathbf{A}^{+} = \left( \begin{array}{crrc} 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{array} \right) \\ % \text{SVD time} = 0.008 \text{ s} & \mathbf{A}^{+}\text{ time} = 0.0003 \text{ s} % \end{array} $$
$$ \begin{array}{cc} % \mathbf{A} = \left( \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \\ \end{array} \right) % & \mathbf{A}^{+} = \left( \begin{array}{crrr} 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1 \\ \end{array} \right) \\ % \text{SVD time} = 0.011 \text{ s} & \mathbf{A}^{+}\text{ time} = 0.0004 \text{ s} % \end{array} $$
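As a quick sanity check on the last pair above (sketched in NumPy rather than Mathematica): this $\mathbf{A}$ is invertible, so its pseudoinverse is simply its inverse, the bidiagonal matrix shown.

```python
import numpy as np

# Upper triangular matrix of ones (the last example above)
A = np.triu(np.ones((4, 4)))

# A is invertible (det = 1), so the pseudoinverse coincides with the inverse
A_pinv = np.linalg.pinv(A)
expected = np.eye(4) - np.eye(4, k=1)  # 1 on the diagonal, -1 just above it

assert np.allclose(A_pinv, expected)
assert np.allclose(A_pinv, np.linalg.inv(A))
```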

The SVD is an invaluable theoretical tool: it resolves the four fundamental subspaces, aligns them, and computes the scale factors. As seen in How does the SVD solve the least squares problem?, it provides a natural expression for the solution of the least squares problem. It is also quite useful for understanding the different forms of the Moore-Penrose pseudoinverse: What forms does the Moore-Penrose inverse take under systems with full rank, full column rank, and full row rank?, Pseudo-inverse of a matrix that is neither fat nor tall?. Yet as you point out, there are often faster ways to compute $\mathbf{A}^{+}$.

If you think the SVD is too slow

A common issue arises when users create matrices with integers and special constants like $\pi$ and complain that the SVD is too slow. You will find relief if, instead of requesting an arbitrary-precision result with SingularValueDecomposition[A], you request a machine-precision result with SingularValueDecomposition[N[A]].


Let $A\in M_{m,n}(K)$ with $\operatorname{rank}(A)=r\leq \min(m,n)$, and let $u=\min(m^2n,n^2m)$. Note that $A^+\in M_{n,m}(K)$.

Case 1. You want the exact value of $A^+$.

Then you calculate a rank factorisation of $A$ ($A=BC$, where $B\in M_{m,r}$ and $C\in M_{r,n}$ have full rank), using the reduced row echelon form of $A$ (complexity $\approx u$). Then $A^+=C^+B^+=C^*(CC^*)^{-1}(B^*B)^{-1}B^*$ (complexity $\approx 7u$).

Total complexity $\approx 8u$.
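Assuming a rank factorisation $A=BC$ is already in hand (here one is simply written down for a small example, rather than computed from the RREF), the closed-form expression above can be checked against a library pseudoinverse:

```python
import numpy as np

# A rank factorisation A = B C of a 3x4 matrix of rank 2
B = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])                      # 3x2, full column rank
C = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.]])              # 2x4, full row rank
A = B @ C

# A+ = C+ B+ = C*(C C*)^{-1} (B* B)^{-1} B*
A_pinv = C.T @ np.linalg.inv(C @ C.T) @ np.linalg.inv(B.T @ B) @ B.T

assert np.allclose(A_pinv, np.linalg.pinv(A))
```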

Beware, here the word "complexity" measures the number of operations; in an exact calculation, the number of digits can become very large, which substantially increases the true complexity. If you cannot work with a large number of digits, then the method may be unstable when $ A $ is badly conditioned.

Case 2. You want only an approximation of $A^+$.

For the complexity estimates, we assume that $n=1024,m=2048$. In any case, the complexities of the methods below are greater than the complexity of i); note that here we work with a fixed number of significant digits.

i) You can use the full-rank QR factorization by the Gram-Schmidt orthonormalization (GSO) method; for the case $r=\min(m,n)$, cf.

https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse#The_QR_method

The complexity is $\approx \alpha u$ where $\alpha= ?$.
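A sketch of the QR route in NumPy, for the full-column-rank case $r=n$: with the thin factorization $A=QR$, one has $A^+=(A^*A)^{-1}A^*=R^{-1}Q^*$. (NumPy's `qr` uses Householder reflections rather than Gram-Schmidt, but it produces the same kind of factorization.)

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))   # full column rank with probability 1

# Thin QR: A = Q R with Q (6x4) having orthonormal columns, R (4x4) upper triangular
Q, R = np.linalg.qr(A)

# For full column rank, A+ = R^{-1} Q^T; solve the triangular system
# instead of forming the inverse explicitly
A_pinv = np.linalg.solve(R, Q.T)

assert np.allclose(A_pinv, np.linalg.pinv(A))
```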

ii) The best-known method is the SVD one; cf.

https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_inverse#Singular_value_decomposition_(SVD)

The complexity is $\approx \dfrac{3}{2}\alpha u$.
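The SVD route handles rank-deficient matrices directly: with $A=U\Sigma V^*$, one has $A^+=V\Sigma^+U^*$, where $\Sigma^+$ inverts the singular values above a tolerance and zeroes the rest. A NumPy sketch on a rank-2 example:

```python
import numpy as np

# A rank-2 matrix (the columns satisfy c1 - 2 c2 + c3 = 0)
A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.],
              [10., 11., 12.]])

# Thin SVD: A = U diag(s) V^T
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Invert only the singular values above a tolerance; the rest map to 0
tol = max(A.shape) * np.finfo(A.dtype).eps * s[0]
s_inv = np.array([1.0 / x if x > tol else 0.0 for x in s])
A_pinv = Vt.T @ np.diag(s_inv) @ U.T

assert np.allclose(A_pinv, np.linalg.pinv(A))
```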

Both methods i),ii) are very stable because they use orthogonal systems.

Remark. QR method gives exact results as expressions containing square roots.

SVD does not give exact results.

iii) There is a third method using a full-rank Cholesky factorization of a possibly singular symmetric positive semidefinite matrix. Its complexity is $\approx \dfrac{1}{4}\alpha u$; cf.

https://arxiv.org/pdf/0804.4809.pdf
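A sketch of this third method (the geninv algorithm of the linked paper) in NumPy: factor the Gram matrix $A^*A = LL^*$ with a pivot-skipping Cholesky that drops zero pivots, then use $A^+ = L(L^*L)^{-1}(L^*L)^{-1}L^*A^*$. The fixed absolute tolerance below is a simplification; in practice one would scale it by the diagonal of the Gram matrix.

```python
import numpy as np

def full_rank_cholesky(G, tol=1e-11):
    """Full-rank Cholesky G = L L^T of a possibly singular PSD matrix G.

    Returns L with rank(G) columns; columns whose pivot falls below
    tol are skipped.  (Simplified sketch; a relative tolerance would
    be used in practice.)"""
    n = G.shape[0]
    L = np.zeros((n, n))
    r = 0
    for k in range(n):
        # Residual of column k after subtracting the columns found so far
        v = G[k:, k] - L[k:, :r] @ L[k, :r]
        if v[0] > tol:           # v[0] is the residual diagonal (pivot)
            L[k:, r] = v / np.sqrt(v[0])
            r += 1
    return L[:, :r]

def geninv(A):
    """Moore-Penrose pseudoinverse via full-rank Cholesky factorization."""
    m, n = A.shape
    if m < n:
        return geninv(A.T).T      # work with the smaller Gram matrix
    G = A.T @ A                   # n x n, PSD, same rank as A
    L = full_rank_cholesky(G)     # G = L L^T, L is n x r
    M = np.linalg.inv(L.T @ L)    # r x r
    return L @ M @ M @ L.T @ A.T  # A+ = (A^T A)^+ A^T

# Rank-deficient square example (rank 2)
A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
assert np.allclose(geninv(A), np.linalg.pinv(A))
```

The method needs no orthogonalization, which is where its speed comes from, at the price of squaring the condition number by forming $A^*A$.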