Why does numpy.linalg.solve() offer more precise matrix inversions than numpy.linalg.inv()?
I do not quite understand why `numpy.linalg.solve()` gives the more precise answer, whereas `numpy.linalg.inv()` breaks down somewhat, giving (what I believe are) estimates.

For a concrete example, I am computing `C^{-1} * d`, where `C` denotes a matrix and `d` is a vector. For the sake of discussion, `C` has shape `(1000, 1000)` and `d` has shape `(1000, 1)`.
`numpy.linalg.solve(A, b)` solves the equation `A*x = b` for `x`, i.e. `x = A^{-1} * b`.
Therefore, I could solve this equation either by

(1)

```python
inverse = numpy.linalg.inv(C)
result = inverse @ d  # matrix product; plain * would multiply elementwise
```

or by (2)

```python
result = numpy.linalg.solve(C, d)
```
Method (2) gives far more precise results. Why is this?
What exactly is happening such that one "works better" than the other?
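Here is a minimal script that reproduces the gap I am seeing (random `C` and `d` chosen purely for illustration; I measure precision by the residual `||C @ x - d||`):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
C = rng.standard_normal((n, n))
d = rng.standard_normal((n, 1))

# Method (1): explicit inverse, then a matrix product
x_inv = np.linalg.inv(C) @ d

# Method (2): direct solve
x_solve = np.linalg.solve(C, d)

# How well does each result actually satisfy C x = d?
print(np.linalg.norm(C @ x_inv - d))    # typically larger
print(np.linalg.norm(C @ x_solve - d))  # typically smaller
```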
`np.linalg.solve(A, b)` does not compute the inverse of `A`. Instead it calls one of the `gesv` LAPACK routines, which first factorizes `A` using LU decomposition and then solves for `x` using forward and backward substitution (see here).
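You can see the two stages separately through SciPy's LU interface (a sketch of the same idea, not the literal code path `solve` takes, since `gesv` does both steps in one call):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
b = rng.standard_normal(4)

lu, piv = lu_factor(A)      # one-time LU factorization (akin to LAPACK getrf)
x = lu_solve((lu, piv), b)  # forward/backward substitution (akin to getrs)

print(np.allclose(A @ x, b))  # True
```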
`np.linalg.inv` uses the same method to compute the inverse of `A`, by solving for `A^{-1}` in `A · A^{-1} = I`, where `I` is the identity*. The factorization step is exactly the same as above and still costs roughly `(2/3)n^3` floating point operations, but the substitution step now runs over `n` right-hand sides (the columns of `I`) at roughly `2n^2` operations each, rather than over a single vector. In other words, it takes more floating point operations to solve for `A^{-1}` (an `n×n` matrix) than for `x` (an `n`-long vector). Additionally, if you then wanted to obtain `x` via the identity `A^{-1} · b = x`, the extra matrix multiplication would incur yet more floating point operations, and therefore slower performance and more numerical error.
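Put differently, `inv(A)` behaves like a solve with the identity as the right-hand side. A sketch of that relationship (not NumPy's literal implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# Solving A @ X = I column by column yields the inverse
X = np.linalg.solve(A, np.eye(4))

print(np.allclose(X, np.linalg.inv(A)))  # True, up to rounding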
There's no need for the intermediate step of computing `A^{-1}`: it is faster and more accurate to obtain `x` directly.
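A quick timing sketch of the speed claim (sizes and iteration count are arbitrary choices on my part):

```python
import timeit
import numpy as np

rng = np.random.default_rng(0)
n = 1000
C = rng.standard_normal((n, n))
d = rng.standard_normal((n, 1))

t_inv = timeit.timeit(lambda: np.linalg.inv(C) @ d, number=10)
t_solve = timeit.timeit(lambda: np.linalg.solve(C, d), number=10)
print(t_inv, t_solve)  # the direct solve is typically noticeably faster
```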
* The relevant bit of source for `inv` is here. Unfortunately it's a bit tricky to understand since it's templated C. The important thing to note is that an identity matrix is being passed to the LAPACK solver as parameter `B`.