Why is solving a linear system more stable than computing the matrix inverse directly?
If $x$ is computed via an LU factorization, the residual can be bounded by $$ \|b-Ax\|\leq cu\||L||U|\|\|x\|. $$ If instead $x$ is computed as $A^{-1}b$, then, assuming for simplicity that $A^{-1}$ is obtained exactly and that the only source of error is the matrix-vector multiplication, we have $$ \|b-Ax\|\leq cu\||A||A^{-1}|\|\|b\|. $$ Here, $c$ is a "moderate constant", $u$ is the unit roundoff, and $\|\cdot\|$ denotes the $\infty$-norm.
Often, $\||L||U|\|\approx\|A\|$ (e.g., with a suitable pivoting strategy in the LU factorization). Comparing the two bounds, the LU-based solve can give a much smaller residual whenever $$\|x\|\approx\|A^{-1}b\|\ll\|A^{-1}\|\|b\|,$$ that is, whenever the solution is much smaller than the worst-case bound $\|A^{-1}\|\|b\|$ suggests.
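For concreteness, here is a short NumPy check (an analogous sketch, not part of the MATLAB example) of how the two bound factors compare for the test matrix used in the experiment below. That matrix is already upper triangular, so LU with partial pivoting gives $L=I$, $U=A$, and hence $\||L||U|\|=\|A\|$:

```python
import numpy as np

n = 32
A = 2 * np.eye(n) - np.triu(np.ones((n, n)))   # 1 on the diagonal, -1 above

# Factor in the LU bound: || |L||U| ||_inf = ||A||_inf here, since L = I, U = A.
lu_factor = np.linalg.norm(A, np.inf)

# Factor in the inverse-based bound: the entries of A^{-1} grow like 2^k,
# so || |A||A^{-1}| ||_inf is of order 2^n.
Ainv = np.linalg.inv(A)
inv_factor = np.linalg.norm(np.abs(A) @ np.abs(Ainv), np.inf)

print(lu_factor)    # 32.0
print(inv_factor)   # huge, of order 2^n (about 4e9 here)
```

The gap of roughly eight orders of magnitude between the two factors is exactly what separates the two residuals in the experiment.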
For illustration, consider the following MATLAB snippet:

```matlab
n = 32;
A = 2 * eye(n) - triu(ones(n));   % upper triangular, highly ill-conditioned
[U, S, V] = svd(A);
b = U(:, 1);                      % first left singular vector of A
x_LU = A \ b;                     % solve via LU factorization
x_INV = inv(A) * b;               % solve via explicit inverse
norm(b - A * x_LU, inf)           % 5.551e-17
norm(b - A * x_INV, inf)          % 9.543e-10
```
The choice of $A$ and $b$ is quite extreme ($A$ is highly ill-conditioned and $b$ points in the direction of the first left singular vector), but it clearly illustrates the issue (setting aside, of course, questions of computational efficiency).
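The same experiment can be reproduced in NumPy (a sketch: `np.linalg.solve` calls LAPACK's LU with partial pivoting; the exact residuals depend on the BLAS/LAPACK build, but the gap of many orders of magnitude is robust):

```python
import numpy as np

n = 32
A = 2 * np.eye(n) - np.triu(np.ones((n, n)))

U, s, Vt = np.linalg.svd(A)
b = U[:, 0]                          # first left singular vector of A

x_lu = np.linalg.solve(A, b)         # LU-based solve
x_inv = np.linalg.inv(A) @ b         # explicit inverse, then matvec

r_lu = np.linalg.norm(b - A @ x_lu, np.inf)
r_inv = np.linalg.norm(b - A @ x_inv, np.inf)
print(r_lu)    # on the order of the unit roundoff
print(r_inv)   # several orders of magnitude larger
```

Note that $\|x\|=1/\sigma_1$ is tiny for this choice of $b$, which is precisely the regime $\|x\|\ll\|A^{-1}\|\|b\|$ where the LU bound wins.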