Getting the inverse of a lower/upper triangular matrix

For a lower triangular matrix, the inverse should be easy to find; isn't that part of the point of the LU decomposition? For many lower or upper triangular matrices, I can often just flip the signs of the off-diagonal entries to get the inverse. For example: $$\begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ -1.5 & 0 & 1 \end{bmatrix}^{-1}= \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 1.5 & 0 & 1 \end{bmatrix}$$ I just flipped $-1.5$ to $1.5$ and got the inverse.

But this apparently doesn't work all the time. Take this matrix: $$\begin{bmatrix} 1 & 0 & 0\\ -2 & 1 & 0\\ 3.5 & -2.5 & 1 \end{bmatrix}^{-1}\neq \begin{bmatrix} 1 & 0 & 0\\ 2 & 1 & 0\\ -3.5 & 2.5 & 1 \end{bmatrix}$$ Flipping the signs gives the wrong inverse. But if I go through the whole tedious process of Gauss-Jordan elimination, I get the correct inverse: $\begin{bmatrix} 1 & 0 & 0\\ -2 & 1 & 0\\ 3.5 & -2.5 & 1 \end{bmatrix}^{-1}= \begin{bmatrix} 1 & 0 & 0\\ 2 & 1 & 0\\ 1.5 & 2.5 & 1 \end{bmatrix}$ It looks like some entries just have their signs flipped, but not others.
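A quick numerical check makes the failure concrete. This sketch uses NumPy, which is my choice of tool and not something from the question:

```python
# Verify that flipping signs inverts the first matrix but not the second.
import numpy as np

L1 = np.array([[1.0, 0, 0], [0, 1, 0], [-1.5, 0, 1]])
L2 = np.array([[1.0, 0, 0], [-2, 1, 0], [3.5, -2.5, 1]])

flip1 = np.array([[1.0, 0, 0], [0, 1, 0], [1.5, 0, 1]])     # signs flipped
flip2 = np.array([[1.0, 0, 0], [2, 1, 0], [-3.5, 2.5, 1]])  # signs flipped

print(np.allclose(L1 @ flip1, np.eye(3)))  # True: flipping works here
print(np.allclose(L2 @ flip2, np.eye(3)))  # False: flipping fails here
print(np.linalg.inv(L2))                   # the actual inverse
```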

This is strange, because I thought the whole point of computing the lower and upper triangular factors was to avoid the tedious Gauss-Jordan elimination and obtain the inverse quickly by flipping signs. Maybe I have missed something here. How can I quickly invert a lower or upper triangular matrix?


Ziyuang's answer handles the case $N^2=0$, but it can be generalized as follows. A triangular $n\times n$ matrix $T$ with 1s on the diagonal can be written in the form $T=I+N$. Here $N$ is the strictly triangular part (with zeros on the diagonal), and it always satisfies the relation $N^{n}=0$. Therefore we can use the polynomial factorization $1-x^n=(1-x)(1+x+x^2+\cdots +x^{n-1})$ with $x=-N$ to get the matrix relation $$ (I+N)(I-N+N^2-N^3+\cdots+(-1)^{n-1}N^{n-1})=I + (-1)^{n-1}N^n=I $$ telling us that $(I+N)^{-1}=I+\sum_{k=1}^{n-1}(-1)^kN^k$.
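As a sketch, the finite sum can be coded directly; the function name and the use of NumPy are my own, not part of the answer:

```python
# Invert a unit triangular matrix T = I + N via the finite series
# (I + N)^{-1} = I + sum_{k=1}^{n-1} (-1)^k N^k, which terminates
# because the strictly triangular part N satisfies N^n = 0.
import numpy as np

def unit_triangular_inverse(T):
    """Inverse of a triangular matrix with 1s on the diagonal."""
    n = T.shape[0]
    N = T - np.eye(n)              # strictly triangular part, N^n = 0
    inv = np.eye(n)
    power = np.eye(n)
    for k in range(1, n):
        power = power @ N          # N^k
        inv += (-1) ** k * power   # add (-1)^k N^k
    return inv

T = np.array([[1.0, 0, 0], [-2, 1, 0], [3.5, -2.5, 1]])
print(unit_triangular_inverse(T))  # [[1, 0, 0], [2, 1, 0], [1.5, 2.5, 1]]
```

This also shows why only some entries of the example in the question were sign flips: the first subdiagonal of the inverse is $-N$, but entries farther from the diagonal pick up contributions from $N^2, N^3, \dots$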

Yet another way of looking at this is to notice that it also is an instance of a geometric series $1+q+q^2+q^3+\cdots =1/(1-q)$ with $q=-N$. The series converges for the unusual reason that powers of $q$ are all zero from some point on. The same formula can be used to good effect elsewhere in algebra, too. For example, in a residue class ring like $\mathbf{Z}/2^n\mathbf{Z}$ all the even numbers are nilpotent, so computing the modular inverse of an odd number can be done with this formula.
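For instance, the modular-inverse trick can be sketched as follows (the function name is hypothetical, chosen for illustration):

```python
# Invert an odd number a modulo 2**n using the geometric series
# 1/(1 - q) = 1 + q + q^2 + ..., with q = 1 - a. Since a is odd, q is
# even, so q**n is divisible by 2**n: q is nilpotent mod 2**n and the
# series terminates after n terms.
def inverse_mod_power_of_two(a, n):
    assert a % 2 == 1, "only odd numbers are invertible mod 2**n"
    m = 1 << n                # modulus 2**n
    q = (1 - a) % m           # even, hence nilpotent mod 2**n
    result, power = 0, 1
    for _ in range(n):        # sum q^0 + q^1 + ... + q^(n-1)
        result = (result + power) % m
        power = (power * q) % m
    return result

print(inverse_mod_power_of_two(7, 10) * 7 % (1 << 10))  # 1
```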


In the case of a lower triangular matrix with arbitrary nonzero diagonal entries, you just need to rewrite it as $T = D(I+N)$, where $D$ is a diagonal matrix and $N$ is again strictly lower triangular. Then everything said about the inverse in the previous answers carries over, and $T^{-1} = (I+N)^{-1}D^{-1}$.
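A minimal sketch of this reduction, assuming NumPy (the function name is my own): since $T=D(I+N)$, we have $T^{-1}=(I+N)^{-1}D^{-1}$, with $(I+N)^{-1}$ given by the finite series above.

```python
# Inverse of a triangular T with nonzero diagonal via T = D(I + N).
import numpy as np

def triangular_inverse(T):
    d = np.diag(T)                   # diagonal entries, assumed nonzero
    n = T.shape[0]
    N = T / d[:, None] - np.eye(n)   # strictly triangular part of D^{-1} T
    inv, power = np.eye(n), np.eye(n)
    for k in range(1, n):
        power = power @ N
        inv += (-1) ** k * power     # (I + N)^{-1} = sum (-1)^k N^k
    return inv / d[None, :]          # (I + N)^{-1} D^{-1}

T = np.array([[2.0, 0, 0], [1, 3, 0], [4, 5, 6]])
print(np.allclose(triangular_inverse(T) @ T, np.eye(3)))  # True
```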


Computing the inverse misses the whole point of factorizing into triangular matrices. If you have a triangular matrix, you should almost never need to compute the inverse, because solving triangular systems can be done quickly by back/forward-substitution without ever inverting the matrix.

For example, if you have a lower-triangular matrix $L$, and you see an expression like $L^{-1} y$, you should almost never actually compute $L^{-1}$ and then multiply it by $y$. Instead, you would solve $Lx = y$ by forward-substitution, obtaining $x=L^{-1}y$.
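A forward-substitution sketch, assuming NumPy (the function name is my own; in practice you would call a library routine such as `scipy.linalg.solve_triangular`):

```python
# Solve L x = y for lower triangular L by forward substitution,
# without ever forming L^{-1}.
import numpy as np

def forward_substitution(L, y):
    n = len(y)
    x = np.zeros(n)
    for i in range(n):
        # row i reads: L[i,i]*x[i] + sum_{j<i} L[i,j]*x[j] = y[i]
        x[i] = (y[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

L = np.array([[1.0, 0, 0], [-2, 1, 0], [3.5, -2.5, 1]])
y = np.array([1.0, 2.0, 3.0])
x = forward_substitution(L, y)
print(np.allclose(L @ x, y))  # True
```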

More quantitatively, if you have an $m \times m$ upper/lower triangular matrix $T$, then you can solve $Tx=y$ by back/forward-substitution in $\Theta(m^2)$ operations, whereas computing $T^{-1}$ for a general triangular matrix requires $\Theta(m^3)$ operations. (In general, when you see $A^{-1}y$ and you want to compute it, you should read it as "solve $Ax=y$ by the best available method for $A$"; computational scientists almost never explicitly compute inverse matrices.)