For an invertible matrix $A$, do the nonzero entries remain nonzero in its inverse?

I cannot seem to wrap my head around this (neither finding a counterexample nor managing to prove it).

Let $A\in \mathcal{M}(n\times n, \mathbb R)$ be invertible. Does the following implication hold:

$\forall i,j\in \{1,\dots,n\}:\; A_{ij} \neq 0\implies (A^{-1})_{ij}\neq 0$?

If it is true, how can it be proven? If not, do you have a suitable counterexample?


This statement is not true. For example, consider $$ A = \pmatrix{1 & -1&1\\0&1&-1\\0&0&1}, \quad A^{-1} = \pmatrix{1&1&0\\0&1&1\\0&0&1}. $$ We have $A_{13} \neq 0$, but $(A^{-1})_{13} = 0$.
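If you want to double-check a candidate counterexample numerically, here is a quick sketch in Python/NumPy (any CAS would do just as well):

```python
import numpy as np

# The matrix A and its claimed inverse from the counterexample above.
A = np.array([[1.0, -1.0,  1.0],
              [0.0,  1.0, -1.0],
              [0.0,  0.0,  1.0]])
A_inv = np.linalg.inv(A)

# The computed inverse matches the claimed one, up to floating-point error.
expected = np.array([[1.0, 1.0, 0.0],
                     [0.0, 1.0, 1.0],
                     [0.0, 0.0, 1.0]])
assert np.allclose(A_inv, expected)

# A[0, 2] is nonzero, yet the corresponding entry of the inverse vanishes.
print(A[0, 2], A_inv[0, 2])  # 1.0 and (approximately) 0.0
```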


It is not true. For instance, $$\begin{bmatrix}1 & 1 & 1 \\ 1 & 1 & 2 \\ 0 & 1 & 1\end{bmatrix}^{-1}=\begin{bmatrix}1 & 0 & -1 \\ 1 & -1 & 1 \\ -1 & 1 & 0\end{bmatrix}.$$


$$ \left[ \begin{matrix} 1 & 1 \\ 1 & 0 \end{matrix} \right] \left[ \begin{matrix} 0 & 1 \\ 1 & -1 \end{matrix} \right] = \left[ \begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix} \right] $$
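The right-hand factor comes straight from the standard $2\times 2$ inverse formula:

$$ \left[ \begin{matrix} a & b \\ c & d \end{matrix} \right]^{-1} = \frac{1}{ad-bc} \left[ \begin{matrix} d & -b \\ -c & a \end{matrix} \right], \qquad \left[ \begin{matrix} 1 & 1 \\ 1 & 0 \end{matrix} \right]^{-1} = \frac{1}{0-1} \left[ \begin{matrix} 0 & -1 \\ -1 & 1 \end{matrix} \right] = \left[ \begin{matrix} 0 & 1 \\ 1 & -1 \end{matrix} \right], $$

so $A_{11} = 1 \neq 0$ while $(A^{-1})_{11} = 0$.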

Looks like I was beaten to a counterexample by other people already (although the one above is a lot simpler), but I'll go a bit more in-depth than just providing an example.

Every invertible matrix can be obtained from the identity by a sequence of elementary row operations, and can therefore be written as a product of elementary matrices, since each elementary matrix operation (EMO) corresponds to left-multiplication by an elementary matrix. To get the matrix back to the identity, we apply the inverse of each EMO in reverse order; the product of the elementary matrices representing these inverse operations undoes the original transformation, which is exactly the definition of the matrix inverse.
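As a concrete instance, the $2\times 2$ counterexample above factors into elementary matrices, and inverting the factors in reverse order produces its inverse:

$$ \left[ \begin{matrix} 1 & 1 \\ 1 & 0 \end{matrix} \right] = \underbrace{\left[ \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \right]}_{\text{swap rows}} \underbrace{\left[ \begin{matrix} 1 & 0 \\ 1 & 1 \end{matrix} \right]}_{\text{add row 1 to row 2}}, \qquad \left[ \begin{matrix} 1 & 1 \\ 1 & 0 \end{matrix} \right]^{-1} = \left[ \begin{matrix} 1 & 0 \\ -1 & 1 \end{matrix} \right] \left[ \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \right] = \left[ \begin{matrix} 0 & 1 \\ 1 & -1 \end{matrix} \right]. $$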

The "scalar multiply row" EMO of course will preserve the zero-ness of entries. However, both adding rows together and swapping rows very well could change the position.

To better demonstrate this intuition, let's look at a larger (but in some ways simpler) example built only from row swaps:

$$ \left[ \begin{matrix} 1 \\ & & 1 \\ & & & 1 \\ & 1 \end{matrix} \right] \left[ \begin{matrix} 1 \\ & & & 1 \\ & 1 \\ & & 1 \end{matrix} \right] = \left[ \begin{matrix} 1 \\ & 1 \\ & & 1 \\ & & & 1 \end{matrix} \right] $$

The first matrix is obtained from the identity by swapping rows 2 and 3 and then rows 3 and 4; its inverse, the second matrix, is therefore obtained by first swapping rows 3 and 4 and then rows 2 and 3. These are not the same sequence of operations, so they leave the 1s from the initial diagonal in different places, and the zero patterns of the two matrices do not line up.

Of course, infinitely many counterexamples exist, and you can honestly just try a ton of random matrices until you find one (as in the sketch below). This is just an intuitive explanation of one way (specifically, the way I used) to conclude that the statement must have exceptions, but you don't have to use this approach to see it (and in fact there isn't much to understand; it's simply a false statement).
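Here is a rough sketch of that random search in Python/NumPy; the matrix size, entry range, and the $10^{-9}$ zero tolerance are all arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

for _ in range(1000):
    # Random 3x3 integer matrix; skip it if it is (numerically) singular.
    A = rng.integers(-2, 3, size=(3, 3)).astype(float)
    if abs(np.linalg.det(A)) < 1e-9:
        continue
    A_inv = np.linalg.inv(A)
    # Look for an entry that is nonzero in A but zero in its inverse.
    mask = (np.abs(A) > 1e-9) & (np.abs(A_inv) < 1e-9)
    if mask.any():
        print("Counterexample:")
        print(A)
        print(A_inv)
        break
```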