I was working on the problem below, but I can't seem to find a precise or clean way to write the proof. I tried a few approaches, but the statements/operations I used were pretty loose. The problem is as follows:

Show that ${\bf A}^{-1}$ exists if and only if the eigenvalues $\lambda_i$, $1 \leq i \leq n$, of ${\bf A}$ are all non-zero, and that in that case ${\bf A}^{-1}$ has eigenvalues $\frac{1}{\lambda_i}$, $1 \leq i \leq n$.

Thanks.


(Assuming $\mathbf{A}$ is a square matrix, of course). Here's a solution that does not invoke determinants or diagonalizability, but only the definition of eigenvalue/eigenvector, and the characterization of invertibility in terms of the nullspace. (Added for clarity: $\mathbf{N}(\mathbf{A}) = \mathrm{ker}(\mathbf{A}) = \{\mathbf{x}\mid \mathbf{A}\mathbf{x}=\mathbf{0}\}$, the nullspace/kernel of $\mathbf{A}$.)

\begin{align*} \mbox{$\mathbf{A}$ is not invertible} &\Longleftrightarrow \mathbf{N}(\mathbf{A})\neq\{\mathbf{0}\}\\ &\Longleftrightarrow \mbox{there exists $\mathbf{x}\neq\mathbf{0}$ such that $\mathbf{A}\mathbf{x}=\mathbf{0}$}\\ &\Longleftrightarrow \mbox{there exists $\mathbf{x}\neq\mathbf{0}$ such that $\mathbf{A}\mathbf{x}=0\mathbf{x}$}\\ &\Longleftrightarrow \mbox{there exists an eigenvector of $\mathbf{A}$ with eigenvalue $\lambda=0$}\\ &\Longleftrightarrow \mbox{$\lambda=0$ is an eigenvalue of $\mathbf{A}$.} \end{align*}
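For a concrete instance of this chain of equivalences (my own toy example, not part of the argument above): take
$$\mathbf{A} = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \qquad \mathbf{A}\begin{pmatrix} 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} = 0\begin{pmatrix} 1 \\ -1 \end{pmatrix},$$
so $(1,-1)^T$ is an eigenvector with eigenvalue $\lambda=0$, the nullspace is nontrivial, and $\mathbf{A}$ is not invertible.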

Note that this argument holds even in the case where $\mathbf{A}$ has no eigenvalues (when working over a non-algebraically closed field, of course), where the condition "the eigenvalues of $\mathbf{A}$ are all nonzero" is true by vacuity.

For $\mathbf{A}$ invertible: \begin{align*} \mbox{$\lambda\neq 0$ is an eigenvalue of $\mathbf{A}$} &\Longleftrightarrow \mbox{$\lambda\neq 0$ and there exists $\mathbf{x}\neq \mathbf{0}$ such that $\mathbf{A}\mathbf{x}=\lambda\mathbf{x}$}\\ &\Longleftrightarrow\mbox{there exists $\mathbf{x}\neq\mathbf{0}$ such that $\mathbf{A}({\textstyle\frac{1}{\lambda}}\mathbf{x}) = \mathbf{x}$}\\ &\Longleftrightarrow\mbox{there exists $\mathbf{x}\neq \mathbf{0}$ such that $\mathbf{A}^{-1}\mathbf{A}({\textstyle\frac{1}{\lambda}}\mathbf{x}) = \mathbf{A}^{-1}\mathbf{x}$}\\ &\Longleftrightarrow\mbox{there exists $\mathbf{x}\neq \mathbf{0}$ such that $\frac{1}{\lambda}\mathbf{x} = \mathbf{A}^{-1}\mathbf{x}$}\\ &\Longleftrightarrow\mbox{$\frac{1}{\lambda}$ is an eigenvalue of $\mathbf{A}^{-1}$.} \end{align*}
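If it helps to see the reciprocal relationship numerically, here is a quick sanity check (not part of the proof; the matrix below is just an arbitrary invertible example):

```python
# Quick numerical check (not a proof): for an invertible matrix, the
# eigenvalues of A^{-1} should be the reciprocals of the eigenvalues of A.
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # arbitrary invertible example

eig_A = np.linalg.eigvals(A)        # eigenvalues of A (here: 5 and 2)
eig_Ainv = np.linalg.eigvals(np.linalg.inv(A))

print(np.sort(eig_A))               # [2. 5.]
print(np.sort(1.0 / eig_A))         # [0.2 0.5]
print(np.sort(eig_Ainv))            # [0.2 0.5], matching the reciprocals
```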


Here is a short proof using the fact that the eigenvalues of ${\bf A}$ are precisely the solutions in $\lambda$ to the equation $\det ({\bf A}-\lambda {\bf I})=0$.

Suppose one of the eigenvalues is zero, say $\lambda_k=0$. Then $\det ({\bf A}-\lambda_k {\bf I})=\det ({\bf A})=0$, so ${\bf A}$ is not invertible.

On the other hand, suppose all eigenvalues are nonzero. Then $\lambda=0$ is not a solution to the equation $\det ({\bf A}-\lambda {\bf I})=0$, so $\det({\bf A})=\det({\bf A}-0\cdot{\bf I})\neq 0$ and ${\bf A}$ is invertible.
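As a small illustration of this criterion (my own example, not from the argument above): for ${\bf A}=\begin{pmatrix}2&1\\1&2\end{pmatrix}$,
$$\det ({\bf A}-\lambda {\bf I}) = (2-\lambda)^2-1 = (\lambda-1)(\lambda-3),$$
so the eigenvalues are $1$ and $3$, both nonzero, and setting $\lambda=0$ gives $\det({\bf A})=3\neq 0$.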

I'll leave the second question to you.


A different approach from Joseph's (one which also shows what the form of $A^{-1}$ is):

Let us assume for simplicity that $A$ is diagonalizable (otherwise one can most likely extend the argument using the Jordan normal form). The matrix $A$ can then be brought into the form $$A= T D T^{-1}$$ with $D = \text{diag}(\lambda_i)$ a diagonal matrix containing the eigenvalues and $T$ an invertible matrix. The inverse of $A$ therefore reads $$A^{-1} = (T D T^{-1})^{-1} = T D^{-1} T^{-1}.$$ This inverse exists if $D^{-1}$ exists. But $D^{-1}$ is easy to calculate: it is given by $D^{-1} =\text{diag}(\lambda_i^{-1})$, which exists as long as all $\lambda_i$ are nonzero.
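A small numerical illustration of this (assuming, as above, that $A$ is diagonalizable; the particular $T$ and $\lambda_i$ below are just an arbitrary example I made up):

```python
# Build A = T D T^{-1} from chosen eigenvalues and an invertible T, then
# invert it via A^{-1} = T D^{-1} T^{-1} and compare with a direct inverse.
import numpy as np

lam = np.array([2.0, -1.0, 0.5])          # nonzero eigenvalues
D = np.diag(lam)
T = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])           # any invertible matrix will do

A = T @ D @ np.linalg.inv(T)
A_inv = T @ np.diag(1.0 / lam) @ np.linalg.inv(T)   # T diag(1/lambda_i) T^{-1}

print(np.allclose(A @ A_inv, np.eye(3)))        # True
print(np.allclose(A_inv, np.linalg.inv(A)))     # True
```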


I'd like to add a few things to complement Arturo's and Fabian's answers.

If you take the outer product of a unit vector, $\hat{e}$ ($\lvert e \rangle$, Dirac notation), with its dual, $\hat{e}^\ast$ ($\langle e \rvert$), you get a matrix that projects vectors onto the one-dimensional space spanned by that unit vector, i.e.

$$\begin{aligned}
\mathbf{P}_e &= \hat{e} \otimes \hat{e}^\ast \\
&= \lvert e \rangle \langle e \rvert \\
&= \left( \begin{array}{ccc} \lVert e_{1} \rVert^2 & e_{1} e_2^\ast & \ldots \\ e_2 e_1^\ast & \lVert e_{2} \rVert^2 & \ldots \\ \vdots & \vdots & \ddots \end{array} \right)
\end{aligned}$$

where

$$ \mathbf{P}_e \vec{v} = (\vec{v} \cdot \hat{e}) \hat{e}. $$

In other words, $\mathbf{P}_e$ is a projection operator. Using this, you can rewrite a matrix $\mathbf{A}$ that has an orthonormal basis of eigenvectors (e.g. a normal matrix) in terms of its eigenvalues, $\lambda_i$, and eigenvectors, $\lvert e_i \rangle$,

$$ \mathbf{A} = \sum_i \lambda_i \lvert e_i \rangle \langle e_i \rvert$$

which is called the spectral decomposition. From this it is plain to see that any eigenvector $\hat{e}_i$ with a zero eigenvalue does not contribute to the matrix, and for the component of any vector $\vec{v}$ lying in such an eigenspace, $\vec{v}_i = \mathbf{P}_{e_i}\vec{v}$, we have $$\mathbf{A} \vec{v}_i = \mathbf{0}.$$

This implies that the dimension of the image of $\mathbf{A}$ is smaller than the dimension of the space it acts on. In other words, $\mathbf{A}$ does not possess full rank and is not invertible.
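Here is a small numerical sketch of that last point (assuming an orthonormal set of eigenvectors, here produced by a QR factorization; none of these particular numbers come from the answer itself):

```python
# Build A = sum_i lambda_i |e_i><e_i| with one zero eigenvalue and check
# that A is rank-deficient and annihilates the corresponding eigenvector.
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # columns form an orthonormal basis
lam = np.array([2.0, 1.0, 0.0])                    # note the zero eigenvalue

A = sum(l * np.outer(Q[:, i], Q[:, i]) for i, l in enumerate(lam))

print(np.linalg.matrix_rank(A))          # 2: not full rank, so A is not invertible
print(np.allclose(A @ Q[:, 2], 0))       # True: the lambda = 0 eigenvector is annihilated
```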