How to determine the diagonalizability of these two matrices?

I am trying to figure out how to determine the diagonalizability of the following two matrices. For the first matrix

$$\left[\begin{matrix} 0 & 1 & 0\\0 & 0 & 1\\2 & -5 & 4\end{matrix}\right]$$

There are only two distinct eigenvalues: $\lambda_1 = \lambda_2 = 1$ and $\lambda_3 = 2$.

According to the theorem, if $A$ is an $n \times n$ matrix with $n$ distinct eigenvalues, then $A$ is diagonalizable.

For the next $3 \times 3$ matrix

$$\left[\begin{matrix} -1 & 0 & 1\\3 & 0 & -3\\1 & 0 & -1\end{matrix}\right]$$

We also have only two distinct eigenvalues, $\lambda_1 = \lambda_2 = 0$ and $\lambda_3 = -2$.

For the first matrix, the algebraic multiplicity of $\lambda_1 = 1$ is $2$ and the geometric multiplicity is $1$. So according to the theorem, this would not be diagonalizable, since the geometric multiplicity is not equal to the algebraic multiplicity.

For the second matrix, the algebraic multiplicity and the geometric multiplicity of both eigenvalues are equal, so it is diagonalizable according to my textbook. But there are still only two distinct eigenvalues in a $3 \times 3$ matrix, so why is this diagonalizable if we are to accept the first theorem?

Also, how does one determine the geometric multiplicity of an eigenvalue of a matrix?
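(For reference, here is how I checked the numbers above, as a minimal SymPy sketch; the names `A1` and `A2` are just my labels.)

```python
# For each eigenvalue, eigenvects() returns its algebraic multiplicity
# and a basis of its eigenspace; the length of that basis is the
# geometric multiplicity.
from sympy import Matrix

A1 = Matrix([[0, 1, 0], [0, 0, 1], [2, -5, 4]])
A2 = Matrix([[-1, 0, 1], [3, 0, -3], [1, 0, -1]])

for name, A in (("A1", A1), ("A2", A2)):
    for lam, alg_mult, basis in A.eigenvects():
        print(name, "eigenvalue", lam,
              "algebraic:", alg_mult, "geometric:", len(basis))
    print(name, "diagonalizable:", A.is_diagonalizable())
```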


Solution 1:

The diagonalization theorem, here for example, states that you can take $$ A = \left[\begin{matrix} -1 & 0 & 1\\3 & 0 & -3\\1 & 0 & -1\end{matrix}\right]$$ and turn it into a diagonal matrix $$ V = \left[\begin{matrix} 0 & 0 & 0\\0 & 0 & 0\\0 & 0 & -2\end{matrix}\right] $$ where the diagonal elements of $V$ are the eigenvalues $(0,0,-2)$ of $A$, using $$V = P^{-1} A P,$$ where $P = (v_1 \quad v_2 \quad v_3)$ is invertible. This is possible only if $A$ has $n$ linearly independent eigenvectors $v_1, v_2, v_3.$ In this case, although $\lambda_1 = \lambda_2 = 0$, you have a non-singular $$ P = \left[\begin{matrix} 1 & 0 & -1\\0 & 1 & 3\\1 & 0 & 1\end{matrix}\right]. $$ To decide, first find all eigenvectors, form $P$, and check whether $P$ is non-singular (equivalently, whether $v_1, v_2, v_3$ are linearly independent). For the first matrix, however, $$P = \left[\begin{matrix} 1/4 & 1 & 0\\1/2 & 1 & 0\\1 & 1 & 0\end{matrix}\right],$$ which is singular (the zero column records the missing third independent eigenvector).
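A minimal SymPy sketch of this same procedure, assuming SymPy is available (the variable names are mine): collect all eigenvectors, form $P$, and test invertibility.

```python
from sympy import Matrix

A = Matrix([[-1, 0, 1], [3, 0, -3], [1, 0, -1]])

# Gather a basis of each eigenspace; their concatenation gives the columns of P.
cols = [v for _lam, _mult, basis in A.eigenvects() for v in basis]

if len(cols) == A.rows:        # n independent eigenvectors exist
    P = Matrix.hstack(*cols)   # P is guaranteed non-singular here
    print(P.inv() * A * P)     # diagonal; eigenvalues follow the column order of P
else:
    print("fewer than", A.rows, "independent eigenvectors: not diagonalizable")
```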

Solution 2:

The converse of the theorem does not hold. If an $n \times n$ matrix does NOT have $n$ distinct eigenvalues, this doesn't mean it can't be diagonalizable; we can't decide that from the number of distinct eigenvalues alone. Having $n$ distinct eigenvalues is just a sufficient (but not necessary) condition. Check the first example here: http://en.wikipedia.org/wiki/Diagonalizable_matrix
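For instance, the identity matrix already shows the converse fails; a quick SymPy check (this example is illustrative, not quoted from the linked page):

```python
from sympy import eye

# The 3x3 identity matrix has a single distinct eigenvalue (1, repeated
# three times) yet is trivially diagonalizable -- it is already diagonal.
I3 = eye(3)
print(I3.eigenvals())           # {1: 3}: one distinct eigenvalue
print(I3.is_diagonalizable())   # True
```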

Solution 3:

To check whether $A=\left[\begin{matrix} 0 & 1 & 0\\0 & 0 & 1\\2 & -5 & 4\end{matrix}\right]$ is diagonalizable (assuming that the entries are taken over the field $\mathbb R$):

Suppose that with respect to some basis $\beta$ of $\mathbb R^3_\mathbb R$ we have $[T]_{\beta}=A$ for some linear operator $T$ on $\mathbb R^3_\mathbb R.$ Then $\chi_T(x)=(x-1)^2(x-2).$ Consequently the characteristic values (eigenvalues) of $T$ are $1$ and $2$ (since $1,2\in\mathbb R$).

Let us first check whether $T$ is diagonalizable:

$E_1(T)=\{v\in\mathbb R^3:Tv=v\}=Ker~(T-I_{\mathbb R^3})$

$E_2(T)=\{v\in\mathbb R^3:Tv=2v\}=Ker~(T-2I_{\mathbb R^3})$

Now

$Rank~[T-I_{\mathbb R^3}]_\beta=Rank\left[\begin{matrix} -1 & 1 & 0\\0 & -1 & 1\\2 & -5 & 3\end{matrix}\right]\leq 2$ (its determinant vanishes since $1$ is an eigenvalue), and its first two rows are linearly independent, so $Rank~[T-I_{\mathbb R^3}]_\beta=2,$ and

$Rank~[T-2I_{\mathbb R^3}]_\beta=Rank\left[\begin{matrix} -2 & 1 & 0\\0 & -2 & 1\\2 & -5 & 2\end{matrix}\right]= 2$ (operating on rows), so $Rank~[T-2I_{\mathbb R^3}]_\beta=2.$

Recall: for any two finite-dimensional vector spaces $V$ and $W$ over the same field and any $T\in L(V,W)$, $Rank~T$ equals the rank of any matrix representing $T.$

Consequently, $\dim E_1(T)=Nullity~(T-I_{\mathbb R^3})=3-Rank~(T-I_{\mathbb R^3})=3-Rank~[T-I_{\mathbb R^3}]_\beta=1.$ Similarly $\dim E_2(T)=3-2=1.$

Now $\dim E_1(T)+\dim E_2(T)=1+1=2\ne 3=\dim \mathbb R^3.$ Consequently $T$ is not diagonalizable.

$($ Alternatively $\chi_T(x)=(x-1)^2(x-2)\neq (x-1)^{\dim E_1(T)}(x-2)^{\dim E_2(T)}.$ Consequently $T$ is not diagonalizable.$)$

Therefore $A$ is not diagonalizable.
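The rank computations above can be replayed mechanically; here is a minimal SymPy sketch (my notation, same matrix):

```python
from sympy import Matrix, eye

A = Matrix([[0, 1, 0], [0, 0, 1], [2, -5, 4]])

# dim E_lambda = nullity(A - lambda*I) = 3 - rank(A - lambda*I)
dims = []
for lam in (1, 2):
    r = (A - lam * eye(3)).rank()
    dims.append(3 - r)
    print("lambda =", lam, ": rank =", r, ", dim E =", 3 - r)

print("sum of eigenspace dimensions:", sum(dims))  # 2 < 3, so not diagonalizable
```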

Solution 4:

Geometric multiplicity of an eigenvalue $\lambda$ is the dimension of the solution space of the equation $(A-\lambda I)X=0$.

So, in your first case, to determine the geometric multiplicity of the (repeated) eigenvalue $\lambda=1$, we consider $\left[\begin{matrix} -1 & 1 & 0\\0 & -1 & 1\\2 & -5 & 3\end{matrix}\right]$ $(x,y,z)^T=0$, or, after some elementary row operations, $\left[\begin{matrix} 1 & -1 & 0\\0 & 1 & -1\\0 & 0 & 0\end{matrix}\right]$ $(x,y,z)^T=0$, i.e., $x=y=z$, or $(x,y,z)=x(1,1,1)$. So the solution space is generated by the single vector $(1,1,1)$. That's why the geometric multiplicity of $\lambda=1$ is $1$.

For your second example, we similarly consider $\left[\begin{matrix} -1 & 0 & 1\\3 & 0 & -3\\1 & 0 & -1\end{matrix}\right]$ $(x,y,z)^T=0$, which has solution $x=z$ with $y$ free, i.e., $(x,y,z)=(x,y,x)=x(1,0,1)+y(0,1,0)$. So the solution space is generated by the two vectors $(1,0,1)$ and $(0,1,0)$; hence its dimension is $2$.
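If you want to confirm these nullspace computations, here is a short SymPy sketch (the names are mine):

```python
from sympy import Matrix, eye

A1 = Matrix([[0, 1, 0], [0, 0, 1], [2, -5, 4]])
A2 = Matrix([[-1, 0, 1], [3, 0, -3], [1, 0, -1]])

# nullspace() returns a basis of the solution space of (A - lambda*I) X = 0.
print((A1 - eye(3)).nullspace())  # one vector, (1, 1, 1)^T -> geometric multiplicity 1
print(A2.nullspace())             # two vectors             -> geometric multiplicity 2
```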