$\left( \begin{array}{cc} 1 & 1 \\ 0 & 1 \\ \end{array} \right)$ not diagonalizable
I would like to ask about this problem that I encountered:
Show that there exists no matrix T such that $$T^{-1}\cdot \left( \begin{array}{cc} 1 & 1 \\ 0 & 1 \\ \end{array} \right)\cdot T $$ is diagonal.
In other words, our matrix, call it $A$, is not diagonalizable ($A$ being the matrix between the $T$'s).
I saw the following: $$\left( \begin{array}{cc} 1 & 1 \\ 0 & 1 \\ \end{array} \right)=\left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ \end{array} \right)+\left( \begin{array}{cc} 0 & 1 \\ 0 & 0 \\ \end{array} \right)$$ Let us write this as $$A=D+N.$$ It is also easy to see that $DN = ND$ and $N^{2}=0$. It follows that
$(D+N)^{t}=D^{t}+tN = \text{Identity}^{t}+t\left( \begin{array}{cc} 0 & 1 \\ 0 & 0 \\ \end{array} \right)=\left( \begin{array}{cc} 1 & t \\ 0 & 1 \\ \end{array} \right)$
Note: the binomial expansion reduces to this, since every term containing $N^{2}$ or a higher power of $N$ vanishes, and $D$ is the identity.
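As a sanity check on this formula, here is a small plain-Python sketch (the helper `matmul` is my own) that multiplies out $A^m$ for a few values of $m$ and compares against $\begin{pmatrix}1&m\\0&1\end{pmatrix}$:

```python
# Check the claim A^m = I + m*N = [[1, m], [0, 1]] by direct
# 2x2 integer matrix multiplication (no libraries needed).

def matmul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [0, 1]]

P = [[1, 0], [0, 1]]  # A^0 = identity
for m in range(1, 10):
    P = matmul(P, A)  # P is now A^m
    assert P == [[1, m], [0, 1]], (m, P)

print("A^m = [[1, m], [0, 1]] verified for m = 1..9")
```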
But I somehow fail to see why from here one can deduce (or not) that $A$ is not diagonalizable. Any hint or help greatly appreciated!
Thanks
A different approach:
Your matrix has a single eigenvalue $\lambda=1$ (with multiplicity $2$), so if it were diagonalizable it would be similar to the identity, i.e. $$P^{-1} \begin{pmatrix} 1 & 1 \\ 0 & 1 \\ \end{pmatrix} P=\begin{pmatrix} 1 & 0 \\ 0 & 1 \\ \end{pmatrix} \Rightarrow \\ \begin{pmatrix} 1 & 1 \\ 0 & 1 \\ \end{pmatrix}=P \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ \end{pmatrix}P^{-1}=PP^{-1}=\begin{pmatrix} 1 & 0 \\ 0 & 1 \\ \end{pmatrix};$$ contradiction.
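If you want to double-check both ingredients of this argument computationally, here is a short sketch using SymPy (assumed available):

```python
# Verify: the only eigenvalue of A is 1 (algebraic multiplicity 2),
# yet A is not diagonalizable, so it cannot be similar to the identity.
from sympy import Matrix

A = Matrix([[1, 1], [0, 1]])

assert A.eigenvals() == {1: 2}     # single eigenvalue 1, multiplicity 2
assert not A.is_diagonalizable()   # no invertible P can diagonalize A
print("A has eigenvalue 1 (multiplicity 2) and is not diagonalizable")
```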
More generally the matrix $$ A= \begin{pmatrix} 1 & * &* &\ldots & * & *\\ 0 & 1 &* &\ldots & * & *\\ 0 & 0 &1 &\ldots & * & *\\ \vdots & \vdots &\vdots &\ddots & \vdots &\vdots \\ 0 & 0 &0 &\ldots & 1 & * \\ 0 & 0 &0 &\ldots & 0 & 1 \\ \end{pmatrix} $$
is diagonalizable iff $A=I$.
Note that both eigenvalues of this matrix are $1$. If we look for the eigenvectors $\vec{x} = \begin{bmatrix} x_1\\ x_2\end{bmatrix}$ satisfying $$A \vec{x} = \vec{x},$$ we get $x_1 + x_2 = x_1$ and $x_2 = x_2$; the first equation forces $x_2 = 0$. Hence every eigenvector has the form $\vec{x} = \begin{bmatrix} \alpha\\ 0\end{bmatrix}$, so the geometric multiplicity of the eigenvalue is $1$, less than its algebraic multiplicity $2$. Hence, the matrix is not diagonalizable.
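The eigenspace computation above can be reproduced with SymPy's `eigenvects` (a sketch, assuming SymPy is available):

```python
# Confirm geometric multiplicity 1: the eigenspace of lambda = 1 is
# spanned by the single vector (1, 0).
from sympy import Matrix

A = Matrix([[1, 1], [0, 1]])
(eig, alg_mult, vecs), = A.eigenvects()  # exactly one eigenvalue record
assert eig == 1 and alg_mult == 2        # algebraic multiplicity 2
assert len(vecs) == 1                    # geometric multiplicity 1
assert vecs[0] == Matrix([1, 0])         # eigenvector (alpha, 0) with alpha = 1
```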
You have a very neat geometric idea here! To pursue it further, we'll need a bit of theory, though. Sheinman's small brochure on representation theory is enough.
Denote $T(t) = A^t$, and consider $T$ as a complex representation of $\mathbb{R}$. If $A$ is diagonalizable, it is unitary in some basis of $\mathbb{C}^2$, and thus this representation has to be completely reducible (note that we don't get this for free because $\mathbb{R}$ is not compact), and we know that since $\mathbb{R}$ is abelian, its irreducible representations have to have dimension one.
So let's enumerate all subrepresentations of $T$. By a well-known theorem, for simply connected groups there is a natural bijection between representations of a Lie group and its Lie algebra. The Lie algebra of $\mathbb{R}$ is of course the abelian algebra $\mathbb{R}$, and the corresponding representation $\dot{T}$ is generated by $X = \begin{bmatrix}0 & 1 \\ 0 & 0\end{bmatrix}$. We can immediately find one subrep, generated by $\begin{bmatrix}1 \\ 0\end{bmatrix}$, so all that's left is to find a complement. But $X \begin{bmatrix}0 \\ 1\end{bmatrix} = \begin{bmatrix}1 \\ 0\end{bmatrix}$, so clearly such a complement doesn't exist. Thus $T$ is not completely reducible, so it cannot be unitary, Q.E.D.
In fact, by the same argument any matrix $X$ satisfying $X^m = 0$ for some $m$ is diagonalizable iff $X = 0$. If $X \neq 0$ then $\{0\} \subsetneq \operatorname{im} X \subsetneq \operatorname{dom} X = \mathbb{C}^n$ so $\operatorname{im} X$ cannot have an invariant complement, so it cannot be unitarizable.
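Here is a quick SymPy sketch of this more general claim (the example matrices are my own choices): each test matrix is nilpotent, and it is diagonalizable exactly when it is zero.

```python
# Nilpotent X is diagonalizable iff X = 0: check a few nilpotent examples.
from sympy import Matrix, zeros

examples = [
    Matrix([[0, 1], [0, 0]]),                   # the N from the question
    Matrix([[0, 1, 0], [0, 0, 1], [0, 0, 0]]),  # 3x3 nilpotent Jordan block
    zeros(2, 2),                                # the only diagonalizable one
]
for X in examples:
    n = X.shape[0]
    assert (X**n).is_zero_matrix                      # X is nilpotent
    assert X.is_diagonalizable() == X.is_zero_matrix  # diagonalizable iff zero
```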
To continue with the approach you suggested, let
$$ E = T^{-1} A T $$
and suppose $E$ is diagonal. Then, since $A^t = I + tN$, we have
$$ E^t = T^{-1} A^t T = I + t\: T^{-1} N T $$
Now, suppose $\lambda$ is the top-left entry of $E$ and $\mu$ is the top-left entry of $T^{-1}NT$. This equation implies
$$ \lambda^t = 1 + t \mu $$
for all $t$. In particular, we have
$$\lambda = 1 + \mu \qquad \qquad \lambda^2 = 1 + 2 \mu$$
But squaring the first of these gives
$$\lambda^2 = 1 + 2 \mu + \mu^2$$
and therefore $\mu = 0$. The same argument applies to every diagonal entry, not just the top-left one.
Therefore $T^{-1}NT = 0$, and therefore $N = 0$, which is a contradiction.
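The little algebra step that forces $\mu = 0$ can be spot-checked symbolically (a SymPy sketch):

```python
# From lambda = 1 + mu, squaring gives lambda^2 = 1 + 2*mu + mu^2; comparing
# with lambda^2 = 1 + 2*mu leaves exactly mu^2 = 0, hence mu = 0.
from sympy import symbols, expand

mu = symbols('mu')
lam = 1 + mu                                  # first equation
assert expand(lam**2 - (1 + 2*mu)) == mu**2   # discrepancy is exactly mu^2
```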
This is a rather convoluted way to arrive at the conclusion that your matrix is not diagonalizable, but you can deduce it from your calculation $$ A^m=\begin{pmatrix}1&m\\0&1\end{pmatrix}\qquad\text{for }m\in\mathbf N. $$ The $m$-th power of a diagonal matrix has diagonal entries of the form $\lambda^m$ and all other entries $0$. A basis transformation is obtained by left and right multiplication by fixed matrices, so each entry of the result is a fixed linear combination of exponential expressions in $m$ (with those diagonal entries $\lambda$ as bases). However, the (linear) function $m$ that occurs as an entry of $A^m$ is not such a linear combination, for instance because the asymptotic behavior for large $m$ of a linear combination of exponentials cannot be linear growth. Therefore $A$ cannot be diagonalizable.
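To illustrate the contrast concretely (with an example matrix of my own choosing): for a diagonalizable $B = PDP^{-1}$ with eigenvalues $2$ and $3$, every entry of $B^m$ is a fixed combination of $2^m$ and $3^m$, whereas the off-diagonal entry of $A^m$ is $m$ itself.

```python
# Entries of powers of a diagonalizable matrix are combinations of lambda^m;
# the (1,2) entry of A^m is m, which is not of that exponential form.
from sympy import Matrix

P = Matrix([[1, 1], [0, 1]])
D = Matrix([[2, 0], [0, 3]])
B = P * D * P.inv()                      # B = [[2, 1], [0, 3]]
A = Matrix([[1, 1], [0, 1]])

for m in range(8):
    assert (B**m)[0, 1] == 3**m - 2**m   # exponential combination in m
    assert (A**m)[0, 1] == m             # linear in m
```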