Diagonalization of restrictions of a diagonalizable linear operator

I realized that I am having some difficulty proving this exercise.

Let $T : V \rightarrow V$ be a linear operator on a finite dimensional vector space $V$ over a field $F$, and let $U,W \subset V$ be $T$-invariant subspaces such that $V = U \oplus W$. Show that if $T$ is diagonalizable, then $T_{|U}$ and $T_{|W}$ are diagonalizable.

Any help would be greatly appreciated. Thanks!


There is an elementary proof of the more general statement that the restriction of a diagonalisable linear operator$~T$ to a $T$-stable subspace $U$ is again diagonalisable (in the finite dimensional case), along the lines of my other answer. See also this answer for a variant formulation.

Any vector $u\in U$ decomposes uniquely in$~V$ as $u=v_1+\cdots+v_k$, a sum of eigenvectors for distinct eigenvalues $\lambda_1,\ldots,\lambda_k$, and it suffices to show that those eigenvectors $v_i$ lie in$~U$. Since $U$ is $T$-stable, it is also stable under every $T-\lambda I$. Now to show that $v_i\in U$, successively apply to the equation the operators $T-\lambda_j I$ for $j\in\{1,2,\ldots,k\}\setminus\{i\}$. At each application each term $v_j$ is multiplied by a scalar, which is zero precisely when $T-\lambda_j I$ is the operator being applied. The result is that only a nonzero scalar multiple of $v_i$ remains, and since we started out with a vector $u$ of $U$, this result is still in$~U$. After division by the scalar this shows that $v_i\in U$.
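For concreteness, here is a small SymPy sketch of this extraction on a made-up example (the vectors $b_1,b_2,b_3$, the eigenvalues $1,2,3$, and the subspace $U=\operatorname{span}\{b_1,b_2\}$ are my own illustrative choices, not data from the question):

```python
import sympy as sp

# Made-up example: T = B*diag(1,2,3)*B^{-1} is diagonalizable with
# eigenvectors b1, b2, b3, and U = span{b1, b2} is a T-stable subspace.
b1, b2, b3 = sp.Matrix([1, 0, 0]), sp.Matrix([1, 1, 0]), sp.Matrix([1, 1, 1])
B = sp.Matrix.hstack(b1, b2, b3)
T = B * sp.diag(1, 2, 3) * B.inv()

u = b1 + b2                    # a vector of U, sum of eigenvectors for 1 and 2

# To recover the eigencomponent v_1 = b1, apply T - 2*I (2 being the other
# eigenvalue occurring in the sum); this kills b2 and scales b1 by (1 - 2).
v = (T - 2 * sp.eye(3)) * u
v = v / (1 - 2)                # divide by the surviving nonzero scalar
print(v == b1)                 # True: the eigencomponent is recovered exactly

# And it indeed lies in U: adjoining v to a basis of U does not raise the rank.
U_basis = sp.Matrix.hstack(b1, b2)
print(sp.Matrix.hstack(U_basis, v).rank() == U_basis.rank())   # True
```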

An equivalent formulation is by induction on$~k$, the number of (nonzero!) eigenvectors needed in the sum for$~u$. The starting cases $k\leq1$ are obvious. Otherwise application of $T-\lambda_kI$ gives $$ Tu-\lambda_ku=(\lambda_1-\lambda_k)v_1+(\lambda_2-\lambda_k)v_2+\cdots+(\lambda_{k-1}-\lambda_k)v_{k-1}, $$ and one can apply the induction hypothesis to the vector $Tu-\lambda_ku\in U$ to conclude that all the individual terms (eigenvectors) in the right hand side lie in $U$. But then so do the unscaled $v_1,\ldots,v_{k-1}$ (since all scalar factors are nonzero), and by necessity the remaining term $v_k$ in $u=v_1+\cdots+v_k$ must also lie in$~U$.


If you know the theorem that says that a linear operator on a finite dimensional vector space over$~F$ is diagonalisable (over$~F$) if and only if it is annihilated by some polynomial that can be decomposed in$~F[X]$ as a product of distinct factors of degree$~1$, then this is easy. By that theorem, let $P$ be such a polynomial for the diagonalisable operator$~T$ (so $P[T]=0$); then certainly $P[T|_U]=0$ and $P[T|_W]=0$, which by the same theorem shows that $T|_U$ and $T|_W$ are diagonalisable. In this high level answer it is irrelevant that both $U,W$ are given, and that they form a direct sum; it shows that more generally the restriction of a diagonalisable operator$~T$ to any $T$-stable subspace is diagonalisable.
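As a sanity check, here is a short SymPy computation on a made-up example (the matrix, basis and subspace below are illustrative assumptions), verifying that a polynomial with distinct linear factors annihilating $T$ also annihilates the matrix of $T|_U$:

```python
import sympy as sp

# Made-up example: T = B*diag(1,2,3)*B^{-1}, U = span{b1, b2} is T-stable.
b1, b2, b3 = sp.Matrix([1, 0, 0]), sp.Matrix([1, 1, 0]), sp.Matrix([1, 1, 1])
B = sp.Matrix.hstack(b1, b2, b3)
T = B * sp.diag(1, 2, 3) * B.inv()

def P(M):
    """P(X) = (X-1)(X-2)(X-3), a product of distinct linear factors."""
    n = M.shape[0]
    return (M - sp.eye(n)) * (M - 2 * sp.eye(n)) * (M - 3 * sp.eye(n))

print(P(T) == sp.zeros(3, 3))      # True: P[T] = 0, so T is diagonalisable

# Matrix M of T|_U in the basis (b1, b2), obtained from T*U_basis = U_basis*M:
U_basis = sp.Matrix.hstack(b1, b2)
M = U_basis.pinv() * (T * U_basis)
print(P(M) == sp.zeros(2, 2))      # True: P[T|_U] = 0, so T|_U is diagonalisable
```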

There is however a more low level reasoning that applies to this question, based on the fact that the projections on the factors of a $T$-stable direct sum decomposition commute with$~T$. This fact is immediate, since if $v=u+w$ with $u\in U$ and $w\in W$ describes the components of$~v$, then $Tv=Tu+Tw$ with $Tu\in U$ and $Tw\in W$ by $T$-stability, so it describes the components of$~Tv$. This means in particular that the projections on $U$ and $W$ of an eigenvector of$~T$ for$~\lambda$ are again eigenvectors of$~T$ for$~\lambda$ (or one of them might be zero), as the projection of $Tv=\lambda v$ is $\lambda$ times the projection of$~v$.
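A quick SymPy check of this commutation, on a made-up example in which the repeated eigenvalue $2$ is split over both summands so that the projections act nontrivially on an eigenvector (all specifics are illustrative assumptions):

```python
import sympy as sp

# Made-up example: eigenvalues 1, 2, 2 with eigenvectors b1, b2 spanning U
# and b3 spanning W, so both summands of V = U ⊕ W are T-stable.
b1, b2, b3 = sp.Matrix([1, 0, 0]), sp.Matrix([1, 1, 0]), sp.Matrix([1, 1, 1])
B = sp.Matrix.hstack(b1, b2, b3)
T = B * sp.diag(1, 2, 2) * B.inv()

P_U = B * sp.diag(1, 1, 0) * B.inv()   # projection onto U along W
P_W = sp.eye(3) - P_U                  # projection onto W along U
print(P_U * T == T * P_U)              # True: P_U commutes with T
print(P_W * T == T * P_W)              # True: so does P_W

v = b2 + b3                            # an eigenvector for 2, in neither U nor W
print(T * v == 2 * v)                  # True
print(T * (P_U * v) == 2 * (P_U * v))  # True: its U-component is an eigenvector
print(T * (P_W * v) == 2 * (P_W * v))  # True: and so is its W-component
```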

Now to show that $T|_U$ and $T|_W$ are diagonalisable, it suffices to project every eigenspace$~E_\lambda$ onto$~U$ and onto$~W$; the images are eigenspaces for$~\lambda$ of $T|_U$ and $T|_W$, or possibly the zero subspace. As it is given that $V=\bigoplus_\lambda E_\lambda$, the sums of the projections of the spaces $E_\lambda$ in $U$ respectively $W$ (which sums are always direct) fill up $U$ respectively $W$; in other words $T|_U$ and $T|_W$ are diagonalisable. Alternatively, to decompose a vector $u\in U$ as a sum of eigenvectors for $T|_U$, just decompose it into a sum of eigenvectors for$~T$, and project the summands onto$~U$ (parallel to$~W$); these projections clearly add up to$~u$ (and in fact it is easy to see that the projections did nothing: the eigenvectors for$~T$ were already inside$~U$).
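Continuing the same made-up example, projecting an eigenbasis of $V$ onto $U$ yields eigenvectors that fill up $U$:

```python
import sympy as sp

# Same made-up example: T = B*diag(1,2,2)*B^{-1}, U = span{b1,b2}, W = span{b3}.
b1, b2, b3 = sp.Matrix([1, 0, 0]), sp.Matrix([1, 1, 0]), sp.Matrix([1, 1, 1])
B = sp.Matrix.hstack(b1, b2, b3)
T = B * sp.diag(1, 2, 2) * B.inv()
P_U = B * sp.diag(1, 1, 0) * B.inv()   # projection onto U along W

# An eigenbasis of V whose vectors need not lie in U or W:
eigenbasis = [b1, b2 + b3, b2 - b3]    # eigenvalues 1, 2, 2
projections = [P_U * v for v in eigenbasis]
nonzero = [p for p in projections if p != sp.zeros(3, 1)]
print(sp.Matrix.hstack(*nonzero).rank())   # 2 = dim U: the projections span U
```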

Just one final warning: don't take away from this that projections onto $T$-stable subspaces always commute with$~T$, or send eigenspaces to eigenspaces for the restriction. That is not true in general: it only holds when the projection is along another $T$-stable subspace.
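The warning is easy to illustrate in the same made-up example: project onto the $T$-stable subspace $U$, but along a complement that is not $T$-stable, and the commutation fails.

```python
import sympy as sp

# Same made-up example: T = B*diag(1,2,2)*B^{-1} with U = span{b1,b2} T-stable.
b1, b2, b3 = sp.Matrix([1, 0, 0]), sp.Matrix([1, 1, 0]), sp.Matrix([1, 1, 1])
B = sp.Matrix.hstack(b1, b2, b3)
T = B * sp.diag(1, 2, 2) * B.inv()

c = sp.Matrix([0, 1, 1])               # spans a complement of U
print((T * c).T)                       # (1, 2, 2): not a multiple of c,
                                       # so span{c} is NOT T-stable
C = sp.Matrix.hstack(b1, b2, c)
Q = C * sp.diag(1, 1, 0) * C.inv()     # projection onto U along span{c}

print(Q * T == T * Q)                  # False: Q does not commute with T

v = b2 + b3                            # eigenvector of T for the eigenvalue 2
print(T * (Q * v) == 2 * (Q * v))      # False: Q*v is no longer an eigenvector
```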


Here is an alternative approach if $\mathbb{F} = \mathbb{C}$.

First note that, since $T$ is diagonalizable, $V$ has a basis of eigenvectors $v_k$ of $T$, say with $Tv_k = \lambda_k v_k$.

Then $v_k = u_k+w_k$, where $u_k \in U, w_k \in W$. We have $Tv_k = Tu_k + T w_k$ and also $Tv_k = \lambda_k v_k = \lambda_k u_k + \lambda_k w_k$. Since $U,W$ are $T$-invariant and $V = U \oplus W$, comparing the $U$- and $W$-components of these two expressions gives $Tu_k = \lambda_k u_k$ and $Tw_k = \lambda_k w_k$ (note the $u_k$ or the $w_k$ may be zero).

Furthermore, since the $v_k$ span $V$, the $u_k$ span $U$ and the $w_k$ span $W$: writing a vector of $U$ as a linear combination of the $v_k$ and taking $U$-components expresses it as the same combination of the $u_k$. Choose a subset $u_{n_i}$ that forms a basis for $U$, and similarly a subset $w_{m_j}$ that forms a basis for $W$.

Then $T_{|U}$ is diagonal in the basis $u_{n_i}$, and $T_{|W}$ is diagonal in the basis $w_{m_j}$.
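Here is a small SymPy sketch of this argument (the matrix and the splitting $V = U \oplus W$ are made up for illustration):

```python
import sympy as sp

# Made-up example: eigenvalues 1, 2, 2; U = span{b1,b2} and W = span{b3}
# are T-invariant with V = U ⊕ W.
b1, b2, b3 = sp.Matrix([1, 0, 0]), sp.Matrix([1, 1, 0]), sp.Matrix([1, 1, 1])
B = sp.Matrix.hstack(b1, b2, b3)
T = B * sp.diag(1, 2, 2) * B.inv()
P_U = B * sp.diag(1, 1, 0) * B.inv()   # U-component map of V = U ⊕ W
P_W = sp.eye(3) - P_U                  # W-component map

vs = [b1, b2 + b3, b2 - b3]            # an eigenbasis v_k of V
lams = [1, 2, 2]                       # with eigenvalues λ_k
us = [P_U * v for v in vs]             # the components u_k ∈ U
ws = [P_W * v for v in vs]             # and w_k ∈ W

print(all(T * u == l * u for u, l in zip(us, lams)))   # True: T u_k = λ_k u_k
print(all(T * w == l * w for w, l in zip(ws, lams)))   # True: T w_k = λ_k w_k
print(sp.Matrix.hstack(*us).rank())    # 2 = dim U: the u_k span U, so a
                                       # maximal independent subset of them is
                                       # an eigenbasis of U for T|_U
```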


$\;T\;$ is diagonalizable (over its field of definition, throughout what follows) iff its minimal polynomial is a product of distinct linear factors.

If we denote by $\;m_U(x)\;,\;\;m_V(x)\;$ the minimal polynomials of $\;T\;$ on $\;U,V\;$ resp., then since $\;m_V(T)=0\;$ on all of $\;V\;$, in particular $\;m_V(T_{|U})=0\;$, we get that $\;m_U(x)\mid m_V(x)\;$, so $\;m_U(x)\;$ is also a product of distinct linear factors and we're done.
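For concreteness, a SymPy verification on a made-up example (the operator and the invariant subspace are illustrative assumptions): $m_V$ annihilates the restriction, hence the minimal polynomial of the restriction divides $m_V$.

```python
import sympy as sp

x = sp.symbols('x')

# Made-up example: T = B*diag(1,2,3)*B^{-1}, U = span{b1,b2} is T-invariant.
b1, b2, b3 = sp.Matrix([1, 0, 0]), sp.Matrix([1, 1, 0]), sp.Matrix([1, 1, 1])
B = sp.Matrix.hstack(b1, b2, b3)
T = B * sp.diag(1, 2, 3) * B.inv()

# Matrix of T|_U in the basis (b1, b2):
U_basis = sp.Matrix.hstack(b1, b2)
M = U_basis.pinv() * (T * U_basis)

m_V = (x - 1) * (x - 2) * (x - 3)  # minimal polynomial of T (distinct roots)
m_U = (x - 1) * (x - 2)            # minimal polynomial of T|_U

# m_V(T|_U) = 0, because m_V(T) = 0 already holds on all of V:
mV_at_M = (M - sp.eye(2)) * (M - 2 * sp.eye(2)) * (M - 3 * sp.eye(2))
print(mV_at_M == sp.zeros(2, 2))   # True

# Hence m_U divides m_V, and inherits its distinct linear factors:
print(sp.rem(m_V, m_U, x) == 0)    # True: m_U(x) | m_V(x)
```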