If $ A^3=A$ prove that $Ker\left(A-I\right)+Im\left(A-I\right)=V$

I am not sure how to approach this problem, but first things first: if we have $A^3=A$, does that mean $A^2=I$? What does that tell me (what does it imply) about $A$?

The only thing I can tell at this point is that $A$ would then be its own inverse, and an invertible matrix has full rank, which implies that the dimension of the image space is $n$ and the dimension of the null space (the kernel) is zero. Thus we would have $Im(A)+Ker(A)=V$, but subtracting $I$ confuses me here.

P.S. $A$ is the matrix representation of a linear operator and $V$ is the vector space for which the operator is defined.
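
A quick numerical check also suggests the statement holds even when $A$ is singular (so that $A^2\neq I$); for instance, with SymPy and an arbitrarily chosen $A$ similar to $\operatorname{diag}(1,0,-1)$:

```python
# Quick sanity check (the matrices are only an illustrative choice): a singular A
# with A^3 = A, so A^2 != I, for which ker(A - I) + im(A - I) still fills the space.
import sympy as sp

P = sp.Matrix([[1, 1, 0], [0, 1, 1], [1, 0, 1]])   # any invertible change of basis
A = P * sp.diag(1, 0, -1) * P.inv()                # eigenvalues 1, 0, -1  =>  A^3 = A

assert A**3 == A and A**2 != sp.eye(3)             # A is its own cube, but not its own inverse

M = A - sp.eye(3)
ker_basis = M.nullspace()                          # basis of ker(A - I)
im_basis  = M.columnspace()                        # basis of im(A - I)

# The sum ker + im is spanned by the concatenated bases; its dimension is the rank below.
span = sp.Matrix.hstack(*ker_basis, *im_basis)
print(span.rank())                                 # prints 3 = dim V, so the sum is all of V
```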


Let $L$ be the linear operator corresponding to the matrix $(A-I)$. Now we'll prove that $\ker(A-I) \cap \text{Im}(A-I) = \{\vec{0}\}$. Assume that $\vec{v} \in \ker(A-I) \cap \text{Im}(A-I)$. Then we have that $\exists \vec{x} \in V$, s.t. $L(\vec{x}) = (A-I)\vec{x} = \vec{v}$ and $L(\vec{v}) = (A-I)\vec{v} = \vec{0}$. Now:

$$\vec{0} = A\vec{v} - I\vec{v} = A^2\vec{x} - A\vec{x} - A\vec{x} + I\vec{x} = (A-I)^2\vec{x}$$

Multiplying by $A+2I$ on the left and using $A^3=A$:

$$\vec{0} = (A+2I)(A-I)^2\vec{x} = \left(A^3-3A+2I\right)\vec{x} = \underbrace{\left(A^3-A\right)\vec{x}}_{=\,\vec{0}} - 2(A-I)\vec{x} = -2\vec{v} \implies \vec{v} = \vec{0}$$
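
The crucial factorization here is just the scalar polynomial identity $(x+2)(x-1)^2=(x^3-x)-2(x-1)$ applied to $A$ (everything involved is a polynomial in $A$); a one-line SymPy sanity check:

```python
# Sketch: the scalar identity behind the step above; since all terms are polynomials
# in A, the same identity holds with A substituted for x.
import sympy as sp

x = sp.symbols('x')
assert sp.expand((x + 2)*(x - 1)**2 - ((x**3 - x) - 2*(x - 1))) == 0
```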

Now that the claim is proven, combining the dimension formula for a sum of subspaces with the Rank-Nullity Theorem we have that:

$$\dim(\ker(A-I) + \text{Im}(A-I)) = \dim(\ker(A-I)) + \dim(\text{Im}(A-I)) - \dim(\ker(A-I) \cap \text{Im}(A-I)) = \dim(V)$$

But obviously $\ker(A-I) + \text{Im}(A-I) \subseteq V$, so, since they have the same dimension, we conclude that $\ker(A-I) + \text{Im}(A-I) = V$.
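
For a concrete illustration of the dimension count (with an arbitrarily chosen $4\times 4$ example satisfying $A^3=A$), a short SymPy sketch:

```python
# Illustration only: the dimension count for one sample A with A^3 = A.
import sympy as sp

P = sp.Matrix([[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1], [0, 0, 0, 1]])  # unit upper triangular, invertible
A = P * sp.diag(1, 1, 0, -1) * P.inv()
assert A**3 == A

M = A - sp.eye(4)
dim_ker = len(M.nullspace())                                  # nullity(A - I)
dim_im  = M.rank()                                            # rank(A - I)
dim_sum = sp.Matrix.hstack(*M.nullspace(), *M.columnspace()).rank()

print(dim_ker, dim_im, dim_sum)   # 2 2 4: the dimensions add up to dim V and the sum is all of V
```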


This is true even if $V$ is infinite-dimensional (where the dimension argument does not work), provided that the base field is of characteristic other than $2$. We claim that $$V=\text{im}(A-I)\oplus\ker(A-I)\,.$$

First, we have $\text{im}(A-I)\cap \ker(A-I)=0$. To show this, suppose $u\in \text{im}(A-I)\cap \ker(A-I)$. Then, $u=(A-I)x$ for some $x\in V$, and so $(A-I)^2x=(A-I)u=0$. Because $A^3-A=0$, we have $$ \begin{align} 2(A-I)x&=\Big(\left(A^3-A\right)-(A+2I)(A-I)^2\Big)x \\ &=\left(A^3-A\right)x-(A+2I)\big((A-I)^2x\big) \\ &=0x-(A+2I)0=0\,. \end{align}$$ Thus, $u=(A-I)x=0$, recalling that the base field is of characteristic unequal to $2$.
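
As a sanity check of this step for one illustrative $A$ with $A^3=A$ (over $\mathbb{Q}$, so characteristic $2$ is not an issue): every vector annihilated by $(A-I)^2$ is already annihilated by $A-I$.

```python
# Sketch: for an illustrative A with A^3 = A over Q, ker((A-I)^2) = ker(A-I),
# which is exactly what makes im(A-I) ∩ ker(A-I) trivial in the argument above.
import sympy as sp

P = sp.Matrix([[2, 1, 1], [1, 2, 1], [1, 1, 2]])
A = P * sp.diag(1, 0, -1) * P.inv()
assert A**3 == A

M = A - sp.eye(3)
for u in (M**2).nullspace():        # any x with (A - I)^2 x = 0 ...
    assert M * u == sp.zeros(3, 1)  # ... already satisfies (A - I) x = 0
```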

Next, we claim that $V=\text{im}(A-I)+\ker(A-I)$. We have $$I=-\frac{1}{2}(A-I)(A+2I)+\frac{1}{2}\left(A^2+A\right)\,,$$ whence $$v=-\frac{1}{2}(A-I)(A+2I)v+\frac{1}{2}\left(A^2+A\right)v$$ for all $v\in V$. Clearly, $$-\frac{1}{2}(A-I)(A+2I)v=(A-I)\left(-\frac{1}{2}(A+2I)v\right)\in\text{im}(A-I)\,.$$ In addition, $\frac{1}{2}\left(A^2+A\right)v \in\ker(A-I)$ because $$\begin{align} (A-I)\left(\frac{1}{2}\left(A^2+A\right)v\right)=\frac{1}{2}\left(A^3-A\right)v=\frac{1}{2}(0v)=0\,. \end{align}$$
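
The displayed operator identity comes from the scalar identity $1=-\tfrac12(x-1)(x+2)+\tfrac12\left(x^2+x\right)$; here is a brief SymPy check of that identity and of the resulting splitting for an illustrative $A$ and $v$:

```python
# Sketch: the resolution of the identity used above, checked symbolically, then
# applied to one illustrative A (with A^3 = A) and an arbitrary vector v.
import sympy as sp

x = sp.symbols('x')
assert sp.expand(-sp.Rational(1, 2)*(x - 1)*(x + 2) + sp.Rational(1, 2)*(x**2 + x) - 1) == 0

P = sp.Matrix([[1, 1, 0], [0, 1, 1], [1, 0, 1]])
A = P * sp.diag(1, 0, -1) * P.inv()
I3 = sp.eye(3)
v = sp.Matrix([3, -1, 2])                                    # any vector

im_part  = (A - I3) * (-sp.Rational(1, 2) * (A + 2*I3) * v)  # lands in im(A - I)
ker_part = sp.Rational(1, 2) * (A**2 + A) * v                # killed by A - I
assert im_part + ker_part == v
assert (A - I3) * ker_part == sp.zeros(3, 1)
```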

I haven't thought about what happens if the base field is of characteristic $2$. I am certain that there are counterexamples, even in the finite-dimensional case.


In general, let $K$ be a field and $V$ a (not necessarily finite-dimensional) vector space over $K$, and suppose a $K$-linear operator $A:V\to V$ satisfies $p(A)=0$ for some nonzero polynomial $p(X)\in K[X]$. If $\alpha\in K$ is a simple root of $p(X)$, then $V$ is the internal direct sum $\text{im}(A-\alpha I)\oplus \ker(A-\alpha I)$. More generally, if $\alpha\in K$ is a root of $p(X)$ of multiplicity $m$, then $$V=\text{im}\big((A-\alpha I)^m\big)\oplus \ker\big((A-\alpha I)^m\big)\,.$$ In other words, if a linear operator $A$ satisfies a polynomial equation $p(A)=0$, then it can be "Jordanized" over the algebraic closure of the base field, in the sense that $V$ splits into generalized eigenspaces, namely $$V=\bigoplus_{\alpha}\,\ker\big((A-\alpha I)^{m_\alpha}\big)\,,$$ where $\alpha$ runs over all the roots of $p(X)$ and $m_\alpha$ is the multiplicity of the root $\alpha$. If all the roots of $p(X)$ in the algebraic closure of the base field are simple, then $A$ is diagonalizable over the algebraic closure of the base field, i.e., $$V=\bigoplus_{\alpha}\,\ker\left(A-\alpha I\right)\,,$$ where $\alpha$ runs over all the roots of $p(X)$.
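
A small SymPy illustration of the simple-root versus multiple-root dichotomy, for an arbitrarily chosen $A$ annihilated by $p(x)=(x-2)^2(x-3)$:

```python
# Sketch of the general statement for an illustrative operator: A satisfies
# p(A) = 0 with p(x) = (x - 2)^2 (x - 3), so for the double root 2 we expect
# V = im((A - 2I)^2) (+) ker((A - 2I)^2), and for the simple root 3
# V = im(A - 3I) (+) ker(A - 3I).
import sympy as sp

A = sp.Matrix([[2, 1, 0], [0, 2, 0], [0, 0, 3]])             # one 2x2 Jordan block, one 1x1
assert (A - 2*sp.eye(3))**2 * (A - 3*sp.eye(3)) == sp.zeros(3, 3)

def splits(M):
    """Do ker M and im M together give a direct-sum decomposition of V = Q^3?"""
    ker, im = M.nullspace(), M.columnspace()
    return len(ker) + len(im) == 3 and sp.Matrix.hstack(*ker, *im).rank() == 3

print(splits((A - 2*sp.eye(3))**2))   # True: the double root needs the square
print(splits(A - 3*sp.eye(3)))        # True: the simple root works as-is
print(splits(A - 2*sp.eye(3)))        # False: with a double root, the first power fails
```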


Suppose that $A^3=A$. Then one can show that $A$ has an eigenbasis.

One way to see this is to use the Jordan decomposition and observe the behavior of blocks of size larger than one under powers. This is overkill for this problem, so let's consider an alternative approach.

Observe that $A^3-A=A(A-I)(A+I)=0$. Therefore $A$ satisfies the polynomial $p(x)=x(x-1)(x+1)$, so the eigenvalues of $A$ can only be $0$, $1$ or $-1$. Moreover, each of these roots appears with multiplicity $1$. Since $p$ factors into distinct linear factors, we can find an eigenbasis for $V$. (My favorite ways to prove this use generalized eigenvectors or the Jordan form; if anyone has a more elementary proof, I'd love to hear it in the comments.)
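
A quick SymPy check of this for one illustrative $A$ with $A^3=A$ (the matrix is only an example):

```python
# Sketch: an illustrative A with A^3 = A admits an eigenbasis, i.e. is diagonalizable,
# and its eigenvalues indeed come from {0, 1, -1}.
import sympy as sp

P = sp.Matrix([[1, 2, 0], [0, 1, 2], [2, 0, 1]])
A = P * sp.diag(1, 0, -1) * P.inv()
assert A**3 == A

print(A.is_diagonalizable())   # True: an eigenbasis exists
print(A.eigenvals())           # keys are a subset of {0, 1, -1}
```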

Let $\{u_1,\cdots,u_a\}$ be a basis for the eigenspace of $\lambda=1$. Similarly, let $\{v_1,\cdots,v_b\}$ be a basis for the eigenspace of $\lambda=0$. Finally, let $\{w_1,\cdots,w_c\}$ be a basis for the eigenspace of $\lambda=-1$. It is enough to show that each of these basis vectors is in the given sum, since then, by taking linear combinations, all vectors of $V$ are in the sum.

Since $u_i\in\operatorname{Ker}(A-I)$, $u_i$ is in the sum on the left. Since $(A-I)(-v_j)=-Av_j+Iv_j=v_j$, $v_j$ is in the sum on the left as well. Since $(A-I)\left(-\frac{1}{2}w_k\right)=\frac{1}{2}(-Aw_k+Iw_k)=\frac{1}{2}(w_k+w_k)=w_k$, $w_k$ is also in the sum on the left. This proves equality, since the LHS is trivially a subset of the RHS and we have just shown that the RHS is contained in the LHS.
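
These three identities can be checked directly on eigenvectors of a sample $A$ with $A^3=A$ (the matrix below is only an illustrative choice); a short SymPy sketch:

```python
# Sketch: the explicit preimages used above, checked on eigenvectors of an
# illustrative A with A^3 = A (u: eigenvalue 1, v: eigenvalue 0, w: eigenvalue -1).
import sympy as sp

P = sp.Matrix([[1, 1, 0], [0, 1, 1], [1, 0, 1]])
A = P * sp.diag(1, 0, -1) * P.inv()
I3 = sp.eye(3)
u, v, w = P[:, 0], P[:, 1], P[:, 2]          # columns of P are eigenvectors for 1, 0, -1

assert (A - I3) * u == sp.zeros(3, 1)        # u is in ker(A - I)
assert (A - I3) * (-v) == v                  # v = (A - I)(-v)   is in im(A - I)
assert (A - I3) * (-w / 2) == w              # w = (A - I)(-w/2) is in im(A - I)
```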