A non-diagonalizable $4\times4$ matrix $A$ such that $A^3=A^2\ne 0$ and its rank is $2$

The minimal polynomial $m(\lambda)$ of $A$ must divide $\lambda^{2}(\lambda-1)$ because $$ 0=A^{3}-A^{2}=A^{2}(A-I). $$ The factor $\lambda-1$ must be present in $m(\lambda)$ because $(A-I)A^{2}=0$ and $A^{2}\ne 0$, which means $A-I$ has a non-trivial null space. A factor of $\lambda$ must be present in $m(\lambda)$ because otherwise $m(\lambda)=\lambda-1$, which forces $A=I$, a diagonal matrix. If $m(\lambda)=\lambda(\lambda-1)$, then $m$ splits into distinct linear factors and $A$ is diagonalizable. Therefore, $m(\lambda)=\lambda^{2}(\lambda-1)$.
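These divisibility facts are easy to sanity-check numerically. Here is a quick NumPy sketch using the candidate matrix from the similarity claim later in the thread (any matrix with the stated properties would do), verifying that $\lambda^{2}(\lambda-1)$ annihilates $A$ while $\lambda(\lambda-1)$ and $\lambda^{2}$ do not:

```python
import numpy as np

# Candidate matrix with A^3 = A^2 != 0 and rank 2
# (the canonical form appearing in claim (c) below)
A = np.array([[0., 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 1]])
I = np.eye(4)

assert np.allclose(A @ A @ (A - I), 0)   # lambda^2 (lambda - 1) annihilates A
assert not np.allclose(A @ (A - I), 0)   # lambda (lambda - 1) does not
assert not np.allclose(A @ A, 0)         # lambda^2 does not
# hence m(lambda) = lambda^2 (lambda - 1)
```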

The presence of $\lambda^{2}$ in the minimal polynomial means that $A(A-I)\ne 0$, so there is a vector $w$ such that $A^{2}(A-I)w=0$ but $A(A-I)w \ne 0$. Therefore, $v=(A-I)w$ satisfies $Av \ne 0$ and $A^{2}v=0$.
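The construction of $v$ can be checked concretely. Below is a sketch, again using the matrix from claim (c) later in the thread; the particular choice of $w$ is just one illustrative vector for which $A(A-I)w \ne 0$:

```python
import numpy as np

A = np.array([[0., 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 1]])
I = np.eye(4)

w = np.array([0., 1, 0, 0])        # any w with A(A-I)w != 0 works
assert not np.allclose(A @ (A - I) @ w, 0)

v = (A - I) @ w
assert not np.allclose(A @ v, 0)   # A v != 0
assert np.allclose(A @ A @ v, 0)   # A^2 v = 0
```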

The characteristic polynomial $p$ must be divisible by $m$, and $p$ cannot have factors other than $\lambda$ and $\lambda-1$. Moreover, $p$ cannot have a second factor of $\lambda-1$ unless the null space of $A-I$ is two-dimensional (since $\lambda-1$ appears only to the first power in $m$, the algebraic and geometric multiplicities of the eigenvalue $1$ coincide). That would make the rank of $A$ at least $3$: the range of $A$ contains the null space of $A-I$ (every eigenvector $x$ with eigenvalue $1$ satisfies $x=Ax$) as well as the vector $Av$ described above, which lies in $N(A)$ and is therefore independent of that null space. So $p$ has to be $\lambda^{3}(\lambda-1)=\lambda^{4}-\lambda^{3}$.
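This conclusion can also be confirmed numerically for the candidate matrix from claim (c) below, using NumPy's `np.poly` to compute characteristic-polynomial coefficients (highest degree first):

```python
import numpy as np

A = np.array([[0., 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 1]])

# lambda^4 - lambda^3  ->  coefficients [1, -1, 0, 0, 0]
assert np.allclose(np.poly(A), [1, -1, 0, 0, 0])
assert np.linalg.matrix_rank(A) == 2
```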


I believe both (2) and (3) are true. I will use $N(A)$ and $C(A)$ to stand for the null space of $A$ and the column space of $A$, respectively.

We can establish the following: $$ (a)\ \dim(N(A)) = \dim(C(A)) = 2\\ (b)\ \dim(N(A^2)) = 3,\ \dim(C(A^2)) = 1\\ (c)\ A \text{ is similar to } \pmatrix{0&1&0&0\\0&0&0&0\\0&0&0&0\\0&0&0&1} $$

Proof of (a): this is immediate from the fact that $\operatorname{rank}(A) = 2$ together with the rank–nullity theorem, $\dim(C(A)) + \dim(N(A)) = \text{number of columns of } A = 4.$

Proof of (b): since $C(A^2) \subset C(A)$ and $\dim(C(A)) = 2$, we have $\dim(C(A^2)) \in \{0, 1, 2\}$. $A^2 \neq 0$ rules out the possibility $0$. Suppose $\dim(C(A^2)) = 2$; we will derive a contradiction to the fact that $A$ is not diagonalizable. The identity $A^3 = A^2$ implies that any nonzero $A^2x$, which is in $C(A^2)$, is an eigenvector of $A$ corresponding to the eigenvalue $1$. Now we have two linearly independent eigenvectors corresponding to the eigenvalue $1$, and we already have two eigenvectors corresponding to the eigenvalue $0$ in $N(A)$. That makes $A$ diagonalizable, a contradiction. So $\dim(C(A^2)) = 1$, and rank–nullity gives $\dim(N(A^2)) = 4 - 1 = 3$. We are done with proving (b).
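The dimension counts in (a) and (b) can be sketched numerically with NumPy's `matrix_rank`, again using the canonical form from claim (c):

```python
import numpy as np

A = np.array([[0., 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 1]])
A2 = A @ A

assert np.linalg.matrix_rank(A) == 2    # dim C(A) = 2, so dim N(A) = 4 - 2 = 2
assert np.linalg.matrix_rank(A2) == 1   # dim C(A^2) = 1, so dim N(A^2) = 4 - 1 = 3
```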

Proof of (c): from (b) we have $\dim(N(A^2)) = 3 > 2 = \dim(N(A))$, so there is $u_2 \neq 0$ in $N(A^2) \setminus N(A)$. If we set $u_1 = Au_2$, then $0 \neq u_1 \in N(A)$, since $Au_1 = A^2u_2 = 0$. Now find $u_3$ so that $\{u_1, u_3\}$ is a basis for $N(A)$, and choose an eigenvector $u_4$ corresponding to the eigenvalue $1$ of $A$, so that $\{u_1, u_2, u_3, u_4\}$ is a basis for $\mathbb{R}^4$.

Let me collect all the information on the $u$'s: $$Au_1 = 0,\quad Au_2 = u_1,\quad Au_3 = 0,\quad Au_4 = u_4.$$ With respect to the basis $\{u_1, u_2, u_3, u_4\}$, the matrix of $A$ has the representation given in (c).
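The whole construction can be sketched in NumPy: conjugating the canonical form $J$ from (c) by an invertible $P$ (the particular $P$ below is just an illustrative choice) produces a matrix $A$ whose $u$'s are exactly the columns of $P$, and all the relations above hold:

```python
import numpy as np

# The canonical form from (c)
J = np.array([[0., 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 1]])

# An arbitrary invertible P (unit upper triangular, so det P = 1);
# its columns play the roles of u1, u2, u3, u4 for A = P J P^{-1}
P = np.array([[1., 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]])
A = P @ J @ np.linalg.inv(P)
u1, u2, u3, u4 = P.T               # unpack the columns of P

assert np.allclose(A @ u1, 0)      # u1 in N(A)
assert np.allclose(A @ u2, u1)     # A u2 = u1
assert np.allclose(A @ u3, 0)      # u3 in N(A)
assert np.allclose(A @ u4, u4)     # u4 is an eigenvector for eigenvalue 1
assert np.allclose(A @ A @ A, A @ A)   # A^3 = A^2
assert np.linalg.matrix_rank(A) == 2   # rank(A) = 2
```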