Solution 1:

There are a couple of ways of interpreting this question. The two most obvious ones (well, to me):

  • Given $n\gt 0$, is there an upper triangular square matrix $A$ of some size (which may depend on $n$) such that $A^n\neq 0$ but $A^{n+1}=0$?

    The answer to this is "yes", and in fact you can take $A$ to be $(n+1)\times(n+1)$. One possibility is $$A = \left(\begin{array}{cccccc} 0 & 1 & 0 &\cdots & 0 & 0\\ 0 & 0 & 1 & \cdots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & 0 & 1\\ 0 & 0 & 0 & \cdots & 0 & 0 \end{array}\right).$$ If $\mathbf{e}_i$ is the vector with $1$ in the $i$th coordinate and $0$s elsewhere, then $A\mathbf{e}_1 = \mathbf{0}$ and $A\mathbf{e}_{k+1} = \mathbf{e}_k$ for $k=1,\ldots,n$. In particular, $A^n\mathbf{e}_{n+1} = \mathbf{e}_1\neq \mathbf{0}$, so $A^n\neq 0$; but $A^{n+1}\mathbf{e}_i = \mathbf{0}$ for all $i$, so $A^{n+1}=0$. In fact, any strictly upper triangular $(n+1)\times(n+1)$ matrix whose entries just above the diagonal are all nonzero will do: every strictly upper triangular matrix of that size satisfies $A^{n+1}=0$, and the nonzero superdiagonal guarantees $A^n\neq 0$. (A quick numerical check of this example is included at the end of this answer.)

  • Given $n\gt 0$, is there an $n\times n$ matrix $A$ such that $A^n\neq 0$ but $A^{n+1} = 0$?

    The answer here is "no". You are right that all eigenvalues would have to be equal to $0$: a slightly more formal proof is that if $\lambda$ is an eigenvalue of $A$, then $\lambda^k$ is an eigenvalue of $A^k$ (check this! it's not hard); since the only eigenvalue of the zero matrix is $0$, if $A^{k}=0$ for some $k$, then every eigenvalue of $A$ is equal to $0$. Over the complex numbers (or over the algebraic closure of your ground field), that means that the characteristic polynomial of $A$ must be $t^n$; and since the characteristic polynomial does not change when you pass to a larger field, it is $t^n$ over whatever field you were working with to begin with as well. By the Cayley-Hamilton Theorem, this means $A^n=0$. So we cannot have $A^n\neq 0$ and $A^{n+1}=0$: if $A^k=0$ for some $k$, then $A^n=0$. (A small numerical sanity check of this is given at the end of this answer.)

    (By the way: by the definition of the minimal polynomial $m(t)$, if $p(t)$ is any polynomial such that $p(A)=0$, then $m(t)$ must divide $p(t)$; for if we divide $p(t)$ by $m(t)$ with remainder, writing $p(t) = q(t)m(t)+r(t)$ with $r=0$ or $\deg(r)\lt\deg(m)$, then evaluating at $A$, which we may do since scalar matrices commute with $A$, gives $0=p(A) = q(A)m(A)+r(A) = r(A)$; since $m(t)$ is the nonzero monic polynomial of smallest degree that evaluates to $0$ on $A$, we must have $r(t)=0$, so $m(t)|p(t)$. The Cayley-Hamilton Theorem then implies directly that the minimal polynomial must divide the characteristic polynomial. Since the characteristic polynomial is $t^n$, the minimal polynomial must be $t^k$ for some $k$, $1\leq k\leq n$; a small sympy illustration of this is included at the end of this answer.

    In fact one can prove that every irreducible factor of the characteristic polynomial will divide the minimal polynomial, and the minimal polynomial divides the characteristic polynomial.)
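
A quick numerical check of the example in the first bullet. This is not part of the argument above, just an illustration using numpy; the size $n=4$ is an arbitrary choice.

```python
import numpy as np

n = 4  # arbitrary; the check works the same way for any n > 0

# (n+1) x (n+1) matrix with 1s on the superdiagonal and 0s elsewhere,
# i.e. the matrix displayed in the first bullet
A = np.diag(np.ones(n), k=1)

A_n  = np.linalg.matrix_power(A, n)
A_n1 = np.linalg.matrix_power(A, n + 1)

print(A_n.any())        # True:  A^n != 0 (its top-right entry is 1)
print(not A_n1.any())   # True:  A^(n+1) == 0
```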
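And a sanity check of the second bullet, again an experiment rather than a proof: it builds random nilpotent $5\times 5$ matrices by conjugating strictly upper triangular ones (every nilpotent matrix is similar to such a matrix), and confirms that $A^5=0$ each time, as the Cayley-Hamilton argument predicts.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

for _ in range(1000):
    # random nilpotent n x n matrix: a strictly upper triangular matrix,
    # conjugated by an (almost surely invertible) random matrix P;
    # conjugation changes neither nilpotency nor the characteristic polynomial
    N = np.triu(rng.integers(-3, 4, size=(n, n)), k=1).astype(float)
    P = rng.normal(size=(n, n))
    A = P @ N @ np.linalg.inv(P)

    # the answer above says A^n must vanish for every nilpotent n x n matrix
    assert np.allclose(np.linalg.matrix_power(A, n), 0, atol=1e-6)

print("A^n = 0 held in every trial")
```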
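Finally, to illustrate the remark about the minimal polynomial: a short sympy computation on a concrete $4\times 4$ nilpotent matrix (chosen purely for illustration, built from two Jordan blocks). Its characteristic polynomial is $t^4$, the smallest $k$ with $A^k=0$ is $k=2$, so the minimal polynomial is $t^2$, and it divides $t^4$.

```python
from sympy import Matrix, symbols, div, zeros

t = symbols('t')

# 4 x 4 nilpotent matrix: two 2 x 2 Jordan blocks with eigenvalue 0
A = Matrix([[0, 1, 0, 0],
            [0, 0, 0, 0],
            [0, 0, 0, 1],
            [0, 0, 0, 0]])

char = A.charpoly(t).as_expr()
print(char)                                   # t**4

# index of nilpotency: smallest k with A^k = 0; here k = 2, so m(t) = t^2
k = next(k for k in range(1, 5) if A**k == zeros(4, 4))
print(k)                                      # 2

# the minimal polynomial t^k divides the characteristic polynomial t^n
q, r = div(char, t**k, t)
print(q, r)                                   # t**2 0
```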