The intuition behind generalized eigenvectors
Solution 1:
Some intuition is afforded by the Jordan Normal Form, and even more is obtained by understanding a proof.
The Jordan form of an endomorphism decomposes it as the direct sum of "Jordan blocks"; if you have an intuition for what a block is doing, you can understand the entire direct sum.
Geometrically, a Jordan block is the sum of two operations. One rescales everything by a constant, the eigenvalue. The other essentially collapses the module onto a codimension one submodule.
The collapsing is the nilpotent part. A geometric intuition is obtained by considering the simplest nontrivial example, the linear map defined (in coordinates) by $(x,y) \to (y,0)$: two dimensions are collapsed onto one essentially by forgetting the first dimension.
In a Jordan block, the nilpotent operation $N$ is generally $(x,y, \ldots, z) \to (y, \ldots, z, 0)$. This establishes a hierarchy in the module: the last dimension generates the kernel of $N$, the last two dimensions are killed by $N^2$, and so on. Iterating $N$ enough times eventually produces the zero map; that's what it means to be nilpotent. Thus the kernel of $(T-\lambda I)^k$ picks up all the Jordan blocks associated with eigenvalue $\lambda$ and, speaking somewhat loosely, each generalized eigenvector gets rescaled by $\lambda$, up to some "error" term generated by certain of the other generalized eigenvectors.
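This increasing chain of kernels can be checked numerically; a small sketch with NumPy, using the $3\times 3$ shift matrix as an illustrative example (the size is an arbitrary choice, not from the text):

```python
import numpy as np

# Nilpotent shift on R^3: (x, y, z) -> (y, z, 0).
N = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])

# The nullspaces of N, N^2, N^3 form an increasing chain:
# dim ker N^k = k here, and N^3 is the zero map.
for k in range(1, 4):
    Nk = np.linalg.matrix_power(N, k)
    dim_ker = 3 - np.linalg.matrix_rank(Nk)
    print(k, dim_ker)   # prints 1 1, then 2 2, then 3 3
```

Each application of the shift kills one more dimension, which is exactly the hierarchy described above.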
In two dimensions, then, a Jordan block effects a transformation that can be written in suitable coordinates (given by two of the generalized eigenvectors) as $(x,y) \to (\lambda x + y, \lambda y)$: a homothety and a shear together. The geometric picture isn't really any different for higher-dimensional blocks.
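The two-dimensional formula is easy to verify directly; here is a minimal sketch, with the eigenvalue and the input vector chosen arbitrarily for illustration:

```python
import numpy as np

# A 2x2 Jordan block with eigenvalue lam acts as
# (x, y) -> (lam*x + y, lam*y): rescaling plus a shear.
lam = 2.0
J = np.array([[lam, 1.0],
              [0.0, lam]])

x, y = 3.0, 5.0
print(J @ np.array([x, y]))   # (lam*x + y, lam*y) = (11, 10)
```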
Solution 2:
Consider first eigenvectors associated to $0$; these are elements of the nullspace. Now, it is possible for a vector to not lie in the nullspace, but for its image to lie in the nullspace (for example, consider the linear transformation $T\colon \mathbb{R}^2\to\mathbb{R}^2$ given by $T(x,y) = (y,0)$; then $T(0,1) = (1,0)$, and $T(T(0,1))=(0,0)$). Or for the image of the image to lie in the nullspace... etc. If you want to think about all the vectors that will eventually map to zero if you keep applying $T$ (which seems like a sensible thing to think about in many circumstances), then you are looking for the vectors that lie in the union of the nullspaces of $T^n$, $n=1,2,\ldots$. Notice that $\mathbf{N}(T^n)\subseteq\mathbf{N}(T^{n+1})$, so this is an increasing union. Once a vector maps to $0$ under a power of $T$, it stays there. These vectors are precisely the generalized eigenvectors of $T$ associated to $\lambda=0$.
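The example in the parenthesis above can be replayed numerically; a quick sketch:

```python
import numpy as np

# T(x, y) = (y, 0) from the text, written as a matrix.
T = np.array([[0., 1.],
              [0., 0.]])

v = np.array([0., 1.])
print(T @ v)        # (1, 0): v is not in the nullspace of T ...
print(T @ (T @ v))  # (0, 0): ... but it is in the nullspace of T^2
```

So $v=(0,1)$ is a generalized eigenvector for $\lambda=0$ without being an ordinary eigenvector.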
So, what about arbitrary eigenvalues? Instead of thinking of an eigenvector associated to $\lambda$ as a vector on which $T$ acts by stretching/compressing, think of an eigenvector associated to $\lambda$ as an element of the nullspace of $T-\lambda I$. The generalized eigenvectors associated to $\lambda$ are then the vectors that lie in $\mathbf{N}((T-\lambda I)^n)$ for some positive integer $n$.
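To see the difference between $\mathbf{N}(T-\lambda I)$ and $\mathbf{N}((T-\lambda I)^2)$ concretely, here is a sketch using an illustrative matrix (a single Jordan block with eigenvalue $2$, chosen as an assumption for the example):

```python
import numpy as np

# A single 2x2 Jordan block with eigenvalue 2: only one ordinary
# eigenvector direction, but every vector is a generalized eigenvector.
A = np.array([[2., 1.],
              [0., 2.]])
lam = 2.0
B = A - lam * np.eye(2)

dim_eig = 2 - np.linalg.matrix_rank(B)      # dim N(A - lam*I)
dim_gen = 2 - np.linalg.matrix_rank(B @ B)  # dim N((A - lam*I)^2)
print(dim_eig, dim_gen)   # 1 2
```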
Solution 3:
Generalized eigenvectors also have an interpretation in system dynamics:
If only generalized eigenvectors can be found, the dynamic components (= the blocks in the Jordan canonical form) cannot be completely decoupled. A full decoupling is possible only if the matrix is fully diagonalizable.
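A minimal sketch of this coupling, assuming discrete-time dynamics $x_{k+1} = J x_k$ with an illustrative $2\times 2$ Jordan block (the eigenvalue and initial state are arbitrary choices):

```python
import numpy as np

# With a diagonalizable matrix, the modal coordinates evolve
# independently. With a Jordan block they stay coupled: the first
# coordinate is driven by the second at every step.
lam = 0.5
J = np.array([[lam, 1.0],
              [0.0, lam]])

x = np.array([0.0, 1.0])
for k in range(3):
    x = J @ x
    print(x)
```

The second coordinate simply decays like $\lambda^k$, while the first picks up a contribution from the second at every step ($x_k = (k\lambda^{k-1}, \lambda^k)$ here); no change of basis removes that cross term.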
See also this instructive video from Stanford University (from about 1:00 hours on):
http://academicearth.org/courses/introduction-to-linear-dynamical-systems