When solving for an eigenvector, when do you have to check every equation?

Solution 1:

Your question traces back to how one finds an eigenvector of a square matrix ${\bf{A}}$. Suppose we are looking for nonzero vectors ${\bf{x}}$, called eigenvectors of ${\bf{A}}$, such that

$${\bf{Ax}} = \lambda {\bf{x}}\tag{1}$$

We can rewrite the equation in the form

$${\bf{Ax}} = \lambda {\bf{Ix}}\tag{2}$$

where ${\bf{I}}$ is the identity matrix. Now, collecting terms on the left side, we get

$$\left( {{\bf{A}} - \lambda {\bf{I}}} \right){\bf{x}} = {\bf{0}}\tag{3}$$

As you can see, this is a system of linear algebraic equations. What is the requirement for this system to have nonzero solutions? Yes! The determinant of the coefficient matrix must vanish, which means

$$\det ({\bf{A}} - \lambda {\bf{I}}) = 0\tag{4}$$

This is the equation from which you find the eigenvalues. So the eigenvalues are exactly the values of $\lambda$ that make the determinant of ${{\bf{A}} - \lambda {\bf{I}}}$ vanish. When the determinant of this matrix is zero, the equations in $(3)$ are linearly dependent. In your example, the two equations are linearly dependent, and you can easily verify that they are really the same:

$$\left\{ \begin{array}{l} {v_{12}} = - 2{v_{11}}\\ 2{v_{11}} - {v_{12}} = - 2{v_{12}} \end{array} \right.\,\,\,\,\,\,\,\, \to \,\,\,\,\,\,\,\left\{ \begin{array}{l} 2{v_{11}} + {v_{12}} = 0\\ 2{v_{11}} - {v_{12}} + 2{v_{12}} = 0 \end{array} \right.\,\,\,\, \to \,\,\,\,\,\left\{ \begin{array}{l} 2{v_{11}} + {v_{12}} = 0\\ 2{v_{11}} + {v_{12}} = 0 \end{array} \right.\tag{5}$$
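If you want to double-check this numerically, here is a small sketch; the matrix $A=\begin{pmatrix}0&1\\2&-1\end{pmatrix}$ and the eigenvalue $\lambda=-2$ are only my reconstruction from system $(5)$, so substitute your own matrix if it differs:

```python
import numpy as np

# Assumed reconstruction of the matrix behind system (5); replace A if your matrix differs.
A = np.array([[0.0,  1.0],
              [2.0, -1.0]])
lam = -2.0

M = A - lam * np.eye(2)          # the coefficient matrix A - lambda*I from (3)
print(np.linalg.det(M))          # ~0, so lambda = -2 satisfies (4)
print(M)                         # [[2, 1], [2, 1]]: both rows encode the same equation
print(np.linalg.matrix_rank(M))  # 1: only one independent equation, exactly as in (5)
```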

In conclusion, you will never need all of the equations in $(2)$ or $(3)$. If ${\bf{A}}$ is an $n \times n$ matrix, then at most $n-1$ of the equations are independent, so you have to use $n-1$ or fewer of them.

Solution 2:

What you do is set $\lambda = -2$ and then find the eigenvector. Since $\dim(E_\lambda)=1$, there is only one linearly independent vector in the eigenspace, and either of the two equations gives it. Hence, you only need to check one equation.
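As a quick sanity check (again assuming the matrix $A=\begin{pmatrix}0&1\\2&-1\end{pmatrix}$ inferred from the question), the eigenspace of $\lambda=-2$ really is one-dimensional, so a single equation determines the eigenvector up to scale:

```python
import numpy as np

# Same assumed matrix as in Solution 1; lambda = -2 as in this answer.
A = np.array([[0.0,  1.0],
              [2.0, -1.0]])
lam = -2.0
M = A - lam * np.eye(2)

print(2 - np.linalg.matrix_rank(M))   # dim(E_lambda) = n - rank(A - lambda*I) = 1

# One equation, 2*v11 + v12 = 0, fixes the eigenvector up to scale:
v = np.array([1.0, -2.0])             # pick v11 = 1, then v12 = -2
print(A @ v, lam * v)                 # both print [-2.  4.]: A v = lambda v holds
```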

Solution 3:

Let's say you start with an $n \times n$ matrix. Here $n=2$.

The defining equation for eigenvectors/eigenvalues is $Au=\lambda u$, or $(A-\lambda I)u = 0$.

This means that, given an eigenvalue $\lambda$, you are looking for the nonzero vectors $u$ such that $(A-\lambda I)u=0$. This can only happen if $A-\lambda I$ is not invertible, i.e. its rank is less than $n$.
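To make that concrete, here is a small numeric illustration (the $2\times2$ matrix is just the example assumed earlier, not part of this answer): $A-\lambda I$ drops below full rank exactly when $\lambda$ is an eigenvalue.

```python
import numpy as np

# Illustration only: rank(A - lambda*I) < n precisely at the eigenvalues.
A = np.array([[0.0,  1.0],
              [2.0, -1.0]])            # assumed example matrix, n = 2; eigenvalues -2 and 1
for lam in (-2.0, 1.0, 3.0):
    print(lam, np.linalg.matrix_rank(A - lam * np.eye(2)))
# -2 and 1 give rank 1 (singular); 3 is not an eigenvalue, so the rank stays 2
```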

What happens then? The eigenvectors associated with $\lambda$, together with the zero vector, form a subspace of $\Bbb R^n$ (or $\Bbb C^n$ if you work over $\Bbb C$).

This subspace has dimension at least $1$; dimension exactly $1$ occurs when $A-\lambda I$ has rank $n-1$. In that case, you have $n-1$ independent equations to solve and one "degree of freedom", that is, one parameter: hence a $1$-dimensional subspace.

If the rank of $A-\lambda I$ is lower, the subspace of eigenvectors associated with the eigenvalue $\lambda$ is larger, and you have more parameters and fewer equations to solve. If $\mathrm{rank} (A-\lambda I)=k$, then when solving $(A-\lambda I)u=0$ you only have $k$ independent equations, and there remain $n-k$ parameters, i.e. a subspace of dimension $n-k$.
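Here is a small sketch of that count with a made-up $3\times3$ diagonal example (chosen only for illustration): a rank of $k$ leaves an eigenspace of dimension $n-k$.

```python
import numpy as np

# Hypothetical 3x3 example with eigenvalues 2, 2, 3.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

# lambda = 2: rank(A - 2I) = k = 1, so the eigenspace has dimension n - k = 2.
k = np.linalg.matrix_rank(A - 2.0 * np.eye(3))
print(k, 3 - k)        # 1 2

# lambda = 3: rank is n - 1 = 2, so one parameter and a 1-dimensional eigenspace.
k = np.linalg.matrix_rank(A - 3.0 * np.eye(3))
print(k, 3 - k)        # 2 1
```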

If the matrix $A$ is diagonalisable, your eigenspaces together span the whole of your linear space, but this does not always happen. It happens exactly when, for each eigenvalue of multiplicity $j$, the eigenspace has dimension $j$. If $j=1$, there is always an eigenspace of dimension exactly $1$, but problems may arise when $j>1$ for some eigenvalue. Notice that the multiplicity of $\lambda$ is the power $j$ of $(t-\lambda)$ in the factorisation of the characteristic polynomial, which in turn is $\chi_A(t)=\det(A-tI_n)$. All you know for sure is that the dimension of the eigenspace, $n-\mathrm{rank}(A-\lambda I)$, is at least $1$ and at most the multiplicity of $\lambda$. It always equals the multiplicity when that multiplicity is $1$. Incidentally, this also means that if a matrix has only simple eigenvalues (i.e. all have multiplicity $1$), then it's always diagonalisable.
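As a sketch of that comparison (the $3\times3$ matrix below is a hypothetical example, not taken from the question), you can compute the eigenspace dimension as $n-\mathrm{rank}(A-\lambda I)$ and compare it with the multiplicity read off the characteristic polynomial:

```python
import numpy as np

def eigenspace_dimension(A, lam, tol=1e-9):
    """Geometric multiplicity of lam: n - rank(A - lam*I)."""
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol)

# Hypothetical defective example: chi_A(t) = (2 - t)^2 (5 - t), so the eigenvalue 2
# has multiplicity 2, yet its eigenspace is only 1-dimensional -> A is not diagonalisable.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
print(eigenspace_dimension(A, 2.0))   # 1, strictly less than the multiplicity 2
print(eigenspace_dimension(A, 5.0))   # 1, equal to the multiplicity, as for any simple eigenvalue
```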

Hence, if $A$ is diagonalisable, the eigenvectors form a basis of your linear space; with $P$ the corresponding change-of-basis matrix and $D$ the diagonal matrix of eigenvalues, you can write $A=P^{-1}DP$.
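For what it's worth, you can check such a factorisation numerically; note that `numpy` returns the eigenvectors as the columns of $P$, so with that convention the identity reads $A=PDP^{-1}$, which is the same statement as $A=P^{-1}DP$ above with the roles of $P$ and $P^{-1}$ exchanged (the matrix below is again the example assumed earlier):

```python
import numpy as np

# Assumed 2x2 example; numpy's eig puts the eigenvectors in the columns of P.
A = np.array([[0.0,  1.0],
              [2.0, -1.0]])
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)
print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True: A is diagonalisable
```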

For instance, an upper triangular matrix with only $1$s on the diagonal can't be diagonalisable if there is any nonzero element above the diagonal. That's because, once diagonalised, this matrix would necessarily be the identity (the diagonal elements are the eigenvalues). However, only the identity itself can have the identity as its diagonalised form: with $D=I$, the equation $A=P^{-1}DP$ gives $A=P^{-1}P=I$.
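A minimal numeric check of that claim in the simplest $2\times2$ case (an illustration, not the matrix from the question):

```python
import numpy as np

# Ones on the diagonal, one nonzero entry above it.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
# Its only eigenvalue is 1 (multiplicity 2), but the eigenspace is 1-dimensional,
# so there is no basis of eigenvectors and A cannot be diagonalised.
print(2 - np.linalg.matrix_rank(A - np.eye(2)))   # 1, not 2
```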


With $2\times2$ matrices, it's simpler: either an eigenspace has dimension $1$ (and there is thus only one equation to solve, as you noticed), or it has dimension $2$ and the initial matrix $A$ was already diagonal (for the same reason as above).

That does not mean a $2\times2$ matrix is always diagonalisable: you still have the case where it has one eigenvalue of multiplicity $2$, but an eigenspace of dimension $1$, for instance:

$$A=\left(\begin{matrix}\lambda & 1\\ 0 & \lambda\end{matrix}\right)$$

Then there is one eigenvalue ($=\lambda$) with multiplicity $2$, and

$$A-\lambda I=\left(\begin{matrix}0 & 1\\ 0 & 0\end{matrix}\right)$$

Thus, you would have to solve for

$$\left(\begin{matrix}0 & 1\\ 0 & 0\end{matrix}\right)\left(\begin{matrix}u \\ v\end{matrix}\right)=\left(\begin{matrix}0 \\ 0\end{matrix}\right)$$

And you get the single equation $v=0$. This means the eigenspace associated with $\lambda$ has dimension one, with eigenvector $\left(\begin{matrix}1 \\ 0\end{matrix}\right)$.
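If you want to verify this numerically, here is a short sketch with an arbitrary concrete value $\lambda=3$ (any $\lambda$ behaves the same way):

```python
import numpy as np

lam = 3.0                                # any concrete lambda works the same way
A = np.array([[lam, 1.0],
              [0.0, lam]])
M = A - lam * np.eye(2)                  # [[0, 1], [0, 0]]
print(np.linalg.matrix_rank(M))          # 1 -> eigenspace dimension 2 - 1 = 1
u = np.array([1.0, 0.0])                 # the eigenvector found above (v = 0)
print(np.allclose(A @ u, lam * u))       # True: A u = lambda u
```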