Usually we look for the eigenvectors of a matrix $M$ as the vectors that span
a subspace invariant under left multiplication by the matrix: $M\vec x= \lambda \vec x$.

If we instead take the transposed problem $\mathbf x M=\lambda \mathbf x$, where $\mathbf x$ is a row vector, we see that the eigenvalues
are the same, but the "eigenrows" are not, in general, the transposes of the eigenvectors.

E.g., for the matrix $$\begin{bmatrix} 0&1\\2&1 \end{bmatrix}$$
we find, for the eigenvalue $\lambda=2$, $$ \begin{bmatrix} 0&1\\2&1 \end{bmatrix} \begin{bmatrix} 1\\2 \end{bmatrix}= 2\begin{bmatrix} 1\\2 \end{bmatrix}, \qquad \begin{bmatrix} 1&1 \end{bmatrix} \begin{bmatrix} 0&1\\2&1 \end{bmatrix} = 2\begin{bmatrix} 1&1 \end{bmatrix} \;\;. $$
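
This can be checked numerically; here is a minimal NumPy sketch (the variable names are just for illustration) confirming both identities for $\lambda=2$:

```python
# Minimal NumPy check of the 2x2 example above.
import numpy as np

A = np.array([[0., 1.],
              [2., 1.]])

x_right = np.array([1., 2.])   # candidate right eigenvector for lambda = 2
x_left  = np.array([1., 1.])   # candidate left eigenvector ("eigenrow") for lambda = 2

print(A @ x_right)   # [2. 4.] == 2 * [1. 2.]
print(x_left @ A)    # [2. 2.] == 2 * [1. 1.]
```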

So my questions are: Is there some relation between these "right" and "left" eigenspaces of the
same matrix? Is there some reason why the "eigenrows" are not studied as much as the eigenvectors?


They are usually called "left eigenvectors". There is no reason for them to be neglected, except that many people (not all) like to consider operators acting on the left of something rather than on the right.

Any matrix is similar to its transpose. Thus the left and right eigenspaces for the same eigenvalue have the same dimension.
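
One concrete way to see the equal-dimension claim: the right $\lambda$-eigenspace is the null space of $A-\lambda I$, the left one is the null space of $(A-\lambda I)^{T}$, and a matrix and its transpose have the same rank, hence the same nullity. A quick numerical sketch (reusing the matrix from the question):

```python
# Right lambda-eigenspace = null space of (A - lambda*I);
# left lambda-eigenspace  = null space of (A - lambda*I)^T.
# Equal ranks imply equal nullities, i.e. equal eigenspace dimensions.
import numpy as np

A = np.array([[0., 1.],
              [2., 1.]])
lam = 2.0
M = A - lam * np.eye(2)

right_dim = 2 - np.linalg.matrix_rank(M)     # nullity of  A - lambda*I
left_dim  = 2 - np.linalg.matrix_rank(M.T)   # nullity of (A - lambda*I)^T
print(right_dim, left_dim)                   # 1 1
```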


In general, there is no simple connection between the two sets of vectors, beyond the fact that they correspond to the same eigenvalues.

Of course, if $A$ is symmetric, then the "eigenrows" are simply transposes of eigenvectors.

As for your second question:

The problem of finding eigenrows for a matrix is identical to the problem of finding eigenvectors for the transpose of the matrix. This means there is no real reason to develop a separate theory for something that is equivalent to an already well-established one.
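
As a small illustration (using the matrix from the question, and assuming the plain-transpose convention, which is the natural one for real matrices), the eigenrows can be recovered directly from the eigenvectors of the transpose:

```python
# The "eigenrows" of A are just the transposed (right) eigenvectors of A^T.
import numpy as np

A = np.array([[0., 1.],
              [2., 1.]])

w, V = np.linalg.eig(A.T)        # columns of V are right eigenvectors of A^T
for lam, v in zip(w, V.T):
    print(lam, v @ A, lam * v)   # v @ A equals lam * v, so v acts as a left eigenvector of A
```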


There is a relation between left and right eigenspaces, as follows:

If $\lambda$ and $\mu$ are distinct real numbers, then the left $\lambda$ eigenspace is orthogonal to the right $\mu$ eigenspace.

Proof: Suppose that $\vec{v}$ is a row vector in the left $\lambda$ eigenspace and $\vec{w}$ is a column vector in the right $\mu$ eigenspace. Then $$\vec{v} A \vec{w} = \vec{v} (\mu \vec{w}) = \mu (\vec{v} \cdot \vec{w})$$ and also $$\vec{v} A \vec{w} = (\lambda \vec{v}) \vec{w} = \lambda (\vec{v} \cdot \vec{w}).$$

So $$\mu (\vec{v} \cdot \vec{w})=\lambda (\vec{v} \cdot \vec{w})$$ and $\vec{v} \cdot \vec{w}=0$.

In particular, if $A$ is symmetric, this shows that the eigenspaces for distinct eigenvalues are orthogonal.
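
Here is a quick numerical check of the statement, using the matrix from the question with $\lambda = 2$ and $\mu = -1$:

```python
# [1, 1] is a left eigenvector for lambda = 2; [1, -1] is a right eigenvector for mu = -1.
import numpy as np

A = np.array([[0., 1.],
              [2., 1.]])

v = np.array([1., 1.])    # left eigenvector, eigenvalue 2
w = np.array([1., -1.])   # right eigenvector, eigenvalue -1

print(v @ A, 2 * v)    # [2. 2.]  [2. 2.]   -> v is a left eigenvector for 2
print(A @ w, -1 * w)   # [-1. 1.] [-1. 1.]  -> w is a right eigenvector for -1
print(v @ w)           # 0.0               -> the predicted orthogonality
```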


As noted by Robert, your "eigenrows" are more conventionally called "left eigenvectors", and they are just the (transposed) eigenvectors of the matrix's transpose.

One particularly important application involving both the left and right eigenvectors is the eigenvalue condition number. For every eigenvalue $\lambda$ of $\mathbf A$, there is an associated condition number, defined as the reciprocal of the dot product of the (suitably scaled) left and right eigenvectors; geometrically, it is the secant of the angle between them. This is particularly valuable when $\mathbf A$ is not a normal matrix, since the condition number measures how far $\mathbf A$ is from having $\lambda$ as a multiple, and possibly defective, eigenvalue. See this reference, among others.
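
As a rough illustration of that definition (my own sketch, not taken from the reference above): `scipy.linalg.eig` can return unit-norm left and right eigenvectors, and the condition number of each eigenvalue is then the reciprocal of the absolute value of their inner product.

```python
# Eigenvalue condition numbers as 1 / |y^H x| for the unit-norm
# left (y) and right (x) eigenvectors returned by SciPy.
import numpy as np
from scipy.linalg import eig

A = np.array([[0., 1.],
              [2., 1.]])

w, vl, vr = eig(A, left=True, right=True)   # columns of vl / vr have unit Euclidean norm
for i, lam in enumerate(w):
    kappa = 1.0 / abs(vl[:, i].conj() @ vr[:, i])
    print(lam, kappa)   # kappa == 1 exactly when the left and right eigenvectors are parallel,
                        # as happens for every eigenvalue of a normal matrix
```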