Connection between eigenvalues and eigenvectors of a matrix in different bases

If you have a matrix $A$ you can find its eigenvalues and eigenvectors. If you represent this matrix relative to another basis $\mathcal{D}$ you can again find its eigenvalues and eigenvectors.

My questions
What is the connection between the eigenvalues and eigenvectors of this same matrix in different bases? Why is this so?

And how do you interpret the eigenvalues and eigenvectors in these different bases? Could you please also provide some intuition and an example?

Thank you!

EDIT
I created a follow-up question with a more complicated example here:
Example of eigenvectors in different bases (follow-up question)


The eigenvalues are exactly the same. The eigenvectors are the coordinate vectors, relative to $\mathcal{D}$, of the original eigenvectors.

Intuition. The matrix representation of $A$ relative to another basis gives the same linear transformation, just in a slightly different language; you are just doing a "change of coordinates"; but the action on the vectors (not on their names, but the actual vectors) stays the same. Since the action is the same, the eigenvalues and eigenvectors are the same, just "translated" into the new coordinates. Scalars (eigenvalues) don't need to be translated, so they stay the same.

Put another way: changing the basis and finding the representations relative to the new basis is like translating to a different language. If you take a novel or play written in English (the standard basis), and translate it into, say, Spanish (the different basis $\mathcal{D}$), the names of the characters may change (Henry may become Enrique, Joan of Arc may become Juana de Arco), but the characters will still perform the same actions in Spanish as they did in English. The plot doesn't change just because you translated it, though the words used to describe the plot did change.

Proof. To see that the eigenvalues are the same, note that $\lambda$ is an eigenvalue of $A$ if and only if $(A-\lambda I)\mathbf{x}=\mathbf{0}$ has a nontrivial solution. If $\mathcal{D}$ is a new basis for $V$, and we let $Q$ be the change-of-basis matrix from $\mathcal{D}$ to the standard basis (that is, the matrix whose columns are the vectors of $\mathcal{D}$), then the coordinate matrix of $A$ relative to the basis $\mathcal{D}$ is $Q^{-1}AQ$.

If $\mathbf{x}\neq \mathbf{0}$ is a solution to $(A-\lambda I)\mathbf{x}=\mathbf{0}$, then $Q^{-1}\mathbf{x}\neq \mathbf{0}$ (since $Q$ is invertible), and $$(Q^{-1}AQ - \lambda I)(Q^{-1}\mathbf{x}) = Q^{-1}AQQ^{-1}\mathbf{x}-\lambda Q^{-1}\mathbf{x} = Q^{-1}A\mathbf{x} - Q^{-1}\lambda \mathbf{x} = Q^{-1}(A-\lambda I)\mathbf{x}=\mathbf{0},$$ so $\lambda$ is also an eigenvalue of $Q^{-1}AQ$.

Now repeat the argument with $B=Q^{-1}AQ$, but going from $\mathcal{D}$ to the original basis to conclude that if $\lambda$ is an eigenvalue of $Q^{-1}AQ$, then it is also an eigenvalue of $A$.

So the eigenvalues are identical.

And as we saw above, if $\mathbf{x}$ is an eigenvector of $A$ corresponding to $\lambda$, then $Q^{-1}\mathbf{x}$ is an eigenvector of $Q^{-1}AQ$ corresponding to $\lambda$; but $Q^{-1}\mathbf{x}$ is just the coordinate vector of $\mathbf{x}$ with respect to the basis $\mathcal{D}$.
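If it helps, here is a small NumPy sketch (my own addition, not part of the proof) that checks this numerically: for a random $A$ and a random invertible $Q$, the eigenvalues of $A$ and $Q^{-1}AQ$ agree, and $Q^{-1}\mathbf{x}$ is an eigenvector of $Q^{-1}AQ$ whenever $\mathbf{x}$ is an eigenvector of $A$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
Q = rng.standard_normal((n, n))   # almost surely invertible

B = np.linalg.inv(Q) @ A @ Q      # matrix of A relative to the basis given by Q's columns

eig_A, vec_A = np.linalg.eig(A)
eig_B, _ = np.linalg.eig(B)

# Same eigenvalues, up to ordering and floating-point error.
print(np.allclose(np.sort_complex(eig_A), np.sort_complex(eig_B)))

# For each eigenpair (lambda, x) of A, check that Q^{-1} x is an eigenvector of B.
for lam, x in zip(eig_A, vec_A.T):
    y = np.linalg.inv(Q) @ x
    print(np.allclose(B @ y, lam * y))
```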

Example. Take $$A = \left(\begin{array}{cr} -3 & -8\\ 4 & 9 \end{array}\right).$$ The determinant of the matrix is $5$ and the trace is $6$, so the eigenvalues add up to $6$ and multiply out to $5$. Hence the eigenvalues are $\lambda=5$ and $\lambda=1$. The eigenvectors corresponding to $1$ are all nonzero multiples of $(2,-1)$; the eigenvectors corresponding to $5$ are the nonzero multiples of $(1,-1)$.
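If you would rather let a computer check the arithmetic, here is a quick NumPy verification (my own addition; the variable names are just my choices):

```python
import numpy as np

A = np.array([[-3., -8.],
              [ 4.,  9.]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)            # 1 and 5 (possibly in a different order)

# Each column of eigvecs is a normalized eigenvector; rescaled so the first
# entry is 1, you should see (1, -0.5), a multiple of (2, -1), for lambda = 1,
# and (1, -1) for lambda = 5.
for lam, v in zip(eigvals, eigvecs.T):
    print(lam, v / v[0])
```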

Now pick a new basis, say $\mathcal{D}=[(1,1), (-1,1)]$. Relative to this basis, we have that the change-of-basis matrix from $\mathcal{D}$ to the standard basis is the matrix $Q$ whose columns are the vectors of $\mathcal{D}$, so: $$Q = \left(\begin{array}{rr} 1 & -1\\ 1 & 1\end{array}\right),\qquad Q^{-1} = \left(\begin{array}{rr} \frac{1}{2} & \frac{1}{2}\\ -\frac{1}{2} & \frac{1}{2} \end{array}\right),$$ so the coordinate matrix of (the linear transformation given by multiplication by) $A$ relative to $\mathcal{D}$ is $[A]_{\mathcal{D}}$, where $$[A]_{\mathcal{D}} = Q^{-1}AQ = \left(\begin{array}{rr}1 & 0\\12 & 5\end{array}\right).$$ You can see that the eigenvalues are still $1$ and $5$.
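(Again, a small NumPy check of the change-of-basis computation, added by me and not part of the argument:)

```python
import numpy as np

A = np.array([[-3., -8.],
              [ 4.,  9.]])
Q = np.array([[1., -1.],
              [1.,  1.]])          # columns are the vectors of the basis D

A_D = np.linalg.inv(Q) @ A @ Q
print(A_D)                         # [[ 1.  0.]  [12.  5.]]
print(np.linalg.eigvals(A_D))      # eigenvalues are still 1 and 5
```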

As for eigenvectors, the coordinate matrices of the old eigenvectors relative to $\mathcal{D}$ are: $$ Q^{-1}\left(\begin{array}{r}2\\-1\end{array}\right) = \left(\begin{array}{r}\frac{1}{2}\\ -\frac{3}{2}\end{array}\right),\qquad\text{and}\qquad Q^{-1}\left(\begin{array}{r}1\\-1\end{array}\right) = \left(\begin{array}{r}0\\-1\end{array}\right)$$ and you can verify easily that the eigenvectors of $[A]_{\mathcal{D}}$ corresponding to $1$ are the nonzero multiples of $(\frac{1}{2}, -\frac{3}{2})$, and the eigenvectors of $[A]_{\mathcal{D}}$ corresponding to $5$ are the nonzero multiples of $(0,-1)$. (These are "really" the same as the original eigenvectors, only described "in Spanish" [relative to $\mathcal{D}$] instead of "in English" [relative to the standard basis].)
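Here is the same verification done numerically (my own sketch, using the $Q^{-1}$ and $[A]_{\mathcal{D}}$ computed above):

```python
import numpy as np

Q_inv = np.array([[ 0.5, 0.5],
                  [-0.5, 0.5]])
A_D = np.array([[ 1., 0.],
                [12., 5.]])

x1 = Q_inv @ np.array([2., -1.])   # eigenvector for lambda = 1, in D-coordinates
x5 = Q_inv @ np.array([1., -1.])   # eigenvector for lambda = 5, in D-coordinates

print(x1, np.allclose(A_D @ x1, 1 * x1))   # [ 0.5 -1.5]  True
print(x5, np.allclose(A_D @ x5, 5 * x5))   # [ 0. -1.]    True
```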


The eigenvalues don't change when you change basis, and the eigenvectors transform by changing basis.

Let $V$ be a vector space over a field $F$, let $T:V\to V$ be a linear map, and let $T_B$ be the matrix of $T$ with respect to a basis $B$ of $V$. Then $\sigma(T)$, the set of eigenvalues of $T$, is given by $$\sigma(T)=\{\lambda\in F\colon \exists v\in V,\ v\neq\mathbf{0}:\, T(v)=\lambda v\}.$$ On the other hand, $\sigma_B(T_B)$, the set of eigenvalues of the matrix $T_B$, is given by $$\sigma_B(T_B)=\{\lambda\in F\colon \exists v_B\in V_B,\ v_B\neq\mathbf{0}:\, T_B v_B=\lambda v_B\},$$ where $V_B=F^{|B|}$ (thought of as a set of column vectors, each of length $|B|$) and $v_B\in V_B$ is the column vector obtained by writing $v\in V$ with respect to the basis $B$ (and every vector in $V_B$ is of this form). However, $\big(T(v)\big)_B=T_Bv_B$, and $v_B=\mathbf{0}$ if and only if $v=\mathbf{0}$, so $\sigma(T)=\sigma_B(T_B)$.

Now the eigenvectors of $T$ are the vectors $v$ featuring in the expression for $\sigma(T)$ above; so the (column) eigenvectors of $T_B$ are the vectors $\{ v_B : v \text{ is an eigenvector of } T\}$. In particular, if $C$ is another basis and $T_C$ is the matrix of $T$ with respect to $C$, then the eigenvectors of $T_C$ are the vectors $\{v_C\colon v \text{ is an eigenvector of } T\}$; so you can pass from eigenvectors of $T_B$ to eigenvectors of $T_C$ by changing basis.
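To make this concrete, here is a small numerical illustration (my own sketch, not part of the argument): I take $V=\mathbb{R}^2$, a map $T$ given by a matrix in the standard basis, and two arbitrarily chosen bases $B$ and $C$ stored as matrices whose columns are the basis vectors. It checks the identity $\big(T(v)\big)_B=T_Bv_B$ on a sample vector and then passes from eigenvectors of $T_B$ to eigenvectors of $T_C$.

```python
import numpy as np

T = np.array([[-3., -8.],
              [ 4.,  9.]])         # T in the standard basis
B = np.array([[1., -1.],
              [1.,  1.]])          # basis B (columns are basis vectors)
C = np.array([[2.,  0.],
              [1.,  1.]])          # another basis C (columns are basis vectors)

T_B = np.linalg.inv(B) @ T @ B
T_C = np.linalg.inv(C) @ T @ C

# The identity (T(v))_B = T_B v_B, checked on a sample vector v:
v = np.array([3., -2.])
print(np.allclose(np.linalg.inv(B) @ (T @ v), T_B @ (np.linalg.inv(B) @ v)))   # True

# Passing from an eigenvector of T_B to one of T_C: v_C = C^{-1} B v_B.
lam, vecs_B = np.linalg.eig(T_B)
for l, v_B in zip(lam, vecs_B.T):
    v_C = np.linalg.inv(C) @ B @ v_B
    print(np.allclose(T_C @ v_C, l * v_C))                                     # True, True
```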


How the eigenvectors are related is outlined nicely in the answers above. Since a change of coordinates means the two matrices are similar, a simpler way to see that the eigenvalues are identical is to note that the determinant is invariant under similarity, and thus the characteristic polynomials of the two matrices are the same. It takes only a minor calculation to show that if $A$ and $B=Q^{-1}AQ$ are similar, then $A-\lambda I$ is similar to $B-\lambda I$: indeed $B-\lambda I = Q^{-1}AQ-\lambda Q^{-1}Q = Q^{-1}(A-\lambda I)Q$, and hence $\det(B-\lambda I)=\det(A-\lambda I)$.
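For instance, a short SymPy check (my own addition, reusing the $A$ and $Q$ from the example above) shows that the characteristic polynomials of $A$ and $Q^{-1}AQ$ come out as literally the same polynomial:

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[-3, -8], [4, 9]])
Q = sp.Matrix([[1, -1], [1, 1]])
B = Q.inv() * A * Q

# Characteristic polynomials det(A - lambda*I) and det(B - lambda*I).
pA = (A - lam * sp.eye(2)).det().expand()
pB = (B - lam * sp.eye(2)).det().expand()
print(pA, pB, sp.simplify(pA - pB) == 0)   # lambda**2 - 6*lambda + 5, twice, True
```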