Continuity of a simple eigenvalue and its corresponding eigenvector

It is a classical result that the eigenvalues can always be arranged to be continuous, even when they are multiple. For hyperbolic matrices (i.e. matrices with real eigenvalues), a result of Bronshtein ensures Lipschitz continuity of the eigenvalues.

In the framework of your question, the eigenvalue is in fact smooth: let $p(X,t)$ be the characteristic polynomial; at a simple root you have $$ p(\lambda_0,t_0)=0,\quad \partial p/\partial X(\lambda_0,t_0)\not=0, $$ so the implicit function theorem gives, near $(\lambda_0,t_0)$, the equivalence $$ p(\lambda,t)=0\Longleftrightarrow \lambda=\alpha(t),\quad\alpha(t_0)=\lambda_0,\text{ with a smooth function $\alpha$}. $$

The eigenvectors, however, can be very unstable at multiple roots. Consider the $2\times 2$ matrices $$ A_1=\begin{pmatrix}1&0\\0&-1\end{pmatrix},\quad A_0=\begin{pmatrix}0&1\\1&0\end{pmatrix} $$ and the smooth matrix family $ A(t)=H(-t)e^{-1/t^2}A_0+H(t)e^{-1/t^2}A_1,\quad H=1_{\mathbb R_+}. $ The eigenvectors of $A(t)$ for $t<0$ are $$ e_1\pm e_2, $$ whereas the eigenvectors of $A(t)$ for $t>0$ are $e_1,e_2$, so the (normalized) eigenvectors are discontinuous at $t=0$.
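A quick numerical sketch of this jump (I take $A_1=\operatorname{diag}(1,-1)$ so that both one-sided families have simple eigenvalues; the sample values of $t$ are arbitrary):

```python
import numpy as np

def A(t):
    """Smooth family: e^{-1/t^2} A0 for t < 0, e^{-1/t^2} A1 for t > 0, 0 at t = 0."""
    if t == 0:
        return np.zeros((2, 2))
    A0 = np.array([[0.0, 1.0], [1.0, 0.0]])    # swap matrix, eigenvectors (1, +-1)
    A1 = np.array([[1.0, 0.0], [0.0, -1.0]])   # diag(1, -1), eigenvectors e1, e2
    return np.exp(-1.0 / t**2) * (A0 if t < 0 else A1)

# Eigenvector bases on either side of t = 0 (columns of the returned matrices):
_, vm = np.linalg.eigh(A(-0.5))   # entries all of modulus 1/sqrt(2)
_, vp = np.linalg.eigh(A(+0.5))   # a signed permutation matrix
print(np.round(np.abs(vm), 3))
print(np.round(np.abs(vp), 3))
```

However small $t$ is taken on either side, the two orthonormal eigenbases stay a fixed $45°$ rotation apart, so no continuous choice through $t=0$ exists.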


Bazin's answer covers the general case, but the two-dimensional case is exactly solvable, which admits a direct proof that may or may not be more useful in your case.

Let $A=\begin{pmatrix}a&c\\b&d\end{pmatrix}$, and consider its eigenvalue equation $$\det(\lambda-A)=\det\begin{pmatrix}\lambda-a&-c\\-b&\lambda-d\end{pmatrix}=\lambda^2-(a+d)\lambda+ad-bc=0,$$ whose general solution is $$ \lambda=\frac{a+d\pm\sqrt{(a+d)^2-4(ad-bc)}}{2}=\frac{a+d}2\pm\sqrt{\left(\frac{a-d}2\right)^2+bc}. $$ The eigenvalues, then, are continuous functions of the matrix entries. If you ever run into a branch cut, one eigenvalue branch runs out but the other one pops into existence there, which in practice means you can ignore the potential discontinuities of the square root.
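A minimal sanity check of the closed form against a library solver (the sample entries are my own):

```python
import numpy as np

def eigvals_2x2(a, b, c, d):
    """Closed-form eigenvalues of [[a, c], [b, d]] via the quadratic formula."""
    s = np.sqrt(complex(((a - d) / 2) ** 2 + b * c))  # complex-safe square root
    m = (a + d) / 2
    return m + s, m - s

# Compare against numpy on a sample matrix:
a, b, c, d = 1.0, 2.0, 3.0, -4.0
lam = eigvals_2x2(a, b, c, d)
ref = np.linalg.eigvals(np.array([[a, c], [b, d]]))
print(sorted(x.real for x in lam))   # [-5.0, 2.0], matching sorted(ref.real)
```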

Once you have an eigenvalue $\lambda$, you can get an eigenvector $x = \begin{pmatrix}u\\v\end{pmatrix}$ by solving $$(\lambda-a)u-cv=0,$$ with the other equation automatically satisfied for $\lambda$ an eigenvalue, which means that $$x = u\begin{pmatrix}1\\ \frac{\lambda-a}{c}\end{pmatrix} =\frac{1}{\sqrt{1+\left(\frac{\lambda-a}{c}\right)^2}}\begin{pmatrix}1\\ \frac{\lambda-a}{c}\end{pmatrix} =\frac{1}{\sqrt{c^2+\left({\lambda-a}\right)^2}}\begin{pmatrix}c\\ {\lambda-a}\end{pmatrix}$$ is an eigenvector even when $c=0$, and depends continuously on $\lambda$ and the matrix entries. (In the complex case take moduli squared, and the last relation is only true up to a phase, but the end result is still a continuously-dependent eigenvector.)
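This recipe can be sketched directly (the sample matrices and eigenvalues below are my own):

```python
import numpy as np

def eigvec_2x2(a, b, c, d, lam):
    """Normalized eigenvector (c, lam - a)/||(c, lam - a)|| of [[a, c], [b, d]]."""
    v = np.array([c, lam - a], dtype=float)
    return v / np.linalg.norm(v)

A = np.array([[1.0, 3.0],
              [2.0, -4.0]])   # eigenvalues 2 and -5
(a, c), (b, d) = A
for lam in (2.0, -5.0):
    x = eigvec_2x2(a, b, c, d, lam)
    assert np.allclose(A @ x, lam * x)   # A x = lam x

# The formula survives c = 0 (lower-triangular case), as long as lam != a:
B = np.array([[1.0, 0.0],
              [2.0, -4.0]])   # eigenvalues 1 and -4
y = eigvec_2x2(1.0, 2.0, 0.0, -4.0, -4.0)
assert np.allclose(B @ y, -4.0 * y)
```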

This breaks down only when $c=0$ and $\lambda=a$ simultaneously, in which case either the eigenvalue is not simple, which you've discarded, or the matrix is lower triangular. In the latter case there's more you can say about the eigenvalue, because $a$ and $d$ are bounded away from each other while $c$ is zero or small, in which case $$ \lambda=\frac{a+d}2 + \left(\frac{a-d}2\right)\left(1+\frac{2bc}{(a-d)^2}+O(b^2c^2)\right)=a+\frac{bc}{(a-d)}+O(b^2c^2). $$ Plugging this back in and rearranging, you get that $$x =\frac{1}{\sqrt{1+\frac{b^2}{(a-d)^2}+O(c)}}\begin{pmatrix}1\\\frac{b}{a-d}+O(c)\end{pmatrix}$$ is your eigenvector, and again it's continuous at $c=0$ as long as $a$ is bounded away from $d$.
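To see the expansion at work numerically (the values of $a$, $b$, $d$ below are illustrative), compare the exact branch near $\lambda=a$ with the approximation $a+bc/(a-d)$; the error shrinks like $(bc)^2$:

```python
import numpy as np

a, b, d = 2.0, 1.0, -1.0   # a bounded away from d
errs = []
for c in (1e-2, 1e-4, 1e-6):
    exact = (a + d) / 2 + np.sqrt(((a - d) / 2) ** 2 + b * c)  # branch near a
    approx = a + b * c / (a - d)
    errs.append(abs(exact - approx))
print(errs)   # each error roughly 1e4 times smaller than the last
```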

The take-home message, then, is that the two-dimensional case is very often amenable to direct manipulations, so you don't need the general results for higher dimensions. If you do things carefully enough, this usually helps you understand your problem better anyway (though if you don't, it gets messy fast).