Two matrices that are not similar have (almost) the same eigenvalues

I have two matrices

$$ A=\begin{pmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{pmatrix} \quad \text{ and } \quad B=\begin{pmatrix} d & e & f \\ d & e & f \\ d & e & f \end{pmatrix} $$

In reality my matrices are more like 1000 x 1000, but the only thing that is important for now is that the left matrix is diagonal and that every row of the right matrix is the same.

Obviously the eigenvalues of the left matrix are its diagonal entries. I want to create a new matrix $C$

$$C = A+B=\begin{pmatrix} a & 0 & 0 \\0 & b & 0 \\0 & 0 & c \end{pmatrix}+\begin{pmatrix} d & e & f \\d & e & f \\d & e & f \end{pmatrix}=\begin{pmatrix} a+d & e & f \\d & b+e & f \\d & e & c+f \end{pmatrix}$$

I am now wondering how the eigenvalues of this new matrix $C$ are related to the eigenvalues of the diagonal matrix $A$. Can I use a row-reduction argument to relate the eigenvalues of the two matrices?

The reason I am asking is that my 1000 x 1000 matrix (implemented in Mathematica), constructed as described above, gives almost the same eigenvalues as the corresponding diagonal matrix (only a few eigenvalues differ), and I really cannot think of any reason why that should be the case.

EDIT:

I wrote some simple Mathematica code to illustrate what I mean. One can see that every eigenvalue of the diagonal matrix $A$ also appears among the eigenvalues of $C$:

    dim = 50;

    (* diagonal matrix with random integer entries between 0 and 10 *)
    A = DiagonalMatrix[RandomInteger[{0, 10}, dim]];

    (* matrix whose dim rows are all equal to the random row mat *)
    mat = RandomReal[{0, 100}, dim];
    B = ConstantArray[mat, dim];

    c = A + B;  (* lowercase c, since C is a protected built-in symbol *)

    Abs[Eigenvalues[A]]
    Round[Abs[Eigenvalues[c]], 0.01]

    (*{10, 10, 10, 10, 10, 10, 9, 9, 9, 9, 9, 9, 8, 8, 8, 8, 7, 7, 7, 7, 7, 
    6, 6, 6, 6, 5, 5, 5, 5, 5, 4, 4, 4, 4, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2, 
    1, 1, 1, 0, 0, 0}*)

    (*{2084.89, 10., 10., 10., 10., 10., 9.71, 9., 9., 9., 9., 9., 8.54, 
    8., 8., 8., 7.72, 7., 7., 7., 7., 6.61, 6., 6., 6., 5.44, 5., 5., 5., 
    5., 4.29, 4., 4., 4., 3.51, 3., 3., 3., 3., 2.28, 2., 2., 2., 2., 
    1.21, 1., 1., 0.33, 0., 0.}*)


The phenomenon you're observing happens because your example is not a generic one, but has many repeated eigenvalues.

First, if all the eigenvalues of $A$ are distinct, then the rank-one perturbation $A+bk^T$ can have an arbitrary set of eigenvalues (for a suitable choice of $k$) iff the pair $(A,b)$ is controllable, or equivalently iff the matrix $R=[b\ Ab\ \ldots\ A^{n-1}b]$ has full rank. This result is known as pole placement in control theory. In our case, $b=[1\ 1\ \ldots\ 1]^T$ and $R$ becomes a Vandermonde matrix, which is invertible precisely under our assumption that the eigenvalues of $A$ are distinct. Conclusion: in general, you cannot say anything about the perturbed eigenvalues if you only know the eigenvalues of $A$ and not the perturbation.
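
As a quick sanity check of the controllability condition, here is a minimal Mathematica sketch (with made-up distinct diagonal entries): for $b=[1\ 1\ \ldots\ 1]^T$ the matrix $R$ is exactly the Vandermonde matrix of the $a_i$, and it has full rank.

    n = 4;
    a = {1, 2, 3, 5};                 (* hypothetical distinct eigenvalues of A *)
    A = DiagonalMatrix[a];
    b = ConstantArray[1, n];          (* b = [1 1 ... 1]^T *)

    (* controllability matrix R = [b  Ab  ...  A^(n-1) b] *)
    R = Transpose[Table[MatrixPower[A, j] . b, {j, 0, n - 1}]];

    MatrixRank[R]                                 (* 4: the pair (A, b) is controllable *)
    R == Table[a[[i]]^j, {i, n}, {j, 0, n - 1}]   (* True: R is the Vandermonde matrix *)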

What happens if the eigenvalues are repeated, as in your example? Write $A=\operatorname{diag}\{a_i\}$. Introduce the polynomials $$ p(\lambda)=\det(\lambda I-A)=\prod_{i=1}^n(\lambda-a_i),\qquad p_i(\lambda)=\frac{p(\lambda)}{\lambda-a_i}=\prod_{j\ne i}(\lambda-a_j), $$ and calculate the characteristic polynomial of $A+bk^T$ using Sylvester's determinant theorem: \begin{align} \det(\lambda I-A-bk^T)=p(\lambda)\bigl(1-k^T(\lambda I-A)^{-1}b\bigr)=p(\lambda)-\sum_{i=1}^n k_ib_ip_i(\lambda). \end{align} If $a$ is an eigenvalue of $A$ of multiplicity $m>1$, then $p$ and every $p_i$ contain the common factor $(\lambda-a)^{m-1}$, so $a$ remains an eigenvalue of the perturbed matrix as well, with multiplicity at least $m-1$. This is exactly what you see in your numerical example.
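
Here is a small Mathematica check of this determinant identity, with arbitrary made-up values for $a$ and $k$:

    n = 4;
    a = {1, 2, 3, 5};  k = {2, -1, 4, 3};   (* hypothetical data *)
    b = ConstantArray[1, n];
    A = DiagonalMatrix[a];

    p[x_] := Product[x - a[[i]], {i, n}];
    pi[x_, i_] := Product[x - a[[j]], {j, Delete[Range[n], i]}];

    lhs = Det[x IdentityMatrix[n] - A - Outer[Times, b, k]];
    rhs = p[x] - Sum[k[[i]] b[[i]] pi[x, i], {i, n}];

    Expand[lhs - rhs]   (* 0: the two sides agree identically *)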

Hence the rule: if $a$ is an eigenvalue of $A$ of multiplicity $m>1$, then $a$ is also an eigenvalue of the perturbed matrix $A+bk^T$, with multiplicity at least $m-1$.


EDIT: A simple example: take $A=I$, the $n\times n$ identity matrix. The characteristic equation becomes $$ \det(\lambda I-I-bk^T)=\det((\lambda-1)I-bk^T)=\det(\mu I-bk^T)=0, \qquad \mu=\lambda-1. $$ The eigenvalues of the rank-one matrix $bk^T$ are $n-1$ zeros plus one more, equal to $k^Tb$ (its trace). Those $n-1$ zeros for $\mu$ are $n-1$ ones for $\lambda$.
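
And a short Mathematica confirmation of this example, with a hypothetical $k$:

    n = 5;
    b = ConstantArray[1, n];
    k = {2, -1, 4, 3, 7};   (* hypothetical k, with k.b = 15 *)

    Eigenvalues[IdentityMatrix[n] + Outer[Times, b, k]]
    (* {16, 1, 1, 1, 1}: the eigenvalue 1 keeps multiplicity n-1 = 4,
       and the remaining one moves to 1 + k^T b = 16 *)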


Your second matrix has rank $1$, because every row is a linear combination of the first row (namely, they're equal).

That means that the kernel of $B$ is $(n-1)$-dimensional. If you're lucky, many of the eigenvectors of $A$ lie in or near that kernel, in which case $$ (A+B)v = Av + Bv = Av + 0 = Av = \lambda_v v, $$ so $v$ is still an eigenvector with the same eigenvalue.

That this does not happen in general is well illustrated in 3-space by picking $(d,e,f)$ so that none of the standard basis vectors is perpendicular to it (i.e., none lies in the kernel of $B$). If you pick $(d,e,f) = (6,6,6)$ and $(a,b,c)=(4,-2,8)$, for instance, you'll find that the eigenvalues of the sum differ substantially from those of $A$. Here's a transcript of a MATLAB session demonstrating this:

    >> A = diag([4, -2, 8], 0)

    A =

         4     0     0
         0    -2     0
         0     0     8

    >> B = repmat([6,6,6], 3, 1)

    B =

         6     6     6
         6     6     6
         6     6     6

    >> eig(A)

    ans =

        -2
         4
         8

    >> eig(A+B)

    ans =

       -0.1217
        5.9193
       22.2024
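
For contrast, here is a minimal sketch of the lucky case (in Mathematica, to match the question's setup, with a hypothetical row): zeroing the middle entry of the row puts the basis vector $e_2$ in the kernel of $B$, so the eigenvalue $-2$ of $A$ survives exactly.

    A = DiagonalMatrix[{4, -2, 8}];
    B = ConstantArray[{6, 0, 6}, 3];   (* row (6, 0, 6): e_2 lies in the kernel of B *)

    Eigenvalues[N[A + B]]
    (* ≈ {18.3246, 5.67544, -2.}: the eigenvalue -2 of A is untouched *)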

So for your big matrix, either (a) many of the standard basis vectors lie nearly in the kernel of $B$ (perhaps because the repeated "row" in $B$ has lots of small-ish entries compared to the diagonal entries of $A$, i.e., its eigenvalues), or (b) something far weirder is happening, and it is the particular structure of your matrices that produces this phenomenon. I'm betting on case (a), but that's just a wild guess.
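
For what it's worth, a minimal Mathematica sketch of case (a), with made-up numbers: when the repeated row is small compared to the diagonal of $A$, each eigenvalue shifts only slightly; to first order the $i$-th eigenvalue moves by the $i$-th entry of the row, since the perturbation seen by the eigenvector $e_i$ is just $B_{ii}$.

    A = DiagonalMatrix[{4., -2., 8.}];
    B = ConstantArray[{0.01, 0.02, 0.03}, 3];   (* hypothetical tiny repeated row *)

    Eigenvalues[A + B]
    (* ≈ {8.03, 4.01, -1.98}: each eigenvalue of A moves by roughly
       the matching entry of the row *)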