There exists $C\neq0$ with $CA=BC$ iff $A$ and $B$ have a common eigenvalue
Solution 1:
For the $\Rightarrow$ direction, if $A$ is diagonalizable, then it's easy to finish off your argument: let $x_1, \ldots, x_m$ be a basis of $V$ consisting of eigenvectors of $A$. Since $C$ is nonzero, some $Cx_i$ must be nonzero, so your argument shows that the corresponding eigenvalue $\lambda_i$ is an eigenvalue of $B$.
If $A$ is not diagonalizable, then it's trickier. One way is to use generalized eigenvectors: a nonzero vector $v \in V$ is a generalized eigenvector of $A$ if $(A-\lambda I)^k v = 0$ for some positive integer $k$ and scalar $\lambda$; the scalar $\lambda$ is then necessarily an eigenvalue of $A$. The key fact about generalized eigenvectors is that (over an algebraically closed field, such as $\mathbb{C}$) for every matrix $A$ there is a basis of $V$ consisting of generalized eigenvectors of $A$.
Take a basis $x_1, \ldots, x_m$ of generalized eigenvectors of $A$, with corresponding eigenvalues $\lambda_1, \ldots, \lambda_m$. As before, some $Cx_i$ is nonzero. The condition $CA=BC$ gives $(B-\lambda_i I)C = C(A-\lambda_i I)$, and induction on $k$ then shows $$(B-\lambda_i I)^k C = C (A-\lambda_i I)^k.$$ Applying both sides to $x_i$, with $k$ chosen so that $(A-\lambda_i I)^k x_i = 0$, we conclude that $Cx_i$ is a generalized eigenvector of $B$, and therefore $\lambda_i$ is an eigenvalue of $B$.
For the $\Leftarrow$ direction, again the diagonalizable case is easy: take a common eigenvalue $\lambda$ and map an eigenvector of $A$ to an eigenvector of $B$, just as you did above. Send the rest of the basis of eigenvectors of $A$ to $0$. You can check that $CA=BC$ holds on this basis of eigenvectors.
The general case is done using generalized eigenvectors. Take a common eigenvalue $\lambda$. Now we have to be careful: we can't always map an eigenvector of $A$ to an eigenvector of $B$. (Let $A$ be $$\left( \begin{matrix} 1 & 1\cr 0 & 1 \end{matrix} \right)$$ and $B = I$; show that every matrix $C$ satisfying $CA=BC$ must take the unique eigenspace of $A$ to zero.) Consider the generalized eigenspaces $$V_{\mu} = \{v \in V : (A - \mu I)^k v = 0 \mbox{ for some positive integer } k\}.$$ The space $V$ is the direct sum of the $V_{\mu}$. For $\mu \neq \lambda$, send $V_{\mu}$ to $0$. The tricky part is what to do with $V_{\lambda}$ itself.
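Concretely, here is a quick symbolic check of the parenthetical example (my own sketch, assuming SymPy is available): solving $CA=BC$ for a general $2\times2$ matrix $C$ forces the first column of $C$ to vanish, i.e. $C$ kills the eigenspace of $A$, which is spanned by $e_1$.

```python
# Sketch: solve CA = BC symbolically for the example A = [[1,1],[0,1]], B = I.
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([[1, 1], [0, 1]])
B = sp.eye(2)
C = sp.Matrix([[a, b], [c, d]])

# CA = BC reduces to C(A - I) = 0, which forces the first column of C to be zero.
print(sp.solve(C * A - B * C, [a, b, c, d], dict=True))  # [{a: 0, c: 0}]
```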
Since we're now only worried about $V_{\lambda}$, we can replace $V$ by $V_\lambda$; thus we may assume that $\lambda$ is the only eigenvalue of $A$. Let $k$ be the smallest positive integer such that $(A-\lambda I)^k v = 0$ for all $v \in V$. For lack of a better term, call $k$ the index of $A$ on $V$. We proceed by induction on $k$. If $k=1$, we are in the diagonalizable case.
If $k>1$, let $v$ be an eigenvector of $A$ for $\lambda$. Then $A$ preserves the line $\langle v \rangle$ and therefore acts on the quotient $\overline{V} = V/\langle v \rangle$. Now $(A-\lambda I)^{k-1}$ kills $\overline{V}$ (since otherwise $(A - \lambda I)^k$ would not kill $V$), so the index of $A$ on $\overline{V}$ is less than $k$. Inductively, we have a nonzero map $\overline{C} : \overline{V} \to W$ such that $\overline{C} \overline{A} = B \overline{C}$, where $\overline{A}$ is the map induced by $A$ on $\overline{V}$; composing with the projection from $V$ to $\overline{V}$, we get a nonzero map $C : V \to W$ satisfying $CA = BC$.
EDIT: I don't think the fact that the index of $A$ on $\overline{V}$ is smaller than $k$ is as trivial as I made it sound above. You need to look at the Jordan blocks of $A$. Or, instead of inducting on $k$, simply note that $\dim \overline{V} < \dim V$ and induct on $\dim V$ instead.
Solution 2:
(This is not an answer but a follow-up comment.)
Since $C$ is a nonzero linear transformation with $CA=BC$, it follows that $A$ and $B$ have a common eigenvalue.
If $C$ is invertible, then $A$ and $B$ have all their eigenvalues in common.
So does the number of common eigenvalues depend on the rank of $C$? Since $C$ is nonzero, its rank is at least $1$.
Solution 3:
Here is a mild generalization. Let $A$ be a principal ideal domain, and $V$ and $W$ two finitely generated torsion modules. Then there is a nonzero $A$-linear map from $V$ to $W$ if and only if there is an irreducible element $p$ such that $pV\not=V$ and $pW\not=W$.
We can assume $V=A/(p^r)$, $W=A/(q^s)$, where $p$ and $q$ are irreducible elements, and $r$ and $s$ are positive integers, for in the general case $V$ and $W$ will be finite direct sums of modules of this form. We must check that there is a nonzero $A$-linear map from $V$ to $W$ if and only if $(p)=(q)$.
If $(p)=(q)$, we compose the canonical projection $A/(p^r)\to A/(p)$ with the $A$-linear map $A/(p)\to A/(p^s)$ induced by multiplication by $p^{s-1}$; the composite is nonzero since it sends $1$ to $p^{s-1}\neq0$ in $A/(p^s)$.
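As a tiny concrete instance (my own illustration, taking $A=\mathbb{Z}$, $p=2$, $r=2$, $s=3$), the composite is the map $\mathbb{Z}/4 \to \mathbb{Z}/8$, $x \mapsto 4x$:

```python
# Sketch over A = Z with p = 2, r = 2, s = 3 (illustrative values):
# the composite A/(p^r) -> A/(p) -> A/(p^s) is x |-> p^(s-1) * x mod p^s.
p, r, s = 2, 2, 3
f = lambda x: (p ** (s - 1) * x) % p ** s
print([f(x) for x in range(p ** r)])  # [0, 4, 0, 4] -- a nonzero map
print(f(1) == f(1 + p ** r))          # True: f is well defined on Z/(p^r)
```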
Assume $(p)\not=(q)$. Let $f:A/(p^r)\to A/(q^s)$ be $A$-linear, and let $x$ be in $A/(p^r)$. We have $p^rf(x)=f(p^rx)=f(0)=0$. As $p^r$ is invertible mod $q^s$, this forces $f(x)=0$; hence $f=0$ and the proof is complete.
EDIT. Here is a further generalization. Let $A$ be an a priori noncommutative ring. "Module" shall mean left $A$-module.
Assume that $V$ and $W$ are nonzero modules of finite length, that $S$ and $T$ are simple modules, that all simple subquotients of $V$ are isomorphic to $S$, and that all simple subquotients of $W$ are isomorphic to $T$. Then $\mathrm{Hom}(V,W)$ is nonzero if and only if $S$ and $T$ are isomorphic.
This is clear: a nonzero map $V \to W$ has a nonzero image, which is a quotient of $V$ and a submodule of $W$, so its simple subquotients are isomorphic to both $S$ and $T$. Conversely, if $S \cong T$, compose a projection of $V$ onto a simple quotient (isomorphic to $S$) with an embedding of a simple submodule of $W$ (isomorphic to $T$).
Solution 4:
"Only if" part. Let us also abuse the notations a little bit so that $A,B,C$ represent both linear transformations and their matrix representations under some bases. Let $v=\dim(V)$ and $w=\dim(W)$. [Edit: There was a paragraph explaining why we may assume $v=w$ WLOG, but as Ted pointed out in the comment below, this is not needed. So I snipped it out.]
Now, the "only if" part has been proved in my answer to a similar question, but I can repeat the answer here. Since $AC=CB$, we have $$ A^jC = A^{j-1}(AC) = A^{j-1}CB = A^{j-2}(AC)B = A^{j-2}CB^2 = \ldots=CB^j $$ for any $j\ge0$. Hence $g(A)C=Cg(B)$ for any polynomial $g$. In particular, if we take $g$ as the minimal polynomial of $B$, then $g(A)C=Cg(B)=0$. If $A$ and $B$ does not share a common eigenvalue, then $g(A)$ is invertible and hence $C=0$, which contradicts our assumption.
"If" part. Suppose $A$ and $B$ share a common eigenvalue $\lambda$. Choose two ordered bases $\{v_1,v_2\ldots\}$ and $\{w_1,w_2\ldots\}$ of $V$ and $W$ so that the matrix representations of $A$ and $B$ with respect to them are Jordan forms and both the first Jordan blocks of $A$ and $B$ correspond to the eigenvalue $\lambda$. Let $m$ be the size of the first Jordan block of $A$. Define $C$ by $Cv_m=w_1$ and $Cv_i=0$ for $i\not=m$. That is, in the matrix representation of $C$, only the $(1,m)$-th entry is equal to 1 and the rest are zeroes. Then $C$ is a nonzero linear map such that $CA=BC$. In fact, with respect to the above bases of $V$ and $W$, the matrix representation of $CA$ or $BC$ is a $w$-by-$v$ matrix with a $\lambda$ in the $(1,m)$-th entry and zeroes elsewhere.
Edit for the "if" part. Alternatively, if $A$ and $B$ are complex square matrices (of possibly different sizes), $(\lambda,v)$ is a left eigenpair of $A$ and $(\lambda,u)$ is a right eigenpair of $B$, then $CA=\lambda C=BC$ when $C=uv^T$.
Solution 5:
Here is a simple solution for the "if" direction.
Let $\lambda$ be the common eigenvalue. Let $u$ be a right eigenvector of $B$, that is, $$Bu= \lambda u,$$
and let $v$ be a left eigenvector of $A$ (one exists since $\lambda$ is also an eigenvalue of $A^T$), that is, $$v^TA= \lambda v^T.$$
Then $C =uv^T$ is a nonzero matrix (both $u$ and $v$ are nonzero) which works: $$CA = u v^T A = \lambda u v^T =\lambda C,$$ $$BC= B u v^T= \lambda u v^T= \lambda C.$$
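A numerical sanity check of this rank-one construction (my own sketch; the particular matrices and the use of NumPy's eigensolver are illustrative):

```python
# Sketch: build C = u v^T from a left eigenvector of A and a right eigenvector of B.
import numpy as np

rng = np.random.default_rng(1)
lam = 3.0
# Upper-triangular matrices share the eigenvalue lam placed on their diagonals.
A = np.triu(rng.standard_normal((4, 4)), 1) + np.diag([lam, 1.0, 2.0, 4.0])
B = np.triu(rng.standard_normal((4, 4)), 1) + np.diag([lam, 5.0, 6.0, 7.0])

wA, VA = np.linalg.eig(A.T)                   # left eigenvectors of A
v = VA[:, np.argmin(np.abs(wA - lam))].real
wB, VB = np.linalg.eig(B)                     # right eigenvectors of B
u = VB[:, np.argmin(np.abs(wB - lam))].real

C = np.outer(u, v)                            # C = u v^T
print(np.allclose(C @ A, lam * C), np.allclose(B @ C, lam * C))  # True True
```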