Is there a fundamental theorem of algebra for matrices?
The fundamental theorem of algebra says we can do this ($z\in\mathbb{C}$ of course)
$$\sum_{k=0}^n a_kz^k= a_n\prod_{k=1}^n (z-\omega_k)=0$$
for some set $\{\omega_k \in\mathbb{C}\}_{k=1,2,\ldots , n}$. Is there a similar answer for square matrices with complex entries (i.e. for linear operators $A_k:\mathbb{C}^n \to\mathbb{C}^n$, $k=0,1,2,\ldots, n$)? That is, can we do such a factorization
$$\sum_{k=0}^n A_kz^k= A_n\prod_{k=1}^n (z\mathbb{1}-\Omega_k)=0$$
where $\mathbb{1}$ is the identity operator and $\Omega_k : \mathbb{C}^n\to\mathbb{C}^n$ are linear operators? Certainly we're restricted here by the non-commutativity of matrices in general, so the order of this decomposition would matter. But, maybe there are some types of matrices that admit commutative "root" matrices?
Sorry if my notation is awkward, and also if the answer to this is considered common knowledge.
Solution 1:
This is a compilation of answers given in comments by @darijgrinsberg and @dineshdileep, hence community wiki.
It's true for simultaneously diagonalizable matrices (and possibly other matrices?). It's also not true in general.
Recall that the product $AB$ of simultaneously diagonalizable matrices $A$ and $B$ is simultaneously diagonalizable with $A$ and with $B$. More generally, any linear combination of arbitrary products of (possibly more than $2$) simultaneously diagonalizable matrices is simultaneously diagonalizable with all of its components. So if all the matrices $A_k$ are simultaneously diagonalizable, the problem is solved trivially by diagonalizing all the terms and applying the standard FTA to each diagonal entry.
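To make that case concrete, here is a small numerical sketch (my own addition, not part of the original answers; it assumes Python with numpy and a randomly chosen common eigenbasis $P$): each diagonal entry yields a scalar polynomial, the scalar FTA supplies its roots, and the root matrices $\Omega_k$ are rebuilt in the original basis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2                                      # matrix size; the polynomial degree is also n here

# Common (random) eigenbasis P and diagonal coefficients D_k, so that
# the A_k = P D_k P^{-1} are simultaneously diagonalizable.
P = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Pinv = np.linalg.inv(P)
D = [np.diag(rng.standard_normal(n) + 1j * rng.standard_normal(n)) for _ in range(n + 1)]
A = [P @ Dk @ Pinv for Dk in D]

# In the common eigenbasis the matrix polynomial splits into n independent scalar
# polynomials (one per diagonal entry); factor each one with the scalar FTA.
roots = [np.roots([D[k][i, i] for k in range(n, -1, -1)]) for i in range(n)]
Omega = [P @ np.diag([roots[i][k] for i in range(n)]) @ Pinv for k in range(n)]

# Check sum_k A_k z^k = A_n * prod_k (z I - Omega_k) at a random test point z.
z = rng.standard_normal() + 1j * rng.standard_normal()
lhs = sum(A[k] * z**k for k in range(n + 1))
rhs = A[n]
for Om in Omega:
    rhs = rhs @ (z * np.eye(n) - Om)
print(np.allclose(lhs, rhs))               # expect True
```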
Here is a counterexample for the general case:
\begin{equation*} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} z^2 + \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} = 0 \end{equation*}
If we could decompose this into linear factors, we'd have \begin{equation*} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} (zI - A)(zI - B) = z^2I - z (A + B) + AB \end{equation*} hence $A+B=0$, i.e. $B=-A$, so we would need $AB=-A^2=\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$, that is, $A^2=\begin{bmatrix} 0 & -1 \\ 0 & 0 \end{bmatrix}$. But that matrix has no square root: it would give $A^4=0$, so $A$ would be nilpotent, and a nilpotent $2\times 2$ matrix already satisfies $A^2=0$, a contradiction.
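For completeness, here is a quick computational check of that last step (my own sketch, not part of the original answer, assuming Python with sympy): solving $A^2=\begin{bmatrix} 0 & -1 \\ 0 & 0 \end{bmatrix}$ entrywise as a polynomial system should return no solutions.

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([[a, b], [c, d]])
N = sp.Matrix([[0, -1], [0, 0]])           # the matrix that would have to equal A^2

# Entrywise polynomial system A^2 = N over the complex numbers.  An empty
# solution list confirms that this nilpotent matrix has no square root
# (the same holds for [[0, 1], [0, 0]], which is similar to it).
solutions = sp.solve(list(A * A - N), [a, b, c, d], dict=True)
print(solutions)                            # expected: []
```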
Solution 2:
When $n>1$, writing $\sum_{k=0}^n A_kz^k=0$ is generically meaningless. Indeed, the existence of such a $z$ implies that the $(A_k)_k$ are linearly dependent over $\mathbb{C}$. More precisely, the equation says that $n^2$ complex polynomials (one for each entry) have a common complex root, which is already generically false for just $2$ polynomials!
EDIT. The correct question can be stated as follows: let $(A_k)_{0\leq k\leq n}\in M_n(\mathbb{C})$; our problem (*): can we factorize $\sum_{k=0}^n z^kA_k$ in the form $A_n \prod_{i=1}^n (zI-X_i)$ where the $X_i\in M_n(\mathbb{C})$? Of course there are counter-examples, and mathematicians have a great time finding them; the best-known example is probably linked to the equation $X^2=U$: one immediately points out that this equation may have no solution, yet for a generic $U$ it always has $2^n$ solutions! Why not say that first? Mathematics is beautiful when it works.
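To illustrate the $X^2=U$ example, here is a small numerical sketch (again my own addition, assuming numpy): any square root $X$ commutes with $U=X^2$, so for a generic $U$ with distinct eigenvalues every square root shares $U$'s eigenvectors, and choosing a sign for each eigenvalue gives the $2^n$ solutions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
n = 2
U = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Any square root X commutes with U = X^2, so for a generic U with distinct
# eigenvalues X = P diag(+-sqrt(lam_i)) P^{-1}: one square root per sign pattern.
lam, P = np.linalg.eig(U)
Pinv = np.linalg.inv(P)

count = 0
for signs in itertools.product([1, -1], repeat=n):
    X = P @ np.diag([s * np.sqrt(l) for s, l in zip(signs, lam)]) @ Pinv
    if np.allclose(X @ X, U):
        count += 1
print(count)                                # expect 2^n = 4 for this generic 2x2 U
```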
Proposition. Assume that $n\leq 3$. For generic $(A_i)_i$, (*) always has $6$ simple solutions when $n=2$ and $1680$ simple solutions when $n=3$.
Remarks. 1. "Generically" is meant in the sense of a Zariski-open dense set, or in the probabilistic sense (the entries of the $(A_i)$ are i.i.d. complex random variables following a normal law; the result then holds with probability $1$). The results and definitions used or cited in this post are in my paper [1] http://arxiv.org/pdf/1304.2506.pdf (published in Linear Algebra and its Applications).
2. Clearly, we can also consider equations of degree $p$ in $M_n(\mathbb{C})$.
3. I wrote a proof only for $n\leq 3$; I think the result can be generalized to $n>3$.
Proof. Since $A_n$ is generic, it is invertible and we may assume that it is $I$.
Case 1. $n=2$. $U,V$ are known generic matrices and the unknowns $X,Y$ satisfy $z^2I-Uz+V=(zI-X)(zI-Y)$, that is, $U=X+Y$, $V=XY$; then $X^2-XU+V=0$ and $Y=U-X$. The first of these is a unilateral equation of degree $2$ (of Riccati type) and generically admits $6$ solutions $X$, hence $6$ solutions $(X,Y)$.
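Here is a numerical sanity check of the count $6$ (my own sketch, assuming numpy; the companion/solvent construction is standard matrix-polynomial theory, not taken from [1]): transposing $X^2-XU+V=0$ makes $X^T$ a right solvent of $\lambda^2 I-U^T\lambda+V^T$, whose block companion matrix has $2n=4$ eigenpairs; each choice of $2$ of them yields one solution, hence $\binom{4}{2}=6$.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 2

# Generic coefficients of z^2 I - U z + V.
U = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
V = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# X^2 - X U + V = 0  <=>  (X^T)^2 - U^T X^T + V^T = 0, i.e. X^T is a right
# solvent of Q(lam) = lam^2 I - U^T lam + V^T.  Linearize Q by its block
# companion matrix: each choice of n = 2 of its 2n = 4 eigenpairs gives one
# solvent, hence C(4, 2) = 6 factorizations for generic U, V.
I, O = np.eye(n), np.zeros((n, n))
companion = np.block([[O, I], [-V.T, U.T]])
lam, vecs = np.linalg.eig(companion)

count = 0
for idx in itertools.combinations(range(2 * n), n):
    cols = list(idx)
    Wtop = vecs[:n, cols]                  # top blocks of the chosen eigenvectors
    X = (Wtop @ np.diag(lam[cols]) @ np.linalg.inv(Wtop)).T
    Y = U - X
    if np.allclose(X + Y, U) and np.allclose(X @ Y, V):
        count += 1
print(count)                                # expect 6
```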
Case 2. $n=3$. $U,V,W$ are known generic matrices and the unknowns $X,Y,Z$ satisfy $z^3I-Uz^2+Vz-W=(zI-X)(zI-Y)(zI-Z)$, that is, $U=X+Y+Z$, $V=XY+YZ+XZ$, $W=XYZ$; then $Z=U-X-Y$ and, putting $K=XY$ and $L=X+Y$, we obtain the system $V=K+L(U-L)$, $W=K(U-L)$ in the unknowns $K,L$. Since $Z=U-L$, from $V(U-L)=W+L(U-L)^2$ we deduce (1) $Z^3-UZ^2+VZ-W=0$. Moreover, applying the same reasoning to the transpose of the equality (*), we obtain (2) $X^3-X^2U+XV-W=0$.
Equations (1) and (2) are unilateral and, consequently, have $\binom{3^2}{3}=84$ simple solutions $Z$ or $X$ (cf. Theorem 1 in [1]). Let $Z$ be a solution of (1). Then $X+Y=U-Z$, $XY=V-UZ+Z^2$, and $Y$ is a solution of a unilateral equation of degree $2$. Generically, $U-Z$ and $V-UZ+Z^2$ are themselves generic (and independent), so there are $\binom{2\times 3}{3}=20$ solutions $Y$. Finally, we obtain $84\times 20=1680$ solutions $(X,Y,Z)$.
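A numerical check of the first count (again my own sketch, assuming numpy, and using the same standard companion/solvent construction rather than anything specific to [1]): equation (1) has its coefficients on the left of the powers of $Z$, so $Z$ is a right solvent of $\lambda^3 I-U\lambda^2+V\lambda-W$; the $9\times 9$ block companion matrix has $9$ eigenpairs and each choice of $3$ of them gives one solution, hence $\binom{9}{3}=84$.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n = 3

# Generic coefficients of z^3 I - U z^2 + V z - W.
U, V, W = [rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)) for _ in range(3)]

# Equation (1), Z^3 - U Z^2 + V Z - W = 0, is unilateral with the coefficients
# on the left, so Z is a right solvent of P(lam) = lam^3 I - U lam^2 + V lam - W.
# Linearize P by its 9x9 block companion matrix: each choice of n = 3 of its
# 3n = 9 eigenpairs gives one solvent, hence C(9, 3) = 84 solutions Z generically.
I, O = np.eye(n), np.zeros((n, n))
companion = np.block([[O, I, O], [O, O, I], [W, -V, U]])
lam, vecs = np.linalg.eig(companion)

count = 0
for idx in itertools.combinations(range(3 * n), n):
    cols = list(idx)
    Wtop = vecs[:n, cols]                  # top blocks of the chosen eigenvectors
    Z = Wtop @ np.diag(lam[cols]) @ np.linalg.inv(Wtop)
    if np.allclose(Z @ Z @ Z - U @ Z @ Z + V @ Z - W, 0, atol=1e-6):
        count += 1
print(count)                                # expect 84, up to floating-point tolerance
```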