Is there a ring homomorphism $M_2(\mathbb Z)\to \mathbb Z$?

Solution 1:

No, not at all. A homomorphism must take nilpotent elements to zero, since $\Bbb Z$ has no nilpotents other than $0$. The matrix that's all zero except for a $1$ in the upper right corner must thus be taken to $0$, and similarly for the matrix with a $1$ in the lower left corner. But their sum squares to the identity matrix, so the identity is also taken to $0$, and your homomorphism is zero. (There are much better abstract proofs.)
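Spelling this out, write $\phi$ for the homomorphism and $E_{12}$, $E_{21}$ for the two matrices just described. Then
$$\phi(E_{12})^2=\phi(E_{12}^{\,2})=\phi(0)=0\quad\Longrightarrow\quad\phi(E_{12})=0,$$
and likewise $\phi(E_{21})=0$, while
$$(E_{12}+E_{21})^2=I\quad\Longrightarrow\quad\phi(I)=\bigl(\phi(E_{12})+\phi(E_{21})\bigr)^2=0,$$
so $\phi(A)=\phi(AI)=\phi(A)\phi(I)=0$ for every $A$.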

Solution 2:

The kernel of a ring homomorphism is an ideal and the ideals of $M_n(\mathbb{Z})$ are of the form $M_n(k\mathbb{Z})$, for $k\ge0$. Since $$ M_n(\mathbb{Z})/M_n(k\mathbb{Z})\cong M_n(\mathbb{Z}/k\mathbb{Z}) $$ we see that the image of a homomorphism is either a finite subring of $\mathbb{Z}$ (if $k>0$) or the homomorphism is injective (if $k=0$).
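For concreteness, the isomorphism used here is induced by reducing each entry modulo $k$:
$$M_n(\mathbb{Z})\longrightarrow M_n(\mathbb{Z}/k\mathbb{Z}),\qquad (a_{ij})\longmapsto (a_{ij}\bmod k),$$
a surjective ring homomorphism whose kernel is exactly $M_n(k\mathbb{Z})$.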

The second possibility is ruled out, because $M_n(\mathbb{Z})$ is not commutative for $n>1$, while any subring of $\mathbb{Z}$ is. The first possibility only gives the zero homomorphism, because the only finite subring of $\mathbb{Z}$ is $\{0\}$: a subring containing some $m\neq0$ also contains the infinitely many distinct multiples $m,2m,3m,\dots$ (And the zero map is a homomorphism at all only if you don't require the identity to be mapped to the identity.)


The characterization of the ideals in the full matrix ring $M_n(R)$ over the (commutative) ring $R$ as being of the form $M_n(I)$, where $I$ is an ideal of $R$, is well known.

Once we accept it, we can generalize the statement. If $\varphi\colon M_n(R)\to R$ is a ring homomorphism, then $\ker\varphi=M_n(I)$ for some ideal $I$ of $R$. It's easy to see that $M_n(R)/M_n(I)\cong M_n(R/I)$, so we have an injective homomorphism $$ \hat{\varphi}\colon M_n(R/I)\to R. $$ If $R$ is commutative, this forces $n=1$ or $I=R$: a subring of a commutative ring is commutative, but $M_n(R/I)$ is not commutative for $n>1$ unless $R/I$ is the zero ring.
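To see the last claim concretely, write $E_{ij}$ for the matrix with $1$ in position $(i,j)$ and $0$ elsewhere (as in the solutions below). In $M_n(R/I)$ with $n>1$ we have
$$E_{11}E_{12}=E_{12},\qquad E_{12}E_{11}=0,$$
so commutativity would force $E_{12}=0$, that is, $1=0$ in $R/I$, which means $I=R$.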

So, if $R$ is a nonzero commutative ring and we require ring homomorphisms to carry the unity to the unity, we conclude that, for every $n>1$, there is no ring homomorphism $M_n(R)\to R$.

Solution 3:

Let $S$ be a commutative unital ring and $R:=\text{Mat}_{n\times n}(S)$ where $n\in\mathbb{Z}_{>1}$. Suppose that $\phi:R\to T$ is a (not necessarily unitary) $S$-algebra homomorphism from $R$ to an $S$-algebra $T$ without zero divisors. (In the given problem, $S:=\mathbb{Z}$, $n:=2$, and $T:=\mathbb{Z}$.)

The ring $R$ is generated as an $S$-module by the matrices $E_{i,j}$ for $i,j\in\{1,2,\ldots,n\}=:[n]$, where $E_{i,j}$ is the matrix with $1$ at the $(i,j)$-entry and $0$ everywhere else. As noted by Lubin, $E_{i,j}$ must be mapped to $0$ when $i\neq j$, as the matrix is nilpotent: $\phi(E_{i,j})^2=\phi(E_{i,j}^2)=\phi(0)=0$, which forces $\phi(E_{i,j})=0$ (this is where the assumption that $T$ has no zero divisors is used).

Let $u_i$ be the image of $E_{i,i}$ under $\phi$ for $i\in [n]$. Then, for a matrix $A=\sum\limits_{i,j\in[n]}\,a_{i,j}E_{i,j} \in R$, where $a_{i,j}\in S$ for all $i,j\in[n]$, we get $$\phi(A)=\sum_{i=1}^n\,a_{i,i}u_i\,.$$ As $\phi$ is multiplicative, we must have $$0=0\cdot \phi(A)=\phi(E_{i,j})\cdot \phi(A)=\phi\left(E_{i,j}\cdot A\right)=a_{j,i}u_i$$ whenever $i\neq j$. As $a_{i,j}$ for $i,j\in [n]$ are arbitrary, $u_i=0$ for all $i\in[n]$.
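To spell out the last equality, using $E_{i,j}E_{k,l}=\delta_{j,k}E_{i,l}$ we get
$$E_{i,j}\cdot A=\sum_{k,l\in[n]}a_{k,l}\,E_{i,j}E_{k,l}=\sum_{l\in[n]}a_{j,l}\,E_{i,l},$$
and applying $\phi$ kills every term with $l\neq i$, leaving exactly $a_{j,i}u_i$.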

Hence, the zero map is the only possible ring homomorphism from $R$ to $T$. If you require the homomorphism to be unitary (i.e., the multiplicative identity of $R$ must be sent to $1\in T$), then there are no such homomorphisms.

Solution 4:

Let $i$ be the image of the identity matrix. Then $i^2=i$ and so $i=0$ or $i=1$.

If $i=0$, then the map is the zero homomorphism, because $A=AI$, so the image of $A$ equals the image of $A$ times $i=0$.

If $i=1$, let $J=\pmatrix{0&-1\\1&0}$ and let $j$ be its image. Then $J^2=-I$ translates to $j^2=-1$, which cannot happen in $\mathbb Z$.
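For concreteness, the computation behind this is
$$J^2=\pmatrix{0&-1\\1&0}\pmatrix{0&-1\\1&0}=\pmatrix{-1&0\\0&-1}=-I,$$
and applying the map gives $j^2=-i=-1$, since a ring homomorphism respects additive inverses.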

Therefore, the only ring homomorphism $M_2(\mathbb Z)\to \mathbb Z$ is the zero map. If you require that a ring homomorphism must preserve the multiplicative identity, then there is none.

Solution 5:

I show here that there is no non-trivial ring homomorphism $\phi:M_2(\mathbb{Z})\to \mathbb{Z}$. To start, let us denote

\begin{align*} A=\begin{bmatrix}1 & 0\\ 0 & 0\end{bmatrix}& &B=\begin{bmatrix}0 & 1 \\ 0 & 0\end{bmatrix}\\ C=\begin{bmatrix}0 & 0\\ 1 & 0\end{bmatrix}& &D=\begin{bmatrix}0 & 0\\ 0 & 1\end{bmatrix} \end{align*}

Assuming that there is a non-trivial ring homomorphism $\phi:M_2(\mathbb{Z})\to\mathbb{Z}$, we would have that $\phi(I)=1$: indeed, $\phi(I)^2=\phi(I)$, so $\phi(I)\in\{0,1\}$, and $\phi(I)=0$ would make $\phi$ trivial, since $\phi(M)=\phi(MI)=\phi(M)\phi(I)$ for every $M$. Notice that we have the following sums and products:

\begin{align*} \begin{bmatrix}1 & 0\\ 0 & 0\end{bmatrix}\cdot \begin{bmatrix}0 & 0\\ 0 & 1\end{bmatrix}=\begin{bmatrix}0 & 0\\ 0 & 0\end{bmatrix} &\Rightarrow A\cdot D=0. &\text{(i)}\\ \begin{bmatrix}0 & 1 \\ 0 & 0\end{bmatrix}\cdot \begin{bmatrix}0 & 0\\ 1 & 0\end{bmatrix}=\begin{bmatrix}1 & 0 \\ 0 & 0\end{bmatrix} &\Rightarrow B\cdot C=A.&\text{(ii)}\\ \begin{bmatrix}0 & 0 \\ 1 & 0\end{bmatrix}\cdot \begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}=\begin{bmatrix}0 & 0 \\ 0 & 1\end{bmatrix} &\Rightarrow C\cdot B=D.&\text{(iii)}\\ \begin{bmatrix}1 & 0 \\ 0 & 0\end{bmatrix}+\begin{bmatrix}0 & 0\\ 0 & 1\end{bmatrix}=\begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix} &\Rightarrow A+D=I.&\text{(iv)} \end{align*}

Hence, from (i) we have that $\phi(A)\cdot \phi(D)=\phi(A\cdot D)=\phi(0)=0$, so, since $\mathbb{Z}$ has no zero divisors, one of the values $\phi(A),\phi(D)$ must be equal to $0$.

From (iv), we have that $\phi(A)+\phi(D)=\phi(A+D)=\phi(I)=1$, and we conclude that one of the values $\phi(A),\phi(D)$ is equal to $0$ and the other one is equal to $1$.

Without loss of generality, let us assume that $\phi(A)=0$ and $\phi(D)=1$ (the other case is symmetric, with the roles of (ii) and (iii) exchanged). Then, from (ii) we have $0=\phi(A)=\phi(B)\cdot \phi(C)$, so one of the values $\phi(B),\phi(C)$ is equal to zero.

This leads to $1=\phi(D)=\phi(C)\cdot \phi(B)=0$ using (iii), a contradiction. Hence there is no non-trivial ring homomorphism $M_2(\mathbb{Z})\to\mathbb{Z}$.