Solution 1:

You can try completing your matrix to a Markov chain: add one extra state, send each row's missing mass to it, and put a self loop at the additional state. The additional state is absorbing and, by irreducibility of your original matrix (together with the existence of a row summing to less than one), reachable from every other state, so the new Markov chain has a unique stationary distribution, which is concentrated on the additional state. It is also, up to scaling, the unique eigenvector whose eigenvalue has modulus at least $1$.

Now take a purported eigenvector of your original matrix with an eigenvalue of modulus $1$, and append a zero coordinate. The result is an eigenvector of the completed Markov chain with the same eigenvalue, contradicting the properties listed above.

In effect, you have a Markov chain with an invisible absorbing state, which is reachable from every other state. This ensures that in the long run the absorbing state is reached with probability one, so applying your matrix many times to any given vector drives it toward the zero vector. Hence all eigenvalues must be strictly less than $1$ in magnitude.
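For concreteness, here is a minimal numerical sketch of this completion (the example matrix and the helper name `augment` are my own illustrative choices, not part of the original argument):

```python
import numpy as np

def augment(M):
    """Complete a substochastic matrix M to a stochastic matrix by adding one
    absorbing state that receives each row's missing mass."""
    n = M.shape[0]
    deficit = 1.0 - M.sum(axis=1)            # probability mass each row is missing
    P = np.zeros((n + 1, n + 1))
    P[:n, :n] = M
    P[:n, n] = deficit                       # leak into the new state
    P[n, n] = 1.0                            # self loop: the new state is absorbing
    return P

M = np.array([[0.3, 0.7],
              [0.5, 0.4]])                   # irreducible; the second row sums to 0.9 < 1

P = augment(M)
print(P.sum(axis=1))                         # every row of P sums to 1
print(np.linalg.matrix_power(M, 200))        # powers of M decay toward the zero matrix
print(max(abs(np.linalg.eigvals(M))))        # spectral radius of M is strictly below 1
```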

Solution 2:

A bit late to the game, but I thought of this proof.

Suppose $A$ is an irreducible substochastic matrix and $\lambda$ is the Perron-Frobenius eigenvalue of $A$ (i.e. $\rho\left(A\right) = \lambda$), with $v$ the corresponding left eigenvector (so $vA = \lambda v$) normalized such that $\|v\|_{1} = 1$. By the Perron-Frobenius theorem for irreducible non-negative matrices, the entries of $v$ must be positive. Using this, we have the following.

\begin{align}
|\lambda| &= \|\lambda v\|_{1} \\
&= \|vA\|_{1} \\
&= \sum_j\sum_k v_jA_{jk}.
\end{align}

Let $\epsilon_j \doteq \frac{1}{N}\left(1 - \sum_{k=1}^N A_{jk}\right)$. If we add $\epsilon_j$ to each element of the $j$th row of $A$, that row's sum becomes one. Let $\boldsymbol\epsilon$ be the row vector containing the values $\{\epsilon_j\}$ and let $\mathbf{1}$ be the all-ones row vector. Then

\begin{align}
|\lambda| &= \sum_j \sum_k v_j\left(A_{jk} + \epsilon_j - \epsilon_j\right) \\
&= \sum_j \sum_k v_j\left(A_{jk} + \epsilon_j\right) - \sum_j \sum_k v_j \epsilon_j \\
&= \left\|v\left(A + \boldsymbol\epsilon^T\mathbf{1}\right)\right\|_1 - N \left(\boldsymbol\epsilon\cdot v\right).
\end{align}

We define $\hat A \doteq A + \boldsymbol\epsilon^T\mathbf{1}$ and note that it is a proper stochastic matrix. Since $v$ is positive and $\boldsymbol\epsilon$ is non-negative with at least one positive entry, we have $\boldsymbol\epsilon\cdot v > 0$. Moreover, because the rows of $\hat A$ sum to one and $v$ is non-negative with $\|v\|_1 = 1$, we have $\|v\hat A\|_1 = 1$. Therefore

\begin{align}
|\lambda| &= \left\| v \hat A \right\|_{1} - N \left(\boldsymbol\epsilon\cdot v\right) \\
&= 1 - N \left(\boldsymbol\epsilon\cdot v\right) \\
&< 1.
\end{align}
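As a quick sanity check, the identity $|\lambda| = 1 - N\left(\boldsymbol\epsilon\cdot v\right)$ can be verified numerically; the sketch below is my own, and the example matrix is an arbitrary irreducible substochastic choice, not taken from the answer:

```python
import numpy as np

A = np.array([[0.2, 0.7, 0.0],
              [0.3, 0.1, 0.5],
              [0.4, 0.0, 0.4]])              # irreducible; rows sum to 0.9, 0.9, 0.8
N = A.shape[0]

# Left Perron eigenpair: eigenvectors of A^T are left eigenvectors of A, and the
# Perron-Frobenius eigenvalue is the (real) eigenvalue with the largest real part.
eigvals, eigvecs = np.linalg.eig(A.T)
k = np.argmax(eigvals.real)
lam = eigvals[k].real
v = np.abs(eigvecs[:, k].real)
v /= v.sum()                                 # normalize so that ||v||_1 = 1

eps = (1.0 - A.sum(axis=1)) / N              # epsilon_j = (1 - j-th row sum) / N
print(lam, 1.0 - N * eps.dot(v))             # the two numbers agree
```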

Solution 3:

This is essentially Yuval's probabilistic argument with the probability removed. The goal is to show that powers of $M$ converge to zero.

For any state $i$ and integer $n\geq 0$, let $r^n_i=\sum_k M^n_{i k}$ denote the $i$th row sum of $M^n$. For $n=1$, we write $r_i$ rather than $r^1_i$. Since $M$ is substochastic we have $0\leq r^n_i\leq 1$.

Let $k^*$ be an index with $r_{k^*}<1$, and note that for $n\geq 1$
$$r^n_{k^*}=\sum_k M_{k^* k}\ r_k^{n-1}\leq \sum_k M_{k^* k} =r_{k^*}<1.$$

By irreducibility, for any $i$, there is an $m$ with $M_{i k^*}^m>0$. In fact, if $M$ is an $N\times N$ matrix and $i\neq k^*$, then we can assume $m<N$ (take the shortest path from $i$ to $k^*$ with positive "probability").
Since $M^m_{i k^*}>0$ and $r^{N-m}_{k^*}<1$ (note that $N-m\geq 1$), while $r^{N-m}_k\leq 1$ for every $k$, we have $$r^N_i=\sum_k M^m_{i k}\ r^{N-m}_k < \sum_k M^m_{i k} = r^m_i \leq 1.$$ (For $i=k^*$ itself, the displayed inequality above already gives $r^N_{k^*}<1$.)

That is, every row sum of $M^N$ is strictly less than one; equivalently, $\|M^N\|_\infty<1$. From this you can show that $M^{jN}\to 0$ as $j\to \infty$, which means that $M^N$ (and hence $M$) cannot have any eigenvalue of modulus $1$.
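A short numerical illustration of this argument (the example matrix is my own choice, not from the answer): compute $M^N$, check that its row sums are all strictly below one, and hence so is the spectral radius.

```python
import numpy as np

M = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.9, 0.0, 0.0]])              # irreducible; only the last row sum is below 1
N = M.shape[0]

MN = np.linalg.matrix_power(M, N)
print(MN.sum(axis=1))                        # every row sum of M^N is strictly below 1
print(max(abs(np.linalg.eigvals(M))))        # so rho(M) < 1 as well (here about 0.965)
```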

Solution 4:

Sorry for the almost decade-old necropost; I saw this question and my interest was piqued since this is something tangential to a topic I worked on. I apologize in advance for the self-plug.


Let $M$ be a substochastic matrix. Let's call $M$ irreducibly substochastic if it is irreducible and has at least one row that sums to strictly less than one (this is exactly the condition stated in your post). The other answers to this post establish
\begin{equation} M \text{ is irreducibly substochastic} \implies \rho(M) < 1. \end{equation}
We can actually establish a stronger "iff" by relaxing the requirement on the left hand side:
\begin{equation} \label{IFF} M \text{ is weakly chained substochastic} \iff \rho(M) < 1. \tag{IFF} \end{equation}

The definition of weakly chained substochastic is given below.

Definition. A substochastic matrix $M$ is weakly chained substochastic if and only if for each row $i$ at least one of the following is true:

  1. row $i$ sums to less than one (i.e., $\sum_j M_{ij} < 1$) or
  2. there is a walk $i \rightarrow i_1 \rightarrow \cdots \rightarrow i_k$ in the adjacency graph of $M$ s.t. row $i_k$ sums to less than one.

Any irreducibly substochastic matrix is trivially a weakly chained substochastic matrix since it has at least one row that sums to less than one (call it $j$) and for any row $i$ we can, by irreducibility, construct a walk from $i$ to $j$.
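If you want to test the weakly chained condition in practice, one possible approach is a reverse reachability search from the rows that sum to less than one; the sketch below is my own and is not the algorithm from the paper cited next:

```python
import numpy as np

def is_weakly_chained_substochastic(M, tol=1e-12):
    """True iff every row of the substochastic matrix M either sums to less than
    one or has a walk in the adjacency graph of M ending at such a row."""
    n = M.shape[0]
    deficient = M.sum(axis=1) < 1.0 - tol             # rows summing to strictly less than one
    reaches = deficient.copy()                        # rows known to reach a deficient row
    frontier = list(np.flatnonzero(deficient))
    while frontier:                                   # graph search backwards along edges
        j = frontier.pop()
        for i in np.flatnonzero(M[:, j] > tol):       # edge i -> j in the adjacency graph
            if not reaches[i]:
                reaches[i] = True
                frontier.append(i)
    return bool(reaches.all())
```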

You can find a proof of \eqref{IFF} in Corollary 2.6 in Azimzadeh, Parsiad. "A fast and stable test to check if a weakly diagonally dominant matrix is a nonsingular M-matrix." Mathematics of Computation 88.316 (2019): 783-800. [arXiv].


Example. The matrix $$ M = \begin{pmatrix}0.1 & 0.9 & 0 \\ 0 & 0.2 & 0.8 \\ 0 & 0 & 0.3 \end{pmatrix} $$ is substochastic. It is, however, reducible. Note that $1 \rightarrow 2 \rightarrow 3$ and $2 \rightarrow 3$ are walks in the adjacency graph of $M$. Moreover, row $3$ sums to less than one. As such, we can apply the above theorem to conclude that $\rho(M) < 1$.

Example. On the other hand, the adjacency graph of the matrix $$ M = \begin{pmatrix}0.1 & 0.9 & 0 \\ 0.8 & 0.2 & 0 \\ 0 & 0 & 0.3 \end{pmatrix} $$ has two strongly connected components, $\{1, 2\}$ and $\{3\}$. The first component contains no row that sums to less than one, and no walk leaves it (rows $1$ and $2$ are zero in column $3$), so $M$ is not weakly chained substochastic; by \eqref{IFF} and substochasticity, $\rho(M) = 1$.
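For what it's worth, here is a quick numerical check of both examples (my own verification, not part of the original answer); the `is_weakly_chained_substochastic` sketch above would report `True` for the first matrix and `False` for the second:

```python
import numpy as np

M1 = np.array([[0.1, 0.9, 0.0],
               [0.0, 0.2, 0.8],
               [0.0, 0.0, 0.3]])             # weakly chained substochastic
M2 = np.array([[0.1, 0.9, 0.0],
               [0.8, 0.2, 0.0],
               [0.0, 0.0, 0.3]])             # not weakly chained substochastic

print(max(abs(np.linalg.eigvals(M1))))       # 0.3, so rho(M1) < 1
print(max(abs(np.linalg.eigvals(M2))))       # 1.0, so rho(M2) = 1
```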