Solution 1:

The best results have been obtained for random matrices with Normally distributed entries, but some of them (like Wigner's semicircle law) extend to uniformly distributed entries. (Independence of the entries is crucial.) Wigner's law applies only to symmetric random matrices. Girko's circular law holds without the symmetry condition, but as far as I know requires Normally distributed entries. It says that for large $n$ the eigenvalues are approximately uniformly distributed on a disk (of radius $\sqrt{n}$ for unit-variance entries). For smaller $n$, before these asymptotics are reached, there is an excess of real eigenvalues extending beyond the disk. At any rate, these asymptotics immediately give you the distribution of the eigenvalues of positive integral powers of such matrices, especially since the probability that all eigenvalues are distinct (and therefore that the matrix is diagonalizable over $\mathbb{C}$) is one. For example, the largest eigenvalue of $M^k$ will be approximately $n^{k/2} \left( 1 - \frac{1}{2} (3 \pi / (2 n) ) ^ {2/3} \right) ^ k$.
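A quick numerical illustration of the circular law, as a hedged sketch: this uses NumPy, and the matrix size, the seed, and the standard-normal entries are my choices, not part of the question. For i.i.d. unit-variance entries the eigenvalues should fill a disk of radius roughly $\sqrt{n}$, and uniformity on the disk predicts a median modulus near $\sqrt{n/2}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# i.i.d. standard normal entries, no symmetry imposed.
M = rng.standard_normal((n, n))

eig = np.linalg.eigvals(M)
radius = np.abs(eig)

# Circular law: eigenvalues roughly uniform on the disk of radius sqrt(n).
print(radius.max() / np.sqrt(n))        # close to 1 for large n
# Uniformity gives P(|lambda| <= r*sqrt(n)) ~ r^2, so the median
# modulus should be near sqrt(n)/sqrt(2) ~ 0.707*sqrt(n).
print(np.median(radius) / np.sqrt(n))
```

Plotting `eig.real` against `eig.imag` makes the disk (and the slight excess of real eigenvalues at moderate $n$) visible.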

Solution 2:

I don't have enough reputation to leave a comment, so here goes....

As others have observed, the growth rate of $M^k$ is determined by the largest eigenvalue of $M$. I just want to note that for a nonnegative matrix, the largest eigenvalue always lies between the smallest and the largest column sum.

That, in principle, could give you upper and lower bounds, by estimating how large or small the column sums of the random matrix you described typically get.
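The column-sum bound is easy to check numerically. This is just an illustrative sketch (NumPy, an arbitrary seed, and uniform $[0,1)$ entries are my assumptions); the inequality itself is the standard Perron–Frobenius-type bound for nonnegative matrices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Nonnegative random matrix: entries uniform on [0, 1).
M = rng.random((n, n))

col_sums = M.sum(axis=0)
# Spectral radius (largest eigenvalue modulus; real and positive here
# by Perron-Frobenius, since all entries are positive).
lam_max = np.abs(np.linalg.eigvals(M)).max()

# Smallest column sum <= lam_max <= largest column sum.
print(col_sums.min() <= lam_max <= col_sums.max())  # True
```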

Solution 3:

The following analysis treats only the particular case of symmetric matrices $M$, which can be diagonalized by an orthogonal transformation. The key observation is that the diagonalizing matrix $P$ and the eigenvalue matrix $D$ are statistically independent, with $P$ distributed according to Haar measure on $O(n)$. It follows that every matrix element of $M^k$ is a scalar product of two Haar-distributed $n$-vectors, weighted by the $k$-th powers of the eigenvalues: $(M^k)_{ij} = \sum_m P_{im} D_{mm}^k P_{jm}$.
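The spectral sum above can be verified directly. A minimal sketch, assuming NumPy and a Gaussian symmetric matrix of my choosing: `np.linalg.eigh` returns the eigenvalues and an orthogonal matrix of eigenvectors, and scaling the columns of $P$ by $\lambda_m^k$ reproduces $M^k$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
A = rng.standard_normal((n, n))
M = (A + A.T) / 2                  # symmetric, so M = P D P^T with P orthogonal

eigvals, P = np.linalg.eigh(M)     # columns of P are orthonormal eigenvectors
k = 5

# (M^k)_{ij} = sum_m P_{im} * eigvals[m]**k * P_{jm}
Mk_spectral = (P * eigvals**k) @ P.T
Mk_direct = np.linalg.matrix_power(M, k)
print(np.allclose(Mk_spectral, Mk_direct))  # True
```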

At large values of $k$, the maximal eigenvalue dominates all the others, while the contribution from the diagonalizing matrix stays fixed. Thus the exponential growth rate of every matrix element is the natural logarithm of the maximal eigenvalue.

Moreover, the contribution of the maximal eigenvalue to the $(i,j)$ element is $P_{im} D_{mm}^k P_{jm}$, where $D_{mm}$ is the maximal eigenvalue. Thus the element for which the product $P_{im} P_{jm}$ is maximal dominates all elements of the power matrix. This element will most likely lie on the diagonal: a diagonal element is multiplied by the square of a Haar-distributed component, which has non-zero mean, whereas an off-diagonal element is multiplied by the product of two different components, which has zero mean.
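Both claims can be illustrated numerically. The sketch below (NumPy; the size, seed, and power $k$ are arbitrary choices of mine) normalizes $M$ by its spectral radius so that $M^k$ stays in floating-point range, compares $M^k$ with the rank-one term $P_{im} D_{mm}^k P_{jm}$, and checks where the largest entry sits.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
A = rng.standard_normal((n, n))
M = (A + A.T) / 2                      # symmetric random matrix

eigvals, P = np.linalg.eigh(M)
m = np.argmax(np.abs(eigvals))         # eigenvalue largest in modulus
v = P[:, m]                            # its (Haar-distributed) eigenvector

# Normalize by the spectral radius so powers neither overflow nor vanish.
Mn = M / np.abs(eigvals[m])
k = 2000                               # even k: the sign of the top eigenvalue drops out
Mk = np.linalg.matrix_power(Mn, k)

# Contribution of the maximal eigenvalue alone: P_{im} D_{mm}^k P_{jm}.
rank1 = (eigvals[m] / np.abs(eigvals[m]))**k * np.outer(v, v)
rel_err = np.abs(Mk - rank1).max() / np.abs(Mk).max()  # small for large k

# The largest entry sits where |v_i * v_j| is maximal, i.e. on the diagonal.
i, j = np.unravel_index(np.argmax(np.abs(Mk)), Mk.shape)
print(i == j)  # True
```

The diagonal location is immediate from the rank-one form: $|v_i v_j| \le (\max_i |v_i|)^2$, with equality exactly at the diagonal position $i = j = \arg\max_i |v_i|$.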

Solution 4:

Powers of a matrix are most easily computed by first diagonalizing it. If $M = PDP^{-1}$ with $D$ diagonal, then $$M^k = PD^{k}P^{-1}.$$ The entries of $D$ are the eigenvalues of $M$, so each entry of $M^k$ is a linear combination of $k$-th powers of the eigenvalues, and the entries grow exponentially at a rate given by the logarithm of the largest eigenvalue (in absolute value) of $M$.
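A small concrete instance of $M^k = PD^kP^{-1}$, as a sketch in NumPy (the example matrix is mine, chosen to be diagonalizable with eigenvalues $3$ and $1$):

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # eigenvalues 3 and 1

eigvals, P = np.linalg.eig(M)         # M = P @ diag(eigvals) @ inv(P)
k = 8
Mk = P @ np.diag(eigvals**k) @ np.linalg.inv(P)

# Same result as repeated multiplication.
print(np.allclose(Mk, np.linalg.matrix_power(M, k)))  # True

# Each entry grows like lambda_max^k, so log|entry| / k approaches
# log(lambda_max) = log(3) ~ 1.0986 as k grows.
print(np.log(Mk[0, 0]) / k)
```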

So to point 2: no single entry should dominate the others significantly.

This is only a partial answer... maybe you can find something about the distribution of eigenvalues of random matrices. =)