When does the game end?
A recent question has inspired the following one.
$M$ people ($M\ge2$) play the following game. Each round, every person admitted to the round rolls a $K$-sided die ($K\ge2$), the sides being marked with numbers from $1$ to $K$. If only a single person has the highest score, the game ends. Otherwise, the people with the highest score are admitted to the next round and the game continues. What is the probability that the game ends in exactly $n$ rounds?
Neat problem! Thanks for posting.
The first thing to notice is that at any point in the game, if you have $a$ players remaining, then the probability of ending up with $b$ players in the next round is independent of which round you're in. Hence we can talk about $P(a,b)$, the probability of transitioning from $a$ players to $b$ players in one round.
Playing a round with $a > 1$ players, there are $K^a$ possible rolls. To count the ways we end up with $b \geq 1$ players, first note that there are $a \choose b$ ways to pick which $b$ players continue. Then, for each possible value $k$ of the winning roll, the $b$ continuing players must all roll $k$, and each of the remaining $a-b$ players must roll between $1$ and $k-1$, giving $(k-1)^{a-b}$ possibilities. Summing over $k$, we can write (with the convention $0^0 = 1$) $$P(a,b) = \frac{{a\choose b}\displaystyle\sum_{k=1}^K (k-1)^{a-b}}{K^a}$$
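As a quick sanity check, here is a minimal Python sketch of this formula (the function name `P` and the exact-arithmetic `Fraction` type are my choices, not part of the answer):

```python
from fractions import Fraction
from math import comb

def P(a, b, K):
    """Probability that a round with a players leaves exactly b players,
    i.e. exactly b players tie for the highest roll of a K-sided die."""
    if b > a or b < 1:
        # Guard: a negative exponent below would raise ZeroDivisionError.
        return Fraction(0)
    # Python's 0**0 == 1, matching the 0^0 = 1 convention in the text.
    ways = comb(a, b) * sum((k - 1) ** (a - b) for k in range(1, K + 1))
    return Fraction(ways, K ** a)
```

For example, `P(2, 1, 2)` evaluates to `Fraction(1, 2)`: with two players rolling a two-sided die, the game ends immediately half the time.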
Once you have these probabilities, the game itself is a Markov chain. Your states are the integers from $1$ to $M$, denoting the number of players remaining. In any step, the probability of transitioning from state $a$ to state $b$ with $a \ge b \ge 1$ and $a \ne 1$ is exactly $P(a,b)$. State 1 is absorbing: it transitions to itself with probability 1. All other transition probabilities are 0. So your transition matrix looks like $$T=\left(\begin{matrix}P(M,M) & P(M,M-1) & P(M,M-2) & \cdots & P(M,1)\\ 0 & P(M-1,M-1) & P(M-1,M-2) & \cdots & P(M-1,1)\\ &&\vdots\\0 & 0 & \cdots & P(2,2) & P(2,1)\\ 0 & 0 & \cdots & 0 & 1\end{matrix}\right)$$
The entry in the upper right corner of $T^n$ is the probability of being in state 1 (i.e. the game has ended) after $n$ rounds. But since state 1 is absorbing, this is the probability that the game ends in round $n$ or before. To get the probability of ending in exactly $n$ rounds, subtract the upper right entry of $T^{n-1}$ from the upper right entry of $T^n$.
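Here is a hedged sketch of this computation, reusing `P` from above. I index states by player count in ascending order, so the entry of interest sits in the bottom-left corner, $(T^n)_{M,1}$, rather than the upper right one:

```python
def transition_matrix(M, K):
    """T[a-1][b-1] = P(a, b). The formula already gives P(1, 1) = 1,
    so state 1 is absorbing without any special-casing."""
    return [[P(a, b, K) for b in range(1, M + 1)] for a in range(1, M + 1)]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(T, n):
    m = len(T)
    R = [[Fraction(int(i == j)) for j in range(m)] for i in range(m)]  # T^0 = I
    for _ in range(n):
        R = mat_mul(R, T)
    return R

def prob_end_in_round(n, M, K):
    """(T^n)_{M,1} - (T^{n-1})_{M,1}: probability the game ends in round n."""
    T = transition_matrix(M, K)
    return mat_pow(T, n)[M - 1][0] - mat_pow(T, n - 1)[M - 1][0]
```

For instance, `prob_end_in_round(1, 2, 2)` gives `1/2`, and summing over many rounds approaches 1, as it should for an absorbing chain.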
The aim of this answer is to present a closed-form solution to the problem. It builds on the idea described in Jeremy Dover's answer. The hard and probably boring part of the proof will be given in the appendix.
For convenience we rewrite the $M\times M$ matrix $T(M,K)$ introduced by Jeremy Dover as: $$ T_{ij}=\frac 1{K^i}\binom ij\sum_{k=0}^{K-1}k^{i-j}. $$ In terms of transition states, the process starts at state $M$ and is absorbed at state $1$, so the quantity of interest is $T^n_{M1}$.
As the matrix $T$ is triangular, its eigenvalues are simply its diagonal elements $\lambda_m=T_{mm}=\frac{1}{K^{m-1}},\; m=1\dots M.$ As all eigenvalues are distinct, the matrix is diagonalizable by a similarity transformation: $$ \text{diag}(\lambda_1,\lambda_2,\dots,\lambda_M)=X^{-1}TX. $$ Knowing the transformation matrix $X$, the powers of the matrix $T$ can be computed as: $$ T^n=X\text{diag}(\lambda_1^n,\lambda_2^n,\dots,\lambda_M^n)X^{-1}.\tag1 $$
Fortunately, the structures of matrices $X$ and $X^{-1}$ are rather simple (see Appendix): $$ X_{ij}=(1-\delta_{i,j-1})\binom i{j-1};\quad X^{-1}_{ij}=\frac 1i\binom ij b_{i-j}, $$ where $b_k$ are the Bernoulli numbers ($b_1=-\frac12$).
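These formulas are easy to check numerically. The sketch below (with a helper `bernoulli_minus` of my own, computed from the standard recurrence $\sum_{j=0}^{m}\binom{m+1}{j}b_j=0$) verifies that the stated $X^{-1}$ is indeed the inverse of the stated $X$ for a small $M$:

```python
def bernoulli_minus(m):
    """Bernoulli numbers b_m in the b_1 = -1/2 convention, via the
    recurrence sum_{j=0}^{m} C(m+1, j) b_j = 0 with b_0 = 1."""
    b = [Fraction(1)]
    for i in range(1, m + 1):
        b.append(-sum(comb(i + 1, j) * b[j] for j in range(i)) / (i + 1))
    return b[m]

M = 6
# X_ij = (1 - delta_{i,j-1}) C(i, j-1): the delta kills the superdiagonal.
X = [[comb(i, j - 1) if j != i + 1 else 0 for j in range(1, M + 1)]
     for i in range(1, M + 1)]
# X^{-1}_ij = C(i, j) b_{i-j} / i on and below the diagonal.
Xinv = [[Fraction(comb(i, j), i) * bernoulli_minus(i - j) if i >= j
         else Fraction(0)
         for j in range(1, M + 1)] for i in range(1, M + 1)]
prod = mat_mul(X, Xinv)
assert all(prod[i][j] == (1 if i == j else 0)
           for i in range(M) for j in range(M))
```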
Plugging the values into (1) one obtains: $$ T^n_{M1}(M,K)=\sum_{m=1}^M X_{Mm}\lambda_m^n X^{-1}_{m1} =\sum_{m=1}^M\binom M{m-1}\frac{b_{m-1}}{K^{(m-1)n}} =\sum_{m=0}^{M-1}\binom M{m}\frac{b_{m}}{K^{nm}}.\tag2 $$
Finally, the probability for the game to end in the $n$-th round reads: $$ \boxed{P_n(M,K)=T^n_{M1}-T^{n-1}_{M1}=\sum_{m=0}^{M-1}\binom M{m}\frac{1-K^m}{K^{nm}}b_{m}.} $$
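As a consistency check, the boxed formula can be compared against the brute-force Markov-chain computation from Jeremy Dover's answer (reusing `prob_end_in_round` and `bernoulli_minus` from the sketches above):

```python
def closed_form(n, M, K):
    """The boxed P_n(M, K), with the b_1 = -1/2 convention."""
    return sum(comb(M, m) * Fraction(1 - K ** m, K ** (n * m)) * bernoulli_minus(m)
               for m in range(M))

# The two computations agree exactly for a few small parameter choices:
for M_, K_, n_ in [(2, 2, 1), (3, 6, 2), (5, 4, 3)]:
    assert closed_form(n_, M_, K_) == prob_end_in_round(n_, M_, K_)
```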
Note added:
It may be convenient to rewrite the expression (2) as: $$ T^n_{M1}(M,K)={\cal B}_M\left(\frac1{K^n}\right), $$ where the function ${\cal B}_M(x)$ is defined as: $$ {\cal B}_M(x)=\sum_{m=0}^{M-1}\binom M{m}b_{m}x^m\equiv x^M\left[B_M\left(\frac1x\right)-b_M\right], $$ with $B_M(x)$ being the Bernoulli polynomial.
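This identity is easy to confirm symbolically, for instance with SymPy; in the sketch below $b_m$ is written as $B_m(0)$ so that the check does not depend on which scalar Bernoulli-number convention the installed SymPy version uses:

```python
from sympy import symbols, bernoulli, binomial, expand, simplify

# Symbolic check of cal{B}_M(x) = x^M [B_M(1/x) - b_M] for one value of M.
x = symbols('x')
M = 6
calB = sum(binomial(M, m) * bernoulli(m, 0) * x ** m for m in range(M))
rhs = expand(x ** M * (bernoulli(M, 1 / x) - bernoulli(M, 0)))
assert simplify(calB - rhs) == 0
```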
Appendix
Proposition 1. The following identity holds for any $p,n\in\mathbb Z_+$: $$ \sum\limits_{k=0}^{p-1} {\binom {p} k} \sum\limits_{j=0}^{n-1} j^k=n^p.\tag{*} $$
Proof: $$\begin{align} &(j+1)^{p}=\sum\limits_{k=0}^{p} {\binom {p} k} j^k\\ \implies& (j+1)^{p} - j^{p} =\sum\limits_{k=0}^{p-1} {\binom {p} k} j^k\\ \implies&\sum\limits_{j=0}^{n-1} \left[(j+1)^{p} - j^{p}\right] = \sum\limits_{j=0}^{n-1} \sum\limits_{k=0}^{p-1} {\binom {p} k} j^k\\ \implies& n^{p}= \sum\limits_{k=0}^{p-1} {\binom {p} k} \sum\limits_{j=0}^{n-1} j^k. \end{align} $$ Introducing the notation $S^{(n)}_k=\sum\limits_{j=0}^{n-1} j^k$, the set of equations $(*)$ with the exponent of $n$ varying from $1$ to $p$ is equivalent to the matrix equation: $$ \left(\begin{array}l n\vphantom{\binom00}\\ n^2\vphantom{\binom00}\\ n^3\vphantom{\binom00}\\ \vdots\\ n^p\vphantom{\binom00} \end{array}\right)= \begin{pmatrix} \binom10&0&0&\cdots& 0\\ \binom20&\binom21&0&\cdots& 0\\ \binom30&\binom31&\binom32&\cdots& 0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ \binom p0&\binom p1&\binom p2&\cdots& \binom p{p-1}\\ \end{pmatrix} \left(\begin{array}l S^{(n)}_0\\ S^{(n)}_1\\ S^{(n)}_2\\ \vdots\\ S^{(n)}_{p-1} \end{array}\right). $$
Denoting the matrix of binomial coefficients by $X$, one observes that its determinant is non-zero, so the equation can be inverted to obtain: $$ \left(\begin{array}l S^{(n)}_0\\ S^{(n)}_1\\ S^{(n)}_2\\ \vdots\\ S^{(n)}_{p-1} \end{array}\right)= \begin{pmatrix} X^{-1}_{11}&0&0&\cdots& 0\\ X^{-1}_{21}&X^{-1}_{22}&0&\cdots& 0\\ X^{-1}_{31}&X^{-1}_{32}&X^{-1}_{33}&\cdots& 0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ X^{-1}_{p1}&X^{-1}_{p2}&X^{-1}_{p3}&\cdots& X^{-1}_{pp}\\ \end{pmatrix} \left(\begin{array}l n\vphantom{\binom00}\\ n^2\vphantom{\binom00}\\ n^3\vphantom{\binom00}\\ \vdots\\ n^p\vphantom{\binom00} \end{array}\right). $$
Comparing this expression with the well-known Faulhaber formula: $$ S^{(n)}_{p-1}=\frac1p\sum_{j=0}^{p-1}\binom pj b_j n^{p-j}= \frac1p\sum_{j'=1}^{p}\binom p{j'} b_{p-j'} n^{j'},\tag{**} $$ one obtains:
Proposition 2. The matrix elements of $X^{-1}$ are: $$ X^{-1}_{ij}=\frac 1i\binom ij b_{i-j}, $$ where $b_k\equiv b^-_k$ are the Bernoulli numbers taken with the convention $b_1=-\frac12$. Observe that the Wikipedia article on Faulhaber's formula gives a formula similar to (**) for $S^{(n+1)}_p$ in terms of $b^+$ without mentioning this directly.
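For completeness, a quick numeric check of (**) in the $b^-$ convention (again reusing `bernoulli_minus` from above):

```python
# Faulhaber's formula (**): S^{(n)}_{p-1} = (1/p) sum_j C(p, j) b_j n^{p-j}.
for p in range(1, 8):
    for n in range(1, 8):
        S = sum(j ** (p - 1) for j in range(n))   # S^{(n)}_{p-1}, 0**0 == 1
        rhs = sum(comb(p, j) * bernoulli_minus(j) * n ** (p - j)
                  for j in range(p)) / p
        assert S == rhs
```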
Proposition 2 can also be proved directly.
Proposition 3. The columns of the matrix $X$ are the eigenvectors of the matrix $T$.
Proof. Acting with the matrix $T$ on the $m$-th column of the matrix $X$: $$ v^{(m)}_j=\begin{cases} 0,& 1\le j<m;\\ \binom{j}{m-1},& m\le j\le M, \end{cases} $$ one obtains: $$\begin{align} \sum_{j=1}^M T_{ij} v^{(m)}_j &=\sum_{j=m}^M\frac 1{K^i}\binom ij\binom{j}{m-1}\sum_{k=0}^{K-1}k^{i-j}\\ &=\frac 1{K^i}\binom i{m-1}\sum_{j=m}^i\binom{i+1-m}{i-j}\sum_{k=0}^{K-1}k^{i-j}\\ &=\frac 1{K^i}\binom i{m-1}\sum_{j'=0}^{i-m}\binom{i+1-m}{j'}\sum_{k=0}^{K-1}k^{j'}\\ &\stackrel{(*)}=\frac 1{K^i}\binom i{m-1}K^{i+1-m}\\ &=\frac 1{K^{m-1}}\binom i{m-1}=\lambda_m v^{(m)}_i. \end{align} $$
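Proposition 3 can likewise be verified numerically for small parameters, reusing `P` and `X` from the sketches above (here $X$ is the $6\times6$ matrix built earlier):

```python
# Check T v^(m) = lambda_m v^(m) for every column v^(m) of X.
M, K = 6, 3
T = [[P(a, b, K) for b in range(1, M + 1)] for a in range(1, M + 1)]
for m in range(1, M + 1):
    lam = Fraction(1, K ** (m - 1))
    v = [X[j][m - 1] for j in range(M)]   # m-th column of X
    Tv = [sum(T[i][j] * v[j] for j in range(M)) for i in range(M)]
    assert Tv == [lam * vj for vj in v]
```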