How does Elo rating scale with games played?

Solution 1:

The standard formula for Elo rating change is

$$ \Delta R = K(S-E) $$

(see e.g. Wikipedia), where $\Delta R$ is the change in the player’s rating, $S$ is the player’s score in the game ($0$, $\frac12$ or $1$), $E$ is the expected score (based on the current ratings of the players), and $K$ is a factor for which there are many different conventions (again, see e.g. Wikipedia). Since you didn’t specify a $K$ factor, I’ll leave it variable.

The expected score based on the player’s rating $R$ and the opponent’s rating $O$ is

$$ E=\frac1{1+10^{(O-R)/400}}\;. $$
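
For concreteness, here is a minimal sketch of these two formulas in Python (the function names and the default $K=32$ are just illustrative choices, not something fixed by your setup):

```python
def expected_score(R, O):
    """Expected score of a player rated R against an opponent rated O."""
    return 1 / (1 + 10 ** ((O - R) / 400))

def rating_change(R, O, S, K=32):
    """Elo update Delta R = K * (S - E) for an actual score S in {0, 0.5, 1}."""
    return K * (S - expected_score(R, O))
```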

In your situation with only two players starting from rating $0$, we will always have $O=-R$ (the two rating changes after each game are equal in magnitude and opposite in sign, so the ratings stay symmetric about $0$), so this becomes

$$ E=\frac1{1+10^{-R/200}}\;. $$

If we focus on the player who always loses, their score is always $S=0$, so we have the difference equation

$$ \Delta R=-\frac K{1+10^{-R/200}}\;. $$
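
To get a feel for this difference equation, one can simply iterate it. A small sketch of the two-player scenario (with the same illustrative $K=32$ as above), tracking the player who loses every game:

```python
def simulate(games, K=32):
    """Iterate Delta R = -K / (1 + 10**(-R/200)) for the player who always loses."""
    R = 0.0
    history = []
    for _ in range(games):
        R -= K / (1 + 10 ** (-R / 200))
        history.append(R)
    return history

ratings = simulate(10_000)
# The rating drops quickly at first, then ever more slowly.
print(ratings[9], ratings[99], ratings[999], ratings[9999])
```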

Approximating this by a differential equation yields

$$ R'(t)=-\frac K{1+10^{-R(t)/200}}\;. $$
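
How good this continuous approximation is can be checked numerically, e.g. with SciPy (using the `ratings` list from the iteration sketched above for comparison):

```python
from scipy.integrate import solve_ivp

K = 32  # same illustrative K-factor as above

def rhs(t, R):
    """Right-hand side of R'(t) = -K / (1 + 10**(-R/200))."""
    return -K / (1 + 10 ** (-R / 200))

sol = solve_ivp(rhs, (0, 10_000), [0.0], t_eval=[10, 100, 1_000, 10_000])
print(sol.y[0])  # compare with ratings[9], ratings[99], ratings[999], ratings[9999]
```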

Wolfram|Alpha yields a complicated and unenlightening closed form for this. More insight is gained if we neglect the term $1$ in the denominator, which is justified for large negative $R$, where $10^{-R/200}\gg1$; this yields

$$ R'(t)=-K\cdot10^{R(t)/200}\;. $$
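
This approximate equation is separable: writing it as $10^{-R/200}\,\mathrm dR=-K\,\mathrm dt$ and integrating both sides gives

$$ -\frac{200}{\ln10}\,10^{-R(t)/200}=-Kt+\text{const}\;, $$

which rearranges to the form below.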

The solution is

$$ R(t)=-200\log_{10}\left(\frac{K\ln10}{200}\,t+c\right)\;, $$

where $c$ is a constant of integration, so the magnitude of the players’ ratings grows only logarithmically with the number of games played.
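
As a quick sanity check against the iteration sketched earlier: if $R(t)\approx-200\log_{10}\!\left(\frac{K\ln10}{200}\,t+c\right)$, then $R(10t)-R(t)$ should approach $-200\log_{10}10=-200$ for large $t$:

```python
# If R(t) ~ -200*log10(a*t + c), then R(10*t) - R(t) -> -200 for large t.
for t in (100, 1_000):
    print(ratings[10 * t - 1] - ratings[t - 1])  # should be close to -200
```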