Calculating the Average Number of Games Required to Reach a Theoretical True Elo Skill Rating from a given Initial Elo Rating


The USCF uses the following formula for Elo rating adjustments:

$$R'=R_0+K(S-E)$$

$$E=\frac{1}{1+10^{(R_n-R_0)/400}}$$

Where $R'$ is the new rating

$R_0$ is the initial rating

$K$ is a pre-determined constant that affects how quickly ratings change

$S$ is the score (so 1 for a win, .5 for a draw, and 0 for a loss)

$E$ is the expected score

And $R_n$ is the opponent's rating


For the sake of this question, assume that $K = 50$, that draws never occur, and that every opponent's rating equals the player's current rating. Under those assumptions $E = 0.5$ for every game, so every win results in exactly +25 points and every loss in -25 points.
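To see where the ±25 comes from, here is a minimal sketch of the update rule in Python (the function name `new_rating` is mine, not USCF terminology):

```python
K = 50  # constant assumed by the question

def new_rating(r0, score, expected):
    # USCF update rule: R' = R0 + K * (S - E)
    return r0 + K * (score - expected)

# Evenly matched opponents give E = 0.5, so each game moves the
# rating by exactly K/2 = 25 points:
print(new_rating(1375, 1, 0.5))  # → 1400.0 (win)
print(new_rating(1375, 0, 0.5))  # → 1350.0 (loss)
```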

However, the player's actual probability of winning is not 50%; it is $E=\frac{1}{1+10^{(R_n-R_t)/400}}$, where $R_t$ is the player's true skill rating. Since $E > 0.5$ whenever the player is rated below $R_t$, the player will tend to gain rating over time.

For example, suppose $R_g = 1375$ and $R_t=1400$. Since $R_n$ (the opponent's rating) equals the player's current rating, for game 1 we have $R_n = 1375$, so the probability that game 1 is a win is $$E=\frac{1}{1+10^{(R_n-R_t)/400}}=\frac{1}{1+10^{(1375-1400)/400}}\approx0.5359$$
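That number can be checked directly; a small Python sketch of the expected-score formula (helper name `expected_score` is mine):

```python
def expected_score(r_true, r_opponent):
    # E = 1 / (1 + 10^((R_n - R_t) / 400)) from the question
    return 1 / (1 + 10 ** ((r_opponent - r_true) / 400))

# Worked example: true strength 1400 against a 1375-rated opponent
print(round(expected_score(1400, 1375), 4))  # → 0.5359
```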

Given $R_t$, the player's true skill level and $R_g$ the player's initial rating, calculate the average number of games it will take the player to reach their true skill rating.


To calculate the average number of games, the following formula can be used:

$$G_a(R_t,R_g)=\sum_{n=1}^\infty nP_t(n)$$

Where $G_a$ is the average number of games required to reach $R_t$ from $R_g$, $n$ is the number of games played, and $P_t(n)$ is the probability that the player first reaches $R_t$ on game $n$ (it must be the probability of first arrival, so that each possible history is counted exactly once).

Going back to the example used before:

$$G_a(1400,1375)=1*P_t(1)+2*P_t(2)+\sum_{n=3}^\infty nP_t(n)$$

After game 1 there is a 53.59% chance that the player has reached $R_t$ in 1 game, and a 46.41% chance that the player is now 2 games away from $R_t$. Therefore $P_t(1) = .5359$. It is then impossible for the player to first reach $R_t$ in exactly 2 games, so $P_t(2) = 0$.

$$G_a(1400,1375)=.5359+\sum_{n=3}^\infty nP_t(n)$$

$P_t(3)$ requires that the player lose their first game and then win the next two, so the probability of that is:

$$P_t(3)=(1-\frac{1}{1+10^{(1375-1400)/400}})*\frac{1}{1+10^{(1350-1400)/400}}*\frac{1}{1+10^{(1375-1400)/400}}\approx.1421$$

Therefore

$$G_a(1400,1375)=.5359+3*.1421+\sum_{n=5}^\infty nP_t(n)=.9623+\sum_{n=5}^\infty nP_t(n)$$
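The hand computation so far is easy to verify numerically; a short Python check (the helper `win_prob` is my name for the per-game win chance when the opponent matches the player's current rating):

```python
def win_prob(r, r_true=1400):
    # chance of winning a game while rated r (opponent also rated r)
    return 1 / (1 + 10 ** ((r - r_true) / 400))

p1 = win_prob(1375)                                          # P_t(1)
p3 = (1 - win_prob(1375)) * win_prob(1350) * win_prob(1375)  # P_t(3)
print(round(p1, 4), round(p3, 4), round(1 * p1 + 3 * p3, 4))
# → 0.5359 0.1421 0.9623
```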

Although I'm not positive, from an intuitive perspective I am fairly sure that the series converges: the process is a random walk over rating levels that is biased toward $R_t$ whenever the player is below it. The value the series converges to is the average number of games needed to reach $R_t$.
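One way to test this without summing by hand is to treat the process as a random walk over rating levels 25 points apart and propagate probabilities game by game; the mass absorbed at $R_t$ on game $n$ is exactly $P_t(n)$. A Python sketch of that idea (all names are mine; the series is truncated at 2,000 games on the assumption that the tail beyond that is negligible):

```python
from collections import defaultdict

def win_prob(r, r_true=1400):
    # per-game win chance at current rating r (opponent also rated r)
    return 1 / (1 + 10 ** ((r - r_true) / 400))

def average_games(r_start=1375, r_true=1400, step=25, max_games=2000):
    """Return (G_a, P_t) where P_t[n] is the probability of first
    reaching r_true on game n (index 0 is unused)."""
    dist = {r_start: 1.0}   # probability of each non-absorbed rating
    p_t = [0.0]
    for _ in range(max_games):
        nxt = defaultdict(float)
        absorbed = 0.0
        for r, p in dist.items():
            w = win_prob(r, r_true)
            if r + step >= r_true:
                absorbed += p * w          # first passage to R_t
            else:
                nxt[r + step] += p * w
            nxt[r - step] += p * (1 - w)   # loss: drop one level
        p_t.append(absorbed)
        dist = nxt
    return sum(n * p for n, p in enumerate(p_t)), p_t

g_a, p_t = average_games()
print(round(p_t[1], 4), round(p_t[3], 4))  # → 0.5359 0.1421
print(round(g_a, 2))                       # partial sum of the series
```

The absorbed probabilities sum to essentially 1 well before the cutoff, which is the numerical sign that the series does converge.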


I have no experience writing computer programs to perform large numbers of computations beyond basic use of Excel spreadsheets, so hopefully there is a solution that does not require a brute-force computational approach.

> Given Rt, the player's true skill level and Rg the player's initial rating, calculate the average number of games it will take the player to reach their true skill rating.

"Average" in this case is a red herring. What you really need is the $\mathit{least}$ number of games won in order to reach your true rating.

For simplicity, assume it is me (ioannis-i) against a bunch of 1200-rated players. Maple code follows:

restart;
inew,rnew;#updated ratings for all players
NR := proc (AR, IR, AS, IS)
local AE, IE, AK, IK, ARN, IRN;
global rnew, inew;
AK := 50; IK := 50;#assumed K=50 for all players
AE := 1/(1+10^((1/400)*IR-(1/400)*AR));#expected player
IE := 1/(1+10^((1/400)*AR-(1/400)*IR));#expected ioannis
ARN := AR+AK*(AS-AE);#score adjustment players
IRN := IR+IK*(IS-IE);#score adjustment ioannis
print("ELO player", evalf(ARN));
print("ELO ioannis", evalf(IRN));
rnew := evalf(ARN);#update score player
inew := evalf(IRN);#update score ioannis
end proc;

rnew := 1200; inew := 1200;#start with all players = 1200
games:=0;
IT:=1400;#ioannis' true rating (going for...)

IR := inew;#ioannis starts with 1200 also

while IR < IT do#while true rating still not reached, repeat...
AR := 1200; IR := inew;#set player to 1200 and ioannis to new rating...
AS := 0; IS := 1;#force a game of ioannis/player=1-0
games := games+1;#one more game
NR(AR, IR, AS, IS);#calculate new scores for all
end do:

print("at least ", games-1, "wins needed to reach true skill");

Running the Maple sheet gives games = 12 for this particular configuration (all players start at 1200 and K = 50 for everyone), which is the LEAST number of games you need to win for your Elo to climb up to the true skill level of 1400.
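The 12-game figure is easy to reproduce outside Maple; here is the same loop as a Python sketch (opponent pinned at 1200, K = 50, every game won, variable names mine):

```python
K = 50
rating, opponent, target = 1200.0, 1200.0, 1400.0
wins = 0
while rating < target:
    expected = 1 / (1 + 10 ** ((opponent - rating) / 400))
    rating += K * (1 - expected)   # S = 1: the player wins every game
    wins += 1
print(wins)  # → 12
```

Each consecutive win gains slightly less than the last (25, then about 23.2, 21.6, ...), because the expected score grows as the rating gap widens.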

All this assumes you win all 12 of your games. If you interpret "average number of games" to also include lost games, then obviously 12 wins is only a lower bound, since each loss requires at least one additional win (on top of the 12) to balance it out.


Major Edit, after seeing your clarification.


The answer is yes, the sequence you are talking about will indeed converge, but in my opinion you are doing far too much work trying to handle it theoretically. The principal advantage of the Elo distribution is that, although it is fairly difficult to work with theoretically, it is a snap for numerical calculation. If you are not willing to accommodate a few elementary computations, then you can safely ignore the rest of my answer as redundant or off-topic, since it shows no theoretical results.

With that out of the way, the code I gave above can be modified to run a suitable simulation, and I have just modified it to actually simulate a run of games. Several things needed changing, such as updating the opponent's rating after each game and randomizing the wins and losses at a ratio $r$, after which you get convergence. The latter correctly offsets the results of the distribution according to the formulas in your explanation.
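The modified Maple sheet itself is not shown, but under the question's original assumptions (opponent always rated equal to the player, win probability taken from the gap to the true rating), a rough Monte Carlo analogue looks like this in Python; `simulate`, the seed, and the trial count are my choices, not the answerer's:

```python
import random

def simulate(r_start=1375, r_true=1400, step=25, rng=random):
    """Play until the true rating is reached; return the game count."""
    r, games = r_start, 0
    while r < r_true:
        p_win = 1 / (1 + 10 ** ((r - r_true) / 400))
        r += step if rng.random() < p_win else -step
        games += 1
    return games

rng = random.Random(0)
trials = 10_000
avg = sum(simulate(rng=rng) for _ in range(trials)) / trials
print(round(avg, 1))   # sample mean of games needed; a finite value
```

Because the win probability grows the further the player falls below the true rating, the walk is pulled back up, and the sample mean settles rather than blowing up.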

As you'd expect, it either converges to 1400 or sinks to a rather small value, below around 1090 points, where the player's rating bottoms out for good. This is also true in chess: if you fall below a small enough rating, then practically you are a fairly weak player, with very small chances of ever reaching respectable ratings.

I include some sample test runs below (the plots themselves are not reproduced here), grouped by the win/lose bias that offsets the distribution to a correctly distributed random one. When the sequence crashes at the bottom, the maximum rating shown is less than 1400.

50/50 Win/Lose — "Convergence (games):", 68, "maximum reached:", 1277.081344 [plot]

2/3 Win/Lose — "Convergence (games):", 99, "maximum reached:", 1321.303487 [plot]

3/4 Win/Lose — "Convergence (games):", 164, "maximum reached:", 1400.106277 [plot]

4/5 Win/Draw — "Convergence (games):", 102, "maximum reached:", 1400.255874 [plot]

Needless to say, if the bias is on the opponent's side, things are worse, and bottoming out seems almost guaranteed. Here is:

2/3 Lose/Win — "Convergence (games):", 19, "maximum reached:", 1225. [plot]

If you think the code would be of any help in this case, then leave a comment and I will post it cleaned up, so as not to clutter the space here.