I know how to calculate Elo ratings (in chess) etc., but why did the inventor of the Elo rating decide to use the constants $400$ and $10$ in the expected-score formula? The formula: $$\text{ExpectedScore}_A=\frac{1}{1+10^{(\text{Rating}_B-\text{Rating}_A)/400}}$$
Why does the Elo rating use $400$ and $10$?
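For concreteness, the formula in the question can be sketched directly (a minimal transcription, not any official implementation; the function name and sample ratings are illustrative):

```python
# Expected score for player A under the logistic Elo model,
# a direct transcription of the formula above.
def expected_score(rating_a, rating_b, base=10, divisor=400):
    return 1 / (1 + base ** ((rating_b - rating_a) / divisor))

# Equal ratings give an expected score of exactly 0.5.
print(expected_score(1500, 1500))  # 0.5
# A 200-point advantage gives roughly 0.76.
print(expected_score(1700, 1500))  # ~0.7597
```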
1.1k views. Asked by Bumbble Comm (https://math.techqa.club/user/bumbble-comm/detail). There are 2 answers below.
Using base $10$, a player who is $200$ points stronger has an expected score of $\approx 0.7597$, as pointed out in the other answer. However, using a base of $9$ would give exactly $0.75$. I wonder why he did not use $9$?
It seems a more intuitive scheme would use a divisor of $0.125$ instead of $400$; that way everyone's rating would fall mostly between $0$ and $1$. So: base $9$ and divisor $0.125$.
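The base-9 observation is easy to check numerically (a quick sketch; the function name is illustrative). With base $9$ the expected score at a $200$-point gap is exactly $1/(1+9^{-1/2})=1/(1+1/3)=0.75$:

```python
def expected_score(rating_a, rating_b, base=10, divisor=400):
    return 1 / (1 + base ** ((rating_b - rating_a) / divisor))

# Base 10 at a 200-point gap: ~0.7597
print(expected_score(200, 0))
# Base 9 at a 200-point gap: exactly 0.75, since 9**(-1/2) = 1/3
print(expected_score(200, 0, base=9))
```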
The strongest humans are $\approx 2900$, the strongest computers $\approx 3500$, and the weakest humans $\approx 100$. So between $100$ and $2900$ there is a range of $2800$ points, or $7$ blocks of $400$ each. But eighths are easier to count from $0$ to $1$ than sevenths (in decimal terms).
Edit: OK, after thinking about this for a few weeks, I now realize the whole form $1/(1+\text{base}^{\text{rating diff}/\text{divisor}})$ is mathematically wrong to begin with, meaning there is no base and divisor that gives the right answer. This form must have been an approximation Elo came up with in the era before computing power was trivial. It turns out the correct formula for the win probability is the normal CDF of the rating difference divided by $\sqrt{2}$ times the standard deviation. In Excel terms: NORM.DIST((Ra-Rb)/(SQRT(2)*std_deviation),0,1,TRUE). That understanding comes from Elo's paper.
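The normal-CDF model described above can be compared against the logistic form (a sketch assuming $\sigma=200$ per player, which is the class-interval standard deviation usually attributed to Elo; the $\sqrt{2}$ combines the two players' independent performance spreads):

```python
import math

# Normal model: P(A beats B) = Phi((Ra - Rb) / (sqrt(2) * sigma)),
# where Phi is the standard normal CDF, expressed here via erf.
def normal_expected_score(rating_a, rating_b, sigma=200):
    z = (rating_a - rating_b) / (math.sqrt(2) * sigma)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Logistic model from the question, for comparison.
def logistic_expected_score(rating_a, rating_b):
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# The two curves agree at 0 and stay close, but are not identical.
for diff in (0, 100, 200, 400):
    print(diff,
          round(normal_expected_score(diff, 0), 4),
          round(logistic_expected_score(diff, 0), 4))
```

At a 200-point gap the normal model gives about $0.760$ versus the logistic $0.7597$, which is why the logistic form works as a close approximation in practice.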
I would assume the $10$ is just because we like computing powers of $10$. Then Wikipedia claims (paraphrasing) that Elo suggested scaling ratings so that a difference of $200$ rating points would mean the stronger player has an expected score of approximately $0.75$.
And indeed, $\frac{1}{1+10^{-1/2}}\approx 0.7597$.
(If we'd started with a base of $e$ instead of $10$, the scale factor would probably have ended up as $200$ or $225$.)
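For reference, the exactly equivalent scale factor for base $e$ is easy to compute (the $200$ or $225$ above is a guess at how it might have been rounded; the exact value is $400/\ln 10$):

```python
import math

# A base-e logistic matching 10**(x/400) needs divisor 400 / ln(10).
print(400 / math.log(10))  # ~173.72
```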