There is a certain game that people play. Player A, on average, has scored 20 more points than the opponents he has played. Player B, on average, has scored 16 fewer points than his opponents. If Player A and Player B play against each other, what should be their expected difference in scores? Is it 18 or 36?
I have seen arguments supporting both sides, i.e. dividing by 2 or not.
Support for 36 (not dividing by 2):
Player B is obviously a terrible player, well below average, since he consistently scores 16 fewer points than his opponents. Player A is obviously a great player. It wouldn't make logical sense for Player A to have only a +18 score difference against Player B, who is below average, when Player A on average gets a 20-point advantage over his opponents.
Let's say there is a player who averages 100 points each game. When Player A plays him, A will score 120 points against him. When Player B plays him, B will score 84 points against him.
$120 - 84 = 36$.
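The same arithmetic works for any common opponent, assuming (as this argument implicitly does) that each player's score is the opponent's typical score shifted by a fixed amount:
$$(x + 20) - (x - 16) = 36 \quad \text{for any opponent average } x.$$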
Support for 18 (dividing by 2):
(Using different numbers for simplicity.) Let's say that Player A only plays against Player B. Each game, Player A always scores 120 and Player B always scores 80.
A on average gets 40 more points than his opponent. B on average gets 40 fewer points than his opponent.
If they play again, the difference between their scores will obviously be 40.
$(40 - (-40)) / 2 = 40$, not $80$.
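To make the pattern explicit (a sketch under the assumption, used in this scenario, that A and B only ever play each other and every game ends $a$ points to $b$): A's average margin is $a - b$ and B's is $b - a$, so the two reported figures are forced to be exact mirror images, and
$$\frac{(a - b) - (b - a)}{2} = a - b,$$
which is why dividing by 2 recovers the head-to-head difference in this particular setup.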
Théophile has rightly pointed out that we don't have enough information to predict the expected difference in scores between A and B. However, we can still express a preference for one of your predictions. Under the assumptions (which may or may not apply to this "certain game", but which are at least somewhat reasonable) that A and B have been playing the same opponents, or at least opponents randomly drawn from the same group, and that the score linearly reflects a skill (i.e. each player has some skill value and their score is a linear function of that value), the given data indicate that A's skill level corresponds to $20$ points above the average of the opponents and B's skill level corresponds to $16$ points below the average of the opponents. In this case, we'd expect the score difference between A and B to be $36$.
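One way to sanity-check this reasoning is a small Monte Carlo simulation of the linear-skill model; the pool mean, pool spread, and noise level below are illustrative choices of mine, not data from the question:

```python
import random

def simulate(n_games=200_000, pool_mean=100.0, pool_sd=15.0, noise_sd=10.0):
    """Monte Carlo sketch of the linear-skill model.

    Assumed model (not given in the question): every score in a game is
    the player's skill value plus independent Gaussian noise, and both
    A and B have faced opponents drawn from the same pool.
    """
    skill_a = pool_mean + 20   # A averages 20 points above his opponents
    skill_b = pool_mean - 16   # B averages 16 points below his opponents

    margin_a = margin_b = head_to_head = 0.0
    for _ in range(n_games):
        # A and B each play an opponent drawn from the common pool...
        opp_skill = random.gauss(pool_mean, pool_sd)
        margin_a += (skill_a + random.gauss(0, noise_sd)) - (opp_skill + random.gauss(0, noise_sd))
        margin_b += (skill_b + random.gauss(0, noise_sd)) - (opp_skill + random.gauss(0, noise_sd))
        # ...and then play one game against each other.
        head_to_head += (skill_a + random.gauss(0, noise_sd)) - (skill_b + random.gauss(0, noise_sd))

    print(f"A's average margin over the pool:   {margin_a / n_games:+.1f}")      # ~ +20
    print(f"B's average margin over the pool:   {margin_b / n_games:+.1f}")      # ~ -16
    print(f"Expected A-minus-B score head to head: {head_to_head / n_games:+.1f}")  # ~ +36

simulate()
```

Under these assumptions the head-to-head expectation comes out near $36$, matching the calculation above.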
Your second argument, which suggests that this should be divided by $2$, assumes that A and B have been playing each other. That assumption is inconsistent with the data: if A and B only played each other, A's average surplus over her opponents would have to equal B's average deficit exactly, whereas the question gives $+20$ and $-16$.