Setting:
I am developing an app for a card game with 4 players.
Every player can see his own hand, but normally has only probabilities for which cards are in the other players' hands.
Let's take as an example a fictitious card game played with only 8 cards (C1 … C8).
Initially, all cards are distributed among all players, i.e. every player has a hand of 2 cards.
Player P1 knows his own hand, but has no information about the hands of the other players. He therefore stores, in a model for each other player (M2, M3 and M4), the probability that that player holds a given card.
The corresponding table of card probabilities then looks as follows (empty cells are 0):
The sum over each row must be 1, since every card has to be somewhere.
The sum over each column yields the number of cards in the respective player's hand.
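The initial table can be sketched as a matrix that satisfies both sum constraints. This is only an illustration under an assumption the post does not make: here P1 is taken to hold C1 and C4, so each of the six unseen cards is equally likely to sit in any of the three other hands.

```python
import numpy as np

# Hedged sketch of the initial table from P1's perspective.
# Assumption (not stated in the post): P1 holds C1 and C4.
# Rows = cards C1..C8, columns = players P1..P4; P1's own column is known.
n_cards, n_players = 8, 4
probs = np.zeros((n_cards, n_players))
probs[0, 0] = probs[3, 0] = 1.0               # P1 knows his own hand
unknown = [1, 2, 4, 5, 6, 7]                  # C2, C3, C5, C6, C7, C8
probs[np.ix_(unknown, [1, 2, 3])] = 1.0 / 3   # equally likely in M2, M3, M4

assert np.allclose(probs.sum(axis=1), 1.0)    # every card is somewhere
assert np.allclose(probs.sum(axis=0), 2.0)    # every player holds 2 cards
```

With no information at all, every unknown cell is simply (cards in that hand) / (number of unseen cards) = 2/6 = 1/3.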
Problem:
When a card is played, the card probabilities change. If a player plays a card, he has 1 card less in his hand, and the probability that this card is in any player's hand becomes 0. This is demonstrated in the following 2 tables, where first P2 plays C2, and then P3 plays C3.
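The bookkeeping for a played card can be sketched as follows. This is a minimal illustration under the same hypothetical starting hand (P1 holds C1 and C4) and under the simplifying assumption that nothing is known beyond the remaining hand sizes; in that special case each unknown cell is just (remaining hand size) / (number of unseen cards).

```python
import numpy as np

# Hedged sketch: updating P1's table when an opponent plays a card.
# Assumptions (illustrative, not from the post): P1 holds C1 and C4,
# and no information exists beyond hand sizes.
n_cards = 8
hand = np.array([2, 2, 2, 2])            # current hand sizes of P1..P4
probs = np.zeros((n_cards, 4))
probs[0, 0] = probs[3, 0] = 1.0          # P1's own cards
unknown = [1, 2, 4, 5, 6, 7]             # C2, C3, C5, C6, C7, C8
probs[np.ix_(unknown, [1, 2, 3])] = 1.0 / 3

def play_card(probs, hand, player, card):
    probs[card, :] = 0.0                 # the card is in no hand any more
    hand[player] -= 1
    # Re-spread the unseen cards over the opponents' remaining slots:
    # with no other information, P(card c in hand i) = hand[i] / #unseen.
    unseen = [c for c in range(n_cards) if probs[c, 1:].sum() > 0]
    for c in unseen:
        probs[c, 1:] = hand[1:] / len(unseen)
    return probs, hand

probs, hand = play_card(probs, hand, 1, 1)   # P2 plays C2
# P2 now holds 1 of the 5 unseen cards, P3 and P4 hold 2 each,
# so each remaining cell is 1/5 in M2 and 2/5 in M3 and M4.
```

Note that this uniform re-spreading is only valid while all unseen cards are interchangeable; as soon as deductions rule out specific cards for specific players, the cells must be computed differently.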
Depending on the rules of the game, P1 may be able to draw further conclusions when another player plays a card. If the game requires players to follow suit, for example, and P3 does not, then P1 can conclude that P3 does not have certain cards, e.g. C8, or neither C6 nor C8. This is demonstrated in the following 2 tables.
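Once such deductions rule out specific cards for specific players, the exact cell values can be obtained by enumerating every deal of the unseen cards that is consistent with everything P1 knows, and counting in what fraction of those deals a given card sits in a given hand. The sketch below uses a hypothetical state (after P2 played C2: P2 holds 1 card, P3 and P4 hold 2, and P1 has deduced that P3 holds neither C6 nor C8); the card names and hand sizes are illustrative.

```python
from itertools import combinations

# Hedged sketch: exact probabilities by enumerating consistent deals.
# Assumed state: P1 holds C1/C4, P2 already played C2, and P1 has
# deduced that P3 holds neither C6 nor C8 (as in the post's example).
unseen = ["C3", "C5", "C6", "C7", "C8"]
hand_sizes = {"P2": 1, "P3": 2, "P4": 2}
forbidden = {"P3": {"C6", "C8"}}         # from P3 not following suit

counts = {p: {c: 0 for c in unseen} for p in hand_sizes}
total = 0
for h2 in combinations(unseen, hand_sizes["P2"]):
    rest = [c for c in unseen if c not in h2]
    for h3 in combinations(rest, hand_sizes["P3"]):
        h4 = tuple(c for c in rest if c not in h3)
        deal = {"P2": h2, "P3": h3, "P4": h4}
        if any(set(deal[p]) & bad for p, bad in forbidden.items()):
            continue                     # deal contradicts a deduction
        total += 1
        for p, cards in deal.items():
            for c in cards:
                counts[p][c] += 1

probs = {p: {c: counts[p][c] / total for c in unseen} for p in hand_sizes}
```

Brute-force enumeration is exact and easy to verify, but its cost grows combinatorially with the number of unseen cards; for 8 cards it is trivial, for a full deck a smarter counting scheme would be needed.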
Question:
How can I compute the card probabilities for each cell that is neither 1 nor 0?


