Precise definition of a "game of incomplete information" (Game Theory)


Question: In game theory, what is the precise definition of a "game of incomplete information"?

What I've found so far:

  • In the standard first year graduate economics textbook on microeconomics (MWG), the best I can find is this:

Games in which “players know all relevant information about each other” “are known as games of complete information" (p. 253).

But what does "know" mean? And what does "relevant information" mean?

  • And in the standard graduate textbook on game theory (Fudenberg and Tirole), the best I can find is this:

When some players do not know the payoffs of the others, the game is said to have incomplete information (p. 209).

But again, what does "know" mean?

  • Briefly Googling, the only precise definition I can find of a game of incomplete information is the below (Levin, 2002, p. 3). However, this definition then prompts the question: "What is a game of complete information?" There does not seem to be any clear way to negate this definition (of a game with incomplete information) to produce a definition of a game with complete information.

Definition $\bf 1$ A game with incomplete information $G=(\Theta,S,P,u)$ consists of:

  1. A set $\Theta=\Theta_1\times\ldots\times\Theta_I$, where $\Theta_i$ is the (finite) set of possible types for player $i$.
  2. A set $S=S_1\times\ldots\times S_I$, where $S_i$ is the set of possible strategies for player $i$.
  3. A joint probability distribution $p(\theta_1,\ldots,\theta_I)$ over types. For finite type space, assume that $p(\theta_i)\gt0$ for all $\theta_i\in\Theta_i$.
  4. Payoff functions $u_i:S\times\Theta\to\Bbb R$.
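For a finite example, Levin's tuple $G=(\Theta,S,P,u)$ can be written out directly. Below is a minimal sketch for two players, each with types H/L and two pure strategies; the particular distribution and payoff rule are illustrative assumptions, not part of the definition.

```python
from itertools import product

# Illustrative instance of G = (Theta, S, p, u) for two players.
Theta = [("H", "L"), ("H", "L")]           # Theta_i: possible types of player i
S = [("bid_high", "bid_low")] * 2          # S_i: pure strategies of player i

# Joint distribution p(theta_1, theta_2) over type profiles (uniform here).
p = {t: 0.25 for t in product(*Theta)}

def u(i, s, theta):
    """Payoff u_i : S x Theta -> R (an arbitrary illustrative rule)."""
    win = 1.0 if s[i] == "bid_high" else 0.5
    value = 2.0 if theta[i] == "H" else 1.0
    return win * value

def expected_payoff(i, s):
    """Ex-ante expected payoff of pure strategy profile s for player i."""
    return sum(p[t] * u(i, s, t) for t in product(*Theta))

print(expected_payoff(0, ("bid_high", "bid_low")))  # 1.5
```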

There are 3 answers below.

---

The course of a game can be described by a sequence of states connected by moves. The move (which can be either chosen by a player or random) determines the next state. The state determines what possible moves can occur next and, in the case of a random move, what are the probabilities for the various possible moves. For a game of complete information, the current state is available to a player when deciding on a move, in the sense that the player's strategy is allowed to be an arbitrary function of the state. For a game of incomplete information, only part of the state may be available.
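The distinction in the paragraph above can be sketched in a few lines: under complete information a strategy may be any function of the full state, while under incomplete information it may only depend on the observed part. The state and the two strategies below are made up for illustration.

```python
# Full state of an illustrative card game.
state = {"pot": 10, "opponent_card": "K"}

def observe(state):
    """The player sees everything except the opponent's card."""
    return {k: v for k, v in state.items() if k != "opponent_card"}

def complete_info_strategy(state):
    # May condition on the entire state, including the hidden card.
    return "fold" if state["opponent_card"] == "K" else "call"

def incomplete_info_strategy(observed):
    # May only condition on the observable part of the state.
    return "call" if observed["pot"] >= 5 else "fold"

print(complete_info_strategy(state))             # fold
print(incomplete_info_strategy(observe(state)))  # call
```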

---

To be honest, "incomplete information" is not good terminology. If players do not know all the relevant information, the game is not really well specified. But for historical reasons, the term has stuck.

When you want to model a situation of "incomplete information" what you do in practice is to use Harsanyi's trick: you replace the incomplete information game by a (Bayesian) game with random moves by Nature (player zero).

Suppose two players are going to bid in an auction for an object, but player 1 does not know the valuation of player 2. You can add an initial move by Nature, not observed by player 1, in which Nature chooses the valuation of player 2 to be high or low with, say, probabilities $p$ and $1-p$.
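Harsanyi's trick can be made concrete with a small computation: average player 1's payoff over Nature's unobserved draw of player 2's valuation. The valuations, the probability, and the assumption that player 2 bids half her valuation are all illustrative.

```python
# Nature draws player 2's valuation, unobserved by player 1.
p = 0.6                               # probability of a high valuation
v2 = {"high": 2.0, "low": 0.5}        # player 2's possible valuations
bid2 = {t: v2[t] / 2 for t in v2}     # assumed: player 2 bids half her valuation

v1 = 1.0                              # player 1's own valuation

def expected_payoff_1(b):
    """Player 1's expected payoff from bid b, averaging over Nature's move."""
    payoff = 0.0
    for t, prob in (("high", p), ("low", 1 - p)):
        if b > bid2[t]:               # first-price auction: win and pay own bid
            payoff += prob * (v1 - b)
    return payoff

# Player 1 must choose one bid without knowing player 2's type:
for b in (0.2, 0.3, 0.5):
    print(b, expected_payoff_1(b))
```

Note that a bid of 0.3 beats the low type but not the high type, which already gives a positive expected payoff; this is the sense in which the Bayesian game with Nature's move is a well-specified decision problem for player 1.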

In the above example, everybody knows the probability that player 2 has a high valuation. But now suppose you want to model incomplete information about beliefs, and not only about payoffs. Say you want to model a situation where player 2 is in doubt whether player 1 knows 2's valuation or player 1 is uncertain about 2's valuation. You can accomplish that with the same trick. Add a further move by Nature that is not observed by player 2: with probability $q$ Nature chooses that player 1 knows 2's valuation, and with probability $1-q$ player 1 only believes that 2's valuation is high or low with probabilities $p$ and $1-p$.

To answer the second part of your question (what it means for a player to "know" something), one needs to add a knowledge model. You start with a set of states of the world $\Omega$. Each point in $\Omega$ is a complete description of the world (payoffs, players' beliefs about payoffs, players' beliefs about beliefs, and so on). Then you give each player a partition of $\Omega$. Please see Osborne and Rubinstein's A Course in Game Theory, Chapter 5 ("Knowledge and Equilibrium"). You can download it freely and legally here: http://arielrubinstein.tau.ac.il/books.html
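The partition model of knowledge is easy to sketch for a finite $\Omega$: a player "knows" an event $E$ at a state $\omega$ exactly when her whole partition cell at $\omega$ is contained in $E$. The state space and partition below are illustrative.

```python
# Partition knowledge model on a finite state space.
Omega = {1, 2, 3, 4}
# The player cannot distinguish states within the same cell.
partition = [{1, 2}, {3}, {4}]

def cell(w):
    """The partition cell containing state w."""
    return next(c for c in partition if w in c)

def knows(event):
    """K(E): the set of states at which the player knows event E,
    i.e. her cell at that state is a subset of E."""
    return {w for w in Omega if cell(w) <= event}

E = {1, 2, 3}
print(knows(E))  # the player knows E exactly at states 1, 2 and 3
```

At state 4 the event $E$ fails, and at states 1 and 2 the player cannot rule out any state outside $E$, so $K(E)=E$ here; with the coarser event $\{1,3\}$ she knows it only at state 3, since the cell $\{1,2\}$ is not contained in it.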

Finally using the definition you gave, a game of complete information is a game of incomplete information where $\Theta$ is a singleton (i.e. $\Theta$ has only one element).

---

Building a little on what Sergio Parreiras and d.k.o. said regarding what it means for a player to "know" something:

In general, the states of the world are elements of a probability space $(\Omega, \mathcal{F}, \mu)$. Player $i$'s information structure is a sub-$\sigma$-algebra $\mathcal{F}_i$ of $\mathcal{F}$. Player $i$ knows a random variable (or just a measurable map) $f$ if $f$ is measurable with respect to $\mathcal{F}_i$.

In your particular case, the state space $\Omega$ is the type space $\Theta$, which is a product

$$ \Theta = \Theta_1 \times \cdots \times \Theta_n. $$

The probability measure $\mu$ specifies the joint distribution of types. Player $i$'s information structure $\mathcal{F}_i$ is generated by the projection onto the $i$-th coordinate

$$ (\theta_1, \cdots, \theta_n) \mapsto \theta_i. $$

So in general player $i$ only knows (the realization of) his own type but not the types of players $-i$. Some special cases: if player $j$ has only one type, then everyone knows her type in the interim stage; if the measure is concentrated at a single point, everyone knows everyone's type, i.e. it is a game of complete information.
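For a finite type space, measurability with respect to the projection is easy to check directly: $f$ is $\mathcal{F}_i$-measurable iff $f$ is constant on each set $\{\theta : \theta_i = t\}$. A sketch, with an illustrative two-player H/L type space:

```python
from itertools import product

# Finite type space: player i's sigma-algebra is generated by the
# projection theta -> theta_i, so "player i knows f" means f is
# constant on each slice {theta : theta_i = t}.
Theta = [("H", "L"), ("H", "L")]
states = list(product(*Theta))

def knows(i, f):
    """True iff f is measurable w.r.t. the projection onto coordinate i."""
    for t in Theta[i]:
        values = {f(theta) for theta in states if theta[i] == t}
        if len(values) > 1:      # f varies within a slice: not measurable
            return False
    return True

own_type = lambda theta: theta[0]     # player 0's own type
other_type = lambda theta: theta[1]   # the opponent's type

print(knows(0, own_type))    # True: a player knows her own type
print(knows(0, other_type))  # False: but not her opponent's
```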

I'd point out that this formulation makes introducing Nature (Harsanyi's trick) unnecessary.

More generally, players can have different beliefs/priors, given by different measures $\mu_i$. (Common prior is needed for Harsanyi's trick.)

A player's strategy is then a map into his action set that is again measurable with respect to $\mathcal{F}_i$.

Implicit here is that players know each other's information structures. E.g. in your case, upon realization of his type, player $1$ knows that player $2$ knows player $2$'s own type. And player $1$ maximizes his expected utility accordingly, with respect to his prior.

In the more general formulation, player $i$ knows that player $j$ knows $F_j \in \mathcal{F}_j$, conditional on $F_i \in \mathcal{F}_i$, if

$$ F_j \cap F_i \neq \emptyset. $$

By induction, you can model all orders of knowledge this way.