Expressing "common knowledge" in terms of probability


I have been assigned to give a brief talk on the paper *Agreeing to Disagree* by Robert J. Aumann, and I am struggling with the definition of common knowledge.

I have come across another post: Explanation of Aumann's "agreeing to disagree" in modern notation

In that post, the asker proposes the following definition, based on the LessWrong post by Tyrrell McAllister (https://www.lesswrong.com/posts/8h5Tz593J2hvKG9Pe/an-explanation-of-aumann-s-agreement-theorem):

Definition: the two agents' posteriors, $p(X=x\mid A=a)$ and $p(X=x\mid B=b)$, are common knowledge if there exists some information $c$ such that

  1. $p(c\mid A=a)=1$ and $p(c\mid B=b) = 1.$ (I.e., both agents know $c$ with certainty after learning their respective information.)

  2. $p(X=x\mid c,\,A=a') = p(X=x\mid A=a)$ for all $a'$ in the domain of $A$. That is, knowing $c$ completely determines $\mathbf{A}$'s posterior for $x$, independently of any other information that $\mathbf{A}$ might have learned.

  3. $p(X=x\mid c,\,B=b') = p(X=x\mid B=b)$ for all $b'$ in the domain of $B$. (As above, but for $\mathbf{B}$.)
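To make the definition concrete, here is a toy sanity check of my own construction (not an example from Aumann's paper or the linked post): a uniform prior on four states, two agents whose observations are partition cells, and $c$ taken to be the whole space. The three conditions above are verified numerically, and the resulting common-knowledge posteriors agree, as Aumann's theorem predicts.

```python
from fractions import Fraction

# Uniform prior on a four-point sample space (hypothetical toy example).
omega = {1, 2, 3, 4}
prior = {w: Fraction(1, 4) for w in omega}

X = {1, 4}                  # the event whose probability both agents estimate
A_cells = [{1, 2}, {3, 4}]  # agent A learns which of these cells is true
B_cells = [{1, 3}, {2, 4}]  # agent B learns which of these cells is true
c = {1, 2, 3, 4}            # candidate common-knowledge event (here: everything)

def p(event, given):
    """Conditional probability P(event | given) under the prior."""
    num = sum(prior[w] for w in event & given)
    den = sum(prior[w] for w in given)
    return num / den

# Condition 1: both agents assign probability 1 to c, whatever they observed.
assert all(p(c, a) == 1 for a in A_cells)
assert all(p(c, b) == 1 for b in B_cells)

# Conditions 2 and 3: conditioning on c pins down each agent's posterior for X,
# no matter which cell was actually observed (the set of posteriors is a singleton).
posteriors_A = {p(X, c & a) for a in A_cells}
posteriors_B = {p(X, c & b) for b in B_cells}
assert len(posteriors_A) == 1 and len(posteriors_B) == 1

# And, as the agreement theorem predicts, the two posteriors coincide.
print(posteriors_A, posteriors_B)
```

Note that if $X$ is replaced by, say, $\{1, 2\}$, condition 2 fails ($A$'s posterior would be $1$ in one cell and $0$ in the other), so the posteriors would not be common knowledge via this $c$; the construction only works because conditioning on $c$ alone already fixes each agent's posterior.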

But as it is mentioned in the post:

I'm still struggling to see clearly how this definition of "common knowledge" corresponds to the intuitive description that Aumann gives, which is that not only do $\mathbf{A}$ and $\mathbf{B}$ know each other's posteriors, but each also knows that the other knows, knows that the other knows that they know, and so on.

Any help understanding this would be appreciated.