Lower Expectation


Let $X$ be, for simplicity, a finite set (with the discrete topology). Denote by $M(X)$ the set of probability measures on $X$, endowed with the weak topology. For $\mu\in M(X)$ and a (necessarily measurable) function $f:X\rightarrow[-1,1]$, denote by $E_{\mu}(f)$ the expected value of $f$ under $\mu$.

For a (closed) set $A\subseteq M(X)$, define the lower expectation of $A$, denoted by $E(A)$, as the following functional of type $(X\rightarrow [-1,1])\rightarrow [-1,1]$:

$$E(A)(f)= \displaystyle \inf \{ E_{\mu}(f) \ | \ \mu \in A \}.$$
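For concreteness, here is a minimal Python sketch of these definitions (my own illustration, not part of the question; the names `expectation` and `lower_expectation` are purely illustrative), representing a measure and a function on the finite set $X$ as dicts:

```python
# Minimal sketch: a measure mu and a function f on a finite X are both
# represented as dicts mapping elements of X to numbers.

def expectation(mu, f):
    """E_mu(f) = sum over x in X of mu(x) * f(x)."""
    return sum(mu[x] * f[x] for x in mu)

def lower_expectation(A, f):
    """E(A)(f) = inf { E_mu(f) : mu in A } for a finite collection A."""
    return min(expectation(mu, f) for mu in A)

# Example with X = {a, b, c} and f taking values in [-1, 1].
mu1 = {"a": 0.2, "b": 0.5, "c": 0.3}
mu2 = {"a": 0.6, "b": 0.1, "c": 0.3}
f = {"a": -1.0, "b": 0.0, "c": 1.0}
print(lower_expectation([mu1, mu2], f))  # min(0.1, -0.3), up to floating point
```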

For a (closed) set $A\subseteq M(X)$, denote by $H(A)$ the convex hull of $A$, defined in the usual way.

It is easy to see that:

Proposition: For all $A,B\subseteq M(X)$, if $H(A)=H(B)$ then $E(A)=E(B)$.
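As a quick numerical illustration of why this holds (again a sketch, reusing the dict representation above): the expectation of a convex combination of measures in $A$ is the same convex combination of their expectations, so it can never fall below $E(A)(f)$.

```python
# Sketch: sampling convex combinations of two measures in A never produces
# an expectation below E(A)(f), which is why passing to H(A) changes nothing.
import random

def expectation(mu, f):
    return sum(mu[x] * f[x] for x in mu)

A = [{"a": 0.2, "b": 0.5, "c": 0.3}, {"a": 0.6, "b": 0.1, "c": 0.3}]
f = {"a": -1.0, "b": 0.0, "c": 1.0}
lower = min(expectation(mu, f) for mu in A)

for _ in range(1000):
    lam = random.random()
    mix = {x: lam * A[0][x] + (1 - lam) * A[1][x] for x in f}
    # E_mix(f) = lam * E_mu1(f) + (1 - lam) * E_mu2(f) >= min of the two.
    assert expectation(mix, f) >= lower - 1e-12
```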

My question is about the converse of the previous statement.

QUESTION 1: Is it true that for $A,B\subseteq M(X)$, if $E(A)=E(B)$ then $H(A)=H(B)$?

QUESTION 2: What about restricting attention to functions $f$ of type $X\rightarrow [0,1]$?

Remark: if we restrict even further to characteristic functions $f:X\rightarrow\{0,1\}$ (equivalently, to events $Y\subseteq X$), the statement of QUESTION 1 is no longer true, as the following example shows:

Example. Consider $X=\{a,b,c\}$, $\mu_{1}= \{ a\mapsto 0.5, b\mapsto 0.5, c\mapsto 0\}$, $\mu_{2}=\{ a\mapsto 0, b\mapsto 0, c\mapsto 1\}$ and $\mu_{3}=\{a\mapsto 0.5, b\mapsto 0, c\mapsto 0.5 \}$, and let $A=\{\mu_{1},\mu_{2}\}$ and $B=\{\mu_{1},\mu_{2},\mu_{3}\}$. Then $H(A)\neq H(B)$, because $\mu_{3}$ is not a convex combination of $\mu_{1}$ and $\mu_{2}$: matching the weight of $a$ forces the coefficient of $\mu_{1}$ to be $1$, which gives the wrong weight to $b$. Yet for every set $Y\subseteq X$ (i.e., every characteristic function $f:X\rightarrow\{0,1\}$), it holds that $E(A)(Y)=E(B)(Y)$.
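The example can be checked by brute force; here is a short self-contained sketch (mine, with the measures above hard-coded) that enumerates every subset $Y\subseteq X$ and compares the two lower probabilities.

```python
# Brute-force check of the example: compare E(A)(1_Y) and E(B)(1_Y)
# for every subset Y of X = {a, b, c}.
from itertools import chain, combinations

X = ["a", "b", "c"]
mu1 = {"a": 0.5, "b": 0.5, "c": 0.0}
mu2 = {"a": 0.0, "b": 0.0, "c": 1.0}
mu3 = {"a": 0.5, "b": 0.0, "c": 0.5}
A, B = [mu1, mu2], [mu1, mu2, mu3]

def lower_prob(S, Y):
    """E(S)(1_Y) = min over mu in S of mu(Y)."""
    return min(sum(mu[x] for x in Y) for mu in S)

for Y in chain.from_iterable(combinations(X, r) for r in range(len(X) + 1)):
    assert abs(lower_prob(A, Y) - lower_prob(B, Y)) < 1e-12
print("lower probabilities agree on all 8 events")
```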


There are two answers below.

Answer 1.

There is a one-to-one correspondence between convex sets of probability distributions and affinely superadditive lower expectations (or lower previsions in Walley's terminology). You should check http://sites.poli.usp.br/p/fabio.cozman/research/credalsetstutorial/introduction/node5.html

Answer 2.

The first question has already been answered in the comments and by @Kenny: yes! Taking the convex hull of the set of probability measures has no influence on the lower expectation, so $E(A)=E(H(A))$ (and likewise $E(B)=E(H(B))$). Moreover, a closed convex set of probability measures is completely determined by its lower expectation: since $X$ is finite and $A$ is closed, $H(A)$ is compact and convex, and the separating hyperplane theorem gives $$H(A)=\{\mu\in M(X) \ | \ E_{\mu}(f)\geq E(A)(f) \text{ for all } f:X\rightarrow[-1,1]\}.$$ Therefore $E(A)=E(B)$ implies $H(A)=H(B)$.
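To make the duality step concrete, here is a hedged sketch using `scipy.optimize.linprog` (the function `separating_gamble` and its interface are mine, purely for illustration): given a finite set $A$ of measures and a candidate $\mu$, it searches for a gamble $f$ with values in $[-1,1]$ such that $E_{\mu}(f)<E(A)(f)$; such an $f$ exists exactly when $\mu$ lies outside the convex hull of $A$.

```python
# Sketch of the duality step: mu lies in the convex hull of a finite set A of
# measures exactly when no gamble f with values in [-1, 1] has E_mu(f) < E(A)(f).
import numpy as np
from scipy.optimize import linprog

def separating_gamble(A, mu):
    """Return f with E_mu(f) < min over nu in A of E_nu(f), or None if no such f exists."""
    A = np.asarray(A, dtype=float)   # shape (k, n): k measures on an n-point X
    mu = np.asarray(mu, dtype=float)
    k, n = A.shape
    # Variables: f in R^n and t in R.  Minimize  mu.f - t  subject to
    # t <= nu.f for every nu in A, and -1 <= f_i <= 1.
    c = np.concatenate([mu, [-1.0]])
    A_ub = np.hstack([-A, np.ones((k, 1))])   # encodes -nu.f + t <= 0
    b_ub = np.zeros(k)
    bounds = [(-1.0, 1.0)] * (n + 1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    if res.fun < -1e-9:              # negative optimum: a separating gamble exists
        return res.x[:n]
    return None                      # mu is in the convex hull of A

A = [[0.5, 0.5, 0.0], [0.0, 0.0, 1.0]]
print(separating_gamble(A, [0.25, 0.25, 0.5]))  # None: an actual mixture of A
print(separating_gamble(A, [0.5, 0.0, 0.5]))    # some f: outside the hull of A
```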

As for the second question, the answer is yes too.

To see why, choose any variable $g\colon X\to\mathbb{R}$. Note that $g$ is bounded, and in particular has a minimum and a maximum, because $X$ was assumed to be finite. Let \begin{align} \alpha&:=\min g \\ \beta&:=\max g - \min g \\ f&:=\begin{cases} \frac{g-\alpha}{\beta} & \text{if $\beta>0$} \\ 0 & \text{if $\beta=0$} \end{cases} \end{align} Note that $f\colon X\to[0,1]$ and $g=\alpha+\beta f$ with $\beta\ge 0$. Also, $$E(A)(g)=E(A)(\alpha+\beta f)=\alpha +\beta E(A)(f)$$ In other words, $E(A)$ is completely determined by its restriction to variables of the form $f\colon X\to [0,1]$.
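A small numerical sketch of this rescaling (illustrative only, reusing the dict representation from the question): it builds $f$ from $g$ as above and checks the identity $E(A)(g)=\alpha+\beta\,E(A)(f)$ on one example.

```python
# Sketch: rescale an arbitrary bounded gamble g into f: X -> [0, 1] and check
# that E(A)(g) = alpha + beta * E(A)(f) on a concrete example.

def lower_expectation(A, f):
    return min(sum(mu[x] * f[x] for x in mu) for mu in A)

g = {"a": -3.0, "b": 1.0, "c": 5.0}
alpha = min(g.values())                # alpha = min g
beta = max(g.values()) - alpha         # beta = max g - min g  (>= 0)
f = {x: (g[x] - alpha) / beta if beta > 0 else 0.0 for x in g}

A = [{"a": 0.5, "b": 0.5, "c": 0.0}, {"a": 0.0, "b": 0.0, "c": 1.0}]
assert abs(lower_expectation(A, g) - (alpha + beta * lower_expectation(A, f))) < 1e-12
```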

Now assume $E(A)(f)=E(B)(f)$ for all $f\colon X\to [0,1]$. Then for any $g\colon X\to\mathbb{R}$, by the previous part, we know that there are $f\colon X\to [0,1]$, $\alpha\in\mathbb{R}$ and $\beta\in\mathbb{R}_{\ge 0}$ (as constructed before) such that $g=\alpha+\beta f$, and $$E(A)(g)=\alpha +\beta E(A)(f)=\alpha +\beta E(B)(f)=E(B)(g)$$

Note that we never used countable additivity. So these arguments also extend to infinite $X$, bounded variables (using $\sup$ and $\inf$ instead of $\max$ and $\min$), and closed sets of finitely additive probability charges (under an appropriate topology).