If the expectation of a random vector is an extreme point, then it is almost surely that point


I was wondering how to show the following intuitive statement:

Let $X$ be a random vector in $\mathbb{R}^d$ such that $\Pr(X \in A) = 1$ for some convex set $A$. If $\mathbb{E}(X) = p$, where $p$ is an extreme point of $A$, then $X = p$ almost surely.

If $X$ is a simple function then the claim seems obvious, but I'm not sure how to extend this to general measurable $X$.
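As a quick sanity check (not a proof), here is a simulation with $A = [0,1]^2$ and the corner $p = (0,0)$; it illustrates the contrapositive, namely that any mass away from $p$ pulls the mean off the corner. The set, point, and distributions are my own choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A = unit square [0,1]^2 (convex); p = (0,0) is an extreme point (a corner).
p = np.array([0.0, 0.0])

# A distribution on A that puts mass away from p pulls the mean into the square,
# so E[X] cannot equal the corner unless X = p almost surely.
X = rng.uniform(0.0, 1.0, size=(100_000, 2))  # uniform on A
print(X.mean(axis=0))  # approximately (0.5, 0.5), far from the corner

# Mix: ~99% of the mass at p, ~1% uniform -- the mean still moves off the corner.
mask = rng.random(100_000) < 0.01
Y = np.where(mask[:, None], X, p)
print(Y.mean(axis=0))  # roughly (0.005, 0.005), still not (0, 0)
```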

1 Answer

Here are two ideas, but I am not sure either is rigorous enough... Comments / suggestions / corrections welcome.

What does it mean for $p \in A$ to be an extreme point of the convex set $A$? It means that whenever $p = \sum_i \alpha_i p_i$ is a convex combination with each $p_i \in A$, each $\alpha_i \ge 0$, and $\sum_i \alpha_i = 1$, the combination is trivial: every $p_i$ with $\alpha_i > 0$ equals $p$.

If you are willing to generalize the $\sum$ to an $\int$, and interpret the weights $\alpha_i$ as the distribution of $X$, then $\mathbb{E}[X] = p$ is exactly such a "combination," and triviality reads as $\Pr(X = p) = 1$. (Of course, the finite-sum definition of extreme point does not automatically cover integrals, which is the gap in rigor here.)
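Spelled out, the integral form of the claim being invoked is (writing $\mu$ for the distribution of $X$):

$$p \;=\; \mathbb{E}[X] \;=\; \int_A x \, d\mu(x), \qquad \mu(A) = 1,$$

and the hoped-for conclusion is that this "continuous convex combination" can only be trivial, i.e. $\mu(\{p\}) = 1$. Passing from finite sums to integrals is exactly the step that needs justification.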


Alternatively (warning: hand-waving!), if $p$ is an extreme point then it "sticks out." :) More specifically, I'm thinking there is a point $c$ in the interior of $A$ ("just inside" $p$) such that the dot product $(x - c) \cdot (p - c)$ is maximized over all $x \in A$ uniquely at $x = p$.

Now consider the real-valued r.v. $Y = (X - c) \cdot (p - c)$; by the choice of $c$, $Y$ attains its maximum value $(p-c)\cdot(p-c)$ iff $X = p$. However, $\mathbb{E}[X] = p$ and linearity together give $\mathbb{E}[Y] = (\mathbb{E}[X] - c) \cdot (p - c) = (p-c)\cdot(p-c)$, i.e. the expected value equals the maximum value. A random variable that is bounded above by a constant and whose mean equals that constant must equal it a.s., so $\Pr(X = p) = 1$.
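The last step is a standard lemma, which can be made precise as follows (a sketch, writing $M := (p-c)\cdot(p-c)$ for the maximum): if $Y \le M$ a.s. and $\mathbb{E}[Y] = M$, then

$$Z := M - Y \;\ge\; 0 \text{ a.s.}, \qquad \mathbb{E}[Z] \;=\; M - \mathbb{E}[Y] \;=\; 0,$$

and by Markov's inequality $\Pr(Z > \tfrac{1}{n}) \le n\,\mathbb{E}[Z] = 0$ for every $n$, so $\Pr(Z > 0) = \lim_n \Pr(Z > \tfrac{1}{n}) = 0$. Hence $Y = M$ a.s., and since $Y = M$ exactly when $X = p$, this gives $\Pr(X = p) = 1$.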