Bounding the variance of the maximum of convex functions of a random variable


Consider a random variable $X$ with density $f$, and a finite collection of convex, increasing functions $Y_i$. I am interested in bounding the variance of $\max_i Y_i(X)$.

I have the following conjecture: $$ \text{Var}\left(\max_i Y_i(X)\right)\leq \sum_i p_i \text{Var}\left(Y_i(X)\right) $$ where $p_i$ is the natural distribution over $i$ induced by the max operator, i.e., $p_i$ is the probability that function $i$ is selected by the max operator when integrating over $X$: $$ p_i=\int_x\mathbb{1}\left(Y_i(x)\geq Y_j(x)\text{ for all }j\right)f(x)\,dx $$ (if multiple $Y_i$'s attain the maximum for some $x$, split the mass $f(x)$ equally among them).

Any idea on how to prove/disprove this result? Any other bound that is known on the variance of the max? Please feel free to impose additional assumptions if they can help.
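Before attempting a proof, the conjecture can be probed numerically. Below is a Monte Carlo sketch; the distribution of $X$ and the three convex, increasing functions are illustrative choices, not part of the question.

```python
import numpy as np

# Monte Carlo probe of the conjectured inequality
#   Var(max_i Y_i(X)) <= sum_i p_i Var(Y_i(X)).
# The distribution of X and the functions below are illustrative choices.
rng = np.random.default_rng(1)
x = rng.normal(size=1_000_000)                  # X ~ N(0, 1)

funcs = [np.exp,                                # convex, increasing
         lambda t: np.maximum(t, 0.0),          # convex, non-decreasing
         lambda t: 2.0 * t + 1.0]               # affine, hence convex

Y = np.stack([g(x) for g in funcs])             # shape (len(funcs), n)
winner = Y.argmax(axis=0)                       # which function attains the max
p = np.bincount(winner, minlength=len(funcs)) / x.size  # estimates p_i

lhs = Y.max(axis=0).var()                       # Var(max_i Y_i(X))
rhs = (p * Y.var(axis=1)).sum()                 # sum_i p_i Var(Y_i(X))
print(f"lhs = {lhs:.4f}, rhs = {rhs:.4f}, p = {p}")
```

Ties in the max have probability zero for these choices, so attributing each sample to its `argmax` matches the $p_i$ defined above.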

BEST ANSWER

This inequality is false; here is a counterexample:

Let $X$ be uniform on $[0,1]$, let $Y_1(x) = \delta$ for all $x$, and let $Y_2(x) = 0$ for $0 \leq x \leq 1 - \epsilon$, rising linearly afterwards so that $Y_2(1) = M$. (Think of $\epsilon, \delta$ as small and $M$ as large; both functions are piecewise linear, hence convex, and non-decreasing.) You can check that $Z = \max(Y_1, Y_2)$ is very close to $Y_2$, so $\text{Var}(Z) \approx \text{Var}(Y_2) \approx \epsilon M^2/3$. On the other side of the inequality, $\text{Var}(Y_1) = 0$ and $p_2 \approx \epsilon$, so the right-hand side is about $\epsilon \,\text{Var}(Y_2) \approx \epsilon^2 M^2/3$, which is far smaller when $\epsilon$ is small.
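For concreteness, here is a numerical check of this counterexample; the specific values of $\epsilon$, $\delta$, $M$ below are one illustrative choice.

```python
import numpy as np

# Numerical check of the counterexample: X ~ Uniform[0,1],
# Y1(x) = delta (constant), Y2(x) = 0 on [0, 1-eps] and linear up to
# Y2(1) = M.  Both are convex and non-decreasing.  Parameter values
# are illustrative.
rng = np.random.default_rng(0)
eps, delta, M = 0.01, 0.001, 100.0
x = rng.uniform(0.0, 1.0, size=2_000_000)

y1 = np.full_like(x, delta)
y2 = np.where(x > 1 - eps, M * (x - (1 - eps)) / eps, 0.0)
z = np.maximum(y1, y2)

p1 = np.mean(y1 >= y2)              # prob. that Y1 attains the max
p2 = 1.0 - p1                       # ties have probability ~0 here

lhs = z.var()                       # Var(max), roughly eps * M^2 / 3
rhs = p1 * y1.var() + p2 * y2.var() # roughly eps^2 * M^2 / 3
print(f"lhs = {lhs:.2f}, rhs = {rhs:.2f}")  # lhs far exceeds rhs
```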

Here is a general bound (your conjectured bound without the $p_i$'s), and a discussion of why it is tight: https://stats.stackexchange.com/questions/15970/what-is-the-variance-of-the-maximum-of-a-sample

EDIT: Edited $Y_1$ to $Y_2$ as per the comment.