Show that the optimum of an expectation is monotone


Setting: Let $Z$ be a $[0,1]$-valued random variable. I'm interested in solutions $f$ to $$\max_{f : [0, 1] \to [0, 1]} \mathbb{E}[Z f(Z)]$$ $$\text{ subject to } \mathbb{E}[(Z - 1) f(Z)] \geq c,$$ for some $c \in [-1, 0]$.

Note: I can't assume $f$ is smooth (in fact, I generally expect solutions to look like threshold functions). Moreover, the solution won't be unique (e.g., changing $f$ on any set of probability $0$ affects neither the objective nor the constraint), and I can't use arguments based on a derivative of $f$.

Questions:

  1. Among the solutions, it seems clear that there should be some monotone (non-decreasing) solution $f$, since both the objective and the constraint improve with $\mathbb{E}[Zf(Z)]$ while the constraint otherwise depends only on $\mathbb{E}[f(Z)]$. How can I formalize this intuition to show that there exists a monotone solution?
  2. I think that, if $Z$ is continuous (i.e., has a density w.r.t. Lebesgue measure), then there must be a solution $f$ that is a threshold function (i.e., $f(z) = 1\{z > t\}$ for some $t \in [0, 1]$). Is this true, and, if so, how can I show it?
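Regarding question 2, a numerical sanity check may help (this reduction is my own framing, not established here): when $Z$ is supported on finitely many atoms, the problem becomes a finite LP of fractional-knapsack type, maximize $\sum_i p_i z_i f_i$ subject to $\sum_i p_i (1 - z_i) f_i \leq -c$ and $0 \leq f_i \leq 1$, and the classical greedy solution fills $f_i = 1$ from the largest $z_i$ downward, producing a monotone $f$ with at most one fractional value:

```python
# Sanity check for a discrete Z (assumed example, my own reduction): the problem
# max sum(p*z*f) s.t. sum(p*(1-z)*f) <= -c, 0 <= f <= 1 is a fractional
# knapsack, solved greedily from the largest z down.

def solve_greedy(z, p, c):
    """Fill f = 1 from the largest z downward until the budget -c is spent."""
    budget = -c                      # allowed value of E[(1 - Z) f(Z)]
    f = [0.0] * len(z)
    for i in sorted(range(len(z)), key=lambda i: -z[i]):
        w = p[i] * (1.0 - z[i])      # constraint cost of setting f[i] = 1
        if w <= budget:
            f[i] = 1.0
            budget -= w
        elif w > 0:
            f[i] = budget / w        # at most one fractional coordinate
            budget = 0.0
    return f

z = [0.1, 0.3, 0.5, 0.7, 0.9]
p = [0.2] * 5                        # uniform over the five atoms
f = solve_greedy(z, p, c=-0.2)

# the optimum is non-decreasing in z with at most one value strictly in (0, 1)
assert all(a <= b for a, b in zip(f, f[1:]))
assert sum(0.0 < v < 1.0 for v in f) <= 1
```

For this instance the greedy output is $f \approx (0, 0.143, 1, 1, 1)$: a threshold near $z = 0.3$, with the one fractional value sitting at the boundary atom.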
Accepted Answer:

I was able to show a somewhat stronger result, namely that, if a solution exists, then there exists a solution of the form $f(z) = p \cdot 1\{z = t\} + 1\{z > t\}$ for some $p, t \in [0, 1]$. Below, I refer to functions of this form as generalized threshold functions. Both questions follow from this fact, since (a) generalized threshold functions are clearly monotone, and (b) if $Z$ has a density w.r.t. Lebesgue measure, then $P_Z(\{t\}) = 0$, so setting $p = 0$ changes neither the objective nor the constraint.

First, a simple lemma showing that, if a measure $\mu$ is nonzero, then there is a point whose left neighborhoods (taken half-open, to accommodate atoms) all have positive measure:

Lemma 1: Let $\mu$ be a measure on $[0, 1]$ with $\mu([0, 1]) > 0$. Then, there exists $z \in [0, 1]$ such that, for all $\epsilon > 0$, $\mu([0, 1] \cap (z - \epsilon, z]) > 0$.

Proof of Lemma 1: (The interval must be half-open: if $\mu$ is a point mass at $z$, then every open interval $(z - \epsilon, z)$ is null.) Let $z^* := \sup \text{supp}(\mu)$, where $\text{supp}(\mu) \subseteq [0, 1]$ is the support of $\mu$, i.e., the smallest closed set of full $\mu$-measure, which is nonempty since $\mu([0, 1]) > 0$. Since the support is closed, $z^* \in \text{supp}(\mu)$, so every open neighborhood of $z^*$ has positive measure. Moreover, $\mu((z^*, 1]) = 0$, since $(z^*, 1]$ is disjoint from the support. Hence, for every $\epsilon > 0$, $$\mu([0, 1] \cap (z^* - \epsilon, z^*]) = \mu([0, 1] \cap (z^* - \epsilon, z^* + \epsilon)) > 0.$$
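For intuition, here is a small discrete illustration (with an assumed two-atom measure, not from the lemma itself): taking $z$ to be the top of the support, every half-open left neighborhood $(z - \epsilon, z]$ carries mass, even though the open neighborhood $(z - \epsilon, z)$ may not when $z$ is an atom.

```python
# Discrete illustration of Lemma 1 (assumed example measure, not from the post):
# mu puts mass 0.5 at 0.25 and mass 0.5 at 0.75, so sup(supp(mu)) = 0.75.

atoms = {0.25: 0.5, 0.75: 0.5}

def mu_half_open(a, b):
    """mu((a, b]) for the discrete measure above."""
    return sum(m for x, m in atoms.items() if a < x <= b)

def mu_open(a, b):
    """mu((a, b)) for the discrete measure above."""
    return sum(m for x, m in atoms.items() if a < x < b)

z_star = max(atoms)  # top of the support
for eps in [1.0, 0.5, 0.1, 1e-6]:
    assert mu_half_open(z_star - eps, z_star) > 0   # half-open: always positive
assert mu_open(z_star - 0.1, z_star) == 0           # open: misses the atom at z*
```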

Proof of main result: Suppose that there exists a solution $f$. We will construct a solution that is a generalized threshold function. Specifically, we first construct a monotone solution, and then show that this monotone solution equals a generalized threshold function except perhaps on a set of probability $0$ with respect to $Z$. Both desired conclusions follow from the existence of such a solution.

Construction of a Monotone Solution: Define $$g(z) := \text{essup}_{P_Z} f([0, z]) \quad \text{ and } \quad h(z) := \text{esinf}_{P_Z} f((z, 1]),$$ where the essential supremum and infimum are taken with respect to the measure $P_Z$ of $Z$, with the conventions $g(z) = 0$ whenever $P_Z([0, z]) = 0$ and $h(z) = 1$ whenever $P_Z((z, 1]) = 0$. We first show that, for all $z \in [0, 1]$, $g(z) \leq h(z)$. We will then use this to show that $g = f$ except on a set of $P_Z$ measure $0$ (i.e., $P_Z(\{z \in [0, 1] : g(z) \neq f(z)\}) = 0$). Therefore, both $\mathbb{E}[Z g(Z)] = \mathbb{E}[Z f(Z)]$ and $\mathbb{E}[(1 - Z) g(Z)] = \mathbb{E}[(1 - Z) f(Z)]$. Since $g : [0, 1] \to [0, 1]$ is clearly monotone non-decreasing, the result follows.

Suppose, for sake of contradiction, that, for some $z \in [0, 1]$, $g(z) > h(z)$. Then there exist $A \subseteq [0, z]$ and $B \subseteq (z, 1]$ such that $\inf_{t \in A} f(t) > \sup_{t \in B} f(t)$ and $P_Z(A), P_Z(B) > 0$. Define $z_A := \mathbb{E}[Z|Z \in A]$ and $z_B := \mathbb{E}[Z|Z \in B]$, and note that, since $A \subseteq [0, z]$ and $B \subseteq (z, 1]$, $z_A < z_B$. Define \begin{align*} \epsilon := \min \left\{ \frac{P_Z(A) (1 - z_A)}{P_Z(B) (1 - z_B)} \inf_{t \in A} f(t),\quad 1 - \sup_{t \in B} f(t) \right\}, \end{align*} which is strictly positive because $0 \leq \sup_{t \in B} f(t) < \inf_{t \in A} f(t) \leq 1$, and define $\phi : [0, 1] \to [0, 1]$ by $$\phi(z) := \left\{ \begin{array}{cc} f(z) - \epsilon \frac{P_Z(B) (1 - z_B)}{P_Z(A) (1 - z_A)} & \text{ if } z \in A \\ f(z) + \epsilon & \text{ if } z \in B \\ f(z) & \text{ otherwise,} \end{array} \right. $$ noting that, by construction of $\epsilon$, $\phi(z) \in [0, 1]$ for all $z \in [0, 1]$. Then, by construction of $\phi$, \begin{align*} \mathbb{E}[(1 - Z) \phi(Z)] - \mathbb{E}[(1 - Z) f(Z)] & = - (1 - z_A) \epsilon \frac{P_Z(B) (1 - z_B)}{P_Z(A) (1 - z_A)} P_Z(A) + (1 - z_B) \epsilon P_Z(B) \\ & = 0, \end{align*} so that $\phi$ remains feasible, while \begin{align*} \mathbb{E}[Z \phi(Z)] - \mathbb{E}[Z f(Z)] & = - z_A \epsilon \frac{P_Z(B) (1 - z_B)}{P_Z(A) (1 - z_A)} P_Z(A) + z_B \epsilon P_Z(B) \\ & = \left( - \frac{z_A}{1 - z_A}(1 - z_B) + z_B \right) \epsilon P_Z(B) > 0, \end{align*} since the function $z \mapsto \frac{z}{1 - z}$ is strictly increasing and $z_A < z_B$. This contradicts the assumption that $f$ is a solution, implying $g \leq h$.
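To make the exchange concrete, here is an informal numerical check (with assumed two-atom data, not part of the proof): $f$ is larger on the low atom than on the high atom, and the perturbation leaves $\mathbb{E}[(1-Z)f(Z)]$ unchanged while strictly increasing $\mathbb{E}[Z f(Z)]$. The second term in the min is taken as $1 - f_B$ so that the perturbed value stays in $[0, 1]$.

```python
# Informal numerical check of the exchange argument (assumed two-atom data,
# not from the post): f is larger on the low atom A = {0.2} than on the
# high atom B = {0.8}, violating monotonicity.

zA, zB = 0.2, 0.8   # atom locations
pA, pB = 0.5, 0.5   # P_Z(A), P_Z(B)
fA, fB = 0.9, 0.1   # f values: non-monotone, fA > fB

# step size; the second term is 1 - fB so that the perturbed value stays <= 1
eps = min(pA * (1 - zA) / (pB * (1 - zB)) * fA, 1 - fB)
phiA = fA - eps * pB * (1 - zB) / (pA * (1 - zA))
phiB = fB + eps
assert 0 <= phiA <= 1 and 0 <= phiB <= 1

constraint_f = pA * (1 - zA) * fA + pB * (1 - zB) * fB      # E[(1-Z) f(Z)]
constraint_phi = pA * (1 - zA) * phiA + pB * (1 - zB) * phiB
obj_f = pA * zA * fA + pB * zB * fB                         # E[Z f(Z)]
obj_phi = pA * zA * phiA + pB * zB * phiB

assert abs(constraint_phi - constraint_f) < 1e-12  # constraint side unchanged
assert obj_phi > obj_f                             # objective strictly improves
```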

We now show that $g = f$ except on a set of $P_Z$ measure $0$. First, note that, if $g(z) \neq f(z)$, then either $P_Z(\{z\}) = 0$ or $f(z) < \text{essup}_{P_Z} f([0, z))$ (otherwise, the atom at $z$ would force $g(z) = f(z)$); in either case, $$g(z) = \text{essup}_{P_Z} f([0, z]) = \text{essup}_{P_Z} f([0, z)) = \sup_{\epsilon > 0} g(z - \epsilon),$$ so $g$ is left-continuous at $z$.

For any $\delta > 0$, define $$A_\delta := \left\{ z \in [0, 1] : g(z) < f(z) - \delta \right\} \quad \text{ and } \quad B_\delta := \left\{ z \in [0, 1] : g(z) > f(z) + \delta \right\}.$$ Since $$\{z \in [0, 1] : g(z) < f(z)\} = \bigcup_{j = 1}^\infty \left\{ z \in [0, 1] : g(z) < f(z) - \frac{1}{j} \right\}$$ and $$\{z \in [0, 1] : g(z) > f(z)\} = \bigcup_{j = 1}^\infty \left\{ z \in [0, 1] : g(z) > f(z) + \frac{1}{j} \right\},$$ by countable subadditivity, it suffices to show that $P_Z(A_\delta) = P_Z(B_\delta) = 0$ for all $\delta > 0$.

Suppose, for sake of contradiction, that $P_Z(A_\delta) > 0$. Applying Lemma 1 to the measure $E \mapsto P_Z(A_\delta \cap E)$, there exists $z \in [0, 1]$ such that, for every $\epsilon > 0$, $P_Z(A_\delta \cap (z - \epsilon, z]) > 0$. If $z \in A_\delta$ and $P_Z(\{z\}) > 0$, then $g(z) = \text{essup}_{P_Z} f([0, z]) \geq f(z) > g(z) + \delta$, a contradiction; hence $P_Z(A_\delta \cap (z - \epsilon, z)) > 0$ for every $\epsilon > 0$. Let $L := \text{essup}_{P_Z} f([0, z)) = \sup_{\epsilon > 0} g(z - \epsilon)$, and choose $\epsilon > 0$ such that $g(z - \epsilon) > L - \delta$. For $P_Z$-almost every $t \in [0, z)$, $f(t) \leq L$; on the other hand, for every $t \in A_\delta \cap (z - \epsilon, z)$, $f(t) > g(t) + \delta \geq g(z - \epsilon) + \delta > L$. Since $P_Z(A_\delta \cap (z - \epsilon, z)) > 0$, this is a contradiction.

On the other hand, suppose, for sake of contradiction, that $P_Z(B_\delta) > 0$. Applying Lemma 1 to the measure $E \mapsto P_Z(B_\delta \cap E)$, there exists $z \in [0, 1]$ such that, for every $\epsilon > 0$, $P_Z(B_\delta \cap (z - \epsilon, z]) > 0$. First suppose $z \in B_\delta$ and $P_Z(\{z\}) > 0$, so that $f(z) < g(z) - \delta$. Since the atom at $z$ then does not attain the essential supremum, $\text{essup}_{P_Z} f([0, z)) = g(z) = \sup_{\epsilon > 0} g(z - \epsilon)$; choosing $\epsilon > 0$ with $g(z - \epsilon) > g(z) - \delta$, we get $h(z - \epsilon) = \text{esinf}_{P_Z} f((z - \epsilon, 1]) \leq f(z) < g(z) - \delta < g(z - \epsilon)$, contradicting $g \leq h$. Otherwise, $P_Z(B_\delta \cap (z - \epsilon, z)) > 0$ for every $\epsilon > 0$. Let $L := \text{essup}_{P_Z} f([0, z)) = \sup_{\epsilon > 0} g(z - \epsilon)$, and choose $\epsilon > 0$ with $g(z - \epsilon) > L - \delta$. Since $g$ is non-decreasing, for $t \in B_\delta \cap (z - \epsilon, z)$, $f(t) < g(t) - \delta \leq L - \delta$. Thus, since $P_Z(B_\delta \cap (z - \epsilon, z)) > 0$, we have $h(z - \epsilon) \leq L - \delta < g(z - \epsilon)$, contradicting the previously shown fact that $g \leq h$.

To conclude, we have shown that $P_Z(\{z \in [0, 1] : g(z) \neq f(z)\}) = 0$.

Existence of a Generalized Threshold Solution: We now construct a solution that is equal to a generalized threshold function except on a set of $P_Z$-measure $0$ (note that modifying a solution on a $P_Z$-null set yields another solution). For this, it suffices to construct a solution $f : [0, 1] \to [0, 1]$ such that (a) $f$ is monotone non-decreasing and (b) the set $f^{-1}((0, 1))$ is the union of a singleton $\{t\}$ and a set of $P_Z$-measure $0$.

From the previous step of this proof, we may assume that we have a solution $f$ that is monotone non-decreasing. It suffices therefore to show that $A := f^{-1}((0, 1))$ is the union of a singleton and a set of $P_Z$-measure $0$. If $P_Z(A) = 0$, we are done, so assume $P_Z(A) > 0$. Define $$t_0 := \inf \{z \in [0, 1] : P_Z(A \cap [0, z]) > 0\} \quad \text{ and } \quad t_1 := \sup\{z \in [0, 1] : P_Z(A \cap [z, 1]) > 0\},$$ both of which are well-defined since $P_Z(A) > 0$. Then, for all $\epsilon > 0$, $P_Z(A \cap [0, t_0 - \epsilon]) = P_Z(A \cap [t_1 + \epsilon, 1]) = 0$. If $t_0 = t_1$, then, since $$A \backslash \{t_0\} = \bigcup_{j = 1}^\infty A \cap \left( [0, t_0 - 1/j] \cup [t_0 + 1/j, 1] \right),$$ by countable subadditivity, $P_Z(A \backslash \{t_0\}) = 0$, which implies that $A = \{t_0\} \cup (A \backslash \{t_0\})$ is the union of a singleton and a set of measure $0$.

It suffices therefore to prove that $t_0 = t_1$. Suppose, for sake of contradiction, that $t_0 < t_1$. Then, there exists $t \in (t_0, t_1)$, and, by definition of $t_0$ and $t_1$, both $P_Z(A \cap [0, t)) > 0$ and $P_Z(A \cap (t, 1]) > 0$. For any $\delta \geq 0$, define $$B_\delta := \{z \in [0, t): \delta < f(z) < 1 - \delta\} \quad \text{ and } \quad C_\delta := \{z \in (t, 1]: \delta < f(z) < 1 - \delta\},$$ so that $P_Z(B_0) > 0$ and $P_Z(C_0) > 0$. By countable subadditivity, there exists $\delta > 0$ such that $P_Z(B_\delta) > 0$ and $P_Z(C_\delta) > 0$.

Define $\epsilon := \delta \cdot \min \{P_Z(B_\delta), P_Z(C_\delta)\} > 0$, and define $\phi : [0, 1] \to \mathbb{R}$ for all $z \in [0, 1]$ by $$\phi(z) := \left\{ \begin{array}{cc} f(z) - \frac{\epsilon}{P_Z(B_\delta)} & \text{ if } z \in B_\delta \\ f(z) + \frac{\epsilon}{P_Z(C_\delta)} & \text{ if } z \in C_\delta \\ f(z) & \text{ otherwise}, \end{array} \right.$$ noting that, by definition of $\epsilon$, $B_\delta$, and $C_\delta$, $\phi : [0, 1] \to [0, 1]$. Then, $$\mathbb{E}[\phi(Z)] - \mathbb{E}[f(Z)] = -\frac{\epsilon}{P_Z(B_\delta)} P_Z(B_\delta) + \frac{\epsilon}{P_Z(C_\delta)} P_Z(C_\delta) = 0,$$ while \begin{align*} \mathbb{E}[Z \phi(Z)] - \mathbb{E}[Z f(Z)] & = -\mathbb{E}[Z | Z \in B_\delta] \frac{\epsilon}{P_Z(B_\delta)} P_Z(B_\delta) + \mathbb{E}[Z | Z \in C_\delta] \frac{\epsilon}{P_Z(C_\delta)} P_Z(C_\delta) \\ & = \epsilon \left( \mathbb{E}[Z | Z \in C_\delta] - \mathbb{E}[Z | Z \in B_\delta] \right). \end{align*} Since $B_\delta \subseteq [0, t)$ and $C_\delta \subseteq (t, 1]$, this difference is strictly positive. Moreover, $\phi$ remains feasible, since $\mathbb{E}[(Z - 1) \phi(Z)] = \mathbb{E}[Z \phi(Z)] - \mathbb{E}[\phi(Z)] > \mathbb{E}[Z f(Z)] - \mathbb{E}[f(Z)] \geq c$. This contradicts the assumption that $f$ is a solution.
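An informal numerical check of this last perturbation (assumed two-atom data, not part of the proof): two interior values of $f$ straddle $t = 0.5$; transferring mass between them preserves $\mathbb{E}[f(Z)]$ and strictly increases $\mathbb{E}[Z f(Z)]$.

```python
# Informal check of the final perturbation (assumed discrete data, not from
# the post): atoms at 0.3 and 0.7 straddle t = 0.5, each with f = 0.5, so both
# lie in the delta-interior sets for delta = 0.4.

zB, zC = 0.3, 0.7      # atom below t (in B_delta) and above t (in C_delta)
pB, pC = 0.5, 0.5      # P_Z(B_delta), P_Z(C_delta)
fB, fC = 0.5, 0.5      # interior values: delta < 0.5 < 1 - delta
delta = 0.4

eps = delta * min(pB, pC)
phiB = fB - eps / pB   # decreased below t
phiC = fC + eps / pC   # increased above t
assert 0 <= phiB <= 1 and 0 <= phiC <= 1

mean_f = pB * fB + pC * fC        # E[f(Z)]
mean_phi = pB * phiB + pC * phiC  # E[phi(Z)]
obj_f = pB * zB * fB + pC * zC * fC        # E[Z f(Z)]
obj_phi = pB * zB * phiB + pC * zC * phiC  # E[Z phi(Z)]

assert abs(mean_phi - mean_f) < 1e-12  # mean preserved
assert obj_phi > obj_f                 # objective strictly improves
```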