Hoeffding's inequality for a conditional probability (conditioned on an event)


Suppose I have independent $X_1\sim\text{Bin}(n,\theta_1)$ and $X_2\sim\text{Bin}(n,\theta_2)$ with $X=X_1+X_2$, where $\theta_1,\theta_2\in(0,1)$. Define $a=n\theta_0+o(n^{2/3})$ for some $\theta_0\in(0,1)$; $a$ is deterministic but depends on $n$. I'd like to show that $$ P\left(X>c\mid X_1\leq a,X_2\leq a\right)\to 0 $$ as $n\to\infty$ for some "large enough" $c$. Note that the parameters $\theta_1$ and $\theta_2$ for $X_1$ and $X_2$ may differ, so $X$ is not necessarily a binomial random variable.

In the marginal case, without the conditioning event, Hoeffding's inequality gives the convergence. Is there a known inequality (like Chernoff's or Hoeffding's) that yields exponential convergence for such a quantity conditioned on an event?
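For intuition in the marginal case, here is a small numerical sketch (the values of $n$, $\theta_1$, $\theta_2$, $c$ are illustrative assumptions, not from the question): it computes the exact tail $P(X>c)$ by convolving the two binomial pmfs and compares it with the Hoeffding bound for a sum of $2n$ independent variables taking values in $[0,1]$, namely $P(X-\mathbb{E}X\geq t)\leq\exp(-2t^2/(2n))=\exp(-t^2/n)$.

```python
from math import comb, exp

def pmf(n, p, k):
    """P[Bin(n, p) = k]."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Illustrative (assumed) parameters.
n, th1, th2 = 40, 0.3, 0.4
c = 40  # threshold well above E[X] = n*(th1 + th2) = 28

# Exact marginal tail P(X > c), X = X1 + X2 with independent coordinates.
exact = sum(pmf(n, th1, x1) * pmf(n, th2, x2)
            for x1 in range(n + 1) for x2 in range(n + 1)
            if x1 + x2 > c)

# Hoeffding: X is a sum of 2n independent Bernoulli variables in [0, 1],
# so P(X - E[X] >= t) <= exp(-2 t^2 / (2n)) = exp(-t^2 / n).
t = c - n * (th1 + th2)
bound = exp(-t * t / n)

assert exact <= bound  # the exact tail sits below the Hoeffding bound
print(exact, "<=", bound)
```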



Accepted answer

The FKG inequality, or more specifically Harris's inequality, gives $$P(X>c \mid X_1 \leq a, X_2 \leq a) \leq P ( X > c).$$

I will describe in more detail how FKG applies. Let $A$ be the event $X> c$ and let $B$ be the event that $X_1,X_2\leq a.$ We want to prove $P(A\mid B)\leq P(A).$

The FKG lattice condition is always satisfied for a product measure $\mu=\mu_1\times\mu_2$ defined on a Cartesian product of finite totally ordered sets, here $L=\{0,1,\dots,n\}^2.$ We are using $$(\mu_1\times\mu_2)(\{(x_1,x_2)\})=\mathbb P[X_1=x_1, X_2=x_2].$$

Let $1_A$ and $1_B$ denote the indicator functions of the events $A$ and $B$ respectively. Since $1_A$ is an increasing function on $L$ and $1_B$ is a decreasing function on $L,$ the FKG inequality says $$\sum_{x\in L} 1_A(x)1_B(x)\mu(x)\sum_{x\in L}\mu(x)\leq \sum_{x\in L} 1_A(x)\mu(x)\sum_{x\in L} 1_B(x)\mu(x),$$ which is just $P(A\cap B)\cdot 1\leq P(A)P(B),$ so $P(A\mid B)\leq P(A).$
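A quick numerical check of this conclusion (the parameter values below are illustrative assumptions): enumerate the product measure on $L=\{0,\dots,n\}^2$, and verify $P(A\cap B)\leq P(A)P(B)$ for the increasing event $A=\{X>c\}$ and the decreasing event $B=\{X_1\leq a,\,X_2\leq a\}$.

```python
from math import comb

def pmf(n, p, k):
    """P[Bin(n, p) = k]."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Illustrative (assumed) parameters.
n, th1, th2 = 20, 0.3, 0.5
a, c = 12, 18  # B = {X1 <= a, X2 <= a}, A = {X1 + X2 > c}

pA = pB = pAB = 0.0
for x1 in range(n + 1):
    for x2 in range(n + 1):
        mu = pmf(n, th1, x1) * pmf(n, th2, x2)  # product measure on L
        in_A = x1 + x2 > c                      # increasing event
        in_B = x1 <= a and x2 <= a              # decreasing event
        pA += mu * in_A
        pB += mu * in_B
        pAB += mu * (in_A and in_B)

# Harris: an increasing and a decreasing event are negatively correlated,
# so P(A and B) <= P(A) P(B), i.e. P(A | B) <= P(A).
assert pAB <= pA * pB
print(pAB / pB, "<=", pA)
```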

Second answer

You might just try using the definition of conditional probability. Since \begin{align}P ( X > c \mid X _ { 1 } \leq a , X _ { 2 } \leq a ) &= \frac{P ( \{X > c\} \cap \{X _ { 1 } \leq a , X _ { 2 } \leq a\} )}{P(X_1 \leq a, X_2 \leq a)} \\ &\leq \frac{P ( X > c)}{P(X_1 \leq a, X_2 \leq a)}, \end{align} you could just apply Hoeffding's inequality to the numerator. For the bound to vanish you also need the denominator bounded away from zero; this holds, for instance, when $\theta_0>\max(\theta_1,\theta_2)$, since then $P(X_1\leq a, X_2\leq a)\to 1$ by the law of large numbers.