If $X$ is a nonnegative $\sigma$-subGaussian random variable with $P(X=0)\ge p$, what is a good upper bound for $P(X \ge h)$?


Let $X$ be a nonnegative random variable and let $\sigma \in [0,\infty)$ and $p \in (0,1)$ such that

  • (1) $P(X=0) \ge p$
  • (2) $Var(X) \le \sigma^2$

For $h \ge 0$, define $c_X(h):=P(X \ge h)$. The following result was established in a paper of S. Bobkov.

For every $h \ge \dfrac{\sigma}{\sqrt{p(1-p)}}$, it holds that $P(X \ge h) \le \dfrac{p\sigma^2}{ph^2-\sigma^2}$.

In the referenced paper, the above inequality is labeled as (2.6).
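As a quick sanity check (not from the paper), inequality (2.6) can be evaluated on a two-point distribution satisfying (1) and (2) with equality: $X = 0$ with probability $p$ and $X = a := \sigma/\sqrt{p(1-p)}$ otherwise, which has $\operatorname{Var}(X) = \sigma^2$. A minimal Python sketch; the function name `bobkov_bound` and the sample values of `p`, `sigma` are chosen here purely for illustration:

```python
import math

def bobkov_bound(p, sigma, h):
    """Upper bound (2.6): p*sigma^2 / (p*h^2 - sigma^2),
    valid for h >= sigma / sqrt(p*(1-p))."""
    assert h >= sigma / math.sqrt(p * (1 - p))
    return p * sigma**2 / (p * h**2 - sigma**2)

# Two-point distribution: X = 0 w.p. p, X = a w.p. 1-p,
# with a chosen so that Var(X) = sigma**2.
p, sigma = 0.3, 1.0
a = sigma / math.sqrt(p * (1 - p))
# Exact tail at h = a is P(X >= a) = 1 - p; the bound returns the same value,
# so (2.6) is attained at the smallest admissible threshold in this example.
print(bobkov_bound(p, sigma, a), 1 - p)  # both equal 0.7
```

In particular, at $h = \sigma/\sqrt{p(1-p)}$ the right-hand side of (2.6) equals $1-p$, which this distribution attains, so the bound cannot be improved under (1) and (2) alone.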

Now, suppose we replace condition (2) with the following condition

  • (2') $X$ is $\sigma$-subGaussian, meaning that $P(|X-EX| > t) \le 2\exp(-t^2/(2\sigma^2))$ for all $t \ge 0$.

Question. What is a good upper bound for $c_X(h)$ as a function of $p$, $\sigma$, and $h$, in this case?

One would expect to obtain a stronger tail bound than under condition (2).

N.B.: Of course, if the worst comes to the worst, I'll be fine with a bound which only works for sufficiently large $h$.

1 Answer

I have tried using some simple inequalities to get (hopefully) something useful:

I have focused on the case when $p$ is small.

Since $X \ge 0$, the event $\{X = 0\}$ is contained in $\{|X - \mathbb E X| \ge \mathbb E X\}$, so $P(X=0) \ge p$ together with the subgaussian concentration implies that $p \leq 2 \exp\left(- \frac{(\mathbb E X)^2}{ 2 \sigma ^2}\right)$. This gives us the following: $$\mathbb E X \leq \sigma \sqrt{2 \log \frac{2}{p}}=: h_0.$$

Given this bound on $\mathbb E X$, we can now derive upper tail inequalities: for any $h = \alpha h_0$ with $\alpha \geq 1$, we have $$ \begin{align} P(X \geq h) &\leq P(X - \mathbb E X \geq (\alpha-1) h_0 ) \quad (\text{since } \mathbb E X \leq h_0) \\ &\leq 2\exp\left(- \frac{(\alpha-1)^2h_0^2}{2 \sigma^2}\right) \\ &= 2\exp\left(-(\alpha-1)^2 \log \frac{2}{p} \right) = 2\left(\frac{p}{2}\right)^{(\alpha-1)^2}. \end{align} $$
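The last simplification uses $h_0^2 = 2\sigma^2 \log\frac{2}{p}$, so the exponential rewrites exactly as a power of $p/2$. A minimal numeric check of that identity, with arbitrary illustrative values of `p`, `sigma`, `alpha`:

```python
import math

# Check: 2*exp(-(alpha-1)^2 * h0^2 / (2*sigma^2)) == 2*(p/2)**((alpha-1)**2),
# where h0 = sigma * sqrt(2*log(2/p)).  Values below are illustrative only.
p, sigma, alpha = 0.1, 2.0, 1.5
h0 = sigma * math.sqrt(2 * math.log(2 / p))
lhs = 2 * math.exp(-((alpha - 1) ** 2) * h0**2 / (2 * sigma**2))
rhs = 2 * (p / 2) ** ((alpha - 1) ** 2)
print(lhs, rhs)  # agree up to floating-point error
```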

For comparison, the bound in the paper has $h_0' := \frac{\sigma}{\sqrt{p(1 - p)}}$ and gives, for any $h = \alpha h_0'$ with $\alpha \ge 1$, $P(X \geq \alpha h_0') \leq \frac{p(1-p)}{\alpha^2 - 1 + p}$. Thus subgaussianity allows us to get much tighter bounds, especially when $p \to 0$. Please let me know if I made an error somewhere.
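To illustrate the comparison as $p \to 0$, one can evaluate both closed-form bounds at a common threshold $h$ large enough that each applies. A hypothetical script (the function names `subgaussian_bound` and `bobkov_bound_scaled` and the choice $h = 3\max(h_0, h_0')$ are introduced here, not from the answer above):

```python
import math

def subgaussian_bound(p, alpha):
    """Tail bound 2*(p/2)**((alpha-1)**2), valid at h = alpha*h0 with alpha >= 1."""
    return 2 * (p / 2) ** ((alpha - 1) ** 2)

def bobkov_bound_scaled(p, alpha):
    """Bound (2.6) rewritten at h = alpha*h0': p*(1-p)/(alpha**2 - 1 + p)."""
    return p * (1 - p) / (alpha**2 - 1 + p)

sigma = 1.0
for p in (1e-2, 1e-4, 1e-6):
    h0 = sigma * math.sqrt(2 * math.log(2 / p))   # subgaussian threshold
    h0p = sigma / math.sqrt(p * (1 - p))          # Bobkov threshold h0'
    h = 3 * max(h0, h0p)        # a threshold at which both bounds are valid
    sg = subgaussian_bound(p, h / h0)
    bk = bobkov_bound_scaled(p, h / h0p)
    print(f"p={p:.0e}: subgaussian bound {sg:.3e} vs Bobkov bound {bk:.3e}")
```

For small $p$ the common threshold is dominated by $h_0' \sim \sigma p^{-1/2}$, so the subgaussian exponent $(h/h_0 - 1)^2 \log\frac{2}{p}$ is enormous and the subgaussian bound is many orders of magnitude below the polynomial-in-$p$ Bobkov bound, consistent with the claim above.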