How to bound probability density functions


For a random variable $X$ with PDF $p(x)$, my goal is to give a convergence rate for $$ \int_\epsilon^{2\epsilon}p(x)\,dx\overset{\epsilon\to 0}{\longrightarrow} 0. $$ Note that a rate is not always available; e.g., $p(x) = \delta(x)$ is not even a bounded density.

My approach is to find a constant $B$ with $p(x) \leq B$ for all $x$. Then the integral above is certainly upper bounded by $B\epsilon$. However, $p(x)$ is super hard to characterize directly ($X$ is the result of sums and products of a lot of simpler random variables).
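As a sanity check of the linear rate, here is a minimal numerical sketch, assuming for illustration that $X$ is standard normal (the post's $X$ is more complicated): with $B = \max_x p(x) = 1/\sqrt{2\pi}$, the mass of $(\epsilon, 2\epsilon)$ is indeed below $B\epsilon$.

```python
# Sketch: for X ~ N(0,1) (an illustrative assumption), the uniform bound
# B = max p(x) = 1/sqrt(2*pi) gives the linear rate
#   integral_eps^{2eps} p(x) dx <= B * eps.
import math
from statistics import NormalDist

p_max = 1.0 / math.sqrt(2.0 * math.pi)  # B: peak of the N(0,1) density
nd = NormalDist(mu=0.0, sigma=1.0)

for eps in (0.5, 0.1, 0.01, 1e-4):
    mass = nd.cdf(2 * eps) - nd.cdf(eps)  # integral of p over (eps, 2*eps)
    assert mass <= p_max * eps            # the claimed linear bound
    print(f"eps={eps:g}: mass={mass:.3e} <= B*eps={p_max * eps:.3e}")
```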

1) Are there nice techniques out there for bounding the PDF of a complicated random variable? I feel like with sums I can do something with convolution, but with products I'm completely lost.
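The convolution idea for sums does give a clean bound: if $A$ and $B$ are independent, then $p_{A+B}(x) = \int p_A(x-b)\,p_B(b)\,db \leq p_A^\infty$, and symmetrically $\leq p_B^\infty$. A quick grid check, with two densities chosen purely for illustration (standard normal and Uniform(0,1)):

```python
# Grid check of the sum bound: for independent A, B,
#   max p_{A+B} <= min(max p_A, max p_B),
# since p_{A+B}(x) = integral p_A(x - b) p_B(b) db <= (max p_A) * 1.
# Densities below are illustrative choices: N(0,1) and Uniform(0,1).
import numpy as np

dx = 0.01
x = np.arange(-10, 10, dx)
p_a = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)     # N(0,1) density
p_b = ((x >= 0) & (x < 1)).astype(float)         # Uniform(0,1) density

p_sum = np.convolve(p_a, p_b, mode="same") * dx  # density of A + B on the grid

assert p_sum.max() <= min(p_a.max(), p_b.max()) + 1e-9
print(p_a.max(), p_b.max(), p_sum.max())
```

(The exact density of N(0,1) + Uniform(0,1) is $\Phi(x) - \Phi(x-1)$, whose peak $\approx 0.383$ is indeed below $1/\sqrt{2\pi} \approx 0.399$.)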

2) Are there better ways of approaching this goal? An exponential bound would be excellent, rather than a linear one, though it doesn't quite feel like a concentration inequality would work well here.

Thanks for any thoughts!

Edit: Some progress I made. Consider independent random variables $A$ and $B$, and let $C = f(A,B)$. Define $p_A$, $p_B$, $p_C$ as the PDFs of $A$, $B$, and $C$, and for any random variable $X$, let $p_X^\infty$ be the maximum PDF value. Then $$ p_C(x) = \int_{-\infty}^\infty p_{C|A}(x \mid a)\, p_A(a)\, da \overset{(a)}{=} \int_{-\infty}^\infty p_{f(a,B)}(x)\, p_A(a)\, da \leq p_{C|A}^\infty \int_{-\infty}^\infty p_A(a)\, da = p_{C|A}^\infty, $$ where step (a) uses independence (conditioned on $A = a$, $C$ has the unconditional density of $f(a,B)$) and $p_{C|A}^\infty$ denotes the supremum of these densities over $a$. So, for example, if $C = (A+B)/2$, then $p_{C|A}^\infty = p_{B/2}^\infty$. Taking $A$ and $B$ Gaussian $\mathcal N(0,\sigma^2)$, we get $$ p_A^\infty = p_B^\infty = \frac{1}{\sqrt{2\pi}\,\sigma} \;\Rightarrow\; p_C^\infty \leq p_{B/2}^\infty = \frac{2}{\sqrt{2\pi}\,\sigma}. $$ This is consistent with what we know about Gaussians: here $C \sim \mathcal N(0,\sigma^2/2)$ exactly, so $p_C^\infty = \sqrt{2}/(\sqrt{2\pi}\,\sigma)$, which indeed satisfies the bound.
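A Monte Carlo sanity check of the Gaussian example (a sketch, not a proof; $\sigma = 1$ assumed for simplicity): the histogram-estimated peak of $p_C$ stays below the derived bound $p_{B/2}^\infty$.

```python
# Sanity check: with A, B ~ N(0,1) independent and C = (A + B)/2, the
# derivation gives p_C^inf <= p_{B/2}^inf = 2/sqrt(2*pi) ~= 0.798.
# In fact C ~ N(0, 1/2) exactly, with peak sqrt(2)/sqrt(2*pi) ~= 0.564.
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(1_000_000)
b = rng.standard_normal(1_000_000)
c = (a + b) / 2

# crude histogram estimate of the density peak of C
counts, edges = np.histogram(c, bins=200, range=(-3, 3), density=True)
peak_estimate = counts.max()

bound = 2 / np.sqrt(2 * np.pi)            # p_{B/2}^inf from the derivation
exact = np.sqrt(2) / np.sqrt(2 * np.pi)   # exact peak of N(0, 1/2)
assert peak_estimate <= bound
print(peak_estimate, exact, bound)
```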

In other words, if $f$ is a reasonable combination of independent random variables, each with a "non-peaky" PDF, then the PDF of $f(A,B)$ cannot be peaky either.

However, none of this holds if $A$ and $B$ are dependent (step (a) is the key point). This is now my main challenge: how can I get around it?