Two normal distributions with the same variance and two disjoint intervals


Consider two normal distributions with the same variance $\sigma^2$, one with mean 0 and one with mean $\delta > 0$. Let $F_1$ be the CDF of the zero mean distribution and $F_2$ that of the other distribution ($F_2(x) = F_1(x-\delta)$).

Prove that for any $u_1 < v_1 < u_2 < v_2$ (think two disjoint intervals $(u_1,v_1)$ and $(u_2,v_2)$ with the latter lying on the 'right')

$$(F_2(v_2)-F_2(u_2))\cdot (F_1(v_1)-F_1(u_1)) > (F_2(v_1)-F_2(u_1))\cdot (F_1(v_2)-F_1(u_2)) $$

or equivalently, $$(F_1(v_2-\delta)-F_1(u_2-\delta))\cdot (F_1(v_1)-F_1(u_1)) > (F_1(v_1-\delta)-F_1(u_1-\delta))\cdot (F_1(v_2)-F_1(u_2)) $$

Intuitively, this says that given two normal distributions with the same variance and two disjoint intervals, it is more likely that a sample from the distribution with the larger mean lands in the 'right' interval while a sample from the distribution with the smaller mean lands in the 'left' interval than the other way around.

I've written some code to try to verify this, and it appears to hold. However, I haven't figured out how to prove it mathematically. Any pointers are greatly appreciated! Maybe it has something to do with the normal CDF being log-concave?
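For reference, a numerical check of the inequality can be done with the standard library alone. This is a sketch assuming $\sigma = 1$; the ranges for $\delta$ and the interval endpoints are arbitrary choices, with small gaps enforced so both sides stay safely away from zero:

```python
import random
from statistics import NormalDist

random.seed(0)
F1 = NormalDist(mu=0.0, sigma=1.0).cdf  # CDF of the zero-mean distribution

for _ in range(10_000):
    delta = random.uniform(0.1, 3.0)
    F2 = NormalDist(mu=delta, sigma=1.0).cdf  # CDF of the shifted distribution
    # Build u1 < v1 < u2 < v2 with guaranteed gaps between the endpoints.
    u1 = random.uniform(-4.0, 0.0)
    v1 = u1 + random.uniform(0.1, 2.0)
    u2 = v1 + random.uniform(0.1, 2.0)
    v2 = u2 + random.uniform(0.1, 2.0)
    lhs = (F2(v2) - F2(u2)) * (F1(v1) - F1(u1))
    rhs = (F2(v1) - F2(u1)) * (F1(v2) - F1(u2))
    assert lhs > rhs, (delta, u1, v1, u2, v2)

print("inequality held in all 10000 trials")
```

Every random trial satisfies the strict inequality, which is consistent with the claim but of course not a proof.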



The trick is to look at the densities. Assume for simplicity (but without loss of generality) that $\sigma=1$ and let $\varphi$ denote the density of a $N(0,1)$. Then, for all $t_1<t_2$, \begin{align*} \ln(\varphi(t_2 -\delta))-\ln(\varphi(t_1 -\delta))& =-\frac{1}{2}\left[(t_2-\delta)^2 - (t_1-\delta)^2\right]\\ & = -\frac{1}{2}\left[t_2^2-t_1^2 - 2\delta(t_2-t_1)\right]\\ & > -\frac{1}{2}\left[t_2^2-t_1^2\right] = \ln(\varphi(t_2))-\ln(\varphi(t_1)), \end{align*} where the inequality holds because $\delta(t_2-t_1)>0$. Therefore, for all $t_1<t_2$, $$\varphi(t_2 - \delta)\varphi(t_1)> \varphi(t_1 - \delta)\varphi(t_2).$$ Integrating over $t_2 \in [u_2,v_2]$ (with $u_2>t_1$), we obtain $$\left[F_1(v_2 - \delta)-F_1(u_2 - \delta)\right]\varphi(t_1)> \varphi(t_1 - \delta)\left[F_1(v_2)-F_1(u_2)\right].$$ The result follows by integrating over $t_1 \in [u_1,v_1]$ (with $v_1<u_2$).
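The pointwise density inequality at the heart of this argument is easy to spot-check numerically; here is a minimal sketch with $\sigma = 1$ and an arbitrarily chosen $\delta$ and test points:

```python
import math

def phi(t):
    """Density of the standard normal N(0, 1)."""
    return math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)

delta = 0.7  # any delta > 0 works
for t1, t2 in [(-1.0, 0.5), (0.0, 2.0), (-3.0, -2.0)]:
    # Claimed: phi(t2 - delta) * phi(t1) > phi(t1 - delta) * phi(t2) for t1 < t2.
    assert phi(t2 - delta) * phi(t1) > phi(t1 - delta) * phi(t2)

print("pointwise density inequality verified")
```

Indeed, the ratio of the two sides is exactly $\exp(\delta(t_2-t_1)) > 1$, matching the logarithmic computation above.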

Edit: you were right, my first inequality is just log-concavity of the normal distribution.


I found another proof which works for any two normal distributions with the same variance $\sigma^2$ and means $\mu_1 < \mu_2$. Let $X_1 \sim N(\mu_1,\sigma^2)$ and $X_2 \sim N(\mu_2,\sigma^2)$ be independent. The proof reduces to the rearrangement inequality: \begin{equation*} \begin{aligned} \Pr(X_1 \in [u_1, v_1],\, X_2 \in [u_2,v_2]) &= C\cdot \int_{u_1}^{v_1} \int_{u_2}^{v_2} \exp\left(-\frac{(x_1 - \mu_1)^2}{2\sigma^2}\right)\exp\left(-\frac{(x_2-\mu_2)^2}{2\sigma^2}\right) dx_2\, dx_1\\ &\quad\text{[where $C$ captures all the constant factors]}\\ &= C\cdot \int_{u_1}^{v_1} \int_{u_2}^{v_2} \exp\left(-\frac{(x_1 - \mu_1)^2+(x_2-\mu_2)^2}{2\sigma^2}\right) dx_2\, dx_1. \end{aligned} \end{equation*} Comparing this with the analogous expression for $\Pr(X_1 \in [u_2, v_2],\, X_2 \in [u_1,v_1])$, it suffices to prove that for all $x_1 \in [u_1,v_1]$ and $x_2 \in [u_2,v_2]$ (so that $x_1 < x_2$), $$ \exp\left(-\frac{(x_1 - \mu_1)^2+(x_2-\mu_2)^2}{2\sigma^2}\right) > \exp\left(-\frac{(x_1 - \mu_2)^2+(x_2-\mu_1)^2}{2\sigma^2}\right)\,.$$ To this end, it can be seen that \begin{equation*} \begin{aligned} \exp\left(-\frac{(x_1 - \mu_1)^2+(x_2-\mu_2)^2}{2\sigma^2}\right) &> \exp\left(-\frac{(x_1 - \mu_2)^2+(x_2-\mu_1)^2}{2\sigma^2}\right)\\ \Leftrightarrow (x_1 - \mu_1)^2+(x_2-\mu_2)^2 &< (x_1 - \mu_2)^2+(x_2-\mu_1)^2\\ \Leftrightarrow -2x_1\mu_1 - 2x_2\mu_2 &< -2x_1\mu_2 - 2x_2\mu_1\\ \Leftrightarrow x_1\mu_1 + x_2\mu_2 &> x_1\mu_2 + x_2\mu_1 \end{aligned} \end{equation*} The last inequality is exactly $(x_2-x_1)(\mu_2-\mu_1) > 0$, which holds since $x_1 < x_2$ and $\mu_1 < \mu_2$; this is the simplest case of the rearrangement inequality.
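The two double integrals being compared can also be approximated directly, for instance with a midpoint Riemann sum. This is a sketch; the means, intervals, and grid size are arbitrary choices for illustration:

```python
import math

def kernel(x1, x2, mu1, mu2, sigma):
    """Unnormalized joint density of independent N(mu1, sigma^2), N(mu2, sigma^2)."""
    return math.exp(-((x1 - mu1) ** 2 + (x2 - mu2) ** 2) / (2.0 * sigma ** 2))

def riemann(a1, b1, a2, b2, mu1, mu2, sigma, n=200):
    """Midpoint Riemann sum of the kernel over [a1, b1] x [a2, b2]."""
    h1, h2 = (b1 - a1) / n, (b2 - a2) / n
    total = 0.0
    for i in range(n):
        x1 = a1 + (i + 0.5) * h1
        for j in range(n):
            x2 = a2 + (j + 0.5) * h2
            total += kernel(x1, x2, mu1, mu2, sigma)
    return total * h1 * h2

mu1, mu2, sigma = 0.0, 1.0, 1.0
u1, v1, u2, v2 = -1.0, 0.5, 1.0, 2.5
# X1 in the left interval, X2 in the right interval, versus the swap.
aligned = riemann(u1, v1, u2, v2, mu1, mu2, sigma)
swapped = riemann(u2, v2, u1, v1, mu1, mu2, sigma)
assert aligned > swapped
print("aligned > swapped confirmed")
```

The normalizing constant $C$ is the same on both sides, so it can be dropped from the comparison, as in the derivation above.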