Suppose that $X$ is a continuous random variable, normally distributed with expected value $E(X) = 0$. What is the standard deviation $\sigma(X)$ that maximizes $P(a<X<b)$?
My attempt:
Since $E(X) = 0$, the variable $X/\sigma$ is standard normal, so (dropping the constant factor $\frac{1}{\sqrt{2\pi}}$, which does not affect where the maximum occurs):
$$P(a<X<b) \propto \int_0^{b/\sigma}e^{-t^2/2}\,dt - \int_0^{a/\sigma}e^{-t^2/2}\,dt$$
Then, to find the maximum of $P(a<X<b)$, we can differentiate this expression with respect to $\sigma$:
$$\frac{\partial}{\partial \sigma}\left(\int_0^{b/\sigma}e^{-t^2/2}dt - \int_0^{a/\sigma}e^{-t^2/2}dt\right) = -\frac{b}{\sigma^2} \cdot e^{-\frac{b^2}{2\sigma^2}} + \frac{a}{\sigma^2} \cdot e^{-\frac{a^2}{2\sigma^2}}$$
But I don't know how to maximize $P(a<X<b)$ over $\sigma$ from this, because it seems the derivative is $0$ only when $a = b = 0$.
Maybe there is another way to do this?
One knows that $X=\sigma X_0$, where $X_0$ is standard normal with PDF $\varphi$ and CDF $\Phi$. Hence, for every $\sigma>0$,
$$P(a<X<b)=P(a/\sigma<X_0<b/\sigma)=g(1/\sigma),$$
where, for every $u\geqslant0$,
$$g(u)=\Phi(bu)-\Phi(au).$$
Now, $g(0)=0$ and, for every $u$,
$$g'(u)=b\varphi(bu)-a\varphi(au).$$
In particular,
$$g'(0)=(b-a)\varphi(0)>0.$$
If $a<b<0$ or $0<a<b$, then $g(u)\to0$ as $u\to+\infty$, hence $g$ attains its maximum on $u>0$ at some $u_*$ solving $g'(u_*)=0$. Thus the optimal $\sigma$ exists and equals $\sigma_*=1/u_*$, where $u_*$ solves
$$be^{-b^2u_*^2/2}=ae^{-a^2u_*^2/2}.$$
Taking logarithms gives $\log b-\tfrac12 b^2u_*^2=\log a-\tfrac12 a^2u_*^2$, hence
$$\sigma_*^2=\frac1{u_*^2}=\frac{a^2-b^2}{2\log(a/b)}.$$
The other case is $a\leqslant0\leqslant b$ with $a\ne0$ or $b\ne0$. Then $g'(u)>0$ for every $u\geqslant0$, hence the supremum of $g(u)$ is attained only in the limit $u\to\infty$. That is, there is no optimal $\sigma$, and the supremum of $P(a<X<b)$ corresponds to the limit $\sigma\to0$ (a conclusion that was rather obvious from the start, since the distribution then concentrates at $0$, which lies in $[a,b]$).
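As a sanity check, here is a minimal numerical sketch (the values of $a$ and $b$ are arbitrary choices of mine, covering the case $0<a<b$): it maximizes $P(a<X<b)$ over $\sigma$ by a crude grid search and compares the result with the closed form $\sigma_*^2=(a^2-b^2)/(2\log(a/b))$.

```python
from math import erf, log, sqrt

a, b = 1.0, 3.0  # case 0 < a < b, so an optimal sigma exists

def Phi(x):
    """Standard normal CDF, written via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def prob(sigma):
    """P(a < X < b) for X ~ N(0, sigma^2)."""
    return Phi(b / sigma) - Phi(a / sigma)

# Crude numerical maximization over sigma > 0: a fine grid search
sigmas = [0.01 * k for k in range(1, 1000)]
sigma_numeric = max(sigmas, key=prob)

# Closed form: sigma_*^2 = (a^2 - b^2) / (2 log(a/b))
sigma_closed = sqrt((a**2 - b**2) / (2 * log(a / b)))

print(sigma_numeric, sigma_closed)  # should agree up to the grid resolution
```

For these values the closed form gives $\sigma_*\approx1.908$, and the grid search lands on the nearest grid point.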