Question: Consider $n$ i.i.d. random variables $x_1, x_2,\cdots, x_n\sim \mathcal{N}(0,1)$. The expected value of the first order statistic (the maximum) can be written as
$E(\mathcal{O}^n_1)=\int_{-\infty}^{+\infty}nx\Phi(x)^{n-1}\phi(x)\:dx$.
I wish to show that for $n_1<n_2<n_3$,
$E(\mathcal{O}^{n_1}_1)<E(\mathcal{O}^{n_2}_1)<E(\mathcal{O}^{n_3}_1)$, and
$E(\mathcal{O}^{n_3}_1)-E(\mathcal{O}^{n_2}_1)<E(\mathcal{O}^{n_2}_1)-E(\mathcal{O}^{n_1}_1)$,
where $\mathcal{O}^{n_1}_1$ is the first order statistic (the maximum) for a sample of size $n_1$.
Progress so far: I've managed to prove the first inequality by invoking the concept of first-order stochastic dominance, but I still have no progress on the second.
Any help will be greatly appreciated. Thanks!
For the first inequality, you could (prove and) use the identity $$E(\mathcal O^n)=\int_\mathbb R(\mathbf 1_{x\gt0}-\Phi(x)^n)\,\mathrm dx,$$ and the fact that, for every $x$, the sequence $(\Phi(x)^n)$ is decreasing. (Hint: Start from the identity in your post and integrate by parts, using the functions $u=x$ and $v=\Phi(x)^n$.)
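As a numerical sanity check of both the identity and the monotonicity (a sketch using only the Python standard library; the helper names and the plain trapezoid quadrature are my own choices, not part of the argument):

```python
import math

def Phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def trapezoid(f, a=-12.0, b=12.0, steps=100000):
    """Plain trapezoid rule on [a, b]; the tails beyond |x|=12 are negligible here."""
    h = (b - a) / steps
    s = 0.5 * (f(a) + f(b))
    for i in range(1, steps):
        s += f(a + i * h)
    return s * h

def E_max_direct(n):
    # E(O^n) = \int n x Phi(x)^{n-1} phi(x) dx  (the formula in the question)
    return trapezoid(lambda x: n * x * Phi(x) ** (n - 1) * phi(x))

def E_max_identity(n):
    # E(O^n) = \int (1_{x>0} - Phi(x)^n) dx  (the identity above)
    return trapezoid(lambda x: (1.0 if x > 0 else 0.0) - Phi(x) ** n)

# The two expressions agree, and E(O^n) is increasing in n:
print(E_max_direct(2), E_max_identity(2))   # both close to 1/sqrt(pi) ~ 0.5642
print(E_max_direct(2) < E_max_direct(3) < E_max_direct(4))  # True
```

(For $n=2$ the exact value $1/\sqrt\pi$ is classical, which makes it a convenient check.)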
The second inequality cannot hold in full generality: $\mathcal O^n\to+\infty$ almost surely when $n\to\infty$, so with $n_1$ and $n_2$ fixed the left-hand side diverges as $n_3\to\infty$ while the right-hand side stays fixed, and the inequality becomes absurd.
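To see the failure concretely (again standard-library Python only; the particular choice $n_1=2$, $n_2=3$, $n_3=5$ is just an illustrative counterexample, not the smallest possible in any canonical sense):

```python
import math

def Phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def E_max(n, a=-12.0, b=12.0, steps=100000):
    # E(O^n) = \int (1_{x>0} - Phi(x)^n) dx, trapezoid rule
    h = (b - a) / steps
    s = 0.0
    for i in range(steps + 1):
        x = a + i * h
        w = 0.5 if i in (0, steps) else 1.0
        s += w * ((1.0 if x > 0 else 0.0) - Phi(x) ** n)
    return s * h

# With n1 = 2 and n2 = 3 fixed, the gap E(O^{n3}) - E(O^{n2})
# already exceeds E(O^{n2}) - E(O^{n1}) at n3 = 5:
e2, e3, e5 = E_max(2), E_max(3), E_max(5)
print(e5 - e3 > e3 - e2)  # True
```

Numerically $E(\mathcal O^2)\approx 0.564$, $E(\mathcal O^3)\approx 0.846$, $E(\mathcal O^5)\approx 1.163$, so the second gap ($\approx 0.317$) is already larger than the first ($\approx 0.282$).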