Expectation of Brownian motion increments under a permutation.


Tl;dr, here is my question. Let $$\gamma(x):=\frac{e^{-x^2/2}}{\sqrt{2\pi}}$$ denote the standard Gaussian density, fix an integer $n\in\mathbb N$, and let $(B_t)_{t\geq0}$ denote a standard Brownian motion. Given any permutation $\sigma\in S_n$ on $n$ symbols, does it hold that \begin{align*} E[\gamma(B_{\sigma(1)})\gamma(B_{\sigma(2)}-B_{\sigma(1)})\cdots\gamma(B_{\sigma(n)}-B_{\sigma(n-1)})] &\leq E[\gamma(B_1)\gamma(B_2-B_1)\cdots\gamma(B_n-B_{n-1})]\\ &=E[\gamma(B_1)]^n?\tag{1} \end{align*}
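(For what it's worth, the inequality can be checked numerically for small $n$: writing the product of densities as $(2\pi)^{-n/2}e^{-\|C_\sigma x\|^2/2}$, where row $k$ of $C_\sigma$ encodes the increment $x_{\sigma(k)}-x_{\sigma(k-1)}$, the standard Gaussian integral gives $E[\cdots]=(2\pi)^{-n/2}\det(C_\sigma^\top C_\sigma+C_{\mathrm{id}}^\top C_{\mathrm{id}})^{-1/2}$. The sketch below assumes this formula; the helper names are mine.)

```python
# Sanity check of inequality (1): the density of (B_1,...,B_n) is
# gamma(x_1)gamma(x_2-x_1)...gamma(x_n-x_{n-1}) = (2pi)^{-n/2} exp(-|D x|^2 / 2),
# where row k of D is e_k - e_{k-1}.  For a permutation sigma, the integrand of
# the expectation is (2pi)^{-n} exp(-x^T (C^T C + D^T D) x / 2), so the Gaussian
# integral gives E[...] = (2pi)^{-n/2} det(C^T C + D^T D)^{-1/2}.
from itertools import permutations
import numpy as np

def increment_matrix(sigma):
    """Rows encode x_{sigma(1)}, x_{sigma(2)} - x_{sigma(1)}, ... (0-indexed)."""
    n = len(sigma)
    C = np.zeros((n, n))
    C[0, sigma[0]] = 1.0
    for k in range(1, n):
        C[k, sigma[k]] += 1.0
        C[k, sigma[k - 1]] -= 1.0
    return C

def expectation(sigma):
    n = len(sigma)
    C = increment_matrix(sigma)
    D = increment_matrix(tuple(range(n)))
    M = C.T @ C + D.T @ D
    return (2 * np.pi) ** (-n / 2) / np.sqrt(np.linalg.det(M))

n = 4
identity = tuple(range(n))
values = {s: expectation(s) for s in permutations(range(n))}
# The identity permutation attains the maximum, which equals E[gamma(B_1)]^n.
assert all(values[identity] + 1e-12 >= v for v in values.values())
assert np.isclose(values[identity], (2 * np.sqrt(np.pi)) ** (-n))
```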


My reason for asking is a comparison of the following two computations. On the one hand, consider the integral $$I=\int_{\mathbb R^n}\left(\sum_{\sigma\in S_n}\gamma(x_{\sigma(1)})\gamma(x_{\sigma(2)}-x_{\sigma(1)})\cdots\gamma(x_{\sigma(n)}-x_{\sigma(n-1)})\right)^2~d x_1\cdots dx_n,$$ where $S_n$ denotes the symmetric group of permutations on $n$ symbols.

By Jensen's inequality (applied to the average of the $n!$ terms), we have $$I\leq n!\sum_{\sigma\in S_n}\int_{\mathbb R^n}\gamma(x_{\sigma(1)})^2\gamma(x_{\sigma(2)}-x_{\sigma(1)})^2\cdots\gamma(x_{\sigma(n)}-x_{\sigma(n-1)})^2~d x_1\cdots dx_n,$$ which, up to relabeling the $x_i$ variables, gives $$I\leq (n!)^2\int_{\mathbb R^n}\gamma(x_{1})^2\gamma(x_{2}-x_{1})^2\cdots\gamma(x_{n}-x_{n-1})^2~d x_1\cdots dx_n.$$ Then, recognizing $\gamma(x_1)\gamma(x_2-x_1)\cdots\gamma(x_n-x_{n-1})$ as the density of $(B_1,\ldots,B_n)$ and using that the increments $B_k-B_{k-1}$ are i.i.d. standard Gaussians, \begin{align*} \int_{\mathbb R^n}\gamma(x_{1})^2\gamma(x_{2}-x_{1})^2\cdots\gamma(x_{n}-x_{n-1})^2~d x_1\cdots dx_n &=E[\gamma(B_1)\gamma(B_2-B_1)\cdots\gamma(B_n-B_{n-1})]\\ &=E[\gamma(B_1)]^n,\end{align*} so we conclude that $$I\leq(n!)^2E[\gamma(B_1)]^n.\tag{2}$$
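The identity $E[\gamma(B_1)\cdots\gamma(B_n-B_{n-1})]=E[\gamma(B_1)]^n$ is easy to confirm by Monte Carlo, using that the increments are i.i.d. $N(0,1)$ and $E[\gamma(B_1)]=\int\gamma(x)^2\,dx=\frac{1}{2\sqrt\pi}$. A minimal sketch (function names are mine):

```python
# Monte Carlo check that E[gamma(B_1) gamma(B_2-B_1) ... gamma(B_n-B_{n-1})]
# factors as E[gamma(B_1)]^n: the increments are i.i.d. N(0,1), and
# E[gamma(B_1)] = int gamma(x)^2 dx = 1 / (2 sqrt(pi)).
import numpy as np

def gamma_density(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

rng = np.random.default_rng(0)
n, samples = 3, 10**6
increments = rng.standard_normal((samples, n))   # B_k - B_{k-1}, i.i.d. N(0,1)
estimate = gamma_density(increments).prod(axis=1).mean()
exact = (2 * np.sqrt(np.pi)) ** (-n)
assert abs(estimate - exact) < 1e-3
```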


On the other hand, expanding the square in $I$ directly and relabeling indices (with the conventions $x_0:=0$ and $\sigma(0):=0$), we get \begin{align*} I&=n!\sum_{\sigma\in S_n}\int_{\mathbb R^n}\prod_{k=1}^n\gamma(x_k-x_{k-1})\gamma(x_{\sigma(k)}-x_{\sigma(k-1)})~d x_1\cdots dx_n\\ &=n!\sum_{\sigma\in S_n}E[\gamma(B_{\sigma(1)})\gamma(B_{\sigma(2)}-B_{\sigma(1)})\cdots\gamma(B_{\sigma(n)}-B_{\sigma(n-1)})]. \end{align*} Comparing this with inequality $(2)$ suggests that $(1)$ might be true, but I have no probabilistic intuition or explanation for why it should or should not hold.
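As a cross-check of this expansion for $n=2$: the closed form $E[\gamma(B_1)\gamma(B_2-B_1)]=\frac{1}{4\pi}$, together with $E[\gamma(B_2)\gamma(B_1-B_2)]=\frac{1}{2\pi\sqrt5}$ (my computation, by a direct Gaussian integral with quadratic form $\begin{psmallmatrix}3&-2\\-2&3\end{psmallmatrix}$), should reproduce the quadrature value of $I$. A sketch:

```python
# Cross-check of the expansion for n = 2: compute I once by two-dimensional
# quadrature of the squared symmetrized density, and once as
# n! * sum_sigma E[gamma(B_sigma(1)) gamma(B_sigma(2) - B_sigma(1))], using the
# closed forms 1/(4 pi) (identity) and 1/(2 pi sqrt(5)) (transposition).
import numpy as np

def gamma_density(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# Grid quadrature; the integrand decays like a Gaussian, so [-8, 8] is ample.
t = np.linspace(-8.0, 8.0, 1601)
x1, x2 = np.meshgrid(t, t, indexing="ij")
f = (gamma_density(x1) * gamma_density(x2 - x1)
     + gamma_density(x2) * gamma_density(x1 - x2))
h = t[1] - t[0]
I_quadrature = (f**2).sum() * h * h   # Riemann sum; f vanishes at the boundary

I_closed_form = 2 * (1 / (4 * np.pi) + 1 / (2 * np.pi * np.sqrt(5)))
assert abs(I_quadrature - I_closed_form) < 1e-4
```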

Answer:
This is in fact true, and follows from the following more general claim.

Let $A$ be an $n\times n$ matrix with $|\det A|=1$. If $G=(G_1,\ldots,G_n)$ is a standard Gaussian vector, then $$(2\pi)^{-n/2}E[e^{-\|A G\|^2/2}]\leq (2\pi)^{-n/2}E[e^{-\|G\|^2/2}].$$ Inequality $(1)$ follows by taking $G_k:=B_k-B_{k-1}$, which are i.i.d. standard Gaussians, and letting $A$ be the matrix expressing the permuted increments $B_{\sigma(k)}-B_{\sigma(k-1)}$ in terms of the $G_k$; this $A$ has entries in $\{-1,0,1\}$ and determinant $\pm1$.
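The claim can also be checked numerically: the Gaussian integral gives the closed form $E[e^{-\|AG\|^2/2}]=\det(I+A^\top A)^{-1/2}$, which for $A=I$ equals $2^{-n/2}$. Dropping the common $(2\pi)^{-n/2}$ factor, a quick sketch with random matrices rescaled to $|\det A|=1$:

```python
# Check of the claim E[exp(-|AG|^2/2)] <= E[exp(-|G|^2/2)] when |det A| = 1:
# the Gaussian integral gives E[exp(-|AG|^2/2)] = det(I + A^T A)^{-1/2},
# and the right-hand side (A = I) equals 2^{-n/2}.
import numpy as np

rng = np.random.default_rng(1)
n = 5
rhs = 2 ** (-n / 2)
for _ in range(100):
    A = rng.standard_normal((n, n))
    A /= abs(np.linalg.det(A)) ** (1 / n)      # rescale so that |det A| = 1
    lhs = 1 / np.sqrt(np.linalg.det(np.eye(n) + A.T @ A))
    assert lhs <= rhs + 1e-12
```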

Indeed, by a simple application of Cauchy-Schwarz and the change of variables $y=Ax$ (which preserves Lebesgue measure since $|\det A|=1$), \begin{align} (2\pi)^{-n/2}E[e^{-\|A G\|^2/2}]&=\int\frac{e^{-\|Ax\|^2/2}}{(2\pi)^{n/2}}\frac{e^{-\|x\|^2/2}}{(2\pi)^{n/2}}~dx\\ &\leq\left(\int\frac{e^{-\|Ax\|^2}}{(2\pi)^{n}}~d x\right)^{1/2}\left(\int\frac{e^{-\|x\|^2}}{(2\pi)^{n}}~dx\right)^{1/2}\\ &=\left(\int\frac{e^{-\|x\|^2}}{(2\pi)^{n}}~d x\right)^{1/2}\left(\int\frac{e^{-\|x\|^2}}{(2\pi)^{n}}~dx\right)^{1/2}\\ &=\int\frac{e^{-\|x\|^2/2}}{(2\pi)^{n/2}}\frac{e^{-\|x\|^2/2}}{(2\pi)^{n/2}}~dx\\ &=(2\pi)^{-n/2}E[e^{-\|G\|^2/2}]. \end{align}