Slepian's Lemma states that if one has two Gaussian vectors $\vec X, \vec Y$ such that
- the means and variances of $\vec X_i, \vec Y_i$ agree for each $i$, and
- the covariances satisfy $\mathbb{E}[X_iX_j] \leq \mathbb{E}[Y_iY_j]$ for each pair $i \neq j$,
then for every choice of thresholds $u_i$,
$$\Pr[\cup_i\{X_i>u_i\}]\geq \Pr[\cup_i\{Y_i>u_i\}].$$
One application of this is for tail-bounding Gaussians. If we want an upper bound on $\Pr[\cup_i\{Y_i>u_i\}]$ for a vector $\vec Y$ with complicated coordinate-wise dependencies (and nonnegative covariances, so that an independent vector satisfies the hypotheses), Slepian's Lemma allows us to replace $\vec Y$ with an independent $\vec X$, whose union probability is easy to compute.
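As a quick Monte Carlo sanity check of this comparison (my own illustration; the dimension, equicorrelation $\rho$, and threshold below are arbitrary choices, not from the lemma), we can compare an equicorrelated Gaussian vector $\vec Y$ against an independent $\vec X$ with the same means and variances:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n, rho, u = 200_000, 5, 0.8, 1.0

# Y: equicorrelated standard Gaussians, E[Y_i Y_j] = rho, built from a shared factor.
shared = rng.standard_normal((n_samples, 1))
noise = rng.standard_normal((n_samples, n))
Y = np.sqrt(rho) * shared + np.sqrt(1 - rho) * noise  # Var(Y_i) = 1, Cov(Y_i, Y_j) = rho

# X: independent standard Gaussians (same means/variances, smaller covariances).
X = rng.standard_normal((n_samples, n))

p_X = np.mean((X > u).any(axis=1))  # estimate of Pr[∪_i {X_i > u}]
p_Y = np.mean((Y > u).any(axis=1))  # estimate of Pr[∪_i {Y_i > u}]
print(p_X, p_Y)  # Slepian predicts p_X >= p_Y
```

The gap is substantial here: positive correlation makes the coordinates exceed the threshold together, shrinking the union probability.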
This seemed cool, and it was how I motivated the lemma to myself when I first came across it. On further thought, though, this reasoning doesn't hold up. The natural way to bound $\Pr[\cup_i \{X_i>u_i\}]$ is the union bound, which
- is relatively tight for the suprema of Gaussians, and
- also doesn't care about dependence between coordinates.
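To make the "relatively tight" claim in the first bullet concrete, here is a small numeric check (my own illustration; the values $n = 100$ and $u = 3$ are arbitrary): for $n$ i.i.d. standard Gaussians, compare the exact probability $1-(1-p)^n$ against the union bound $np$, where $p = \Pr[N(0,1) > u]$.

```python
from math import erfc, sqrt

n, u = 100, 3.0
p = 0.5 * erfc(u / sqrt(2))   # Pr[N(0,1) > u] via the complementary error function
exact = 1 - (1 - p) ** n      # Pr[max of n i.i.d. standard Gaussians > u]
bound = n * p                 # union bound
print(exact, bound, bound / exact)
```

Even with $n = 100$ coordinates, the union bound overshoots the exact answer by only a few percent at this threshold, and the overshoot shrinks as $u$ grows.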
So Slepian's lemma does not seem particularly useful in the setting described above: the union bound already handles it.
When is it useful then?