Convergence of a sequence involving the maximum of i.i.d. Gaussian random variables


It's well known that, for a sequence $X_1,\ldots,X_n$ of i.i.d. standard Gaussian random variables with $X_\max=\max(X_1,\ldots,X_n)$, the following convergence result holds:

$$P\left(\lim_{n\rightarrow\infty}\frac{X_\max}{\sqrt{2\log n}}=1\right)=1$$

that is, $\frac{X_\max}{\sqrt{2\log n}}\rightarrow1$ almost surely (for a proof of this convergence, see Example 4.4.1 in Galambos, "The Asymptotic Theory of Extreme Order Statistics").

I am wondering what happens to the following limit:

$$L=\lim_{n\rightarrow\infty}\left[\left(\frac{X_\max}{\sqrt{2\log n}}-1\right)f(n)\log(n)\right]$$ where $f(n)=o(1)$.

Is $L=0$ or infinite? Does it depend on $f(n)$? I am not sure how to deal with the indeterminate form here...
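For intuition, here is a quick Monte Carlo sketch, taking $f(n)=1/\log\log n$ as one illustrative $o(1)$ choice (nothing conclusive, just a numerical probe):

```python
import numpy as np

# Probe the ratio X_max / sqrt(2 log n) and the scaled quantity
# (X_max / sqrt(2 log n) - 1) * f(n) * log n with f(n) = 1 / log(log(n)).
rng = np.random.default_rng(0)

for n in [10**3, 10**5, 10**7]:
    x_max = rng.standard_normal(n).max()
    ratio = x_max / np.sqrt(2 * np.log(n))
    scaled = (ratio - 1) * np.log(n) / np.log(np.log(n))
    print(f"n = {n:>8}: X_max/sqrt(2 log n) = {ratio:.4f}, scaled = {scaled:+.4f}")
```

The single-sample values fluctuate, but with this choice of $f(n)$ the scaled quantity stays bounded rather than exploding or vanishing, which suggests $1/\log\log n$ is the critical scale; the answers below pin this down.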


Best answer

Let $M_n=\max\{X_k;1\leqslant k\leqslant n\}$ and let us first recall how the first-order asymptotics of $M_n$ is obtained. For every $x$, $$ P[M_n\leqslant x]=P[X_1\leqslant x]^n, $$ and standard estimates of the Gaussian tail show that, when $x\to\infty$, $$ P[X_1\gt x]=1/\theta(x),\qquad \theta(x)\sim x\sqrt{2\pi}\,\mathrm e^{x^2/2}. $$

Thus, if $\theta(u_n)\ll n$, then $P[M_n\leqslant u_n]\to0$, while if $\theta(v_n)\gg n$, then $P[M_n\leqslant v_n]\to1$. This holds with $u_n=(1-\varepsilon)\sqrt{2\log n}$ and $v_n=(1+\varepsilon)\sqrt{2\log n}$, for every positive $\varepsilon$, hence $M_n/\sqrt{2\log n}$ converges in probability to $1$.
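As a quick numerical check of this tail estimate, here is a minimal sketch; scipy's `norm.sf` gives the exact tail probability:

```python
import numpy as np
from scipy.stats import norm

# Check P[X_1 > x] = 1/theta(x) with theta(x) ~ x * sqrt(2*pi) * exp(x^2/2):
# the ratio exact/approx should tend to 1 as x grows.
for x in [2.0, 4.0, 6.0, 8.0]:
    exact = norm.sf(x)                                   # exact P[X_1 > x]
    approx = np.exp(-x**2 / 2) / (x * np.sqrt(2 * np.pi))
    print(f"x = {x}: exact = {exact:.3e}, approx = {approx:.3e}, ratio = {exact/approx:.4f}")
```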

To go further, assume that $x_n=(1+z_n)\sqrt{2\log n}$, with $z_n\to0$. Then, $$ n^{-1}\theta(x_n)\sim2\sqrt\pi\exp\left( (2z_n+z_n^2)\log n+\tfrac12\log\log n\right). $$ In particular, if $2z_n\log n=t-\tfrac12\log\log n$ for some fixed $t$, then $n^{-1}\theta(x_n)\sim\sqrt{4\pi}\,\mathrm e^{t}$, hence $P[M_n\leqslant x_n]\to\exp(-\mathrm e^{-t}/\sqrt{4\pi})$. This means that $$ T_n=2\log n\left(\frac{M_n}{\sqrt{2\log n}}-1\right)+\frac12\log\log n+\frac12\log(4\pi) $$ converges in distribution to a random variable $T$ with the standard Gumbel distribution, that is, such that, for every $t$, $$ P[T\leqslant t]=\exp(-\mathrm e^{-t}). $$ In particular, $$ U_n=\frac{\log n}{\log\log n}\left(\frac{M_n}{\sqrt{2\log n}}-1\right)\to-\frac14\ \text{in probability.} $$

Edit: For every $n\geqslant2$, consider the random variable $$ V_n=\frac{\log n}{\log\log n}\left(\frac{X_n}{\sqrt{2\log n}}-1\right). $$ The Gaussian tail asymptotics used above show that, for every fixed $t$, $$ P[V_n\geqslant t]\sim\frac1{2\sqrt\pi\cdot n\cdot(\log n)^{1/2+2t}}. $$ If $t\lt1/4$, the series $\sum\limits_nP[V_n\geqslant t]$ diverges, hence the Borel-Cantelli lemma (difficult part, using the independence of the $V_n$) shows that, almost surely, $V_n\geqslant t$ for infinitely many $n$. Since $U_n\geqslant V_n$, almost surely $U_n\geqslant t$ for infinitely many $n$.

If $t\gt1/4$, the series $\sum\limits_nP[V_n\geqslant t]$ converges, hence the Borel-Cantelli lemma (easy part) shows that, almost surely, $V_n\leqslant t$ for every $n$ large enough. Thus, with positive probability, $V_n\leqslant t$ for every $n$; since the threshold $\sqrt{2\log n}\left(1+t\frac{\log\log n}{\log n}\right)$ is eventually nondecreasing in $n$, a bound on every $X_k$ transfers to $M_n$, hence, with positive probability, $U_n\leqslant t$ for every $n$ large enough. Since $M_n\to\infty$ almost surely, $U_n$ asymptotically does not depend on $(X_i)_{i\leqslant k}$, for every $k$. Thus, $\limsup U_n$ is a tail random variable and, by Kolmogorov's zero-one law, $[\limsup U_n\leqslant t]$ has probability $0$ or $1$, hence probability $1$.

Finally, $$ \limsup\limits_{n\to\infty}U_n=+\frac14\ \text{almost surely.} $$
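Both conclusions can be glimpsed numerically, bearing in mind that Gaussian extremes converge only at logarithmic speed, so the agreement at feasible $n$ is rough. A sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1) Distributional claim: T_n is approximately standard Gumbel.
n, reps = 10**5, 2000
ln = np.log(n)
m = np.array([rng.standard_normal(n).max() for _ in range(reps)])
t_n = 2 * ln * (m / np.sqrt(2 * ln) - 1) + 0.5 * np.log(ln) + 0.5 * np.log(4 * np.pi)
for t in [-1.0, 0.0, 1.0, 2.0]:
    print(f"P[T_n <= {t:+.0f}]: empirical {np.mean(t_n <= t):.3f}, "
          f"Gumbel {np.exp(-np.exp(-t)):.3f}")

# 2) Pathwise claim: along one sample path, U_n hovers loosely around -1/4
#    (the in-probability limit); the excursions toward +1/4 behind the
#    limsup result are rare, hence essentially invisible at feasible n.
x = rng.standard_normal(10**7)
running_max = np.maximum.accumulate(x)
for n in [10**3, 10**4, 10**5, 10**6, 10**7]:
    ln = np.log(n)
    u_n = (ln / np.log(ln)) * (running_max[n - 1] / np.sqrt(2 * ln) - 1)
    print(f"n = {n:>8}: U_n = {u_n:+.4f}")
```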

Second answer

As in @Did's answer, we let $f(n)=(\log \log n)^{-1}$. Also, for consistency with the existing literature (and Did's answer), let $M_n=\max\{X_k;1\leq k\leq n\}$.

Theorem: Almost surely, $$-\frac{1}{4}\leq\liminf_{n\rightarrow\infty}\frac{\log n}{\log \log n}\left(\frac{M_n}{\sqrt{2\log n}}-1\right)\quad\text{and}\quad\limsup_{n\rightarrow\infty}\frac{\log n}{\log \log n}\left(\frac{M_n}{\sqrt{2\log n}}-1\right)\leq\frac{1}{4}.$$ In particular, if the limit exists, it lies in $\left[-\frac{1}{4},\frac{1}{4}\right]$ almost surely.

First we state a couple of helpful lemmas from Embrechts, Klüppelberg and Mikosch's "Modelling Extremal Events". The first follows from the Borel-Cantelli lemma; a sketch of the proof of the second is on page 170, with references to the full proof.

Lemma 1 (abridged Theorem 3.5.1 in Embrechts et al.): Suppose $(u_n)$ is non-decreasing. Then $\sum_{n=1}^\infty P(X_1>u_n)<\infty$ implies that $P(M_n>u_n~~\text{i.o.})=0$.

Lemma 2 (abridged Theorem 3.5.2 in Embrechts et al.): Suppose $(u_n)$ is non-decreasing and that the following conditions hold: $P(X_1\geq u_n)\rightarrow 0$ and $nP(X_1\geq u_n)\rightarrow \infty$. Then $\sum_{n=1}^\infty P(X_1>u_n)\exp[-nP(X_1>u_n)]<\infty$ implies that $P(M_n\leq u_n~~\text{i.o.})=0$.

(i.o. here means "infinitely often").

Proof: We prove the upper bound first, then the lower bound.

For the upper bound, let $u^u_n=\sqrt{2\log n}+\frac{\sqrt{2}(\frac{1}{4}+\epsilon)\log\log n}{\sqrt{\log n}}$ where $\epsilon>0$. Using the standard approximation for the tail of the Gaussian distribution, $P(X_1>x)\approx\frac{1}{\sqrt{2\pi}x}e^{-x^2/2}$, we obtain: $$\begin{array}{rcl}P(X_1>u^u_n)&\approx&\frac{\exp\left[-\log n -2(\frac{1}{4}+\epsilon)\log\log n-\mathcal{O}\left(\frac{(\log\log n)^2}{\log n}\right)\right]}{\sqrt{2\pi}\left(\sqrt{2\log n}+\mathcal{O}\left(\frac{\log\log n}{\sqrt{\log n}}\right)\right)}\\ &=&\frac{1}{C_1n(\log n)^{\frac{1}{2}+2(\frac{1}{4}+\epsilon)}} \end{array}$$ where $C_1=\mathcal{O}(1)$ captures the lower-order terms and the constants. Using the Cauchy condensation test, one can easily check that: $$\sum_{n=1}^\infty n^{-1}(\log n)^{-1-2\epsilon}<\infty$$ By Lemma 1, $P(M_n>u^u_n~~\text{i.o.})=0$. A few arithmetic manipulations yield that: $$\tag{1}P\left(\frac{\log n}{\log \log n}\left(\frac{M_n}{\sqrt{2\log n}}-1\right)>\frac{1}{4}+\epsilon~~\text{i.o.}\right)=0$$
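As a sanity check of this tail estimate, here is a sketch taking $C_1\approx2\sqrt\pi$, its limiting value; the ratio approaches $1$ only at logarithmic speed, so it is still visibly below $1$ at feasible $n$:

```python
import numpy as np
from scipy.stats import norm

# Compare P(X_1 > u_n^u) with 1 / (2*sqrt(pi) * n * (log n)^(1 + 2*eps)).
eps = 0.1
for n in [10**4, 10**6, 10**8, 10**12]:
    ln = np.log(n)
    u = np.sqrt(2 * ln) + np.sqrt(2) * (0.25 + eps) * np.log(ln) / np.sqrt(ln)
    tail = norm.sf(u)
    predicted = 1 / (2 * np.sqrt(np.pi) * n * ln ** (1 + 2 * eps))
    print(f"n = {n:.0e}: ratio tail/predicted = {tail / predicted:.3f}")
```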

For the lower bound, let $u^l_n=\sqrt{2\log n}-\frac{\sqrt{2}(\frac{1}{4}+\epsilon)\log\log n}{\sqrt{\log n}}$ where $\epsilon>0$. Again using the same approximation for the Gaussian tail, we obtain: $$\begin{array}{rcl}P(X_1>u^l_n)&\approx&\frac{\exp\left[-\log n +2(\frac{1}{4}+\epsilon)\log\log n-\mathcal{O}\left(\frac{(\log\log n)^2}{\log n}\right)\right]}{\sqrt{2\pi}\left(\sqrt{2\log n}+\mathcal{O}\left(\frac{\log\log n}{\sqrt{\log n}}\right)\right)}\\ &=&\frac{(\log n)^{2\epsilon}}{C_2n} \end{array}$$ where $C_2=\mathcal{O}(1)$ captures the lower-order terms and the constants (note that the exponent is $2(\frac{1}{4}+\epsilon)-\frac{1}{2}=2\epsilon$). Clearly $P(X_1>u^l_n)\rightarrow 0$ and $nP(X_1>u^l_n)\rightarrow\infty$, so the conditions necessary for Lemma 2 are satisfied. Now,

$$P(X_1>u^l_n)\exp[-nP(X_1>u^l_n)]=\frac{(\log n)^{2\epsilon}}{C_2 ne^{(\log n)^{2\epsilon}/C_2}}$$

Again employing the Cauchy condensation test, we can show that: $$\sum_{n=1}^\infty \frac{(\log n)^{2\epsilon}}{C_2 ne^{(\log n)^{2\epsilon}/C_2}}<\infty$$ By Lemma 2, $P(M_n<u^l_n~~\text{i.o.})=0$. A few arithmetic manipulations yield that: $$\tag{2}P\left(\frac{\log n}{\log \log n}\left(\frac{M_n}{\sqrt{2\log n}}-1\right)<-\frac{1}{4}-\epsilon~~\text{i.o.}\right)=0$$ Combining (1) and (2) yields the desired result.
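To see the condensation step numerically, one can print the condensed terms $2^k a_{2^k}$; a sketch with the illustrative values $\epsilon=1/2$ and $C_2=4$, chosen only so the decay is visible at small $k$ (for smaller $\epsilon$ the decay sets in later, but the series still converges):

```python
import numpy as np

# a_n = (log n)^(2*eps) / (C_2 * n) * exp(-(log n)^(2*eps) / C_2), so after
# condensation 2^k * a_{2^k} = L/C_2 * exp(-L/C_2) with L = (k log 2)^(2*eps),
# which decays like exp(-(k log 2)^(2*eps) / C_2): summable in k.
eps, C2 = 0.5, 4.0   # illustrative values; C_2 is some O(1) constant, not derived here
for k in [1, 10, 50, 100, 200]:
    L = (k * np.log(2)) ** (2 * eps)
    print(f"k = {k:>3}: 2^k * a_(2^k) = {L / C2 * np.exp(-L / C2):.3e}")
```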

Remark: This demonstrates a bound on the limit if the limit exists (in fact, as Did points out, this shows that $\limsup$ is $\leq 1/4$ and $\liminf$ is $\geq -1/4$). I wonder whether one can determine if the limit in fact converges to some fixed number $C\in\left[-\frac{1}{4},\frac{1}{4}\right]$ or "bounces around" (akin to a sine wave). Perhaps Did's convergence in probability to $-\frac{1}{4}$ can somehow be leveraged here?