Almost sure convergence of maximum in a sequence of Gaussian random variables


Let $X_1, X_2,\ldots,X_n$ be an i.i.d. sequence of standard Gaussian variables and $M_n=\max(X_1, X_2,\ldots,X_n)$. I am trying to understand the mechanics of the proof of the almost sure convergence $\frac{M_n}{\sqrt{2\ln n}}\rightarrow 1$. In particular, I am using Example 3.5.4 on page 174 of Embrechts, Klüppelberg and Mikosch's "Modelling Extremal Events". For now I am just concerned with how $\limsup_{n\rightarrow\infty}\frac{M_n}{\sqrt{2\ln n}}=1$ almost surely (I hope that if I understand this, then I'll understand why $\liminf_{n\rightarrow\infty}\frac{M_n}{\sqrt{2\ln n}}=1$ almost surely as well).
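As a sanity check on the claimed limit I ran a quick simulation of my own (this is just a sketch, not from the book); the ratio $M_n/\sqrt{2\ln n}$ does drift toward $1$, though very slowly:

```python
import math
import random

random.seed(0)

# Track the running maximum M_n of i.i.d. standard normals and
# compare it against the claimed normalization sqrt(2 ln n).
running_max = float("-inf")
checkpoints = {10**k for k in range(2, 7)}
ratios = {}
for n in range(1, 10**6 + 1):
    running_max = max(running_max, random.gauss(0.0, 1.0))
    if n in checkpoints:
        ratios[n] = running_max / math.sqrt(2.0 * math.log(n))

for n in sorted(ratios):
    print(n, round(ratios[n], 3))
```

One should expect the ratio to still sit somewhat below $1$ even at $n=10^6$, because the centering constant for $M_n$ carries a second-order $\ln\ln n$ correction below $\sqrt{2\ln n}$.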

First the authors choose:

$$u_n(\epsilon)=\sqrt{2\ln\left(\frac{(\ln_0n\ln_1n\cdots\ln_rn)\ln_r^\epsilon n}{\sqrt{\ln n}}\right)}$$ where $r\geq0$ is an integer, $\ln_r^\epsilon n$ means $(\ln_r n)^\epsilon$, and $\ln_k x$ denotes the iterated logarithm defined as follows: $$\ln_0x=x,~~\ln_1x=\max(0,\ln x),~~\ln_k x=\max(0,\ln(\ln_{k-1}x))$$

On page 175 they state:

An application of Theorem 3.5.1 together with (3.64) yields $$P(M_n>u_n(\epsilon)~~\text{i.o.})=0~~\text{or}~~=1$$ according as $\epsilon>0$ or $\epsilon<0$ for small $|\epsilon|$, and, hence, by Corollary 3.5.3 [the convergence follows].

Their equation (3.64) is the well-known approximation of the tail of standard Gaussian distribution: $P(X_n>x)\sim\frac{1}{\sqrt{2\pi}x}e^{-x^2/2}$. Theorem 3.5.1 is on page 169 and states:

Suppose that a sequence $u_n$ is non-decreasing. Then $$P(M_n>u_n~~\text{i.o.})=P(X_n>u_n~~\text{i.o.})$$ [where $X_n$ are i.i.d. non-degenerate r.v.'s but not necessarily Gaussian]. In particular, $$P(M_n>u_n~~\text{i.o.})=0~~\text{or}~~=1$$ according as $$\sum_{n=1}^\infty P(X>u_n)<\infty~~\text{or}~~=\infty$$

The authors point out that the second statement comes from Borel-Cantelli lemma and its partial converse for independent events, and prove the first statement.
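As an aside, the tail approximation (3.64) is easy to check numerically with the standard library alone (my own quick check, not from the book); the ratio of the approximation to the exact tail tends to $1$ from above as $x$ grows:

```python
import math

def gauss_tail(x):
    # Exact standard-normal tail P(X > x), via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def tail_approx(x):
    # Right-hand side of (3.64): exp(-x^2/2) / (sqrt(2 pi) x).
    return math.exp(-x * x / 2.0) / (math.sqrt(2.0 * math.pi) * x)

for x in (2.0, 4.0, 6.0):
    print(x, gauss_tail(x), tail_approx(x), tail_approx(x) / gauss_tail(x))
```

The overshoot shrinks like $1/x^2$, which is consistent with the approximation being an asymptotic equivalence rather than a bound that is tight for small $x$.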

The first part of Corollary 3.5.3 starts on page 173, part of which is cut off in Google Books. The relevant statement is:

(a) Assume that the sequences $u_n(\epsilon)=c_n(1+\epsilon)+d_n$, $n\in\mathbb{N}$, are non-decreasing for every $\epsilon\in(-\epsilon_0,\epsilon_0)$. Then the relation $$\sum_{n=1}^\infty P(X>u_n(\epsilon))<\infty~~\text{or}~~=\infty$$ according as $\epsilon\in(0,\epsilon_0)$ or $\epsilon\in(-\epsilon_0,0)$ implies that $$\limsup_{n\rightarrow\infty}c_n^{-1}(M_n-d_n)=1~~\text{a.s.}$$

I am confused as to how to interpret the above. Do we need to show both that $P(X_n>u_n(\epsilon))\rightarrow 0$ when $\epsilon>0$ and that $P(X_n>u_n(\epsilon))\rightarrow 1$ when $\epsilon<0$, or does it suffice to show just one of the two? By plugging $r=0$ into the $u_n(\epsilon)$ at the top of this post, I can see that the first statement is true (i.e. when $\epsilon>0$, $\lim_{n\rightarrow\infty}P(X_n>u_n(\epsilon))=0$); however, I find that when $\epsilon<0$ the limit is also zero (or did I make a mistake in taking the limit?).
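For what it's worth, here is a numerical probe I tried (my own sketch, using the exact Gaussian tail and the $r=0$ specialization $u_n(\epsilon)=\sqrt{2(1+\epsilon)\ln n-\ln\ln n}$): the partial sums of $P(X>u_n(\epsilon))$ appear to stabilize for $\epsilon=1/2$ and to grow without bound for $\epsilon=-1/2$:

```python
import math

def u_n(n, eps):
    # r = 0 case: u_n(eps) = sqrt(2(1+eps) ln n - ln ln n); start at n >= 3
    # so that ln ln n is defined and the argument of sqrt stays positive.
    return math.sqrt(2.0 * (1.0 + eps) * math.log(n) - math.log(math.log(n)))

def gauss_tail(x):
    # Exact standard-normal tail P(X > x).
    return 0.5 * math.erfc(x / math.sqrt(2.0))

for eps in (0.5, -0.5):
    s_small = sum(gauss_tail(u_n(n, eps)) for n in range(3, 10**3))
    s_large = sum(gauss_tail(u_n(n, eps)) for n in range(3, 10**5))
    print(eps, s_small, s_large, s_large - s_small)
```

This seems to suggest that the relevant object in Theorem 3.5.1 is the convergence or divergence of the series $\sum_n P(X>u_n(\epsilon))$, not the limit of the individual terms, since the terms tend to $0$ in both cases.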

Another point of confusion: when $r=0$ is plugged in, $u_n(\epsilon)=\sqrt{2(1+\epsilon)\ln n-\ln\ln n}$, which doesn't match the normalizing factor $\sqrt{2\ln n}$ in the statement we are trying to prove. What am I missing? Can someone clarify? Perhaps there is a better reference for these types of results?