How to numerically test a limsup? (Example: numerical simulation of the law of the iterated logarithm)


I have a random walk $S_n$ (the increments are Bernoulli $\pm 1$ with probability $1/2$ each). I'd like to test numerically the law of the iterated logarithm:

$$\limsup_{n \rightarrow \infty} \underbrace{\frac{S_n}{\sqrt{2 n (\log \log n)}}}_{Y_n} = 1, \qquad \rm{a.s.}$$

My attempts have failed (see this question): in a numerical simulation you can never evaluate the quantity that the $\limsup$ requires, because computer memory is not infinite, namely:

$$Z_k=\sup_{\ell \geq k}Y_{\ell}$$

but only:

$$Y_{k,n}=\max_{k\leq \ell \leq n}Y_\ell$$

Question: how can you do a simulation that showcases that the $\limsup$ is $1$? (and produces a plot showing convergence to $1$, unlike this failed attempt)


Sidenote: in my case, the increments are not exactly independent, but close to it. I'd like to numerically test whether a law-of-the-iterated-logarithm-like result holds. For now, though, I would already be more than happy to get numerical evidence of the standard law in the standard case where the increments are independent.
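For concreteness, the kind of weakly dependent increments I have in mind could be mimicked by a two-state Markov chain on $\{-1,+1\}$ that flips sign with probability $p$ (a toy sketch only; $p = 0.5$ recovers i.i.d. increments, and the parameter $p$ is illustrative, not my actual model):

```python
import numpy as np

def correlated_walk(N, p=0.45, seed=0):
    """Random walk whose +/-1 increments form a Markov chain:
    each step flips the previous sign with probability p."""
    rng = np.random.default_rng(seed)
    flips = rng.random(N) < p                            # True where the sign flips
    signs = np.where(np.cumsum(flips) % 2 == 0, 1, -1)   # sign after each flip
    signs = signs * rng.choice([-1, 1])                  # random initial sign, keeps symmetry
    return np.cumsum(signs)

S = correlated_walk(10**5)
```

With $p$ close to $1/2$ consecutive increments have correlation $1 - 2p$, close to zero, which is the "close to independent" regime I mean.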

Sidenote2: code for failed attempt:

import numpy as np
import matplotlib.pyplot as plt
N = 10*1000*1000
B = 2 * np.random.binomial(1, 0.5, N) - 1       # N independent +1/-1 steps, each with probability 1/2
B = np.cumsum(B)                                # random walk S_n
plt.plot(B); plt.show()
n = np.arange(3, N + 1)                         # log(log(n)) is only defined for n >= 3
C = B[2:] / np.sqrt(2 * n * np.log(np.log(n)))  # Y_n
M = np.maximum.accumulate(C[::-1])[::-1]        # running max over the tail: M[k] = Y_{k,N}, see http://stackoverflow.com/questions/35149843/running-max-limsup-in-numpy-what-optimization
plt.plot(M); plt.show()
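Since memory is the stated obstacle, one workaround I have considered is to fix a single $k$ and compute $Y_{k,n}$ in a streaming fashion, generating the walk in chunks and never holding it whole in memory. A minimal sketch (the values of `N`, `k` and `chunk` are arbitrary illustrative choices):

```python
import numpy as np

# Streaming evaluation of Y_{k,N} = max_{k <= l <= N} Y_l for one fixed k,
# without storing the whole walk.
rng = np.random.default_rng(0)
N = 10**7          # total number of steps; can far exceed available memory
k = N // 10        # only track the max of Y_l over the tail l >= k
chunk = 10**6

S = 0              # current value of the walk
running_max = -np.inf
n_done = 0
while n_done < N:
    steps = 2 * rng.integers(0, 2, size=chunk) - 1   # +/-1 increments
    walk = S + np.cumsum(steps)                      # S_{n_done+1}, ..., S_{n_done+chunk}
    n = n_done + np.arange(1, chunk + 1)
    mask = n >= max(k, 3)                            # log(log(n)) needs n >= 3
    if mask.any():
        Y = walk[mask] / np.sqrt(2 * n[mask] * np.log(np.log(n[mask])))
        running_max = max(running_max, Y.max())
    S = walk[-1]
    n_done += chunk

print(running_max)   # an estimate of Y_{k,N}
```

This only tells me $Y_{k,N}$ for one $k$ at a very large $N$; whether that can be turned into a convincing plot of convergence to $1$ is exactly what I am asking.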