Prove that mean maximum coordinate of a random walking point is $O(\sqrt n)$


This is a question similar to my previous one.

Consider a point on a coordinate line. Its starting coordinate is $0$, and it performs $n$ steps, each of which shifts the point either by $1$ to the left or by $1$ to the right, each direction chosen with probability $p=\frac{1}{2}$. Prove that the mean maximum coordinate is $O(\sqrt n)$.
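
As a sanity check (not a proof), a quick Monte Carlo simulation suggests the claim: the ratio $\mathbb E[\max_t S_t]/\sqrt n$ appears to stay bounded as $n$ grows. This is a throwaway `numpy` sketch; the sample sizes are arbitrary.

```python
import numpy as np

def mean_max(n, trials=500, seed=0):
    """Estimate the mean maximum coordinate of an n-step simple random walk."""
    rng = np.random.default_rng(seed)
    # Each row is one walk: n i.i.d. +/-1 steps, then the running positions.
    steps = rng.choice([-1, 1], size=(trials, n))
    positions = np.cumsum(steps, axis=1)
    return positions.max(axis=1).mean()

for n in [100, 1000, 10000]:
    est = mean_max(n)
    print(f"n={n:6d}  E[max]~{est:8.2f}  E[max]/sqrt(n)~{est / np.sqrt(n):.3f}")
```

The ratio hovers around $0.8$, which is consistent with a bound of the form $C\sqrt n$.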

I've considered representing the maximum coordinate as a random variable $\xi=\max_{t=1\ldots n} \sum_{i=1}^t X_i$, where $X_i$ is $\pm 1$, depending on which direction was chosen at the $i$-th step, but $\mathbb E\xi\not=\max_{t=1\ldots n} \mathbb E\sum_{i=1}^t X_i$ as far as I know.

Edit: I have made some progress. Write $\xi_j=\sum_{i=1}^j X_i$ for the position after $j$ steps, and allow $t=0$ in the maximum so that $\xi\geq 0$ (this can only increase the maximum, which is harmless for an upper bound). Then $$\mathbb E\xi=\sum_{i=1}^n P(\xi\geq i)=\sum_{i=0}^{n-1} P(\xi > i)=n-\sum_{i=0}^{n-1} P(\xi\leq i).$$ Now $\xi_j\leq i$ means at least $\lceil\frac{j-i}{2}\rceil$ of the first $j$ steps were made to the left $(-1)$, therefore: $$P(\xi_j\leq i)=\left(\frac{1}{2}\right)^{j}\sum_{k=\lceil (j-i)/2\rceil}^{j}\binom{j}{k}.$$ However, the events $\{\xi_j\leq i\}$ for different $j$ are not independent, so $P(\xi\leq i)$ does not factor as $\prod_j P(\xi_j\leq i)$, and I am stuck here.
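
For small $n$ one can compute $\mathbb E\xi$ exactly by brute-force enumeration of all $2^n$ equally likely walks, which is handy for checking any closed-form attempt. This is a throwaway Python sketch using the original definition $\xi=\max_{1\le t\le n}\sum_{i=1}^t X_i$.

```python
from fractions import Fraction
from itertools import product

def expected_max(n):
    """Exact E[max_{1 <= t <= n} S_t], averaging over all 2^n walks."""
    total = Fraction(0)
    for steps in product((-1, 1), repeat=n):
        position, best = 0, None
        for step in steps:
            position += step
            best = position if best is None else max(best, position)
        total += best
    return total / 2**n

print([expected_max(n) for n in range(1, 9)])
```

For example, with $n=2$ the four walks have maxima $2, 1, 0, -1$, so $\mathbb E\xi = \frac12$.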

Best Answer

Computing this directly is likely to be quite challenging.

Likely the easiest approach is to appeal to Donsker's Invariance Principle, which lets you carry the corresponding result for Brownian motion over to the discrete setting.

The result for Brownian motion, along with references, is stated on Wikipedia as having the closed form:

$$ \mathbf E \left[ \max_{0\leq s \leq t} B_s\right] = \sqrt{\frac{2t}{\pi}}.$$

You may also want to look at the Law of the Iterated Logarithm, and its derivation for inspiration. This regards the limsup of a random walk/Brownian motion and not the maximum at a fixed time.


Sketch Proof

Let $S_n$ denote the simple random walk $$S_n = \sum_{k=1}^n X_k,$$ where the $X_k$ are i.i.d. with $\mathbf P[X_k = 1] = \mathbf P[X_k = -1] = \frac12$.

We define a continuous-time analogue of $S_n$ by linear interpolation between the steps; that is, for $t \geq 0$,

$$S(t) = S_{\lfloor t \rfloor} + (t - \lfloor t \rfloor )\left(S_{\lfloor t \rfloor +1} - S_{\lfloor t \rfloor}\right).$$
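
As a concrete check, the linear interpolation between successive positions can be written in a few lines of Python (a small sketch; `partial_sums` is a hypothetical list holding $S_0,\dots,S_n$):

```python
import math

def interpolate(partial_sums, t):
    """Linearly interpolate the walk at real time t >= 0.

    partial_sums[k] holds S_k, with partial_sums[0] == 0.
    """
    k = math.floor(t)
    frac = t - k
    if frac == 0:
        return partial_sums[k]
    # Move a fraction `frac` of the way from S_k towards S_{k+1}.
    return partial_sums[k] + frac * (partial_sums[k + 1] - partial_sums[k])

# Walk with steps +1, +1, -1, so S = (0, 1, 2, 1).
walk = [0, 1, 2, 1]
print(interpolate(walk, 1.5))  # halfway between S_1 = 1 and S_2 = 2, i.e. 1.5
```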

To apply Donsker's Invariance Principle, we rescale this process so that time runs over the interval $[0,1]$:

$$ S^*_n(t) = \frac{1}{\sqrt{n}} S(nt), \qquad t \in [0,1].$$

We can then write the expectation we are interested in as

$$ \frac{1}{\sqrt{n}}\mathbf E \left[ \max_{0 \leq k \leq n} S_k \right] = \mathbf E \left[ \max_{0 \leq k \leq n} \frac{1}{\sqrt{n}} S_k \right] = \mathbf E \left[ \max_{0 \leq s \leq 1} S^*_n(s) \right],$$

where the last equality holds because the piecewise-linear path $S^*_n$ attains its maximum at one of the lattice points $s = k/n$.
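
The fact that the maximum of the interpolated path agrees with the maximum over the lattice points can be sanity-checked numerically (a hypothetical `numpy` sketch; the walk length and grid resolution are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
steps = rng.choice([-1, 1], size=n)
S = np.concatenate(([0], np.cumsum(steps)))  # S_0, ..., S_n

# Evaluate the rescaled interpolated path S*_n on a fine grid of [0, 1].
grid = np.linspace(0.0, 1.0, 10 * n + 1)
k = np.floor(grid * n).astype(int)
k = np.minimum(k, n - 1)            # keep k + 1 in range at t = 1
frac = grid * n - k
interpolated = (S[k] + frac * (S[k + 1] - S[k])) / np.sqrt(n)

lattice_max = S.max() / np.sqrt(n)
print(interpolated.max(), lattice_max)  # these should agree (up to float error)
```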

Donsker's theorem implies that the process $S_n^*$ converges in distribution to Brownian motion. By the Portmanteau Lemma, this type of convergence is equivalent to convergence of the expectations of every bounded continuous functional of the path.

The maximum functional is continuous but not bounded, so we cannot apply the lemma directly. Since this is a sketch/heuristic, I will simply note that if the convergence of expectations extended to this functional, we would obtain

$$\frac{1}{\sqrt{n}}\mathbf E \left[ \max_{0 \leq k \leq n} S_k \right] \rightarrow \mathbf E \left[ \max_{0 \leq s \leq 1} B_s \right] = \sqrt{\frac2\pi}.$$

To be clear, this last step is not rigorous: the maximum is unbounded, so the Portmanteau lemma does not apply directly. The standard fix is to verify uniform integrability of the rescaled maxima, for instance via Doob's maximal inequality, which bounds their second moments uniformly in $n$. A more formal proof is given here in Chapter 5, section 4.