Large deviation bound on 1-dimensional random walk


Let $X$ denote the distance from the origin after $n$ steps of a $1$-dimensional unbiased random walk on $\mathbb{Z}$. Is there a reasonable bound on $P(X > t \sqrt{n})$ for some $t>1$? There are exact expressions for this, but they take the form of unwieldy sums.

The standard Chernoff bound techniques don't seem to work well, because if $t$ is very small then the probability goes to $1$. Of course the bound doesn't have to hold for small $t$, but most techniques I can think of would also apply for small $t$.


If you are actually interested in large deviation results, Cramér's theorem applies: $X_n=\sum\limits_{i=1}^n D_i$ where $D_i=2B_i-1$ and the $B_i$ are iid Bernoulli$(\frac12)$ variables. Following Cramér's theorem, we define the rate function

$$I(x)=\sup_{t \in \mathbb{R}} \left \{ tx-\ln(E[e^{tD_i}]) \right \}=\sup_{t \in \mathbb{R}} \left \{ tx-\ln \left ( \frac{e^t+e^{-t}}{2} \right ) \right \}.$$
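The supremum can be solved in closed form: setting the derivative in $t$ to zero gives $x=\tanh t$, so $t^*=\operatorname{artanh}(x)$ and $I(x)=\frac{1+x}{2}\ln(1+x)+\frac{1-x}{2}\ln(1-x)$ for $|x|<1$. As a sanity check, here is a short Python sketch (the function names are mine) comparing that closed form against a brute-force maximization over a grid of $t$ values:

```python
import math

def rate_closed_form(x):
    # I(x) = ((1+x)/2) ln(1+x) + ((1-x)/2) ln(1-x), from t* = artanh(x).
    return ((1 + x) / 2) * math.log(1 + x) + ((1 - x) / 2) * math.log(1 - x)

def rate_numeric(x, t_max=10.0, steps=200001):
    # Brute-force the sup of t*x - ln(cosh(t)) over an evenly spaced grid.
    return max(
        t * x - math.log(math.cosh(t))
        for t in (-t_max + 2 * t_max * i / (steps - 1) for i in range(steps))
    )

for x in (0.1, 0.5, 0.9):
    print(x, rate_closed_form(x), rate_numeric(x))
```

Both agree to several decimal places, and $I(0)=0$ with $I$ strictly increasing on $(0,1)$, as expected for a mean-zero walk.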

Then the large deviation principle says that given any measurable set $\Gamma$:

$$-\inf_{x \in \operatorname{Int}(\Gamma)} I(x) \leq \liminf_{n \to \infty} \frac{1}{n} \log P(X_n \in n \Gamma) \leq \limsup_{n \to \infty} \frac{1}{n} \log P(X_n \in n \Gamma) \leq -\inf_{x \in \operatorname{Cl}(\Gamma)} I(x)$$

where $n \Gamma = \{ n y : y \in \Gamma \}$.

The interesting cases of these inequalities in your case are where $\Gamma=(-\infty,x)$ for some $x \in (-1,0)$ or $\Gamma=(x,\infty)$ for some $x \in (0,1)$. In these cases the upper and lower bounds are the same (both are $-I(x)$), so that $P(X_n \in n\Gamma)$ is logarithmically equivalent to $e^{-nI(x)}$. By explicitly calculating $I(x)$ you can get an understanding of what this means in your setting.
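This logarithmic equivalence can be checked numerically against the exact binomial tail, since $X_n = 2K - n$ with $K \sim \mathrm{Binomial}(n, \frac12)$. The sketch below (names and the choice $n=2000$, $x=0.3$ are mine) compares $\frac{1}{n}\log P(X_n \geq nx)$ with $-I(x)$:

```python
import math

def tail_prob(n, x):
    # P(X_n >= n*x), where X_n = 2K - n and K ~ Binomial(n, 1/2).
    # X_n >= n*x  <=>  K >= n*(1+x)/2.
    k_min = math.ceil(n * (1 + x) / 2)
    return sum(math.comb(n, k) for k in range(k_min, n + 1)) / 2 ** n

def rate(x):
    # Closed form of the Cramér rate function for this walk.
    return ((1 + x) / 2) * math.log(1 + x) + ((1 - x) / 2) * math.log(1 - x)

n, x = 2000, 0.3
print((1 / n) * math.log(tail_prob(n, x)), -rate(x))
```

The two numbers agree up to an $O(\frac{\log n}{n})$ correction, which is exactly what "logarithmically equivalent" promises: the exponential rate matches, while polynomial prefactors are invisible at this scale.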

However, what you have actually written is a "moderate" deviation, the regime described by the central limit theorem. Specifically, $X_n$ is asymptotically normally distributed with mean zero and variance $n$, so that $P(X_n>t\sqrt{n})$ converges to $1-\Phi(t)$ as $n \to \infty$. Moreover, by a quantitative CLT (e.g. the Berry-Esseen theorem), this convergence is uniform in $t$. Since $D_i$ has unit variance and unit third absolute moment, the Berry-Esseen theorem gives an error bound of roughly $\frac{1}{2\sqrt{n}}$, uniformly over $t$.
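The normal approximation is easy to verify directly, using $\Phi$ expressed through the error function. In this sketch (names and the choice $n=2000$, $t=1.5$ are mine), the exact binomial tail is compared with $1-\Phi(t)$, and the gap stays within the Berry-Esseen budget of about $\frac{1}{2\sqrt{n}}$:

```python
import math

def normal_cdf(t):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))

def exact_tail(n, t):
    # P(X_n > t*sqrt(n)), where X_n = 2K - n and K ~ Binomial(n, 1/2).
    # 2k - n > t*sqrt(n)  <=>  k > (n + t*sqrt(n))/2.
    k_min = math.floor((n + t * math.sqrt(n)) / 2) + 1
    return sum(math.comb(n, k) for k in range(k_min, n + 1)) / 2 ** n

n, t = 2000, 1.5
print(exact_tail(n, t), 1 - normal_cdf(t))
```

For $n=2000$ the two values already agree to about three decimal places, illustrating why the CLT, rather than large deviations, is the right tool at the $t\sqrt{n}$ scale.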