Convergence for stochastic processes


Let $x_t^\epsilon$ be a family of continuous vector-valued stochastic processes. Suppose that $E \sup_{t \in [0,T]} |x_t^\epsilon|^p \to 0$ as $\epsilon \to 0$, for $p>0$. Does this imply that $$\limsup_{\epsilon \to 0} \sup_{t \in [0,T]} |x_t^\epsilon|^p = 0?$$


Let $M_{\epsilon}=\sup_{t\in[0,T]} |x_t^\epsilon|$. You are given that $M_\epsilon\stackrel{L_p}\longrightarrow 0$ and are asking whether that implies $\limsup_{\epsilon\to0}(M_\epsilon)^p=0$ almost surely, which is equivalent to $M_\epsilon\stackrel{\text{a.s.}}\longrightarrow0$.

As you no doubt know, convergence in mean does not imply almost sure convergence in general. The standard counterexample is a "typewriter" sequence of functions: each function is a bump that sweeps across the entire sample space $\Omega$, and the bump gets smaller in mean after each traversal. You get convergence in mean because the bump's mean gets and stays arbitrarily small, but you do not get almost sure convergence, since the bump hits every point infinitely many times.
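A minimal sketch of the typewriter sequence in Python (the indexing $n = 2^k + j$ is one standard choice, not something the answer fixes):

```python
def typewriter(n, omega):
    """n-th typewriter function evaluated at omega: the indicator of
    [j/2^k, (j+1)/2^k), where n = 2^k + j with 0 <= j < 2^k."""
    k = n.bit_length() - 1   # generation: n = 2^k + j
    j = n - (1 << k)         # position of the bump within generation k
    return 1.0 if j / 2**k <= omega < (j + 1) / 2**k else 0.0

# E|f_n| = 2^{-k} -> 0 as n -> infinity (convergence in mean), yet for each
# fixed omega in [0, 1) exactly one bump in every generation k covers omega,
# so f_n(omega) = 1 for infinitely many n: no almost sure convergence.
```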

To be concrete, let $(\Omega,\mathcal B,\mathbb P)=([0,1],\mathcal B([0,1]),\lambda)$ be the interval $[0,1]$ with Lebesgue measure, and let $$ x_{t}^\epsilon(\omega)={\bf1}(|\omega-\sin(1/\epsilon)|<\epsilon). $$ Note that $x_{t}^\epsilon$ does not depend on $t$; it is a constant process, which is either constantly $0$ or constantly $1$. The event that the process is nonzero is an interval of width at most $2\epsilon$ around $\sin(1/\epsilon)$. The width of this interval (and therefore the probability of $M_\epsilon$ being large) decreases to zero, while the interval itself oscillates back and forth, hitting every $\omega\in [0,1]$ infinitely often as $\epsilon\to 0$. Therefore, for any fixed $\omega$, you will not have $M_\epsilon\to 0$.
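A short Python sanity check of this counterexample (the names `x_eps` and `prob_M` are my own, not from the answer):

```python
import math

def x_eps(omega, eps):
    """The constant-in-t process: indicator of |omega - sin(1/eps)| < eps."""
    return 1.0 if abs(omega - math.sin(1.0 / eps)) < eps else 0.0

def prob_M(eps):
    """Exact P(M_eps = 1): length of (sin(1/eps)-eps, sin(1/eps)+eps) ∩ [0,1]."""
    s = math.sin(1.0 / eps)
    return max(0.0, min(1.0, s + eps) - max(0.0, s - eps))

# Convergence in mean: E[M_eps^p] = P(M_eps = 1) <= 2*eps -> 0 for every p > 0.
for eps in [0.1, 0.01, 0.001]:
    assert prob_M(eps) <= 2 * eps + 1e-12

# No a.s. convergence: fix omega = 0.5 and take eps_k = 1/(arcsin(0.5) + 2*pi*k),
# so sin(1/eps_k) = 0.5 exactly and M_{eps_k}(0.5) = 1 along eps_k -> 0.
for k in range(1, 6):
    eps_k = 1.0 / (math.asin(0.5) + 2 * math.pi * k)
    assert x_eps(0.5, eps_k) == 1.0
```

The same construction works for any fixed $\omega\in[0,1]$: solutions of $\sin(1/\epsilon)=\omega$ accumulate at $\epsilon=0$, so $M_\epsilon(\omega)=1$ along a sequence $\epsilon_k\to 0$.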