Limit theorem for changed time


This post seems long, but almost everything in it is already proved. Only one step seems to be left for the desired proof. I would be very grateful for any help.

The setup. Given a Lévy process $U_{t}$ with $E(U_t)=0$ (so $U_t$ is a martingale). Let $U_t$ have finite variance $Var(U_t)=t\,Var(U_1)$ with $Var(U_1)=\sigma^{2}$, so that the limit theorem holds: \begin{align} F_t:=\sqrt{t}\left(\frac{U_t}{t}-E(U_1) \right)=\frac{U_t}{\sqrt{t}}\xrightarrow{d}\mathcal{N}(0,\sigma^{2})\quad \text{as } t\rightarrow \infty.\tag1 \end{align} Let $K_t$ be a non-decreasing positive ($K_{t}>0$ a.s.) process with càdlàg paths such that $K_{t}\rightarrow \infty$ almost surely as $t\rightarrow \infty$.

I want to show that \begin{align} F_{K_t}:=\frac{U_{K_t}}{\sqrt{K_{t}}} \xrightarrow{d}\mathcal{N}(0,\sigma^{2})\quad \text{as } t\rightarrow \infty. \tag2 \end{align} For this one requires a positive non-random càdlàg function $a(t)$ with $a(t)\rightarrow \infty$ as $t\rightarrow \infty$ such that \begin{align} \frac{K_{t}}{a(t)}\rightarrow \theta\quad P\text{-a.s.}, \tag3 \end{align} where $\theta$ is a positive random variable. Then the convergence in distribution $F_{t}\xrightarrow{d} \mathcal{N}(0,\sigma^{2})$ implies the convergence in distribution $F_{K_t}\xrightarrow{d} \mathcal{N}(0,\sigma^{2})$.

The original question from my old account is posted here. However, with my reputation here I am able to start a bounty for the question.

The suggested steps of the proof are the following:

For simplicity assume $\theta=1$ and $\sigma^{2}=1$, so that $K_{t}\in ((1-\epsilon)a(t),(1+\epsilon)a(t))$ for large $t$. For general $0<\theta<\infty$ we can run the same procedure and get the same result.

This is how we go on: for fixed $m>0$ we have $$ P(U_{K_t}<x\sqrt{K_t})\leq P\left(K_{t}\notin ((1-\epsilon) a(t),(1+\epsilon) a(t))\right)+P\left(U_{a(t)}<x\sqrt{(1+\epsilon)a(t)}+m \sqrt{\epsilon a(t)}\right)+ P\left(\sup_{s\in ((1-\epsilon)a(t),(1+\epsilon)a(t))}|U_{s}-U_{a(t)}|>m \sqrt{\epsilon a(t)}\right). $$ The first term converges to $0$ due to (3). The second term converges to $\Phi(x\sqrt{1+\epsilon}+m\sqrt{\epsilon})$ (Why?) by the central limit theorem (1) applied to $U_{a(t)}$.
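To see the limit of the second term, divide through by $\sqrt{a(t)}$ and apply (1) (with $\sigma^{2}=1$, so $\Phi$ is the standard normal CDF):
$$
P\left(U_{a(t)}<x\sqrt{(1+\epsilon)a(t)}+m\sqrt{\epsilon a(t)}\right)
= P\left(\frac{U_{a(t)}}{\sqrt{a(t)}}<x\sqrt{1+\epsilon}+m\sqrt{\epsilon}\right)
\longrightarrow \Phi\!\left(x\sqrt{1+\epsilon}+m\sqrt{\epsilon}\right),
$$
which tends to $\Phi(x)$ as $\epsilon\to 0$.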

The third term is bounded, via Doob's $L^{2}$ martingale inequality, by a constant times $$\frac{1}{\left(m \sqrt{\epsilon a(t)}\right)^{2}}.$$
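Concretely (a sketch): splitting the supremum at $s=a(t)$ and applying Doob's $L^{2}$ maximal inequality to the martingale increments on each side (the piece $s<a(t)$ is handled by the same bound run backwards in time), each piece contributes at most $Var(U_{\epsilon a(t)})/(m\sqrt{\epsilon a(t)})^{2}$, so
$$
P\left(\sup_{s\in((1-\epsilon)a(t),(1+\epsilon)a(t))}|U_{s}-U_{a(t)}|>m\sqrt{\epsilon a(t)}\right)
\le \frac{2\,\sigma^{2}\epsilon a(t)}{m^{2}\epsilon a(t)}
= \frac{2\sigma^{2}}{m^{2}},
$$
which is independent of $t$ and $\epsilon$ and can be made small by choosing $m$ large; one then lets $\epsilon\to 0$.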

In the other direction we can establish a lower bound $$ P(U_{K_t}<x\sqrt{K_t})\geq Z_t \quad\text{with}\quad Z_t\rightarrow \Phi(x\sqrt{1-\epsilon}-m\sqrt{\epsilon}). $$ So we have sandwiched it, and the desired result (2) holds for arbitrary $0<\theta<\infty$.
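A sketch of the lower bound (written here for $x\ge 0$; for $x<0$ one replaces $\sqrt{(1-\epsilon)a(t)}$ by $\sqrt{(1+\epsilon)a(t)}$): on the event $\{K_t\in((1-\epsilon)a(t),(1+\epsilon)a(t))\}\cap\{|U_{K_t}-U_{a(t)}|\le m\sqrt{\epsilon a(t)}\}$, the inequality $U_{a(t)}<x\sqrt{(1-\epsilon)a(t)}-m\sqrt{\epsilon a(t)}$ forces $U_{K_t}<x\sqrt{(1-\epsilon)a(t)}\le x\sqrt{K_t}$, hence
$$
P(U_{K_t}<x\sqrt{K_t})\ge P\left(U_{a(t)}<x\sqrt{(1-\epsilon)a(t)}-m\sqrt{\epsilon a(t)}\right)
-P\left(K_t\notin((1-\epsilon)a(t),(1+\epsilon)a(t))\right)
-P\left(\sup_{s\in((1-\epsilon)a(t),(1+\epsilon)a(t))}|U_s-U_{a(t)}|>m\sqrt{\epsilon a(t)}\right),
$$
and the first term on the right converges to $\Phi(x\sqrt{1-\epsilon}-m\sqrt{\epsilon})$ by (1).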

HOWEVER (hopefully the last step): with $(3)$ we have convergence to a strictly positive, finite random variable $\theta$. What is left to prove, considering that $\theta$ is regarded as a random variable?

Some additional details to understand the inequalities:

$$ P(U_{K_t}<x \sqrt{K_t})\\ \leq P[K_{t}\notin ((1-\epsilon) a(t),(1+\epsilon) a(t))]+P[U_{K_t}<x \sqrt{K_t},\,K_{t}\in((1-\epsilon) a(t),(1+\epsilon) a(t))] \\ \leq P[K_{t}\notin ((1-\epsilon) a(t),(1+\epsilon) a(t))] \\ +P[U_{K_{t}}<x\sqrt{(1+ \epsilon)a(t)},\,|U_{K_t}-U_{a(t)}|\leq m\sqrt{\epsilon a(t)}]+P[|U_{K_t}-U_{a(t)}|> m\sqrt{\epsilon a(t)},\,K_{t}\in((1-\epsilon) a(t),(1+\epsilon) a(t))] \\ \leq P[K_{t}\notin ((1-\epsilon) a(t),(1+\epsilon) a(t))]+P[U_{a(t)}<x\sqrt{(1+\epsilon)a(t)}+m \sqrt{\epsilon a(t)}] + P\left[\sup_{s\in ((1-\epsilon)a(t),(1+\epsilon)a(t))}|U_{s}-U_{a(t)}|>m \sqrt{\epsilon a(t)}\right] $$


There is 1 solution below.

  1. $(3)$ need not hold for general $K$. Say, set $K_t = t$ for half of the $\omega$'s and $K_t = t^2$ for the other half. Then, depending on $a$, the limit $\theta$ in $(3)$ will be either zero or infinite on a set of probability $1/2$.

  2. Without the independence assumption, the statement is false in general, even if $(3)$ is true. Specifically, let $U=W$ be a Wiener process and define $K_t$ to be a càdlàg modification of $\inf\{s\ge 0: W_s\ge t\}$. By the self-similarity property of $W$, $(3)$ holds with $a(t) = t^{2}$. ($K$ is even a Lévy subordinator: it increases and has independent stationary increments thanks to the strong Markov property of $W$.) However, $(2)$ cannot hold, obviously. (Moreover, it follows from self-similarity that $F_{K_t}= t/\sqrt{K_t}\overset{d}{=} 1/\sqrt{K_1}$.)
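This counterexample can be checked numerically. The sketch below (my own illustration, not part of the answer) uses the classical first-passage identity $T_1 \overset{d}{=} 1/Z^2$ with $Z\sim\mathcal{N}(0,1)$, so that $F_{K_t} \overset{d}{=} 1/\sqrt{K_1} \overset{d}{=} |Z|$: the time-changed statistic is almost surely positive, so it cannot be $\mathcal{N}(0,1)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# First-passage time of a Wiener process to level 1: K_1 =d 1/Z^2, Z ~ N(0,1).
z = rng.standard_normal(n)
k1 = 1.0 / z**2           # samples of K_1

f = 1.0 / np.sqrt(k1)     # samples of F_{K_t} =d 1/sqrt(K_1) = |Z|

print(f.min())            # strictly positive: an N(0,1) sample would be negative about half the time
print(f.mean())           # close to E|Z| = sqrt(2/pi), not 0
```

The sample is entirely positive with mean near $\sqrt{2/\pi}\approx 0.80$, ruling out a centered normal limit.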

  3. If $K$ is independent of $U$, then the statement is quite obvious. Indeed, let $\varphi_s(u) = \mathsf{E}[e^{iu F_{s}}]$ denote the characteristic function of $F_s$. By $(1)$, $\varphi_s(u)\to e^{-u^2\sigma^2/2}$, $s\to\infty$, for any $u\in\mathbb{R}$. In particular, $\varphi_{K_t}(u)\to e^{-u^2\sigma^2/2}$, $t\to\infty$, a.s. On the other hand, $$ \mathsf{E}[e^{iu F_{K_t}}] = \mathsf{E}[\mathsf{E}[e^{iu F_{s}}]|_{s=K_t}] = \mathsf{E}[\varphi_{K_t}(u)] $$ thanks to independence. Then by the dominated convergence theorem, $$ \mathsf{E}[e^{iu F_{K_t}}] \to e^{-u^2\sigma^2/2},\quad t\to\infty, $$ as claimed.
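The independent case can likewise be sanity-checked by simulation. A minimal sketch (my own, with assumed choices: $U=W$ a standard Wiener process, $\sigma^2=1$, $K_t=\theta\,t$ with $\theta\sim\mathrm{Exp}(1)$ independent of $W$, so $(3)$ holds with $a(t)=t$): since $W_s/\sqrt{s}\sim\mathcal{N}(0,1)$ for each fixed $s>0$ and $K_t$ is independent of $W$, the sample mean and variance of $F_{K_t}$ should match the standard normal.

```python
import numpy as np

rng = np.random.default_rng(1)
n, t = 100_000, 50.0

theta = rng.exponential(1.0, size=n)   # theta > 0 a.s., independent of W
kt = theta * t                         # K_t = theta * t, so K_t / t -> theta a.s.

# For a Wiener process, W_s =d sqrt(s) * N(0,1); independence of K_t and W
# lets us sample W_{K_t} by conditioning on K_t.
w_kt = np.sqrt(kt) * rng.standard_normal(n)

f = w_kt / np.sqrt(kt)                 # F_{K_t} = W_{K_t} / sqrt(K_t)

print(f.mean(), f.var())               # approximately 0 and 1
```

Here the conclusion holds exactly at every finite $t$, which is a special feature of the Wiener choice; the dominated-convergence argument above is what gives it asymptotically for a general Lévy martingale.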