Let $(\Omega,\mathcal A,\operatorname P)$ be a probability space, $E$ be a $\mathbb R$-Banach space, $(X_t)_{t\ge0}$ be an $E$-valued Lévy$^1$ process on $(\Omega,\mathcal A,\operatorname P)$ and $\mu_t:=\mathcal L(X_t)$ for $t\ge0$.
Remember that $X$ is time-homogeneous and Markov with transition semigroup $$\kappa_t(x,B):=\mu_t(B-x)\;\;\;\text{for }(x,B)\in E\times\mathcal B(E)\text{ and }t\ge0.$$
Since the increments $$Y_n:=X_n-X_{n-1}\;\;\;\text{for }n\in\mathbb N$$ form an independent, identically $\mu_1$-distributed sequence, we know that $$\frac1n\sum_{i=1}^nf(Y_i)\xrightarrow{n\to\infty}\int f\:{\rm d}\mu_1\;\;\;\text{a.s. for all }f\in\mathcal L^p(\mu_1)\text{ and }p\ge1\tag1$$ by the strong law of large numbers.
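As a concrete illustration of $(1)$, here is a minimal numerical sketch, assuming $X$ is a standard Brownian motion (so $\mu_1=N(0,1)$) and taking $f=\operatorname{id}$; the seed and sample size are arbitrary choices:

```python
import numpy as np

# Sanity check of (1) for standard Brownian motion: mu_1 = N(0, 1),
# f = id, so the a.s. limit is E[Y_1] = 0.  Seed and n are illustrative.
rng = np.random.default_rng(0)
n = 100_000
Y = rng.standard_normal(n)   # i.i.d. increments Y_i = X_i - X_{i-1}
empirical_mean = Y.mean()    # (1/n) * sum f(Y_i)
print(empirical_mean)        # should be close to 0
```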
Question: Is there an analogue of this result in continuous time?
I first thought the natural generalization would be to assume that $X$ is càdlàg and to consider $$Z_t:=\Delta X_t\;\;\;\text{for }t\ge0$$ and whether $$\frac1t\int_0^tf(Z_s)\:{\rm d}s\xrightarrow{t\to\infty}\int f\:{\rm d}\mu_1\;\;\;\text{a.s. for all }f\in\mathcal L^p(\mu_1)\text{ and }p\ge1\tag2,$$ but on second thought, this fails whenever $X$ is continuous, since then $Z_t=0$ for all $t\ge0$.
On the other hand, if $\mu_1$ has a finite first moment, we can choose $f=\operatorname{id}_E$ in $(1)$. For this choice of $f$, the natural generalization of $(1)$ would be $$\frac1tX_t\xrightarrow{t\to\infty}\operatorname E\left[X_1\right]\;\;\;\text{almost surely}\tag3.$$
Can we show this?
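For what it's worth, a quick simulation of the conjectured limit $(3)$, assuming $X$ is a Brownian motion with drift $\mu$ (so $\operatorname E[X_1]=\mu$); the drift value, horizon and discretisation grid are illustrative assumptions:

```python
import numpy as np

# Simulation sketch of (3) for Brownian motion with drift mu, a Lévy
# process with E[X_1] = mu.  All numerical choices are illustrative.
rng = np.random.default_rng(1)
mu, T, steps = 0.7, 10_000, 100_000
dt = T / steps
# Independent stationary increments on the grid
increments = mu * dt + np.sqrt(dt) * rng.standard_normal(steps)
X_T = increments.sum()
print(X_T / T)   # should be close to E[X_1] = 0.7
```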
Remark: It might be useful to note that $\mu_t=\mu_1^{\ast t}$ (the rhs denoting the $t$th convolution power) for all $t\ge0$.
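For example, for standard Brownian motion $\mu_1=N(0,1)$ and $\mu_t=N(0,t)$, and the identity $\mu_t=\mu_1^{\ast t}$ reads $\hat\mu_t=\hat\mu_1^{\,t}$ on characteristic functions; a minimal numerical check:

```python
import numpy as np

# For standard Brownian motion, mu_1 = N(0,1) and mu_t = N(0,t); the
# convolution-power identity becomes phi_t(u) = phi_1(u)**t.
u = np.linspace(-3.0, 3.0, 61)
t = 2.5
phi_1 = np.exp(-u**2 / 2)       # characteristic function of N(0, 1)
phi_t = np.exp(-t * u**2 / 2)   # characteristic function of N(0, t)
print(np.allclose(phi_t, phi_1**t))  # → True
```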
$^1$ i.e.
- $X_0=0$;
- $X_{s+t}-X_s$ and $(X_r)_{0\le r\le s}$ are independent for all $s,t\ge0$;
- $X_{s+t}-X_s\sim X_t$ for all $s,t\ge0$.
For the moment, I'm not imposing any continuity assumption, but we might need to in order to obtain a positive answer to this question.
My understanding of Lévy processes is limited, so if I state anything wrong, please kindly correct me.
(i) Assume the càdlàg property for $(X_t)$.
(ii) Assume further that $X_1 \in L^{p}$ for some $p>1$, that is, there exists $M>0$ such that $$ \mathbb{E}(|X_1|^p)< M.$$
For simplicity of the proof, assume $\mathbb{E}(X_1)=0$ (so that $(X_t)$ is a martingale).
Choose $r\in\left(\frac1p,1\right)$, which is possible since $p>1$; then $rp>1$, and Doob's $L^p$ maximal inequality (applied to the submartingale $|X_t|$), combined with Markov's inequality, gives $$ \mathbb{P}\left( \max_{0 \le t \le 1} |X_t| \ge n^r \right) \le \dfrac{q^p}{n^{pr}}\mathbb{E}\left(|X_1|^p\right) < \dfrac{q^pM}{n^{pr}}, $$ where $q$ is the conjugate exponent of $p$, i.e. the positive real number such that $\dfrac{1}{q}+\dfrac{1}{p}=1$.
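A Monte Carlo sanity check of this bound, assuming $X$ is a standard Brownian motion with $p=2$ (hence $q=2$ and $\mathbb E|X_1|^2=1$); the threshold and grid below are illustrative:

```python
import numpy as np

# Monte Carlo check of P(max_{0<=t<=1}|X_t| >= a) <= q^p E|X_1|^p / a^p
# for standard Brownian motion with p = 2, q = 2; "a" plays the role of n^r.
rng = np.random.default_rng(3)
a, paths, steps = 2.0, 20_000, 200
dt = 1.0 / steps
B = np.cumsum(np.sqrt(dt) * rng.standard_normal((paths, steps)), axis=1)
lhs = np.mean(np.max(np.abs(B), axis=1) >= a)  # empirical tail probability
rhs = 2.0**2 * 1.0 / a**2                      # q^p * E|X_1|^p / a^p
print(lhs, "<=", rhs)
```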
Hence, by stationarity of the increments, $$ \mathbb{P}\left( \max_{0 \le t \le 1} |X_{n+t}-X_n| \ge n^r \right) \le \dfrac{q^p}{n^{pr}}\mathbb{E}\left(|X_1|^p\right) < \dfrac{q^pM}{n^{pr}}.$$ As $rp>1$, these probabilities are summable, so by the Borel–Cantelli lemma $$\limsup_{n \rightarrow \infty} \dfrac{ \max_{0 \le t \le 1} |X_{n+t}-X_n| }{n^r} \le 1 \;\text{ a.s.} \tag{1} $$ In particular, since $r<1$, $$\dfrac{\max_{0 \le t \le 1} |X_{n+t}-X_n|}{n}\xrightarrow{n\to\infty}0\;\text{ a.s.}$$ On the other hand, the strong law of large numbers applied to the i.i.d. increments $X_n-X_{n-1}$ tells us that $$ \dfrac{X_n}{n}\xrightarrow{n\to\infty}0 \;\text{ a.s.} \tag{2}$$ Combining the two previous results (1) and (2), and noting that for all $t \ge 1$: $$ \left| \frac{X_t}{t} \right| \le \left| \frac{X_n}{n} \right|+\dfrac{\max_{0\le s\le 1} |X_{n+s}-X_n|}{n}, $$ where $n=\lfloor t\rfloor$,
we get our desired result: $$ \lim_{t \rightarrow \infty} \frac{X_t}{t} = 0 \;\text{ a.s.}$$
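The path decomposition used above can be sanity-checked numerically; a sketch on a discretised standard Brownian motion path (grid resolution and horizon are illustrative assumptions):

```python
import numpy as np

# Check |X_t|/t <= |X_n|/n + max_{0<=s<=1}|X_{n+s}-X_n|/n with n = floor(t)
# on a simulated standard Brownian motion path.
rng = np.random.default_rng(2)
steps_per_unit, T = 1_000, 50
dt = 1.0 / steps_per_unit
incs = np.sqrt(dt) * rng.standard_normal(T * steps_per_unit)
X = np.concatenate([[0.0], np.cumsum(incs)])

ok = True
for t_idx in range(steps_per_unit, len(X)):   # grid points with t >= 1
    t = t_idx * dt
    n = t_idx // steps_per_unit               # n = floor(t) on the grid
    block = X[n * steps_per_unit : (n + 1) * steps_per_unit + 1]
    bound = abs(X[n * steps_per_unit]) / n + np.max(np.abs(block - block[0])) / n
    ok = ok and abs(X[t_idx]) / t <= bound + 1e-12  # epsilon for rounding
print(ok)  # → True
```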
Discussion: