Convergence in Law implies a.s. convergence (Donsker-like statement)


I am currently working on a Donsker-like convergence result and I am not quite sure whether my conclusions are correct (I am dropping the technicalities here):

Let $\hat{N}_n(t)$ be an estimator for $N(t)$; we then have $$ \sqrt n(\hat{N}_n(t)-N(t))\overset{\mathcal L} \longrightarrow \mathbb G \text{ in } \ell^\infty (\mathbb R), $$ where $\mathbb G $ denotes some Gaussian random variable (with known covariance structure) and $\ell^\infty$ is the space of bounded real-valued functions (why actually not only continuous bounded functions, as usual?), equipped with the sup-norm. Convergence in law $X_n\overset{\mathcal L}\longrightarrow X$ means in this setting $$ \forall f\in C_b(\ell^\infty):\lim_{n\to\infty}Ef(X_n)=Ef(X), $$ where $C_b(\ell^\infty)$ denotes the bounded continuous real-valued functions on $\ell^\infty$, so we actually have $$ \forall f\in C_b(\ell^\infty):\lim_{n\to\infty}Ef\left(\sqrt n(\hat{N}_n(t)-N(t))\right)=Ef\left(\mathbb G \right). $$ I was wondering: in a setting like this, can we conclude $$ \hat{N}_n(t)\to N(t)\text{ a.s.?} $$ Here is my thinking:
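As a concrete (purely hypothetical) instance of this setup, one can take $N$ to be the uniform CDF on $[0,1]$ and $\hat N_n$ the empirical CDF; Donsker's classical theorem then gives exactly such a limit, with $\mathbb G$ a Brownian bridge. A small simulation sketch (the empirical CDF here is only a stand-in for the unspecified estimator $\hat N_n$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model of the setup: N(t) = t on [0, 1] (the CDF of a
# Uniform(0, 1) variable), N_hat_n = empirical CDF of n draws.  Donsker's
# theorem gives sqrt(n)(N_hat_n - N) -> Brownian bridge in l^infty([0, 1]).
def deviations(n, grid):
    """Return (sup |N_hat_n - N|, sqrt(n) * sup |N_hat_n - N|) on the grid."""
    sample = np.sort(rng.uniform(size=n))
    # Empirical CDF: fraction of sample points <= each grid value.
    ecdf = np.searchsorted(sample, grid, side="right") / n
    sup = np.max(np.abs(ecdf - grid))
    return sup, np.sqrt(n) * sup

grid = np.linspace(0.0, 1.0, 1001)
for n in (100, 10_000, 1_000_000):
    raw, scaled = deviations(n, grid)
    # raw shrinks like 1/sqrt(n); the scaled version stays O(1),
    # matching convergence in law to a nondegenerate Gaussian limit.
    print(f"n={n:>9}  sup|N_hat-N|={raw:.4f}  sqrt(n)*sup={scaled:.3f}")
```

The unscaled supremum distance shrinks while the $\sqrt n$-rescaled one fluctuates at a stable order of magnitude, which is the numerical shadow of the question: the rescaled process converges in law, and the unscaled one goes to $0$ in some weaker sense.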

We have convergence of the one-dimensional marginals: for fixed $t$, the evaluation map $x\mapsto x(t)$ is continuous on $\ell^\infty$, so by the continuous mapping theorem $$ \sqrt n(\hat{N}_n(t)-N(t))\overset{\mathcal L}\longrightarrow \mathbb G(t) \text{ in } \mathbb R. $$ By the Portmanteau theorem we then have convergence of the distribution functions at all continuity points $c$ of the limit distribution: $$ \forall c:\lim_{n\to\infty}P\left\{\sqrt n(\hat{N}_n(t)-N(t))\leq c\right\}=P\{\mathbb G(t) \leq c\}. $$

Now we are interested in $\hat{N}_n(t)-N(t)$, and for every $n$ $$ P\left\{\hat{N}_n(t)-N(t)\leq c\right\}= P\left\{ \sqrt n(\hat{N}_n(t)-N(t)) \leq \sqrt n\, c\right\}, $$ whose limit is $1$ for $c>0$ and $0$ for $c<0$; for $c=0$ we do not have a continuity point of the limit distribution function. So the limiting distribution function is that of a random variable a.s. equal to $0$ (the Dirac distribution at $0$).
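The claim for $c>0$ can be made precise via tightness: a sequence that converges in law is tight, so for every $\varepsilon>0$ there is an $M$ with $\sup_n P\{\sqrt n\,|\hat N_n(t)-N(t)| > M\}\leq\varepsilon$. A sketch of the estimate:

```latex
% As soon as sqrt(n) c >= M,
\[
  P\bigl\{\sqrt n(\hat N_n(t)-N(t)) \le \sqrt n\, c\bigr\}
  \;\ge\; P\bigl\{\sqrt n\,\bigl|\hat N_n(t)-N(t)\bigr| \le M\bigr\}
  \;\ge\; 1-\varepsilon,
\]
% so the limit is 1; the case c < 0 is symmetric and gives limit 0.
```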

Since $\hat{N}_n(t)-N(t)\to 0 $ in distribution, we even have $\hat{N}_n(t)-N(t) \overset{\text{a.s.}} \longrightarrow 0 $.

EDIT: As pointed out by Did in the comments, the convergence $\hat{N}_n(t)-N(t) \longrightarrow 0$ actually holds in probability, not almost surely. The correct implication is: convergence in distribution to a constant implies convergence in probability to that constant.
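For completeness, a one-line proof of that implication, with the constant $0$, writing $X_n = \hat N_n(t)-N(t)$ and $F_n$ for its distribution function:

```latex
% For every eps > 0, both -eps and eps are continuity points of the Dirac
% distribution at 0, so F_n(-eps) -> 0 and F_n(eps) -> 1, hence
\[
  P\{|X_n| > \varepsilon\}
  \;\le\; F_n(-\varepsilon) + \bigl(1 - F_n(\varepsilon)\bigr)
  \;\xrightarrow[n\to\infty]{}\; 0,
\]
% which is exactly X_n -> 0 in probability.
```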

Is this reasonable, or have I overlooked anything?