Clarification of an asymptotic order in probability


In a paper by Phillips and Yu I have found the following computation. Assume that $\sigma_t$ is an adapted predictable process and let $\sigma_{t_{n,j}}$ be the values of $\sigma_t$ on a partition of the interval $[0,T]$, that is $0=t_{n,0}<t_{n,1}<t_{n,2}<\dots<t_{n,n}=T$. Let $\varepsilon_{j,n}$ with $j=1,\dots,n$ be a sequence of iid random variables with $\text{N}\left(0,1\right)$ distribution. This is what is written in the paper:

$$ \sum_{j=1}^n\sigma_{t_{n,j}}^4\,\frac{\varepsilon_{j,n}^4}{n}+O_p\left(\frac{1}{\sqrt{n}}\right) =\sum_{j=1}^n\sigma_{t_{n,j}}^4\,\frac{\mathbb{E}[\varepsilon_{j,n}^4]}{n}+O_p\left(\frac{1}{\sqrt{n}}\right). $$

I am wondering how one can substitute $\varepsilon_{j,n}^4$ with $\mathbb{E}\left[\varepsilon_{j,n}^4\right]$. I know that if $X_n$ is a sequence of random variables with variance $\sigma_n^2$ such that $n\,\sigma_n^2\rightarrow 0$, then by Chebyshev's inequality I can write $X_n=\mathbb{E}[X_n]+O_p\left(\frac{1}{\sqrt{n}}\right)$; however, since the $\varepsilon_{j,n}$ are iid $\text{N}\left(0,1\right)$, their variances do not shrink with $n$, so this argument does not apply directly.
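For intuition, here is a small Monte Carlo sketch of the order claim (my own illustration, not from the paper), in the simplified case where $\sigma_t \equiv 1$: it checks that $\frac{1}{n}\sum_{j=1}^n \varepsilon_{j,n}^4 - \mathbb{E}[\varepsilon^4]$ stays of magnitude $1/\sqrt{n}$, i.e. that the $\sqrt{n}$-scaled deviation remains bounded as $n$ grows (for $\varepsilon \sim \text{N}(0,1)$, $\mathbb{E}[\varepsilon^4]=3$).

```python
import math
import random

random.seed(0)

def scaled_deviation(n):
    """sqrt(n) * ( (1/n) * sum of eps_j^4  -  E[eps^4] ) for n iid N(0,1) draws.

    If the O_p(1/sqrt(n)) claim holds (constant-sigma case), this quantity
    should stay bounded in probability as n grows; in fact it is asymptotically
    N(0, Var(eps^4)) = N(0, 96) by the CLT.
    """
    s = sum(random.gauss(0.0, 1.0) ** 4 for _ in range(n))
    return math.sqrt(n) * (s / n - 3.0)  # E[eps^4] = 3 for standard normal

for n in (100, 10_000, 1_000_000):
    print(n, scaled_deviation(n))
```

The printed scaled deviations fluctuate but do not grow with $n$, consistent with the $O_p(1/\sqrt{n})$ rate; the unscaled averages themselves converge to 3.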