$O_P(\cdot)$ and linearization (Taylor) under an expectation


I have two independent random variables, $X, Y$: $X$ is a standard $n$-dimensional Gaussian and $Y$ is uniform on $\{-1,1\}^n$. I let $X' := X/\sqrt{n}$, so that $\|X'\| = O_P(1)$.

(here $O_P(\cdot)$ refers to stochastic boundedness ("big Oh in probability"))

I am confronted with an expression of the form $$ \mathbb{E}\bigl[ \Phi(X)\,\mathbb{E}[ \Psi(\alpha\cdot\langle X',Y\rangle)\mid X] \bigr] $$ where $\alpha = o(1)$, and therefore $\alpha\cdot\langle X',Y\rangle = o_P(1)$ (all the stochastic $O_P, o_P$ statements are with respect to $X$). What I would like to do is a second-order Taylor expansion of $\Psi$ inside the inner expectation.

My question: is that "legit"? Are there more assumptions I need to check?
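As a numerical sanity check of the $o_P(1)$ claim, here is a quick simulation (a sketch under my own assumptions: I pick $\alpha = n^{-1/4}$ as an illustrative $o(1)$ sequence; note that $\langle X', Y\rangle$ is exactly standard normal here, so $\alpha\langle X',Y\rangle$ should concentrate near $0$ as $n$ grows):

```python
import numpy as np

rng = np.random.default_rng(0)

def tail_quantile(n, trials=500):
    """99% quantile of |alpha * <X', Y>|, with alpha = n**-0.25 (illustrative o(1))."""
    alpha = n ** -0.25
    X = rng.standard_normal((trials, n))            # rows: standard Gaussian samples
    Y = rng.choice([-1.0, 1.0], size=(trials, n))   # rows: uniform sign vectors
    Xp = X / np.sqrt(n)                             # X' = X / sqrt(n)
    Z = alpha * np.einsum("ij,ij->i", Xp, Y)        # alpha * <X', Y>, one per trial
    return np.quantile(np.abs(Z), 0.99)

# The 99% quantile shrinks as n grows, consistent with o_P(1).
for n in (10, 100, 1000, 10_000):
    print(n, tail_quantile(n))
```

Since $\langle X',Y\rangle \sim N(0,1)$ for every $n$, the printed quantiles are roughly $2.58\,\alpha$, which decays like $n^{-1/4}$.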

Best answer:

If $\Psi$ is a fixed, twice-differentiable function, then the expansion itself is unproblematic. If $\Psi$ is a random function, you may have more issues, although degenerate cases are still fine: for example, if $\Psi(X) = 1$ for almost all realizations of $X$, then there is no harm in doing a Taylor expansion.
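To spell out what the expansion gives here (my own sketch, assuming $\Psi$ is deterministic and twice continuously differentiable):
$$ \mathbb{E}[\Psi(\alpha\langle X',Y\rangle)\mid X] = \Psi(0) + \alpha\,\Psi'(0)\,\mathbb{E}[\langle X',Y\rangle\mid X] + \frac{\alpha^2}{2}\,\Psi''(0)\,\mathbb{E}[\langle X',Y\rangle^2\mid X] + \mathbb{E}[R\mid X], $$
where $\mathbb{E}[\langle X',Y\rangle\mid X] = 0$ by symmetry of the $Y_i$, and $\mathbb{E}[\langle X',Y\rangle^2\mid X] = \|X'\|^2$ since the $Y_i$ are independent with $Y_i^2 = 1$. The remainder satisfies $R = o\bigl(\alpha^2\langle X',Y\rangle^2\bigr)$ pointwise; controlling $\mathbb{E}[R\mid X]$, and then the outer expectation against $\Phi(X)$, is exactly where extra assumptions enter.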

However, knowing that $\alpha \cdot \langle X', Y \rangle = o_P(1)$ may not be enough to calculate the expectation. I assume you want to show something like the expectation tending to zero, but probability bounds alone are not quite enough: while your expression is $o_P(1)$, you have no control over its size on the small-probability event not covered by the bound. You may need to appeal to Slutsky's theorem, uniform integrability, or something similar.

To elaborate, suppose $|Z| = o_P(1)$. Then a simple calculation shows \begin{align} E|Z| &= \int_{|Z| < \varepsilon}|Z|\, dP + \int_{|Z| \geq \varepsilon} |Z|\,dP\\ &\leq \varepsilon P(|Z| < \varepsilon) + \int_{|Z| \geq \varepsilon}|Z|\,dP. \end{align} Even though $P(|Z| < \varepsilon)$ tends to one, so that the first term is at most $\varepsilon$, knowing $|Z| = o_P(1)$ says nothing about the second term, and the expectation need not tend to zero (although it does under regularity conditions such as uniform integrability).
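To make this concrete, here is a standard counterexample (my construction, not part of the question): take $Z_n = n\cdot\mathbf{1}\{U \le 1/n\}$ with $U \sim \mathrm{Uniform}(0,1)$. Then $P(|Z_n| > \varepsilon) = 1/n \to 0$, so $Z_n = o_P(1)$, yet $E Z_n = 1$ for every $n$:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_Z(n, trials=200_000):
    """Samples of Z_n = n * 1{U <= 1/n}: o_P(1) but with E Z_n = 1."""
    U = rng.uniform(size=trials)
    return n * (U <= 1.0 / n)

# P(Z_n > 1/2) shrinks like 1/n, but the empirical mean stays near 1.
for n in (10, 100, 1000):
    Z = sample_Z(n)
    print(n, (Z > 0.5).mean(), Z.mean())
```

The rare event $\{U \le 1/n\}$ carries a value of size $n$, which is exactly the mass the probability bound fails to control.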