I read a paper which says the following:
Using Taylor's expansion, we have for any $0 < x \le 1$, $$ x^{1+\epsilon(n)}= x \left(1 + \epsilon(n) \log(x) + (\epsilon(n) \log(x))^2/2+ O(\epsilon(n)^3 \log(x)^3)\right) $$ where $\epsilon(n) \downarrow 0$. Then $$ \mathbb E [X^{1+\epsilon(n)}]= \mathbb E \left[X \left(1 + \epsilon(n) \log(X) + (\epsilon(n) \log(X))^2/2 + O(\epsilon(n)^3 \log(X)^3)\right)\right] $$ for any random variable $X$ with support on $(0,1]$.
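To spell out where the expansion comes from (my own restatement, not taken from the paper): it is the second-order Taylor expansion of the exponential, $$ x^{1+\epsilon(n)} = x\, e^{\epsilon(n) \log(x)}, \qquad e^{t} = 1 + t + \frac{t^{2}}{2} + O(t^{3}) \quad \text{as } t \to 0, $$ applied with $t = \epsilon(n) \log(x)$. The condition $t \to 0$ holds for each fixed $x \in (0,1]$ as $\epsilon(n) \downarrow 0$, but not uniformly in $x$, since $\log(x)$ is unbounded as $x \downarrow 0$.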
I am confused about why this works. The first expansion is only valid for fixed $x$: the implicit constant in the big-$O$ depends on $x$. How can we replace $x$ by a random variable and then take the expectation? I suspect some extra condition on $X$ is needed, perhaps an integrability condition such as $\mathbb E[|\log(X)|^3] < \infty$.
I noticed that $$ x^{1+\epsilon(n)} \le x \left(1 + \epsilon(n) \log(x) + (\epsilon(n) \log(x))^2/2\right) $$ for $0<x\le 1$ and $0 \le \epsilon(n) \le 1/2$. So for an upper bound, we can actually drop the big-$O$ term. It is still unclear to me whether the corresponding lower bound holds.
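As a sanity check on that upper bound, here is a quick numerical sketch (my own, not from the paper) that tests the inequality on a grid of $(x, \epsilon)$ values; the small tolerance only absorbs floating-point rounding near $\epsilon = 0$, where both sides equal $x$ exactly.

```python
import numpy as np

def upper_bound_holds(x, eps, tol=1e-12):
    """Check x^(1+eps) <= x*(1 + eps*log x + (eps*log x)^2/2) elementwise."""
    t = eps * np.log(x)                 # t <= 0 for 0 < x <= 1, eps >= 0
    lhs = x ** (1.0 + eps)              # equals x * exp(t)
    rhs = x * (1.0 + t + t * t / 2.0)
    return bool(np.all(lhs <= rhs + tol))

# Grid over the stated range: 0 < x <= 1 and 0 <= eps <= 1/2.
xs = np.linspace(1e-6, 1.0, 500)
es = np.linspace(0.0, 0.5, 101)
X_grid, E_grid = np.meshgrid(xs, es)
print(upper_bound_holds(X_grid, E_grid))
```

In terms of $t = \epsilon \log(x) \le 0$, the inequality is just $e^{t} \le 1 + t + t^{2}/2$, so the check passing on the grid is consistent with the claim.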