Consider the following result (on a probability space $(\Omega, \mathcal F, \mathsf P)$):

If $g : [0, \infty) \to [0,\infty)$ is bounded and strictly increasing, with $g(x) > 0$ for $x > 0$ and $\lim_{x \to 0^+}g(x) = 0$, then
$$X_n \to 0 \quad \text{in probability} \iff E(g(X_n)) \to 0.$$
I managed to prove this result. I am interested in seeing what happens if we relax some of the conditions. In particular, if we drop boundedness, then on $\Omega = [0,1]$ with Lebesgue measure the sequence $X_n(\omega) = n \mathbb{1}_{[0,1/n)}(\omega)$ satisfies $X_n \to 0$ in probability, but for $g(x) = x$ we get $$E(g(X_n)) = n \,\mathsf P([0,1/n)) = n(1/n) = 1 \not\to 0.$$ Here is where my question comes in:
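To make the counterexample concrete, here is a short exact check (assuming $\Omega = [0,1)$ with Lebesgue measure, as in the example above; the helper names `prob_exceeds` and `expected_g` are mine, not from the post):

```python
# X_n = n * 1_{[0,1/n)} on Omega = [0,1) with Lebesgue measure (an assumption:
# the post does not fix the space, but this makes the example computable).

def prob_exceeds(n, eps):
    # P(X_n > eps): X_n equals n on [0, 1/n) and 0 elsewhere
    return 1.0 / n if n > eps else 0.0

def expected_g(n, g):
    # E[g(X_n)] = g(n) * P([0, 1/n)) + g(0) * P([1/n, 1))
    return g(n) * (1.0 / n) + g(0) * (1.0 - 1.0 / n)

identity = lambda x: x  # the unbounded g(x) = x from the question
# P(X_n > 1/2) -> 0 (convergence in probability), yet E[X_n] = 1 for every n
for n in (1, 10, 1000):
    print(n, prob_exceeds(n, 0.5), expected_g(n, identity))
```

So the convergence in probability is visible in `prob_exceeds` shrinking to $0$, while `expected_g` stays pinned at $1$.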
Can we find a $g$ and $(X_n)$ as above, with $g$ satisfying all the conditions except being strictly increasing, such that $$X_n \to 0 \quad \text{in probability but} \quad E(g(X_n)) \not\to 0?$$
The result still holds if $g$ is merely bounded with $g(0)=0$ and $\lim_{x\to0^+} g(x)=0$. (The condition $g(0)=0$ is automatic under strict monotonicity, since then $g(0) \le \lim_{x\to0^+} g(x) = 0$; without monotonicity it must be assumed, for otherwise $X_n \equiv 0$ already gives $Eg(X_n) = g(0) \not\to 0$.) Indeed, for any $\delta > 0$,
$$0\le Eg(X_n)=\int_{\{X_n\le\delta\}}g(X_n)\,d\mathsf P+\int_{\{X_n>\delta\}}g(X_n)\,d\mathsf P\le \sup_{0\le x\le\delta}g(x)+\mathsf P(X_n>\delta)\sup_{t\ge 0} g(t).$$
Given $\varepsilon>0$, use $\lim_{x\to0^+} g(x)=0$ and $g(0)=0$ to choose $\delta$ with $\sup_{0\le x\le\delta}g(x)<\varepsilon$; since $X_n\to 0$ in probability, $\mathsf P(X_n>\delta)\to 0$, so $\limsup_{n} Eg(X_n)\le\varepsilon$. As $\varepsilon$ was arbitrary, $Eg(X_n)\to 0$.
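As an illustration of the answer, the same exact computation with a bounded, merely nondecreasing $g$ (here $g(x)=\min(x,1)$, which is bounded by $1$, vanishes at $0$, but is not strictly increasing) gives $Eg(X_n)=1/n\to 0$ for the question's sequence $X_n = n\,\mathbb{1}_{[0,1/n)}$. A minimal sketch, assuming $\Omega=[0,1)$ with Lebesgue measure; `expected_g` is a hypothetical helper, not from the post:

```python
def expected_g(n, g):
    # E[g(X_n)] for X_n = n * 1_{[0,1/n)} on Omega = [0,1) with Lebesgue
    # measure: g(n) with probability 1/n, g(0) with probability 1 - 1/n
    return g(n) / n + g(0) * (1 - 1 / n)

g = lambda x: min(x, 1.0)  # bounded, g(0) = 0, lim_{x -> 0+} g(x) = 0
print([expected_g(n, g) for n in (1, 10, 100, 1000)])
# values shrink like 1/n, consistent with E[g(X_n)] -> 0
```

Compare this with the unbounded $g(x)=x$, where the same sequence keeps $Eg(X_n)=1$ for every $n$.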