Let $X_1,\ldots,X_n$ be independent, identically distributed with expectation 1 and finite variance. Find the limit distribution of $\sqrt{n}(\bar{X}_n^{-1}-1)$. If the random variables are sampled from a density $f$ that is bounded and strictly positive in a neighborhood of zero, show that $\text{E}|\bar{X}_n^{-1}|=\infty$ for every $n$.
My attempt:
The first question is fairly straightforward. By the central limit theorem, $$\sqrt{n}(\bar{X}_n-1) \overset{D}{\to} \text{N}(0,\sigma^2),$$ and applying the delta method with $g(x)=1/x$, which is continuously differentiable at $1$ with $g'(1)=-1$, gives $$\begin{align*} \sqrt{n}(g(\bar{X}_n) - g(1)) &\overset{D}{\to} \text{N}(0, g'(1)^2\sigma^2)\\ \sqrt{n}(\bar{X}_n^{-1}-1) &\overset{D}{\to} \text{N}(0,\sigma^2). \end{align*}$$
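As a quick sanity check of the first part (not a proof), here is a small Monte Carlo experiment. The choice $X_i \sim \text{Exp}(1)$ is mine, purely for illustration: it has mean $1$ and $\sigma^2 = 1$, so the limit should be $\text{N}(0,1)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1_000, 10_000

# Illustrative choice (not from the problem): X_i ~ Exp(1), so E[X] = 1 and sigma^2 = 1
X = rng.exponential(scale=1.0, size=(reps, n))
T = np.sqrt(n) * (1.0 / X.mean(axis=1) - 1.0)

# The delta method predicts T is approximately N(0, 1) for large n
print(T.mean(), T.std())  # should be close to 0 and 1
```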
I am struggling with the second part. I can see that $1/x$ blows up in a neighborhood of $0$, so the mass that $\bar{X}_n$ puts near $0$ should pull the integral $\int |x|^{-1} f_{\bar{X}_n}(x)\,\mathrm dx$ to infinity, but I am not sure how to write this formally.
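Numerically, this heaviness near $0$ is visible: the empirical mean of $|\bar{X}_n^{-1}|$ never stabilizes. The density below, $\text{Uniform}(-1,3)$, is again my illustrative choice (mean $1$, density $1/4 \gt 0$ around $0$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5  # small n, so the sample mean lands near 0 often enough to see the effect

# Illustrative choice (not from the problem): X_i ~ Uniform(-1, 3),
# which has mean 1 and density 1/4 > 0 in a neighborhood of 0
for reps in (10**4, 10**5, 10**6):
    X = rng.uniform(-1.0, 3.0, size=(reps, n))
    print(reps, np.abs(1.0 / X.mean(axis=1)).mean())
```

The printed averages tend to grow rather than settle as `reps` increases, the typical signature of an infinite expectation.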
The claim is wrong if the density $f$ is allowed to be zero to the left of $0$ (while bounded and strictly positive to the right of $0$): take $f$ to be the standard exponential density; then the density of $S_n=X_1+\cdots+X_n$ is proportional to $x^{n-1}\mathrm e^{-x}\mathbf 1_{x\gt0}$, hence $E(\bar X_n^{-1})$ is finite for every $n\geqslant2$. More generally, $E(\bar X_n^{-\alpha})$ is finite for every $n\gt\alpha$.
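To spell out the exponential case: $\bar X_n^{-1}=nS_n^{-1}$ and $S_n$ has the $\Gamma(n,1)$ density $x^{n-1}\mathrm e^{-x}/\Gamma(n)$ on $(0,\infty)$, so $$E(\bar X_n^{-1})=\frac{n}{\Gamma(n)}\int_0^\infty x^{n-2}\mathrm e^{-x}\,\mathrm dx=\frac{n\,\Gamma(n-1)}{\Gamma(n)}=\frac{n}{n-1},\qquad n\geqslant2.$$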
If one insists that the density $f$ be bounded and strictly positive on a (bilateral) neighborhood of $0$, say $f\geqslant\varepsilon$ uniformly on $(-\varepsilon,\varepsilon)$ for some $\varepsilon\gt0$, then it is enough to prove the result when $f$ is the uniform density on $(-1,1)$.
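Here is a sketch of why the reduction suffices: restricting the expectation to the event where every $|X_i|\lt\varepsilon$ and bounding $f$ below by $\varepsilon$ gives $$E|\bar X_n^{-1}|\geqslant n\int_{(-\varepsilon,\varepsilon)^n}\frac{f(x_1)\cdots f(x_n)}{|x_1+\cdots+x_n|}\,\mathrm dx\geqslant n\,\varepsilon^n\int_{(-\varepsilon,\varepsilon)^n}\frac{\mathrm dx}{|x_1+\cdots+x_n|},$$ and the substitution $x_i=\varepsilon u_i$ turns the last integral into $\varepsilon^{n-1}\int_{(-1,1)^n}|u_1+\cdots+u_n|^{-1}\,\mathrm du$, a positive multiple of $E|\bar X_n^{-1}|$ under the uniform density on $(-1,1)$. In that case the density of $S_n$ is continuous and strictly positive at $0$, so $E|S_n^{-1}|\geqslant c\int_{-\delta}^{\delta}|s|^{-1}\,\mathrm ds=\infty$ for suitable $c,\delta\gt0$.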