Let $\xi_1,\xi_2,\ldots,\xi_n$ be an independent sample of Exp($1$) random variables with joint density, $f(x_1,\ldots,x_n) = e^{-(x_1+\cdots+x_n)}$. If $g$ is a bounded continuous function, compute the following limit,
$$ \lim\limits_{n\rightarrow\infty} \int_{\mathbb{R}^n_+}g \left( \frac{x_1+ \cdots +x_n}{n} \right)f(x_1,\ldots,x_n) \, dx_1\cdots dx_n $$
where $\mathbb{R}_+^n = (0,\infty)^n$. My guess is that it will converge to $g(1)$. My reasoning is that we can view the integral as the expectation $\operatorname{E}[g(\bar{X}_n)]$. By the strong law of large numbers and the continuous mapping theorem, $\bar{X}_n \xrightarrow{\text{a.s.}} 1$ and $g(\bar{X}_n) \xrightarrow{\text{a.s.}} g(1)$. Since $g$ is bounded, we are permitted to bring the limit inside the expectation,
$$ \lim_{n\rightarrow\infty} \operatorname{E}[g(\bar{X}_n)] = \operatorname{E} \left[ \lim_{n\rightarrow\infty} g(\bar{X}_n) \right] = g(1) $$
This feels very hand-wavy, though, and probably isn't rigorous as written. Can someone point me in the right direction?
As Shalop mentioned, this is not hand-wavy: you have verified exactly the hypotheses of the dominated convergence theorem in the probabilistic setting. The sequence $\left(g\left(\overline{X_n}\right)\right)_{n\geqslant 1}$ is dominated by the constant $\sup_{x\in\mathbb R}\left\lvert g(x)\right\rvert$, which is integrable because we work with a probability measure, and it converges almost surely to $g(1)$. Hence $\operatorname{E}[g(\bar{X}_n)]\to g(1)$.
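As a quick numerical sanity check (not a proof, and using $g(x)=\sin x$ as an arbitrary bounded continuous test function of my choosing), one can estimate $\operatorname{E}[g(\bar{X}_n)]$ by Monte Carlo and watch it approach $g(1)=\sin 1\approx 0.8415$ as $n$ grows:

```python
import math
import random

def mean_g_of_sample_mean(n, trials=5000, seed=0):
    """Monte Carlo estimate of E[g(X_bar_n)] for g(x) = sin(x),
    where X_bar_n is the mean of n i.i.d. Exp(1) random variables."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Sample mean of n independent Exp(1) draws.
        xbar = sum(rng.expovariate(1.0) for _ in range(n)) / n
        total += math.sin(xbar)
    return total / trials

if __name__ == "__main__":
    # Estimates should drift toward sin(1) ≈ 0.8415 as n increases.
    for n in (1, 10, 100, 1000):
        print(n, mean_g_of_sample_mean(n))
```

For small $n$ the estimate is pulled away from $\sin 1$ by the spread of $\bar{X}_n$; as $n$ grows, $\bar{X}_n$ concentrates at $1$ and the estimate stabilizes near $\sin 1$, in line with the dominated convergence argument above.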