Let $\mathbf{X}^m = (X_1^m, \dots, X_n^m)$ denote a vector of $n$ i.i.d. and bounded random variables. Suppose that the sequence $(X_1^1, \dots, X_n^1), (X_1^2, \dots, X_n^2), (X_1^3, \dots, X_n^3), \dots$ converges in distribution to the random vector $(X, \dots, X)$ as $m \rightarrow \infty$ (in particular, every random variable $X_i^m$ converges in distribution to the random variable $X$). Finally, consider a sequence of bounded measurable functions $f^1, f^2, f^3, \dots$ that converges uniformly to a function $f$.
Under these assumptions, it seems clear that
$$\mathbb{E}[\max(f^m(X_1^m), ..., f^m(X_n^m))] \rightarrow \mathbb{E}[\max(f(X), ..., f(X))]$$
as $m \rightarrow \infty$. But how can one show this rigorously (e.g. by appealing to established results)?
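One simplification worth noting up front: since every coordinate of the limit vector is the same random variable, the right-hand side collapses,
$$\mathbb{E}[\max(f(X), \dots, f(X))] = \mathbb{E}[f(X)],$$
so the claim is equivalent to $\mathbb{E}[\max_i f^m(X_i^m)] \rightarrow \mathbb{E}[f(X)]$.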
My thoughts so far:
- Since $X_i^m \rightarrow_d X$ and $f^m \rightarrow f$ uniformly, it seems plausible that $f^m(X_i^m) \rightarrow_d f(X)$ — uniform convergence lets one replace $f^m$ by $f$ at the cost of a vanishing error, but this step presumably also needs $f$ to be continuous (or at least continuous almost everywhere with respect to the law of $X$).
- Since $\max$ is continuous, the continuous mapping theorem applied to the whole vector (using the joint convergence $\mathbf{X}^m \rightarrow_d (X, \dots, X)$, not merely the marginal convergence of each coordinate) would then give $\max\{f^m(X_1^m), \dots, f^m(X_n^m)\} \rightarrow_d \max(f(X), \dots, f(X)) = f(X)$.
- Finally, since the $f^m$ are (eventually) uniformly bounded, convergence in distribution should already imply convergence of the expectations — e.g. via the Skorokhod representation theorem combined with the bounded convergence theorem?
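The steps above can be sanity-checked numerically. Below is a minimal Monte Carlo sketch under one concrete, hypothetical instance of the assumptions (my choice, not part of the question): $X \sim \mathrm{Uniform}(0,1)$, $X_i^m = X + U_i/m$ with the $U_i$ i.i.d. $\mathrm{Uniform}(-1,1)$ (the shared $X$ makes the vector converge jointly to $(X, \dots, X)$, though the coordinates are then identically distributed rather than independent), $f(x) = x^2$, and $f^m(x) = x^2 + 1/m$, so that $f^m \rightarrow f$ uniformly and the limit should be $\mathbb{E}[f(X)] = \mathbb{E}[X^2] = 1/3$.

```python
import numpy as np

# Hypothetical concrete instance of the setup (illustration only):
#   X ~ Uniform(0,1) shared across coordinates,
#   X_i^m = X + U_i/m with U_i iid Uniform(-1,1),
#   f(x) = x^2,  f^m(x) = x^2 + 1/m  (uniform convergence to f).
rng = np.random.default_rng(0)
n = 5            # dimension of the vector
N = 200_000      # Monte Carlo sample size

def lhs(m):
    """Monte Carlo estimate of E[max_i f^m(X_i^m)]."""
    X = rng.uniform(0.0, 1.0, size=N)                     # shared limit variable
    Xm = X[:, None] + rng.uniform(-1.0, 1.0, size=(N, n)) / m
    fm = Xm**2 + 1.0 / m                                  # f^m applied entrywise
    return fm.max(axis=1).mean()

# Right-hand side of the claimed limit: E[f(X)] = E[X^2] = 1/3.
for m in (1, 10, 100, 1000):
    print(m, lhs(m))
```

As $m$ grows, the printed estimates should settle near $1/3$, consistent with the claimed convergence in this toy case.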
Many thanks in advance for any ideas or pointers!