Question about convergence in probability where random variables depend on another parameter in two different ways


I am given a sequence of random vectors $\theta_N \in \Theta$ such that $\theta_N \xrightarrow{p} \theta$. I want to show that $f(x, \theta)$, a continuous scalar-valued function of $\theta \in \Theta$ and $x \in \mathcal{X} \subset \mathbb{R}^D$, satisfies
$$ \frac{1}{N} \sum_{i=1}^N f(x_i, \theta_N) \xrightarrow{p} \mathbb{E}_{x \sim p(x)} \left [ f(x, \theta) \right ], $$ where $x_i \sim p(x)$ are i.i.d. samples. I know that since $f$ is continuous in $\theta$, the continuous mapping theorem gives $$ f(x, \theta_N) \xrightarrow{p} f(x, \theta) $$ for any fixed $x$. So for any finite $N'$, we have that $$ \sum_{i=1}^{N'} f(x_i, \theta_N) \xrightarrow{p} \sum_{i=1}^{N'} f(x_i, \theta). $$ I would like to take the limit as $N' \to \infty$ in the equation above and invoke the weak law of large numbers, but I am unsure whether this is a valid way to derive the desired result, since the sample size $N$ and the parameter index are coupled. Any advice or reading recommendations would be greatly appreciated. Thank you.
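For what it's worth, here is a small numerical sanity check of the claimed convergence. It uses an illustrative choice not taken from the question: $f(x, \theta) = (x - \theta)^2$ with $p(x) = \mathcal{N}(0, 1)$ and $\theta = 1$, so that $\mathbb{E}_{x \sim p(x)}[f(x, \theta)] = \operatorname{Var}(x) + (0 - \theta)^2 = 2$, and it models $\theta_N$ as the true $\theta$ plus $O(1/\sqrt{N})$ noise so that $\theta_N \xrightarrow{p} \theta$:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, theta):
    # Illustrative f: continuous in theta, scalar-valued.
    return (x - theta) ** 2

theta = 1.0       # limiting parameter
expected = 2.0    # E_{x ~ N(0,1)}[(x - 1)^2] = Var(x) + 1 = 2

for N in [10**2, 10**4, 10**6]:
    x = rng.standard_normal(N)                            # i.i.d. x_i ~ p(x)
    theta_N = theta + rng.standard_normal() / np.sqrt(N)  # theta_N ->_p theta
    avg = f(x, theta_N).mean()                            # (1/N) sum_i f(x_i, theta_N)
    print(N, avg)
```

The printed averages drift toward 2 as $N$ grows, consistent with the desired result; of course, a simulation is no substitute for a proof, and the usual rigorous route is a uniform law of large numbers over $\Theta$.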